alistair23-linux/drivers/tty/serial/amba-pl011.c
// SPDX-License-Identifier: GPL-2.0+
/*
* Driver for AMBA serial ports
*
* Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
*
* Copyright 1999 ARM Limited
* Copyright (C) 2000 Deep Blue Solutions Ltd.
* Copyright (C) 2010 ST-Ericsson SA
*
* This is a generic driver for ARM AMBA-type serial ports. They
* have a lot of 16550-like features, but are not register compatible.
* Note that although they do have CTS, DCD and DSR inputs, they do
* not have an RI input, nor do they have DTR or RTS outputs. If
* required, these have to be supplied via some other means (eg, GPIO)
* and hooked into this driver.
*/
#if defined(CONFIG_SERIAL_AMBA_PL011_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
#define SUPPORT_SYSRQ
#endif
#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/init.h>
#include <linux/console.h>
#include <linux/sysrq.h>
#include <linux/device.h>
#include <linux/tty.h>
#include <linux/tty_flip.h>
#include <linux/serial_core.h>
#include <linux/serial.h>
#include <linux/amba/bus.h>
#include <linux/amba/serial.h>
#include <linux/clk.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/delay.h>
#include <linux/types.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/pinctrl/consumer.h>
#include <linux/sizes.h>
#include <linux/io.h>
#include <linux/acpi.h>
#include "amba-pl011.h"
#define UART_NR 14
#define SERIAL_AMBA_MAJOR 204
#define SERIAL_AMBA_MINOR 64
#define SERIAL_AMBA_NR UART_NR
#define AMBA_ISR_PASS_LIMIT 256
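
/*
 * UART_DR_ERROR gathers the receive error flags (overrun, break, parity,
 * framing) that the hardware reports in the upper bits of a data-register
 * read. UART_DUMMY_DR_RX sits above the 16-bit hardware field and is OR'd
 * into every word taken from the FIFO, marking it as a genuinely received
 * character before the low byte is handed to the TTY layer.
 */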
#define UART_DR_ERROR (UART011_DR_OE|UART011_DR_BE|UART011_DR_PE|UART011_DR_FE)
#define UART_DUMMY_DR_RX (1 << 16)
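
/*
 * Register accesses go through a per-vendor lookup table: each logical
 * REG_* index is mapped to the hardware offset used by that implementation,
 * so variants with shuffled register maps can share the core driver code.
 */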
static u16 pl011_std_offsets[REG_ARRAY_SIZE] = {
	[REG_DR] = UART01x_DR,
	[REG_FR] = UART01x_FR,
	[REG_LCRH_RX] = UART011_LCRH,
	[REG_LCRH_TX] = UART011_LCRH,
	[REG_IBRD] = UART011_IBRD,
	[REG_FBRD] = UART011_FBRD,
	[REG_CR] = UART011_CR,
	[REG_IFLS] = UART011_IFLS,
	[REG_IMSC] = UART011_IMSC,
	[REG_RIS] = UART011_RIS,
	[REG_MIS] = UART011_MIS,
	[REG_ICR] = UART011_ICR,
	[REG_DMACR] = UART011_DMACR,
};
/* There is by now at least one vendor with differing details, so handle it */
struct vendor_data {
	const u16 *reg_offset;
	unsigned int ifls;
	unsigned int fr_busy;
	unsigned int fr_dsr;
	unsigned int fr_cts;
	unsigned int fr_ri;
	unsigned int inv_fr;
	bool access_32b;
	bool oversampling;
	bool dma_threshold;
	bool cts_event_workaround;
	bool always_enabled;
	bool fixed_options;

	unsigned int (*get_fifosize)(struct amba_device *dev);
};
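
/* FIFO depth depends on the peripheral revision: 16 bytes before rev 3, 32 bytes from rev 3 onwards. */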
static unsigned int get_fifosize_arm(struct amba_device *dev)
{
	return amba_rev(dev) < 3 ? 16 : 32;
}
static struct vendor_data vendor_arm = {
	.reg_offset = pl011_std_offsets,
	.ifls = UART011_IFLS_RX4_8|UART011_IFLS_TX4_8,
	.fr_busy = UART01x_FR_BUSY,
	.fr_dsr = UART01x_FR_DSR,
	.fr_cts = UART01x_FR_CTS,
	.fr_ri = UART011_FR_RI,
	.oversampling = false,
	.dma_threshold = false,
	.cts_event_workaround = false,
	.always_enabled = false,
	.fixed_options = false,
	.get_fifosize = get_fifosize_arm,
};
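
/*
 * SBSA generic UART: a fixed-configuration PL011 subset whose line settings
 * are owned by firmware, hence always_enabled and fixed_options below; it is
 * accessed with 32-bit reads/writes and supplies no FIFO-size callback.
 */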
static const struct vendor_data vendor_sbsa = {
	.reg_offset = pl011_std_offsets,
	.fr_busy = UART01x_FR_BUSY,
	.fr_dsr = UART01x_FR_DSR,
	.fr_cts = UART01x_FR_CTS,
	.fr_ri = UART011_FR_RI,
	.access_32b = true,
	.oversampling = false,
	.dma_threshold = false,
	.cts_event_workaround = false,
	.always_enabled = true,
	.fixed_options = true,
};
#ifdef CONFIG_ACPI_SPCR_TABLE
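/*
 * Qualcomm QDF2400 Erratum 44 variant (selected via the ACPI SPCR table):
 * instead of the ordinary BUSY flag, "busy" is derived from the TXFE bit
 * with inverted polarity (inv_fr), i.e. the port is treated as busy while
 * the transmit FIFO is not yet empty.
 */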
static const struct vendor_data vendor_qdt_qdf2400_e44 = {
	.reg_offset = pl011_std_offsets,
	.fr_busy = UART011_FR_TXFE,
	.fr_dsr = UART01x_FR_DSR,
	.fr_cts = UART01x_FR_CTS,
	.fr_ri = UART011_FR_RI,
	.inv_fr = UART011_FR_TXFE,
	.access_32b = true,
	.oversampling = false,
	.dma_threshold = false,
	.cts_event_workaround = false,
	.always_enabled = true,
	.fixed_options = true,
};
#endif
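
/*
 * ST-Ericsson variant: LCRH is split into separate RX and TX registers, and
 * extra registers are exposed for DMA watermarks, RX timeout, software flow
 * control (XON/XOFF) and autobaud.
 */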
static u16 pl011_st_offsets[REG_ARRAY_SIZE] = {
	[REG_DR] = UART01x_DR,
	[REG_ST_DMAWM] = ST_UART011_DMAWM,
	[REG_ST_TIMEOUT] = ST_UART011_TIMEOUT,
	[REG_FR] = UART01x_FR,
	[REG_LCRH_RX] = ST_UART011_LCRH_RX,
	[REG_LCRH_TX] = ST_UART011_LCRH_TX,
	[REG_IBRD] = UART011_IBRD,
	[REG_FBRD] = UART011_FBRD,
	[REG_CR] = UART011_CR,
	[REG_IFLS] = UART011_IFLS,
	[REG_IMSC] = UART011_IMSC,
	[REG_RIS] = UART011_RIS,
	[REG_MIS] = UART011_MIS,
	[REG_ICR] = UART011_ICR,
	[REG_DMACR] = UART011_DMACR,
	[REG_ST_XFCR] = ST_UART011_XFCR,
	[REG_ST_XON1] = ST_UART011_XON1,
	[REG_ST_XON2] = ST_UART011_XON2,
	[REG_ST_XOFF1] = ST_UART011_XOFF1,
	[REG_ST_XOFF2] = ST_UART011_XOFF2,
	[REG_ST_ITCR] = ST_UART011_ITCR,
	[REG_ST_ITIP] = ST_UART011_ITIP,
	[REG_ST_ABCR] = ST_UART011_ABCR,
	[REG_ST_ABIMSC] = ST_UART011_ABIMSC,
};
static unsigned int get_fifosize_st(struct amba_device *dev)
{
	return 64;
}
static struct vendor_data vendor_st = {
	.reg_offset = pl011_st_offsets,
	.ifls = UART011_IFLS_RX_HALF|UART011_IFLS_TX_HALF,
	.fr_busy = UART01x_FR_BUSY,
	.fr_dsr = UART01x_FR_DSR,
	.fr_cts = UART01x_FR_CTS,
	.fr_ri = UART011_FR_RI,
	.oversampling = true,
	.dma_threshold = true,
	.cts_event_workaround = true,
	.always_enabled = false,
	.fixed_options = false,
	.get_fifosize = get_fifosize_st,
};
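
/*
 * ZTE ZX variant: the same logical registers live at different offsets and
 * must be accessed as 32-bit quantities (access_32b below).
 */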
static const u16 pl011_zte_offsets[REG_ARRAY_SIZE] = {
	[REG_DR] = ZX_UART011_DR,
	[REG_FR] = ZX_UART011_FR,
	[REG_LCRH_RX] = ZX_UART011_LCRH,
	[REG_LCRH_TX] = ZX_UART011_LCRH,
	[REG_IBRD] = ZX_UART011_IBRD,
	[REG_FBRD] = ZX_UART011_FBRD,
	[REG_CR] = ZX_UART011_CR,
	[REG_IFLS] = ZX_UART011_IFLS,
	[REG_IMSC] = ZX_UART011_IMSC,
	[REG_RIS] = ZX_UART011_RIS,
	[REG_MIS] = ZX_UART011_MIS,
	[REG_ICR] = ZX_UART011_ICR,
	[REG_DMACR] = ZX_UART011_DMACR,
};
static unsigned int get_fifosize_zte(struct amba_device *dev)
{
	return 16;
}
static struct vendor_data vendor_zte = {
	.reg_offset = pl011_zte_offsets,
	.access_32b = true,
	.ifls = UART011_IFLS_RX4_8|UART011_IFLS_TX4_8,
	.fr_busy = ZX_UART01x_FR_BUSY,
	.fr_dsr = ZX_UART01x_FR_DSR,
	.fr_cts = ZX_UART01x_FR_CTS,
	.fr_ri = ZX_UART011_FR_RI,
	.get_fifosize = get_fifosize_zte,
};
/* Deals with DMA transactions */
struct pl011_sgbuf {
	struct scatterlist sg;
	char *buf;
};
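
/*
 * RX DMA state: two scatter-gather buffers are cycled (use_buf_b selects the
 * one currently being filled), and a timer periodically polls the residue of
 * the in-flight transfer so received data can be pushed to the TTY before
 * the buffer fills completely.
 */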
struct pl011_dmarx_data {
	struct dma_chan *chan;
	struct completion complete;
	bool use_buf_b;
	struct pl011_sgbuf sgbuf_a;
	struct pl011_sgbuf sgbuf_b;
	dma_cookie_t cookie;
	bool running;
	struct timer_list timer;
	unsigned int last_residue;
	unsigned long last_jiffies;
	bool auto_poll_rate;
	unsigned int poll_rate;
	unsigned int poll_timeout;
};
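
/*
 * TX DMA state: a single bounce buffer is mapped per transfer; 'queued'
 * records whether a descriptor is currently submitted to the DMA engine.
 */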
struct pl011_dmatx_data {
	struct dma_chan *chan;
	struct scatterlist sg;
	char *buf;
	bool queued;
};
/*
* We wrap our port structure around the generic uart_port.
*/
struct uart_amba_port {
	struct uart_port port;
	const u16 *reg_offset;
	struct clk *clk;
	const struct vendor_data *vendor;
	unsigned int dmacr;		/* dma control reg */
	unsigned int im;		/* interrupt mask */
	unsigned int old_status;
	unsigned int fifosize;		/* vendor-specific */
	unsigned int old_cr;		/* state during shutdown */
	unsigned int fixed_baud;	/* vendor-set fixed baud rate */
	char type[12];
#ifdef CONFIG_DMA_ENGINE
	/* DMA stuff */
	bool using_tx_dma;
	bool using_rx_dma;
	struct pl011_dmarx_data dmarx;
	struct pl011_dmatx_data dmatx;
	bool dma_probed;
#endif
};
static unsigned int pl011_reg_to_offset(const struct uart_amba_port *uap,
	unsigned int reg)
{
	return uap->reg_offset[reg];
}
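
/*
 * All MMIO goes through these accessors: the logical register index is
 * translated via the vendor offset table, and the access width follows
 * port->iotype (32-bit for UPIO_MEM32, 16-bit otherwise).
 */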
static unsigned int pl011_read(const struct uart_amba_port *uap,
	unsigned int reg)
{
	void __iomem *addr = uap->port.membase + pl011_reg_to_offset(uap, reg);

	return (uap->port.iotype == UPIO_MEM32) ?
		readl_relaxed(addr) : readw_relaxed(addr);
}
static void pl011_write(unsigned int val, const struct uart_amba_port *uap,
	unsigned int reg)
{
	void __iomem *addr = uap->port.membase + pl011_reg_to_offset(uap, reg);

	if (uap->port.iotype == UPIO_MEM32)
		writel_relaxed(val, addr);
	else
		writew_relaxed(val, addr);
}
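
/*
 * For illustration, a minimal sketch of how these accessors are used
 * elsewhere in this driver (cached register state mirrored to hardware
 * under the port lock), e.g. when masking the TX interrupt:
 *
 *	uap->im &= ~UART011_TXIM;		// update the cached mask
 *	pl011_write(uap->im, uap, REG_IMSC);	// then write it to the UART
 */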
/*
* Reads up to 256 characters from the FIFO or until it's empty and
* inserts them into the TTY layer. Returns the number of characters
* read from the FIFO.
*/
static int pl011_fifo_to_tty(struct uart_amba_port *uap)
{
	unsigned int ch, flag, fifotaken;
	int sysrq;
	u16 status;

	for (fifotaken = 0; fifotaken != 256; fifotaken++) {
		status = pl011_read(uap, REG_FR);
		if (status & UART01x_FR_RXFE)
			break;

		/* Take chars from the FIFO and update status */
		ch = pl011_read(uap, REG_DR) | UART_DUMMY_DR_RX;
		flag = TTY_NORMAL;
		uap->port.icount.rx++;

		if (unlikely(ch & UART_DR_ERROR)) {
			if (ch & UART011_DR_BE) {
				ch &= ~(UART011_DR_FE | UART011_DR_PE);
				uap->port.icount.brk++;
				if (uart_handle_break(&uap->port))
					continue;
			} else if (ch & UART011_DR_PE)
				uap->port.icount.parity++;
			else if (ch & UART011_DR_FE)
				uap->port.icount.frame++;
			if (ch & UART011_DR_OE)
				uap->port.icount.overrun++;

			ch &= uap->port.read_status_mask;

			if (ch & UART011_DR_BE)
				flag = TTY_BREAK;
			else if (ch & UART011_DR_PE)
				flag = TTY_PARITY;
			else if (ch & UART011_DR_FE)
				flag = TTY_FRAME;
		}
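
		/*
		 * Drop the port lock across the sysrq handler: sysrq may end
		 * up in printk() and the console path, which takes its locks
		 * in the opposite order and would otherwise risk an ABBA
		 * deadlock.
		 */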
spin_unlock(&uap->port.lock);
sysrq = uart_handle_sysrq_char(&uap->port, ch & 255);
spin_lock(&uap->port.lock);
if (!sysrq)
uart_insert_char(&uap->port, ch, UART011_DR_OE, ch, flag);
}
return fifotaken;
}
/*
* All the DMA operation mode stuff goes inside this ifdef.
 * This assumes that you have a generic DMA device interface;
 * no custom DMA interfaces are supported.
*/
#ifdef CONFIG_DMA_ENGINE
#define PL011_DMA_BUFFER_SIZE PAGE_SIZE
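/*
 * Allocate a coherent DMA buffer of PL011_DMA_BUFFER_SIZE bytes and wrap
 * it in a single-entry scatterlist.
 */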
static int pl011_sgbuf_init(struct dma_chan *chan, struct pl011_sgbuf *sg,
enum dma_data_direction dir)
{
dma_addr_t dma_addr;
sg->buf = dma_alloc_coherent(chan->device->dev,
PL011_DMA_BUFFER_SIZE, &dma_addr, GFP_KERNEL);
if (!sg->buf)
return -ENOMEM;
sg_init_table(&sg->sg, 1);
sg_set_page(&sg->sg, phys_to_page(dma_addr),
PL011_DMA_BUFFER_SIZE, offset_in_page(dma_addr));
sg_dma_address(&sg->sg) = dma_addr;
sg_dma_len(&sg->sg) = PL011_DMA_BUFFER_SIZE;
return 0;
}
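/* Free the coherent buffer wrapped by the scatterlist, if one was allocated. */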
static void pl011_sgbuf_free(struct dma_chan *chan, struct pl011_sgbuf *sg,
enum dma_data_direction dir)
{
if (sg->buf) {
dma_free_coherent(chan->device->dev,
PL011_DMA_BUFFER_SIZE, sg->buf,
sg_dma_address(&sg->sg));
}
}
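/*
 * Look up the TX and (optionally) RX DMA channels for this port: first via
 * the generic DMA slave lookup, then via the legacy platform-data filter
 * function as a fallback.
 */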
static void pl011_dma_probe(struct uart_amba_port *uap)
{
/* DMA is the sole user of the platform data right now */
struct amba_pl011_data *plat = dev_get_platdata(uap->port.dev);
struct device *dev = uap->port.dev;
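	/* TX: single-byte writes to the data register, bursting at half the FIFO depth */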
struct dma_slave_config tx_conf = {
.dst_addr = uap->port.mapbase +
pl011_reg_to_offset(uap, REG_DR),
.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
.direction = DMA_MEM_TO_DEV,
.dst_maxburst = uap->fifosize >> 1,
.device_fc = false,
};
struct dma_chan *chan;
dma_cap_mask_t mask;
uap->dma_probed = true;
chan = dma_request_slave_channel_reason(dev, "tx");
if (IS_ERR(chan)) {
if (PTR_ERR(chan) == -EPROBE_DEFER) {
uap->dma_probed = false;
return;
}
/* We need platform data */
if (!plat || !plat->dma_filter) {
dev_info(uap->port.dev, "no DMA platform data\n");
return;
}
/* Try to acquire a generic DMA engine slave TX channel */
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
chan = dma_request_channel(mask, plat->dma_filter,
plat->dma_tx_param);
if (!chan) {
dev_err(uap->port.dev, "no TX DMA channel!\n");
return;
}
}
dmaengine_slave_config(chan, &tx_conf);
uap->dmatx.chan = chan;
dev_info(uap->port.dev, "DMA channel TX %s\n",
dma_chan_name(uap->dmatx.chan));
/* Optionally make use of an RX channel as well */
chan = dma_request_slave_channel(dev, "rx");
if (!chan && plat && plat->dma_rx_param) {
chan = dma_request_channel(mask, plat->dma_filter, plat->dma_rx_param);
if (!chan) {
dev_err(uap->port.dev, "no RX DMA channel!\n");
return;
}
}
if (chan) {
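		/* RX: single-byte reads from the data register, bursting at a quarter of the FIFO depth */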
struct dma_slave_config rx_conf = {
.src_addr = uap->port.mapbase +
pl011_reg_to_offset(uap, REG_DR),
.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
.direction = DMA_DEV_TO_MEM,
.src_maxburst = uap->fifosize >> 2,
.device_fc = false,
};
struct dma_slave_caps caps;
/*
* Some DMA controllers provide information on their capabilities.
		 * If the controller does, check for suitable residue processing;
		 * otherwise assume all is well.
*/
if (0 == dma_get_slave_caps(chan, &caps)) {
if (caps.residue_granularity ==
DMA_RESIDUE_GRANULARITY_DESCRIPTOR) {
dma_release_channel(chan);
dev_info(uap->port.dev,
"RX DMA disabled - no residue processing\n");
return;
}
}
dmaengine_slave_config(chan, &rx_conf);
uap->dmarx.chan = chan;
uap->dmarx.auto_poll_rate = false;
if (plat && plat->dma_rx_poll_enable) {
/* Set poll rate if specified. */
if (plat->dma_rx_poll_rate) {
uap->dmarx.auto_poll_rate = false;
uap->dmarx.poll_rate = plat->dma_rx_poll_rate;
} else {
/*
				 * The poll rate defaults to 100 ms if not
				 * specified; it is adjusted to match the
				 * baud rate in set_termios.
*/
uap->dmarx.auto_poll_rate = true;
uap->dmarx.poll_rate = 100;
}
			/* poll_timeout defaults to 3 seconds if not specified. */
if (plat->dma_rx_poll_timeout)
uap->dmarx.poll_timeout =
plat->dma_rx_poll_timeout;
else
uap->dmarx.poll_timeout = 3000;
} else if (!plat && dev->of_node) {
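			/* No platform data: take the polling parameters from the device tree instead */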
uap->dmarx.auto_poll_rate = of_property_read_bool(
dev->of_node, "auto-poll");
if (uap->dmarx.auto_poll_rate) {
u32 x;
if (0 == of_property_read_u32(dev->of_node,
"poll-rate-ms", &x))
uap->dmarx.poll_rate = x;
else
uap->dmarx.poll_rate = 100;
if (0 == of_property_read_u32(dev->of_node,
"poll-timeout-ms", &x))
uap->dmarx.poll_timeout = x;
else
uap->dmarx.poll_timeout = 3000;
}
}
dev_info(uap->port.dev, "DMA channel RX %s\n",
dma_chan_name(uap->dmarx.chan));
}
}
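/* Release any DMA channels acquired in pl011_dma_probe(). */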
static void pl011_dma_remove(struct uart_amba_port *uap)
{
if (uap->dmatx.chan)
dma_release_channel(uap->dmatx.chan);
if (uap->dmarx.chan)
dma_release_channel(uap->dmarx.chan);
}
/* Forward declare these for the refill routine */
static int pl011_dma_tx_refill(struct uart_amba_port *uap);
static void pl011_start_tx_pio(struct uart_amba_port *uap);
/*
* The current DMA TX buffer has been sent.
* Try to queue up another DMA buffer.
*/
static void pl011_dma_tx_callback(void *data)
{
struct uart_amba_port *uap = data;
struct pl011_dmatx_data *dmatx = &uap->dmatx;
unsigned long flags;
u16 dmacr;
spin_lock_irqsave(&uap->port.lock, flags);
if (uap->dmatx.queued)
dma_unmap_sg(dmatx->chan->device->dev, &dmatx->sg, 1,
DMA_TO_DEVICE);
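	/*
	 * Mask the TX DMA request for now; it is switched back on once the
	 * next buffer has been queued.
	 */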
dmacr = uap->dmacr;
uap->dmacr = dmacr & ~UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
/*
* If TX DMA was disabled, it means that we've stopped the DMA for
* some reason (e.g., XOFF received, or we want to send an X-char).
*
* Note: we need to be careful here of a potential race between DMA
* and the rest of the driver - if the driver disables TX DMA while
* a TX buffer is completing, we must update the tx queued status to
* get further refills (hence we check dmacr).
*/
if (!(dmacr & UART011_TXDMAE) || uart_tx_stopped(&uap->port) ||
uart_circ_empty(&uap->port.state->xmit)) {
uap->dmatx.queued = false;
spin_unlock_irqrestore(&uap->port.lock, flags);
return;
}
if (pl011_dma_tx_refill(uap) <= 0)
/*
* We didn't queue a DMA buffer for some reason, but we
* have data pending to be sent. Re-enable the TX IRQ.
*/
pl011_start_tx_pio(uap);
spin_unlock_irqrestore(&uap->port.lock, flags);
}
/*
* Try to refill the TX DMA buffer.
* Locking: called with port lock held and IRQs disabled.
* Returns:
* 1 if we queued up a TX DMA buffer.
* 0 if we didn't want to handle this by DMA
* <0 on error
*/
static int pl011_dma_tx_refill(struct uart_amba_port *uap)
{
struct pl011_dmatx_data *dmatx = &uap->dmatx;
struct dma_chan *chan = dmatx->chan;
struct dma_device *dma_dev = chan->device;
struct dma_async_tx_descriptor *desc;
struct circ_buf *xmit = &uap->port.state->xmit;
unsigned int count;
/*
* Try to avoid the overhead involved in using DMA if the
* transaction fits in the first half of the FIFO, by using
* the standard interrupt handling. This ensures that we
* issue a uart_write_wakeup() at the appropriate time.
*/
count = uart_circ_chars_pending(xmit);
if (count < (uap->fifosize >> 1)) {
uap->dmatx.queued = false;
return 0;
}
/*
* Bodge: don't send the last character by DMA, as this
* will prevent XON from notifying us to restart DMA.
*/
count -= 1;
/* Else proceed to copy the TX chars to the DMA buffer and fire DMA */
if (count > PL011_DMA_BUFFER_SIZE)
count = PL011_DMA_BUFFER_SIZE;
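/*
* Copy the pending characters into the linear DMA bounce buffer,
* handling the case where the circular buffer wraps past its end.
*/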
if (xmit->tail < xmit->head)
memcpy(&dmatx->buf[0], &xmit->buf[xmit->tail], count);
else {
size_t first = UART_XMIT_SIZE - xmit->tail;
size_t second;
if (first > count)
first = count;
second = count - first;
memcpy(&dmatx->buf[0], &xmit->buf[xmit->tail], first);
if (second)
memcpy(&dmatx->buf[first], &xmit->buf[0], second);
}
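/* Describe and map only the bytes we actually copied for this transfer. */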
dmatx->sg.length = count;
if (dma_map_sg(dma_dev->dev, &dmatx->sg, 1, DMA_TO_DEVICE) != 1) {
uap->dmatx.queued = false;
dev_dbg(uap->port.dev, "unable to map TX DMA\n");
return -EBUSY;
}
desc = dmaengine_prep_slave_sg(chan, &dmatx->sg, 1, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc) {
dma_unmap_sg(dma_dev->dev, &dmatx->sg, 1, DMA_TO_DEVICE);
uap->dmatx.queued = false;
/*
* If DMA cannot be used right now, we complete this
* transaction via IRQ and let the TTY layer retry.
*/
dev_dbg(uap->port.dev, "TX DMA busy\n");
return -EBUSY;
}
/* Some data to go along to the callback */
desc->callback = pl011_dma_tx_callback;
desc->callback_param = uap;
/* All errors should happen at prepare time */
dmaengine_submit(desc);
/* Fire the DMA transaction */
dma_dev->device_issue_pending(chan);
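/* Let the UART issue TX DMA requests so the transfer can proceed. */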
uap->dmacr |= UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
uap->dmatx.queued = true;
/*
* Now we know that DMA will fire, so advance the ring buffer
* past the data we just dispatched.
*/
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
uap->port.icount.tx += count;
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(&uap->port);
return 1;
}
/*
* We received a transmit interrupt without a pending X-char but with
* pending characters.
* Locking: called with port lock held and IRQs disabled.
* Returns:
* false if we want to use PIO to transmit
* true if we queued a DMA buffer
*/
static bool pl011_dma_tx_irq(struct uart_amba_port *uap)
{
if (!uap->using_tx_dma)
return false;
/*
* If we already have a TX buffer queued, but received a
* TX interrupt, it will be because we've just sent an X-char.
* Ensure the TX DMA is enabled and the TX IRQ is disabled.
*/
if (uap->dmatx.queued) {
uap->dmacr |= UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
uap->im &= ~UART011_TXIM;
pl011_write(uap->im, uap, REG_IMSC);
return true;
}
/*
* We don't have a TX buffer queued, so try to queue one.
* If we successfully queued a buffer, mask the TX IRQ.
*/
if (pl011_dma_tx_refill(uap) > 0) {
uap->im &= ~UART011_TXIM;
pl011_write(uap->im, uap, REG_IMSC);
return true;
}
return false;
}
/*
* Stop the DMA transmit (eg, due to received XOFF).
* Locking: called with port lock held and IRQs disabled.
*/
static inline void pl011_dma_tx_stop(struct uart_amba_port *uap)
{
if (uap->dmatx.queued) {
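/*
* Clear TXDMAE so the UART stops issuing DMA requests; the queued
* descriptor remains in place and resumes once TXDMAE is set again.
*/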
uap->dmacr &= ~UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
}
}
/*
* Try to start a DMA transmit, or in the case of an XON/XOFF
* character queued for send, try to get that character out ASAP.
* Locking: called with port lock held and IRQs disabled.
* Returns:
* false if we want the TX IRQ to be enabled
* true if we have a buffer queued
*/
static inline bool pl011_dma_tx_start(struct uart_amba_port *uap)
{
u16 dmacr;
if (!uap->using_tx_dma)
return false;
if (!uap->port.x_char) {
/* no X-char, try to push chars out in DMA mode */
bool ret = true;
if (!uap->dmatx.queued) {
if (pl011_dma_tx_refill(uap) > 0) {
uap->im &= ~UART011_TXIM;
pl011_write(uap->im, uap, REG_IMSC);
} else
ret = false;
} else if (!(uap->dmacr & UART011_TXDMAE)) {
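/*
* A buffer is queued but TX DMA was disabled (e.g. by an X-char);
* re-enable it so the queued transfer can resume.
*/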
uap->dmacr |= UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
}
return ret;
}
/*
* We have an X-char to send. Disable DMA to prevent it loading
* the TX fifo, and then see if we can stuff it into the FIFO.
*/
dmacr = uap->dmacr;
uap->dmacr &= ~UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
if (pl011_read(uap, REG_FR) & UART01x_FR_TXFF) {
/*
* No space in the FIFO, so enable the transmit interrupt
* so we know when there is space. Note that once we've
* loaded the character, we should just re-enable DMA.
*/
return false;
}
pl011_write(uap->port.x_char, uap, REG_DR);
uap->port.icount.tx++;
uap->port.x_char = 0;
/* Success - restore the DMA state */
uap->dmacr = dmacr;
pl011_write(dmacr, uap, REG_DMACR);
return true;
}
/*
* Flush the transmit buffer.
* Locking: called with port lock held and IRQs disabled.
*/
static void pl011_dma_flush_buffer(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
if (!uap->using_tx_dma)
return;
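/*
 * Terminate the TX DMA job without dropping the port lock.
 * dmaengine_terminate_async() does not wait for a running completion
 * callback, so holding the lock here cannot deadlock, and keeping it
 * held avoids a race where characters queued by a concurrent write
 * would be left in the circular buffer with DMA switched off.
 */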
dmaengine_terminate_async(uap->dmatx.chan);
if (uap->dmatx.queued) {
dma_unmap_sg(uap->dmatx.chan->device->dev, &uap->dmatx.sg, 1,
DMA_TO_DEVICE);
uap->dmatx.queued = false;
uap->dmacr &= ~UART011_TXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
}
}
static void pl011_dma_rx_callback(void *data);
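/*
 * Start an RX DMA job: prepare a single-entry scatterlist transfer from
 * the UART into the current buffer, attach the completion callback,
 * submit it and enable the RX DMA request in DMACR. The RX interrupt is
 * masked while DMA is running. Returns -EIO if there is no RX channel,
 * or -EBUSY if no descriptor could be prepared, in which case the caller
 * falls back to interrupt mode.
 */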
static int pl011_dma_rx_trigger_dma(struct uart_amba_port *uap)
{
struct dma_chan *rxchan = uap->dmarx.chan;
struct pl011_dmarx_data *dmarx = &uap->dmarx;
struct dma_async_tx_descriptor *desc;
struct pl011_sgbuf *sgbuf;
if (!rxchan)
return -EIO;
/* Start the RX DMA job */
sgbuf = uap->dmarx.use_buf_b ?
&uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
desc = dmaengine_prep_slave_sg(rxchan, &sgbuf->sg, 1,
DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
/*
* If the DMA engine is busy and cannot prepare a
* descriptor, it's no big deal: the driver will fall back
* to interrupt mode as a result of this error code.
*/
if (!desc) {
uap->dmarx.running = false;
dmaengine_terminate_all(rxchan);
return -EBUSY;
}
/* Some data to go along to the callback */
desc->callback = pl011_dma_rx_callback;
desc->callback_param = uap;
dmarx->cookie = dmaengine_submit(desc);
dma_async_issue_pending(rxchan);
uap->dmacr |= UART011_RXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
uap->dmarx.running = true;
uap->im &= ~UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
return 0;
}
/*
* This is called either when the DMA job completes or when the FIFO
* timeout interrupt fires. It must be called with the port spinlock
* uap->port.lock held.
*/
static void pl011_dma_rx_chars(struct uart_amba_port *uap,
u32 pending, bool use_buf_b,
bool readfifo)
{
struct tty_port *port = &uap->port.state->port;
struct pl011_sgbuf *sgbuf = use_buf_b ?
&uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
int dma_count = 0;
u32 fifotaken = 0; /* only used for vdbg() */
struct pl011_dmarx_data *dmarx = &uap->dmarx;
int dmataken = 0;
if (uap->dmarx.poll_rate) {
/* The data can be taken by polling */
dmataken = sgbuf->sg.length - dmarx->last_residue;
/* Recalculate the pending size */
if (pending >= dmataken)
pending -= dmataken;
}
/* Pick up the remaining data from the DMA */
if (pending) {
/*
* First take all chars in the DMA pipe, then look in the FIFO.
* Note that tty_insert_flip_string() tries to take as many chars
* as it can.
*/
dma_count = tty_insert_flip_string(port, sgbuf->buf + dmataken,
pending);
uap->port.icount.rx += dma_count;
if (dma_count < pending)
dev_warn(uap->port.dev,
"couldn't insert all characters (TTY is full?)\n");
}
/* Reset the last_residue for Rx DMA poll */
if (uap->dmarx.poll_rate)
dmarx->last_residue = sgbuf->sg.length;
/*
* Only continue with trying to read the FIFO if all DMA chars have
* been taken first.
*/
if (dma_count == pending && readfifo) {
/* Clear any error flags */
pl011_write(UART011_OEIS | UART011_BEIS | UART011_PEIS |
UART011_FEIS, uap, REG_ICR);
/*
* If we read all the DMA'd characters, and we had an
* incomplete buffer, that could be due to an rx error, or
* maybe we just timed out. Read any pending chars and check
* the error status.
*
* Error conditions will only occur in the FIFO; these will
* trigger an immediate interrupt and stop the DMA job, so we
* will always find the error in the FIFO, never in the DMA
* buffer.
*/
fifotaken = pl011_fifo_to_tty(uap);
}
spin_unlock(&uap->port.lock);
dev_vdbg(uap->port.dev,
"Took %d chars from DMA buffer and %d chars from the FIFO\n",
dma_count, fifotaken);
tty_flip_buffer_push(port);
spin_lock(&uap->port.lock);
}
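/*
 * Handle an RX interrupt (typically a receive timeout) while an RX DMA
 * job is in flight: pause the channel so the reported residue is stable,
 * disable the RX DMA request, terminate the transfer, push the received
 * characters (and anything left in the FIFO) to the tty, then switch
 * buffers and start the next DMA job, falling back to interrupt mode if
 * that fails.
 */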
static void pl011_dma_rx_irq(struct uart_amba_port *uap)
{
struct pl011_dmarx_data *dmarx = &uap->dmarx;
struct dma_chan *rxchan = dmarx->chan;
struct pl011_sgbuf *sgbuf = dmarx->use_buf_b ?
&dmarx->sgbuf_b : &dmarx->sgbuf_a;
size_t pending;
struct dma_tx_state state;
enum dma_status dmastat;
/*
* Pause the transfer so we can trust the current counter,
* do this before we pause the PL011 block, else we may
* overflow the FIFO.
*/
if (dmaengine_pause(rxchan))
dev_err(uap->port.dev, "unable to pause DMA transfer\n");
dmastat = rxchan->device->device_tx_status(rxchan,
dmarx->cookie, &state);
if (dmastat != DMA_PAUSED)
dev_err(uap->port.dev, "unable to pause DMA transfer\n");
/* Disable RX DMA - incoming data will wait in the FIFO */
uap->dmacr &= ~UART011_RXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
uap->dmarx.running = false;
pending = sgbuf->sg.length - state.residue;
BUG_ON(pending > PL011_DMA_BUFFER_SIZE);
/* Then we terminate the transfer - we now know our residue */
dmaengine_terminate_all(rxchan);
/*
* This will take the chars we have so far and insert
* into the framework.
*/
pl011_dma_rx_chars(uap, pending, dmarx->use_buf_b, true);
/* Switch buffer & re-trigger DMA job */
dmarx->use_buf_b = !dmarx->use_buf_b;
if (pl011_dma_rx_trigger_dma(uap)) {
dev_dbg(uap->port.dev, "could not retrigger RX DMA job "
"fall back to interrupt mode\n");
uap->im |= UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
}
}
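/*
 * DMA engine completion callback for RX: the current buffer filled up
 * before a receive timeout occurred. Read the residue, immediately start
 * a new DMA job on the other buffer, and then push the characters from
 * the filled buffer to the tty.
 */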
static void pl011_dma_rx_callback(void *data)
{
struct uart_amba_port *uap = data;
struct pl011_dmarx_data *dmarx = &uap->dmarx;
struct dma_chan *rxchan = dmarx->chan;
bool lastbuf = dmarx->use_buf_b;
struct pl011_sgbuf *sgbuf = dmarx->use_buf_b ?
&dmarx->sgbuf_b : &dmarx->sgbuf_a;
size_t pending;
struct dma_tx_state state;
int ret;
/*
* This completion interrupt occurs typically when the
* RX buffer is totally stuffed but no timeout has yet
* occurred. When that happens, we just want the RX
* routine to flush out the secondary DMA buffer while
* we immediately trigger the next DMA job.
*/
spin_lock_irq(&uap->port.lock);
/*
* Rx data can be taken by the UART interrupts during
* the DMA irq handler. So we check the residue here.
*/
rxchan->device->device_tx_status(rxchan, dmarx->cookie, &state);
pending = sgbuf->sg.length - state.residue;
BUG_ON(pending > PL011_DMA_BUFFER_SIZE);
/* Then we terminate the transfer - we now know our residue */
dmaengine_terminate_all(rxchan);
uap->dmarx.running = false;
dmarx->use_buf_b = !lastbuf;
ret = pl011_dma_rx_trigger_dma(uap);
pl011_dma_rx_chars(uap, pending, lastbuf, false);
spin_unlock_irq(&uap->port.lock);
/*
* Do this check after we picked the DMA chars so we don't
* get some IRQ immediately from RX.
*/
if (ret) {
dev_dbg(uap->port.dev, "could not retrigger RX DMA job "
"fall back to interrupt mode\n");
uap->im |= UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
}
}
/*
* Stop accepting received characters, when we're shutting down or
* suspending this port.
* Locking: called with port lock held and IRQs disabled.
*/
static inline void pl011_dma_rx_stop(struct uart_amba_port *uap)
{
/* FIXME. Just disable the DMA enable */
uap->dmacr &= ~UART011_RXDMAE;
pl011_write(uap->dmacr, uap, REG_DMACR);
}
/*
* Timer handler for Rx DMA polling.
* On each poll it checks the residue in the DMA buffer, pushes any new
* data to the tty, and updates last_residue for the next poll.
*/
static void pl011_dma_rx_poll(struct timer_list *t)
{
struct uart_amba_port *uap = from_timer(uap, t, dmarx.timer);
struct tty_port *port = &uap->port.state->port;
struct pl011_dmarx_data *dmarx = &uap->dmarx;
struct dma_chan *rxchan = uap->dmarx.chan;
unsigned long flags = 0;
unsigned int dmataken = 0;
unsigned int size = 0;
struct pl011_sgbuf *sgbuf;
int dma_count;
struct dma_tx_state state;
sgbuf = dmarx->use_buf_b ? &uap->dmarx.sgbuf_b : &uap->dmarx.sgbuf_a;
rxchan->device->device_tx_status(rxchan, dmarx->cookie, &state);
if (likely(state.residue < dmarx->last_residue)) {
dmataken = sgbuf->sg.length - dmarx->last_residue;
size = dmarx->last_residue - state.residue;
dma_count = tty_insert_flip_string(port, sgbuf->buf + dmataken,
size);
if (dma_count == size)
dmarx->last_residue = state.residue;
dmarx->last_jiffies = jiffies;
}
tty_flip_buffer_push(port);
/*
* If no data is received in poll_timeout, the driver will fall back
* to interrupt mode. We will retrigger DMA at the first interrupt.
*/
if (jiffies_to_msecs(jiffies - dmarx->last_jiffies)
> uap->dmarx.poll_timeout) {
spin_lock_irqsave(&uap->port.lock, flags);
pl011_dma_rx_stop(uap);
uap->im |= UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
spin_unlock_irqrestore(&uap->port.lock, flags);
uap->dmarx.running = false;
dmaengine_terminate_all(rxchan);
del_timer(&uap->dmarx.timer);
} else {
mod_timer(&uap->dmarx.timer,
jiffies + msecs_to_jiffies(uap->dmarx.poll_rate));
}
}
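/*
 * Set up DMA for the port: allocate the TX bounce buffer, initialise the
 * two RX sg buffers, enable DMA-on-error in DMACR and, if possible, kick
 * off the first RX transfer and the RX polling timer.
 */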
static void pl011_dma_startup(struct uart_amba_port *uap)
{
int ret;
if (!uap->dma_probed)
pl011_dma_probe(uap);
if (!uap->dmatx.chan)
return;
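	/*
	 * Allocate a bounce buffer that TX data is copied into before being
	 * mapped for DMA; __GFP_DMA keeps it in DMA-capable memory.
	 */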
uap->dmatx.buf = kmalloc(PL011_DMA_BUFFER_SIZE, GFP_KERNEL | __GFP_DMA);
if (!uap->dmatx.buf) {
dev_err(uap->port.dev, "no memory for DMA TX buffer\n");
uap->port.fifosize = uap->fifosize;
return;
}
sg_init_one(&uap->dmatx.sg, uap->dmatx.buf, PL011_DMA_BUFFER_SIZE);
/* The DMA buffer is now the FIFO the TTY subsystem can use */
uap->port.fifosize = PL011_DMA_BUFFER_SIZE;
uap->using_tx_dma = true;
if (!uap->dmarx.chan)
goto skip_rx;
/* Allocate and map DMA RX buffers */
ret = pl011_sgbuf_init(uap->dmarx.chan, &uap->dmarx.sgbuf_a,
DMA_FROM_DEVICE);
if (ret) {
dev_err(uap->port.dev, "failed to init DMA %s: %d\n",
"RX buffer A", ret);
goto skip_rx;
}
ret = pl011_sgbuf_init(uap->dmarx.chan, &uap->dmarx.sgbuf_b,
DMA_FROM_DEVICE);
if (ret) {
dev_err(uap->port.dev, "failed to init DMA %s: %d\n",
"RX buffer B", ret);
pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_a,
DMA_FROM_DEVICE);
goto skip_rx;
}
uap->using_rx_dma = true;
skip_rx:
/* Turn on DMA error (RX/TX will be enabled on demand) */
uap->dmacr |= UART011_DMAONERR;
pl011_write(uap->dmacr, uap, REG_DMACR);
/*
	 * The ST Micro variants have some specific DMA burst threshold
	 * compensation. Set this to 16 bytes, so bursts will only
	 * be issued above/below 16 bytes.
*/
if (uap->vendor->dma_threshold)
pl011_write(ST_UART011_DMAWM_RX_16 | ST_UART011_DMAWM_TX_16,
uap, REG_ST_DMAWM);
if (uap->using_rx_dma) {
if (pl011_dma_rx_trigger_dma(uap))
dev_dbg(uap->port.dev, "could not trigger initial "
"RX DMA job, fall back to interrupt mode\n");
if (uap->dmarx.poll_rate) {
timer_setup(&uap->dmarx.timer, pl011_dma_rx_poll, 0);
mod_timer(&uap->dmarx.timer,
jiffies +
msecs_to_jiffies(uap->dmarx.poll_rate));
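			/* Nothing received yet: the residue equals the whole buffer */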
uap->dmarx.last_residue = PL011_DMA_BUFFER_SIZE;
uap->dmarx.last_jiffies = jiffies;
}
}
}
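/*
 * Tear down the DMA state set up in pl011_dma_startup(): wait for the UART
 * to go idle, disable DMA in DMACR, terminate any in-flight transfers and
 * release the TX bounce buffer, the RX sg buffers and the RX polling timer.
 */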
static void pl011_dma_shutdown(struct uart_amba_port *uap)
{
if (!(uap->using_tx_dma || uap->using_rx_dma))
return;
	/* Wait until the UART is no longer busy, then disable RX and TX DMA */
while (pl011_read(uap, REG_FR) & uap->vendor->fr_busy)
cpu_relax();
spin_lock_irq(&uap->port.lock);
uap->dmacr &= ~(UART011_DMAONERR | UART011_RXDMAE | UART011_TXDMAE);
pl011_write(uap->dmacr, uap, REG_DMACR);
spin_unlock_irq(&uap->port.lock);
if (uap->using_tx_dma) {
/* In theory, this should already be done by pl011_dma_flush_buffer */
dmaengine_terminate_all(uap->dmatx.chan);
if (uap->dmatx.queued) {
dma_unmap_sg(uap->dmatx.chan->device->dev, &uap->dmatx.sg, 1,
DMA_TO_DEVICE);
uap->dmatx.queued = false;
}
kfree(uap->dmatx.buf);
uap->using_tx_dma = false;
}
if (uap->using_rx_dma) {
dmaengine_terminate_all(uap->dmarx.chan);
/* Clean up the RX DMA */
pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_a, DMA_FROM_DEVICE);
pl011_sgbuf_free(uap->dmarx.chan, &uap->dmarx.sgbuf_b, DMA_FROM_DEVICE);
if (uap->dmarx.poll_rate)
del_timer_sync(&uap->dmarx.timer);
uap->using_rx_dma = false;
}
}
static inline bool pl011_dma_rx_available(struct uart_amba_port *uap)
{
return uap->using_rx_dma;
}
static inline bool pl011_dma_rx_running(struct uart_amba_port *uap)
{
return uap->using_rx_dma && uap->dmarx.running;
}
#else
/* Blank functions if the DMA engine is not available */
static inline void pl011_dma_probe(struct uart_amba_port *uap)
{
}
static inline void pl011_dma_remove(struct uart_amba_port *uap)
{
}
static inline void pl011_dma_startup(struct uart_amba_port *uap)
{
}
static inline void pl011_dma_shutdown(struct uart_amba_port *uap)
{
}
static inline bool pl011_dma_tx_irq(struct uart_amba_port *uap)
{
return false;
}
static inline void pl011_dma_tx_stop(struct uart_amba_port *uap)
{
}
static inline bool pl011_dma_tx_start(struct uart_amba_port *uap)
{
return false;
}
static inline void pl011_dma_rx_irq(struct uart_amba_port *uap)
{
}
static inline void pl011_dma_rx_stop(struct uart_amba_port *uap)
{
}
static inline int pl011_dma_rx_trigger_dma(struct uart_amba_port *uap)
{
return -EIO;
}
static inline bool pl011_dma_rx_available(struct uart_amba_port *uap)
{
return false;
}
static inline bool pl011_dma_rx_running(struct uart_amba_port *uap)
{
return false;
}
#define pl011_dma_flush_buffer NULL
#endif
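/* Mask the TX interrupt and stop any in-progress TX DMA */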
static void pl011_stop_tx(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
uap->im &= ~UART011_TXIM;
pl011_write(uap->im, uap, REG_IMSC);
pl011_dma_tx_stop(uap);
}
static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq);
/* Start TX with programmed I/O only (no DMA) */
static void pl011_start_tx_pio(struct uart_amba_port *uap)
{
if (pl011_tx_chars(uap, false)) {
uap->im |= UART011_TXIM;
pl011_write(uap->im, uap, REG_IMSC);
}
}
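/* Start TX: try to hand the circular buffer to DMA, fall back to PIO if DMA is unavailable */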
static void pl011_start_tx(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
if (!pl011_dma_tx_start(uap))
pl011_start_tx_pio(uap);
}
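/* Mask all receive and receive-error interrupts and stop RX DMA */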
static void pl011_stop_rx(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
uap->im &= ~(UART011_RXIM|UART011_RTIM|UART011_FEIM|
UART011_PEIM|UART011_BEIM|UART011_OEIM);
pl011_write(uap->im, uap, REG_IMSC);
pl011_dma_rx_stop(uap);
}
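/* Unmask the modem-status change interrupts (RI, CTS, DCD, DSR) */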
static void pl011_enable_ms(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
uap->im |= UART011_RIMIM|UART011_CTSMIM|UART011_DCDMIM|UART011_DSRMIM;
pl011_write(uap->im, uap, REG_IMSC);
}
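/* Drain the RX FIFO into the TTY layer; the port lock is dropped around the buffer push */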
static void pl011_rx_chars(struct uart_amba_port *uap)
__releases(&uap->port.lock)
__acquires(&uap->port.lock)
{
pl011_fifo_to_tty(uap);
spin_unlock(&uap->port.lock);
tty_flip_buffer_push(&uap->port.state->port);
/*
* If we were temporarily out of DMA mode for a while,
* attempt to switch back to DMA mode again.
*/
if (pl011_dma_rx_available(uap)) {
if (pl011_dma_rx_trigger_dma(uap)) {
dev_dbg(uap->port.dev, "could not trigger RX DMA job; "
"falling back to interrupt mode\n");
uap->im |= UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
} else {
#ifdef CONFIG_DMA_ENGINE
/* Start Rx DMA poll */
if (uap->dmarx.poll_rate) {
uap->dmarx.last_jiffies = jiffies;
uap->dmarx.last_residue = PL011_DMA_BUFFER_SIZE;
mod_timer(&uap->dmarx.timer,
jiffies +
msecs_to_jiffies(uap->dmarx.poll_rate));
}
#endif
}
}
spin_lock(&uap->port.lock);
}
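/* Write one character to the data register; outside IRQ context, fail if the TX FIFO is full */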
static bool pl011_tx_char(struct uart_amba_port *uap, unsigned char c,
bool from_irq)
{
if (unlikely(!from_irq) &&
pl011_read(uap, REG_FR) & UART01x_FR_TXFF)
return false; /* unable to transmit character */
pl011_write(c, uap, REG_DR);
uap->port.icount.tx++;
return true;
}
/* Returns true if tx interrupts have to be (kept) enabled */
static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
{
struct circ_buf *xmit = &uap->port.state->xmit;
int count = uap->fifosize >> 1;
if (uap->port.x_char) {
if (!pl011_tx_char(uap, uap->port.x_char, from_irq))
return true;
uap->port.x_char = 0;
--count;
}
if (uart_circ_empty(xmit) || uart_tx_stopped(&uap->port)) {
pl011_stop_tx(&uap->port);
return false;
}
/* If we are using DMA mode, try to send some characters. */
if (pl011_dma_tx_irq(uap))
return true;
do {
if (likely(from_irq) && count-- == 0)
break;
if (!pl011_tx_char(uap, xmit->buf[xmit->tail], from_irq))
break;
xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
} while (!uart_circ_empty(xmit));
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(&uap->port);
if (uart_circ_empty(xmit)) {
pl011_stop_tx(&uap->port);
return false;
}
return true;
}
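/* Report DCD, DSR and CTS changes from the flag register to the serial core */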
static void pl011_modem_status(struct uart_amba_port *uap)
{
unsigned int status, delta;
status = pl011_read(uap, REG_FR) & UART01x_FR_MODEM_ANY;
delta = status ^ uap->old_status;
uap->old_status = status;
if (!delta)
return;
if (delta & UART01x_FR_DCD)
uart_handle_dcd_change(&uap->port, status & UART01x_FR_DCD);
if (delta & uap->vendor->fr_dsr)
uap->port.icount.dsr++;
if (delta & uap->vendor->fr_cts)
uart_handle_cts_change(&uap->port,
status & uap->vendor->fr_cts);
wake_up_interruptible(&uap->port.state->port.delta_msr_wait);
}
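/* Apply the vendor CTS event workaround (dummy ICR accesses) where required */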
static void check_apply_cts_event_workaround(struct uart_amba_port *uap)
{
unsigned int dummy_read;
if (!uap->vendor->cts_event_workaround)
return;
/* workaround to make sure that all bits are unlocked.. */
pl011_write(0x00, uap, REG_ICR);
/*
* WA: introduce a 26ns (1 UART clk) delay before W1C;
* a single APB access will incur a 2 pclk (133.12 MHz) delay,
* so add 2 dummy reads
*/
dummy_read = pl011_read(uap, REG_ICR);
dummy_read = pl011_read(uap, REG_ICR);
}
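/* Main interrupt handler: services RX, TX and modem-status interrupts for up to AMBA_ISR_PASS_LIMIT passes */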
static irqreturn_t pl011_int(int irq, void *dev_id)
{
struct uart_amba_port *uap = dev_id;
unsigned long flags;
unsigned int status, pass_counter = AMBA_ISR_PASS_LIMIT;
int handled = 0;
spin_lock_irqsave(&uap->port.lock, flags);
status = pl011_read(uap, REG_RIS) & uap->im;
if (status) {
do {
check_apply_cts_event_workaround(uap);
pl011_write(status & ~(UART011_TXIS|UART011_RTIS|
UART011_RXIS),
uap, REG_ICR);
if (status & (UART011_RTIS|UART011_RXIS)) {
if (pl011_dma_rx_running(uap))
pl011_dma_rx_irq(uap);
else
pl011_rx_chars(uap);
}
if (status & (UART011_DSRMIS|UART011_DCDMIS|
UART011_CTSMIS|UART011_RIMIS))
pl011_modem_status(uap);
if (status & UART011_TXIS)
pl011_tx_chars(uap, true);
if (pass_counter-- == 0)
break;
status = pl011_read(uap, REG_RIS) & uap->im;
} while (status != 0);
handled = 1;
}
spin_unlock_irqrestore(&uap->port.lock, flags);
return IRQ_RETVAL(handled);
}
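/* Report TIOCSER_TEMT when the UART is neither busy nor has a full TX FIFO */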
static unsigned int pl011_tx_empty(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
/* Allow feature register bits to be inverted to work around errata */
unsigned int status = pl011_read(uap, REG_FR) ^ uap->vendor->inv_fr;
return status & (uap->vendor->fr_busy | UART01x_FR_TXFF) ?
0 : TIOCSER_TEMT;
}
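/* Map the flag-register modem bits (DCD, DSR, CTS, RI) to TIOCM_* flags */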
static unsigned int pl011_get_mctrl(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int result = 0;
unsigned int status = pl011_read(uap, REG_FR);
#define TIOCMBIT(uartbit, tiocmbit) \
if (status & uartbit) \
result |= tiocmbit
TIOCMBIT(UART01x_FR_DCD, TIOCM_CAR);
TIOCMBIT(uap->vendor->fr_dsr, TIOCM_DSR);
TIOCMBIT(uap->vendor->fr_cts, TIOCM_CTS);
TIOCMBIT(uap->vendor->fr_ri, TIOCM_RNG);
#undef TIOCMBIT
return result;
}
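/* Translate TIOCM_* flags into control-register bits (RTS, DTR, OUT1/2, loopback, auto-RTS) */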
static void pl011_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int cr;
cr = pl011_read(uap, REG_CR);
#define TIOCMBIT(tiocmbit, uartbit) \
if (mctrl & tiocmbit) \
cr |= uartbit; \
else \
cr &= ~uartbit
TIOCMBIT(TIOCM_RTS, UART011_CR_RTS);
TIOCMBIT(TIOCM_DTR, UART011_CR_DTR);
TIOCMBIT(TIOCM_OUT1, UART011_CR_OUT1);
TIOCMBIT(TIOCM_OUT2, UART011_CR_OUT2);
TIOCMBIT(TIOCM_LOOP, UART011_CR_LBE);
if (port->status & UPSTAT_AUTORTS) {
/* We need to disable auto-RTS if we want to turn RTS off */
TIOCMBIT(TIOCM_RTS, UART011_CR_RTSEN);
}
#undef TIOCMBIT
pl011_write(cr, uap, REG_CR);
}
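/* Assert or clear the break condition via the LCRH BRK bit */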
static void pl011_break_ctl(struct uart_port *port, int break_state)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned long flags;
unsigned int lcr_h;
spin_lock_irqsave(&uap->port.lock, flags);
lcr_h = pl011_read(uap, REG_LCRH_TX);
if (break_state == -1)
lcr_h |= UART01x_LCRH_BRK;
else
lcr_h &= ~UART01x_LCRH_BRK;
pl011_write(lcr_h, uap, REG_LCRH_TX);
spin_unlock_irqrestore(&uap->port.lock, flags);
}
#ifdef CONFIG_CONSOLE_POLL
static void pl011_quiesce_irqs(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
pl011_write(pl011_read(uap, REG_MIS), uap, REG_ICR);
/*
* There is no way to clear TXIM as this is "ready to transmit IRQ", so
* we simply mask it. start_tx() will unmask it.
*
* Note we can race with start_tx(), and if the race happens, the
* polling user might get another interrupt just after we clear it.
* But it should be OK and can happen even w/o the race, e.g.
* controller immediately got some new data and raised the IRQ.
*
* And whoever uses polling routines assumes that it manages the device
* (including tx queue), so we're also fine with start_tx()'s caller
* side.
*/
pl011_write(pl011_read(uap, REG_IMSC) & ~UART011_TXIM, uap,
REG_IMSC);
}
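/* Polled console read: return NO_POLL_CHAR if the RX FIFO is empty */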
static int pl011_get_poll_char(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int status;
/*
* The caller might need IRQs lowered, e.g. if used with KDB NMI
* debugger.
*/
pl011_quiesce_irqs(port);
status = pl011_read(uap, REG_FR);
if (status & UART01x_FR_RXFE)
return NO_POLL_CHAR;
return pl011_read(uap, REG_DR);
}
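/* Polled console write: busy-wait for TX FIFO space, then write the character */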
static void pl011_put_poll_char(struct uart_port *port,
unsigned char ch)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
while (pl011_read(uap, REG_FR) & UART01x_FR_TXFF)
cpu_relax();
pl011_write(ch, uap, REG_DR);
}
#endif /* CONFIG_CONSOLE_POLL */
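/* One-time hardware init: pins, clock, clear pending interrupts and unmask RX/RT interrupts */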
static int pl011_hwinit(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
int retval;
/* Optionally enable pins to be muxed in and configured */
pinctrl_pm_select_default_state(port->dev);
/*
* Try to enable the clock producer.
*/
retval = clk_prepare_enable(uap->clk);
if (retval)
return retval;
uap->port.uartclk = clk_get_rate(uap->clk);
/* Clear pending error and receive interrupts */
pl011_write(UART011_OEIS | UART011_BEIS | UART011_PEIS |
UART011_FEIS | UART011_RTIS | UART011_RXIS,
uap, REG_ICR);
/*
 * Save the interrupt enable mask, and enable RX interrupts in case
 * the interrupt is used for NMI entry.
*/
uap->im = pl011_read(uap, REG_IMSC);
pl011_write(UART011_RTIM | UART011_RXIM, uap, REG_IMSC);
if (dev_get_platdata(uap->port.dev)) {
struct amba_pl011_data *plat;
plat = dev_get_platdata(uap->port.dev);
if (plat->init)
plat->init();
}
return 0;
}
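/*
 * Some vendor variants of the PL011 (the ST Micro implementation, for
 * example) expose separate RX and TX line-control registers, while the
 * standard ARM part has a single LCR_H. The helper below reports whether
 * the two offsets differ so that callers know to program them separately.
 */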
static bool pl011_split_lcrh(const struct uart_amba_port *uap)
{
return pl011_reg_to_offset(uap, REG_LCRH_RX) !=
pl011_reg_to_offset(uap, REG_LCRH_TX);
}
static void pl011_write_lcr_h(struct uart_amba_port *uap, unsigned int lcr_h)
{
pl011_write(lcr_h, uap, REG_LCRH_RX);
if (pl011_split_lcrh(uap)) {
int i;
/*
 * Wait 10 PCLKs before writing the LCRH_TX register; to generate this
 * delay, write a read-only register (MIS) 10 times.
 */
for (i = 0; i < 10; ++i)
pl011_write(0xff, uap, REG_MIS);
pl011_write(lcr_h, uap, REG_LCRH_TX);
}
}
static int pl011_allocate_irq(struct uart_amba_port *uap)
{
pl011_write(uap->im, uap, REG_IMSC);
return request_irq(uap->port.irq, pl011_int, IRQF_SHARED, "uart-pl011", uap);
}
/*
 * Enable interrupts: only the receive timeout interrupt when using DMA;
 * if the initial RX DMA job failed, start in interrupt mode as well.
 */
static void pl011_enable_interrupts(struct uart_amba_port *uap)
{
unsigned int i;
spin_lock_irq(&uap->port.lock);
/* Clear out any spuriously appearing RX interrupts */
pl011_write(UART011_RTIS | UART011_RXIS, uap, REG_ICR);
/*
* RXIS is asserted only when the RX FIFO transitions from below
* to above the trigger threshold. If the RX FIFO is already
* full to the threshold this can't happen and RXIS will now be
* stuck off. Drain the RX FIFO explicitly to fix this:
*/
for (i = 0; i < uap->fifosize * 2; ++i) {
if (pl011_read(uap, REG_FR) & UART01x_FR_RXFE)
break;
pl011_read(uap, REG_DR);
}
uap->im = UART011_RTIM;
if (!pl011_dma_rx_running(uap))
uap->im |= UART011_RXIM;
pl011_write(uap->im, uap, REG_IMSC);
spin_unlock_irq(&uap->port.lock);
}
static int pl011_startup(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int cr;
int retval;
retval = pl011_hwinit(port);
if (retval)
goto clk_dis;
retval = pl011_allocate_irq(uap);
if (retval)
goto clk_dis;
pl011_write(uap->vendor->ifls, uap, REG_IFLS);
spin_lock_irq(&uap->port.lock);
/* restore RTS and DTR */
cr = uap->old_cr & (UART011_CR_RTS | UART011_CR_DTR);
cr |= UART01x_CR_UARTEN | UART011_CR_RXE | UART011_CR_TXE;
pl011_write(cr, uap, REG_CR);
spin_unlock_irq(&uap->port.lock);
/*
* initialise the old status of the modem signals
*/
uap->old_status = pl011_read(uap, REG_FR) & UART01x_FR_MODEM_ANY;
/* Startup DMA */
pl011_dma_startup(uap);
pl011_enable_interrupts(uap);
return 0;
clk_dis:
clk_disable_unprepare(uap->clk);
return retval;
}
static int sbsa_uart_startup(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
int retval;
retval = pl011_hwinit(port);
if (retval)
return retval;
retval = pl011_allocate_irq(uap);
if (retval)
return retval;
/* The SBSA UART does not support any modem status lines. */
uap->old_status = 0;
pl011_enable_interrupts(uap);
return 0;
}
static void pl011_shutdown_channel(struct uart_amba_port *uap,
unsigned int lcrh)
{
unsigned long val;
val = pl011_read(uap, lcrh);
val &= ~(UART01x_LCRH_BRK | UART01x_LCRH_FEN);
pl011_write(val, uap, lcrh);
}
/*
 * Disable the port. This must not disable RTS and DTR;
 * their state is preserved so that it can be restored
 * during startup().
 */
static void pl011_disable_uart(struct uart_amba_port *uap)
{
unsigned int cr;
uap->port.status &= ~(UPSTAT_AUTOCTS | UPSTAT_AUTORTS);
spin_lock_irq(&uap->port.lock);
cr = pl011_read(uap, REG_CR);
uap->old_cr = cr;
cr &= UART011_CR_RTS | UART011_CR_DTR;
cr |= UART01x_CR_UARTEN | UART011_CR_TXE;
pl011_write(cr, uap, REG_CR);
spin_unlock_irq(&uap->port.lock);
/*
* disable break condition and fifos
*/
pl011_shutdown_channel(uap, REG_LCRH_RX);
if (pl011_split_lcrh(uap))
pl011_shutdown_channel(uap, REG_LCRH_TX);
}
static void pl011_disable_interrupts(struct uart_amba_port *uap)
{
spin_lock_irq(&uap->port.lock);
/* mask all interrupts and clear all pending ones */
uap->im = 0;
pl011_write(uap->im, uap, REG_IMSC);
pl011_write(0xffff, uap, REG_ICR);
spin_unlock_irq(&uap->port.lock);
}
static void pl011_shutdown(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
pl011_disable_interrupts(uap);
pl011_dma_shutdown(uap);
free_irq(uap->port.irq, uap);
pl011_disable_uart(uap);
/*
* Shut down the clock producer
*/
clk_disable_unprepare(uap->clk);
/* Optionally let pins go into sleep states */
pinctrl_pm_select_sleep_state(port->dev);
if (dev_get_platdata(uap->port.dev)) {
struct amba_pl011_data *plat;
plat = dev_get_platdata(uap->port.dev);
if (plat->exit)
plat->exit();
}
if (uap->port.ops->flush_buffer)
uap->port.ops->flush_buffer(port);
}
static void sbsa_uart_shutdown(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
pl011_disable_interrupts(uap);
free_irq(uap->port.irq, uap);
if (uap->port.ops->flush_buffer)
uap->port.ops->flush_buffer(port);
}
static void
pl011_setup_status_masks(struct uart_port *port, struct ktermios *termios)
{
port->read_status_mask = UART011_DR_OE | 255;
if (termios->c_iflag & INPCK)
port->read_status_mask |= UART011_DR_FE | UART011_DR_PE;
if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
port->read_status_mask |= UART011_DR_BE;
/*
* Characters to ignore
*/
port->ignore_status_mask = 0;
if (termios->c_iflag & IGNPAR)
port->ignore_status_mask |= UART011_DR_FE | UART011_DR_PE;
if (termios->c_iflag & IGNBRK) {
port->ignore_status_mask |= UART011_DR_BE;
/*
* If we're ignoring parity and break indicators,
* ignore overruns too (for real raw support).
*/
if (termios->c_iflag & IGNPAR)
port->ignore_status_mask |= UART011_DR_OE;
}
/*
* Ignore all characters if CREAD is not set.
*/
if ((termios->c_cflag & CREAD) == 0)
port->ignore_status_mask |= UART_DUMMY_DR_RX;
}
static void
pl011_set_termios(struct uart_port *port, struct ktermios *termios,
struct ktermios *old)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned int lcr_h, old_cr;
unsigned long flags;
unsigned int baud, quot, clkdiv;
if (uap->vendor->oversampling)
clkdiv = 8;
else
clkdiv = 16;
/*
* Ask the core to calculate the divisor for us.
*/
baud = uart_get_baud_rate(port, termios, old, 0,
port->uartclk / clkdiv);
#ifdef CONFIG_DMA_ENGINE
/*
* Adjust RX DMA polling rate with baud rate if not specified.
*/
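/*
 * The 10000000 / baud figure below comes from the original RX DMA polling
 * change: polling once per character time (roughly baud / 10 line bits per
 * second) was found to be needlessly aggressive, and this longer interval
 * was chosen experimentally so that each poll still picks up data promptly
 * while batching a useful number of characters.
 */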
if (uap->dmarx.auto_poll_rate)
uap->dmarx.poll_rate = DIV_ROUND_UP(10000000, baud);
#endif
if (baud > port->uartclk/16)
quot = DIV_ROUND_CLOSEST(port->uartclk * 8, baud);
else
quot = DIV_ROUND_CLOSEST(port->uartclk * 4, baud);
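/*
 * quot is the baud divisor uartclk / (16 * baud) (or / 8 with
 * oversampling) scaled by 64: the low six bits become FBRD and the rest
 * IBRD further down. Worked example (illustrative values, not taken from
 * this file): a 24 MHz uartclk at 115200 baud gives quot = 833, so
 * IBRD = 13 and FBRD = 1, i.e. a divisor of 13 + 1/64 against an ideal
 * 13.02.
 */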
switch (termios->c_cflag & CSIZE) {
case CS5:
lcr_h = UART01x_LCRH_WLEN_5;
break;
case CS6:
lcr_h = UART01x_LCRH_WLEN_6;
break;
case CS7:
lcr_h = UART01x_LCRH_WLEN_7;
break;
default: // CS8
lcr_h = UART01x_LCRH_WLEN_8;
break;
}
if (termios->c_cflag & CSTOPB)
lcr_h |= UART01x_LCRH_STP2;
if (termios->c_cflag & PARENB) {
lcr_h |= UART01x_LCRH_PEN;
if (!(termios->c_cflag & PARODD))
lcr_h |= UART01x_LCRH_EPS;
if (termios->c_cflag & CMSPAR)
lcr_h |= UART011_LCRH_SPS;
}
if (uap->fifosize > 1)
lcr_h |= UART01x_LCRH_FEN;
spin_lock_irqsave(&port->lock, flags);
/*
* Update the per-port timeout.
*/
uart_update_timeout(port, termios->c_cflag, baud);
pl011_setup_status_masks(port, termios);
if (UART_ENABLE_MS(port, termios->c_cflag))
pl011_enable_ms(port);
/* first, disable everything */
old_cr = pl011_read(uap, REG_CR);
pl011_write(0, uap, REG_CR);
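/*
 * CRTSCTS selects the PL011's hardware flow control: CTSEN/RTSEN let the
 * UART gate transmission on nCTS and drive nRTS from receive-FIFO space
 * by itself, and UPSTAT_AUTOCTS/UPSTAT_AUTORTS tell the serial core that
 * flow control is handled in hardware.
 */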
if (termios->c_cflag & CRTSCTS) {
if (old_cr & UART011_CR_RTS)
old_cr |= UART011_CR_RTSEN;
old_cr |= UART011_CR_CTSEN;
port->status |= UPSTAT_AUTOCTS | UPSTAT_AUTORTS;
} else {
old_cr &= ~(UART011_CR_CTSEN | UART011_CR_RTSEN);
port->status &= ~(UPSTAT_AUTOCTS | UPSTAT_AUTORTS);
}
if (uap->vendor->oversampling) {
if (baud > port->uartclk / 16)
old_cr |= ST_UART011_CR_OVSFACT;
else
old_cr &= ~ST_UART011_CR_OVSFACT;
}
/*
* Workaround for the ST Micro oversampling variants to
* increase the bitrate slightly, by lowering the divisor,
* to avoid delayed sampling of start bit at high speeds,
* else we see data corruption.
*/
if (uap->vendor->oversampling) {
if ((baud >= 3000000) && (baud < 3250000) && (quot > 1))
quot -= 1;
else if ((baud > 3250000) && (quot > 2))
quot -= 2;
}
/* Set baud rate */
pl011_write(quot & 0x3f, uap, REG_FBRD);
pl011_write(quot >> 6, uap, REG_IBRD);
/*
* ----------v----------v----------v----------v-----
* NOTE: REG_LCRH_TX and REG_LCRH_RX MUST BE WRITTEN AFTER
* REG_FBRD & REG_IBRD.
* ----------^----------^----------^----------^-----
*/
pl011_write_lcr_h(uap, lcr_h);
pl011_write(old_cr, uap, REG_CR);
spin_unlock_irqrestore(&port->lock, flags);
}
static void
sbsa_uart_set_termios(struct uart_port *port, struct ktermios *termios,
struct ktermios *old)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
unsigned long flags;
tty_termios_encode_baud_rate(termios, uap->fixed_baud, uap->fixed_baud);
/* The SBSA UART only supports 8n1 without hardware flow control. */
termios->c_cflag &= ~(CSIZE | CSTOPB | PARENB | PARODD);
termios->c_cflag &= ~(CMSPAR | CRTSCTS);
termios->c_cflag |= CS8 | CLOCAL;
spin_lock_irqsave(&port->lock, flags);
uart_update_timeout(port, CS8, uap->fixed_baud);
pl011_setup_status_masks(port, termios);
spin_unlock_irqrestore(&port->lock, flags);
}
static const char *pl011_type(struct uart_port *port)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
return uap->port.type == PORT_AMBA ? uap->type : NULL;
}
/*
* Release the memory region(s) being used by 'port'
*/
static void pl011_release_port(struct uart_port *port)
{
release_mem_region(port->mapbase, SZ_4K);
}
/*
* Request the memory region(s) being used by 'port'
*/
static int pl011_request_port(struct uart_port *port)
{
return request_mem_region(port->mapbase, SZ_4K, "uart-pl011")
!= NULL ? 0 : -EBUSY;
}
/*
* Configure/autoconfigure the port.
*/
static void pl011_config_port(struct uart_port *port, int flags)
{
if (flags & UART_CONFIG_TYPE) {
port->type = PORT_AMBA;
pl011_request_port(port);
}
}
/*
* verify the new serial_struct (for TIOCSSERIAL).
*/
static int pl011_verify_port(struct uart_port *port, struct serial_struct *ser)
{
int ret = 0;
if (ser->type != PORT_UNKNOWN && ser->type != PORT_AMBA)
ret = -EINVAL;
if (ser->irq < 0 || ser->irq >= nr_irqs)
ret = -EINVAL;
if (ser->baud_base < 9600)
ret = -EINVAL;
return ret;
}
static const struct uart_ops amba_pl011_pops = {
.tx_empty = pl011_tx_empty,
.set_mctrl = pl011_set_mctrl,
.get_mctrl = pl011_get_mctrl,
.stop_tx = pl011_stop_tx,
.start_tx = pl011_start_tx,
.stop_rx = pl011_stop_rx,
.enable_ms = pl011_enable_ms,
.break_ctl = pl011_break_ctl,
.startup = pl011_startup,
.shutdown = pl011_shutdown,
.flush_buffer = pl011_dma_flush_buffer,
.set_termios = pl011_set_termios,
.type = pl011_type,
.release_port = pl011_release_port,
.request_port = pl011_request_port,
.config_port = pl011_config_port,
.verify_port = pl011_verify_port,
#ifdef CONFIG_CONSOLE_POLL
.poll_init = pl011_hwinit,
.poll_get_char = pl011_get_poll_char,
.poll_put_char = pl011_put_poll_char,
#endif
};
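/*
 * The SBSA "generic UART" programming model is a reduced subset of the
 * PL011 with no modem-control signals, so the two callbacks below are
 * deliberate no-ops: set_mctrl ignores the requested state and get_mctrl
 * reports no lines asserted.
 */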
static void sbsa_uart_set_mctrl(struct uart_port *port, unsigned int mctrl)
{
}
static unsigned int sbsa_uart_get_mctrl(struct uart_port *port)
{
return 0;
}
static const struct uart_ops sbsa_uart_pops = {
.tx_empty = pl011_tx_empty,
.set_mctrl = sbsa_uart_set_mctrl,
.get_mctrl = sbsa_uart_get_mctrl,
.stop_tx = pl011_stop_tx,
.start_tx = pl011_start_tx,
.stop_rx = pl011_stop_rx,
.startup = sbsa_uart_startup,
.shutdown = sbsa_uart_shutdown,
.set_termios = sbsa_uart_set_termios,
.type = pl011_type,
.release_port = pl011_release_port,
.request_port = pl011_request_port,
.config_port = pl011_config_port,
.verify_port = pl011_verify_port,
#ifdef CONFIG_CONSOLE_POLL
.poll_init = pl011_hwinit,
.poll_get_char = pl011_get_poll_char,
.poll_put_char = pl011_put_poll_char,
#endif
};
static struct uart_amba_port *amba_ports[UART_NR];
#ifdef CONFIG_SERIAL_AMBA_PL011_CONSOLE
static void pl011_console_putchar(struct uart_port *port, int ch)
{
struct uart_amba_port *uap =
container_of(port, struct uart_amba_port, port);
while (pl011_read(uap, REG_FR) & UART01x_FR_TXFF)
cpu_relax();
pl011_write(ch, uap, REG_DR);
}
static void
pl011_console_write(struct console *co, const char *s, unsigned int count)
{
struct uart_amba_port *uap = amba_ports[co->index];
unsigned int old_cr = 0, new_cr;
unsigned long flags;
int locked = 1;
clk_enable(uap->clk);
local_irq_save(flags);
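/*
 * Only take the port lock when it is safe to do so: sysrq characters are
 * printed while the lock is already held by the interrupt handler, and
 * during an oops the lock may be held by the crashing context, so in those
 * cases skip the lock or merely try-lock to avoid deadlocking on console
 * output.
 */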
if (uap->port.sysrq)
locked = 0;
else if (oops_in_progress)
locked = spin_trylock(&uap->port.lock);
else
spin_lock(&uap->port.lock);
/*
* First save the CR then disable the interrupts
*/
if (!uap->vendor->always_enabled) {
old_cr = pl011_read(uap, REG_CR);
new_cr = old_cr & ~UART011_CR_CTSEN;
new_cr |= UART01x_CR_UARTEN | UART011_CR_TXE;
pl011_write(new_cr, uap, REG_CR);
}
uart_console_write(&uap->port, s, count, pl011_console_putchar);
/*
* Finally, wait for transmitter to become empty and restore the
* CR. Allow feature register bits to be inverted to work around
* errata.
*/
while ((pl011_read(uap, REG_FR) ^ uap->vendor->inv_fr)
& uap->vendor->fr_busy)
cpu_relax();
if (!uap->vendor->always_enabled)
pl011_write(old_cr, uap, REG_CR);
if (locked)
spin_unlock(&uap->port.lock);
local_irq_restore(flags);
clk_disable(uap->clk);
}
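/*
 * pl011_console_get_options() below recovers the configuration left by the
 * boot firmware. The PL011 baud divisor is IBRD + FBRD/64 with
 * baud = uartclk / (16 * divisor), which rearranges to the
 * uartclk * 4 / (64 * ibrd + fbrd) expression used here; ST variants
 * running with oversampling-by-8 enabled (OVSFACT) double the result.
 */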
static void pl011_console_get_options(struct uart_amba_port *uap, int *baud,
int *parity, int *bits)
{
if (pl011_read(uap, REG_CR) & UART01x_CR_UARTEN) {
unsigned int lcr_h, ibrd, fbrd;
lcr_h = pl011_read(uap, REG_LCRH_TX);
*parity = 'n';
if (lcr_h & UART01x_LCRH_PEN) {
if (lcr_h & UART01x_LCRH_EPS)
*parity = 'e';
else
*parity = 'o';
}
if ((lcr_h & 0x60) == UART01x_LCRH_WLEN_7)
*bits = 7;
else
*bits = 8;
ibrd = pl011_read(uap, REG_IBRD);
fbrd = pl011_read(uap, REG_FBRD);
*baud = uap->port.uartclk * 4 / (64 * ibrd + fbrd);
if (uap->vendor->oversampling) {
if (pl011_read(uap, REG_CR)
& ST_UART011_CR_OVSFACT)
*baud *= 2;
}
}
}
static int pl011_console_setup(struct console *co, char *options)
{
struct uart_amba_port *uap;
int baud = 38400;
int bits = 8;
int parity = 'n';
int flow = 'n';
int ret;
/*
* Check whether an invalid uart number has been specified, and
* if so, search for the first available port that does have
* console support.
*/
if (co->index >= UART_NR)
co->index = 0;
uap = amba_ports[co->index];
if (!uap)
return -ENODEV;
/* Allow pins to be muxed in and configured */
pinctrl_pm_select_default_state(uap->port.dev);
ret = clk_prepare(uap->clk);
if (ret)
return ret;
if (dev_get_platdata(uap->port.dev)) {
struct amba_pl011_data *plat;
plat = dev_get_platdata(uap->port.dev);
if (plat->init)
plat->init();
}
uap->port.uartclk = clk_get_rate(uap->clk);
if (uap->vendor->fixed_options) {
baud = uap->fixed_baud;
} else {
if (options)
uart_parse_options(options,
&baud, &parity, &bits, &flow);
else
pl011_console_get_options(uap, &baud, &parity, &bits);
}
return uart_set_options(&uap->port, co, baud, parity, bits, flow);
}
/**
* pl011_console_match - non-standard console matching
* @co: registering console
* @name: name from console command line
* @idx: index from console command line
* @options: ptr to option string from console command line
*
* Only attempts to match console command lines of the form:
* console=pl011,mmio|mmio32,<addr>[,<options>]
* console=pl011,0x<addr>[,<options>]
* This form is used to register an initial earlycon boot console and
* replace it with the amba_console at pl011 driver init.
*
* Performs console setup for a match (as required by interface)
* If no <options> are specified, then assume the h/w is already setup.
*
* Returns 0 if console matches; otherwise non-zero to use default matching
*/
static int pl011_console_match(struct console *co, char *name, int idx,
char *options)
{
unsigned char iotype;
resource_size_t addr;
int i;
/*
* Systems affected by the Qualcomm Technologies QDF2400 E44 erratum
* have a distinct console name, so make sure we check for that.
* The actual implementation of the erratum occurs in the probe
* function.
*/
if ((strcmp(name, "qdf2400_e44") != 0) && (strcmp(name, "pl011") != 0))
return -ENODEV;
if (uart_parse_earlycon(options, &iotype, &addr, &options))
return -ENODEV;
if (iotype != UPIO_MEM && iotype != UPIO_MEM32)
return -ENODEV;
/* try to match the port specified on the command line */
for (i = 0; i < ARRAY_SIZE(amba_ports); i++) {
struct uart_port *port;
if (!amba_ports[i])
continue;
port = &amba_ports[i]->port;
if (port->mapbase != addr)
continue;
co->index = i;
port->cons = co;
return pl011_console_setup(co, options);
}
return -ENODEV;
}
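/*
 * For example (address purely illustrative), a boot console handed over as
 * "console=pl011,mmio32,0x09000000,115200n8" is matched above by comparing
 * the decoded address against each registered port's mapbase and is then
 * passed on to pl011_console_setup().
 */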
static struct uart_driver amba_reg;
static struct console amba_console = {
.name = "ttyAMA",
.write = pl011_console_write,
.device = uart_console_device,
.setup = pl011_console_setup,
.match = pl011_console_match,
.flags = CON_PRINTBUFFER | CON_ANYTIME,
.index = -1,
.data = &amba_reg,
};
#define AMBA_CONSOLE (&amba_console)
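/*
 * Qualcomm QDF2400 erratum 44 work-around: unlike pl011_putc() below, each
 * character is only considered sent once the FIFO reports completely empty
 * (TXFE) rather than merely not busy, so every write fully drains before
 * the next one is queued.
 */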
static void qdf2400_e44_putc(struct uart_port *port, int c)
{
while (readl(port->membase + UART01x_FR) & UART01x_FR_TXFF)
cpu_relax();
writel(c, port->membase + UART01x_DR);
while (!(readl(port->membase + UART01x_FR) & UART011_FR_TXFE))
cpu_relax();
}
static void qdf2400_e44_early_write(struct console *con, const char *s, unsigned n)
{
struct earlycon_device *dev = con->data;
uart_console_write(&dev->port, s, n, qdf2400_e44_putc);
}
static void pl011_putc(struct uart_port *port, int c)
{
while (readl(port->membase + UART01x_FR) & UART01x_FR_TXFF)
cpu_relax();
if (port->iotype == UPIO_MEM32)
writel(c, port->membase + UART01x_DR);
else
writeb(c, port->membase + UART01x_DR);
while (readl(port->membase + UART01x_FR) & UART01x_FR_BUSY)
cpu_relax();
}
static void pl011_early_write(struct console *con, const char *s, unsigned n)
{
struct earlycon_device *dev = con->data;
uart_console_write(&dev->port, s, n, pl011_putc);
}
/*
* On non-ACPI systems, earlycon is enabled by specifying
* "earlycon=pl011,<address>" on the kernel command line.
*
* On ACPI ARM64 systems, an "early" console is enabled via the SPCR table,
* by specifying only "earlycon" on the command line. Because it requires
* SPCR, the console starts after ACPI is parsed, which is later than a
* traditional early console.
*
* To get the traditional early console that starts before ACPI is parsed,
* specify the full "earlycon=pl011,<address>" option.
*/
static int __init pl011_early_console_setup(struct earlycon_device *device,
const char *opt)
{
if (!device->port.membase)
return -ENODEV;
device->con->write = pl011_early_write;
return 0;
}
OF_EARLYCON_DECLARE(pl011, "arm,pl011", pl011_early_console_setup);
OF_EARLYCON_DECLARE(pl011, "arm,sbsa-uart", pl011_early_console_setup);
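/*
 * Illustrative command lines only (the MMIO address is board specific):
 *
 *   earlycon=pl011,0x09000000
 *   earlycon=pl011,mmio32,0x09000000
 *
 * or just "earlycon" when the address can be discovered from the device
 * tree stdout-path or, as described above, from the ACPI SPCR table.
 */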
/*
* On Qualcomm Datacenter Technologies QDF2400 SOCs affected by
* Erratum 44, traditional earlycon can be enabled by specifying
* "earlycon=qdf2400_e44,<address>". Any options are ignored.
*
* Alternatively, you can just specify "earlycon", and the early console
* will be enabled with the information from the SPCR table. In this
* case, the SPCR code will detect the need for the E44 work-around,
* and set the console name to "qdf2400_e44".
*/
static int __init
qdf2400_e44_early_console_setup(struct earlycon_device *device,
const char *opt)
{
if (!device->port.membase)
return -ENODEV;
device->con->write = qdf2400_e44_early_write;
return 0;
}
EARLYCON_DECLARE(qdf2400_e44, qdf2400_e44_early_console_setup);
#else
#define AMBA_CONSOLE NULL
#endif
static struct uart_driver amba_reg = {
.owner = THIS_MODULE,
.driver_name = "ttyAMA",
.dev_name = "ttyAMA",
.major = SERIAL_AMBA_MAJOR,
.minor = SERIAL_AMBA_MINOR,
.nr = UART_NR,
.cons = AMBA_CONSOLE,
};
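/*
 * Port numbers can be pinned down from the device tree with "serialN"
 * aliases, e.g. (node names illustrative only):
 *
 *   aliases {
 *           serial0 = &uart0;
 *           serial1 = &uart1;
 *   };
 *
 * pl011_probe_dt_alias() prefers such an alias over the first free slot and
 * falls back to the probed index when the alias is missing, out of range or
 * already taken.
 */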
static int pl011_probe_dt_alias(int index, struct device *dev)
{
struct device_node *np;
static bool seen_dev_with_alias = false;
static bool seen_dev_without_alias = false;
int ret = index;
if (!IS_ENABLED(CONFIG_OF))
return ret;
np = dev->of_node;
if (!np)
return ret;
ret = of_alias_get_id(np, "serial");
if (ret < 0) {
seen_dev_without_alias = true;
ret = index;
} else {
seen_dev_with_alias = true;
if (ret >= ARRAY_SIZE(amba_ports) || amba_ports[ret] != NULL) {
dev_warn(dev, "requested serial port %d not available.\n", ret);
ret = index;
}
}
if (seen_dev_with_alias && seen_dev_without_alias)
dev_warn(dev, "aliased and non-aliased serial devices found in device tree. Serial port enumeration may be unpredictable.\n");
return ret;
}
/* also unregisters the driver if no more ports are left */
static void pl011_unregister_port(struct uart_amba_port *uap)
{
int i;
bool busy = false;
for (i = 0; i < ARRAY_SIZE(amba_ports); i++) {
if (amba_ports[i] == uap)
amba_ports[i] = NULL;
else if (amba_ports[i])
busy = true;
}
pl011_dma_remove(uap);
if (!busy)
uart_unregister_driver(&amba_reg);
}
static int pl011_find_free_port(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(amba_ports); i++)
if (amba_ports[i] == NULL)
return i;
return -EBUSY;
}
static int pl011_setup_port(struct device *dev, struct uart_amba_port *uap,
struct resource *mmiobase, int index)
{
void __iomem *base;
base = devm_ioremap_resource(dev, mmiobase);
if (IS_ERR(base))
return PTR_ERR(base);
index = pl011_probe_dt_alias(index, dev);
uap->old_cr = 0;
uap->port.dev = dev;
uap->port.mapbase = mmiobase->start;
uap->port.membase = base;
uap->port.fifosize = uap->fifosize;
uap->port.flags = UPF_BOOT_AUTOCONF;
uap->port.line = index;
spin_lock_init(&uap->port.lock);
amba_ports[index] = uap;
return 0;
}
static int pl011_register_port(struct uart_amba_port *uap)
{
int ret, i;
/* Ensure interrupts from this UART are masked and cleared */
pl011_write(0, uap, REG_IMSC);
pl011_write(0xffff, uap, REG_ICR);
if (!amba_reg.state) {
ret = uart_register_driver(&amba_reg);
if (ret < 0) {
dev_err(uap->port.dev,
"Failed to register AMBA-PL011 driver\n");
for (i = 0; i < ARRAY_SIZE(amba_ports); i++)
if (amba_ports[i] == uap)
amba_ports[i] = NULL;
return ret;
}
}
ret = uart_add_one_port(&amba_reg, &uap->port);
if (ret)
pl011_unregister_port(uap);
return ret;
}
static int pl011_probe(struct amba_device *dev, const struct amba_id *id)
{
struct uart_amba_port *uap;
struct vendor_data *vendor = id->data;
int portnr, ret;
portnr = pl011_find_free_port();
if (portnr < 0)
return portnr;
uap = devm_kzalloc(&dev->dev, sizeof(struct uart_amba_port),
GFP_KERNEL);
if (!uap)
return -ENOMEM;
uap->clk = devm_clk_get(&dev->dev, NULL);
if (IS_ERR(uap->clk))
return PTR_ERR(uap->clk);
uap->reg_offset = vendor->reg_offset;
uap->vendor = vendor;
uap->fifosize = vendor->get_fifosize(dev);
uap->port.iotype = vendor->access_32b ? UPIO_MEM32 : UPIO_MEM;
uap->port.irq = dev->irq[0];
uap->port.ops = &amba_pl011_pops;
snprintf(uap->type, sizeof(uap->type), "PL011 rev%u", amba_rev(dev));
ret = pl011_setup_port(&dev->dev, uap, &dev->res, portnr);
if (ret)
return ret;
amba_set_drvdata(dev, uap);
return pl011_register_port(uap);
}
static int pl011_remove(struct amba_device *dev)
{
struct uart_amba_port *uap = amba_get_drvdata(dev);
uart_remove_one_port(&amba_reg, &uap->port);
pl011_unregister_port(uap);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int pl011_suspend(struct device *dev)
{
struct uart_amba_port *uap = dev_get_drvdata(dev);
if (!uap)
return -EINVAL;
return uart_suspend_port(&amba_reg, &uap->port);
}
static int pl011_resume(struct device *dev)
{
struct uart_amba_port *uap = dev_get_drvdata(dev);
if (!uap)
return -EINVAL;
return uart_resume_port(&amba_reg, &uap->port);
}
#endif
static SIMPLE_DEV_PM_OPS(pl011_dev_pm_ops, pl011_suspend, pl011_resume);
static int sbsa_uart_probe(struct platform_device *pdev)
{
struct uart_amba_port *uap;
struct resource *r;
int portnr, ret;
int baudrate;
/*
* Check the mandatory baud rate parameter in the DT node early
* so that we can easily exit with the error.
*/
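/*
 * e.g. (illustrative fragment of such a node):
 *   compatible = "arm,sbsa-uart";
 *   current-speed = <115200>;
 */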
if (pdev->dev.of_node) {
struct device_node *np = pdev->dev.of_node;
ret = of_property_read_u32(np, "current-speed", &baudrate);
if (ret)
return ret;
} else {
baudrate = 115200;
}
portnr = pl011_find_free_port();
if (portnr < 0)
return portnr;
uap = devm_kzalloc(&pdev->dev, sizeof(struct uart_amba_port),
GFP_KERNEL);
if (!uap)
return -ENOMEM;
ret = platform_get_irq(pdev, 0);
if (ret < 0)
return ret;
uap->port.irq = ret;
#ifdef CONFIG_ACPI_SPCR_TABLE
if (qdf2400_e44_present) {
dev_info(&pdev->dev, "working around QDF2400 SoC erratum 44\n");
uap->vendor = &vendor_qdt_qdf2400_e44;
} else
#endif
uap->vendor = &vendor_sbsa;
uap->reg_offset = uap->vendor->reg_offset;
uap->fifosize = 32;
uap->port.iotype = uap->vendor->access_32b ? UPIO_MEM32 : UPIO_MEM;
uap->port.ops = &sbsa_uart_pops;
uap->fixed_baud = baudrate;
snprintf(uap->type, sizeof(uap->type), "SBSA");
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ret = pl011_setup_port(&pdev->dev, uap, r, portnr);
if (ret)
return ret;
platform_set_drvdata(pdev, uap);
return pl011_register_port(uap);
}
static int sbsa_uart_remove(struct platform_device *pdev)
{
struct uart_amba_port *uap = platform_get_drvdata(pdev);
uart_remove_one_port(&amba_reg, &uap->port);
pl011_unregister_port(uap);
return 0;
}
static const struct of_device_id sbsa_uart_of_match[] = {
{ .compatible = "arm,sbsa-uart", },
{},
};
MODULE_DEVICE_TABLE(of, sbsa_uart_of_match);
static const struct acpi_device_id sbsa_uart_acpi_match[] = {
{ "ARMH0011", 0 },
{},
};
MODULE_DEVICE_TABLE(acpi, sbsa_uart_acpi_match);
static struct platform_driver arm_sbsa_uart_platform_driver = {
.probe = sbsa_uart_probe,
.remove = sbsa_uart_remove,
.driver = {
.name = "sbsa-uart",
.of_match_table = of_match_ptr(sbsa_uart_of_match),
.acpi_match_table = ACPI_PTR(sbsa_uart_acpi_match),
.suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_AMBA_PL011),
},
};
static const struct amba_id pl011_ids[] = {
{
.id = 0x00041011,
.mask = 0x000fffff,
.data = &vendor_arm,
},
{
.id = 0x00380802,
.mask = 0x00ffffff,
.data = &vendor_st,
},
{
.id = AMBA_LINUX_ID(0x00, 0x1, 0xffe),
.mask = 0x00ffffff,
.data = &vendor_zte,
},
{ 0, 0 },
};
MODULE_DEVICE_TABLE(amba, pl011_ids);
static struct amba_driver pl011_driver = {
.drv = {
.name = "uart-pl011",
.pm = &pl011_dev_pm_ops,
.suppress_bind_attrs = IS_BUILTIN(CONFIG_SERIAL_AMBA_PL011),
},
.id_table = pl011_ids,
.probe = pl011_probe,
.remove = pl011_remove,
};
static int __init pl011_init(void)
{
printk(KERN_INFO "Serial: AMBA PL011 UART driver\n");
if (platform_driver_register(&arm_sbsa_uart_platform_driver))
pr_warn("could not register SBSA UART platform driver\n");
return amba_driver_register(&pl011_driver);
}
static void __exit pl011_exit(void)
{
platform_driver_unregister(&arm_sbsa_uart_platform_driver);
amba_driver_unregister(&pl011_driver);
}
/*
* While this can be a module, if builtin it's most likely the console,
* so let's leave module_exit but move module_init to an earlier place.
*/
arch_initcall(pl011_init);
module_exit(pl011_exit);
MODULE_AUTHOR("ARM Ltd/Deep Blue Solutions Ltd");
MODULE_DESCRIPTION("ARM AMBA serial port driver");
MODULE_LICENSE("GPL");