// SPDX-License-Identifier: GPL-2.0+
/*
 * PCI Express PCI Hot Plug Driver
 *
 * Copyright (C) 1995,2001 Compaq Computer Corporation
 * Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com)
 * Copyright (C) 2001 IBM Corp.
 * Copyright (C) 2003-2004 Intel Corporation
 *
 * All rights reserved.
 *
 * Send feedback to <greg@kroah.com>,<kristen.c.accardi@intel.com>
 */
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include "../pci.h"
#include "pciehp.h"
static inline struct pci_dev *ctrl_dev(struct controller *ctrl)
{
	return ctrl->pcie->port;
}

static irqreturn_t pciehp_isr(int irq, void *dev_id);
static irqreturn_t pciehp_ist(int irq, void *dev_id);
static int pciehp_poll(void *data);
static inline int pciehp_request_irq(struct controller *ctrl)
{
	int retval, irq = ctrl->pcie->irq;

	if (pciehp_poll_mode) {
		ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl,
						"pciehp_poll-%s",
						slot_name(ctrl));
		return PTR_ERR_OR_ZERO(ctrl->poll_thread);
	}

	/* Installs the interrupt handler */
	retval = request_threaded_irq(irq, pciehp_isr, pciehp_ist,
				      IRQF_SHARED, MY_NAME, ctrl);
	if (retval)
		ctrl_err(ctrl, "Cannot get irq %d for the hotplug controller\n",
			 irq);
	return retval;
}

static inline void pciehp_free_irq(struct controller *ctrl)
{
	if (pciehp_poll_mode)
		kthread_stop(ctrl->poll_thread);
	else
		free_irq(ctrl->pcie->irq, ctrl);
}

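/*
 * Poll the Slot Status register until the Command Completed bit is set,
 * clearing it and returning 1.  Returns 0 if @timeout milliseconds elapse
 * first or if the device stops responding to config reads.
 */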
static int pcie_poll_cmd(struct controller *ctrl, int timeout)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 slot_status;

	while (true) {
		pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
		if (slot_status == (u16) ~0) {
			ctrl_info(ctrl, "%s: no response from device\n",
				  __func__);
			return 0;
		}

		if (slot_status & PCI_EXP_SLTSTA_CC) {
			pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
						   PCI_EXP_SLTSTA_CC);
			return 1;
		}
		if (timeout < 0)
			break;
		msleep(10);
		timeout -= 10;
	}
	return 0;	/* timeout */
}

static void pcie_wait_cmd(struct controller *ctrl)
{
	unsigned int msecs = pciehp_poll_mode ? 2500 : 1000;
	unsigned long duration = msecs_to_jiffies(msecs);
	unsigned long cmd_timeout = ctrl->cmd_started + duration;
	unsigned long now, timeout;
	int rc;

	/*
	 * If the controller does not generate notifications for command
	 * completions, we never need to wait between writes.
	 */
	if (NO_CMD_CMPL(ctrl))
		return;

	if (!ctrl->cmd_busy)
		return;

	/*
	 * Even if the command has already timed out, we want to call
	 * pcie_poll_cmd() so it can clear PCI_EXP_SLTSTA_CC.
	 */
	now = jiffies;
	if (time_before_eq(cmd_timeout, now))
		timeout = 1;
	else
		timeout = cmd_timeout - now;

	if (ctrl->slot_ctrl & PCI_EXP_SLTCTL_HPIE &&
	    ctrl->slot_ctrl & PCI_EXP_SLTCTL_CCIE)
		rc = wait_event_timeout(ctrl->queue, !ctrl->cmd_busy, timeout);
	else
		rc = pcie_poll_cmd(ctrl, jiffies_to_msecs(timeout));

	if (!rc)
		ctrl_info(ctrl, "Timeout on hotplug command %#06x (issued %u msec ago)\n",
			  ctrl->slot_ctrl,
			  jiffies_to_msecs(jiffies - ctrl->cmd_started));
}

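/*
 * Slot Control "Control" bits (power, power indicator, attention indicator,
 * electromechanical interlock) that must change for controllers with the
 * Command Completed erratum to report completion; see the comment in
 * pcie_do_write_cmd().
 */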
#define CC_ERRATUM_MASK		(PCI_EXP_SLTCTL_PCC |	\
				 PCI_EXP_SLTCTL_PIC |	\
				 PCI_EXP_SLTCTL_AIC |	\
				 PCI_EXP_SLTCTL_EIC)

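/*
 * Read-modify-write the Slot Control register: update the bits selected by
 * @mask to the values in @cmd.  Always waits for any previous command to
 * complete first; optionally also waits for this command to complete.
 */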
static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
			      u16 mask, bool wait)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 slot_ctrl_orig, slot_ctrl;

	mutex_lock(&ctrl->ctrl_lock);
	/*
	 * Always wait for any previous command that might still be in progress
	 */
	pcie_wait_cmd(ctrl);

	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
	if (slot_ctrl == (u16) ~0) {
		ctrl_info(ctrl, "%s: no response from device\n", __func__);
		goto out;
	}

	slot_ctrl_orig = slot_ctrl;
	slot_ctrl &= ~mask;
	slot_ctrl |= (cmd & mask);
	ctrl->cmd_busy = 1;
	smp_mb();
	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
	ctrl->cmd_started = jiffies;
	ctrl->slot_ctrl = slot_ctrl;

	/*
	 * Controllers with the Intel CF118 and similar errata advertise
	 * Command Completed support, but they only set Command Completed
	 * if we change the "Control" bits for power, power indicator,
	 * attention indicator, or interlock. If we only change the
	 * "Enable" bits, they never set the Command Completed bit.
	 */
	if (pdev->broken_cmd_compl &&
	    (slot_ctrl_orig & CC_ERRATUM_MASK) == (slot_ctrl & CC_ERRATUM_MASK))
		ctrl->cmd_busy = 0;

	/*
	 * Optionally wait for the hardware to be ready for a new command,
	 * indicating completion of the above issued command.
	 */
	if (wait)
		pcie_wait_cmd(ctrl);

out:
	mutex_unlock(&ctrl->ctrl_lock);
}

/**
 * pcie_write_cmd - Issue controller command
 * @ctrl: controller to which the command is issued
 * @cmd:  command value written to slot control register
 * @mask: bitmask of slot control register to be modified
 */
static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
{
	pcie_do_write_cmd(ctrl, cmd, mask, true);
}

/* Same as above without waiting for the hardware to latch */
static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask)
{
	pcie_do_write_cmd(ctrl, cmd, mask, false);
}

bool pciehp_check_link_active(struct controller *ctrl)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 lnk_status;
	bool ret;

	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
	ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);

	if (ret)
		ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);

	return ret;
}

static void pcie_wait_link_active(struct controller *ctrl)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);

	pcie_wait_for_link(pdev, true);
}

static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
{
	u32 l;
	int count = 0;
	int delay = 1000, step = 20;
	bool found = false;

	do {
		found = pci_bus_read_dev_vendor_id(bus, devfn, &l, 0);
		count++;

		if (found)
			break;

		msleep(step);
		delay -= step;
	} while (delay > 0);

	if (count > 1 && pciehp_debug)
		printk(KERN_DEBUG "pci %04x:%02x:%02x.%d id reading try %d times with interval %d ms to get %08x\n",
			pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
			PCI_FUNC(devfn), count, step, l);

	return found;
}

int pciehp_check_link_status(struct controller *ctrl)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	bool found;
	u16 lnk_status;

	/*
	 * A hot-plug capable downstream port must support Data Link Layer
	 * Link Active Reporting, but old controllers might not implement
	 * it.  In that case, wait for 1000 ms.
	 */
	if (ctrl->link_active_reporting)
		pcie_wait_link_active(ctrl);
	else
		msleep(1000);

	/* Wait 100 ms before reading config space, then retry for up to 1 s */
	msleep(100);
	found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
				  PCI_DEVFN(0, 0));

	/* ignore link or presence changes up to this point */
	if (found)
		atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC),
			   &ctrl->pending_events);

	pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
	ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status);
	if ((lnk_status & PCI_EXP_LNKSTA_LT) ||
	    !(lnk_status & PCI_EXP_LNKSTA_NLW)) {
		ctrl_err(ctrl, "link training error: status %#06x\n",
			 lnk_status);
		return -1;
	}

	pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);

	if (!found)
		return -1;

	return 0;
}

static int __pciehp_link_set(struct controller *ctrl, bool enable)
{
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 lnk_ctrl;

	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnk_ctrl);

	if (enable)
		lnk_ctrl &= ~PCI_EXP_LNKCTL_LD;
	else
		lnk_ctrl |= PCI_EXP_LNKCTL_LD;

	pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnk_ctrl);
	ctrl_dbg(ctrl, "%s: lnk_ctrl = %x\n", __func__, lnk_ctrl);
	return 0;
}

static int pciehp_link_enable(struct controller *ctrl)
{
	return __pciehp_link_set(ctrl, true);
}

int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot,
				    u8 *status)
{
	struct controller *ctrl = to_ctrl(hotplug_slot);
	struct pci_dev *pdev = ctrl_dev(ctrl);
	u16 slot_ctrl;

	pci_config_pm_runtime_get(pdev);
	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
	pci_config_pm_runtime_put(pdev);
	*status = (slot_ctrl & (PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC)) >> 6;
	return 0;
}

int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status)
{
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl;
pci_config_pm_runtime_get(pdev);
pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
pci_config_pm_runtime_put(pdev);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x, value read %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl);
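/* Attention Indicator Control encoding: 01b = on, 10b = blink, 11b = off (00b reserved) */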
switch (slot_ctrl & PCI_EXP_SLTCTL_AIC) {
case PCI_EXP_SLTCTL_ATTN_IND_ON:
*status = 1; /* On */
break;
case PCI_EXP_SLTCTL_ATTN_IND_BLINK:
*status = 2; /* Blink */
break;
case PCI_EXP_SLTCTL_ATTN_IND_OFF:
*status = 0; /* Off */
break;
default:
*status = 0xFF;
break;
}
return 0;
}
void pciehp_get_power_status(struct controller *ctrl, u8 *status)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl;
pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x value read %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_ctrl);
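/* Power Controller Control: 0b = power on, 1b = power off */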
switch (slot_ctrl & PCI_EXP_SLTCTL_PCC) {
case PCI_EXP_SLTCTL_PWR_ON:
*status = 1; /* On */
break;
case PCI_EXP_SLTCTL_PWR_OFF:
*status = 0; /* Off */
break;
default:
*status = 0xFF;
break;
}
}
void pciehp_get_latch_status(struct controller *ctrl, u8 *status)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
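/* MRL Sensor State: 1 = latch open, 0 = latch closed */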
*status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS);
}
bool pciehp_card_present(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
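/* Presence Detect State: 1 = card present in slot */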
return slot_status & PCI_EXP_SLTSTA_PDS;
}
/**
* pciehp_card_present_or_link_active() - whether given slot is occupied
* @ctrl: PCIe hotplug controller
*
* Unlike pciehp_card_present(), which determines presence solely from the
* Presence Detect State bit, this helper also returns true if the Link Active
* bit is set. This is a concession to broken hotplug ports which hardwire
* Presence Detect State to zero, such as Wilocity's [1ae9:0200].
*/
bool pciehp_card_present_or_link_active(struct controller *ctrl)
{
return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl);
}
int pciehp_query_power_fault(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
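/* Power Fault Detected is sticky: it stays set until software writes 1 to clear it */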
return !!(slot_status & PCI_EXP_SLTSTA_PFD);
}
int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
u8 status)
{
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
pci_config_pm_runtime_get(pdev);
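/* Shift the raw value back into the Attention and Power Indicator Control fields */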
pcie_write_cmd_nowait(ctrl, status << 6,
PCI_EXP_SLTCTL_AIC | PCI_EXP_SLTCTL_PIC);
pci_config_pm_runtime_put(pdev);
return 0;
}
void pciehp_set_attention_status(struct controller *ctrl, u8 value)
{
u16 slot_cmd;
if (!ATTN_LED(ctrl))
return;
switch (value) {
case 0: /* turn off */
slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_OFF;
break;
case 1: /* turn on */
slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_ON;
break;
case 2: /* turn blink */
slot_cmd = PCI_EXP_SLTCTL_ATTN_IND_BLINK;
break;
default:
return;
}
pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
}
void pciehp_green_led_on(struct controller *ctrl)
{
if (!PWR_LED(ctrl))
return;
pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
PCI_EXP_SLTCTL_PIC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_PWR_IND_ON);
}
void pciehp_green_led_off(struct controller *ctrl)
{
if (!PWR_LED(ctrl))
return;
pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
PCI_EXP_SLTCTL_PIC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_PWR_IND_OFF);
}
void pciehp_green_led_blink(struct controller *ctrl)
{
if (!PWR_LED(ctrl))
return;
pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
PCI_EXP_SLTCTL_PIC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_PWR_IND_BLINK);
}
int pciehp_power_on_slot(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_status;
int retval;
/* Clear power-fault bit from previous power failures */
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
if (slot_status & PCI_EXP_SLTSTA_PFD)
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
PCI_EXP_SLTSTA_PFD);
ctrl->power_fault_detected = 0;
pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_ON, PCI_EXP_SLTCTL_PCC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_PWR_ON);
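/* Bring up the link to the freshly powered slot */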
retval = pciehp_link_enable(ctrl);
if (retval)
ctrl_err(ctrl, "%s: Can not enable the link!\n", __func__);
return retval;
}
void pciehp_power_off_slot(struct controller *ctrl)
{
pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
PCI_EXP_SLTCTL_PWR_OFF);
}
static irqreturn_t pciehp_isr(int irq, void *dev_id)
{
struct controller *ctrl = (struct controller *)dev_id;
struct pci_dev *pdev = ctrl_dev(ctrl);
struct device *parent = pdev->dev.parent;
u16 status, events;
/*
* Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4).
*/
if (pdev->current_state == PCI_D3cold)
return IRQ_NONE;
/*
* Keep the port accessible by holding a runtime PM ref on its parent.
* Defer resume of the parent to the IRQ thread if it's suspended.
* Mask the interrupt until then.
*/
if (parent) {
pm_runtime_get_noresume(parent);
if (!pm_runtime_active(parent)) {
pm_runtime_put(parent);
disable_irq_nosync(irq);
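/* Flag the IRQ thread to rerun the ISR once it has resumed the parent */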
atomic_or(RERUN_ISR, &ctrl->pending_events);
return IRQ_WAKE_THREAD;
}
}
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status);
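/* Reads return all ones if the device is no longer accessible (e.g. hot removed) */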
if (status == (u16) ~0) {
ctrl_info(ctrl, "%s: no response from device\n", __func__);
if (parent)
pm_runtime_put(parent);
return IRQ_NONE;
}
/*
* Slot Status contains plain status bits as well as event
* notification bits; right now we only want the event bits.
*/
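/*
 * Attention Button Pressed, Power Fault Detected, Presence Detect Changed,
 * Command Completed and Data Link Layer State Changed.
 */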
events = status & (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC |
PCI_EXP_SLTSTA_DLLSC);
/*
* If we've already reported a power fault, don't report it again
* until we've done something to handle it.
*/
if (ctrl->power_fault_detected)
events &= ~PCI_EXP_SLTSTA_PFD;
if (!events) {
if (parent)
pm_runtime_put(parent);
return IRQ_NONE;
}
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", events);
if (parent)
pm_runtime_put(parent);
/*
* Command Completed notifications are not deferred to the
* IRQ thread because it may be waiting for their arrival.
*/
if (events & PCI_EXP_SLTSTA_CC) {
ctrl->cmd_busy = 0;
smp_mb();
wake_up(&ctrl->queue);
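/* If Command Completed was the only event, there is nothing more to do */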
if (events == PCI_EXP_SLTSTA_CC)
return IRQ_HANDLED;
events &= ~PCI_EXP_SLTSTA_CC;
}
if (pdev->ignore_hotplug) {
ctrl_dbg(ctrl, "ignoring hotplug event %#06x\n", events);
return IRQ_HANDLED;
}
/* Save pending events for consumption by IRQ thread. */
atomic_or(events, &ctrl->pending_events);
return IRQ_WAKE_THREAD;
}
static irqreturn_t pciehp_ist(int irq, void *dev_id)
{
struct controller *ctrl = (struct controller *)dev_id;
struct pci_dev *pdev = ctrl_dev(ctrl);
irqreturn_t ret;
u32 events;
pci_config_pm_runtime_get(pdev);
/* rerun pciehp_isr() if the port was inaccessible on interrupt */
if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) {
ret = pciehp_isr(irq, dev_id);
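/* The IRQ was masked while the port was inaccessible; unmask it now that the port has been runtime resumed */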
enable_irq(irq);
if (ret != IRQ_WAKE_THREAD) {
pci_config_pm_runtime_put(pdev);
return ret;
}
}
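/* Wait for a possibly in-flight pciehp_isr() to finish updating pending_events before consuming them */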
synchronize_hardirq(irq);
events = atomic_xchg(&ctrl->pending_events, 0);
if (!events) {
pci_config_pm_runtime_put(pdev);
return IRQ_NONE;
}
/* Check Attention Button Pressed */
if (events & PCI_EXP_SLTSTA_ABP) {
ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
slot_name(ctrl));
pciehp_handle_button_press(ctrl);
}
/* Check Power Fault Detected */
if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
ctrl->power_fault_detected = 1;
ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
pciehp_set_attention_status(ctrl, 1);
pciehp_green_led_off(ctrl);
}
/*
* Disable requests have higher priority than Presence Detect Changed
* or Data Link Layer State Changed events.
*/
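/*
 * The read lock serializes event handling with a secondary bus reset,
 * which takes reset_lock for writing and may momentarily clear the
 * Presence Detect State and Data Link Layer Link Active bits.
 */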
down_read(&ctrl->reset_lock);
if (events & DISABLE_SLOT)
pciehp_handle_disable_request(ctrl);
else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
pciehp_handle_presence_or_link_change(ctrl, events);
up_read(&ctrl->reset_lock);
pci_config_pm_runtime_put(pdev);
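/* Wake up callers waiting in process context for their enable/disable request to be handled */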
wake_up(&ctrl->requester);
return IRQ_HANDLED;
}
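/*
 * In poll mode no interrupt line is used; pciehp_isr() and pciehp_ist()
 * are invoked by hand with IRQ_NOTCONNECTED instead.
 */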
static int pciehp_poll(void *data)
{
struct controller *ctrl = data;
schedule_timeout_idle(10 * HZ); /* start with 10 sec delay */
while (!kthread_should_stop()) {
/* poll for interrupt events or user requests */
while (pciehp_isr(IRQ_NOTCONNECTED, ctrl) == IRQ_WAKE_THREAD ||
atomic_read(&ctrl->pending_events))
pciehp_ist(IRQ_NOTCONNECTED, ctrl);
if (pciehp_poll_time <= 0 || pciehp_poll_time > 60)
pciehp_poll_time = 2; /* clamp to sane value */
schedule_timeout_idle(pciehp_poll_time * HZ);
}
return 0;
}
static void pcie_enable_notification(struct controller *ctrl)
{
u16 cmd, mask;
/*
* TBD: Power fault detected software notification support.
*
* Power Fault Detected software notification is not enabled for now
* because it caused a power fault interrupt storm on some machines:
* the Power Fault Detected bit in the Slot Status register was set
* again immediately after being cleared in the interrupt service
* routine, so the next power fault interrupt was signaled right away.
*/
/*
* Always enable link events: thus link-up and link-down shall
* always be treated as hotplug and unplug respectively. Enable
* presence detect only if Attention Button is not present.
*/
cmd = PCI_EXP_SLTCTL_DLLSCE;
if (ATTN_BUTTN(ctrl))
cmd |= PCI_EXP_SLTCTL_ABPE;
else
cmd |= PCI_EXP_SLTCTL_PDCE;
if (!pciehp_poll_mode)
cmd |= PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE;
mask = (PCI_EXP_SLTCTL_PDCE | PCI_EXP_SLTCTL_ABPE |
PCI_EXP_SLTCTL_PFDE |
PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
PCI_EXP_SLTCTL_DLLSCE);
pcie_write_cmd_nowait(ctrl, cmd, mask);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
}
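/*
 * Mask all hotplug event notifications in the Slot Control register.
 * Uses the waiting pcie_write_cmd() variant so the command has completed
 * before callers tear down data structures.
 */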
static void pcie_disable_notification(struct controller *ctrl)
{
u16 mask;
mask = (PCI_EXP_SLTCTL_PDCE | PCI_EXP_SLTCTL_ABPE |
PCI_EXP_SLTCTL_MRLSCE | PCI_EXP_SLTCTL_PFDE |
PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
PCI_EXP_SLTCTL_DLLSCE);
pcie_write_cmd(ctrl, 0, mask);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
}
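/*
 * Clear Presence Detect Changed and Data Link Layer State Changed events
 * in the Slot Status register, e.g. to discard spurious events generated
 * while notification was disabled.
 */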
void pcie_clear_hotplug_events(struct controller *ctrl)
{
pcie_capability_write_word(ctrl_dev(ctrl), PCI_EXP_SLTSTA,
PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
}
/*
* pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
* bus reset of the bridge, but at the same time we want to ensure that it is
* not seen as a hot-unplug, followed by the hot-plug of the device. Thus,
* disable link state notification and presence detection change notification
* momentarily, if we see that they could interfere. Also, clear any spurious
* events afterwards.
*/
int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
{
struct controller *ctrl = to_ctrl(hotplug_slot);
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 stat_mask = 0, ctrl_mask = 0;
int rc;
if (probe)
return 0;
down_write(&ctrl->reset_lock);
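/*
 * Presence Detect Changed notification is only enabled when no Attention
 * Button is present (see pcie_enable_notification()), so it only needs to
 * be masked in that case.
 */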
if (!ATTN_BUTTN(ctrl)) {
ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
stat_mask |= PCI_EXP_SLTSTA_PDC;
}
ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;
stat_mask |= PCI_EXP_SLTSTA_DLLSC;
pcie_write_cmd(ctrl, 0, ctrl_mask);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port);
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
up_write(&ctrl->reset_lock);
return rc;
}
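/*
 * Request the hotplug interrupt and enable event notification.
 * Returns 0 on success, -1 if the interrupt could not be requested.
 */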
int pcie_init_notification(struct controller *ctrl)
{
if (pciehp_request_irq(ctrl))
return -1;
pcie_enable_notification(ctrl);
ctrl->notification_enabled = 1;
return 0;
}
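/*
 * Disable event notification and free the interrupt if notification had
 * been enabled.
 */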
void pcie_shutdown_notification(struct controller *ctrl)
{
if (ctrl->notification_enabled) {
pcie_disable_notification(ctrl);
pciehp_free_irq(ctrl);
ctrl->notification_enabled = 0;
}
}
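/* Dump Slot Capabilities, Slot Status and Slot Control when pciehp_debug is set. */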
static inline void dbg_ctrl(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl->pcie->port;
u16 reg16;
if (!pciehp_debug)
return;
ctrl_info(ctrl, "Slot Capabilities : 0x%08x\n", ctrl->slot_cap);
pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &reg16);
ctrl_info(ctrl, "Slot Status : 0x%04x\n", reg16);
pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &reg16);
ctrl_info(ctrl, "Slot Control : 0x%04x\n", reg16);
}
#define FLAG(x, y) (((x) & (y)) ? '+' : '-')
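/*
 * Allocate and initialize the controller for a hotplug port: read the Slot
 * Capabilities, apply quirks, derive the initial slot state, clear stale
 * events and, if supported, power off an empty slot.  Returns NULL on
 * allocation failure.
 */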
struct controller *pcie_init(struct pcie_device *dev)
{
struct controller *ctrl;
u32 slot_cap, link_cap;
u8 poweron;
struct pci_dev *pdev = dev->port;
struct pci_bus *subordinate = pdev->subordinate;
ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return NULL;
ctrl->pcie = dev;
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
if (pdev->hotplug_user_indicators)
slot_cap &= ~(PCI_EXP_SLTCAP_AIP | PCI_EXP_SLTCAP_PIP);
/*
* Some Thunderbolt controllers falsely claim to support Command Completed
* events, so assume NoCompl+ for all Thunderbolt ports.
*/
if (pdev->is_thunderbolt)
slot_cap |= PCI_EXP_SLTCAP_NCCS;
ctrl->slot_cap = slot_cap;
mutex_init(&ctrl->ctrl_lock);
mutex_init(&ctrl->state_lock);
init_rwsem(&ctrl->reset_lock);
init_waitqueue_head(&ctrl->requester);
init_waitqueue_head(&ctrl->queue);
INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work);
dbg_ctrl(ctrl);
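/* Initial slot state: ON if devices below the port were already enumerated, otherwise OFF. */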
down_read(&pci_bus_sem);
ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
up_read(&pci_bus_sem);
/* Check if Data Link Layer Link Active Reporting is implemented */
pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
if (link_cap & PCI_EXP_LNKCAP_DLLLARC)
ctrl->link_active_reporting = 1;
/* Clear all remaining event bits in Slot Status register. */
pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC);
ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n",
(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
FLAG(slot_cap, PCI_EXP_SLTCAP_MRLSP),
FLAG(slot_cap, PCI_EXP_SLTCAP_AIP),
FLAG(slot_cap, PCI_EXP_SLTCAP_PIP),
FLAG(slot_cap, PCI_EXP_SLTCAP_HPC),
FLAG(slot_cap, PCI_EXP_SLTCAP_HPS),
FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
/*
* If the slot is empty but its power status is on, turn power off.  The IRQ
* isn't requested yet, so avoid triggering a notification with this command.
*/
if (POWER_CTRL(ctrl)) {
pciehp_get_power_status(ctrl, &poweron);
if (!pciehp_card_present_or_link_active(ctrl) && poweron) {
pcie_disable_notification(ctrl);
pciehp_power_off_slot(ctrl);
}
}
return ctrl;
}
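/* Cancel any outstanding Attention Button work and free the controller. */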
void pciehp_release_ctrl(struct controller *ctrl)
{
cancel_delayed_work_sync(&ctrl->button_work);
kfree(ctrl);
}
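/*
 * Controllers with the Command Completed erratum claim support (NCCS clear)
 * but do not set Command Completed for writes that only change software
 * notification Enable bits.  Flag them so such commands are treated as
 * completed immediately instead of timing out.
 */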
static void quirk_cmd_compl(struct pci_dev *pdev)
{
u32 slot_cap;
if (pci_is_pcie(pdev)) {
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
if (slot_cap & PCI_EXP_SLTCAP_HPC &&
!(slot_cap & PCI_EXP_SLTCAP_NCCS))
pdev->broken_cmd_compl = 1;
}
}
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);