Merge tag 'pci-v4.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Highlights:

   - ARM64 support for ACPI host bridges

   - new drivers for Axis ARTPEC-6 and Marvell Aardvark

   - new pci_alloc_irq_vectors() interface for MSI-X, MSI, legacy INTx

   - pci_resource_to_user() cleanup (more to come)

  Detailed summary:

  Enumeration:
   - Move ecam.h to linux/include/pci-ecam.h (Jayachandran C)
   - Add parent device field to ECAM struct pci_config_window (Jayachandran C)
   - Add generic MCFG table handling (Tomasz Nowicki)
   - Refactor pci_bus_assign_domain_nr() for CONFIG_PCI_DOMAINS_GENERIC (Tomasz Nowicki)
   - Factor DT-specific pci_bus_find_domain_nr() code out (Tomasz Nowicki)

  Resource management:
   - Add devm_request_pci_bus_resources() (Bjorn Helgaas)
   - Unify pci_resource_to_user() declarations (Bjorn Helgaas)
   - Implement pci_resource_to_user() with pcibios_resource_to_bus() (microblaze, powerpc, sparc) (Bjorn Helgaas)
   - Request host bridge window resources (designware, iproc, rcar, xgene, xilinx, xilinx-nwl) (Bjorn Helgaas)
   - Make PCI I/O space optional on ARM32 (Bjorn Helgaas)
   - Ignore write combining when mapping I/O port space (Bjorn Helgaas)
   - Claim bus resources on MIPS PCI_PROBE_ONLY set-ups (Bjorn Helgaas)
   - Remove unicore32 pci=firmware command line parameter handling (Bjorn Helgaas)
   - Support I/O resources when parsing host bridge resources (Jayachandran C)
   - Add helpers to request/release memory and I/O regions (Johannes Thumshirn)
   - Use pci_(request|release)_mem_regions (NVMe, lpfc, GenWQE, ethernet/intel, alx) (Johannes Thumshirn)
   - Extend pci=resource_alignment to specify device/vendor IDs (Koehrer Mathias (ETAS/ESW5))
   - Add generic pci_bus_claim_resources() (Lorenzo Pieralisi)
   - Claim bus resources on ARM32 PCI_PROBE_ONLY set-ups (Lorenzo Pieralisi)
   - Remove ARM32 and ARM64 arch-specific pcibios_enable_device() (Lorenzo Pieralisi)
   - Add pci_unmap_iospace() to unmap I/O resources (Sinan Kaya)
   - Remove powerpc __pci_mmap_set_pgprot() (Yinghai Lu)

  PCI device hotplug:
   - Allow additional bus numbers for hotplug bridges (Keith Busch)
   - Ignore interrupts during D3cold (Lukas Wunner)

  Power management:
   - Enforce type casting for pci_power_t (Andy Shevchenko)
   - Don't clear d3cold_allowed for PCIe ports (Mika Westerberg)
   - Put PCIe ports into D3 during suspend (Mika Westerberg)
   - Power on bridges before scanning new devices (Mika Westerberg)
   - Runtime resume bridge before rescan (Mika Westerberg)
   - Add runtime PM support for PCIe ports (Mika Westerberg)
   - Remove redundant check of pcie_set_clkpm (Shawn Lin)

  Virtualization:
   - Add function 1 DMA alias quirk for Marvell 88SE9182 (Aaron Sierra)
   - Add DMA alias quirk for Adaptec 3805 (Alex Williamson)
   - Mark Atheros AR9485 and QCA9882 to avoid bus reset (Chris Blake)
   - Add ACS quirk for Solarflare SFC9220 (Edward Cree)

  MSI:
   - Fix PCI_MSI dependencies (Arnd Bergmann)
   - Add pci_msix_desc_addr() helper (Christoph Hellwig)
   - Switch msix_program_entries() to use pci_msix_desc_addr() (Christoph Hellwig)
   - Make the "entries" argument to pci_enable_msix() optional (Christoph Hellwig)
   - Provide sensible IRQ vector alloc/free routines (Christoph Hellwig)
   - Spread interrupt vectors in pci_alloc_irq_vectors() (Christoph Hellwig)

  Error Handling:
   - Bind DPC to Root Ports as well as Downstream Ports (Keith Busch)
   - Remove DPC tristate module option (Keith Busch)
   - Convert Downstream Port Containment driver to use devm_* functions (Mika Westerberg)

  Generic host bridge driver:
   - Select IRQ_DOMAIN (Arnd Bergmann)
   - Claim bus resources on PCI_PROBE_ONLY set-ups (Lorenzo Pieralisi)

  ACPI host bridge driver:
   - Add ARM64 acpi_pci_bus_find_domain_nr() (Tomasz Nowicki)
   - Add ARM64 ACPI support for legacy IRQs parsing and consolidation with DT code (Tomasz Nowicki)
   - Implement ARM64 AML accessors for PCI_Config region (Tomasz Nowicki)
   - Support ARM64 ACPI-based PCI host controller (Tomasz Nowicki)

  Altera host bridge driver:
   - Check link status before retrain link (Ley Foon Tan)
   - Poll for link up status after retraining the link (Ley Foon Tan)

  Axis ARTPEC-6 host bridge driver:
   - Add PCI_MSI_IRQ_DOMAIN dependency (Arnd Bergmann)
   - Add DT binding for Axis ARTPEC-6 PCIe controller (Niklas Cassel)
   - Add Axis ARTPEC-6 PCIe controller driver (Niklas Cassel)

  Intel VMD host bridge driver:
   - Use lock save/restore in interrupt enable path (Jon Derrick)
   - Select device dma ops to override (Keith Busch)
   - Initialize list item in IRQ disable (Keith Busch)
   - Use x86_vector_domain as parent domain (Keith Busch)
   - Separate MSI and MSI-X vector sharing (Keith Busch)

  Marvell Aardvark host bridge driver:
   - Add DT binding for the Aardvark PCIe controller (Thomas Petazzoni)
   - Add Aardvark PCI host controller driver (Thomas Petazzoni)
   - Add Aardvark PCIe support for Armada 3700 (Thomas Petazzoni)

  Microsoft Hyper-V host bridge driver:
   - Fix interrupt cleanup path (Cathy Avery)
   - Don't leak buffer in hv_pci_onchannelcallback() (Vitaly Kuznetsov)
   - Handle all pending messages in hv_pci_onchannelcallback() (Vitaly Kuznetsov)

  NVIDIA Tegra host bridge driver:
   - Program PADS_REFCLK_CFG* always, not just on legacy SoCs (Stephen Warren)
   - Program PADS_REFCLK_CFG* registers with per-SoC values (Stephen Warren)
   - Use lower-case hex consistently for register definitions (Thierry Reding)
   - Use generic pci_remap_iospace() rather than ARM32-specific one (Thierry Reding)
   - Stop setting pcibios_min_mem (Thierry Reding)

  Renesas R-Car host bridge driver:
   - Drop gen2 dummy I/O port region (Bjorn Helgaas)

  TI DRA7xx host bridge driver:
   - Fix return value in case of error (Christophe JAILLET)

  Xilinx AXI host bridge driver:
   - Fix return value in case of error (Christophe JAILLET)

  Miscellaneous:
   - Make bus_attr_resource_alignment static (Ben Dooks)
   - Include <asm/dma.h> for isa_dma_bridge_buggy (Ben Dooks)
   - MAINTAINERS: Add file patterns for PCI device tree bindings (Geert Uytterhoeven)
   - Make host bridge drivers explicitly non-modular (Paul Gortmaker)"

* tag 'pci-v4.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (125 commits)
  PCI: xgene: Make explicitly non-modular
  PCI: thunder-pem: Make explicitly non-modular
  PCI: thunder-ecam: Make explicitly non-modular
  PCI: tegra: Make explicitly non-modular
  PCI: rcar-gen2: Make explicitly non-modular
  PCI: rcar: Make explicitly non-modular
  PCI: mvebu: Make explicitly non-modular
  PCI: layerscape: Make explicitly non-modular
  PCI: keystone: Make explicitly non-modular
  PCI: hisi: Make explicitly non-modular
  PCI: generic: Make explicitly non-modular
  PCI: designware-plat: Make it explicitly non-modular
  PCI: artpec6: Make explicitly non-modular
  PCI: armada8k: Make explicitly non-modular
  PCI: artpec: Add PCI_MSI_IRQ_DOMAIN dependency
  PCI: Add ACS quirk for Solarflare SFC9220
  arm64: dts: marvell: Add Aardvark PCIe support for Armada 3700
  PCI: aardvark: Add Aardvark PCI host controller driver
  dt-bindings: add DT binding for the Aardvark PCIe controller
  PCI: tegra: Program PADS_REFCLK_CFG* registers with per-SoC values
  ...
Linus Torvalds 2016-08-02 17:12:29 -04:00
commit c8d0267efd
87 changed files with 3022 additions and 1184 deletions
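
The resource-management and MSI entries above introduce two driver-facing helpers, pci_request_mem_regions() and pci_alloc_irq_vectors(). A minimal probe sketch showing how a driver might combine them; the foo_* names are placeholders and not part of this merge:

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* Request all memory BARs in one call instead of per-BAR requests */
	ret = pci_request_mem_regions(pdev, "foo");
	if (ret)
		goto err_disable;

	/* Let the PCI core pick MSI-X, MSI or legacy INTx; accept 1-8 vectors */
	ret = pci_alloc_irq_vectors(pdev, 1, 8, 0);
	if (ret < 0)
		goto err_release;

	/* ... map BARs, request_irq(pci_irq_vector(pdev, i), ...), etc. ... */
	return 0;

err_release:
	pci_release_mem_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return ret;
}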


@ -78,422 +78,111 @@ CONFIG_PCI_MSI option.
4.2 Using MSI
Most of the hard work is done for the driver in the PCI layer. It simply
has to request that the PCI layer set up the MSI capability for this
Most of the hard work is done for the driver in the PCI layer. The driver
simply has to request that the PCI layer set up the MSI capability for this
device.
4.2.1 pci_enable_msi
To automatically use MSI or MSI-X interrupt vectors, use the following
function:
int pci_enable_msi(struct pci_dev *dev)
int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
unsigned int max_vecs, unsigned int flags);
A successful call allocates ONE interrupt to the device, regardless
of how many MSIs the device supports. The device is switched from
pin-based interrupt mode to MSI mode. The dev->irq number is changed
to a new number which represents the message signaled interrupt;
consequently, this function should be called before the driver calls
request_irq(), because an MSI is delivered via a vector that is
different from the vector of a pin-based interrupt.
which allocates up to max_vecs interrupt vectors for a PCI device. It
returns the number of vectors allocated or a negative error. If the device
has a requirements for a minimum number of vectors the driver can pass a
min_vecs argument set to this limit, and the PCI core will return -ENOSPC
if it can't meet the minimum number of vectors.
4.2.2 pci_enable_msi_range
The flags argument should normally be set to 0, but can be used to pass the
PCI_IRQ_NOMSI and PCI_IRQ_NOMSIX flag in case a device claims to support
MSI or MSI-X, but the support is broken, or to pass PCI_IRQ_NOLEGACY in
case the device does not support legacy interrupt lines.
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
By default this function will spread the interrupts around the available
CPUs, but this feature can be disabled by passing the PCI_IRQ_NOAFFINITY
flag.
This function allows a device driver to request any number of MSI
interrupts within specified range from 'minvec' to 'maxvec'.
To get the Linux IRQ numbers passed to request_irq() and free_irq() and the
vectors, use the following function:
If this function returns a positive number it indicates the number of
MSI interrupts that have been successfully allocated. In this case
the device is switched from pin-based interrupt mode to MSI mode and
updates dev->irq to be the lowest of the new interrupts assigned to it.
The other interrupts assigned to the device are in the range dev->irq
to dev->irq + returned value - 1. Device driver can use the returned
number of successfully allocated MSI interrupts to further allocate
and initialize device resources.
int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.
Any allocated resources should be freed before removing the device using
the following function:
This function should be called before the driver calls request_irq(),
because MSI interrupts are delivered via vectors that are different
from the vector of a pin-based interrupt.
void pci_free_irq_vectors(struct pci_dev *dev);
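
The following sketch (not part of the patch) ties the three calls above together in a single driver routine; the foo_adapter structure and foo_interrupt() handler are hypothetical:

static int foo_setup_irqs(struct foo_adapter *adapter, int max_vecs)
{
	struct pci_dev *pdev = adapter->pdev;
	int i, ret, nvec;

	/* Accept anywhere between 1 and max_vecs vectors, MSI-X preferred */
	nvec = pci_alloc_irq_vectors(pdev, 1, max_vecs, 0);
	if (nvec < 0)
		return nvec;

	/* request_irq() takes the Linux IRQ number from pci_irq_vector() */
	for (i = 0; i < nvec; i++) {
		ret = request_irq(pci_irq_vector(pdev, i), foo_interrupt,
				  0, "foo", adapter);
		if (ret)
			goto err_free;
	}

	return nvec;

err_free:
	while (--i >= 0)
		free_irq(pci_irq_vector(pdev, i), adapter);
	pci_free_irq_vectors(pdev);
	return ret;
}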
It is ideal if drivers can cope with a variable number of MSI interrupts;
there are many reasons why the platform may not be able to provide the
exact number that a driver asks for.
If a device supports both MSI-X and MSI capabilities, this API will use the
MSI-X facilities in preference to the MSI facilities. MSI-X supports any
number of interrupts between 1 and 2048. In contrast, MSI is restricted to
a maximum of 32 interrupts (and must be a power of two). In addition, the
MSI interrupt vectors must be allocated consecutively, so the system might
not be able to allocate as many vectors for MSI as it could for MSI-X. On
some platforms, MSI interrupts must all be targeted at the same set of CPUs
whereas MSI-X interrupts can all be targeted at different CPUs.
There could be devices that can not operate with just any number of MSI
interrupts within a range. See chapter 4.3.1.3 to get the idea how to
handle such devices for MSI-X - the same logic applies to MSI.
If a device supports neither MSI-X or MSI it will fall back to a single
legacy IRQ vector.
4.2.1.1 Maximum possible number of MSI interrupts
The typical usage of MSI or MSI-X interrupts is to allocate as many vectors
as possible, likely up to the limit supported by the device. If nvec is
larger than the number supported by the device it will automatically be
capped to the supported limit, so there is no need to query the number of
vectors supported beforehand:
The typical usage of MSI interrupts is to allocate as many vectors as
possible, likely up to the limit returned by pci_msi_vec_count() function:
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, 1, nvec);
}
Note the value of 'minvec' parameter is 1. As 'minvec' is inclusive,
the value of 0 would be meaningless and could result in error.
Some devices have a minimal limit on number of MSI interrupts.
In this case the function could look like this:
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, FOO_DRIVER_MINIMUM_NVEC, nvec);
}
4.2.1.2 Exact number of MSI interrupts
nvec = pci_alloc_irq_vectors(pdev, 1, nvec, 0);
if (nvec < 0)
goto out_err;
If a driver is unable or unwilling to deal with a variable number of MSI
interrupts it could request a particular number of interrupts by passing
that number to pci_enable_msi_range() function as both 'minvec' and 'maxvec'
parameters:
interrupts it can request a particular number of interrupts by passing that
number to pci_alloc_irq_vectors() function as both 'min_vecs' and
'max_vecs' parameters:
static int foo_driver_enable_msi(struct pci_dev *pdev, int nvec)
{
return pci_enable_msi_range(pdev, nvec, nvec);
}
ret = pci_alloc_irq_vectors(pdev, nvec, nvec, 0);
if (ret < 0)
goto out_err;
Note, unlike pci_enable_msi_exact() function, which could be also used to
enable a particular number of MSI-X interrupts, pci_enable_msi_range()
returns either a negative errno or 'nvec' (not negative errno or 0 - as
pci_enable_msi_exact() does).
The most notorious example of the request type described above is enabling
the single MSI mode for a device. It could be done by passing two 1s as
'min_vecs' and 'max_vecs':
4.2.1.3 Single MSI mode
ret = pci_alloc_irq_vectors(pdev, 1, 1, 0);
if (ret < 0)
goto out_err;
The most notorious example of the request type described above is
enabling the single MSI mode for a device. It could be done by passing
two 1s as 'minvec' and 'maxvec':
Some devices might not support using legacy line interrupts, in which case
the PCI_IRQ_NOLEGACY flag can be used to fail the request if the platform
can't provide MSI or MSI-X interrupts:
static int foo_driver_enable_single_msi(struct pci_dev *pdev)
{
return pci_enable_msi_range(pdev, 1, 1);
}
nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_NOLEGACY);
if (nvec < 0)
goto out_err;
Note, unlike pci_enable_msi() function, which could be also used to
enable the single MSI mode, pci_enable_msi_range() returns either a
negative errno or 1 (not negative errno or 0 - as pci_enable_msi()
does).
4.3 Legacy APIs
4.2.3 pci_enable_msi_exact
The following old APIs to enable and disable MSI or MSI-X interrupts should
not be used in new code:
int pci_enable_msi_exact(struct pci_dev *dev, int nvec)
pci_enable_msi() /* deprecated */
pci_enable_msi_range() /* deprecated */
pci_enable_msi_exact() /* deprecated */
pci_disable_msi() /* deprecated */
pci_enable_msix_range() /* deprecated */
pci_enable_msix_exact() /* deprecated */
pci_disable_msix() /* deprecated */
This variation on pci_enable_msi_range() call allows a device driver to
request exactly 'nvec' MSIs.
Additionally there are APIs to provide the number of supported MSI or MSI-X
vectors: pci_msi_vec_count() and pci_msix_vec_count(). In general these
should be avoided in favor of letting pci_alloc_irq_vectors() cap the
number of vectors. If you have a legitimate special use case for the count
of vectors we might have to revisit that decision and add a
pci_nr_irq_vectors() helper that handles MSI and MSI-X transparently.
If this function returns a negative number, it indicates an error and
the driver should not attempt to request any more MSI interrupts for
this device.
4.4 Considerations when using MSIs
By contrast with pci_enable_msi_range() function, pci_enable_msi_exact()
returns zero in case of success, which indicates MSI interrupts have been
successfully allocated.
4.2.4 pci_disable_msi
void pci_disable_msi(struct pci_dev *dev)
This function should be used to undo the effect of pci_enable_msi_range().
Calling it restores dev->irq to the pin-based interrupt number and frees
the previously allocated MSIs. The interrupts may subsequently be assigned
to another device, so drivers should not cache the value of dev->irq.
Before calling this function, a device driver must always call free_irq()
on any interrupt for which it previously called request_irq().
Failure to do so results in a BUG_ON(), leaving the device with
MSI enabled and thus leaking its vector.
4.2.4 pci_msi_vec_count
int pci_msi_vec_count(struct pci_dev *dev)
This function could be used to retrieve the number of MSI vectors the
device requested (via the Multiple Message Capable register). The MSI
specification only allows the returned value to be a power of two,
up to a maximum of 2^5 (32).
If this function returns a negative number, it indicates the device is
not capable of sending MSIs.
If this function returns a positive number, it indicates the maximum
number of MSI interrupt vectors that could be allocated.
4.3 Using MSI-X
The MSI-X capability is much more flexible than the MSI capability.
It supports up to 2048 interrupts, each of which can be controlled
independently. To support this flexibility, drivers must use an array of
`struct msix_entry':
struct msix_entry {
u16 vector; /* kernel uses to write alloc vector */
u16 entry; /* driver uses to specify entry */
};
This allows for the device to use these interrupts in a sparse fashion;
for example, it could use interrupts 3 and 1027 and yet allocate only a
two-element array. The driver is expected to fill in the 'entry' value
in each element of the array to indicate for which entries the kernel
should assign interrupts; it is invalid to fill in two entries with the
same number.
4.3.1 pci_enable_msix_range
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec)
Calling this function asks the PCI subsystem to allocate any number of
MSI-X interrupts within specified range from 'minvec' to 'maxvec'.
The 'entries' argument is a pointer to an array of msix_entry structs
which should be at least 'maxvec' entries in size.
On success, the device is switched into MSI-X mode and the function
returns the number of MSI-X interrupts that have been successfully
allocated. In this case the 'vector' member in entries numbered from
0 to the returned value - 1 is populated with the interrupt number;
the driver should then call request_irq() for each 'vector' that it
decides to use. The device driver is responsible for keeping track of the
interrupts assigned to the MSI-X vectors so it can free them again later.
Device driver can use the returned number of successfully allocated MSI-X
interrupts to further allocate and initialize device resources.
If this function returns a negative number, it indicates an error and
the driver should not attempt to allocate any more MSI-X interrupts for
this device.
This function, in contrast with pci_enable_msi_range(), does not adjust
dev->irq. The device will not generate interrupts for this interrupt
number once MSI-X is enabled.
Device drivers should normally call this function once per device
during the initialization phase.
It is ideal if drivers can cope with a variable number of MSI-X interrupts;
there are many reasons why the platform may not be able to provide the
exact number that a driver asks for.
There could be devices that can not operate with just any number of MSI-X
interrupts within a range. E.g., an network adapter might need let's say
four vectors per each queue it provides. Therefore, a number of MSI-X
interrupts allocated should be a multiple of four. In this case interface
pci_enable_msix_range() can not be used alone to request MSI-X interrupts
(since it can allocate any number within the range, without any notion of
the multiple of four) and the device driver should master a custom logic
to request the required number of MSI-X interrupts.
4.3.1.1 Maximum possible number of MSI-X interrupts
The typical usage of MSI-X interrupts is to allocate as many vectors as
possible, likely up to the limit returned by pci_msix_vec_count() function:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
1, nvec);
}
Note the value of 'minvec' parameter is 1. As 'minvec' is inclusive,
the value of 0 would be meaningless and could result in error.
Some devices have a minimal limit on number of MSI-X interrupts.
In this case the function could look like this:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
FOO_DRIVER_MINIMUM_NVEC, nvec);
}
4.3.1.2 Exact number of MSI-X interrupts
If a driver is unable or unwilling to deal with a variable number of MSI-X
interrupts it could request a particular number of interrupts by passing
that number to pci_enable_msix_range() function as both 'minvec' and 'maxvec'
parameters:
static int foo_driver_enable_msix(struct foo_adapter *adapter, int nvec)
{
return pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
nvec, nvec);
}
Note, unlike pci_enable_msix_exact() function, which could be also used to
enable a particular number of MSI-X interrupts, pci_enable_msix_range()
returns either a negative errno or 'nvec' (not negative errno or 0 - as
pci_enable_msix_exact() does).
4.3.1.3 Specific requirements to the number of MSI-X interrupts
As noted above, there could be devices that can not operate with just any
number of MSI-X interrupts within a range. E.g., let's assume a device that
is only capable sending the number of MSI-X interrupts which is a power of
two. A routine that enables MSI-X mode for such device might look like this:
/*
* Assume 'minvec' and 'maxvec' are non-zero
*/
static int foo_driver_enable_msix(struct foo_adapter *adapter,
int minvec, int maxvec)
{
int rc;
minvec = roundup_pow_of_two(minvec);
maxvec = rounddown_pow_of_two(maxvec);
if (minvec > maxvec)
return -ERANGE;
retry:
rc = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
maxvec, maxvec);
/*
* -ENOSPC is the only error code allowed to be analyzed
*/
if (rc == -ENOSPC) {
if (maxvec == 1)
return -ENOSPC;
maxvec /= 2;
if (minvec > maxvec)
return -ENOSPC;
goto retry;
}
return rc;
}
Note how pci_enable_msix_range() return value is analyzed for a fallback -
any error code other than -ENOSPC indicates a fatal error and should not
be retried.
4.3.2 pci_enable_msix_exact
int pci_enable_msix_exact(struct pci_dev *dev,
struct msix_entry *entries, int nvec)
This variation on pci_enable_msix_range() call allows a device driver to
request exactly 'nvec' MSI-Xs.
If this function returns a negative number, it indicates an error and
the driver should not attempt to allocate any more MSI-X interrupts for
this device.
By contrast with pci_enable_msix_range() function, pci_enable_msix_exact()
returns zero in case of success, which indicates MSI-X interrupts have been
successfully allocated.
Another version of a routine that enables MSI-X mode for a device with
specific requirements described in chapter 4.3.1.3 might look like this:
/*
* Assume 'minvec' and 'maxvec' are non-zero
*/
static int foo_driver_enable_msix(struct foo_adapter *adapter,
int minvec, int maxvec)
{
int rc;
minvec = roundup_pow_of_two(minvec);
maxvec = rounddown_pow_of_two(maxvec);
if (minvec > maxvec)
return -ERANGE;
retry:
rc = pci_enable_msix_exact(adapter->pdev,
adapter->msix_entries, maxvec);
/*
* -ENOSPC is the only error code allowed to be analyzed
*/
if (rc == -ENOSPC) {
if (maxvec == 1)
return -ENOSPC;
maxvec /= 2;
if (minvec > maxvec)
return -ENOSPC;
goto retry;
} else if (rc < 0) {
return rc;
}
return maxvec;
}
4.3.3 pci_disable_msix
void pci_disable_msix(struct pci_dev *dev)
This function should be used to undo the effect of pci_enable_msix_range().
It frees the previously allocated MSI-X interrupts. The interrupts may
subsequently be assigned to another device, so drivers should not cache
the value of the 'vector' elements over a call to pci_disable_msix().
Before calling this function, a device driver must always call free_irq()
on any interrupt for which it previously called request_irq().
Failure to do so results in a BUG_ON(), leaving the device with
MSI-X enabled and thus leaking its vector.
4.3.3 The MSI-X Table
The MSI-X capability specifies a BAR and offset within that BAR for the
MSI-X Table. This address is mapped by the PCI subsystem, and should not
be accessed directly by the device driver. If the driver wishes to
mask or unmask an interrupt, it should call disable_irq() / enable_irq().
4.3.4 pci_msix_vec_count
int pci_msix_vec_count(struct pci_dev *dev)
This function could be used to retrieve number of entries in the device
MSI-X table.
If this function returns a negative number, it indicates the device is
not capable of sending MSI-Xs.
If this function returns a positive number, it indicates the maximum
number of MSI-X interrupt vectors that could be allocated.
4.4 Handling devices implementing both MSI and MSI-X capabilities
If a device implements both MSI and MSI-X capabilities, it can
run in either MSI mode or MSI-X mode, but not both simultaneously.
This is a requirement of the PCI spec, and it is enforced by the
PCI layer. Calling pci_enable_msi_range() when MSI-X is already
enabled or pci_enable_msix_range() when MSI is already enabled
results in an error. If a device driver wishes to switch between MSI
and MSI-X at runtime, it must first quiesce the device, then switch
it back to pin-interrupt mode, before calling pci_enable_msi_range()
or pci_enable_msix_range() and resuming operation. This is not expected
to be a common operation but may be useful for debugging or testing
during development.
4.5 Considerations when using MSIs
4.5.1 Choosing between MSI-X and MSI
If your device supports both MSI-X and MSI capabilities, you should use
the MSI-X facilities in preference to the MSI facilities. As mentioned
above, MSI-X supports any number of interrupts between 1 and 2048.
In contrast, MSI is restricted to a maximum of 32 interrupts (and
must be a power of two). In addition, the MSI interrupt vectors must
be allocated consecutively, so the system might not be able to allocate
as many vectors for MSI as it could for MSI-X. On some platforms, MSI
interrupts must all be targeted at the same set of CPUs whereas MSI-X
interrupts can all be targeted at different CPUs.
4.5.2 Spinlocks
4.4.1 Spinlocks
Most device drivers have a per-device spinlock which is taken in the
interrupt handler. With pin-based interrupts or a single MSI, it is not
@ -505,7 +194,7 @@ acquire the spinlock. Such deadlocks can be avoided by using
spin_lock_irqsave() or spin_lock_irq() which disable local interrupts
and acquire the lock (see Documentation/DocBook/kernel-locking).
4.6 How to tell whether MSI/MSI-X is enabled on a device
4.5 How to tell whether MSI/MSI-X is enabled on a device
Using 'lspci -v' (as root) may show some devices with "MSI", "Message
Signalled Interrupts" or "MSI-X" capabilities. Each of these capabilities


@ -0,0 +1,56 @@
Aardvark PCIe controller
This PCIe controller is used on the Marvell Armada 3700 ARM64 SoC.
The Device Tree node describing an Aardvark PCIe controller must
contain the following properties:
- compatible: Should be "marvell,armada-3700-pcie"
- reg: range of registers for the PCIe controller
- interrupts: the interrupt line of the PCIe controller
- #address-cells: set to <3>
- #size-cells: set to <2>
- device_type: set to "pci"
- ranges: ranges for the PCI memory and I/O regions
- #interrupt-cells: set to <1>
- msi-controller: indicates that the PCIe controller can itself
handle MSI interrupts
- msi-parent: pointer to the MSI controller to be used
- interrupt-map-mask and interrupt-map: standard PCI properties to
define the mapping of the PCIe interface to interrupt numbers.
- bus-range: PCI bus numbers covered
In addition, the Device Tree describing an Aardvark PCIe controller
must include a sub-node that describes the legacy interrupt controller
built into the PCIe controller. This sub-node must have the following
properties:
- interrupt-controller
- #interrupt-cells: set to <1>
Example:
pcie0: pcie@d0070000 {
compatible = "marvell,armada-3700-pcie";
device_type = "pci";
status = "disabled";
reg = <0 0xd0070000 0 0x20000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <1>;
msi-controller;
msi-parent = <&pcie0>;
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
pcie_intc: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
};
};


@ -0,0 +1,46 @@
* Axis ARTPEC-6 PCIe interface
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Required properties:
- compatible: "axis,artpec6-pcie", "snps,dw-pcie"
- reg: base addresses and lengths of the PCIe controller (DBI),
the phy controller, and configuration address space.
- reg-names: Must include the following entries:
- "dbi"
- "phy"
- "config"
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
- "msi": The interrupt that is asserted when an MSI is received
- axis,syscon-pcie: A phandle pointing to the ARTPEC-6 system controller,
used to enable and control the Synopsys IP.
Example:
pcie@f8050000 {
compatible = "axis,artpec6-pcie", "snps,dw-pcie";
reg = <0xf8050000 0x2000
0xf8040000 0x1000
0xc0000000 0x1000>;
reg-names = "dbi", "phy", "config";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
/* downstream I/O */
ranges = <0x81000000 0 0x00010000 0xc0010000 0 0x00010000
/* non-prefetchable memory */
0x82000000 0 0xc0020000 0xc0020000 0 0x1ffe0000>;
num-lanes = <2>;
interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
axis,syscon-pcie = <&syscon>;
};


@ -3021,6 +3021,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
resource_alignment=
Format:
[<order of align>@][<domain>:]<bus>:<slot>.<func>[; ...]
[<order of align>@]pci:<vendor>:<device>\
[:<subvendor>:<subdevice>][; ...]
Specifies alignment and device to reassign
aligned memory resources.
If <order of align> is not specified,
@ -3039,6 +3041,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
hpmemsize=nn[KMG] The fixed amount of bus space which is
reserved for hotplug bridge's memory window.
Default size is 2 megabytes.
hpbussize=nn The minimum amount of additional bus numbers
reserved for buses below a hotplug bridge.
Default is 1.
realloc= Enable/disable reallocating PCI bridge resources
if allocations done by BIOS are too small to
accommodate resources required by all child
@ -3070,6 +3075,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
compat Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe
ports driver.
pcie_port_pm= [PCIE] PCIe port power management handling:
off Disable power management of all PCIe ports
force Forcibly enable power management of all PCIe ports
pcie_pme= [PCIE,PM] Native PCIe PME signaling options:
nomsi Do not use MSI for native PCIe PME signaling (this makes
all PCIe root ports use INTx for all services).
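
For illustration only (not part of the patch), the options documented above could appear on a kernel command line as follows; the vendor/device ID is a placeholder:

	pci=resource_alignment=12@pci:8086:1533    align BARs of device 8086:1533 to 2^12 bytes
	pci=hpbussize=4                            reserve 4 extra bus numbers below hotplug bridges
	pcie_port_pm=off                           disable power management of PCIe ports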


@ -8883,6 +8883,7 @@ L: linux-pci@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-pci/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git
S: Supported
F: Documentation/devicetree/bindings/pci/
F: Documentation/PCI/
F: drivers/pci/
F: include/linux/pci*
@ -8946,6 +8947,13 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/*mvebu*
PCI DRIVER FOR AARDVARK (Marvell Armada 3700)
M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/pci/host/pci-aardvark.c
PCI DRIVER FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
@ -9028,6 +9036,15 @@ S: Maintained
F: Documentation/devicetree/bindings/pci/xgene-pci-msi.txt
F: drivers/pci/host/pci-xgene-msi.c
PCIE DRIVER FOR AXIS ARTPEC
M: Niklas Cassel <niklas.cassel@axis.com>
M: Jesper Nilsson <jesper.nilsson@axis.com>
L: linux-arm-kernel@axis.com
L: linux-pci@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/pci/axis,artpec*
F: drivers/pci/host/*artpec*
PCIE DRIVER FOR HISILICON
M: Zhou Wang <wangzhou1@hisilicon.com>
M: Gabriele Paoloni <gabriele.paoloni@huawei.com>


@ -700,7 +700,7 @@ config ARCH_VIRT
depends on ARCH_MULTI_V7
select ARM_AMBA
select ARM_GIC
select ARM_GIC_V2M if PCI_MSI
select ARM_GIC_V2M if PCI
select ARM_GIC_V3
select ARM_PSCI
select HAVE_ARM_ARCH_TIMER


@ -22,6 +22,7 @@ struct hw_pci {
struct msi_controller *msi_ctrl;
struct pci_ops *ops;
int nr_controllers;
unsigned int io_optional:1;
void **private_data;
int (*setup)(int nr, struct pci_sys_data *);
struct pci_bus *(*scan)(int nr, struct pci_sys_data *);


@ -410,7 +410,8 @@ static int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
return irq;
}
static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
static int pcibios_init_resource(int busnr, struct pci_sys_data *sys,
int io_optional)
{
int ret;
struct resource_entry *window;
@ -420,6 +421,14 @@ static int pcibios_init_resources(int busnr, struct pci_sys_data *sys)
&iomem_resource, sys->mem_offset);
}
/*
* If a platform says I/O port support is optional, we don't add
* the default I/O space. The platform is responsible for adding
* any I/O space it needs.
*/
if (io_optional)
return 0;
resource_list_for_each_entry(window, &sys->resources)
if (resource_type(window->res) == IORESOURCE_IO)
return 0;
@ -466,7 +475,7 @@ static void pcibios_init_hw(struct device *parent, struct hw_pci *hw,
if (ret > 0) {
struct pci_host_bridge *host_bridge;
ret = pcibios_init_resources(nr, sys);
ret = pcibios_init_resource(nr, sys, hw->io_optional);
if (ret) {
kfree(sys);
break;
@ -515,25 +524,23 @@ void pci_common_init_dev(struct device *parent, struct hw_pci *hw)
list_for_each_entry(sys, &head, node) {
struct pci_bus *bus = sys->bus;
if (!pci_has_flag(PCI_PROBE_ONLY)) {
/*
* We insert PCI resources into the iomem_resource and
* ioport_resource trees in either pci_bus_claim_resources()
* or pci_bus_assign_resources().
*/
if (pci_has_flag(PCI_PROBE_ONLY)) {
pci_bus_claim_resources(bus);
} else {
struct pci_bus *child;
/*
* Size the bridge windows.
*/
pci_bus_size_bridges(bus);
/*
* Assign resources.
*/
pci_bus_assign_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
}
/*
* Tell drivers about devices found.
*/
pci_bus_add_devices(bus);
}
}
@ -590,18 +597,6 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
return start;
}
/**
* pcibios_enable_device - Enable I/O and memory.
* @dev: PCI device to be enabled
*/
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
if (pci_has_flag(PCI_PROBE_ONLY))
return 0;
return pci_enable_resources(dev, mask);
}
int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
enum pci_mmap_state mmap_state, int write_combine)
{


@ -3,6 +3,7 @@ config ARM64
select ACPI_CCA_REQUIRED if ACPI
select ACPI_GENERIC_GSI if ACPI
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ACPI_MCFG if ACPI
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
@ -22,9 +23,9 @@ config ARM64
select ARM_ARCH_TIMER
select ARM_GIC
select AUDIT_ARCH_COMPAT_GENERIC
select ARM_GIC_V2M if PCI_MSI
select ARM_GIC_V2M if PCI
select ARM_GIC_V3
select ARM_GIC_V3_ITS if PCI_MSI
select ARM_GIC_V3_ITS if PCI
select ARM_PSCI_FW
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
@ -102,6 +103,7 @@ config ARM64
select OF_EARLY_FLATTREE
select OF_NUMA if NUMA && OF
select OF_RESERVED_MEM
select PCI_ECAM if ACPI
select PERF_USE_VMALLOC
select POWER_RESET
select POWER_SUPPLY


@ -76,3 +76,8 @@
&usb3 {
status = "okay";
};
/* CON17 (PCIe) / CON12 (mini-PCIe) */
&pcie0 {
status = "okay";
};


@ -176,5 +176,30 @@
<0x1d40000 0x40000>; /* GICR */
};
};
pcie0: pcie@d0070000 {
compatible = "marvell,armada-3700-pcie";
device_type = "pci";
status = "disabled";
reg = <0 0xd0070000 0 0x20000>;
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x00 0xff>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <1>;
msi-parent = <&pcie0>;
msi-controller;
ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x1000000 /* Port 0 MEM */
0x81000000 0 0xe9000000 0 0xe9000000 0 0x10000>; /* Port 0 IO*/
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 1 &pcie_intc 0>,
<0 0 0 2 &pcie_intc 1>,
<0 0 0 3 &pcie_intc 2>,
<0 0 0 4 &pcie_intc 3>;
pcie_intc: interrupt-controller {
interrupt-controller;
#interrupt-cells = <1>;
};
};
};
};


@ -17,6 +17,9 @@
#include <linux/mm.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
#include <linux/pci-ecam.h>
#include <linux/slab.h>
/*
@ -36,25 +39,17 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
return res->start;
}
/**
* pcibios_enable_device - Enable I/O and memory.
* @dev: PCI device to be enabled
* @mask: bitmask of BARs to enable
*/
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
if (pci_has_flag(PCI_PROBE_ONLY))
return 0;
return pci_enable_resources(dev, mask);
}
/*
* Try to assign the IRQ number from DT when adding a new device
* Try to assign the IRQ number when probing a new device
*/
int pcibios_add_device(struct pci_dev *dev)
int pcibios_alloc_irq(struct pci_dev *dev)
{
dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
if (acpi_disabled)
dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
#ifdef CONFIG_ACPI
else
return acpi_pci_irq_enable(dev);
#endif
return 0;
}
@ -65,13 +60,21 @@ int pcibios_add_device(struct pci_dev *dev)
int raw_pci_read(unsigned int domain, unsigned int bus,
unsigned int devfn, int reg, int len, u32 *val)
{
return -ENXIO;
struct pci_bus *b = pci_find_bus(domain, bus);
if (!b)
return PCIBIOS_DEVICE_NOT_FOUND;
return b->ops->read(b, devfn, reg, len, val);
}
int raw_pci_write(unsigned int domain, unsigned int bus,
unsigned int devfn, int reg, int len, u32 val)
{
return -ENXIO;
struct pci_bus *b = pci_find_bus(domain, bus);
if (!b)
return PCIBIOS_DEVICE_NOT_FOUND;
return b->ops->write(b, devfn, reg, len, val);
}
#ifdef CONFIG_NUMA
@ -85,10 +88,124 @@ EXPORT_SYMBOL(pcibus_to_node);
#endif
#ifdef CONFIG_ACPI
/* Root bridge scanning */
struct acpi_pci_generic_root_info {
struct acpi_pci_root_info common;
struct pci_config_window *cfg; /* config space mapping */
};
int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
{
struct pci_config_window *cfg = bus->sysdata;
struct acpi_device *adev = to_acpi_device(cfg->parent);
struct acpi_pci_root *root = acpi_driver_data(adev);
return root->segment;
}
int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge)
{
if (!acpi_disabled) {
struct pci_config_window *cfg = bridge->bus->sysdata;
struct acpi_device *adev = to_acpi_device(cfg->parent);
ACPI_COMPANION_SET(&bridge->dev, adev);
}
return 0;
}
/*
* Lookup the bus range for the domain in MCFG, and set up config space
* mapping.
*/
static struct pci_config_window *
pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
{
struct resource *bus_res = &root->secondary;
u16 seg = root->segment;
struct pci_config_window *cfg;
struct resource cfgres;
unsigned int bsz;
/* Use address from _CBA if present, otherwise lookup MCFG */
if (!root->mcfg_addr)
root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);
if (!root->mcfg_addr) {
dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
seg, bus_res);
return NULL;
}
bsz = 1 << pci_generic_ecam_ops.bus_shift;
cfgres.start = root->mcfg_addr + bus_res->start * bsz;
cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
cfgres.flags = IORESOURCE_MEM;
cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
&pci_generic_ecam_ops);
if (IS_ERR(cfg)) {
dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
seg, bus_res, PTR_ERR(cfg));
return NULL;
}
return cfg;
}
/* release_info: free resources allocated by init_info */
static void pci_acpi_generic_release_info(struct acpi_pci_root_info *ci)
{
struct acpi_pci_generic_root_info *ri;
ri = container_of(ci, struct acpi_pci_generic_root_info, common);
pci_ecam_free(ri->cfg);
kfree(ri);
}
static struct acpi_pci_root_ops acpi_pci_root_ops = {
.release_info = pci_acpi_generic_release_info,
};
/* Interface called from ACPI code to setup PCI host controller */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
/* TODO: Should be revisited when implementing PCI on ACPI */
return NULL;
int node = acpi_get_node(root->device->handle);
struct acpi_pci_generic_root_info *ri;
struct pci_bus *bus, *child;
ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
if (!ri)
return NULL;
ri->cfg = pci_acpi_setup_ecam_mapping(root);
if (!ri->cfg) {
kfree(ri);
return NULL;
}
acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
ri->cfg);
if (!bus)
return NULL;
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
return bus;
}
void pcibios_add_bus(struct pci_bus *bus)
{
acpi_pci_add_bus(bus);
}
void pcibios_remove_bus(struct pci_bus *bus)
{
acpi_pci_remove_bus(bus);
}
#endif


@ -82,9 +82,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
pgprot_t prot);
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end);
extern void pcibios_setup_bus_devices(struct pci_bus *bus);
extern void pcibios_setup_bus_self(struct pci_bus *bus);


@ -218,33 +218,6 @@ static struct resource *__pci_mmap_make_offset(struct pci_dev *dev,
return NULL;
}
/*
* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
* device mapping.
*/
static pgprot_t __pci_mmap_set_pgprot(struct pci_dev *dev, struct resource *rp,
pgprot_t protection,
enum pci_mmap_state mmap_state,
int write_combine)
{
pgprot_t prot = protection;
/* Write combine is always 0 on non-memory space mappings. On
* memory space, if the user didn't pass 1, we check for a
* "prefetchable" resource. This is a bit hackish, but we use
* this to workaround the inability of /sysfs to provide a write
* combine bit
*/
if (mmap_state != pci_mmap_mem)
write_combine = 0;
else if (write_combine == 0) {
if (rp->flags & IORESOURCE_PREFETCH)
write_combine = 1;
}
return pgprot_noncached(prot);
}
/*
* This one is used by /dev/mem and fbdev who have no clue about the
* PCI device, it tries to find the PCI device first and calls the
@ -317,9 +290,7 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
return -EINVAL;
vma->vm_pgoff = offset >> PAGE_SHIFT;
vma->vm_page_prot = __pci_mmap_set_pgprot(dev, rp,
vma->vm_page_prot,
mmap_state, write_combine);
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
vma->vm_end - vma->vm_start, vma->vm_page_prot);
@ -473,39 +444,25 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end)
{
struct pci_controller *hose = pci_bus_to_host(dev->bus);
resource_size_t offset = 0;
struct pci_bus_region region;
if (hose == NULL)
if (rsrc->flags & IORESOURCE_IO) {
pcibios_resource_to_bus(dev->bus, &region,
(struct resource *) rsrc);
*start = region.start;
*end = region.end;
return;
}
if (rsrc->flags & IORESOURCE_IO)
offset = (unsigned long)hose->io_base_virt - _IO_BASE;
/* We pass a fully fixed up address to userland for MMIO instead of
* a BAR value because X is lame and expects to be able to use that
* to pass to /dev/mem !
/* We pass a CPU physical address to userland for MMIO instead of a
* BAR value because X is lame and expects to be able to use that
* to pass to /dev/mem!
*
* That means that we'll have potentially 64 bits values where some
* userland apps only expect 32 (like X itself since it thinks only
* Sparc has 64 bits MMIO) but if we don't do that, we break it on
* 32 bits CHRPs :-(
*
* Hopefully, the sysfs insterface is immune to that gunk. Once X
* has been fixed (and the fix spread enough), we can re-enable the
* 2 lines below and pass down a BAR value to userland. In that case
* we'll also have to re-enable the matching code in
* __pci_mmap_make_offset().
*
* BenH.
* That means we may have 64-bit values where some apps only expect
* 32 (like X itself since it thinks only Sparc has 64-bit MMIO).
*/
#if 0
else if (rsrc->flags & IORESOURCE_MEM)
offset = hose->pci_mem_offset;
#endif
*start = rsrc->start - offset;
*end = rsrc->end - offset;
*start = rsrc->start;
*end = rsrc->end;
}
/**


@ -80,16 +80,6 @@ extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc, resource_size_t *start,
resource_size_t *end)
{
phys_addr_t size = resource_size(rsrc);
*start = fixup_bigphys_addr(rsrc->start, size);
*end = rsrc->start + size;
}
/*
* Dynamic DMA mapping stuff.
* MIPS has everything mapped statically.


@ -112,7 +112,14 @@ static void pcibios_scanbus(struct pci_controller *hose)
need_domain_info = 1;
}
if (!pci_has_flag(PCI_PROBE_ONLY)) {
/*
* We insert PCI resources into the iomem_resource and
* ioport_resource trees in either pci_bus_claim_resources()
* or pci_bus_assign_resources().
*/
if (pci_has_flag(PCI_PROBE_ONLY)) {
pci_bus_claim_resources(bus);
} else {
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
}
@ -319,6 +326,16 @@ void pcibios_fixup_bus(struct pci_bus *bus)
EXPORT_SYMBOL(PCIBIOS_MIN_IO);
EXPORT_SYMBOL(PCIBIOS_MIN_MEM);
void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc, resource_size_t *start,
resource_size_t *end)
{
phys_addr_t size = resource_size(rsrc);
*start = fixup_bigphys_addr(rsrc->start, size);
*end = rsrc->start + size;
}
int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
enum pci_mmap_state mmap_state, int write_combine)
{


@ -136,9 +136,6 @@ extern pgprot_t pci_phys_mem_access_prot(struct file *file,
pgprot_t prot);
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end);
extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose);
extern void pcibios_setup_bus_devices(struct pci_bus *bus);


@ -411,36 +411,6 @@ static struct resource *__pci_mmap_make_offset(struct pci_dev *dev,
return NULL;
}
/*
* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
* device mapping.
*/
static pgprot_t __pci_mmap_set_pgprot(struct pci_dev *dev, struct resource *rp,
pgprot_t protection,
enum pci_mmap_state mmap_state,
int write_combine)
{
/* Write combine is always 0 on non-memory space mappings. On
* memory space, if the user didn't pass 1, we check for a
* "prefetchable" resource. This is a bit hackish, but we use
* this to workaround the inability of /sysfs to provide a write
* combine bit
*/
if (mmap_state != pci_mmap_mem)
write_combine = 0;
else if (write_combine == 0) {
if (rp->flags & IORESOURCE_PREFETCH)
write_combine = 1;
}
/* XXX would be nice to have a way to ask for write-through */
if (write_combine)
return pgprot_noncached_wc(protection);
else
return pgprot_noncached(protection);
}
/*
* This one is used by /dev/mem and fbdev who have no clue about the
* PCI device, it tries to find the PCI device first and calls the
@ -514,9 +484,10 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
return -EINVAL;
vma->vm_pgoff = offset >> PAGE_SHIFT;
vma->vm_page_prot = __pci_mmap_set_pgprot(dev, rp,
vma->vm_page_prot,
mmap_state, write_combine);
if (write_combine)
vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
else
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
vma->vm_end - vma->vm_start, vma->vm_page_prot);
@ -666,39 +637,25 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end)
{
struct pci_controller *hose = pci_bus_to_host(dev->bus);
resource_size_t offset = 0;
struct pci_bus_region region;
if (hose == NULL)
if (rsrc->flags & IORESOURCE_IO) {
pcibios_resource_to_bus(dev->bus, &region,
(struct resource *) rsrc);
*start = region.start;
*end = region.end;
return;
}
if (rsrc->flags & IORESOURCE_IO)
offset = (unsigned long)hose->io_base_virt - _IO_BASE;
/* We pass a fully fixed up address to userland for MMIO instead of
* a BAR value because X is lame and expects to be able to use that
* to pass to /dev/mem !
/* We pass a CPU physical address to userland for MMIO instead of a
* BAR value because X is lame and expects to be able to use that
* to pass to /dev/mem!
*
* That means that we'll have potentially 64 bits values where some
* userland apps only expect 32 (like X itself since it thinks only
* Sparc has 64 bits MMIO) but if we don't do that, we break it on
* 32 bits CHRPs :-(
*
* Hopefully, the sysfs insterface is immune to that gunk. Once X
* has been fixed (and the fix spread enough), we can re-enable the
* 2 lines below and pass down a BAR value to userland. In that case
* we'll also have to re-enable the matching code in
* __pci_mmap_make_offset().
*
* BenH.
* That means we may have 64-bit values where some apps only expect
* 32 (like X itself since it thinks only Sparc has 64-bit MMIO).
*/
#if 0
else if (rsrc->flags & IORESOURCE_MEM)
offset = hose->pci_mem_offset;
#endif
*start = rsrc->start - offset;
*end = rsrc->end - offset;
*start = rsrc->start;
*end = rsrc->end;
}
/**


@ -55,9 +55,6 @@ static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
}
#define HAVE_ARCH_PCI_RESOURCE_TO_USER
void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end);
#endif /* __KERNEL__ */
#endif /* __SPARC64_PCI_H */


@ -986,16 +986,18 @@ void pci_resource_to_user(const struct pci_dev *pdev, int bar,
const struct resource *rp, resource_size_t *start,
resource_size_t *end)
{
struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
unsigned long offset;
struct pci_bus_region region;
if (rp->flags & IORESOURCE_IO)
offset = pbm->io_space.start;
else
offset = pbm->mem_space.start;
*start = rp->start - offset;
*end = rp->end - offset;
/*
* "User" addresses are shown in /sys/devices/pci.../.../resource
* and /proc/bus/pci/devices and used as mmap offsets for
* /proc/bus/pci/BB/DD.F files (see proc_bus_pci_mmap()).
*
* On sparc, these are PCI bus addresses, i.e., raw BAR values.
*/
pcibios_resource_to_bus(pdev->bus, &region, (struct resource *) rp);
*start = region.start;
*end = region.end;
}
void pcibios_set_master(struct pci_dev *dev)
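
The microblaze/powerpc and sparc conversions above derive the user-visible address with pcibios_resource_to_bus() (for I/O on powerpc, and for both I/O and memory on sparc) instead of open-coded host-bridge offsets, so the reported value is simply the PCI bus address. A minimal sketch of that call; the printout is only for illustration:

/*
 * Sketch of the pcibios_resource_to_bus() conversion these hunks rely on.
 */
#include <linux/pci.h>

static void example_show_bus_addr(struct pci_dev *dev, struct resource *res)
{
        struct pci_bus_region region;

        pcibios_resource_to_bus(dev->bus, &region, res);
        pr_info("%pR -> bus address [%#llx-%#llx]\n", res,
                (unsigned long long)region.start,
                (unsigned long long)region.end);
}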


@ -265,10 +265,8 @@ static int __init pci_common_init(void)
pci_fixup_irqs(pci_common_swizzle, pci_puv3_map_irq);
if (!pci_has_flag(PCI_PROBE_ONLY)) {
pci_bus_size_bridges(puv3_bus);
pci_bus_assign_resources(puv3_bus);
}
pci_bus_size_bridges(puv3_bus);
pci_bus_assign_resources(puv3_bus);
pci_bus_add_devices(puv3_bus);
return 0;
}
@ -279,9 +277,6 @@ char * __init pcibios_setup(char *str)
if (!strcmp(str, "debug")) {
debug_pci = 1;
return NULL;
} else if (!strcmp(str, "firmware")) {
pci_add_flags(PCI_PROBE_ONLY);
return NULL;
}
return str;
}


@ -133,7 +133,7 @@ static void pcibios_fixup_device_resources(struct pci_dev *dev)
if (pci_probe & PCI_NOASSIGN_BARS) {
/*
* If the BIOS did not assign the BAR, zero out the
* resource so the kernel doesn't attmept to assign
* resource so the kernel doesn't attempt to assign
* it later on in pci_assign_unassigned_resources
*/
for (bar = 0; bar <= PCI_STD_RESOURCE_END; bar++) {


@ -119,10 +119,11 @@ static void vmd_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
static void vmd_irq_enable(struct irq_data *data)
{
struct vmd_irq *vmdirq = data->chip_data;
unsigned long flags;
raw_spin_lock(&list_lock);
raw_spin_lock_irqsave(&list_lock, flags);
list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
raw_spin_unlock(&list_lock);
raw_spin_unlock_irqrestore(&list_lock, flags);
data->chip->irq_unmask(data);
}
@ -130,12 +131,14 @@ static void vmd_irq_enable(struct irq_data *data)
static void vmd_irq_disable(struct irq_data *data)
{
struct vmd_irq *vmdirq = data->chip_data;
unsigned long flags;
data->chip->irq_mask(data);
raw_spin_lock(&list_lock);
raw_spin_lock_irqsave(&list_lock, flags);
list_del_rcu(&vmdirq->node);
raw_spin_unlock(&list_lock);
INIT_LIST_HEAD_RCU(&vmdirq->node);
raw_spin_unlock_irqrestore(&list_lock, flags);
}
/*
@ -166,16 +169,20 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
* XXX: We can be even smarter selecting the best IRQ once we solve the
* affinity problem.
*/
static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd)
static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
{
int i, best = 0;
int i, best = 1;
unsigned long flags;
raw_spin_lock(&list_lock);
if (!desc->msi_attrib.is_msix || vmd->msix_count == 1)
return &vmd->irqs[0];
raw_spin_lock_irqsave(&list_lock, flags);
for (i = 1; i < vmd->msix_count; i++)
if (vmd->irqs[i].count < vmd->irqs[best].count)
best = i;
vmd->irqs[best].count++;
raw_spin_unlock(&list_lock);
raw_spin_unlock_irqrestore(&list_lock, flags);
return &vmd->irqs[best];
}
@ -184,14 +191,15 @@ static int vmd_msi_init(struct irq_domain *domain, struct msi_domain_info *info,
unsigned int virq, irq_hw_number_t hwirq,
msi_alloc_info_t *arg)
{
struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(arg->desc)->bus);
struct msi_desc *desc = arg->desc;
struct vmd_dev *vmd = vmd_from_bus(msi_desc_to_pci_dev(desc)->bus);
struct vmd_irq *vmdirq = kzalloc(sizeof(*vmdirq), GFP_KERNEL);
if (!vmdirq)
return -ENOMEM;
INIT_LIST_HEAD(&vmdirq->node);
vmdirq->irq = vmd_next_irq(vmd);
vmdirq->irq = vmd_next_irq(vmd, desc);
vmdirq->virq = virq;
irq_domain_set_info(domain, virq, vmdirq->irq->vmd_vector, info->chip,
@ -203,11 +211,12 @@ static void vmd_msi_free(struct irq_domain *domain,
struct msi_domain_info *info, unsigned int virq)
{
struct vmd_irq *vmdirq = irq_get_chip_data(virq);
unsigned long flags;
/* XXX: Potential optimization to rebalance */
raw_spin_lock(&list_lock);
raw_spin_lock_irqsave(&list_lock, flags);
vmdirq->irq->count--;
raw_spin_unlock(&list_lock);
raw_spin_unlock_irqrestore(&list_lock, flags);
kfree_rcu(vmdirq, rcu);
}
@ -261,7 +270,7 @@ static struct device *to_vmd_dev(struct device *dev)
static struct dma_map_ops *vmd_dma_ops(struct device *dev)
{
return to_vmd_dev(dev)->archdata.dma_ops;
return get_dma_ops(to_vmd_dev(dev));
}
static void *vmd_alloc(struct device *dev, size_t size, dma_addr_t *addr,
@ -367,7 +376,7 @@ static void vmd_teardown_dma_ops(struct vmd_dev *vmd)
{
struct dma_domain *domain = &vmd->dma_domain;
if (vmd->dev->dev.archdata.dma_ops)
if (get_dma_ops(&vmd->dev->dev))
del_dma_domain(domain);
}
@ -379,7 +388,7 @@ static void vmd_teardown_dma_ops(struct vmd_dev *vmd)
static void vmd_setup_dma_ops(struct vmd_dev *vmd)
{
const struct dma_map_ops *source = vmd->dev->dev.archdata.dma_ops;
const struct dma_map_ops *source = get_dma_ops(&vmd->dev->dev);
struct dma_map_ops *dest = &vmd->dma_ops;
struct dma_domain *domain = &vmd->dma_domain;
@ -594,7 +603,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd)
sd->node = pcibus_to_node(vmd->dev->bus);
vmd->irq_domain = pci_msi_create_irq_domain(NULL, &vmd_msi_domain_info,
NULL);
x86_vector_domain);
if (!vmd->irq_domain)
return -ENODEV;
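
Two themes run through the vmd.c hunks above: the list_lock acquisitions become irqsave/irqrestore variants, presumably so the helpers are safe whatever the caller's interrupt state, and the driver goes through get_dma_ops() rather than poking archdata.dma_ops directly. A generic sketch of the locking pattern; the lock and list here are made up:

/*
 * Generic sketch of the raw_spin_lock_irqsave() pattern applied to
 * list_lock above; names are illustrative only.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(example_lock);
static LIST_HEAD(example_list);

static void example_add(struct list_head *node)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&example_lock, flags);
        list_add_tail(node, &example_list);
        raw_spin_unlock_irqrestore(&example_lock, flags);
}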


@ -221,6 +221,9 @@ config ACPI_PROCESSOR_IDLE
bool
select CPU_IDLE
config ACPI_MCFG
bool
config ACPI_CPPC_LIB
bool
depends on ACPI_PROCESSOR


@ -40,6 +40,7 @@ acpi-$(CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC) += processor_pdc.o
acpi-y += ec.o
acpi-$(CONFIG_ACPI_DOCK) += dock.o
acpi-y += pci_root.o pci_link.o pci_irq.o
obj-$(CONFIG_ACPI_MCFG) += pci_mcfg.o
acpi-y += acpi_lpss.o acpi_apd.o
acpi-y += acpi_platform.o
acpi-y += acpi_pnp.o


@ -0,0 +1,92 @@
/*
* Copyright (C) 2016 Broadcom
* Author: Jayachandran C <jchandra@broadcom.com>
* Copyright (C) 2016 Semihalf
* Author: Tomasz Nowicki <tn@semihalf.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation (the "GPL").
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License version 2 (GPLv2) for more details.
*
* You should have received a copy of the GNU General Public License
* version 2 (GPLv2) along with this source code.
*/
#define pr_fmt(fmt) "ACPI: " fmt
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
/* Structure to hold entries from the MCFG table */
struct mcfg_entry {
struct list_head list;
phys_addr_t addr;
u16 segment;
u8 bus_start;
u8 bus_end;
};
/* List to save MCFG entries */
static LIST_HEAD(pci_mcfg_list);
phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
{
struct mcfg_entry *e;
/*
* We expect exact match, unless MCFG entry end bus covers more than
* specified by caller.
*/
list_for_each_entry(e, &pci_mcfg_list, list) {
if (e->segment == seg && e->bus_start == bus_res->start &&
e->bus_end >= bus_res->end)
return e->addr;
}
return 0;
}
static __init int pci_mcfg_parse(struct acpi_table_header *header)
{
struct acpi_table_mcfg *mcfg;
struct acpi_mcfg_allocation *mptr;
struct mcfg_entry *e, *arr;
int i, n;
if (header->length < sizeof(struct acpi_table_mcfg))
return -EINVAL;
n = (header->length - sizeof(struct acpi_table_mcfg)) /
sizeof(struct acpi_mcfg_allocation);
mcfg = (struct acpi_table_mcfg *)header;
mptr = (struct acpi_mcfg_allocation *) &mcfg[1];
arr = kcalloc(n, sizeof(*arr), GFP_KERNEL);
if (!arr)
return -ENOMEM;
for (i = 0, e = arr; i < n; i++, mptr++, e++) {
e->segment = mptr->pci_segment;
e->addr = mptr->address;
e->bus_start = mptr->start_bus_number;
e->bus_end = mptr->end_bus_number;
list_add(&e->list, &pci_mcfg_list);
}
pr_info("MCFG table detected, %d entries\n", n);
return 0;
}
/* Interface called by ACPI - parse and save MCFG table */
void __init pci_mmcfg_late_init(void)
{
int err = acpi_table_parse(ACPI_SIG_MCFG, pci_mcfg_parse);
if (err)
pr_err("Failed to parse MCFG (%d)\n", err);
}
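
pci_mcfg_lookup() gives ACPI host-bridge code the ECAM base address for a given segment and bus range. A hypothetical caller, not part of this series, assuming the declaration is picked up via the same <linux/pci-acpi.h> include used above:

/*
 * Hypothetical caller: look up the ECAM base for PCI segment 0,
 * buses 0x00-0xff. Returns 0 if the MCFG table has no matching entry.
 */
#include <linux/ioport.h>
#include <linux/pci-acpi.h>

static phys_addr_t example_find_ecam_base(void)
{
        struct resource bus_res = {
                .start = 0x00,
                .end   = 0xff,
                .flags = IORESOURCE_BUS,
        };

        return pci_mcfg_lookup(0, &bus_res);
}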


@ -720,6 +720,36 @@ next:
}
}
static void acpi_pci_root_remap_iospace(struct resource_entry *entry)
{
#ifdef PCI_IOBASE
struct resource *res = entry->res;
resource_size_t cpu_addr = res->start;
resource_size_t pci_addr = cpu_addr - entry->offset;
resource_size_t length = resource_size(res);
unsigned long port;
if (pci_register_io_range(cpu_addr, length))
goto err;
port = pci_address_to_pio(cpu_addr);
if (port == (unsigned long)-1)
goto err;
res->start = port;
res->end = port + length - 1;
entry->offset = port - pci_addr;
if (pci_remap_iospace(res, cpu_addr) < 0)
goto err;
pr_info("Remapped I/O %pa to %pR\n", &cpu_addr, res);
return;
err:
res->flags |= IORESOURCE_DISABLED;
#endif
}
int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info)
{
int ret;
@ -740,6 +770,9 @@ int acpi_pci_probe_root_resources(struct acpi_pci_root_info *info)
"no IO and memory resources present in _CRS\n");
else {
resource_list_for_each_entry_safe(entry, tmp, list) {
if (entry->res->flags & IORESOURCE_IO)
acpi_pci_root_remap_iospace(entry);
if (entry->res->flags & IORESOURCE_DISABLED)
resource_list_destroy_entry(entry);
else
@ -811,6 +844,8 @@ static void acpi_pci_root_release_info(struct pci_host_bridge *bridge)
resource_list_for_each_entry(entry, &bridge->windows) {
res = entry->res;
if (res->flags & IORESOURCE_IO)
pci_unmap_iospace(res);
if (res->parent &&
(res->flags & (IORESOURCE_MEM | IORESOURCE_IO)))
release_resource(res);
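
acpi_pci_root_remap_iospace() pairs with the pci_unmap_iospace() call added to the release path above; the underlying pattern is map on probe, unmap on release. A sketch with made-up port range and CPU address:

/*
 * Sketch of the pci_remap_iospace()/pci_unmap_iospace() pairing; the
 * I/O port range and CPU address below are hypothetical.
 */
#include <linux/ioport.h>
#include <linux/pci.h>

static int example_io_window(void)
{
        struct resource io = DEFINE_RES_IO(0, 0x10000);
        phys_addr_t cpu_addr = 0xf8000000;      /* hypothetical */
        int ret;

        ret = pci_remap_iospace(&io, cpu_addr);
        if (ret)
                return ret;

        /* ... enumerate and use the bus ... */

        pci_unmap_iospace(&io);
        return 0;
}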


@ -21,9 +21,9 @@ config ARM_GIC_MAX_NR
config ARM_GIC_V2M
bool
depends on ARM_GIC
depends on PCI && PCI_MSI
select PCI_MSI_IRQ_DOMAIN
depends on PCI
select ARM_GIC
select PCI_MSI
config GIC_NON_BANKED
bool
@ -37,7 +37,8 @@ config ARM_GIC_V3
config ARM_GIC_V3_ITS
bool
select PCI_MSI_IRQ_DOMAIN
depends on PCI
depends on PCI_MSI
config ARM_NVIC
bool
@ -62,13 +63,13 @@ config ARM_VIC_NR
config ARMADA_370_XP_IRQ
bool
select GENERIC_IRQ_CHIP
select PCI_MSI_IRQ_DOMAIN if PCI_MSI
select PCI_MSI if PCI
config ALPINE_MSI
bool
depends on PCI && PCI_MSI
depends on PCI
select PCI_MSI
select GENERIC_IRQ_CHIP
select PCI_MSI_IRQ_DOMAIN
config ATMEL_AIC_IRQ
bool
@ -117,7 +118,6 @@ config HISILICON_IRQ_MBIGEN
bool
select ARM_GIC_V3
select ARM_GIC_V3_ITS
select GENERIC_MSI_IRQ_DOMAIN
config IMGPDC_IRQ
bool
@ -250,12 +250,10 @@ config IRQ_MXS
config MVEBU_ODMI
bool
select GENERIC_MSI_IRQ_DOMAIN
config LS_SCFG_MSI
def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE
depends on PCI && PCI_MSI
select PCI_MSI_IRQ_DOMAIN
config PARTITION_PERCPU
bool


@ -182,7 +182,7 @@ static void genwqe_dev_free(struct genwqe_dev *cd)
*/
static int genwqe_bus_reset(struct genwqe_dev *cd)
{
int bars, rc = 0;
int rc = 0;
struct pci_dev *pci_dev = cd->pci_dev;
void __iomem *mmio;
@ -193,8 +193,7 @@ static int genwqe_bus_reset(struct genwqe_dev *cd)
cd->mmio = NULL;
pci_iounmap(pci_dev, mmio);
bars = pci_select_bars(pci_dev, IORESOURCE_MEM);
pci_release_selected_regions(pci_dev, bars);
pci_release_mem_regions(pci_dev);
/*
* Firmware/BIOS might change memory mapping during bus reset.
@ -218,7 +217,7 @@ static int genwqe_bus_reset(struct genwqe_dev *cd)
GENWQE_INJECT_GFIR_FATAL |
GENWQE_INJECT_GFIR_INFO);
rc = pci_request_selected_regions(pci_dev, bars, genwqe_driver_name);
rc = pci_request_mem_regions(pci_dev, genwqe_driver_name);
if (rc) {
dev_err(&pci_dev->dev,
"[%s] err: request bars failed (%d)\n", __func__, rc);
@ -1068,10 +1067,9 @@ static int genwqe_health_check_stop(struct genwqe_dev *cd)
*/
static int genwqe_pci_setup(struct genwqe_dev *cd)
{
int err, bars;
int err;
struct pci_dev *pci_dev = cd->pci_dev;
bars = pci_select_bars(pci_dev, IORESOURCE_MEM);
err = pci_enable_device_mem(pci_dev);
if (err) {
dev_err(&pci_dev->dev,
@ -1080,7 +1078,7 @@ static int genwqe_pci_setup(struct genwqe_dev *cd)
}
/* Reserve PCI I/O and memory resources */
err = pci_request_selected_regions(pci_dev, bars, genwqe_driver_name);
err = pci_request_mem_regions(pci_dev, genwqe_driver_name);
if (err) {
dev_err(&pci_dev->dev,
"[%s] err: request bars failed (%d)\n", __func__, err);
@ -1142,7 +1140,7 @@ static int genwqe_pci_setup(struct genwqe_dev *cd)
out_iounmap:
pci_iounmap(pci_dev, cd->mmio);
out_release_resources:
pci_release_selected_regions(pci_dev, bars);
pci_release_mem_regions(pci_dev);
err_disable_device:
pci_disable_device(pci_dev);
err_out:
@ -1154,14 +1152,12 @@ static int genwqe_pci_setup(struct genwqe_dev *cd)
*/
static void genwqe_pci_remove(struct genwqe_dev *cd)
{
int bars;
struct pci_dev *pci_dev = cd->pci_dev;
if (cd->mmio)
pci_iounmap(pci_dev, cd->mmio);
bars = pci_select_bars(pci_dev, IORESOURCE_MEM);
pci_release_selected_regions(pci_dev, bars);
pci_release_mem_regions(pci_dev);
pci_disable_device(pci_dev);
}


@ -1251,7 +1251,7 @@ static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct alx_priv *alx;
struct alx_hw *hw;
bool phy_configured;
int bars, err;
int err;
err = pci_enable_device_mem(pdev);
if (err)
@ -1271,11 +1271,10 @@ static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
bars = pci_select_bars(pdev, IORESOURCE_MEM);
err = pci_request_selected_regions(pdev, bars, alx_drv_name);
err = pci_request_mem_regions(pdev, alx_drv_name);
if (err) {
dev_err(&pdev->dev,
"pci_request_selected_regions failed(bars:%d)\n", bars);
"pci_request_mem_regions failed\n");
goto out_pci_disable;
}
@ -1401,7 +1400,7 @@ out_unmap:
out_free_netdev:
free_netdev(netdev);
out_pci_release:
pci_release_selected_regions(pdev, bars);
pci_release_mem_regions(pdev);
out_pci_disable:
pci_disable_device(pdev);
return err;
@ -1420,8 +1419,7 @@ static void alx_remove(struct pci_dev *pdev)
unregister_netdev(alx->dev);
iounmap(hw->hw_addr);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);


@ -7330,8 +7330,7 @@ err_flashmap:
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
pci_disable_device(pdev);
@ -7398,8 +7397,7 @@ static void e1000_remove(struct pci_dev *pdev)
if ((adapter->hw.flash_address) &&
(adapter->hw.mac.type < e1000_pch_spt))
iounmap(adapter->hw.flash_address);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
free_netdev(netdev);


@ -1963,10 +1963,7 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_dma;
}
err = pci_request_selected_regions(pdev,
pci_select_bars(pdev,
IORESOURCE_MEM),
fm10k_driver_name);
err = pci_request_mem_regions(pdev, fm10k_driver_name);
if (err) {
dev_err(&pdev->dev,
"pci_request_selected_regions failed: %d\n", err);
@ -2070,8 +2067,7 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_netdev:
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
pci_disable_device(pdev);
@ -2119,8 +2115,7 @@ static void fm10k_remove(struct pci_dev *pdev)
free_netdev(netdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
pci_disable_pcie_error_reporting(pdev);


@ -10710,8 +10710,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
/* set up pci connections */
err = pci_request_selected_regions(pdev, pci_select_bars(pdev,
IORESOURCE_MEM), i40e_driver_name);
err = pci_request_mem_regions(pdev, i40e_driver_name);
if (err) {
dev_info(&pdev->dev,
"pci_request_selected_regions failed %d\n", err);
@ -11208,8 +11207,7 @@ err_ioremap:
kfree(pf);
err_pf_alloc:
pci_disable_pcie_error_reporting(pdev);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
pci_disable_device(pdev);
@ -11320,8 +11318,7 @@ static void i40e_remove(struct pci_dev *pdev)
iounmap(hw->hw_addr);
kfree(pf);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
pci_disable_pcie_error_reporting(pdev);
pci_disable_device(pdev);


@ -2324,9 +2324,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
}
err = pci_request_selected_regions(pdev, pci_select_bars(pdev,
IORESOURCE_MEM),
igb_driver_name);
err = pci_request_mem_regions(pdev, igb_driver_name);
if (err)
goto err_pci_reg;
@ -2750,8 +2748,7 @@ err_sw_init:
err_ioremap:
free_netdev(netdev);
err_alloc_etherdev:
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
pci_disable_device(pdev);
@ -2916,8 +2913,7 @@ static void igb_remove(struct pci_dev *pdev)
pci_iounmap(pdev, adapter->io_addr);
if (hw->flash_address)
iounmap(hw->flash_address);
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
kfree(adapter->shadow_vfta);
free_netdev(netdev);


@ -9353,8 +9353,7 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_using_dac = 0;
}
err = pci_request_selected_regions(pdev, pci_select_bars(pdev,
IORESOURCE_MEM), ixgbe_driver_name);
err = pci_request_mem_regions(pdev, ixgbe_driver_name);
if (err) {
dev_err(&pdev->dev,
"pci_request_selected_regions failed 0x%x\n", err);
@ -9740,8 +9739,7 @@ err_ioremap:
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
err_alloc_etherdev:
pci_release_selected_regions(pdev,
pci_select_bars(pdev, IORESOURCE_MEM));
pci_release_mem_regions(pdev);
err_pci_reg:
err_dma:
if (!adapter || disable_dev)
@ -9808,8 +9806,7 @@ static void ixgbe_remove(struct pci_dev *pdev)
#endif
iounmap(adapter->io_addr);
pci_release_selected_regions(pdev, pci_select_bars(pdev,
IORESOURCE_MEM));
pci_release_mem_regions(pdev);
e_dev_info("complete\n");


@ -1661,14 +1661,9 @@ static int nvme_pci_enable(struct nvme_dev *dev)
static void nvme_dev_unmap(struct nvme_dev *dev)
{
struct pci_dev *pdev = to_pci_dev(dev->dev);
int bars;
if (dev->bar)
iounmap(dev->bar);
bars = pci_select_bars(pdev, IORESOURCE_MEM);
pci_release_selected_regions(pdev, bars);
pci_release_mem_regions(to_pci_dev(dev->dev));
}
static void nvme_pci_disable(struct nvme_dev *dev)
@ -1897,13 +1892,9 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
static int nvme_dev_map(struct nvme_dev *dev)
{
int bars;
struct pci_dev *pdev = to_pci_dev(dev->dev);
bars = pci_select_bars(pdev, IORESOURCE_MEM);
if (!bars)
return -ENODEV;
if (pci_request_selected_regions(pdev, bars, "nvme"))
if (pci_request_mem_regions(pdev, "nvme"))
return -ENODEV;
dev->bar = ioremap(pci_resource_start(pdev, 0), 8192);
@ -1912,7 +1903,7 @@ static int nvme_dev_map(struct nvme_dev *dev)
return 0;
release:
pci_release_selected_regions(pdev, bars);
pci_release_mem_regions(pdev);
return -ENODEV;
}
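
The GenWQE, alx, e1000e, fm10k, i40e, igb, ixgbe and NVMe hunks above all make the same substitution: the pci_select_bars()/pci_request_selected_regions() pair becomes a single pci_request_mem_regions() call, and the matching release becomes pci_release_mem_regions(), so no BAR mask needs to be carried around. A condensed sketch of the converted pattern, with an illustrative driver name:

/*
 * Condensed sketch of the conversion. Previously the drivers did:
 *
 *     bars = pci_select_bars(pdev, IORESOURCE_MEM);
 *     err  = pci_request_selected_regions(pdev, bars, "example_drv");
 *     ...
 *     pci_release_selected_regions(pdev, bars);
 */
#include <linux/pci.h>

static int example_setup_regions(struct pci_dev *pdev)
{
        int err;

        /* Request every memory BAR in one call; no mask bookkeeping. */
        err = pci_request_mem_regions(pdev, "example_drv");
        if (err)
                return err;

        /* ... ioremap BARs and do work ... */

        pci_release_mem_regions(pdev);
        return 0;
}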


@ -25,7 +25,7 @@ config PCI_MSI
If you don't know what to do here, say Y.
config PCI_MSI_IRQ_DOMAIN
bool
def_bool ARM || ARM64 || X86
depends on PCI_MSI
select GENERIC_MSI_IRQ_DOMAIN


@ -91,6 +91,35 @@ void pci_bus_remove_resources(struct pci_bus *bus)
}
}
int devm_request_pci_bus_resources(struct device *dev,
struct list_head *resources)
{
struct resource_entry *win;
struct resource *parent, *res;
int err;
resource_list_for_each_entry(win, resources) {
res = win->res;
switch (resource_type(res)) {
case IORESOURCE_IO:
parent = &ioport_resource;
break;
case IORESOURCE_MEM:
parent = &iomem_resource;
break;
default:
continue;
}
err = devm_request_resource(dev, parent, res);
if (err)
return err;
}
return 0;
}
EXPORT_SYMBOL_GPL(devm_request_pci_bus_resources);
static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
static struct pci_bus_region pci_64_bit = {0,
@ -291,6 +320,7 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_fixup_device(pci_fixup_final, dev);
pci_create_sysfs_dev_files(dev);
pci_proc_attach_device(dev);
pci_bridge_d3_device_changed(dev);
dev->match_driver = true;
retval = device_attach(&dev->dev);
@ -397,4 +427,3 @@ void pci_bus_put(struct pci_bus *bus)
put_device(&bus->dev);
}
EXPORT_SYMBOL(pci_bus_put);
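
devm_request_pci_bus_resources() gives host drivers one call to request every I/O and memory window in their resource list, with automatic release on unbind; the later hunks for pci-host-common, mvebu, rcar-gen2, tegra, xgene and altera all switch to it. A minimal sketch of a caller; the helper name is made up and error handling beyond freeing the list is omitted:

/*
 * Minimal sketch of a host-driver caller of devm_request_pci_bus_resources().
 */
#include <linux/pci.h>
#include <linux/platform_device.h>

static int example_request_windows(struct platform_device *pdev,
                                   struct list_head *resources)
{
        int err;

        err = devm_request_pci_bus_resources(&pdev->dev, resources);
        if (err)
                pci_free_resource_list(resources);

        return err;
}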


@ -19,10 +19,9 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/slab.h>
#include "ecam.h"
/*
* On 64-bit systems, we do a single ioremap for the whole config space
* since we have enough virtual address range available. On 32-bit, we
@ -52,6 +51,7 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
if (!cfg)
return ERR_PTR(-ENOMEM);
cfg->parent = dev;
cfg->ops = ops;
cfg->busr.start = busr->start;
cfg->busr.end = busr->end;
@ -95,7 +95,7 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
}
if (ops->init) {
err = ops->init(dev, cfg);
err = ops->init(cfg);
if (err)
goto err_exit;
}
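
With the parent device now stored in the config window, an ECAM user's init() callback takes only the struct pci_config_window. A sketch of an ops instance using the new signature; the init body and the ops name are illustrative, while the map/read/write helpers are the generic ones already used by the ECAM-based drivers below:

/*
 * Sketch of a pci_ecam_ops instance with the new single-argument init().
 */
#include <linux/device.h>
#include <linux/pci-ecam.h>

static int example_ecam_init(struct pci_config_window *cfg)
{
        struct device *dev = cfg->parent;

        dev_info(dev, "ECAM window %pR ready\n", &cfg->res);
        return 0;
}

static struct pci_ecam_ops example_ecam_ops = {
        .bus_shift      = 20,
        .init           = example_ecam_init,
        .pci_ops        = {
                .map_bus        = pci_ecam_map_bus,
                .read           = pci_generic_config_read,
                .write          = pci_generic_config_write,
        }
};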


@ -3,8 +3,9 @@ menu "PCI host controller drivers"
config PCI_DRA7XX
bool "TI DRA7xx PCIe controller"
select PCIE_DW
depends on OF && HAS_IOMEM && TI_PIPE3
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
help
Enables support for the PCIe controller in the DRA7xx SoC. There
are two instances of PCIe controller in DRA7xx. This controller can
@ -16,11 +17,20 @@ config PCI_MVEBU
depends on ARM
depends on OF
config PCI_AARDVARK
bool "Aardvark PCIe controller"
depends on ARCH_MVEBU && ARM64
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
help
Add support for Aardvark 64bit PCIe Host Controller. This
controller is part of the South Bridge of the Marvel Armada
3700 SoC.
config PCIE_XILINX_NWL
bool "NWL PCIe Core"
depends on ARCH_ZYNQMP
select PCI_MSI_IRQ_DOMAIN if PCI_MSI
depends on PCI_MSI_IRQ_DOMAIN
help
Say 'Y' here if you want kernel support for Xilinx
NWL PCIe controller. The controller can act as Root Port
@ -29,6 +39,7 @@ config PCIE_XILINX_NWL
config PCIE_DW_PLAT
bool "Platform bus based DesignWare PCIe Controller"
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
---help---
This selects the DesignWare PCIe controller support. Select this if
@ -40,16 +51,19 @@ config PCIE_DW_PLAT
config PCIE_DW
bool
depends on PCI_MSI_IRQ_DOMAIN
config PCI_EXYNOS
bool "Samsung Exynos PCIe controller"
depends on SOC_EXYNOS5440
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
config PCI_IMX6
bool "Freescale i.MX6 PCIe controller"
depends on SOC_IMX6Q
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
@ -72,8 +86,7 @@ config PCI_RCAR_GEN2
config PCIE_RCAR
bool "Renesas R-Car PCIe controller"
depends on ARCH_RENESAS || (ARM && COMPILE_TEST)
select PCI_MSI
select PCI_MSI_IRQ_DOMAIN
depends on PCI_MSI_IRQ_DOMAIN
help
Say Y here if you want PCIe controller support on R-Car SoCs.
@ -85,6 +98,7 @@ config PCI_HOST_GENERIC
bool "Generic PCI host controller"
depends on (ARM || ARM64) && OF
select PCI_HOST_COMMON
select IRQ_DOMAIN
help
Say Y here if you want to support a simple generic PCI host
controller, such as the one emulated by kvmtool.
@ -92,6 +106,7 @@ config PCI_HOST_GENERIC
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on ARCH_SPEAR13XX
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
help
@ -100,6 +115,7 @@ config PCIE_SPEAR13XX
config PCI_KEYSTONE
bool "TI Keystone PCIe controller"
depends on ARCH_KEYSTONE
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
@ -120,7 +136,6 @@ config PCI_XGENE
depends on ARCH_XGENE
depends on OF
select PCIEPORTBUS
select PCI_MSI_IRQ_DOMAIN if PCI_MSI
help
Say Y here if you want internal PCI support on APM X-Gene SoC.
There are 5 internal PCIe ports available. Each port is GEN3 capable
@ -128,7 +143,8 @@ config PCI_XGENE
config PCI_XGENE_MSI
bool "X-Gene v1 PCIe MSI feature"
depends on PCI_XGENE && PCI_MSI
depends on PCI_XGENE
depends on PCI_MSI_IRQ_DOMAIN
default y
help
Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
@ -137,6 +153,7 @@ config PCI_XGENE_MSI
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller"
depends on OF && (ARM || ARCH_LAYERSCAPE)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select MFD_SYSCON
help
@ -177,8 +194,7 @@ config PCIE_IPROC_BCMA
config PCIE_IPROC_MSI
bool "Broadcom iProc PCIe MSI support"
depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA
depends on PCI_MSI
select PCI_MSI_IRQ_DOMAIN
depends on PCI_MSI_IRQ_DOMAIN
default ARCH_BCM_IPROC
help
Say Y here if you want to enable MSI support for Broadcom's iProc
@ -195,8 +211,8 @@ config PCIE_ALTERA
config PCIE_ALTERA_MSI
bool "Altera PCIe MSI feature"
depends on PCIE_ALTERA && PCI_MSI
select PCI_MSI_IRQ_DOMAIN
depends on PCIE_ALTERA
depends on PCI_MSI_IRQ_DOMAIN
help
Say Y here if you want PCIe MSI support for the Altera FPGA.
This MSI driver supports Altera MSI to GIC controller IP.
@ -204,6 +220,7 @@ config PCIE_ALTERA_MSI
config PCI_HISI
depends on OF && ARM64
bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW
help
@ -213,6 +230,7 @@ config PCI_HISI
config PCIE_QCOM
bool "Qualcomm PCIe controller"
depends on ARCH_QCOM && OF
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
@ -237,6 +255,7 @@ config PCI_HOST_THUNDER_ECAM
config PCIE_ARMADA_8K
bool "Marvell Armada-8K PCIe controller"
depends on ARCH_MVEBU
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
@ -245,4 +264,14 @@ config PCIE_ARMADA_8K
Designware hardware and therefore the driver re-uses the
Designware core functions to implement the driver.
config PCIE_ARTPEC6
bool "Axis ARTPEC-6 PCIe controller"
depends on MACH_ARTPEC6
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
select PCIEPORTBUS
help
Say Y here to enable PCIe controller support on Axis ARTPEC-6
SoCs. This PCIe controller uses the DesignWare core.
endmenu


@ -5,6 +5,7 @@ obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o
obj-$(CONFIG_PCI_IMX6) += pci-imx6.o
obj-$(CONFIG_PCI_HYPERV) += pci-hyperv.o
obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o
obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o
obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o
obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o
obj-$(CONFIG_PCIE_RCAR) += pcie-rcar.o
@ -29,3 +30,4 @@ obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o

File diff suppressed because it is too large


@ -181,14 +181,14 @@ static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return PTR_ERR(pcie_intc_node);
return -ENODEV;
}
pp->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
&intx_domain_ops, pp);
if (!pp->irq_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return PTR_ERR(pp->irq_domain);
return -ENODEV;
}
return 0;


@ -20,10 +20,9 @@
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../ecam.h"
static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
struct list_head *resources, struct resource **bus_range)
{
@ -36,44 +35,34 @@ static int gen_pci_parse_request_of_pci_ranges(struct device *dev,
if (err)
return err;
err = devm_request_pci_bus_resources(dev, resources);
if (err)
return err;
resource_list_for_each_entry(win, resources) {
struct resource *parent, *res = win->res;
struct resource *res = win->res;
switch (resource_type(res)) {
case IORESOURCE_IO:
parent = &ioport_resource;
err = pci_remap_iospace(res, iobase);
if (err) {
if (err)
dev_warn(dev, "error %d: failed to map resource %pR\n",
err, res);
continue;
}
break;
case IORESOURCE_MEM:
parent = &iomem_resource;
res_valid |= !(res->flags & IORESOURCE_PREFETCH);
break;
case IORESOURCE_BUS:
*bus_range = res;
default:
continue;
break;
}
err = devm_request_resource(dev, parent, res);
if (err)
goto out_release_res;
}
if (!res_valid) {
dev_err(dev, "non-prefetchable memory resource required\n");
err = -EINVAL;
goto out_release_res;
}
if (res_valid)
return 0;
return 0;
out_release_res:
return err;
dev_err(dev, "non-prefetchable memory resource required\n");
return -EINVAL;
}
static void gen_pci_unmap_cfg(void *ptr)
@ -155,7 +144,14 @@ int pci_host_common_probe(struct platform_device *pdev,
pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
if (!pci_has_flag(PCI_PROBE_ONLY)) {
/*
* We insert PCI resources into the iomem_resource and
* ioport_resource trees in either pci_bus_claim_resources()
* or pci_bus_assign_resources().
*/
if (pci_has_flag(PCI_PROBE_ONLY)) {
pci_bus_claim_resources(bus);
} else {
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);


@ -20,13 +20,12 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../ecam.h"
static struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = {
.bus_shift = 16,
.pci_ops = {
@ -46,8 +45,6 @@ static const struct of_device_id gen_pci_of_match[] = {
{ },
};
MODULE_DEVICE_TABLE(of, gen_pci_of_match);
static int gen_pci_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id;
@ -66,8 +63,4 @@ static struct platform_driver gen_pci_driver = {
},
.probe = gen_pci_probe,
};
module_platform_driver(gen_pci_driver);
MODULE_DESCRIPTION("Generic PCI host driver");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(gen_pci_driver);
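
This driver, and the keystone, layerscape, mvebu, rcar-gen2, tegra, thunder, xgene and armada8k drivers below, is built from a bool Kconfig option and can never be a module, so the module macros and MODULE_DEVICE_TABLE entries are dropped in favour of builtin_platform_driver(). The shape of the result, with illustrative names:

/*
 * Shape of the module_platform_driver() -> builtin_platform_driver()
 * conversion; driver and probe names are illustrative.
 */
#include <linux/init.h>
#include <linux/platform_device.h>

static int example_pcie_probe(struct platform_device *pdev)
{
        return 0;
}

static struct platform_driver example_pcie_driver = {
        .driver = {
                .name = "example-pcie",
        },
        .probe = example_pcie_probe,
};
builtin_platform_driver(example_pcie_driver);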


@ -732,16 +732,18 @@ static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info,
pdev = msi_desc_to_pci_dev(msi);
hbus = info->data;
hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn));
if (!hpdev)
int_desc = irq_data_get_irq_chip_data(irq_data);
if (!int_desc)
return;
int_desc = irq_data_get_irq_chip_data(irq_data);
if (int_desc) {
irq_data->chip_data = NULL;
hv_int_desc_free(hpdev, int_desc);
irq_data->chip_data = NULL;
hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn));
if (!hpdev) {
kfree(int_desc);
return;
}
hv_int_desc_free(hpdev, int_desc);
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
}
@ -1657,14 +1659,16 @@ static void hv_pci_onchannelcallback(void *context)
continue;
}
/* Zero length indicates there are no more packets. */
if (ret || !bytes_recvd)
break;
/*
* All incoming packets must be at least as large as a
* response.
*/
if (bytes_recvd <= sizeof(struct pci_response)) {
kfree(buffer);
return;
}
if (bytes_recvd <= sizeof(struct pci_response))
continue;
desc = (struct vmpacket_descriptor *)buffer;
switch (desc->type) {
@ -1679,8 +1683,7 @@ static void hv_pci_onchannelcallback(void *context)
comp_packet->completion_func(comp_packet->compl_ctxt,
response,
bytes_recvd);
kfree(buffer);
return;
break;
case VM_PKT_DATA_INBAND:
@ -1727,8 +1730,9 @@ static void hv_pci_onchannelcallback(void *context)
desc->type, req_id, bytes_recvd);
break;
}
break;
}
kfree(buffer);
}
/**


@ -17,7 +17,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/msi.h>
#include <linux/of_irq.h>
#include <linux/of.h>
@ -360,7 +360,6 @@ static const struct of_device_id ks_pcie_of_match[] = {
},
{ },
};
MODULE_DEVICE_TABLE(of, ks_pcie_of_match);
static int __exit ks_pcie_remove(struct platform_device *pdev)
{
@ -439,9 +438,4 @@ static struct platform_driver ks_pcie_driver __refdata = {
.of_match_table = of_match_ptr(ks_pcie_of_match),
},
};
module_platform_driver(ks_pcie_driver);
MODULE_AUTHOR("Murali Karicheri <m-karicheri2@ti.com>");
MODULE_DESCRIPTION("Keystone PCIe host controller driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(ks_pcie_driver);


@ -12,7 +12,7 @@
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/of_irq.h>
@ -211,7 +211,6 @@ static const struct of_device_id ls_pcie_of_match[] = {
{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
{ },
};
MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
static int __init ls_add_pcie_port(struct pcie_port *pp,
struct platform_device *pdev)
@ -275,9 +274,4 @@ static struct platform_driver ls_pcie_driver = {
.of_match_table = ls_pcie_of_match,
},
};
module_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);
MODULE_AUTHOR("Minghuan Lian <Minghuan.Lian@freescale.com>");
MODULE_DESCRIPTION("Freescale Layerscape PCIe host controller driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver_probe(ls_pcie_driver, ls_pcie_probe);


@ -1,6 +1,8 @@
/*
* PCIe driver for Marvell Armada 370 and Armada XP SoCs
*
* Author: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
@ -11,7 +13,7 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/mbus.h>
#include <linux/msi.h>
#include <linux/slab.h>
@ -839,25 +841,22 @@ static struct pci_ops mvebu_pcie_ops = {
static int mvebu_pcie_setup(int nr, struct pci_sys_data *sys)
{
struct mvebu_pcie *pcie = sys_to_pcie(sys);
int i;
int err, i;
pcie->mem.name = "PCI MEM";
pcie->realio.name = "PCI I/O";
if (request_resource(&iomem_resource, &pcie->mem))
return 0;
if (resource_size(&pcie->realio) != 0) {
if (request_resource(&ioport_resource, &pcie->realio)) {
release_resource(&pcie->mem);
return 0;
}
if (resource_size(&pcie->realio) != 0)
pci_add_resource_offset(&sys->resources, &pcie->realio,
sys->io_offset);
}
pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
pci_add_resource(&sys->resources, &pcie->busn);
err = devm_request_pci_bus_resources(&pcie->pdev->dev, &sys->resources);
if (err)
return 0;
for (i = 0; i < pcie->nports; i++) {
struct mvebu_pcie_port *port = &pcie->ports[i];
@ -1298,7 +1297,6 @@ static const struct of_device_id mvebu_pcie_of_match_table[] = {
{ .compatible = "marvell,kirkwood-pcie", },
{},
};
MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table);
static const struct dev_pm_ops mvebu_pcie_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume)
@ -1314,8 +1312,4 @@ static struct platform_driver mvebu_pcie_driver = {
},
.probe = mvebu_pcie_probe,
};
module_platform_driver(mvebu_pcie_driver);
MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@free-electrons.com>");
MODULE_DESCRIPTION("Marvell EBU PCIe driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(mvebu_pcie_driver);


@ -4,6 +4,8 @@
* Copyright (C) 2013 Renesas Solutions Corp.
* Copyright (C) 2013 Cogent Embedded, Inc.
*
* Author: Valentine Barshak <valentine.barshak@cogentembedded.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
@ -14,7 +16,6 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
@ -97,7 +98,6 @@
struct rcar_pci_priv {
struct device *dev;
void __iomem *reg;
struct resource io_res;
struct resource mem_res;
struct resource *cfg_res;
unsigned busnr;
@ -194,6 +194,7 @@ static int rcar_pci_setup(int nr, struct pci_sys_data *sys)
struct rcar_pci_priv *priv = sys->private_data;
void __iomem *reg = priv->reg;
u32 val;
int ret;
pm_runtime_enable(priv->dev);
pm_runtime_get_sync(priv->dev);
@ -273,8 +274,10 @@ static int rcar_pci_setup(int nr, struct pci_sys_data *sys)
rcar_pci_setup_errirq(priv);
/* Add PCI resources */
pci_add_resource(&sys->resources, &priv->io_res);
pci_add_resource(&sys->resources, &priv->mem_res);
ret = devm_request_pci_bus_resources(priv->dev, &sys->resources);
if (ret < 0)
return ret;
/* Setup bus number based on platform device id / of bus-range */
sys->busnr = priv->busnr;
@ -371,14 +374,6 @@ static int rcar_pci_probe(struct platform_device *pdev)
return -ENOMEM;
priv->mem_res = *mem_res;
/*
* The controller does not support/use port I/O,
* so setup a dummy port I/O region here.
*/
priv->io_res.start = priv->mem_res.start;
priv->io_res.end = priv->mem_res.end;
priv->io_res.flags = IORESOURCE_IO;
priv->cfg_res = cfg_res;
priv->irq = platform_get_irq(pdev, 0);
@ -421,6 +416,7 @@ static int rcar_pci_probe(struct platform_device *pdev)
hw_private[0] = priv;
memset(&hw, 0, sizeof(hw));
hw.nr_controllers = ARRAY_SIZE(hw_private);
hw.io_optional = 1;
hw.private_data = hw_private;
hw.map_irq = rcar_pci_map_irq;
hw.ops = &rcar_pci_ops;
@ -437,8 +433,6 @@ static struct of_device_id rcar_pci_of_match[] = {
{ },
};
MODULE_DEVICE_TABLE(of, rcar_pci_of_match);
static struct platform_driver rcar_pci_driver = {
.driver = {
.name = "pci-rcar-gen2",
@ -447,9 +441,4 @@ static struct platform_driver rcar_pci_driver = {
},
.probe = rcar_pci_probe,
};
module_platform_driver(rcar_pci_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Renesas R-Car Gen2 internal PCI");
MODULE_AUTHOR("Valentine Barshak <valentine.barshak@cogentembedded.com>");
builtin_platform_driver(rcar_pci_driver);


@ -9,6 +9,8 @@
*
* Bits taken from arch/arm/mach-dove/pcie.c
*
* Author: Thierry Reding <treding@nvidia.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
@ -32,7 +34,7 @@
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
@ -183,26 +185,26 @@
#define AFI_PEXBIAS_CTRL_0 0x168
#define RP_VEND_XP 0x00000F00
#define RP_VEND_XP 0x00000f00
#define RP_VEND_XP_DL_UP (1 << 30)
#define RP_PRIV_MISC 0x00000FE0
#define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xE << 0)
#define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xF << 0)
#define RP_PRIV_MISC 0x00000fe0
#define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xe << 0)
#define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xf << 0)
#define RP_LINK_CONTROL_STATUS 0x00000090
#define RP_LINK_CONTROL_STATUS_DL_LINK_ACTIVE 0x20000000
#define RP_LINK_CONTROL_STATUS_LINKSTAT_MASK 0x3fff0000
#define PADS_CTL_SEL 0x0000009C
#define PADS_CTL_SEL 0x0000009c
#define PADS_CTL 0x000000A0
#define PADS_CTL 0x000000a0
#define PADS_CTL_IDDQ_1L (1 << 0)
#define PADS_CTL_TX_DATA_EN_1L (1 << 6)
#define PADS_CTL_RX_DATA_EN_1L (1 << 10)
#define PADS_PLL_CTL_TEGRA20 0x000000B8
#define PADS_PLL_CTL_TEGRA30 0x000000B4
#define PADS_PLL_CTL_TEGRA20 0x000000b8
#define PADS_PLL_CTL_TEGRA30 0x000000b4
#define PADS_PLL_CTL_RST_B4SM (1 << 1)
#define PADS_PLL_CTL_LOCKDET (1 << 8)
#define PADS_PLL_CTL_REFCLK_MASK (0x3 << 16)
@ -214,9 +216,9 @@
#define PADS_PLL_CTL_TXCLKREF_DIV5 (1 << 20)
#define PADS_PLL_CTL_TXCLKREF_BUF_EN (1 << 22)
#define PADS_REFCLK_CFG0 0x000000C8
#define PADS_REFCLK_CFG1 0x000000CC
#define PADS_REFCLK_BIAS 0x000000D0
#define PADS_REFCLK_CFG0 0x000000c8
#define PADS_REFCLK_CFG1 0x000000cc
#define PADS_REFCLK_BIAS 0x000000d0
/*
* Fields in PADS_REFCLK_CFG*. Those registers form an array of 16-bit
@ -228,15 +230,6 @@
#define PADS_REFCLK_CFG_PREDI_SHIFT 8 /* 11:8 */
#define PADS_REFCLK_CFG_DRVI_SHIFT 12 /* 15:12 */
/* Default value provided by HW engineering is 0xfa5c */
#define PADS_REFCLK_CFG_VALUE \
( \
(0x17 << PADS_REFCLK_CFG_TERM_SHIFT) | \
(0 << PADS_REFCLK_CFG_E_TERM_SHIFT) | \
(0xa << PADS_REFCLK_CFG_PREDI_SHIFT) | \
(0xf << PADS_REFCLK_CFG_DRVI_SHIFT) \
)
struct tegra_msi {
struct msi_controller chip;
DECLARE_BITMAP(used, INT_PCI_MSI_NR);
@ -252,6 +245,8 @@ struct tegra_pcie_soc_data {
unsigned int msi_base_shift;
u32 pads_pll_ctl;
u32 tx_ref_sel;
u32 pads_refclk_cfg0;
u32 pads_refclk_cfg1;
bool has_pex_clkreq_en;
bool has_pex_bias_ctrl;
bool has_intr_prsnt_sense;
@ -274,7 +269,6 @@ struct tegra_pcie {
struct list_head buses;
struct resource *cs;
struct resource all;
struct resource io;
struct resource pio;
struct resource mem;
@ -623,30 +617,21 @@ static int tegra_pcie_setup(int nr, struct pci_sys_data *sys)
sys->mem_offset = pcie->offset.mem;
sys->io_offset = pcie->offset.io;
err = devm_request_resource(pcie->dev, &pcie->all, &pcie->io);
err = devm_request_resource(pcie->dev, &iomem_resource, &pcie->io);
if (err < 0)
return err;
err = devm_request_resource(pcie->dev, &ioport_resource, &pcie->pio);
if (err < 0)
return err;
err = devm_request_resource(pcie->dev, &pcie->all, &pcie->mem);
if (err < 0)
return err;
err = devm_request_resource(pcie->dev, &pcie->all, &pcie->prefetch);
if (err)
return err;
pci_add_resource_offset(&sys->resources, &pcie->pio, sys->io_offset);
pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
pci_add_resource_offset(&sys->resources, &pcie->prefetch,
sys->mem_offset);
pci_add_resource(&sys->resources, &pcie->busn);
pci_ioremap_io(pcie->pio.start, pcie->io.start);
err = devm_request_pci_bus_resources(pcie->dev, &sys->resources);
if (err < 0)
return err;
pci_remap_iospace(&pcie->pio, pcie->io.start);
return 1;
}
@ -838,12 +823,6 @@ static int tegra_pcie_phy_enable(struct tegra_pcie *pcie)
value |= PADS_PLL_CTL_RST_B4SM;
pads_writel(pcie, value, soc->pads_pll_ctl);
/* Configure the reference clock driver */
value = PADS_REFCLK_CFG_VALUE | (PADS_REFCLK_CFG_VALUE << 16);
pads_writel(pcie, value, PADS_REFCLK_CFG0);
if (soc->num_ports > 2)
pads_writel(pcie, PADS_REFCLK_CFG_VALUE, PADS_REFCLK_CFG1);
/* wait for the PLL to lock */
err = tegra_pcie_pll_wait(pcie, 500);
if (err < 0) {
@ -927,6 +906,7 @@ static int tegra_pcie_port_phy_power_off(struct tegra_pcie_port *port)
static int tegra_pcie_phy_power_on(struct tegra_pcie *pcie)
{
const struct tegra_pcie_soc_data *soc = pcie->soc_data;
struct tegra_pcie_port *port;
int err;
@ -952,6 +932,12 @@ static int tegra_pcie_phy_power_on(struct tegra_pcie *pcie)
}
}
/* Configure the reference clock driver */
pads_writel(pcie, soc->pads_refclk_cfg0, PADS_REFCLK_CFG0);
if (soc->num_ports > 2)
pads_writel(pcie, soc->pads_refclk_cfg1, PADS_REFCLK_CFG1);
return 0;
}
@ -1822,12 +1808,6 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
struct resource res;
int err;
memset(&pcie->all, 0, sizeof(pcie->all));
pcie->all.flags = IORESOURCE_MEM;
pcie->all.name = np->full_name;
pcie->all.start = ~0;
pcie->all.end = 0;
if (of_pci_range_parser_init(&parser, np)) {
dev_err(pcie->dev, "missing \"ranges\" property\n");
return -EINVAL;
@ -1880,18 +1860,8 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
}
break;
}
if (res.start <= pcie->all.start)
pcie->all.start = res.start;
if (res.end >= pcie->all.end)
pcie->all.end = res.end;
}
err = devm_request_resource(pcie->dev, &iomem_resource, &pcie->all);
if (err < 0)
return err;
err = of_pci_parse_bus_range(np, &pcie->busn);
if (err < 0) {
dev_err(pcie->dev, "failed to parse ranges property: %d\n",
@ -2078,6 +2048,7 @@ static const struct tegra_pcie_soc_data tegra20_pcie_data = {
.msi_base_shift = 0,
.pads_pll_ctl = PADS_PLL_CTL_TEGRA20,
.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_DIV10,
.pads_refclk_cfg0 = 0xfa5cfa5c,
.has_pex_clkreq_en = false,
.has_pex_bias_ctrl = false,
.has_intr_prsnt_sense = false,
@ -2090,6 +2061,8 @@ static const struct tegra_pcie_soc_data tegra30_pcie_data = {
.msi_base_shift = 8,
.pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
.pads_refclk_cfg0 = 0xfa5cfa5c,
.pads_refclk_cfg1 = 0xfa5cfa5c,
.has_pex_clkreq_en = true,
.has_pex_bias_ctrl = true,
.has_intr_prsnt_sense = true,
@ -2102,6 +2075,7 @@ static const struct tegra_pcie_soc_data tegra124_pcie_data = {
.msi_base_shift = 8,
.pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
.pads_refclk_cfg0 = 0x44ac44ac,
.has_pex_clkreq_en = true,
.has_pex_bias_ctrl = true,
.has_intr_prsnt_sense = true,
@ -2115,7 +2089,6 @@ static const struct of_device_id tegra_pcie_of_match[] = {
{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie_data },
{ },
};
MODULE_DEVICE_TABLE(of, tegra_pcie_of_match);
static void *tegra_pcie_ports_seq_start(struct seq_file *s, loff_t *pos)
{
@ -2249,8 +2222,6 @@ static int tegra_pcie_probe(struct platform_device *pdev)
if (err < 0)
return err;
pcibios_min_mem = 0;
err = tegra_pcie_get_resources(pcie);
if (err < 0) {
dev_err(&pdev->dev, "failed to request resources: %d\n", err);
@ -2306,8 +2277,4 @@ static struct platform_driver tegra_pcie_driver = {
},
.probe = tegra_pcie_probe,
};
module_platform_driver(tegra_pcie_driver);
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra PCIe driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(tegra_pcie_driver);


@ -7,14 +7,13 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/of_pci.h>
#include <linux/of.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../ecam.h"
static void set_val(u32 v, int where, int size, u32 *val)
{
int shift = (where & 3) * 8;
@ -360,7 +359,6 @@ static const struct of_device_id thunder_ecam_of_match[] = {
{ .compatible = "cavium,pci-host-thunder-ecam" },
{ },
};
MODULE_DEVICE_TABLE(of, thunder_ecam_of_match);
static int thunder_ecam_probe(struct platform_device *pdev)
{
@ -374,7 +372,4 @@ static struct platform_driver thunder_ecam_driver = {
},
.probe = thunder_ecam_probe,
};
module_platform_driver(thunder_ecam_driver);
MODULE_DESCRIPTION("Thunder ECAM PCI host driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(thunder_ecam_driver);


@ -15,13 +15,12 @@
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/pci-ecam.h>
#include <linux/platform_device.h>
#include "../ecam.h"
#define PEM_CFG_WR 0x28
#define PEM_CFG_RD 0x30
@ -285,8 +284,9 @@ static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn,
return pci_generic_config_write(bus, devfn, where, size, val);
}
static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg)
static int thunder_pem_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
resource_size_t bar4_start;
struct resource *res_pem;
struct thunder_pem_pci *pem_pci;
@ -346,7 +346,6 @@ static const struct of_device_id thunder_pem_of_match[] = {
{ .compatible = "cavium,pci-host-thunder-pem" },
{ },
};
MODULE_DEVICE_TABLE(of, thunder_pem_of_match);
static int thunder_pem_probe(struct platform_device *pdev)
{
@ -360,7 +359,4 @@ static struct platform_driver thunder_pem_driver = {
},
.probe = thunder_pem_probe,
};
module_platform_driver(thunder_pem_driver);
MODULE_DESCRIPTION("Thunder PEM PCIe host driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(thunder_pem_driver);


@ -80,21 +80,21 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
if (err)
return err;
err = devm_request_pci_bus_resources(dev, res);
if (err)
goto out_release_res;
resource_list_for_each_entry(win, res) {
struct resource *parent, *res = win->res;
struct resource *res = win->res;
switch (resource_type(res)) {
case IORESOURCE_IO:
parent = &ioport_resource;
err = pci_remap_iospace(res, iobase);
if (err) {
if (err)
dev_warn(dev, "error %d: failed to map resource %pR\n",
err, res);
continue;
}
break;
case IORESOURCE_MEM:
parent = &iomem_resource;
res_valid |= !(res->flags & IORESOURCE_PREFETCH);
writel(res->start >> 28, PCI_IMAP(mem));
@ -102,23 +102,14 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
mem++;
break;
case IORESOURCE_BUS:
default:
continue;
}
err = devm_request_resource(dev, parent, res);
if (err)
goto out_release_res;
}
if (!res_valid) {
dev_err(dev, "non-prefetchable memory resource required\n");
err = -EINVAL;
goto out_release_res;
}
if (res_valid)
return 0;
return 0;
dev_err(dev, "non-prefetchable memory resource required\n");
err = -EINVAL;
out_release_res:
pci_free_resource_list(res);


@ -21,7 +21,7 @@
#include <linux/io.h>
#include <linux/jiffies.h>
#include <linux/memblock.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@ -540,14 +540,20 @@ static int xgene_pcie_probe_bridge(struct platform_device *pdev)
if (ret)
return ret;
ret = devm_request_pci_bus_resources(&pdev->dev, &res);
if (ret)
goto error;
ret = xgene_pcie_setup(port, &res, iobase);
if (ret)
return ret;
goto error;
bus = pci_create_root_bus(&pdev->dev, 0,
&xgene_pcie_ops, port, &res);
if (!bus)
return -ENOMEM;
if (!bus) {
ret = -ENOMEM;
goto error;
}
pci_scan_child_bus(bus);
pci_assign_unassigned_bus_resources(bus);
@ -555,6 +561,10 @@ static int xgene_pcie_probe_bridge(struct platform_device *pdev)
platform_set_drvdata(pdev, port);
return 0;
error:
pci_free_resource_list(&res);
return ret;
}
static const struct of_device_id xgene_pcie_match_table[] = {
@ -569,8 +579,4 @@ static struct platform_driver xgene_pcie_driver = {
},
.probe = xgene_pcie_probe_bridge,
};
module_platform_driver(xgene_pcie_driver);
MODULE_AUTHOR("Tanmay Inamdar <tinamdar@apm.com>");
MODULE_DESCRIPTION("APM X-Gene PCIe driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(xgene_pcie_driver);


@ -61,6 +61,8 @@
#define TLP_LOOP 500
#define RP_DEVFN 0
#define LINK_UP_TIMEOUT 5000
#define INTX_NUM 4
#define DWORD_MASK 3
@ -81,9 +83,30 @@ struct tlp_rp_regpair_t {
u32 reg1;
};
static inline void cra_writel(struct altera_pcie *pcie, const u32 value,
const u32 reg)
{
writel_relaxed(value, pcie->cra_base + reg);
}
static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg)
{
return readl_relaxed(pcie->cra_base + reg);
}
static bool altera_pcie_link_is_up(struct altera_pcie *pcie)
{
return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0);
}
static void altera_pcie_retrain(struct pci_dev *dev)
{
u16 linkcap, linkstat;
struct altera_pcie *pcie = dev->bus->sysdata;
int timeout = 0;
if (!altera_pcie_link_is_up(pcie))
return;
/*
* Set the retrain bit if the PCIe rootport support > 2.5GB/s, but
@ -95,9 +118,16 @@ static void altera_pcie_retrain(struct pci_dev *dev)
return;
pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &linkstat);
if ((linkstat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB)
if ((linkstat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB) {
pcie_capability_set_word(dev, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_RL);
while (!altera_pcie_link_is_up(pcie)) {
timeout++;
if (timeout > LINK_UP_TIMEOUT)
break;
udelay(5);
}
}
}
DECLARE_PCI_FIXUP_EARLY(0x1172, PCI_ANY_ID, altera_pcie_retrain);
@ -120,17 +150,6 @@ static bool altera_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int devfn,
return false;
}
static inline void cra_writel(struct altera_pcie *pcie, const u32 value,
const u32 reg)
{
writel_relaxed(value, pcie->cra_base + reg);
}
static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg)
{
return readl_relaxed(pcie->cra_base + reg);
}
static void tlp_write_tx(struct altera_pcie *pcie,
struct tlp_rp_regpair_t *tlp_rp_regdata)
{
@ -139,11 +158,6 @@ static void tlp_write_tx(struct altera_pcie *pcie,
cra_writel(pcie, tlp_rp_regdata->ctrl, RP_TX_CNTRL);
}
static bool altera_pcie_link_is_up(struct altera_pcie *pcie)
{
return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0);
}
static bool altera_pcie_valid_config(struct altera_pcie *pcie,
struct pci_bus *bus, int dev)
{
@ -415,11 +429,6 @@ static void altera_pcie_isr(struct irq_desc *desc)
chained_irq_exit(chip, desc);
}
static void altera_pcie_release_of_pci_ranges(struct altera_pcie *pcie)
{
pci_free_resource_list(&pcie->resources);
}
static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie)
{
int err, res_valid = 0;
@ -432,33 +441,25 @@ static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie)
if (err)
return err;
resource_list_for_each_entry(win, &pcie->resources) {
struct resource *parent, *res = win->res;
switch (resource_type(res)) {
case IORESOURCE_MEM:
parent = &iomem_resource;
res_valid |= !(res->flags & IORESOURCE_PREFETCH);
break;
default:
continue;
}
err = devm_request_resource(dev, parent, res);
if (err)
goto out_release_res;
}
if (!res_valid) {
dev_err(dev, "non-prefetchable memory resource required\n");
err = -EINVAL;
err = devm_request_pci_bus_resources(dev, &pcie->resources);
if (err)
goto out_release_res;
resource_list_for_each_entry(win, &pcie->resources) {
struct resource *res = win->res;
if (resource_type(res) == IORESOURCE_MEM)
res_valid |= !(res->flags & IORESOURCE_PREFETCH);
}
return 0;
if (res_valid)
return 0;
dev_err(dev, "non-prefetchable memory resource required\n");
err = -EINVAL;
out_release_res:
altera_pcie_release_of_pci_ranges(pcie);
pci_free_resource_list(&pcie->resources);
return err;
}


@ -5,6 +5,9 @@
*
* Copyright (C) 2016 Marvell Technology Group Ltd.
*
* Author: Yehuda Yitshak <yehuday@marvell.com>
* Author: Shadi Ammouri <shadi@marvell.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
@ -14,7 +17,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
@ -244,7 +247,6 @@ static const struct of_device_id armada8k_pcie_of_match[] = {
{ .compatible = "marvell,armada8k-pcie", },
{},
};
MODULE_DEVICE_TABLE(of, armada8k_pcie_of_match);
static struct platform_driver armada8k_pcie_driver = {
.probe = armada8k_pcie_probe,
@ -253,10 +255,4 @@ static struct platform_driver armada8k_pcie_driver = {
.of_match_table = of_match_ptr(armada8k_pcie_of_match),
},
};
module_platform_driver(armada8k_pcie_driver);
MODULE_DESCRIPTION("Armada 8k PCIe host controller driver");
MODULE_AUTHOR("Yehuda Yitshak <yehuday@marvell.com>");
MODULE_AUTHOR("Shadi Ammouri <shadi@marvell.com>");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(armada8k_pcie_driver);

View File

@ -0,0 +1,280 @@
/*
* PCIe host controller driver for Axis ARTPEC-6 SoC
*
* Author: Niklas Cassel <niklas.cassel@axis.com>
*
* Based on work done by Phil Edworthy <phil@edworthys.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include "pcie-designware.h"
#define to_artpec6_pcie(x) container_of(x, struct artpec6_pcie, pp)
struct artpec6_pcie {
struct pcie_port pp;
struct regmap *regmap;
void __iomem *phy_base;
};
/* PCIe Port Logic registers (memory-mapped) */
#define PL_OFFSET 0x700
#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)
#define MISC_CONTROL_1_OFF (PL_OFFSET + 0x1bc)
#define DBI_RO_WR_EN 1
/* ARTPEC-6 specific registers */
#define PCIECFG 0x18
#define PCIECFG_DBG_OEN (1 << 24)
#define PCIECFG_CORE_RESET_REQ (1 << 21)
#define PCIECFG_LTSSM_ENABLE (1 << 20)
#define PCIECFG_CLKREQ_B (1 << 11)
#define PCIECFG_REFCLK_ENABLE (1 << 10)
#define PCIECFG_PLL_ENABLE (1 << 9)
#define PCIECFG_PCLK_ENABLE (1 << 8)
#define PCIECFG_RISRCREN (1 << 4)
#define PCIECFG_MODE_TX_DRV_EN (1 << 3)
#define PCIECFG_CISRREN (1 << 2)
#define PCIECFG_MACRO_ENABLE (1 << 0)
#define NOCCFG 0x40
#define NOCCFG_ENABLE_CLK_PCIE (1 << 4)
#define NOCCFG_POWER_PCIE_IDLEACK (1 << 3)
#define NOCCFG_POWER_PCIE_IDLE (1 << 2)
#define NOCCFG_POWER_PCIE_IDLEREQ (1 << 1)
#define PHY_STATUS 0x118
#define PHY_COSPLLLOCK (1 << 0)
#define ARTPEC6_CPU_TO_BUS_ADDR 0x0fffffff
static int artpec6_pcie_establish_link(struct pcie_port *pp)
{
struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pp);
u32 val;
unsigned int retries;
/* Hold DW core in reset */
regmap_read(artpec6_pcie->regmap, PCIECFG, &val);
val |= PCIECFG_CORE_RESET_REQ;
regmap_write(artpec6_pcie->regmap, PCIECFG, val);
regmap_read(artpec6_pcie->regmap, PCIECFG, &val);
val |= PCIECFG_RISRCREN | /* Receiver term. 50 Ohm */
PCIECFG_MODE_TX_DRV_EN |
PCIECFG_CISRREN | /* Reference clock term. 100 Ohm */
PCIECFG_MACRO_ENABLE;
val |= PCIECFG_REFCLK_ENABLE;
val &= ~PCIECFG_DBG_OEN;
val &= ~PCIECFG_CLKREQ_B;
regmap_write(artpec6_pcie->regmap, PCIECFG, val);
usleep_range(5000, 6000);
regmap_read(artpec6_pcie->regmap, NOCCFG, &val);
val |= NOCCFG_ENABLE_CLK_PCIE;
regmap_write(artpec6_pcie->regmap, NOCCFG, val);
usleep_range(20, 30);
regmap_read(artpec6_pcie->regmap, PCIECFG, &val);
val |= PCIECFG_PCLK_ENABLE | PCIECFG_PLL_ENABLE;
regmap_write(artpec6_pcie->regmap, PCIECFG, val);
usleep_range(6000, 7000);
regmap_read(artpec6_pcie->regmap, NOCCFG, &val);
val &= ~NOCCFG_POWER_PCIE_IDLEREQ;
regmap_write(artpec6_pcie->regmap, NOCCFG, val);
retries = 50;
do {
usleep_range(1000, 2000);
regmap_read(artpec6_pcie->regmap, NOCCFG, &val);
retries--;
} while (retries &&
(val & (NOCCFG_POWER_PCIE_IDLEACK | NOCCFG_POWER_PCIE_IDLE)));
retries = 50;
do {
usleep_range(1000, 2000);
val = readl(artpec6_pcie->phy_base + PHY_STATUS);
retries--;
} while (retries && !(val & PHY_COSPLLLOCK));
/* Take DW core out of reset */
regmap_read(artpec6_pcie->regmap, PCIECFG, &val);
val &= ~PCIECFG_CORE_RESET_REQ;
regmap_write(artpec6_pcie->regmap, PCIECFG, val);
usleep_range(100, 200);
/*
* Enable writing to config regs. This is required as the Synopsys
* driver changes the class code. That register needs DBI write enable.
*/
writel(DBI_RO_WR_EN, pp->dbi_base + MISC_CONTROL_1_OFF);
pp->io_base &= ARTPEC6_CPU_TO_BUS_ADDR;
pp->mem_base &= ARTPEC6_CPU_TO_BUS_ADDR;
pp->cfg0_base &= ARTPEC6_CPU_TO_BUS_ADDR;
pp->cfg1_base &= ARTPEC6_CPU_TO_BUS_ADDR;
/* setup root complex */
dw_pcie_setup_rc(pp);
/* assert LTSSM enable */
regmap_read(artpec6_pcie->regmap, PCIECFG, &val);
val |= PCIECFG_LTSSM_ENABLE;
regmap_write(artpec6_pcie->regmap, PCIECFG, val);
/* check if the link is up or not */
if (!dw_pcie_wait_for_link(pp))
return 0;
dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
return -ETIMEDOUT;
}
static void artpec6_pcie_enable_interrupts(struct pcie_port *pp)
{
if (IS_ENABLED(CONFIG_PCI_MSI))
dw_pcie_msi_init(pp);
}
static void artpec6_pcie_host_init(struct pcie_port *pp)
{
artpec6_pcie_establish_link(pp);
artpec6_pcie_enable_interrupts(pp);
}
static int artpec6_pcie_link_up(struct pcie_port *pp)
{
u32 rc;
/*
* Get status from Synopsys IP
* link is debug bit 36, debug register 1 starts at bit 32
*/
rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & (0x1 << (36 - 32));
if (rc)
return 1;
return 0;
}
static struct pcie_host_ops artpec6_pcie_host_ops = {
.link_up = artpec6_pcie_link_up,
.host_init = artpec6_pcie_host_init,
};
static irqreturn_t artpec6_pcie_msi_handler(int irq, void *arg)
{
struct pcie_port *pp = arg;
return dw_handle_msi_irq(pp);
}
static int __init artpec6_add_pcie_port(struct pcie_port *pp,
struct platform_device *pdev)
{
int ret;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = platform_get_irq_byname(pdev, "msi");
if (pp->msi_irq <= 0) {
dev_err(&pdev->dev, "failed to get MSI irq\n");
return -ENODEV;
}
ret = devm_request_irq(&pdev->dev, pp->msi_irq,
artpec6_pcie_msi_handler,
IRQF_SHARED | IRQF_NO_THREAD,
"artpec6-pcie-msi", pp);
if (ret) {
dev_err(&pdev->dev, "failed to request MSI irq\n");
return ret;
}
}
pp->root_bus_nr = -1;
pp->ops = &artpec6_pcie_host_ops;
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(&pdev->dev, "failed to initialize host\n");
return ret;
}
return 0;
}
static int artpec6_pcie_probe(struct platform_device *pdev)
{
struct artpec6_pcie *artpec6_pcie;
struct pcie_port *pp;
struct resource *dbi_base;
struct resource *phy_base;
int ret;
artpec6_pcie = devm_kzalloc(&pdev->dev, sizeof(*artpec6_pcie),
GFP_KERNEL);
if (!artpec6_pcie)
return -ENOMEM;
pp = &artpec6_pcie->pp;
pp->dev = &pdev->dev;
dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
pp->dbi_base = devm_ioremap_resource(&pdev->dev, dbi_base);
if (IS_ERR(pp->dbi_base))
return PTR_ERR(pp->dbi_base);
phy_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "phy");
artpec6_pcie->phy_base = devm_ioremap_resource(&pdev->dev, phy_base);
if (IS_ERR(artpec6_pcie->phy_base))
return PTR_ERR(artpec6_pcie->phy_base);
artpec6_pcie->regmap =
syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
"axis,syscon-pcie");
if (IS_ERR(artpec6_pcie->regmap))
return PTR_ERR(artpec6_pcie->regmap);
ret = artpec6_add_pcie_port(pp, pdev);
if (ret < 0)
return ret;
platform_set_drvdata(pdev, artpec6_pcie);
return 0;
}
static const struct of_device_id artpec6_pcie_of_match[] = {
{ .compatible = "axis,artpec6-pcie", },
{},
};
static struct platform_driver artpec6_pcie_driver = {
.probe = artpec6_pcie_probe,
.driver = {
.name = "artpec6-pcie",
.of_match_table = artpec6_pcie_of_match,
},
};
builtin_platform_driver(artpec6_pcie_driver);

View File

@ -14,7 +14,7 @@
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
@ -121,7 +121,6 @@ static const struct of_device_id dw_plat_pcie_of_match[] = {
{ .compatible = "snps,dw-pcie", },
{},
};
MODULE_DEVICE_TABLE(of, dw_plat_pcie_of_match);
static struct platform_driver dw_plat_pcie_driver = {
.driver = {
@ -130,9 +129,4 @@ static struct platform_driver dw_plat_pcie_driver = {
},
.probe = dw_plat_pcie_probe,
};
module_platform_driver(dw_plat_pcie_driver);
MODULE_AUTHOR("Joao Pinto <Joao.Pinto@synopsys.com>");
MODULE_DESCRIPTION("Synopsys PCIe host controller glue platform driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(dw_plat_pcie_driver);

View File

@ -452,6 +452,10 @@ int dw_pcie_host_init(struct pcie_port *pp)
if (ret)
return ret;
ret = devm_request_pci_bus_resources(&pdev->dev, &res);
if (ret)
goto error;
/* Get the I/O and memory ranges from DT */
resource_list_for_each_entry(win, &res) {
switch (resource_type(win->res)) {
@ -461,11 +465,9 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->io_size = resource_size(pp->io);
pp->io_bus_addr = pp->io->start - win->offset;
ret = pci_remap_iospace(pp->io, pp->io_base);
if (ret) {
if (ret)
dev_warn(pp->dev, "error %d: failed to map resource %pR\n",
ret, pp->io);
continue;
}
break;
case IORESOURCE_MEM:
pp->mem = win->res;
@ -483,8 +485,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
case IORESOURCE_BUS:
pp->busn = win->res;
break;
default:
continue;
}
}
@ -493,7 +493,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
resource_size(pp->cfg));
if (!pp->dbi_base) {
dev_err(pp->dev, "error with ioremap\n");
return -ENOMEM;
ret = -ENOMEM;
goto error;
}
}
@ -504,7 +505,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->cfg0_size);
if (!pp->va_cfg0_base) {
dev_err(pp->dev, "error with ioremap in function\n");
return -ENOMEM;
ret = -ENOMEM;
goto error;
}
}
@ -513,7 +515,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->cfg1_size);
if (!pp->va_cfg1_base) {
dev_err(pp->dev, "error with ioremap\n");
return -ENOMEM;
ret = -ENOMEM;
goto error;
}
}
@ -528,7 +531,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
&dw_pcie_msi_chip);
if (!pp->irq_domain) {
dev_err(pp->dev, "irq domain init failed\n");
return -ENXIO;
ret = -ENXIO;
goto error;
}
for (i = 0; i < MAX_MSI_IRQS; i++)
@ -536,7 +540,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
} else {
ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip);
if (ret < 0)
return ret;
goto error;
}
}
@ -552,8 +556,10 @@ int dw_pcie_host_init(struct pcie_port *pp)
} else
bus = pci_scan_root_bus(pp->dev, pp->root_bus_nr, &dw_pcie_ops,
pp, &res);
if (!bus)
return -ENOMEM;
if (!bus) {
ret = -ENOMEM;
goto error;
}
if (pp->ops->scan_bus)
pp->ops->scan_bus(pp);
@ -571,6 +577,10 @@ int dw_pcie_host_init(struct pcie_port *pp)
pci_bus_add_devices(bus);
return 0;
error:
pci_free_resource_list(&res);
return ret;
}
static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,

View File

@ -12,7 +12,7 @@
* published by the Free Software Foundation.
*/
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/mfd/syscon.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
@ -235,9 +235,6 @@ static const struct of_device_id hisi_pcie_of_match[] = {
{},
};
MODULE_DEVICE_TABLE(of, hisi_pcie_of_match);
static struct platform_driver hisi_pcie_driver = {
.probe = hisi_pcie_probe,
.driver = {
@ -245,10 +242,4 @@ static struct platform_driver hisi_pcie_driver = {
.of_match_table = hisi_pcie_of_match,
},
};
module_platform_driver(hisi_pcie_driver);
MODULE_AUTHOR("Zhou Wang <wangzhou1@hisilicon.com>");
MODULE_AUTHOR("Dacai Zhu <zhudacai@hisilicon.com>");
MODULE_AUTHOR("Gabriele Paoloni <gabriele.paoloni@huawei.com>");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(hisi_pcie_driver);

View File

@ -462,6 +462,10 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res)
if (!pcie || !pcie->dev || !pcie->base)
return -EINVAL;
ret = devm_request_pci_bus_resources(pcie->dev, res);
if (ret)
return ret;
ret = phy_init(pcie->phy);
if (ret) {
dev_err(pcie->dev, "unable to initialize PCIe PHY\n");

View File

@ -7,6 +7,8 @@
* arch/sh/drivers/pci/ops-sh7786.c
* Copyright (C) 2009 - 2011 Paul Mundt
*
* Author: Phil Edworthy <phil.edworthy@renesas.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
@ -18,7 +20,7 @@
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@ -936,12 +938,6 @@ static const struct of_device_id rcar_pcie_of_match[] = {
{ .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init },
{},
};
MODULE_DEVICE_TABLE(of, rcar_pcie_of_match);
static void rcar_pcie_release_of_pci_ranges(struct rcar_pcie *pci)
{
pci_free_resource_list(&pci->resources);
}
static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
{
@ -955,37 +951,25 @@ static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
if (err)
return err;
resource_list_for_each_entry(win, &pci->resources) {
struct resource *parent, *res = win->res;
err = devm_request_pci_bus_resources(dev, &pci->resources);
if (err)
goto out_release_res;
switch (resource_type(res)) {
case IORESOURCE_IO:
parent = &ioport_resource;
resource_list_for_each_entry(win, &pci->resources) {
struct resource *res = win->res;
if (resource_type(res) == IORESOURCE_IO) {
err = pci_remap_iospace(res, iobase);
if (err) {
if (err)
dev_warn(dev, "error %d: failed to map resource %pR\n",
err, res);
continue;
}
break;
case IORESOURCE_MEM:
parent = &iomem_resource;
break;
case IORESOURCE_BUS:
default:
continue;
}
err = devm_request_resource(dev, parent, res);
if (err)
goto out_release_res;
}
return 0;
out_release_res:
rcar_pcie_release_of_pci_ranges(pci);
pci_free_resource_list(&pci->resources);
return err;
}
@ -1073,8 +1057,4 @@ static struct platform_driver rcar_pcie_driver = {
},
.probe = rcar_pcie_probe,
};
module_platform_driver(rcar_pcie_driver);
MODULE_AUTHOR("Phil Edworthy <phil.edworthy@renesas.com>");
MODULE_DESCRIPTION("Renesas R-Car PCIe driver");
MODULE_LICENSE("GPL v2");
builtin_platform_driver(rcar_pcie_driver);

View File

@ -825,27 +825,33 @@ static int nwl_pcie_probe(struct platform_device *pdev)
err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
if (err) {
pr_err("Getting bridge resources failed\n");
dev_err(pcie->dev, "Getting bridge resources failed\n");
return err;
}
err = devm_request_pci_bus_resources(pcie->dev, &res);
if (err)
goto error;
err = nwl_pcie_init_irq_domain(pcie);
if (err) {
dev_err(pcie->dev, "Failed creating IRQ Domain\n");
return err;
goto error;
}
bus = pci_create_root_bus(&pdev->dev, pcie->root_busno,
&nwl_pcie_ops, pcie, &res);
if (!bus)
return -ENOMEM;
if (!bus) {
err = -ENOMEM;
goto error;
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
err = nwl_pcie_enable_msi(pcie, bus);
if (err < 0) {
dev_err(&pdev->dev,
"failed to enable MSI support: %d\n", err);
return err;
goto error;
}
}
pci_scan_child_bus(bus);
@ -855,6 +861,10 @@ static int nwl_pcie_probe(struct platform_device *pdev)
pci_bus_add_devices(bus);
platform_set_drvdata(pdev, pcie);
return 0;
error:
pci_free_resource_list(&res);
return err;
}
static int nwl_pcie_remove(struct platform_device *pdev)

View File

@ -550,7 +550,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return PTR_ERR(pcie_intc_node);
return -ENODEV;
}
port->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
@ -558,7 +558,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
port);
if (!port->irq_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return PTR_ERR(port->irq_domain);
return -ENODEV;
}
/* Setup MSI */
@ -569,7 +569,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
&xilinx_pcie_msi_chip);
if (!port->irq_domain) {
dev_err(dev, "Failed to get a MSI IRQ domain\n");
return PTR_ERR(port->irq_domain);
return -ENODEV;
}
xilinx_pcie_enable_msi(port);
@ -660,7 +660,6 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
struct xilinx_pcie_port *port;
struct device *dev = &pdev->dev;
struct pci_bus *bus;
int err;
resource_size_t iobase = 0;
LIST_HEAD(res);
@ -694,10 +693,17 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
dev_err(dev, "Getting bridge resources failed\n");
return err;
}
err = devm_request_pci_bus_resources(dev, &res);
if (err)
goto error;
bus = pci_create_root_bus(&pdev->dev, 0,
&xilinx_pcie_ops, port, &res);
if (!bus)
return -ENOMEM;
if (!bus) {
err = -ENOMEM;
goto error;
}
#ifdef CONFIG_PCI_MSI
xilinx_pcie_msi_chip.dev = port->dev;
@ -712,6 +718,10 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, port);
return 0;
error:
pci_free_resource_list(&res);
return err;
}
/**

View File

@ -675,6 +675,8 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
if (bridge->is_going_away)
return;
pm_runtime_get_sync(&bridge->pci_dev->dev);
list_for_each_entry(slot, &bridge->slots, node) {
struct pci_bus *bus = slot->bus;
struct pci_dev *dev, *tmp;
@ -694,6 +696,8 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
disable_slot(slot);
}
}
pm_runtime_put(&bridge->pci_dev->dev);
}
/*

View File

@ -546,6 +546,10 @@ static irqreturn_t pcie_isr(int irq, void *dev_id)
u8 present;
bool link;
/* Interrupts cannot originate from a controller that's asleep */
if (pdev->current_state == PCI_D3cold)
return IRQ_NONE;
/*
* In order to guarantee that all interrupt events are
* serviced, we need to re-inspect Slot Status register after

View File

@ -4,6 +4,7 @@
*
* Copyright (C) 2003-2004 Intel
* Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
* Copyright (C) 2016 Christoph Hellwig.
*/
#include <linux/err.h>
@ -207,6 +208,12 @@ static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
desc->masked = __pci_msi_desc_mask_irq(desc, mask, flag);
}
static void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
{
return desc->mask_base +
desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
}
/*
* This internal function does not flush PCI writes to the device.
* All users must ensure that they read from the device before either
@ -217,8 +224,6 @@ static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag)
{
u32 mask_bits = desc->masked;
unsigned offset = desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE +
PCI_MSIX_ENTRY_VECTOR_CTRL;
if (pci_msi_ignore_mask)
return 0;
@ -226,7 +231,7 @@ u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag)
mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
if (flag)
mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
writel(mask_bits, desc->mask_base + offset);
writel(mask_bits, pci_msix_desc_addr(desc) + PCI_MSIX_ENTRY_VECTOR_CTRL);
return mask_bits;
}
@ -284,8 +289,7 @@ void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
BUG_ON(dev->current_state != PCI_D0);
if (entry->msi_attrib.is_msix) {
void __iomem *base = entry->mask_base +
entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
void __iomem *base = pci_msix_desc_addr(entry);
msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR);
msg->address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR);
@ -315,9 +319,7 @@ void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
if (dev->current_state != PCI_D0) {
/* Don't touch the hardware now */
} else if (entry->msi_attrib.is_msix) {
void __iomem *base;
base = entry->mask_base +
entry->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
void __iomem *base = pci_msix_desc_addr(entry);
writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
@ -567,6 +569,7 @@ static struct msi_desc *msi_setup_entry(struct pci_dev *dev, int nvec)
entry->msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1;
entry->msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec));
entry->nvec_used = nvec;
entry->affinity = dev->irq_affinity;
if (control & PCI_MSI_FLAGS_64BIT)
entry->mask_pos = dev->msi_cap + PCI_MSI_MASK_64;
@ -678,10 +681,18 @@ static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries)
static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
struct msix_entry *entries, int nvec)
{
const struct cpumask *mask = NULL;
struct msi_desc *entry;
int i;
int cpu = -1, i;
for (i = 0; i < nvec; i++) {
if (dev->irq_affinity) {
cpu = cpumask_next(cpu, dev->irq_affinity);
if (cpu >= nr_cpu_ids)
cpu = cpumask_first(dev->irq_affinity);
mask = cpumask_of(cpu);
}
entry = alloc_msi_entry(&dev->dev);
if (!entry) {
if (!i)
@ -694,10 +705,14 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
entry->msi_attrib.is_msix = 1;
entry->msi_attrib.is_64 = 1;
entry->msi_attrib.entry_nr = entries[i].entry;
if (entries)
entry->msi_attrib.entry_nr = entries[i].entry;
else
entry->msi_attrib.entry_nr = i;
entry->msi_attrib.default_irq = dev->irq;
entry->mask_base = base;
entry->nvec_used = 1;
entry->affinity = mask;
list_add_tail(&entry->list, dev_to_msi_list(&dev->dev));
}
@ -712,13 +727,11 @@ static void msix_program_entries(struct pci_dev *dev,
int i = 0;
for_each_pci_msi_entry(entry, dev) {
int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
PCI_MSIX_ENTRY_VECTOR_CTRL;
entries[i].vector = entry->irq;
entry->masked = readl(entry->mask_base + offset);
if (entries)
entries[i++].vector = entry->irq;
entry->masked = readl(pci_msix_desc_addr(entry) +
PCI_MSIX_ENTRY_VECTOR_CTRL);
msix_mask_irq(entry, 1);
i++;
}
}
@ -931,7 +944,7 @@ EXPORT_SYMBOL(pci_msix_vec_count);
/**
* pci_enable_msix - configure device's MSI-X capability structure
* @dev: pointer to the pci_dev data structure of MSI-X device function
* @entries: pointer to an array of MSI-X entries
* @entries: pointer to an array of MSI-X entries (optional)
* @nvec: number of MSI-X irqs requested for allocation by device driver
*
* Setup the MSI-X capability structure of device function with the number
@ -951,22 +964,21 @@ int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec)
if (!pci_msi_supported(dev, nvec))
return -EINVAL;
if (!entries)
return -EINVAL;
nr_entries = pci_msix_vec_count(dev);
if (nr_entries < 0)
return nr_entries;
if (nvec > nr_entries)
return nr_entries;
/* Check for any invalid entries */
for (i = 0; i < nvec; i++) {
if (entries[i].entry >= nr_entries)
return -EINVAL; /* invalid entry */
for (j = i + 1; j < nvec; j++) {
if (entries[i].entry == entries[j].entry)
return -EINVAL; /* duplicate entry */
if (entries) {
/* Check for any invalid entries */
for (i = 0; i < nvec; i++) {
if (entries[i].entry >= nr_entries)
return -EINVAL; /* invalid entry */
for (j = i + 1; j < nvec; j++) {
if (entries[i].entry == entries[j].entry)
return -EINVAL; /* duplicate entry */
}
}
}
WARN_ON(!!dev->msix_enabled);
@ -1026,19 +1038,8 @@ int pci_msi_enabled(void)
}
EXPORT_SYMBOL(pci_msi_enabled);
/**
* pci_enable_msi_range - configure device's MSI capability structure
* @dev: device to configure
* @minvec: minimal number of interrupts to configure
* @maxvec: maximum number of interrupts to configure
*
* This function tries to allocate a maximum possible number of interrupts in a
* range between @minvec and @maxvec. It returns a negative errno if an error
* occurs. If it succeeds, it returns the actual number of interrupts allocated
* and updates the @dev's irq member to the lowest new interrupt number;
* the other interrupt numbers allocated to this device are consecutive.
**/
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
unsigned int flags)
{
int nvec;
int rc;
@ -1061,26 +1062,86 @@ int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
nvec = pci_msi_vec_count(dev);
if (nvec < 0)
return nvec;
else if (nvec < minvec)
if (nvec < minvec)
return -EINVAL;
else if (nvec > maxvec)
if (nvec > maxvec)
nvec = maxvec;
do {
rc = msi_capability_init(dev, nvec);
if (rc < 0) {
return rc;
} else if (rc > 0) {
if (rc < minvec)
for (;;) {
if (!(flags & PCI_IRQ_NOAFFINITY)) {
dev->irq_affinity = irq_create_affinity_mask(&nvec);
if (nvec < minvec)
return -ENOSPC;
nvec = rc;
}
} while (rc);
return nvec;
rc = msi_capability_init(dev, nvec);
if (rc == 0)
return nvec;
kfree(dev->irq_affinity);
dev->irq_affinity = NULL;
if (rc < 0)
return rc;
if (rc < minvec)
return -ENOSPC;
nvec = rc;
}
}
/**
* pci_enable_msi_range - configure device's MSI capability structure
* @dev: device to configure
* @minvec: minimal number of interrupts to configure
* @maxvec: maximum number of interrupts to configure
*
* This function tries to allocate a maximum possible number of interrupts in a
* range between @minvec and @maxvec. It returns a negative errno if an error
* occurs. If it succeeds, it returns the actual number of interrupts allocated
* and updates the @dev's irq member to the lowest new interrupt number;
* the other interrupt numbers allocated to this device are consecutive.
**/
int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
{
return __pci_enable_msi_range(dev, minvec, maxvec, PCI_IRQ_NOAFFINITY);
}
EXPORT_SYMBOL(pci_enable_msi_range);
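A minimal usage sketch of the range API (the 1..8 range and the surrounding driver are illustrative, not part of this patch):

        /* Ask for up to 8 MSI vectors but accept as few as 1. */
        int nvec = pci_enable_msi_range(pdev, 1, 8);

        if (nvec < 0)
                return nvec;    /* MSI not available */
        /* pdev->irq .. pdev->irq + nvec - 1 are now valid, consecutive IRQs */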
static int __pci_enable_msix_range(struct pci_dev *dev,
struct msix_entry *entries, int minvec, int maxvec,
unsigned int flags)
{
int nvec = maxvec;
int rc;
if (maxvec < minvec)
return -ERANGE;
for (;;) {
if (!(flags & PCI_IRQ_NOAFFINITY)) {
dev->irq_affinity = irq_create_affinity_mask(&nvec);
if (nvec < minvec)
return -ENOSPC;
}
rc = pci_enable_msix(dev, entries, nvec);
if (rc == 0)
return nvec;
kfree(dev->irq_affinity);
dev->irq_affinity = NULL;
if (rc < 0)
return rc;
if (rc < minvec)
return -ENOSPC;
nvec = rc;
}
}
/**
* pci_enable_msix_range - configure device's MSI-X capability structure
* @dev: pointer to the pci_dev data structure of MSI-X device function
@ -1097,29 +1158,102 @@ EXPORT_SYMBOL(pci_enable_msi_range);
* with new allocated MSI-X interrupts.
**/
int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
int minvec, int maxvec)
int minvec, int maxvec)
{
int nvec = maxvec;
int rc;
if (maxvec < minvec)
return -ERANGE;
do {
rc = pci_enable_msix(dev, entries, nvec);
if (rc < 0) {
return rc;
} else if (rc > 0) {
if (rc < minvec)
return -ENOSPC;
nvec = rc;
}
} while (rc);
return nvec;
return __pci_enable_msix_range(dev, entries, minvec, maxvec,
PCI_IRQ_NOAFFINITY);
}
EXPORT_SYMBOL(pci_enable_msix_range);
/**
* pci_alloc_irq_vectors - allocate multiple IRQs for a device
* @dev: PCI device to operate on
* @min_vecs: minimum number of vectors required (must be >= 1)
* @max_vecs: maximum (desired) number of vectors
* @flags: flags or quirks for the allocation
*
* Allocate up to @max_vecs interrupt vectors for @dev, using MSI-X or MSI
* vectors if available, and fall back to a single legacy vector
* if neither is available. Return the number of vectors allocated
* (which might be smaller than @max_vecs) if successful, or a negative
* error code on error. If fewer than @min_vecs interrupt vectors are
* available for @dev, the function fails with -ENOSPC.
*
* To get the Linux IRQ number used for a vector that can be passed to
* request_irq() use the pci_irq_vector() helper.
*/
int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
unsigned int max_vecs, unsigned int flags)
{
int vecs = -ENOSPC;
if (!(flags & PCI_IRQ_NOMSIX)) {
vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,
flags);
if (vecs > 0)
return vecs;
}
if (!(flags & PCI_IRQ_NOMSI)) {
vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, flags);
if (vecs > 0)
return vecs;
}
/* use legacy irq if allowed */
if (!(flags & PCI_IRQ_NOLEGACY) && min_vecs == 1)
return 1;
return vecs;
}
EXPORT_SYMBOL(pci_alloc_irq_vectors);
/**
* pci_free_irq_vectors - free previously allocated IRQs for a device
* @dev: PCI device to operate on
*
* Undoes the allocations and enabling in pci_alloc_irq_vectors().
*/
void pci_free_irq_vectors(struct pci_dev *dev)
{
pci_disable_msix(dev);
pci_disable_msi(dev);
}
EXPORT_SYMBOL(pci_free_irq_vectors);
/**
* pci_irq_vector - return Linux IRQ number of a device vector
* @dev: PCI device to operate on
* @nr: device-relative interrupt vector index (0-based).
*/
int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
{
if (dev->msix_enabled) {
struct msi_desc *entry;
int i = 0;
for_each_pci_msi_entry(entry, dev) {
if (i == nr)
return entry->irq;
i++;
}
WARN_ON_ONCE(1);
return -EINVAL;
}
if (dev->msi_enabled) {
struct msi_desc *entry = first_pci_msi_entry(dev);
if (WARN_ON_ONCE(nr >= entry->nvec_used))
return -EINVAL;
} else {
if (WARN_ON_ONCE(nr > 0))
return -EINVAL;
}
return dev->irq + nr;
}
EXPORT_SYMBOL(pci_irq_vector);
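A hedged sketch of how a driver would consume the new vector helpers; foo_priv, foo_handler and the "foo" name are placeholders, not part of this series:

        static int foo_setup_irqs(struct pci_dev *pdev, struct foo_priv *foo)
        {
                int i, err, nvec;

                /* Let the PCI core pick MSI-X, MSI or a legacy IRQ (flags == 0). */
                nvec = pci_alloc_irq_vectors(pdev, 1, 4, 0);
                if (nvec < 0)
                        return nvec;

                for (i = 0; i < nvec; i++) {
                        err = request_irq(pci_irq_vector(pdev, i), foo_handler,
                                          0, "foo", foo);
                        if (err)
                                goto err_free;
                }
                return 0;

        err_free:
                while (--i >= 0)
                        free_irq(pci_irq_vector(pdev, i), foo);
                pci_free_irq_vectors(pdev);
                return err;
        }

Teardown mirrors the error path: free_irq() each vector, then pci_free_irq_vectors().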
struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc)
{
return to_pci_dev(desc->dev);

View File

@ -777,7 +777,7 @@ static int pci_pm_suspend_noirq(struct device *dev)
if (!pci_dev->state_saved) {
pci_save_state(pci_dev);
if (!pci_has_subordinate(pci_dev))
if (pci_power_manageable(pci_dev))
pci_prepare_to_sleep(pci_dev);
}
@ -1144,7 +1144,6 @@ static int pci_pm_runtime_suspend(struct device *dev)
return -ENOSYS;
pci_dev->state_saved = false;
pci_dev->no_d3cold = false;
error = pm->runtime_suspend(dev);
if (error) {
/*
@ -1161,8 +1160,6 @@ static int pci_pm_runtime_suspend(struct device *dev)
return error;
}
if (!pci_dev->d3cold_allowed)
pci_dev->no_d3cold = true;
pci_fixup_device(pci_fixup_suspend, pci_dev);

View File

@ -406,6 +406,11 @@ static ssize_t d3cold_allowed_store(struct device *dev,
return -EINVAL;
pdev->d3cold_allowed = !!val;
if (pdev->d3cold_allowed)
pci_d3cold_enable(pdev);
else
pci_d3cold_disable(pdev);
pm_runtime_resume(dev);
return count;

View File

@ -7,8 +7,10 @@
* Copyright 1997 -- 2000 Martin Mares <mj@ucw.cz>
*/
#include <linux/acpi.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_pci.h>
@ -25,7 +27,9 @@
#include <linux/device.h>
#include <linux/pm_runtime.h>
#include <linux/pci_hotplug.h>
#include <linux/vmalloc.h>
#include <asm/setup.h>
#include <asm/dma.h>
#include <linux/aer.h>
#include "pci.h"
@ -81,6 +85,9 @@ unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE;
unsigned long pci_hotplug_io_size = DEFAULT_HOTPLUG_IO_SIZE;
unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;
#define DEFAULT_HOTPLUG_BUS_SIZE 1
unsigned long pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE;
enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_DEFAULT;
/*
@ -101,6 +108,21 @@ unsigned int pcibios_max_latency = 255;
/* If set, the PCIe ARI capability will not be used. */
static bool pcie_ari_disabled;
/* Disable bridge_d3 for all PCIe ports */
static bool pci_bridge_d3_disable;
/* Force bridge_d3 for all PCIe ports */
static bool pci_bridge_d3_force;
static int __init pcie_port_pm_setup(char *str)
{
if (!strcmp(str, "off"))
pci_bridge_d3_disable = true;
else if (!strcmp(str, "force"))
pci_bridge_d3_force = true;
return 1;
}
__setup("pcie_port_pm=", pcie_port_pm_setup);
/**
* pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
* @bus: pointer to PCI bus structure to search
@ -2155,6 +2177,164 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev)
pm_runtime_put_sync(parent);
}
/**
* pci_bridge_d3_possible - Is it possible to put the bridge into D3
* @bridge: Bridge to check
*
* This function checks if it is possible to move the bridge to D3.
* Currently we only allow D3 for recent enough PCIe ports.
*/
static bool pci_bridge_d3_possible(struct pci_dev *bridge)
{
unsigned int year;
if (!pci_is_pcie(bridge))
return false;
switch (pci_pcie_type(bridge)) {
case PCI_EXP_TYPE_ROOT_PORT:
case PCI_EXP_TYPE_UPSTREAM:
case PCI_EXP_TYPE_DOWNSTREAM:
if (pci_bridge_d3_disable)
return false;
if (pci_bridge_d3_force)
return true;
/*
* It should be safe to put PCIe ports from 2015 or newer
* to D3.
*/
if (dmi_get_date(DMI_BIOS_DATE, &year, NULL, NULL) &&
year >= 2015) {
return true;
}
break;
}
return false;
}
static int pci_dev_check_d3cold(struct pci_dev *dev, void *data)
{
bool *d3cold_ok = data;
bool no_d3cold;
/*
* The device needs to be allowed to go to D3cold and, if wakeup is
* required, it must be able to signal wakeup from D3cold.
*/
no_d3cold = dev->no_d3cold || !dev->d3cold_allowed ||
(device_may_wakeup(&dev->dev) && !pci_pme_capable(dev, PCI_D3cold)) ||
!pci_power_manageable(dev);
*d3cold_ok = !no_d3cold;
return no_d3cold;
}
/*
* pci_bridge_d3_update - Update bridge D3 capabilities
* @dev: PCI device which is changed
* @remove: Is the device being removed
*
* Update upstream bridge PM capabilities depending on whether the
* device's PM configuration was changed or the device is being removed. The
* change is also propagated upstream.
*/
static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
{
struct pci_dev *bridge;
bool d3cold_ok = true;
bridge = pci_upstream_bridge(dev);
if (!bridge || !pci_bridge_d3_possible(bridge))
return;
pci_dev_get(bridge);
/*
* If the device is removed we do not care about its D3cold
* capabilities.
*/
if (!remove)
pci_dev_check_d3cold(dev, &d3cold_ok);
if (d3cold_ok) {
/*
* We need to go through all children to find out if all of
* them can still go to D3cold.
*/
pci_walk_bus(bridge->subordinate, pci_dev_check_d3cold,
&d3cold_ok);
}
if (bridge->bridge_d3 != d3cold_ok) {
bridge->bridge_d3 = d3cold_ok;
/* Propagate change to upstream bridges */
pci_bridge_d3_update(bridge, false);
}
pci_dev_put(bridge);
}
/**
* pci_bridge_d3_device_changed - Update bridge D3 capabilities on change
* @dev: PCI device that was changed
*
* If a device is added, or its PM configuration (such as whether it is
* allowed to enter D3cold) changes, this function updates the upstream
* bridge's PM capabilities accordingly.
*/
void pci_bridge_d3_device_changed(struct pci_dev *dev)
{
pci_bridge_d3_update(dev, false);
}
/**
* pci_bridge_d3_device_removed - Update bridge D3 capabilities on remove
* @dev: PCI device being removed
*
* This function updates upstream bridge PM capabilities based on the other
* devices still left on the bus.
*/
void pci_bridge_d3_device_removed(struct pci_dev *dev)
{
pci_bridge_d3_update(dev, true);
}
/**
* pci_d3cold_enable - Enable D3cold for device
* @dev: PCI device to handle
*
* This function can be used in drivers to enable D3cold from the device
* they handle. It also updates upstream PCI bridge PM capabilities
* accordingly.
*/
void pci_d3cold_enable(struct pci_dev *dev)
{
if (dev->no_d3cold) {
dev->no_d3cold = false;
pci_bridge_d3_device_changed(dev);
}
}
EXPORT_SYMBOL_GPL(pci_d3cold_enable);
/**
* pci_d3cold_disable - Disable D3cold for device
* @dev: PCI device to handle
*
* This function can be used in drivers to disable D3cold from the device
* they handle. It also updates upstream PCI bridge PM capabilities
* accordingly.
*/
void pci_d3cold_disable(struct pci_dev *dev)
{
if (!dev->no_d3cold) {
dev->no_d3cold = true;
pci_bridge_d3_device_changed(dev);
}
}
EXPORT_SYMBOL_GPL(pci_d3cold_disable);
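A minimal sketch of how a driver might use these helpers; the quirk flag is hypothetical, and the xhci change later in this series applies the same idea to a real quirk:

        if (foo->quirks & FOO_NEEDS_D3HOT_POLLING)      /* hypothetical flag */
                pci_d3cold_disable(pdev);       /* registers must stay reachable */
        else
                pci_d3cold_enable(pdev);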
/**
* pci_pm_init - Initialize PM functions of given PCI device
* @dev: PCI device to handle.
@ -2189,6 +2369,7 @@ void pci_pm_init(struct pci_dev *dev)
dev->pm_cap = pm;
dev->d3_delay = PCI_PM_D3_WAIT;
dev->d3cold_delay = PCI_PM_D3COLD_WAIT;
dev->bridge_d3 = pci_bridge_d3_possible(dev);
dev->d3cold_allowed = true;
dev->d1_support = false;
@ -3165,6 +3346,23 @@ int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
#endif
}
/**
* pci_unmap_iospace - Unmap the memory mapped I/O space
* @res: resource to be unmapped
*
* Unmap the CPU virtual address @res from virtual address space.
* Only architectures that have memory mapped IO functions defined
* (and the PCI_IOBASE value defined) should call this function.
*/
void pci_unmap_iospace(struct resource *res)
{
#if defined(PCI_IOBASE) && defined(CONFIG_MMU)
unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
unmap_kernel_range(vaddr, resource_size(res));
#endif
}
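A hedged sketch of the intended pairing in a host bridge driver: map the bridge I/O window with pci_remap_iospace() while parsing resources, and unmap it on teardown or a failed probe (res and io_base are illustrative names):

        err = pci_remap_iospace(res, io_base);  /* io_base: CPU physical address of the window */
        if (err)
                dev_warn(dev, "error %d: failed to map resource %pR\n", err, res);

        /* ... later, on teardown or a failed probe: */
        pci_unmap_iospace(res);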
static void __pci_set_master(struct pci_dev *dev, bool enable)
{
u16 old_cmd, cmd;
@ -4755,6 +4953,7 @@ static DEFINE_SPINLOCK(resource_alignment_lock);
static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev)
{
int seg, bus, slot, func, align_order, count;
unsigned short vendor, device, subsystem_vendor, subsystem_device;
resource_size_t align = 0;
char *p;
@ -4768,28 +4967,55 @@ static resource_size_t pci_specified_resource_alignment(struct pci_dev *dev)
} else {
align_order = -1;
}
if (sscanf(p, "%x:%x:%x.%x%n",
&seg, &bus, &slot, &func, &count) != 4) {
seg = 0;
if (sscanf(p, "%x:%x.%x%n",
&bus, &slot, &func, &count) != 3) {
/* Invalid format */
printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: %s\n",
p);
if (strncmp(p, "pci:", 4) == 0) {
/* PCI vendor/device (subvendor/subdevice) ids are specified */
p += 4;
if (sscanf(p, "%hx:%hx:%hx:%hx%n",
&vendor, &device, &subsystem_vendor, &subsystem_device, &count) != 4) {
if (sscanf(p, "%hx:%hx%n", &vendor, &device, &count) != 2) {
printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: pci:%s\n",
p);
break;
}
subsystem_vendor = subsystem_device = 0;
}
p += count;
if ((!vendor || (vendor == dev->vendor)) &&
(!device || (device == dev->device)) &&
(!subsystem_vendor || (subsystem_vendor == dev->subsystem_vendor)) &&
(!subsystem_device || (subsystem_device == dev->subsystem_device))) {
if (align_order == -1)
align = PAGE_SIZE;
else
align = 1 << align_order;
/* Found */
break;
}
}
p += count;
if (seg == pci_domain_nr(dev->bus) &&
bus == dev->bus->number &&
slot == PCI_SLOT(dev->devfn) &&
func == PCI_FUNC(dev->devfn)) {
if (align_order == -1)
align = PAGE_SIZE;
else
align = 1 << align_order;
/* Found */
break;
else {
if (sscanf(p, "%x:%x:%x.%x%n",
&seg, &bus, &slot, &func, &count) != 4) {
seg = 0;
if (sscanf(p, "%x:%x.%x%n",
&bus, &slot, &func, &count) != 3) {
/* Invalid format */
printk(KERN_ERR "PCI: Can't parse resource_alignment parameter: %s\n",
p);
break;
}
}
p += count;
if (seg == pci_domain_nr(dev->bus) &&
bus == dev->bus->number &&
slot == PCI_SLOT(dev->devfn) &&
func == PCI_FUNC(dev->devfn)) {
if (align_order == -1)
align = PAGE_SIZE;
else
align = 1 << align_order;
/* Found */
break;
}
}
if (*p != ';' && *p != ',') {
/* End of param or invalid format */
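With the extension above, the parameter accepts a vendor/device form alongside the existing bus-address form. A hedged example (IDs are illustrative, and the usual order@ prefix for this parameter is assumed): the following requests 2^20-byte alignment for every device matching vendor 8086, device 1533; subsystem vendor/device IDs may optionally follow as two more colon-separated fields:

        pci=resource_alignment=20@pci:8086:1533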
@ -4897,7 +5123,7 @@ static ssize_t pci_resource_alignment_store(struct bus_type *bus,
return pci_set_resource_alignment_param(buf, count);
}
BUS_ATTR(resource_alignment, 0644, pci_resource_alignment_show,
static BUS_ATTR(resource_alignment, 0644, pci_resource_alignment_show,
pci_resource_alignment_store);
static int __init pci_resource_alignment_sysfs_init(void)
@ -4923,7 +5149,7 @@ int pci_get_new_domain_nr(void)
}
#ifdef CONFIG_PCI_DOMAINS_GENERIC
void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
static int of_pci_bus_find_domain_nr(struct device *parent)
{
static int use_dt_domains = -1;
int domain = -1;
@ -4967,7 +5193,13 @@ void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
domain = -1;
}
bus->domain_nr = domain;
return domain;
}
int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
{
return acpi_disabled ? of_pci_bus_find_domain_nr(parent) :
acpi_pci_bus_find_domain_nr(bus);
}
#endif
#endif
@ -5021,6 +5253,11 @@ static int __init pci_setup(char *str)
pci_hotplug_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "hpmemsize=", 10)) {
pci_hotplug_mem_size = memparse(str + 10, &str);
} else if (!strncmp(str, "hpbussize=", 10)) {
pci_hotplug_bus_size =
simple_strtoul(str + 10, &str, 0);
if (pci_hotplug_bus_size > 0xff)
pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE;
} else if (!strncmp(str, "pcie_bus_tune_off", 17)) {
pcie_bus_config = PCIE_BUS_TUNE_OFF;
} else if (!strncmp(str, "pcie_bus_safe", 13)) {

View File

@ -82,6 +82,8 @@ void pci_pm_init(struct pci_dev *dev);
void pci_ea_init(struct pci_dev *dev);
void pci_allocate_cap_save_buffers(struct pci_dev *dev);
void pci_free_cap_save_buffers(struct pci_dev *dev);
void pci_bridge_d3_device_changed(struct pci_dev *dev);
void pci_bridge_d3_device_removed(struct pci_dev *dev);
static inline void pci_wakeup_event(struct pci_dev *dev)
{
@ -94,6 +96,15 @@ static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
return !!(pci_dev->subordinate);
}
static inline bool pci_power_manageable(struct pci_dev *pci_dev)
{
/*
* Currently, normal PCI devices can always transition into D3; PCI
* bridges may do so only if their bridge_d3 is set.
*/
return !pci_has_subordinate(pci_dev) || pci_dev->bridge_d3;
}
struct pci_vpd_ops {
ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf);
ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf);

View File

@ -83,7 +83,7 @@ config PCIE_PME
depends on PCIEPORTBUS && PM
config PCIE_DPC
tristate "PCIe Downstream Port Containment support"
bool "PCIe Downstream Port Containment support"
depends on PCIEPORTBUS
default n
help
@ -92,6 +92,3 @@ config PCIE_DPC
will be handled by the DPC driver. If your system doesn't
have this capability or you do not want to use this feature,
it is safe to answer N.
To compile this driver as a module, choose M here: the module
will be called pcie-dpc.

View File

@ -139,7 +139,7 @@ static void pcie_set_clkpm_nocheck(struct pcie_link_state *link, int enable)
static void pcie_set_clkpm(struct pcie_link_state *link, int enable)
{
/* Don't enable Clock PM if the link is not Clock PM capable */
if (!link->clkpm_capable && enable)
if (!link->clkpm_capable)
enable = 0;
/* Need nothing if the specified equals to current state */
if (link->clkpm_enabled == enable)

View File

@ -15,8 +15,8 @@
struct dpc_dev {
struct pcie_device *dev;
struct work_struct work;
int cap_pos;
struct work_struct work;
int cap_pos;
};
static void dpc_wait_link_inactive(struct pci_dev *pdev)
@ -89,7 +89,7 @@ static int dpc_probe(struct pcie_device *dev)
int status;
u16 ctl, cap;
dpc = kzalloc(sizeof(*dpc), GFP_KERNEL);
dpc = devm_kzalloc(&dev->device, sizeof(*dpc), GFP_KERNEL);
if (!dpc)
return -ENOMEM;
@ -98,11 +98,12 @@ static int dpc_probe(struct pcie_device *dev)
INIT_WORK(&dpc->work, interrupt_event_handler);
set_service_data(dev, dpc);
status = request_irq(dev->irq, dpc_irq, IRQF_SHARED, "pcie-dpc", dpc);
status = devm_request_irq(&dev->device, dev->irq, dpc_irq, IRQF_SHARED,
"pcie-dpc", dpc);
if (status) {
dev_warn(&dev->device, "request IRQ%d failed: %d\n", dev->irq,
status);
goto out;
return status;
}
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
@ -117,9 +118,6 @@ static int dpc_probe(struct pcie_device *dev)
FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), (cap >> 8) & 0xf,
FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
return status;
out:
kfree(dpc);
return status;
}
static void dpc_remove(struct pcie_device *dev)
@ -131,14 +129,11 @@ static void dpc_remove(struct pcie_device *dev)
pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
ctl &= ~(PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN);
pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
free_irq(dev->irq, dpc);
kfree(dpc);
}
static struct pcie_port_service_driver dpcdriver = {
.name = "dpc",
.port_type = PCI_EXP_TYPE_ROOT_PORT | PCI_EXP_TYPE_DOWNSTREAM,
.port_type = PCIE_ANY_PORT,
.service = PCIE_PORT_SERVICE_DPC,
.probe = dpc_probe,
.remove = dpc_remove,

View File

@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/pcieport_if.h>
@ -342,6 +343,8 @@ static int pcie_device_init(struct pci_dev *pdev, int service, int irq)
return retval;
}
pm_runtime_no_callbacks(device);
return 0;
}

View File

@ -93,6 +93,26 @@ static int pcie_port_resume_noirq(struct device *dev)
return 0;
}
static int pcie_port_runtime_suspend(struct device *dev)
{
return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
}
static int pcie_port_runtime_resume(struct device *dev)
{
return 0;
}
static int pcie_port_runtime_idle(struct device *dev)
{
/*
* Assume the PCI core has set bridge_d3 whenever it thinks the port
* should be good to go to D3. Everything else, including moving
* the port to D3, is handled by the PCI core.
*/
return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
}
static const struct dev_pm_ops pcie_portdrv_pm_ops = {
.suspend = pcie_port_device_suspend,
.resume = pcie_port_device_resume,
@ -101,6 +121,9 @@ static const struct dev_pm_ops pcie_portdrv_pm_ops = {
.poweroff = pcie_port_device_suspend,
.restore = pcie_port_device_resume,
.resume_noirq = pcie_port_resume_noirq,
.runtime_suspend = pcie_port_runtime_suspend,
.runtime_resume = pcie_port_runtime_resume,
.runtime_idle = pcie_port_runtime_idle,
};
#define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops)
@ -134,16 +157,39 @@ static int pcie_portdrv_probe(struct pci_dev *dev,
return status;
pci_save_state(dev);
/*
* D3cold may not work properly on some PCIe port, so disable
* it by default.
* Prevent runtime PM if the port is advertising support for PCIe
* hotplug. Otherwise the BIOS hotplug SMI code might not be able
* to enumerate devices behind this port properly (the port is
* powered down preventing all config space accesses to the
* subordinate devices). We can't be sure for native PCIe hotplug
* either so prevent that as well.
*/
dev->d3cold_allowed = false;
if (!dev->is_hotplug_bridge) {
/*
* Keep the port resumed 100ms to make sure things like
* config space accesses from userspace (lspci) will not
* cause the port to repeatedly suspend and resume.
*/
pm_runtime_set_autosuspend_delay(&dev->dev, 100);
pm_runtime_use_autosuspend(&dev->dev);
pm_runtime_mark_last_busy(&dev->dev);
pm_runtime_put_autosuspend(&dev->dev);
pm_runtime_allow(&dev->dev);
}
return 0;
}
static void pcie_portdrv_remove(struct pci_dev *dev)
{
if (!dev->is_hotplug_bridge) {
pm_runtime_forbid(&dev->dev);
pm_runtime_get_noresume(&dev->dev);
pm_runtime_dont_use_autosuspend(&dev->dev);
}
pcie_port_device_remove(dev);
}

View File

@ -16,6 +16,7 @@
#include <linux/aer.h>
#include <linux/acpi.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include "pci.h"
#define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */
@ -832,6 +833,12 @@ int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, int pass)
u8 primary, secondary, subordinate;
int broken = 0;
/*
* Make sure the bridge is powered on to be able to access config
* space of devices below it.
*/
pm_runtime_get_sync(&dev->dev);
pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
primary = buses & 0xFF;
secondary = (buses >> 8) & 0xFF;
@ -1012,6 +1019,8 @@ int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, int pass)
out:
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, bctl);
pm_runtime_put(&dev->dev);
return max;
}
EXPORT_SYMBOL(pci_scan_bridge);
@ -2076,6 +2085,15 @@ unsigned int pci_scan_child_bus(struct pci_bus *bus)
max = pci_scan_bridge(bus, dev, max, pass);
}
/*
* Make sure a hotplug bridge has at least the minimum requested
* number of buses.
*/
if (bus->self && bus->self->is_hotplug_bridge && pci_hotplug_bus_size) {
if (max - bus->busn_res.start < pci_hotplug_bus_size - 1)
max = bus->busn_res.start + pci_hotplug_bus_size - 1;
}
/*
* We've scanned the bus and so we know all about what's on
* the other side of any bridges that may be on this bus plus
@ -2127,7 +2145,9 @@ struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
b->sysdata = sysdata;
b->ops = ops;
b->number = b->busn_res.start = bus;
pci_bus_assign_domain_nr(b, parent);
#ifdef CONFIG_PCI_DOMAINS_GENERIC
b->domain_nr = pci_bus_find_domain_nr(b, parent);
#endif
b2 = pci_find_bus(pci_domain_nr(b), bus);
if (b2) {
/* If we already got to this bus through a different bridge, ignore it */

View File

@ -231,7 +231,7 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
{
struct pci_dev *dev = PDE_DATA(file_inode(file));
struct pci_filp_private *fpriv = file->private_data;
int i, ret;
int i, ret, write_combine;
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
@ -245,9 +245,12 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
if (i >= PCI_ROM_RESOURCE)
return -ENODEV;
if (fpriv->mmap_state == pci_mmap_mem)
write_combine = fpriv->write_combine;
else
write_combine = 0;
ret = pci_mmap_page_range(dev, vma,
fpriv->mmap_state,
fpriv->write_combine);
fpriv->mmap_state, write_combine);
if (ret < 0)
return ret;

View File

@ -3189,13 +3189,15 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
}
/*
* Atheros AR93xx chips do not behave after a bus reset. The device will
* throw a Link Down error on AER-capable systems and regardless of AER,
* config space of the device is never accessible again and typically
* causes the system to hang or reset when access is attempted.
* Some Atheros AR9xxx and QCA988x chips do not behave after a bus reset.
* The device will throw a Link Down error on AER-capable systems and
* regardless of AER, config space of the device is never accessible again
* and typically causes the system to hang or reset when access is attempted.
* http://www.spinics.net/lists/linux-pci/msg34797.html
*/
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
static void quirk_no_pm_reset(struct pci_dev *dev)
{
@ -3711,6 +3713,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9172,
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c59 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x917a,
quirk_dma_func1_alias);
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c78 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9182,
quirk_dma_func1_alias);
/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0,
quirk_dma_func1_alias);
@ -3747,6 +3752,9 @@ static const struct pci_device_id fixed_dma_alias_tbl[] = {
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x0285,
PCI_VENDOR_ID_ADAPTEC2, 0x02bb), /* Adaptec 3405 */
.driver_data = PCI_DEVFN(1, 0) },
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_ADAPTEC2, 0x0285,
PCI_VENDOR_ID_ADAPTEC2, 0x02bc), /* Adaptec 3805 */
.driver_data = PCI_DEVFN(1, 0) },
{ 0 }
};
@ -4087,6 +4095,7 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_AMD, 0x7809, pci_quirk_amd_sb_acs },
{ PCI_VENDOR_ID_SOLARFLARE, 0x0903, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_SOLARFLARE, 0x0923, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_SOLARFLARE, 0x0A03, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_INTEL, 0x10C6, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_INTEL, 0x10DB, pci_quirk_mf_endpoint_acs },
{ PCI_VENDOR_ID_INTEL, 0x10DD, pci_quirk_mf_endpoint_acs },

View File

@ -96,6 +96,8 @@ static void pci_remove_bus_device(struct pci_dev *dev)
dev->subordinate = NULL;
}
pci_bridge_d3_device_removed(dev);
pci_destroy_dev(dev);
}

View File

@ -1428,6 +1428,74 @@ void pci_bus_assign_resources(const struct pci_bus *bus)
}
EXPORT_SYMBOL(pci_bus_assign_resources);
static void pci_claim_device_resources(struct pci_dev *dev)
{
int i;
for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
struct resource *r = &dev->resource[i];
if (!r->flags || r->parent)
continue;
pci_claim_resource(dev, i);
}
}
static void pci_claim_bridge_resources(struct pci_dev *dev)
{
int i;
for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) {
struct resource *r = &dev->resource[i];
if (!r->flags || r->parent)
continue;
pci_claim_bridge_resource(dev, i);
}
}
static void pci_bus_allocate_dev_resources(struct pci_bus *b)
{
struct pci_dev *dev;
struct pci_bus *child;
list_for_each_entry(dev, &b->devices, bus_list) {
pci_claim_device_resources(dev);
child = dev->subordinate;
if (child)
pci_bus_allocate_dev_resources(child);
}
}
static void pci_bus_allocate_resources(struct pci_bus *b)
{
struct pci_bus *child;
/*
* Carry out a depth-first search on the PCI bus
* tree to allocate bridge apertures. Read the
* programmed bridge bases and recursively claim
* the respective bridge resources.
*/
if (b->self) {
pci_read_bridge_bases(b);
pci_claim_bridge_resources(b->self);
}
list_for_each_entry(child, &b->children, node)
pci_bus_allocate_resources(child);
}
void pci_bus_claim_resources(struct pci_bus *b)
{
pci_bus_allocate_resources(b);
pci_bus_allocate_dev_resources(b);
}
EXPORT_SYMBOL(pci_bus_claim_resources);
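A sketch of the intended use in a PCI_PROBE_ONLY host driver; dev, foo_pci_ops, sysdata and resources are placeholders for the driver's own device pointer, ops, private data and resource list:

        bus = pci_scan_root_bus(dev, 0, &foo_pci_ops, sysdata, &resources);
        if (!bus)
                return -ENODEV;

        if (pci_has_flag(PCI_PROBE_ONLY)) {
                /* Firmware did the assignment; just claim what it programmed. */
                pci_bus_claim_resources(bus);
        } else {
                pci_bus_size_bridges(bus);
                pci_bus_assign_resources(bus);
        }
        pci_bus_add_devices(bus);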
static void __pci_bridge_assign_resources(const struct pci_dev *bridge,
struct list_head *add_head,
struct list_head *fail_head)

View File

@ -4854,20 +4854,17 @@ static int
lpfc_enable_pci_dev(struct lpfc_hba *phba)
{
struct pci_dev *pdev;
int bars = 0;
/* Obtain PCI device reference */
if (!phba->pcidev)
goto out_error;
else
pdev = phba->pcidev;
/* Select PCI BARs */
bars = pci_select_bars(pdev, IORESOURCE_MEM);
/* Enable PCI device */
if (pci_enable_device_mem(pdev))
goto out_error;
/* Request PCI resource for the device */
if (pci_request_selected_regions(pdev, bars, LPFC_DRIVER_NAME))
if (pci_request_mem_regions(pdev, LPFC_DRIVER_NAME))
goto out_disable_device;
/* Set up device as PCI master and save state for EEH */
pci_set_master(pdev);
@ -4884,7 +4881,7 @@ out_disable_device:
pci_disable_device(pdev);
out_error:
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
"1401 Failed to enable pci device, bars:x%x\n", bars);
"1401 Failed to enable pci device\n");
return -ENODEV;
}
@ -4899,17 +4896,14 @@ static void
lpfc_disable_pci_dev(struct lpfc_hba *phba)
{
struct pci_dev *pdev;
int bars;
/* Obtain PCI device reference */
if (!phba->pcidev)
return;
else
pdev = phba->pcidev;
/* Select PCI BARs */
bars = pci_select_bars(pdev, IORESOURCE_MEM);
/* Release PCI resource and disable PCI device */
pci_release_selected_regions(pdev, bars);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
return;
@ -9811,7 +9805,6 @@ lpfc_pci_remove_one_s3(struct pci_dev *pdev)
struct lpfc_vport **vports;
struct lpfc_hba *phba = vport->phba;
int i;
int bars = pci_select_bars(pdev, IORESOURCE_MEM);
spin_lock_irq(&phba->hbalock);
vport->load_flag |= FC_UNLOADING;
@ -9886,7 +9879,7 @@ lpfc_pci_remove_one_s3(struct pci_dev *pdev)
lpfc_hba_free(phba);
pci_release_selected_regions(pdev, bars);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
}

View File

@ -387,7 +387,7 @@ static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
* need to have the registers polled during D3, so avoid D3cold.
*/
if (xhci->quirks & XHCI_COMP_MODE_QUIRK)
pdev->no_d3cold = true;
pci_d3cold_disable(pdev);
if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
xhci_pme_quirk(hcd);

View File

@ -24,6 +24,8 @@ static inline acpi_status pci_acpi_remove_pm_notifier(struct acpi_device *dev)
}
extern phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle);
extern phys_addr_t pci_mcfg_lookup(u16 domain, struct resource *bus_res);
static inline acpi_handle acpi_find_root_bridge_handle(struct pci_dev *pdev)
{
struct pci_bus *pbus = pdev->bus;

View File

@ -27,8 +27,7 @@ struct pci_config_window;
struct pci_ecam_ops {
unsigned int bus_shift;
struct pci_ops pci_ops;
int (*init)(struct device *,
struct pci_config_window *);
int (*init)(struct pci_config_window *);
};
/*
@ -45,6 +44,7 @@ struct pci_config_window {
void __iomem *win; /* 64-bit single mapping */
void __iomem **winp; /* 32-bit per-bus mapping */
};
struct device *parent;/* ECAM res was from this dev */
};
/* create and free pci_config_window */

View File

@ -101,6 +101,10 @@ enum {
DEVICE_COUNT_RESOURCE = PCI_NUM_RESOURCES,
};
/*
* pci_power_t values must match the bits in the Capabilities PME_Support
* and Control/Status PowerState fields in the Power Management capability.
*/
typedef int __bitwise pci_power_t;
#define PCI_D0 ((pci_power_t __force) 0)
@ -116,7 +120,7 @@ extern const char *pci_power_names[];
static inline const char *pci_power_name(pci_power_t state)
{
return pci_power_names[1 + (int) state];
return pci_power_names[1 + (__force int) state];
}
#define PCI_PM_D2_DELAY 200
@ -294,6 +298,7 @@ struct pci_dev {
unsigned int d2_support:1; /* Low power state D2 is supported */
unsigned int no_d1d2:1; /* D1 and D2 are forbidden */
unsigned int no_d3cold:1; /* D3cold is forbidden */
unsigned int bridge_d3:1; /* Allow D3 for bridge */
unsigned int d3cold_allowed:1; /* D3cold is allowed by user */
unsigned int mmio_always_on:1; /* disallow turning off io/mem
decoding during bar sizing */
@ -320,6 +325,7 @@ struct pci_dev {
* directly, use the values stored here. They might be different!
*/
unsigned int irq;
struct cpumask *irq_affinity;
struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */
bool match_driver; /* Skip attaching driver */
@ -1084,6 +1090,8 @@ int pci_back_from_sleep(struct pci_dev *dev);
bool pci_dev_run_wake(struct pci_dev *dev);
bool pci_check_pme_status(struct pci_dev *dev);
void pci_pme_wakeup_bus(struct pci_bus *bus);
void pci_d3cold_enable(struct pci_dev *dev);
void pci_d3cold_disable(struct pci_dev *dev);
static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
bool enable)
@ -1115,6 +1123,7 @@ int pci_set_vpd_size(struct pci_dev *dev, size_t len);
/* Helper functions for low-level code (drivers/pci/setup-[bus,res].c) */
resource_size_t pcibios_retrieve_fw_addr(struct pci_dev *dev, int idx);
void pci_bus_assign_resources(const struct pci_bus *bus);
void pci_bus_claim_resources(struct pci_bus *bus);
void pci_bus_size_bridges(struct pci_bus *bus);
int pci_claim_resource(struct pci_dev *, int);
int pci_claim_bridge_resource(struct pci_dev *bridge, int i);
@ -1144,9 +1153,12 @@ void pci_add_resource(struct list_head *resources, struct resource *res);
void pci_add_resource_offset(struct list_head *resources, struct resource *res,
resource_size_t offset);
void pci_free_resource_list(struct list_head *resources);
void pci_bus_add_resource(struct pci_bus *bus, struct resource *res, unsigned int flags);
void pci_bus_add_resource(struct pci_bus *bus, struct resource *res,
unsigned int flags);
struct resource *pci_bus_resource_n(const struct pci_bus *bus, int n);
void pci_bus_remove_resources(struct pci_bus *bus);
int devm_request_pci_bus_resources(struct device *dev,
struct list_head *resources);
#define pci_bus_for_each_resource(bus, res, i) \
for (i = 0; \
@@ -1168,6 +1180,7 @@ int pci_register_io_range(phys_addr_t addr, resource_size_t size);
unsigned long pci_address_to_pio(phys_addr_t addr);
phys_addr_t pci_pio_to_address(unsigned long pio);
int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
+void pci_unmap_iospace(struct resource *res);
static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
{
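pci_unmap_iospace() gives host bridge drivers an undo for pci_remap_iospace() on error and remove paths. A minimal, hypothetical fragment (foo_* name invented; io_res is the logical I/O resource and io_phys the CPU physical address of the window, both assumed to come from the driver's DT/ACPI parsing):

static int foo_map_io_window(struct device *dev, struct resource *io_res,
			     phys_addr_t io_phys)
{
	int err;

	/* Map the bridge's I/O window into the fixed PCI I/O space */
	err = pci_remap_iospace(io_res, io_phys);
	if (err)
		return err;

	/* Claim the logical I/O range; unwind the mapping on failure */
	err = devm_request_resource(dev, &ioport_resource, io_res);
	if (err)
		pci_unmap_iospace(io_res);

	return err;
}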
@@ -1238,6 +1251,11 @@ resource_size_t pcibios_iov_resource_alignment(struct pci_dev *dev, int resno);
int pci_set_vga_state(struct pci_dev *pdev, bool decode,
unsigned int command_bits, u32 flags);
+#define PCI_IRQ_NOLEGACY (1 << 0) /* don't use legacy interrupts */
+#define PCI_IRQ_NOMSI (1 << 1) /* don't use MSI interrupts */
+#define PCI_IRQ_NOMSIX (1 << 2) /* don't use MSI-X interrupts */
+#define PCI_IRQ_NOAFFINITY (1 << 3) /* don't auto-assign affinity */
/* kmem_cache style wrapper around pci_alloc_consistent() */
#include <linux/pci-dma.h>
@@ -1285,6 +1303,11 @@ static inline int pci_enable_msix_exact(struct pci_dev *dev,
return rc;
return 0;
}
+int pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
+			  unsigned int max_vecs, unsigned int flags);
+void pci_free_irq_vectors(struct pci_dev *dev);
+int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
#else
static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
static inline void pci_msi_shutdown(struct pci_dev *dev) { }
@@ -1308,6 +1331,24 @@ static inline int pci_enable_msix_range(struct pci_dev *dev,
static inline int pci_enable_msix_exact(struct pci_dev *dev,
struct msix_entry *entries, int nvec)
{ return -ENOSYS; }
+static inline int pci_alloc_irq_vectors(struct pci_dev *dev,
+		unsigned int min_vecs, unsigned int max_vecs,
+		unsigned int flags)
+{
+	if (min_vecs > 1)
+		return -EINVAL;
+	return 1;
+}
+static inline void pci_free_irq_vectors(struct pci_dev *dev)
+{
+}
+static inline int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
+{
+	if (WARN_ON_ONCE(nr > 0))
+		return -EINVAL;
+	return dev->irq;
+}
#endif
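The pci_alloc_irq_vectors() family above (with inline fallbacks for !CONFIG_PCI_MSI builds) replaces the usual open-coded MSI-X/MSI/legacy fallback chain in drivers. A hypothetical driver fragment showing the intended usage (foo_* names invented; the PCI_IRQ_NO* flags defined earlier can be ORed into the last argument to exclude specific interrupt types):

static irqreturn_t foo_irq_handler(int irq, void *ctx)
{
	/* hypothetical handler body */
	return IRQ_HANDLED;
}

static int foo_setup_irqs(struct pci_dev *pdev, void *ctx)
{
	int i, err, nvec;

	/* Ask for 1..8 vectors; the core falls back MSI-X -> MSI -> INTx */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8, 0);
	if (nvec < 0)
		return nvec;

	for (i = 0; i < nvec; i++) {
		/* pci_irq_vector() maps a vector index to a Linux IRQ number */
		err = request_irq(pci_irq_vector(pdev, i), foo_irq_handler,
				  0, "foo", ctx);
		if (err)
			goto err_free;
	}
	return 0;

err_free:
	while (--i >= 0)
		free_irq(pci_irq_vector(pdev, i), ctx);
	pci_free_irq_vectors(pdev);
	return err;
}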
#ifdef CONFIG_PCIEPORTBUS
@@ -1390,12 +1431,13 @@ static inline int pci_domain_nr(struct pci_bus *bus)
{
return bus->domain_nr;
}
-void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent);
+#ifdef CONFIG_ACPI
+int acpi_pci_bus_find_domain_nr(struct pci_bus *bus);
#else
-static inline void pci_bus_assign_domain_nr(struct pci_bus *bus,
-					    struct device *parent)
-{
-}
+static inline int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
+{ return 0; }
#endif
+int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent);
#endif
/* some architectures require additional setup to direct VGA traffic */
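The hunk above replaces the pci_bus_assign_domain_nr() declarations with an ACPI-aware lookup. Roughly, the generic path is expected to dispatch between the DT helper factored out by this series and the new acpi_pci_bus_find_domain_nr(); a paraphrased sketch, not the exact in-tree code (the of_pci_bus_find_domain_nr() name is assumed):

/* Sketch only, under CONFIG_PCI_DOMAINS_GENERIC */
int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
{
	return acpi_disabled ? of_pci_bus_find_domain_nr(parent) :
			       acpi_pci_bus_find_domain_nr(bus);
}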
@@ -1403,6 +1445,34 @@ typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode,
unsigned int command_bits, u32 flags);
void pci_register_set_vga_state(arch_set_vga_state_t func);
+static inline int
+pci_request_io_regions(struct pci_dev *pdev, const char *name)
+{
+	return pci_request_selected_regions(pdev,
+			    pci_select_bars(pdev, IORESOURCE_IO), name);
+}
+static inline void
+pci_release_io_regions(struct pci_dev *pdev)
+{
+	return pci_release_selected_regions(pdev,
+			    pci_select_bars(pdev, IORESOURCE_IO));
+}
+static inline int
+pci_request_mem_regions(struct pci_dev *pdev, const char *name)
+{
+	return pci_request_selected_regions(pdev,
+			    pci_select_bars(pdev, IORESOURCE_MEM), name);
+}
+static inline void
+pci_release_mem_regions(struct pci_dev *pdev)
+{
+	return pci_release_selected_regions(pdev,
+			    pci_select_bars(pdev, IORESOURCE_MEM));
+}
#else /* CONFIG_PCI is not enabled */
static inline void pci_set_flags(int flags) { }
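pci_request_mem_regions()/pci_release_mem_regions() added above wrap the pci_select_bars() + pci_request_selected_regions() pattern that endpoint drivers keep open-coding (NVMe and several network drivers are converted in this series). A hypothetical probe fragment (foo names invented, BAR 0 assumed to hold the registers):

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int err;

	err = pci_enable_device_mem(pdev);
	if (err)
		return err;

	/* Request every memory BAR in one call */
	err = pci_request_mem_regions(pdev, "foo");
	if (err)
		goto err_disable;

	regs = pci_ioremap_bar(pdev, 0);
	if (!regs) {
		err = -ENOMEM;
		goto err_release;
	}
	/* ... device setup using "regs" ... */
	return 0;

err_release:
	pci_release_mem_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return err;
}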
@@ -1555,7 +1625,11 @@ static inline const char *pci_name(const struct pci_dev *pdev)
/* Some archs don't want to expose struct resource to userland as-is
* in sysfs and /proc
*/
-#ifndef HAVE_ARCH_PCI_RESOURCE_TO_USER
+#ifdef HAVE_ARCH_PCI_RESOURCE_TO_USER
+void pci_resource_to_user(const struct pci_dev *dev, int bar,
+			  const struct resource *rsrc,
+			  resource_size_t *start, resource_size_t *end);
+#else
static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc, resource_size_t *start,
resource_size_t *end)
@@ -1707,6 +1781,7 @@ extern u8 pci_cache_line_size;
extern unsigned long pci_hotplug_io_size;
extern unsigned long pci_hotplug_mem_size;
+extern unsigned long pci_hotplug_bus_size;
/* Architecture-specific versions may override these (weak) */
void pcibios_disable_device(struct pci_dev *dev);
@@ -1723,7 +1798,7 @@ void pcibios_free_irq(struct pci_dev *dev);
extern struct dev_pm_ops pcibios_pm_ops;
#endif
-#ifdef CONFIG_PCI_MMCONFIG
+#if defined(CONFIG_PCI_MMCONFIG) || defined(CONFIG_ACPI_MCFG)
void __init pci_mmcfg_early_init(void);
void __init pci_mmcfg_late_init(void);
#else