
pci-v4.18-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAlsZdg0UHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwJOBAAsuuWsOdiJRRhQLU5WfEMFgzcL02R
 gsumqZkK7E8LOq0DPNMtcgv9O0KgYZyCiZyTMJ8N7sEYohg04lMz8mtYXOibjcwI
 p+nVMko8jQXV9FXwSMGVqigEaLLcrbtkbf/mPriD63DDnRMa/+/Jh15SwfLTydIH
 QRTJbIxkS3EiOauj5C8QY3UwzjlvV9mDilzM/x+MSK27k2HFU9Pw/3lIWHY716rr
 grPZTwBTfIT+QFZjwOm6iKzHjxRM830sofXARkcH4CgSNaTeq5UbtvAs293MHvc+
 v/v/1dfzUh00NxfZDWKHvTUMhjazeTeD9jEVS7T+HUcGzvwGxMSml6bBdznvwKCa
 46ynePOd1VcEBlMYYS+P4njRYBLWeUwt6/TzqR4yVwb0keQ6Yj3Y9H2UpzscYiCl
 O+0qz6RwyjKY0TpxfjoojgHn4U5ByI5fzVDJHbfr2MFTqqRNaabVrfl6xU4sVuhh
 OluT5ym+/dOCTI/wjlolnKNb0XThVre8e2Busr3TRvuwTMKMIWqJ9sXLovntdbqE
 furPD/UnuZHkjSFhQ1SQwYdWmsZI5qAq2C9haY8sEWsXEBEcBGLJ2BEleMxm8UsL
 KXuy4ER+R4M+sFtCkoWf3D4NTOBUdPHi4jyk6Ooo1idOwXCsASVvUjUEG5YcQC6R
 kpJ1VPTKK1XN64I=
 =aFAi
 -----END PGP SIGNATURE-----

Merge tag 'pci-v4.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

  - unify AER decoding for native and ACPI CPER sources (Alexandru
    Gagniuc)

  - add TLP header info to AER tracepoint (Thomas Tai)

  - add generic pcie_wait_for_link() interface (Oza Pawandeep); a usage
    sketch follows this list

  - handle AER ERR_FATAL by removing and re-enumerating devices, as
    Downstream Port Containment does (Oza Pawandeep)

  - factor out common code between AER and DPC recovery (Oza Pawandeep)

  - stop triggering DPC for ERR_NONFATAL errors (Oza Pawandeep)

  - share ERR_FATAL recovery path between AER and DPC (Oza Pawandeep)

  - disable ASPM L1.2 substate if we don't have LTR (Bjorn Helgaas)

  - respect platform ownership of LTR (Bjorn Helgaas)

  - clear interrupt status in top half to avoid interrupt storm (Oza
    Pawandeep)

  - neaten pci=earlydump output (Andy Shevchenko)

  - avoid errors when extended config space inaccessible (Gilles Buloz)

  - prevent sysfs disable of device while driver attached (Christoph
    Hellwig)

  - use core interface to report PCIe link properties in bnx2x, bnxt_en,
    cxgb4, ixgbe (Bjorn Helgaas)

  - remove unused pcie_get_minimum_link() (Bjorn Helgaas)

  - fix use-before-set error in ibmphp (Dan Carpenter)

  - fix pciehp timeouts caused by Command Completed errata (Bjorn
    Helgaas)

  - fix refcounting in pnv_php hotplug (Julia Lawall)

  - clear pciehp Presence Detect and Data Link Layer Status Changed on
    resume so we don't miss hotplug events (Mika Westerberg)

  - only request pciehp control if we support it, so platform can use
    ACPI hotplug otherwise (Mika Westerberg)

  - convert SHPC to be builtin only (Mika Westerberg)

  - request SHPC control via _OSC if we support it (Mika Westerberg)

  - simplify SHPC handoff from firmware (Mika Westerberg)

  - fix an SHPC quirk that mistakenly included *all* AMD bridges as well
    as devices from any vendor with device ID 0x7458 (Bjorn Helgaas)

  - assign a bus number even to non-native hotplug bridges to leave
    space for acpiphp additions, to fix a common Thunderbolt xHCI
    hot-add failure (Mika Westerberg)

  - keep acpiphp from scanning native hotplug bridges, to fix common
    Thunderbolt hot-add failures (Mika Westerberg)

  - improve "partially hidden behind bridge" messages from core (Mika
    Westerberg)

  - add macros for PCIe Link Control 2 register (Frederick Lawler)

  - replace IB/hfi1 custom macros with PCI core versions (Frederick
    Lawler)

  - remove dead microblaze and xtensa code (Bjorn Helgaas)

  - use dev_printk() when possible in xtensa and mips (Bjorn Helgaas)

  - remove unused pcie_port_acpi_setup() and portdrv_acpi.c (Bjorn
    Helgaas)

  - add managed interface to get PCI host bridge resources from OF (Jan
    Kiszka)

  - add support for unbinding generic PCI host controller (Jan Kiszka)

  - fix memory leaks when unbinding generic PCI host controller (Jan
    Kiszka)

  - request legacy VGA framebuffer only for VGA devices to avoid false
    device conflicts (Bjorn Helgaas)

  - turn on PCI_COMMAND_IO & PCI_COMMAND_MEMORY in pci_enable_device()
    like everybody else, not in pcibios_fixup_bus() (Bjorn Helgaas)

  - add generic enable function for simple SR-IOV hardware (Alexander
    Duyck); a wiring sketch follows this list

  - use generic SR-IOV enable for ena, nvme (Alexander Duyck)

  - add ACS quirk for Intel 7th & 8th Gen mobile (Alex Williamson)

  - add ACS quirk for Intel 300 series (Mika Westerberg)

  - enable register clock for Armada 7K/8K (Gregory CLEMENT)

  - reduce Keystone "link already up" log level (Fabio Estevam)

  - move private DT functions to drivers/pci/ (Rob Herring)

  - factor out dwc CONFIG_PCI Kconfig dependencies (Rob Herring)

  - add DesignWare support to the endpoint test driver (Gustavo
    Pimentel)

  - add DesignWare support for endpoint mode (Gustavo Pimentel)

  - use devm_ioremap_resource() instead of devm_ioremap() in dra7xx and
    artpec6 (Gustavo Pimentel)

  - fix Qualcomm bitwise NOT issue (Dan Carpenter)

  - add Qualcomm runtime PM support (Srinivas Kandagatla)

  - fix DesignWare enumeration below bridges (Koen Vandeputte)

  - use usleep() instead of mdelay() in endpoint test (Jia-Ju Bai)

  - add configfs entries for pci_epf_driver device IDs (Kishon Vijay
    Abraham I)

  - clean up pci_endpoint_test driver (Gustavo Pimentel)

  - update Layerscape maintainer email addresses (Minghuan Lian)

  - add COMPILE_TEST to improve build test coverage (Rob Herring)

  - fix Hyper-V bus registration failure caused by domain/serial number
    confusion (Sridhar Pitchai)

  - improve Hyper-V refcounting and coding style (Stephen Hemminger)

  - avoid potential Hyper-V hang waiting for a response that will never
    come (Dexuan Cui)

  - implement Mediatek chained IRQ handling (Honghui Zhang)

  - fix vendor ID & class type for Mediatek MT7622 (Honghui Zhang)

  - add Mobiveil PCIe host controller driver (Subrahmanya Lingappa)

  - add Mobiveil MSI support (Subrahmanya Lingappa)

  - clean up clocks, MSI, IRQ mappings in R-Car probe failure paths
    (Marek Vasut)

  - poll more frequently (5us vs 5ms) while waiting for R-Car data link
    active (Marek Vasut)

  - use generic OF parsing interface in R-Car (Vladimir Zapolskiy)

  - add R-Car V3H (R8A77980) "compatible" string (Sergei Shtylyov)

  - add R-Car gen3 PHY support (Sergei Shtylyov)

  - improve R-Car PHYRDY polling (Sergei Shtylyov)

  - clean up R-Car macros (Marek Vasut)

  - use runtime PM for R-Car controller clock (Dien Pham)

  - update arm64 defconfig for Rockchip (Shawn Lin)

  - refactor Rockchip code to facilitate both root port and endpoint
    mode (Shawn Lin)

  - add Rockchip endpoint mode driver (Shawn Lin)

  - support VMD "membar shadow" feature (Jon Derrick)

  - support VMD bus number offsets (Jon Derrick)

  - add VMD "no AER source ID" quirk for more device IDs (Jon Derrick)

  - remove unnecessary host controller CONFIG_PCIEPORTBUS Kconfig
    selections (Bjorn Helgaas)

  - clean up quirks.c organization and whitespace (Bjorn Helgaas)
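
Two illustrative sketches for the new interfaces called out above; these
are not from the series itself, and the "mydrv" names are hypothetical.

pcie_wait_for_link() is a core-internal helper (declared in
drivers/pci/pci.h) that lets reset and hotplug paths block until the
port reports the data link layer active, or a timeout, before touching
devices below the bridge:

    #include <linux/pci.h>
    #include "pci.h"        /* drivers/pci/pci.h: core-internal declaration */

    /* Minimal sketch: after a secondary bus reset or a DPC trigger,
     * wait for the link to retrain before re-enumerating below the
     * bridge.  pcie_wait_for_link() returns true once the link state
     * matches its second argument.
     */
    static int mydrv_wait_link_up(struct pci_dev *bridge)
    {
            if (!pcie_wait_for_link(bridge, true)) {
                    pci_warn(bridge, "link did not come up after reset\n");
                    return -ETIMEDOUT;
            }
            return 0;
    }

The generic SR-IOV enable lets a driver whose VFs need no driver-side
setup drop its custom sriov_configure callback and point at the core
helper, as the ena and nvme conversions in this series do:

    static struct pci_driver mydrv_driver = {
            .name            = "mydrv",
            .id_table        = mydrv_id_table,
            .probe           = mydrv_probe,
            .remove          = mydrv_remove,
            /* "echo N > sriov_numvfs" now enables N VFs via the core */
            .sriov_configure = pci_sriov_configure_simple,
    };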

* tag 'pci-v4.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (144 commits)
  PCI/AER: Replace struct pcie_device with pci_dev
  PCI/AER: Remove unused parameters
  PCI: qcom: Include gpio/consumer.h
  PCI: Improve "partially hidden behind bridge" log message
  PCI: Improve pci_scan_bridge() and pci_scan_bridge_extend() doc
  PCI: Move resource distribution for single bridge outside loop
  PCI: Account for all bridges on bus when distributing bus numbers
  ACPI / hotplug / PCI: Drop unnecessary parentheses
  ACPI / hotplug / PCI: Mark stale PCI devices disconnected
  ACPI / hotplug / PCI: Don't scan bridges managed by native hotplug
  PCI: hotplug: Add hotplug_is_native()
  PCI: shpchp: Add shpchp_is_native()
  PCI: shpchp: Fix AMD POGO identification
  PCI: mobiveil: Add MSI support
  PCI: mobiveil: Add Mobiveil PCIe Host Bridge IP driver
  PCI/AER: Decode Error Source Requester ID
  PCI/AER: Remove aer_recover_work_func() forward declaration
  PCI/DPC: Use the generic pcie_do_fatal_recovery() path
  PCI/AER: Pass service type to pcie_do_fatal_recovery()
  PCI/DPC: Disable ERR_NONFATAL handling by DPC
  ...
Linus Torvalds 2018-06-07 12:45:58 -07:00
commit 3a3869f1c4
120 changed files with 6189 additions and 3863 deletions


@@ -110,7 +110,7 @@ The actual steps taken by a platform to recover from a PCI error
event will be platform-dependent, but will follow the general
sequence described below.
STEP 0: Error Event
STEP 0: Error Event: ERR_NONFATAL
-------------------
A PCI bus error is detected by the PCI hardware. On powerpc, the slot
is isolated, in that all I/O is blocked: all reads return 0xffffffff,
@@ -228,13 +228,7 @@ proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations).
If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
proceeds to STEP 4 (Slot Reset)
STEP 3: Link Reset
------------------
The platform resets the link. This is a PCI-Express specific step
and is done whenever a fatal error has been detected that can be
"solved" by resetting the link.
STEP 4: Slot Reset
STEP 3: Slot Reset
------------------
In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
@@ -320,7 +314,7 @@ Failure).
>>> However, it probably should.
STEP 5: Resume Operations
STEP 4: Resume Operations
-------------------------
The platform will call the resume() callback on all affected device
drivers if all drivers on the segment have returned
@@ -332,7 +326,7 @@ a result code.
At this point, if a new error happens, the platform will restart
a new error recovery sequence.
STEP 6: Permanent Failure
STEP 5: Permanent Failure
-------------------------
A "permanent failure" has occurred, and the platform cannot recover
the device. The platform will call error_detected() with a
@@ -355,6 +349,27 @@ errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
for additional detail on real-life experience of the causes of
software errors.
STEP 0: Error Event: ERR_FATAL
-------------------
PCI bus error is detected by the PCI hardware. On powerpc, the slot is
isolated, in that all I/O is blocked: all reads return 0xffffffff, all
writes are ignored.
STEP 1: Remove devices
--------------------
Depending on the error agent, the platform removes the affected devices:
either all subordinates of this port, or those below the upstream
component (likely a downstream port).
STEP 2: Reset link
--------------------
The platform resets the link. This is a PCI-Express specific step and is
done whenever a fatal error has been detected that can be "solved" by
resetting the link.
STEP 3: Re-enumerate the devices
--------------------
The platform initiates re-enumeration of the removed devices.
Conclusion; General Remarks
---------------------------
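
For the driver side of the sequences above, here is a minimal sketch of
the struct pci_error_handlers callbacks the platform invokes; "mydrv" is
a hypothetical driver, not code from this patch:

    #include <linux/pci.h>

    static pci_ers_result_t mydrv_error_detected(struct pci_dev *pdev,
                                                 enum pci_channel_state state)
    {
            /* I/O to the device is now blocked; stop using it */
            if (state == pci_channel_io_perm_failure)
                    return PCI_ERS_RESULT_DISCONNECT;
            return PCI_ERS_RESULT_NEED_RESET;
    }

    static pci_ers_result_t mydrv_slot_reset(struct pci_dev *pdev)
    {
            /* Slot Reset: config space was lost; reinitialize */
            if (pci_enable_device(pdev))
                    return PCI_ERS_RESULT_DISCONNECT;
            pci_set_master(pdev);
            return PCI_ERS_RESULT_RECOVERED;
    }

    static void mydrv_resume(struct pci_dev *pdev)
    {
            /* Resume Operations: the device is functional again */
    }

    static const struct pci_error_handlers mydrv_err_handler = {
            .error_detected = mydrv_error_detected,
            .slot_reset     = mydrv_slot_reset,
            .resume         = mydrv_resume,
    };

A driver opts in by pointing .err_handler at this table in its
struct pci_driver.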


@@ -3162,6 +3162,8 @@
on: Turn realloc on
realloc same as realloc=on
noari do not use PCIe ARI.
noats [PCIE, Intel-IOMMU, AMD-IOMMU]
do not use PCIe ATS (and IOMMU device IOTLB).
pcie_scan_all Scan all possible PCIe devices. Otherwise we
only look for one device below a PCIe downstream
port.
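
For illustration, these switches are appended to the kernel command
line at boot, e.g.:

    vmlinuz ... pci=noats pcie_scan_all

How the command line is set (typically via the bootloader
configuration) is distribution-specific.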


@@ -1,7 +1,9 @@
* Synopsys DesignWare PCIe interface
Required properties:
- compatible: should contain "snps,dw-pcie" to identify the core.
- compatible:
"snps,dw-pcie" for RC mode;
"snps,dw-pcie-ep" for EP mode;
- reg: Should contain the configuration address space.
- reg-names: Must be "config" for the PCIe configuration space.
(The old way of getting the configuration address space from "ranges"
@@ -41,11 +43,11 @@ EP mode:
Example configuration:
pcie: pcie@dffff000 {
pcie: pcie@dfc00000 {
compatible = "snps,dw-pcie";
reg = <0xdffff000 0x1000>, /* Controller registers */
<0xd0000000 0x2000>; /* PCI config space */
reg-names = "ctrlreg", "config";
reg = <0xdfc00000 0x0001000>, /* IP registers */
<0xd0000000 0x0002000>; /* Configuration space */
reg-names = "dbi", "config";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
@@ -54,5 +56,15 @@ Example configuration:
interrupts = <25>, <24>;
#interrupt-cells = <1>;
num-lanes = <1>;
num-viewport = <3>;
};
or
pcie: pcie@dfc00000 {
compatible = "snps,dw-pcie-ep";
reg = <0xdfc00000 0x0001000>, /* IP registers 1 */
<0xdfc01000 0x0001000>, /* IP registers 2 */
<0xd0000000 0x2000000>; /* Configuration space */
reg-names = "dbi", "dbi2", "addr_space";
num-ib-windows = <6>;
num-ob-windows = <2>;
num-lanes = <1>;
};


@@ -0,0 +1,73 @@
* Mobiveil AXI PCIe Root Port Bridge DT description
Mobiveil's GPEX 4.0 is a PCIe Gen4 root port bridge IP. This configurable IP
has up to 8 outbound and inbound windows for the address translation.
Required properties:
- #address-cells: Address representation for root ports, set to <3>
- #size-cells: Size representation for root ports, set to <2>
- #interrupt-cells: specifies the number of cells needed to encode an
interrupt source. The value must be 1.
- compatible: Should contain "mbvl,gpex40-pcie"
- reg: Should contain PCIe registers location and length
"config_axi_slave": PCIe controller registers
"csr_axi_slave" : Bridge config registers
"gpio_slave" : GPIO registers to control slot power
"apb_csr" : MSI registers
- device_type: must be "pci"
- apio-wins : number of requested apio outbound windows
default 2 outbound windows are configured -
1. Config window
2. Memory window
- ppio-wins : number of requested ppio inbound windows
default 1 inbound memory window is configured.
- bus-range: PCI bus numbers covered
- interrupt-controller: identifies the node as an interrupt controller
- #interrupt-cells: specifies the number of cells needed to encode an
interrupt source. The value must be 1.
- interrupt-parent : phandle to the interrupt controller that
it is attached to, it should be set to gic to point to
ARM's Generic Interrupt Controller node in system DT.
- interrupts: The interrupt line of the PCIe controller
last cell of this field is set to 4 to
denote it as IRQ_TYPE_LEVEL_HIGH type interrupt.
- interrupt-map-mask,
interrupt-map: standard PCI properties to define the mapping of the
PCI interface to interrupt numbers.
- ranges: ranges for the PCI memory regions (I/O space region is not
supported by hardware)
Please refer to the standard PCI bus binding document for a more
detailed explanation
Example:
++++++++
pcie0: pcie@a0000000 {
#address-cells = <3>;
#size-cells = <2>;
compatible = "mbvl,gpex40-pcie";
reg = <0xa0000000 0x00001000>,
<0xb0000000 0x00010000>,
<0xff000000 0x00200000>,
<0xb0010000 0x00001000>;
reg-names = "config_axi_slave",
"csr_axi_slave",
"gpio_slave",
"apb_csr";
device_type = "pci";
apio-wins = <2>;
ppio-wins = <1>;
bus-range = <0x00000000 0x000000ff>;
interrupt-controller;
interrupt-parent = <&gic>;
#interrupt-cells = <1>;
interrupts = < 0 89 4 >;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0 0 0 0 &pci_express 0>,
<0 0 0 1 &pci_express 1>,
<0 0 0 2 &pci_express 2>,
<0 0 0 3 &pci_express 3>;
ranges = < 0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
};


@@ -12,7 +12,10 @@ Required properties:
- "ctrl" for the control register region
- "config" for the config space region
- interrupts: Interrupt specifier for the PCIe controler
- clocks: reference to the PCIe controller clock
- clocks: reference to the PCIe controller clocks
- clock-names: mandatory if there is a second clock, in this case the
name must be "core" for the first clock and "reg" for the second
one
Example:


@@ -8,6 +8,7 @@ compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
"renesas,pcie-r8a7793" for the R8A7793 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-r8a7796" for the R8A7796 SoC;
"renesas,pcie-r8a77980" for the R8A77980 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
RZ/G1 compatible device.
"renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
@@ -32,6 +33,11 @@ compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
and PCIe bus clocks.
- clock-names: from common clock binding: should be "pcie" and "pcie_bus".
Optional properties:
- phys: from common PHY binding: PHY phandle and specifier (only make sense
for R-Car gen3 SoCs where the PCIe PHYs have their own register blocks).
- phy-names: from common PHY binding: should be "pcie".
Example:
SoC-specific DT Entry:


@@ -0,0 +1,62 @@
* Rockchip AXI PCIe Endpoint Controller DT description
Required properties:
- compatible: Should contain "rockchip,rk3399-pcie-ep"
- reg: Two register ranges as listed in the reg-names property
- reg-names: Must include the following names
- "apb-base"
- "mem-base"
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- "aclk"
- "aclk-perf"
- "hclk"
- "pm"
- resets: Must contain seven entries for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following names
- "core"
- "mgmt"
- "mgmt-sticky"
- "pipe"
- "pm"
- "aclk"
- "pclk"
- pinctrl-names : The pin control state names
- pinctrl-0: The "default" pinctrl state
- phys: Must contain an phandle to a PHY for each entry in phy-names.
- phy-names: Must include 4 entries for all 4 lanes even if some of
them won't be used for your cases. Entries are of the form "pcie-phy-N":
where N ranges from 0 to 3.
(see example below and you MUST also refer to ../phy/rockchip-pcie-phy.txt
for changing the #phy-cells of phy node to support it)
- rockchip,max-outbound-regions: Maximum number of outbound regions
Optional Property:
- num-lanes: number of lanes to use
- max-functions: Maximum number of functions that can be configured (default 1).
pcie0-ep: pcie@f8000000 {
compatible = "rockchip,rk3399-pcie-ep";
#address-cells = <3>;
#size-cells = <2>;
rockchip,max-outbound-regions = <16>;
clocks = <&cru ACLK_PCIE>, <&cru ACLK_PERF_PCIE>,
<&cru PCLK_PCIE>, <&cru SCLK_PCIE_PM>;
clock-names = "aclk", "aclk-perf",
"hclk", "pm";
max-functions = /bits/ 8 <8>;
num-lanes = <4>;
reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0x80000000 0x0 0x20000>;
reg-names = "apb-base", "mem-base";
resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
<&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE> ,
<&cru SRST_PCIE_PM>, <&cru SRST_P_PCIE>, <&cru SRST_A_PCIE>;
reset-names = "core", "mgmt", "mgmt-sticky", "pipe",
"pm", "pclk", "aclk";
phys = <&pcie_phy 0>, <&pcie_phy 1>, <&pcie_phy 2>, <&pcie_phy 3>;
phy-names = "pcie-phy-0", "pcie-phy-1", "pcie-phy-2", "pcie-phy-3";
pinctrl-names = "default";
pinctrl-0 = <&pcie_clkreq>;
};


@@ -205,6 +205,7 @@ lwn Liebherr-Werk Nenzing GmbH
macnica Macnica Americas
marvell Marvell Technology Group Ltd.
maxim Maxim Integrated Products
mbvl Mobiveil Inc.
mcube mCube
meas Measurement Specialties
mediatek MediaTek Inc.


@@ -9484,6 +9484,13 @@ Q: http://patchwork.linuxtv.org/project/linux-media/list/
S: Maintained
F: drivers/media/dvb-frontends/mn88473*
PCI DRIVER FOR MOBIVEIL PCIE IP
M: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
L: linux-pci@vger.kernel.org
S: Supported
F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
F: drivers/pci/host/pcie-mobiveil.c
MODULE SUPPORT
M: Jessica Yu <jeyu@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
@@ -10826,9 +10833,9 @@ F: Documentation/devicetree/bindings/pci/cdns,*.txt
F: drivers/pci/cadence/pcie-cadence*
PCI DRIVER FOR FREESCALE LAYERSCAPE
M: Minghuan Lian <minghuan.Lian@freescale.com>
M: Mingkai Hu <mingkai.hu@freescale.com>
M: Roy Zang <tie-fei.zang@freescale.com>
M: Minghuan Lian <minghuan.Lian@nxp.com>
M: Mingkai Hu <mingkai.hu@nxp.com>
M: Roy Zang <roy.zang@nxp.com>
L: linuxppc-dev@lists.ozlabs.org
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org
@@ -11054,8 +11061,8 @@ M: Shawn Lin <shawn.lin@rock-chips.com>
L: linux-pci@vger.kernel.org
L: linux-rockchip@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/pci/rockchip-pcie.txt
F: drivers/pci/host/pcie-rockchip.c
F: Documentation/devicetree/bindings/pci/rockchip-pcie*
F: drivers/pci/host/pcie-rockchip*
PCI DRIVER FOR V3 SEMICONDUCTOR V360EPC
M: Linus Walleij <linus.walleij@linaro.org>


@@ -78,7 +78,8 @@ CONFIG_PCIE_ARMADA_8K=y
CONFIG_PCI_AARDVARK=y
CONFIG_PCI_TEGRA=y
CONFIG_PCIE_RCAR=y
CONFIG_PCIE_ROCKCHIP=m
CONFIG_PCIE_ROCKCHIP=y
CONFIG_PCIE_ROCKCHIP_HOST=m
CONFIG_PCI_HOST_GENERIC=y
CONFIG_PCI_XGENE=y
CONFIG_PCI_HOST_THUNDER_PEM=y


@@ -61,10 +61,6 @@ extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
#define HAVE_PCI_LEGACY 1
extern void pcibios_claim_one_bus(struct pci_bus *b);
extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
extern void pcibios_resource_survey(void);
struct file;


@@ -915,67 +915,6 @@ void __init pcibios_resource_survey(void)
pci_assign_unassigned_resources();
}
/* This is used by the PCI hotplug driver to allocate resource
* of newly plugged busses. We can try to consolidate with the
* rest of the code later, for now, keep it as-is as our main
* resource allocation function doesn't deal with sub-trees yet.
*/
void pcibios_claim_one_bus(struct pci_bus *bus)
{
struct pci_dev *dev;
struct pci_bus *child_bus;
list_for_each_entry(dev, &bus->devices, bus_list) {
int i;
for (i = 0; i < PCI_NUM_RESOURCES; i++) {
struct resource *r = &dev->resource[i];
if (r->parent || !r->start || !r->flags)
continue;
pr_debug("PCI: Claiming %s: ", pci_name(dev));
pr_debug("Resource %d: %016llx..%016llx [%x]\n",
i, (unsigned long long)r->start,
(unsigned long long)r->end,
(unsigned int)r->flags);
if (pci_claim_resource(dev, i) == 0)
continue;
pci_claim_bridge_resource(dev, i);
}
}
list_for_each_entry(child_bus, &bus->children, node)
pcibios_claim_one_bus(child_bus);
}
EXPORT_SYMBOL_GPL(pcibios_claim_one_bus);
/* pcibios_finish_adding_to_bus
*
* This is to be called by the hotplug code after devices have been
* added to a bus, this include calling it for a PHB that is just
* being added
*/
void pcibios_finish_adding_to_bus(struct pci_bus *bus)
{
pr_debug("PCI: Finishing adding to hotplug bus %04x:%02x\n",
pci_domain_nr(bus), bus->number);
/* Allocate bus and devices resources */
pcibios_allocate_bus_resources(bus);
pcibios_claim_one_bus(bus);
/* Add new devices to global lists. Register in proc, sysfs. */
pci_bus_add_devices(bus);
/* Fixup EEH */
/* eeh_add_device_tree_late(bus); */
}
EXPORT_SYMBOL_GPL(pcibios_finish_adding_to_bus);
static void pcibios_setup_phb_resources(struct pci_controller *hose,
struct list_head *resources)
{


@@ -263,9 +263,8 @@ static int pcibios_enable_resources(struct pci_dev *dev, int mask)
(!(r->flags & IORESOURCE_ROM_ENABLE)))
continue;
if (!r->start && r->end) {
printk(KERN_ERR "PCI: Device %s not available "
"because of resource collisions\n",
pci_name(dev));
pci_err(dev,
"can't enable device: resource collisions\n");
return -EINVAL;
}
if (r->flags & IORESOURCE_IO)
@@ -274,8 +273,7 @@ static int pcibios_enable_resources(struct pci_dev *dev, int mask)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != old_cmd) {
printk("PCI: Enabling device %s (%04x -> %04x)\n",
pci_name(dev), old_cmd, cmd);
pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;


@@ -60,50 +60,30 @@ void leon_pci_init(struct platform_device *ofdev, struct leon_pci_info *info)
pci_bus_add_devices(root_bus);
}
void pcibios_fixup_bus(struct pci_bus *pbus)
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
struct pci_dev *dev;
int i, has_io, has_mem;
u16 cmd;
u16 cmd, oldcmd;
int i;
list_for_each_entry(dev, &pbus->devices, bus_list) {
/*
* We can not rely on that the bootloader has enabled I/O
* or memory access to PCI devices. Instead we enable it here
* if the device has BARs of respective type.
*/
has_io = has_mem = 0;
for (i = 0; i < PCI_ROM_RESOURCE; i++) {
unsigned long f = dev->resource[i].flags;
if (f & IORESOURCE_IO)
has_io = 1;
else if (f & IORESOURCE_MEM)
has_mem = 1;
}
/* ROM BARs are mapped into 32-bit memory space */
if (dev->resource[PCI_ROM_RESOURCE].end != 0) {
dev->resource[PCI_ROM_RESOURCE].flags |=
IORESOURCE_ROM_ENABLE;
has_mem = 1;
}
pci_bus_read_config_word(pbus, dev->devfn, PCI_COMMAND, &cmd);
if (has_io && !(cmd & PCI_COMMAND_IO)) {
#ifdef CONFIG_PCI_DEBUG
printk(KERN_INFO "LEONPCI: Enabling I/O for dev %s\n",
pci_name(dev));
#endif
pci_read_config_word(dev, PCI_COMMAND, &cmd);
oldcmd = cmd;
for (i = 0; i < PCI_NUM_RESOURCES; i++) {
struct resource *res = &dev->resource[i];
/* Only set up the requested stuff */
if (!(mask & (1<<i)))
continue;
if (res->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
pci_bus_write_config_word(pbus, dev->devfn, PCI_COMMAND,
cmd);
}
if (has_mem && !(cmd & PCI_COMMAND_MEMORY)) {
#ifdef CONFIG_PCI_DEBUG
printk(KERN_INFO "LEONPCI: Enabling MEMORY for dev"
"%s\n", pci_name(dev));
#endif
if (res->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
pci_bus_write_config_word(pbus, dev->devfn, PCI_COMMAND,
cmd);
}
}
if (cmd != oldcmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}


@@ -214,8 +214,8 @@ static void pci_parse_of_addrs(struct platform_device *op,
if (!addrs)
return;
if (ofpci_verbose)
printk(" parse addresses (%d bytes) @ %p\n",
proplen, addrs);
pci_info(dev, " parse addresses (%d bytes) @ %p\n",
proplen, addrs);
op_res = &op->resource[0];
for (; proplen >= 20; proplen -= 20, addrs += 5, op_res++) {
struct resource *res;
@@ -227,8 +227,8 @@ static void pci_parse_of_addrs(struct platform_device *op,
continue;
i = addrs[0] & 0xff;
if (ofpci_verbose)
printk(" start: %llx, end: %llx, i: %x\n",
op_res->start, op_res->end, i);
pci_info(dev, " start: %llx, end: %llx, i: %x\n",
op_res->start, op_res->end, i);
if (PCI_BASE_ADDRESS_0 <= i && i <= PCI_BASE_ADDRESS_5) {
res = &dev->resource[(i - PCI_BASE_ADDRESS_0) >> 2];
@@ -236,13 +236,15 @@ static void pci_parse_of_addrs(struct platform_device *op,
res = &dev->resource[PCI_ROM_RESOURCE];
flags |= IORESOURCE_READONLY | IORESOURCE_SIZEALIGN;
} else {
printk(KERN_ERR "PCI: bad cfg reg num 0x%x\n", i);
pci_err(dev, "bad cfg reg num 0x%x\n", i);
continue;
}
res->start = op_res->start;
res->end = op_res->end;
res->flags = flags;
res->name = pci_name(dev);
pci_info(dev, "reg 0x%x: %pR\n", i, res);
}
}
@@ -289,8 +291,8 @@ static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
type = "";
if (ofpci_verbose)
printk(" create device, devfn: %x, type: %s\n",
devfn, type);
pci_info(bus," create device, devfn: %x, type: %s\n",
devfn, type);
dev->sysdata = node;
dev->dev.parent = bus->bridge;
@@ -323,10 +325,6 @@ static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
dev_set_name(&dev->dev, "%04x:%02x:%02x.%d", pci_domain_nr(bus),
dev->bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn));
if (ofpci_verbose)
printk(" class: 0x%x device name: %s\n",
dev->class, pci_name(dev));
/* I have seen IDE devices which will not respond to
* the bmdma simplex check reads if bus mastering is
* disabled.
@@ -353,10 +351,13 @@ static struct pci_dev *of_create_pci_dev(struct pci_pbm_info *pbm,
dev->irq = PCI_IRQ_NONE;
}
pci_info(dev, "[%04x:%04x] type %02x class %#08x\n",
dev->vendor, dev->device, dev->hdr_type, dev->class);
pci_parse_of_addrs(sd->op, node, dev);
if (ofpci_verbose)
printk(" adding to system ...\n");
pci_info(dev, " adding to system ...\n");
pci_device_add(dev, bus);
@@ -430,19 +431,19 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
u64 size;
if (ofpci_verbose)
printk("of_scan_pci_bridge(%s)\n", node->full_name);
pci_info(dev, "of_scan_pci_bridge(%s)\n", node->full_name);
/* parse bus-range property */
busrange = of_get_property(node, "bus-range", &len);
if (busrange == NULL || len != 8) {
printk(KERN_DEBUG "Can't get bus-range for PCI-PCI bridge %s\n",
pci_info(dev, "Can't get bus-range for PCI-PCI bridge %s\n",
node->full_name);
return;
}
if (ofpci_verbose)
printk(" Bridge bus range [%u --> %u]\n",
busrange[0], busrange[1]);
pci_info(dev, " Bridge bus range [%u --> %u]\n",
busrange[0], busrange[1]);
ranges = of_get_property(node, "ranges", &len);
simba = 0;
@@ -454,8 +455,8 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
bus = pci_add_new_bus(dev->bus, dev, busrange[0]);
if (!bus) {
printk(KERN_ERR "Failed to create pci bus for %s\n",
node->full_name);
pci_err(dev, "Failed to create pci bus for %s\n",
node->full_name);
return;
}
@@ -464,8 +465,8 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
bus->bridge_ctl = 0;
if (ofpci_verbose)
printk(" Bridge ranges[%p] simba[%d]\n",
ranges, simba);
pci_info(dev, " Bridge ranges[%p] simba[%d]\n",
ranges, simba);
/* parse ranges property, or cook one up by hand for Simba */
/* PCI #address-cells == 3 and #size-cells == 2 always */
@@ -487,10 +488,10 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
u64 start;
if (ofpci_verbose)
printk(" RAW Range[%08x:%08x:%08x:%08x:%08x:%08x:"
"%08x:%08x]\n",
ranges[0], ranges[1], ranges[2], ranges[3],
ranges[4], ranges[5], ranges[6], ranges[7]);
pci_info(dev, " RAW Range[%08x:%08x:%08x:%08x:%08x:%08x:"
"%08x:%08x]\n",
ranges[0], ranges[1], ranges[2], ranges[3],
ranges[4], ranges[5], ranges[6], ranges[7]);
flags = pci_parse_of_flags(ranges[0]);
size = GET_64BIT(ranges, 6);
@@ -510,14 +511,14 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
if (flags & IORESOURCE_IO) {
res = bus->resource[0];
if (res->flags) {
printk(KERN_ERR "PCI: ignoring extra I/O range"
" for bridge %s\n", node->full_name);
pci_err(dev, "ignoring extra I/O range"
" for bridge %s\n", node->full_name);
continue;
}
} else {
if (i >= PCI_NUM_RESOURCES - PCI_BRIDGE_RESOURCES) {
printk(KERN_ERR "PCI: too many memory ranges"
" for bridge %s\n", node->full_name);
pci_err(dev, "too many memory ranges"
" for bridge %s\n", node->full_name);
continue;
}
res = bus->resource[i];
@@ -529,8 +530,8 @@ static void of_scan_pci_bridge(struct pci_pbm_info *pbm,
region.end = region.start + size - 1;
if (ofpci_verbose)
printk(" Using flags[%08x] start[%016llx] size[%016llx]\n",
flags, start, size);
pci_info(dev, " Using flags[%08x] start[%016llx] size[%016llx]\n",
flags, start, size);
pcibios_bus_to_resource(dev->bus, res, &region);
}
@@ -538,7 +539,7 @@ after_ranges:
sprintf(bus->name, "PCI Bus %04x:%02x", pci_domain_nr(bus),
bus->number);
if (ofpci_verbose)
printk(" bus name: %s\n", bus->name);
pci_info(dev, " bus name: %s\n", bus->name);
pci_of_scan_bus(pbm, node, bus);
}
@@ -553,14 +554,14 @@ static void pci_of_scan_bus(struct pci_pbm_info *pbm,
struct pci_dev *dev;
if (ofpci_verbose)
printk("PCI: scan_bus[%s] bus no %d\n",
node->full_name, bus->number);
pci_info(bus, "scan_bus[%s] bus no %d\n",
node->full_name, bus->number);
child = NULL;
prev_devfn = -1;
while ((child = of_get_next_child(node, child)) != NULL) {
if (ofpci_verbose)
printk(" * %s\n", child->full_name);
pci_info(bus, " * %s\n", child->full_name);
reg = of_get_property(child, "reg", &reglen);
if (reg == NULL || reglen < 20)
continue;
@@ -581,8 +582,7 @@ static void pci_of_scan_bus(struct pci_pbm_info *pbm,
if (!dev)
continue;
if (ofpci_verbose)
printk("PCI: dev header type: %x\n",
dev->hdr_type);
pci_info(dev, "dev header type: %x\n", dev->hdr_type);
if (pci_is_bridge(dev))
of_scan_pci_bridge(pbm, child, dev);
@@ -624,6 +624,45 @@ static void pci_bus_register_of_sysfs(struct pci_bus *bus)
pci_bus_register_of_sysfs(child_bus);
}
static void pci_claim_legacy_resources(struct pci_dev *dev)
{
struct pci_bus_region region;
struct resource *p, *root, *conflict;
if ((dev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
return;
p = kzalloc(sizeof(*p), GFP_KERNEL);
if (!p)
return;
p->name = "Video RAM area";
p->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
region.start = 0xa0000UL;
region.end = region.start + 0x1ffffUL;
pcibios_bus_to_resource(dev->bus, p, &region);
root = pci_find_parent_resource(dev, p);
if (!root) {
pci_info(dev, "can't claim VGA legacy %pR: no compatible bridge window\n", p);
goto err;
}
conflict = request_resource_conflict(root, p);
if (conflict) {
pci_info(dev, "can't claim VGA legacy %pR: address conflict with %s %pR\n",
p, conflict->name, conflict);
goto err;
}
pci_info(dev, "VGA legacy framebuffer %pR\n", p);
return;
err:
kfree(p);
}
static void pci_claim_bus_resources(struct pci_bus *bus)
{
struct pci_bus *child_bus;
@@ -639,15 +678,13 @@ static void pci_claim_bus_resources(struct pci_bus *bus)
continue;
if (ofpci_verbose)
printk("PCI: Claiming %s: "
"Resource %d: %016llx..%016llx [%x]\n",
pci_name(dev), i,
(unsigned long long)r->start,
(unsigned long long)r->end,
(unsigned int)r->flags);
pci_info(dev, "Claiming Resource %d: %pR\n",
i, r);
pci_claim_resource(dev, i);
}
pci_claim_legacy_resources(dev);
}
list_for_each_entry(child_bus, &bus->children, node)
@@ -687,6 +724,7 @@ struct pci_bus *pci_scan_one_pbm(struct pci_pbm_info *pbm,
pci_bus_register_of_sysfs(bus);
pci_claim_bus_resources(bus);
pci_bus_add_devices(bus);
return bus;
}
@@ -713,9 +751,7 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
}
if (cmd != oldcmd) {
printk(KERN_DEBUG "PCI: Enabling device: (%s), cmd %x\n",
pci_name(dev), cmd);
/* Enable the appropriate bits in the PCI command register. */
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
@@ -1075,8 +1111,8 @@ static void pci_bus_slot_names(struct device_node *node, struct pci_bus *bus)
sp = prop->names;
if (ofpci_verbose)
printk("PCI: Making slots for [%s] mask[0x%02x]\n",
node->full_name, mask);
pci_info(bus, "Making slots for [%s] mask[0x%02x]\n",
node->full_name, mask);
i = 0;
while (mask) {
@@ -1089,12 +1125,12 @@ static void pci_bus_slot_names(struct device_node *node, struct pci_bus *bus)
}
if (ofpci_verbose)
printk("PCI: Making slot [%s]\n", sp);
pci_info(bus, "Making slot [%s]\n", sp);
pci_slot = pci_create_slot(bus, i, sp, NULL);
if (IS_ERR(pci_slot))
printk(KERN_ERR "PCI: pci_create_slot returned %ld\n",
PTR_ERR(pci_slot));
pci_err(bus, "pci_create_slot returned %ld\n",
PTR_ERR(pci_slot));
sp += strlen(sp) + 1;
mask &= ~this_bit;


@@ -329,23 +329,6 @@ void pci_get_pbm_props(struct pci_pbm_info *pbm)
}
}
static void pci_register_legacy_regions(struct resource *io_res,
struct resource *mem_res)
{
struct resource *p;
/* VGA Video RAM. */
p = kzalloc(sizeof(*p), GFP_KERNEL);
if (!p)
return;
p->name = "Video RAM area";
p->start = mem_res->start + 0xa0000UL;
p->end = p->start + 0x1ffffUL;
p->flags = IORESOURCE_BUSY;
request_resource(mem_res, p);
}
static void pci_register_iommu_region(struct pci_pbm_info *pbm)
{
const u32 *vdma = of_get_property(pbm->op->dev.of_node, "virtual-dma",
@@ -487,8 +470,6 @@ void pci_determine_mem_io_space(struct pci_pbm_info *pbm)
if (pbm->mem64_space.flags)
request_resource(&iomem_resource, &pbm->mem64_space);
pci_register_legacy_regions(&pbm->io_space,
&pbm->mem_space);
pci_register_iommu_region(pbm);
}
@@ -508,8 +489,8 @@ void pci_scan_for_target_abort(struct pci_pbm_info *pbm,
PCI_STATUS_REC_TARGET_ABORT));
if (error_bits) {
pci_write_config_word(pdev, PCI_STATUS, error_bits);
printk("%s: Device %s saw Target Abort [%016x]\n",
pbm->name, pci_name(pdev), status);
pci_info(pdev, "%s: Device saw Target Abort [%016x]\n",
pbm->name, status);
}
}
@@ -531,8 +512,8 @@ void pci_scan_for_master_abort(struct pci_pbm_info *pbm,
(status & (PCI_STATUS_REC_MASTER_ABORT));
if (error_bits) {
pci_write_config_word(pdev, PCI_STATUS, error_bits);
printk("%s: Device %s received Master Abort [%016x]\n",
pbm->name, pci_name(pdev), status);
pci_info(pdev, "%s: Device received Master Abort "
"[%016x]\n", pbm->name, status);
}
}
@@ -555,8 +536,8 @@ void pci_scan_for_parity_error(struct pci_pbm_info *pbm,
PCI_STATUS_DETECTED_PARITY));
if (error_bits) {
pci_write_config_word(pdev, PCI_STATUS, error_bits);
printk("%s: Device %s saw Parity Error [%016x]\n",
pbm->name, pci_name(pdev), status);
pci_info(pdev, "%s: Device saw Parity Error [%016x]\n",
pbm->name, status);
}
}


@@ -191,8 +191,8 @@ static void sparc64_teardown_msi_irq(unsigned int irq,
break;
}
if (i >= pbm->msi_num) {
printk(KERN_ERR "%s: teardown: No MSI for irq %u\n",
pbm->name, irq);
pci_err(pdev, "%s: teardown: No MSI for irq %u\n", pbm->name,
irq);
return;
}
@@ -201,9 +201,9 @@ static void sparc64_teardown_msi_irq(unsigned int irq,
err = ops->msi_teardown(pbm, msi_num);
if (err) {
printk(KERN_ERR "%s: teardown: ops->teardown() on MSI %u, "
"irq %u, gives error %d\n",
pbm->name, msi_num, irq, err);
pci_err(pdev, "%s: teardown: ops->teardown() on MSI %u, "
"irq %u, gives error %d\n", pbm->name, msi_num, irq,
err);
return;
}


@@ -518,10 +518,10 @@ static void pcic_map_pci_device(struct linux_pcic *pcic,
* board in a PCI slot. We must remap it
* under 64K but it is not done yet. XXX
*/
printk("PCIC: Skipping I/O space at 0x%lx, "
"this will Oops if a driver attaches "
"device '%s' at %02x:%02x)\n", address,
namebuf, dev->bus->number, dev->devfn);
pci_info(dev, "PCIC: Skipping I/O space at "
"0x%lx, this will Oops if a driver "
"attaches device '%s'\n", address,
namebuf);
}
}
}
@@ -551,8 +551,8 @@ pcic_fill_irq(struct linux_pcic *pcic, struct pci_dev *dev, int node)
p++;
}
if (i >= pcic->pcic_imdim) {
printk("PCIC: device %s devfn %02x:%02x not found in %d\n",
namebuf, dev->bus->number, dev->devfn, pcic->pcic_imdim);
pci_info(dev, "PCIC: device %s not found in %d\n", namebuf,
pcic->pcic_imdim);
dev->irq = 0;
return;
}
@@ -565,7 +565,7 @@ pcic_fill_irq(struct linux_pcic *pcic, struct pci_dev *dev, int node)
ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_HI);
real_irq = ivec >> ((i-4) << 2) & 0xF;
} else { /* Corrupted map */
printk("PCIC: BAD PIN %d\n", i); for (;;) {}
pci_info(dev, "PCIC: BAD PIN %d\n", i); for (;;) {}
}
/* P3 */ /* printk("PCIC: device %s pin %d ivec 0x%x irq %x\n", namebuf, i, ivec, dev->irq); */
@@ -574,10 +574,10 @@ pcic_fill_irq(struct linux_pcic *pcic, struct pci_dev *dev, int node)
*/
if (real_irq == 0 || p->force) {
if (p->irq == 0 || p->irq >= 15) { /* Corrupted map */
printk("PCIC: BAD IRQ %d\n", p->irq); for (;;) {}
pci_info(dev, "PCIC: BAD IRQ %d\n", p->irq); for (;;) {}
}
printk("PCIC: setting irq %d at pin %d for device %02x:%02x\n",
p->irq, p->pin, dev->bus->number, dev->devfn);
pci_info(dev, "PCIC: setting irq %d at pin %d\n", p->irq,
p->pin);
real_irq = p->irq;
i = p->pin;
@@ -602,15 +602,13 @@ pcic_fill_irq(struct linux_pcic *pcic, struct pci_dev *dev, int node)
void pcibios_fixup_bus(struct pci_bus *bus)
{
struct pci_dev *dev;
int i, has_io, has_mem;
unsigned int cmd = 0;
struct linux_pcic *pcic;
/* struct linux_pbm_info* pbm = &pcic->pbm; */
int node;
struct pcidev_cookie *pcp;
if (!pcic0_up) {
printk("pcibios_fixup_bus: no PCIC\n");
pci_info(bus, "pcibios_fixup_bus: no PCIC\n");
return;
}
pcic = &pcic0;
@@ -619,44 +617,12 @@ void pcibios_fixup_bus(struct pci_bus *bus)
* Next crud is an equivalent of pbm = pcic_bus_to_pbm(bus);
*/
if (bus->number != 0) {
printk("pcibios_fixup_bus: nonzero bus 0x%x\n", bus->number);
pci_info(bus, "pcibios_fixup_bus: nonzero bus 0x%x\n",
bus->number);
return;
}
list_for_each_entry(dev, &bus->devices, bus_list) {
/*
* Comment from i386 branch:
* There are buggy BIOSes that forget to enable I/O and memory
* access to PCI devices. We try to fix this, but we need to
* be sure that the BIOS didn't forget to assign an address
* to the device. [mj]
* OBP is a case of such BIOS :-)
*/
has_io = has_mem = 0;
for(i=0; i<6; i++) {
unsigned long f = dev->resource[i].flags;
if (f & IORESOURCE_IO) {
has_io = 1;
} else if (f & IORESOURCE_MEM)
has_mem = 1;
}
pcic_read_config(dev->bus, dev->devfn, PCI_COMMAND, 2, &cmd);
if (has_io && !(cmd & PCI_COMMAND_IO)) {
printk("PCIC: Enabling I/O for device %02x:%02x\n",
dev->bus->number, dev->devfn);
cmd |= PCI_COMMAND_IO;
pcic_write_config(dev->bus, dev->devfn,
PCI_COMMAND, 2, cmd);
}
if (has_mem && !(cmd & PCI_COMMAND_MEMORY)) {
printk("PCIC: Enabling memory for device %02x:%02x\n",
dev->bus->number, dev->devfn);
cmd |= PCI_COMMAND_MEMORY;
pcic_write_config(dev->bus, dev->devfn,
PCI_COMMAND, 2, cmd);
}
node = pdev_to_pnode(&pcic->pbm, dev);
if(node == 0)
node = -1;
@@ -675,6 +641,34 @@ void pcibios_fixup_bus(struct pci_bus *bus)
}
}
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
u16 cmd, oldcmd;
int i;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
oldcmd = cmd;
for (i = 0; i < PCI_NUM_RESOURCES; i++) {
struct resource *res = &dev->resource[i];
/* Only set up the requested stuff */
if (!(mask & (1<<i)))
continue;
if (res->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (res->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != oldcmd) {
pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
/* Makes compiler happy */
static volatile int pcic_timer_dummy;
@@ -747,17 +741,11 @@ static void watchdog_reset() {
}
#endif
int pcibios_enable_device(struct pci_dev *pdev, int mask)
{
return 0;
}
/*
* NMI
*/
void pcic_nmi(unsigned int pend, struct pt_regs *regs)
{
pend = swab32(pend);
if (!pcic_speculative || (pend & PCI_SYS_INT_PENDING_PIO) == 0) {


@@ -59,24 +59,15 @@ int early_pci_allowed(void)
void early_dump_pci_device(u8 bus, u8 slot, u8 func)
{
u32 value[256 / 4];
int i;
int j;
u32 val;
printk(KERN_INFO "pci 0000:%02x:%02x.%d config space:",
bus, slot, func);
pr_info("pci 0000:%02x:%02x.%d config space:\n", bus, slot, func);
for (i = 0; i < 256; i += 4) {
if (!(i & 0x0f))
printk("\n %02x:",i);
for (i = 0; i < 256; i += 4)
value[i / 4] = read_pci_config(bus, slot, func, i);
val = read_pci_config(bus, slot, func, i);
for (j = 0; j < 4; j++) {
printk(" %02x", val & 0xff);
val >>= 8;
}
}
printk("\n");
print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, value, 256, false);
}
void early_dump_pci_devices(void)


@@ -636,6 +636,10 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid);
#ifdef CONFIG_PHYS_ADDR_T_64BIT


@@ -20,8 +20,6 @@
#define pcibios_assign_all_busses() 0
extern struct pci_controller* pcibios_alloc_controller(void);
/* Assume some values. (We should revise them, if necessary) */
#define PCIBIOS_MIN_IO 0x2000


@@ -41,8 +41,8 @@
* pci_bus_add_device
*/
struct pci_controller* pci_ctrl_head;
struct pci_controller** pci_ctrl_tail = &pci_ctrl_head;
static struct pci_controller *pci_ctrl_head;
static struct pci_controller **pci_ctrl_tail = &pci_ctrl_head;
static int pci_bus_count;
@@ -80,50 +80,6 @@ pcibios_align_resource(void *data, const struct resource *res,
return start;
}
int
pcibios_enable_resources(struct pci_dev *dev, int mask)
{
u16 cmd, old_cmd;
int idx;
struct resource *r;
pci_read_config_word(dev, PCI_COMMAND, &cmd);
old_cmd = cmd;
for(idx=0; idx<6; idx++) {
r = &dev->resource[idx];
if (!r->start && r->end) {
pr_err("PCI: Device %s not available because "
"of resource collisions\n", pci_name(dev));
return -EINVAL;
}
if (r->flags & IORESOURCE_IO)
cmd |= PCI_COMMAND_IO;
if (r->flags & IORESOURCE_MEM)
cmd |= PCI_COMMAND_MEMORY;
}
if (dev->resource[PCI_ROM_RESOURCE].start)
cmd |= PCI_COMMAND_MEMORY;
if (cmd != old_cmd) {
pr_info("PCI: Enabling device %s (%04x -> %04x)\n",
pci_name(dev), old_cmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
struct pci_controller * __init pcibios_alloc_controller(void)
{
struct pci_controller *pci_ctrl;
pci_ctrl = (struct pci_controller *)alloc_bootmem(sizeof(*pci_ctrl));
memset(pci_ctrl, 0, sizeof(struct pci_controller));
*pci_ctrl_tail = pci_ctrl;
pci_ctrl_tail = &pci_ctrl->next;
return pci_ctrl;
}
static void __init pci_controller_apertures(struct pci_controller *pci_ctrl,
struct list_head *resources)
{
@@ -223,8 +179,7 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
for (idx=0; idx<6; idx++) {
r = &dev->resource[idx];
if (!r->start && r->end) {
pr_err("PCI: Device %s not available because "
"of resource collisions\n", pci_name(dev));
pci_err(dev, "can't enable device: resource collisions\n");
return -EINVAL;
}
if (r->flags & IORESOURCE_IO)
@@ -233,29 +188,13 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
cmd |= PCI_COMMAND_MEMORY;
}
if (cmd != old_cmd) {
pr_info("PCI: Enabling device %s (%04x -> %04x)\n",
pci_name(dev), old_cmd, cmd);
pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd);
pci_write_config_word(dev, PCI_COMMAND, cmd);
}
return 0;
}
#ifdef CONFIG_PROC_FS
/*
* Return the index of the PCI controller for device pdev.
*/
int
pci_controller_num(struct pci_dev *dev)
{
struct pci_controller *pci_ctrl = (struct pci_controller*) dev->sysdata;
return pci_ctrl->index;
}
#endif /* CONFIG_PROC_FS */
/*
* Platform support for /proc/bus/pci/X/Y mmap()s.
* -- paulus.


@@ -153,6 +153,7 @@ static struct pci_osc_bit_struct pci_osc_control_bit[] = {
{ OSC_PCI_EXPRESS_PME_CONTROL, "PME" },
{ OSC_PCI_EXPRESS_AER_CONTROL, "AER" },
{ OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" },
{ OSC_PCI_EXPRESS_LTR_CONTROL, "LTR" },
};
static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word,
@@ -472,9 +473,17 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm)
}
control = OSC_PCI_EXPRESS_CAPABILITY_CONTROL
| OSC_PCI_EXPRESS_NATIVE_HP_CONTROL
| OSC_PCI_EXPRESS_PME_CONTROL;
if (IS_ENABLED(CONFIG_PCIEASPM))
control |= OSC_PCI_EXPRESS_LTR_CONTROL;
if (IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
control |= OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
if (IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC))
control |= OSC_PCI_SHPC_NATIVE_HP_CONTROL;
if (pci_aer_available()) {
if (aer_acpi_firmware_first())
dev_info(&device->dev,
@@ -900,11 +909,15 @@ struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
host_bridge = to_pci_host_bridge(bus->bridge);
if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
host_bridge->native_hotplug = 0;
host_bridge->native_pcie_hotplug = 0;
if (!(root->osc_control_set & OSC_PCI_SHPC_NATIVE_HP_CONTROL))
host_bridge->native_shpc_hotplug = 0;
if (!(root->osc_control_set & OSC_PCI_EXPRESS_AER_CONTROL))
host_bridge->native_aer = 0;
if (!(root->osc_control_set & OSC_PCI_EXPRESS_PME_CONTROL))
host_bridge->native_pme = 0;
if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL))
host_bridge->native_ltr = 0;
pci_scan_child_bus(bus);
pci_set_host_bridge_release(host_bridge, acpi_pci_root_release_info,


@@ -56,11 +56,6 @@
#include "chip_registers.h"
#include "aspm.h"
/* link speed vector for Gen3 speed - not in Linux headers */
#define GEN1_SPEED_VECTOR 0x1
#define GEN2_SPEED_VECTOR 0x2
#define GEN3_SPEED_VECTOR 0x3
/*
* This file contains PCIe utility routines.
*/
@@ -262,7 +257,7 @@ static u32 extract_speed(u16 linkstat)
case PCI_EXP_LNKSTA_CLS_5_0GB:
speed = 5000; /* Gen 2, 5GHz */
break;
case GEN3_SPEED_VECTOR:
case PCI_EXP_LNKSTA_CLS_8_0GB:
speed = 8000; /* Gen 3, 8GHz */
break;
}
@@ -317,7 +312,7 @@ int pcie_speeds(struct hfi1_devdata *dd)
return ret;
}
if ((linkcap & PCI_EXP_LNKCAP_SLS) != GEN3_SPEED_VECTOR) {
if ((linkcap & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_8_0GB) {
dd_dev_info(dd,
"This HFI is not Gen3 capable, max speed 0x%x, need 0x3\n",
linkcap & PCI_EXP_LNKCAP_SLS);
@@ -694,9 +689,6 @@ const struct pci_error_handlers hfi1_pci_err_handler = {
/* gasket block secondary bus reset delay */
#define SBR_DELAY_US 200000 /* 200ms */
/* mask for PCIe capability register lnkctl2 target link speed */
#define LNKCTL2_TARGET_LINK_SPEED_MASK 0xf
static uint pcie_target = 3;
module_param(pcie_target, uint, S_IRUGO);
MODULE_PARM_DESC(pcie_target, "PCIe target speed (0 skip, 1-3 Gen1-3)");
@@ -1045,13 +1037,13 @@ int do_pcie_gen3_transition(struct hfi1_devdata *dd)
return 0;
if (pcie_target == 1) { /* target Gen1 */
target_vector = GEN1_SPEED_VECTOR;
target_vector = PCI_EXP_LNKCTL2_TLS_2_5GT;
target_speed = 2500;
} else if (pcie_target == 2) { /* target Gen2 */
target_vector = GEN2_SPEED_VECTOR;
target_vector = PCI_EXP_LNKCTL2_TLS_5_0GT;
target_speed = 5000;
} else if (pcie_target == 3) { /* target Gen3 */
target_vector = GEN3_SPEED_VECTOR;
target_vector = PCI_EXP_LNKCTL2_TLS_8_0GT;
target_speed = 8000;
} else {
/* off or invalid target - skip */
@@ -1290,8 +1282,8 @@ retry:
dd_dev_info(dd, "%s: ..old link control2: 0x%x\n", __func__,
(u32)lnkctl2);
/* only write to parent if target is not as high as ours */
if ((lnkctl2 & LNKCTL2_TARGET_LINK_SPEED_MASK) < target_vector) {
lnkctl2 &= ~LNKCTL2_TARGET_LINK_SPEED_MASK;
if ((lnkctl2 & PCI_EXP_LNKCTL2_TLS) < target_vector) {
lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
lnkctl2 |= target_vector;
dd_dev_info(dd, "%s: ..new link control2: 0x%x\n", __func__,
(u32)lnkctl2);
@@ -1316,7 +1308,7 @@ retry:
dd_dev_info(dd, "%s: ..old link control2: 0x%x\n", __func__,
(u32)lnkctl2);
lnkctl2 &= ~LNKCTL2_TARGET_LINK_SPEED_MASK;
lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS;
lnkctl2 |= target_vector;
dd_dev_info(dd, "%s: ..new link control2: 0x%x\n", __func__,
(u32)lnkctl2);


@@ -354,6 +354,9 @@ static bool pci_iommuv2_capable(struct pci_dev *pdev)
};
int i, pos;
if (pci_ats_disabled())
return false;
for (i = 0; i < 3; ++i) {
pos = pci_find_ext_capability(pdev, caps[i]);
if (pos == 0)
@@ -3523,9 +3526,11 @@ int amd_iommu_device_info(struct pci_dev *pdev,
memset(info, 0, sizeof(*info));
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS);
if (pos)
info->flags |= AMD_IOMMU_DEVICE_FLAG_ATS_SUP;
if (!pci_ats_disabled()) {
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS);
if (pos)
info->flags |= AMD_IOMMU_DEVICE_FLAG_ATS_SUP;
}
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
if (pos)


@@ -2459,7 +2459,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
if (dev && dev_is_pci(dev)) {
struct pci_dev *pdev = to_pci_dev(info->dev);
if (ecap_dev_iotlb_support(iommu->ecap) &&
if (!pci_ats_disabled() &&
ecap_dev_iotlb_support(iommu->ecap) &&
pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) &&
dmar_find_matched_atsr_unit(pdev))
info->ats_supported = 1;


@@ -203,7 +203,7 @@ static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
if (!val)
return false;
if (test->last_irq - pdev->irq == msi_num - 1)
if (pci_irq_vector(pdev, msi_num - 1) == test->last_irq)
return true;
return false;
@@ -233,7 +233,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
orig_src_addr = dma_alloc_coherent(dev, size + alignment,
&orig_src_phys_addr, GFP_KERNEL);
if (!orig_src_addr) {
dev_err(dev, "failed to allocate source buffer\n");
dev_err(dev, "Failed to allocate source buffer\n");
ret = false;
goto err;
}
@@ -259,7 +259,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
orig_dst_addr = dma_alloc_coherent(dev, size + alignment,
&orig_dst_phys_addr, GFP_KERNEL);
if (!orig_dst_addr) {
dev_err(dev, "failed to allocate destination address\n");
dev_err(dev, "Failed to allocate destination address\n");
ret = false;
goto err_orig_src_addr;
}
@@ -321,7 +321,7 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
GFP_KERNEL);
if (!orig_addr) {
dev_err(dev, "failed to allocate address\n");
dev_err(dev, "Failed to allocate address\n");
ret = false;
goto err;
}
@@ -382,7 +382,7 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)
orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr,
GFP_KERNEL);
if (!orig_addr) {
dev_err(dev, "failed to allocate destination address\n");
dev_err(dev, "Failed to allocate destination address\n");
ret = false;
goto err;
}
@@ -513,31 +513,31 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
if (!no_msi) {
irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
if (irq < 0)
dev_err(dev, "failed to get MSI interrupts\n");
dev_err(dev, "Failed to get MSI interrupts\n");
test->num_irqs = irq;
}
err = devm_request_irq(dev, pdev->irq, pci_endpoint_test_irqhandler,
IRQF_SHARED, DRV_MODULE_NAME, test);
if (err) {
dev_err(dev, "failed to request IRQ %d\n", pdev->irq);
dev_err(dev, "Failed to request IRQ %d\n", pdev->irq);
goto err_disable_msi;
}
for (i = 1; i < irq; i++) {
err = devm_request_irq(dev, pdev->irq + i,
err = devm_request_irq(dev, pci_irq_vector(pdev, i),
pci_endpoint_test_irqhandler,
IRQF_SHARED, DRV_MODULE_NAME, test);
if (err)
dev_err(dev, "failed to request IRQ %d for MSI %d\n",
pdev->irq + i, i + 1);
pci_irq_vector(pdev, i), i + 1);
}
for (bar = BAR_0; bar <= BAR_5; bar++) {
if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
base = pci_ioremap_bar(pdev, bar);
if (!base) {
dev_err(dev, "failed to read BAR%d\n", bar);
dev_err(dev, "Failed to read BAR%d\n", bar);
WARN_ON(bar == test_reg_bar);
}
test->bar[bar] = base;
@@ -557,7 +557,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
id = ida_simple_get(&pci_endpoint_test_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
err = id;
dev_err(dev, "unable to get id\n");
dev_err(dev, "Unable to get id\n");
goto err_iounmap;
}
@@ -573,7 +573,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
err = misc_register(misc_device);
if (err) {
dev_err(dev, "failed to register device\n");
dev_err(dev, "Failed to register device\n");
goto err_kfree_name;
}
@@ -592,7 +592,7 @@ err_iounmap:
}
for (i = 0; i < irq; i++)
devm_free_irq(dev, pdev->irq + i, test);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), test);
err_disable_msi:
pci_disable_msi(pdev);
@@ -625,7 +625,7 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
pci_iounmap(pdev, test->bar[bar]);
}
for (i = 0; i < test->num_irqs; i++)
devm_free_irq(&pdev->dev, pdev->irq + i, test);
devm_free_irq(&pdev->dev, pci_irq_vector(pdev, i), test);
pci_disable_msi(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
@@ -634,6 +634,7 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
static const struct pci_device_id pci_endpoint_test_tbl[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
{ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) },
{ }
};
MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);


@@ -3385,32 +3385,6 @@ err_disable_device:
return rc;
}
/*****************************************************************************/
static int ena_sriov_configure(struct pci_dev *dev, int numvfs)
{
int rc;
if (numvfs > 0) {
rc = pci_enable_sriov(dev, numvfs);
if (rc != 0) {
dev_err(&dev->dev,
"pci_enable_sriov failed to enable: %d vfs with the error: %d\n",
numvfs, rc);
return rc;
}
return numvfs;
}
if (numvfs == 0) {
pci_disable_sriov(dev);
return 0;
}
return -EINVAL;
}
/*****************************************************************************/
/*****************************************************************************/
/* ena_remove - Device Removal Routine
@@ -3526,7 +3500,7 @@ static struct pci_driver ena_pci_driver = {
.suspend = ena_suspend,
.resume = ena_resume,
#endif
.sriov_configure = ena_sriov_configure,
.sriov_configure = pci_sriov_configure_simple,
};
static int __init ena_init(void)


@@ -13922,8 +13922,6 @@ static int bnx2x_init_one(struct pci_dev *pdev,
{
struct net_device *dev = NULL;
struct bnx2x *bp;
enum pcie_link_width pcie_width;
enum pci_bus_speed pcie_speed;
int rc, max_non_def_sbs;
int rx_count, tx_count, rss_count, doorbell_size;
int max_cos_est;
@@ -14091,21 +14089,12 @@ static int bnx2x_init_one(struct pci_dev *pdev,
dev_addr_add(bp->dev, bp->fip_mac, NETDEV_HW_ADDR_T_SAN);
rtnl_unlock();
}
if (pcie_get_minimum_link(bp->pdev, &pcie_speed, &pcie_width) ||
pcie_speed == PCI_SPEED_UNKNOWN ||
pcie_width == PCIE_LNK_WIDTH_UNKNOWN)
BNX2X_DEV_INFO("Failed to determine PCI Express Bandwidth\n");
else
BNX2X_DEV_INFO(
"%s (%c%d) PCI-E x%d %s found at mem %lx, IRQ %d, node addr %pM\n",
board_info[ent->driver_data].name,
(CHIP_REV(bp) >> 12) + 'A', (CHIP_METAL(bp) >> 4),
pcie_width,
pcie_speed == PCIE_SPEED_2_5GT ? "2.5GHz" :
pcie_speed == PCIE_SPEED_5_0GT ? "5.0GHz" :
pcie_speed == PCIE_SPEED_8_0GT ? "8.0GHz" :
"Unknown",
dev->base_addr, bp->pdev->irq, dev->dev_addr);
BNX2X_DEV_INFO(
"%s (%c%d) PCI-E found at mem %lx, IRQ %d, node addr %pM\n",
board_info[ent->driver_data].name,
(CHIP_REV(bp) >> 12) + 'A', (CHIP_METAL(bp) >> 4),
dev->base_addr, bp->pdev->irq, dev->dev_addr);
pcie_print_link_status(bp->pdev);
bnx2x_register_phc(bp);


@@ -8685,22 +8685,6 @@ static int bnxt_init_mac_addr(struct bnxt *bp)
return rc;
}
static void bnxt_parse_log_pcie_link(struct bnxt *bp)
{
enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
if (pcie_get_minimum_link(pci_physfn(bp->pdev), &speed, &width) ||
speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN)
netdev_info(bp->dev, "Failed to determine PCIe Link Info\n");
else
netdev_info(bp->dev, "PCIe: Speed %s Width x%d\n",
speed == PCIE_SPEED_2_5GT ? "2.5GT/s" :
speed == PCIE_SPEED_5_0GT ? "5.0GT/s" :
speed == PCIE_SPEED_8_0GT ? "8.0GT/s" :
"Unknown", width);
}
static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
static int version_printed;
@ -8915,8 +8899,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev_info(dev, "%s found at mem %lx, node addr %pM\n",
board_info[ent->driver_data].name,
(long)pci_resource_start(pdev, 0), dev->dev_addr);
bnxt_parse_log_pcie_link(bp);
pcie_print_link_status(pdev);
return 0;

View File

@ -5066,79 +5066,6 @@ static int init_rss(struct adapter *adap)
return 0;
}
static int cxgb4_get_pcie_dev_link_caps(struct adapter *adap,
enum pci_bus_speed *speed,
enum pcie_link_width *width)
{
u32 lnkcap1, lnkcap2;
int err1, err2;
#define PCIE_MLW_CAP_SHIFT 4 /* start of MLW mask in link capabilities */
*speed = PCI_SPEED_UNKNOWN;
*width = PCIE_LNK_WIDTH_UNKNOWN;
err1 = pcie_capability_read_dword(adap->pdev, PCI_EXP_LNKCAP,
&lnkcap1);
err2 = pcie_capability_read_dword(adap->pdev, PCI_EXP_LNKCAP2,
&lnkcap2);
if (!err2 && lnkcap2) { /* PCIe r3.0-compliant */
if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
*speed = PCIE_SPEED_8_0GT;
else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB)
*speed = PCIE_SPEED_5_0GT;
else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB)
*speed = PCIE_SPEED_2_5GT;
}
if (!err1) {
*width = (lnkcap1 & PCI_EXP_LNKCAP_MLW) >> PCIE_MLW_CAP_SHIFT;
if (!lnkcap2) { /* pre-r3.0 */
if (lnkcap1 & PCI_EXP_LNKCAP_SLS_5_0GB)
*speed = PCIE_SPEED_5_0GT;
else if (lnkcap1 & PCI_EXP_LNKCAP_SLS_2_5GB)
*speed = PCIE_SPEED_2_5GT;
}
}
if (*speed == PCI_SPEED_UNKNOWN || *width == PCIE_LNK_WIDTH_UNKNOWN)
return err1 ? err1 : err2 ? err2 : -EINVAL;
return 0;
}
static void cxgb4_check_pcie_caps(struct adapter *adap)
{
enum pcie_link_width width, width_cap;
enum pci_bus_speed speed, speed_cap;
#define PCIE_SPEED_STR(speed) \
(speed == PCIE_SPEED_8_0GT ? "8.0GT/s" : \
speed == PCIE_SPEED_5_0GT ? "5.0GT/s" : \
speed == PCIE_SPEED_2_5GT ? "2.5GT/s" : \
"Unknown")
if (cxgb4_get_pcie_dev_link_caps(adap, &speed_cap, &width_cap)) {
dev_warn(adap->pdev_dev,
"Unable to determine PCIe device BW capabilities\n");
return;
}
if (pcie_get_minimum_link(adap->pdev, &speed, &width) ||
speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) {
dev_warn(adap->pdev_dev,
"Unable to determine PCI Express bandwidth.\n");
return;
}
dev_info(adap->pdev_dev, "PCIe link speed is %s, device supports %s\n",
PCIE_SPEED_STR(speed), PCIE_SPEED_STR(speed_cap));
dev_info(adap->pdev_dev, "PCIe link width is x%d, device supports x%d\n",
width, width_cap);
if (speed < speed_cap || width < width_cap)
dev_info(adap->pdev_dev,
"A slot with more lanes and/or higher speed is "
"suggested for optimal performance.\n");
}
/* Dump basic information about the adapter */
static void print_adapter_info(struct adapter *adapter)
{
@ -5798,7 +5725,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
}
/* check for PCI Express bandwidth capabilities */
cxgb4_check_pcie_caps(adapter);
pcie_print_link_status(pdev);
err = init_rss(adapter);
if (err)

View File

@ -245,9 +245,6 @@ static void ixgbe_check_minimum_link(struct ixgbe_adapter *adapter,
int expected_gts)
{
struct ixgbe_hw *hw = &adapter->hw;
int max_gts = 0;
enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
struct pci_dev *pdev;
/* Some devices are not connected over PCIe and thus do not negotiate
@ -263,49 +260,7 @@ static void ixgbe_check_minimum_link(struct ixgbe_adapter *adapter,
else
pdev = adapter->pdev;
if (pcie_get_minimum_link(pdev, &speed, &width) ||
speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN) {
e_dev_warn("Unable to determine PCI Express bandwidth.\n");
return;
}
switch (speed) {
case PCIE_SPEED_2_5GT:
/* 8b/10b encoding reduces max throughput by 20% */
max_gts = 2 * width;
break;
case PCIE_SPEED_5_0GT:
/* 8b/10b encoding reduces max throughput by 20% */
max_gts = 4 * width;
break;
case PCIE_SPEED_8_0GT:
/* 128b/130b encoding reduces throughput by less than 2% */
max_gts = 8 * width;
break;
default:
e_dev_warn("Unable to determine PCI Express bandwidth.\n");
return;
}
e_dev_info("PCI Express bandwidth of %dGT/s available\n",
max_gts);
e_dev_info("(Speed:%s, Width: x%d, Encoding Loss:%s)\n",
(speed == PCIE_SPEED_8_0GT ? "8.0GT/s" :
speed == PCIE_SPEED_5_0GT ? "5.0GT/s" :
speed == PCIE_SPEED_2_5GT ? "2.5GT/s" :
"Unknown"),
width,
(speed == PCIE_SPEED_2_5GT ? "20%" :
speed == PCIE_SPEED_5_0GT ? "20%" :
speed == PCIE_SPEED_8_0GT ? "<2%" :
"Unknown"));
if (max_gts < expected_gts) {
e_dev_warn("This is not sufficient for optimal performance of this card.\n");
e_dev_warn("For optimal performance, at least %dGT/s of bandwidth is required.\n",
expected_gts);
e_dev_warn("A slot with more lanes and/or higher speed is suggested.\n");
}
pcie_print_link_status(pdev);
}
static void ixgbe_service_event_schedule(struct ixgbe_adapter *adapter)

View File

@ -2630,24 +2630,6 @@ static void nvme_remove(struct pci_dev *pdev)
nvme_put_ctrl(&dev->ctrl);
}
static int nvme_pci_sriov_configure(struct pci_dev *pdev, int numvfs)
{
int ret = 0;
if (numvfs == 0) {
if (pci_vfs_assigned(pdev)) {
dev_warn(&pdev->dev,
"Cannot disable SR-IOV VFs while assigned\n");
return -EPERM;
}
pci_disable_sriov(pdev);
return 0;
}
ret = pci_enable_sriov(pdev, numvfs);
return ret ? ret : numvfs;
}
#ifdef CONFIG_PM_SLEEP
static int nvme_suspend(struct device *dev)
{
@ -2774,7 +2756,7 @@ static struct pci_driver nvme_driver = {
.driver = {
.pm = &nvme_dev_pm_ops,
},
.sriov_configure = nvme_pci_sriov_configure,
.sriov_configure = pci_sriov_configure_simple,
.err_handler = &nvme_err_handler,
};

View File

@ -67,6 +67,18 @@ config PCI_STUB
When in doubt, say N.
config PCI_PF_STUB
tristate "PCI PF Stub driver"
depends on PCI
depends on PCI_IOV
help
Say Y or M here if you want to enable support for devices that
require SR-IOV support, while the PF (Physical Function) itself
does not provide any actual services on the host, such as
storage or networking.
When in doubt, say N.
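The module this option builds is deliberately tiny; a sketch of its shape, with the device whitelist table elided:

static int pci_pf_stub_probe(struct pci_dev *dev,
			     const struct pci_device_id *id)
{
	pci_info(dev, "claimed by pci-pf-stub\n");
	return 0;
}

static struct pci_driver pf_stub_driver = {
	.name			= "pci-pf-stub",
	.id_table		= pci_pf_stub_whitelist,	/* elided */
	.probe			= pci_pf_stub_probe,
	.sriov_configure	= pci_sriov_configure_simple,
};
module_pci_driver(pf_stub_driver);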
config XEN_PCIDEV_FRONTEND
tristate "Xen PCI Frontend"
depends on PCI && X86 && XEN

View File

@ -24,6 +24,7 @@ obj-$(CONFIG_PCI_LABEL) += pci-label.o
obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o
obj-$(CONFIG_PCI_SYSCALL) += syscall.o
obj-$(CONFIG_PCI_STUB) += pci-stub.o
obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o
obj-$(CONFIG_PCI_ECAM) += ecam.o
obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o

View File

@ -20,6 +20,9 @@ void pci_ats_init(struct pci_dev *dev)
{
int pos;
if (pci_ats_disabled())
return;
pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ATS);
if (!pos)
return;
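pci_ats_disabled() is a new core predicate behind the "pci=noats" command-line parameter; conceptually it is just a flag read (sketch of the core helper):

	static bool pcie_ats_disabled;	/* set while parsing "pci=noats" */

	bool pci_ats_disabled(void)
	{
		return pcie_ats_disabled;
	}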

View File

@ -1,13 +1,13 @@
# SPDX-License-Identifier: GPL-2.0
menu "DesignWare PCI Core Support"
depends on PCI
config PCIE_DW
bool
config PCIE_DW_HOST
bool
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW
@ -22,7 +22,7 @@ config PCI_DRA7XX
config PCI_DRA7XX_HOST
bool "TI DRA7xx PCIe controller Host Mode"
depends on SOC_DRA7XX || COMPILE_TEST
depends on PCI && PCI_MSI_IRQ_DOMAIN
depends on PCI_MSI_IRQ_DOMAIN
depends on OF && HAS_IOMEM && TI_PIPE3
select PCIE_DW_HOST
select PCI_DRA7XX
@ -51,50 +51,62 @@ config PCI_DRA7XX_EP
This uses the DesignWare core.
config PCIE_DW_PLAT
bool "Platform bus based DesignWare PCIe Controller"
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
bool
config PCIE_DW_PLAT_HOST
bool "Platform bus based DesignWare PCIe Controller - Host mode"
depends on PCI && PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
---help---
This selects the DesignWare PCIe controller support. Select this if
you have a PCIe controller on Platform bus.
select PCIE_DW_PLAT
default y
help
Enables support for the PCIe controller in the DesignWare IP to
work in host mode. There are two instances of the PCIe controller
in the DesignWare IP.
This controller can work either as EP or RC. To enable
host-specific features, select PCIE_DW_PLAT_HOST; to enable
device-specific features, select PCIE_DW_PLAT_EP.
If you have a controller with this interface, say Y or M here.
If unsure, say N.
config PCIE_DW_PLAT_EP
bool "Platform bus based DesignWare PCIe Controller - Endpoint mode"
depends on PCI && PCI_MSI_IRQ_DOMAIN
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_DW_PLAT
help
Enables support for the PCIe controller in the DesignWare IP to
work in endpoint mode. There are two instances of the PCIe
controller in the DesignWare IP.
This controller can work either as EP or RC. To enable
host-specific features, select PCIE_DW_PLAT_HOST; to enable
device-specific features, select PCIE_DW_PLAT_EP.
config PCI_EXYNOS
bool "Samsung Exynos PCIe controller"
depends on PCI
depends on SOC_EXYNOS5440
depends on SOC_EXYNOS5440 || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
config PCI_IMX6
bool "Freescale i.MX6 PCIe controller"
depends on PCI
depends on SOC_IMX6Q
depends on SOC_IMX6Q || (ARM && COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
config PCIE_SPEAR13XX
bool "STMicroelectronics SPEAr PCIe controller"
depends on PCI
depends on ARCH_SPEAR13XX
depends on ARCH_SPEAR13XX || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want PCIe support on SPEAr13XX SoCs.
config PCI_KEYSTONE
bool "TI Keystone PCIe controller"
depends on PCI
depends on ARCH_KEYSTONE
depends on ARCH_KEYSTONE || (ARM && COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want to enable PCI controller support on Keystone
@ -104,8 +116,7 @@ config PCI_KEYSTONE
config PCI_LAYERSCAPE
bool "Freescale Layerscape PCIe controller"
depends on PCI
depends on OF && (ARM || ARCH_LAYERSCAPE)
depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
select PCIE_DW_HOST
@ -113,11 +124,9 @@ config PCI_LAYERSCAPE
Say Y here if you want PCIe controller support on Layerscape SoCs.
config PCI_HISI
depends on OF && ARM64
depends on OF && (ARM64 || COMPILE_TEST)
bool "HiSilicon Hip05 and Hip06 SoCs PCIe controllers"
depends on PCI
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
select PCI_HOST_COMMON
help
@ -126,10 +135,8 @@ config PCI_HISI
config PCIE_QCOM
bool "Qualcomm PCIe controller"
depends on PCI
depends on ARCH_QCOM && OF
depends on OF && (ARCH_QCOM || COMPILE_TEST)
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here to enable PCIe controller support on Qualcomm SoCs. The
@ -138,10 +145,8 @@ config PCIE_QCOM
config PCIE_ARMADA_8K
bool "Marvell Armada-8K PCIe controller"
depends on PCI
depends on ARCH_MVEBU
depends on ARCH_MVEBU || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want to enable PCIe controller support on
@ -154,9 +159,8 @@ config PCIE_ARTPEC6
config PCIE_ARTPEC6_HOST
bool "Axis ARTPEC-6 PCIe controller Host Mode"
depends on MACH_ARTPEC6
depends on PCI && PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
depends on MACH_ARTPEC6 || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PCIE_ARTPEC6
help
@ -165,7 +169,7 @@ config PCIE_ARTPEC6_HOST
config PCIE_ARTPEC6_EP
bool "Axis ARTPEC-6 PCIe controller Endpoint Mode"
depends on MACH_ARTPEC6
depends on MACH_ARTPEC6 || COMPILE_TEST
depends on PCI_ENDPOINT
select PCIE_DW_EP
select PCIE_ARTPEC6
@ -174,11 +178,9 @@ config PCIE_ARTPEC6_EP
endpoint mode. This uses the DesignWare core.
config PCIE_KIRIN
depends on OF && ARM64
depends on OF && (ARM64 || COMPILE_TEST)
bool "HiSilicon Kirin series SoCs PCIe controllers"
depends on PCI_MSI_IRQ_DOMAIN
depends on PCI
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support
@ -186,10 +188,8 @@ config PCIE_KIRIN
config PCIE_HISI_STB
bool "HiSilicon STB SoCs PCIe controllers"
depends on ARCH_HISI
depends on PCI
depends on ARCH_HISI || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIEPORTBUS
select PCIE_DW_HOST
help
Say Y here if you want PCIe controller support on HiSilicon STB SoCs

View File

@ -27,6 +27,7 @@
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include "../pci.h"
#include "pcie-designware.h"
/* PCIe controller wrapper DRA7XX configuration registers */
@ -406,14 +407,14 @@ static int __init dra7xx_add_pcie_ep(struct dra7xx_pcie *dra7xx,
ep->ops = &pcie_ep_ops;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ep_dbics");
pci->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
if (!pci->dbi_base)
return -ENOMEM;
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ep_dbics2");
pci->dbi_base2 = devm_ioremap(dev, res->start, resource_size(res));
if (!pci->dbi_base2)
return -ENOMEM;
pci->dbi_base2 = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base2))
return PTR_ERR(pci->dbi_base2);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
if (!res)
@ -459,9 +460,9 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
return ret;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbics");
pci->dbi_base = devm_ioremap(dev, res->start, resource_size(res));
if (!pci->dbi_base)
return -ENOMEM;
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
pp->ops = &dra7xx_pcie_host_ops;
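The devm_ioremap() to devm_ioremap_resource() conversions here and in the artpec6 and dw-plat hunks below are more than style: devm_ioremap_resource() also tolerates a NULL resource from platform_get_resource_byname(), requests the memory region so address conflicts surface, and returns an ERR_PTR-encoded error instead of NULL. The resulting idiom (sketch):

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbics");
	base = devm_ioremap_resource(dev, res);	/* res == NULL handled too */
	if (IS_ERR(base))
		return PTR_ERR(base);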

View File

@ -338,7 +338,7 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
IMX6SX_GPR12_PCIE_TEST_POWERDOWN, 0);
break;
case IMX6QP: /* FALLTHROUGH */
case IMX6Q:
/* power up core phy and enable ref clock */
regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,

View File

@ -89,7 +89,7 @@ static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie)
dw_pcie_setup_rc(pp);
if (dw_pcie_link_up(pci)) {
dev_err(dev, "Link already up\n");
dev_info(dev, "Link already up\n");
return 0;
}

View File

@ -28,6 +28,7 @@
struct armada8k_pcie {
struct dw_pcie *pci;
struct clk *clk;
struct clk *clk_reg;
};
#define PCIE_VENDOR_REGS_OFFSET 0x8000
@ -229,26 +230,38 @@ static int armada8k_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
pcie->clk_reg = devm_clk_get(dev, "reg");
if (pcie->clk_reg == ERR_PTR(-EPROBE_DEFER)) {
ret = -EPROBE_DEFER;
goto fail;
}
if (!IS_ERR(pcie->clk_reg)) {
ret = clk_prepare_enable(pcie->clk_reg);
if (ret)
goto fail_clkreg;
}
/* Get the dw-pcie unit configuration/control registers base. */
base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
pci->dbi_base = devm_pci_remap_cfg_resource(dev, base);
if (IS_ERR(pci->dbi_base)) {
dev_err(dev, "couldn't remap regs base %p\n", base);
ret = PTR_ERR(pci->dbi_base);
goto fail;
goto fail_clkreg;
}
platform_set_drvdata(pdev, pcie);
ret = armada8k_add_pcie_port(pcie, pdev);
if (ret)
goto fail;
goto fail_clkreg;
return 0;
fail_clkreg:
clk_disable_unprepare(pcie->clk_reg);
fail:
if (!IS_ERR(pcie->clk))
clk_disable_unprepare(pcie->clk);
clk_disable_unprepare(pcie->clk);
return ret;
}

View File

@ -463,9 +463,9 @@ static int artpec6_add_pcie_ep(struct artpec6_pcie *artpec6_pcie,
ep->ops = &pcie_ep_ops;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
pci->dbi_base2 = devm_ioremap(dev, res->start, resource_size(res));
if (!pci->dbi_base2)
return -ENOMEM;
pci->dbi_base2 = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base2))
return PTR_ERR(pci->dbi_base2);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
if (!res)

View File

@ -75,7 +75,7 @@ static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, enum pci_barno bar,
free_win = find_first_zero_bit(ep->ib_window_map, ep->num_ib_windows);
if (free_win >= ep->num_ib_windows) {
dev_err(pci->dev, "no free inbound window\n");
dev_err(pci->dev, "No free inbound window\n");
return -EINVAL;
}
@ -100,7 +100,7 @@ static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, phys_addr_t phys_addr,
free_win = find_first_zero_bit(ep->ob_window_map, ep->num_ob_windows);
if (free_win >= ep->num_ob_windows) {
dev_err(pci->dev, "no free outbound window\n");
dev_err(pci->dev, "No free outbound window\n");
return -EINVAL;
}
@ -204,7 +204,7 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, u8 func_no,
ret = dw_pcie_ep_outbound_atu(ep, addr, pci_addr, size);
if (ret) {
dev_err(pci->dev, "failed to enable address\n");
dev_err(pci->dev, "Failed to enable address\n");
return ret;
}
@ -348,21 +348,21 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
ret = of_property_read_u32(np, "num-ib-windows", &ep->num_ib_windows);
if (ret < 0) {
dev_err(dev, "unable to read *num-ib-windows* property\n");
dev_err(dev, "Unable to read *num-ib-windows* property\n");
return ret;
}
if (ep->num_ib_windows > MAX_IATU_IN) {
dev_err(dev, "invalid *num-ib-windows*\n");
dev_err(dev, "Invalid *num-ib-windows*\n");
return -EINVAL;
}
ret = of_property_read_u32(np, "num-ob-windows", &ep->num_ob_windows);
if (ret < 0) {
dev_err(dev, "unable to read *num-ob-windows* property\n");
dev_err(dev, "Unable to read *num-ob-windows* property\n");
return ret;
}
if (ep->num_ob_windows > MAX_IATU_OUT) {
dev_err(dev, "invalid *num-ob-windows*\n");
dev_err(dev, "Invalid *num-ob-windows*\n");
return -EINVAL;
}
@ -389,7 +389,7 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
epc = devm_pci_epc_create(dev, &epc_ops);
if (IS_ERR(epc)) {
dev_err(dev, "failed to create epc device\n");
dev_err(dev, "Failed to create epc device\n");
return PTR_ERR(epc);
}
@ -411,6 +411,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
return -ENOMEM;
}
epc->features = EPC_FEATURE_NO_LINKUP_NOTIFIER;
EPC_FEATURE_SET_BAR(epc->features, BAR_0);
ep->epc = epc;
epc_set_drvdata(epc, ep);
dw_pcie_setup(pci);

View File

@ -15,6 +15,7 @@
#include <linux/pci_regs.h>
#include <linux/platform_device.h>
#include "../pci.h"
#include "pcie-designware.h"
static struct pci_ops dw_pcie_ops;
@ -83,18 +84,23 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
for (i = 0; i < num_ctrls; i++) {
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
&val);
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS +
(i * MSI_REG_CTRL_BLOCK_SIZE),
4, &val);
if (!val)
continue;
ret = IRQ_HANDLED;
pos = 0;
while ((pos = find_next_bit((unsigned long *) &val, 32,
pos)) != 32) {
irq = irq_find_mapping(pp->irq_domain, i * 32 + pos);
while ((pos = find_next_bit((unsigned long *) &val,
MAX_MSI_IRQS_PER_CTRL,
pos)) != MAX_MSI_IRQS_PER_CTRL) {
irq = irq_find_mapping(pp->irq_domain,
(i * MAX_MSI_IRQS_PER_CTRL) +
pos);
generic_handle_irq(irq);
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12,
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS +
(i * MSI_REG_CTRL_BLOCK_SIZE),
4, 1 << pos);
pos++;
}
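The recurring magic number 12 now has a name: each MSI controller serves MAX_MSI_IRQS_PER_CTRL (32) vectors through a 12-byte (MSI_REG_CTRL_BLOCK_SIZE) block of registers. A worked example of the decomposition the mask/unmask paths below perform, for hwirq 70 (a sketch using the same macros):

	u32 ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;	/* 70 -> controller 2 */
	u32 res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;	/* -> byte offset 24 */
	u32 bit = hwirq % MAX_MSI_IRQS_PER_CTRL;	/* -> bit 6 */

	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit));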
@ -157,9 +163,9 @@ static void dw_pci_bottom_mask(struct irq_data *data)
if (pp->ops->msi_clear_irq) {
pp->ops->msi_clear_irq(pp, data->hwirq);
} else {
ctrl = data->hwirq / 32;
res = ctrl * 12;
bit = data->hwirq % 32;
ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL;
pp->irq_status[ctrl] &= ~(1 << bit);
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
@ -180,9 +186,9 @@ static void dw_pci_bottom_unmask(struct irq_data *data)
if (pp->ops->msi_set_irq) {
pp->ops->msi_set_irq(pp, data->hwirq);
} else {
ctrl = data->hwirq / 32;
res = ctrl * 12;
bit = data->hwirq % 32;
ctrl = data->hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = data->hwirq % MAX_MSI_IRQS_PER_CTRL;
pp->irq_status[ctrl] |= 1 << bit;
dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
@ -248,8 +254,10 @@ static void dw_pcie_irq_domain_free(struct irq_domain *domain,
unsigned long flags;
raw_spin_lock_irqsave(&pp->lock, flags);
bitmap_release_region(pp->msi_irq_in_use, data->hwirq,
order_base_2(nr_irqs));
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
@ -266,7 +274,7 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
&dw_pcie_msi_domain_ops, pp);
if (!pp->irq_domain) {
dev_err(pci->dev, "failed to create IRQ domain\n");
dev_err(pci->dev, "Failed to create IRQ domain\n");
return -ENOMEM;
}
@ -274,7 +282,7 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
&dw_pcie_msi_domain_info,
pp->irq_domain);
if (!pp->msi_domain) {
dev_err(pci->dev, "failed to create MSI domain\n");
dev_err(pci->dev, "Failed to create MSI domain\n");
irq_domain_remove(pp->irq_domain);
return -ENOMEM;
}
@ -301,13 +309,13 @@ void dw_pcie_msi_init(struct pcie_port *pp)
page = alloc_page(GFP_KERNEL);
pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, pp->msi_data)) {
dev_err(dev, "failed to map MSI data\n");
dev_err(dev, "Failed to map MSI data\n");
__free_page(page);
return;
}
msi_target = (u64)pp->msi_data;
/* program the msi_data */
/* Program the msi_data */
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
lower_32_bits(msi_target));
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
@ -330,19 +338,19 @@ int dw_pcie_host_init(struct pcie_port *pp)
cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
if (cfg_res) {
pp->cfg0_size = resource_size(cfg_res) / 2;
pp->cfg1_size = resource_size(cfg_res) / 2;
pp->cfg0_size = resource_size(cfg_res) >> 1;
pp->cfg1_size = resource_size(cfg_res) >> 1;
pp->cfg0_base = cfg_res->start;
pp->cfg1_base = cfg_res->start + pp->cfg0_size;
} else if (!pp->va_cfg0_base) {
dev_err(dev, "missing *config* reg space\n");
dev_err(dev, "Missing *config* reg space\n");
}
bridge = pci_alloc_host_bridge(0);
if (!bridge)
return -ENOMEM;
ret = of_pci_get_host_bridge_resources(np, 0, 0xff,
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
&bridge->windows, &pp->io_base);
if (ret)
return ret;
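devm_of_pci_get_host_bridge_resources() is the new managed variant used here and in the aardvark and faraday hunks further down; it takes a struct device instead of a device_node and ties the lifetime of the parsed resource list to the device, so callers lose their manual cleanup paths. Usage sketch:

	ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
						    &bridge->windows,
						    &pp->io_base);
	if (ret)
		return ret;	/* parsed windows are device-managed now */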
@ -357,7 +365,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
case IORESOURCE_IO:
ret = pci_remap_iospace(win->res, pp->io_base);
if (ret) {
dev_warn(dev, "error %d: failed to map resource %pR\n",
dev_warn(dev, "Error %d: failed to map resource %pR\n",
ret, win->res);
resource_list_destroy_entry(win);
} else {
@ -375,8 +383,8 @@ int dw_pcie_host_init(struct pcie_port *pp)
break;
case 0:
pp->cfg = win->res;
pp->cfg0_size = resource_size(pp->cfg) / 2;
pp->cfg1_size = resource_size(pp->cfg) / 2;
pp->cfg0_size = resource_size(pp->cfg) >> 1;
pp->cfg1_size = resource_size(pp->cfg) >> 1;
pp->cfg0_base = pp->cfg->start;
pp->cfg1_base = pp->cfg->start + pp->cfg0_size;
break;
@ -391,7 +399,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->cfg->start,
resource_size(pp->cfg));
if (!pci->dbi_base) {
dev_err(dev, "error with ioremap\n");
dev_err(dev, "Error with ioremap\n");
ret = -ENOMEM;
goto error;
}
@ -403,7 +411,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
pp->cfg0_base, pp->cfg0_size);
if (!pp->va_cfg0_base) {
dev_err(dev, "error with ioremap in function\n");
dev_err(dev, "Error with ioremap in function\n");
ret = -ENOMEM;
goto error;
}
@ -414,7 +422,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
pp->cfg1_base,
pp->cfg1_size);
if (!pp->va_cfg1_base) {
dev_err(dev, "error with ioremap\n");
dev_err(dev, "Error with ioremap\n");
ret = -ENOMEM;
goto error;
}
@ -586,7 +594,7 @@ static int dw_pcie_valid_device(struct pcie_port *pp, struct pci_bus *bus,
return 0;
}
/* access only one slot on each root port */
/* Access only one slot on each root port */
if (bus->number == pp->root_bus_nr && dev > 0)
return 0;
@ -650,13 +658,15 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
/* Initialize IRQ Status array */
for (ctrl = 0; ctrl < num_ctrls; ctrl++)
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
&pp->irq_status[ctrl]);
/* setup RC BARs */
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
4, &pp->irq_status[ctrl]);
/* Setup RC BARs */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
/* setup interrupt pins */
/* Setup interrupt pins */
dw_pcie_dbi_ro_wr_en(pci);
val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
val &= 0xffff00ff;
@ -664,13 +674,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
dw_pcie_dbi_ro_wr_dis(pci);
/* setup bus numbers */
/* Setup bus numbers */
val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
val &= 0xff000000;
val |= 0x00ff0100;
dw_pcie_writel_dbi(pci, PCI_PRIMARY_BUS, val);
/* setup command register */
/* Setup command register */
val = dw_pcie_readl_dbi(pci, PCI_COMMAND);
val &= 0xffff0000;
val |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
@ -683,7 +693,7 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
* we should not program the ATU here.
*/
if (!pp->ops->rd_other_conf) {
/* get iATU unroll support */
/* Get iATU unroll support */
pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
dev_dbg(pci->dev, "iATU unroll: %s\n",
pci->iatu_unroll_enabled ? "enabled" : "disabled");
@ -701,7 +711,7 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
/* Enable write permission for the DBI read-only register */
dw_pcie_dbi_ro_wr_en(pci);
/* program correct class for RC */
/* Program correct class for RC */
dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
/* Better disable write permission right after the update */
dw_pcie_dbi_ro_wr_dis(pci);

View File

@ -12,19 +12,29 @@
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/signal.h>
#include <linux/types.h>
#include <linux/regmap.h>
#include "pcie-designware.h"
struct dw_plat_pcie {
struct dw_pcie *pci;
struct regmap *regmap;
enum dw_pcie_device_mode mode;
};
struct dw_plat_pcie_of_data {
enum dw_pcie_device_mode mode;
};
static const struct of_device_id dw_plat_pcie_of_match[];
static int dw_plat_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@ -38,13 +48,63 @@ static int dw_plat_pcie_host_init(struct pcie_port *pp)
return 0;
}
static void dw_plat_set_num_vectors(struct pcie_port *pp)
{
pp->num_vectors = MAX_MSI_IRQS;
}
static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
.host_init = dw_plat_pcie_host_init,
.set_num_vectors = dw_plat_set_num_vectors,
};
static int dw_plat_add_pcie_port(struct pcie_port *pp,
static int dw_plat_pcie_establish_link(struct dw_pcie *pci)
{
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
.start_link = dw_plat_pcie_establish_link,
};
static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
enum pci_barno bar;
for (bar = BAR_0; bar <= BAR_5; bar++)
dw_pcie_ep_reset_bar(pci, bar);
}
static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
enum pci_epc_irq_type type,
u8 interrupt_num)
{
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
dev_err(pci->dev, "EP cannot trigger legacy IRQs\n");
return -EINVAL;
case PCI_EPC_IRQ_MSI:
return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
default:
dev_err(pci->dev, "UNKNOWN IRQ type\n");
}
return 0;
}
static struct dw_pcie_ep_ops pcie_ep_ops = {
.ep_init = dw_plat_pcie_ep_init,
.raise_irq = dw_plat_pcie_ep_raise_irq,
};
static int dw_plat_add_pcie_port(struct dw_plat_pcie *dw_plat_pcie,
struct platform_device *pdev)
{
struct dw_pcie *pci = dw_plat_pcie->pci;
struct pcie_port *pp = &pci->pp;
struct device *dev = &pdev->dev;
int ret;
@ -63,15 +123,44 @@ static int dw_plat_add_pcie_port(struct pcie_port *pp,
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "failed to initialize host\n");
dev_err(dev, "Failed to initialize host\n");
return ret;
}
return 0;
}
static const struct dw_pcie_ops dw_pcie_ops = {
};
static int dw_plat_add_pcie_ep(struct dw_plat_pcie *dw_plat_pcie,
struct platform_device *pdev)
{
int ret;
struct dw_pcie_ep *ep;
struct resource *res;
struct device *dev = &pdev->dev;
struct dw_pcie *pci = dw_plat_pcie->pci;
ep = &pci->ep;
ep->ops = &pcie_ep_ops;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2");
pci->dbi_base2 = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base2))
return PTR_ERR(pci->dbi_base2);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space");
if (!res)
return -EINVAL;
ep->phys_base = res->start;
ep->addr_size = resource_size(res);
ret = dw_pcie_ep_init(ep);
if (ret) {
dev_err(dev, "Failed to initialize endpoint\n");
return ret;
}
return 0;
}
static int dw_plat_pcie_probe(struct platform_device *pdev)
{
@ -80,6 +169,16 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
struct dw_pcie *pci;
struct resource *res; /* Resource from DT */
int ret;
const struct of_device_id *match;
const struct dw_plat_pcie_of_data *data;
enum dw_pcie_device_mode mode;
match = of_match_device(dw_plat_pcie_of_match, dev);
if (!match)
return -EINVAL;
data = (struct dw_plat_pcie_of_data *)match->data;
mode = (enum dw_pcie_device_mode)data->mode;
dw_plat_pcie = devm_kzalloc(dev, sizeof(*dw_plat_pcie), GFP_KERNEL);
if (!dw_plat_pcie)
@ -93,23 +192,59 @@ static int dw_plat_pcie_probe(struct platform_device *pdev)
pci->ops = &dw_pcie_ops;
dw_plat_pcie->pci = pci;
dw_plat_pcie->mode = mode;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
if (!res)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pci->dbi_base = devm_ioremap_resource(dev, res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
platform_set_drvdata(pdev, dw_plat_pcie);
ret = dw_plat_add_pcie_port(&pci->pp, pdev);
if (ret < 0)
return ret;
switch (dw_plat_pcie->mode) {
case DW_PCIE_RC_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_HOST))
return -ENODEV;
ret = dw_plat_add_pcie_port(dw_plat_pcie, pdev);
if (ret < 0)
return ret;
break;
case DW_PCIE_EP_TYPE:
if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_EP))
return -ENODEV;
ret = dw_plat_add_pcie_ep(dw_plat_pcie, pdev);
if (ret < 0)
return ret;
break;
default:
dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode);
}
return 0;
}
static const struct dw_plat_pcie_of_data dw_plat_pcie_rc_of_data = {
.mode = DW_PCIE_RC_TYPE,
};
static const struct dw_plat_pcie_of_data dw_plat_pcie_ep_of_data = {
.mode = DW_PCIE_EP_TYPE,
};
static const struct of_device_id dw_plat_pcie_of_match[] = {
{ .compatible = "snps,dw-pcie", },
{
.compatible = "snps,dw-pcie",
.data = &dw_plat_pcie_rc_of_data,
},
{
.compatible = "snps,dw-pcie-ep",
.data = &dw_plat_pcie_ep_of_data,
},
{},
};

View File

@ -69,7 +69,7 @@ u32 __dw_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
ret = dw_pcie_read(base + reg, size, &val);
if (ret)
dev_err(pci->dev, "read DBI address failed\n");
dev_err(pci->dev, "Read DBI address failed\n");
return val;
}
@ -86,7 +86,7 @@ void __dw_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
ret = dw_pcie_write(base + reg, size, val);
if (ret)
dev_err(pci->dev, "write DBI address failed\n");
dev_err(pci->dev, "Write DBI address failed\n");
}
static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg)
@ -137,7 +137,7 @@ static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pci->dev, "outbound iATU is not being enabled\n");
dev_err(pci->dev, "Outbound iATU is not being enabled\n");
}
void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
@ -180,7 +180,7 @@ void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pci->dev, "outbound iATU is not being enabled\n");
dev_err(pci->dev, "Outbound iATU is not being enabled\n");
}
static u32 dw_pcie_readl_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg)
@ -238,7 +238,7 @@ static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pci->dev, "inbound iATU is not being enabled\n");
dev_err(pci->dev, "Inbound iATU is not being enabled\n");
return -EBUSY;
}
@ -284,7 +284,7 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);
}
dev_err(pci->dev, "inbound iATU is not being enabled\n");
dev_err(pci->dev, "Inbound iATU is not being enabled\n");
return -EBUSY;
}
@ -313,16 +313,16 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
int retries;
/* check if the link is up or not */
/* Check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (dw_pcie_link_up(pci)) {
dev_info(pci->dev, "link up\n");
dev_info(pci->dev, "Link up\n");
return 0;
}
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
dev_err(pci->dev, "phy link never came up\n");
dev_err(pci->dev, "Phy link never came up\n");
return -ETIMEDOUT;
}
@ -351,7 +351,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
if (ret)
lanes = 0;
/* set the number of lanes */
/* Set the number of lanes */
val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
val &= ~PORT_LINK_MODE_MASK;
switch (lanes) {
@ -373,7 +373,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
}
dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
/* set link width speed control register */
/* Set link width speed control register */
val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
switch (lanes) {

View File

@ -110,6 +110,7 @@
#define MAX_MSI_IRQS 256
#define MAX_MSI_IRQS_PER_CTRL 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
#define MSI_REG_CTRL_BLOCK_SIZE 12
#define MSI_DEF_NUM_VECTORS 32
/* Maximum number of inbound/outbound iATUs */

View File

@ -10,7 +10,7 @@
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@ -19,6 +19,7 @@
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
#include <linux/regulator/consumer.h>
@ -869,7 +870,7 @@ static int qcom_pcie_init_2_4_0(struct qcom_pcie *pcie)
/* enable PCIe clocks and resets */
val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
val &= !BIT(0);
val &= ~BIT(0);
writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
/* change DBI base address */
@ -1088,6 +1089,7 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
struct qcom_pcie *pcie = to_qcom_pcie(pci);
int ret;
pm_runtime_get_sync(pci->dev);
qcom_ep_reset_assert(pcie);
ret = pcie->ops->init(pcie);
@ -1124,6 +1126,7 @@ err_disable_phy:
phy_power_off(pcie->phy);
err_deinit:
pcie->ops->deinit(pcie);
pm_runtime_put(pci->dev);
return ret;
}
@ -1212,6 +1215,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
if (!pci)
return -ENOMEM;
pm_runtime_enable(dev);
pci->dev = dev;
pci->ops = &dw_pcie_ops;
pp = &pci->pp;
@ -1257,14 +1261,17 @@ static int qcom_pcie_probe(struct platform_device *pdev)
}
ret = phy_init(pcie->phy);
if (ret)
if (ret) {
pm_runtime_disable(&pdev->dev);
return ret;
}
platform_set_drvdata(pdev, pcie);
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "cannot initialize host\n");
pm_runtime_disable(&pdev->dev);
return ret;
}

View File

@ -87,7 +87,7 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
if (!src_addr) {
dev_err(dev, "failed to allocate source address\n");
dev_err(dev, "Failed to allocate source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
ret = -ENOMEM;
goto err;
@ -96,14 +96,14 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
ret = pci_epc_map_addr(epc, epf->func_no, src_phys_addr, reg->src_addr,
reg->size);
if (ret) {
dev_err(dev, "failed to map source address\n");
dev_err(dev, "Failed to map source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto err_src_addr;
}
dst_addr = pci_epc_mem_alloc_addr(epc, &dst_phys_addr, reg->size);
if (!dst_addr) {
dev_err(dev, "failed to allocate destination address\n");
dev_err(dev, "Failed to allocate destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
ret = -ENOMEM;
goto err_src_map_addr;
@ -112,7 +112,7 @@ static int pci_epf_test_copy(struct pci_epf_test *epf_test)
ret = pci_epc_map_addr(epc, epf->func_no, dst_phys_addr, reg->dst_addr,
reg->size);
if (ret) {
dev_err(dev, "failed to map destination address\n");
dev_err(dev, "Failed to map destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
goto err_dst_addr;
}
@ -149,7 +149,7 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!src_addr) {
dev_err(dev, "failed to allocate address\n");
dev_err(dev, "Failed to allocate address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
ret = -ENOMEM;
goto err;
@ -158,7 +158,7 @@ static int pci_epf_test_read(struct pci_epf_test *epf_test)
ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->src_addr,
reg->size);
if (ret) {
dev_err(dev, "failed to map address\n");
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto err_addr;
}
@ -201,7 +201,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!dst_addr) {
dev_err(dev, "failed to allocate address\n");
dev_err(dev, "Failed to allocate address\n");
reg->status = STATUS_DST_ADDR_INVALID;
ret = -ENOMEM;
goto err;
@ -210,7 +210,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
ret = pci_epc_map_addr(epc, epf->func_no, phys_addr, reg->dst_addr,
reg->size);
if (ret) {
dev_err(dev, "failed to map address\n");
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_DST_ADDR_INVALID;
goto err_addr;
}
@ -230,7 +230,7 @@ static int pci_epf_test_write(struct pci_epf_test *epf_test)
* wait 1ms in order for the write to complete. Without this delay, an
* L3 error is observed in the host system.
*/
mdelay(1);
usleep_range(1000, 2000);
kfree(buf);
@ -379,7 +379,7 @@ static int pci_epf_test_set_bar(struct pci_epf *epf)
ret = pci_epc_set_bar(epc, epf->func_no, epf_bar);
if (ret) {
pci_epf_free_space(epf, epf_test->reg[bar], bar);
dev_err(dev, "failed to set BAR%d\n", bar);
dev_err(dev, "Failed to set BAR%d\n", bar);
if (bar == test_reg_bar)
return ret;
}
@ -406,7 +406,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg),
test_reg_bar);
if (!base) {
dev_err(dev, "failed to allocated register space\n");
dev_err(dev, "Failed to allocated register space\n");
return -ENOMEM;
}
epf_test->reg[test_reg_bar] = base;
@ -416,7 +416,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
continue;
base = pci_epf_alloc_space(epf, bar_size[bar], bar);
if (!base)
dev_err(dev, "failed to allocate space for BAR%d\n",
dev_err(dev, "Failed to allocate space for BAR%d\n",
bar);
epf_test->reg[bar] = base;
}
@ -435,9 +435,16 @@ static int pci_epf_test_bind(struct pci_epf *epf)
if (WARN_ON_ONCE(!epc))
return -EINVAL;
if (epc->features & EPC_FEATURE_NO_LINKUP_NOTIFIER)
epf_test->linkup_notifier = false;
else
epf_test->linkup_notifier = true;
epf_test->test_reg_bar = EPC_FEATURE_GET_BAR(epc->features);
ret = pci_epc_write_header(epc, epf->func_no, header);
if (ret) {
dev_err(dev, "configuration header write failed\n");
dev_err(dev, "Configuration header write failed\n");
return ret;
}
@ -519,7 +526,7 @@ static int __init pci_epf_test_init(void)
WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
ret = pci_epf_register_driver(&test_driver);
if (ret) {
pr_err("failed to register pci epf test driver --> %d\n", ret);
pr_err("Failed to register pci epf test driver --> %d\n", ret);
return ret;
}

View File

@ -15,6 +15,8 @@
#include <linux/pci-epf.h>
#include <linux/pci-ep-cfs.h>
static DEFINE_MUTEX(pci_epf_mutex);
static struct bus_type pci_epf_bus_type;
static const struct device_type pci_epf_type;
@ -143,7 +145,13 @@ EXPORT_SYMBOL_GPL(pci_epf_alloc_space);
*/
void pci_epf_unregister_driver(struct pci_epf_driver *driver)
{
pci_ep_cfs_remove_epf_group(driver->group);
struct config_group *group;
mutex_lock(&pci_epf_mutex);
list_for_each_entry(group, &driver->epf_group, group_entry)
pci_ep_cfs_remove_epf_group(group);
list_del(&driver->epf_group);
mutex_unlock(&pci_epf_mutex);
driver_unregister(&driver->driver);
}
EXPORT_SYMBOL_GPL(pci_epf_unregister_driver);
@ -159,6 +167,8 @@ int __pci_epf_register_driver(struct pci_epf_driver *driver,
struct module *owner)
{
int ret;
struct config_group *group;
const struct pci_epf_device_id *id;
if (!driver->ops)
return -EINVAL;
@ -173,7 +183,16 @@ int __pci_epf_register_driver(struct pci_epf_driver *driver,
if (ret)
return ret;
driver->group = pci_ep_cfs_add_epf_group(driver->driver.name);
INIT_LIST_HEAD(&driver->epf_group);
id = driver->id_table;
while (id->name[0]) {
group = pci_ep_cfs_add_epf_group(id->name);
mutex_lock(&pci_epf_mutex);
list_add_tail(&group->group_entry, &driver->epf_group);
mutex_unlock(&pci_epf_mutex);
id++;
}
return 0;
}

View File

@ -5,13 +5,14 @@ menu "PCI host controller drivers"
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
depends on ARCH_MVEBU || ARCH_DOVE
depends on ARCH_MVEBU || ARCH_DOVE || COMPILE_TEST
depends on MVEBU_MBUS
depends on ARM
depends on OF
config PCI_AARDVARK
bool "Aardvark PCIe controller"
depends on ARCH_MVEBU && ARM64
depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
help
@ -21,7 +22,7 @@ config PCI_AARDVARK
config PCIE_XILINX_NWL
bool "NWL PCIe Core"
depends on ARCH_ZYNQMP
depends on ARCH_ZYNQMP || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
help
Say 'Y' here if you want kernel support for Xilinx
@ -32,12 +33,11 @@ config PCIE_XILINX_NWL
config PCI_FTPCI100
bool "Faraday Technology FTPCI100 PCI controller"
depends on OF
depends on ARM
default ARCH_GEMINI
config PCI_TEGRA
bool "NVIDIA Tegra PCIe controller"
depends on ARCH_TEGRA
depends on ARCH_TEGRA || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
help
Say Y here if you want support for the PCIe host controller found
@ -45,8 +45,8 @@ config PCI_TEGRA
config PCI_RCAR_GEN2
bool "Renesas R-Car Gen2 Internal PCI controller"
depends on ARM
depends on ARCH_RENESAS || COMPILE_TEST
depends on ARM
help
Say Y here if you want internal PCI support on R-Car Gen2 SoC.
There are 3 internal PCI controllers available with a single
@ -54,7 +54,7 @@ config PCI_RCAR_GEN2
config PCIE_RCAR
bool "Renesas R-Car PCIe controller"
depends on ARCH_RENESAS || (ARM && COMPILE_TEST)
depends on ARCH_RENESAS || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
help
Say Y here if you want PCIe controller support on R-Car SoCs.
@ -65,25 +65,25 @@ config PCI_HOST_COMMON
config PCI_HOST_GENERIC
bool "Generic PCI host controller"
depends on (ARM || ARM64) && OF
depends on OF
select PCI_HOST_COMMON
select IRQ_DOMAIN
select PCI_DOMAINS
help
Say Y here if you want to support a simple generic PCI host
controller, such as the one emulated by kvmtool.
config PCIE_XILINX
bool "Xilinx AXI PCIe host bridge support"
depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC)
depends on ARCH_ZYNQ || MICROBLAZE || (MIPS && PCI_DRIVERS_GENERIC) || COMPILE_TEST
help
Say 'Y' here if you want the kernel to support the Xilinx AXI PCIe
Host Bridge driver.
config PCI_XGENE
bool "X-Gene PCIe controller"
depends on ARM64
depends on ARM64 || COMPILE_TEST
depends on OF || (ACPI && PCI_QUIRKS)
select PCIEPORTBUS
help
Say Y here if you want internal PCI support on APM X-Gene SoC.
There are 5 internal PCIe ports available. Each port is GEN3 capable
@ -101,7 +101,7 @@ config PCI_XGENE_MSI
config PCI_V3_SEMI
bool "V3 Semiconductor PCI controller"
depends on OF
depends on ARM
depends on ARM || COMPILE_TEST
default ARCH_INTEGRATOR_AP
config PCI_VERSATILE
@ -147,8 +147,7 @@ config PCIE_IPROC_MSI
config PCIE_ALTERA
bool "Altera PCIe controller"
depends on ARM || NIOS2
depends on OF_PCI
depends on ARM || NIOS2 || COMPILE_TEST
select PCI_DOMAINS
help
Say Y here if you want to enable PCIe controller support on Altera
@ -164,7 +163,7 @@ config PCIE_ALTERA_MSI
config PCI_HOST_THUNDER_PEM
bool "Cavium Thunder PCIe controller to off-chip devices"
depends on ARM64
depends on ARM64 || COMPILE_TEST
depends on OF || (ACPI && PCI_QUIRKS)
select PCI_HOST_COMMON
help
@ -172,29 +171,45 @@ config PCI_HOST_THUNDER_PEM
config PCI_HOST_THUNDER_ECAM
bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon"
depends on ARM64
depends on ARM64 || COMPILE_TEST
depends on OF || (ACPI && PCI_QUIRKS)
select PCI_HOST_COMMON
help
Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
config PCIE_ROCKCHIP
tristate "Rockchip PCIe controller"
bool
depends on PCI
config PCIE_ROCKCHIP_HOST
tristate "Rockchip PCIe host controller"
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
select MFD_SYSCON
select PCIE_ROCKCHIP
help
Say Y here if you want internal PCI support on Rockchip SoC.
There is 1 internal PCIe port available to support GEN2 with
4 slots.
config PCIE_ROCKCHIP_EP
bool "Rockchip PCIe endpoint controller"
depends on ARCH_ROCKCHIP || COMPILE_TEST
depends on OF
depends on PCI_ENDPOINT
select MFD_SYSCON
select PCIE_ROCKCHIP
help
Say Y here if you want to support Rockchip PCIe controller in
endpoint mode on Rockchip SoC. There is 1 internal PCIe port
available to support GEN2 with 4 slots.
config PCIE_MEDIATEK
bool "MediaTek PCIe controller"
depends on (ARM || ARM64) && (ARCH_MEDIATEK || COMPILE_TEST)
depends on ARCH_MEDIATEK || COMPILE_TEST
depends on OF
depends on PCI
select PCIEPORTBUS
depends on PCI_MSI_IRQ_DOMAIN
help
Say Y here if you want to enable PCIe controller support on
MediaTek SoCs.

View File

@ -20,6 +20,8 @@ obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
obj-$(CONFIG_VMD) += vmd.o

View File

@ -19,6 +19,8 @@
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include "../pci.h"
/* PCIe core registers */
#define PCIE_CORE_CMD_STATUS_REG 0x4
#define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0)
@ -822,14 +824,13 @@ static int advk_pcie_parse_request_of_pci_ranges(struct advk_pcie *pcie)
{
int err, res_valid = 0;
struct device *dev = &pcie->pdev->dev;
struct device_node *np = dev->of_node;
struct resource_entry *win, *tmp;
resource_size_t iobase;
INIT_LIST_HEAD(&pcie->resources);
err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources,
&iobase);
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
&pcie->resources, &iobase);
if (err)
return err;

View File

@ -28,6 +28,8 @@
#include <linux/irq.h>
#include <linux/clk.h>
#include "../pci.h"
/*
* Special configuration registers directly in the first few words
* in I/O space.
@ -476,8 +478,8 @@ static int faraday_pci_probe(struct platform_device *pdev)
if (IS_ERR(p->base))
return PTR_ERR(p->base);
ret = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff,
&res, &io_base);
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
&res, &io_base);
if (ret)
return ret;

View File

@ -101,5 +101,18 @@ int pci_host_common_probe(struct platform_device *pdev,
return ret;
}
platform_set_drvdata(pdev, bridge->bus);
return 0;
}
int pci_host_common_remove(struct platform_device *pdev)
{
struct pci_bus *bus = platform_get_drvdata(pdev);
pci_lock_rescan_remove();
pci_stop_root_bus(bus);
pci_remove_root_bus(bus);
pci_unlock_rescan_remove();
return 0;
}

View File

@ -95,5 +95,6 @@ static struct platform_driver gen_pci_driver = {
.suppress_bind_attrs = true,
},
.probe = gen_pci_probe,
.remove = pci_host_common_remove,
};
builtin_platform_driver(gen_pci_driver);

View File

@ -433,7 +433,7 @@ enum hv_pcibus_state {
struct hv_pcibus_device {
struct pci_sysdata sysdata;
enum hv_pcibus_state state;
atomic_t remove_lock;
refcount_t remove_lock;
struct hv_device *hdev;
resource_size_t low_mmio_space;
resource_size_t high_mmio_space;
@ -488,17 +488,6 @@ enum hv_pcichild_state {
hv_pcichild_maximum
};
enum hv_pcidev_ref_reason {
hv_pcidev_ref_invalid = 0,
hv_pcidev_ref_initial,
hv_pcidev_ref_by_slot,
hv_pcidev_ref_packet,
hv_pcidev_ref_pnp,
hv_pcidev_ref_childlist,
hv_pcidev_irqdata,
hv_pcidev_ref_max
};
struct hv_pci_dev {
/* List protected by pci_rescan_remove_lock */
struct list_head list_entry;
@ -548,14 +537,41 @@ static void hv_pci_generic_compl(void *context, struct pci_response *resp,
static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
u32 wslot);
static void get_pcichild(struct hv_pci_dev *hv_pcidev,
enum hv_pcidev_ref_reason reason);
static void put_pcichild(struct hv_pci_dev *hv_pcidev,
enum hv_pcidev_ref_reason reason);
static void get_pcichild(struct hv_pci_dev *hpdev)
{
refcount_inc(&hpdev->refs);
}
static void put_pcichild(struct hv_pci_dev *hpdev)
{
if (refcount_dec_and_test(&hpdev->refs))
kfree(hpdev);
}
static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
/*
* There is no good way to get notified from vmbus_onoffer_rescind(),
* so let's use polling here, since this is not a hot path.
*/
static int wait_for_response(struct hv_device *hdev,
struct completion *comp)
{
while (true) {
if (hdev->channel->rescind) {
dev_warn_once(&hdev->device, "The device is gone.\n");
return -ENODEV;
}
if (wait_for_completion_timeout(comp, HZ / 10))
break;
}
return 0;
}
/**
* devfn_to_wslot() - Convert from Linux PCI slot to Windows
* @devfn: The Linux representation of PCI slot
@ -762,7 +778,7 @@ static int hv_pcifront_read_config(struct pci_bus *bus, unsigned int devfn,
_hv_pcifront_read_config(hpdev, where, size, val);
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
return PCIBIOS_SUCCESSFUL;
}
@ -790,7 +806,7 @@ static int hv_pcifront_write_config(struct pci_bus *bus, unsigned int devfn,
_hv_pcifront_write_config(hpdev, where, size, val);
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
return PCIBIOS_SUCCESSFUL;
}
@ -856,7 +872,7 @@ static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info,
}
hv_int_desc_free(hpdev, int_desc);
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
}
static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest,
@ -1186,13 +1202,13 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
msg->address_lo = comp.int_desc.address & 0xffffffff;
msg->data = comp.int_desc.data;
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
return;
free_int_desc:
kfree(int_desc);
drop_reference:
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
return_null_message:
msg->address_hi = 0;
msg->address_lo = 0;
@ -1283,7 +1299,6 @@ static u64 get_bar_size(u64 bar_val)
*/
static void survey_child_resources(struct hv_pcibus_device *hbus)
{
struct list_head *iter;
struct hv_pci_dev *hpdev;
resource_size_t bar_size = 0;
unsigned long flags;
@ -1309,8 +1324,7 @@ static void survey_child_resources(struct hv_pcibus_device *hbus)
* for a child device are a power of 2 in size and aligned in memory,
* so it's sufficient to just add them up without tracking alignment.
*/
list_for_each(iter, &hbus->children) {
hpdev = container_of(iter, struct hv_pci_dev, list_entry);
list_for_each_entry(hpdev, &hbus->children, list_entry) {
for (i = 0; i < 6; i++) {
if (hpdev->probed_bar[i] & PCI_BASE_ADDRESS_SPACE_IO)
dev_err(&hbus->hdev->device,
@ -1363,7 +1377,6 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
resource_size_t low_base = 0;
resource_size_t bar_size;
struct hv_pci_dev *hpdev;
struct list_head *iter;
unsigned long flags;
u64 bar_val;
u32 command;
@ -1385,9 +1398,7 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
/* Pick addresses for the BARs. */
do {
list_for_each(iter, &hbus->children) {
hpdev = container_of(iter, struct hv_pci_dev,
list_entry);
list_for_each_entry(hpdev, &hbus->children, list_entry) {
for (i = 0; i < 6; i++) {
bar_val = hpdev->probed_bar[i];
if (bar_val == 0)
@ -1508,19 +1519,6 @@ static void q_resource_requirements(void *context, struct pci_response *resp,
complete(&completion->host_event);
}
static void get_pcichild(struct hv_pci_dev *hpdev,
enum hv_pcidev_ref_reason reason)
{
refcount_inc(&hpdev->refs);
}
static void put_pcichild(struct hv_pci_dev *hpdev,
enum hv_pcidev_ref_reason reason)
{
if (refcount_dec_and_test(&hpdev->refs))
kfree(hpdev);
}
/**
* new_pcichild_device() - Create a new child device
* @hbus: The internal struct tracking this root PCI bus.
@ -1568,24 +1566,14 @@ static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus,
if (ret)
goto error;
wait_for_completion(&comp_pkt.host_event);
if (wait_for_response(hbus->hdev, &comp_pkt.host_event))
goto error;
hpdev->desc = *desc;
refcount_set(&hpdev->refs, 1);
get_pcichild(hpdev, hv_pcidev_ref_childlist);
get_pcichild(hpdev);
spin_lock_irqsave(&hbus->device_list_lock, flags);
/*
* When a device is being added to the bus, we set the PCI domain
* number to be the device serial number, which is non-zero and
* unique on the same VM. The serial numbers start with 1, and
* increase by 1 for each device. So device names including this
* can have shorter names than based on the bus instance UUID.
* Only the first device serial number is used for domain, so the
* domain number will not change after the first device is added.
*/
if (list_empty(&hbus->children))
hbus->sysdata.domain = desc->ser;
list_add_tail(&hpdev->list_entry, &hbus->children);
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
return hpdev;
@ -1618,7 +1606,7 @@ static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
list_for_each_entry(iter, &hbus->children, list_entry) {
if (iter->desc.win_slot.slot == wslot) {
hpdev = iter;
get_pcichild(hpdev, hv_pcidev_ref_by_slot);
get_pcichild(hpdev);
break;
}
}
@ -1654,7 +1642,6 @@ static void pci_devices_present_work(struct work_struct *work)
{
u32 child_no;
bool found;
struct list_head *iter;
struct pci_function_description *new_desc;
struct hv_pci_dev *hpdev;
struct hv_pcibus_device *hbus;
@ -1691,10 +1678,8 @@ static void pci_devices_present_work(struct work_struct *work)
/* First, mark all existing children as reported missing. */
spin_lock_irqsave(&hbus->device_list_lock, flags);
list_for_each(iter, &hbus->children) {
hpdev = container_of(iter, struct hv_pci_dev,
list_entry);
hpdev->reported_missing = true;
list_for_each_entry(hpdev, &hbus->children, list_entry) {
hpdev->reported_missing = true;
}
spin_unlock_irqrestore(&hbus->device_list_lock, flags);
@ -1704,11 +1689,8 @@ static void pci_devices_present_work(struct work_struct *work)
new_desc = &dr->func[child_no];
spin_lock_irqsave(&hbus->device_list_lock, flags);
list_for_each(iter, &hbus->children) {
hpdev = container_of(iter, struct hv_pci_dev,
list_entry);
if ((hpdev->desc.win_slot.slot ==
new_desc->win_slot.slot) &&
list_for_each_entry(hpdev, &hbus->children, list_entry) {
if ((hpdev->desc.win_slot.slot == new_desc->win_slot.slot) &&
(hpdev->desc.v_id == new_desc->v_id) &&
(hpdev->desc.d_id == new_desc->d_id) &&
(hpdev->desc.ser == new_desc->ser)) {
@ -1730,12 +1712,10 @@ static void pci_devices_present_work(struct work_struct *work)
spin_lock_irqsave(&hbus->device_list_lock, flags);
do {
found = false;
list_for_each(iter, &hbus->children) {
hpdev = container_of(iter, struct hv_pci_dev,
list_entry);
list_for_each_entry(hpdev, &hbus->children, list_entry) {
if (hpdev->reported_missing) {
found = true;
put_pcichild(hpdev, hv_pcidev_ref_childlist);
put_pcichild(hpdev);
list_move_tail(&hpdev->list_entry, &removed);
break;
}
@ -1748,7 +1728,7 @@ static void pci_devices_present_work(struct work_struct *work)
hpdev = list_first_entry(&removed, struct hv_pci_dev,
list_entry);
list_del(&hpdev->list_entry);
put_pcichild(hpdev, hv_pcidev_ref_initial);
put_pcichild(hpdev);
}
switch (hbus->state) {
@ -1883,8 +1863,8 @@ static void hv_eject_device_work(struct work_struct *work)
sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
VM_PKT_DATA_INBAND, 0);
put_pcichild(hpdev, hv_pcidev_ref_childlist);
put_pcichild(hpdev, hv_pcidev_ref_pnp);
put_pcichild(hpdev);
put_pcichild(hpdev);
put_hvpcibus(hpdev->hbus);
}
@ -1899,7 +1879,7 @@ static void hv_eject_device_work(struct work_struct *work)
static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
{
hpdev->state = hv_pcichild_ejecting;
get_pcichild(hpdev, hv_pcidev_ref_pnp);
get_pcichild(hpdev);
INIT_WORK(&hpdev->wrk, hv_eject_device_work);
get_hvpcibus(hpdev->hbus);
queue_work(hpdev->hbus->wq, &hpdev->wrk);
@ -1999,8 +1979,7 @@ static void hv_pci_onchannelcallback(void *context)
dev_message->wslot.slot);
if (hpdev) {
hv_pci_eject_device(hpdev);
put_pcichild(hpdev,
hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
}
break;
@ -2069,15 +2048,16 @@ static int hv_pci_protocol_negotiation(struct hv_device *hdev)
sizeof(struct pci_version_request),
(unsigned long)pkt, VM_PKT_DATA_INBAND,
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (!ret)
ret = wait_for_response(hdev, &comp_pkt.host_event);
if (ret) {
dev_err(&hdev->device,
"PCI Pass-through VSP failed sending version reqquest: %#x",
"PCI Pass-through VSP failed to request version: %d",
ret);
goto exit;
}
wait_for_completion(&comp_pkt.host_event);
if (comp_pkt.completion_status >= 0) {
pci_protocol_version = pci_protocol_versions[i];
dev_info(&hdev->device,
@ -2286,11 +2266,12 @@ static int hv_pci_enter_d0(struct hv_device *hdev)
ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry),
(unsigned long)pkt, VM_PKT_DATA_INBAND,
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (!ret)
ret = wait_for_response(hdev, &comp_pkt.host_event);
if (ret)
goto exit;
wait_for_completion(&comp_pkt.host_event);
if (comp_pkt.completion_status < 0) {
dev_err(&hdev->device,
"PCI Pass-through VSP failed D0 Entry with status %x\n",
@ -2330,11 +2311,10 @@ static int hv_pci_query_relations(struct hv_device *hdev)
ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
0, VM_PKT_DATA_INBAND, 0);
if (ret)
return ret;
if (!ret)
ret = wait_for_response(hdev, &comp);
wait_for_completion(&comp);
return 0;
return ret;
}
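/*
* A minimal sketch of such a wait_for_response() helper, assuming it
* polls the channel's rescind flag so that a revoked channel cannot
* block the caller forever (this matches the calls added above):
*/
static int wait_for_response(struct hv_device *hdev,
			     struct completion *comp)
{
	while (true) {
		if (hdev->channel->rescind) {
			dev_warn_once(&hdev->device, "The device is gone.\n");
			return -ENODEV;
		}

		if (wait_for_completion_timeout(comp, HZ / 10))
			break;
	}

	return 0;
}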
/**
@ -2398,17 +2378,17 @@ static int hv_send_resources_allocated(struct hv_device *hdev)
PCI_RESOURCES_ASSIGNED2;
res_assigned2->wslot.slot = hpdev->desc.win_slot.slot;
}
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
ret = vmbus_sendpacket(hdev->channel, &pkt->message,
size_res, (unsigned long)pkt,
VM_PKT_DATA_INBAND,
VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
if (!ret)
ret = wait_for_response(hdev, &comp_pkt.host_event);
if (ret)
break;
wait_for_completion(&comp_pkt.host_event);
if (comp_pkt.completion_status < 0) {
ret = -EPROTO;
dev_err(&hdev->device,
@ -2446,7 +2426,7 @@ static int hv_send_resources_released(struct hv_device *hdev)
pkt.message_type.type = PCI_RESOURCES_RELEASED;
pkt.wslot.slot = hpdev->desc.win_slot.slot;
put_pcichild(hpdev, hv_pcidev_ref_by_slot);
put_pcichild(hpdev);
ret = vmbus_sendpacket(hdev->channel, &pkt, sizeof(pkt), 0,
VM_PKT_DATA_INBAND, 0);
@ -2459,12 +2439,12 @@ static int hv_send_resources_released(struct hv_device *hdev)
static void get_hvpcibus(struct hv_pcibus_device *hbus)
{
atomic_inc(&hbus->remove_lock);
refcount_inc(&hbus->remove_lock);
}
static void put_hvpcibus(struct hv_pcibus_device *hbus)
{
if (atomic_dec_and_test(&hbus->remove_lock))
if (refcount_dec_and_test(&hbus->remove_lock))
complete(&hbus->remove_event);
}
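/*
* The switch from atomic_t to refcount_t above buys overflow and
* use-after-free checking. A generic sketch of the same pattern,
* using hypothetical names:
*/
struct obj {
	refcount_t ref;
	struct completion released;
};

static void obj_init(struct obj *o)
{
	refcount_set(&o->ref, 1);	/* the initial reference */
	init_completion(&o->released);
}

static void obj_get(struct obj *o)
{
	refcount_inc(&o->ref);		/* WARNs on 0 and on saturation */
}

static void obj_put(struct obj *o)
{
	if (refcount_dec_and_test(&o->ref))
		complete(&o->released);	/* last reference dropped */
}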
@ -2508,7 +2488,7 @@ static int hv_pci_probe(struct hv_device *hdev,
hdev->dev_instance.b[8] << 8;
hbus->hdev = hdev;
atomic_inc(&hbus->remove_lock);
refcount_set(&hbus->remove_lock, 1);
INIT_LIST_HEAD(&hbus->children);
INIT_LIST_HEAD(&hbus->dr_list);
INIT_LIST_HEAD(&hbus->resources_for_children);


@ -21,6 +21,8 @@
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include "../pci.h"
/*
* PCIe unit register offsets.
*/


@ -21,6 +21,8 @@
#include <linux/sizes.h>
#include <linux/slab.h>
#include "../pci.h"
/* AHB-PCI Bridge PCI communication registers */
#define RCAR_AHBPCI_PCICOM_OFFSET 0x800


@ -40,6 +40,8 @@
#include <soc/tegra/cpuidle.h>
#include <soc/tegra/pmc.h>
#include "../pci.h"
#define INT_PCI_MSI_NR (8 * 32)
/* register definitions */


@ -33,6 +33,8 @@
#include <linux/regmap.h>
#include <linux/clk.h>
#include "../pci.h"
#define V3_PCI_VENDOR 0x00000000
#define V3_PCI_DEVICE 0x00000002
#define V3_PCI_CMD 0x00000004
@ -791,7 +793,8 @@ static int v3_pci_probe(struct platform_device *pdev)
if (IS_ERR(v3->config_base))
return PTR_ERR(v3->config_base);
ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &io_base);
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
&io_base);
if (ret)
return ret;
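/*
* The same conversion repeats in several host drivers in this series:
* the devm-managed helper takes a struct device and releases the
* resource list automatically on probe failure or driver detach. A
* sketch of the call pattern, assuming the v4.18 signature:
*/
static int demo_parse_host_resources(struct device *dev)
{
	resource_size_t io_base;
	LIST_HEAD(res);
	int ret;

	ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
						    &res, &io_base);
	if (ret)
		return ret;

	/* no pci_free_resource_list() needed on later error paths */
	return 0;
}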


@ -15,6 +15,8 @@
#include <linux/pci.h>
#include <linux/platform_device.h>
#include "../pci.h"
static void __iomem *versatile_pci_base;
static void __iomem *versatile_cfg_base[2];
@ -64,11 +66,10 @@ static int versatile_pci_parse_request_of_pci_ranges(struct device *dev,
struct list_head *res)
{
int err, mem = 1, res_valid = 0;
struct device_node *np = dev->of_node;
resource_size_t iobase;
struct resource_entry *win, *tmp;
err = of_pci_get_host_bridge_resources(np, 0, 0xff, res, &iobase);
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, res, &iobase);
if (err)
return err;


@ -22,6 +22,8 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "../pci.h"
#define PCIECORE_CTLANDSTATUS 0x50
#define PIM1_1L 0x80
#define IBAR2 0x98
@ -632,7 +634,8 @@ static int xgene_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = of_pci_get_host_bridge_resources(dn, 0, 0xff, &res, &iobase);
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
&iobase);
if (ret)
return ret;


@ -17,6 +17,8 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "../pci.h"
#define RP_TX_REG0 0x2000
#define RP_TX_REG1 0x2004
#define RP_TX_CNTRL 0x2008
@ -488,11 +490,10 @@ static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie)
{
int err, res_valid = 0;
struct device *dev = &pcie->pdev->dev;
struct device_node *np = dev->of_node;
struct resource_entry *win;
err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources,
NULL);
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
&pcie->resources, NULL);
if (err)
return err;


@ -16,6 +16,7 @@
#include <linux/of_platform.h>
#include <linux/phy/phy.h>
#include "../pci.h"
#include "pcie-iproc.h"
static const struct of_device_id iproc_pcie_of_match_table[] = {
@ -99,8 +100,8 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
pcie->phy = NULL;
}
ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &resources,
&iobase);
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &resources,
&iobase);
if (ret) {
dev_err(dev, "unable to get PCI host bridge resources\n");
return ret;


@ -11,8 +11,10 @@
#include <linux/delay.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
@ -22,6 +24,8 @@
#include <linux/pm_runtime.h>
#include <linux/reset.h>
#include "../pci.h"
/* PCIe shared registers */
#define PCIE_SYS_CFG 0x00
#define PCIE_INT_ENABLE 0x0c
@ -66,6 +70,10 @@
/* PCIe V2 per-port registers */
#define PCIE_MSI_VECTOR 0x0c0
#define PCIE_CONF_VEND_ID 0x100
#define PCIE_CONF_CLASS_ID 0x106
#define PCIE_INT_MASK 0x420
#define INTX_MASK GENMASK(19, 16)
#define INTX_SHIFT 16
@ -125,13 +133,13 @@ struct mtk_pcie_port;
/**
* struct mtk_pcie_soc - differentiate between host generations
* @has_msi: whether this host supports MSI interrupts or not
* @need_fix_class_id: whether this host's class ID needed to be fixed or not
* @ops: pointer to configuration access functions
* @startup: pointer to controller setting functions
* @setup_irq: pointer to initialize IRQ functions
*/
struct mtk_pcie_soc {
bool has_msi;
bool need_fix_class_id;
struct pci_ops *ops;
int (*startup)(struct mtk_pcie_port *port);
int (*setup_irq)(struct mtk_pcie_port *port, struct device_node *node);
@ -155,7 +163,9 @@ struct mtk_pcie_soc {
* @lane: lane count
* @slot: port slot
* @irq_domain: legacy INTx IRQ domain
* @inner_domain: inner IRQ domain
* @msi_domain: MSI IRQ domain
* @lock: protect the msi_irq_in_use bitmap
* @msi_irq_in_use: bit map for assigned MSI IRQ
*/
struct mtk_pcie_port {
@ -173,7 +183,9 @@ struct mtk_pcie_port {
u32 lane;
u32 slot;
struct irq_domain *irq_domain;
struct irq_domain *inner_domain;
struct irq_domain *msi_domain;
struct mutex lock;
DECLARE_BITMAP(msi_irq_in_use, MTK_MSI_IRQS_NUM);
};
@ -375,6 +387,7 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
{
struct mtk_pcie *pcie = port->pcie;
struct resource *mem = &pcie->mem;
const struct mtk_pcie_soc *soc = port->pcie->soc;
u32 val;
size_t size;
int err;
@ -403,6 +416,15 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
PCIE_MAC_SRSTB | PCIE_CRSTB;
writel(val, port->base + PCIE_RST_CTRL);
/* Set up vendor ID and class code */
if (soc->need_fix_class_id) {
val = PCI_VENDOR_ID_MEDIATEK;
writew(val, port->base + PCIE_CONF_VEND_ID);
val = PCI_CLASS_BRIDGE_HOST;
writew(val, port->base + PCIE_CONF_CLASS_ID);
}
/* 100ms timeout value should be enough for Gen1/2 training */
err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val,
!!(val & PCIE_PORT_LINKUP_V2), 20,
@ -430,103 +452,130 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
return 0;
}
static int mtk_pcie_msi_alloc(struct mtk_pcie_port *port)
static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
int msi;
msi = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM);
if (msi < MTK_MSI_IRQS_NUM)
set_bit(msi, port->msi_irq_in_use);
else
return -ENOSPC;
return msi;
}
static void mtk_pcie_msi_free(struct mtk_pcie_port *port, unsigned long hwirq)
{
clear_bit(hwirq, port->msi_irq_in_use);
}
static int mtk_pcie_msi_setup_irq(struct msi_controller *chip,
struct pci_dev *pdev, struct msi_desc *desc)
{
struct mtk_pcie_port *port;
struct msi_msg msg;
unsigned int irq;
int hwirq;
phys_addr_t msg_addr;
port = mtk_pcie_find_port(pdev->bus, pdev->devfn);
if (!port)
return -EINVAL;
hwirq = mtk_pcie_msi_alloc(port);
if (hwirq < 0)
return hwirq;
irq = irq_create_mapping(port->msi_domain, hwirq);
if (!irq) {
mtk_pcie_msi_free(port, hwirq);
return -EINVAL;
}
chip->dev = &pdev->dev;
irq_set_msi_desc(irq, desc);
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
phys_addr_t addr;
/* MT2712/MT7622 only support 32-bit MSI addresses */
msg_addr = virt_to_phys(port->base + PCIE_MSI_VECTOR);
msg.address_hi = 0;
msg.address_lo = lower_32_bits(msg_addr);
msg.data = hwirq;
addr = virt_to_phys(port->base + PCIE_MSI_VECTOR);
msg->address_hi = 0;
msg->address_lo = lower_32_bits(addr);
pci_write_msi_msg(irq, &msg);
msg->data = data->hwirq;
dev_dbg(port->pcie->dev, "msi#%d address_hi %#x address_lo %#x\n",
(int)data->hwirq, msg->address_hi, msg->address_lo);
}
static int mtk_msi_set_affinity(struct irq_data *irq_data,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static void mtk_msi_ack_irq(struct irq_data *data)
{
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(data);
u32 hwirq = data->hwirq;
writel(1 << hwirq, port->base + PCIE_IMSI_STATUS);
}
static struct irq_chip mtk_msi_bottom_irq_chip = {
.name = "MTK MSI",
.irq_compose_msi_msg = mtk_compose_msi_msg,
.irq_set_affinity = mtk_msi_set_affinity,
.irq_ack = mtk_msi_ack_irq,
};
static int mtk_pcie_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
unsigned int nr_irqs, void *args)
{
struct mtk_pcie_port *port = domain->host_data;
unsigned long bit;
WARN_ON(nr_irqs != 1);
mutex_lock(&port->lock);
bit = find_first_zero_bit(port->msi_irq_in_use, MTK_MSI_IRQS_NUM);
if (bit >= MTK_MSI_IRQS_NUM) {
mutex_unlock(&port->lock);
return -ENOSPC;
}
__set_bit(bit, port->msi_irq_in_use);
mutex_unlock(&port->lock);
irq_domain_set_info(domain, virq, bit, &mtk_msi_bottom_irq_chip,
domain->host_data, handle_edge_irq,
NULL, NULL);
return 0;
}
static void mtk_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
static void mtk_pcie_irq_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
struct pci_dev *pdev = to_pci_dev(chip->dev);
struct irq_data *d = irq_get_irq_data(irq);
irq_hw_number_t hwirq = irqd_to_hwirq(d);
struct mtk_pcie_port *port;
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct mtk_pcie_port *port = irq_data_get_irq_chip_data(d);
port = mtk_pcie_find_port(pdev->bus, pdev->devfn);
if (!port)
return;
mutex_lock(&port->lock);
irq_dispose_mapping(irq);
mtk_pcie_msi_free(port, hwirq);
}
if (!test_bit(d->hwirq, port->msi_irq_in_use))
dev_err(port->pcie->dev, "trying to free unused MSI#%lu\n",
d->hwirq);
else
__clear_bit(d->hwirq, port->msi_irq_in_use);
static struct msi_controller mtk_pcie_msi_chip = {
.setup_irq = mtk_pcie_msi_setup_irq,
.teardown_irq = mtk_msi_teardown_irq,
};
mutex_unlock(&port->lock);
static struct irq_chip mtk_msi_irq_chip = {
.name = "MTK PCIe MSI",
.irq_enable = pci_msi_unmask_irq,
.irq_disable = pci_msi_mask_irq,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static int mtk_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &mtk_msi_irq_chip, handle_simple_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
irq_domain_free_irqs_parent(domain, virq, nr_irqs);
}
static const struct irq_domain_ops msi_domain_ops = {
.map = mtk_pcie_msi_map,
.alloc = mtk_pcie_irq_domain_alloc,
.free = mtk_pcie_irq_domain_free,
};
static struct irq_chip mtk_msi_irq_chip = {
.name = "MTK PCIe MSI",
.irq_ack = irq_chip_ack_parent,
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static struct msi_domain_info mtk_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_PCI_MSIX),
.chip = &mtk_msi_irq_chip,
};
static int mtk_pcie_allocate_msi_domains(struct mtk_pcie_port *port)
{
struct fwnode_handle *fwnode = of_node_to_fwnode(port->pcie->dev->of_node);
mutex_init(&port->lock);
port->inner_domain = irq_domain_create_linear(fwnode, MTK_MSI_IRQS_NUM,
&msi_domain_ops, port);
if (!port->inner_domain) {
dev_err(port->pcie->dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
port->msi_domain = pci_msi_create_irq_domain(fwnode, &mtk_msi_domain_info,
port->inner_domain);
if (!port->msi_domain) {
dev_err(port->pcie->dev, "failed to create MSI domain\n");
irq_domain_remove(port->inner_domain);
return -ENOMEM;
}
return 0;
}
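/*
* Hypothetical teardown mirroring the allocation above: the stacked
* PCI/MSI domain must be removed before the inner domain it sits on.
*/
static void mtk_pcie_free_msi_domains_sketch(struct mtk_pcie_port *port)
{
	if (port->msi_domain)
		irq_domain_remove(port->msi_domain);
	if (port->inner_domain)
		irq_domain_remove(port->inner_domain);
}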
static void mtk_pcie_enable_msi(struct mtk_pcie_port *port)
{
u32 val;
@ -559,6 +608,7 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,
{
struct device *dev = port->pcie->dev;
struct device_node *pcie_intc_node;
int ret;
/* Setup INTx */
pcie_intc_node = of_get_next_child(node, NULL);
@ -575,27 +625,28 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
port->msi_domain = irq_domain_add_linear(node, MTK_MSI_IRQS_NUM,
&msi_domain_ops,
&mtk_pcie_msi_chip);
if (!port->msi_domain) {
dev_err(dev, "failed to create MSI IRQ domain\n");
return -ENODEV;
}
ret = mtk_pcie_allocate_msi_domains(port);
if (ret)
return ret;
mtk_pcie_enable_msi(port);
}
return 0;
}
static irqreturn_t mtk_pcie_intr_handler(int irq, void *data)
static void mtk_pcie_intr_handler(struct irq_desc *desc)
{
struct mtk_pcie_port *port = (struct mtk_pcie_port *)data;
struct mtk_pcie_port *port = irq_desc_get_handler_data(desc);
struct irq_chip *irqchip = irq_desc_get_chip(desc);
unsigned long status;
u32 virq;
u32 bit = INTX_SHIFT;
while ((status = readl(port->base + PCIE_INT_STATUS)) & INTX_MASK) {
chained_irq_enter(irqchip, desc);
status = readl(port->base + PCIE_INT_STATUS);
if (status & INTX_MASK) {
for_each_set_bit_from(bit, &status, PCI_NUM_INTX + INTX_SHIFT) {
/* Clear the INTx */
writel(1 << bit, port->base + PCIE_INT_STATUS);
@ -606,14 +657,12 @@ static irqreturn_t mtk_pcie_intr_handler(int irq, void *data)
}
if (IS_ENABLED(CONFIG_PCI_MSI)) {
while ((status = readl(port->base + PCIE_INT_STATUS)) & MSI_STATUS) {
if (status & MSI_STATUS) {
unsigned long imsi_status;
while ((imsi_status = readl(port->base + PCIE_IMSI_STATUS))) {
for_each_set_bit(bit, &imsi_status, MTK_MSI_IRQS_NUM) {
/* Clear the MSI */
writel(1 << bit, port->base + PCIE_IMSI_STATUS);
virq = irq_find_mapping(port->msi_domain, bit);
virq = irq_find_mapping(port->inner_domain, bit);
generic_handle_irq(virq);
}
}
@ -622,7 +671,9 @@ static irqreturn_t mtk_pcie_intr_handler(int irq, void *data)
}
}
return IRQ_HANDLED;
chained_irq_exit(irqchip, desc);
return;
}
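/*
* The handler above is now a chained flow handler rather than a
* devm_request_irq() handler: it runs in the context of the parent
* interrupt and must bracket the demux with chained_irq_enter() and
* chained_irq_exit(). A minimal skeleton of the pattern:
*/
static void demo_chained_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);

	chained_irq_enter(chip, desc);	/* ack/mask the parent IRQ */
	/* demux: read status, generic_handle_irq() each child IRQ */
	chained_irq_exit(chip, desc);	/* eoi/unmask the parent IRQ */
}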
static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
@ -633,20 +684,15 @@ static int mtk_pcie_setup_irq(struct mtk_pcie_port *port,
struct platform_device *pdev = to_platform_device(dev);
int err, irq;
irq = platform_get_irq(pdev, port->slot);
err = devm_request_irq(dev, irq, mtk_pcie_intr_handler,
IRQF_SHARED, "mtk-pcie", port);
if (err) {
dev_err(dev, "unable to request IRQ %d\n", irq);
return err;
}
err = mtk_pcie_init_irq_domain(port, node);
if (err) {
dev_err(dev, "failed to init PCIe IRQ domain\n");
return err;
}
irq = platform_get_irq(pdev, port->slot);
irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port);
return 0;
}
@ -1080,8 +1126,6 @@ static int mtk_pcie_register_host(struct pci_host_bridge *host)
host->map_irq = of_irq_parse_and_map_pci;
host->swizzle_irq = pci_common_swizzle;
host->sysdata = pcie;
if (IS_ENABLED(CONFIG_PCI_MSI) && pcie->soc->has_msi)
host->msi = &mtk_pcie_msi_chip;
err = pci_scan_root_bus_bridge(host);
if (err < 0)
@ -1142,8 +1186,14 @@ static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
.startup = mtk_pcie_startup_port,
};
static const struct mtk_pcie_soc mtk_pcie_soc_v2 = {
.has_msi = true,
static const struct mtk_pcie_soc mtk_pcie_soc_mt2712 = {
.ops = &mtk_pcie_ops_v2,
.startup = mtk_pcie_startup_port_v2,
.setup_irq = mtk_pcie_setup_irq,
};
static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
.need_fix_class_id = true,
.ops = &mtk_pcie_ops_v2,
.startup = mtk_pcie_startup_port_v2,
.setup_irq = mtk_pcie_setup_irq,
@ -1152,8 +1202,8 @@ static const struct mtk_pcie_soc mtk_pcie_soc_v2 = {
static const struct of_device_id mtk_pcie_ids[] = {
{ .compatible = "mediatek,mt2701-pcie", .data = &mtk_pcie_soc_v1 },
{ .compatible = "mediatek,mt7623-pcie", .data = &mtk_pcie_soc_v1 },
{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_v2 },
{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_v2 },
{ .compatible = "mediatek,mt2712-pcie", .data = &mtk_pcie_soc_mt2712 },
{ .compatible = "mediatek,mt7622-pcie", .data = &mtk_pcie_soc_mt7622 },
{},
};


@ -0,0 +1,866 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for Mobiveil PCIe Host controller
*
* Copyright (c) 2018 Mobiveil Inc.
* Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
*/
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* register offsets and bit positions */
/*
* translation tables are grouped into windows; the registers of each
* window are grouped into blocks of 4 or 16 registers
*/
#define PAB_REG_BLOCK_SIZE 16
#define PAB_EXT_REG_BLOCK_SIZE 4
#define PAB_REG_ADDR(offset, win) (offset + (win * PAB_REG_BLOCK_SIZE))
#define PAB_EXT_REG_ADDR(offset, win) (offset + (win * PAB_EXT_REG_BLOCK_SIZE))
#define LTSSM_STATUS 0x0404
#define LTSSM_STATUS_L0_MASK 0x3f
#define LTSSM_STATUS_L0 0x2d
#define PAB_CTRL 0x0808
#define AMBA_PIO_ENABLE_SHIFT 0
#define PEX_PIO_ENABLE_SHIFT 1
#define PAGE_SEL_SHIFT 13
#define PAGE_SEL_MASK 0x3f
#define PAGE_LO_MASK 0x3ff
#define PAGE_SEL_EN 0xc00
#define PAGE_SEL_OFFSET_SHIFT 10
#define PAB_AXI_PIO_CTRL 0x0840
#define APIO_EN_MASK 0xf
#define PAB_PEX_PIO_CTRL 0x08c0
#define PIO_ENABLE_SHIFT 0
#define PAB_INTP_AMBA_MISC_ENB 0x0b0c
#define PAB_INTP_AMBA_MISC_STAT 0x0b1c
#define PAB_INTP_INTX_MASK 0x01e0
#define PAB_INTP_MSI_MASK 0x8
#define PAB_AXI_AMAP_CTRL(win) PAB_REG_ADDR(0x0ba0, win)
#define WIN_ENABLE_SHIFT 0
#define WIN_TYPE_SHIFT 1
#define PAB_EXT_AXI_AMAP_SIZE(win) PAB_EXT_REG_ADDR(0xbaf0, win)
#define PAB_AXI_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x0ba4, win)
#define AXI_WINDOW_ALIGN_MASK 3
#define PAB_AXI_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x0ba8, win)
#define PAB_BUS_SHIFT 24
#define PAB_DEVICE_SHIFT 19
#define PAB_FUNCTION_SHIFT 16
#define PAB_AXI_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x0bac, win)
#define PAB_INTP_AXI_PIO_CLASS 0x474
#define PAB_PEX_AMAP_CTRL(win) PAB_REG_ADDR(0x4ba0, win)
#define AMAP_CTRL_EN_SHIFT 0
#define AMAP_CTRL_TYPE_SHIFT 1
#define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win)
#define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win)
#define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win)
#define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win)
/* starting offset of INTX bits in status register */
#define PAB_INTX_START 5
/* supported number of MSI interrupts */
#define PCI_NUM_MSI 16
/* MSI registers */
#define MSI_BASE_LO_OFFSET 0x04
#define MSI_BASE_HI_OFFSET 0x08
#define MSI_SIZE_OFFSET 0x0c
#define MSI_ENABLE_OFFSET 0x14
#define MSI_STATUS_OFFSET 0x18
#define MSI_DATA_OFFSET 0x20
#define MSI_ADDR_L_OFFSET 0x24
#define MSI_ADDR_H_OFFSET 0x28
/* outbound and inbound window definitions */
#define WIN_NUM_0 0
#define WIN_NUM_1 1
#define CFG_WINDOW_TYPE 0
#define IO_WINDOW_TYPE 1
#define MEM_WINDOW_TYPE 2
#define IB_WIN_SIZE ((u64)256 * 1024 * 1024 * 1024) /* 256 GB; u64 cast avoids int overflow */
#define MAX_PIO_WINDOWS 8
/* Parameters for the link-up wait routine */
#define LINK_WAIT_MAX_RETRIES 10
#define LINK_WAIT_MIN 90000
#define LINK_WAIT_MAX 100000
struct mobiveil_msi { /* MSI information */
struct mutex lock; /* protect bitmap variable */
struct irq_domain *msi_domain;
struct irq_domain *dev_domain;
phys_addr_t msi_pages_phys;
int num_of_vectors;
DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI);
};
struct mobiveil_pcie {
struct platform_device *pdev;
struct list_head resources;
void __iomem *config_axi_slave_base; /* endpoint config base */
void __iomem *csr_axi_slave_base; /* root port config base */
void __iomem *apb_csr_base; /* MSI register base */
void __iomem *pcie_reg_base; /* Physical PCIe Controller Base */
struct irq_domain *intx_domain;
raw_spinlock_t intx_mask_lock;
int irq;
int apio_wins;
int ppio_wins;
int ob_wins_configured; /* configured outbound windows */
int ib_wins_configured; /* configured inbound windows */
struct resource *ob_io_res;
char root_bus_nr;
struct mobiveil_msi msi;
};
static inline void csr_writel(struct mobiveil_pcie *pcie, const u32 value,
const u32 reg)
{
writel_relaxed(value, pcie->csr_axi_slave_base + reg);
}
static inline u32 csr_readl(struct mobiveil_pcie *pcie, const u32 reg)
{
return readl_relaxed(pcie->csr_axi_slave_base + reg);
}
static bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie)
{
return (csr_readl(pcie, LTSSM_STATUS) &
LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0;
}
static bool mobiveil_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
{
struct mobiveil_pcie *pcie = bus->sysdata;
/* Only one device down on each root port */
if ((bus->number == pcie->root_bus_nr) && (devfn > 0))
return false;
/*
* Do not read more than one device on the bus directly
* attached to RC
*/
if ((bus->primary == pcie->root_bus_nr) && (devfn > 0))
return false;
return true;
}
/*
* mobiveil_pcie_map_bus - routine to get the configuration base of either
* root port or endpoint
*/
static void __iomem *mobiveil_pcie_map_bus(struct pci_bus *bus,
unsigned int devfn, int where)
{
struct mobiveil_pcie *pcie = bus->sysdata;
if (!mobiveil_pcie_valid_device(bus, devfn))
return NULL;
if (bus->number == pcie->root_bus_nr) {
/* RC config access */
return pcie->csr_axi_slave_base + where;
}
/*
* EP config access (in Config/APIO space)
* Program PEX Address base (31..16 bits) with appropriate value
* (BDF) in PAB_AXI_AMAP_PEX_WIN_L0 Register.
* Relies on pci_lock serialization
*/
csr_writel(pcie, bus->number << PAB_BUS_SHIFT |
PCI_SLOT(devfn) << PAB_DEVICE_SHIFT |
PCI_FUNC(devfn) << PAB_FUNCTION_SHIFT,
PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0));
return pcie->config_axi_slave_base + where;
}
static struct pci_ops mobiveil_pcie_ops = {
.map_bus = mobiveil_pcie_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
};
static void mobiveil_pcie_isr(struct irq_desc *desc)
{
struct irq_chip *chip = irq_desc_get_chip(desc);
struct mobiveil_pcie *pcie = irq_desc_get_handler_data(desc);
struct device *dev = &pcie->pdev->dev;
struct mobiveil_msi *msi = &pcie->msi;
u32 msi_data, msi_addr_lo, msi_addr_hi;
u32 intr_status, msi_status;
unsigned long shifted_status;
u32 bit, virq, val, mask;
/*
* The core provides a single interrupt for both INTx/MSI messages.
* So we'll read both INTx and MSI status
*/
chained_irq_enter(chip, desc);
/* read INTx status */
val = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT);
mask = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
intr_status = val & mask;
/* Handle INTx */
if (intr_status & PAB_INTP_INTX_MASK) {
shifted_status = csr_readl(pcie, PAB_INTP_AMBA_MISC_STAT) >>
PAB_INTX_START;
do {
for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) {
virq = irq_find_mapping(pcie->intx_domain,
bit + 1);
if (virq)
generic_handle_irq(virq);
else
dev_err_ratelimited(dev,
"unexpected IRQ, INT%d\n", bit);
/* clear interrupt */
csr_writel(pcie,
shifted_status << PAB_INTX_START,
PAB_INTP_AMBA_MISC_STAT);
}
} while ((shifted_status >> PAB_INTX_START) != 0);
}
/* read extra MSI status register */
msi_status = readl_relaxed(pcie->apb_csr_base + MSI_STATUS_OFFSET);
/* handle MSI interrupts */
while (msi_status & 1) {
msi_data = readl_relaxed(pcie->apb_csr_base
+ MSI_DATA_OFFSET);
/*
* The MSI_STATUS_OFFSET register is only cleared once both the MSI
* data and the MSI address have been popped from the hardware FIFO,
* hence the two dummy address reads that follow.
*/
msi_addr_lo = readl_relaxed(pcie->apb_csr_base +
MSI_ADDR_L_OFFSET);
msi_addr_hi = readl_relaxed(pcie->apb_csr_base +
MSI_ADDR_H_OFFSET);
dev_dbg(dev, "MSI registers, data: %08x, addr: %08x:%08x\n",
msi_data, msi_addr_hi, msi_addr_lo);
virq = irq_find_mapping(msi->dev_domain, msi_data);
if (virq)
generic_handle_irq(virq);
msi_status = readl_relaxed(pcie->apb_csr_base +
MSI_STATUS_OFFSET);
}
/* Clear the interrupt status */
csr_writel(pcie, intr_status, PAB_INTP_AMBA_MISC_STAT);
chained_irq_exit(chip, desc);
}
static int mobiveil_pcie_parse_dt(struct mobiveil_pcie *pcie)
{
struct device *dev = &pcie->pdev->dev;
struct platform_device *pdev = pcie->pdev;
struct device_node *node = dev->of_node;
struct resource *res;
const char *type;
type = of_get_property(node, "device_type", NULL);
if (!type || strcmp(type, "pci")) {
dev_err(dev, "invalid \"device_type\" %s\n", type);
return -EINVAL;
}
/* map config resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"config_axi_slave");
pcie->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pcie->config_axi_slave_base))
return PTR_ERR(pcie->config_axi_slave_base);
pcie->ob_io_res = res;
/* map csr resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"csr_axi_slave");
pcie->csr_axi_slave_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pcie->csr_axi_slave_base))
return PTR_ERR(pcie->csr_axi_slave_base);
pcie->pcie_reg_base = res->start;
/* map MSI config resource */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr");
pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(pcie->apb_csr_base))
return PTR_ERR(pcie->apb_csr_base);
/* read the number of windows requested */
if (of_property_read_u32(node, "apio-wins", &pcie->apio_wins))
pcie->apio_wins = MAX_PIO_WINDOWS;
if (of_property_read_u32(node, "ppio-wins", &pcie->ppio_wins))
pcie->ppio_wins = MAX_PIO_WINDOWS;
pcie->irq = platform_get_irq(pdev, 0);
if (pcie->irq <= 0) {
dev_err(dev, "failed to map IRQ: %d\n", pcie->irq);
return -ENODEV;
}
irq_set_chained_handler_and_data(pcie->irq, mobiveil_pcie_isr, pcie);
return 0;
}
/*
* select_paged_register - routine to access paged register of root complex
*
* The RC registers are paged: the upper 6 bits of the offset are
* written to the pg_sel field of the PAB_CTRL register, and the
* lower 10 bits, OR'ed with PAGE_SEL_EN, are used as the register
* offset within the selected page.
*/
static void select_paged_register(struct mobiveil_pcie *pcie, u32 offset)
{
int pab_ctrl_dw, pg_sel;
/* clear pg_sel field */
pab_ctrl_dw = csr_readl(pcie, PAB_CTRL);
pab_ctrl_dw = (pab_ctrl_dw & ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT));
/* set pg_sel field */
pg_sel = (offset >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK;
pab_ctrl_dw |= ((pg_sel << PAGE_SEL_SHIFT));
csr_writel(pcie, pab_ctrl_dw, PAB_CTRL);
}
static void write_paged_register(struct mobiveil_pcie *pcie,
u32 val, u32 offset)
{
u32 off = (offset & PAGE_LO_MASK) | PAGE_SEL_EN;
select_paged_register(pcie, offset);
csr_writel(pcie, val, off);
}
static u32 read_paged_register(struct mobiveil_pcie *pcie, u32 offset)
{
u32 off = (offset & PAGE_LO_MASK) | PAGE_SEL_EN;
select_paged_register(pcie, offset);
return csr_readl(pcie, off);
}
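/*
* Worked example of the paging scheme, using the PAGE_SEL_* values
* defined above: accessing PAB_PEX_AMAP_CTRL(0), i.e. offset 0x4ba0,
* gives pg_sel = (0x4ba0 >> 10) & 0x3f = 0x12 (written to PAB_CTRL)
* and off = (0x4ba0 & 0x3ff) | PAGE_SEL_EN = 0xfa0 (the CSR offset
* actually read or written).
*/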
static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
int pci_addr, u32 type, u64 size)
{
int pio_ctrl_val;
int amap_ctrl_dw;
u64 size64 = ~(size - 1);
if ((pcie->ib_wins_configured + 1) > pcie->ppio_wins) {
dev_err(&pcie->pdev->dev,
"ERROR: max inbound windows reached !\n");
return;
}
pio_ctrl_val = csr_readl(pcie, PAB_PEX_PIO_CTRL);
csr_writel(pcie,
pio_ctrl_val | (1 << PIO_ENABLE_SHIFT), PAB_PEX_PIO_CTRL);
amap_ctrl_dw = read_paged_register(pcie, PAB_PEX_AMAP_CTRL(win_num));
amap_ctrl_dw = (amap_ctrl_dw | (type << AMAP_CTRL_TYPE_SHIFT));
amap_ctrl_dw = (amap_ctrl_dw | (1 << AMAP_CTRL_EN_SHIFT));
write_paged_register(pcie, amap_ctrl_dw | lower_32_bits(size64),
PAB_PEX_AMAP_CTRL(win_num));
write_paged_register(pcie, upper_32_bits(size64),
PAB_EXT_PEX_AMAP_SIZEN(win_num));
write_paged_register(pcie, pci_addr, PAB_PEX_AMAP_AXI_WIN(win_num));
write_paged_register(pcie, pci_addr, PAB_PEX_AMAP_PEX_WIN_L(win_num));
write_paged_register(pcie, 0, PAB_PEX_AMAP_PEX_WIN_H(win_num));
}
/*
* routine to program the outbound windows
*/
static void program_ob_windows(struct mobiveil_pcie *pcie, int win_num,
u64 cpu_addr, u64 pci_addr, u32 config_io_bit, u64 size)
{
u32 value, type;
u64 size64 = ~(size - 1);
if ((pcie->ob_wins_configured + 1) > pcie->apio_wins) {
dev_err(&pcie->pdev->dev,
"ERROR: max outbound windows reached !\n");
return;
}
/*
* program the Enable bit, the window Type field and the lower size
* bits in the PAB_AXI_AMAP_CTRL register
*/
type = config_io_bit;
value = csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num));
csr_writel(pcie, 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT |
lower_32_bits(size64), PAB_AXI_AMAP_CTRL(win_num));
write_paged_register(pcie, upper_32_bits(size64),
PAB_EXT_AXI_AMAP_SIZE(win_num));
/*
* program AXI window base with appropriate value in
* PAB_AXI_AMAP_AXI_WIN0 register
*/
value = csr_readl(pcie, PAB_AXI_AMAP_AXI_WIN(win_num));
csr_writel(pcie, cpu_addr & (~AXI_WINDOW_ALIGN_MASK),
PAB_AXI_AMAP_AXI_WIN(win_num));
value = csr_readl(pcie, PAB_AXI_AMAP_PEX_WIN_H(win_num));
csr_writel(pcie, lower_32_bits(pci_addr),
PAB_AXI_AMAP_PEX_WIN_L(win_num));
csr_writel(pcie, upper_32_bits(pci_addr),
PAB_AXI_AMAP_PEX_WIN_H(win_num));
pcie->ob_wins_configured++;
}
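/*
* Worked example of the size encoding shared by both window routines:
* for a 1 MB window, size64 = ~(0x100000 - 1) = 0xfffffffffff00000,
* so lower_32_bits(size64) = 0xfff00000 is OR'ed into the AMAP control
* register and upper_32_bits(size64) = 0xffffffff goes into the
* extended size register.
*/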
static int mobiveil_bringup_link(struct mobiveil_pcie *pcie)
{
int retries;
/* check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (mobiveil_pcie_link_up(pcie))
return 0;
usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
}
dev_err(&pcie->pdev->dev, "link never came up\n");
return -ETIMEDOUT;
}
static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie)
{
phys_addr_t msg_addr = pcie->pcie_reg_base;
struct mobiveil_msi *msi = &pcie->msi;
pcie->msi.num_of_vectors = PCI_NUM_MSI;
msi->msi_pages_phys = (phys_addr_t)msg_addr;
writel_relaxed(lower_32_bits(msg_addr),
pcie->apb_csr_base + MSI_BASE_LO_OFFSET);
writel_relaxed(upper_32_bits(msg_addr),
pcie->apb_csr_base + MSI_BASE_HI_OFFSET);
writel_relaxed(4096, pcie->apb_csr_base + MSI_SIZE_OFFSET);
writel_relaxed(1, pcie->apb_csr_base + MSI_ENABLE_OFFSET);
}
static int mobiveil_host_init(struct mobiveil_pcie *pcie)
{
u32 value, pab_ctrl, type = 0;
int err;
struct resource_entry *win, *tmp;
err = mobiveil_bringup_link(pcie);
if (err) {
dev_info(&pcie->pdev->dev, "link bring-up failed\n");
return err;
}
/*
* program Bus Master Enable Bit in Command Register in PAB Config
* Space
*/
value = csr_readl(pcie, PCI_COMMAND);
csr_writel(pcie, value | PCI_COMMAND_IO | PCI_COMMAND_MEMORY |
PCI_COMMAND_MASTER, PCI_COMMAND);
/*
* program PIO Enable Bit to 1 (and PEX PIO Enable to 1) in PAB_CTRL
* register
*/
pab_ctrl = csr_readl(pcie, PAB_CTRL);
csr_writel(pcie, pab_ctrl | (1 << AMBA_PIO_ENABLE_SHIFT) |
(1 << PEX_PIO_ENABLE_SHIFT), PAB_CTRL);
csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK),
PAB_INTP_AMBA_MISC_ENB);
/*
* program PIO Enable Bit to 1 and Config Window Enable Bit to 1 in
* PAB_AXI_PIO_CTRL Register
*/
value = csr_readl(pcie, PAB_AXI_PIO_CTRL);
csr_writel(pcie, value | APIO_EN_MASK, PAB_AXI_PIO_CTRL);
/*
* we'll program one outbound window for config reads and another
* default inbound window for all the upstream traffic; the rest of
* the outbound windows will be configured according to the "ranges"
* property defined in the device tree
*/
/* config outbound translation window */
program_ob_windows(pcie, pcie->ob_wins_configured,
pcie->ob_io_res->start, 0, CFG_WINDOW_TYPE,
resource_size(pcie->ob_io_res));
/* memory inbound translation window */
program_ib_windows(pcie, WIN_NUM_1, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE);
/* Get the I/O and memory ranges from DT */
resource_list_for_each_entry_safe(win, tmp, &pcie->resources) {
type = 0;
if (resource_type(win->res) == IORESOURCE_MEM)
type = MEM_WINDOW_TYPE;
if (resource_type(win->res) == IORESOURCE_IO)
type = IO_WINDOW_TYPE;
if (type) {
/* configure outbound translation window */
program_ob_windows(pcie, pcie->ob_wins_configured,
win->res->start, 0, type,
resource_size(win->res));
}
}
/* setup MSI hardware registers */
mobiveil_pcie_enable_msi(pcie);
return err;
}
static void mobiveil_mask_intx_irq(struct irq_data *data)
{
struct irq_desc *desc = irq_to_desc(data->irq);
struct mobiveil_pcie *pcie;
unsigned long flags;
u32 mask, shifted_val;
pcie = irq_desc_get_chip_data(desc);
mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
csr_writel(pcie, (shifted_val & (~mask)), PAB_INTP_AMBA_MISC_ENB);
raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
}
static void mobiveil_unmask_intx_irq(struct irq_data *data)
{
struct irq_desc *desc = irq_to_desc(data->irq);
struct mobiveil_pcie *pcie;
unsigned long flags;
u32 shifted_val, mask;
pcie = irq_desc_get_chip_data(desc);
mask = 1 << ((data->hwirq + PAB_INTX_START) - 1);
raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags);
shifted_val = csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB);
csr_writel(pcie, (shifted_val | mask), PAB_INTP_AMBA_MISC_ENB);
raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags);
}
static struct irq_chip intx_irq_chip = {
.name = "mobiveil_pcie:intx",
.irq_enable = mobiveil_unmask_intx_irq,
.irq_disable = mobiveil_mask_intx_irq,
.irq_mask = mobiveil_mask_intx_irq,
.irq_unmask = mobiveil_unmask_intx_irq,
};
/* routine to setup the INTx related data */
static int mobiveil_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_chip_and_handler(irq, &intx_irq_chip, handle_level_irq);
irq_set_chip_data(irq, domain->host_data);
return 0;
}
/* INTx domain operations structure */
static const struct irq_domain_ops intx_domain_ops = {
.map = mobiveil_pcie_intx_map,
};
static struct irq_chip mobiveil_msi_irq_chip = {
.name = "Mobiveil PCIe MSI",
.irq_mask = pci_msi_mask_irq,
.irq_unmask = pci_msi_unmask_irq,
};
static struct msi_domain_info mobiveil_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
.chip = &mobiveil_msi_irq_chip,
};
static void mobiveil_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(data);
phys_addr_t addr = pcie->pcie_reg_base + (data->hwirq * sizeof(int));
msg->address_lo = lower_32_bits(addr);
msg->address_hi = upper_32_bits(addr);
msg->data = data->hwirq;
dev_dbg(&pcie->pdev->dev, "msi#%d address_hi %#x address_lo %#x\n",
(int)data->hwirq, msg->address_hi, msg->address_lo);
}
static int mobiveil_msi_set_affinity(struct irq_data *irq_data,
const struct cpumask *mask, bool force)
{
return -EINVAL;
}
static struct irq_chip mobiveil_msi_bottom_irq_chip = {
.name = "Mobiveil MSI",
.irq_compose_msi_msg = mobiveil_compose_msi_msg,
.irq_set_affinity = mobiveil_msi_set_affinity,
};
static int mobiveil_irq_msi_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs, void *args)
{
struct mobiveil_pcie *pcie = domain->host_data;
struct mobiveil_msi *msi = &pcie->msi;
unsigned long bit;
WARN_ON(nr_irqs != 1);
mutex_lock(&msi->lock);
bit = find_first_zero_bit(msi->msi_irq_in_use, msi->num_of_vectors);
if (bit >= msi->num_of_vectors) {
mutex_unlock(&msi->lock);
return -ENOSPC;
}
set_bit(bit, msi->msi_irq_in_use);
mutex_unlock(&msi->lock);
irq_domain_set_info(domain, virq, bit, &mobiveil_msi_bottom_irq_chip,
domain->host_data, handle_level_irq,
NULL, NULL);
return 0;
}
static void mobiveil_irq_msi_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
struct irq_data *d = irq_domain_get_irq_data(domain, virq);
struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(d);
struct mobiveil_msi *msi = &pcie->msi;
mutex_lock(&msi->lock);
if (!test_bit(d->hwirq, msi->msi_irq_in_use)) {
dev_err(&pcie->pdev->dev, "trying to free unused MSI#%lu\n",
d->hwirq);
} else {
__clear_bit(d->hwirq, msi->msi_irq_in_use);
}
mutex_unlock(&msi->lock);
}
static const struct irq_domain_ops msi_domain_ops = {
.alloc = mobiveil_irq_msi_domain_alloc,
.free = mobiveil_irq_msi_domain_free,
};
static int mobiveil_allocate_msi_domains(struct mobiveil_pcie *pcie)
{
struct device *dev = &pcie->pdev->dev;
struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
struct mobiveil_msi *msi = &pcie->msi;
mutex_init(&pcie->msi.lock);
msi->dev_domain = irq_domain_add_linear(NULL, msi->num_of_vectors,
&msi_domain_ops, pcie);
if (!msi->dev_domain) {
dev_err(dev, "failed to create IRQ domain\n");
return -ENOMEM;
}
msi->msi_domain = pci_msi_create_irq_domain(fwnode,
&mobiveil_msi_domain_info, msi->dev_domain);
if (!msi->msi_domain) {
dev_err(dev, "failed to create MSI domain\n");
irq_domain_remove(msi->dev_domain);
return -ENOMEM;
}
return 0;
}
static int mobiveil_pcie_init_irq_domain(struct mobiveil_pcie *pcie)
{
struct device *dev = &pcie->pdev->dev;
struct device_node *node = dev->of_node;
int ret;
/* setup INTx */
pcie->intx_domain = irq_domain_add_linear(node,
PCI_NUM_INTX, &intx_domain_ops, pcie);
if (!pcie->intx_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENODEV;
}
raw_spin_lock_init(&pcie->intx_mask_lock);
/* setup MSI */
ret = mobiveil_allocate_msi_domains(pcie);
if (ret)
return ret;
return 0;
}
static int mobiveil_pcie_probe(struct platform_device *pdev)
{
struct mobiveil_pcie *pcie;
struct pci_bus *bus;
struct pci_bus *child;
struct pci_host_bridge *bridge;
struct device *dev = &pdev->dev;
resource_size_t iobase;
int ret;
/* allocate the PCIe port */
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
return -ENODEV;
pcie = pci_host_bridge_priv(bridge);
if (!pcie)
return -ENOMEM;
pcie->pdev = pdev;
ret = mobiveil_pcie_parse_dt(pcie);
if (ret) {
dev_err(dev, "Parsing DT failed, ret: %x\n", ret);
return ret;
}
INIT_LIST_HEAD(&pcie->resources);
/* parse the host bridge base addresses from the device tree file */
ret = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff,
&pcie->resources, &iobase);
if (ret) {
dev_err(dev, "Getting bridge resources failed\n");
return -ENOMEM;
}
/*
* configure all inbound and outbound windows and prepare the RC for
* config access
*/
ret = mobiveil_host_init(pcie);
if (ret) {
dev_err(dev, "Failed to initialize host\n");
goto error;
}
/* fixup for PCIe class register */
csr_writel(pcie, 0x060402ab, PAB_INTP_AXI_PIO_CLASS);
/* initialize the IRQ domains */
ret = mobiveil_pcie_init_irq_domain(pcie);
if (ret) {
dev_err(dev, "Failed creating IRQ Domain\n");
goto error;
}
ret = devm_request_pci_bus_resources(dev, &pcie->resources);
if (ret)
goto error;
/* Initialize bridge */
list_splice_init(&pcie->resources, &bridge->windows);
bridge->dev.parent = dev;
bridge->sysdata = pcie;
bridge->busnr = pcie->root_bus_nr;
bridge->ops = &mobiveil_pcie_ops;
bridge->map_irq = of_irq_parse_and_map_pci;
bridge->swizzle_irq = pci_common_swizzle;
/* setup the kernel resources for the newly added PCIe root bus */
ret = pci_scan_root_bus_bridge(bridge);
if (ret)
goto error;
bus = bridge->bus;
pci_assign_unassigned_bus_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
pci_bus_add_devices(bus);
return 0;
error:
pci_free_resource_list(&pcie->resources);
return ret;
}
static const struct of_device_id mobiveil_pcie_of_match[] = {
{.compatible = "mbvl,gpex40-pcie",},
{},
};
MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);
static struct platform_driver mobiveil_pcie_driver = {
.probe = mobiveil_pcie_probe,
.driver = {
.name = "mobiveil-pcie",
.of_match_table = mobiveil_pcie_of_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(mobiveil_pcie_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");


@ -11,6 +11,7 @@
* Author: Phil Edworthy <phil.edworthy@renesas.com>
*/
#include <linux/bitops.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
@ -24,18 +25,23 @@
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include "../pci.h"
#define PCIECAR 0x000010
#define PCIECCTLR 0x000018
#define CONFIG_SEND_ENABLE (1 << 31)
#define CONFIG_SEND_ENABLE BIT(31)
#define TYPE0 (0 << 8)
#define TYPE1 (1 << 8)
#define TYPE1 BIT(8)
#define PCIECDR 0x000020
#define PCIEMSR 0x000028
#define PCIEINTXR 0x000400
#define PCIEPHYSR 0x0007f0
#define PHYRDY BIT(0)
#define PCIEMSITXR 0x000840
/* Transfer control */
@ -44,7 +50,7 @@
#define PCIETSTR 0x02004
#define DATA_LINK_ACTIVE 1
#define PCIEERRFR 0x02020
#define UNSUPPORTED_REQUEST (1 << 4)
#define UNSUPPORTED_REQUEST BIT(4)
#define PCIEMSIFR 0x02044
#define PCIEMSIALR 0x02048
#define MSIFE 1
@ -57,17 +63,17 @@
/* local address reg & mask */
#define PCIELAR(x) (0x02200 + ((x) * 0x20))
#define PCIELAMR(x) (0x02208 + ((x) * 0x20))
#define LAM_PREFETCH (1 << 3)
#define LAM_64BIT (1 << 2)
#define LAR_ENABLE (1 << 1)
#define LAM_PREFETCH BIT(3)
#define LAM_64BIT BIT(2)
#define LAR_ENABLE BIT(1)
/* PCIe address reg & mask */
#define PCIEPALR(x) (0x03400 + ((x) * 0x20))
#define PCIEPAUR(x) (0x03404 + ((x) * 0x20))
#define PCIEPAMR(x) (0x03408 + ((x) * 0x20))
#define PCIEPTCTLR(x) (0x0340c + ((x) * 0x20))
#define PAR_ENABLE (1 << 31)
#define IO_SPACE (1 << 8)
#define PAR_ENABLE BIT(31)
#define IO_SPACE BIT(8)
/* Configuration */
#define PCICONF(x) (0x010000 + ((x) * 0x4))
@ -79,47 +85,46 @@
#define IDSETR1 0x011004
#define TLCTLR 0x011048
#define MACSR 0x011054
#define SPCHGFIN (1 << 4)
#define SPCHGFAIL (1 << 6)
#define SPCHGSUC (1 << 7)
#define SPCHGFIN BIT(4)
#define SPCHGFAIL BIT(6)
#define SPCHGSUC BIT(7)
#define LINK_SPEED (0xf << 16)
#define LINK_SPEED_2_5GTS (1 << 16)
#define LINK_SPEED_5_0GTS (2 << 16)
#define MACCTLR 0x011058
#define SPEED_CHANGE (1 << 24)
#define SCRAMBLE_DISABLE (1 << 27)
#define SPEED_CHANGE BIT(24)
#define SCRAMBLE_DISABLE BIT(27)
#define MACS2R 0x011078
#define MACCGSPSETR 0x011084
#define SPCNGRSN (1 << 31)
#define SPCNGRSN BIT(31)
/* R-Car H1 PHY */
#define H1_PCIEPHYADRR 0x04000c
#define WRITE_CMD (1 << 16)
#define PHY_ACK (1 << 24)
#define WRITE_CMD BIT(16)
#define PHY_ACK BIT(24)
#define RATE_POS 12
#define LANE_POS 8
#define ADR_POS 0
#define H1_PCIEPHYDOUTR 0x040014
#define H1_PCIEPHYSR 0x040018
/* R-Car Gen2 PHY */
#define GEN2_PCIEPHYADDR 0x780
#define GEN2_PCIEPHYDATA 0x784
#define GEN2_PCIEPHYCTRL 0x78c
#define INT_PCI_MSI_NR 32
#define INT_PCI_MSI_NR 32
#define RCONF(x) (PCICONF(0)+(x))
#define RPMCAP(x) (PMCAP(0)+(x))
#define REXPCAP(x) (EXPCAP(0)+(x))
#define RVCCAP(x) (VCCAP(0)+(x))
#define RCONF(x) (PCICONF(0) + (x))
#define RPMCAP(x) (PMCAP(0) + (x))
#define REXPCAP(x) (EXPCAP(0) + (x))
#define RVCCAP(x) (VCCAP(0) + (x))
#define PCIE_CONF_BUS(b) (((b) & 0xff) << 24)
#define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19)
#define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16)
#define PCIE_CONF_BUS(b) (((b) & 0xff) << 24)
#define PCIE_CONF_DEV(d) (((d) & 0x1f) << 19)
#define PCIE_CONF_FUNC(f) (((f) & 0x7) << 16)
#define RCAR_PCI_MAX_RESOURCES 4
#define MAX_NR_INBOUND_MAPS 6
#define RCAR_PCI_MAX_RESOURCES 4
#define MAX_NR_INBOUND_MAPS 6
struct rcar_msi {
DECLARE_BITMAP(used, INT_PCI_MSI_NR);
@ -139,10 +144,10 @@ static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip)
/* Structure representing the PCIe interface */
struct rcar_pcie {
struct device *dev;
struct phy *phy;
void __iomem *base;
struct list_head resources;
int root_bus_nr;
struct clk *clk;
struct clk *bus_clk;
struct rcar_msi msi;
};
@ -527,15 +532,30 @@ static void phy_write_reg(struct rcar_pcie *pcie,
phy_wait_for_ack(pcie);
}
static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie)
static int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie)
{
unsigned int timeout = 10;
while (timeout--) {
if (rcar_pci_read_reg(pcie, PCIEPHYSR) & PHYRDY)
return 0;
msleep(5);
}
return -ETIMEDOUT;
}
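/*
* An equivalent wait using the iopoll helper from <linux/iopoll.h>
* (a sketch, not what the driver does here): poll PCIEPHYSR every
* 5 ms for up to 50 ms.
*/
static int demo_wait_for_phyrdy(struct rcar_pcie *pcie)
{
	u32 val;

	return readl_poll_timeout(pcie->base + PCIEPHYSR, val,
				  val & PHYRDY, 5000, 50000);
}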
static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie)
{
unsigned int timeout = 10000;
while (timeout--) {
if ((rcar_pci_read_reg(pcie, PCIETSTR) & DATA_LINK_ACTIVE))
return 0;
msleep(5);
udelay(5);
cpu_relax();
}
return -ETIMEDOUT;
@ -551,6 +571,10 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
/* Set mode */
rcar_pci_write_reg(pcie, 1, PCIEMSR);
err = rcar_pcie_wait_for_phyrdy(pcie);
if (err)
return err;
/*
* Initial header for port config space is type 1, set the device
* class to match. Hardware takes care of propagating the IDSETR
@ -605,10 +629,8 @@ static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
return 0;
}
static int rcar_pcie_hw_init_h1(struct rcar_pcie *pcie)
static int rcar_pcie_phy_init_h1(struct rcar_pcie *pcie)
{
unsigned int timeout = 10;
/* Initialize the phy */
phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191);
phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180);
@ -627,17 +649,10 @@ static int rcar_pcie_hw_init_h1(struct rcar_pcie *pcie)
phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F);
phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000);
while (timeout--) {
if (rcar_pci_read_reg(pcie, H1_PCIEPHYSR))
return rcar_pcie_hw_init(pcie);
msleep(5);
}
return -ETIMEDOUT;
return 0;
}
static int rcar_pcie_hw_init_gen2(struct rcar_pcie *pcie)
static int rcar_pcie_phy_init_gen2(struct rcar_pcie *pcie)
{
/*
* These settings come from the R-Car Series, 2nd Generation User's
@ -654,7 +669,18 @@ static int rcar_pcie_hw_init_gen2(struct rcar_pcie *pcie)
rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL);
rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL);
return rcar_pcie_hw_init(pcie);
return 0;
}
static int rcar_pcie_phy_init_gen3(struct rcar_pcie *pcie)
{
int err;
err = phy_init(pcie->phy);
if (err)
return err;
return phy_power_on(pcie->phy);
}
static int rcar_msi_alloc(struct rcar_msi *chip)
@ -842,6 +868,20 @@ static const struct irq_domain_ops msi_domain_ops = {
.map = rcar_msi_map,
};
static void rcar_pcie_unmap_msi(struct rcar_pcie *pcie)
{
struct rcar_msi *msi = &pcie->msi;
int i, irq;
for (i = 0; i < INT_PCI_MSI_NR; i++) {
irq = irq_find_mapping(msi->domain, i);
if (irq > 0)
irq_dispose_mapping(irq);
}
irq_domain_remove(msi->domain);
}
static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
{
struct device *dev = pcie->dev;
@ -896,16 +936,35 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
return 0;
err:
irq_domain_remove(msi->domain);
rcar_pcie_unmap_msi(pcie);
return err;
}
static void rcar_pcie_teardown_msi(struct rcar_pcie *pcie)
{
struct rcar_msi *msi = &pcie->msi;
/* Disable all MSI interrupts */
rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
/* Disable address decoding of the MSI interrupt, MSIFE */
rcar_pci_write_reg(pcie, 0, PCIEMSIALR);
free_pages(msi->pages, 0);
rcar_pcie_unmap_msi(pcie);
}
static int rcar_pcie_get_resources(struct rcar_pcie *pcie)
{
struct device *dev = pcie->dev;
struct resource res;
int err, i;
pcie->phy = devm_phy_optional_get(dev, "pcie");
if (IS_ERR(pcie->phy))
return PTR_ERR(pcie->phy);
err = of_address_to_resource(dev->of_node, 0, &res);
if (err)
return err;
@ -914,30 +973,17 @@ static int rcar_pcie_get_resources(struct rcar_pcie *pcie)
if (IS_ERR(pcie->base))
return PTR_ERR(pcie->base);
pcie->clk = devm_clk_get(dev, "pcie");
if (IS_ERR(pcie->clk)) {
dev_err(dev, "cannot get platform clock\n");
return PTR_ERR(pcie->clk);
}
err = clk_prepare_enable(pcie->clk);
if (err)
return err;
pcie->bus_clk = devm_clk_get(dev, "pcie_bus");
if (IS_ERR(pcie->bus_clk)) {
dev_err(dev, "cannot get pcie bus clock\n");
err = PTR_ERR(pcie->bus_clk);
goto fail_clk;
return PTR_ERR(pcie->bus_clk);
}
err = clk_prepare_enable(pcie->bus_clk);
if (err)
goto fail_clk;
i = irq_of_parse_and_map(dev->of_node, 0);
if (!i) {
dev_err(dev, "cannot get platform resources for msi interrupt\n");
err = -ENOENT;
goto err_map_reg;
goto err_irq1;
}
pcie->msi.irq1 = i;
@ -945,17 +991,15 @@ static int rcar_pcie_get_resources(struct rcar_pcie *pcie)
if (!i) {
dev_err(dev, "cannot get platform resources for msi interrupt\n");
err = -ENOENT;
goto err_map_reg;
goto err_irq2;
}
pcie->msi.irq2 = i;
return 0;
err_map_reg:
clk_disable_unprepare(pcie->bus_clk);
fail_clk:
clk_disable_unprepare(pcie->clk);
err_irq2:
irq_dispose_mapping(pcie->msi.irq1);
err_irq1:
return err;
}
@ -1051,63 +1095,28 @@ static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie,
}
static const struct of_device_id rcar_pcie_of_match[] = {
{ .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 },
{ .compatible = "renesas,pcie-r8a7779",
.data = rcar_pcie_phy_init_h1 },
{ .compatible = "renesas,pcie-r8a7790",
.data = rcar_pcie_hw_init_gen2 },
.data = rcar_pcie_phy_init_gen2 },
{ .compatible = "renesas,pcie-r8a7791",
.data = rcar_pcie_hw_init_gen2 },
.data = rcar_pcie_phy_init_gen2 },
{ .compatible = "renesas,pcie-rcar-gen2",
.data = rcar_pcie_hw_init_gen2 },
{ .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init },
{ .compatible = "renesas,pcie-rcar-gen3", .data = rcar_pcie_hw_init },
.data = rcar_pcie_phy_init_gen2 },
{ .compatible = "renesas,pcie-r8a7795",
.data = rcar_pcie_phy_init_gen3 },
{ .compatible = "renesas,pcie-rcar-gen3",
.data = rcar_pcie_phy_init_gen3 },
{},
};
static int rcar_pcie_parse_request_of_pci_ranges(struct rcar_pcie *pci)
{
int err;
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
resource_size_t iobase;
struct resource_entry *win, *tmp;
err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources,
&iobase);
if (err)
return err;
err = devm_request_pci_bus_resources(dev, &pci->resources);
if (err)
goto out_release_res;
resource_list_for_each_entry_safe(win, tmp, &pci->resources) {
struct resource *res = win->res;
if (resource_type(res) == IORESOURCE_IO) {
err = pci_remap_iospace(res, iobase);
if (err) {
dev_warn(dev, "error %d: failed to map resource %pR\n",
err, res);
resource_list_destroy_entry(win);
}
}
}
return 0;
out_release_res:
pci_free_resource_list(&pci->resources);
return err;
}
static int rcar_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rcar_pcie *pcie;
unsigned int data;
int err;
int (*hw_init_fn)(struct rcar_pcie *);
int (*phy_init_fn)(struct rcar_pcie *);
struct pci_host_bridge *bridge;
bridge = pci_alloc_host_bridge(sizeof(*pcie));
@ -1118,36 +1127,45 @@ static int rcar_pcie_probe(struct platform_device *pdev)
pcie->dev = dev;
INIT_LIST_HEAD(&pcie->resources);
err = rcar_pcie_parse_request_of_pci_ranges(pcie);
err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL);
if (err)
goto err_free_bridge;
pm_runtime_enable(pcie->dev);
err = pm_runtime_get_sync(pcie->dev);
if (err < 0) {
dev_err(pcie->dev, "pm_runtime_get_sync failed\n");
goto err_pm_disable;
}
err = rcar_pcie_get_resources(pcie);
if (err < 0) {
dev_err(dev, "failed to request resources: %d\n", err);
goto err_free_resource_list;
goto err_pm_put;
}
err = clk_prepare_enable(pcie->bus_clk);
if (err) {
dev_err(dev, "failed to enable bus clock: %d\n", err);
goto err_unmap_msi_irqs;
}
err = rcar_pcie_parse_map_dma_ranges(pcie, dev->of_node);
if (err)
goto err_free_resource_list;
goto err_clk_disable;
pm_runtime_enable(dev);
err = pm_runtime_get_sync(dev);
if (err < 0) {
dev_err(dev, "pm_runtime_get_sync failed\n");
goto err_pm_disable;
phy_init_fn = of_device_get_match_data(dev);
err = phy_init_fn(pcie);
if (err) {
dev_err(dev, "failed to init PCIe PHY\n");
goto err_clk_disable;
}
/* Failure to get a link might just be that no cards are inserted */
hw_init_fn = of_device_get_match_data(dev);
err = hw_init_fn(pcie);
if (err) {
if (rcar_pcie_hw_init(pcie)) {
dev_info(dev, "PCIe link down\n");
err = -ENODEV;
goto err_pm_put;
goto err_clk_disable;
}
data = rcar_pci_read_reg(pcie, MACSR);
@ -1159,24 +1177,34 @@ static int rcar_pcie_probe(struct platform_device *pdev)
dev_err(dev,
"failed to enable MSI support: %d\n",
err);
goto err_pm_put;
goto err_clk_disable;
}
}
err = rcar_pcie_enable(pcie);
if (err)
goto err_pm_put;
goto err_msi_teardown;
return 0;
err_msi_teardown:
if (IS_ENABLED(CONFIG_PCI_MSI))
rcar_pcie_teardown_msi(pcie);
err_clk_disable:
clk_disable_unprepare(pcie->bus_clk);
err_unmap_msi_irqs:
irq_dispose_mapping(pcie->msi.irq2);
irq_dispose_mapping(pcie->msi.irq1);
err_pm_put:
pm_runtime_put(dev);
err_pm_disable:
pm_runtime_disable(dev);
err_free_resource_list:
pci_free_resource_list(&pcie->resources);
err_free_bridge:
pci_free_host_bridge(bridge);


@ -0,0 +1,642 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Rockchip AXI PCIe endpoint controller driver
*
* Copyright (c) 2018 Rockchip, Inc.
*
* Author: Shawn Lin <shawn.lin@rock-chips.com>
* Simon Xue <xxm@rock-chips.com>
*/
#include <linux/configfs.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/pci-epc.h>
#include <linux/platform_device.h>
#include <linux/pci-epf.h>
#include <linux/sizes.h>
#include "pcie-rockchip.h"
/**
* struct rockchip_pcie_ep - private data for PCIe endpoint controller driver
* @rockchip: Rockchip PCIe controller
* @max_regions: maximum number of regions supported by hardware
* @ob_region_map: bitmask of mapped outbound regions
* @ob_addr: base addresses in the AXI bus where the outbound regions start
* @irq_phys_addr: base address on the AXI bus where the MSI/legacy IRQ
* dedicated outbound region is mapped.
* @irq_cpu_addr: base address in the CPU space where a write access triggers
* the sending of a memory write (MSI) / normal message (legacy
* IRQ) TLP through the PCIe bus.
* @irq_pci_addr: used to save the current mapping of the MSI/legacy IRQ
* dedicated outbound region.
* @irq_pci_fn: the latest PCI function that has updated the mapping of
* the MSI/legacy IRQ dedicated outbound region.
* @irq_pending: bitmask of asserted legacy IRQs.
*/
struct rockchip_pcie_ep {
struct rockchip_pcie rockchip;
struct pci_epc *epc;
u32 max_regions;
unsigned long ob_region_map;
phys_addr_t *ob_addr;
phys_addr_t irq_phys_addr;
void __iomem *irq_cpu_addr;
u64 irq_pci_addr;
u8 irq_pci_fn;
u8 irq_pending;
};
static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
u32 region)
{
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(region));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(region));
}
static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
u32 r, u32 type, u64 cpu_addr,
u64 pci_addr, size_t size)
{
u64 sz = 1ULL << fls64(size - 1);
int num_pass_bits = ilog2(sz);
u32 addr0, addr1, desc0, desc1;
bool is_nor_msg = (type == AXI_WRAPPER_NOR_MSG);
/* The minimal region size is 1MB */
if (num_pass_bits < 8)
num_pass_bits = 8;
cpu_addr -= rockchip->mem_res->start;
addr0 = ((is_nor_msg ? 0x10 : (num_pass_bits - 1)) &
PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(cpu_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
addr1 = upper_32_bits(is_nor_msg ? cpu_addr : pci_addr);
desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | type;
desc1 = 0;
if (is_nor_msg) {
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
rockchip_pcie_write(rockchip, 0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
rockchip_pcie_write(rockchip, desc0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
rockchip_pcie_write(rockchip, desc1,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
} else {
/* PCI bus address region */
rockchip_pcie_write(rockchip, addr0,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
rockchip_pcie_write(rockchip, addr1,
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
rockchip_pcie_write(rockchip, desc0,
ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
rockchip_pcie_write(rockchip, desc1,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
addr0 =
((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(cpu_addr) &
PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
addr1 = upper_32_bits(cpu_addr);
}
/* CPU bus address region */
rockchip_pcie_write(rockchip, addr0,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r));
rockchip_pcie_write(rockchip, addr1,
ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r));
}
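/*
 * Worked example for the encoding above (illustrative): a 64 KiB mapping
 * rounds to sz = 0x10000, so num_pass_bits = 16 and ADDR0 carries 15 in
 * its "number of bits" field; the low 16 AXI address bits then pass
 * through to the PCI address untranslated.
 */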
static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn,
struct pci_epf_header *hdr)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
/* All functions share the same vendor ID as function 0 */
if (fn == 0) {
u32 vid_regs = (hdr->vendorid & GENMASK(15, 0)) |
(hdr->subsys_vendor_id & GENMASK(15, 0)) << 16;
rockchip_pcie_write(rockchip, vid_regs,
PCIE_CORE_CONFIG_VENDOR);
}
rockchip_pcie_write(rockchip, hdr->deviceid << 16,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID);
rockchip_pcie_write(rockchip,
hdr->revid |
hdr->progif_code << 8 |
hdr->subclass_code << 16 |
hdr->baseclass_code << 24,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_REVISION_ID);
rockchip_pcie_write(rockchip, hdr->cache_line_size,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
PCI_CACHE_LINE_SIZE);
rockchip_pcie_write(rockchip, hdr->subsys_id << 16,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
PCI_SUBSYSTEM_VENDOR_ID);
rockchip_pcie_write(rockchip, hdr->interrupt_pin << 8,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
PCI_INTERRUPT_LINE);
return 0;
}
static int rockchip_pcie_ep_set_bar(struct pci_epc *epc, u8 fn,
struct pci_epf_bar *epf_bar)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
dma_addr_t bar_phys = epf_bar->phys_addr;
enum pci_barno bar = epf_bar->barno;
int flags = epf_bar->flags;
u32 addr0, addr1, reg, cfg, b, aperture, ctrl;
u64 sz;
/* BAR size is 2^(aperture + 7) */
sz = max_t(size_t, epf_bar->size, MIN_EP_APERTURE);
/*
* roundup_pow_of_two() returns an unsigned long, which is not suited
* for 64bit values.
*/
sz = 1ULL << fls64(sz - 1);
aperture = ilog2(sz) - 7; /* 128B -> 0, 256B -> 1, 512B -> 2, ... */
if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) {
ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS;
} else {
bool is_prefetch = !!(flags & PCI_BASE_ADDRESS_MEM_PREFETCH);
bool is_64bits = sz > SZ_2G;
if (is_64bits && (bar & 1))
return -EINVAL;
if (is_64bits && is_prefetch)
ctrl =
ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_64BITS;
else if (is_prefetch)
ctrl =
ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_32BITS;
else if (is_64bits)
ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_64BITS;
else
ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_32BITS;
}
if (bar < BAR_4) {
reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn);
b = bar;
} else {
reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn);
b = bar - BAR_4;
}
addr0 = lower_32_bits(bar_phys);
addr1 = upper_32_bits(bar_phys);
cfg = rockchip_pcie_read(rockchip, reg);
cfg &= ~(ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= (ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE(b, aperture) |
ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl));
rockchip_pcie_write(rockchip, cfg, reg);
rockchip_pcie_write(rockchip, addr0,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar));
rockchip_pcie_write(rockchip, addr1,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
return 0;
}
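/*
 * Worked example (illustrative): a 1 MiB non-prefetchable 32-bit memory
 * BAR yields sz = SZ_1M, aperture = ilog2(SZ_1M) - 7 = 13 and
 * ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_32BITS, matching the
 * "BAR size is 2^(aperture + 7)" rule above.
 */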
static void rockchip_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn,
struct pci_epf_bar *epf_bar)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 reg, cfg, b, ctrl;
enum pci_barno bar = epf_bar->barno;
if (bar < BAR_4) {
reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn);
b = bar;
} else {
reg = ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn);
b = bar - BAR_4;
}
ctrl = ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_DISABLED;
cfg = rockchip_pcie_read(rockchip, reg);
cfg &= ~(ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) |
ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b));
cfg |= ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, ctrl);
rockchip_pcie_write(rockchip, cfg, reg);
rockchip_pcie_write(rockchip, 0x0,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar));
rockchip_pcie_write(rockchip, 0x0,
ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
}
static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn,
phys_addr_t addr, u64 pci_addr,
size_t size)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *pcie = &ep->rockchip;
u32 r;
r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
/*
* Region 0 is reserved for configuration space and shouldn't
* be used elsewhere per TRM, so leave it out.
*/
if (r >= ep->max_regions - 1) {
dev_err(&epc->dev, "no free outbound region\n");
return -EINVAL;
}
rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, AXI_WRAPPER_MEM_WRITE, addr,
pci_addr, size);
set_bit(r, &ep->ob_region_map);
ep->ob_addr[r] = addr;
return 0;
}
static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn,
phys_addr_t addr)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 r;
for (r = 0; r < ep->max_regions - 1; r++)
if (ep->ob_addr[r] == addr)
break;
/*
* Region 0 is reserved for configuration space and shouldn't
* be used elsewhere per TRM, so leave it out.
*/
if (r == ep->max_regions - 1)
return;
rockchip_pcie_clear_ep_ob_atu(rockchip, r);
ep->ob_addr[r] = 0;
clear_bit(r, &ep->ob_region_map);
}
static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn,
u8 multi_msg_cap)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags;
flags = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
flags |=
((multi_msg_cap << 1) << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
PCI_MSI_FLAGS_64BIT;
flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
rockchip_pcie_write(rockchip, flags,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
return 0;
}
static int rockchip_pcie_ep_get_msi(struct pci_epc *epc, u8 fn)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags;
flags = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
if (!(flags & ROCKCHIP_PCIE_EP_MSI_CTRL_ME))
return -EINVAL;
return ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >>
ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET);
}
static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
u8 intx, bool is_asserted)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 r = ep->max_regions - 1;
u32 offset;
u16 status;
u8 msg_code;
if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
ep->irq_pci_fn != fn)) {
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
AXI_WRAPPER_NOR_MSG,
ep->irq_phys_addr, 0, 0);
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
ep->irq_pci_fn = fn;
}
intx &= 3;
if (is_asserted) {
ep->irq_pending |= BIT(intx);
msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
} else {
ep->irq_pending &= ~BIT(intx);
msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
}
status = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_CMD_STATUS);
status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
if ((status != 0) ^ (ep->irq_pending != 0)) {
status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
rockchip_pcie_write(rockchip, status,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_CMD_STATUS);
}
offset =
ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
writel(0, ep->irq_cpu_addr + offset);
}
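/*
 * Offset arithmetic above, worked for Assert_INTA (illustrative):
 * ROUTING(LOCAL_INTX) = 0x4 << 5 = 0x80, CODE(0x20) = 0x2000 and
 * NO_DATA = BIT(16), so the zero write lands at irq_cpu_addr + 0x12080
 * and the AXI wrapper emits an Assert_INTA message TLP.
 */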
static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
u8 intx)
{
u16 cmd;
cmd = rockchip_pcie_read(&ep->rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_CMD_STATUS);
if (cmd & PCI_COMMAND_INTX_DISABLE)
return -EINVAL;
/*
* Per the TRM (which is vague here), a delay of some AHB bus clock
* cycles is needed between asserting and deasserting INTx; a generous
* 1 ms covers it.
*/
rockchip_pcie_ep_assert_intx(ep, fn, intx, true);
mdelay(1);
rockchip_pcie_ep_assert_intx(ep, fn, intx, false);
return 0;
}
static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
u8 interrupt_num)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
u16 flags, mme, data, data_mask;
u8 msi_count;
u64 pci_addr, pci_addr_mask = 0xff;
/* Check MSI enable bit */
flags = rockchip_pcie_read(&ep->rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
if (!(flags & ROCKCHIP_PCIE_EP_MSI_CTRL_ME))
return -EINVAL;
/* Get MSI numbers from MME */
mme = ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >>
ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET);
msi_count = 1 << mme;
if (!interrupt_num || interrupt_num > msi_count)
return -EINVAL;
/* Set MSI private data */
data_mask = msi_count - 1;
data = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
PCI_MSI_DATA_64);
data = (data & ~data_mask) | ((interrupt_num - 1) & data_mask);
/* Get MSI PCI address */
pci_addr = rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
PCI_MSI_ADDRESS_HI);
pci_addr <<= 32;
pci_addr |= rockchip_pcie_read(rockchip,
ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
PCI_MSI_ADDRESS_LO);
pci_addr &= GENMASK_ULL(63, 2);
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
ep->irq_pci_fn != fn)) {
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, ep->max_regions - 1,
AXI_WRAPPER_MEM_WRITE,
ep->irq_phys_addr,
pci_addr & ~pci_addr_mask,
pci_addr_mask + 1);
ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
ep->irq_pci_fn = fn;
}
writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
return 0;
}
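/*
 * Example of the MME/data handling above (illustrative): with MME = 2 the
 * function owns msi_count = 4 vectors and data_mask = 0x3, so raising
 * interrupt_num = 3 sends (data & ~0x3) | 0x2, i.e. the configured MSI
 * data word with its low bits replaced by the zero-based vector index.
 */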
static int rockchip_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn,
enum pci_epc_irq_type type,
u8 interrupt_num)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
switch (type) {
case PCI_EPC_IRQ_LEGACY:
return rockchip_pcie_ep_send_legacy_irq(ep, fn, 0);
case PCI_EPC_IRQ_MSI:
return rockchip_pcie_ep_send_msi_irq(ep, fn, interrupt_num);
default:
return -EINVAL;
}
}
static int rockchip_pcie_ep_start(struct pci_epc *epc)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
struct pci_epf *epf;
u32 cfg;
cfg = BIT(0);
list_for_each_entry(epf, &epc->pci_epf, list)
cfg |= BIT(epf->func_no);
rockchip_pcie_write(rockchip, cfg, PCIE_CORE_PHY_FUNC_CFG);
list_for_each_entry(epf, &epc->pci_epf, list)
pci_epf_linkup(epf);
return 0;
}
static const struct pci_epc_ops rockchip_pcie_epc_ops = {
.write_header = rockchip_pcie_ep_write_header,
.set_bar = rockchip_pcie_ep_set_bar,
.clear_bar = rockchip_pcie_ep_clear_bar,
.map_addr = rockchip_pcie_ep_map_addr,
.unmap_addr = rockchip_pcie_ep_unmap_addr,
.set_msi = rockchip_pcie_ep_set_msi,
.get_msi = rockchip_pcie_ep_get_msi,
.raise_irq = rockchip_pcie_ep_raise_irq,
.start = rockchip_pcie_ep_start,
};
static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip,
struct rockchip_pcie_ep *ep)
{
struct device *dev = rockchip->dev;
int err;
err = rockchip_pcie_parse_dt(rockchip);
if (err)
return err;
err = rockchip_pcie_get_phys(rockchip);
if (err)
return err;
err = of_property_read_u32(dev->of_node,
"rockchip,max-outbound-regions",
&ep->max_regions);
if (err < 0 || ep->max_regions > MAX_REGION_LIMIT)
ep->max_regions = MAX_REGION_LIMIT;
err = of_property_read_u8(dev->of_node, "max-functions",
&ep->epc->max_functions);
if (err < 0)
ep->epc->max_functions = 1;
return 0;
}
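/*
 * Hypothetical DT fragment consumed by the lookups above (example values
 * only, not from a real board file):
 *
 *	pcie-ep@f8000000 {
 *		compatible = "rockchip,rk3399-pcie-ep";
 *		rockchip,max-outbound-regions = <16>;
 *		max-functions = /bits/ 8 <4>;
 *	};
 *
 * A missing or out-of-range property falls back to MAX_REGION_LIMIT and a
 * single function, as coded above.
 */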
static const struct of_device_id rockchip_pcie_ep_of_match[] = {
{ .compatible = "rockchip,rk3399-pcie-ep"},
{},
};
static int rockchip_pcie_ep_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie_ep *ep;
struct rockchip_pcie *rockchip;
struct pci_epc *epc;
size_t max_regions;
int err;
ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
if (!ep)
return -ENOMEM;
rockchip = &ep->rockchip;
rockchip->is_rc = false;
rockchip->dev = dev;
epc = devm_pci_epc_create(dev, &rockchip_pcie_epc_ops);
if (IS_ERR(epc)) {
dev_err(dev, "failed to create epc device\n");
return PTR_ERR(epc);
}
ep->epc = epc;
epc_set_drvdata(epc, ep);
err = rockchip_pcie_parse_ep_dt(rockchip, ep);
if (err)
return err;
err = rockchip_pcie_enable_clocks(rockchip);
if (err)
return err;
err = rockchip_pcie_init_port(rockchip);
if (err)
goto err_disable_clocks;
/* Establish the link automatically */
rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
PCIE_CLIENT_CONFIG);
max_regions = ep->max_regions;
ep->ob_addr = devm_kzalloc(dev, max_regions * sizeof(*ep->ob_addr),
GFP_KERNEL);
if (!ep->ob_addr) {
err = -ENOMEM;
goto err_uninit_port;
}
/* Only enable function 0 by default */
rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);
err = pci_epc_mem_init(epc, rockchip->mem_res->start,
resource_size(rockchip->mem_res));
if (err < 0) {
dev_err(dev, "failed to initialize the memory space\n");
goto err_uninit_port;
}
ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
SZ_128K);
if (!ep->irq_cpu_addr) {
dev_err(dev, "failed to reserve memory space for MSI\n");
err = -ENOMEM;
goto err_epc_mem_exit;
}
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
return 0;
err_epc_mem_exit:
pci_epc_mem_exit(epc);
err_uninit_port:
rockchip_pcie_deinit_phys(rockchip);
err_disable_clocks:
rockchip_pcie_disable_clocks(rockchip);
return err;
}
static struct platform_driver rockchip_pcie_ep_driver = {
.driver = {
.name = "rockchip-pcie-ep",
.of_match_table = rockchip_pcie_ep_of_match,
},
.probe = rockchip_pcie_ep_probe,
};
builtin_platform_driver(rockchip_pcie_ep_driver);

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,338 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Rockchip AXI PCIe controller driver
*
* Copyright (c) 2018 Rockchip, Inc.
*
* Author: Shawn Lin <shawn.lin@rock-chips.com>
*
*/
#ifndef _PCIE_ROCKCHIP_H
#define _PCIE_ROCKCHIP_H
#include <linux/kernel.h>
#include <linux/pci.h>
/*
* The upper 16 bits of PCIE_CLIENT_CONFIG are a write mask for the lower 16
* bits. This allows atomic updates of the register without locking.
*/
#define HIWORD_UPDATE(mask, val) (((mask) << 16) | (val))
#define HIWORD_UPDATE_BIT(val) HIWORD_UPDATE(val, val)
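/*
 * Example of the write-mask convention (illustrative):
 *
 *	HIWORD_UPDATE_BIT(0x0040) == 0x00400040	// set bit 6 only
 *	HIWORD_UPDATE(0x0040, 0)  == 0x00400000	// clear bit 6 only
 *
 * Bits whose mask half is 0 are left untouched by the hardware, so no
 * read-modify-write (and no lock) is needed.
 */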
#define ENCODE_LANES(x) ((((x) >> 1) & 3) << 4)
#define MAX_LANE_NUM 4
#define MAX_REGION_LIMIT 32
#define MIN_EP_APERTURE 28
#define PCIE_CLIENT_BASE 0x0
#define PCIE_CLIENT_CONFIG (PCIE_CLIENT_BASE + 0x00)
#define PCIE_CLIENT_CONF_ENABLE HIWORD_UPDATE_BIT(0x0001)
#define PCIE_CLIENT_CONF_DISABLE HIWORD_UPDATE(0x0001, 0)
#define PCIE_CLIENT_LINK_TRAIN_ENABLE HIWORD_UPDATE_BIT(0x0002)
#define PCIE_CLIENT_ARI_ENABLE HIWORD_UPDATE_BIT(0x0008)
#define PCIE_CLIENT_CONF_LANE_NUM(x) HIWORD_UPDATE(0x0030, ENCODE_LANES(x))
#define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040)
#define PCIE_CLIENT_MODE_EP HIWORD_UPDATE(0x0040, 0)
#define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0)
#define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080)
#define PCIE_CLIENT_DEBUG_OUT_0 (PCIE_CLIENT_BASE + 0x3c)
#define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0)
#define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18
#define PCIE_CLIENT_DEBUG_LTSSM_L2 0x19
#define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48)
#define PCIE_CLIENT_LINK_STATUS_UP 0x00300000
#define PCIE_CLIENT_LINK_STATUS_MASK 0x00300000
#define PCIE_CLIENT_INT_MASK (PCIE_CLIENT_BASE + 0x4c)
#define PCIE_CLIENT_INT_STATUS (PCIE_CLIENT_BASE + 0x50)
#define PCIE_CLIENT_INTR_MASK GENMASK(8, 5)
#define PCIE_CLIENT_INTR_SHIFT 5
#define PCIE_CLIENT_INT_LEGACY_DONE BIT(15)
#define PCIE_CLIENT_INT_MSG BIT(14)
#define PCIE_CLIENT_INT_HOT_RST BIT(13)
#define PCIE_CLIENT_INT_DPA BIT(12)
#define PCIE_CLIENT_INT_FATAL_ERR BIT(11)
#define PCIE_CLIENT_INT_NFATAL_ERR BIT(10)
#define PCIE_CLIENT_INT_CORR_ERR BIT(9)
#define PCIE_CLIENT_INT_INTD BIT(8)
#define PCIE_CLIENT_INT_INTC BIT(7)
#define PCIE_CLIENT_INT_INTB BIT(6)
#define PCIE_CLIENT_INT_INTA BIT(5)
#define PCIE_CLIENT_INT_LOCAL BIT(4)
#define PCIE_CLIENT_INT_UDMA BIT(3)
#define PCIE_CLIENT_INT_PHY BIT(2)
#define PCIE_CLIENT_INT_HOT_PLUG BIT(1)
#define PCIE_CLIENT_INT_PWR_STCG BIT(0)
#define PCIE_CLIENT_INT_LEGACY \
(PCIE_CLIENT_INT_INTA | PCIE_CLIENT_INT_INTB | \
PCIE_CLIENT_INT_INTC | PCIE_CLIENT_INT_INTD)
#define PCIE_CLIENT_INT_CLI \
(PCIE_CLIENT_INT_CORR_ERR | PCIE_CLIENT_INT_NFATAL_ERR | \
PCIE_CLIENT_INT_FATAL_ERR | PCIE_CLIENT_INT_DPA | \
PCIE_CLIENT_INT_HOT_RST | PCIE_CLIENT_INT_MSG | \
PCIE_CLIENT_INT_LEGACY_DONE | PCIE_CLIENT_INT_LEGACY | \
PCIE_CLIENT_INT_PHY)
#define PCIE_CORE_CTRL_MGMT_BASE 0x900000
#define PCIE_CORE_CTRL (PCIE_CORE_CTRL_MGMT_BASE + 0x000)
#define PCIE_CORE_PL_CONF_SPEED_5G 0x00000008
#define PCIE_CORE_PL_CONF_SPEED_MASK 0x00000018
#define PCIE_CORE_PL_CONF_LANE_MASK 0x00000006
#define PCIE_CORE_PL_CONF_LANE_SHIFT 1
#define PCIE_CORE_CTRL_PLC1 (PCIE_CORE_CTRL_MGMT_BASE + 0x004)
#define PCIE_CORE_CTRL_PLC1_FTS_MASK GENMASK(23, 8)
#define PCIE_CORE_CTRL_PLC1_FTS_SHIFT 8
#define PCIE_CORE_CTRL_PLC1_FTS_CNT 0xffff
#define PCIE_CORE_TXCREDIT_CFG1 (PCIE_CORE_CTRL_MGMT_BASE + 0x020)
#define PCIE_CORE_TXCREDIT_CFG1_MUI_MASK 0xFFFF0000
#define PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT 16
#define PCIE_CORE_TXCREDIT_CFG1_MUI_ENCODE(x) \
(((x) >> 3) << PCIE_CORE_TXCREDIT_CFG1_MUI_SHIFT)
#define PCIE_CORE_LANE_MAP (PCIE_CORE_CTRL_MGMT_BASE + 0x200)
#define PCIE_CORE_LANE_MAP_MASK 0x0000000f
#define PCIE_CORE_LANE_MAP_REVERSE BIT(16)
#define PCIE_CORE_INT_STATUS (PCIE_CORE_CTRL_MGMT_BASE + 0x20c)
#define PCIE_CORE_INT_PRFPE BIT(0)
#define PCIE_CORE_INT_CRFPE BIT(1)
#define PCIE_CORE_INT_RRPE BIT(2)
#define PCIE_CORE_INT_PRFO BIT(3)
#define PCIE_CORE_INT_CRFO BIT(4)
#define PCIE_CORE_INT_RT BIT(5)
#define PCIE_CORE_INT_RTR BIT(6)
#define PCIE_CORE_INT_PE BIT(7)
#define PCIE_CORE_INT_MTR BIT(8)
#define PCIE_CORE_INT_UCR BIT(9)
#define PCIE_CORE_INT_FCE BIT(10)
#define PCIE_CORE_INT_CT BIT(11)
#define PCIE_CORE_INT_UTC BIT(18)
#define PCIE_CORE_INT_MMVC BIT(19)
#define PCIE_CORE_CONFIG_VENDOR (PCIE_CORE_CTRL_MGMT_BASE + 0x44)
#define PCIE_CORE_INT_MASK (PCIE_CORE_CTRL_MGMT_BASE + 0x210)
#define PCIE_CORE_PHY_FUNC_CFG (PCIE_CORE_CTRL_MGMT_BASE + 0x2c0)
#define PCIE_RC_BAR_CONF (PCIE_CORE_CTRL_MGMT_BASE + 0x300)
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_DISABLED 0x0
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_IO_32BITS 0x1
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_32BITS 0x4
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_32BITS 0x5
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_MEM_64BITS 0x6
#define ROCKCHIP_PCIE_CORE_BAR_CFG_CTRL_PREFETCH_MEM_64BITS 0x7
#define PCIE_CORE_INT \
(PCIE_CORE_INT_PRFPE | PCIE_CORE_INT_CRFPE | \
PCIE_CORE_INT_RRPE | PCIE_CORE_INT_CRFO | \
PCIE_CORE_INT_RT | PCIE_CORE_INT_RTR | \
PCIE_CORE_INT_PE | PCIE_CORE_INT_MTR | \
PCIE_CORE_INT_UCR | PCIE_CORE_INT_FCE | \
PCIE_CORE_INT_CT | PCIE_CORE_INT_UTC | \
PCIE_CORE_INT_MMVC)
#define PCIE_RC_RP_ATS_BASE 0x400000
#define PCIE_RC_CONFIG_NORMAL_BASE 0x800000
#define PCIE_RC_CONFIG_BASE 0xa00000
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_SCC_SHIFT 16
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
#define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff
#define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26
#define PCIE_RC_CONFIG_DCSR (PCIE_RC_CONFIG_BASE + 0xc8)
#define PCIE_RC_CONFIG_DCSR_MPS_MASK GENMASK(7, 5)
#define PCIE_RC_CONFIG_DCSR_MPS_256 (0x1 << 5)
#define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc)
#define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10)
#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)
#define PCIE_CORE_AXI_CONF_BASE 0xc00000
#define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0)
#define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS 0x3f
#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR 0xffffff00
#define PCIE_CORE_OB_REGION_ADDR1 (PCIE_CORE_AXI_CONF_BASE + 0x4)
#define PCIE_CORE_OB_REGION_DESC0 (PCIE_CORE_AXI_CONF_BASE + 0x8)
#define PCIE_CORE_OB_REGION_DESC1 (PCIE_CORE_AXI_CONF_BASE + 0xc)
#define PCIE_CORE_AXI_INBOUND_BASE 0xc00800
#define PCIE_RP_IB_ADDR0 (PCIE_CORE_AXI_INBOUND_BASE + 0x0)
#define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS 0x3f
#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR 0xffffff00
#define PCIE_RP_IB_ADDR1 (PCIE_CORE_AXI_INBOUND_BASE + 0x4)
/* Size of one AXI Region (not Region 0) */
#define AXI_REGION_SIZE BIT(20)
/* Size of Region 0, equal to sum of sizes of other regions */
#define AXI_REGION_0_SIZE (32 * (0x1 << 20))
#define OB_REG_SIZE_SHIFT 5
#define IB_ROOT_PORT_REG_SIZE_SHIFT 3
#define AXI_WRAPPER_IO_WRITE 0x6
#define AXI_WRAPPER_MEM_WRITE 0x2
#define AXI_WRAPPER_TYPE0_CFG 0xa
#define AXI_WRAPPER_TYPE1_CFG 0xb
#define AXI_WRAPPER_NOR_MSG 0xc
#define MAX_AXI_IB_ROOTPORT_REGION_NUM 3
#define MIN_AXI_ADDR_BITS_PASSED 8
#define PCIE_RC_SEND_PME_OFF 0x11960
#define ROCKCHIP_VENDOR_ID 0x1d87
#define PCIE_ECAM_BUS(x) (((x) & 0xff) << 20)
#define PCIE_ECAM_DEV(x) (((x) & 0x1f) << 15)
#define PCIE_ECAM_FUNC(x) (((x) & 0x7) << 12)
#define PCIE_ECAM_REG(x) (((x) & 0xfff) << 0)
#define PCIE_ECAM_ADDR(bus, dev, func, reg) \
(PCIE_ECAM_BUS(bus) | PCIE_ECAM_DEV(dev) | \
PCIE_ECAM_FUNC(func) | PCIE_ECAM_REG(reg))
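/*
 * Worked example (illustrative): PCIE_ECAM_ADDR(1, 0, 0, 0x10) for bus 1,
 * device 0, function 0, config offset 0x10 (BAR0) gives
 * (1 << 20) | 0x10 = 0x100010 into the configuration aperture.
 */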
#define PCIE_LINK_IS_L2(x) \
(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
#define PCIE_LINK_UP(x) \
(((x) & PCIE_CLIENT_LINK_STATUS_MASK) == PCIE_CLIENT_LINK_STATUS_UP)
#define PCIE_LINK_IS_GEN2(x) \
(((x) & PCIE_CORE_PL_CONF_SPEED_MASK) == PCIE_CORE_PL_CONF_SPEED_5G)
#define RC_REGION_0_ADDR_TRANS_H 0x00000000
#define RC_REGION_0_ADDR_TRANS_L 0x00000000
#define RC_REGION_0_PASS_BITS (25 - 1)
#define RC_REGION_0_TYPE_MASK GENMASK(3, 0)
#define MAX_AXI_WRAPPER_REGION_NUM 33
#define ROCKCHIP_PCIE_MSG_ROUTING_TO_RC 0x0
#define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ADDR 0x1
#define ROCKCHIP_PCIE_MSG_ROUTING_VIA_ID 0x2
#define ROCKCHIP_PCIE_MSG_ROUTING_BROADCAST 0x3
#define ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX 0x4
#define ROCKCHIP_PCIE_MSG_ROUTING_PME_ACK 0x5
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA 0x20
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTB 0x21
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTC 0x22
#define ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTD 0x23
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA 0x24
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTB 0x25
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTC 0x26
#define ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTD 0x27
#define ROCKCHIP_PCIE_MSG_ROUTING_MASK GENMASK(7, 5)
#define ROCKCHIP_PCIE_MSG_ROUTING(route) \
(((route) << 5) & ROCKCHIP_PCIE_MSG_ROUTING_MASK)
#define ROCKCHIP_PCIE_MSG_CODE_MASK GENMASK(15, 8)
#define ROCKCHIP_PCIE_MSG_CODE(code) \
(((code) << 8) & ROCKCHIP_PCIE_MSG_CODE_MASK)
#define ROCKCHIP_PCIE_MSG_NO_DATA BIT(16)
#define ROCKCHIP_PCIE_EP_CMD_STATUS 0x4
#define ROCKCHIP_PCIE_EP_CMD_STATUS_IS BIT(19)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_REG 0x90
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET 17
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK GENMASK(19, 17)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET 20
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK GENMASK(22, 20)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_ME BIT(16)
#define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP BIT(24)
#define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1
#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3
#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12))
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
(PCIE_RC_RP_ATS_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK GENMASK(19, 12)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
(((devfn) << 12) & \
ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK GENMASK(27, 20)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
(((bus) << 20) & ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
(PCIE_RC_RP_ATS_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID BIT(23)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK GENMASK(31, 24)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
(((devfn) << 24) & ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \
(PCIE_RC_RP_ATS_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
(PCIE_RC_RP_ATS_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
(PCIE_RC_RP_ATS_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn) \
(PCIE_CORE_CTRL_MGMT_BASE + 0x0240 + (fn) * 0x0008)
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG1(fn) \
(PCIE_CORE_CTRL_MGMT_BASE + 0x0244 + (fn) * 0x0008)
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b) \
(GENMASK(4, 0) << ((b) * 8))
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE(b, a) \
(((a) << ((b) * 8)) & \
ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_APERTURE_MASK(b))
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b) \
(GENMASK(7, 5) << ((b) * 8))
#define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL(b, c) \
(((c) << ((b) * 8 + 5)) & \
ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b))
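/*
 * Packing example (illustrative): each BAR owns one byte of its CFG
 * register, split 5 bits aperture / 3 bits ctrl. For b = 2, aperture 13
 * and ctrl MEM_64BITS (0x6): APERTURE(2, 13) = 13 << 16 and
 * CTRL(2, 0x6) = 0x6 << 21.
 */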
struct rockchip_pcie {
void __iomem *reg_base; /* DT axi-base */
void __iomem *apb_base; /* DT apb-base */
bool legacy_phy;
struct phy *phys[MAX_LANE_NUM];
struct reset_control *core_rst;
struct reset_control *mgmt_rst;
struct reset_control *mgmt_sticky_rst;
struct reset_control *pipe_rst;
struct reset_control *pm_rst;
struct reset_control *aclk_rst;
struct reset_control *pclk_rst;
struct clk *aclk_pcie;
struct clk *aclk_perf_pcie;
struct clk *hclk_pcie;
struct clk *clk_pcie_pm;
struct regulator *vpcie12v; /* 12V power supply */
struct regulator *vpcie3v3; /* 3.3V power supply */
struct regulator *vpcie1v8; /* 1.8V power supply */
struct regulator *vpcie0v9; /* 0.9V power supply */
struct gpio_desc *ep_gpio;
u32 lanes;
u8 lanes_map;
u8 root_bus_nr;
int link_gen;
struct device *dev;
struct irq_domain *irq_domain;
int offset;
struct pci_bus *root_bus;
struct resource *io;
phys_addr_t io_bus_addr;
u32 io_size;
void __iomem *msg_region;
u32 mem_size;
phys_addr_t msg_bus_addr;
phys_addr_t mem_bus_addr;
bool is_rc;
struct resource *mem_res;
};
static u32 rockchip_pcie_read(struct rockchip_pcie *rockchip, u32 reg)
{
return readl(rockchip->apb_base + reg);
}
static void rockchip_pcie_write(struct rockchip_pcie *rockchip, u32 val,
u32 reg)
{
writel(val, rockchip->apb_base + reg);
}
int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip);
int rockchip_pcie_init_port(struct rockchip_pcie *rockchip);
int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip);
void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip);
int rockchip_pcie_enable_clocks(struct rockchip_pcie *rockchip);
void rockchip_pcie_disable_clocks(void *data);
void rockchip_pcie_cfg_configuration_accesses(
struct rockchip_pcie *rockchip, u32 type);
#endif /* _PCIE_ROCKCHIP_H */


@ -21,6 +21,8 @@
#include <linux/platform_device.h>
#include <linux/irqchip/chained_irq.h>
#include "../pci.h"
/* Bridge core config registers */
#define BRCFG_PCIE_RX0 0x00000000
#define BRCFG_INTERRUPT 0x00000010
@ -825,7 +827,6 @@ static const struct of_device_id nwl_pcie_of_match[] = {
static int nwl_pcie_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct nwl_pcie *pcie;
struct pci_bus *bus;
struct pci_bus *child;
@ -855,7 +856,8 @@ static int nwl_pcie_probe(struct platform_device *pdev)
return err;
}
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
&iobase);
if (err) {
dev_err(dev, "Getting bridge resources failed\n");
return err;


@ -23,6 +23,8 @@
#include <linux/pci.h>
#include <linux/platform_device.h>
#include "../pci.h"
/* Register definitions */
#define XILINX_PCIE_REG_BIR 0x00000130
#define XILINX_PCIE_REG_IDR 0x00000138
@ -643,8 +645,8 @@ static int xilinx_pcie_probe(struct platform_device *pdev)
return err;
}
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &res,
&iobase);
if (err) {
dev_err(dev, "Getting bridge resources failed\n");
return err;


@ -24,6 +24,28 @@
#define VMD_MEMBAR1 2
#define VMD_MEMBAR2 4
#define PCI_REG_VMCAP 0x40
#define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
#define PCI_REG_VMCONFIG 0x44
#define BUS_RESTRICT_CFG(vmcfg) ((vmcfg >> 8) & 0x3)
#define PCI_REG_VMLOCK 0x70
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
enum vmd_features {
/*
* Device may contain registers which hint the physical location of the
* membars, in order to allow proper address translation during
* resource assignment to enable guest virtualization
*/
VMD_FEAT_HAS_MEMBAR_SHADOW = (1 << 0),
/*
* Device may provide root port configuration information which limits
* bus numbering
*/
VMD_FEAT_HAS_BUS_RESTRICTIONS = (1 << 1),
};
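/*
 * Illustrative note: these feature bits travel in pci_device_id.driver_data
 * (see vmd_ids[] below), so a device opts in at match time and
 * vmd_enable_domain() simply tests, e.g.:
 *
 *	if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS)
 *		...	// honour the 0-127 / 128-255 bus split
 */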
/*
* Lock for manipulating VMD IRQ lists.
*/
@ -546,7 +568,7 @@ static int vmd_find_free_domain(void)
return domain + 1;
}
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
struct fwnode_handle *fn;
@ -554,12 +576,57 @@ static int vmd_enable_domain(struct vmd_dev *vmd)
u32 upper_bits;
unsigned long flags;
LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000, busn_start = 0;
/*
* Shadow registers may exist in certain VMD device ids which allow
* guests to correctly assign host physical addresses to the root ports
* and child devices. These registers will either return the host value
* or 0, depending on an enable bit in the VMD device.
*/
if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) {
u32 vmlock;
int ret;
membar2_offset = 0x2018;
ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock);
if (ret || vmlock == ~0)
return -ENODEV;
if (MB2_SHADOW_EN(vmlock)) {
void __iomem *membar2;
membar2 = pci_iomap(vmd->dev, VMD_MEMBAR2, 0);
if (!membar2)
return -ENOMEM;
offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
readq(membar2 + 0x2008);
offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
readq(membar2 + 0x2010);
pci_iounmap(vmd->dev, membar2);
}
}
/*
* Certain VMD devices may have a root port configuration option which
* limits the bus range to either 0-127 or 128-255
*/
if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) {
u32 vmcap, vmconfig;
pci_read_config_dword(vmd->dev, PCI_REG_VMCAP, &vmcap);
pci_read_config_dword(vmd->dev, PCI_REG_VMCONFIG, &vmconfig);
if (BUS_RESTRICT_CAP(vmcap) &&
(BUS_RESTRICT_CFG(vmconfig) == 0x1))
busn_start = 128;
}
res = &vmd->dev->resource[VMD_CFGBAR];
vmd->resources[0] = (struct resource) {
.name = "VMD CFGBAR",
.start = busn_start,
.end = busn_start + (resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
};
@ -600,7 +667,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd)
flags &= ~IORESOURCE_MEM_64;
vmd->resources[2] = (struct resource) {
.name = "VMD MEMBAR2",
.start = res->start + membar2_offset,
.end = res->end,
.flags = flags,
.parent = res,
@ -624,10 +691,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd)
return -ENODEV;
pci_add_resource(&resources, &vmd->resources[0]);
pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
vmd->bus = pci_create_root_bus(&vmd->dev->dev, busn_start, &vmd_ops,
sd, &resources);
if (!vmd->bus) {
pci_free_resource_list(&resources);
irq_domain_remove(vmd->irq_domain);
@ -713,7 +781,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
spin_lock_init(&vmd->cfg_lock);
pci_set_drvdata(dev, vmd);
err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
if (err)
return err;
@ -778,7 +846,10 @@ static int vmd_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume);
static const struct pci_device_id vmd_ids[] = {
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D),},
{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0),
.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
VMD_FEAT_HAS_BUS_RESTRICTIONS,},
{0,}
};
MODULE_DEVICE_TABLE(pci, vmd_ids);


@ -104,14 +104,11 @@ config HOTPLUG_PCI_CPCI_GENERIC
When in doubt, say N.
config HOTPLUG_PCI_SHPC
bool "SHPC PCI Hotplug driver"
help
Say Y here if you have a motherboard with a SHPC PCI Hotplug
controller.
When in doubt, say N.
config HOTPLUG_PCI_POWERNV


@ -63,22 +63,17 @@ static acpi_status acpi_run_oshp(acpi_handle handle)
/**
* acpi_get_hp_hw_control_from_firmware
* @pdev: the pci_dev of the bridge that has a hotplug controller
*
* Attempt to take hotplug control from firmware.
*/
int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev)
{
const struct pci_host_bridge *host;
const struct acpi_pci_root *root;
acpi_status status;
acpi_handle chandle, handle;
struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };
/*
* Per PCI firmware specification, we should run the ACPI _OSC
* method to get control of hotplug hardware before using it. If
@ -88,25 +83,20 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev, u32 flags)
* OSHP within the scope of the hotplug controller and its parents,
* up to the host bridge under which this controller exists.
*/
if (shpchp_is_native(pdev))
return 0;
/* If _OSC exists, we should not evaluate OSHP */
host = pci_find_host_bridge(pdev->bus);
root = acpi_pci_find_root(ACPI_HANDLE(&host->dev));
if (root->osc_support_set)
goto no_control;
handle = ACPI_HANDLE(&pdev->dev);
if (!handle) {
/*
* This hotplug controller was not listed in the ACPI name
* space at all. Try to get ACPI handle of parent PCI bus.
*/
struct pci_bus *pbus;
for (pbus = pdev->bus; pbus; pbus = pbus->parent) {
@ -118,8 +108,8 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev, u32 flags)
while (handle) {
acpi_get_name(handle, ACPI_FULL_PATHNAME, &string);
dbg("Trying to get hotplug control for %s\n",
(char *)string.pointer);
pci_info(pdev, "Requesting control of SHPC hotplug via OSHP (%s)\n",
(char *)string.pointer);
status = acpi_run_oshp(handle);
if (ACPI_SUCCESS(status))
goto got_one;
@ -131,13 +121,12 @@ int acpi_get_hp_hw_control_from_firmware(struct pci_dev *pdev, u32 flags)
break;
}
no_control:
dbg("Cannot get control of hotplug hardware for pci %s\n",
pci_name(pdev));
pci_info(pdev, "Cannot get control of SHPC hotplug\n");
kfree(string.pointer);
return -ENODEV;
got_one:
dbg("Gained control for hotplug HW for pci %s (%s)\n",
pci_name(pdev), (char *)string.pointer);
pci_info(pdev, "Gained control of SHPC hotplug (%s)\n",
(char *)string.pointer);
kfree(string.pointer);
return 0;
}


@ -287,11 +287,12 @@ static acpi_status acpiphp_add_context(acpi_handle handle, u32 lvl, void *data,
/*
* Expose slots to user space for functions that have _EJ0 or _RMV or
* are located in dock stations. Do not expose them for devices handled
* by the native PCIe hotplug (PCIeHP) or standard PCI hotplug
* (SHPCHP), because that code is supposed to expose slots to user
* space in those cases.
*/
if ((acpi_pci_check_ejectable(pbus, handle) || is_dock_device(adev))
&& !(pdev && hotplug_is_native(pdev))) {
unsigned long long sun;
int retval;
@ -430,6 +431,29 @@ static int acpiphp_rescan_slot(struct acpiphp_slot *slot)
return pci_scan_slot(slot->bus, PCI_DEVFN(slot->device, 0));
}
static void acpiphp_native_scan_bridge(struct pci_dev *bridge)
{
struct pci_bus *bus = bridge->subordinate;
struct pci_dev *dev;
int max;
if (!bus)
return;
max = bus->busn_res.start;
/* Scan already configured non-hotplug bridges */
for_each_pci_bridge(dev, bus) {
if (!hotplug_is_native(dev))
max = pci_scan_bridge(bus, dev, max, 0);
}
/* Scan non-hotplug bridges that need to be reconfigured */
for_each_pci_bridge(dev, bus) {
if (!hotplug_is_native(dev))
max = pci_scan_bridge(bus, dev, max, 1);
}
}
/**
* enable_slot - enable, configure a slot
* @slot: slot to be enabled
@ -442,25 +466,42 @@ static void enable_slot(struct acpiphp_slot *slot)
struct pci_dev *dev;
struct pci_bus *bus = slot->bus;
struct acpiphp_func *func;
if (bus->self && hotplug_is_native(bus->self)) {
/*
* If native hotplug is used, it will take care of hotplug
* slot management and resource allocation for hotplug
* bridges. However, ACPI hotplug may still be used for
* non-hotplug bridges to bring in additional devices such
* as a Thunderbolt host controller.
*/
for_each_pci_bridge(dev, bus) {
if (PCI_SLOT(dev->devfn) == slot->device)
acpiphp_native_scan_bridge(dev);
}
pci_assign_unassigned_bridge_resources(bus->self);
} else {
LIST_HEAD(add_list);
int max, pass;
acpiphp_rescan_slot(slot);
max = acpiphp_max_busnr(bus);
for (pass = 0; pass < 2; pass++) {
for_each_pci_bridge(dev, bus) {
if (PCI_SLOT(dev->devfn) != slot->device)
continue;
max = pci_scan_bridge(bus, dev, max, pass);
if (pass && dev->subordinate) {
check_hotplug_bridge(slot, dev);
pcibios_resource_survey_bus(dev->subordinate);
__pci_bus_size_bridges(dev->subordinate,
&add_list);
}
}
}
__pci_bus_assign_resources(bus, &add_list, NULL);
}
acpiphp_sanitize_bus(bus);
pcie_bus_configure_settings(bus);
@ -481,7 +522,7 @@ static void enable_slot(struct acpiphp_slot *slot)
if (!dev) {
/* Do not set SLOT_ENABLED flag if some funcs
are not added. */
slot->flags &= ~SLOT_ENABLED;
continue;
}
}
@ -510,7 +551,7 @@ static void disable_slot(struct acpiphp_slot *slot)
list_for_each_entry(func, &slot->funcs, sibling)
acpi_bus_trim(func_to_acpi_device(func));
slot->flags &= ~SLOT_ENABLED;
}
static bool slot_no_hotplug(struct acpiphp_slot *slot)
@ -608,6 +649,11 @@ static void trim_stale_devices(struct pci_dev *dev)
alive = pci_device_is_present(dev);
if (!alive) {
pci_dev_set_disconnected(dev, NULL);
if (pci_has_subordinate(dev))
pci_walk_bus(dev->subordinate, pci_dev_set_disconnected,
NULL);
pci_stop_and_remove_bus_device(dev);
if (adev)
acpi_bus_trim(adev);


@ -379,7 +379,7 @@ static int get_adapter_present(struct hotplug_slot *hotplug_slot, u8 *value)
static int get_max_bus_speed(struct slot *slot)
{
int rc = 0;
u8 mode = 0;
enum pci_bus_speed speed;
struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;


@ -121,7 +121,7 @@ struct controller *pcie_init(struct pcie_device *dev);
int pcie_init_notification(struct controller *ctrl);
int pciehp_enable_slot(struct slot *p_slot);
int pciehp_disable_slot(struct slot *p_slot);
void pcie_reenable_notification(struct controller *ctrl);
int pciehp_power_on_slot(struct slot *slot);
void pciehp_power_off_slot(struct slot *slot);
void pciehp_get_power_status(struct slot *slot, u8 *status);


@ -283,7 +283,7 @@ static int pciehp_resume(struct pcie_device *dev)
ctrl = get_service_data(dev);
/* reinitialize the chipset's event detection logic */
pcie_reenable_notification(ctrl);
slot = ctrl->slot;


@ -10,7 +10,6 @@
* All rights reserved.
*
* Send feedback to <greg@kroah.com>,<kristen.c.accardi@intel.com>
*/
#include <linux/kernel.h>
@ -147,25 +146,22 @@ static void pcie_wait_cmd(struct controller *ctrl)
else
rc = pcie_poll_cmd(ctrl, jiffies_to_msecs(timeout));
if (!rc)
ctrl_info(ctrl, "Timeout on hotplug command %#06x (issued %u msec ago)\n",
ctrl->slot_ctrl,
jiffies_to_msecs(jiffies - ctrl->cmd_started));
}
#define CC_ERRATUM_MASK (PCI_EXP_SLTCTL_PCC | \
PCI_EXP_SLTCTL_PIC | \
PCI_EXP_SLTCTL_AIC | \
PCI_EXP_SLTCTL_EIC)
static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
u16 mask, bool wait)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
u16 slot_ctrl_orig, slot_ctrl;
mutex_lock(&ctrl->ctrl_lock);
@ -180,6 +176,7 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
goto out;
}
slot_ctrl_orig = slot_ctrl;
slot_ctrl &= ~mask;
slot_ctrl |= (cmd & mask);
ctrl->cmd_busy = 1;
@ -188,6 +185,17 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
ctrl->cmd_started = jiffies;
ctrl->slot_ctrl = slot_ctrl;
/*
* Controllers with the Intel CF118 and similar errata advertise
* Command Completed support, but they only set Command Completed
* if we change the "Control" bits for power, power indicator,
* attention indicator, or interlock. If we only change the
* "Enable" bits, they never set the Command Completed bit.
*/
if (pdev->broken_cmd_compl &&
(slot_ctrl_orig & CC_ERRATUM_MASK) == (slot_ctrl & CC_ERRATUM_MASK))
ctrl->cmd_busy = 0;
/*
* Optionally wait for the hardware to be ready for a new command,
* indicating completion of the above issued command.
@ -231,25 +239,11 @@ bool pciehp_check_link_active(struct controller *ctrl)
return ret;
}
static void pcie_wait_link_active(struct controller *ctrl)
{
struct pci_dev *pdev = ctrl_dev(ctrl);
pcie_wait_for_link(pdev, true);
}
static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
@ -659,7 +653,7 @@ static irqreturn_t pcie_isr(int irq, void *dev_id)
return handled;
}
static void pcie_enable_notification(struct controller *ctrl)
{
u16 cmd, mask;
@ -697,6 +691,17 @@ void pcie_enable_notification(struct controller *ctrl)
pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
}
void pcie_reenable_notification(struct controller *ctrl)
{
/*
* Clear both Presence and Data Link Layer Changed to make sure
* those events still fire after we have re-enabled them.
*/
pcie_capability_write_word(ctrl->pcie->port, PCI_EXP_SLTSTA,
PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
pcie_enable_notification(ctrl);
}
static void pcie_disable_notification(struct controller *ctrl)
{
u16 mask;
@ -861,7 +866,7 @@ struct controller *pcie_init(struct pcie_device *dev)
PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
PCI_EXP_SLTSTA_DLLSC);
ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c\n",
ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n",
(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
@ -872,7 +877,8 @@ struct controller *pcie_init(struct pcie_device *dev)
FLAG(slot_cap, PCI_EXP_SLTCAP_HPS),
FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
if (pcie_init_slot(ctrl))
goto abort_ctrl;
@ -891,3 +897,21 @@ void pciehp_release_ctrl(struct controller *ctrl)
pcie_cleanup_slot(ctrl);
kfree(ctrl);
}
static void quirk_cmd_compl(struct pci_dev *pdev)
{
u32 slot_cap;
if (pci_is_pcie(pdev)) {
pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
if (slot_cap & PCI_EXP_SLTCAP_HPC &&
!(slot_cap & PCI_EXP_SLTCAP_NCCS))
pdev->broken_cmd_compl = 1;
}
}
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0401,
PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl);


@ -220,12 +220,16 @@ static int pnv_php_populate_changeset(struct of_changeset *ocs,
for_each_child_of_node(dn, child) {
ret = of_changeset_attach_node(ocs, child);
if (ret) {
of_node_put(child);
break;
}
ret = pnv_php_populate_changeset(ocs, child);
if (ret) {
of_node_put(child);
break;
}
}
return ret;


@ -105,7 +105,6 @@ struct controller {
};
/* Define AMD SHPC ID */
#define PCI_DEVICE_ID_AMD_POGO_7458 0x7458
/* AMD PCI-X bridge registers */
@ -173,17 +172,6 @@ static inline const char *slot_name(struct slot *slot)
return hotplug_slot_name(slot->hotplug_slot);
}
struct ctrl_reg {
volatile u32 base_offset;
volatile u32 slot_avail1;


@ -270,24 +270,12 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
return 0;
}
static int shpc_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
int rc;
struct controller *ctrl;
if (acpi_get_hp_hw_control_from_firmware(pdev))
return -ENODEV;
ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);


@ -585,13 +585,13 @@ static int shpchp_enable_slot (struct slot *p_slot)
ctrl_dbg(ctrl, "%s: p_slot->pwr_save %x\n", __func__, p_slot->pwr_save);
p_slot->hpc_ops->get_latch_status(p_slot, &getstatus);
if ((p_slot->ctrl->pci_dev->vendor == PCI_VENDOR_ID_AMD &&
p_slot->ctrl->pci_dev->device == PCI_DEVICE_ID_AMD_POGO_7458)
&& p_slot->ctrl->num_slots == 1) {
/* handle AMD POGO errata; this must be done before enable */
amd_pogo_errata_save_misc_reg(p_slot);
retval = board_added(p_slot);
/* handle AMD POGO errata; this must be done after enable */
amd_pogo_errata_restore_misc_reg(p_slot);
} else
retval = board_added(p_slot);


@ -469,6 +469,7 @@ found:
iov->nres = nres;
iov->ctrl = ctrl;
iov->total_VFs = total;
iov->driver_max_VFs = total;
pci_read_config_word(dev, pos + PCI_SRIOV_VF_DID, &iov->vf_device);
iov->pgsz = pgsz;
iov->self = dev;
@ -827,9 +828,42 @@ int pci_sriov_get_totalvfs(struct pci_dev *dev)
if (!dev->is_physfn)
return 0;
return dev->sriov->driver_max_VFs;
}
EXPORT_SYMBOL_GPL(pci_sriov_get_totalvfs);
/**
* pci_sriov_configure_simple - helper to configure SR-IOV
* @dev: the PCI device
* @nr_virtfn: number of virtual functions to enable, 0 to disable
*
* Enable or disable SR-IOV for devices that don't require any PF setup
* before enabling SR-IOV. Return value is negative on error, or number of
* VFs allocated on success.
*/
int pci_sriov_configure_simple(struct pci_dev *dev, int nr_virtfn)
{
int rc;
might_sleep();
if (!dev->is_physfn)
return -ENODEV;
if (pci_vfs_assigned(dev)) {
pci_warn(dev, "Cannot modify SR-IOV while VFs are assigned\n");
return -EPERM;
}
if (nr_virtfn == 0) {
sriov_disable(dev);
return 0;
}
rc = sriov_enable(dev, nr_virtfn);
if (rc < 0)
return rc;
return nr_virtfn;
}
EXPORT_SYMBOL_GPL(pci_sriov_configure_simple);
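/*
 * Typical use (a sketch; driver name hypothetical): a PF needing no setup
 * before SR-IOV wires the helper straight into its pci_driver:
 *
 *	static struct pci_driver foo_driver = {
 *		...
 *		.sriov_configure = pci_sriov_configure_simple,
 *	};
 *
 * after which "echo 4 > /sys/bus/pci/devices/<BDF>/sriov_numvfs" enables
 * four VFs. pci-pf-stub later in this pull is exactly such a user.
 */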


@ -244,8 +244,9 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
#if defined(CONFIG_OF_ADDRESS)
/**
* devm_of_pci_get_host_bridge_resources() - Resource-managed parsing of PCI
* host bridge resources from DT
* @dev: host bridge device
* @busno: bus number associated with the bridge root bus
* @bus_max: maximum number of buses for this bridge
* @resources: list where the range of resources will be added after DT parsing
@ -253,8 +254,6 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
* address for the start of the I/O range. Can be NULL if the caller doesn't
* expect I/O ranges to be present in the device tree.
*
* This function will parse the "ranges" property of a PCI host bridge device
* node and setup the resource mapping based on its content. It is expected
* that the property conforms with the Power ePAPR document.
@ -262,11 +261,11 @@ EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
* It returns zero if the range parsing has been successful or a standard error
* value if it failed.
*/
int devm_of_pci_get_host_bridge_resources(struct device *dev,
unsigned char busno, unsigned char bus_max,
struct list_head *resources, resource_size_t *io_base)
{
struct device_node *dev_node = dev->of_node;
struct resource *res;
struct resource *bus_range;
struct of_pci_range range;
@ -277,19 +276,19 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
if (io_base)
*io_base = (resource_size_t)OF_BAD_ADDR;
bus_range = devm_kzalloc(dev, sizeof(*bus_range), GFP_KERNEL);
if (!bus_range)
return -ENOMEM;
pr_info("host bridge %pOF ranges:\n", dev);
dev_info(dev, "host bridge %pOF ranges:\n", dev_node);
err = of_pci_parse_bus_range(dev, bus_range);
err = of_pci_parse_bus_range(dev_node, bus_range);
if (err) {
bus_range->start = busno;
bus_range->end = bus_max;
bus_range->flags = IORESOURCE_BUS;
pr_info(" No bus range found for %pOF, using %pR\n",
dev, bus_range);
dev_info(dev, " No bus range found for %pOF, using %pR\n",
dev_node, bus_range);
} else {
if (bus_range->end > bus_range->start + bus_max)
bus_range->end = bus_range->start + bus_max;
@ -297,11 +296,11 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
pci_add_resource(resources, bus_range);
/* Check for ranges property */
err = of_pci_range_parser_init(&parser, dev_node);
if (err)
goto failed;
dev_dbg(dev, "Parsing ranges property...\n");
for_each_of_pci_range(&parser, &range) {
/* Read next ranges element */
if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
@ -310,9 +309,9 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
snprintf(range_type, 4, "MEM");
else
snprintf(range_type, 4, "err");
pr_info(" %s %#010llx..%#010llx -> %#010llx\n", range_type,
range.cpu_addr, range.cpu_addr + range.size - 1,
range.pci_addr);
dev_info(dev, " %s %#010llx..%#010llx -> %#010llx\n",
range_type, range.cpu_addr,
range.cpu_addr + range.size - 1, range.pci_addr);
/*
* If we failed translation or got a zero-sized region
@ -321,28 +320,28 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
continue;
res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
if (!res) {
err = -ENOMEM;
goto failed;
}
err = of_pci_range_to_resource(&range, dev_node, res);
if (err) {
devm_kfree(dev, res);
continue;
}
if (resource_type(res) == IORESOURCE_IO) {
if (!io_base) {
pr_err("I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n",
dev);
dev_err(dev, "I/O range found for %pOF. Please provide an io_base pointer to save CPU base address\n",
dev_node);
err = -EINVAL;
goto failed;
}
if (*io_base != (resource_size_t)OF_BAD_ADDR)
pr_warn("More than one I/O resource converted for %pOF. CPU base address for old range lost!\n",
dev);
dev_warn(dev, "More than one I/O resource converted for %pOF. CPU base address for old range lost!\n",
dev_node);
*io_base = range.cpu_addr;
}
@ -351,15 +350,11 @@ int of_pci_get_host_bridge_resources(struct device_node *dev,
return 0;
failed:
pci_free_resource_list(resources);
return err;
}
EXPORT_SYMBOL_GPL(devm_of_pci_get_host_bridge_resources);
#endif /* CONFIG_OF_ADDRESS */
/**
@ -599,12 +594,12 @@ int pci_parse_request_of_pci_ranges(struct device *dev,
struct resource **bus_range)
{
int err, res_valid = 0;
resource_size_t iobase;
struct resource_entry *win, *tmp;
INIT_LIST_HEAD(resources);
err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, resources,
&iobase);
if (err)
return err;


@ -370,26 +370,57 @@ EXPORT_SYMBOL_GPL(pci_get_hp_params);
/**
* pciehp_is_native - Check whether a hotplug port is handled by the OS
* @bridge: Hotplug port to check
*
* Returns true if the given @bridge is handled by the native PCIe hotplug
* driver.
*/
bool pciehp_is_native(struct pci_dev *pdev)
bool pciehp_is_native(struct pci_dev *bridge)
{
struct acpi_pci_root *root;
acpi_handle handle;
const struct pci_host_bridge *host;
u32 slot_cap;
handle = acpi_find_root_bridge_handle(pdev);
if (!handle)
if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
return false;
root = acpi_pci_find_root(handle);
if (!root)
pcie_capability_read_dword(bridge, PCI_EXP_SLTCAP, &slot_cap);
if (!(slot_cap & PCI_EXP_SLTCAP_HPC))
return false;
return root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
if (pcie_ports_native)
return true;
host = pci_find_host_bridge(bridge->bus);
return host->native_pcie_hotplug;
}
/**
* shpchp_is_native - Check whether a hotplug port is handled by the OS
* @bridge: Hotplug port to check
*
* Returns true if the given @bridge is handled by the native SHPC hotplug
* driver.
*/
bool shpchp_is_native(struct pci_dev *bridge)
{
const struct pci_host_bridge *host;
if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC))
return false;
/*
* It is assumed that AMD GOLAM chips support SHPC but they do not
* have SHPC capability.
*/
if (bridge->vendor == PCI_VENDOR_ID_AMD &&
bridge->device == PCI_DEVICE_ID_AMD_GOLAM_7450)
return true;
if (!pci_find_capability(bridge, PCI_CAP_ID_SHPC))
return false;
host = pci_find_host_bridge(bridge->bus);
return host->native_shpc_hotplug;
}
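
Together these give ACPI hotplug code a single question to ask: does some native driver already own this bridge? A minimal sketch of the combined check, assuming the two predicates above are in scope (the helper name here is illustrative):

#include <linux/pci.h>
#include <linux/pci_hotplug.h>

/* A bridge is natively managed when either pciehp or shpchp owns it;
 * acpiphp can then skip such bridges instead of double-managing
 * their slots. */
static bool demo_hotplug_is_native(struct pci_dev *bridge)
{
	return pciehp_is_native(bridge) || shpchp_is_native(bridge);
}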
/**


@@ -1539,7 +1539,7 @@ static int pci_uevent(struct device *dev, struct kobj_uevent_env *env)
return 0;
}
#if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH)
#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
/**
* pci_uevent_ers - emit a uevent during recovery path of PCI device
* @pdev: PCI device undergoing error recovery


@@ -0,0 +1,54 @@
// SPDX-License-Identifier: GPL-2.0
/* pci-pf-stub - simple stub driver for PCI SR-IOV PF device
*
* This driver is meant to act as a "whitelist" for devices that provide
* SR-IOV functionality while at the same time not actually needing a
* driver of their own.
*/
#include <linux/module.h>
#include <linux/pci.h>
/**
* pci_pf_stub_whitelist - White list of devices to bind pci-pf-stub onto
*
* This table provides the list of IDs this driver is supposed to bind
* onto. You could think of this as a list of "quirked" devices where we
* are adding support for SR-IOV here since there are no other drivers
* that they would be running under.
*/
static const struct pci_device_id pci_pf_stub_whitelist[] = {
{ PCI_VDEVICE(AMAZON, 0x0053) },
/* required last entry */
{ 0 }
};
MODULE_DEVICE_TABLE(pci, pci_pf_stub_whitelist);
static int pci_pf_stub_probe(struct pci_dev *dev,
const struct pci_device_id *id)
{
pci_info(dev, "claimed by pci-pf-stub\n");
return 0;
}
static struct pci_driver pf_stub_driver = {
.name = "pci-pf-stub",
.id_table = pci_pf_stub_whitelist,
.probe = pci_pf_stub_probe,
.sriov_configure = pci_sriov_configure_simple,
};
static int __init pci_pf_stub_init(void)
{
return pci_register_driver(&pf_stub_driver);
}
static void __exit pci_pf_stub_exit(void)
{
pci_unregister_driver(&pf_stub_driver);
}
module_init(pci_pf_stub_init);
module_exit(pci_pf_stub_exit);
MODULE_LICENSE("GPL");
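
The stub delegates all real work to pci_sriov_configure_simple(). Roughly, such a handler only has to switch SR-IOV on or off through the core API; the sketch below is a simplified reconstruction from the public helpers, not the core's exact implementation (which also refuses while VFs are assigned):

#include <linux/pci.h>

/* Simplified sriov_configure handler: enable nr_virtfn VFs, or
 * disable SR-IOV entirely when nr_virtfn is zero. */
static int demo_sriov_configure(struct pci_dev *dev, int nr_virtfn)
{
	int rc;

	if (!dev->is_physfn)
		return -ENODEV;	/* not an SR-IOV physical function */

	if (nr_virtfn == 0) {
		pci_disable_sriov(dev);
		return 0;
	}

	rc = pci_enable_sriov(dev, nr_virtfn);
	return rc ? rc : nr_virtfn;
}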


@@ -288,13 +288,16 @@ static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (!val) {
if (pci_is_enabled(pdev))
pci_disable_device(pdev);
else
result = -EIO;
} else
device_lock(dev);
if (dev->driver)
result = -EBUSY;
else if (val)
result = pci_enable_device(pdev);
else if (pci_is_enabled(pdev))
pci_disable_device(pdev);
else
result = -EIO;
device_unlock(dev);
return result < 0 ? result : count;
}
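
This is the standard pattern for sysfs handlers that can race with driver bind and unbind: dev->driver is only stable while device_lock() is held, so the check and the state change must sit inside the same critical section. In miniature (the attribute below is hypothetical):

#include <linux/device.h>

/* Hypothetical store handler using the same locking pattern as the
 * fixed enable_store() above. */
static ssize_t demo_store(struct device *dev, struct device_attribute *attr,
			  const char *buf, size_t count)
{
	ssize_t result = count;

	device_lock(dev);
	if (dev->driver)
		result = -EBUSY;	/* a driver is bound: refuse */
	/* else: no driver can attach while the lock is held, so a
	 * state change performed here cannot race with probe(). */
	device_unlock(dev);

	return result;
}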


@@ -112,6 +112,14 @@ unsigned int pcibios_max_latency = 255;
/* If set, the PCIe ARI capability will not be used. */
static bool pcie_ari_disabled;
/* If set, the PCIe ATS capability will not be used. */
static bool pcie_ats_disabled;
bool pci_ats_disabled(void)
{
return pcie_ats_disabled;
}
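
The accessor exists so code outside pci.c can honor the switch without seeing the variable itself. A hedged sketch of a consumer, with the function below purely illustrative:

#include <linux/pci.h>

/* Hypothetical ATS setup path: bail out early when the administrator
 * booted with "pci=noats". */
static void demo_ats_init(struct pci_dev *dev)
{
	if (pci_ats_disabled())
		return;

	if (!pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ATS))
		return;	/* device has no ATS capability */

	/* ... program the ATS capability registers ... */
}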
/* Disable bridge_d3 for all PCIe ports */
static bool pci_bridge_d3_disable;
/* Force bridge_d3 for all PCIe ports */
@@ -4153,6 +4161,35 @@ static int pci_pm_reset(struct pci_dev *dev, int probe)
return pci_dev_wait(dev, "PM D3->D0", PCIE_RESET_READY_POLL_MS);
}
/**
* pcie_wait_for_link - Wait until link is active or inactive
* @pdev: Bridge device
* @active: wait for the link to become active (true) or inactive (false)
*
* Use this to wait until the link becomes active or inactive.
*/
bool pcie_wait_for_link(struct pci_dev *pdev, bool active)
{
int timeout = 1000;
bool ret;
u16 lnk_status;
for (;;) {
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status);
ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA);
if (ret == active)
return true;
if (timeout <= 0)
break;
msleep(10);
timeout -= 10;
}
pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
active ? "set" : "cleared");
return false;
}
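
A sketch of the kind of caller this helper serves: reset a bridge's secondary bus, then wait for the Data Link Layer Link Active bit before touching config space below it. The function name is hypothetical; pci_reset_bridge_secondary_bus() is the existing interface referenced elsewhere in this series:

#include <linux/errno.h>
#include <linux/pci.h>

/* Hypothetical caller: after a secondary bus reset, give the link up
 * to the helper's one-second budget to retrain. */
static int demo_reset_and_wait(struct pci_dev *bridge)
{
	pci_reset_bridge_secondary_bus(bridge);

	if (!pcie_wait_for_link(bridge, true))
		return -ETIMEDOUT;

	return 0;
}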
void pci_reset_secondary_bus(struct pci_dev *dev)
{
@@ -5084,49 +5121,6 @@ int pcie_set_mps(struct pci_dev *dev, int mps)
}
EXPORT_SYMBOL(pcie_set_mps);
/**
* pcie_get_minimum_link - determine minimum link settings of a PCI device
* @dev: PCI device to query
* @speed: storage for minimum speed
* @width: storage for minimum width
*
* This function will walk up the PCI device chain and determine the minimum
* link width and speed of the device.
*/
int pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed,
enum pcie_link_width *width)
{
int ret;
*speed = PCI_SPEED_UNKNOWN;
*width = PCIE_LNK_WIDTH_UNKNOWN;
while (dev) {
u16 lnksta;
enum pci_bus_speed next_speed;
enum pcie_link_width next_width;
ret = pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta);
if (ret)
return ret;
next_speed = pcie_link_speed[lnksta & PCI_EXP_LNKSTA_CLS];
next_width = (lnksta & PCI_EXP_LNKSTA_NLW) >>
PCI_EXP_LNKSTA_NLW_SHIFT;
if (next_speed < *speed)
*speed = next_speed;
if (next_width < *width)
*width = next_width;
dev = dev->bus->self;
}
return 0;
}
EXPORT_SYMBOL(pcie_get_minimum_link);
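
The removal is possible because the drivers named in the merge summary (bnx2x, bnxt_en, cxgb4, ixgbe) switched to the core's bandwidth reporting, which combines speed and width into one figure instead of taking each minimum independently. A sketch of the replacement call, with the wrapper function hypothetical:

#include <linux/pci.h>

/* Hypothetical probe snippet: let the PCI core log the available
 * bandwidth and which link in the chain limits it. */
static void demo_report_link(struct pci_dev *pdev)
{
	pcie_print_link_status(pdev);
}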
/**
* pcie_bandwidth_available - determine minimum link settings of a PCIe
* device and its bandwidth limitation
@@ -5717,15 +5711,14 @@ static void pci_no_domains(void)
#endif
}
#ifdef CONFIG_PCI_DOMAINS
#ifdef CONFIG_PCI_DOMAINS_GENERIC
static atomic_t __domain_nr = ATOMIC_INIT(-1);
int pci_get_new_domain_nr(void)
static int pci_get_new_domain_nr(void)
{
return atomic_inc_return(&__domain_nr);
}
#ifdef CONFIG_PCI_DOMAINS_GENERIC
static int of_pci_bus_find_domain_nr(struct device *parent)
{
static int use_dt_domains = -1;
@@ -5780,7 +5773,6 @@ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
acpi_pci_bus_find_domain_nr(bus);
}
#endif
#endif
/**
* pci_ext_cfg_avail - can we access extended PCI config space?
@@ -5808,6 +5800,9 @@ static int __init pci_setup(char *str)
if (*str && (str = pcibios_setup(str)) && *str) {
if (!strcmp(str, "nomsi")) {
pci_no_msi();
} else if (!strncmp(str, "noats", 5)) {
pr_info("PCIe: ATS is disabled\n");
pcie_ats_disabled = true;
} else if (!strcmp(str, "noaer")) {
pci_no_aer();
} else if (!strncmp(str, "realloc=", 8)) {


@@ -353,6 +353,11 @@ static inline resource_size_t pci_resource_alignment(struct pci_dev *dev,
void pci_enable_acs(struct pci_dev *dev);
/* PCI error reporting and recovery */
void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service);
void pcie_do_nonfatal_recovery(struct pci_dev *dev);
bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
#ifdef CONFIG_PCIEASPM
void pcie_aspm_init_link_state(struct pci_dev *pdev);
void pcie_aspm_exit_link_state(struct pci_dev *pdev);
@@ -407,4 +412,44 @@ static inline u64 pci_rebar_size_to_bytes(int size)
return 1ULL << (size + 20);
}
struct device_node;
#ifdef CONFIG_OF
int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
int of_get_pci_domain_nr(struct device_node *node);
int of_pci_get_max_link_speed(struct device_node *node);
#else
static inline int
of_pci_parse_bus_range(struct device_node *node, struct resource *res)
{
return -EINVAL;
}
static inline int
of_get_pci_domain_nr(struct device_node *node)
{
return -1;
}
static inline int
of_pci_get_max_link_speed(struct device_node *node)
{
return -EINVAL;
}
#endif /* CONFIG_OF */
#if defined(CONFIG_OF_ADDRESS)
int devm_of_pci_get_host_bridge_resources(struct device *dev,
unsigned char busno, unsigned char bus_max,
struct list_head *resources, resource_size_t *io_base);
#else
static inline int devm_of_pci_get_host_bridge_resources(struct device *dev,
unsigned char busno, unsigned char bus_max,
struct list_head *resources, resource_size_t *io_base)
{
return -EINVAL;
}
#endif
#endif /* DRIVERS_PCI_H */


@@ -2,7 +2,7 @@
#
# Makefile for PCI Express features and port driver
pcieportdrv-y := portdrv_core.o portdrv_pci.o
pcieportdrv-y := portdrv_core.o portdrv_pci.o err.o
obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o


@@ -94,7 +94,7 @@ static void set_downstream_devices_error_reporting(struct pci_dev *dev,
*/
static void aer_enable_rootport(struct aer_rpc *rpc)
{
struct pci_dev *pdev = rpc->rpd->port;
struct pci_dev *pdev = rpc->rpd;
int aer_pos;
u16 reg16;
u32 reg32;
@@ -136,7 +136,7 @@ static void aer_enable_rootport(struct aer_rpc *rpc)
*/
static void aer_disable_rootport(struct aer_rpc *rpc)
{
struct pci_dev *pdev = rpc->rpd->port;
struct pci_dev *pdev = rpc->rpd;
u32 reg32;
int pos;
@@ -232,7 +232,7 @@ static struct aer_rpc *aer_alloc_rpc(struct pcie_device *dev)
/* Initialize Root lock access, e_lock, to Root Error Status Reg */
spin_lock_init(&rpc->e_lock);
rpc->rpd = dev;
rpc->rpd = dev->port;
INIT_WORK(&rpc->dpc_handler, aer_isr);
mutex_init(&rpc->rpc_mutex);
@@ -353,10 +353,7 @@ static void aer_error_resume(struct pci_dev *dev)
pos = dev->aer_cap;
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &mask);
if (dev->error_state == pci_channel_io_normal)
status &= ~mask; /* Clear corresponding nonfatal bits */
else
status &= mask; /* Clear corresponding fatal bits */
status &= ~mask; /* Clear corresponding nonfatal bits */
pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
}


@@ -58,7 +58,7 @@ struct aer_err_source {
};
struct aer_rpc {
struct pcie_device *rpd; /* Root Port device */
struct pci_dev *rpd; /* Root Port device */
struct work_struct dpc_handler;
struct aer_err_source e_sources[AER_ERROR_SOURCES_MAX];
struct aer_err_info e_info;
@@ -76,36 +76,6 @@ struct aer_rpc {
*/
};
struct aer_broadcast_data {
enum pci_channel_state state;
enum pci_ers_result result;
};
static inline pci_ers_result_t merge_result(enum pci_ers_result orig,
enum pci_ers_result new)
{
if (new == PCI_ERS_RESULT_NO_AER_DRIVER)
return PCI_ERS_RESULT_NO_AER_DRIVER;
if (new == PCI_ERS_RESULT_NONE)
return orig;
switch (orig) {
case PCI_ERS_RESULT_CAN_RECOVER:
case PCI_ERS_RESULT_RECOVERED:
orig = new;
break;
case PCI_ERS_RESULT_DISCONNECT:
if (new == PCI_ERS_RESULT_NEED_RESET)
orig = PCI_ERS_RESULT_NEED_RESET;
break;
default:
break;
}
return orig;
}
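
For reference, a worked example of the precedence the helper above encodes (derived directly from its cases):

/* merge_result(PCI_ERS_RESULT_CAN_RECOVER, PCI_ERS_RESULT_NEED_RESET)
 *	-> NEED_RESET  (any vote replaces a still-recoverable tally)
 * merge_result(PCI_ERS_RESULT_DISCONNECT, PCI_ERS_RESULT_NEED_RESET)
 *	-> NEED_RESET  (a reset request overrides a disconnect)
 * merge_result(orig, PCI_ERS_RESULT_NONE)
 *	-> orig        (abstention leaves the tally unchanged)
 * merge_result(orig, PCI_ERS_RESULT_NO_AER_DRIVER)
 *	-> NO_AER_DRIVER  (one driverless device poisons the subtree)
 */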
extern struct bus_type pcie_port_bus_type;
void aer_isr(struct work_struct *work);
void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);


@@ -20,6 +20,7 @@
#include <linux/slab.h>
#include <linux/kfifo.h>
#include "aerdrv.h"
#include "../../pci.h"
#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
@@ -227,329 +228,14 @@ static bool find_source_device(struct pci_dev *parent,
return true;
}
static int report_error_detected(struct pci_dev *dev, void *data)
{
pci_ers_result_t vote;
const struct pci_error_handlers *err_handler;
struct aer_broadcast_data *result_data;
result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
dev->error_state = result_data->state;
if (!dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->error_detected) {
if (result_data->state == pci_channel_io_frozen &&
dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
/*
* In case of fatal recovery, if one of the down-
* stream devices has no driver, we might be
* unable to recover because a later insmod
* of a driver for this device is unaware of
* its hw state.
*/
pci_printk(KERN_DEBUG, dev, "device has %s\n",
dev->driver ?
"no AER-aware driver" : "no driver");
}
/*
* If there's any device in the subtree that does not
* have an error_detected callback, returning
* PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of
* the subsequent mmio_enabled/slot_reset/resume
* callbacks of "any" device in the subtree. All the
* devices in the subtree are left in the error state
* without recovery.
*/
if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
vote = PCI_ERS_RESULT_NO_AER_DRIVER;
else
vote = PCI_ERS_RESULT_NONE;
} else {
err_handler = dev->driver->err_handler;
vote = err_handler->error_detected(dev, result_data->state);
pci_uevent_ers(dev, PCI_ERS_RESULT_NONE);
}
result_data->result = merge_result(result_data->result, vote);
device_unlock(&dev->dev);
return 0;
}
static int report_mmio_enabled(struct pci_dev *dev, void *data)
{
pci_ers_result_t vote;
const struct pci_error_handlers *err_handler;
struct aer_broadcast_data *result_data;
result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
if (!dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->mmio_enabled)
goto out;
err_handler = dev->driver->err_handler;
vote = err_handler->mmio_enabled(dev);
result_data->result = merge_result(result_data->result, vote);
out:
device_unlock(&dev->dev);
return 0;
}
static int report_slot_reset(struct pci_dev *dev, void *data)
{
pci_ers_result_t vote;
const struct pci_error_handlers *err_handler;
struct aer_broadcast_data *result_data;
result_data = (struct aer_broadcast_data *) data;
device_lock(&dev->dev);
if (!dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->slot_reset)
goto out;
err_handler = dev->driver->err_handler;
vote = err_handler->slot_reset(dev);
result_data->result = merge_result(result_data->result, vote);
out:
device_unlock(&dev->dev);
return 0;
}
static int report_resume(struct pci_dev *dev, void *data)
{
const struct pci_error_handlers *err_handler;
device_lock(&dev->dev);
dev->error_state = pci_channel_io_normal;
if (!dev->driver ||
!dev->driver->err_handler ||
!dev->driver->err_handler->resume)
goto out;
err_handler = dev->driver->err_handler;
err_handler->resume(dev);
pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
out:
device_unlock(&dev->dev);
return 0;
}
/**
* broadcast_error_message - handle message broadcast to downstream drivers
* @dev: pointer to from where in a hierarchy message is broadcasted down
* @state: error state
* @error_mesg: message to print
* @cb: callback to be broadcasted
*
* Invoked during error recovery process. Once being invoked, the content
* of error severity will be broadcasted to all downstream drivers in a
* hierarchy in question.
*/
static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
enum pci_channel_state state,
char *error_mesg,
int (*cb)(struct pci_dev *, void *))
{
struct aer_broadcast_data result_data;
pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg);
result_data.state = state;
if (cb == report_error_detected)
result_data.result = PCI_ERS_RESULT_CAN_RECOVER;
else
result_data.result = PCI_ERS_RESULT_RECOVERED;
if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
/*
* If the error is reported by a bridge, we think this error
* is related to the downstream link of the bridge, so we
* do error recovery on all subordinates of the bridge instead
* of the bridge and clear the error status of the bridge.
*/
if (cb == report_error_detected)
dev->error_state = state;
pci_walk_bus(dev->subordinate, cb, &result_data);
if (cb == report_resume) {
pci_cleanup_aer_uncorrect_error_status(dev);
dev->error_state = pci_channel_io_normal;
}
} else {
/*
* If the error is reported by an end point, we think this
* error is related to the upstream link of the end point.
*/
if (state == pci_channel_io_normal)
/*
* the error is non fatal so the bus is ok, just invoke
* the callback for the function that logged the error.
*/
cb(dev, &result_data);
else
pci_walk_bus(dev->bus, cb, &result_data);
}
return result_data.result;
}
/**
* default_reset_link - default reset function
* @dev: pointer to pci_dev data structure
*
* Invoked when performing link reset on a Downstream Port or a
* Root Port with no aer driver.
*/
static pci_ers_result_t default_reset_link(struct pci_dev *dev)
{
pci_reset_bridge_secondary_bus(dev);
pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
return PCI_ERS_RESULT_RECOVERED;
}
static int find_aer_service_iter(struct device *device, void *data)
{
struct pcie_port_service_driver *service_driver, **drv;
drv = (struct pcie_port_service_driver **) data;
if (device->bus == &pcie_port_bus_type && device->driver) {
service_driver = to_service_driver(device->driver);
if (service_driver->service == PCIE_PORT_SERVICE_AER) {
*drv = service_driver;
return 1;
}
}
return 0;
}
static struct pcie_port_service_driver *find_aer_service(struct pci_dev *dev)
{
struct pcie_port_service_driver *drv = NULL;
device_for_each_child(&dev->dev, &drv, find_aer_service_iter);
return drv;
}
static pci_ers_result_t reset_link(struct pci_dev *dev)
{
struct pci_dev *udev;
pci_ers_result_t status;
struct pcie_port_service_driver *driver;
if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
/* Reset this port for all subordinates */
udev = dev;
} else {
/* Reset the upstream component (likely downstream port) */
udev = dev->bus->self;
}
/* Use the aer driver of the component firstly */
driver = find_aer_service(udev);
if (driver && driver->reset_link) {
status = driver->reset_link(udev);
} else if (udev->has_secondary_link) {
status = default_reset_link(udev);
} else {
pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
pci_name(udev));
return PCI_ERS_RESULT_DISCONNECT;
}
if (status != PCI_ERS_RESULT_RECOVERED) {
pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
pci_name(udev));
return PCI_ERS_RESULT_DISCONNECT;
}
return status;
}
/**
* do_recovery - handle nonfatal/fatal error recovery process
* @dev: pointer to a pci_dev data structure of agent detecting an error
* @severity: error severity type
*
* Invoked when an error is nonfatal/fatal. Once being invoked, broadcast
* error detected message to all downstream drivers within a hierarchy in
* question and return the returned code.
*/
static void do_recovery(struct pci_dev *dev, int severity)
{
pci_ers_result_t status, result = PCI_ERS_RESULT_RECOVERED;
enum pci_channel_state state;
if (severity == AER_FATAL)
state = pci_channel_io_frozen;
else
state = pci_channel_io_normal;
status = broadcast_error_message(dev,
state,
"error_detected",
report_error_detected);
if (severity == AER_FATAL) {
result = reset_link(dev);
if (result != PCI_ERS_RESULT_RECOVERED)
goto failed;
}
if (status == PCI_ERS_RESULT_CAN_RECOVER)
status = broadcast_error_message(dev,
state,
"mmio_enabled",
report_mmio_enabled);
if (status == PCI_ERS_RESULT_NEED_RESET) {
/*
* TODO: Should call platform-specific
* functions to reset slot before calling
* drivers' slot_reset callbacks?
*/
status = broadcast_error_message(dev,
state,
"slot_reset",
report_slot_reset);
}
if (status != PCI_ERS_RESULT_RECOVERED)
goto failed;
broadcast_error_message(dev,
state,
"resume",
report_resume);
pci_info(dev, "AER: Device recovery successful\n");
return;
failed:
pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
/* TODO: Should kernel panic here? */
pci_info(dev, "AER: Device recovery failed\n");
}
/**
* handle_error_source - handle logging error into an event log
* @aerdev: pointer to pcie_device data structure of the root port
* @dev: pointer to pci_dev data structure of error source device
* @info: comprehensive error information
*
* Invoked when an error being detected by Root Port.
*/
static void handle_error_source(struct pcie_device *aerdev,
struct pci_dev *dev,
struct aer_err_info *info)
static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info)
{
int pos;
@@ -562,12 +248,13 @@ static void handle_error_source(struct pcie_device *aerdev,
if (pos)
pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS,
info->status);
} else
do_recovery(dev, info->severity);
} else if (info->severity == AER_NONFATAL)
pcie_do_nonfatal_recovery(dev);
else if (info->severity == AER_FATAL)
pcie_do_fatal_recovery(dev, PCIE_PORT_SERVICE_AER);
}
#ifdef CONFIG_ACPI_APEI_PCIEAER
static void aer_recover_work_func(struct work_struct *work);
#define AER_RECOVER_RING_ORDER 4
#define AER_RECOVER_RING_SIZE (1 << AER_RECOVER_RING_ORDER)
@@ -582,6 +269,30 @@ struct aer_recover_entry {
static DEFINE_KFIFO(aer_recover_ring, struct aer_recover_entry,
AER_RECOVER_RING_SIZE);
static void aer_recover_work_func(struct work_struct *work)
{
struct aer_recover_entry entry;
struct pci_dev *pdev;
while (kfifo_get(&aer_recover_ring, &entry)) {
pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus,
entry.devfn);
if (!pdev) {
pr_err("AER recover: Can not find pci_dev for %04x:%02x:%02x:%x\n",
entry.domain, entry.bus,
PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn));
continue;
}
cper_print_aer(pdev, entry.severity, entry.regs);
if (entry.severity == AER_NONFATAL)
pcie_do_nonfatal_recovery(pdev);
else if (entry.severity == AER_FATAL)
pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_AER);
pci_dev_put(pdev);
}
}
/*
* Mutual exclusion for writers of aer_recover_ring; the reader side
* needs no lock, because there is only one reader
@@ -611,27 +322,6 @@ void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn,
spin_unlock_irqrestore(&aer_recover_ring_lock, flags);
}
EXPORT_SYMBOL_GPL(aer_recover_queue);
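
For context, the producer side of this ring is the firmware-first (CPER) path, which reports errors through APEI rather than a native AER interrupt. A hedged sketch of such a caller, with illustrative values:

#include <linux/aer.h>
#include <linux/pci.h>

/* Hypothetical firmware-first caller: queue a CPER-reported error so
 * aer_recover_work_func() can run recovery in process context. */
static void demo_queue_cper_error(struct aer_capability_regs *regs)
{
	/* 0000:03:00.0, nonfatal severity */
	aer_recover_queue(0, 3, PCI_DEVFN(0, 0), AER_NONFATAL, regs);
}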
static void aer_recover_work_func(struct work_struct *work)
{
struct aer_recover_entry entry;
struct pci_dev *pdev;
while (kfifo_get(&aer_recover_ring, &entry)) {
pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus,
entry.devfn);
if (!pdev) {
pr_err("AER recover: Can not find pci_dev for %04x:%02x:%02x:%x\n",
entry.domain, entry.bus,
PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn));
continue;
}
cper_print_aer(pdev, entry.severity, entry.regs);
if (entry.severity != AER_CORRECTABLE)
do_recovery(pdev, entry.severity);
pci_dev_put(pdev);
}
}
#endif
/**
@@ -695,8 +385,7 @@ static int get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
return 1;
}
static inline void aer_process_err_devices(struct pcie_device *p_device,
struct aer_err_info *e_info)
static inline void aer_process_err_devices(struct aer_err_info *e_info)
{
int i;
@@ -707,19 +396,19 @@
}
for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) {
if (get_device_error_info(e_info->dev[i], e_info))
handle_error_source(p_device, e_info->dev[i], e_info);
handle_error_source(e_info->dev[i], e_info);
}
}
/**
* aer_isr_one_error - consume an error detected by root port
* @p_device: pointer to error root port service device
* @rpc: pointer to the root port which holds an error
* @e_src: pointer to an error source
*/
static void aer_isr_one_error(struct pcie_device *p_device,
static void aer_isr_one_error(struct aer_rpc *rpc,
struct aer_err_source *e_src)
{
struct aer_rpc *rpc = get_service_data(p_device);
struct pci_dev *pdev = rpc->rpd;
struct aer_err_info *e_info = &rpc->e_info;
/*
@@ -734,11 +423,10 @@ static void aer_isr_one_error(struct pcie_device *p_device,
e_info->multi_error_valid = 1;
else
e_info->multi_error_valid = 0;
aer_print_port_info(pdev, e_info);
aer_print_port_info(p_device->port, e_info);
if (find_source_device(p_device->port, e_info))
aer_process_err_devices(p_device, e_info);
if (find_source_device(pdev, e_info))
aer_process_err_devices(e_info);
}
if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) {
@@ -754,10 +442,10 @@ static void aer_isr_one_error(struct pcie_device *p_device,
else
e_info->multi_error_valid = 0;
aer_print_port_info(p_device->port, e_info);
aer_print_port_info(pdev, e_info);
if (find_source_device(p_device->port, e_info))
aer_process_err_devices(p_device, e_info);
if (find_source_device(pdev, e_info))
aer_process_err_devices(e_info);
}
}
@@ -799,11 +487,10 @@ static int get_e_source(struct aer_rpc *rpc, struct aer_err_source *e_src)
void aer_isr(struct work_struct *work)
{
struct aer_rpc *rpc = container_of(work, struct aer_rpc, dpc_handler);
struct pcie_device *p_device = rpc->rpd;
struct aer_err_source uninitialized_var(e_src);
mutex_lock(&rpc->rpc_mutex);
while (get_e_source(rpc, &e_src))
aer_isr_one_error(p_device, &e_src);
aer_isr_one_error(rpc, &e_src);
mutex_unlock(&rpc->rpc_mutex);
}


@@ -163,17 +163,17 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
int id = ((dev->bus->number << 8) | dev->devfn);
if (!info->status) {
pci_err(dev, "PCIe Bus Error: severity=%s, type=Unaccessible, id=%04x(Unregistered Agent ID)\n",
aer_error_severity_string[info->severity], id);
pci_err(dev, "PCIe Bus Error: severity=%s, type=Inaccessible, (Unregistered Agent ID)\n",
aer_error_severity_string[info->severity]);
goto out;
}
layer = AER_GET_LAYER_ERROR(info->severity, info->status);
agent = AER_GET_AGENT(info->severity, info->status);
pci_err(dev, "PCIe Bus Error: severity=%s, type=%s, id=%04x(%s)\n",
pci_err(dev, "PCIe Bus Error: severity=%s, type=%s, (%s)\n",
aer_error_severity_string[info->severity],
aer_error_layer[layer], id, aer_agent_string[agent]);
aer_error_layer[layer], aer_agent_string[agent]);
pci_err(dev, " device [%04x:%04x] error status/mask=%08x/%08x\n",
dev->vendor, dev->device,
@@ -186,17 +186,21 @@ void aer_print_error(struct pci_dev *dev, struct aer_err_info *info)
out:
if (info->id && info->error_dev_num > 1 && info->id == id)
pci_err(dev, " Error of this Agent(%04x) is reported first\n", id);
pci_err(dev, " Error of this Agent is reported first\n");
trace_aer_event(dev_name(&dev->dev), (info->status & ~info->mask),
info->severity);
info->severity, info->tlp_header_valid, &info->tlp);
}
void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
{
pci_info(dev, "AER: %s%s error received: id=%04x\n",
u8 bus = info->id >> 8;
u8 devfn = info->id & 0xff;
pci_info(dev, "AER: %s%s error received: %04x:%02x:%02x.%d\n",
info->multi_error_valid ? "Multiple " : "",
aer_error_severity_string[info->severity], info->id);
aer_error_severity_string[info->severity],
pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
}
#ifdef CONFIG_ACPI_APEI_PCIEAER
@@ -216,28 +220,30 @@ EXPORT_SYMBOL_GPL(cper_severity_to_aer);
void cper_print_aer(struct pci_dev *dev, int aer_severity,
struct aer_capability_regs *aer)
{
int layer, agent, status_strs_size, tlp_header_valid = 0;
int layer, agent, tlp_header_valid = 0;
u32 status, mask;
const char **status_strs;
struct aer_err_info info;
if (aer_severity == AER_CORRECTABLE) {
status = aer->cor_status;
mask = aer->cor_mask;
status_strs = aer_correctable_error_string;
status_strs_size = ARRAY_SIZE(aer_correctable_error_string);
} else {
status = aer->uncor_status;
mask = aer->uncor_mask;
status_strs = aer_uncorrectable_error_string;
status_strs_size = ARRAY_SIZE(aer_uncorrectable_error_string);
tlp_header_valid = status & AER_LOG_TLP_MASKS;
}
layer = AER_GET_LAYER_ERROR(aer_severity, status);
agent = AER_GET_AGENT(aer_severity, status);
memset(&info, 0, sizeof(info));
info.severity = aer_severity;
info.status = status;
info.mask = mask;
info.first_error = PCI_ERR_CAP_FEP(aer->cap_control);
pci_err(dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", status, mask);
cper_print_bits("", status, status_strs, status_strs_size);
__aer_print_error(dev, &info);
pci_err(dev, "aer_layer=%s, aer_agent=%s\n",
aer_error_layer[layer], aer_agent_string[agent]);
@@ -249,6 +255,6 @@ void cper_print_aer(struct pci_dev *dev, int aer_severity,
__print_tlp_header(dev, &aer->header_log);
trace_aer_event(dev_name(&dev->dev), (status & ~mask),
aer_severity);
aer_severity, tlp_header_valid, &aer->header_log);
}
#endif

Some files were not shown because too many files have changed in this diff.