
Char / Misc driver patches for 5.3-rc1

Here is the "large" pull request for char and misc and other assorted
 smaller driver subsystems for 5.3-rc1.
 
 It seems that this tree is becoming the funnel point of lots of smaller
 driver subsystems, which is fine for me, but that's why it is getting
 larger over time and does not just contain stuff under drivers/char/ and
 drivers/misc.
 
 Lots of small updates all over the place here from different driver
 subsystems:
   - habana driver updates
   - coresight driver updates
   - documentation file movements and updates
   - Android binder fixes and updates
   - extcon driver updates
   - google firmware driver updates
   - fsi driver updates
   - smaller misc and char driver updates
   - soundwire driver updates
   - nvmem driver updates
   - w1 driver fixes
 
 All of these have been in linux-next for a while with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCXSXmoQ8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ylV9wCgyJGbpPch8v/ecrZGFHYS4sIMexIAoMco3zf6
 wnqFmXiz1O0tyo1sgV9R
 =7sqO
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc driver updates from Greg KH:
 "Here is the "large" pull request for char and misc and other assorted
  smaller driver subsystems for 5.3-rc1.

  It seems that this tree is becoming the funnel point of lots of
  smaller driver subsystems, which is fine for me, but that's why it is
  getting larger over time and does not just contain stuff under
  drivers/char/ and drivers/misc.

  Lots of small updates all over the place here from different driver
  subsystems:
   - habana driver updates
   - coresight driver updates
   - documentation file movements and updates
   - Android binder fixes and updates
   - extcon driver updates
   - google firmware driver updates
   - fsi driver updates
   - smaller misc and char driver updates
   - soundwire driver updates
   - nvmem driver updates
   - w1 driver fixes

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (188 commits)
  coresight: Do not default to CPU0 for missing CPU phandle
  dt-bindings: coresight: Change CPU phandle to required property
  ocxl: Allow contexts to be attached with a NULL mm
  fsi: sbefifo: Don't fail operations when in SBE IPL state
  coresight: tmc: Smatch: Fix potential NULL pointer dereference
  coresight: etm3x: Smatch: Fix potential NULL pointer dereference
  coresight: Potential uninitialized variable in probe()
  coresight: etb10: Do not call smp_processor_id from preemptible
  coresight: tmc-etf: Do not call smp_processor_id from preemptible
  coresight: tmc-etr: alloc_perf_buf: Do not call smp_processor_id from preemptible
  coresight: tmc-etr: Do not call smp_processor_id() from preemptible
  docs: misc-devices: convert files without extension to ReST
  fpga: dfl: fme: align PR buffer size per PR datawidth
  fpga: dfl: fme: remove copy_to_user() in ioctl for PR
  fpga: dfl-fme-mgr: fix FME_PR_INTFC_ID register address.
  intel_th: msu: Start read iterator from a non-empty window
  intel_th: msu: Split sgt array and pointer in multiwindow mode
  intel_th: msu: Support multipage blocks
  intel_th: pci: Add Ice Lake NNPI support
  intel_th: msu: Fix single mode with disabled IOMMU
  ...
Linus Torvalds 2019-07-11 15:34:05 -07:00
commit 97ff4ca46d
164 changed files with 6063 additions and 2988 deletions


@@ -3,7 +3,10 @@ Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Sets the device address to be used for read or write through
PCI bar. The acceptable value is a string that starts with "0x"
PCI bar, or the device VA of a host mapped memory to be read or
written directly from the host. The latter option is allowed
only when the IOMMU is disabled.
The acceptable value is a string that starts with "0x"
What: /sys/kernel/debug/habanalabs/hl<n>/command_buffers
Date: Jan 2019
@@ -33,10 +36,12 @@ Contact: oded.gabbay@gmail.com
Description: Allows the root user to read or write directly through the
device's PCI bar. Writing to this file generates a write
transaction while reading from the file generates a read
transcation. This custom interface is needed (instead of using
transaction. This custom interface is needed (instead of using
the generic Linux user-space PCI mapping) because the DDR bar
is very small compared to the DDR memory and only the driver can
move the bar before and after the transaction
move the bar before and after the transaction.
If the IOMMU is disabled, it also allows the root user to read
or write from the host a device VA of a host mapped memory
What: /sys/kernel/debug/habanalabs/hl<n>/device
Date: Jan 2019
@@ -46,6 +51,13 @@ Description: Enables the root user to set the device to specific state.
Valid values are "disable", "enable", "suspend", "resume".
User can read this property to see the valid values
What: /sys/kernel/debug/habanalabs/hl<n>/engines
Date: Jul 2019
KernelVersion: 5.3
Contact: oded.gabbay@gmail.com
Description: Displays the status registers values of the device engines and
their derived idle status
What: /sys/kernel/debug/habanalabs/hl<n>/i2c_addr
Date: Jan 2019
KernelVersion: 5.1
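
For illustration, here is a minimal user-space sketch of the address/data
debugfs interface described above. The entry names "addr" and "data32", the
device index "hl0" and the device address used are assumptions (the
corresponding What: lines fall outside the quoted hunks); this is a hedged
sketch, not a definitive usage example. Root privileges are required.

.. code-block:: C

/* Hedged sketch: set a device address, then read one word back through
 * the debugfs interface. Paths and the address value are assumptions. */
#include <stdio.h>

int main(void)
{
	const char *addr = "/sys/kernel/debug/habanalabs/hl0/addr";
	const char *data = "/sys/kernel/debug/habanalabs/hl0/data32";
	char value[64];
	FILE *f;

	f = fopen(addr, "w");
	if (!f) {
		perror(addr);
		return 1;
	}
	fprintf(f, "0x20000000\n");	/* "0x"-prefixed string, as documented */
	fclose(f);

	f = fopen(data, "r");		/* reading generates a read transaction */
	if (!f) {
		perror(data);
		return 1;
	}
	if (fgets(value, sizeof(value), f))
		printf("read: %s", value);
	fclose(f);
	return 0;
}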


@@ -62,18 +62,20 @@ What: /sys/class/habanalabs/hl<n>/ic_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
Interconnect fabric. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device IC clock might be set to lower value then the
Description: Allows the user to set the maximum clock frequency, in Hz, of
the Interconnect fabric. Writes to this parameter affect the
device only when the power management profile is set to "manual"
mode. The device IC clock might be set to lower value than the
maximum. The user should read the ic_clk_curr to see the actual
frequency value of the IC
frequency value of the IC. This property is valid only for the
Goya ASIC family
What: /sys/class/habanalabs/hl<n>/ic_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the Interconnect fabric
Description: Displays the current clock frequency, in Hz, of the Interconnect
fabric. This property is valid only for the Goya ASIC family
What: /sys/class/habanalabs/hl<n>/infineon_ver
Date: Jan 2019
@@ -92,18 +94,20 @@ What: /sys/class/habanalabs/hl<n>/mme_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
MME compute engine. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device MME clock might be set to lower value then the
Description: Allows the user to set the maximum clock frequency, in Hz, of
the MME compute engine. Writes to this parameter affect the
device only when the power management profile is set to "manual"
mode. The device MME clock might be set to lower value than the
maximum. The user should read the mme_clk_curr to see the actual
frequency value of the MME
frequency value of the MME. This property is valid only for the
Goya ASIC family
What: /sys/class/habanalabs/hl<n>/mme_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the MME compute engine
Description: Displays the current clock frequency, in Hz, of the MME compute
engine. This property is valid only for the Goya ASIC family
What: /sys/class/habanalabs/hl<n>/pci_addr
Date: Jan 2019
@@ -163,18 +167,20 @@ What: /sys/class/habanalabs/hl<n>/tpc_clk
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency of the
TPC compute engines. Writes to this parameter affect the device
only when the power management profile is set to "manual" mode.
The device TPC clock might be set to lower value then the
Description: Allows the user to set the maximum clock frequency, in Hz, of
the TPC compute engines. Writes to this parameter affect the
device only when the power management profile is set to "manual"
mode. The device TPC clock might be set to lower value than the
maximum. The user should read the tpc_clk_curr to see the actual
frequency value of the TPC
frequency value of the TPC. This property is valid only for
Goya ASIC family
What: /sys/class/habanalabs/hl<n>/tpc_clk_curr
Date: Jan 2019
KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays the current clock frequency of the TPC compute engines
Description: Displays the current clock frequency, in Hz, of the TPC compute
engines. This property is valid only for the Goya ASIC family
What: /sys/class/habanalabs/hl<n>/uboot_ver
Date: Jan 2019
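
As an illustration of the clock entries above, here is a minimal sketch that
caps the TPC frequency and reads back the actual value. The device index
"hl0", the 700 MHz target and the assumption that the power management
profile is already set to "manual" are illustrative only, not part of the
ABI text.

.. code-block:: C

/* Hedged sketch: cap the TPC clock and read the current frequency. */
#include <stdio.h>

int main(void)
{
	const char *max_path  = "/sys/class/habanalabs/hl0/tpc_clk";
	const char *curr_path = "/sys/class/habanalabs/hl0/tpc_clk_curr";
	unsigned long long hz;
	FILE *f;

	f = fopen(max_path, "w");
	if (!f) {
		perror(max_path);
		return 1;
	}
	fprintf(f, "%llu\n", 700000000ULL);	/* value is in Hz */
	fclose(f);

	f = fopen(curr_path, "r");
	if (!f) {
		perror(curr_path);
		return 1;
	}
	if (fscanf(f, "%llu", &hz) == 1)
		printf("TPC clock currently at %llu Hz\n", hz);
	fclose(f);
	return 0;
}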


@@ -26,8 +26,8 @@ Required properties:
processor core is clocked by the internal CPU clock, so it
is enabled with CPU clock by default.
- cpu : the CPU phandle the debug module is affined to. When omitted
the module is considered to belong to CPU0.
- cpu : the CPU phandle the debug module is affined to. Do not assume it
to default to CPU0 if omitted.
Optional properties:


@@ -59,6 +59,11 @@ its hardware characteristcs.
* port or ports: see "Graph bindings for Coresight" below.
* Additional required property for Embedded Trace Macrocell (version 3.x and
version 4.x):
* cpu: the cpu phandle this ETM/PTM is affined to. Do not
assume it to default to CPU0 if omitted.
* Additional required properties for System Trace Macrocells (STM):
* reg: along with the physical base address and length of the register
set as described above, another entry is required to describe the
@@ -87,9 +92,6 @@ its hardware characteristcs.
* arm,cp14: must be present if the system accesses ETM/PTM management
registers via co-processor 14.
* cpu: the cpu phandle this ETM/PTM is affined to. When omitted the
source is considered to belong to CPU0.
* Optional property for TMC:
* arm,buffer-size: size of contiguous buffer space for TMC ETR


@@ -133,6 +133,18 @@ RTC bindings based on SCU Message Protocol
Required properties:
- compatible: should be "fsl,imx8qxp-sc-rtc";
OCOTP bindings based on SCU Message Protocol
------------------------------------------------------------
Required properties:
- compatible: Should be "fsl,imx8qxp-scu-ocotp"
- #address-cells: Must be 1. Contains byte index
- #size-cells: Must be 1. Contains byte length
Optional Child nodes:
- Data cells of ocotp:
Detailed bindings are described in bindings/nvmem/nvmem.txt
Example (imx8qxp):
-------------
aliases {
@@ -177,6 +189,16 @@ firmware {
...
};
ocotp: imx8qx-ocotp {
compatible = "fsl,imx8qxp-scu-ocotp";
#address-cells = <1>;
#size-cells = <1>;
fec_mac0: mac@2c4 {
reg = <0x2c4 8>;
};
};
pd: imx8qx-pd {
compatible = "fsl,imx8qxp-scu-pd", "fsl,scu-pd";
#power-domain-cells = <1>;


@@ -0,0 +1,19 @@
FAIRCHILD SEMICONDUCTOR FSA9480 MICROUSB SWITCH
The FSA9480 is a USB port accessory detector and switch. The FSA9480 is fully
controlled using I2C and enables USB data, stereo and mono audio, video,
microphone, and UART data to use a common connector port.
Required properties:
- compatible : Must be "fcs,fsa9480"
- reg : Specifies i2c slave address. Must be 0x25.
- interrupts : Should contain one entry specifying interrupt signal of
interrupt parent to which interrupt pin of the chip is connected.
Example:
musb@25 {
compatible = "fcs,fsa9480";
reg = <0x25>;
interrupt-parent = <&gph2>;
interrupts = <7 0>;
};


@@ -5,6 +5,7 @@ controller in Ingenic JZ4780
Required properties:
- compatible: Should be set to one of:
"ingenic,jz4740-nemc" (JZ4740)
"ingenic,jz4780-nemc" (JZ4780)
- reg: Should specify the NEMC controller registers location and length.
- clocks: Clock for the NEMC controller.


@@ -0,0 +1,58 @@
* Xilinx SDFEC(16nm) IP *
The Soft Decision Forward Error Correction (SDFEC) Engine is a Hard IP block
which provides high-throughput LDPC and Turbo Code implementations.
The LDPC decode & encode functionality is capable of covering a range of
customer specified Quasi-cyclic (QC) codes. The Turbo decode functionality
principally covers codes used by LTE. The FEC Engine offers significant
power and area savings versus implementations done in the FPGA fabric.
Required properties:
- compatible: Must be "xlnx,sd-fec-1.1"
- clock-names : List of input clock names from the following:
- "core_clk", Main processing clock for processing core (required)
- "s_axi_aclk", AXI4-Lite memory-mapped slave interface clock (required)
- "s_axis_din_aclk", DIN AXI4-Stream Slave interface clock (optional)
- "s_axis_din_words-aclk", DIN_WORDS AXI4-Stream Slave interface clock (optional)
- "s_axis_ctrl_aclk", Control input AXI4-Stream Slave interface clock (optional)
- "m_axis_dout_aclk", DOUT AXI4-Stream Master interface clock (optional)
- "m_axis_dout_words_aclk", DOUT_WORDS AXI4-Stream Master interface clock (optional)
- "m_axis_status_aclk", Status output AXI4-Stream Master interface clock (optional)
- clocks : Clock phandles (see clock_bindings.txt for details).
- reg: Should contain Xilinx SDFEC 16nm Hardened IP block registers
location and length.
- xlnx,sdfec-code : Should contain "ldpc" or "turbo" to describe the codes
being used.
- xlnx,sdfec-din-words : A value 0 indicates that the DIN_WORDS interface is
driven with a fixed value and is not present on the device, a value of 1
configures the DIN_WORDS to be block based, while a value of 2 configures the
DIN_WORDS input to be supplied for each AXI transaction.
- xlnx,sdfec-din-width : Configures the DIN AXI stream where a value of 1
configures a width of "1x128b", 2 a width of "2x128b" and 4 configures a width
of "4x128b".
- xlnx,sdfec-dout-words : A value 0 indicates that the DOUT_WORDS interface is
driven with a fixed value and is not present on the device, a value of 1
configures the DOUT_WORDS to be block based, while a value of 2 configures the
DOUT_WORDS input to be supplied for each AXI transaction.
- xlnx,sdfec-dout-width : Configures the DOUT AXI stream where a value of 1
configures a width of "1x128b", 2 a width of "2x128b" and 4 configures a width
of "4x128b".
Optional properties:
- interrupts: should contain SDFEC interrupt number
Example
---------------------------------------
sd_fec_0: sd-fec@a0040000 {
compatible = "xlnx,sd-fec-1.1";
clock-names = "core_clk","s_axi_aclk","s_axis_ctrl_aclk","s_axis_din_aclk","m_axis_status_aclk","m_axis_dout_aclk";
clocks = <&misc_clk_2>,<&misc_clk_0>,<&misc_clk_1>,<&misc_clk_1>,<&misc_clk_1>, <&misc_clk_1>;
reg = <0x0 0xa0040000 0x0 0x40000>;
interrupt-parent = <&axi_intc>;
interrupts = <1 0>;
xlnx,sdfec-code = "ldpc";
xlnx,sdfec-din-words = <0>;
xlnx,sdfec-din-width = <2>;
xlnx,sdfec-dout-words = <0>;
xlnx,sdfec-dout-width = <1>;
};


@@ -1,60 +0,0 @@
MMIO register bitfield-based multiplexer controller bindings
Define register bitfields to be used to control multiplexers. The parent
device tree node must be a syscon node to provide register access.
Required properties:
- compatible : "mmio-mux"
- #mux-control-cells : <1>
- mux-reg-masks : an array of register offset and pre-shifted bitfield mask
pairs, each describing a single mux control.
* Standard mux-controller bindings as decribed in mux-controller.txt
Optional properties:
- idle-states : if present, the state the muxes will have when idle. The
special state MUX_IDLE_AS_IS is the default.
The multiplexer state of each multiplexer is defined as the value of the
bitfield described by the corresponding register offset and bitfield mask pair
in the mux-reg-masks array, accessed through the parent syscon.
Example:
syscon {
compatible = "syscon";
mux: mux-controller {
compatible = "mmio-mux";
#mux-control-cells = <1>;
mux-reg-masks = <0x3 0x30>, /* 0: reg 0x3, bits 5:4 */
<0x3 0x40>, /* 1: reg 0x3, bit 6 */
idle-states = <MUX_IDLE_AS_IS>, <0>;
};
};
video-mux {
compatible = "video-mux";
mux-controls = <&mux 0>;
ports {
/* inputs 0..3 */
port@0 {
reg = <0>;
};
port@1 {
reg = <1>;
};
port@2 {
reg = <2>;
};
port@3 {
reg = <3>;
};
/* output */
port@4 {
reg = <4>;
};
};
};


@@ -0,0 +1,129 @@
Generic register bitfield-based multiplexer controller bindings
Define register bitfields to be used to control multiplexers. The parent
device tree node must be a device node to provide register r/w access.
Required properties:
- compatible : should be one of
"reg-mux" : if parent device of mux controller is not syscon device
"mmio-mux" : if parent device of mux controller is syscon device
- #mux-control-cells : <1>
- mux-reg-masks : an array of register offset and pre-shifted bitfield mask
pairs, each describing a single mux control.
* Standard mux-controller bindings as decribed in mux-controller.txt
Optional properties:
- idle-states : if present, the state the muxes will have when idle. The
special state MUX_IDLE_AS_IS is the default.
The multiplexer state of each multiplexer is defined as the value of the
bitfield described by the corresponding register offset and bitfield mask
pair in the mux-reg-masks array.
Example 1:
The parent device of mux controller is not a syscon device.
&i2c0 {
fpga@66 { // fpga connected to i2c
compatible = "fsl,lx2160aqds-fpga", "fsl,fpga-qixis-i2c",
"simple-mfd";
reg = <0x66>;
mux: mux-controller {
compatible = "reg-mux";
#mux-control-cells = <1>;
mux-reg-masks = <0x54 0xf8>, /* 0: reg 0x54, bits 7:3 */
<0x54 0x07>; /* 1: reg 0x54, bits 2:0 */
};
};
};
mdio-mux-1 {
compatible = "mdio-mux-multiplexer";
mux-controls = <&mux 0>;
mdio-parent-bus = <&emdio1>;
#address-cells = <1>;
#size-cells = <0>;
mdio@0 {
reg = <0x0>;
#address-cells = <1>;
#size-cells = <0>;
};
mdio@8 {
reg = <0x8>;
#address-cells = <1>;
#size-cells = <0>;
};
..
..
};
mdio-mux-2 {
compatible = "mdio-mux-multiplexer";
mux-controls = <&mux 1>;
mdio-parent-bus = <&emdio2>;
#address-cells = <1>;
#size-cells = <0>;
mdio@0 {
reg = <0x0>;
#address-cells = <1>;
#size-cells = <0>;
};
mdio@1 {
reg = <0x1>;
#address-cells = <1>;
#size-cells = <0>;
};
..
..
};
Example 2:
The parent device of mux controller is syscon device.
syscon {
compatible = "syscon";
mux: mux-controller {
compatible = "mmio-mux";
#mux-control-cells = <1>;
mux-reg-masks = <0x3 0x30>, /* 0: reg 0x3, bits 5:4 */
<0x3 0x40>, /* 1: reg 0x3, bit 6 */
idle-states = <MUX_IDLE_AS_IS>, <0>;
};
};
video-mux {
compatible = "video-mux";
mux-controls = <&mux 0>;
#address-cells = <1>;
#size-cells = <0>;
ports {
/* inputs 0..3 */
port@0 {
reg = <0>;
};
port@1 {
reg = <1>;
};
port@2 {
reg = <2>;
};
port@3 {
reg = <3>;
};
/* output */
port@4 {
reg = <4>;
};
};
};


@@ -0,0 +1,51 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/nvmem/allwinner,sun4i-a10-sid.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Allwinner A10 Security ID Device Tree Bindings
maintainers:
- Chen-Yu Tsai <wens@csie.org>
- Maxime Ripard <maxime.ripard@bootlin.com>
allOf:
- $ref: "nvmem.yaml#"
properties:
compatible:
enum:
- allwinner,sun4i-a10-sid
- allwinner,sun7i-a20-sid
- allwinner,sun8i-a83t-sid
- allwinner,sun8i-h3-sid
- allwinner,sun50i-a64-sid
- allwinner,sun50i-h5-sid
- allwinner,sun50i-h6-sid
reg:
maxItems: 1
required:
- compatible
- reg
# FIXME: We should set it, but it would report all the generic
# properties as additional properties.
# additionalProperties: false
examples:
- |
sid@1c23800 {
compatible = "allwinner,sun4i-a10-sid";
reg = <0x01c23800 0x10>;
};
- |
sid@1c23800 {
compatible = "allwinner,sun7i-a20-sid";
reg = <0x01c23800 0x200>;
};
...


@@ -1,29 +0,0 @@
Allwinner sunxi-sid
Required properties:
- compatible: Should be one of the following:
"allwinner,sun4i-a10-sid"
"allwinner,sun7i-a20-sid"
"allwinner,sun8i-a83t-sid"
"allwinner,sun8i-h3-sid"
"allwinner,sun50i-a64-sid"
"allwinner,sun50i-h5-sid"
"allwinner,sun50i-h6-sid"
- reg: Should contain registers location and length
= Data cells =
Are child nodes of sunxi-sid, bindings of which as described in
bindings/nvmem/nvmem.txt
Example for sun4i:
sid@1c23800 {
compatible = "allwinner,sun4i-a10-sid";
reg = <0x01c23800 0x10>
};
Example for sun7i:
sid@1c23800 {
compatible = "allwinner,sun7i-a20-sid";
reg = <0x01c23800 0x200>
};


@@ -15,6 +15,7 @@ Required properties:
"fsl,imx6sll-ocotp" (i.MX6SLL),
"fsl,imx7ulp-ocotp" (i.MX7ULP),
"fsl,imx8mq-ocotp" (i.MX8MQ),
"fsl,imx8mm-ocotp" (i.MX8MM),
followed by "syscon".
- #address-cells : Should be 1
- #size-cells : Should be 1


@@ -42,6 +42,7 @@ available subsections can be seen below.
target
mtdnand
miscellaneous
mei/index
w1
rapidio
s390-drivers


@@ -0,0 +1,32 @@
.. SPDX-License-Identifier: GPL-2.0
HDCP:
=====
ME FW as a security engine provides the capability for setting up
HDCP2.2 protocol negotiation between the Intel graphics device and
an HDC2.2 sink.
ME FW prepares HDCP2.2 negotiation parameters, signs and encrypts them
according the HDCP 2.2 spec. The Intel graphics sends the created blob
to the HDCP2.2 sink.
Similarly, the HDCP2.2 sink's response is transferred to ME FW
for decryption and verification.
Once all the steps of HDCP2.2 negotiation are completed,
upon request ME FW will configure the port as authenticated and supply
the HDCP encryption keys to Intel graphics hardware.
mei_hdcp driver
---------------
.. kernel-doc:: drivers/misc/mei/hdcp/mei_hdcp.c
:doc: MEI_HDCP Client Driver
mei_hdcp api
------------
.. kernel-doc:: drivers/misc/mei/hdcp/mei_hdcp.c
:functions:


@@ -0,0 +1,101 @@
.. SPDX-License-Identifier: GPL-2.0
Intel(R) Active Management Technology (Intel AMT)
=================================================
Prominent usage of the Intel ME Interface is to communicate with Intel(R)
Active Management Technology (Intel AMT) implemented in firmware running on
the Intel ME.
Intel AMT provides the ability to manage a host remotely out-of-band (OOB)
even when the operating system running on the host processor has crashed or
is in a sleep state.
Some examples of Intel AMT usage are:
- Monitoring hardware state and platform components
- Remote power off/on (useful for green computing or overnight IT
maintenance)
- OS updates
- Storage of useful platform information such as software assets
- Built-in hardware KVM
- Selective network isolation of Ethernet and IP protocol flows based
on policies set by a remote management console
- IDE device redirection from remote management console
Intel AMT (OOB) communication is based on SOAP (deprecated
starting with Release 6.0) over HTTP/S or WS-Management protocol over
HTTP/S that are received from a remote management console application.
For more information about Intel AMT:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Intel AMT Applications
----------------------
1) Intel Local Management Service (Intel LMS)
Applications running locally on the platform communicate with Intel AMT Release
2.0 and later releases in the same way that network applications do via SOAP
over HTTP (deprecated starting with Release 6.0) or with WS-Management over
SOAP over HTTP. This means that some Intel AMT features can be accessed from a
local application using the same network interface as a remote application
communicating with Intel AMT over the network.
When a local application sends a message addressed to the local Intel AMT host
name, the Intel LMS, which listens for traffic directed to the host name,
intercepts the message and routes it to the Intel MEI.
For more information:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "About Intel AMT" => "Local Access"
For downloading Intel LMS:
https://github.com/intel/lms
The Intel LMS opens a connection using the Intel MEI driver to the Intel LMS
firmware feature using a defined GUID and then communicates with the feature
using a protocol called Intel AMT Port Forwarding Protocol (Intel APF protocol).
The protocol is used to maintain multiple sessions with Intel AMT from a
single application.
See the protocol specification in the Intel AMT Software Development Kit (SDK)
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "SDK Resources" => "Intel(R) vPro(TM) Gateway (MPS)"
=> "Information for Intel(R) vPro(TM) Gateway Developers"
=> "Description of the Intel AMT Port Forwarding (APF) Protocol"
2) Intel AMT Remote configuration using a Local Agent
A Local Agent enables IT personnel to configure Intel AMT out-of-the-box
without requiring installing additional data to enable setup. The remote
configuration process may involve an ISV-developed remote configuration
agent that runs on the host.
For more information:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "Setup and Configuration of Intel AMT" =>
"SDK Tools Supporting Setup and Configuration" =>
"Using the Local Agent Sample"
Intel AMT OS Health Watchdog
----------------------------
The Intel AMT Watchdog is an OS Health (Hang/Crash) watchdog.
Whenever the OS hangs or crashes, Intel AMT will send an event
to any subscriber to this event. This mechanism means that
IT knows when a platform crashes even when there is a hard failure on the host.
The Intel AMT Watchdog is composed of two parts:
1) Firmware feature - receives the heartbeats
and sends an event when the heartbeats stop.
2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
configures the watchdog and sends the heartbeats.
The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.
If the Intel AMT is not enabled in the firmware then the watchdog client won't enumerate
on the me client bus and watchdog devices won't be exposed.
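
Since the driver plugs into the generic kernel watchdog API, heartbeats can
be sent with the standard watchdog ioctls. A minimal sketch follows; it
assumes the mei_wdt device appears as /dev/watchdog0 (the actual node name
depends on the system).

.. code-block:: C

/* Hedged sketch: arm the watchdog and keep feeding it. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

int main(void)
{
	int timeout = 120;	/* matches the driver default, in seconds */
	int fd = open("/dev/watchdog0", O_WRONLY);

	if (fd < 0) {
		perror("/dev/watchdog0");
		return 1;
	}
	ioctl(fd, WDIOC_SETTIMEOUT, &timeout);

	for (;;) {
		ioctl(fd, WDIOC_KEEPALIVE, 0);	/* send a heartbeat */
		sleep(timeout / 2);
	}
	/* Writing 'V' before close() would disarm the watchdog cleanly. */
}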
---
linux-mei@linux.intel.com


@@ -0,0 +1,23 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
===================================================
Intel(R) Management Engine Interface (Intel(R) MEI)
===================================================
**Copyright** |copy| 2019 Intel Corporation
.. only:: html
.. class:: toc-title
Table of Contents
.. toctree::
:maxdepth: 3
mei
mei-client-bus
iamt


@@ -0,0 +1,168 @@
.. SPDX-License-Identifier: GPL-2.0
==============================================
Intel(R) Management Engine (ME) Client bus API
==============================================
Rationale
=========
The MEI character device is useful for dedicated applications to send and receive
data to the many FW appliance found in Intel's ME from the user space.
However, for some of the ME functionalities it makes sense to leverage existing software
stack and expose them through existing kernel subsystems.
In order to plug seamlessly into the kernel device driver model we add kernel virtual
bus abstraction on top of the MEI driver. This allows implementing Linux kernel drivers
for the various MEI features as a stand alone entities found in their respective subsystem.
Existing device drivers can even potentially be re-used by adding an MEI CL bus layer to
the existing code.
MEI CL bus API
==============
A driver implementation for an MEI Client is very similar to any other existing bus
based device drivers. The driver registers itself as an MEI CL bus driver through
the ``struct mei_cl_driver`` structure defined in :file:`include/linux/mei_cl_bus.c`
.. code-block:: C
struct mei_cl_driver {
struct device_driver driver;
const char *name;
const struct mei_cl_device_id *id_table;
int (*probe)(struct mei_cl_device *dev, const struct mei_cl_id *id);
int (*remove)(struct mei_cl_device *dev);
};
The mei_cl_device_id structure defined in :file:`include/linux/mod_devicetable.h` allows a
driver to bind itself against a device name.
.. code-block:: C
struct mei_cl_device_id {
char name[MEI_CL_NAME_SIZE];
uuid_le uuid;
__u8 version;
kernel_ulong_t driver_info;
};
To actually register a driver on the ME Client bus one must call the :c:func:`mei_cl_add_driver`
API. This is typically called at module initialization time.
Once the driver is registered and bound to the device, a driver will typically
try to do some I/O on this bus and this should be done through the :c:func:`mei_cl_send`
and :c:func:`mei_cl_recv` functions. More detailed information is in :ref:`api` section.
In order for a driver to be notified about pending traffic or event, the driver
should register a callback via :c:func:`mei_cl_devev_register_rx_cb` and
:c:func:`mei_cldev_register_notify_cb` function respectively.
.. _api:
API:
----
.. kernel-doc:: drivers/misc/mei/bus.c
:export: drivers/misc/mei/bus.c
Example
=======
As a theoretical example let's pretend the ME comes with a "contact" NFC IP.
The driver init and exit routines for this device would look like:
.. code-block:: C
#define CONTACT_DRIVER_NAME "contact"
static struct mei_cl_device_id contact_mei_cl_tbl[] = {
{ CONTACT_DRIVER_NAME, },
/* required last entry */
{ }
};
MODULE_DEVICE_TABLE(mei_cl, contact_mei_cl_tbl);
static struct mei_cl_driver contact_driver = {
.id_table = contact_mei_tbl,
.name = CONTACT_DRIVER_NAME,
.probe = contact_probe,
.remove = contact_remove,
};
static int contact_init(void)
{
int r;
r = mei_cl_driver_register(&contact_driver);
if (r) {
pr_err(CONTACT_DRIVER_NAME ": driver registration failed\n");
return r;
}
return 0;
}
static void __exit contact_exit(void)
{
mei_cl_driver_unregister(&contact_driver);
}
module_init(contact_init);
module_exit(contact_exit);
And the driver's simplified probe routine would look like that:
.. code-block:: C
int contact_probe(struct mei_cl_device *dev, struct mei_cl_device_id *id)
{
[...]
mei_cldev_enable(dev);
mei_cldev_register_rx_cb(dev, contact_rx_cb);
return 0;
}
In the probe routine the driver first enable the MEI device and then registers
an rx handler which is as close as it can get to registering a threaded IRQ handler.
The handler implementation will typically call :c:func:`mei_cldev_recv` and then
process received data.
.. code-block:: C
#define MAX_PAYLOAD 128
#define HDR_SIZE 4
static void conntact_rx_cb(struct mei_cl_device *cldev)
{
struct contact *c = mei_cldev_get_drvdata(cldev);
unsigned char payload[MAX_PAYLOAD];
ssize_t payload_sz;
payload_sz = mei_cldev_recv(cldev, payload, MAX_PAYLOAD)
if (reply_size < HDR_SIZE) {
return;
}
c->process_rx(payload);
}
MEI Client Bus Drivers
======================
.. toctree::
:maxdepth: 2
hdcp
nfc


@@ -0,0 +1,176 @@
.. SPDX-License-Identifier: GPL-2.0
Introduction
============
The Intel Management Engine (Intel ME) is an isolated and protected computing
resource (Co-processor) residing inside certain Intel chipsets. The Intel ME
provides support for computer/IT management and security features.
The actual feature set depends on the Intel chipset SKU.
The Intel Management Engine Interface (Intel MEI, previously known as HECI)
is the interface between the Host and Intel ME. This interface is exposed
to the host as a PCI device, actually multiple PCI devices might be exposed.
The Intel MEI Driver is in charge of the communication channel between
a host application and the Intel ME features.
Each Intel ME feature, or Intel ME Client is addressed by a unique GUID and
each client has its own protocol. The protocol is message-based with a
header and payload up to maximal number of bytes advertised by the client,
upon connection.
Intel MEI Driver
================
The driver exposes a character device with device nodes /dev/meiX.
An application maintains communication with an Intel ME feature while
/dev/meiX is open. The binding to a specific feature is performed by calling
:c:macro:`MEI_CONNECT_CLIENT_IOCTL`, which passes the desired GUID.
The number of instances of an Intel ME feature that can be opened
at the same time depends on the Intel ME feature, but most of the
features allow only a single instance.
The driver is transparent to data that are passed between firmware feature
and host application.
Because some of the Intel ME features can change the system
configuration, the driver by default allows only a privileged
user to access it.
The session is terminated calling :c:func:`close(int fd)`.
A code snippet for an application communicating with Intel AMTHI client:
.. code-block:: C
struct mei_connect_client_data data;
fd = open(MEI_DEVICE);
data.d.in_client_uuid = AMTHI_GUID;
ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &data);
printf("Ver=%d, MaxLen=%ld\n",
data.d.in_client_uuid.protocol_version,
data.d.in_client_uuid.max_msg_length);
[...]
write(fd, amthi_req_data, amthi_req_data_len);
[...]
read(fd, &amthi_res_data, amthi_res_data_len);
[...]
close(fd);
User space API
IOCTLs:
=======
The Intel MEI Driver supports the following IOCTL commands:
IOCTL_MEI_CONNECT_CLIENT
-------------------------
Connect to firmware Feature/Client.
.. code-block:: none
Usage:
struct mei_connect_client_data client_data;
ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &client_data);
Inputs:
struct mei_connect_client_data - contain the following
Input field:
in_client_uuid - GUID of the FW Feature that needs
to connect to.
Outputs:
out_client_properties - Client Properties: MTU and Protocol Version.
Error returns:
ENOTTY No such client (i.e. wrong GUID) or connection is not allowed.
EINVAL Wrong IOCTL Number
ENODEV Device or Connection is not initialized or ready.
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EBUSY Connection Already Open
:Note:
max_msg_length (MTU) in client properties describes the maximum
data that can be sent or received. (e.g. if MTU=2K, can send
requests up to bytes 2k and received responses up to 2k bytes).
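
A hedged sketch of a connect helper that checks for the errors listed
above. The device node and client GUID are supplied by the caller; the
anonymous-union field names (in_client_uuid, out_client_properties) follow
the uapi <linux/mei.h> definitions rather than the shorthand used in the
snippet earlier in this document.

.. code-block:: C

/* Hedged sketch: connect to a firmware client and report its MTU. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/mei.h>

int mei_connect(const char *node, const uuid_le *guid)
{
	struct mei_connect_client_data data;
	int fd = open(node, O_RDWR);

	if (fd < 0)
		return -errno;

	memset(&data, 0, sizeof(data));
	data.in_client_uuid = *guid;

	if (ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &data) < 0) {
		int err = errno;	/* ENOTTY, ENODEV, EBUSY, ... as listed above */

		fprintf(stderr, "connect failed: %s\n", strerror(err));
		close(fd);
		return -err;
	}

	printf("MTU=%u protocol=%u\n",
	       data.out_client_properties.max_msg_length,
	       (unsigned int)data.out_client_properties.protocol_version);
	return fd;	/* caller read()s/write()s, then close()s */
}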
IOCTL_MEI_NOTIFY_SET
---------------------
Enable or disable event notifications.
.. code-block:: none
Usage:
uint32_t enable;
ioctl(fd, IOCTL_MEI_NOTIFY_SET, &enable);
uint32_t enable = 1;
or
uint32_t enable[disable] = 0;
Error returns:
EINVAL Wrong IOCTL Number
ENODEV Device is not initialized or the client not connected
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EOPNOTSUPP if the device doesn't support the feature
:Note:
The client must be connected in order to enable notification events
IOCTL_MEI_NOTIFY_GET
--------------------
Retrieve event
.. code-block:: none
Usage:
uint32_t event;
ioctl(fd, IOCTL_MEI_NOTIFY_GET, &event);
Outputs:
1 - if an event is pending
0 - if there is no even pending
Error returns:
EINVAL Wrong IOCTL Number
ENODEV Device is not initialized or the client not connected
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EOPNOTSUPP if the device doesn't support the feature
:Note:
The client must be connected and event notification has to be enabled
in order to receive an event
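
Putting the two notification ioctls together, a minimal sketch (it assumes
fd was already connected with IOCTL_MEI_CONNECT_CLIENT):

.. code-block:: C

/* Hedged sketch: enable notifications, then check for a pending event. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/mei.h>

int mei_check_event(int fd)
{
	uint32_t enable = 1;
	uint32_t event = 0;

	if (ioctl(fd, IOCTL_MEI_NOTIFY_SET, &enable) < 0)
		return -1;	/* e.g. EOPNOTSUPP if the device lacks the feature */

	/* A real application would poll()/select() on fd before asking. */
	if (ioctl(fd, IOCTL_MEI_NOTIFY_GET, &event) < 0)
		return -1;

	return event;	/* 1: event pending, 0: nothing pending */
}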
Supported Chipsets
==================
82X38/X48 Express and newer
linux-mei@linux.intel.com


@@ -0,0 +1,28 @@
.. SPDX-License-Identifier: GPL-2.0
MEI NFC
-------
Some Intel 8 and 9 Serieses chipsets supports NFC devices connected behind
the Intel Management Engine controller.
MEI client bus exposes the NFC chips as NFC phy devices and enables
binding with Microread and NXP PN544 NFC device driver from the Linux NFC
subsystem.
.. kernel-render:: DOT
:alt: MEI NFC digraph
:caption: **MEI NFC** Stack
digraph NFC {
cl_nfc -> me_cl_nfc;
"drivers/nfc/mei_phy" -> cl_nfc [lhead=bus];
"drivers/nfc/microread/mei" -> cl_nfc;
"drivers/nfc/microread/mei" -> "drivers/nfc/mei_phy";
"drivers/nfc/pn544/mei" -> cl_nfc;
"drivers/nfc/pn544/mei" -> "drivers/nfc/mei_phy";
"net/nfc" -> "drivers/nfc/microread/mei";
"net/nfc" -> "drivers/nfc/pn544/mei";
"neard" -> "net/nfc";
cl_nfc [label="mei/bus(nfc)"];
me_cl_nfc [label="me fw (nfc)"];
}


@@ -44,7 +44,9 @@ Message transfer.
b. Transfer message (Read/Write) to Slave1 or broadcast message on
Bus in case of bank switch.
c. Release Message lock ::
c. Release Message lock
::
+----------+ +---------+
| | | |


@@ -1,11 +1,17 @@
====================
Kernel driver eeprom
====================
Supported chips:
* Any EEPROM chip in the designated address range
Prefix: 'eeprom'
Addresses scanned: I2C 0x50 - 0x57
Datasheets: Publicly available from:
Atmel (www.atmel.com),
Catalyst (www.catsemi.com),
Fairchild (www.fairchildsemi.com),
@@ -16,7 +22,9 @@ Supported chips:
Xicor (www.xicor.com),
and others.
Chip Size (bits) Address
========= ============= ============================================
Chip Size (bits) Address
========= ============= ============================================
24C01 1K 0x50 (shadows at 0x51 - 0x57)
24C01A 1K 0x50 - 0x57 (Typical device on DIMMs)
24C02 2K 0x50 - 0x57
@@ -24,7 +32,7 @@ Supported chips:
(additional data at 0x51, 0x53, 0x55, 0x57)
24C08 8K 0x50, 0x54 (additional data at 0x51, 0x52,
0x53, 0x55, 0x56, 0x57)
24C16 16K 0x50 (additional data at 0x51 - 0x57)
24C16 16K 0x50 (additional data at 0x51 - 0x57)
Sony 2K 0x57
Atmel 34C02B 2K 0x50 - 0x57, SW write protect at 0x30-37
@@ -33,14 +41,15 @@ Supported chips:
Fairchild 34W02 2K 0x50 - 0x57, SW write protect at 0x30-37
Microchip 24AA52 2K 0x50 - 0x57, SW write protect at 0x30-37
ST M34C02 2K 0x50 - 0x57, SW write protect at 0x30-37
========= ============= ============================================
Authors:
Frodo Looijaard <frodol@dds.nl>,
Philip Edelbrock <phil@netroedge.com>,
Jean Delvare <jdelvare@suse.de>,
Greg Kroah-Hartman <greg@kroah.com>,
IBM Corp.
- Frodo Looijaard <frodol@dds.nl>,
- Philip Edelbrock <phil@netroedge.com>,
- Jean Delvare <jdelvare@suse.de>,
- Greg Kroah-Hartman <greg@kroah.com>,
- IBM Corp.
Description
-----------
@@ -74,23 +83,25 @@ this address will write protect the memory array permanently, and the
device will no longer respond at the 0x30-37 address. The eeprom driver
does not support this register.
Lacking functionality:
Lacking functionality
---------------------
* Full support for larger devices (24C04, 24C08, 24C16). These are not
typically found on a PC. These devices will appear as separate devices at
multiple addresses.
typically found on a PC. These devices will appear as separate devices at
multiple addresses.
* Support for really large devices (24C32, 24C64, 24C128, 24C256, 24C512).
These devices require two-byte address fields and are not supported.
These devices require two-byte address fields and are not supported.
* Enable Writing. Again, no technical reason why not, but making it easy
to change the contents of the EEPROMs (on DIMMs anyway) also makes it easy
to disable the DIMMs (potentially preventing the computer from booting)
until the values are restored somehow.
to change the contents of the EEPROMs (on DIMMs anyway) also makes it easy
to disable the DIMMs (potentially preventing the computer from booting)
until the values are restored somehow.
Use:
Use
---
After inserting the module (and any other required SMBus/i2c modules), you
should have some EEPROM directories in /sys/bus/i2c/devices/* of names such
should have some EEPROM directories in ``/sys/bus/i2c/devices/*`` of names such
as "0-0050". Inside each of these is a series of files, the eeprom file
contains the binary data from EEPROM.
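
For example, a small sketch that dumps the first bytes of such an EEPROM
through the sysfs file; the device name "0-0050" follows the example above
and will differ from system to system.

.. code-block:: C

/* Hedged sketch: hex-dump the first 16 bytes of a detected EEPROM. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/bus/i2c/devices/0-0050/eeprom";
	unsigned char buf[16];
	size_t i, n;
	FILE *f = fopen(path, "rb");

	if (!f) {
		perror(path);
		return 1;
	}
	n = fread(buf, 1, sizeof(buf), f);
	for (i = 0; i < n; i++)
		printf("%02x ", buf[i]);
	printf("\n");
	fclose(f);
	return 0;
}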


@@ -1,10 +1,15 @@
========================
Kernel driver ics932s401
======================
========================
Supported chips:
* IDT ICS932S401
Prefix: 'ics932s401'
Addresses scanned: I2C 0x69
Datasheet: Publicly available at the IDT website
Author: Darrick J. Wong


@@ -14,4 +14,9 @@ fit into other categories.
.. toctree::
:maxdepth: 2
eeprom
ibmvmc
ics932s401
isl29003
lis3lv02d
max6875


@@ -1,10 +1,15 @@
======================
Kernel driver isl29003
=====================
======================
Supported chips:
* Intersil ISL29003
Prefix: 'isl29003'
Addresses scanned: none
Datasheet:
http://www.intersil.com/data/fn/fn7464.pdf
@@ -37,25 +42,33 @@ Sysfs entries
-------------
range:
== ===========================
0: 0 lux to 1000 lux (default)
1: 0 lux to 4000 lux
2: 0 lux to 16,000 lux
3: 0 lux to 64,000 lux
== ===========================
resolution:
== =====================
0: 2^16 cycles (default)
1: 2^12 cycles
2: 2^8 cycles
3: 2^4 cycles
== =====================
mode:
== =================================================
0: diode1's current (unsigned 16bit) (default)
1: diode1's current (unsigned 16bit)
2: difference between diodes (l1 - l2, signed 15bit)
== =================================================
power_state:
== =================================================
0: device is disabled (default)
1: device is enabled
== =================================================
lux (read only):
returns the value from the last sensor reading
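
A minimal sketch of driving these attributes from user space; the device's
sysfs directory is passed in by the caller rather than assumed, since the
excerpt above does not name it.

.. code-block:: C

/* Hedged sketch: select the widest range, then read a lux value. */
#include <stdio.h>

static int read_lux(const char *sysfs_dir)
{
	char path[256];
	int lux = -1;
	FILE *f;

	snprintf(path, sizeof(path), "%s/range", sysfs_dir);
	f = fopen(path, "w");
	if (f) {
		fprintf(f, "3\n");	/* 3: 0 lux to 64,000 lux, per the table */
		fclose(f);
	}

	snprintf(path, sizeof(path), "%s/lux", sysfs_dir);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%d", &lux) != 1)
			lux = -1;
		fclose(f);
	}
	return lux;
}

int main(int argc, char **argv)
{
	if (argc == 2)
		printf("lux = %d\n", read_lux(argv[1]));
	return 0;
}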


@@ -1,3 +1,4 @@
=======================
Kernel driver lis3lv02d
=======================
@@ -8,8 +9,8 @@ Supported chips:
LIS331DLH (16 bits)
Authors:
Yan Burman <burman.yan@gmail.com>
Eric Piel <eric.piel@tremplin-utc.net>
- Yan Burman <burman.yan@gmail.com>
- Eric Piel <eric.piel@tremplin-utc.net>
Description
@@ -25,11 +26,15 @@ neverball). The accelerometer data is readable via
to mg values (1/1000th of earth gravity).
Sysfs attributes under /sys/devices/platform/lis3lv02d/:
position - 3D position that the accelerometer reports. Format: "(x,y,z)"
rate - read reports the sampling rate of the accelerometer device in HZ.
position
- 3D position that the accelerometer reports. Format: "(x,y,z)"
rate
- read reports the sampling rate of the accelerometer device in HZ.
write changes sampling rate of the accelerometer device.
Only values which are supported by HW are accepted.
selftest - performs selftest for the chip as specified by chip manufacturer.
selftest
- performs selftest for the chip as specified by chip manufacturer.
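
For example, a small sketch that reads and parses the position attribute
documented above; the path follows the sysfs location mentioned a few lines
earlier.

.. code-block:: C

/* Hedged sketch: parse the "(x,y,z)" triplet reported by the driver. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/devices/platform/lis3lv02d/position";
	int x, y, z;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "(%d,%d,%d)", &x, &y, &z) == 3)
		printf("x=%d y=%d z=%d\n", x, y, z);	/* convertible to mg */
	fclose(f);
	return 0;
}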
This driver also provides an absolute input class device, allowing
the laptop to act as a pinball machine-esque joystick. Joystick device can be
@@ -69,11 +74,12 @@ Axes orientation
For better compatibility between the various laptops. The values reported by
the accelerometer are converted into a "standard" organisation of the axes
(aka "can play neverball out of the box"):
* When the laptop is horizontal the position reported is about 0 for X and Y
and a positive value for Z
and a positive value for Z
* If the left side is elevated, X increases (becomes positive)
* If the front side (where the touchpad is) is elevated, Y decreases
(becomes negative)
(becomes negative)
* If the laptop is put upside-down, Z becomes negative
If your laptop model is not recognized (cf "dmesg"), you can send an


@@ -1,12 +1,16 @@
=====================
Kernel driver max6875
=====================
Supported chips:
* Maxim MAX6874, MAX6875
Prefix: 'max6875'
Addresses scanned: None (see below)
Datasheet:
http://pdfserv.maxim-ic.com/en/ds/MAX6874-MAX6875.pdf
Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6874-MAX6875.pdf
Author: Ben Gardner <bgardner@wabtec.com>
@@ -24,9 +28,13 @@ registers.
The Maxim MAX6874 is a similar, mostly compatible device, with more inputs
and outputs:
vin gpi vout
=========== === === ====
- vin gpi vout
=========== === === ====
MAX6874 6 4 8
MAX6875 4 3 5
=========== === === ====
See the datasheet for more information.
@@ -41,13 +49,16 @@ General Remarks
---------------
Valid addresses for the MAX6875 are 0x50 and 0x52.
Valid addresses for the MAX6874 are 0x50, 0x52, 0x54 and 0x56.
The driver does not probe any address, so you explicitly instantiate the
devices.
Example:
$ modprobe max6875
$ echo max6875 0x50 > /sys/bus/i2c/devices/i2c-0/new_device
Example::
$ modprobe max6875
$ echo max6875 0x50 > /sys/bus/i2c/devices/i2c-0/new_device
The MAX6874/MAX6875 ignores address bit 0, so this driver attaches to multiple
addresses. For example, for address 0x50, it also reserves 0x51.
@@ -58,52 +69,67 @@ Programming the chip using i2c-dev
----------------------------------
Use the i2c-dev interface to access and program the chips.
Reads and writes are performed differently depending on the address range.
The configuration registers are at addresses 0x00 - 0x45.
Use i2c_smbus_write_byte_data() to write a register and
i2c_smbus_read_byte_data() to read a register.
The command is the register number.
Examples:
To write a 1 to register 0x45:
To write a 1 to register 0x45::
i2c_smbus_write_byte_data(fd, 0x45, 1);
To read register 0x45:
To read register 0x45::
value = i2c_smbus_read_byte_data(fd, 0x45);
The configuration EEPROM is at addresses 0x8000 - 0x8045.
The user EEPROM is at addresses 0x8100 - 0x82ff.
Use i2c_smbus_write_word_data() to write a byte to EEPROM.
The command is the upper byte of the address: 0x80, 0x81, or 0x82.
The data word is the lower part of the address or'd with data << 8.
The data word is the lower part of the address or'd with data << 8::
cmd = address >> 8;
val = (address & 0xff) | (data << 8);
Example:
To write 0x5a to address 0x8003:
To write 0x5a to address 0x8003::
i2c_smbus_write_word_data(fd, 0x80, 0x5a03);
Reading data from the EEPROM is a little more complicated.
Use i2c_smbus_write_byte_data() to set the read address and then
i2c_smbus_read_byte() or i2c_smbus_read_i2c_block_data() to read the data.
Example:
To read data starting at offset 0x8100, first set the address:
To read data starting at offset 0x8100, first set the address::
i2c_smbus_write_byte_data(fd, 0x81, 0x00);
And then read the data
And then read the data::
value = i2c_smbus_read_byte(fd);
or
or::
count = i2c_smbus_read_i2c_block_data(fd, 0x84, 16, buffer);
The block read should read 16 bytes.
0x84 is the block read command.
See the datasheet for more details.
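
The register and EEPROM accesses described above can be combined into one
small i2c-dev sketch. It assumes the chip was instantiated at address 0x50
on /dev/i2c-0 and uses the SMBus helpers shipped with i2c-tools
(<i2c/smbus.h>); adjust the bus, address and headers to the local setup.

.. code-block:: C

/* Hedged sketch of the documented MAX6875 access sequences. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
	unsigned char buf[16];
	int fd = open("/dev/i2c-0", O_RDWR);

	if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x50) < 0) {
		perror("i2c setup");
		return 1;
	}

	/* Configuration register 0x45: write 1, then read it back. */
	i2c_smbus_write_byte_data(fd, 0x45, 1);
	printf("reg 0x45 = %d\n", i2c_smbus_read_byte_data(fd, 0x45));

	/* Write 0x5a to EEPROM address 0x8003:
	 * cmd = address >> 8, word = (address & 0xff) | (data << 8). */
	i2c_smbus_write_word_data(fd, 0x80, 0x5a03);

	/* Read 16 bytes of user EEPROM starting at 0x8100. */
	i2c_smbus_write_byte_data(fd, 0x81, 0x00);	/* set the read address */
	i2c_smbus_read_i2c_block_data(fd, 0x84, sizeof(buf), buf);

	close(fd);
	return 0;
}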


@@ -1,141 +0,0 @@
Intel(R) Management Engine (ME) Client bus API
==============================================
Rationale
=========
MEI misc character device is useful for dedicated applications to send and receive
data to the many FW appliance found in Intel's ME from the user space.
However for some of the ME functionalities it make sense to leverage existing software
stack and expose them through existing kernel subsystems.
In order to plug seamlessly into the kernel device driver model we add kernel virtual
bus abstraction on top of the MEI driver. This allows implementing linux kernel drivers
for the various MEI features as a stand alone entities found in their respective subsystem.
Existing device drivers can even potentially be re-used by adding an MEI CL bus layer to
the existing code.
MEI CL bus API
==============
A driver implementation for an MEI Client is very similar to existing bus
based device drivers. The driver registers itself as an MEI CL bus driver through
the mei_cl_driver structure:
struct mei_cl_driver {
struct device_driver driver;
const char *name;
const struct mei_cl_device_id *id_table;
int (*probe)(struct mei_cl_device *dev, const struct mei_cl_id *id);
int (*remove)(struct mei_cl_device *dev);
};
struct mei_cl_id {
char name[MEI_NAME_SIZE];
kernel_ulong_t driver_info;
};
The mei_cl_id structure allows the driver to bind itself against a device name.
To actually register a driver on the ME Client bus one must call the mei_cl_add_driver()
API. This is typically called at module init time.
Once registered on the ME Client bus, a driver will typically try to do some I/O on
this bus and this should be done through the mei_cl_send() and mei_cl_recv()
routines. The latter is synchronous (blocks and sleeps until data shows up).
In order for drivers to be notified of pending events waiting for them (e.g.
an Rx event) they can register an event handler through the
mei_cl_register_event_cb() routine. Currently only the MEI_EVENT_RX event
will trigger an event handler call and the driver implementation is supposed
to call mei_recv() from the event handler in order to fetch the pending
received buffers.
Example
=======
As a theoretical example let's pretend the ME comes with a "contact" NFC IP.
The driver init and exit routines for this device would look like:
#define CONTACT_DRIVER_NAME "contact"
static struct mei_cl_device_id contact_mei_cl_tbl[] = {
{ CONTACT_DRIVER_NAME, },
/* required last entry */
{ }
};
MODULE_DEVICE_TABLE(mei_cl, contact_mei_cl_tbl);
static struct mei_cl_driver contact_driver = {
.id_table = contact_mei_tbl,
.name = CONTACT_DRIVER_NAME,
.probe = contact_probe,
.remove = contact_remove,
};
static int contact_init(void)
{
int r;
r = mei_cl_driver_register(&contact_driver);
if (r) {
pr_err(CONTACT_DRIVER_NAME ": driver registration failed\n");
return r;
}
return 0;
}
static void __exit contact_exit(void)
{
mei_cl_driver_unregister(&contact_driver);
}
module_init(contact_init);
module_exit(contact_exit);
And the driver's simplified probe routine would look like that:
int contact_probe(struct mei_cl_device *dev, struct mei_cl_device_id *id)
{
struct contact_driver *contact;
[...]
mei_cl_enable_device(dev);
mei_cl_register_event_cb(dev, contact_event_cb, contact);
return 0;
}
In the probe routine the driver first enable the MEI device and then registers
an ME bus event handler which is as close as it can get to registering a
threaded IRQ handler.
The handler implementation will typically call some I/O routine depending on
the pending events:
#define MAX_NFC_PAYLOAD 128
static void contact_event_cb(struct mei_cl_device *dev, u32 events,
void *context)
{
struct contact_driver *contact = context;
if (events & BIT(MEI_EVENT_RX)) {
u8 payload[MAX_NFC_PAYLOAD];
int payload_size;
payload_size = mei_recv(dev, payload, MAX_NFC_PAYLOAD);
if (payload_size <= 0)
return;
/* Hook to the NFC subsystem */
nfc_hci_recv_frame(contact->hdev, payload, payload_size);
}
}


@@ -1,266 +0,0 @@
Intel(R) Management Engine Interface (Intel(R) MEI)
===================================================
Introduction
============
The Intel Management Engine (Intel ME) is an isolated and protected computing
resource (Co-processor) residing inside certain Intel chipsets. The Intel ME
provides support for computer/IT management features. The feature set
depends on the Intel chipset SKU.
The Intel Management Engine Interface (Intel MEI, previously known as HECI)
is the interface between the Host and Intel ME. This interface is exposed
to the host as a PCI device. The Intel MEI Driver is in charge of the
communication channel between a host application and the Intel ME feature.
Each Intel ME feature (Intel ME Client) is addressed by a GUID/UUID and
each client has its own protocol. The protocol is message-based with a
header and payload up to 512 bytes.
Prominent usage of the Intel ME Interface is to communicate with Intel(R)
Active Management Technology (Intel AMT) implemented in firmware running on
the Intel ME.
Intel AMT provides the ability to manage a host remotely out-of-band (OOB)
even when the operating system running on the host processor has crashed or
is in a sleep state.
Some examples of Intel AMT usage are:
- Monitoring hardware state and platform components
- Remote power off/on (useful for green computing or overnight IT
maintenance)
- OS updates
- Storage of useful platform information such as software assets
- Built-in hardware KVM
- Selective network isolation of Ethernet and IP protocol flows based
on policies set by a remote management console
- IDE device redirection from remote management console
Intel AMT (OOB) communication is based on SOAP (deprecated
starting with Release 6.0) over HTTP/S or WS-Management protocol over
HTTP/S that are received from a remote management console application.
For more information about Intel AMT:
http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide
Intel MEI Driver
================
The driver exposes a misc device called /dev/mei.
An application maintains communication with an Intel ME feature while
/dev/mei is open. The binding to a specific feature is performed by calling
MEI_CONNECT_CLIENT_IOCTL, which passes the desired UUID.
The number of instances of an Intel ME feature that can be opened
at the same time depends on the Intel ME feature, but most of the
features allow only a single instance.
The Intel AMT Host Interface (Intel AMTHI) feature supports multiple
simultaneous user connected applications. The Intel MEI driver
handles this internally by maintaining request queues for the applications.
The driver is transparent to data that are passed between firmware feature
and host application.
Because some of the Intel ME features can change the system
configuration, the driver by default allows only a privileged
user to access it.
A code snippet for an application communicating with Intel AMTHI client:
struct mei_connect_client_data data;
fd = open(MEI_DEVICE);
data.d.in_client_uuid = AMTHI_UUID;
ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &data);
printf("Ver=%d, MaxLen=%ld\n",
data.d.in_client_uuid.protocol_version,
data.d.in_client_uuid.max_msg_length);
[...]
write(fd, amthi_req_data, amthi_req_data_len);
[...]
read(fd, &amthi_res_data, amthi_res_data_len);
[...]
close(fd);
IOCTL
=====
The Intel MEI Driver supports the following IOCTL commands:
IOCTL_MEI_CONNECT_CLIENT Connect to firmware Feature (client).
usage:
struct mei_connect_client_data clientData;
ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &clientData);
inputs:
mei_connect_client_data struct contain the following
input field:
in_client_uuid - UUID of the FW Feature that needs
to connect to.
outputs:
out_client_properties - Client Properties: MTU and Protocol Version.
error returns:
EINVAL Wrong IOCTL Number
ENODEV Device or Connection is not initialized or ready.
(e.g. Wrong UUID)
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EBUSY Connection Already Open
Notes:
max_msg_length (MTU) in client properties describes the maximum
data that can be sent or received. (e.g. if MTU=2K, can send
requests up to bytes 2k and received responses up to 2k bytes).
IOCTL_MEI_NOTIFY_SET: enable or disable event notifications
Usage:
uint32_t enable;
ioctl(fd, IOCTL_MEI_NOTIFY_SET, &enable);
Inputs:
uint32_t enable = 1;
or
uint32_t enable[disable] = 0;
Error returns:
EINVAL Wrong IOCTL Number
ENODEV Device is not initialized or the client not connected
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EOPNOTSUPP if the device doesn't support the feature
Notes:
The client must be connected in order to enable notification events
IOCTL_MEI_NOTIFY_GET : retrieve event
Usage:
uint32_t event;
ioctl(fd, IOCTL_MEI_NOTIFY_GET, &event);
Outputs:
1 - if an event is pending
0 - if there is no even pending
Error returns:
EINVAL Wrong IOCTL Number
ENODEV Device is not initialized or the client not connected
ENOMEM Unable to allocate memory to client internal data.
EFAULT Fatal Error (e.g. Unable to access user input data)
EOPNOTSUPP if the device doesn't support the feature
Notes:
The client must be connected and event notification has to be enabled
in order to receive an event
Intel ME Applications
=====================
1) Intel Local Management Service (Intel LMS)
Applications running locally on the platform communicate with Intel AMT Release
2.0 and later releases in the same way that network applications do via SOAP
over HTTP (deprecated starting with Release 6.0) or with WS-Management over
SOAP over HTTP. This means that some Intel AMT features can be accessed from a
local application using the same network interface as a remote application
communicating with Intel AMT over the network.
When a local application sends a message addressed to the local Intel AMT host
name, the Intel LMS, which listens for traffic directed to the host name,
intercepts the message and routes it to the Intel MEI.
For more information:
http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide
Under "About Intel AMT" => "Local Access"
For downloading Intel LMS:
http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/
The Intel LMS opens a connection using the Intel MEI driver to the Intel LMS
firmware feature using a defined UUID and then communicates with the feature
using a protocol called Intel AMT Port Forwarding Protocol (Intel APF protocol).
The protocol is used to maintain multiple sessions with Intel AMT from a
single application.
See the protocol specification in the Intel AMT Software Development Kit (SDK)
http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide
Under "SDK Resources" => "Intel(R) vPro(TM) Gateway (MPS)"
=> "Information for Intel(R) vPro(TM) Gateway Developers"
=> "Description of the Intel AMT Port Forwarding (APF) Protocol"
2) Intel AMT Remote configuration using a Local Agent
A Local Agent enables IT personnel to configure Intel AMT out of the box
without requiring the installation of additional data to enable setup. The remote
configuration process may involve an ISV-developed remote configuration
agent that runs on the host.
For more information:
http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide
Under "Setup and Configuration of Intel AMT" =>
"SDK Tools Supporting Setup and Configuration" =>
"Using the Local Agent Sample"
An open source Intel AMT configuration utility, implementing a local agent
that accesses the Intel MEI driver, can be found here:
http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/
Intel AMT OS Health Watchdog
============================
The Intel AMT Watchdog is an OS Health (Hang/Crash) watchdog.
Whenever the OS hangs or crashes, Intel AMT sends an event
to any subscriber registered for this event. This mechanism means that
IT knows when a platform crashes even when there is a hard failure on the host.
The Intel AMT Watchdog is composed of two parts:
1) Firmware feature - receives the heartbeats
and sends an event when the heartbeats stop.
2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
configures the watchdog and sends the heartbeats.
The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.
If Intel AMT is not enabled in the firmware, the watchdog client won't enumerate
on the MEI client bus and the watchdog device won't be exposed.
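Because the driver registers with the generic kernel watchdog framework, the
Intel AMT watchdog can be exercised from user space like any other watchdog
node. The sketch below is illustrative only; /dev/watchdog0 is an assumption,
since the actual node number depends on which other watchdog drivers are
present on the system.

	/*
	 * Sketch: query the timeout and send one heartbeat through the generic
	 * watchdog interface. /dev/watchdog0 is an assumed node name.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/watchdog.h>

	int main(void)
	{
		int timeout = 0;
		int fd = open("/dev/watchdog0", O_WRONLY);

		if (fd < 0)
			return 1;

		if (ioctl(fd, WDIOC_GETTIMEOUT, &timeout) == 0)
			printf("timeout: %d seconds\n", timeout);	/* default: 120 */

		ioctl(fd, WDIOC_KEEPALIVE, 0);		/* one heartbeat */

		/* Conventional magic close character before closing the node. */
		if (write(fd, "V", 1) < 0)
			perror("write");

		close(fd);
		return 0;
	}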
Supported Chipsets
==================
7 Series Chipset Family
6 Series Chipset Family
5 Series Chipset Family
4 Series Chipset Family
Mobile 4 Series Chipset Family
ICH9
82946GZ/GL
82G35 Express
82Q963/Q965
82P965/G965
Mobile PM965/GM965
Mobile GME965/GLE960
82Q35 Express
82G33/G31/P35/P31 Express
82Q33 Express
82X38/X48 Express
---
linux-mei@linux.intel.com


@ -6537,6 +6537,19 @@ F: fs/crypto/
F: include/linux/fscrypt*.h
F: Documentation/filesystems/fscrypt.rst
FSI SUBSYSTEM
M: Jeremy Kerr <jk@ozlabs.org>
M: Joel Stanley <joel@jms.id.au>
R: Alistar Popple <alistair@popple.id.au>
R: Eddie James <eajames@linux.ibm.com>
L: linux-fsi@lists.ozlabs.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joel/fsi.git
Q: http://patchwork.ozlabs.org/project/linux-fsi/list/
S: Supported
F: drivers/fsi/
F: include/linux/fsi*.h
F: include/trace/events/fsi*.h
FSI-ATTACHED I2C DRIVER
M: Eddie James <eajames@linux.ibm.com>
L: linux-i2c@vger.kernel.org
@ -8103,7 +8116,7 @@ F: include/uapi/linux/mei.h
F: include/linux/mei_cl_bus.h
F: drivers/misc/mei/*
F: drivers/watchdog/mei_wdt.c
F: Documentation/misc-devices/mei/*
F: Documentation/driver-api/mei/*
F: samples/mei/*
INTEL MENLOW THERMAL DRIVER
@ -8936,7 +8949,7 @@ F: include/linux/leds.h
LEGACY EEPROM DRIVER
M: Jean Delvare <jdelvare@suse.com>
S: Maintained
F: Documentation/misc-devices/eeprom
F: Documentation/misc-devices/eeprom.rst
F: drivers/misc/eeprom/eeprom.c
LEGO MINDSTORMS EV3
@ -9222,7 +9235,7 @@ F: Documentation/memory-barriers.txt
LIS3LV02D ACCELEROMETER DRIVER
M: Eric Piel <eric.piel@tremplin-utc.net>
S: Maintained
F: Documentation/misc-devices/lis3lv02d
F: Documentation/misc-devices/lis3lv02d.rst
F: drivers/misc/lis3lv02d/
F: drivers/platform/x86/hp_accel.c


@ -666,6 +666,11 @@ EXPORT_SYMBOL(radix__flush_tlb_page);
#define radix__flush_all_mm radix__local_flush_all_mm
#endif /* CONFIG_SMP */
/*
* If kernel TLBIs ever become local rather than global, then
* drivers/misc/ocxl/link.c:ocxl_link_add_pe will need some work, as it
* assumes kernel TLBIs are global.
*/
void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
_tlbie_pid(0, RIC_FLUSH_ALL);


@ -21,6 +21,15 @@
static const struct acpi_device_id amba_id_list[] = {
{"ARMH0061", 0}, /* PL061 GPIO Device */
{"ARMHC500", 0}, /* ARM CoreSight ETM4x */
{"ARMHC501", 0}, /* ARM CoreSight ETR */
{"ARMHC502", 0}, /* ARM CoreSight STM */
{"ARMHC503", 0}, /* ARM CoreSight Debug */
{"ARMHC979", 0}, /* ARM CoreSight TPIU */
{"ARMHC97C", 0}, /* ARM CoreSight SoC-400 TMC, SoC-600 ETF/ETB */
{"ARMHC98D", 0}, /* ARM CoreSight Dynamic Replicator */
{"ARMHC9CA", 0}, /* ARM CoreSight CATU */
{"ARMHC9FF", 0}, /* ARM CoreSight Dynamic Funnel */
{"", 0},
};


@ -2059,10 +2059,9 @@ static size_t binder_get_object(struct binder_proc *proc,
read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
!IS_ALIGNED(offset, sizeof(u32)))
binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
offset, read_size))
return 0;
binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
offset, read_size);
/* Ok, now see if we read a complete object. */
hdr = &object->hdr;
@ -2131,8 +2130,10 @@ static struct binder_buffer_object *binder_validate_ptr(
return NULL;
buffer_offset = start_offset + sizeof(binder_size_t) * index;
binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
b, buffer_offset, sizeof(object_offset));
if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
b, buffer_offset,
sizeof(object_offset)))
return NULL;
object_size = binder_get_object(proc, b, object_offset, object);
if (!object_size || object->hdr.type != BINDER_TYPE_PTR)
return NULL;
@ -2212,10 +2213,12 @@ static bool binder_validate_fixup(struct binder_proc *proc,
return false;
last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t);
buffer_offset = objects_start_offset +
sizeof(binder_size_t) * last_bbo->parent,
binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset,
b, buffer_offset,
sizeof(last_obj_offset));
sizeof(binder_size_t) * last_bbo->parent;
if (binder_alloc_copy_from_buffer(&proc->alloc,
&last_obj_offset,
b, buffer_offset,
sizeof(last_obj_offset)))
return false;
}
return (fixup_offset >= last_min_offset);
}
@ -2301,15 +2304,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
struct binder_object_header *hdr;
size_t object_size;
size_t object_size = 0;
struct binder_object object;
binder_size_t object_offset;
binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
buffer, buffer_offset,
sizeof(object_offset));
object_size = binder_get_object(proc, buffer,
object_offset, &object);
if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
buffer, buffer_offset,
sizeof(object_offset)))
object_size = binder_get_object(proc, buffer,
object_offset, &object);
if (object_size == 0) {
pr_err("transaction release %d bad object at offset %lld, size %zd\n",
debug_id, (u64)object_offset, buffer->data_size);
@ -2432,15 +2435,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
for (fd_index = 0; fd_index < fda->num_fds;
fd_index++) {
u32 fd;
int err;
binder_size_t offset = fda_offset +
fd_index * sizeof(fd);
binder_alloc_copy_from_buffer(&proc->alloc,
&fd,
buffer,
offset,
sizeof(fd));
binder_deferred_fd_close(fd);
err = binder_alloc_copy_from_buffer(
&proc->alloc, &fd, buffer,
offset, sizeof(fd));
WARN_ON(err);
if (!err)
binder_deferred_fd_close(fd);
}
} break;
default:
@ -2683,11 +2687,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
int ret;
binder_size_t offset = fda_offset + fdi * sizeof(fd);
binder_alloc_copy_from_buffer(&target_proc->alloc,
&fd, t->buffer,
offset, sizeof(fd));
ret = binder_translate_fd(fd, offset, t, thread,
in_reply_to);
ret = binder_alloc_copy_from_buffer(&target_proc->alloc,
&fd, t->buffer,
offset, sizeof(fd));
if (!ret)
ret = binder_translate_fd(fd, offset, t, thread,
in_reply_to);
if (ret < 0)
return ret;
}
@ -2740,8 +2745,12 @@ static int binder_fixup_parent(struct binder_transaction *t,
}
buffer_offset = bp->parent_offset +
(uintptr_t)parent->buffer - (uintptr_t)b->user_data;
binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
&bp->buffer, sizeof(bp->buffer));
if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
&bp->buffer, sizeof(bp->buffer))) {
binder_user_error("%d:%d got transaction with invalid parent offset\n",
proc->pid, thread->pid);
return -EINVAL;
}
return 0;
}
@ -3160,15 +3169,20 @@ static void binder_transaction(struct binder_proc *proc,
goto err_binder_alloc_buf_failed;
}
if (secctx) {
int err;
size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
ALIGN(tr->offsets_size, sizeof(void *)) +
ALIGN(extra_buffers_size, sizeof(void *)) -
ALIGN(secctx_sz, sizeof(u64));
t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset;
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, buf_offset,
secctx, secctx_sz);
err = binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, buf_offset,
secctx, secctx_sz);
if (err) {
t->security_ctx = 0;
WARN_ON(1);
}
security_release_secctx(secctx, secctx_sz);
secctx = NULL;
}
@ -3234,11 +3248,16 @@ static void binder_transaction(struct binder_proc *proc,
struct binder_object object;
binder_size_t object_offset;
binder_alloc_copy_from_buffer(&target_proc->alloc,
&object_offset,
t->buffer,
buffer_offset,
sizeof(object_offset));
if (binder_alloc_copy_from_buffer(&target_proc->alloc,
&object_offset,
t->buffer,
buffer_offset,
sizeof(object_offset))) {
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_offset;
}
object_size = binder_get_object(target_proc, t->buffer,
object_offset, &object);
if (object_size == 0 || object_offset < off_min) {
@ -3262,15 +3281,17 @@ static void binder_transaction(struct binder_proc *proc,
fp = to_flat_binder_object(hdr);
ret = binder_translate_binder(fp, t, thread);
if (ret < 0) {
if (ret < 0 ||
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, object_offset,
fp, sizeof(*fp));
} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
@ -3278,15 +3299,16 @@ static void binder_transaction(struct binder_proc *proc,
fp = to_flat_binder_object(hdr);
ret = binder_translate_handle(fp, t, thread);
if (ret < 0) {
if (ret < 0 ||
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, object_offset,
fp, sizeof(*fp));
} break;
case BINDER_TYPE_FD: {
@ -3296,16 +3318,17 @@ static void binder_transaction(struct binder_proc *proc,
int ret = binder_translate_fd(fp->fd, fd_offset, t,
thread, in_reply_to);
if (ret < 0) {
fp->pad_binder = 0;
if (ret < 0 ||
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
fp->pad_binder = 0;
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, object_offset,
fp, sizeof(*fp));
} break;
case BINDER_TYPE_FDA: {
struct binder_object ptr_object;
@ -3393,15 +3416,16 @@ static void binder_transaction(struct binder_proc *proc,
num_valid,
last_fixup_obj_off,
last_fixup_min_off);
if (ret < 0) {
if (ret < 0 ||
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
object_offset,
bp, sizeof(*bp))) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer, object_offset,
bp, sizeof(*bp));
last_fixup_obj_off = object_offset;
last_fixup_min_off = 0;
} break;
@ -4140,20 +4164,27 @@ static int binder_apply_fd_fixups(struct binder_proc *proc,
trace_binder_transaction_fd_recv(t, fd, fixup->offset);
fd_install(fd, fixup->file);
fixup->file = NULL;
binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
fixup->offset, &fd,
sizeof(u32));
if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
fixup->offset, &fd,
sizeof(u32))) {
ret = -EINVAL;
break;
}
}
list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
if (fixup->file) {
fput(fixup->file);
} else if (ret) {
u32 fd;
int err;
binder_alloc_copy_from_buffer(&proc->alloc, &fd,
t->buffer, fixup->offset,
sizeof(fd));
binder_deferred_fd_close(fd);
err = binder_alloc_copy_from_buffer(&proc->alloc, &fd,
t->buffer,
fixup->offset,
sizeof(fd));
WARN_ON(err);
if (!err)
binder_deferred_fd_close(fd);
}
list_del(&fixup->fixup_entry);
kfree(fixup);
@ -4268,6 +4299,8 @@ retry:
case BINDER_WORK_TRANSACTION_COMPLETE: {
binder_inner_proc_unlock(proc);
cmd = BR_TRANSACTION_COMPLETE;
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
@ -4276,8 +4309,6 @@ retry:
binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
"%d:%d BR_TRANSACTION_COMPLETE\n",
proc->pid, thread->pid);
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
case BINDER_WORK_NODE: {
struct binder_node *node = container_of(w, struct binder_node, work);


@ -1119,15 +1119,16 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
return 0;
}
static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
bool to_buffer,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *ptr,
size_t bytes)
static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
bool to_buffer,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *ptr,
size_t bytes)
{
/* All copies must be 32-bit aligned and 32-bit size */
BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
if (!check_buffer(alloc, buffer, buffer_offset, bytes))
return -EINVAL;
while (bytes) {
unsigned long size;
@ -1155,25 +1156,26 @@ static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
ptr = ptr + size;
buffer_offset += size;
}
return 0;
}
void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *src,
size_t bytes)
int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *src,
size_t bytes)
{
binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
src, bytes);
return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
src, bytes);
}
void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
void *dest,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
size_t bytes)
int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
void *dest,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
size_t bytes)
{
binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
dest, bytes);
return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
dest, bytes);
}


@ -159,17 +159,17 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
const void __user *from,
size_t bytes);
void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *src,
size_t bytes);
int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
void *src,
size_t bytes);
void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
void *dest,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
size_t bytes);
int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
void *dest,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
size_t bytes);
#endif /* _LINUX_BINDER_ALLOC_H */


@ -134,7 +134,7 @@ static int bsr_mmap(struct file *filp, struct vm_area_struct *vma)
return 0;
}
static int bsr_open(struct inode * inode, struct file * filp)
static int bsr_open(struct inode *inode, struct file *filp)
{
struct cdev *cdev = inode->i_cdev;
struct bsr_dev *dev = container_of(cdev, struct bsr_dev, bsr_cdev);
@ -309,7 +309,8 @@ static int __init bsr_init(void)
goto out_err_2;
}
if ((ret = bsr_create_devs(np)) < 0) {
ret = bsr_create_devs(np);
if (ret < 0) {
np = NULL;
goto out_err_3;
}


@ -226,6 +226,7 @@ int misc_register(struct miscdevice *misc)
mutex_unlock(&misc_mtx);
return err;
}
EXPORT_SYMBOL(misc_register);
/**
* misc_deregister - unregister a miscellaneous device
@ -249,8 +250,6 @@ void misc_deregister(struct miscdevice *misc)
clear_bit(i, misc_minors);
mutex_unlock(&misc_mtx);
}
EXPORT_SYMBOL(misc_register);
EXPORT_SYMBOL(misc_deregister);
static char *misc_devnode(struct device *dev, umode_t *mode)


@ -833,7 +833,7 @@ static int quad8_action_get(struct counter_device *counter,
return 0;
}
const struct counter_ops quad8_ops = {
static const struct counter_ops quad8_ops = {
.signal_read = quad8_signal_read,
.count_read = quad8_count_read,
.count_write = quad8_count_write,


@ -37,6 +37,18 @@ config EXTCON_AXP288
Say Y here to enable support for USB peripheral detection
and USB MUX switching by X-Power AXP288 PMIC.
config EXTCON_FSA9480
tristate "FSA9480 EXTCON Support"
depends on INPUT && I2C
select IRQ_DOMAIN
select REGMAP_I2C
help
If you say yes here you get support for the Fairchild Semiconductor
FSA9480 microUSB switch and accessory detector chip. The FSA9480 is a USB
port accessory detector and switch. The FSA9480 is fully controlled using
I2C and enables USB data, stereo and mono audio, video, microphone
and UART data to use a common connector port.
config EXTCON_GPIO
tristate "GPIO extcon support"
depends on GPIOLIB || COMPILE_TEST


@ -8,6 +8,7 @@ extcon-core-objs += extcon.o devres.o
obj-$(CONFIG_EXTCON_ADC_JACK) += extcon-adc-jack.o
obj-$(CONFIG_EXTCON_ARIZONA) += extcon-arizona.o
obj-$(CONFIG_EXTCON_AXP288) += extcon-axp288.o
obj-$(CONFIG_EXTCON_FSA9480) += extcon-fsa9480.o
obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o
obj-$(CONFIG_EXTCON_INTEL_INT3496) += extcon-intel-int3496.o
obj-$(CONFIG_EXTCON_INTEL_CHT_WC) += extcon-intel-cht-wc.o


@ -326,10 +326,12 @@ static void arizona_start_mic(struct arizona_extcon_info *info)
arizona_extcon_pulse_micbias(info);
regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, ARIZONA_MICD_ENA,
&change);
if (!change) {
ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, ARIZONA_MICD_ENA,
&change);
if (ret < 0) {
dev_err(arizona->dev, "Failed to enable micd: %d\n", ret);
} else if (!change) {
regulator_disable(info->micvdd);
pm_runtime_put_autosuspend(info->dev);
}
@ -341,12 +343,14 @@ static void arizona_stop_mic(struct arizona_extcon_info *info)
const char *widget = arizona_extcon_get_micbias(info);
struct snd_soc_dapm_context *dapm = arizona->dapm;
struct snd_soc_component *component = snd_soc_dapm_to_component(dapm);
bool change;
bool change = false;
int ret;
regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, 0,
&change);
ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, 0,
&change);
if (ret < 0)
dev_err(arizona->dev, "Failed to disable micd: %d\n", ret);
ret = snd_soc_component_disable_pin(component, widget);
if (ret != 0)
@ -1718,12 +1722,15 @@ static int arizona_extcon_remove(struct platform_device *pdev)
struct arizona *arizona = info->arizona;
int jack_irq_rise, jack_irq_fall;
bool change;
int ret;
regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, 0,
&change);
if (change) {
ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
ARIZONA_MICD_ENA, 0,
&change);
if (ret < 0) {
dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n",
ret);
} else if (change) {
regulator_disable(info->micvdd);
pm_runtime_put(info->dev);
}


@ -0,0 +1,395 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* extcon-fsa9480.c - Fairchild Semiconductor FSA9480 extcon driver
*
* Copyright (c) 2019 Tomasz Figa <tomasz.figa@gmail.com>
*
* Loosely based on old fsa9480 misc-device driver.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/kobject.h>
#include <linux/extcon-provider.h>
#include <linux/irqdomain.h>
#include <linux/regmap.h>
/* FSA9480 I2C registers */
#define FSA9480_REG_DEVID 0x01
#define FSA9480_REG_CTRL 0x02
#define FSA9480_REG_INT1 0x03
#define FSA9480_REG_INT2 0x04
#define FSA9480_REG_INT1_MASK 0x05
#define FSA9480_REG_INT2_MASK 0x06
#define FSA9480_REG_ADC 0x07
#define FSA9480_REG_TIMING1 0x08
#define FSA9480_REG_TIMING2 0x09
#define FSA9480_REG_DEV_T1 0x0a
#define FSA9480_REG_DEV_T2 0x0b
#define FSA9480_REG_BTN1 0x0c
#define FSA9480_REG_BTN2 0x0d
#define FSA9480_REG_CK 0x0e
#define FSA9480_REG_CK_INT1 0x0f
#define FSA9480_REG_CK_INT2 0x10
#define FSA9480_REG_CK_INTMASK1 0x11
#define FSA9480_REG_CK_INTMASK2 0x12
#define FSA9480_REG_MANSW1 0x13
#define FSA9480_REG_MANSW2 0x14
#define FSA9480_REG_END 0x15
/* Control */
#define CON_SWITCH_OPEN (1 << 4)
#define CON_RAW_DATA (1 << 3)
#define CON_MANUAL_SW (1 << 2)
#define CON_WAIT (1 << 1)
#define CON_INT_MASK (1 << 0)
#define CON_MASK (CON_SWITCH_OPEN | CON_RAW_DATA | \
CON_MANUAL_SW | CON_WAIT)
/* Device Type 1 */
#define DEV_USB_OTG 7
#define DEV_DEDICATED_CHG 6
#define DEV_USB_CHG 5
#define DEV_CAR_KIT 4
#define DEV_UART 3
#define DEV_USB 2
#define DEV_AUDIO_2 1
#define DEV_AUDIO_1 0
#define DEV_T1_USB_MASK (DEV_USB_OTG | DEV_USB)
#define DEV_T1_UART_MASK (DEV_UART)
#define DEV_T1_CHARGER_MASK (DEV_DEDICATED_CHG | DEV_USB_CHG)
/* Device Type 2 */
#define DEV_AV 14
#define DEV_TTY 13
#define DEV_PPD 12
#define DEV_JIG_UART_OFF 11
#define DEV_JIG_UART_ON 10
#define DEV_JIG_USB_OFF 9
#define DEV_JIG_USB_ON 8
#define DEV_T2_USB_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON)
#define DEV_T2_UART_MASK (DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
#define DEV_T2_JIG_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON | \
DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
/*
* Manual Switch
* D- [7:5] / D+ [4:2]
* 000: Open all / 001: USB / 010: AUDIO / 011: UART / 100: V_AUDIO
*/
#define SW_VAUDIO ((4 << 5) | (4 << 2))
#define SW_UART ((3 << 5) | (3 << 2))
#define SW_AUDIO ((2 << 5) | (2 << 2))
#define SW_DHOST ((1 << 5) | (1 << 2))
#define SW_AUTO ((0 << 5) | (0 << 2))
/* Interrupt 1 */
#define INT1_MASK (0xff << 0)
#define INT_DETACH (1 << 1)
#define INT_ATTACH (1 << 0)
/* Interrupt 2 mask */
#define INT2_MASK (0x1f << 0)
/* Timing Set 1 */
#define TIMING1_ADC_500MS (0x6 << 0)
struct fsa9480_usbsw {
struct device *dev;
struct regmap *regmap;
struct extcon_dev *edev;
u16 cable;
};
static const unsigned int fsa9480_extcon_cable[] = {
EXTCON_USB_HOST,
EXTCON_USB,
EXTCON_CHG_USB_DCP,
EXTCON_CHG_USB_SDP,
EXTCON_CHG_USB_ACA,
EXTCON_JACK_LINE_OUT,
EXTCON_JACK_VIDEO_OUT,
EXTCON_JIG,
EXTCON_NONE,
};
static const u64 cable_types[] = {
[DEV_USB_OTG] = BIT_ULL(EXTCON_USB_HOST),
[DEV_DEDICATED_CHG] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_DCP),
[DEV_USB_CHG] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP),
[DEV_CAR_KIT] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP)
| BIT_ULL(EXTCON_JACK_LINE_OUT),
[DEV_UART] = BIT_ULL(EXTCON_JIG),
[DEV_USB] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP),
[DEV_AUDIO_2] = BIT_ULL(EXTCON_JACK_LINE_OUT),
[DEV_AUDIO_1] = BIT_ULL(EXTCON_JACK_LINE_OUT),
[DEV_AV] = BIT_ULL(EXTCON_JACK_LINE_OUT)
| BIT_ULL(EXTCON_JACK_VIDEO_OUT),
[DEV_TTY] = BIT_ULL(EXTCON_JIG),
[DEV_PPD] = BIT_ULL(EXTCON_JACK_LINE_OUT) | BIT_ULL(EXTCON_CHG_USB_ACA),
[DEV_JIG_UART_OFF] = BIT_ULL(EXTCON_JIG),
[DEV_JIG_UART_ON] = BIT_ULL(EXTCON_JIG),
[DEV_JIG_USB_OFF] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_JIG),
[DEV_JIG_USB_ON] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_JIG),
};
/* Define regmap configuration of FSA9480 for I2C communication */
static bool fsa9480_volatile_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
case FSA9480_REG_INT1_MASK:
return true;
default:
break;
}
return false;
}
static const struct regmap_config fsa9480_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.volatile_reg = fsa9480_volatile_reg,
.max_register = FSA9480_REG_END,
};
static int fsa9480_write_reg(struct fsa9480_usbsw *usbsw, int reg, int value)
{
int ret;
ret = regmap_write(usbsw->regmap, reg, value);
if (ret < 0)
dev_err(usbsw->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static int fsa9480_read_reg(struct fsa9480_usbsw *usbsw, int reg)
{
int ret, val;
ret = regmap_read(usbsw->regmap, reg, &val);
if (ret < 0) {
dev_err(usbsw->dev, "%s: err %d\n", __func__, ret);
return ret;
}
return val;
}
static int fsa9480_read_irq(struct fsa9480_usbsw *usbsw, int *value)
{
u8 regs[2];
int ret;
ret = regmap_bulk_read(usbsw->regmap, FSA9480_REG_INT1, regs, 2);
if (ret < 0)
dev_err(usbsw->dev, "%s: err %d\n", __func__, ret);
*value = regs[1] << 8 | regs[0];
return ret;
}
static void fsa9480_handle_change(struct fsa9480_usbsw *usbsw,
u16 mask, bool attached)
{
while (mask) {
int dev = fls64(mask) - 1;
u64 cables = cable_types[dev];
while (cables) {
int cable = fls64(cables) - 1;
extcon_set_state_sync(usbsw->edev, cable, attached);
cables &= ~BIT_ULL(cable);
}
mask &= ~BIT_ULL(dev);
}
}
static void fsa9480_detect_dev(struct fsa9480_usbsw *usbsw)
{
int val1, val2;
u16 val;
val1 = fsa9480_read_reg(usbsw, FSA9480_REG_DEV_T1);
val2 = fsa9480_read_reg(usbsw, FSA9480_REG_DEV_T2);
if (val1 < 0 || val2 < 0) {
dev_err(usbsw->dev, "%s: failed to read registers", __func__);
return;
}
val = val2 << 8 | val1;
dev_info(usbsw->dev, "dev1: 0x%x, dev2: 0x%x\n", val1, val2);
/* handle detached cables first */
fsa9480_handle_change(usbsw, usbsw->cable & ~val, false);
/* then handle attached ones */
fsa9480_handle_change(usbsw, val & ~usbsw->cable, true);
usbsw->cable = val;
}
static irqreturn_t fsa9480_irq_handler(int irq, void *data)
{
struct fsa9480_usbsw *usbsw = data;
int intr = 0;
/* clear interrupt */
fsa9480_read_irq(usbsw, &intr);
if (!intr)
return IRQ_NONE;
/* device detection */
fsa9480_detect_dev(usbsw);
return IRQ_HANDLED;
}
static int fsa9480_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct fsa9480_usbsw *info;
int ret;
if (!client->irq) {
dev_err(&client->dev, "no interrupt provided\n");
return -EINVAL;
}
info = devm_kzalloc(&client->dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->dev = &client->dev;
i2c_set_clientdata(client, info);
/* External connector */
info->edev = devm_extcon_dev_allocate(info->dev,
fsa9480_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(info->dev, "failed to allocate memory for extcon\n");
ret = -ENOMEM;
return ret;
}
ret = devm_extcon_dev_register(info->dev, info->edev);
if (ret) {
dev_err(info->dev, "failed to register extcon device\n");
return ret;
}
info->regmap = devm_regmap_init_i2c(client, &fsa9480_regmap_config);
if (IS_ERR(info->regmap)) {
ret = PTR_ERR(info->regmap);
dev_err(info->dev, "failed to allocate register map: %d\n",
ret);
return ret;
}
/* ADC Detect Time: 500ms */
fsa9480_write_reg(info, FSA9480_REG_TIMING1, TIMING1_ADC_500MS);
/* configure automatic switching */
fsa9480_write_reg(info, FSA9480_REG_CTRL, CON_MASK);
/* unmask interrupt (attach/detach only) */
fsa9480_write_reg(info, FSA9480_REG_INT1_MASK,
INT1_MASK & ~(INT_ATTACH | INT_DETACH));
fsa9480_write_reg(info, FSA9480_REG_INT2_MASK, INT2_MASK);
ret = devm_request_threaded_irq(info->dev, client->irq, NULL,
fsa9480_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"fsa9480", info);
if (ret) {
dev_err(info->dev, "failed to request IRQ\n");
return ret;
}
device_init_wakeup(info->dev, true);
fsa9480_detect_dev(info);
return 0;
}
static int fsa9480_remove(struct i2c_client *client)
{
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int fsa9480_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
if (device_may_wakeup(&client->dev) && client->irq)
enable_irq_wake(client->irq);
return 0;
}
static int fsa9480_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
if (device_may_wakeup(&client->dev) && client->irq)
disable_irq_wake(client->irq);
return 0;
}
#endif
static const struct dev_pm_ops fsa9480_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(fsa9480_suspend, fsa9480_resume)
};
static const struct i2c_device_id fsa9480_id[] = {
{ "fsa9480", 0 },
{}
};
MODULE_DEVICE_TABLE(i2c, fsa9480_id);
static const struct of_device_id fsa9480_of_match[] = {
{ .compatible = "fcs,fsa9480", },
{ },
};
MODULE_DEVICE_TABLE(of, fsa9480_of_match);
static struct i2c_driver fsa9480_i2c_driver = {
.driver = {
.name = "fsa9480",
.pm = &fsa9480_pm_ops,
.of_match_table = fsa9480_of_match,
},
.probe = fsa9480_probe,
.remove = fsa9480_remove,
.id_table = fsa9480_id,
};
static int __init fsa9480_module_init(void)
{
return i2c_add_driver(&fsa9480_i2c_driver);
}
subsys_initcall(fsa9480_module_init);
static void __exit fsa9480_module_exit(void)
{
i2c_del_driver(&fsa9480_i2c_driver);
}
module_exit(fsa9480_module_exit);
MODULE_DESCRIPTION("Fairchild Semiconductor FSA9480 extcon driver");
MODULE_AUTHOR("Tomasz Figa <tomasz.figa@gmail.com>");
MODULE_LICENSE("GPL");


@ -12,7 +12,7 @@
#ifndef __COREBOOT_TABLE_H
#define __COREBOOT_TABLE_H
#include <linux/io.h>
#include <linux/device.h>
/* Coreboot table header structure */
struct coreboot_table_header {
@ -83,4 +83,13 @@ int coreboot_driver_register(struct coreboot_driver *driver);
/* Unregister a driver that uses the data from a coreboot table. */
void coreboot_driver_unregister(struct coreboot_driver *driver);
/* module_coreboot_driver() - Helper macro for drivers that don't do
* anything special in module init/exit. This eliminates a lot of
* boilerplate. Each module may only use this macro once, and
* calling it replaces module_init() and module_exit()
*/
#define module_coreboot_driver(__coreboot_driver) \
module_driver(__coreboot_driver, coreboot_driver_register, \
coreboot_driver_unregister)
#endif /* __COREBOOT_TABLE_H */


@ -89,19 +89,7 @@ static struct coreboot_driver framebuffer_driver = {
},
.tag = CB_TAG_FRAMEBUFFER,
};
static int __init coreboot_framebuffer_init(void)
{
return coreboot_driver_register(&framebuffer_driver);
}
static void coreboot_framebuffer_exit(void)
{
coreboot_driver_unregister(&framebuffer_driver);
}
module_init(coreboot_framebuffer_init);
module_exit(coreboot_framebuffer_exit);
module_coreboot_driver(framebuffer_driver);
MODULE_AUTHOR("Samuel Holland <samuel@sholland.org>");
MODULE_LICENSE("GPL");


@ -8,6 +8,7 @@
*/
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -26,7 +27,7 @@ struct cbmem_cons {
#define CURSOR_MASK ((1 << 28) - 1)
#define OVERFLOW (1 << 31)
static struct cbmem_cons __iomem *cbmem_console;
static struct cbmem_cons *cbmem_console;
static u32 cbmem_console_size;
/*
@ -67,7 +68,7 @@ static ssize_t memconsole_coreboot_read(char *buf, loff_t pos, size_t count)
static int memconsole_probe(struct coreboot_device *dev)
{
struct cbmem_cons __iomem *tmp_cbmc;
struct cbmem_cons *tmp_cbmc;
tmp_cbmc = memremap(dev->cbmem_ref.cbmem_addr,
sizeof(*tmp_cbmc), MEMREMAP_WB);
@ -77,13 +78,13 @@ static int memconsole_probe(struct coreboot_device *dev)
/* Read size only once to prevent overrun attack through /dev/mem. */
cbmem_console_size = tmp_cbmc->size_dont_access_after_boot;
cbmem_console = memremap(dev->cbmem_ref.cbmem_addr,
cbmem_console = devm_memremap(&dev->dev, dev->cbmem_ref.cbmem_addr,
cbmem_console_size + sizeof(*cbmem_console),
MEMREMAP_WB);
memunmap(tmp_cbmc);
if (!cbmem_console)
return -ENOMEM;
if (IS_ERR(cbmem_console))
return PTR_ERR(cbmem_console);
memconsole_setup(memconsole_coreboot_read);
@ -94,9 +95,6 @@ static int memconsole_remove(struct coreboot_device *dev)
{
memconsole_exit();
if (cbmem_console)
memunmap(cbmem_console);
return 0;
}
@ -108,19 +106,7 @@ static struct coreboot_driver memconsole_driver = {
},
.tag = CB_TAG_CBMEM_CONSOLE,
};
static void coreboot_memconsole_exit(void)
{
coreboot_driver_unregister(&memconsole_driver);
}
static int __init coreboot_memconsole_init(void)
{
return coreboot_driver_register(&memconsole_driver);
}
module_exit(coreboot_memconsole_exit);
module_init(coreboot_memconsole_init);
module_coreboot_driver(memconsole_driver);
MODULE_AUTHOR("Google, Inc.");
MODULE_LICENSE("GPL");


@ -7,21 +7,22 @@
* Copyright 2017 Google Inc.
*/
#include <linux/init.h>
#include <linux/sysfs.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include "memconsole.h"
static ssize_t (*memconsole_read_func)(char *, loff_t, size_t);
static ssize_t memconsole_read(struct file *filp, struct kobject *kobp,
struct bin_attribute *bin_attr, char *buf,
loff_t pos, size_t count)
{
ssize_t (*memconsole_read_func)(char *, loff_t, size_t);
memconsole_read_func = bin_attr->private;
if (WARN_ON_ONCE(!memconsole_read_func))
return -EIO;
return memconsole_read_func(buf, pos, count);
}
@ -32,7 +33,7 @@ static struct bin_attribute memconsole_bin_attr = {
void memconsole_setup(ssize_t (*read_func)(char *, loff_t, size_t))
{
memconsole_read_func = read_func;
memconsole_bin_attr.private = read_func;
}
EXPORT_SYMBOL(memconsole_setup);


@ -316,19 +316,7 @@ static struct coreboot_driver vpd_driver = {
},
.tag = CB_TAG_VPD,
};
static int __init coreboot_vpd_init(void)
{
return coreboot_driver_register(&vpd_driver);
}
static void __exit coreboot_vpd_exit(void)
{
coreboot_driver_unregister(&vpd_driver);
}
module_init(coreboot_vpd_init);
module_exit(coreboot_vpd_exit);
module_coreboot_driver(vpd_driver);
MODULE_AUTHOR("Google, Inc.");
MODULE_LICENSE("GPL");


@ -7,8 +7,6 @@
* Copyright 2017 Google Inc.
*/
#include <linux/export.h>
#include "vpd_decode.h"
static int vpd_decode_len(const s32 max_len, const u8 *in,


@ -26,9 +26,9 @@ config FPGA_MGR_SOCFPGA_A10
FPGA manager driver support for Altera Arria10 SoCFPGA.
config ALTERA_PR_IP_CORE
tristate "Altera Partial Reconfiguration IP Core"
help
Core driver support for Altera Partial Reconfiguration IP component
tristate "Altera Partial Reconfiguration IP Core"
help
Core driver support for Altera Partial Reconfiguration IP component
config ALTERA_PR_IP_CORE_PLAT
tristate "Platform support of Altera Partial Reconfiguration IP Core"


@ -30,8 +30,8 @@
#define FME_PR_STS 0x10
#define FME_PR_DATA 0x18
#define FME_PR_ERR 0x20
#define FME_PR_INTFC_ID_H 0xA8
#define FME_PR_INTFC_ID_L 0xB0
#define FME_PR_INTFC_ID_L 0xA8
#define FME_PR_INTFC_ID_H 0xB0
/* FME PR Control Register Bitfield */
#define FME_PR_CTRL_PR_RST BIT_ULL(0) /* Reset PR engine */


@ -74,6 +74,7 @@ static int fme_pr(struct platform_device *pdev, unsigned long arg)
struct dfl_fme *fme;
unsigned long minsz;
void *buf = NULL;
size_t length;
int ret = 0;
u64 v;
@ -85,9 +86,6 @@ static int fme_pr(struct platform_device *pdev, unsigned long arg)
if (port_pr.argsz < minsz || port_pr.flags)
return -EINVAL;
if (!IS_ALIGNED(port_pr.buffer_size, 4))
return -EINVAL;
/* get fme header region */
fme_hdr = dfl_get_feature_ioaddr_by_id(&pdev->dev,
FME_FEATURE_ID_HEADER);
@ -103,7 +101,13 @@ static int fme_pr(struct platform_device *pdev, unsigned long arg)
port_pr.buffer_size))
return -EFAULT;
buf = vmalloc(port_pr.buffer_size);
/*
* align PR buffer per PR bandwidth, as HW ignores the extra padding
* data automatically.
*/
length = ALIGN(port_pr.buffer_size, 4);
buf = vmalloc(length);
if (!buf)
return -ENOMEM;
@ -140,7 +144,7 @@ static int fme_pr(struct platform_device *pdev, unsigned long arg)
fpga_image_info_free(region->info);
info->buf = buf;
info->count = port_pr.buffer_size;
info->count = length;
info->region_id = port_pr.port_id;
region->info = info;
@ -159,9 +163,6 @@ unlock_exit:
mutex_unlock(&pdata->lock);
free_exit:
vfree(buf);
if (copy_to_user((void __user *)arg, &port_pr, minsz))
return -EFAULT;
return ret;
}


@ -1,4 +1,4 @@
// SPDX-License-Identifier: GPL-2.0+
/* SPDX-License-Identifier: GPL-2.0+ */
#ifndef __CF_FSI_FW_H
#define __CF_FSI_FW_H


@ -1029,6 +1029,14 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
}
rc = fsi_slave_set_smode(slave);
if (rc) {
dev_warn(&master->dev,
"can't set smode on slave:%02x:%02x %d\n",
link, id, rc);
goto err_free;
}
/* Allocate a minor in the FSI space */
rc = __fsi_get_new_minor(slave, fsi_dev_cfam, &slave->dev.devt,
&slave->cdev_idx);
@ -1040,17 +1048,14 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
rc = cdev_device_add(&slave->cdev, &slave->dev);
if (rc) {
dev_err(&slave->dev, "Error %d creating slave device\n", rc);
goto err_free;
goto err_free_ida;
}
rc = fsi_slave_set_smode(slave);
if (rc) {
dev_warn(&master->dev,
"can't set smode on slave:%02x:%02x %d\n",
link, id, rc);
kfree(slave);
return -ENODEV;
}
/* Now that we have the cdev registered with the core, any fatal
* failures beyond this point will need to clean up through
* cdev_device_del(). Fortunately though, nothing past here is fatal.
*/
if (master->link_config)
master->link_config(master, link,
slave->t_send_delay,
@ -1067,10 +1072,13 @@ static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
dev_dbg(&master->dev, "failed during slave scan with: %d\n",
rc);
return rc;
return 0;
err_free:
put_device(&slave->dev);
err_free_ida:
fsi_free_minor(slave->dev.devt);
err_free:
of_node_put(slave->dev.of_node);
kfree(slave);
return rc;
}


@ -412,6 +412,7 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS);
struct occ *occ = dev_get_drvdata(dev);
struct occ_response *resp = response;
u8 seq_no;
u16 resp_data_length;
unsigned long start;
int rc;
@ -426,6 +427,8 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
mutex_lock(&occ->occ_lock);
/* Extract the seq_no from the command (first byte) */
seq_no = *(const u8 *)request;
rc = occ_putsram(occ, OCC_SRAM_CMD_ADDR, request, req_len);
if (rc)
goto done;
@ -441,11 +444,17 @@ int fsi_occ_submit(struct device *dev, const void *request, size_t req_len,
if (rc)
goto done;
if (resp->return_status == OCC_RESP_CMD_IN_PRG) {
if (resp->return_status == OCC_RESP_CMD_IN_PRG ||
resp->seq_no != seq_no) {
rc = -ETIMEDOUT;
if (time_after(jiffies, start + timeout))
break;
if (time_after(jiffies, start + timeout)) {
dev_err(occ->dev, "resp timeout status=%02x "
"resp seq_no=%d our seq_no=%d\n",
resp->return_status, resp->seq_no,
seq_no);
goto done;
}
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_timeout(wait_time);


@ -289,11 +289,11 @@ static int sbefifo_check_sbe_state(struct sbefifo *sbefifo)
switch ((sbm & CFAM_SBM_SBE_STATE_MASK) >> CFAM_SBM_SBE_STATE_SHIFT) {
case SBE_STATE_UNKNOWN:
return -ESHUTDOWN;
case SBE_STATE_DMT:
return -EBUSY;
case SBE_STATE_IPLING:
case SBE_STATE_ISTEP:
case SBE_STATE_MPIPL:
case SBE_STATE_DMT:
return -EBUSY;
case SBE_STATE_RUNTIME:
case SBE_STATE_DUMP: /* Not sure about that one */
break;


@ -124,12 +124,12 @@ struct extended_sensor {
static int occ_poll(struct occ *occ)
{
int rc;
u16 checksum = occ->poll_cmd_data + 1;
u16 checksum = occ->poll_cmd_data + occ->seq_no + 1;
u8 cmd[8];
struct occ_poll_response_header *header;
/* big endian */
cmd[0] = 0; /* sequence number */
cmd[0] = occ->seq_no++; /* sequence number */
cmd[1] = 0; /* cmd type */
cmd[2] = 0; /* data length msb */
cmd[3] = 1; /* data length lsb */


@ -95,6 +95,7 @@ struct occ {
struct occ_sensors sensors;
int powr_sample_time_us; /* average power sample time */
u8 seq_no;
u8 poll_cmd_data; /* to perform OCC poll command */
int (*send_cmd)(struct occ *occ, u8 *cmd);


@ -4,6 +4,7 @@
#
menuconfig CORESIGHT
bool "CoreSight Tracing Support"
depends on OF || ACPI
select ARM_AMBA
select PERF_EVENTS
help


@ -2,8 +2,7 @@
#
# Makefile for CoreSight drivers.
#
obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o
obj-$(CONFIG_OF) += of_coresight.o
obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o coresight-platform.o
obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
coresight-tmc-etf.o \
coresight-tmc-etr.o


@ -28,6 +28,8 @@
#define catu_dbg(x, ...) do {} while (0)
#endif
DEFINE_CORESIGHT_DEVLIST(catu_devs, "catu");
struct catu_etr_buf {
struct tmc_sg_table *catu_table;
dma_addr_t sladdr;
@ -328,19 +330,18 @@ static int catu_alloc_etr_buf(struct tmc_drvdata *tmc_drvdata,
struct etr_buf *etr_buf, int node, void **pages)
{
struct coresight_device *csdev;
struct device *catu_dev;
struct tmc_sg_table *catu_table;
struct catu_etr_buf *catu_buf;
csdev = tmc_etr_get_catu_device(tmc_drvdata);
if (!csdev)
return -ENODEV;
catu_dev = csdev->dev.parent;
catu_buf = kzalloc(sizeof(*catu_buf), GFP_KERNEL);
if (!catu_buf)
return -ENOMEM;
catu_table = catu_init_sg_table(catu_dev, node, etr_buf->size, pages);
catu_table = catu_init_sg_table(&csdev->dev, node,
etr_buf->size, pages);
if (IS_ERR(catu_table)) {
kfree(catu_buf);
return PTR_ERR(catu_table);
@ -409,13 +410,14 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, void *data)
int rc;
u32 control, mode;
struct etr_buf *etr_buf = data;
struct device *dev = &drvdata->csdev->dev;
if (catu_wait_for_ready(drvdata))
dev_warn(drvdata->dev, "Timeout while waiting for READY\n");
dev_warn(dev, "Timeout while waiting for READY\n");
control = catu_read_control(drvdata);
if (control & BIT(CATU_CONTROL_ENABLE)) {
dev_warn(drvdata->dev, "CATU is already enabled\n");
dev_warn(dev, "CATU is already enabled\n");
return -EBUSY;
}
@ -441,7 +443,7 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, void *data)
catu_write_irqen(drvdata, 0);
catu_write_mode(drvdata, mode);
catu_write_control(drvdata, control);
dev_dbg(drvdata->dev, "Enabled in %s mode\n",
dev_dbg(dev, "Enabled in %s mode\n",
(mode == CATU_MODE_PASS_THROUGH) ?
"Pass through" :
"Translate");
@ -462,15 +464,16 @@ static int catu_enable(struct coresight_device *csdev, void *data)
static int catu_disable_hw(struct catu_drvdata *drvdata)
{
int rc = 0;
struct device *dev = &drvdata->csdev->dev;
catu_write_control(drvdata, 0);
coresight_disclaim_device_unlocked(drvdata->base);
if (catu_wait_for_ready(drvdata)) {
dev_info(drvdata->dev, "Timeout while waiting for READY\n");
dev_info(dev, "Timeout while waiting for READY\n");
rc = -EAGAIN;
}
dev_dbg(drvdata->dev, "Disabled\n");
dev_dbg(dev, "Disabled\n");
return rc;
}
@ -502,17 +505,11 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
struct coresight_desc catu_desc;
struct coresight_platform_data *pdata = NULL;
struct device *dev = &adev->dev;
struct device_node *np = dev->of_node;
void __iomem *base;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out;
}
dev->platform_data = pdata;
}
catu_desc.name = coresight_alloc_device_name(&catu_devs, dev);
if (!catu_desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata) {
@ -520,7 +517,6 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
goto out;
}
drvdata->dev = dev;
dev_set_drvdata(dev, drvdata);
base = devm_ioremap_resource(dev, &adev->res);
if (IS_ERR(base)) {
@ -547,6 +543,13 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
if (ret)
goto out;
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out;
}
dev->platform_data = pdata;
drvdata->base = base;
catu_desc.pdata = pdata;
catu_desc.dev = dev;
@ -554,6 +557,7 @@ static int catu_probe(struct amba_device *adev, const struct amba_id *id)
catu_desc.type = CORESIGHT_DEV_TYPE_HELPER;
catu_desc.subtype.helper_subtype = CORESIGHT_DEV_SUBTYPE_HELPER_CATU;
catu_desc.ops = &catu_ops;
drvdata->csdev = coresight_register(&catu_desc);
if (IS_ERR(drvdata->csdev))
ret = PTR_ERR(drvdata->csdev);


@ -61,7 +61,6 @@
#define CATU_IRQEN_OFF 0x0
struct catu_drvdata {
struct device *dev;
void __iomem *base;
struct coresight_device *csdev;
int irq;


@ -572,14 +572,16 @@ static int debug_probe(struct amba_device *adev, const struct amba_id *id)
struct device *dev = &adev->dev;
struct debug_drvdata *drvdata;
struct resource *res = &adev->res;
struct device_node *np = adev->dev.of_node;
int ret;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->cpu = np ? of_coresight_get_cpu(np) : 0;
drvdata->cpu = coresight_get_cpu(dev);
if (drvdata->cpu < 0)
return drvdata->cpu;
if (per_cpu(debug_drvdata, drvdata->cpu)) {
dev_err(dev, "CPU%d drvdata has already been initialized\n",
drvdata->cpu);


@ -63,10 +63,11 @@
#define ETB_FFSR_BIT 1
#define ETB_FRAME_SIZE_WORDS 4
DEFINE_CORESIGHT_DEVLIST(etb_devs, "etb");
/**
* struct etb_drvdata - specifics associated to an ETB component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETB.
* @csdev: component vitals needed by the framework.
* @miscdev: specifics to handle "/dev/xyz.etb" entry.
@ -81,7 +82,6 @@
*/
struct etb_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
struct miscdevice miscdev;
@ -227,7 +227,6 @@ out:
static int etb_enable(struct coresight_device *csdev, u32 mode, void *data)
{
int ret;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (mode) {
case CS_MODE_SYSFS:
@ -244,13 +243,14 @@ static int etb_enable(struct coresight_device *csdev, u32 mode, void *data)
if (ret)
return ret;
dev_dbg(drvdata->dev, "ETB enabled\n");
dev_dbg(&csdev->dev, "ETB enabled\n");
return 0;
}
static void __etb_disable_hw(struct etb_drvdata *drvdata)
{
u32 ffcr;
struct device *dev = &drvdata->csdev->dev;
CS_UNLOCK(drvdata->base);
@ -263,7 +263,7 @@ static void __etb_disable_hw(struct etb_drvdata *drvdata)
writel_relaxed(ffcr, drvdata->base + ETB_FFCR);
if (coresight_timeout(drvdata->base, ETB_FFCR, ETB_FFCR_BIT, 0)) {
dev_err(drvdata->dev,
dev_err(dev,
"timeout while waiting for completion of Manual Flush\n");
}
@ -271,7 +271,7 @@ static void __etb_disable_hw(struct etb_drvdata *drvdata)
writel_relaxed(0x0, drvdata->base + ETB_CTL_REG);
if (coresight_timeout(drvdata->base, ETB_FFSR, ETB_FFSR_BIT, 1)) {
dev_err(drvdata->dev,
dev_err(dev,
"timeout while waiting for Formatter to Stop\n");
}
@ -286,6 +286,7 @@ static void etb_dump_hw(struct etb_drvdata *drvdata)
u32 read_data, depth;
u32 read_ptr, write_ptr;
u32 frame_off, frame_endoff;
struct device *dev = &drvdata->csdev->dev;
CS_UNLOCK(drvdata->base);
@ -295,10 +296,10 @@ static void etb_dump_hw(struct etb_drvdata *drvdata)
frame_off = write_ptr % ETB_FRAME_SIZE_WORDS;
frame_endoff = ETB_FRAME_SIZE_WORDS - frame_off;
if (frame_off) {
dev_err(drvdata->dev,
dev_err(dev,
"write_ptr: %lu not aligned to formatter frame size\n",
(unsigned long)write_ptr);
dev_err(drvdata->dev, "frameoff: %lu, frame_endoff: %lu\n",
dev_err(dev, "frameoff: %lu, frame_endoff: %lu\n",
(unsigned long)frame_off, (unsigned long)frame_endoff);
write_ptr += frame_endoff;
}
@ -365,7 +366,7 @@ static int etb_disable(struct coresight_device *csdev)
drvdata->mode = CS_MODE_DISABLED;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "ETB disabled\n");
dev_dbg(&csdev->dev, "ETB disabled\n");
return 0;
}
@ -373,12 +374,10 @@ static void *etb_alloc_buffer(struct coresight_device *csdev,
struct perf_event *event, void **pages,
int nr_pages, bool overwrite)
{
int node, cpu = event->cpu;
int node;
struct cs_buffers *buf;
if (cpu == -1)
cpu = smp_processor_id();
node = cpu_to_node(cpu);
node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
if (!buf)
@ -460,7 +459,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
* chance to fix things.
*/
if (write_ptr % ETB_FRAME_SIZE_WORDS) {
dev_err(drvdata->dev,
dev_err(&csdev->dev,
"write_ptr: %lu not aligned to formatter frame size\n",
(unsigned long)write_ptr);
@ -512,7 +511,13 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
lost = true;
}
if (lost)
/*
* Don't set the TRUNCATED flag in snapshot mode because 1) the
* captured buffer is expected to be truncated and 2) a full buffer
* prevents the event from being re-enabled by the perf core,
* resulting in stale data being send to user space.
*/
if (!buf->snapshot && lost)
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
/* finally tell HW where we want to start reading from */
@ -548,13 +553,14 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER);
/*
* In snapshot mode we have to update the handle->head to point
* to the new location.
* In snapshot mode we simply increment the head by the number of byte
* that were written. User space function cs_etm_find_snapshot() will
* figure out how many bytes to get from the AUX buffer based on the
* position of the head.
*/
if (buf->snapshot) {
handle->head = (cur * PAGE_SIZE) + offset;
to_read = buf->nr_pages << PAGE_SHIFT;
}
if (buf->snapshot)
handle->head += to_read;
__etb_enable_hw(drvdata);
CS_LOCK(drvdata->base);
out:
@ -587,7 +593,7 @@ static void etb_dump(struct etb_drvdata *drvdata)
}
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "ETB dumped\n");
dev_dbg(&drvdata->csdev->dev, "ETB dumped\n");
}
static int etb_open(struct inode *inode, struct file *file)
@ -598,7 +604,7 @@ static int etb_open(struct inode *inode, struct file *file)
if (local_cmpxchg(&drvdata->reading, 0, 1))
return -EBUSY;
dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
dev_dbg(&drvdata->csdev->dev, "%s: successfully opened\n", __func__);
return 0;
}
@ -608,6 +614,7 @@ static ssize_t etb_read(struct file *file, char __user *data,
u32 depth;
struct etb_drvdata *drvdata = container_of(file->private_data,
struct etb_drvdata, miscdev);
struct device *dev = &drvdata->csdev->dev;
etb_dump(drvdata);
@ -616,13 +623,14 @@ static ssize_t etb_read(struct file *file, char __user *data,
len = depth * 4 - *ppos;
if (copy_to_user(data, drvdata->buf + *ppos, len)) {
dev_dbg(drvdata->dev, "%s: copy_to_user failed\n", __func__);
dev_dbg(dev,
"%s: copy_to_user failed\n", __func__);
return -EFAULT;
}
*ppos += len;
dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
dev_dbg(dev, "%s: %zu bytes copied, %d bytes left\n",
__func__, len, (int)(depth * 4 - *ppos));
return len;
}
@ -633,7 +641,7 @@ static int etb_release(struct inode *inode, struct file *file)
struct etb_drvdata, miscdev);
local_set(&drvdata->reading, 0);
dev_dbg(drvdata->dev, "%s: released\n", __func__);
dev_dbg(&drvdata->csdev->dev, "%s: released\n", __func__);
return 0;
}
@ -724,20 +732,15 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
struct etb_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
}
desc.name = coresight_alloc_device_name(&etb_devs, dev);
if (!desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = &adev->dev;
drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
@ -768,6 +771,11 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
/* This device is not associated with a session */
drvdata->pid = -1;
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
desc.ops = &etb_cs_ops;
@ -778,7 +786,7 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
drvdata->miscdev.name = pdata->name;
drvdata->miscdev.name = desc.name;
drvdata->miscdev.minor = MISC_DYNAMIC_MINOR;
drvdata->miscdev.fops = &etb_fops;
ret = misc_register(&drvdata->miscdev);


@ -523,7 +523,7 @@ int etm_perf_add_symlink_sink(struct coresight_device *csdev)
unsigned long hash;
const char *name;
struct device *pmu_dev = etm_pmu.dev;
struct device *pdev = csdev->dev.parent;
struct device *dev = &csdev->dev;
struct dev_ext_attribute *ea;
if (csdev->type != CORESIGHT_DEV_TYPE_SINK &&
@ -536,15 +536,15 @@ int etm_perf_add_symlink_sink(struct coresight_device *csdev)
if (!etm_perf_up)
return -EPROBE_DEFER;
ea = devm_kzalloc(pdev, sizeof(*ea), GFP_KERNEL);
ea = devm_kzalloc(dev, sizeof(*ea), GFP_KERNEL);
if (!ea)
return -ENOMEM;
name = dev_name(pdev);
name = dev_name(dev);
/* See function coresight_get_sink_by_id() to know where this is used */
hash = hashlen_hash(hashlen_string(NULL, name));
ea->attr.attr.name = devm_kstrdup(pdev, name, GFP_KERNEL);
ea->attr.attr.name = devm_kstrdup(dev, name, GFP_KERNEL);
if (!ea->attr.attr.name)
return -ENOMEM;


@ -208,7 +208,6 @@ struct etm_config {
/**
* struct etm_drvdata - specifics associated to an ETM component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the ETM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
@ -232,7 +231,6 @@ struct etm_config {
*/
struct etm_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
@ -260,7 +258,7 @@ static inline void etm_writel(struct etm_drvdata *drvdata,
{
if (drvdata->use_cp14) {
if (etm_writel_cp14(off, val)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {
@ -274,7 +272,7 @@ static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
if (drvdata->use_cp14) {
if (etm_readl_cp14(off, &val)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"invalid CP14 access to ETM reg: %#x", off);
}
} else {

View File

@ -48,7 +48,7 @@ static ssize_t etmsr_show(struct device *dev,
unsigned long flags, val;
struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
pm_runtime_get_sync(drvdata->dev);
pm_runtime_get_sync(dev->parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
@ -56,7 +56,7 @@ static ssize_t etmsr_show(struct device *dev,
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(drvdata->dev);
pm_runtime_put(dev->parent);
return sprintf(buf, "%#lx\n", val);
}
@ -131,7 +131,7 @@ static ssize_t mode_store(struct device *dev,
if (config->mode & ETM_MODE_STALL) {
if (!(drvdata->etmccr & ETMCCR_FIFOFULL)) {
dev_warn(drvdata->dev, "stall mode not supported\n");
dev_warn(dev, "stall mode not supported\n");
ret = -EINVAL;
goto err_unlock;
}
@ -141,7 +141,7 @@ static ssize_t mode_store(struct device *dev,
if (config->mode & ETM_MODE_TIMESTAMP) {
if (!(drvdata->etmccer & ETMCCER_TIMESTAMP)) {
dev_warn(drvdata->dev, "timestamp not supported\n");
dev_warn(dev, "timestamp not supported\n");
ret = -EINVAL;
goto err_unlock;
}
@ -945,7 +945,7 @@ static ssize_t seq_curr_state_show(struct device *dev,
goto out;
}
pm_runtime_get_sync(drvdata->dev);
pm_runtime_get_sync(dev->parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
@ -953,7 +953,7 @@ static ssize_t seq_curr_state_show(struct device *dev,
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(drvdata->dev);
pm_runtime_put(dev->parent);
out:
return sprintf(buf, "%#lx\n", val);
}


@ -165,7 +165,7 @@ static void etm_set_prog(struct etm_drvdata *drvdata)
*/
isb();
if (coresight_timeout_etm(drvdata, ETMSR, ETMSR_PROG_BIT, 1)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"%s: timeout observed when probing at offset %#x\n",
__func__, ETMSR);
}
@ -184,7 +184,7 @@ static void etm_clr_prog(struct etm_drvdata *drvdata)
*/
isb();
if (coresight_timeout_etm(drvdata, ETMSR, ETMSR_PROG_BIT, 0)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"%s: timeout observed when probing at offset %#x\n",
__func__, ETMSR);
}
@ -425,7 +425,7 @@ static int etm_enable_hw(struct etm_drvdata *drvdata)
done:
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
dev_dbg(&drvdata->csdev->dev, "cpu: %d enable smp call done: %d\n",
drvdata->cpu, rc);
return rc;
}
@ -455,14 +455,16 @@ int etm_get_trace_id(struct etm_drvdata *drvdata)
{
unsigned long flags;
int trace_id = -1;
struct device *etm_dev;
if (!drvdata)
goto out;
etm_dev = drvdata->csdev->dev.parent;
if (!local_read(&drvdata->mode))
return drvdata->traceid;
pm_runtime_get_sync(drvdata->dev);
pm_runtime_get_sync(etm_dev);
spin_lock_irqsave(&drvdata->spinlock, flags);
@ -471,7 +473,7 @@ int etm_get_trace_id(struct etm_drvdata *drvdata)
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
pm_runtime_put(drvdata->dev);
pm_runtime_put(etm_dev);
out:
return trace_id;
@ -526,7 +528,7 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
if (!ret)
dev_dbg(drvdata->dev, "ETM tracing enabled\n");
dev_dbg(&csdev->dev, "ETM tracing enabled\n");
return ret;
}
@ -581,7 +583,8 @@ static void etm_disable_hw(void *info)
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
dev_dbg(&drvdata->csdev->dev,
"cpu: %d disable smp call done\n", drvdata->cpu);
}
static void etm_disable_perf(struct coresight_device *csdev)
@ -628,7 +631,7 @@ static void etm_disable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
cpus_read_unlock();
dev_dbg(drvdata->dev, "ETM tracing disabled\n");
dev_dbg(&csdev->dev, "ETM tracing disabled\n");
}
static void etm_disable(struct coresight_device *csdev,
@ -788,22 +791,12 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
struct etm_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
drvdata->use_cp14 = of_property_read_bool(np, "arm,cp14");
}
drvdata->dev = &adev->dev;
drvdata->use_cp14 = fwnode_property_read_bool(dev->fwnode, "arm,cp14");
dev_set_drvdata(dev, drvdata);
/* Validity for the resource is already checked by the AMBA core */
@ -822,7 +815,13 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
return ret;
}
drvdata->cpu = pdata ? pdata->cpu : 0;
drvdata->cpu = coresight_get_cpu(dev);
if (drvdata->cpu < 0)
return drvdata->cpu;
desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu);
if (!desc.name)
return -ENOMEM;
cpus_read_lock();
etmdrvdata[drvdata->cpu] = drvdata;
@ -852,6 +851,13 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
etm_init_trace_id(drvdata);
etm_set_default(&drvdata->config);
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto err_arch_supported;
}
adev->dev.platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_SOURCE;
desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
desc.ops = &etm_cs_ops;
@ -871,7 +877,8 @@ static int etm_probe(struct amba_device *adev, const struct amba_id *id)
}
pm_runtime_put(&adev->dev);
dev_info(dev, "%s initialized\n", (char *)coresight_get_uci_data(id));
dev_info(&drvdata->csdev->dev,
"%s initialized\n", (char *)coresight_get_uci_data(id));
if (boot_enable) {
coresight_enable(drvdata->csdev);
drvdata->boot_enable = true;
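
Putting the probe-time pieces above together: the CPU now comes from coresight_get_cpu(), the device name is derived from that CPU, and the platform/connection data is fetched from DT or ACPI just before registration. A rough sketch under those assumptions (struct example_drvdata is illustrative, and error unwinding plus the desc.type/subtype/ops setup are trimmed for brevity):

static int example_etm_probe(struct amba_device *adev, const struct amba_id *id)
{
        struct device *dev = &adev->dev;
        struct coresight_platform_data *pdata;
        struct coresight_desc desc = { 0 };
        struct example_drvdata *drvdata;

        drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
        if (!drvdata)
                return -ENOMEM;
        dev_set_drvdata(dev, drvdata);

        /* DT "cpu" phandle or ACPI parent CPU, via the new helper */
        drvdata->cpu = coresight_get_cpu(dev);
        if (drvdata->cpu < 0)
                return drvdata->cpu;

        desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu);
        if (!desc.name)
                return -ENOMEM;

        /* Topology is resolved late, from the DT graph or ACPI _DSD */
        pdata = coresight_get_platform_data(dev);
        if (IS_ERR(pdata))
                return PTR_ERR(pdata);
        dev->platform_data = pdata;

        /* desc.type / desc.subtype / desc.ops omitted for brevity */
        desc.pdata = pdata;
        desc.dev = dev;
        drvdata->csdev = coresight_register(&desc);
        return PTR_ERR_OR_ZERO(drvdata->csdev);
}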

View File

@ -88,6 +88,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
{
int i, rc;
struct etmv4_config *config = &drvdata->config;
struct device *etm_dev = &drvdata->csdev->dev;
CS_UNLOCK(drvdata->base);
@ -102,7 +103,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
/* wait for TRCSTATR.IDLE to go up */
if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1))
dev_err(drvdata->dev,
dev_err(etm_dev,
"timeout while waiting for Idle Trace Status\n");
writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR);
@ -184,13 +185,13 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
/* wait for TRCSTATR.IDLE to go back down to '0' */
if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 0))
dev_err(drvdata->dev,
dev_err(etm_dev,
"timeout while waiting for Idle Trace Status\n");
done:
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
dev_dbg(etm_dev, "cpu: %d enable smp call done: %d\n",
drvdata->cpu, rc);
return rc;
}
@ -400,7 +401,7 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
if (!ret)
dev_dbg(drvdata->dev, "ETM tracing enabled\n");
dev_dbg(&csdev->dev, "ETM tracing enabled\n");
return ret;
}
@ -461,7 +462,8 @@ static void etm4_disable_hw(void *info)
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
dev_dbg(&drvdata->csdev->dev,
"cpu: %d disable smp call done\n", drvdata->cpu);
}
static int etm4_disable_perf(struct coresight_device *csdev,
@ -511,7 +513,7 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
cpus_read_unlock();
dev_dbg(drvdata->dev, "ETM tracing disabled\n");
dev_dbg(&csdev->dev, "ETM tracing disabled\n");
}
static void etm4_disable(struct coresight_device *csdev,
@ -1082,20 +1084,11 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
struct etmv4_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
}
drvdata->dev = &adev->dev;
dev_set_drvdata(dev, drvdata);
/* Validity for the resource is already checked by the AMBA core */
@ -1107,7 +1100,13 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
spin_lock_init(&drvdata->spinlock);
drvdata->cpu = pdata ? pdata->cpu : 0;
drvdata->cpu = coresight_get_cpu(dev);
if (drvdata->cpu < 0)
return drvdata->cpu;
desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu);
if (!desc.name)
return -ENOMEM;
cpus_read_lock();
etmdrvdata[drvdata->cpu] = drvdata;
@ -1138,6 +1137,13 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
etm4_init_trace_id(drvdata);
etm4_set_default(&drvdata->config);
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto err_arch_supported;
}
adev->dev.platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_SOURCE;
desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
desc.ops = &etm4_cs_ops;
@ -1157,7 +1163,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
}
pm_runtime_put(&adev->dev);
dev_info(dev, "CPU%d: ETM v%d.%d initialized\n",
dev_info(&drvdata->csdev->dev, "CPU%d: ETM v%d.%d initialized\n",
drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf);
if (boot_enable) {

View File

@ -284,7 +284,6 @@ struct etmv4_config {
/**
* struct etm4_drvdata - specifics associated to an ETM component
* @base: Memory mapped base address for this component.
* @dev: The device entity associated to this component.
* @csdev: Component vitals needed by the framework.
* @spinlock: Only one at a time pls.
* @mode: This tracer's mode, i.e sysFS, Perf or disabled.
@ -340,7 +339,6 @@ struct etmv4_config {
*/
struct etmv4_drvdata {
void __iomem *base;
struct device *dev;
struct coresight_device *csdev;
spinlock_t spinlock;
local_t mode;

View File

@ -29,17 +29,17 @@
#define FUNNEL_HOLDTIME (0x7 << FUNNEL_HOLDTIME_SHFT)
#define FUNNEL_ENSx_MASK 0xff
DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel");
/**
* struct funnel_drvdata - specifics associated to a funnel component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the funnel.
* @csdev: component vitals needed by the framework.
* @priority: port selection order.
*/
struct funnel_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
unsigned long priority;
@ -80,7 +80,7 @@ static int funnel_enable(struct coresight_device *csdev, int inport,
rc = dynamic_funnel_enable_hw(drvdata, inport);
if (!rc)
dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", inport);
return rc;
}
@ -110,7 +110,7 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
if (drvdata->base)
dynamic_funnel_disable_hw(drvdata, inport);
dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport);
}
static const struct coresight_ops_link funnel_link_ops = {
@ -165,11 +165,11 @@ static ssize_t funnel_ctrl_show(struct device *dev,
u32 val;
struct funnel_drvdata *drvdata = dev_get_drvdata(dev->parent);
pm_runtime_get_sync(drvdata->dev);
pm_runtime_get_sync(dev->parent);
val = get_funnel_ctrl_hw(drvdata);
pm_runtime_put(drvdata->dev);
pm_runtime_put(dev->parent);
return sprintf(buf, "%#x\n", val);
}
@ -189,23 +189,19 @@ static int funnel_probe(struct device *dev, struct resource *res)
struct coresight_platform_data *pdata = NULL;
struct funnel_drvdata *drvdata;
struct coresight_desc desc = { 0 };
struct device_node *np = dev->of_node;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
dev->platform_data = pdata;
}
if (of_device_is_compatible(np, "arm,coresight-funnel"))
if (is_of_node(dev_fwnode(dev)) &&
of_device_is_compatible(dev->of_node, "arm,coresight-funnel"))
pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n");
desc.name = coresight_alloc_device_name(&funnel_devs, dev);
if (!desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = dev;
drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
@ -229,6 +225,13 @@ static int funnel_probe(struct device *dev, struct resource *res)
dev_set_drvdata(dev, drvdata);
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out_disable_clk;
}
dev->platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_LINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG;
desc.ops = &funnel_cs_ops;
@ -241,6 +244,7 @@ static int funnel_probe(struct device *dev, struct resource *res)
}
pm_runtime_put(dev);
ret = 0;
out_disable_clk:
if (ret && !IS_ERR_OR_NULL(drvdata->atclk))

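Components that are not tied to a CPU (funnel, replicator, STM, TMC, TPIU) take their names from a per-type device list instead. A minimal sketch of that naming scheme, using an illustrative "widget" component type:

DEFINE_CORESIGHT_DEVLIST(widget_devs, "widget");

static int example_widget_probe(struct device *dev)
{
        struct coresight_desc desc = { 0 };

        /* Allocates "widget0", "widget1", ... per probed instance */
        desc.name = coresight_alloc_device_name(&widget_devs, dev);
        if (!desc.name)
                return -ENOMEM;

        dev_info(dev, "probing %s\n", desc.name);
        return 0;
}
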
View File

@ -0,0 +1,815 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2012, The Linux Foundation. All rights reserved.
*/
#include <linux/acpi.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_graph.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/amba/bus.h>
#include <linux/coresight.h>
#include <linux/cpumask.h>
#include <asm/smp_plat.h>
#include "coresight-priv.h"
/*
* coresight_alloc_conns: Allocate connections record for each output
* port from the device.
*/
static int coresight_alloc_conns(struct device *dev,
struct coresight_platform_data *pdata)
{
if (pdata->nr_outport) {
pdata->conns = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->conns),
GFP_KERNEL);
if (!pdata->conns)
return -ENOMEM;
}
return 0;
}
int coresight_device_fwnode_match(struct device *dev, void *fwnode)
{
return dev_fwnode(dev) == fwnode;
}
static struct device *
coresight_find_device_by_fwnode(struct fwnode_handle *fwnode)
{
struct device *dev = NULL;
/*
* If we have a non-configurable replicator, it will be found on the
* platform bus.
*/
dev = bus_find_device(&platform_bus_type, NULL,
fwnode, coresight_device_fwnode_match);
if (dev)
return dev;
/*
* We have a configurable component - circle through the AMBA bus
* looking for the device that matches the endpoint node.
*/
return bus_find_device(&amba_bustype, NULL,
fwnode, coresight_device_fwnode_match);
}
#ifdef CONFIG_OF
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
{
return of_property_read_bool(ep, "slave-mode");
}
static void of_coresight_get_ports_legacy(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *ep = NULL;
int in = 0, out = 0;
do {
ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
break;
if (of_coresight_legacy_ep_is_input(ep))
in++;
else
out++;
} while (ep);
*nr_inport = in;
*nr_outport = out;
}
static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
{
struct device_node *parent = of_graph_get_port_parent(ep);
/*
* Skip one level up to the real device node, if we
* are using the new bindings.
*/
if (of_node_name_eq(parent, "in-ports") ||
of_node_name_eq(parent, "out-ports"))
parent = of_get_next_parent(parent);
return parent;
}
static inline struct device_node *
of_coresight_get_input_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "in-ports");
}
static inline struct device_node *
of_coresight_get_output_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "out-ports");
}
static inline int
of_coresight_count_ports(struct device_node *port_parent)
{
int i = 0;
struct device_node *ep = NULL;
while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
i++;
return i;
}
static void of_coresight_get_ports(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *input_ports = NULL, *output_ports = NULL;
input_ports = of_coresight_get_input_ports_node(node);
output_ports = of_coresight_get_output_ports_node(node);
if (input_ports || output_ports) {
if (input_ports) {
*nr_inport = of_coresight_count_ports(input_ports);
of_node_put(input_ports);
}
if (output_ports) {
*nr_outport = of_coresight_count_ports(output_ports);
of_node_put(output_ports);
}
} else {
/* Fall back to legacy DT bindings parsing */
of_coresight_get_ports_legacy(node, nr_inport, nr_outport);
}
}
static int of_coresight_get_cpu(struct device *dev)
{
int cpu;
struct device_node *dn;
if (!dev->of_node)
return -ENODEV;
dn = of_parse_phandle(dev->of_node, "cpu", 0);
if (!dn)
return -ENODEV;
cpu = of_cpu_node_to_id(dn);
of_node_put(dn);
return cpu;
}
/*
* of_coresight_parse_endpoint : Parse the given output endpoint @ep
* and fill the connection information in @conn
*
* Parses the local port, remote device name and the remote port.
*
* Returns :
* 1 - If the parsing is successful and a connection record
* was created for an output connection.
* 0 - If the parsing completed without any fatal errors.
* -Errno - Fatal error, abort the scanning.
*/
static int of_coresight_parse_endpoint(struct device *dev,
struct device_node *ep,
struct coresight_connection *conn)
{
int ret = 0;
struct of_endpoint endpoint, rendpoint;
struct device_node *rparent = NULL;
struct device_node *rep = NULL;
struct device *rdev = NULL;
struct fwnode_handle *rdev_fwnode;
do {
/* Parse the local port details */
if (of_graph_parse_endpoint(ep, &endpoint))
break;
/*
* Get a handle on the remote endpoint and the device it is
* attached to.
*/
rep = of_graph_get_remote_endpoint(ep);
if (!rep)
break;
rparent = of_coresight_get_port_parent(rep);
if (!rparent)
break;
if (of_graph_parse_endpoint(rep, &rendpoint))
break;
rdev_fwnode = of_fwnode_handle(rparent);
/* If the remote device is not available, defer probing */
rdev = coresight_find_device_by_fwnode(rdev_fwnode);
if (!rdev) {
ret = -EPROBE_DEFER;
break;
}
conn->outport = endpoint.port;
/*
* Hold the refcount to the target device. This could be
* released via:
* 1) coresight_release_platform_data() if the probe fails or
* this device is unregistered.
* 2) While removing the target device via
* coresight_remove_match()
*/
conn->child_fwnode = fwnode_handle_get(rdev_fwnode);
conn->child_port = rendpoint.port;
/* Connection record updated */
ret = 1;
} while (0);
of_node_put(rparent);
of_node_put(rep);
put_device(rdev);
return ret;
}
static int of_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
int ret = 0;
struct coresight_connection *conn;
struct device_node *ep = NULL;
const struct device_node *parent = NULL;
bool legacy_binding = false;
struct device_node *node = dev->of_node;
/* Get the number of input and output port for this component */
of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport);
/* If there are no output connections, we are done */
if (!pdata->nr_outport)
return 0;
ret = coresight_alloc_conns(dev, pdata);
if (ret)
return ret;
parent = of_coresight_get_output_ports_node(node);
/*
* If the DT uses obsoleted bindings, the ports are listed
* under the device and we need to filter out the input
* ports.
*/
if (!parent) {
legacy_binding = true;
parent = node;
dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
}
conn = pdata->conns;
/* Iterate through each output port to discover topology */
while ((ep = of_graph_get_next_endpoint(parent, ep))) {
/*
* Legacy binding mixes input/output ports under the
* same parent. So, skip the input ports if we are dealing
* with legacy binding, as they are processed with their
* connected output ports.
*/
if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
continue;
ret = of_coresight_parse_endpoint(dev, ep, conn);
switch (ret) {
case 1:
conn++; /* Fall through */
case 0:
break;
default:
return ret;
}
}
return 0;
}
#else
static inline int
of_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
return -ENOENT;
}
static inline int of_coresight_get_cpu(struct device *dev)
{
return -ENODEV;
}
#endif
#ifdef CONFIG_ACPI
#include <acpi/actypes.h>
#include <acpi/processor.h>
/* ACPI Graph _DSD UUID : "ab02a46b-74c7-45a2-bd68-f7d344ef2153" */
static const guid_t acpi_graph_uuid = GUID_INIT(0xab02a46b, 0x74c7, 0x45a2,
0xbd, 0x68, 0xf7, 0xd3,
0x44, 0xef, 0x21, 0x53);
/* Coresight ACPI Graph UUID : "3ecbc8b6-1d0e-4fb3-8107-e627f805c6cd" */
static const guid_t coresight_graph_uuid = GUID_INIT(0x3ecbc8b6, 0x1d0e, 0x4fb3,
0x81, 0x07, 0xe6, 0x27,
0xf8, 0x05, 0xc6, 0xcd);
#define ACPI_CORESIGHT_LINK_SLAVE 0
#define ACPI_CORESIGHT_LINK_MASTER 1
static inline bool is_acpi_guid(const union acpi_object *obj)
{
return (obj->type == ACPI_TYPE_BUFFER) && (obj->buffer.length == 16);
}
/*
* acpi_guid_matches - Checks if the given object is a GUID object and
* that it matches the supplied GUID.
*/
static inline bool acpi_guid_matches(const union acpi_object *obj,
const guid_t *guid)
{
return is_acpi_guid(obj) &&
guid_equal((guid_t *)obj->buffer.pointer, guid);
}
static inline bool is_acpi_dsd_graph_guid(const union acpi_object *obj)
{
return acpi_guid_matches(obj, &acpi_graph_uuid);
}
static inline bool is_acpi_coresight_graph_guid(const union acpi_object *obj)
{
return acpi_guid_matches(obj, &coresight_graph_uuid);
}
static inline bool is_acpi_coresight_graph(const union acpi_object *obj)
{
const union acpi_object *graphid, *guid, *links;
if (obj->type != ACPI_TYPE_PACKAGE ||
obj->package.count < 3)
return false;
graphid = &obj->package.elements[0];
guid = &obj->package.elements[1];
links = &obj->package.elements[2];
if (graphid->type != ACPI_TYPE_INTEGER ||
links->type != ACPI_TYPE_INTEGER)
return false;
return is_acpi_coresight_graph_guid(guid);
}
/*
* acpi_validate_dsd_graph - Make sure the given _DSD graph conforms
* to the ACPI _DSD Graph specification.
*
* ACPI Devices Graph property has the following format:
* {
* Revision - Integer, must be 0
* NumberOfGraphs - Integer, N indicating the following list.
* Graph[1],
* ...
* Graph[N]
* }
*
* And each Graph entry has the following format:
* {
* GraphID - Integer, identifying a graph the device belongs to.
* UUID - UUID identifying the specification that governs
* this graph. (e.g, see is_acpi_coresight_graph())
* NumberOfLinks - Number "N" of connections on this node of the graph.
* Links[1]
* ...
* Links[N]
* }
*
* Where each "Links" entry has the following format:
*
* {
* SourcePortAddress - Integer
* DestinationPortAddress - Integer
* DestinationDeviceName - Reference to another device
* ( --- CoreSight specific extensions below ---)
* DirectionOfFlow - Integer 1 for output(master)
* 0 for input(slave)
* }
*
* e.g:
* For a Funnel device
*
* Device(MFUN) {
* ...
*
* Name (_DSD, Package() {
* // DSD Package contains tuples of { Property_Type_UUID, Package() }
* ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"), //Std. Property UUID
* Package() {
* Package(2) { "property-name", <property-value> }
* },
*
* ToUUID("ab02a46b-74c7-45a2-bd68-f7d344ef2153"), // ACPI Graph UUID
* Package() {
* 0, // Revision
* 1, // NumberOfGraphs.
* Package() { // Graph[0] Package
* 1, // GraphID
* // Coresight Graph UUID
* ToUUID("3ecbc8b6-1d0e-4fb3-8107-e627f805c6cd"),
* 3, // NumberOfLinks aka ports
* // Link[0]: Output_0 -> Replicator:Input_0
* Package () { 0, 0, \_SB_.RPL0, 1 },
* // Link[1]: Input_0 <- Cluster0_Funnel0:Output_0
* Package () { 0, 0, \_SB_.CLU0.FUN0, 0 },
* // Link[2]: Input_1 <- Cluster1_Funnel0:Output_0
* Package () { 1, 0, \_SB_.CLU1.FUN0, 0 },
* } // End of Graph[0] Package
*
* }, // End of ACPI Graph Property
* })
*/
static inline bool acpi_validate_dsd_graph(const union acpi_object *graph)
{
int i, n;
const union acpi_object *rev, *nr_graphs;
/* The graph must contain at least the Revision and Number of Graphs */
if (graph->package.count < 2)
return false;
rev = &graph->package.elements[0];
nr_graphs = &graph->package.elements[1];
if (rev->type != ACPI_TYPE_INTEGER ||
nr_graphs->type != ACPI_TYPE_INTEGER)
return false;
/* We only support revision 0 */
if (rev->integer.value != 0)
return false;
n = nr_graphs->integer.value;
/* CoreSight devices are only part of a single Graph */
if (n != 1)
return false;
/* Make sure the ACPI graph package has the right number of elements */
if (graph->package.count != (n + 2))
return false;
/*
* Each entry must be a graph package with at least 3 members :
* { GraphID, UUID, NumberOfLinks(n), Links[.],... }
*/
for (i = 2; i < n + 2; i++) {
const union acpi_object *obj = &graph->package.elements[i];
if (obj->type != ACPI_TYPE_PACKAGE ||
obj->package.count < 3)
return false;
}
return true;
}
/* acpi_get_dsd_graph - Find the _DSD Graph property for the given device. */
const union acpi_object *
acpi_get_dsd_graph(struct acpi_device *adev)
{
int i;
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
acpi_status status;
const union acpi_object *dsd;
status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL,
&buf, ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
return NULL;
dsd = buf.pointer;
/*
* _DSD property consists of tuples { Prop_UUID, Package() }
* Iterate through all the packages and find the Graph.
*/
for (i = 0; i + 1 < dsd->package.count; i += 2) {
const union acpi_object *guid, *package;
guid = &dsd->package.elements[i];
package = &dsd->package.elements[i + 1];
/* All _DSD elements must have a UUID and a Package */
if (!is_acpi_guid(guid) || package->type != ACPI_TYPE_PACKAGE)
break;
/* Skip the non-Graph _DSD packages */
if (!is_acpi_dsd_graph_guid(guid))
continue;
if (acpi_validate_dsd_graph(package))
return package;
/* Invalid graph format, continue */
dev_warn(&adev->dev, "Invalid Graph _DSD property\n");
}
return NULL;
}
static inline bool
acpi_validate_coresight_graph(const union acpi_object *cs_graph)
{
int nlinks;
nlinks = cs_graph->package.elements[2].integer.value;
/*
* Graph must have the following fields :
* { GraphID, GraphUUID, NumberOfLinks, Links... }
*/
if (cs_graph->package.count != (nlinks + 3))
return false;
/* The links are validated in acpi_coresight_parse_link() */
return true;
}
/*
* acpi_get_coresight_graph - Parse the device _DSD tables and find
* the Graph property matching the CoreSight Graphs.
*
* Returns the pointer to the CoreSight Graph Package when found. Otherwise
* returns NULL.
*/
const union acpi_object *
acpi_get_coresight_graph(struct acpi_device *adev)
{
const union acpi_object *graph_list, *graph;
int i, nr_graphs;
graph_list = acpi_get_dsd_graph(adev);
if (!graph_list)
return graph_list;
nr_graphs = graph_list->package.elements[1].integer.value;
for (i = 2; i < nr_graphs + 2; i++) {
graph = &graph_list->package.elements[i];
if (!is_acpi_coresight_graph(graph))
continue;
if (acpi_validate_coresight_graph(graph))
return graph;
/* Invalid graph format */
break;
}
return NULL;
}
/*
* acpi_coresight_parse_link - Parse the given Graph connection
* of the device and populate the coresight_connection for an output
* connection.
*
* CoreSight Graph specification mandates that the direction of the data
* flow must be specified in the link. i.e,
*
* SourcePortAddress, // Integer
* DestinationPortAddress, // Integer
* DestinationDeviceName, // Reference to another device
* DirectionOfFlow, // 1 for output(master), 0 for input(slave)
*
* Returns the direction of the data flow [ Input(slave) or Output(master) ]
* upon success.
* Returns a negative error number otherwise.
*/
static int acpi_coresight_parse_link(struct acpi_device *adev,
const union acpi_object *link,
struct coresight_connection *conn)
{
int rc, dir;
const union acpi_object *fields;
struct acpi_device *r_adev;
struct device *rdev;
if (link->type != ACPI_TYPE_PACKAGE ||
link->package.count != 4)
return -EINVAL;
fields = link->package.elements;
if (fields[0].type != ACPI_TYPE_INTEGER ||
fields[1].type != ACPI_TYPE_INTEGER ||
fields[2].type != ACPI_TYPE_LOCAL_REFERENCE ||
fields[3].type != ACPI_TYPE_INTEGER)
return -EINVAL;
rc = acpi_bus_get_device(fields[2].reference.handle, &r_adev);
if (rc)
return rc;
dir = fields[3].integer.value;
if (dir == ACPI_CORESIGHT_LINK_MASTER) {
conn->outport = fields[0].integer.value;
conn->child_port = fields[1].integer.value;
rdev = coresight_find_device_by_fwnode(&r_adev->fwnode);
if (!rdev)
return -EPROBE_DEFER;
/*
* Hold the refcount to the target device. This could be
* released via:
* 1) coresight_release_platform_data() if the probe fails or
* this device is unregistered.
* 2) While removing the target device via
* coresight_remove_match().
*/
conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode);
}
return dir;
}
/*
* acpi_coresight_parse_graph - Parse the _DSD CoreSight graph
* connection information and populate the supplied coresight_platform_data
* instance.
*/
static int acpi_coresight_parse_graph(struct acpi_device *adev,
struct coresight_platform_data *pdata)
{
int rc, i, nlinks;
const union acpi_object *graph;
struct coresight_connection *conns, *ptr;
pdata->nr_inport = pdata->nr_outport = 0;
graph = acpi_get_coresight_graph(adev);
if (!graph)
return -ENOENT;
nlinks = graph->package.elements[2].integer.value;
if (!nlinks)
return 0;
/*
* To avoid scanning the table twice (once for finding the number of
* output links and then later for parsing the output links),
* cache the links information in one go and then later copy
* it to the pdata.
*/
conns = devm_kcalloc(&adev->dev, nlinks, sizeof(*conns), GFP_KERNEL);
if (!conns)
return -ENOMEM;
ptr = conns;
for (i = 0; i < nlinks; i++) {
const union acpi_object *link = &graph->package.elements[3 + i];
int dir;
dir = acpi_coresight_parse_link(adev, link, ptr);
if (dir < 0)
return dir;
if (dir == ACPI_CORESIGHT_LINK_MASTER) {
pdata->nr_outport++;
ptr++;
} else {
pdata->nr_inport++;
}
}
rc = coresight_alloc_conns(&adev->dev, pdata);
if (rc)
return rc;
/* Copy the connection information to the final location */
for (i = 0; i < pdata->nr_outport; i++)
pdata->conns[i] = conns[i];
devm_kfree(&adev->dev, conns);
return 0;
}
/*
* acpi_handle_to_logical_cpuid - Map a given acpi_handle to the
* logical CPU id of the corresponding CPU device.
*
* Returns the logical CPU id when found. Otherwise returns >= nr_cpu_ids.
*/
static int
acpi_handle_to_logical_cpuid(acpi_handle handle)
{
int i;
struct acpi_processor *pr;
for_each_possible_cpu(i) {
pr = per_cpu(processors, i);
if (pr && pr->handle == handle)
break;
}
return i;
}
/*
* acpi_coresight_get_cpu - Find the logical CPU id of the CPU associated
* with this coresight device. With ACPI bindings, the CoreSight components
* are listed as child devices of the associated CPU.
*
* Returns the logical CPU id when found. Otherwise returns -ENODEV.
*/
static int acpi_coresight_get_cpu(struct device *dev)
{
int cpu;
acpi_handle cpu_handle;
acpi_status status;
struct acpi_device *adev = ACPI_COMPANION(dev);
if (!adev)
return -ENODEV;
status = acpi_get_parent(adev->handle, &cpu_handle);
if (ACPI_FAILURE(status))
return -ENODEV;
cpu = acpi_handle_to_logical_cpuid(cpu_handle);
if (cpu >= nr_cpu_ids)
return -ENODEV;
return cpu;
}
static int
acpi_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
struct acpi_device *adev;
adev = ACPI_COMPANION(dev);
if (!adev)
return -EINVAL;
return acpi_coresight_parse_graph(adev, pdata);
}
#else
static inline int
acpi_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
return -ENOENT;
}
static inline int acpi_coresight_get_cpu(struct device *dev)
{
return -ENODEV;
}
#endif
int coresight_get_cpu(struct device *dev)
{
if (is_of_node(dev->fwnode))
return of_coresight_get_cpu(dev);
else if (is_acpi_device_node(dev->fwnode))
return acpi_coresight_get_cpu(dev);
return 0;
}
EXPORT_SYMBOL_GPL(coresight_get_cpu);
struct coresight_platform_data *
coresight_get_platform_data(struct device *dev)
{
int ret = -ENOENT;
struct coresight_platform_data *pdata = NULL;
struct fwnode_handle *fwnode = dev_fwnode(dev);
if (IS_ERR_OR_NULL(fwnode))
goto error;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
ret = -ENOMEM;
goto error;
}
if (is_of_node(fwnode))
ret = of_get_coresight_platform_data(dev, pdata);
else if (is_acpi_device_node(fwnode))
ret = acpi_get_coresight_platform_data(dev, pdata);
if (!ret)
return pdata;
error:
if (!IS_ERR_OR_NULL(pdata))
/* Cleanup the connection information */
coresight_release_platform_data(pdata);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(coresight_get_platform_data);
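
What a driver gets back from coresight_get_platform_data() is the connection table built above: nr_inport/nr_outport counts plus one coresight_connection record per output port. A minimal sketch of walking it (the debug helper name is made up):

static void example_dump_outports(struct device *dev,
                                  struct coresight_platform_data *pdata)
{
        int i;

        for (i = 0; i < pdata->nr_outport; i++) {
                struct coresight_connection *conn = &pdata->conns[i];

                dev_dbg(dev, "outport %d -> remote inport %d\n",
                        conn->outport, conn->child_port);
        }
}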

View File

@ -200,4 +200,8 @@ static inline void *coresight_get_uci_data(const struct amba_id *id)
return 0;
}
void coresight_release_platform_data(struct coresight_platform_data *pdata);
int coresight_device_fwnode_match(struct device *dev, void *fwnode);
#endif

View File

@ -5,6 +5,7 @@
* Description: CoreSight Replicator driver
*/
#include <linux/acpi.h>
#include <linux/amba/bus.h>
#include <linux/kernel.h>
#include <linux/device.h>
@ -22,17 +23,17 @@
#define REPLICATOR_IDFILTER0 0x000
#define REPLICATOR_IDFILTER1 0x004
DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator");
/**
* struct replicator_drvdata - specifics associated to a replicator component
* @base: memory mapped base address for this component. Also indicates
* whether this one is programmable or not.
* @dev: the device entity associated with this component
* @atclk: optional clock for the core parts of the replicator.
* @csdev: component vitals needed by the framework
*/
struct replicator_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
};
@ -100,7 +101,7 @@ static int replicator_enable(struct coresight_device *csdev, int inport,
if (drvdata->base)
rc = dynamic_replicator_enable(drvdata, inport, outport);
if (!rc)
dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
dev_dbg(&csdev->dev, "REPLICATOR enabled\n");
return rc;
}
@ -139,7 +140,7 @@ static void replicator_disable(struct coresight_device *csdev, int inport,
if (drvdata->base)
dynamic_replicator_disable(drvdata, inport, outport);
dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
dev_dbg(&csdev->dev, "REPLICATOR disabled\n");
}
static const struct coresight_ops_link replicator_link_ops = {
@ -179,24 +180,20 @@ static int replicator_probe(struct device *dev, struct resource *res)
struct coresight_platform_data *pdata = NULL;
struct replicator_drvdata *drvdata;
struct coresight_desc desc = { 0 };
struct device_node *np = dev->of_node;
void __iomem *base;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
dev->platform_data = pdata;
}
if (of_device_is_compatible(np, "arm,coresight-replicator"))
if (is_of_node(dev_fwnode(dev)) &&
of_device_is_compatible(dev->of_node, "arm,coresight-replicator"))
pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n");
desc.name = coresight_alloc_device_name(&replicator_devs, dev);
if (!desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = dev;
drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
@ -220,11 +217,19 @@ static int replicator_probe(struct device *dev, struct resource *res)
dev_set_drvdata(dev, drvdata);
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out_disable_clk;
}
dev->platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_LINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT;
desc.ops = &replicator_cs_ops;
desc.pdata = dev->platform_data;
desc.dev = dev;
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
@ -292,11 +297,19 @@ static const struct of_device_id static_replicator_match[] = {
{}
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id static_replicator_acpi_ids[] = {
{"ARMHC985", 0}, /* ARM CoreSight Static Replicator */
{}
};
#endif
static struct platform_driver static_replicator_driver = {
.probe = static_replicator_probe,
.driver = {
.name = "coresight-static-replicator",
.of_match_table = static_replicator_match,
.of_match_table = of_match_ptr(static_replicator_match),
.acpi_match_table = ACPI_PTR(static_replicator_acpi_ids),
.pm = &replicator_dev_pm_ops,
.suppress_bind_attrs = true,
},

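of_match_ptr() and ACPI_PTR() expand to NULL when CONFIG_OF or CONFIG_ACPI is disabled, which is what lets the platform driver above carry DT and ACPI ID tables side by side without extra #ifdefs around the driver structure. A minimal sketch with made-up IDs (.probe omitted):

static const struct of_device_id example_of_match[] = {
        { .compatible = "vendor,example-widget" },      /* illustrative */
        {}
};

#ifdef CONFIG_ACPI
static const struct acpi_device_id example_acpi_ids[] = {
        { "EXMP0001", 0 },                              /* illustrative */
        {}
};
#endif

static struct platform_driver example_driver = {
        .driver = {
                .name = "example-widget",
                .of_match_table = of_match_ptr(example_of_match),
                .acpi_match_table = ACPI_PTR(example_acpi_ids),
        },
};
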
View File

@ -16,6 +16,7 @@
* (C) 2015-2016 Chunyan Zhang <zhang.chunyan@linaro.org>
*/
#include <asm/local.h>
#include <linux/acpi.h>
#include <linux/amba/bus.h>
#include <linux/bitmap.h>
#include <linux/clk.h>
@ -107,10 +108,11 @@ struct channel_space {
unsigned long *guaranteed;
};
DEFINE_CORESIGHT_DEVLIST(stm_devs, "stm");
/**
* struct stm_drvdata - specifics associated to an STM component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the STM.
* @csdev: component vitals needed by the framework.
* @spinlock: only one at a time pls.
@ -128,7 +130,6 @@ struct channel_space {
*/
struct stm_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
spinlock_t spinlock;
@ -205,13 +206,13 @@ static int stm_enable(struct coresight_device *csdev,
if (val)
return -EBUSY;
pm_runtime_get_sync(drvdata->dev);
pm_runtime_get_sync(csdev->dev.parent);
spin_lock(&drvdata->spinlock);
stm_enable_hw(drvdata);
spin_unlock(&drvdata->spinlock);
dev_dbg(drvdata->dev, "STM tracing enabled\n");
dev_dbg(&csdev->dev, "STM tracing enabled\n");
return 0;
}
@ -271,10 +272,10 @@ static void stm_disable(struct coresight_device *csdev,
/* Wait until the engine has completely stopped */
coresight_timeout(drvdata->base, STMTCSR, STMTCSR_BUSY_BIT, 0);
pm_runtime_put(drvdata->dev);
pm_runtime_put(csdev->dev.parent);
local_set(&drvdata->mode, CS_MODE_DISABLED);
dev_dbg(drvdata->dev, "STM tracing disabled\n");
dev_dbg(&csdev->dev, "STM tracing disabled\n");
}
}
@ -685,14 +686,15 @@ static const struct attribute_group *coresight_stm_groups[] = {
NULL,
};
static int stm_get_resource_byname(struct device_node *np,
char *ch_base, struct resource *res)
#ifdef CONFIG_OF
static int of_stm_get_stimulus_area(struct device *dev, struct resource *res)
{
const char *name = NULL;
int index = 0, found = 0;
struct device_node *np = dev->of_node;
while (!of_property_read_string_index(np, "reg-names", index, &name)) {
if (strcmp(ch_base, name)) {
if (strcmp("stm-stimulus-base", name)) {
index++;
continue;
}
@ -707,6 +709,70 @@ static int stm_get_resource_byname(struct device_node *np,
return of_address_to_resource(np, index, res);
}
#else
static inline int of_stm_get_stimulus_area(struct device *dev,
struct resource *res)
{
return -ENOENT;
}
#endif
#ifdef CONFIG_ACPI
static int acpi_stm_get_stimulus_area(struct device *dev, struct resource *res)
{
int rc;
bool found_base = false;
struct resource_entry *rent;
LIST_HEAD(res_list);
struct acpi_device *adev = ACPI_COMPANION(dev);
if (!adev)
return -ENODEV;
rc = acpi_dev_get_resources(adev, &res_list, NULL, NULL);
if (rc < 0)
return rc;
/*
* The stimulus base for STM device must be listed as the second memory
* resource, followed by the programming base address as described in
* "Section 2.3 Resources" in ACPI for CoreSightTM 1.0 Platform Design
* document (DEN0067).
*/
rc = -ENOENT;
list_for_each_entry(rent, &res_list, node) {
if (resource_type(rent->res) != IORESOURCE_MEM)
continue;
if (found_base) {
*res = *rent->res;
rc = 0;
break;
}
found_base = true;
}
acpi_dev_free_resource_list(&res_list);
return rc;
}
#else
static inline int acpi_stm_get_stimulus_area(struct device *dev,
struct resource *res)
{
return -ENOENT;
}
#endif
static int stm_get_stimulus_area(struct device *dev, struct resource *res)
{
struct fwnode_handle *fwnode = dev_fwnode(dev);
if (is_of_node(fwnode))
return of_stm_get_stimulus_area(dev, res);
else if (is_acpi_node(fwnode))
return acpi_stm_get_stimulus_area(dev, res);
return -ENOENT;
}
static u32 stm_fundamental_data_size(struct stm_drvdata *drvdata)
{
@ -763,9 +829,10 @@ static void stm_init_default_data(struct stm_drvdata *drvdata)
bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp);
}
static void stm_init_generic_data(struct stm_drvdata *drvdata)
static void stm_init_generic_data(struct stm_drvdata *drvdata,
const char *name)
{
drvdata->stm.name = dev_name(drvdata->dev);
drvdata->stm.name = name;
/*
* MasterIDs are assigned at HW design phase. As such the core is
@ -795,19 +862,15 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
struct resource ch_res;
size_t bitmap_size;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
}
desc.name = coresight_alloc_device_name(&stm_devs, dev);
if (!desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = &adev->dev;
drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
@ -821,7 +884,7 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
return PTR_ERR(base);
drvdata->base = base;
ret = stm_get_resource_byname(np, "stm-stimulus-base", &ch_res);
ret = stm_get_stimulus_area(dev, &ch_res);
if (ret)
return ret;
drvdata->chs.phys = ch_res.start;
@ -848,14 +911,22 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
spin_lock_init(&drvdata->spinlock);
stm_init_default_data(drvdata);
stm_init_generic_data(drvdata);
stm_init_generic_data(drvdata, desc.name);
if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) {
dev_info(dev,
"stm_register_device failed, probing deferred\n");
"%s : stm_register_device failed, probing deferred\n",
desc.name);
return -EPROBE_DEFER;
}
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto stm_unregister;
}
adev->dev.platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_SOURCE;
desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE;
desc.ops = &stm_cs_ops;
@ -870,7 +941,8 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
pm_runtime_put(&adev->dev);
dev_info(dev, "%s initialized\n", (char *)coresight_get_uci_data(id));
dev_info(&drvdata->csdev->dev, "%s initialized\n",
(char *)coresight_get_uci_data(id));
return 0;
stm_unregister:

View File

@ -280,7 +280,6 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev,
u32 mode, void *data)
{
int ret;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (mode) {
case CS_MODE_SYSFS:
@ -298,7 +297,7 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev,
if (ret)
return ret;
dev_dbg(drvdata->dev, "TMC-ETB/ETF enabled\n");
dev_dbg(&csdev->dev, "TMC-ETB/ETF enabled\n");
return 0;
}
@ -328,7 +327,7 @@ static int tmc_disable_etf_sink(struct coresight_device *csdev)
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n");
dev_dbg(&csdev->dev, "TMC-ETB/ETF disabled\n");
return 0;
}
@ -351,7 +350,7 @@ static int tmc_enable_etf_link(struct coresight_device *csdev,
spin_unlock_irqrestore(&drvdata->spinlock, flags);
if (!ret)
dev_dbg(drvdata->dev, "TMC-ETF enabled\n");
dev_dbg(&csdev->dev, "TMC-ETF enabled\n");
return ret;
}
@ -371,19 +370,17 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
drvdata->mode = CS_MODE_DISABLED;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "TMC-ETF disabled\n");
dev_dbg(&csdev->dev, "TMC-ETF disabled\n");
}
static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,
struct perf_event *event, void **pages,
int nr_pages, bool overwrite)
{
int node, cpu = event->cpu;
int node;
struct cs_buffers *buf;
if (cpu == -1)
cpu = smp_processor_id();
node = cpu_to_node(cpu);
node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
/* Allocate memory structure for interaction with Perf */
buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
@ -477,9 +474,11 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
/*
* The TMC RAM buffer may be bigger than the space available in the
* perf ring buffer (handle->size). If so advance the RRP so that we
* get the latest trace data.
* get the latest trace data. In snapshot mode none of that matters
* since we are expected to clobber stale data in favour of the latest
* traces.
*/
if (to_read > handle->size) {
if (!buf->snapshot && to_read > handle->size) {
u32 mask = 0;
/*
@ -516,7 +515,13 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
lost = true;
}
if (lost)
/*
* Don't set the TRUNCATED flag in snapshot mode because 1) the
* captured buffer is expected to be truncated and 2) a full buffer
* prevents the event from being re-enabled by the perf core,
* resulting in stale data being sent to user space.
*/
if (!buf->snapshot && lost)
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
cur = buf->cur;
@ -542,11 +547,15 @@ static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
}
}
/* In snapshot mode we have to update the head */
if (buf->snapshot) {
handle->head = (cur * PAGE_SIZE) + offset;
to_read = buf->nr_pages << PAGE_SHIFT;
}
/*
* In snapshot mode we simply increment the head by the number of bytes
* that were written. User space function cs_etm_find_snapshot() will
* figure out how many bytes to get from the AUX buffer based on the
* position of the head.
*/
if (buf->snapshot)
handle->head += to_read;
CS_LOCK(drvdata->base);
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);

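The perf buffer allocations above derive the NUMA node from the event's CPU and fall back to NUMA_NO_NODE for per-thread events (event->cpu == -1), instead of reading smp_processor_id() from a preemptible context. A minimal sketch of the pattern:

static void *example_alloc_aux_priv(struct perf_event *event, size_t size)
{
        int node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);

        /* NUMA_NO_NODE lets the allocator pick any suitable node */
        return kzalloc_node(size, GFP_KERNEL, node);
}
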
View File

@ -162,10 +162,11 @@ static void tmc_pages_free(struct tmc_pages *tmc_pages,
struct device *dev, enum dma_data_direction dir)
{
int i;
struct device *real_dev = dev->parent;
for (i = 0; i < tmc_pages->nr_pages; i++) {
if (tmc_pages->daddrs && tmc_pages->daddrs[i])
dma_unmap_page(dev, tmc_pages->daddrs[i],
dma_unmap_page(real_dev, tmc_pages->daddrs[i],
PAGE_SIZE, dir);
if (tmc_pages->pages && tmc_pages->pages[i])
__free_page(tmc_pages->pages[i]);
@ -193,6 +194,7 @@ static int tmc_pages_alloc(struct tmc_pages *tmc_pages,
int i, nr_pages;
dma_addr_t paddr;
struct page *page;
struct device *real_dev = dev->parent;
nr_pages = tmc_pages->nr_pages;
tmc_pages->daddrs = kcalloc(nr_pages, sizeof(*tmc_pages->daddrs),
@ -216,8 +218,8 @@ static int tmc_pages_alloc(struct tmc_pages *tmc_pages,
page = alloc_pages_node(node,
GFP_KERNEL | __GFP_ZERO, 0);
}
paddr = dma_map_page(dev, page, 0, PAGE_SIZE, dir);
if (dma_mapping_error(dev, paddr))
paddr = dma_map_page(real_dev, page, 0, PAGE_SIZE, dir);
if (dma_mapping_error(real_dev, paddr))
goto err;
tmc_pages->daddrs[i] = paddr;
tmc_pages->pages[i] = page;
@ -304,7 +306,7 @@ static int tmc_alloc_data_pages(struct tmc_sg_table *sg_table, void **pages)
* and data buffers. TMC writes to the data buffers and reads from the SG
* Table pages.
*
* @dev - Device to which page should be DMA mapped.
* @dev - Coresight device to which page should be DMA mapped.
* @node - Numa node for mem allocations
* @nr_tpages - Number of pages for the table entries.
* @nr_dpages - Number of pages for Data buffer.
@ -348,13 +350,13 @@ void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
{
int i, index, start;
int npages = DIV_ROUND_UP(size, PAGE_SIZE);
struct device *dev = table->dev;
struct device *real_dev = table->dev->parent;
struct tmc_pages *data = &table->data_pages;
start = offset >> PAGE_SHIFT;
for (i = start; i < (start + npages); i++) {
index = i % data->nr_pages;
dma_sync_single_for_cpu(dev, data->daddrs[index],
dma_sync_single_for_cpu(real_dev, data->daddrs[index],
PAGE_SIZE, DMA_FROM_DEVICE);
}
}
@ -363,11 +365,11 @@ void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table)
{
int i;
struct device *dev = sg_table->dev;
struct device *real_dev = sg_table->dev->parent;
struct tmc_pages *table_pages = &sg_table->table_pages;
for (i = 0; i < table_pages->nr_pages; i++)
dma_sync_single_for_device(dev, table_pages->daddrs[i],
dma_sync_single_for_device(real_dev, table_pages->daddrs[i],
PAGE_SIZE, DMA_TO_DEVICE);
}
@ -590,6 +592,7 @@ static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata,
void **pages)
{
struct etr_flat_buf *flat_buf;
struct device *real_dev = drvdata->csdev->dev.parent;
/* We cannot reuse existing pages for flat buf */
if (pages)
@ -599,7 +602,7 @@ static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata,
if (!flat_buf)
return -ENOMEM;
flat_buf->vaddr = dma_alloc_coherent(drvdata->dev, etr_buf->size,
flat_buf->vaddr = dma_alloc_coherent(real_dev, etr_buf->size,
&flat_buf->daddr, GFP_KERNEL);
if (!flat_buf->vaddr) {
kfree(flat_buf);
@ -607,7 +610,7 @@ static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata,
}
flat_buf->size = etr_buf->size;
flat_buf->dev = drvdata->dev;
flat_buf->dev = &drvdata->csdev->dev;
etr_buf->hwaddr = flat_buf->daddr;
etr_buf->mode = ETR_MODE_FLAT;
etr_buf->private = flat_buf;
@ -618,9 +621,12 @@ static void tmc_etr_free_flat_buf(struct etr_buf *etr_buf)
{
struct etr_flat_buf *flat_buf = etr_buf->private;
if (flat_buf && flat_buf->daddr)
dma_free_coherent(flat_buf->dev, flat_buf->size,
if (flat_buf && flat_buf->daddr) {
struct device *real_dev = flat_buf->dev->parent;
dma_free_coherent(real_dev, flat_buf->size,
flat_buf->vaddr, flat_buf->daddr);
}
kfree(flat_buf);
}
@ -666,8 +672,9 @@ static int tmc_etr_alloc_sg_buf(struct tmc_drvdata *drvdata,
void **pages)
{
struct etr_sg_table *etr_table;
struct device *dev = &drvdata->csdev->dev;
etr_table = tmc_init_etr_sg_table(drvdata->dev, node,
etr_table = tmc_init_etr_sg_table(dev, node,
etr_buf->size, pages);
if (IS_ERR(etr_table))
return -ENOMEM;
@ -751,8 +758,8 @@ tmc_etr_get_catu_device(struct tmc_drvdata *drvdata)
if (!IS_ENABLED(CONFIG_CORESIGHT_CATU))
return NULL;
for (i = 0; i < etr->nr_outport; i++) {
tmp = etr->conns[i].child_dev;
for (i = 0; i < etr->pdata->nr_outport; i++) {
tmp = etr->pdata->conns[i].child_dev;
if (tmp && coresight_is_catu_device(tmp))
return tmp;
}
@ -823,9 +830,10 @@ static struct etr_buf *tmc_alloc_etr_buf(struct tmc_drvdata *drvdata,
bool has_etr_sg, has_iommu;
bool has_sg, has_catu;
struct etr_buf *etr_buf;
struct device *dev = &drvdata->csdev->dev;
has_etr_sg = tmc_etr_has_cap(drvdata, TMC_ETR_SG);
has_iommu = iommu_get_domain_for_dev(drvdata->dev);
has_iommu = iommu_get_domain_for_dev(dev->parent);
has_catu = !!tmc_etr_get_catu_device(drvdata);
has_sg = has_catu || has_etr_sg;
@ -863,7 +871,7 @@ static struct etr_buf *tmc_alloc_etr_buf(struct tmc_drvdata *drvdata,
return ERR_PTR(rc);
}
dev_dbg(drvdata->dev, "allocated buffer of size %ldKB in mode %d\n",
dev_dbg(dev, "allocated buffer of size %ldKB in mode %d\n",
(unsigned long)size >> 10, etr_buf->mode);
return etr_buf;
}
@ -1162,7 +1170,7 @@ out:
tmc_etr_free_sysfs_buf(free_buf);
if (!ret)
dev_dbg(drvdata->dev, "TMC-ETR enabled\n");
dev_dbg(&csdev->dev, "TMC-ETR enabled\n");
return ret;
}
@ -1178,14 +1186,11 @@ static struct etr_buf *
alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
int nr_pages, void **pages, bool snapshot)
{
int node, cpu = event->cpu;
int node;
struct etr_buf *etr_buf;
unsigned long size;
if (cpu == -1)
cpu = smp_processor_id();
node = cpu_to_node(cpu);
node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
/*
* Try to match the perf ring buffer size if it is larger
* than the size requested via sysfs.
@ -1317,13 +1322,11 @@ static struct etr_perf_buffer *
tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
int nr_pages, void **pages, bool snapshot)
{
int node, cpu = event->cpu;
int node;
struct etr_buf *etr_buf;
struct etr_perf_buffer *etr_perf;
if (cpu == -1)
cpu = smp_processor_id();
node = cpu_to_node(cpu);
node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node);
if (!etr_perf)
@ -1358,7 +1361,7 @@ static void *tmc_alloc_etr_buffer(struct coresight_device *csdev,
etr_perf = tmc_etr_setup_perf_buf(drvdata, event,
nr_pages, pages, snapshot);
if (IS_ERR(etr_perf)) {
dev_dbg(drvdata->dev, "Unable to allocate ETR buffer\n");
dev_dbg(&csdev->dev, "Unable to allocate ETR buffer\n");
return NULL;
}
@ -1501,18 +1504,23 @@ tmc_update_etr_buffer(struct coresight_device *csdev,
tmc_etr_sync_perf_buffer(etr_perf);
/*
* Update handle->head in snapshot mode. Also update the size to the
* hardware buffer size if there was an overflow.
* In snapshot mode we simply increment the head by the number of bytes
* that were written. User space function cs_etm_find_snapshot() will
* figure out how many bytes to get from the AUX buffer based on the
* position of the head.
*/
if (etr_perf->snapshot) {
if (etr_perf->snapshot)
handle->head += size;
if (etr_buf->full)
size = etr_buf->size;
}
lost |= etr_buf->full;
out:
if (lost)
/*
* Don't set the TRUNCATED flag in snapshot mode because 1) the
* captured buffer is expected to be truncated and 2) a full buffer
* prevents the event from being re-enabled by the perf core,
* resulting in stale data being sent to user space.
*/
if (!etr_perf->snapshot && lost)
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
return size;
}
@ -1612,7 +1620,7 @@ static int tmc_disable_etr_sink(struct coresight_device *csdev)
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_dbg(drvdata->dev, "TMC-ETR disabled\n");
dev_dbg(&csdev->dev, "TMC-ETR disabled\n");
return 0;
}
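
With drvdata->dev removed, the ETR path maps, syncs and frees DMA memory against the parent of the coresight device, i.e. the AMBA device that actually carries the DMA mask and IOMMU attachment. A minimal sketch of that rule, with a made-up helper name:

static dma_addr_t example_map_trace_page(struct coresight_device *csdev,
                                         struct page *page)
{
        /* The coresight device is a logical child; DMA goes via its parent */
        struct device *real_dev = csdev->dev.parent;

        return dma_map_page(real_dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
}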

View File

@ -27,12 +27,16 @@
#include "coresight-priv.h"
#include "coresight-tmc.h"
DEFINE_CORESIGHT_DEVLIST(etb_devs, "tmc_etb");
DEFINE_CORESIGHT_DEVLIST(etf_devs, "tmc_etf");
DEFINE_CORESIGHT_DEVLIST(etr_devs, "tmc_etr");
void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
{
/* Ensure formatter, unformatter and hardware fifo are empty */
if (coresight_timeout(drvdata->base,
TMC_STS, TMC_STS_TMCREADY_BIT, 1)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"timeout while waiting for TMC to be Ready\n");
}
}
@ -49,7 +53,7 @@ void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
/* Ensure flush completes */
if (coresight_timeout(drvdata->base,
TMC_FFCR, TMC_FFCR_FLUSHMAN_BIT, 0)) {
dev_err(drvdata->dev,
dev_err(&drvdata->csdev->dev,
"timeout while waiting for completion of Manual Flush\n");
}
@ -83,7 +87,7 @@ static int tmc_read_prepare(struct tmc_drvdata *drvdata)
}
if (!ret)
dev_dbg(drvdata->dev, "TMC read start\n");
dev_dbg(&drvdata->csdev->dev, "TMC read start\n");
return ret;
}
@ -105,7 +109,7 @@ static int tmc_read_unprepare(struct tmc_drvdata *drvdata)
}
if (!ret)
dev_dbg(drvdata->dev, "TMC read end\n");
dev_dbg(&drvdata->csdev->dev, "TMC read end\n");
return ret;
}
@ -122,7 +126,7 @@ static int tmc_open(struct inode *inode, struct file *file)
nonseekable_open(inode, file);
dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
dev_dbg(&drvdata->csdev->dev, "%s: successfully opened\n", __func__);
return 0;
}
@ -152,12 +156,13 @@ static ssize_t tmc_read(struct file *file, char __user *data, size_t len,
return 0;
if (copy_to_user(data, bufp, actual)) {
dev_dbg(drvdata->dev, "%s: copy_to_user failed\n", __func__);
dev_dbg(&drvdata->csdev->dev,
"%s: copy_to_user failed\n", __func__);
return -EFAULT;
}
*ppos += actual;
dev_dbg(drvdata->dev, "%zu bytes copied\n", actual);
dev_dbg(&drvdata->csdev->dev, "%zu bytes copied\n", actual);
return actual;
}
@ -172,7 +177,7 @@ static int tmc_release(struct inode *inode, struct file *file)
if (ret)
return ret;
dev_dbg(drvdata->dev, "%s: released\n", __func__);
dev_dbg(&drvdata->csdev->dev, "%s: released\n", __func__);
return 0;
}
@ -332,24 +337,22 @@ const struct attribute_group *coresight_tmc_groups[] = {
NULL,
};
static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata)
static inline bool tmc_etr_can_use_sg(struct device *dev)
{
return fwnode_property_present(drvdata->dev->fwnode,
"arm,scatter-gather");
return fwnode_property_present(dev->fwnode, "arm,scatter-gather");
}
/* Detect and initialise the capabilities of a TMC ETR */
static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
u32 devid, void *dev_caps)
static int tmc_etr_setup_caps(struct device *parent, u32 devid, void *dev_caps)
{
int rc;
u32 dma_mask = 0;
struct tmc_drvdata *drvdata = dev_get_drvdata(parent);
/* Set the unadvertised capabilities */
tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps);
if (!(devid & TMC_DEVID_NOSCAT) && tmc_etr_can_use_sg(drvdata))
if (!(devid & TMC_DEVID_NOSCAT) && tmc_etr_can_use_sg(parent))
tmc_etr_set_cap(drvdata, TMC_ETR_SG);
/* Check if the AXI address width is available */
@ -367,18 +370,27 @@ static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata,
case 44:
case 48:
case 52:
dev_info(drvdata->dev, "Detected dma mask %dbits\n", dma_mask);
dev_info(parent, "Detected dma mask %dbits\n", dma_mask);
break;
default:
dma_mask = 40;
}
rc = dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask));
rc = dma_set_mask_and_coherent(parent, DMA_BIT_MASK(dma_mask));
if (rc)
dev_err(drvdata->dev, "Failed to setup DMA mask: %d\n", rc);
dev_err(parent, "Failed to setup DMA mask: %d\n", rc);
return rc;
}
static u32 tmc_etr_get_default_buffer_size(struct device *dev)
{
u32 size;
if (fwnode_property_read_u32(dev->fwnode, "arm,buffer-size", &size))
size = SZ_1M;
return size;
}
static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
{
int ret = 0;
@ -389,23 +401,13 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
struct tmc_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out;
}
adev->dev.platform_data = pdata;
}
struct coresight_dev_list *dev_list = NULL;
ret = -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
goto out;
drvdata->dev = &adev->dev;
dev_set_drvdata(dev, drvdata);
/* Validity for the resource is already checked by the AMBA core */
@ -425,18 +427,11 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
/* This device is not associated with a session */
drvdata->pid = -1;
if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
if (np)
ret = of_property_read_u32(np,
"arm,buffer-size",
&drvdata->size);
if (ret)
drvdata->size = SZ_1M;
} else {
if (drvdata->config_type == TMC_CONFIG_TYPE_ETR)
drvdata->size = tmc_etr_get_default_buffer_size(dev);
else
drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4;
}
desc.pdata = pdata;
desc.dev = dev;
desc.groups = coresight_tmc_groups;
@ -445,36 +440,53 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
desc.ops = &tmc_etb_cs_ops;
dev_list = &etb_devs;
break;
case TMC_CONFIG_TYPE_ETR:
desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
desc.ops = &tmc_etr_cs_ops;
ret = tmc_etr_setup_caps(drvdata, devid,
ret = tmc_etr_setup_caps(dev, devid,
coresight_get_uci_data(id));
if (ret)
goto out;
idr_init(&drvdata->idr);
mutex_init(&drvdata->idr_mutex);
dev_list = &etr_devs;
break;
case TMC_CONFIG_TYPE_ETF:
desc.type = CORESIGHT_DEV_TYPE_LINKSINK;
desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;
desc.ops = &tmc_etf_cs_ops;
dev_list = &etf_devs;
break;
default:
pr_err("%s: Unsupported TMC config\n", pdata->name);
pr_err("%s: Unsupported TMC config\n", desc.name);
ret = -EINVAL;
goto out;
}
desc.name = coresight_alloc_device_name(dev_list, dev);
if (!desc.name) {
ret = -ENOMEM;
goto out;
}
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata)) {
ret = PTR_ERR(pdata);
goto out;
}
adev->dev.platform_data = pdata;
desc.pdata = pdata;
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
goto out;
}
drvdata->miscdev.name = pdata->name;
drvdata->miscdev.name = desc.name;
drvdata->miscdev.minor = MISC_DYNAMIC_MINOR;
drvdata->miscdev.fops = &tmc_fops;
ret = misc_register(&drvdata->miscdev);


@ -161,7 +161,6 @@ struct etr_buf {
/**
* struct tmc_drvdata - specifics associated to an TMC component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @csdev: component vitals needed by the framework.
* @miscdev: specifics to handle "/dev/xyz.tmc" entry.
* @spinlock: only one at a time pls.
@ -184,7 +183,6 @@ struct etr_buf {
*/
struct tmc_drvdata {
void __iomem *base;
struct device *dev;
struct coresight_device *csdev;
struct miscdevice miscdev;
spinlock_t spinlock;


@ -47,15 +47,15 @@
#define FFCR_FON_MAN BIT(6)
#define FFCR_STOP_FI BIT(12)
DEFINE_CORESIGHT_DEVLIST(tpiu_devs, "tpiu");
/**
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
* @atclk: optional clock for the core parts of the TPIU.
* @csdev: component vitals needed by the framework.
*/
struct tpiu_drvdata {
void __iomem *base;
struct device *dev;
struct clk *atclk;
struct coresight_device *csdev;
};
@ -75,7 +75,7 @@ static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused)
tpiu_enable_hw(drvdata);
atomic_inc(csdev->refcnt);
dev_dbg(drvdata->dev, "TPIU enabled\n");
dev_dbg(&csdev->dev, "TPIU enabled\n");
return 0;
}
@ -104,7 +104,7 @@ static int tpiu_disable(struct coresight_device *csdev)
tpiu_disable_hw(drvdata);
dev_dbg(drvdata->dev, "TPIU disabled\n");
dev_dbg(&csdev->dev, "TPIU disabled\n");
return 0;
}
@ -126,20 +126,15 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
struct tpiu_drvdata *drvdata;
struct resource *res = &adev->res;
struct coresight_desc desc = { 0 };
struct device_node *np = adev->dev.of_node;
if (np) {
pdata = of_get_coresight_platform_data(dev, np);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
adev->dev.platform_data = pdata;
}
desc.name = coresight_alloc_device_name(&tpiu_devs, dev);
if (!desc.name)
return -ENOMEM;
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata)
return -ENOMEM;
drvdata->dev = &adev->dev;
drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
if (!IS_ERR(drvdata->atclk)) {
ret = clk_prepare_enable(drvdata->atclk);
@ -158,6 +153,11 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
/* Disable tpiu to support older devices */
tpiu_disable_hw(drvdata);
pdata = coresight_get_platform_data(dev);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
dev->platform_data = pdata;
desc.type = CORESIGHT_DEV_TYPE_SINK;
desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PORT;
desc.ops = &tpiu_cs_ops;


@ -100,8 +100,8 @@ static int coresight_find_link_inport(struct coresight_device *csdev,
int i;
struct coresight_connection *conn;
for (i = 0; i < parent->nr_outport; i++) {
conn = &parent->conns[i];
for (i = 0; i < parent->pdata->nr_outport; i++) {
conn = &parent->pdata->conns[i];
if (conn->child_dev == csdev)
return conn->child_port;
}
@ -118,8 +118,8 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
int i;
struct coresight_connection *conn;
for (i = 0; i < csdev->nr_outport; i++) {
conn = &csdev->conns[i];
for (i = 0; i < csdev->pdata->nr_outport; i++) {
conn = &csdev->pdata->conns[i];
if (conn->child_dev == child)
return conn->outport;
}
@ -306,10 +306,10 @@ static void coresight_disable_link(struct coresight_device *csdev,
if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) {
refport = inport;
nr_conns = csdev->nr_inport;
nr_conns = csdev->pdata->nr_inport;
} else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) {
refport = outport;
nr_conns = csdev->nr_outport;
nr_conns = csdev->pdata->nr_outport;
} else {
refport = 0;
nr_conns = 1;
@ -595,9 +595,10 @@ static void coresight_grab_device(struct coresight_device *csdev)
{
int i;
for (i = 0; i < csdev->nr_outport; i++) {
struct coresight_device *child = csdev->conns[i].child_dev;
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_device *child;
child = csdev->pdata->conns[i].child_dev;
if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
pm_runtime_get_sync(child->dev.parent);
}
@ -613,9 +614,10 @@ static void coresight_drop_device(struct coresight_device *csdev)
int i;
pm_runtime_put(csdev->dev.parent);
for (i = 0; i < csdev->nr_outport; i++) {
struct coresight_device *child = csdev->conns[i].child_dev;
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_device *child;
child = csdev->pdata->conns[i].child_dev;
if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
pm_runtime_put(child->dev.parent);
}
@ -645,9 +647,10 @@ static int _coresight_build_path(struct coresight_device *csdev,
goto out;
/* Not a sink - recursively explore each port found on this element */
for (i = 0; i < csdev->nr_outport; i++) {
struct coresight_device *child_dev = csdev->conns[i].child_dev;
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_device *child_dev;
child_dev = csdev->pdata->conns[i].child_dev;
if (child_dev &&
_coresight_build_path(child_dev, sink, path) == 0) {
found = true;
@ -975,6 +978,7 @@ static void coresight_device_release(struct device *dev)
{
struct coresight_device *csdev = to_coresight_device(dev);
fwnode_handle_put(csdev->dev.fwnode);
kfree(csdev->refcnt);
kfree(csdev);
}
@ -1000,19 +1004,17 @@ static int coresight_orphan_match(struct device *dev, void *data)
* Circle through all the connections of that component. If we find
* an orphan connection whose name matches @csdev, link it.
*/
for (i = 0; i < i_csdev->nr_outport; i++) {
conn = &i_csdev->conns[i];
for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
conn = &i_csdev->pdata->conns[i];
/* We have found at least one orphan connection */
if (conn->child_dev == NULL) {
/* Does it match this newly added device? */
if (conn->child_name &&
!strcmp(dev_name(&csdev->dev), conn->child_name)) {
if (conn->child_fwnode == csdev->dev.fwnode)
conn->child_dev = csdev;
} else {
else
/* This component still has an orphan */
still_orphan = true;
}
}
}
@ -1040,13 +1042,13 @@ static void coresight_fixup_device_conns(struct coresight_device *csdev)
{
int i;
for (i = 0; i < csdev->nr_outport; i++) {
struct coresight_connection *conn = &csdev->conns[i];
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_connection *conn = &csdev->pdata->conns[i];
struct device *dev = NULL;
if (conn->child_name)
dev = bus_find_device_by_name(&coresight_bustype, NULL,
conn->child_name);
dev = bus_find_device(&coresight_bustype, NULL,
(void *)conn->child_fwnode,
coresight_device_fwnode_match);
if (dev) {
conn->child_dev = to_coresight_device(dev);
/* and put reference from 'bus_find_device()' */
@ -1075,15 +1077,21 @@ static int coresight_remove_match(struct device *dev, void *data)
* Circle through all the connections of that component. If we find
* a connection whose name matches @csdev, remove it.
*/
for (i = 0; i < iterator->nr_outport; i++) {
conn = &iterator->conns[i];
for (i = 0; i < iterator->pdata->nr_outport; i++) {
conn = &iterator->pdata->conns[i];
if (conn->child_dev == NULL)
continue;
if (!strcmp(dev_name(&csdev->dev), conn->child_name)) {
if (csdev->dev.fwnode == conn->child_fwnode) {
iterator->orphan = true;
conn->child_dev = NULL;
/*
* Drop the reference to the handle for the remote
* device acquired in parsing the connections from
* platform data.
*/
fwnode_handle_put(conn->child_fwnode);
/* No need to continue */
break;
}
@ -1096,10 +1104,21 @@ static int coresight_remove_match(struct device *dev, void *data)
return 0;
}
/*
* coresight_remove_conns - Remove references to the given device
* from the connections of other devices.
*/
static void coresight_remove_conns(struct coresight_device *csdev)
{
bus_for_each_dev(&coresight_bustype, NULL,
csdev, coresight_remove_match);
/*
* Another device will point to this device only if there is
* an output port connected to this one. i.e, if the device
* doesn't have at least one input port, there is no point
* in searching all the devices.
*/
if (csdev->pdata->nr_inport)
bus_for_each_dev(&coresight_bustype, NULL,
csdev, coresight_remove_match);
}
/**
@ -1152,6 +1171,22 @@ static int __init coresight_init(void)
}
postcore_initcall(coresight_init);
/*
* coresight_release_platform_data: Release references to the devices connected
* to the output port of this device.
*/
void coresight_release_platform_data(struct coresight_platform_data *pdata)
{
int i;
for (i = 0; i < pdata->nr_outport; i++) {
if (pdata->conns[i].child_fwnode) {
fwnode_handle_put(pdata->conns[i].child_fwnode);
pdata->conns[i].child_fwnode = NULL;
}
}
}
struct coresight_device *coresight_register(struct coresight_desc *desc)
{
int ret;
@ -1184,10 +1219,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
csdev->refcnt = refcnts;
csdev->nr_inport = desc->pdata->nr_inport;
csdev->nr_outport = desc->pdata->nr_outport;
csdev->conns = desc->pdata->conns;
csdev->pdata = desc->pdata;
csdev->type = desc->type;
csdev->subtype = desc->subtype;
@ -1199,7 +1231,12 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
csdev->dev.parent = desc->dev;
csdev->dev.release = coresight_device_release;
csdev->dev.bus = &coresight_bustype;
dev_set_name(&csdev->dev, "%s", desc->pdata->name);
/*
* Hold the reference to our parent device. This will be
* dropped only in coresight_device_release().
*/
csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
dev_set_name(&csdev->dev, "%s", desc->name);
ret = device_register(&csdev->dev);
if (ret) {
@ -1239,6 +1276,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
err_free_csdev:
kfree(csdev);
err_out:
/* Cleanup the connection information */
coresight_release_platform_data(desc->pdata);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(coresight_register);
@ -1248,6 +1287,65 @@ void coresight_unregister(struct coresight_device *csdev)
etm_perf_del_symlink_sink(csdev);
/* Remove references of that device in the topology */
coresight_remove_conns(csdev);
coresight_release_platform_data(csdev->pdata);
device_unregister(&csdev->dev);
}
EXPORT_SYMBOL_GPL(coresight_unregister);
/*
* coresight_search_device_idx - Search the fwnode handle of a device
* in the given dev_idx list. Must be called with the coresight_mutex held.
*
* Returns the index of the entry, when found. Otherwise, -ENOENT.
*/
static inline int coresight_search_device_idx(struct coresight_dev_list *dict,
struct fwnode_handle *fwnode)
{
int i;
for (i = 0; i < dict->nr_idx; i++)
if (dict->fwnode_list[i] == fwnode)
return i;
return -ENOENT;
}
/*
* coresight_alloc_device_name - Get an index for a given device in the
* device index list specific to a driver. An index is allocated for a
* device and is tracked with the fwnode_handle to prevent allocating
* duplicate indices for the same device (e.g, if we defer probing of
* a device due to dependencies), in case the index is requested again.
*/
char *coresight_alloc_device_name(struct coresight_dev_list *dict,
struct device *dev)
{
int idx;
char *name = NULL;
struct fwnode_handle **list;
mutex_lock(&coresight_mutex);
idx = coresight_search_device_idx(dict, dev_fwnode(dev));
if (idx < 0) {
/* Make space for the new entry */
idx = dict->nr_idx;
list = krealloc(dict->fwnode_list,
(idx + 1) * sizeof(*dict->fwnode_list),
GFP_KERNEL);
if (ZERO_OR_NULL_PTR(list)) {
idx = -ENOMEM;
goto done;
}
list[idx] = dev_fwnode(dev);
dict->fwnode_list = list;
dict->nr_idx = idx + 1;
}
name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", dict->pfx, idx);
done:
mutex_unlock(&coresight_mutex);
return name;
}
EXPORT_SYMBOL_GPL(coresight_alloc_device_name);
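
The coresight_alloc_device_name() helper above hands out stable "prefixN" names by remembering which fwnode already received which index, so a device whose probe is deferred gets the same name on the retry. Below is a minimal user-space sketch of that indexing scheme, using a plain realloc()-grown array in place of the driver's coresight_dev_list, krealloc() and devm_kasprintf(); the names in the sketch are illustrative, not the kernel API.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the kernel's coresight_dev_list. */
struct dev_list {
	const char *pfx;	/* name prefix, e.g. "tmc_etr" */
	int nr_idx;		/* number of indices handed out so far */
	const void **keys;	/* stand-in for the stored fwnode handles */
};

/* Return the existing index for @key, or allocate a new one. */
static int alloc_device_index(struct dev_list *dict, const void *key)
{
	int i;

	for (i = 0; i < dict->nr_idx; i++)
		if (dict->keys[i] == key)
			return i;		/* seen before: reuse the index */

	/* Grow the table by one slot, like the krealloc() in the driver. */
	const void **tmp = realloc(dict->keys,
				   (dict->nr_idx + 1) * sizeof(*tmp));
	if (!tmp)
		return -1;
	tmp[dict->nr_idx] = key;
	dict->keys = tmp;
	return dict->nr_idx++;
}

int main(void)
{
	struct dev_list etr_devs = { .pfx = "tmc_etr" };
	int dev_a, dev_b;	/* their addresses act as unique device keys */

	/* First probe of device A, then B, then A again (deferred probe). */
	printf("%s%d\n", etr_devs.pfx,
	       alloc_device_index(&etr_devs, &dev_a));	/* tmc_etr0 */
	printf("%s%d\n", etr_devs.pfx,
	       alloc_device_index(&etr_devs, &dev_b));	/* tmc_etr1 */
	printf("%s%d\n", etr_devs.pfx,
	       alloc_device_index(&etr_devs, &dev_a));	/* tmc_etr0 again */

	free(etr_devs.keys);
	return 0;
}

The kernel version additionally serialises the lookup with coresight_mutex and ties the returned string's lifetime to the probing device via devm.
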


@ -1,297 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2012, The Linux Foundation. All rights reserved.
*/
#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_graph.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/amba/bus.h>
#include <linux/coresight.h>
#include <linux/cpumask.h>
#include <asm/smp_plat.h>
static int of_dev_node_match(struct device *dev, void *data)
{
return dev->of_node == data;
}
static struct device *
of_coresight_get_endpoint_device(struct device_node *endpoint)
{
struct device *dev = NULL;
/*
* If we have a non-configurable replicator, it will be found on the
* platform bus.
*/
dev = bus_find_device(&platform_bus_type, NULL,
endpoint, of_dev_node_match);
if (dev)
return dev;
/*
* We have a configurable component - circle through the AMBA bus
* looking for the device that matches the endpoint node.
*/
return bus_find_device(&amba_bustype, NULL,
endpoint, of_dev_node_match);
}
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
{
return of_property_read_bool(ep, "slave-mode");
}
static void of_coresight_get_ports_legacy(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *ep = NULL;
int in = 0, out = 0;
do {
ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
break;
if (of_coresight_legacy_ep_is_input(ep))
in++;
else
out++;
} while (ep);
*nr_inport = in;
*nr_outport = out;
}
static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
{
struct device_node *parent = of_graph_get_port_parent(ep);
/*
* Skip one-level up to the real device node, if we
* are using the new bindings.
*/
if (of_node_name_eq(parent, "in-ports") ||
of_node_name_eq(parent, "out-ports"))
parent = of_get_next_parent(parent);
return parent;
}
static inline struct device_node *
of_coresight_get_input_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "in-ports");
}
static inline struct device_node *
of_coresight_get_output_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "out-ports");
}
static inline int
of_coresight_count_ports(struct device_node *port_parent)
{
int i = 0;
struct device_node *ep = NULL;
while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
i++;
return i;
}
static void of_coresight_get_ports(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *input_ports = NULL, *output_ports = NULL;
input_ports = of_coresight_get_input_ports_node(node);
output_ports = of_coresight_get_output_ports_node(node);
if (input_ports || output_ports) {
if (input_ports) {
*nr_inport = of_coresight_count_ports(input_ports);
of_node_put(input_ports);
}
if (output_ports) {
*nr_outport = of_coresight_count_ports(output_ports);
of_node_put(output_ports);
}
} else {
/* Fall back to legacy DT bindings parsing */
of_coresight_get_ports_legacy(node, nr_inport, nr_outport);
}
}
static int of_coresight_alloc_memory(struct device *dev,
struct coresight_platform_data *pdata)
{
if (pdata->nr_outport) {
pdata->conns = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->conns),
GFP_KERNEL);
if (!pdata->conns)
return -ENOMEM;
}
return 0;
}
int of_coresight_get_cpu(const struct device_node *node)
{
int cpu;
struct device_node *dn;
dn = of_parse_phandle(node, "cpu", 0);
/* Affinity defaults to CPU0 */
if (!dn)
return 0;
cpu = of_cpu_node_to_id(dn);
of_node_put(dn);
/* Affinity to CPU0 if no cpu nodes are found */
return (cpu < 0) ? 0 : cpu;
}
EXPORT_SYMBOL_GPL(of_coresight_get_cpu);
/*
* of_coresight_parse_endpoint : Parse the given output endpoint @ep
* and fill the connection information in @conn
*
* Parses the local port, remote device name and the remote port.
*
* Returns :
* 1 - If the parsing is successful and a connection record
* was created for an output connection.
* 0 - If the parsing completed without any fatal errors.
* -Errno - Fatal error, abort the scanning.
*/
static int of_coresight_parse_endpoint(struct device *dev,
struct device_node *ep,
struct coresight_connection *conn)
{
int ret = 0;
struct of_endpoint endpoint, rendpoint;
struct device_node *rparent = NULL;
struct device_node *rep = NULL;
struct device *rdev = NULL;
do {
/* Parse the local port details */
if (of_graph_parse_endpoint(ep, &endpoint))
break;
/*
* Get a handle on the remote endpoint and the device it is
* attached to.
*/
rep = of_graph_get_remote_endpoint(ep);
if (!rep)
break;
rparent = of_coresight_get_port_parent(rep);
if (!rparent)
break;
if (of_graph_parse_endpoint(rep, &rendpoint))
break;
/* If the remote device is not available, defer probing */
rdev = of_coresight_get_endpoint_device(rparent);
if (!rdev) {
ret = -EPROBE_DEFER;
break;
}
conn->outport = endpoint.port;
conn->child_name = devm_kstrdup(dev,
dev_name(rdev),
GFP_KERNEL);
conn->child_port = rendpoint.port;
/* Connection record updated */
ret = 1;
} while (0);
of_node_put(rparent);
of_node_put(rep);
put_device(rdev);
return ret;
}
struct coresight_platform_data *
of_get_coresight_platform_data(struct device *dev,
const struct device_node *node)
{
int ret = 0;
struct coresight_platform_data *pdata;
struct coresight_connection *conn;
struct device_node *ep = NULL;
const struct device_node *parent = NULL;
bool legacy_binding = false;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata)
return ERR_PTR(-ENOMEM);
/* Use device name as sysfs handle */
pdata->name = dev_name(dev);
pdata->cpu = of_coresight_get_cpu(node);
/* Get the number of input and output port for this component */
of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport);
/* If there are no output connections, we are done */
if (!pdata->nr_outport)
return pdata;
ret = of_coresight_alloc_memory(dev, pdata);
if (ret)
return ERR_PTR(ret);
parent = of_coresight_get_output_ports_node(node);
/*
* If the DT uses obsoleted bindings, the ports are listed
* under the device and we need to filter out the input
* ports.
*/
if (!parent) {
legacy_binding = true;
parent = node;
dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
}
conn = pdata->conns;
/* Iterate through each output port to discover topology */
while ((ep = of_graph_get_next_endpoint(parent, ep))) {
/*
* Legacy binding mixes input/output ports under the
* same parent. So, skip the input ports if we are dealing
* with legacy binding, as they are processed with their
* connected output ports.
*/
if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
continue;
ret = of_coresight_parse_endpoint(dev, ep, conn);
switch (ret) {
case 1:
conn++; /* Fall through */
case 0:
break;
default:
return ERR_PTR(ret);
}
}
return pdata;
}
EXPORT_SYMBOL_GPL(of_get_coresight_platform_data);


@ -33,14 +33,18 @@
* @entry: window list linkage (msc::win_list)
* @pgoff: page offset into the buffer that this window starts at
* @nr_blocks: number of blocks (pages) in this window
* @nr_segs: number of segments in this window (<= @nr_blocks)
* @_sgt: array of block descriptors
* @sgt: array of block descriptors
*/
struct msc_window {
struct list_head entry;
unsigned long pgoff;
unsigned int nr_blocks;
unsigned int nr_segs;
struct msc *msc;
struct sg_table sgt;
struct sg_table _sgt;
struct sg_table *sgt;
};
/**
@ -138,13 +142,19 @@ static inline bool msc_block_is_empty(struct msc_block_desc *bdesc)
static inline struct msc_block_desc *
msc_win_block(struct msc_window *win, unsigned int block)
{
return sg_virt(&win->sgt.sgl[block]);
return sg_virt(&win->sgt->sgl[block]);
}
static inline size_t
msc_win_actual_bsz(struct msc_window *win, unsigned int block)
{
return win->sgt->sgl[block].length;
}
static inline dma_addr_t
msc_win_baddr(struct msc_window *win, unsigned int block)
{
return sg_dma_address(&win->sgt.sgl[block]);
return sg_dma_address(&win->sgt->sgl[block]);
}
static inline unsigned long
@ -179,17 +189,18 @@ static struct msc_window *msc_next_window(struct msc_window *win)
}
/**
* msc_oldest_window() - locate the window with oldest data
* msc_find_window() - find a window matching a given sg_table
* @msc: MSC device
* @sgt: SG table of the window
* @nonempty: skip over empty windows
*
* This should only be used in multiblock mode. Caller should hold the
* msc::user_count reference.
*
* Return: the oldest window with valid data
* Return: MSC window structure pointer or NULL if the window
* could not be found.
*/
static struct msc_window *msc_oldest_window(struct msc *msc)
static struct msc_window *
msc_find_window(struct msc *msc, struct sg_table *sgt, bool nonempty)
{
struct msc_window *win, *next = msc_next_window(msc->cur_win);
struct msc_window *win;
unsigned int found = 0;
if (list_empty(&msc->win_list))
@ -201,17 +212,40 @@ static struct msc_window *msc_oldest_window(struct msc *msc)
* something like 2, in which case we're good
*/
list_for_each_entry(win, &msc->win_list, entry) {
if (win == next)
if (win->sgt == sgt)
found++;
/* skip the empty ones */
if (msc_block_is_empty(msc_win_block(win, 0)))
if (nonempty && msc_block_is_empty(msc_win_block(win, 0)))
continue;
if (found)
return win;
}
return NULL;
}
/**
* msc_oldest_window() - locate the window with oldest data
* @msc: MSC device
*
* This should only be used in multiblock mode. Caller should hold the
* msc::user_count reference.
*
* Return: the oldest window with valid data
*/
static struct msc_window *msc_oldest_window(struct msc *msc)
{
struct msc_window *win;
if (list_empty(&msc->win_list))
return NULL;
win = msc_find_window(msc, msc_next_window(msc->cur_win)->sgt, true);
if (win)
return win;
return list_first_entry(&msc->win_list, struct msc_window, entry);
}
@ -234,7 +268,7 @@ static unsigned int msc_win_oldest_block(struct msc_window *win)
* with wrapping, last written block contains both the newest and the
* oldest data for this window.
*/
for (blk = 0; blk < win->nr_blocks; blk++) {
for (blk = 0; blk < win->nr_segs; blk++) {
bdesc = msc_win_block(win, blk);
if (msc_block_last_written(bdesc))
@ -366,7 +400,7 @@ static int msc_iter_block_advance(struct msc_iter *iter)
return msc_iter_win_advance(iter);
/* block advance */
if (++iter->block == iter->win->nr_blocks)
if (++iter->block == iter->win->nr_segs)
iter->block = 0;
/* no wrapping, sanity check in case there is no last written block */
@ -478,7 +512,7 @@ static void msc_buffer_clear_hw_header(struct msc *msc)
size_t hw_sz = sizeof(struct msc_block_desc) -
offsetof(struct msc_block_desc, hw_tag);
for (blk = 0; blk < win->nr_blocks; blk++) {
for (blk = 0; blk < win->nr_segs; blk++) {
struct msc_block_desc *bdesc = msc_win_block(win, blk);
memset(&bdesc->hw_tag, 0, hw_sz);
@ -667,7 +701,7 @@ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size)
goto err_out;
ret = -ENOMEM;
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
page = alloc_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order);
if (!page)
goto err_free_sgt;
@ -734,17 +768,17 @@ static struct page *msc_buffer_contig_get_page(struct msc *msc,
}
static int __msc_buffer_win_alloc(struct msc_window *win,
unsigned int nr_blocks)
unsigned int nr_segs)
{
struct scatterlist *sg_ptr;
void *block;
int i, ret;
ret = sg_alloc_table(&win->sgt, nr_blocks, GFP_KERNEL);
ret = sg_alloc_table(win->sgt, nr_segs, GFP_KERNEL);
if (ret)
return -ENOMEM;
for_each_sg(win->sgt.sgl, sg_ptr, nr_blocks, i) {
for_each_sg(win->sgt->sgl, sg_ptr, nr_segs, i) {
block = dma_alloc_coherent(msc_dev(win->msc)->parent->parent,
PAGE_SIZE, &sg_dma_address(sg_ptr),
GFP_KERNEL);
@ -754,7 +788,7 @@ static int __msc_buffer_win_alloc(struct msc_window *win,
sg_set_buf(sg_ptr, block, PAGE_SIZE);
}
return nr_blocks;
return nr_segs;
err_nomem:
for (i--; i >= 0; i--)
@ -762,11 +796,35 @@ err_nomem:
msc_win_block(win, i),
msc_win_baddr(win, i));
sg_free_table(&win->sgt);
sg_free_table(win->sgt);
return -ENOMEM;
}
#ifdef CONFIG_X86
static void msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs)
{
int i;
for (i = 0; i < nr_segs; i++)
/* Set the page as uncached */
set_memory_uc((unsigned long)msc_win_block(win, i), 1);
}
static void msc_buffer_set_wb(struct msc_window *win)
{
int i;
for (i = 0; i < win->nr_segs; i++)
/* Reset the page to write-back */
set_memory_wb((unsigned long)msc_win_block(win, i), 1);
}
#else /* !X86 */
static inline void
msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs) {}
static inline void msc_buffer_set_wb(struct msc_window *win) {}
#endif /* CONFIG_X86 */
/**
* msc_buffer_win_alloc() - alloc a window for a multiblock mode
* @msc: MSC device
@ -780,7 +838,7 @@ err_nomem:
static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
{
struct msc_window *win;
int ret = -ENOMEM, i;
int ret = -ENOMEM;
if (!nr_blocks)
return 0;
@ -797,13 +855,13 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
return -ENOMEM;
win->msc = msc;
win->sgt = &win->_sgt;
if (!list_empty(&msc->win_list)) {
struct msc_window *prev = list_last_entry(&msc->win_list,
struct msc_window,
entry);
/* This works as long as blocks are page-sized */
win->pgoff = prev->pgoff + prev->nr_blocks;
}
@ -811,13 +869,10 @@ static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
if (ret < 0)
goto err_nomem;
#ifdef CONFIG_X86
for (i = 0; i < ret; i++)
/* Set the page as uncached */
set_memory_uc((unsigned long)msc_win_block(win, i), 1);
#endif
msc_buffer_set_uc(win, ret);
win->nr_blocks = ret;
win->nr_segs = ret;
win->nr_blocks = nr_blocks;
if (list_empty(&msc->win_list)) {
msc->base = msc_win_block(win, 0);
@ -840,14 +895,14 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
{
int i;
for (i = 0; i < win->nr_blocks; i++) {
struct page *page = sg_page(&win->sgt.sgl[i]);
for (i = 0; i < win->nr_segs; i++) {
struct page *page = sg_page(&win->sgt->sgl[i]);
page->mapping = NULL;
dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
msc_win_block(win, i), msc_win_baddr(win, i));
}
sg_free_table(&win->sgt);
sg_free_table(win->sgt);
}
/**
@ -860,8 +915,6 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
*/
static void msc_buffer_win_free(struct msc *msc, struct msc_window *win)
{
int i;
msc->nr_pages -= win->nr_blocks;
list_del(&win->entry);
@ -870,11 +923,7 @@ static void msc_buffer_win_free(struct msc *msc, struct msc_window *win)
msc->base_addr = 0;
}
#ifdef CONFIG_X86
for (i = 0; i < win->nr_blocks; i++)
/* Reset the page to write-back */
set_memory_wb((unsigned long)msc_win_block(win, i), 1);
#endif
msc_buffer_set_wb(win);
__msc_buffer_win_free(msc, win);
@ -909,7 +958,7 @@ static void msc_buffer_relink(struct msc *msc)
next_win = list_next_entry(win, entry);
}
for (blk = 0; blk < win->nr_blocks; blk++) {
for (blk = 0; blk < win->nr_segs; blk++) {
struct msc_block_desc *bdesc = msc_win_block(win, blk);
memset(bdesc, 0, sizeof(*bdesc));
@ -920,7 +969,7 @@ static void msc_buffer_relink(struct msc *msc)
* Similarly to last window, last block should point
* to the first one.
*/
if (blk == win->nr_blocks - 1) {
if (blk == win->nr_segs - 1) {
sw_tag |= MSC_SW_TAG_LASTBLK;
bdesc->next_blk = msc_win_bpfn(win, 0);
} else {
@ -928,7 +977,7 @@ static void msc_buffer_relink(struct msc *msc)
}
bdesc->sw_tag = sw_tag;
bdesc->block_sz = PAGE_SIZE / 64;
bdesc->block_sz = msc_win_actual_bsz(win, blk) / 64;
}
}
@ -1087,6 +1136,7 @@ static int msc_buffer_free_unless_used(struct msc *msc)
static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff)
{
struct msc_window *win;
unsigned int blk;
if (msc->mode == MSC_MODE_SINGLE)
return msc_buffer_contig_get_page(msc, pgoff);
@ -1099,7 +1149,18 @@ static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff)
found:
pgoff -= win->pgoff;
return sg_page(&win->sgt.sgl[pgoff]);
for (blk = 0; blk < win->nr_segs; blk++) {
struct page *page = sg_page(&win->sgt->sgl[blk]);
size_t pgsz = PFN_DOWN(msc_win_actual_bsz(win, blk));
if (pgoff < pgsz)
return page + pgoff;
pgoff -= pgsz;
}
return NULL;
}
/**
@ -1386,10 +1447,9 @@ static int intel_th_msc_init(struct msc *msc)
static void msc_win_switch(struct msc *msc)
{
struct msc_window *last, *first;
struct msc_window *first;
first = list_first_entry(&msc->win_list, struct msc_window, entry);
last = list_last_entry(&msc->win_list, struct msc_window, entry);
if (msc_is_last_win(msc->cur_win))
msc->cur_win = first;
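
With multipage blocks, the segments of an msc_window are no longer all PAGE_SIZE, so the msc_buffer_get_page() hunk above walks the scatterlist and subtracts each segment's page count instead of indexing the table directly. The following self-contained sketch reproduces only that arithmetic, with made-up segment sizes and a plain array standing in for the sg_table.

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT	12
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)	/* bytes -> whole pages */

/* Illustrative segment table: base pfn and byte length per segment. */
struct seg {
	unsigned long first_pfn;
	size_t len;
};

/*
 * Resolve a page offset within a window made of variable-sized
 * segments, mirroring the loop in msc_buffer_get_page().
 */
static long seg_lookup_pfn(const struct seg *segs, unsigned int nr_segs,
			   unsigned long pgoff)
{
	unsigned int blk;

	for (blk = 0; blk < nr_segs; blk++) {
		size_t pgsz = PFN_DOWN(segs[blk].len);

		if (pgoff < pgsz)
			return segs[blk].first_pfn + pgoff;
		pgoff -= pgsz;	/* skip past this segment */
	}
	return -1;		/* offset lies beyond the window */
}

int main(void)
{
	/* A window of three segments: 4, 2 and 8 pages. */
	const struct seg segs[] = {
		{ .first_pfn = 1000, .len = 4 << PAGE_SHIFT },
		{ .first_pfn = 2000, .len = 2 << PAGE_SHIFT },
		{ .first_pfn = 3000, .len = 8 << PAGE_SHIFT },
	};

	printf("%ld\n", seg_lookup_pfn(segs, 3, 0));	/* 1000 */
	printf("%ld\n", seg_lookup_pfn(segs, 3, 5));	/* 2001 */
	printf("%ld\n", seg_lookup_pfn(segs, 3, 13));	/* 3007 */
	printf("%ld\n", seg_lookup_pfn(segs, 3, 14));	/* -1   */
	return 0;
}
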


@ -194,6 +194,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
.driver_data = (kernel_ulong_t)&intel_th_2x,
},
{
/* Ice Lake NNPI */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
.driver_data = (kernel_ulong_t)&intel_th_2x,
},
{ 0 },
};


@ -123,7 +123,7 @@ config FSL_IFC
config JZ4780_NEMC
bool "Ingenic JZ4780 SoC NEMC driver"
default y
depends on MACH_JZ4780 || COMPILE_TEST
depends on MIPS || COMPILE_TEST
depends on HAS_IOMEM && OF
help
This driver is for the NAND/External Memory Controller (NEMC) in


@ -41,9 +41,14 @@
#define NEMC_NFCSR_NFCEn(n) BIT((((n) - 1) << 1) + 1)
#define NEMC_NFCSR_TNFEn(n) BIT(16 + (n) - 1)
struct jz_soc_info {
u8 tas_tah_cycles_max;
};
struct jz4780_nemc {
spinlock_t lock;
struct device *dev;
const struct jz_soc_info *soc_info;
void __iomem *base;
struct clk *clk;
uint32_t clk_period;
@ -158,7 +163,7 @@ static bool jz4780_nemc_configure_bank(struct jz4780_nemc *nemc,
* Conversion of tBP and tAW cycle counts to values supported by the
* hardware (round up to the next supported value).
*/
static const uint32_t convert_tBP_tAW[] = {
static const u8 convert_tBP_tAW[] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
/* 11 - 12 -> 12 cycles */
@ -199,7 +204,7 @@ static bool jz4780_nemc_configure_bank(struct jz4780_nemc *nemc,
if (of_property_read_u32(node, "ingenic,nemc-tAS", &val) == 0) {
smcr &= ~NEMC_SMCR_TAS_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 15) {
if (cycles > nemc->soc_info->tas_tah_cycles_max) {
dev_err(nemc->dev, "tAS %u is too high (%u cycles)\n",
val, cycles);
return false;
@ -211,7 +216,7 @@ static bool jz4780_nemc_configure_bank(struct jz4780_nemc *nemc,
if (of_property_read_u32(node, "ingenic,nemc-tAH", &val) == 0) {
smcr &= ~NEMC_SMCR_TAH_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 15) {
if (cycles > nemc->soc_info->tas_tah_cycles_max) {
dev_err(nemc->dev, "tAH %u is too high (%u cycles)\n",
val, cycles);
return false;
@ -275,6 +280,10 @@ static int jz4780_nemc_probe(struct platform_device *pdev)
if (!nemc)
return -ENOMEM;
nemc->soc_info = device_get_match_data(dev);
if (!nemc->soc_info)
return -EINVAL;
spin_lock_init(&nemc->lock);
nemc->dev = dev;
@ -367,8 +376,17 @@ static int jz4780_nemc_remove(struct platform_device *pdev)
return 0;
}
static const struct jz_soc_info jz4740_soc_info = {
.tas_tah_cycles_max = 7,
};
static const struct jz_soc_info jz4780_soc_info = {
.tas_tah_cycles_max = 15,
};
static const struct of_device_id jz4780_nemc_dt_match[] = {
{ .compatible = "ingenic,jz4780-nemc" },
{ .compatible = "ingenic,jz4740-nemc", .data = &jz4740_soc_info, },
{ .compatible = "ingenic,jz4780-nemc", .data = &jz4780_soc_info, },
{},
};
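
The new struct jz_soc_info makes the maximum tAS/tAH cycle count a per-SoC property (7 on the JZ4740, 15 on the JZ4780) rather than a hard-coded 15. A short sketch of the check follows, assuming the usual round-up conversion from nanoseconds to NEMC clock cycles; timing_fits() and clk_period_ns are illustrative stand-ins, not names from the driver.

#include <stdio.h>
#include <stdbool.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Per-SoC limit, as in the new struct jz_soc_info. */
struct soc_info {
	unsigned int tas_tah_cycles_max;
};

/*
 * Convert a timing requirement in ns to NEMC clock cycles (rounding
 * up) and verify it fits the SoC's register field.  clk_period_ns is
 * a stand-in for what the driver derives from the NEMC clock rate.
 */
static bool timing_fits(const struct soc_info *soc, unsigned int ns,
			unsigned int clk_period_ns, unsigned int *cycles)
{
	*cycles = DIV_ROUND_UP(ns, clk_period_ns);
	return *cycles <= soc->tas_tah_cycles_max;
}

int main(void)
{
	const struct soc_info jz4740 = { .tas_tah_cycles_max = 7 };
	const struct soc_info jz4780 = { .tas_tah_cycles_max = 15 };
	unsigned int cycles;

	/* 30 ns tAS with a 5 ns NEMC clock -> 6 cycles: fits both SoCs. */
	printf("jz4740: %s (%u cycles)\n",
	       timing_fits(&jz4740, 30, 5, &cycles) ? "ok" : "too high", cycles);

	/* 60 ns tAS -> 12 cycles: fits the JZ4780 but not the JZ4740. */
	printf("jz4740: %s (%u cycles)\n",
	       timing_fits(&jz4740, 60, 5, &cycles) ? "ok" : "too high", cycles);
	printf("jz4780: %s (%u cycles)\n",
	       timing_fits(&jz4780, 60, 5, &cycles) ? "ok" : "too high", cycles);
	return 0;
}
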


@ -9,7 +9,6 @@ config SENSORS_LIS3LV02D
tristate
depends on INPUT
select INPUT_POLLDEV
default n
config AD525X_DPOT
tristate "Analog Devices Digital Potentiometers"
@ -62,7 +61,6 @@ config ATMEL_TCLIB
config DUMMY_IRQ
tristate "Dummy IRQ handler"
default n
---help---
This module accepts a single 'irq' parameter, which it should register for.
The sole purpose of this module is to help with debugging of systems on
@ -118,7 +116,6 @@ config PHANTOM
config INTEL_MID_PTI
tristate "Parallel Trace Interface for MIPI P1149.7 cJTAG standard"
depends on PCI && TTY && (X86_INTEL_MID || COMPILE_TEST)
default n
help
The PTI (Parallel Trace Interface) driver directs
trace data routed from various parts in the system out
@ -194,7 +191,6 @@ config ATMEL_SSC
config ENCLOSURE_SERVICES
tristate "Enclosure Services"
default n
help
Provides support for intelligent enclosures (bays which
contain storage devices). You also need either a host
@ -218,7 +214,6 @@ config SGI_XP
config CS5535_MFGPT
tristate "CS5535/CS5536 Geode Multi-Function General Purpose Timer (MFGPT) support"
depends on MFD_CS5535
default n
help
This driver provides access to MFGPT functionality for other
drivers that need timers. MFGPTs are available in the CS5535 and
@ -251,7 +246,6 @@ config CS5535_CLOCK_EVENT_SRC
config HP_ILO
tristate "Channel interface driver for the HP iLO processor"
depends on PCI
default n
help
The channel interface driver allows applications to communicate
with iLO management processors present on HP ProLiant servers.
@ -286,7 +280,6 @@ config QCOM_FASTRPC
config SGI_GRU
tristate "SGI GRU driver"
depends on X86_UV && SMP
default n
select MMU_NOTIFIER
---help---
The GRU is a hardware resource located in the system chipset. The GRU
@ -301,7 +294,6 @@ config SGI_GRU
config SGI_GRU_DEBUG
bool "SGI GRU driver debug"
depends on SGI_GRU
default n
---help---
This option enables additional debugging code for the SGI GRU driver.
If you are unsure, say N.
@ -359,7 +351,6 @@ config SENSORS_BH1770
config SENSORS_APDS990X
tristate "APDS990X combined als and proximity sensors"
depends on I2C
default n
---help---
Say Y here if you want to build a driver for Avago APDS990x
combined ambient light and proximity sensor chip.
@ -387,7 +378,6 @@ config DS1682
config SPEAR13XX_PCIE_GADGET
bool "PCIe gadget support for SPEAr13XX platform"
depends on ARCH_SPEAR13XX && BROKEN
default n
help
This option enables gadget support for PCIe controller. If
board file defines any controller as PCIe endpoint then a sysfs
@ -397,6 +387,7 @@ config SPEAR13XX_PCIE_GADGET
config VMWARE_BALLOON
tristate "VMware Balloon Driver"
depends on VMWARE_VMCI && X86 && HYPERVISOR_GUEST
select MEMORY_BALLOON
help
This is VMware physical memory management driver which acts
like a "balloon" that can be inflated to reclaim physical pages
@ -431,15 +422,6 @@ config PCH_PHUB
To compile this driver as a module, choose M here: the module will
be called pch_phub.
config USB_SWITCH_FSA9480
tristate "FSA9480 USB Switch"
depends on I2C
help
The FSA9480 is a USB port accessory detector and switch.
The FSA9480 is fully controlled using I2C and enables USB data,
stereo and mono audio, video, microphone and UART data to use
a common connector port.
config LATTICE_ECP3_CONFIG
tristate "Lattice ECP3 FPGA bitstream configuration via SPI"
depends on SPI && SYSFS
@ -481,6 +463,18 @@ config PCI_ENDPOINT_TEST
Enable this configuration option to enable the host side test driver
for PCI Endpoint.
config XILINX_SDFEC
tristate "Xilinx SDFEC 16"
help
This option enables support for the Xilinx SDFEC (Soft Decision
Forward Error Correction) driver. This enables a char driver
for the SDFEC.
You may select this driver if your design instantiates the
SDFEC(16nm) hardened block. To compile this as a module choose M.
If unsure, say N.
config MISC_RTSX
tristate
default MISC_RTSX_PCI || MISC_RTSX_USB


@ -42,7 +42,6 @@ obj-$(CONFIG_VMWARE_BALLOON) += vmw_balloon.o
obj-$(CONFIG_PCH_PHUB) += pch_phub.o
obj-y += ti-st/
obj-y += lis3lv02d/
obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
obj-$(CONFIG_ALTERA_STAPL) +=altera-stapl/
obj-$(CONFIG_INTEL_MEI) += mei/
obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci/
@ -59,3 +58,4 @@ obj-$(CONFIG_OCXL) += ocxl/
obj-y += cardreader/
obj-$(CONFIG_PVPANIC) += pvpanic.o
obj-$(CONFIG_HABANA_AI) += habanalabs/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o


@ -5,6 +5,5 @@ comment "Altera FPGA firmware download module (requires I2C)"
config ALTERA_STAPL
tristate "Altera FPGA firmware download module"
depends on I2C
default n
help
An Altera FPGA module. Say Y when you want to support this tool.


@ -5,7 +5,6 @@
menuconfig C2PORT
tristate "Silicon Labs C2 port support"
default n
help
This option enables support for Silicon Labs C2 port used to
program Silicon micro controller chips (and other 8051 compatible).
@ -24,7 +23,6 @@ if C2PORT
config C2PORT_DURAMAR_2150
tristate "C2 port support for Eurotech's Duramar 2150"
depends on X86
default n
help
This option enables C2 support for the Eurotech's Duramar 2150
on board micro controller.


@ -15,7 +15,6 @@ config CB710_CORE
config CB710_DEBUG
bool "Enable driver debugging"
depends on CB710_CORE != n
default n
help
This is an option for use by developers; most people should
say N here. This adds a lot of debugging output to dmesg.


@ -5,16 +5,13 @@
config CXL_BASE
bool
default n
select PPC_COPRO_BASE
config CXL_AFU_DRIVER_OPS
bool
default n
config CXL_LIB
bool
default n
config CXL
tristate "Support for IBM Coherent Accelerators (CXL)"


@ -1,7 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
config ECHO
tristate "Line Echo Canceller support"
default n
---help---
This driver provides line echo cancelling support for mISDN and
Zaptel drivers.


@ -2,7 +2,7 @@
/*
* ee1004 - driver for DDR4 SPD EEPROMs
*
* Copyright (C) 2017 Jean Delvare
* Copyright (C) 2017-2019 Jean Delvare
*
* Based on the at24 driver:
* Copyright (C) 2005-2007 David Brownell
@ -53,6 +53,24 @@ MODULE_DEVICE_TABLE(i2c, ee1004_ids);
/*-------------------------------------------------------------------------*/
static int ee1004_get_current_page(void)
{
int err;
err = i2c_smbus_read_byte(ee1004_set_page[0]);
if (err == -ENXIO) {
/* Nack means page 1 is selected */
return 1;
}
if (err < 0) {
/* Anything else is a real error, bail out */
return err;
}
/* Ack means page 0 is selected, returned value meaningless */
return 0;
}
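
The new ee1004_get_current_page() decodes the ACK/NACK behaviour of the page-select address: a NACKed read (-ENXIO) means page 1 is selected, any other error is fatal, and an ACK means page 0 with a meaningless data byte. A tiny sketch of that decode follows, with a stubbed read standing in for i2c_smbus_read_byte(); the stub is not the real SMBus API.

#include <stdio.h>
#include <errno.h>

/*
 * Stand-in for i2c_smbus_read_byte() on the page-select address.
 * Returns >= 0 on ACK, -ENXIO on NACK, another negative errno on failure.
 */
static int fake_read_page_select(int behaviour)
{
	return behaviour;
}

/* Decode the currently selected SPD page, as ee1004_get_current_page() does. */
static int get_current_page(int read_result)
{
	if (read_result == -ENXIO)
		return 1;		/* NACK: page 1 is selected */
	if (read_result < 0)
		return read_result;	/* real error, propagate it */
	return 0;			/* ACK: page 0, returned byte is meaningless */
}

int main(void)
{
	printf("%d\n", get_current_page(fake_read_page_select(0x00)));    /* 0 */
	printf("%d\n", get_current_page(fake_read_page_select(-ENXIO)));  /* 1 */
	printf("%d\n", get_current_page(fake_read_page_select(-EIO)));    /* -EIO */
	return 0;
}

The later hunk reuses the same decode after a page-select write, since some modules apparently switch pages without acking the command.
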
static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf,
unsigned int offset, size_t count)
{
@ -102,6 +120,16 @@ static ssize_t ee1004_read(struct file *filp, struct kobject *kobj,
/* Data is ignored */
status = i2c_smbus_write_byte(ee1004_set_page[page],
0x00);
if (status == -ENXIO) {
/*
* Don't give up just yet. Some memory
* modules will select the page but not
* ack the command. Check which page is
* selected now.
*/
if (ee1004_get_current_page() == page)
status = 0;
}
if (status < 0) {
dev_err(dev, "Failed to select page %d (%d)\n",
page, status);
@ -186,17 +214,10 @@ static int ee1004_probe(struct i2c_client *client,
}
/* Remember current page to avoid unneeded page select */
err = i2c_smbus_read_byte(ee1004_set_page[0]);
if (err == -ENXIO) {
/* Nack means page 1 is selected */
ee1004_current_page = 1;
} else if (err < 0) {
/* Anything else is a real error, bail out */
err = ee1004_get_current_page();
if (err < 0)
goto err_clients;
} else {
/* Ack means page 0 is selected, returned value meaningless */
ee1004_current_page = 0;
}
ee1004_current_page = err;
dev_dbg(&client->dev, "Currently selected page: %d\n",
ee1004_current_page);
mutex_unlock(&ee1004_bus_lock);


@ -115,7 +115,6 @@ static struct dentry *csr_dbgdir;
* @client: i2c client used to perform IO operations
*
* @ee_file: EEPROM read/write sysfs-file
* @csr_file: CSR read/write debugfs-node
*/
struct idt_smb_seq;
struct idt_89hpesx_dev {
@ -137,7 +136,6 @@ struct idt_89hpesx_dev {
struct bin_attribute *ee_file;
struct dentry *csr_dir;
struct dentry *csr_file;
};
/*
@ -1378,8 +1376,8 @@ static void idt_create_dbgfs_files(struct idt_89hpesx_dev *pdev)
pdev->csr_dir = debugfs_create_dir(fname, csr_dbgdir);
/* Create Debugfs file for CSR read/write operations */
pdev->csr_file = debugfs_create_file(cli->name, 0600,
pdev->csr_dir, pdev, &csr_dbgfs_ops);
debugfs_create_file(cli->name, 0600, pdev->csr_dir, pdev,
&csr_dbgfs_ops);
}
/*


@ -1,547 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* fsa9480.c - FSA9480 micro USB switch device driver
*
* Copyright (C) 2010 Samsung Electronics
* Minkyu Kang <mk7.kang@samsung.com>
* Wonguk Jeong <wonguk.jeong@samsung.com>
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/i2c.h>
#include <linux/platform_data/fsa9480.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
/* FSA9480 I2C registers */
#define FSA9480_REG_DEVID 0x01
#define FSA9480_REG_CTRL 0x02
#define FSA9480_REG_INT1 0x03
#define FSA9480_REG_INT2 0x04
#define FSA9480_REG_INT1_MASK 0x05
#define FSA9480_REG_INT2_MASK 0x06
#define FSA9480_REG_ADC 0x07
#define FSA9480_REG_TIMING1 0x08
#define FSA9480_REG_TIMING2 0x09
#define FSA9480_REG_DEV_T1 0x0a
#define FSA9480_REG_DEV_T2 0x0b
#define FSA9480_REG_BTN1 0x0c
#define FSA9480_REG_BTN2 0x0d
#define FSA9480_REG_CK 0x0e
#define FSA9480_REG_CK_INT1 0x0f
#define FSA9480_REG_CK_INT2 0x10
#define FSA9480_REG_CK_INTMASK1 0x11
#define FSA9480_REG_CK_INTMASK2 0x12
#define FSA9480_REG_MANSW1 0x13
#define FSA9480_REG_MANSW2 0x14
/* Control */
#define CON_SWITCH_OPEN (1 << 4)
#define CON_RAW_DATA (1 << 3)
#define CON_MANUAL_SW (1 << 2)
#define CON_WAIT (1 << 1)
#define CON_INT_MASK (1 << 0)
#define CON_MASK (CON_SWITCH_OPEN | CON_RAW_DATA | \
CON_MANUAL_SW | CON_WAIT)
/* Device Type 1 */
#define DEV_USB_OTG (1 << 7)
#define DEV_DEDICATED_CHG (1 << 6)
#define DEV_USB_CHG (1 << 5)
#define DEV_CAR_KIT (1 << 4)
#define DEV_UART (1 << 3)
#define DEV_USB (1 << 2)
#define DEV_AUDIO_2 (1 << 1)
#define DEV_AUDIO_1 (1 << 0)
#define DEV_T1_USB_MASK (DEV_USB_OTG | DEV_USB)
#define DEV_T1_UART_MASK (DEV_UART)
#define DEV_T1_CHARGER_MASK (DEV_DEDICATED_CHG | DEV_USB_CHG)
/* Device Type 2 */
#define DEV_AV (1 << 6)
#define DEV_TTY (1 << 5)
#define DEV_PPD (1 << 4)
#define DEV_JIG_UART_OFF (1 << 3)
#define DEV_JIG_UART_ON (1 << 2)
#define DEV_JIG_USB_OFF (1 << 1)
#define DEV_JIG_USB_ON (1 << 0)
#define DEV_T2_USB_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON)
#define DEV_T2_UART_MASK (DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
#define DEV_T2_JIG_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON | \
DEV_JIG_UART_OFF | DEV_JIG_UART_ON)
/*
* Manual Switch
* D- [7:5] / D+ [4:2]
* 000: Open all / 001: USB / 010: AUDIO / 011: UART / 100: V_AUDIO
*/
#define SW_VAUDIO ((4 << 5) | (4 << 2))
#define SW_UART ((3 << 5) | (3 << 2))
#define SW_AUDIO ((2 << 5) | (2 << 2))
#define SW_DHOST ((1 << 5) | (1 << 2))
#define SW_AUTO ((0 << 5) | (0 << 2))
/* Interrupt 1 */
#define INT_DETACH (1 << 1)
#define INT_ATTACH (1 << 0)
struct fsa9480_usbsw {
struct i2c_client *client;
struct fsa9480_platform_data *pdata;
int dev1;
int dev2;
int mansw;
};
static struct fsa9480_usbsw *chip;
static int fsa9480_write_reg(struct i2c_client *client,
int reg, int value)
{
int ret;
ret = i2c_smbus_write_byte_data(client, reg, value);
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static int fsa9480_read_reg(struct i2c_client *client, int reg)
{
int ret;
ret = i2c_smbus_read_byte_data(client, reg);
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static int fsa9480_read_irq(struct i2c_client *client, int *value)
{
int ret;
ret = i2c_smbus_read_i2c_block_data(client,
FSA9480_REG_INT1, 2, (u8 *)value);
*value &= 0xffff;
if (ret < 0)
dev_err(&client->dev, "%s: err %d\n", __func__, ret);
return ret;
}
static void fsa9480_set_switch(const char *buf)
{
struct fsa9480_usbsw *usbsw = chip;
struct i2c_client *client = usbsw->client;
unsigned int value;
unsigned int path = 0;
value = fsa9480_read_reg(client, FSA9480_REG_CTRL);
if (!strncmp(buf, "VAUDIO", 6)) {
path = SW_VAUDIO;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "UART", 4)) {
path = SW_UART;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "AUDIO", 5)) {
path = SW_AUDIO;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "DHOST", 5)) {
path = SW_DHOST;
value &= ~CON_MANUAL_SW;
} else if (!strncmp(buf, "AUTO", 4)) {
path = SW_AUTO;
value |= CON_MANUAL_SW;
} else {
printk(KERN_ERR "Wrong command\n");
return;
}
usbsw->mansw = path;
fsa9480_write_reg(client, FSA9480_REG_MANSW1, path);
fsa9480_write_reg(client, FSA9480_REG_CTRL, value);
}
static ssize_t fsa9480_get_switch(char *buf)
{
struct fsa9480_usbsw *usbsw = chip;
struct i2c_client *client = usbsw->client;
unsigned int value;
value = fsa9480_read_reg(client, FSA9480_REG_MANSW1);
if (value == SW_VAUDIO)
return sprintf(buf, "VAUDIO\n");
else if (value == SW_UART)
return sprintf(buf, "UART\n");
else if (value == SW_AUDIO)
return sprintf(buf, "AUDIO\n");
else if (value == SW_DHOST)
return sprintf(buf, "DHOST\n");
else if (value == SW_AUTO)
return sprintf(buf, "AUTO\n");
else
return sprintf(buf, "%x", value);
}
static ssize_t fsa9480_show_device(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct fsa9480_usbsw *usbsw = dev_get_drvdata(dev);
struct i2c_client *client = usbsw->client;
int dev1, dev2;
dev1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
dev2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
if (!dev1 && !dev2)
return sprintf(buf, "NONE\n");
/* USB */
if (dev1 & DEV_T1_USB_MASK || dev2 & DEV_T2_USB_MASK)
return sprintf(buf, "USB\n");
/* UART */
if (dev1 & DEV_T1_UART_MASK || dev2 & DEV_T2_UART_MASK)
return sprintf(buf, "UART\n");
/* CHARGER */
if (dev1 & DEV_T1_CHARGER_MASK)
return sprintf(buf, "CHARGER\n");
/* JIG */
if (dev2 & DEV_T2_JIG_MASK)
return sprintf(buf, "JIG\n");
return sprintf(buf, "UNKNOWN\n");
}
static ssize_t fsa9480_show_manualsw(struct device *dev,
struct device_attribute *attr, char *buf)
{
return fsa9480_get_switch(buf);
}
static ssize_t fsa9480_set_manualsw(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
fsa9480_set_switch(buf);
return count;
}
static DEVICE_ATTR(device, S_IRUGO, fsa9480_show_device, NULL);
static DEVICE_ATTR(switch, S_IRUGO | S_IWUSR,
fsa9480_show_manualsw, fsa9480_set_manualsw);
static struct attribute *fsa9480_attributes[] = {
&dev_attr_device.attr,
&dev_attr_switch.attr,
NULL
};
static const struct attribute_group fsa9480_group = {
.attrs = fsa9480_attributes,
};
static void fsa9480_detect_dev(struct fsa9480_usbsw *usbsw, int intr)
{
int val1, val2, ctrl;
struct fsa9480_platform_data *pdata = usbsw->pdata;
struct i2c_client *client = usbsw->client;
val1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
val2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
ctrl = fsa9480_read_reg(client, FSA9480_REG_CTRL);
dev_info(&client->dev, "intr: 0x%x, dev1: 0x%x, dev2: 0x%x\n",
intr, val1, val2);
if (!intr)
goto out;
if (intr & INT_ATTACH) { /* Attached */
/* USB */
if (val1 & DEV_T1_USB_MASK || val2 & DEV_T2_USB_MASK) {
if (pdata->usb_cb)
pdata->usb_cb(FSA9480_ATTACHED);
if (usbsw->mansw) {
fsa9480_write_reg(client,
FSA9480_REG_MANSW1, usbsw->mansw);
}
}
/* UART */
if (val1 & DEV_T1_UART_MASK || val2 & DEV_T2_UART_MASK) {
if (pdata->uart_cb)
pdata->uart_cb(FSA9480_ATTACHED);
if (!(ctrl & CON_MANUAL_SW)) {
fsa9480_write_reg(client,
FSA9480_REG_MANSW1, SW_UART);
}
}
/* CHARGER */
if (val1 & DEV_T1_CHARGER_MASK) {
if (pdata->charger_cb)
pdata->charger_cb(FSA9480_ATTACHED);
}
/* JIG */
if (val2 & DEV_T2_JIG_MASK) {
if (pdata->jig_cb)
pdata->jig_cb(FSA9480_ATTACHED);
}
} else if (intr & INT_DETACH) { /* Detached */
/* USB */
if (usbsw->dev1 & DEV_T1_USB_MASK ||
usbsw->dev2 & DEV_T2_USB_MASK) {
if (pdata->usb_cb)
pdata->usb_cb(FSA9480_DETACHED);
}
/* UART */
if (usbsw->dev1 & DEV_T1_UART_MASK ||
usbsw->dev2 & DEV_T2_UART_MASK) {
if (pdata->uart_cb)
pdata->uart_cb(FSA9480_DETACHED);
}
/* CHARGER */
if (usbsw->dev1 & DEV_T1_CHARGER_MASK) {
if (pdata->charger_cb)
pdata->charger_cb(FSA9480_DETACHED);
}
/* JIG */
if (usbsw->dev2 & DEV_T2_JIG_MASK) {
if (pdata->jig_cb)
pdata->jig_cb(FSA9480_DETACHED);
}
}
usbsw->dev1 = val1;
usbsw->dev2 = val2;
out:
ctrl &= ~CON_INT_MASK;
fsa9480_write_reg(client, FSA9480_REG_CTRL, ctrl);
}
static irqreturn_t fsa9480_irq_handler(int irq, void *data)
{
struct fsa9480_usbsw *usbsw = data;
struct i2c_client *client = usbsw->client;
int intr;
/* clear interrupt */
fsa9480_read_irq(client, &intr);
/* device detection */
fsa9480_detect_dev(usbsw, intr);
return IRQ_HANDLED;
}
static int fsa9480_irq_init(struct fsa9480_usbsw *usbsw)
{
struct fsa9480_platform_data *pdata = usbsw->pdata;
struct i2c_client *client = usbsw->client;
int ret;
int intr;
unsigned int ctrl = CON_MASK;
/* clear interrupt */
fsa9480_read_irq(client, &intr);
/* unmask interrupt (attach/detach only) */
fsa9480_write_reg(client, FSA9480_REG_INT1_MASK, 0xfc);
fsa9480_write_reg(client, FSA9480_REG_INT2_MASK, 0x1f);
usbsw->mansw = fsa9480_read_reg(client, FSA9480_REG_MANSW1);
if (usbsw->mansw)
ctrl &= ~CON_MANUAL_SW; /* Manual Switching Mode */
fsa9480_write_reg(client, FSA9480_REG_CTRL, ctrl);
if (pdata && pdata->cfg_gpio)
pdata->cfg_gpio();
if (client->irq) {
ret = request_threaded_irq(client->irq, NULL,
fsa9480_irq_handler,
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
"fsa9480 micro USB", usbsw);
if (ret) {
dev_err(&client->dev, "failed to request IRQ\n");
return ret;
}
if (pdata)
device_init_wakeup(&client->dev, pdata->wakeup);
}
return 0;
}
static int fsa9480_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
struct fsa9480_usbsw *usbsw;
int ret = 0;
if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
return -EIO;
usbsw = kzalloc(sizeof(struct fsa9480_usbsw), GFP_KERNEL);
if (!usbsw) {
dev_err(&client->dev, "failed to allocate driver data\n");
return -ENOMEM;
}
usbsw->client = client;
usbsw->pdata = client->dev.platform_data;
chip = usbsw;
i2c_set_clientdata(client, usbsw);
ret = fsa9480_irq_init(usbsw);
if (ret)
goto fail1;
ret = sysfs_create_group(&client->dev.kobj, &fsa9480_group);
if (ret) {
dev_err(&client->dev,
"failed to create fsa9480 attribute group\n");
goto fail2;
}
/* ADC Detect Time: 500ms */
fsa9480_write_reg(client, FSA9480_REG_TIMING1, 0x6);
if (chip->pdata->reset_cb)
chip->pdata->reset_cb();
/* device detection */
fsa9480_detect_dev(usbsw, INT_ATTACH);
pm_runtime_set_active(&client->dev);
return 0;
fail2:
if (client->irq)
free_irq(client->irq, usbsw);
fail1:
kfree(usbsw);
return ret;
}
static int fsa9480_remove(struct i2c_client *client)
{
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
if (client->irq)
free_irq(client->irq, usbsw);
sysfs_remove_group(&client->dev.kobj, &fsa9480_group);
device_init_wakeup(&client->dev, 0);
kfree(usbsw);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int fsa9480_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
struct fsa9480_platform_data *pdata = usbsw->pdata;
if (device_may_wakeup(&client->dev) && client->irq)
enable_irq_wake(client->irq);
if (pdata->usb_power)
pdata->usb_power(0);
return 0;
}
static int fsa9480_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
int dev1, dev2;
if (device_may_wakeup(&client->dev) && client->irq)
disable_irq_wake(client->irq);
/*
* Clear Pending interrupt. Note that detect_dev does what
* the interrupt handler does. So, we don't miss pending and
* we reenable interrupt if there is one.
*/
fsa9480_read_reg(client, FSA9480_REG_INT1);
fsa9480_read_reg(client, FSA9480_REG_INT2);
dev1 = fsa9480_read_reg(client, FSA9480_REG_DEV_T1);
dev2 = fsa9480_read_reg(client, FSA9480_REG_DEV_T2);
/* device detection */
fsa9480_detect_dev(usbsw, (dev1 || dev2) ? INT_ATTACH : INT_DETACH);
return 0;
}
static SIMPLE_DEV_PM_OPS(fsa9480_pm_ops, fsa9480_suspend, fsa9480_resume);
#define FSA9480_PM_OPS (&fsa9480_pm_ops)
#else
#define FSA9480_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
static const struct i2c_device_id fsa9480_id[] = {
{"fsa9480", 0},
{}
};
MODULE_DEVICE_TABLE(i2c, fsa9480_id);
static struct i2c_driver fsa9480_i2c_driver = {
.driver = {
.name = "fsa9480",
.pm = FSA9480_PM_OPS,
},
.probe = fsa9480_probe,
.remove = fsa9480_remove,
.id_table = fsa9480_id,
};
module_i2c_driver(fsa9480_i2c_driver);
MODULE_AUTHOR("Minkyu Kang <mk7.kang@samsung.com>");
MODULE_DESCRIPTION("FSA9480 USB Switch driver");
MODULE_LICENSE("GPL");


@ -7,7 +7,6 @@ menuconfig GENWQE
tristate "GenWQE PCIe Accelerator"
depends on PCI && 64BIT
select CRC_ITU_T
default n
help
Enables PCIe card driver for IBM GenWQE accelerators.
The user-space interface is described in


@ -18,7 +18,7 @@ int hl_asid_init(struct hl_device *hdev)
mutex_init(&hdev->asid_mutex);
/* ASID 0 is reserved for KMD */
/* ASID 0 is reserved for KMD and device CPU */
set_bit(0, hdev->asid_bitmap);
return 0;


@ -682,14 +682,12 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
u32 tmp;
rc = hl_poll_timeout_memory(hdev,
(u64) (uintptr_t) &ctx->thread_ctx_switch_wait_token,
jiffies_to_usecs(hdev->timeout_jiffies),
&tmp);
&ctx->thread_ctx_switch_wait_token, tmp, (tmp == 1),
100, jiffies_to_usecs(hdev->timeout_jiffies));
if (rc || !tmp) {
if (rc == -ETIMEDOUT) {
dev_err(hdev->dev,
"context switch phase didn't finish in time\n");
rc = -ETIMEDOUT;
"context switch phase timeout (%d)\n", tmp);
goto out;
}
}
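
The hunk above switches to a poll helper that takes the sampled value, a condition on it, a sleep interval and a timeout in microseconds. Below is a simplified user-space analogue of such a poll loop (not the driver's hl_poll_timeout_memory() macro itself); the callback-based condition and usleep() are stand-ins for the macro's expression argument and the driver's delays.

#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <unistd.h>

/*
 * Re-read a memory location until a condition on the sampled value
 * holds or the timeout expires.  The kernel-style macro takes the
 * condition as an expression; here it is a callback for plain C.
 */
static int poll_timeout_memory(const volatile unsigned int *addr,
			       unsigned int *val,
			       bool (*cond)(unsigned int),
			       unsigned int sleep_us,
			       unsigned int timeout_us)
{
	unsigned int waited = 0;

	for (;;) {
		*val = *addr;			/* sample the location */
		if (cond(*val))
			return 0;		/* condition met */
		if (waited >= timeout_us)
			return -ETIMEDOUT;	/* final sample already taken */
		usleep(sleep_us);
		waited += sleep_us;
	}
}

static bool is_one(unsigned int v)
{
	return v == 1;
}

int main(void)
{
	volatile unsigned int token = 0;
	unsigned int tmp;
	int rc;

	/* Nothing ever sets the token here, so this times out quickly. */
	rc = poll_timeout_memory(&token, &tmp, is_one, 100, 1000);
	printf("rc=%d tmp=%u\n", rc, tmp);
	return 0;
}
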


@ -31,9 +31,13 @@ static void hl_ctx_fini(struct hl_ctx *ctx)
* Coresight might be still working by accessing addresses
* related to the stopped engines. Hence stop it explicitly.
*/
hdev->asic_funcs->halt_coresight(hdev);
if (hdev->in_debug)
hl_device_set_debug_mode(hdev, false);
hl_vm_ctx_fini(ctx);
hl_asid_free(hdev, ctx->asid);
} else {
hl_mmu_ctx_fini(ctx);
}
}
@ -117,6 +121,11 @@ int hl_ctx_init(struct hl_device *hdev, struct hl_ctx *ctx, bool is_kernel_ctx)
if (is_kernel_ctx) {
ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */
rc = hl_mmu_ctx_init(ctx);
if (rc) {
dev_err(hdev->dev, "Failed to init mmu ctx module\n");
goto mem_ctx_err;
}
} else {
ctx->asid = hl_asid_alloc(hdev);
if (!ctx->asid) {


@ -355,7 +355,7 @@ static int mmu_show(struct seq_file *s, void *data)
struct hl_debugfs_entry *entry = s->private;
struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
struct hl_device *hdev = dev_entry->hdev;
struct hl_ctx *ctx = hdev->user_ctx;
struct hl_ctx *ctx;
u64 hop0_addr = 0, hop0_pte_addr = 0, hop0_pte = 0,
hop1_addr = 0, hop1_pte_addr = 0, hop1_pte = 0,
@ -367,6 +367,11 @@ static int mmu_show(struct seq_file *s, void *data)
if (!hdev->mmu_enable)
return 0;
if (dev_entry->mmu_asid == HL_KERNEL_ASID_ID)
ctx = hdev->kernel_ctx;
else
ctx = hdev->user_ctx;
if (!ctx) {
dev_err(hdev->dev, "no ctx available\n");
return 0;
@ -495,6 +500,36 @@ err:
return -EINVAL;
}
static int engines_show(struct seq_file *s, void *data)
{
struct hl_debugfs_entry *entry = s->private;
struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
struct hl_device *hdev = dev_entry->hdev;
hdev->asic_funcs->is_device_idle(hdev, NULL, s);
return 0;
}
static bool hl_is_device_va(struct hl_device *hdev, u64 addr)
{
struct asic_fixed_properties *prop = &hdev->asic_prop;
if (!hdev->mmu_enable)
goto out;
if (hdev->dram_supports_virtual_memory &&
addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address)
return true;
if (addr >= prop->va_space_host_start_address &&
addr < prop->va_space_host_end_address)
return true;
out:
return false;
}
static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
u64 *phys_addr)
{
@ -568,7 +603,6 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
struct asic_fixed_properties *prop = &hdev->asic_prop;
char tmp_buf[32];
u64 addr = entry->addr;
u32 val;
@ -577,11 +611,8 @@ static ssize_t hl_data_read32(struct file *f, char __user *buf,
if (*ppos)
return 0;
if (addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address &&
hdev->mmu_enable &&
hdev->dram_supports_virtual_memory) {
rc = device_va_to_pa(hdev, entry->addr, &addr);
if (hl_is_device_va(hdev, addr)) {
rc = device_va_to_pa(hdev, addr, &addr);
if (rc)
return rc;
}
@ -602,7 +633,6 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
struct asic_fixed_properties *prop = &hdev->asic_prop;
u64 addr = entry->addr;
u32 value;
ssize_t rc;
@ -611,11 +641,8 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
if (rc)
return rc;
if (addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address &&
hdev->mmu_enable &&
hdev->dram_supports_virtual_memory) {
rc = device_va_to_pa(hdev, entry->addr, &addr);
if (hl_is_device_va(hdev, addr)) {
rc = device_va_to_pa(hdev, addr, &addr);
if (rc)
return rc;
}
@ -877,6 +904,7 @@ static const struct hl_info_list hl_debugfs_list[] = {
{"userptr", userptr_show, NULL},
{"vm", vm_show, NULL},
{"mmu", mmu_show, mmu_write},
{"engines", engines_show, NULL}
};
static int hl_debugfs_open(struct inode *inode, struct file *file)

Some files were not shown because too many files have changed in this diff.