
Merge tag 'char-misc-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char/misc/other driver patches for 5.7-rc1.

  Lots of things in here, and it's later than expected due to some
  reverts to resolve some reported issues. All is now clean with no
  reported problems in linux-next.

  Included in here is:
   - interconnect updates
   - mei driver updates
   - uio updates
   - nvmem driver updates
   - soundwire updates
   - binderfs updates
   - coresight updates
   - habanalabs updates
   - mhi new bus type and core
   - extcon driver updates
   - some Kconfig cleanups
   - other small misc driver cleanups and updates

  As mentioned, all have been in linux-next for a while, and with the
  last two reverts, all is calm and good"

* tag 'char-misc-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (174 commits)
  Revert "driver core: platform: Initialize dma_parms for platform devices"
  Revert "amba: Initialize dma_parms for amba devices"
  amba: Initialize dma_parms for amba devices
  driver core: platform: Initialize dma_parms for platform devices
  bus: mhi: core: Drop the references to mhi_dev in mhi_destroy_device()
  bus: mhi: core: Initialize bhie field in mhi_cntrl for RDDM capture
  bus: mhi: core: Add support for reading MHI info from device
  misc: rtsx: set correct pcr_ops for rts522A
  speakup: misc: Use dynamic minor numbers for speakup devices
  mei: me: add cedar fork device ids
  coresight: do not use the BIT() macro in the UAPI header
  Documentation: provide IBM contacts for embargoed hardware
  nvmem: core: remove nvmem_sysfs_get_groups()
  nvmem: core: use is_bin_visible for permissions
  nvmem: core: use device_register and device_unregister
  nvmem: core: add root_only member to nvmem device struct
  extcon: axp288: Add wakeup support
  extcon: Mark extcon_get_edev_name() function as exported symbol
  extcon: palmas: Hide error messages if gpio returns -EPROBE_DEFER
  dt-bindings: extcon: usbc-cros-ec: convert extcon-usbc-cros-ec.txt to yaml format
  ...
Linus Torvalds 2020-04-03 13:22:40 -07:00
commit 0ad5b053d4
171 changed files with 16154 additions and 4025 deletions


@ -43,6 +43,20 @@ Description: Allows the root user to read or write directly through the
If the IOMMU is disabled, it also allows the root user to read
or write from the host a device VA of a host mapped memory
What: /sys/kernel/debug/habanalabs/hl<n>/data64
Date: Jan 2020
KernelVersion: 5.6
Contact: oded.gabbay@gmail.com
Description: Allows the root user to read or write 64 bit data directly
through the device's PCI bar. Writing to this file generates a
write transaction while reading from the file generates a read
transaction. This custom interface is needed (instead of using
the generic Linux user-space PCI mapping) because the DDR bar
is very small compared to the DDR memory and only the driver can
move the bar before and after the transaction.
If the IOMMU is disabled, it also allows the root user to read
or write from the host a device VA of a host mapped memory
What: /sys/kernel/debug/habanalabs/hl<n>/device
Date: Jan 2019
KernelVersion: 5.1


@ -0,0 +1,241 @@
What: /sys/bus/coresight/devices/<cti-name>/enable
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable/Disable the CTI hardware.
What: /sys/bus/coresight/devices/<cti-name>/powered
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Indicate if the CTI hardware is powered.
What: /sys/bus/coresight/devices/<cti-name>/ctmid
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Display the associated CTM ID.
What: /sys/bus/coresight/devices/<cti-name>/nr_trigger_cons
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Number of devices connected to triggers on this CTI.
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/name
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Name of connected device <N>.
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/in_signals
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Input trigger signals from connected device <N>.
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/in_types
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Functional types for the input trigger signals
from connected device <N>.
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/out_signals
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Output trigger signals to connected device <N>.
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/out_types
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Functional types for the output trigger signals
to connected device <N>.
What: /sys/bus/coresight/devices/<cti-name>/regs/inout_sel
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Select the index for inen and outen registers.
What: /sys/bus/coresight/devices/<cti-name>/regs/inen
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write the CTIINEN register selected by inout_sel.
What: /sys/bus/coresight/devices/<cti-name>/regs/outen
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write the CTIOUTEN register selected by inout_sel.
What: /sys/bus/coresight/devices/<cti-name>/regs/gate
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write CTIGATE register.
What: /sys/bus/coresight/devices/<cti-name>/regs/asicctl
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write ASICCTL register.
What: /sys/bus/coresight/devices/<cti-name>/regs/intack
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Write the INTACK register.
What: /sys/bus/coresight/devices/<cti-name>/regs/appset
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Set CTIAPPSET register to activate channel. Read back to
determine current value of register.
What: /sys/bus/coresight/devices/<cti-name>/regs/appclear
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Write APPCLEAR register to deactivate channel.
What: /sys/bus/coresight/devices/<cti-name>/regs/apppulse
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Write APPPULSE to pulse a channel active for one clock
cycle.
What: /sys/bus/coresight/devices/<cti-name>/regs/chinstatus
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read current status of channel inputs.
What: /sys/bus/coresight/devices/<cti-name>/regs/choutstatus
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read current status of channel outputs.
What: /sys/bus/coresight/devices/<cti-name>/regs/triginstatus
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read current status of input trigger signals.
What: /sys/bus/coresight/devices/<cti-name>/regs/trigoutstatus
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read current status of output trigger signals.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigin_attach
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Attach a CTI input trigger to a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigin_detach
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Detach a CTI input trigger from a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_attach
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Attach a CTI output trigger to a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_detach
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Detach a CTI output trigger from a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_gate_enable
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable CTIGATE for single channel (W) or list enabled
channels through the gate (R).
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_gate_disable
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Disable CTIGATE for single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_set
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Activate a single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_clear
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Deactivate a single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_pulse
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Pulse a single channel - activate for a single clock cycle.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_filtered
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) List of output triggers filtered across all connections.
What: /sys/bus/coresight/devices/<cti-name>/channels/trig_filter_enable
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable or disable trigger output signal filtering.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_inuse
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Show channels with at least one attached trigger signal.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_free
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Show channels with no attached trigger signals.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_sel
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Write channel number to select a channel to view, read to
see selected channel number.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_in
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read to see input triggers connected to selected view
channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_out
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (R) Read to see output triggers connected to selected view
channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_reset
Date: March 2020
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (W) Clear all channel / trigger programming.


@ -40,3 +40,11 @@ Description: (RW) Trigger window switch for the MSC's buffer, in
triggering a window switch for the buffer. Returns an error in any
other operating mode or attempts to write something other than "1".
What: /sys/bus/intel_th/devices/<intel_th_id>-msc<msc-id>/stop_on_full
Date: March 2020
KernelVersion: 5.7
Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Description: (RW) Configure whether trace stops when the last available window
becomes full (1/y/Y) or wraps around and continues until the next
window becomes available again (0/n/N).


@ -0,0 +1,16 @@
What: /sys/devices/*/<our-device>/nvmem
Date: December 2017
Contact: PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com>
Description: read-only access to the efuse on the Ingenic JZ4780 SoC
The SoC has a one time programmable 8K efuse that is
split into segments. The driver supports read only.
The segments are
0x000 64 bit Random Number
0x008 128 bit Ingenic Chip ID
0x018 128 bit Customer ID
0x028 3520 bit Reserved
0x1E0 8 bit Protect Segment
0x1E1 2296 bit HDMI Key
0x300 2048 bit Security boot key
Users: any user space application which wants to read the Chip
and Customer ID


@ -54,6 +54,9 @@ If you make a mistake with the syntax, the write will fail thus::
<debugfs>/dynamic_debug/control
-bash: echo: write error: Invalid argument
Note, for systems without 'debugfs' enabled, the control file can be
found in ``/proc/dynamic_debug/control``.
Viewing Dynamic Debug Behaviour
===============================


@ -0,0 +1,336 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
# Copyright 2019 Linaro Ltd.
%YAML 1.2
---
$id: http://devicetree.org/schemas/arm/coresight-cti.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ARM Coresight Cross Trigger Interface (CTI) device.
description: |
The CoreSight Embedded Cross Trigger (ECT) consists of CTI devices connected
to one or more CoreSight components and/or a CPU, with CTIs interconnected in
a star topology via the Cross Trigger Matrix (CTM), which is not programmable.
The ECT components are not part of the trace generation data path and are thus
not part of the CoreSight graph described in the general CoreSight bindings
file coresight.txt.
The CTI component properties define the connections between the individual
CTI and the components it is directly connected to, consisting of input and
output hardware trigger signals. CTIs can have a maximum number of input and
output hardware trigger signals (8 each for v1 CTI, 32 each for v2 CTI). The
number is defined at design time, the maximum of each defined in the DEVID
register.
CTIs are interconnected in a star topology via the CTM, using a number of
programmable channels, usually 4, but again implementation defined and
described in the DEVID register. The star topology is not required to be
described in the bindings as the actual connections are software
programmable.
In general the connections between CTI and components via the trigger signals
are implementation defined, except when the CTI is connected to an ARM v8
architecture core and optional ETM.
In this case the ARM v8 architecture defines the required signal connections
between CTI and the CPU core and ETM if present. In the case of a v8
architecturally connected CTI an additional compatible string is used to
indicate this feature (arm,coresight-cti-v8-arch).
When CTI trigger connection information is unavailable then a minimal driver
binding can be declared with no explicit trigger signals. This will result in
the driver detecting the maximum available triggers and channels from the
DEVID register and make them all available for use as a single default
connection. Any user / client application will require additional information
on the connections between the CTI and other components for correct operation.
This information might be found by enabling the Integration Test registers in
the driver (set CONFIG_CORESIGHT_CTI_INTEGRATION_TEST in Kernel
configuration). These registers may be used to explore the trigger connections
between CTI and other CoreSight components.
Certain triggers between CoreSight devices and the CTI have specific types
and usages. These can be defined along with the signal indexes with the
constants defined in <dt-bindings/arm/coresight-cti-dt.h>
For example a CTI connected to a core will usually have a DBGREQ signal. This
is defined in the binding as type PE_EDBGREQ. These types will appear in an
optional array alongside the signal indexes. Omitting types will default all
signals to GEN_IO.
Note that some hardware trigger signals can be connected to non-CoreSight
components (e.g. UART etc) depending on hardware implementation.
maintainers:
- Mike Leach <mike.leach@linaro.org>
allOf:
- $ref: /schemas/arm/primecell.yaml#
# Need a custom select here or 'arm,primecell' will match on lots of nodes
select:
properties:
compatible:
contains:
enum:
- arm,coresight-cti
required:
- compatible
properties:
$nodename:
pattern: "^cti(@[0-9a-f]+)$"
compatible:
oneOf:
- items:
- const: arm,coresight-cti
- const: arm,primecell
- items:
- const: arm,coresight-cti-v8-arch
- const: arm,coresight-cti
- const: arm,primecell
reg:
maxItems: 1
cpu:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Handle to cpu this device is associated with. This must appear in the
base cti node if compatible string arm,coresight-cti-v8-arch is used,
or may appear in a trig-conns child node when appropriate.
arm,cti-ctm-id:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Defines the CTM this CTI is connected to, in large systems with multiple
separate CTI/CTM nets. Typically multi-socket systems where the CTM is
propagated between sockets.
arm,cs-dev-assoc:
$ref: /schemas/types.yaml#/definitions/phandle
description:
defines a phandle reference to an associated CoreSight trace device.
When the associated trace device is enabled, then the respective CTI
will be enabled. Use in a trig-conns node, or in CTI base node when
compatible string arm,coresight-cti-v8-arch is used. If the associated
device has not been registered then the node name will be stored as
the connection name for later resolution. If the associated device is
not a CoreSight device or not registered then the node name will remain
the connection name and automatic enabling will not occur.
# size cells and address cells required if trig-conns node present.
"#size-cells":
const: 0
"#address-cells":
const: 1
patternProperties:
'^trig-conns@([0-9]+)$':
type: object
description:
A trigger connections child node which describes the trigger signals
between this CTI and another hardware device. This device may be a CPU,
CoreSight device, any other hardware device or simple external IO lines.
The connection may have both input and output triggers, or only one or the
other.
properties:
reg:
maxItems: 1
arm,trig-in-sigs:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 32
description:
List of CTI trigger in signal numbers in use by a trig-conns node.
arm,trig-in-types:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 32
description:
List of constants representing the types for the CTI trigger in
signals. Types in this array match to the corresponding signal in the
arm,trig-in-sigs array. If the -types array is smaller, or omitted
completely, then the types will default to GEN_IO.
arm,trig-out-sigs:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 32
description:
List of CTI trigger out signal numbers in use by a trig-conns node.
arm,trig-out-types:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 32
description:
List of constants representing the types for the CTI trigger out
signals. Types in this array match to the corresponding signal
in the arm,trig-out-sigs array. If the "-types" array is smaller,
or omitted completely, then the types will default to GEN_IO.
arm,trig-filters:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 32
description:
List of CTI trigger out signals that will be blocked from becoming
active, unless filtering is disabled on the driver.
arm,trig-conn-name:
allOf:
- $ref: /schemas/types.yaml#/definitions/string
description:
Defines a connection name that will be displayed, if the cpu or
arm,cs-dev-assoc properties are not being used in this connection.
Principal use is for CTIs that are connected to non-CoreSight devices, or
external IO.
anyOf:
- required:
- arm,trig-in-sigs
- required:
- arm,trig-out-sigs
oneOf:
- required:
- arm,trig-conn-name
- required:
- cpu
- required:
- arm,cs-dev-assoc
required:
- reg
required:
- compatible
- reg
- clocks
- clock-names
if:
properties:
compatible:
contains:
const: arm,coresight-cti-v8-arch
then:
required:
- cpu
examples:
# minimum CTI definition. DEVID register used to set number of triggers.
- |
cti@20020000 {
compatible = "arm,coresight-cti", "arm,primecell";
reg = <0x20020000 0x1000>;
clocks = <&soc_smc50mhz>;
clock-names = "apb_pclk";
};
# v8 architecturally defined CTI - CPU + ETM connections generated by the
# driver according to the v8 architecture specification.
- |
cti@859000 {
compatible = "arm,coresight-cti-v8-arch", "arm,coresight-cti",
"arm,primecell";
reg = <0x859000 0x1000>;
clocks = <&soc_smc50mhz>;
clock-names = "apb_pclk";
cpu = <&CPU1>;
arm,cs-dev-assoc = <&etm1>;
};
# Implementation defined CTI - CPU + ETM connections explicitly defined.
# Shows use of type constants from dt-bindings/arm/coresight-cti-dt.h
# #size-cells and #address-cells are required if trig-conns@ nodes present.
- |
#include <dt-bindings/arm/coresight-cti-dt.h>
cti@858000 {
compatible = "arm,coresight-cti", "arm,primecell";
reg = <0x858000 0x1000>;
clocks = <&soc_smc50mhz>;
clock-names = "apb_pclk";
arm,cti-ctm-id = <1>;
#address-cells = <1>;
#size-cells = <0>;
trig-conns@0 {
reg = <0>;
arm,trig-in-sigs = <4 5 6 7>;
arm,trig-in-types = <ETM_EXTOUT
ETM_EXTOUT
ETM_EXTOUT
ETM_EXTOUT>;
arm,trig-out-sigs = <4 5 6 7>;
arm,trig-out-types = <ETM_EXTIN
ETM_EXTIN
ETM_EXTIN
ETM_EXTIN>;
arm,cs-dev-assoc = <&etm0>;
};
trig-conns@1 {
reg = <1>;
cpu = <&CPU0>;
arm,trig-in-sigs = <0 1>;
arm,trig-in-types = <PE_DBGTRIGGER
PE_PMUIRQ>;
arm,trig-out-sigs = <0 1 2>;
arm,trig-out-types = <PE_EDBGREQ
PE_DBGRESTART
PE_CTIIRQ>;
arm,trig-filters = <0>;
};
};
# Implementation defined CTI - non CoreSight component connections.
- |
cti@20110000 {
compatible = "arm,coresight-cti", "arm,primecell";
reg = <0 0x20110000 0 0x1000>;
clocks = <&soc_smc50mhz>;
clock-names = "apb_pclk";
#address-cells = <1>;
#size-cells = <0>;
trig-conns@0 {
reg = <0>;
arm,trig-in-sigs=<0>;
arm,trig-in-types=<GEN_INTREQ>;
arm,trig-out-sigs=<0>;
arm,trig-out-types=<GEN_HALTREQ>;
arm,trig-conn-name = "sys_profiler";
};
trig-conns@1 {
reg = <1>;
arm,trig-out-sigs=<2 3>;
arm,trig-out-types=<GEN_HALTREQ GEN_RESTARTREQ>;
arm,trig-conn-name = "watchdog";
};
trig-conns@2 {
reg = <2>;
arm,trig-in-sigs=<1 6>;
arm,trig-in-types=<GEN_HALTREQ GEN_RESTARTREQ>;
arm,trig-conn-name = "g_counter";
};
};
...


@ -45,6 +45,10 @@ its hardware characteristics.
- Coresight Address Translation Unit (CATU)
"arm,coresight-catu", "arm,primecell";
- Coresight Cross Trigger Interface (CTI):
"arm,coresight-cti", "arm,primecell";
See coresight-cti.yaml for full CTI definitions.
* reg: physical base address and length of the register
set(s) of the component.
@ -72,6 +76,9 @@ its hardware characteristics.
* reg-names: the only acceptable values are "stm-base" and
"stm-stimulus-base", each corresponding to the areas defined in "reg".
* Required properties for Coresight Cross Trigger Interface (CTI)
See coresight-cti.yaml for full CTI definitions.
* Required properties for devices that don't show up on the AMBA bus, such as
non-configurable replicators and non-configurable funnels:


@ -1,24 +0,0 @@
ChromeOS EC USB Type-C cable and accessories detection
On ChromeOS systems with USB Type C ports, the ChromeOS Embedded Controller is
able to detect the state of external accessories such as display adapters
or USB devices when said accessories are attached or detached.
The node for this device must be under a cros-ec node like google,cros-ec-spi
or google,cros-ec-i2c.
Required properties:
- compatible: Should be "google,extcon-usbc-cros-ec".
- google,usb-port-id: Specifies the USB port ID to use.
Example:
cros-ec@0 {
compatible = "google,cros-ec-i2c";
...
extcon {
compatible = "google,extcon-usbc-cros-ec";
google,usb-port-id = <0>;
};
}


@ -0,0 +1,56 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/extcon/extcon-usbc-cros-ec.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: ChromeOS EC USB Type-C cable and accessories detection
maintainers:
- Benson Leung <bleung@chromium.org>
- Enric Balletbo i Serra <enric.balletbo@collabora.com>
description: |
On ChromeOS systems with USB Type C ports, the ChromeOS Embedded Controller is
able to detect the state of external accessories such as display adapters
or USB devices when said accessories are attached or detached.
The node for this device must be under a cros-ec node like google,cros-ec-spi
or google,cros-ec-i2c.
properties:
compatible:
const: google,extcon-usbc-cros-ec
google,usb-port-id:
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32
description: the port id
minimum: 0
maximum: 255
required:
- compatible
- google,usb-port-id
additionalProperties: false
examples:
- |
spi0 {
#address-cells = <1>;
#size-cells = <0>;
cros-ec@0 {
compatible = "google,cros-ec-spi";
reg = <0>;
usbc_extcon0: extcon0 {
compatible = "google,extcon-usbc-cros-ec";
google,usb-port-id = <0>;
};
usbc_extcon1: extcon1 {
compatible = "google,extcon-usbc-cros-ec";
google,usb-port-id = <1>;
};
};
};


@ -0,0 +1,45 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,bcm-voter.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm BCM-Voter Interconnect
maintainers:
- Georgi Djakov <georgi.djakov@linaro.org>
description: |
The Bus Clock Manager (BCM) is a dedicated hardware accelerator that manages
shared system resources by aggregating requests from multiple Resource State
Coordinators (RSC). Interconnect providers are able to vote for aggregated
threshold values from consumers by communicating through their respective
RSCs.
properties:
compatible:
enum:
- qcom,bcm-voter
required:
- compatible
additionalProperties: false
examples:
# Example 1: apps bcm_voter on SDM845 SoC should be defined inside &apps_rsc node
# as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
- |
apps_bcm_voter: bcm_voter {
compatible = "qcom,bcm-voter";
};
# Example 2: disp bcm_voter on SDM845 should be defined inside &disp_rsc node
# as defined in Documentation/devicetree/bindings/soc/qcom/rpmh-rsc.txt
- |
disp_bcm_voter: bcm_voter {
compatible = "qcom,bcm-voter";
};
...


@ -0,0 +1,62 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,osm-l3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Operating State Manager (OSM) L3 Interconnect Provider
maintainers:
- Sibi Sankar <sibis@codeaurora.org>
description:
L3 cache bandwidth requirements on Qualcomm SoCs are serviced by the OSM.
The OSM L3 interconnect provider aggregates the L3 bandwidth requests
from CPU/GPU and relays them to the OSM.
properties:
compatible:
enum:
- qcom,sc7180-osm-l3
- qcom,sdm845-osm-l3
reg:
maxItems: 1
clocks:
items:
- description: xo clock
- description: alternate clock
clock-names:
items:
- const: xo
- const: alternate
'#interconnect-cells':
const: 1
required:
- compatible
- reg
- clocks
- clock-names
- '#interconnect-cells'
additionalProperties: false
examples:
- |
#define GPLL0 165
#define RPMH_CXO_CLK 0
osm_l3: interconnect@17d41000 {
compatible = "qcom,sdm845-osm-l3";
reg = <0x17d41000 0x1400>;
clocks = <&rpmhcc RPMH_CXO_CLK>, <&gcc GPLL0>;
clock-names = "xo", "alternate";
#interconnect-cells = <1>;
};


@ -0,0 +1,85 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sc7180.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SC7180 Network-On-Chip Interconnect
maintainers:
- Odelu Kukatla <okukatla@codeaurora.org>
description: |
SC7180 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must point to at
least one RPMh device child node pertaining to their RSC and each provider
can map to multiple RPMh resources.
properties:
reg:
maxItems: 1
compatible:
enum:
- qcom,sc7180-aggre1-noc
- qcom,sc7180-aggre2-noc
- qcom,sc7180-camnoc-virt
- qcom,sc7180-compute-noc
- qcom,sc7180-config-noc
- qcom,sc7180-dc-noc
- qcom,sc7180-gem-noc
- qcom,sc7180-ipa-virt
- qcom,sc7180-mc-virt
- qcom,sc7180-mmss-noc
- qcom,sc7180-npu-noc
- qcom,sc7180-qup-virt
- qcom,sc7180-system-noc
'#interconnect-cells':
const: 1
qcom,bcm-voters:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
List of phandles to qcom,bcm-voter nodes that are required by
this interconnect to send RPMh commands.
qcom,bcm-voter-names:
$ref: /schemas/types.yaml#/definitions/string-array
description: |
Names for each of the qcom,bcm-voters specified.
required:
- compatible
- reg
- '#interconnect-cells'
- qcom,bcm-voters
additionalProperties: false
examples:
- |
#include <dt-bindings/interconnect/qcom,sc7180.h>
config_noc: interconnect@1500000 {
compatible = "qcom,sc7180-config-noc";
reg = <0 0x01500000 0 0x28000>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
system_noc: interconnect@1620000 {
compatible = "qcom,sc7180-system-noc";
reg = <0 0x01620000 0 0x17080>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
mmss_noc: interconnect@1740000 {
compatible = "qcom,sc7180-mmss-noc";
reg = <0 0x01740000 0 0x1c100>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};


@ -1,24 +0,0 @@
Qualcomm SDM845 Network-On-Chip interconnect driver binding
-----------------------------------------------------------
SDM845 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must reside within
an RPMh device node pertaining to their RSC and each provider maps to a single
RPMh resource.
Required properties :
- compatible : shall contain only one of the following:
"qcom,sdm845-rsc-hlos"
- #interconnect-cells : should contain 1
Examples:
apps_rsc: rsc {
rsc_hlos: interconnect {
compatible = "qcom,sdm845-rsc-hlos";
#interconnect-cells = <1>;
};
};


@ -0,0 +1,74 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/qcom,sdm845.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm SDM845 Network-On-Chip Interconnect
maintainers:
- Georgi Djakov <georgi.djakov@linaro.org>
description: |
SDM845 interconnect providers support system bandwidth requirements through
RPMh hardware accelerators known as Bus Clock Manager (BCM). The provider is
able to communicate with the BCM through the Resource State Coordinator (RSC)
associated with each execution environment. Provider nodes must point to at
least one RPMh device child node pertaining to their RSC and each provider
can map to multiple RPMh resources.
properties:
reg:
maxItems: 1
compatible:
enum:
- qcom,sdm845-aggre1-noc
- qcom,sdm845-aggre2-noc
- qcom,sdm845-config-noc
- qcom,sdm845-dc-noc
- qcom,sdm845-gladiator-noc
- qcom,sdm845-mem-noc
- qcom,sdm845-mmss-noc
- qcom,sdm845-system-noc
'#interconnect-cells':
const: 1
qcom,bcm-voters:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: |
List of phandles to qcom,bcm-voter nodes that are required by
this interconnect to send RPMh commands.
qcom,bcm-voter-names:
$ref: /schemas/types.yaml#/definitions/string-array
description: |
Names for each of the qcom,bcm-voters specified.
required:
- compatible
- reg
- '#interconnect-cells'
- qcom,bcm-voters
additionalProperties: false
examples:
- |
#include <dt-bindings/interconnect/qcom,sdm845.h>
mem_noc: interconnect@1380000 {
compatible = "qcom,sdm845-mem-noc";
reg = <0 0x01380000 0 0x27200>;
#interconnect-cells = <1>;
qcom,bcm-voters = <&apps_bcm_voter>;
};
mmss_noc: interconnect@1740000 {
compatible = "qcom,sdm845-mmss-noc";
reg = <0 0x01740000 0 0x1c1000>;
#interconnect-cells = <1>;
qcom,bcm-voter-names = "apps", "disp";
qcom,bcm-voters = <&apps_bcm_voter>, <&disp_bcm_voter>;
};


@ -0,0 +1,45 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/nvmem/ingenic,jz4780-efuse.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Ingenic JZ EFUSE driver bindings
maintainers:
- PrasannaKumar Muralidharan <prasannatsmkumar@gmail.com>
allOf:
- $ref: "nvmem.yaml#"
properties:
compatible:
enum:
- ingenic,jz4780-efuse
reg:
maxItems: 1
clocks:
# Handle for the AHB clock for the efuse.
maxItems: 1
required:
- compatible
- reg
- clocks
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/jz4780-cgu.h>
efuse@134100d0 {
compatible = "ingenic,jz4780-efuse";
reg = <0x134100d0 0x2c>;
clocks = <&cgu JZ4780_CLK_AHB2>;
};
...


@ -156,22 +156,27 @@ Below shows the SoundWire stream states and state transition diagram. ::
+-----------+     +------------+     +----------+     +----------+
| ALLOCATED +---->| CONFIGURED +---->| PREPARED +---->| ENABLED  |
|   STATE   |     |   STATE    |     |  STATE   |     |  STATE   |
+-----------+     +------------+     +---+--+---+     +----+-----+
                                         ^  ^              ^
                                         |  |              |
                                       __|  |___________   |
                                      |                 |  |
                                      v                 |  v
+----------+                    +-----+------+        +-+--+-----+
| RELEASED |<-------------------+ DEPREPARED |<-------+ DISABLED |
|  STATE   |                    |   STATE    |        |  STATE   |
+----------+                    +------------+        +----------+
NOTE: State transitions between ``SDW_STREAM_ENABLED`` and
``SDW_STREAM_DISABLED`` are only relevant when the INFO_PAUSE flag is
supported at the ALSA/ASoC level. Likewise the transition between
``SDW_DISABLED_STATE`` and ``SDW_PREPARED_STATE`` depends on the
INFO_RESUME flag.
NOTE2: The framework implements basic state transition checks, but
does not e.g. check if a transition from DISABLED to ENABLED is valid
on a specific platform. Such tests need to be added at the ALSA/ASoC
level.
Stream State Operations
-----------------------
@ -246,6 +251,9 @@ SDW_STREAM_PREPARED
Prepare state of stream. Operations performed before entering in this state:
(0) Steps 1 and 2 are omitted in the case of a resume operation,
where the bus bandwidth is known.
(1) Bus parameters such as bandwidth, frame shape, clock frequency,
are computed based on current stream as well as already active
stream(s) on Bus. Re-computation is required to accommodate current
@ -270,9 +278,11 @@ Prepare state of stream. Operations performed before entering in this state:
After all above operations are successful, stream state is set to
``SDW_STREAM_PREPARED``.
Bus implements below API for PREPARE state which needs to be called
once per stream. From ASoC DPCM framework, this stream state is linked
to .prepare() operation. Since the .trigger() operations may not
follow the .prepare(), a direct transition from
``SDW_STREAM_PREPARED`` to ``SDW_STREAM_DEPREPARED`` is allowed.
.. code-block:: c
@ -332,6 +342,14 @@ Bus implements below API for DISABLED state which needs to be called once
per stream. From ASoC DPCM framework, this stream state is linked to
.trigger() stop operation.
When the INFO_PAUSE flag is supported, a direct transition to
``SDW_STREAM_ENABLED`` is allowed.
For resume operations where ASoC will use the .prepare() callback, the
stream can transition from ``SDW_STREAM_DISABLED`` to
``SDW_STREAM_PREPARED``, with all required settings restored but
without updating the bandwidth and bit allocation.
.. code-block:: c
int sdw_disable_stream(struct sdw_stream_runtime *stream);
@ -353,9 +371,18 @@ state:
After all above operations are successful, stream state is set to
``SDW_STREAM_DEPREPARED``.
Bus implements below API for DEPREPARED state which needs to be called
once per stream. ALSA/ASoC do not have a concept of 'deprepare', and
the mapping from this stream state to ALSA/ASoC operation may be
implementation specific.
When the INFO_PAUSE flag is supported, the stream state is linked to
the .hw_free() operation - the stream is not deprepared on a
TRIGGER_STOP.
Other implementations may transition to the ``SDW_STREAM_DEPREPARED``
state on TRIGGER_STOP, should they require a transition through the
``SDW_STREAM_PREPARED`` state.
.. code-block:: c


@ -134,6 +134,7 @@ needed).
scsi/index
misc-devices/index
scheduler/index
mhi/index
Architecture-agnostic documentation
-----------------------------------


@ -0,0 +1,18 @@
.. SPDX-License-Identifier: GPL-2.0
===
MHI
===
.. toctree::
:maxdepth: 1
mhi
topology
.. only:: subproject and html
Indices
=======
* :ref:`genindex`


@ -0,0 +1,218 @@
.. SPDX-License-Identifier: GPL-2.0
==========================
MHI (Modem Host Interface)
==========================
This document provides information about the MHI protocol.
Overview
========
MHI is a protocol developed by Qualcomm Innovation Center, Inc. It is used
by the host processors to control and communicate with modem devices over high
speed peripheral buses or shared memory. Even though MHI can be easily adapted
to any peripheral buses, it is primarily used with PCIe based devices. MHI
provides logical channels over the physical buses and allows transporting the
modem protocols, such as IP data packets, modem control messages, and
diagnostics over at least one of those logical channels. Also, the MHI
protocol provides a data acknowledgment feature and manages the power state of the
modems via one or more logical channels.
MHI Internals
=============
MMIO
----
MMIO (Memory mapped IO) consists of a set of registers in the device hardware,
which are mapped to the host memory space by the peripheral buses like PCIe.
Following are the major components of MMIO register space:
MHI control registers: Access to MHI configuration registers
MHI BHI registers: BHI (Boot Host Interface) registers are used by the host
for downloading the firmware to the device before MHI initialization.
Channel Doorbell array: Channel Doorbell (DB) registers used by the host to
notify the device when there is new work to do.
Event Doorbell array: Associated with event context array, the Event Doorbell
(DB) registers are used by the host to notify the device when new events are
available.
Debug registers: A set of registers and counters used by the device to expose
debugging information like performance, functional, and stability to the host.
Data structures
---------------
All data structures used by MHI are in the host system memory. Using the
physical interface, the device accesses those data structures. MHI data
structures and data buffers in the host system memory regions are mapped for
the device.
Channel context array: All channel configurations are organized in channel
context data array.
Transfer rings: Used by the host to schedule work items for a channel. The
transfer rings are organized as a circular queue of Transfer Descriptors (TD).
Event context array: All event configurations are organized in the event context
data array.
Event rings: Used by the device to send completion and state transition messages
to the host.
Command context array: All command configurations are organized in command
context data array.
Command rings: Used by the host to send MHI commands to the device. The command
rings are organized as a circular queue of Command Descriptors (CD).
Channels
--------
MHI channels are logical, unidirectional data pipes between a host and a device.
The concept of channels in MHI is similar to endpoints in USB. MHI supports up
to 256 channels. However, specific device implementations may support fewer than
the maximum number of channels allowed.
Two unidirectional channels with their associated transfer rings form a
bidirectional data pipe, which can be used by the upper-layer protocols to
transport application data packets (such as IP packets, modem control messages,
diagnostics messages, and so on). Each channel is associated with a single
transfer ring.
Transfer rings
--------------
Transfers between the host and device are organized by channels and defined by
Transfer Descriptors (TD). TDs are managed through transfer rings, which are
defined for each channel between the device and host and reside in the host
memory. TDs consist of one or more ring elements (or transfer blocks)::
[Read Pointer (RP)] ----------->[Ring Element] } TD
[Write Pointer (WP)]-           [Ring Element]
                    -           [Ring Element]
                    --------->  [Ring Element]
                                [Ring Element]
Below is the basic usage of transfer rings:
* Host allocates memory for transfer ring.
* Host sets the base pointer, read pointer, and write pointer in corresponding
channel context.
* Ring is considered empty when RP == WP.
* Ring is considered full when WP + 1 == RP.
* RP indicates the next element to be serviced by the device.
* When the host has a new buffer to send, it updates the ring element with
buffer information, increments the WP to the next element and rings the
associated channel DB.
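The empty/full rules above are ordinary circular-buffer arithmetic. Below is a
minimal sketch in C (illustrative only, not the in-tree implementation: the
driver tracks ring element pointers in shared host memory rather than array
indexes, and event rings invert the convention, as described in the next
section).

.. code-block:: c

   #include <linux/errno.h>
   #include <linux/types.h>

   #define RING_SIZE 16

   struct ring {
           unsigned int rp;        /* next element the consumer services */
           unsigned int wp;        /* next element the producer fills */
   };

   static bool ring_empty(const struct ring *r)
   {
           return r->rp == r->wp;
   }

   static bool ring_full(const struct ring *r)
   {
           return (r->wp + 1) % RING_SIZE == r->rp;
   }

   /* Host-side enqueue: fill the element at WP, advance WP, then ring
    * the channel doorbell so the device services elements from RP. */
   static int ring_produce(struct ring *r)
   {
           if (ring_full(r))
                   return -ENOMEM; /* no free element */
           /* ... write buffer address/length into element r->wp ... */
           r->wp = (r->wp + 1) % RING_SIZE;
           /* ... write the channel doorbell register ... */
           return 0;
   }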
Event rings
-----------
Events from the device to host are organized in event rings and defined by Event
Descriptors (ED). Event rings are used by the device to report events such as
data transfer completion status, command completion status, and state changes
to the host. Event rings are the array of EDs that resides in the host
memory. EDs consist of one or more ring elements (or transfer blocks)::
[Read Pointer (RP)] ----------->[Ring Element] } ED
[Write Pointer (WP)]-           [Ring Element]
                    -           [Ring Element]
                    --------->  [Ring Element]
                                [Ring Element]
Below is the basic usage of event rings:
* Host allocates memory for event ring.
* Host sets the base pointer, read pointer, and write pointer in corresponding
event context.
* Both host and device have a local copy of the RP and WP.
* Ring is considered empty (no events to service) when WP + 1 == RP.
* Ring is considered full of events when RP == WP.
* When there is a new event the device needs to send, the device updates the
ED pointed to by the RP, increments the RP to the next element and triggers the
interrupt.
Ring Element
------------
A Ring Element is a data structure used to transfer a single block
of data between the host and the device. Transfer ring element types contain a
single buffer pointer, the size of the buffer, and additional control
information. Other ring element types may only contain control and status
information. For single buffer operations, a ring descriptor is composed of a
single element. For large multi-buffer operations (such as scatter and gather),
elements can be chained to form a longer descriptor.
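An illustrative transfer ring element layout follows (a sketch with made-up
field names, not the in-tree definition, which is private to the MHI core):
one DMA buffer pointer plus two packed words carrying the length and control
bits.

.. code-block:: c

   #include <linux/types.h>

   /* Illustrative 16-byte transfer ring element. */
   struct example_ring_element {
           __le64 buf_ptr; /* DMA address of the data buffer */
           __le32 dword0;  /* buffer length in bytes */
           __le32 dword1;  /* control bits: element type, EOT/EOB, chain */
   };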
MHI Operations
==============
MHI States
----------
MHI_STATE_RESET
~~~~~~~~~~~~~~~
MHI is in reset state after power-up or hardware reset. The host is not allowed
to access device MMIO register space.
MHI_STATE_READY
~~~~~~~~~~~~~~~
MHI is ready for initialization. The host can start MHI initialization by
programming MMIO registers.
MHI_STATE_M0
~~~~~~~~~~~~
MHI is running and operational in the device. The host can start channels by
issuing channel start command.
MHI_STATE_M1
~~~~~~~~~~~~
MHI operation is suspended by the device. This state is entered when the
device detects inactivity at the physical interface within a preset time.
MHI_STATE_M2
~~~~~~~~~~~~
MHI is in low power state. MHI operation is suspended and the device may
enter lower power mode.
MHI_STATE_M3
~~~~~~~~~~~~
MHI operation stopped by the host. This state is entered when the host suspends
MHI operation.
MHI Initialization
------------------
After system boots, the device is enumerated over the physical interface.
In the case of PCIe, the device is enumerated and assigned BAR-0 for
the device's MMIO register space. To initialize the MHI in a device,
the host performs the following operations:
* Allocates the MHI context for event, channel and command arrays.
* Initializes the context array, and prepares interrupts.
* Waits until the device enters READY state.
* Programs MHI MMIO registers and sets device into MHI_M0 state.
* Waits for the device to enter M0 state.
MHI Data Transfer
-----------------
MHI data transfer is initiated by the host to transfer data to the device.
Following is the sequence of operations performed by the host to transfer
data to device:
* Host prepares TD with buffer information.
* Host increments the WP of the corresponding channel transfer ring.
* Host rings the channel DB register.
* Device wakes up to process the TD.
* Device generates a completion event for the processed TD by updating ED.
* Device increments the RP of the corresponding event ring.
* Device triggers IRQ to wake up the host.
* Host wakes up and checks the event ring for completion event.
* Host updates the WP of the corresponding event ring to indicate that the
data transfer has been completed successfully.
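From an MHI client driver's point of view, the host half of this sequence
collapses into a prepare call, a queue call and a completion callback. Below
is a hedged sketch against include/linux/mhi.h: mhi_prepare_for_transfer() is
named in this documentation, and mhi_queue_buf() is one of the queueing
helpers (referred to generically above as mhi_queue_transfer); the direction,
length and example_* names are illustrative.

.. code-block:: c

   #include <linux/mhi.h>
   #include <linux/slab.h>

   /* Queue one receive buffer on the downlink channel of an MHI device. */
   static int example_queue_rx(struct mhi_device *mhi_dev, size_t len)
   {
           void *buf;
           int ret;

           /* Move the channels bound to this device to a transfer-ready
            * state (channel start command, ring initialization). */
           ret = mhi_prepare_for_transfer(mhi_dev);
           if (ret)
                   return ret;

           buf = kmalloc(len, GFP_KERNEL);
           if (!buf)
                   return -ENOMEM;

           /* The core builds the TD, advances the transfer ring WP and
            * rings the channel doorbell, as in the sequence above. */
           ret = mhi_queue_buf(mhi_dev, DMA_FROM_DEVICE, buf, len, MHI_EOT);
           if (ret)
                   kfree(buf);
           return ret;
   }

   /* Completion: the device filled the buffer and the host processed the
    * corresponding event ring element (dl_xfer_cb in struct mhi_driver). */
   static void example_dl_xfer_cb(struct mhi_device *mhi_dev,
                                  struct mhi_result *result)
   {
           pr_info("rx %zu bytes, status %d\n",
                   result->bytes_xferd, result->transaction_status);
           kfree(result->buf_addr);
   }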


@ -0,0 +1,60 @@
.. SPDX-License-Identifier: GPL-2.0
============
MHI Topology
============
This document provides information about the MHI topology modeling and
representation in the kernel.
MHI Controller
--------------
MHI controller driver manages the interaction with the MHI client devices
such as the external modems and WiFi chipsets. It is also the MHI bus master
which is in charge of managing the physical link between the host and device.
It is however not involved in the actual data transfer as the data transfer
is taken care of by the physical bus such as PCIe. Each controller driver exposes
channels and events based on the client device type.
Below are the roles of the MHI controller driver:
* Turns on the physical bus and establishes the link to the device
* Configures IRQs, IOMMU, and IOMEM
* Allocates struct mhi_controller and registers with the MHI bus framework
with channel and event configurations using mhi_register_controller.
* Initiates power on and shutdown sequence
* Initiates suspend and resume power management operations of the device.
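A skeleton of the registration step follows. This is a hedged sketch: only
the mhi_register_controller() call named above is shown; the many required
struct mhi_controller fields (MMIO base, IRQ vectors, channel and event ring
layout, power management callbacks) are elided and documented in
include/linux/mhi.h.

.. code-block:: c

   #include <linux/mhi.h>

   static struct mhi_controller example_mhi_cntrl; /* bus master state */
   static struct mhi_controller_config example_cfg = {
           /* channel and event ring configuration for the client device;
            * fields elided in this sketch */
   };

   static int example_controller_setup(void)
   {
           /* 1. Power up the physical link (e.g. PCIe), map the MMIO
            *    register space and set up IRQs - elided here. */

           /* 2. Fill example_mhi_cntrl (regs, irq table, callbacks) and
            *    register with the MHI bus framework. */
           return mhi_register_controller(&example_mhi_cntrl, &example_cfg);
   }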
MHI Device
----------
MHI device is the logical device which binds to a maximum of two MHI channels
for bi-directional communication. Once MHI is in powered on state, the MHI
core will create MHI devices based on the channel configuration exposed
by the controller. There can be a single MHI device for each channel or for a
couple of channels.
Each supported device is enumerated in::
/sys/bus/mhi/devices/
MHI Driver
----------
MHI driver is the client driver which binds to one or more MHI devices. The MHI
driver sends and receives the upper-layer protocol packets like IP packets,
modem control messages, and diagnostics messages over MHI. The MHI core will
bind the MHI devices to the MHI driver.
Each supported driver is enumerated in::
/sys/bus/mhi/drivers/
Below are the roles of the MHI driver:
* Registers the driver with the MHI bus framework using mhi_driver_register.
* Prepares the device for transfer by calling mhi_prepare_for_transfer.
* Initiates data transfer by calling mhi_queue_transfer.
* Once the data transfer is finished, calls mhi_unprepare_from_transfer to
end data transfer.
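Putting these roles together, below is a minimal client driver skeleton as a
hedged sketch: the channel name "EXAMPLE" and the example_* identifiers are
made up, and the callback set a real driver needs depends on its channels.

.. code-block:: c

   #include <linux/mhi.h>
   #include <linux/module.h>

   /* Match on a channel name exposed by the controller configuration. */
   static const struct mhi_device_id example_id_table[] = {
           { .chan = "EXAMPLE" },
           {},
   };

   static int example_probe(struct mhi_device *mhi_dev,
                            const struct mhi_device_id *id)
   {
           /* Prepare the device for transfer. */
           return mhi_prepare_for_transfer(mhi_dev);
   }

   static void example_remove(struct mhi_device *mhi_dev)
   {
           /* Reset the channels and end data transfer. */
           mhi_unprepare_from_transfer(mhi_dev);
   }

   /* Transfer completion callbacks for the UL/DL channels. */
   static void example_xfer_cb(struct mhi_device *mhi_dev,
                               struct mhi_result *result)
   {
   }

   static struct mhi_driver example_driver = {
           .id_table = example_id_table,
           .probe = example_probe,
           .remove = example_remove,
           .ul_xfer_cb = example_xfer_cb,
           .dl_xfer_cb = example_xfer_cb,
           .driver = {
                   .name = "example_mhi_client",
           },
   };

   static int __init example_init(void)
   {
           return mhi_driver_register(&example_driver);
   }
   module_init(example_init);

   static void __exit example_exit(void)
   {
           mhi_driver_unregister(&example_driver);
   }
   module_exit(example_exit);

   MODULE_LICENSE("GPL v2");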


@ -246,7 +246,8 @@ an involved disclosed party. The current ambassadors list:
============= ========================================================
ARM Grant Likely <grant.likely@arm.com>
AMD Tom Lendacky <tom.lendacky@amd.com>
IBM Z Christian Borntraeger <borntraeger@de.ibm.com>
IBM Power Anton Blanchard <anton@linux.ibm.com>
Intel Tony Luck <tony.luck@intel.com>
Qualcomm Trilok Soni <tsoni@codeaurora.org>


@ -0,0 +1,222 @@
.. SPDX-License-Identifier: GPL-2.0
=============================================
CoreSight Embedded Cross Trigger (CTI & CTM).
=============================================
:Author: Mike Leach <mike.leach@linaro.org>
:Date: November 2019
Hardware Description
--------------------
The CoreSight Cross Trigger Interface (CTI) is a hardware device that takes
individual input and output hardware signals known as triggers to and from
devices and interconnects them via the Cross Trigger Matrix (CTM) to other
devices via numbered channels, in order to propagate events between devices.
e.g.::
0000000  in_trigs  :::::::
0 C   0----------->:     :                +======>(other CTI channel IO)
0 P   0<-----------:     :                v
0 U   0  out_trigs :     : Channels  *****      :::::::
0000000            : CTI :<=========>*CTM*<====>: CTI :---+
#######  in_trigs  :     :  (id 0-3)  *****      :::::::   v
# ETM #----------->:     :                           ^   #######
#     #<-----------:     :                           +---# ETR #
#######  out_trigs :::::::                               #######
The CTI driver enables the programming of the CTI to attach triggers to
channels. When an input trigger becomes active, the attached channel will
become active. Any output trigger attached to that channel will also
become active. The active channel is propagated to other CTIs via the CTM,
activating connected output triggers there, unless filtered by the CTI
channel gate.
It is also possible to activate a channel using system software directly
programming registers in the CTI.
The CTIs are registered by the system to be associated with CPUs and/or other
CoreSight devices on the trace data path. When these devices are enabled the
attached CTIs will also be enabled. By default/on power up the CTIs have
no programmed trigger/channel attachments, so will not affect the system
until explicitly programmed.
The hardware trigger connections between CTIs and devices is implementation
defined, unless the CPU/ETM combination is a v8 architecture, in which case
the connections have an architecturally defined standard layout.
The hardware trigger signals can also be connected to non-CoreSight devices
(e.g. UART), or be propagated off chip as hardware IO lines.
All the CTI devices are associated with a CTM. On many systems there will be a
single effective CTM (one CTM, or multiple CTMs all interconnected), but it is
possible that systems can have nets of CTIs+CTM that are not interconnected by
a CTM to each other. On these systems a CTM index is declared to associate
CTI devices that are interconnected via a given CTM.
Sysfs files and directories
---------------------------
The CTI devices appear on the existing CoreSight bus alongside the other
CoreSight devices::
>$ ls /sys/bus/coresight/devices
cti_cpu0 cti_cpu2 cti_sys0 etm0 etm2 funnel0 replicator0 tmc_etr0
cti_cpu1 cti_cpu3 cti_sys1 etm1 etm3 funnel1 tmc_etf0 tpiu0
The ``cti_cpu<N>`` named CTIs are associated with a CPU, and any ETM used by
that core. The ``cti_sys<N>`` CTIs are general system infrastructure CTIs that
can be associated with other CoreSight devices, or other system hardware
capable of generating or using trigger signals.::
>$ ls /sys/bus/coresight/devices/etm0/cti_cpu0
channels ctmid enable nr_trigger_cons mgmt power powered regs
subsystem triggers0 triggers1 uevent
*Key file items are:-*
* ``enable``: enables/disables the CTI. Read to determine current state.
If this shows as enabled (1), but ``powered`` shows unpowered (0), then
the enable indicates a request to enable when the device is powered.
* ``ctmid`` : associated CTM - only relevant if system has multiple CTI+CTM
clusters that are not interconnected.
* ``nr_trigger_cons`` : total connections - triggers<N> directories.
* ``powered`` : Read to determine if the CTI is currently powered.
*Sub-directories:-*
* ``triggers<N>``: contains list of triggers for an individual connection.
* ``channels``: Contains the channel API - CTI main programming interface.
* ``regs``: Gives access to the raw programmable CTI regs.
* ``mgmt``: the standard CoreSight management registers.
triggers<N> directories
~~~~~~~~~~~~~~~~~~~~~~~
Individual trigger connection information. This describes trigger signals for
CoreSight and non-CoreSight connections.
Each triggers directory has a set of parameters describing the triggers for
the connection.
* ``name`` : name of connection
* ``in_signals`` : input trigger signal indexes used in this connection.
* ``in_types`` : functional types for in signals.
* ``out_signals`` : output trigger signals for this connection.
* ``out_types`` : functional types for out signals.
e.g::
>$ ls ./cti_cpu0/triggers0/
in_signals in_types name out_signals out_types
>$ cat ./cti_cpu0/triggers0/name
cpu0
>$ cat ./cti_cpu0/triggers0/out_signals
0-2
>$ cat ./cti_cpu0/triggers0/out_types
pe_edbgreq pe_dbgrestart pe_ctiirq
>$ cat ./cti_cpu0/triggers0/in_signals
0-1
>$ cat ./cti_cpu0/triggers0/in_types
pe_dbgtrigger pe_pmuirq
If a connection has zero signals in either the 'in' or 'out' triggers then
those parameters will be omitted.
Channels API Directory
~~~~~~~~~~~~~~~~~~~~~~
This provides an easy way to attach triggers to channels, without needing
the multiple register operations that are required if manipulating the
'regs' sub-directory elements directly.
A number of files provide this API::
>$ ls ./cti_sys0/channels/
chan_clear chan_inuse chan_xtrigs_out trigin_attach
chan_free chan_pulse chan_xtrigs_reset trigin_detach
chan_gate_disable chan_set chan_xtrigs_sel trigout_attach
chan_gate_enable chan_xtrigs_in trig_filter_enable trigout_detach
trigout_filtered
Most access to these elements take the form::
echo <chan> [<trigger>] > /<device_path>/<operation>
where the optional <trigger> is only needed for trigXX_attach | detach
operations.
e.g.::
>$ echo 0 1 > ./cti_sys0/channels/trigout_attach
>$ echo 0 > ./cti_sys0/channels/chan_set
Attaches trigout(1) to channel(0), then activates channel(0) generating a
set state on cti_sys0.trigout(1).
*API operations*
* ``trigin_attach, trigout_attach``: Attach a channel to a trigger signal.
* ``trigin_detach, trigout_detach``: Detach a channel from a trigger signal.
* ``chan_set``: Set the channel - the set state will be propagated around
the CTM to other connected devices.
* ``chan_clear``: Clear the channel.
* ``chan_pulse``: Set the channel for a single CoreSight clock cycle.
* ``chan_gate_enable``: Write operation sets the CTI gate to propagate
(enable) the channel to other devices. This operation takes a channel
number. CTI gate is enabled for all channels by default at power up. Read
to list the currently enabled channels on the gate.
* ``chan_gate_disable``: Write channel number to disable gate for that
channel.
* ``chan_inuse``: Show the current channels attached to any signal.
* ``chan_free``: Show channels with no attached signals.
* ``chan_xtrigs_sel``: write a channel number to select a channel to view,
read to show the selected channel number.
* ``chan_xtrigs_in``: Read to show the input triggers attached to
the selected view channel.
* ``chan_xtrigs_out``: Read to show the output triggers attached to
the selected view channel.
* ``trig_filter_enable``: Defaults to enabled, disable to allow potentially
dangerous output signals to be set.
* ``trigout_filtered``: Trigger out signals that are prevented from being
set if filtering ``trig_filter_enable`` is enabled. One use is to prevent
accidental ``EDBGREQ`` signals stopping a core.
* ``chan_xtrigs_reset``: Write 1 to clear all channel / trigger programming.
Resets device hardware to default state.
The example below attaches input trigger index 1 to channel 2, and output
trigger index 6 to the same channel. It then examines the state of the
channel / trigger connections using the appropriate sysfs attributes.
With these settings, if either input trigger 1 fires or channel 2 is
activated, then output trigger 6 will go active. We then enable the CTI and
use the software channel control to activate channel 2. We see the active
channel in the ``choutstatus`` register and the active signal in the
``trigoutstatus`` register. Finally, clearing the channel deactivates both
again.
e.g.::
.../cti_sys0/channels# echo 2 1 > trigin_attach
.../cti_sys0/channels# echo 2 6 > trigout_attach
.../cti_sys0/channels# cat chan_free
0-1,3
.../cti_sys0/channels# cat chan_inuse
2
.../cti_sys0/channels# echo 2 > chan_xtrigs_sel
.../cti_sys0/channels# cat chan_xtrigs_in
1
.../cti_sys0/channels# cat chan_xtrigs_out
6
.../cti_sys0/# echo 1 > enable
.../cti_sys0/channels# echo 2 > chan_set
.../cti_sys0/channels# cat ../regs/choutstatus
0x4
.../cti_sys0/channels# cat ../regs/trigoutstatus
0x40
.../cti_sys0/channels# echo 2 > chan_clear
.../cti_sys0/channels# cat ../regs/trigoutstatus
0x0
.../cti_sys0/channels# cat ../regs/choutstatus
0x0
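A further sketch of the gating operations described in the list above, using
the same device (this assumes a four-channel CTM, so a fully enabled gate
reads back as 0-3; the outputs shown are illustrative).
e.g.::
.../cti_sys0/channels# cat chan_gate_enable
0-3
.../cti_sys0/channels# echo 2 > chan_gate_disable
.../cti_sys0/channels# cat chan_gate_enable
0-1,3
.../cti_sys0/channels# echo 2 > chan_gate_enable
.../cti_sys0/channels# cat chan_gate_enable
0-3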
@@ -491,8 +491,21 @@ interface provided for that purpose by the generic STM API::
Details on how to use the generic STM API can be found here: :doc:`../stm` [#second]_.
The CTI & CTM Modules
---------------------
The CTI (Cross Trigger Interface) provides a set of trigger signals between
individual CTIs and components, and can propagate these between all CTIs via
channels on the CTM (Cross Trigger Matrix).
A separate documentation file is provided to explain the use of these devices.
(:doc:`coresight-ect`) [#fourth]_.
.. [#first] Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
.. [#second] Documentation/trace/stm.rst
.. [#third] https://github.com/Linaro/perf-opencsd
.. [#fourth] Documentation/trace/coresight/coresight-ect.rst
@@ -1693,12 +1693,15 @@ F: arch/arm/mach-ep93xx/micro9.c
ARM/CORESIGHT FRAMEWORK AND DRIVERS
M: Mathieu Poirier <mathieu.poirier@linaro.org>
R: Suzuki K Poulose <suzuki.poulose@arm.com>
R: Mike Leach <mike.leach@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/hwtracing/coresight/*
F: include/dt-bindings/arm/coresight-cti-dt.h
F: Documentation/trace/coresight/*
F: Documentation/devicetree/bindings/arm/coresight.txt
F: Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
F: Documentation/devicetree/bindings/arm/coresight-cti.yaml
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
F: tools/perf/arch/arm/util/pmu.c
F: tools/perf/arch/arm/util/auxtrace.c
@@ -10993,6 +10996,16 @@ M: Vladimir Vid <vladimir.vid@sartura.hr>
S: Maintained
F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts
MHI BUS
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
M: Hemant Kumar <hemantk@codeaurora.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
F: Documentation/mhi/
F: drivers/bus/mhi/
F: include/linux/mhi.h
MICROBLAZE ARCHITECTURE
M: Michal Simek <monstr@monstr.eu>
W: http://www.monstr.eu/fdt/
@@ -52,7 +52,8 @@ CONFIG_NET_PCI=y
CONFIG_YELLOWFIN=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_CMOS=y
CONFIG_EXT2_FS=y
CONFIG_REISERFS_FS=m
CONFIG_ISO9660_FS=y
@@ -2,7 +2,6 @@
#ifndef _FLASH_H
#define _FLASH_H
#define FLASH_MINOR 160 /* MAJOR is 10 - miscdevice */
#define CMD_WRITE_DISABLE 0
#define CMD_WRITE_ENABLE 0x28
#define CMD_WRITE_BASE64K_ENABLE 0x47
@@ -57,7 +57,8 @@ CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_EFI_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_AGP=m
@@ -94,7 +94,8 @@ CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_EFI_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_RAW_DRIVER=m
CONFIG_HPET=y
CONFIG_AGP=m
@@ -82,7 +82,8 @@ CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_EFI_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_RAW_DRIVER=m
CONFIG_HPET=y
CONFIG_AGP=m
@@ -86,7 +86,8 @@ CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_EFI_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_RAW_DRIVER=m
CONFIG_HPET=y
CONFIG_AGP=m
@@ -68,7 +68,8 @@ CONFIG_SERIAL_8250_NR_UARTS=8
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_EFI_RTC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_I2C_CHARDEV=y
CONFIG_AGP=y
CONFIG_AGP_HP_ZX1=y
@@ -23,8 +23,6 @@
#define RNG_VERSION "1.0.0"
#define RNG_MODULE_NAME "hw_random"
#define RNG_MISCDEV_MINOR 183 /* official */
/* Changed at init time, in the non-modular case, and at module load
* time, in the module case. Presumably, the module subsystem
* protects against a module being loaded twice at the same time.
@@ -104,7 +102,7 @@ static const struct file_operations rng_chrdev_ops = {
/* rng_init shouldn't be called more than once at boot time */
static struct miscdevice rng_miscdev = {
RNG_MISCDEV_MINOR,
HWRNG_MINOR,
RNG_MODULE_NAME,
&rng_chrdev_ops,
};
@@ -18,7 +18,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/mount.h>
#include <linux/parser.h>
#include <linux/fs_parser.h>
#include <linux/radix-tree.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
@@ -48,26 +48,30 @@ static dev_t binderfs_dev;
static DEFINE_MUTEX(binderfs_minors_mutex);
static DEFINE_IDA(binderfs_minors);
enum {
enum binderfs_param {
Opt_max,
Opt_stats_mode,
Opt_err
};
enum binderfs_stats_mode {
STATS_NONE,
STATS_GLOBAL,
binderfs_stats_mode_unset,
binderfs_stats_mode_global,
};
static const match_table_t tokens = {
{ Opt_max, "max=%d" },
{ Opt_stats_mode, "stats=%s" },
{ Opt_err, NULL }
static const struct constant_table binderfs_param_stats[] = {
{ "global", binderfs_stats_mode_global },
{}
};
static inline struct binderfs_info *BINDERFS_I(const struct inode *inode)
const struct fs_parameter_spec binderfs_fs_parameters[] = {
fsparam_u32("max", Opt_max),
fsparam_enum("stats", Opt_stats_mode, binderfs_param_stats),
{}
};
static inline struct binderfs_info *BINDERFS_SB(const struct super_block *sb)
{
return inode->i_sb->s_fs_info;
return sb->s_fs_info;
}
bool is_binderfs_device(const struct inode *inode)
@@ -246,7 +250,7 @@ static long binder_ctl_ioctl(struct file *file, unsigned int cmd,
static void binderfs_evict_inode(struct inode *inode)
{
struct binder_device *device = inode->i_private;
struct binderfs_info *info = BINDERFS_I(inode);
struct binderfs_info *info = BINDERFS_SB(inode->i_sb);
clear_inode(inode);
@@ -264,97 +268,84 @@ static void binderfs_evict_inode(struct inode *inode)
}
}
/**
* binderfs_parse_mount_opts - parse binderfs mount options
* @data: options to set (can be NULL in which case defaults are used)
*/
static int binderfs_parse_mount_opts(char *data,
struct binderfs_mount_opts *opts)
static int binderfs_fs_context_parse_param(struct fs_context *fc,
struct fs_parameter *param)
{
char *p, *stats;
opts->max = BINDERFS_MAX_MINOR;
opts->stats_mode = STATS_NONE;
int opt;
struct binderfs_mount_opts *ctx = fc->fs_private;
struct fs_parse_result result;
while ((p = strsep(&data, ",")) != NULL) {
substring_t args[MAX_OPT_ARGS];
int token;
int max_devices;
opt = fs_parse(fc, binderfs_fs_parameters, param, &result);
if (opt < 0)
return opt;
if (!*p)
continue;
switch (opt) {
case Opt_max:
if (result.uint_32 > BINDERFS_MAX_MINOR)
return invalfc(fc, "Bad value for '%s'", param->key);
token = match_token(p, tokens, args);
switch (token) {
case Opt_max:
if (match_int(&args[0], &max_devices) ||
(max_devices < 0 ||
(max_devices > BINDERFS_MAX_MINOR)))
return -EINVAL;
ctx->max = result.uint_32;
break;
case Opt_stats_mode:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
opts->max = max_devices;
break;
case Opt_stats_mode:
if (!capable(CAP_SYS_ADMIN))
return -EINVAL;
stats = match_strdup(&args[0]);
if (!stats)
return -ENOMEM;
if (strcmp(stats, "global") != 0) {
kfree(stats);
return -EINVAL;
}
opts->stats_mode = STATS_GLOBAL;
kfree(stats);
break;
default:
pr_err("Invalid mount options\n");
return -EINVAL;
}
ctx->stats_mode = result.uint_32;
break;
default:
return invalfc(fc, "Unsupported parameter '%s'", param->key);
}
return 0;
}
static int binderfs_remount(struct super_block *sb, int *flags, char *data)
static int binderfs_fs_context_reconfigure(struct fs_context *fc)
{
int prev_stats_mode, ret;
struct binderfs_info *info = sb->s_fs_info;
struct binderfs_mount_opts *ctx = fc->fs_private;
struct binderfs_info *info = BINDERFS_SB(fc->root->d_sb);
prev_stats_mode = info->mount_opts.stats_mode;
ret = binderfs_parse_mount_opts(data, &info->mount_opts);
if (ret)
return ret;
if (prev_stats_mode != info->mount_opts.stats_mode) {
pr_err("Binderfs stats mode cannot be changed during a remount\n");
info->mount_opts.stats_mode = prev_stats_mode;
return -EINVAL;
}
if (info->mount_opts.stats_mode != ctx->stats_mode)
return invalfc(fc, "Binderfs stats mode cannot be changed during a remount");
info->mount_opts.stats_mode = ctx->stats_mode;
info->mount_opts.max = ctx->max;
return 0;
}
static int binderfs_show_mount_opts(struct seq_file *seq, struct dentry *root)
static int binderfs_show_options(struct seq_file *seq, struct dentry *root)
{
struct binderfs_info *info;
struct binderfs_info *info = BINDERFS_SB(root->d_sb);
info = root->d_sb->s_fs_info;
if (info->mount_opts.max <= BINDERFS_MAX_MINOR)
seq_printf(seq, ",max=%d", info->mount_opts.max);
if (info->mount_opts.stats_mode == STATS_GLOBAL)
switch (info->mount_opts.stats_mode) {
case binderfs_stats_mode_unset:
break;
case binderfs_stats_mode_global:
seq_printf(seq, ",stats=global");
break;
}
return 0;
}
static void binderfs_put_super(struct super_block *sb)
{
struct binderfs_info *info = sb->s_fs_info;
if (info && info->ipc_ns)
put_ipc_ns(info->ipc_ns);
kfree(info);
sb->s_fs_info = NULL;
}
static const struct super_operations binderfs_super_ops = {
.evict_inode = binderfs_evict_inode,
.remount_fs = binderfs_remount,
.show_options = binderfs_show_mount_opts,
.show_options = binderfs_show_options,
.statfs = simple_statfs,
.put_super = binderfs_put_super,
};
static inline bool is_binderfs_control_device(const struct dentry *dentry)
@@ -653,10 +644,11 @@ out:
return ret;
}
static int binderfs_fill_super(struct super_block *sb, void *data, int silent)
static int binderfs_fill_super(struct super_block *sb, struct fs_context *fc)
{
int ret;
struct binderfs_info *info;
struct binderfs_mount_opts *ctx = fc->fs_private;
struct inode *inode = NULL;
struct binderfs_device device_info = { 0 };
const char *name;
@@ -689,16 +681,14 @@ static int binderfs_fill_super(struct super_block *sb, void *data, int silent)
info->ipc_ns = get_ipc_ns(current->nsproxy->ipc_ns);
ret = binderfs_parse_mount_opts(data, &info->mount_opts);
if (ret)
return ret;
info->root_gid = make_kgid(sb->s_user_ns, 0);
if (!gid_valid(info->root_gid))
info->root_gid = GLOBAL_ROOT_GID;
info->root_uid = make_kuid(sb->s_user_ns, 0);
if (!uid_valid(info->root_uid))
info->root_uid = GLOBAL_ROOT_UID;
info->mount_opts.max = ctx->max;
info->mount_opts.stats_mode = ctx->stats_mode;
inode = new_inode(sb);
if (!inode)
@@ -730,36 +720,54 @@ static int binderfs_fill_super(struct super_block *sb, void *data, int silent)
name++;
}
if (info->mount_opts.stats_mode == STATS_GLOBAL)
if (info->mount_opts.stats_mode == binderfs_stats_mode_global)
return init_binder_logs(sb);
return 0;
}
static struct dentry *binderfs_mount(struct file_system_type *fs_type,
int flags, const char *dev_name,
void *data)
static int binderfs_fs_context_get_tree(struct fs_context *fc)
{
return mount_nodev(fs_type, flags, data, binderfs_fill_super);
return get_tree_nodev(fc, binderfs_fill_super);
}
static void binderfs_kill_super(struct super_block *sb)
static void binderfs_fs_context_free(struct fs_context *fc)
{
struct binderfs_info *info = sb->s_fs_info;
struct binderfs_mount_opts *ctx = fc->fs_private;
kill_litter_super(sb);
kfree(ctx);
}
if (info && info->ipc_ns)
put_ipc_ns(info->ipc_ns);
static const struct fs_context_operations binderfs_fs_context_ops = {
.free = binderfs_fs_context_free,
.get_tree = binderfs_fs_context_get_tree,
.parse_param = binderfs_fs_context_parse_param,
.reconfigure = binderfs_fs_context_reconfigure,
};
kfree(info);
static int binderfs_init_fs_context(struct fs_context *fc)
{
struct binderfs_mount_opts *ctx = fc->fs_private;
ctx = kzalloc(sizeof(struct binderfs_mount_opts), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
ctx->max = BINDERFS_MAX_MINOR;
ctx->stats_mode = binderfs_stats_mode_unset;
fc->fs_private = ctx;
fc->ops = &binderfs_fs_context_ops;
return 0;
}
static struct file_system_type binder_fs_type = {
.name = "binder",
.mount = binderfs_mount,
.kill_sb = binderfs_kill_super,
.fs_flags = FS_USERNS_MOUNT,
.name = "binder",
.init_fs_context = binderfs_init_fs_context,
.parameters = binderfs_fs_parameters,
.kill_sb = kill_litter_super,
.fs_flags = FS_USERNS_MOUNT,
};
int __init init_binderfs(void)
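For reference, a minimal usage sketch of the mount interface these parameters
define (the mount point is hypothetical; per the parameter spec above, "max"
must not exceed BINDERFS_MAX_MINOR and "stats=global" requires CAP_SYS_ADMIN):
# hypothetical mount point; "max" maps to fsparam_u32(), "stats" to fsparam_enum()
mkdir -p /dev/binderfs
mount -t binder -o max=4096,stats=global binder /dev/binderfs
# on remount the stats mode must be restated unchanged; only "max" may change
mount -o remount,stats=global,max=8192 /dev/binderfs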
@@ -22,8 +22,6 @@
#include "charlcd.h"
#define LCD_MINOR 156
#define DEFAULT_LCD_BWIDTH 40
#define DEFAULT_LCD_HWIDTH 64
@@ -57,8 +57,6 @@
#include "charlcd.h"
#define KEYPAD_MINOR 185
#define LCD_MAXBYTES 256 /* max burst write */
#define KEYPAD_BUFFER 64
@@ -201,5 +201,6 @@ config DA8XX_MSTPRI
peripherals.
source "drivers/bus/fsl-mc/Kconfig"
source "drivers/bus/mhi/Kconfig"
endmenu
@@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
# MHI
obj-$(CONFIG_MHI_BUS) += mhi/
@@ -0,0 +1,14 @@
# SPDX-License-Identifier: GPL-2.0
#
# MHI bus
#
# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
#
config MHI_BUS
tristate "Modem Host Interface (MHI) bus"
help
Bus driver for the MHI protocol. Modem Host Interface (MHI) is a
communication protocol used by host processors to control
and communicate with modem devices over a high-speed peripheral
bus or shared memory.
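Since this is a tristate option, the core can be built as a module; a quick
sketch of enabling and loading it from a kernel source tree (the module name
"mhi" comes from the core Makefile added below):
# enable CONFIG_MHI_BUS=m in .config, then build and install modules
./scripts/config -m MHI_BUS
make -j"$(nproc)" modules modules_install
modprobe mhi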
@@ -0,0 +1,2 @@
# core layer
obj-y += core/
@@ -0,0 +1,3 @@
obj-$(CONFIG_MHI_BUS) := mhi.o
mhi-y := init.o main.o pm.o boot.o
@@ -0,0 +1,507 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/slab.h>
#include <linux/wait.h>
#include "internal.h"
/* Setup RDDM vector table for RDDM transfer and program RXVEC */
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
{
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
void __iomem *base = mhi_cntrl->bhie;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 sequence_id;
unsigned int i;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = mhi_buf->len;
}
dev_dbg(dev, "BHIe programming for RDDM\n");
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
if (unlikely(!sequence_id))
sequence_id = 1;
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
sequence_id);
dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
}
/* Collect RDDM buffer during kernel panic */
static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
{
int ret;
u32 rx_status;
enum mhi_ee_type ee;
const u32 delayus = 2000;
u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
const u32 rddm_timeout_us = 200000;
int rddm_retry = rddm_timeout_us / delayus;
void __iomem *base = mhi_cntrl->bhie;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
TO_MHI_EXEC_STR(mhi_cntrl->ee));
/*
* This should only execute during a kernel panic; we expect all other
* cores to shut down while we're collecting the RDDM buffer. After
* returning from this function, we expect the device to reset.
*
* Normally, we read/write pm_state only after grabbing pm_lock. Since
* we're in a panic, we skip that here. There is also no guarantee that
* this state change will take effect, since we're setting it without
* holding pm_lock.
*/
mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
/* update should take the effect immediately */
smp_wmb();
/*
* Make sure device is not already in RDDM. In case the device asserts
* and a kernel panic follows, device will already be in RDDM.
* Do not trigger SYS ERR again and proceed with waiting for
* image download completion.
*/
ee = mhi_get_exec_env(mhi_cntrl);
if (ee != MHI_EE_RDDM) {
dev_dbg(dev, "Trigger device into RDDM mode using SYS ERR\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
dev_dbg(dev, "Waiting for device to enter RDDM\n");
while (rddm_retry--) {
ee = mhi_get_exec_env(mhi_cntrl);
if (ee == MHI_EE_RDDM)
break;
udelay(delayus);
}
if (rddm_retry <= 0) {
/* Hardware reset so force device to enter RDDM */
dev_dbg(dev,
"Did not enter RDDM, do a host req reset\n");
mhi_write_reg(mhi_cntrl, mhi_cntrl->regs,
MHI_SOC_RESET_REQ_OFFSET,
MHI_SOC_RESET_REQ);
udelay(delayus);
}
ee = mhi_get_exec_env(mhi_cntrl);
}
dev_dbg(dev, "Waiting for image download completion, current EE: %s\n",
TO_MHI_EXEC_STR(ee));
while (retry--) {
ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status);
if (ret)
return -EIO;
if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL)
return 0;
udelay(delayus);
}
ee = mhi_get_exec_env(mhi_cntrl);
ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
dev_err(dev, "Did not complete RDDM transfer\n");
dev_err(dev, "Current EE: %s\n", TO_MHI_EXEC_STR(ee));
dev_err(dev, "RXVEC_STATUS: 0x%x\n", rx_status);
return -EIO;
}
/* Download RDDM image from device */
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
{
void __iomem *base = mhi_cntrl->bhie;
u32 rx_status;
if (in_panic)
return __mhi_download_rddm_in_panic(mhi_cntrl);
/* Wait for the image download to complete */
wait_event_timeout(mhi_cntrl->state_event,
mhi_read_reg_field(mhi_cntrl, base,
BHIE_RXVECSTATUS_OFFS,
BHIE_RXVECSTATUS_STATUS_BMSK,
BHIE_RXVECSTATUS_STATUS_SHFT,
&rx_status) || rx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
EXPORT_SYMBOL_GPL(mhi_download_rddm_img);
static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
{
void __iomem *base = mhi_cntrl->bhie;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
u32 tx_status, sequence_id;
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
return -EIO;
}
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
sequence_id);
read_unlock_bh(pm_lock);
/* Wait for the image download to complete */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_TXVECSTATUS_OFFS,
BHIE_TXVECSTATUS_STATUS_BMSK,
BHIE_TXVECSTATUS_STATUS_SHFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
}
static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
dma_addr_t dma_addr,
size_t size)
{
u32 tx_status, val, session_id;
int i, ret;
void __iomem *base = mhi_cntrl->bhi;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct {
char *name;
u32 offset;
} error_reg[] = {
{ "ERROR_CODE", BHI_ERRCODE },
{ "ERROR_DBG1", BHI_ERRDBG1 },
{ "ERROR_DBG2", BHI_ERRDBG2 },
{ "ERROR_DBG3", BHI_ERRDBG3 },
{ NULL },
};
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
dev_dbg(dev, "Starting SBL download via BHI\n");
mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
upper_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
lower_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
read_unlock_bh(pm_lock);
/* Wait for the image download to complete */
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
BHI_STATUS_MASK, BHI_STATUS_SHIFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
goto invalid_pm_state;
if (tx_status == BHI_STATUS_ERROR) {
dev_err(dev, "Image transfer failed\n");
read_lock_bh(pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
for (i = 0; error_reg[i].name; i++) {
ret = mhi_read_reg(mhi_cntrl, base,
error_reg[i].offset, &val);
if (ret)
break;
dev_err(dev, "Reg: %s value: 0x%x\n",
error_reg[i].name, val);
}
}
read_unlock_bh(pm_lock);
goto invalid_pm_state;
}
return (!ret) ? -ETIMEDOUT : 0;
invalid_pm_state:
return -EIO;
}
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
int i;
struct mhi_buf *mhi_buf = image_info->mhi_buf;
for (i = 0; i < image_info->entries; i++, mhi_buf++)
mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
kfree(image_info->mhi_buf);
kfree(image_info);
}
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info,
size_t alloc_size)
{
size_t seg_size = mhi_cntrl->seg_len;
int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
int i;
struct image_info *img_info;
struct mhi_buf *mhi_buf;
img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
if (!img_info)
return -ENOMEM;
/* Allocate memory for entries */
img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
GFP_KERNEL);
if (!img_info->mhi_buf)
goto error_alloc_mhi_buf;
/* Allocate and populate vector table */
mhi_buf = img_info->mhi_buf;
for (i = 0; i < segments; i++, mhi_buf++) {
size_t vec_size = seg_size;
/* Vector table is the last entry */
if (i == segments - 1)
vec_size = sizeof(struct bhi_vec_entry) * i;
mhi_buf->len = vec_size;
mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
&mhi_buf->dma_addr,
GFP_KERNEL);
if (!mhi_buf->buf)
goto error_alloc_segment;
}
img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
img_info->entries = segments;
*image_info = img_info;
return 0;
error_alloc_segment:
for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
mhi_buf->dma_addr);
error_alloc_mhi_buf:
kfree(img_info);
return -ENOMEM;
}
static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
const struct firmware *firmware,
struct image_info *img_info)
{
size_t remainder = firmware->size;
size_t to_cpy;
const u8 *buf = firmware->data;
int i = 0;
struct mhi_buf *mhi_buf = img_info->mhi_buf;
struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
while (remainder) {
to_cpy = min(remainder, mhi_buf->len);
memcpy(mhi_buf->buf, buf, to_cpy);
bhi_vec->dma_addr = mhi_buf->dma_addr;
bhi_vec->size = to_cpy;
buf += to_cpy;
remainder -= to_cpy;
i++;
bhi_vec++;
mhi_buf++;
}
}
void mhi_fw_load_worker(struct work_struct *work)
{
struct mhi_controller *mhi_cntrl;
const struct firmware *firmware = NULL;
struct image_info *image_info;
struct device *dev;
const char *fw_name;
void *buf;
dma_addr_t dma_addr;
size_t size;
int ret;
mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
dev = &mhi_cntrl->mhi_dev->dev;
dev_dbg(dev, "Waiting for device to enter PBL from: %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_PBL(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "Device MHI is not in valid state\n");
return;
}
/* If device is in pass through, do reset to ready state transition */
if (mhi_cntrl->ee == MHI_EE_PTHRU)
goto fw_load_ee_pthru;
fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
mhi_cntrl->edl_image : mhi_cntrl->fw_image;
if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
!mhi_cntrl->seg_len))) {
dev_err(dev,
"No firmware image defined or !sbl_size || !seg_len\n");
return;
}
ret = request_firmware(&firmware, fw_name, dev);
if (ret) {
dev_err(dev, "Error loading firmware: %d\n", ret);
return;
}
size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
/* SBL size provided is maximum size, not necessarily the image size */
if (size > firmware->size)
size = firmware->size;
buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
if (!buf) {
release_firmware(firmware);
return;
}
/* Download SBL image */
memcpy(buf, firmware->data, size);
ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
release_firmware(firmware);
/* Error or in EDL mode, we're done */
if (ret || mhi_cntrl->ee == MHI_EE_EDL)
return;
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->dev_state = MHI_STATE_RESET;
write_unlock_irq(&mhi_cntrl->pm_lock);
/*
* If we're doing fbc, populate the vector tables while the
* device is transitioning into the MHI READY state
*/
if (mhi_cntrl->fbc_download) {
ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
firmware->size);
if (ret)
goto error_alloc_fw_table;
/* Load the firmware into BHIE vec table */
mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
}
fw_load_ee_pthru:
/* Transitioning into MHI RESET->READY state */
ret = mhi_ready_state_transition(mhi_cntrl);
if (!mhi_cntrl->fbc_download)
return;
if (ret)
goto error_read;
/* Wait for the SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_SBL ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "MHI did not enter SBL\n");
goto error_read;
}
/* Start full firmware image download */
image_info = mhi_cntrl->fbc_image;
ret = mhi_fw_load_amss(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
release_firmware(firmware);
return;
error_read:
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
error_alloc_fw_table:
release_firmware(firmware);
}
(File diff suppressed because it is too large.)
@@ -0,0 +1,687 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*
*/
#ifndef _MHI_INT_H
#define _MHI_INT_H
#include <linux/mhi.h>
extern struct bus_type mhi_bus_type;
/* MHI MMIO register mapping */
#define PCI_INVALID_READ(val) (val == U32_MAX)
#define MHIREGLEN (0x0)
#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
#define MHIREGLEN_MHIREGLEN_SHIFT (0)
#define MHIVER (0x8)
#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
#define MHIVER_MHIVER_SHIFT (0)
#define MHICFG (0x10)
#define MHICFG_NHWER_MASK (0xFF000000)
#define MHICFG_NHWER_SHIFT (24)
#define MHICFG_NER_MASK (0xFF0000)
#define MHICFG_NER_SHIFT (16)
#define MHICFG_NHWCH_MASK (0xFF00)
#define MHICFG_NHWCH_SHIFT (8)
#define MHICFG_NCH_MASK (0xFF)
#define MHICFG_NCH_SHIFT (0)
#define CHDBOFF (0x18)
#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
#define CHDBOFF_CHDBOFF_SHIFT (0)
#define ERDBOFF (0x20)
#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
#define ERDBOFF_ERDBOFF_SHIFT (0)
#define BHIOFF (0x28)
#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
#define BHIOFF_BHIOFF_SHIFT (0)
#define BHIEOFF (0x2C)
#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
#define BHIEOFF_BHIEOFF_SHIFT (0)
#define DEBUGOFF (0x30)
#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
#define DEBUGOFF_DEBUGOFF_SHIFT (0)
#define MHICTRL (0x38)
#define MHICTRL_MHISTATE_MASK (0x0000FF00)
#define MHICTRL_MHISTATE_SHIFT (8)
#define MHICTRL_RESET_MASK (0x2)
#define MHICTRL_RESET_SHIFT (1)
#define MHISTATUS (0x48)
#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
#define MHISTATUS_MHISTATE_SHIFT (8)
#define MHISTATUS_SYSERR_MASK (0x4)
#define MHISTATUS_SYSERR_SHIFT (2)
#define MHISTATUS_READY_MASK (0x1)
#define MHISTATUS_READY_SHIFT (0)
#define CCABAP_LOWER (0x58)
#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
#define CCABAP_HIGHER (0x5C)
#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
#define ECABAP_LOWER (0x60)
#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
#define ECABAP_HIGHER (0x64)
#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
#define CRCBAP_LOWER (0x68)
#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
#define CRCBAP_HIGHER (0x6C)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
#define CRDB_LOWER (0x70)
#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
#define CRDB_HIGHER (0x74)
#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
#define MHICTRLBASE_LOWER (0x80)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
#define MHICTRLBASE_HIGHER (0x84)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
#define MHICTRLLIMIT_LOWER (0x88)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
#define MHICTRLLIMIT_HIGHER (0x8C)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
#define MHIDATABASE_LOWER (0x98)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
#define MHIDATABASE_HIGHER (0x9C)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
#define MHIDATALIMIT_LOWER (0xA0)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
#define MHIDATALIMIT_HIGHER (0xA4)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
/* Host request register */
#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
#define MHI_SOC_RESET_REQ BIT(0)
/* MHI BHI offsets */
#define BHI_BHIVERSION_MINOR (0x00)
#define BHI_BHIVERSION_MAJOR (0x04)
#define BHI_IMGADDR_LOW (0x08)
#define BHI_IMGADDR_HIGH (0x0C)
#define BHI_IMGSIZE (0x10)
#define BHI_RSVD1 (0x14)
#define BHI_IMGTXDB (0x18)
#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHI_TXDB_SEQNUM_SHFT (0)
#define BHI_RSVD2 (0x1C)
#define BHI_INTVEC (0x20)
#define BHI_RSVD3 (0x24)
#define BHI_EXECENV (0x28)
#define BHI_STATUS (0x2C)
#define BHI_ERRCODE (0x30)
#define BHI_ERRDBG1 (0x34)
#define BHI_ERRDBG2 (0x38)
#define BHI_ERRDBG3 (0x3C)
#define BHI_SERIALNU (0x40)
#define BHI_SBLANTIROLLVER (0x44)
#define BHI_NUMSEG (0x48)
#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
#define BHI_RSVD5 (0xC4)
#define BHI_STATUS_MASK (0xC0000000)
#define BHI_STATUS_SHIFT (30)
#define BHI_STATUS_ERROR (3)
#define BHI_STATUS_SUCCESS (2)
#define BHI_STATUS_RESET (0)
/* MHI BHIE offsets */
#define BHIE_MSMSOCID_OFFS (0x0000)
#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
#define BHIE_TXVECSIZE_OFFS (0x0034)
#define BHIE_TXVECDB_OFFS (0x003C)
#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECDB_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_OFFS (0x0044)
#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
#define BHIE_RXVECSIZE_OFFS (0x0068)
#define BHIE_RXVECDB_OFFS (0x0070)
#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECDB_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_OFFS (0x0078)
#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
#define SOC_HW_VERSION_OFFS (0x224)
#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
#define EV_CTX_INTMODC_SHIFT 8
#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
#define EV_CTX_INTMODT_SHIFT 16
struct mhi_event_ctxt {
__u32 intmod;
__u32 ertype;
__u32 msivec;
__u64 rbase __packed __aligned(4);
__u64 rlen __packed __aligned(4);
__u64 rp __packed __aligned(4);
__u64 wp __packed __aligned(4);
};
#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
#define CHAN_CTX_CHSTATE_SHIFT 0
#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
#define CHAN_CTX_BRSTMODE_SHIFT 8
#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
#define CHAN_CTX_POLLCFG_SHIFT 10
#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
struct mhi_chan_ctxt {
__u32 chcfg;
__u32 chtype;
__u32 erindex;
__u64 rbase __packed __aligned(4);
__u64 rlen __packed __aligned(4);
__u64 rp __packed __aligned(4);
__u64 wp __packed __aligned(4);
};
struct mhi_cmd_ctxt {
__u32 reserved0;
__u32 reserved1;
__u32 reserved2;
__u64 rbase __packed __aligned(4);
__u64 rlen __packed __aligned(4);
__u64 rp __packed __aligned(4);
__u64 wp __packed __aligned(4);
};
struct mhi_ctxt {
struct mhi_event_ctxt *er_ctxt;
struct mhi_chan_ctxt *chan_ctxt;
struct mhi_cmd_ctxt *cmd_ctxt;
dma_addr_t er_ctxt_addr;
dma_addr_t chan_ctxt_addr;
dma_addr_t cmd_ctxt_addr;
};
struct mhi_tre {
u64 ptr;
u32 dword[2];
};
struct bhi_vec_entry {
u64 dma_addr;
u64 size;
};
enum mhi_cmd_type {
MHI_CMD_NOP = 1,
MHI_CMD_RESET_CHAN = 16,
MHI_CMD_STOP_CHAN = 17,
MHI_CMD_START_CHAN = 18,
};
/* No operation command */
#define MHI_TRE_CMD_NOOP_PTR (0)
#define MHI_TRE_CMD_NOOP_DWORD0 (0)
#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
/* Channel reset command */
#define MHI_TRE_CMD_RESET_PTR (0)
#define MHI_TRE_CMD_RESET_DWORD0 (0)
#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_RESET_CHAN << 16))
/* Channel stop command */
#define MHI_TRE_CMD_STOP_PTR (0)
#define MHI_TRE_CMD_STOP_DWORD0 (0)
#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_STOP_CHAN << 16))
/* Channel start command */
#define MHI_TRE_CMD_START_PTR (0)
#define MHI_TRE_CMD_START_DWORD0 (0)
#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_START_CHAN << 16))
#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
/* Event descriptor macros */
#define MHI_TRE_EV_PTR(ptr) (ptr)
#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
/* Transfer descriptor macros */
#define MHI_TRE_DATA_PTR(ptr) (ptr)
#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
| (ieot << 9) | (ieob << 8) | chain)
/* RSC transfer descriptor macros */
#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
enum mhi_pkt_type {
MHI_PKT_TYPE_INVALID = 0x0,
MHI_PKT_TYPE_NOOP_CMD = 0x1,
MHI_PKT_TYPE_TRANSFER = 0x2,
MHI_PKT_TYPE_COALESCING = 0x8,
MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
MHI_PKT_TYPE_TX_EVENT = 0x22,
MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
MHI_PKT_TYPE_EE_EVENT = 0x40,
MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
MHI_PKT_TYPE_STALE_EVENT, /* internal event */
};
/* MHI transfer completion events */
enum mhi_ev_ccs {
MHI_EV_CC_INVALID = 0x0,
MHI_EV_CC_SUCCESS = 0x1,
MHI_EV_CC_EOT = 0x2, /* End of transfer event */
MHI_EV_CC_OVERFLOW = 0x3,
MHI_EV_CC_EOB = 0x4, /* End of block event */
MHI_EV_CC_OOB = 0x5, /* Out of block event */
MHI_EV_CC_DB_MODE = 0x6,
MHI_EV_CC_UNDEFINED_ERR = 0x10,
MHI_EV_CC_BAD_TRE = 0x11,
};
enum mhi_ch_state {
MHI_CH_STATE_DISABLED = 0x0,
MHI_CH_STATE_ENABLED = 0x1,
MHI_CH_STATE_RUNNING = 0x2,
MHI_CH_STATE_SUSPENDED = 0x3,
MHI_CH_STATE_STOP = 0x4,
MHI_CH_STATE_ERROR = 0x5,
};
#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
mode != MHI_DB_BRST_ENABLE)
extern const char * const mhi_ee_str[MHI_EE_MAX];
#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
"INVALID_EE" : mhi_ee_str[ee])
#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
ee == MHI_EE_EDL)
#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
enum dev_st_transition {
DEV_ST_TRANSITION_PBL,
DEV_ST_TRANSITION_READY,
DEV_ST_TRANSITION_SBL,
DEV_ST_TRANSITION_MISSION_MODE,
DEV_ST_TRANSITION_MAX,
};
extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
"INVALID_STATE" : dev_state_tran_str[state])
extern const char * const mhi_state_str[MHI_STATE_MAX];
#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
!mhi_state_str[state]) ? \
"INVALID_STATE" : mhi_state_str[state])
/* internal power states */
enum mhi_pm_state {
MHI_PM_STATE_DISABLE,
MHI_PM_STATE_POR,
MHI_PM_STATE_M0,
MHI_PM_STATE_M2,
MHI_PM_STATE_M3_ENTER,
MHI_PM_STATE_M3,
MHI_PM_STATE_M3_EXIT,
MHI_PM_STATE_FW_DL_ERR,
MHI_PM_STATE_SYS_ERR_DETECT,
MHI_PM_STATE_SYS_ERR_PROCESS,
MHI_PM_STATE_SHUTDOWN_PROCESS,
MHI_PM_STATE_LD_ERR_FATAL_DETECT,
MHI_PM_STATE_MAX
};
#define MHI_PM_DISABLE BIT(0)
#define MHI_PM_POR BIT(1)
#define MHI_PM_M0 BIT(2)
#define MHI_PM_M2 BIT(3)
#define MHI_PM_M3_ENTER BIT(4)
#define MHI_PM_M3 BIT(5)
#define MHI_PM_M3_EXIT BIT(6)
/* firmware download failure state */
#define MHI_PM_FW_DL_ERR BIT(7)
#define MHI_PM_SYS_ERR_DETECT BIT(8)
#define MHI_PM_SYS_ERR_PROCESS BIT(9)
#define MHI_PM_SHUTDOWN_PROCESS BIT(10)
/* link not accessible */
#define MHI_PM_LD_ERR_FATAL_DETECT BIT(11)
#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
mhi_cntrl->db_access)
#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
MHI_PM_M2 | MHI_PM_M3_EXIT))
#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
MHI_PM_IN_ERROR_STATE(pm_state))
#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
(MHI_PM_M3_ENTER | MHI_PM_M3))
#define NR_OF_CMD_RINGS 1
#define CMD_EL_PER_RING 128
#define PRIMARY_CMD_RING 0
#define MHI_DEV_WAKE_DB 127
#define MHI_MAX_MTU 0xffff
enum mhi_er_type {
MHI_ER_TYPE_INVALID = 0x0,
MHI_ER_TYPE_VALID = 0x1,
};
struct db_cfg {
bool reset_req;
bool db_mode;
u32 pollcfg;
enum mhi_db_brst_mode brstmode;
dma_addr_t db_val;
void (*process_db)(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_cfg, void __iomem *io_addr,
dma_addr_t db_val);
};
struct mhi_pm_transitions {
enum mhi_pm_state from_state;
u32 to_states;
};
struct state_transition {
struct list_head node;
enum dev_st_transition state;
};
struct mhi_ring {
dma_addr_t dma_handle;
dma_addr_t iommu_base;
u64 *ctxt_wp; /* points to the context write pointer */
void *pre_aligned;
void *base;
void *rp;
void *wp;
size_t el_size;
size_t len;
size_t elements;
size_t alloc_size;
void __iomem *db_addr;
};
struct mhi_cmd {
struct mhi_ring ring;
spinlock_t lock;
};
struct mhi_buf_info {
void *v_addr;
void *bb_addr;
void *wp;
void *cb_buf;
dma_addr_t p_addr;
size_t len;
enum dma_data_direction dir;
bool used; /* Indicates whether the buffer is used or not */
bool pre_mapped; /* Already pre-mapped by client */
};
struct mhi_event {
struct mhi_controller *mhi_cntrl;
struct mhi_chan *mhi_chan; /* dedicated to channel */
u32 er_index;
u32 intmod;
u32 irq;
int chan; /* this event ring is dedicated to a channel (optional) */
u32 priority;
enum mhi_er_data_type data_type;
struct mhi_ring ring;
struct db_cfg db_cfg;
struct tasklet_struct task;
spinlock_t lock;
int (*process_event)(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event,
u32 event_quota);
bool hw_ring;
bool cl_manage;
bool offload_ev; /* managed by a device driver */
};
struct mhi_chan {
const char *name;
/*
* Important: When consuming, increment tre_ring first and when
* releasing, decrement buf_ring first. If tre_ring has space, buf_ring
* is guaranteed to have space, so we do not need to check both rings.
*/
struct mhi_ring buf_ring;
struct mhi_ring tre_ring;
u32 chan;
u32 er_index;
u32 intmod;
enum mhi_ch_type type;
enum dma_data_direction dir;
struct db_cfg db_cfg;
enum mhi_ch_ee_mask ee_mask;
enum mhi_ch_state ch_state;
enum mhi_ev_ccs ccs;
struct mhi_device *mhi_dev;
void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
struct mutex mutex;
struct completion completion;
rwlock_t lock;
struct list_head node;
bool lpm_notify;
bool configured;
bool offload_ch;
bool pre_alloc;
bool auto_start;
bool wake_capable;
};
/* Default MHI timeout */
#define MHI_TIMEOUT_MS (1000)
struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info **image_info, size_t alloc_size);
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info);
/* Power management APIs */
enum mhi_pm_state __must_check mhi_tryset_pm_state(
struct mhi_controller *mhi_cntrl,
enum mhi_pm_state state);
const char *to_mhi_pm_state_str(enum mhi_pm_state state);
enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum dev_st_transition state);
void mhi_pm_st_worker(struct work_struct *work);
void mhi_pm_sys_err_worker(struct work_struct *work);
void mhi_fw_load_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
enum mhi_cmd_type cmd);
/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
void __iomem *db_addr, dma_addr_t db_val);
void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
struct db_cfg *db_mode, void __iomem *db_addr,
dma_addr_t db_val);
int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 *out);
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 shift, u32 *out);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 shift, u32 val);
void mhi_ring_er_db(struct mhi_event *mhi_event);
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
dma_addr_t db_val);
void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
/* Initialization methods */
int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
/* Memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
dma_addr_t *dma_handle,
gfp_t gfp)
{
void *buf = dma_alloc_coherent(mhi_cntrl->cntrl_dev, size, dma_handle,
gfp);
return buf;
}
static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
size_t size,
void *vaddr,
dma_addr_t dma_handle)
{
dma_free_coherent(mhi_cntrl->cntrl_dev, size, vaddr, dma_handle);
}
/* Event processing methods */
void mhi_ctrl_ev_task(unsigned long data);
void mhi_ev_task(unsigned long data);
int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
struct mhi_event *mhi_event, u32 event_quota);
/* ISR handlers */
irqreturn_t mhi_irq_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
void *buf, void *cb, size_t buf_len, enum mhi_flags flags);
int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
#endif /* _MHI_INT_H */
(File diff suppressed because it is too large.)
@@ -0,0 +1,969 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
*
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/wait.h>
#include "internal.h"
/*
* Not all MHI state transitions are synchronous. Transitions like Linkdown,
* SYS_ERR, and shutdown can happen anytime asynchronously. This function will
* transition to a new state only if we're allowed to.
*
* Priority increases as we go down. For instance, from any state in L0, the
* transition can be made to states in L1, L2 and L3. A notable exception to
* this rule is state DISABLE. From DISABLE state we can only transition to
* POR state. Also, while in the L2 state, the user cannot jump back to the
* previous L1 or L0 states.
*
* Valid transitions:
* L0: DISABLE <--> POR
* POR <--> POR
* POR -> M0 -> M2 --> M0
* POR -> FW_DL_ERR
* FW_DL_ERR <--> FW_DL_ERR
* M0 <--> M0
* M0 -> FW_DL_ERR
* M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
* L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
* L2: SHUTDOWN_PROCESS -> DISABLE
* L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
* LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
*/
static struct mhi_pm_transitions const dev_state_transitions[] = {
/* L0 States */
{
MHI_PM_DISABLE,
MHI_PM_POR
},
{
MHI_PM_POR,
MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
},
{
MHI_PM_M0,
MHI_PM_M0 | MHI_PM_M2 | MHI_PM_M3_ENTER |
MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
},
{
MHI_PM_M2,
MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT
},
{
MHI_PM_M3_ENTER,
MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT
},
{
MHI_PM_M3,
MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
},
{
MHI_PM_M3_EXIT,
MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT
},
{
MHI_PM_FW_DL_ERR,
MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
},
/* L1 States */
{
MHI_PM_SYS_ERR_DETECT,
MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT
},
{
MHI_PM_SYS_ERR_PROCESS,
MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
MHI_PM_LD_ERR_FATAL_DETECT
},
/* L2 States */
{
MHI_PM_SHUTDOWN_PROCESS,
MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
},
/* L3 States */
{
MHI_PM_LD_ERR_FATAL_DETECT,
MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
},
};
enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cntrl,
enum mhi_pm_state state)
{
unsigned long cur_state = mhi_cntrl->pm_state;
/* pm_state is a single set bit; its position indexes the transition table */
int index = find_last_bit(&cur_state, 32);
if (unlikely(index >= ARRAY_SIZE(dev_state_transitions)))
return cur_state;
if (unlikely(dev_state_transitions[index].from_state != cur_state))
return cur_state;
if (unlikely(!(dev_state_transitions[index].to_states & state)))
return cur_state;
mhi_cntrl->pm_state = state;
return mhi_cntrl->pm_state;
}
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
{
if (state == MHI_STATE_RESET) {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
} else {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_MHISTATE_MASK,
MHICTRL_MHISTATE_SHIFT, state);
}
}
/* NOP for backward compatibility, host allowed to ring DB in M2 state */
static void mhi_toggle_dev_wake_nop(struct mhi_controller *mhi_cntrl)
{
}
static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
{
mhi_cntrl->wake_get(mhi_cntrl, false);
mhi_cntrl->wake_put(mhi_cntrl, true);
}
/* Handle device ready state transition */
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
{
void __iomem *base = mhi_cntrl->regs;
struct mhi_event *mhi_event;
enum mhi_pm_state cur_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 reset = 1, ready = 0;
int ret, i;
/* Wait for RESET to be cleared and READY bit to be set by the device */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
MHICTRL_RESET_MASK,
MHICTRL_RESET_SHIFT, &reset) ||
mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
MHISTATUS_READY_MASK,
MHISTATUS_READY_SHIFT, &ready) ||
(!reset && ready),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
/* Check if device entered error state */
if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "Device link is not accessible\n");
return -EIO;
}
/* Timeout if device did not transition to ready state */
if (reset || !ready) {
dev_err(dev, "Device Ready timeout\n");
return -ETIMEDOUT;
}
dev_dbg(dev, "Device in READY State\n");
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
mhi_cntrl->dev_state = MHI_STATE_READY;
write_unlock_irq(&mhi_cntrl->pm_lock);
if (cur_state != MHI_PM_POR) {
dev_err(dev, "Error moving to state %s from %s\n",
to_mhi_pm_state_str(MHI_PM_POR),
to_mhi_pm_state_str(cur_state));
return -EIO;
}
read_lock_bh(&mhi_cntrl->pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
dev_err(dev, "Device registers not accessible\n");
goto error_mmio;
}
/* Configure MMIO registers */
ret = mhi_init_mmio(mhi_cntrl);
if (ret) {
dev_err(dev, "Error configuring MMIO registers\n");
goto error_mmio;
}
/* Add elements to all SW event rings */
mhi_event = mhi_cntrl->mhi_event;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
struct mhi_ring *ring = &mhi_event->ring;
/* Skip if this is an offload or HW event */
if (mhi_event->offload_ev || mhi_event->hw_ring)
continue;
ring->wp = ring->base + ring->len - ring->el_size;
*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
/* Update all cores */
smp_wmb();
/* Ring the event ring db */
spin_lock_irq(&mhi_event->lock);
mhi_ring_er_db(mhi_event);
spin_unlock_irq(&mhi_event->lock);
}
/* Set MHI to M0 state */
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
read_unlock_bh(&mhi_cntrl->pm_lock);
return 0;
error_mmio:
read_unlock_bh(&mhi_cntrl->pm_lock);
return -EIO;
}
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
{
enum mhi_pm_state cur_state;
struct mhi_chan *mhi_chan;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i;
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->dev_state = MHI_STATE_M0;
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (unlikely(cur_state != MHI_PM_M0)) {
dev_err(dev, "Unable to transition to M0 state\n");
return -EIO;
}
/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
/* Ring all event rings and CMD ring only if we're in mission mode */
if (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
struct mhi_cmd *mhi_cmd =
&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
if (mhi_event->offload_ev)
continue;
spin_lock_irq(&mhi_event->lock);
mhi_ring_er_db(mhi_event);
spin_unlock_irq(&mhi_event->lock);
}
/* Only ring primary cmd ring if ring is not empty */
spin_lock_irq(&mhi_cmd->lock);
if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
spin_unlock_irq(&mhi_cmd->lock);
}
/* Ring channel DB registers */
mhi_chan = mhi_cntrl->mhi_chan;
for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
write_lock_irq(&mhi_chan->lock);
if (mhi_chan->db_cfg.reset_req)
mhi_chan->db_cfg.db_mode = true;
/* Only ring DB if ring is not empty */
if (tre_ring->base && tre_ring->wp != tre_ring->rp)
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
write_unlock_irq(&mhi_chan->lock);
}
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
return 0;
}
/*
* After receiving the MHI state change event from the device indicating the
* transition to M1 state, the host can transition the device to M2 state
* to keep it in a low power state.
*/
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
{
enum mhi_pm_state state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
write_lock_irq(&mhi_cntrl->pm_lock);
state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
if (state == MHI_PM_M2) {
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
mhi_cntrl->dev_state = MHI_STATE_M2;
write_unlock_irq(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
/* If there are any pending resources, exit M2 immediately */
if (unlikely(atomic_read(&mhi_cntrl->pending_pkts) ||
atomic_read(&mhi_cntrl->dev_wake))) {
dev_dbg(dev,
"Exiting M2, pending_pkts: %d dev_wake: %d\n",
atomic_read(&mhi_cntrl->pending_pkts),
atomic_read(&mhi_cntrl->dev_wake));
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
mhi_cntrl->wake_put(mhi_cntrl, true);
read_unlock_bh(&mhi_cntrl->pm_lock);
} else {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_IDLE);
}
} else {
write_unlock_irq(&mhi_cntrl->pm_lock);
}
}
/* MHI M3 completion handler */
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
{
enum mhi_pm_state state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->dev_state = MHI_STATE_M3;
state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (state != MHI_PM_M3) {
dev_err(dev, "Unable to transition to M3 state\n");
return -EIO;
}
wake_up_all(&mhi_cntrl->state_event);
return 0;
}
/* Handle device Mission Mode transition */
static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
{
struct mhi_event *mhi_event;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int i, ret;
dev_dbg(dev, "Processing Mission Mode transition\n");
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee))
return -EIO;
wake_up_all(&mhi_cntrl->state_event);
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_MISSION_MODE);
/* Force MHI to be in M0 state before continuing */
ret = __mhi_device_get_sync(mhi_cntrl);
if (ret)
return ret;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
ret = -EIO;
goto error_mission_mode;
}
/* Add elements to all HW event rings */
mhi_event = mhi_cntrl->mhi_event;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
struct mhi_ring *ring = &mhi_event->ring;
if (mhi_event->offload_ev || !mhi_event->hw_ring)
continue;
ring->wp = ring->base + ring->len - ring->el_size;
*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
/* Update to all cores */
smp_wmb();
spin_lock_irq(&mhi_event->lock);
if (MHI_DB_ACCESS_VALID(mhi_cntrl))
mhi_ring_er_db(mhi_event);
spin_unlock_irq(&mhi_event->lock);
}
read_unlock_bh(&mhi_cntrl->pm_lock);
/*
* The MHI devices are only created when the client device switches its
* Execution Environment (EE) to either SBL or AMSS states
*/
mhi_create_devices(mhi_cntrl);
read_lock_bh(&mhi_cntrl->pm_lock);
error_mission_mode:
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
return ret;
}
/* Handle SYS_ERR and Shutdown transitions */
static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
enum mhi_pm_state transition_state)
{
enum mhi_pm_state cur_state, prev_state;
struct mhi_event *mhi_event;
struct mhi_cmd_ctxt *cmd_ctxt;
struct mhi_cmd *mhi_cmd;
struct mhi_event_ctxt *er_ctxt;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret, i;
dev_dbg(dev, "Transitioning from PM state: %s to: %s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
to_mhi_pm_state_str(transition_state));
/* We must notify the MHI controller driver so it can clean up first */
if (transition_state == MHI_PM_SYS_ERR_PROCESS) {
/*
* If the controller supports RDDM, we do not process the SYS
* error state; instead we jump directly to the RDDM state
*/
if (mhi_cntrl->rddm_image) {
dev_dbg(dev,
"Controller supports RDDM, so skip SYS_ERR\n");
return;
}
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_SYS_ERROR);
}
mutex_lock(&mhi_cntrl->pm_mutex);
write_lock_irq(&mhi_cntrl->pm_lock);
prev_state = mhi_cntrl->pm_state;
cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
if (cur_state == transition_state) {
mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
mhi_cntrl->dev_state = MHI_STATE_RESET;
}
write_unlock_irq(&mhi_cntrl->pm_lock);
/* Wake up threads waiting for state transition */
wake_up_all(&mhi_cntrl->state_event);
if (cur_state != transition_state) {
dev_err(dev, "Failed to transition to state: %s from: %s\n",
to_mhi_pm_state_str(transition_state),
to_mhi_pm_state_str(cur_state));
mutex_unlock(&mhi_cntrl->pm_mutex);
return;
}
/* Trigger MHI RESET so that the device will not access host memory */
if (MHI_REG_ACCESS_VALID(prev_state)) {
u32 in_reset = -1;
unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
dev_dbg(dev, "Triggering MHI Reset in device\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
/* Wait for the reset bit to be cleared by the device */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_read_reg_field(mhi_cntrl,
mhi_cntrl->regs,
MHICTRL,
MHICTRL_RESET_MASK,
MHICTRL_RESET_SHIFT,
&in_reset) ||
!in_reset, timeout);
if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
dev_err(dev, "Device failed to exit MHI Reset state\n");
mutex_unlock(&mhi_cntrl->pm_mutex);
return;
}
/*
* Device will clear BHI_INTVEC as a part of RESET processing,
* hence re-program it
*/
mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
}
dev_dbg(dev,
"Waiting for all pending event ring processing to complete\n");
mhi_event = mhi_cntrl->mhi_event;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
if (mhi_event->offload_ev)
continue;
tasklet_kill(&mhi_event->task);
}
/* Release lock and wait for all pending threads to complete */
mutex_unlock(&mhi_cntrl->pm_mutex);
dev_dbg(dev, "Waiting for all pending threads to complete\n");
wake_up_all(&mhi_cntrl->state_event);
flush_work(&mhi_cntrl->st_worker);
flush_work(&mhi_cntrl->fw_worker);
dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
device_for_each_child(mhi_cntrl->cntrl_dev, NULL, mhi_destroy_device);
mutex_lock(&mhi_cntrl->pm_mutex);
WARN_ON(atomic_read(&mhi_cntrl->dev_wake));
WARN_ON(atomic_read(&mhi_cntrl->pending_pkts));
/* Reset the ev rings and cmd rings */
dev_dbg(dev, "Resetting EV CTXT and CMD CTXT\n");
mhi_cmd = mhi_cntrl->mhi_cmd;
cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
struct mhi_ring *ring = &mhi_cmd->ring;
ring->rp = ring->base;
ring->wp = ring->base;
cmd_ctxt->rp = cmd_ctxt->rbase;
cmd_ctxt->wp = cmd_ctxt->rbase;
}
mhi_event = mhi_cntrl->mhi_event;
er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
mhi_event++) {
struct mhi_ring *ring = &mhi_event->ring;
/* Skip offload events */
if (mhi_event->offload_ev)
continue;
ring->rp = ring->base;
ring->wp = ring->base;
er_ctxt->rp = er_ctxt->rbase;
er_ctxt->wp = er_ctxt->rbase;
}
if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
mhi_ready_state_transition(mhi_cntrl);
} else {
/* Move to disable state */
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (unlikely(cur_state != MHI_PM_DISABLE))
dev_err(dev, "Error moving from PM state: %s to: %s\n",
to_mhi_pm_state_str(cur_state),
to_mhi_pm_state_str(MHI_PM_DISABLE));
}
dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state));
mutex_unlock(&mhi_cntrl->pm_mutex);
}
/* Queue a new work item and schedule work */
int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum dev_st_transition state)
{
struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
unsigned long flags;
if (!item)
return -ENOMEM;
item->state = state;
spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
list_add_tail(&item->node, &mhi_cntrl->transition_list);
spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
schedule_work(&mhi_cntrl->st_worker);
return 0;
}
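/*
 * Hypothetical usage sketch (not part of this diff): a caller such as the
 * status interrupt handler can hand a transition over to the worker:
 *
 *	if (mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_READY))
 *		dev_err(dev, "Failed to queue READY transition\n");
 */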
/* SYS_ERR worker */
void mhi_pm_sys_err_worker(struct work_struct *work)
{
struct mhi_controller *mhi_cntrl = container_of(work,
struct mhi_controller,
syserr_worker);
mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
}
/* Device State Transition worker */
void mhi_pm_st_worker(struct work_struct *work)
{
struct state_transition *itr, *tmp;
LIST_HEAD(head);
struct mhi_controller *mhi_cntrl = container_of(work,
struct mhi_controller,
st_worker);
struct device *dev = &mhi_cntrl->mhi_dev->dev;
spin_lock_irq(&mhi_cntrl->transition_lock);
list_splice_tail_init(&mhi_cntrl->transition_list, &head);
spin_unlock_irq(&mhi_cntrl->transition_lock);
list_for_each_entry_safe(itr, tmp, &head, node) {
list_del(&itr->node);
dev_dbg(dev, "Handling state transition: %s\n",
TO_DEV_STATE_TRANS_STR(itr->state));
switch (itr->state) {
case DEV_ST_TRANSITION_PBL:
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (MHI_IN_PBL(mhi_cntrl->ee))
wake_up_all(&mhi_cntrl->state_event);
break;
case DEV_ST_TRANSITION_SBL:
write_lock_irq(&mhi_cntrl->pm_lock);
mhi_cntrl->ee = MHI_EE_SBL;
write_unlock_irq(&mhi_cntrl->pm_lock);
/*
* The MHI devices are only created when the client
* device switches its Execution Environment (EE) to
* either SBL or AMSS states
*/
mhi_create_devices(mhi_cntrl);
break;
case DEV_ST_TRANSITION_MISSION_MODE:
mhi_pm_mission_mode_transition(mhi_cntrl);
break;
case DEV_ST_TRANSITION_READY:
mhi_ready_state_transition(mhi_cntrl);
break;
default:
break;
}
kfree(itr);
}
}
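/*
 * The worker above relies on the common splice-under-lock idiom: move every
 * queued item to a private list in one step, then process that list with the
 * lock dropped so new transitions can be queued concurrently. In isolation
 * (illustrative, generic names):
 *
 *	LIST_HEAD(head);
 *
 *	spin_lock_irq(&lock);
 *	list_splice_tail_init(&pending, &head);
 *	spin_unlock_irq(&lock);
 *	... walk 'head' without holding the lock ...
 */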
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
{
int ret;
/* Wake up the device */
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
read_unlock_bh(&mhi_cntrl->pm_lock);
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->pm_state == MHI_PM_M0 ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
return -EIO;
}
return 0;
}
/* Assert device wake db */
static void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
{
unsigned long flags;
/*
* If force flag is set, then increment the wake count value and
* ring wake db
*/
if (unlikely(force)) {
spin_lock_irqsave(&mhi_cntrl->wlock, flags);
atomic_inc(&mhi_cntrl->dev_wake);
if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) &&
!mhi_cntrl->wake_set) {
mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
mhi_cntrl->wake_set = true;
}
spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
} else {
/*
* If resources are already requested, then just increment
* the wake count value and return
*/
if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
return;
spin_lock_irqsave(&mhi_cntrl->wlock, flags);
if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) &&
!mhi_cntrl->wake_set) {
mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
mhi_cntrl->wake_set = true;
}
spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
}
}
/* De-assert device wake db */
static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl,
bool override)
{
unsigned long flags;
/*
* Only continue if there is a single resource, else just decrement
* and return
*/
if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
return;
spin_lock_irqsave(&mhi_cntrl->wlock, flags);
if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override &&
mhi_cntrl->wake_set) {
mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
mhi_cntrl->wake_set = false;
}
spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
}
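/*
 * Both helpers lean on atomic_add_unless() for a lock-free fast path: the
 * counter only moves through zero while wlock is held, so the doorbell write
 * cannot race with the vote count. A minimal standalone sketch of the fast
 * path, assuming a plain atomic_t counter:
 *
 *	static bool vote_fast_path(atomic_t *cnt)
 *	{
 *		return atomic_add_unless(cnt, 1, 0);
 *	}
 *
 * atomic_add_unless() increments and returns true unless the count is zero.
 */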
int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
{
enum mhi_ee_type current_ee;
enum dev_st_transition next_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 val;
int ret;
dev_info(dev, "Requested to power ON\n");
if (mhi_cntrl->nr_irqs < mhi_cntrl->total_ev_rings)
return -EINVAL;
/* Supply default wake routines if not provided by controller driver */
if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put ||
!mhi_cntrl->wake_toggle) {
mhi_cntrl->wake_get = mhi_assert_dev_wake;
mhi_cntrl->wake_put = mhi_deassert_dev_wake;
mhi_cntrl->wake_toggle = (mhi_cntrl->db_access & MHI_PM_M2) ?
mhi_toggle_dev_wake_nop : mhi_toggle_dev_wake;
}
mutex_lock(&mhi_cntrl->pm_mutex);
mhi_cntrl->pm_state = MHI_PM_DISABLE;
if (!mhi_cntrl->pre_init) {
/* Setup device context */
ret = mhi_init_dev_ctxt(mhi_cntrl);
if (ret)
goto error_dev_ctxt;
}
ret = mhi_init_irq_setup(mhi_cntrl);
if (ret)
goto error_setup_irq;
/* Setup BHI offset & INTVEC */
write_lock_irq(&mhi_cntrl->pm_lock);
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val);
if (ret) {
write_unlock_irq(&mhi_cntrl->pm_lock);
goto error_bhi_offset;
}
mhi_cntrl->bhi = mhi_cntrl->regs + val;
/* Setup BHIE offset */
if (mhi_cntrl->fbc_download) {
ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
if (ret) {
write_unlock_irq(&mhi_cntrl->pm_lock);
dev_err(dev, "Error reading BHIE offset\n");
goto error_bhi_offset;
}
mhi_cntrl->bhie = mhi_cntrl->regs + val;
}
mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
mhi_cntrl->pm_state = MHI_PM_POR;
mhi_cntrl->ee = MHI_EE_MAX;
current_ee = mhi_get_exec_env(mhi_cntrl);
write_unlock_irq(&mhi_cntrl->pm_lock);
/* Confirm that the device is in valid exec env */
if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
dev_err(dev, "Not a valid EE for power on\n");
ret = -EIO;
goto error_bhi_offset;
}
/* Transition to next state */
next_state = MHI_IN_PBL(current_ee) ?
DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
if (next_state == DEV_ST_TRANSITION_PBL)
schedule_work(&mhi_cntrl->fw_worker);
mhi_queue_state_transition(mhi_cntrl, next_state);
mutex_unlock(&mhi_cntrl->pm_mutex);
dev_info(dev, "Power on setup success\n");
return 0;
error_bhi_offset:
mhi_deinit_free_irq(mhi_cntrl);
error_setup_irq:
if (!mhi_cntrl->pre_init)
mhi_deinit_dev_ctxt(mhi_cntrl);
error_dev_ctxt:
mutex_unlock(&mhi_cntrl->pm_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(mhi_async_power_up);
void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
{
enum mhi_pm_state cur_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
/* If it's not a graceful shutdown, force MHI to linkdown state */
if (!graceful) {
mutex_lock(&mhi_cntrl->pm_mutex);
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_LD_ERR_FATAL_DETECT);
write_unlock_irq(&mhi_cntrl->pm_lock);
mutex_unlock(&mhi_cntrl->pm_mutex);
if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
dev_dbg(dev, "Failed to move to state: %s from: %s\n",
to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
}
mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
mhi_deinit_free_irq(mhi_cntrl);
if (!mhi_cntrl->pre_init) {
/* Free all allocated resources */
if (mhi_cntrl->fbc_image) {
mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
mhi_cntrl->fbc_image = NULL;
}
mhi_deinit_dev_ctxt(mhi_cntrl);
}
}
EXPORT_SYMBOL_GPL(mhi_power_down);
int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
{
int ret = mhi_async_power_up(mhi_cntrl);
if (ret)
return ret;
wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_MISSION_MODE(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO;
}
EXPORT_SYMBOL(mhi_sync_power_up);
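/*
 * A controller driver is expected to pair power up with power down; a
 * hypothetical probe/remove sketch (names illustrative, error paths trimmed):
 *
 *	ret = mhi_sync_power_up(mhi_cntrl);
 *	if (ret)
 *		return ret;
 *	... normal operation, client drivers bind and transfer ...
 *	mhi_power_down(mhi_cntrl, true);
 */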
int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
/* Check if device is already in RDDM */
if (mhi_cntrl->ee == MHI_EE_RDDM)
return 0;
dev_dbg(dev, "Triggering SYS_ERR to force RDDM state\n");
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
/* Wait for RDDM event */
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->ee == MHI_EE_RDDM,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
ret = ret ? 0 : -EIO;
return ret;
}
EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
void mhi_device_get(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_dev->dev_wake++;
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, true);
read_unlock_bh(&mhi_cntrl->pm_lock);
}
EXPORT_SYMBOL_GPL(mhi_device_get);
int mhi_device_get_sync(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
int ret;
ret = __mhi_device_get_sync(mhi_cntrl);
if (!ret)
mhi_dev->dev_wake++;
return ret;
}
EXPORT_SYMBOL_GPL(mhi_device_get_sync);
void mhi_device_put(struct mhi_device *mhi_dev)
{
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_dev->dev_wake--;
read_lock_bh(&mhi_cntrl->pm_lock);
if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
mhi_cntrl->runtime_get(mhi_cntrl);
mhi_cntrl->runtime_put(mhi_cntrl);
}
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
}
EXPORT_SYMBOL_GPL(mhi_device_put);
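/*
 * For client drivers, the get/put pair brackets any window in which the
 * device must stay out of the low power M2/M3 states; a hypothetical sketch:
 *
 *	ret = mhi_device_get_sync(mhi_dev);
 *	if (ret)
 *		return ret;
 *	... latency-sensitive transfers ...
 *	mhi_device_put(mhi_dev);
 */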


@@ -7,28 +7,6 @@ menu "Character devices"
source "drivers/tty/Kconfig"
config DEVMEM
bool "/dev/mem virtual device support"
default y
help
Say Y here if you want to support the /dev/mem device.
The /dev/mem device is used to access areas of physical
memory.
When in doubt, say "Y".
config DEVKMEM
bool "/dev/kmem virtual device support"
# On arm64, VMALLOC_START < PAGE_OFFSET, which confuses kmem read/write
depends on !ARM64
help
Say Y here if you want to support the /dev/kmem device. The
/dev/kmem device is rarely used, but can be used for certain
kinds of kernel debugging operations.
When in doubt, say "N".
source "drivers/tty/serial/Kconfig"
source "drivers/tty/serdev/Kconfig"
config TTY_PRINTK
tristate "TTY driver to output user messages via printk"
depends on EXPERT && TTY
@@ -113,8 +91,6 @@ config PPDEV
If unsure, say N.
source "drivers/tty/hvc/Kconfig"
config VIRTIO_CONSOLE
tristate "Virtio console"
depends on VIRTIO && TTY
@@ -220,89 +196,6 @@ config NWFLASH
source "drivers/char/hw_random/Kconfig"
config NVRAM
tristate "/dev/nvram support"
depends on X86 || HAVE_ARCH_NVRAM_OPS
default M68K || PPC
---help---
If you say Y here and create a character special file /dev/nvram
with major number 10 and minor number 144 using mknod ("man mknod"),
you get read and write access to the non-volatile memory.
/dev/nvram may be used to view settings in NVRAM or to change them
(with some utility). It could also be used to frequently
save a few bits of very important data that may not be lost over
power-off and for which writing to disk is too insecure. Note
however that most NVRAM space in a PC belongs to the BIOS and you
should NEVER idly tamper with it. See Ralf Brown's interrupt list
for a guide to the use of CMOS bytes by your BIOS.
This memory is conventionally called "NVRAM" on PowerPC machines,
"CMOS RAM" on PCs, "NVRAM" on Ataris and "PRAM" on Macintoshes.
To compile this driver as a module, choose M here: the
module will be called nvram.
#
# These legacy RTC drivers just cause too many conflicts with the generic
# RTC framework ... let's not even try to coexist any more.
#
if RTC_LIB=n
config RTC
tristate "Enhanced Real Time Clock Support (legacy PC RTC driver)"
depends on ALPHA
---help---
If you say Y here and create a character special file /dev/rtc with
major number 10 and minor number 135 using mknod ("man mknod"), you
will get access to the real time clock (or hardware clock) built
into your computer.
Every PC has such a clock built in. It can be used to generate
signals from as low as 1Hz up to 8192Hz, and can also be used
as a 24 hour alarm. It reports status information via the file
/proc/driver/rtc and its behaviour is set by various ioctls on
/dev/rtc.
If you run Linux on a multiprocessor machine and said Y to
"Symmetric Multi Processing" above, you should say Y here to read
and set the RTC in an SMP compatible fashion.
If you think you have a use for such a device (such as periodic data
sampling), then say Y here, and read <file:Documentation/admin-guide/rtc.rst>
for details.
To compile this driver as a module, choose M here: the
module will be called rtc.
config JS_RTC
tristate "Enhanced Real Time Clock Support"
depends on SPARC32 && PCI
---help---
If you say Y here and create a character special file /dev/rtc with
major number 10 and minor number 135 using mknod ("man mknod"), you
will get access to the real time clock (or hardware clock) built
into your computer.
Every PC has such a clock built in. It can be used to generate
signals from as low as 1Hz up to 8192Hz, and can also be used
as a 24 hour alarm. It reports status information via the file
/proc/driver/rtc and its behaviour is set by various ioctls on
/dev/rtc.
If you think you have a use for such a device (such as periodic data
sampling), then say Y here, and read <file:Documentation/admin-guide/rtc.rst>
for details.
To compile this driver as a module, choose M here: the
module will be called js-rtc.
config EFI_RTC
bool "EFI Real Time Clock Services"
depends on IA64
endif # RTC_LIB
config DTLK
tristate "Double Talk PC internal speech card support"
depends on ISA
@@ -431,6 +324,48 @@ config NSC_GPIO
pc8736x_gpio drivers. If those drivers are built as
modules, this one will be too, named nsc_gpio
config DEVMEM
bool "/dev/mem virtual device support"
default y
help
Say Y here if you want to support the /dev/mem device.
The /dev/mem device is used to access areas of physical
memory.
When in doubt, say "Y".
config DEVKMEM
bool "/dev/kmem virtual device support"
# On arm64, VMALLOC_START < PAGE_OFFSET, which confuses kmem read/write
depends on !ARM64
help
Say Y here if you want to support the /dev/kmem device. The
/dev/kmem device is rarely used, but can be used for certain
kinds of kernel debugging operations.
When in doubt, say "N".
config NVRAM
tristate "/dev/nvram support"
depends on X86 || HAVE_ARCH_NVRAM_OPS
default M68K || PPC
---help---
If you say Y here and create a character special file /dev/nvram
with major number 10 and minor number 144 using mknod ("man mknod"),
you get read and write access to the non-volatile memory.
/dev/nvram may be used to view settings in NVRAM or to change them
(with some utility). It could also be used to frequently
save a few bits of very important data that may not be lost over
power-off and for which writing to disk is too insecure. Note
however that most NVRAM space in a PC belongs to the BIOS and you
should NEVER idly tamper with it. See Ralf Brown's interrupt list
for a guide to the use of CMOS bytes by your BIOS.
This memory is conventionally called "NVRAM" on PowerPC machines,
"CMOS RAM" on PCs, "NVRAM" on Ataris and "PRAM" on Macintoshes.
To compile this driver as a module, choose M here: the
module will be called nvram.
config RAW_DRIVER
tristate "RAW driver (/dev/raw/rawN)"
depends on BLOCK
@@ -452,6 +387,14 @@ config MAX_RAW_DEVS
Default is 256. Increase this number in case you need lots of
raw devices.
config DEVPORT
bool "/dev/port character device"
depends on ISA || PCI
default y
help
Say Y here if you want to support the /dev/port device. The /dev/port
device is similar to /dev/mem, but for I/O ports.
config HPET
bool "HPET - High Precision Event Timer" if (X86 || IA64)
default n
@@ -511,14 +454,6 @@ config TELCLOCK
/sys/devices/platform/telco_clock, with a number of files for
controlling the behavior of this hardware.
config DEVPORT
bool "/dev/port character device"
depends on ISA || PCI
default y
help
Say Y here if you want to support the /dev/port device. The /dev/port
device is similar to /dev/mem, but for I/O ports.
source "drivers/s390/char/Kconfig"
source "drivers/char/xillybus/Kconfig"


@@ -20,9 +20,7 @@ obj-$(CONFIG_APM_EMULATION) += apm-emulation.o
obj-$(CONFIG_DTLK) += dtlk.o
obj-$(CONFIG_APPLICOM) += applicom.o
obj-$(CONFIG_SONYPI) += sonypi.o
obj-$(CONFIG_RTC) += rtc.o
obj-$(CONFIG_HPET) += hpet.o
obj-$(CONFIG_EFI_RTC) += efirtc.o
obj-$(CONFIG_XILINX_HWICAP) += xilinx_hwicap/
obj-$(CONFIG_NVRAM) += nvram.o
obj-$(CONFIG_TOSHIBA) += toshiba.o
@@ -46,9 +44,6 @@ obj-$(CONFIG_TCG_TPM) += tpm/
obj-$(CONFIG_PS3_FLASH) += ps3flash.o
obj-$(CONFIG_JS_RTC) += js-rtc.o
js-rtc-y = rtc.o
obj-$(CONFIG_XILLYBUS) += xillybus/
obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o
obj-$(CONFIG_ADI) += adi.o


@@ -53,7 +53,6 @@
#define MAX_BOARD 8 /* maximum of pc board possible */
#define MAX_ISA_BOARD 4
#define LEN_RAM_IO 0x800
#define AC_MINOR 157
#ifndef PCI_VENDOR_ID_APPLICOM
#define PCI_VENDOR_ID_APPLICOM 0x1389


@@ -1,366 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* EFI Time Services Driver for Linux
*
* Copyright (C) 1999 Hewlett-Packard Co
* Copyright (C) 1999 Stephane Eranian <eranian@hpl.hp.com>
*
* Based on skeleton from the drivers/char/rtc.c driver by P. Gortmaker
*
* This code provides an architected & portable interface to the real time
* clock by using EFI instead of direct bit fiddling. The functionalities are
* quite different from the rtc.c driver. The only way to talk to the device
* is by using ioctl(). There is a /proc interface which provides the raw
* information.
*
* Please note that we have kept the API as close as possible to the
* legacy RTC. The standard /sbin/hwclock program should work normally
* when used to get/set the time.
*
* NOTES:
* - Locking is required for safe execution of EFI calls with regards
* to interrupts and SMP.
*
* TODO (December 1999):
* - provide the API to set/get the WakeUp Alarm (different from the
* rtc.c alarm).
* - SMP testing
* - Add module support
*/
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/miscdevice.h>
#include <linux/init.h>
#include <linux/rtc.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/efi.h>
#include <linux/uaccess.h>
#define EFI_RTC_VERSION "0.4"
#define EFI_ISDST (EFI_TIME_ADJUST_DAYLIGHT|EFI_TIME_IN_DAYLIGHT)
/*
* EFI Epoch is 1/1/1998
*/
#define EFI_RTC_EPOCH 1998
static DEFINE_SPINLOCK(efi_rtc_lock);
static long efi_rtc_ioctl(struct file *file, unsigned int cmd,
unsigned long arg);
#define is_leap(year) \
((year) % 4 == 0 && ((year) % 100 != 0 || (year) % 400 == 0))
static const unsigned short int __mon_yday[2][13] =
{
/* Normal years. */
{ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 },
/* Leap years. */
{ 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
};
/*
* returns day of the year [0-365]
*/
static inline int
compute_yday(efi_time_t *eft)
{
/* efi_time_t.month is in the range [1-12], so we need -1 */
return __mon_yday[is_leap(eft->year)][eft->month-1]+ eft->day -1;
}
/*
* returns day of the week [0-6] 0=Sunday
*
* Don't try to provide a year that's before 1998, please!
*/
static int
compute_wday(efi_time_t *eft)
{
int y;
int ndays = 0;
if ( eft->year < 1998 ) {
printk(KERN_ERR "efirtc: EFI year < 1998, invalid date\n");
return -1;
}
for(y=EFI_RTC_EPOCH; y < eft->year; y++ ) {
ndays += 365 + (is_leap(y) ? 1 : 0);
}
ndays += compute_yday(eft);
/*
* 4=1/1/1998 was a Thursday
*/
return (ndays + 4) % 7;
}
static void
convert_to_efi_time(struct rtc_time *wtime, efi_time_t *eft)
{
eft->year = wtime->tm_year + 1900;
eft->month = wtime->tm_mon + 1;
eft->day = wtime->tm_mday;
eft->hour = wtime->tm_hour;
eft->minute = wtime->tm_min;
eft->second = wtime->tm_sec;
eft->nanosecond = 0;
eft->daylight = wtime->tm_isdst ? EFI_ISDST: 0;
eft->timezone = EFI_UNSPECIFIED_TIMEZONE;
}
static void
convert_from_efi_time(efi_time_t *eft, struct rtc_time *wtime)
{
memset(wtime, 0, sizeof(*wtime));
wtime->tm_sec = eft->second;
wtime->tm_min = eft->minute;
wtime->tm_hour = eft->hour;
wtime->tm_mday = eft->day;
wtime->tm_mon = eft->month - 1;
wtime->tm_year = eft->year - 1900;
/* day of the week [0-6], Sunday=0 */
wtime->tm_wday = compute_wday(eft);
/* day in the year [1-365]*/
wtime->tm_yday = compute_yday(eft);
switch (eft->daylight & EFI_ISDST) {
case EFI_ISDST:
wtime->tm_isdst = 1;
break;
case EFI_TIME_ADJUST_DAYLIGHT:
wtime->tm_isdst = 0;
break;
default:
wtime->tm_isdst = -1;
}
}
static long efi_rtc_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
efi_status_t status;
unsigned long flags;
efi_time_t eft;
efi_time_cap_t cap;
struct rtc_time wtime;
struct rtc_wkalrm __user *ewp;
unsigned char enabled, pending;
switch (cmd) {
case RTC_UIE_ON:
case RTC_UIE_OFF:
case RTC_PIE_ON:
case RTC_PIE_OFF:
case RTC_AIE_ON:
case RTC_AIE_OFF:
case RTC_ALM_SET:
case RTC_ALM_READ:
case RTC_IRQP_READ:
case RTC_IRQP_SET:
case RTC_EPOCH_READ:
case RTC_EPOCH_SET:
return -EINVAL;
case RTC_RD_TIME:
spin_lock_irqsave(&efi_rtc_lock, flags);
status = efi.get_time(&eft, &cap);
spin_unlock_irqrestore(&efi_rtc_lock,flags);
if (status != EFI_SUCCESS) {
/* should never happen */
printk(KERN_ERR "efitime: can't read time\n");
return -EINVAL;
}
convert_from_efi_time(&eft, &wtime);
return copy_to_user((void __user *)arg, &wtime,
sizeof (struct rtc_time)) ? - EFAULT : 0;
case RTC_SET_TIME:
if (!capable(CAP_SYS_TIME)) return -EACCES;
if (copy_from_user(&wtime, (struct rtc_time __user *)arg,
sizeof(struct rtc_time)) )
return -EFAULT;
convert_to_efi_time(&wtime, &eft);
spin_lock_irqsave(&efi_rtc_lock, flags);
status = efi.set_time(&eft);
spin_unlock_irqrestore(&efi_rtc_lock,flags);
return status == EFI_SUCCESS ? 0 : -EINVAL;
case RTC_WKALM_SET:
if (!capable(CAP_SYS_TIME)) return -EACCES;
ewp = (struct rtc_wkalrm __user *)arg;
if ( get_user(enabled, &ewp->enabled)
|| copy_from_user(&wtime, &ewp->time, sizeof(struct rtc_time)) )
return -EFAULT;
convert_to_efi_time(&wtime, &eft);
spin_lock_irqsave(&efi_rtc_lock, flags);
/*
* XXX Fixme:
* As of EFI 0.92 with the firmware I have on my
* machine this call does not seem to work quite
* right
*/
status = efi.set_wakeup_time((efi_bool_t)enabled, &eft);
spin_unlock_irqrestore(&efi_rtc_lock,flags);
return status == EFI_SUCCESS ? 0 : -EINVAL;
case RTC_WKALM_RD:
spin_lock_irqsave(&efi_rtc_lock, flags);
status = efi.get_wakeup_time((efi_bool_t *)&enabled, (efi_bool_t *)&pending, &eft);
spin_unlock_irqrestore(&efi_rtc_lock,flags);
if (status != EFI_SUCCESS) return -EINVAL;
ewp = (struct rtc_wkalrm __user *)arg;
if ( put_user(enabled, &ewp->enabled)
|| put_user(pending, &ewp->pending)) return -EFAULT;
convert_from_efi_time(&eft, &wtime);
return copy_to_user(&ewp->time, &wtime,
sizeof(struct rtc_time)) ? -EFAULT : 0;
}
return -ENOTTY;
}
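/*
 * A hypothetical user-space sketch of the RTC_RD_TIME path above, assuming
 * the usual /dev/efirtc misc node:
 *
 *	#include <stdio.h>
 *	#include <fcntl.h>
 *	#include <sys/ioctl.h>
 *	#include <linux/rtc.h>
 *
 *	struct rtc_time tm;
 *	int fd = open("/dev/efirtc", O_RDONLY);
 *
 *	if (fd >= 0 && ioctl(fd, RTC_RD_TIME, &tm) == 0)
 *		printf("%02d:%02d:%02d\n", tm.tm_hour, tm.tm_min, tm.tm_sec);
 */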
/*
* The various file operations we support.
*/
static const struct file_operations efi_rtc_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = efi_rtc_ioctl,
.llseek = no_llseek,
};
static struct miscdevice efi_rtc_dev= {
EFI_RTC_MINOR,
"efirtc",
&efi_rtc_fops
};
/*
* We export RAW EFI information to /proc/driver/efirtc
*/
static int efi_rtc_proc_show(struct seq_file *m, void *v)
{
efi_time_t eft, alm;
efi_time_cap_t cap;
efi_bool_t enabled, pending;
unsigned long flags;
memset(&eft, 0, sizeof(eft));
memset(&alm, 0, sizeof(alm));
memset(&cap, 0, sizeof(cap));
spin_lock_irqsave(&efi_rtc_lock, flags);
efi.get_time(&eft, &cap);
efi.get_wakeup_time(&enabled, &pending, &alm);
spin_unlock_irqrestore(&efi_rtc_lock,flags);
seq_printf(m,
"Time : %u:%u:%u.%09u\n"
"Date : %u-%u-%u\n"
"Daylight : %u\n",
eft.hour, eft.minute, eft.second, eft.nanosecond,
eft.year, eft.month, eft.day,
eft.daylight);
if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE)
seq_puts(m, "Timezone : unspecified\n");
else
/* XXX fixme: convert to string? */
seq_printf(m, "Timezone : %u\n", eft.timezone);
seq_printf(m,
"Alarm Time : %u:%u:%u.%09u\n"
"Alarm Date : %u-%u-%u\n"
"Alarm Daylight : %u\n"
"Enabled : %s\n"
"Pending : %s\n",
alm.hour, alm.minute, alm.second, alm.nanosecond,
alm.year, alm.month, alm.day,
alm.daylight,
enabled == 1 ? "yes" : "no",
pending == 1 ? "yes" : "no");
if (alm.timezone == EFI_UNSPECIFIED_TIMEZONE)
seq_puts(m, "Timezone : unspecified\n");
else
/* XXX fixme: convert to string? */
seq_printf(m, "Timezone : %u\n", alm.timezone);
/*
* now prints the capabilities
*/
seq_printf(m,
"Resolution : %u\n"
"Accuracy : %u\n"
"SetstoZero : %u\n",
cap.resolution, cap.accuracy, cap.sets_to_zero);
return 0;
}
static int __init
efi_rtc_init(void)
{
int ret;
struct proc_dir_entry *dir;
printk(KERN_INFO "EFI Time Services Driver v%s\n", EFI_RTC_VERSION);
ret = misc_register(&efi_rtc_dev);
if (ret) {
printk(KERN_ERR "efirtc: can't misc_register on minor=%d\n",
EFI_RTC_MINOR);
return ret;
}
dir = proc_create_single("driver/efirtc", 0, NULL, efi_rtc_proc_show);
if (dir == NULL) {
printk(KERN_ERR "efirtc: can't create /proc/driver/efirtc.\n");
misc_deregister(&efi_rtc_dev);
return -1;
}
return 0;
}
device_initcall(efi_rtc_init);
/*
MODULE_LICENSE("GPL");
*/


@@ -75,7 +75,7 @@ struct vma_data {
enum mspec_page_type type; /* Type of pages allocated. */
unsigned long vm_start; /* Original (unsplit) base. */
unsigned long vm_end; /* Original (unsplit) end. */
unsigned long maddr[0]; /* Array of MSPEC addresses. */
unsigned long maddr[]; /* Array of MSPEC addresses. */
};
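/*
 * The change above swaps a GCC zero-length array for a C99 flexible array
 * member; allocation still has to reserve room for the trailing array, e.g.
 * (illustrative, struct_size() comes from <linux/overflow.h>):
 *
 *	vdata = kzalloc(struct_size(vdata, maddr, pages), GFP_KERNEL);
 */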
/*


@@ -14,7 +14,6 @@
#define NUM_PRESSES_REBOOT 2 /* How many presses to activate shutdown */
#define BUTTON_DELAY 30 /* How many jiffies for sequence to end */
#define VERSION "0.3" /* Driver version number */
#define BUTTON_MINOR 158 /* Major 10, Minor 158, /dev/nwbutton */
/* Structure definitions: */


@@ -576,7 +576,7 @@ static const struct file_operations flash_fops =
static struct miscdevice flash_miscdev =
{
FLASH_MINOR,
NWFLASH_MINOR,
"nwflash",
&flash_fops
};


@@ -731,8 +731,9 @@ static void monitor_card(struct timer_list *t)
}
switch (dev->mstate) {
case M_CARDOFF: {
unsigned char flags0;
case M_CARDOFF:
DEBUGP(4, dev, "M_CARDOFF\n");
flags0 = inb(REG_FLAGS0(iobase));
if (flags0 & 0x02) {
@@ -755,6 +756,7 @@ static void monitor_card(struct timer_list *t)
dev->mdelay = T_50MSEC;
}
break;
}
case M_FETCH_ATR:
DEBUGP(4, dev, "M_FETCH_ATR\n");
xoutb(0x80, REG_FLAGS0(iobase));


@@ -355,14 +355,19 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
struct pp_struct *pp = file->private_data;
struct parport *port;
void __user *argp = (void __user *)arg;
struct ieee1284_info *info;
unsigned char reg;
unsigned char mask;
int mode;
s32 time32[2];
s64 time64[2];
struct timespec64 ts;
int ret;
/* First handle the cases that don't take arguments. */
switch (cmd) {
case PPCLAIM:
{
struct ieee1284_info *info;
int ret;
if (pp->flags & PP_CLAIMED) {
dev_dbg(&pp->pdev->dev, "you've already got it!\n");
return -EINVAL;
@@ -513,15 +518,6 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
port = pp->pdev->port;
switch (cmd) {
struct ieee1284_info *info;
unsigned char reg;
unsigned char mask;
int mode;
s32 time32[2];
s64 time64[2];
struct timespec64 ts;
int ret;
case PPRSTATUS:
reg = parport_read_status(port);
if (copy_to_user(argp, &reg, sizeof(reg)))

File diff suppressed because it is too large

@@ -61,8 +61,6 @@
#include <linux/mutex.h>
#include <linux/toshiba.h>
#define TOSH_MINOR_DEV 181
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jonathan Buzzard <jonathan@buzzard.org.uk>");
MODULE_DESCRIPTION("Toshiba laptop SMM driver");


@@ -112,7 +112,7 @@ struct port_buffer {
unsigned int sgpages;
/* sg is used if sgpages > 0. sg must be the last field in this struct */
struct scatterlist sg[0];
struct scatterlist sg[];
};
/*


@@ -443,9 +443,40 @@ static int axp288_extcon_probe(struct platform_device *pdev)
/* Start charger cable type detection */
axp288_extcon_enable(info);
device_init_wakeup(dev, true);
platform_set_drvdata(pdev, info);
return 0;
}
static int __maybe_unused axp288_extcon_suspend(struct device *dev)
{
struct axp288_extcon_info *info = dev_get_drvdata(dev);
if (device_may_wakeup(dev))
enable_irq_wake(info->irq[VBUS_RISING_IRQ]);
return 0;
}
static int __maybe_unused axp288_extcon_resume(struct device *dev)
{
struct axp288_extcon_info *info = dev_get_drvdata(dev);
/*
* Wake up when a charger is connected to do charger-type
* detection and generate an extcon event, which makes the
* axp288 charger driver set the input current limit.
*/
if (device_may_wakeup(dev))
disable_irq_wake(info->irq[VBUS_RISING_IRQ]);
return 0;
}
static SIMPLE_DEV_PM_OPS(axp288_extcon_pm_ops, axp288_extcon_suspend,
axp288_extcon_resume);
static const struct platform_device_id axp288_extcon_table[] = {
{ .name = "axp288_extcon" },
{},
@@ -457,6 +488,7 @@ static struct platform_driver axp288_extcon_driver = {
.id_table = axp288_extcon_table,
.driver = {
.name = "axp288_extcon",
.pm = &axp288_extcon_pm_ops,
},
};


@@ -205,14 +205,18 @@ static int palmas_usb_probe(struct platform_device *pdev)
palmas_usb->id_gpiod = devm_gpiod_get_optional(&pdev->dev, "id",
GPIOD_IN);
if (IS_ERR(palmas_usb->id_gpiod)) {
if (PTR_ERR(palmas_usb->id_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->id_gpiod)) {
dev_err(&pdev->dev, "failed to get id gpio\n");
return PTR_ERR(palmas_usb->id_gpiod);
}
palmas_usb->vbus_gpiod = devm_gpiod_get_optional(&pdev->dev, "vbus",
GPIOD_IN);
if (IS_ERR(palmas_usb->vbus_gpiod)) {
if (PTR_ERR(palmas_usb->vbus_gpiod) == -EPROBE_DEFER) {
return -EPROBE_DEFER;
} else if (IS_ERR(palmas_usb->vbus_gpiod)) {
dev_err(&pdev->dev, "failed to get vbus gpio\n");
return PTR_ERR(palmas_usb->vbus_gpiod);
}


@@ -1406,6 +1406,7 @@ const char *extcon_get_edev_name(struct extcon_dev *edev)
{
return !edev ? NULL : edev->name;
}
EXPORT_SYMBOL_GPL(extcon_get_edev_name);
static int __init extcon_class_init(void)
{


@@ -206,7 +206,7 @@ config FW_CFG_SYSFS_CMDLINE
config INTEL_STRATIX10_SERVICE
tristate "Intel Stratix10 Service Layer"
depends on ARCH_STRATIX10 && HAVE_ARM_SMCCC
depends on (ARCH_STRATIX10 || ARCH_AGILEX) && HAVE_ARM_SMCCC
default n
help
Intel Stratix10 service layer runs at privileged exception level,


@@ -12,7 +12,7 @@ config IMX_DSP
config IMX_SCU
bool "IMX SCU Protocol driver"
depends on IMX_MBOX
depends on IMX_MBOX || COMPILE_TEST
help
The System Controller Firmware (SCFW) is a low-level system function
which runs on a dedicated Cortex-M core to provide power, clock, and
@@ -24,6 +24,6 @@ config IMX_SCU
config IMX_SCU_PD
bool "IMX SCU Power Domain driver"
depends on IMX_SCU
depends on IMX_SCU || COMPILE_TEST
help
The System Controller Firmware (SCFW) based power domain driver.


@@ -966,6 +966,7 @@ EXPORT_SYMBOL_GPL(stratix10_svc_free_memory);
static const struct of_device_id stratix10_svc_drv_match[] = {
{.compatible = "intel,stratix10-svc"},
{.compatible = "intel,agilex-svc"},
{},
};


@@ -110,4 +110,25 @@ config CORESIGHT_CPU_DEBUG
properly, please refer to Documentation/trace/coresight-cpu-debug.rst
for a detailed description and a usage example.
config CORESIGHT_CTI
bool "CoreSight Cross Trigger Interface (CTI) driver"
depends on ARM || ARM64
help
This driver provides support for CoreSight CTI and CTM components.
These provide hardware triggering events between CoreSight trace
source and sink components. These can be used to halt trace or
inject events into the trace stream. CTI also provides a software
control to trigger the same halt events. This can provide fast trace
halt compared to disabling sources and sinks normally in driver
software.
config CORESIGHT_CTI_INTEGRATION_REGS
bool "Access CTI CoreSight Integration Registers"
depends on CORESIGHT_CTI
help
This option adds support for the CoreSight integration registers on
this device. The integration registers allow the exploration of the
CTI trigger connections between this and other devices. These
registers are not used in normal operation and can leave devices in
an inconsistent state.
endif


@@ -17,3 +17,6 @@ obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o
obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
obj-$(CONFIG_CORESIGHT_CTI) += coresight-cti.o \
coresight-cti-platform.o \
coresight-cti-sysfs.o


@@ -0,0 +1,485 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019, The Linaro Limited. All rights reserved.
*/
#include <dt-bindings/arm/coresight-cti-dt.h>
#include <linux/of.h>
#include "coresight-cti.h"
/* Number of CTI signals in the v8 architecturally defined connection */
#define NR_V8PE_IN_SIGS 2
#define NR_V8PE_OUT_SIGS 3
#define NR_V8ETM_INOUT_SIGS 4
/* CTI device tree trigger connection node keyword */
#define CTI_DT_CONNS "trig-conns"
/* CTI device tree connection property keywords */
#define CTI_DT_V8ARCH_COMPAT "arm,coresight-cti-v8-arch"
#define CTI_DT_CSDEV_ASSOC "arm,cs-dev-assoc"
#define CTI_DT_TRIGIN_SIGS "arm,trig-in-sigs"
#define CTI_DT_TRIGOUT_SIGS "arm,trig-out-sigs"
#define CTI_DT_TRIGIN_TYPES "arm,trig-in-types"
#define CTI_DT_TRIGOUT_TYPES "arm,trig-out-types"
#define CTI_DT_FILTER_OUT_SIGS "arm,trig-filters"
#define CTI_DT_CONN_NAME "arm,trig-conn-name"
#define CTI_DT_CTM_ID "arm,cti-ctm-id"
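/*
 * Illustrative device tree fragment (not taken from this diff) showing how
 * the keywords above combine in a connection node; the signal numbers and
 * the &etm0 phandle are hypothetical:
 *
 *	trig-conns@0 {
 *		arm,trig-in-sigs = <0 1>;
 *		arm,trig-out-sigs = <0>;
 *		arm,cs-dev-assoc = <&etm0>;
 *	};
 */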
#ifdef CONFIG_OF
/*
* CTI can be bound to a CPU, or a system device.
* CPU can be declared at the device top level or in a connections node
* so need to check relative to node not device.
*/
static int of_cti_get_cpu_at_node(const struct device_node *node)
{
int cpu;
struct device_node *dn;
if (node == NULL)
return -1;
dn = of_parse_phandle(node, "cpu", 0);
/* CTI affinity defaults to no cpu */
if (!dn)
return -1;
cpu = of_cpu_node_to_id(dn);
of_node_put(dn);
/* No Affinity if no cpu nodes are found */
return (cpu < 0) ? -1 : cpu;
}
#else
static int of_cti_get_cpu_at_node(const struct device_node *node)
{
return -1;
}
#endif
/*
* CTI can be bound to a CPU, or a system device.
* CPU can be declared at the device top level or in a connections node
* so need to check relative to node not device.
*/
static int cti_plat_get_cpu_at_node(struct fwnode_handle *fwnode)
{
if (is_of_node(fwnode))
return of_cti_get_cpu_at_node(to_of_node(fwnode));
return -1;
}
const char *cti_plat_get_node_name(struct fwnode_handle *fwnode)
{
if (is_of_node(fwnode))
return of_node_full_name(to_of_node(fwnode));
return "unknown";
}
/*
* Extract a name from the fwnode.
* If the device associated with the node is a coresight_device, then return
* that name and the coresight_device pointer, otherwise return the node name.
*/
static const char *
cti_plat_get_csdev_or_node_name(struct fwnode_handle *fwnode,
struct coresight_device **csdev)
{
const char *name = NULL;
*csdev = coresight_find_csdev_by_fwnode(fwnode);
if (*csdev)
name = dev_name(&(*csdev)->dev);
else
name = cti_plat_get_node_name(fwnode);
return name;
}
static bool cti_plat_node_name_eq(struct fwnode_handle *fwnode,
const char *name)
{
if (is_of_node(fwnode))
return of_node_name_eq(to_of_node(fwnode), name);
return false;
}
static int cti_plat_create_v8_etm_connection(struct device *dev,
struct cti_drvdata *drvdata)
{
int ret = -ENOMEM, i;
struct fwnode_handle *root_fwnode, *cs_fwnode;
const char *assoc_name = NULL;
struct coresight_device *csdev;
struct cti_trig_con *tc = NULL;
root_fwnode = dev_fwnode(dev);
if (IS_ERR_OR_NULL(root_fwnode))
return -EINVAL;
/* Can optionally have an etm node - return if not */
cs_fwnode = fwnode_find_reference(root_fwnode, CTI_DT_CSDEV_ASSOC, 0);
if (IS_ERR_OR_NULL(cs_fwnode))
return 0;
/* allocate memory */
tc = cti_allocate_trig_con(dev, NR_V8ETM_INOUT_SIGS,
NR_V8ETM_INOUT_SIGS);
if (!tc)
goto create_v8_etm_out;
/* build connection data */
tc->con_in->used_mask = 0xF0; /* sigs <4,5,6,7> */
tc->con_out->used_mask = 0xF0; /* sigs <4,5,6,7> */
/*
* The EXTOUT type signals from the ETM are connected to a set of input
* triggers on the CTI, the EXTIN being connected to output triggers.
*/
for (i = 0; i < NR_V8ETM_INOUT_SIGS; i++) {
tc->con_in->sig_types[i] = ETM_EXTOUT;
tc->con_out->sig_types[i] = ETM_EXTIN;
}
/*
* We look to see if the ETM coresight device associated with this
* handle has been registered with the system - i.e. probed before
* this CTI. If so csdev will be non NULL and we can use the device
* name and pass the csdev to the connection entry function where
* the association will be recorded.
* If not, then simply record the name in the connection data, the
* probing of the ETM will call into the CTI driver API to update the
* association then.
*/
assoc_name = cti_plat_get_csdev_or_node_name(cs_fwnode, &csdev);
ret = cti_add_connection_entry(dev, drvdata, tc, csdev, assoc_name);
create_v8_etm_out:
fwnode_handle_put(cs_fwnode);
return ret;
}
/*
* Create an architecturally defined v8 connection
* must have a cpu, can have an ETM.
*/
static int cti_plat_create_v8_connections(struct device *dev,
struct cti_drvdata *drvdata)
{
struct cti_device *cti_dev = &drvdata->ctidev;
struct cti_trig_con *tc = NULL;
int cpuid = 0;
char cpu_name_str[16];
int ret = -ENOMEM;
/* Must have a cpu node */
cpuid = cti_plat_get_cpu_at_node(dev_fwnode(dev));
if (cpuid < 0) {
dev_warn(dev,
"ARM v8 architectural CTI connection: missing cpu\n");
return -EINVAL;
}
cti_dev->cpu = cpuid;
/* Allocate the v8 cpu connection memory */
tc = cti_allocate_trig_con(dev, NR_V8PE_IN_SIGS, NR_V8PE_OUT_SIGS);
if (!tc)
goto of_create_v8_out;
/* Set the v8 PE CTI connection data */
tc->con_in->used_mask = 0x3; /* sigs <0 1> */
tc->con_in->sig_types[0] = PE_DBGTRIGGER;
tc->con_in->sig_types[1] = PE_PMUIRQ;
tc->con_out->used_mask = 0x7; /* sigs <0 1 2 > */
tc->con_out->sig_types[0] = PE_EDBGREQ;
tc->con_out->sig_types[1] = PE_DBGRESTART;
tc->con_out->sig_types[2] = PE_CTIIRQ;
scnprintf(cpu_name_str, sizeof(cpu_name_str), "cpu%d", cpuid);
ret = cti_add_connection_entry(dev, drvdata, tc, NULL, cpu_name_str);
if (ret)
goto of_create_v8_out;
/* Create the v8 ETM associated connection */
ret = cti_plat_create_v8_etm_connection(dev, drvdata);
if (ret)
goto of_create_v8_out;
/* filter pe_edbgreq - PE trigout sig <0> */
drvdata->config.trig_out_filter |= 0x1;
of_create_v8_out:
return ret;
}
static int cti_plat_check_v8_arch_compatible(struct device *dev)
{
struct fwnode_handle *fwnode = dev_fwnode(dev);
if (is_of_node(fwnode))
return of_device_is_compatible(to_of_node(fwnode),
CTI_DT_V8ARCH_COMPAT);
return 0;
}
static int cti_plat_count_sig_elements(const struct fwnode_handle *fwnode,
const char *name)
{
int nr_elem = fwnode_property_count_u32(fwnode, name);
return (nr_elem < 0 ? 0 : nr_elem);
}
static int cti_plat_read_trig_group(struct cti_trig_grp *tgrp,
const struct fwnode_handle *fwnode,
const char *grp_name)
{
int idx, err = 0;
u32 *values;
if (!tgrp->nr_sigs)
return 0;
values = kcalloc(tgrp->nr_sigs, sizeof(u32), GFP_KERNEL);
if (!values)
return -ENOMEM;
err = fwnode_property_read_u32_array(fwnode, grp_name,
values, tgrp->nr_sigs);
if (!err) {
/* set the signal usage mask */
for (idx = 0; idx < tgrp->nr_sigs; idx++)
tgrp->used_mask |= BIT(values[idx]);
}
kfree(values);
return err;
}
static int cti_plat_read_trig_types(struct cti_trig_grp *tgrp,
const struct fwnode_handle *fwnode,
const char *type_name)
{
int items, err = 0, nr_sigs;
u32 *values = NULL, i;
/* allocate an array according to number of signals in connection */
nr_sigs = tgrp->nr_sigs;
if (!nr_sigs)
return 0;
/* see if any types have been included in the device description */
items = cti_plat_count_sig_elements(fwnode, type_name);
if (items > nr_sigs)
return -EINVAL;
/* need an array to store the values iff there are any */
if (items) {
values = kcalloc(items, sizeof(u32), GFP_KERNEL);
if (!values)
return -ENOMEM;
err = fwnode_property_read_u32_array(fwnode, type_name,
values, items);
if (err)
goto read_trig_types_out;
}
/*
* Match type id to signal index, 1st type to 1st index etc.
* If there are fewer types than signals, default the remainder to GEN_IO.
*/
for (i = 0; i < nr_sigs; i++) {
if (i < items) {
tgrp->sig_types[i] =
values[i] < CTI_TRIG_MAX ? values[i] : GEN_IO;
} else {
tgrp->sig_types[i] = GEN_IO;
}
}
read_trig_types_out:
kfree(values);
return err;
}
static int cti_plat_process_filter_sigs(struct cti_drvdata *drvdata,
const struct fwnode_handle *fwnode)
{
struct cti_trig_grp *tg = NULL;
int err = 0, nr_filter_sigs;
nr_filter_sigs = cti_plat_count_sig_elements(fwnode,
CTI_DT_FILTER_OUT_SIGS);
if (nr_filter_sigs == 0)
return 0;
if (nr_filter_sigs > drvdata->config.nr_trig_max)
return -EINVAL;
tg = kzalloc(sizeof(*tg), GFP_KERNEL);
if (!tg)
return -ENOMEM;
err = cti_plat_read_trig_group(tg, fwnode, CTI_DT_FILTER_OUT_SIGS);
if (!err)
drvdata->config.trig_out_filter |= tg->used_mask;
kfree(tg);
return err;
}
static int cti_plat_create_connection(struct device *dev,
struct cti_drvdata *drvdata,
struct fwnode_handle *fwnode)
{
struct cti_trig_con *tc = NULL;
int cpuid = -1, err = 0;
struct fwnode_handle *cs_fwnode = NULL;
struct coresight_device *csdev = NULL;
const char *assoc_name = "unknown";
char cpu_name_str[16];
int nr_sigs_in, nr_sigs_out;
/* look to see how many in and out signals we have */
nr_sigs_in = cti_plat_count_sig_elements(fwnode, CTI_DT_TRIGIN_SIGS);
nr_sigs_out = cti_plat_count_sig_elements(fwnode, CTI_DT_TRIGOUT_SIGS);
if ((nr_sigs_in > drvdata->config.nr_trig_max) ||
(nr_sigs_out > drvdata->config.nr_trig_max))
return -EINVAL;
tc = cti_allocate_trig_con(dev, nr_sigs_in, nr_sigs_out);
if (!tc)
return -ENOMEM;
/* look for the signals properties. */
err = cti_plat_read_trig_group(tc->con_in, fwnode,
CTI_DT_TRIGIN_SIGS);
if (err)
goto create_con_err;
err = cti_plat_read_trig_types(tc->con_in, fwnode,
CTI_DT_TRIGIN_TYPES);
if (err)
goto create_con_err;
err = cti_plat_read_trig_group(tc->con_out, fwnode,
CTI_DT_TRIGOUT_SIGS);
if (err)
goto create_con_err;
err = cti_plat_read_trig_types(tc->con_out, fwnode,
CTI_DT_TRIGOUT_TYPES);
if (err)
goto create_con_err;
err = cti_plat_process_filter_sigs(drvdata, fwnode);
if (err)
goto create_con_err;
/* read the connection name if set - may be overridden later */
fwnode_property_read_string(fwnode, CTI_DT_CONN_NAME, &assoc_name);
/* associated cpu ? */
cpuid = cti_plat_get_cpu_at_node(fwnode);
if (cpuid >= 0) {
drvdata->ctidev.cpu = cpuid;
scnprintf(cpu_name_str, sizeof(cpu_name_str), "cpu%d", cpuid);
assoc_name = cpu_name_str;
} else {
/* associated device ? */
cs_fwnode = fwnode_find_reference(fwnode,
CTI_DT_CSDEV_ASSOC, 0);
if (!IS_ERR_OR_NULL(cs_fwnode)) {
assoc_name = cti_plat_get_csdev_or_node_name(cs_fwnode,
&csdev);
fwnode_handle_put(cs_fwnode);
}
}
/* set up a connection */
err = cti_add_connection_entry(dev, drvdata, tc, csdev, assoc_name);
create_con_err:
return err;
}
static int cti_plat_create_impdef_connections(struct device *dev,
struct cti_drvdata *drvdata)
{
int rc = 0;
struct fwnode_handle *fwnode = dev_fwnode(dev);
struct fwnode_handle *child = NULL;
if (IS_ERR_OR_NULL(fwnode))
return -EINVAL;
fwnode_for_each_child_node(fwnode, child) {
if (cti_plat_node_name_eq(child, CTI_DT_CONNS))
rc = cti_plat_create_connection(dev, drvdata,
child);
if (rc != 0)
break;
}
fwnode_handle_put(child);
return rc;
}
/* get the hardware configuration & connection data. */
int cti_plat_get_hw_data(struct device *dev,
struct cti_drvdata *drvdata)
{
int rc = 0;
struct cti_device *cti_dev = &drvdata->ctidev;
/* get any CTM ID - defaults to 0 */
device_property_read_u32(dev, CTI_DT_CTM_ID, &cti_dev->ctm_id);
/* check for a v8 architectural CTI device */
if (cti_plat_check_v8_arch_compatible(dev))
rc = cti_plat_create_v8_connections(dev, drvdata);
else
rc = cti_plat_create_impdef_connections(dev, drvdata);
if (rc)
return rc;
/* if no connections, just add a single default based on max IN-OUT */
if (cti_dev->nr_trig_con == 0)
rc = cti_add_default_connection(dev, drvdata);
return rc;
}
struct coresight_platform_data *
coresight_cti_get_platform_data(struct device *dev)
{
int ret = -ENOENT;
struct coresight_platform_data *pdata = NULL;
struct fwnode_handle *fwnode = dev_fwnode(dev);
struct cti_drvdata *drvdata = dev_get_drvdata(dev);
if (IS_ERR_OR_NULL(fwnode))
goto error;
/*
* Alloc platform data but leave it zero init. CTI does not use the
* same connection infrastructure as trace path components, but an
* empty struct enables us to use the standard coresight component
* registration code.
*/
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
ret = -ENOMEM;
goto error;
}
/* get some CTI specifics */
ret = cti_plat_get_hw_data(dev, drvdata);
if (!ret)
return pdata;
error:
return ERR_PTR(ret);
}

File diff suppressed because it is too large

@@ -0,0 +1,745 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 Linaro Limited, All rights reserved.
* Author: Mike Leach <mike.leach@linaro.org>
*/
#include <linux/property.h>
#include "coresight-cti.h"
/**
* CTI devices can be associated with a PE, or be connected to CoreSight
* hardware. We keep a list of all CTIs, whether CPU bound or not.
*
* We assume that the non-CPU CTIs are always powered as we do with sinks etc.
*
* We leave the client to figure out if all the CTIs are interconnected with
* the same CTM, in general this is the case but does not always have to be.
*/
/* net of CTI devices connected via CTM */
LIST_HEAD(ect_net);
/* protect the list */
static DEFINE_MUTEX(ect_mutex);
#define csdev_to_cti_drvdata(csdev) \
dev_get_drvdata(csdev->dev.parent)
/*
* CTI naming. CTI bound to cores will have the name cti_cpu<N> where
* N is the CPU ID. System CTIs will have the name cti_sys<I> where I
* is an index allocated by order of discovery.
*
* CTI device name list - for CTI not bound to cores.
*/
DEFINE_CORESIGHT_DEVLIST(cti_sys_devs, "cti_sys");
/* write set of regs to hardware - call with spinlock claimed */
void cti_write_all_hw_regs(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
int i;
CS_UNLOCK(drvdata->base);
/* disable CTI before writing registers */
writel_relaxed(0, drvdata->base + CTICONTROL);
/* write the CTI trigger registers */
for (i = 0; i < config->nr_trig_max; i++) {
writel_relaxed(config->ctiinen[i], drvdata->base + CTIINEN(i));
writel_relaxed(config->ctiouten[i],
drvdata->base + CTIOUTEN(i));
}
/* other regs */
writel_relaxed(config->ctigate, drvdata->base + CTIGATE);
writel_relaxed(config->asicctl, drvdata->base + ASICCTL);
writel_relaxed(config->ctiappset, drvdata->base + CTIAPPSET);
/* re-enable CTI */
writel_relaxed(1, drvdata->base + CTICONTROL);
CS_LOCK(drvdata->base);
}
static void cti_enable_hw_smp_call(void *info)
{
struct cti_drvdata *drvdata = info;
cti_write_all_hw_regs(drvdata);
}
/* write regs to hardware and enable */
static int cti_enable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
struct device *dev = &drvdata->csdev->dev;
int rc = 0;
pm_runtime_get_sync(dev->parent);
spin_lock(&drvdata->spinlock);
/* no need to do anything if enabled or unpowered */
if (config->hw_enabled || !config->hw_powered)
goto cti_state_unchanged;
/* claim the device */
rc = coresight_claim_device(drvdata->base);
if (rc)
goto cti_err_not_enabled;
if (drvdata->ctidev.cpu >= 0) {
rc = smp_call_function_single(drvdata->ctidev.cpu,
cti_enable_hw_smp_call,
drvdata, 1);
if (rc)
goto cti_err_not_enabled;
} else {
cti_write_all_hw_regs(drvdata);
}
config->hw_enabled = true;
atomic_inc(&drvdata->config.enable_req_count);
spin_unlock(&drvdata->spinlock);
return rc;
cti_state_unchanged:
atomic_inc(&drvdata->config.enable_req_count);
/* cannot enable due to error */
cti_err_not_enabled:
spin_unlock(&drvdata->spinlock);
pm_runtime_put(dev->parent);
return rc;
}
/* disable hardware */
static int cti_disable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
struct device *dev = &drvdata->csdev->dev;
spin_lock(&drvdata->spinlock);
/* check refcount - disable on 0 */
if (atomic_dec_return(&drvdata->config.enable_req_count) > 0)
goto cti_not_disabled;
/* no need to do anything if disabled or cpu unpowered */
if (!config->hw_enabled || !config->hw_powered)
goto cti_not_disabled;
CS_UNLOCK(drvdata->base);
/* disable CTI */
writel_relaxed(0, drvdata->base + CTICONTROL);
config->hw_enabled = false;
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
spin_unlock(&drvdata->spinlock);
pm_runtime_put(dev);
return 0;
/* not disabled this call */
cti_not_disabled:
spin_unlock(&drvdata->spinlock);
return 0;
}
void cti_write_single_reg(struct cti_drvdata *drvdata, int offset, u32 value)
{
CS_UNLOCK(drvdata->base);
writel_relaxed(value, drvdata->base + offset);
CS_LOCK(drvdata->base);
}
void cti_write_intack(struct device *dev, u32 ackval)
{
struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct cti_config *config = &drvdata->config;
spin_lock(&drvdata->spinlock);
/* write if enabled */
if (cti_active(config))
cti_write_single_reg(drvdata, CTIINTACK, ackval);
spin_unlock(&drvdata->spinlock);
}
/*
* Look at the HW DEVID register for some of the HW settings.
* DEVID[15:8] - max number of in / out triggers.
*/
#define CTI_DEVID_MAXTRIGS(devid_val) ((int) BMVAL(devid_val, 8, 15))
/* DEVID[19:16] - number of CTM channels */
#define CTI_DEVID_CTMCHANNELS(devid_val) ((int) BMVAL(devid_val, 16, 19))
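/*
 * Illustrative example (not part of this patch): with a hypothetical
 * DEVID value of 0x000a0800, BMVAL() extracts bits [15:8] = 0x08 and
 * bits [19:16] = 0xa, so CTI_DEVID_MAXTRIGS() reports 8 trigger signals
 * and CTI_DEVID_CTMCHANNELS() reports 10 CTM channels.
 */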
static void cti_set_default_config(struct device *dev,
struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
u32 devid;
devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
config->nr_trig_max = CTI_DEVID_MAXTRIGS(devid);
/*
* no current hardware should exceed this, but protect the driver
* in case of fault / out of spec hw
*/
if (config->nr_trig_max > CTIINOUTEN_MAX) {
dev_warn_once(dev,
"Limiting HW MaxTrig value(%d) to driver max(%d)\n",
config->nr_trig_max, CTIINOUTEN_MAX);
config->nr_trig_max = CTIINOUTEN_MAX;
}
config->nr_ctm_channels = CTI_DEVID_CTMCHANNELS(devid);
/* Most regs default to 0 as zalloc'ed except... */
config->trig_filter_enable = true;
config->ctigate = GENMASK(config->nr_ctm_channels - 1, 0);
atomic_set(&config->enable_req_count, 0);
}
/*
* Add a connection entry to the list of connections for this
* CTI device.
*/
int cti_add_connection_entry(struct device *dev, struct cti_drvdata *drvdata,
struct cti_trig_con *tc,
struct coresight_device *csdev,
const char *assoc_dev_name)
{
struct cti_device *cti_dev = &drvdata->ctidev;
tc->con_dev = csdev;
/*
* Prefer actual associated CS device dev name to supplied value -
* which is likely to be node name / other conn name.
*/
if (csdev)
tc->con_dev_name = dev_name(&csdev->dev);
else if (assoc_dev_name != NULL) {
tc->con_dev_name = devm_kstrdup(dev,
assoc_dev_name, GFP_KERNEL);
if (!tc->con_dev_name)
return -ENOMEM;
}
list_add_tail(&tc->node, &cti_dev->trig_cons);
cti_dev->nr_trig_con++;
/* add connection usage bit info to overall info */
drvdata->config.trig_in_use |= tc->con_in->used_mask;
drvdata->config.trig_out_use |= tc->con_out->used_mask;
return 0;
}
/* create a trigger connection with appropriately sized signal groups */
struct cti_trig_con *cti_allocate_trig_con(struct device *dev, int in_sigs,
int out_sigs)
{
struct cti_trig_con *tc = NULL;
struct cti_trig_grp *in = NULL, *out = NULL;
tc = devm_kzalloc(dev, sizeof(struct cti_trig_con), GFP_KERNEL);
if (!tc)
return tc;
in = devm_kzalloc(dev,
offsetof(struct cti_trig_grp, sig_types[in_sigs]),
GFP_KERNEL);
if (!in)
return NULL;
out = devm_kzalloc(dev,
offsetof(struct cti_trig_grp, sig_types[out_sigs]),
GFP_KERNEL);
if (!out)
return NULL;
tc->con_in = in;
tc->con_out = out;
tc->con_in->nr_sigs = in_sigs;
tc->con_out->nr_sigs = out_sigs;
return tc;
}
/*
* Add a default connection if nothing else is specified.
* single connection based on max in/out info, no assoc device
*/
int cti_add_default_connection(struct device *dev, struct cti_drvdata *drvdata)
{
int ret = 0;
int n_trigs = drvdata->config.nr_trig_max;
u32 n_trig_mask = GENMASK(n_trigs - 1, 0);
struct cti_trig_con *tc = NULL;
/*
* Assume max trigs for in and out,
* all used, default sig types allocated
*/
tc = cti_allocate_trig_con(dev, n_trigs, n_trigs);
if (!tc)
return -ENOMEM;
tc->con_in->used_mask = n_trig_mask;
tc->con_out->used_mask = n_trig_mask;
ret = cti_add_connection_entry(dev, drvdata, tc, NULL, "default");
return ret;
}
/** cti channel api **/
/* attach/detach channel from trigger - write through if enabled. */
int cti_channel_trig_op(struct device *dev, enum cti_chan_op op,
enum cti_trig_dir direction, u32 channel_idx,
u32 trigger_idx)
{
struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct cti_config *config = &drvdata->config;
u32 trig_bitmask;
u32 chan_bitmask;
u32 reg_value;
int reg_offset;
/* ensure indexes in range */
if ((channel_idx >= config->nr_ctm_channels) ||
(trigger_idx >= config->nr_trig_max))
return -EINVAL;
trig_bitmask = BIT(trigger_idx);
/* ensure registered triggers and not out filtered */
if (direction == CTI_TRIG_IN) {
if (!(trig_bitmask & config->trig_in_use))
return -EINVAL;
} else {
if (!(trig_bitmask & config->trig_out_use))
return -EINVAL;
if ((config->trig_filter_enable) &&
(config->trig_out_filter & trig_bitmask))
return -EINVAL;
}
/* update the local register values */
chan_bitmask = BIT(channel_idx);
reg_offset = (direction == CTI_TRIG_IN ? CTIINEN(trigger_idx) :
CTIOUTEN(trigger_idx));
spin_lock(&drvdata->spinlock);
/* read - modify write - the trigger / channel enable value */
reg_value = direction == CTI_TRIG_IN ? config->ctiinen[trigger_idx] :
config->ctiouten[trigger_idx];
if (op == CTI_CHAN_ATTACH)
reg_value |= chan_bitmask;
else
reg_value &= ~chan_bitmask;
/* write local copy */
if (direction == CTI_TRIG_IN)
config->ctiinen[trigger_idx] = reg_value;
else
config->ctiouten[trigger_idx] = reg_value;
/* write through if enabled */
if (cti_active(config))
cti_write_single_reg(drvdata, reg_offset, reg_value);
spin_unlock(&drvdata->spinlock);
return 0;
}
int cti_channel_gate_op(struct device *dev, enum cti_chan_gate_op op,
u32 channel_idx)
{
struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct cti_config *config = &drvdata->config;
u32 chan_bitmask;
u32 reg_value;
int err = 0;
if (channel_idx >= config->nr_ctm_channels)
return -EINVAL;
chan_bitmask = BIT(channel_idx);
spin_lock(&drvdata->spinlock);
reg_value = config->ctigate;
switch (op) {
case CTI_GATE_CHAN_ENABLE:
reg_value |= chan_bitmask;
break;
case CTI_GATE_CHAN_DISABLE:
reg_value &= ~chan_bitmask;
break;
default:
err = -EINVAL;
break;
}
if (err == 0) {
config->ctigate = reg_value;
if (cti_active(config))
cti_write_single_reg(drvdata, CTIGATE, reg_value);
}
spin_unlock(&drvdata->spinlock);
return err;
}
int cti_channel_setop(struct device *dev, enum cti_chan_set_op op,
u32 channel_idx)
{
struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct cti_config *config = &drvdata->config;
u32 chan_bitmask;
u32 reg_value;
u32 reg_offset;
int err = 0;
if (channel_idx >= config->nr_ctm_channels)
return -EINVAL;
chan_bitmask = BIT(channel_idx);
spin_lock(&drvdata->spinlock);
reg_value = config->ctiappset;
switch (op) {
case CTI_CHAN_SET:
config->ctiappset |= chan_bitmask;
reg_value = config->ctiappset;
reg_offset = CTIAPPSET;
break;
case CTI_CHAN_CLR:
config->ctiappset &= ~chan_bitmask;
reg_value = chan_bitmask;
reg_offset = CTIAPPCLEAR;
break;
case CTI_CHAN_PULSE:
config->ctiappset &= ~chan_bitmask;
reg_value = chan_bitmask;
reg_offset = CTIAPPPULSE;
break;
default:
err = -EINVAL;
break;
}
if ((err == 0) && cti_active(config))
cti_write_single_reg(drvdata, reg_offset, reg_value);
spin_unlock(&drvdata->spinlock);
return err;
}
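/*
 * Illustrative sketch (not part of this patch): how an in-kernel user of
 * the channel API above might route CTI output trigger 0 onto CTM
 * channel 1, open the gate for that channel, and pulse it from software.
 * 'dev' is assumed to be the CTI's coresight device; the channel and
 * trigger indexes are placeholders.
 */
static int __maybe_unused cti_channel_api_example(struct device *dev)
{
	int ret;
	/* connect CTM channel 1 to output trigger signal 0 */
	ret = cti_channel_trig_op(dev, CTI_CHAN_ATTACH, CTI_TRIG_OUT, 1, 0);
	if (ret)
		return ret;
	/* allow channel 1 to propagate from this CTI onto the CTM */
	ret = cti_channel_gate_op(dev, CTI_GATE_CHAN_ENABLE, 1);
	if (ret)
		return ret;
	/* generate a software pulse on channel 1 via CTIAPPPULSE */
	return cti_channel_setop(dev, CTI_CHAN_PULSE, 1);
}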
/*
* Look for a matching connection device name in the list of connections.
* If found then swap in the csdev name, set trig con association pointer
* and return found.
*/
static bool
cti_match_fixup_csdev(struct cti_device *ctidev, const char *node_name,
struct coresight_device *csdev)
{
struct cti_trig_con *tc;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev_name) {
if (!strcmp(node_name, tc->con_dev_name)) {
/* match: so swap in csdev name & dev */
tc->con_dev_name = dev_name(&csdev->dev);
tc->con_dev = csdev;
return true;
}
}
}
return false;
}
/*
* Search the cti list to add an associated CTI into the supplied CS device
* This will set the association if CTI declared before the CS device.
* (called from coresight_register() with coresight_mutex locked).
*/
void cti_add_assoc_to_csdev(struct coresight_device *csdev)
{
struct cti_drvdata *ect_item;
struct cti_device *ctidev;
const char *node_name = NULL;
/* protect the list */
mutex_lock(&ect_mutex);
/* exit if current is an ECT device. */
if ((csdev->type == CORESIGHT_DEV_TYPE_ECT) || list_empty(&ect_net))
goto cti_add_done;
/* if we didn't find the csdev previously we used the fwnode name */
node_name = cti_plat_get_node_name(dev_fwnode(csdev->dev.parent));
if (!node_name)
goto cti_add_done;
/* for each CTI in list... */
list_for_each_entry(ect_item, &ect_net, node) {
ctidev = &ect_item->ctidev;
if (cti_match_fixup_csdev(ctidev, node_name, csdev)) {
/*
* if we found a matching csdev then update the ECT
* association pointer for the device with this CTI.
*/
csdev->ect_dev = ect_item->csdev;
break;
}
}
cti_add_done:
mutex_unlock(&ect_mutex);
}
EXPORT_SYMBOL_GPL(cti_add_assoc_to_csdev);
/*
* Removing the associated devices is easier.
* A CTI will not have a value for csdev->ect_dev.
*/
void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
{
struct cti_drvdata *ctidrv;
struct cti_trig_con *tc;
struct cti_device *ctidev;
mutex_lock(&ect_mutex);
if (csdev->ect_dev) {
ctidrv = csdev_to_cti_drvdata(csdev->ect_dev);
ctidev = &ctidrv->ctidev;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev == csdev->ect_dev) {
tc->con_dev = NULL;
break;
}
}
csdev->ect_dev = NULL;
}
mutex_unlock(&ect_mutex);
}
EXPORT_SYMBOL_GPL(cti_remove_assoc_from_csdev);
/*
* Update the cross references where the associated device was found
* while we were building the connection info. This will occur if the
* assoc device was registered before the CTI.
*/
static void cti_update_conn_xrefs(struct cti_drvdata *drvdata)
{
struct cti_trig_con *tc;
struct cti_device *ctidev = &drvdata->ctidev;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev)
/* set tc->con_dev->ect_dev */
coresight_set_assoc_ectdev_mutex(tc->con_dev,
drvdata->csdev);
}
}
static void cti_remove_conn_xrefs(struct cti_drvdata *drvdata)
{
struct cti_trig_con *tc;
struct cti_device *ctidev = &drvdata->ctidev;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev) {
coresight_set_assoc_ectdev_mutex(tc->con_dev,
NULL);
}
}
}
/** cti ect operations **/
int cti_enable(struct coresight_device *csdev)
{
struct cti_drvdata *drvdata = csdev_to_cti_drvdata(csdev);
return cti_enable_hw(drvdata);
}
int cti_disable(struct coresight_device *csdev)
{
struct cti_drvdata *drvdata = csdev_to_cti_drvdata(csdev);
return cti_disable_hw(drvdata);
}
const struct coresight_ops_ect cti_ops_ect = {
.enable = cti_enable,
.disable = cti_disable,
};
const struct coresight_ops cti_ops = {
.ect_ops = &cti_ops_ect,
};
/*
* Free up CTI specific resources
* called by dev->release, need to call down to underlying csdev release.
*/
static void cti_device_release(struct device *dev)
{
struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct cti_drvdata *ect_item, *ect_tmp;
mutex_lock(&ect_mutex);
cti_remove_conn_xrefs(drvdata);
/* remove from the list */
list_for_each_entry_safe(ect_item, ect_tmp, &ect_net, node) {
if (ect_item == drvdata) {
list_del(&ect_item->node);
break;
}
}
mutex_unlock(&ect_mutex);
if (drvdata->csdev_release)
drvdata->csdev_release(dev);
}
static int cti_probe(struct amba_device *adev, const struct amba_id *id)
{
int ret = 0;
void __iomem *base;
struct device *dev = &adev->dev;
struct cti_drvdata *drvdata = NULL;
struct coresight_desc cti_desc;
struct coresight_platform_data *pdata = NULL;
struct resource *res = &adev->res;
/* driver data */
drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
if (!drvdata) {
ret = -ENOMEM;
dev_info(dev, "%s, mem err\n", __func__);
goto err_out;
}
/* Validity for the resource is already checked by the AMBA core */
base = devm_ioremap_resource(dev, res);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
dev_err(dev, "%s, remap err\n", __func__);
goto err_out;
}
drvdata->base = base;
dev_set_drvdata(dev, drvdata);
/* default CTI device info */
drvdata->ctidev.cpu = -1;
drvdata->ctidev.nr_trig_con = 0;
drvdata->ctidev.ctm_id = 0;
INIT_LIST_HEAD(&drvdata->ctidev.trig_cons);
spin_lock_init(&drvdata->spinlock);
/* initialise CTI driver config values */
cti_set_default_config(dev, drvdata);
pdata = coresight_cti_get_platform_data(dev);
if (IS_ERR(pdata)) {
dev_err(dev, "coresight_cti_get_platform_data err\n");
ret = PTR_ERR(pdata);
goto err_out;
}
/* default to powered - could change on PM notifications */
drvdata->config.hw_powered = true;
/* set up device name - depends on whether the CTI is CPU bound */
if (drvdata->ctidev.cpu >= 0)
cti_desc.name = devm_kasprintf(dev, GFP_KERNEL, "cti_cpu%d",
drvdata->ctidev.cpu);
else
cti_desc.name = coresight_alloc_device_name(&cti_sys_devs, dev);
if (!cti_desc.name) {
ret = -ENOMEM;
goto err_out;
}
/* create dynamic attributes for connections */
ret = cti_create_cons_sysfs(dev, drvdata);
if (ret) {
dev_err(dev, "%s: create dynamic sysfs entries failed\n",
cti_desc.name);
goto err_out;
}
/* set up coresight component description */
cti_desc.pdata = pdata;
cti_desc.type = CORESIGHT_DEV_TYPE_ECT;
cti_desc.subtype.ect_subtype = CORESIGHT_DEV_SUBTYPE_ECT_CTI;
cti_desc.ops = &cti_ops;
cti_desc.groups = drvdata->ctidev.con_groups;
cti_desc.dev = dev;
drvdata->csdev = coresight_register(&cti_desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
goto err_out;
}
/* add to list of CTI devices */
mutex_lock(&ect_mutex);
list_add(&drvdata->node, &ect_net);
/* set any cross references */
cti_update_conn_xrefs(drvdata);
mutex_unlock(&ect_mutex);
/* set up release chain */
drvdata->csdev_release = drvdata->csdev->dev.release;
drvdata->csdev->dev.release = cti_device_release;
/* all done - dec pm refcount */
pm_runtime_put(&adev->dev);
dev_info(&drvdata->csdev->dev, "CTI initialized\n");
return 0;
err_out:
return ret;
}
static struct amba_cs_uci_id uci_id_cti[] = {
{
/* CTI UCI data */
.devarch = 0x47701a14, /* CTI v2 */
.devarch_mask = 0xfff0ffff,
.devtype = 0x00000014, /* maj(0x4-debug) min(0x1-ECT) */
}
};
static const struct amba_id cti_ids[] = {
CS_AMBA_ID(0x000bb906), /* Coresight CTI (SoC 400), C-A72, C-A57 */
CS_AMBA_ID(0x000bb922), /* CTI - C-A8 */
CS_AMBA_ID(0x000bb9a8), /* CTI - C-A53 */
CS_AMBA_ID(0x000bb9aa), /* CTI - C-A73 */
CS_AMBA_UCI_ID(0x000bb9da, uci_id_cti), /* CTI - C-A35 */
CS_AMBA_UCI_ID(0x000bb9ed, uci_id_cti), /* Coresight CTI (SoC 600) */
{ 0, 0},
};
static struct amba_driver cti_driver = {
.drv = {
.name = "coresight-cti",
.owner = THIS_MODULE,
.suppress_bind_attrs = true,
},
.probe = cti_probe,
.id_table = cti_ids,
};
builtin_amba_driver(cti_driver);


@ -0,0 +1,235 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018 Linaro Limited, All rights reserved.
* Author: Mike Leach <mike.leach@linaro.org>
*/
#ifndef _CORESIGHT_CORESIGHT_CTI_H
#define _CORESIGHT_CORESIGHT_CTI_H
#include <asm/local.h>
#include <linux/spinlock.h>
#include "coresight-priv.h"
/*
* Device registers
* 0x000 - 0x144: CTI programming and status
* 0xEDC - 0xEF8: CTI integration test.
* 0xF00 - 0xFFC: Coresight management registers.
*/
/* CTI programming registers */
#define CTICONTROL 0x000
#define CTIINTACK 0x010
#define CTIAPPSET 0x014
#define CTIAPPCLEAR 0x018
#define CTIAPPPULSE 0x01C
#define CTIINEN(n) (0x020 + (4 * n))
#define CTIOUTEN(n) (0x0A0 + (4 * n))
#define CTITRIGINSTATUS 0x130
#define CTITRIGOUTSTATUS 0x134
#define CTICHINSTATUS 0x138
#define CTICHOUTSTATUS 0x13C
#define CTIGATE 0x140
#define ASICCTL 0x144
/* Integration test registers */
#define ITCHINACK 0xEDC /* WO CTI CSSoc 400 only */
#define ITTRIGINACK 0xEE0 /* WO CTI CSSoc 400 only */
#define ITCHOUT 0xEE4 /* WO RW-600 */
#define ITTRIGOUT 0xEE8 /* WO RW-600 */
#define ITCHOUTACK 0xEEC /* RO CTI CSSoc 400 only */
#define ITTRIGOUTACK 0xEF0 /* RO CTI CSSoc 400 only */
#define ITCHIN 0xEF4 /* RO */
#define ITTRIGIN 0xEF8 /* RO */
/* management registers */
#define CTIDEVAFF0 0xFA8
#define CTIDEVAFF1 0xFAC
/*
* CTI CSSoc 600 has a max of 32 trigger signals per direction.
* CTI CSSoc 400 has 8 IO triggers - other CTIs can be impl def.
* Max of in and out defined in the DEVID register.
* - pick up actual number used from .dts parameters if present.
*/
#define CTIINOUTEN_MAX 32
/**
* Group of related trigger signals
*
* @nr_sigs: number of signals in the group.
* @used_mask: bitmask representing the signal indexes in the group.
* @sig_types: array of types for the signals, length nr_sigs.
*/
struct cti_trig_grp {
int nr_sigs;
u32 used_mask;
int sig_types[];
};
/**
* Trigger connection - connection between a CTI and another (coresight) device;
* lists the input and output trigger signals for the device.
*
* @con_in: connected CTIIN signals for the device.
* @con_out: connected CTIOUT signals for the device.
* @con_dev: coresight device connected to the CTI, NULL if not CS device
* @con_dev_name: name of connected device (CS or CPU)
* @node: entry node in list of connections.
* @con_attrs: Dynamic sysfs attributes specific to this connection.
* @attr_group: Dynamic attribute group created for this connection.
*/
struct cti_trig_con {
struct cti_trig_grp *con_in;
struct cti_trig_grp *con_out;
struct coresight_device *con_dev;
const char *con_dev_name;
struct list_head node;
struct attribute **con_attrs;
struct attribute_group *attr_group;
};
/**
* struct cti_device - description of CTI device properties.
*
* @nr_trig_con: Number of external devices connected to this device.
* @ctm_id: which CTM this device is connected to (by default it is
* assumed there is a single CTM per SoC, ID 0).
* @trig_cons: list of connections to this device.
* @cpu: CPU ID if associated with CPU, -1 otherwise.
* @con_groups: combined static and dynamic sysfs groups for trigger
* connections.
*/
struct cti_device {
int nr_trig_con;
u32 ctm_id;
struct list_head trig_cons;
int cpu;
const struct attribute_group **con_groups;
};
/**
* struct cti_config - configuration of the CTI device hardware
*
* @nr_trig_max: Max number of trigger signals implemented on device.
* (max of trig_in or trig_out) - from ID register.
* @nr_ctm_channels: number of available CTM channels - from ID register.
* @enable_req_count: CTI is enabled alongside >=1 associated devices.
* @hw_enabled: true if hw is currently enabled.
* @hw_powered: true if associated cpu powered on, or no cpu.
* @trig_in_use: bitfield of in triggers registered as in use.
* @trig_out_use: bitfield of out triggers registered as in use.
* @trig_out_filter: bitfield of out triggers that are blocked if filter
* enabled. Typically this would be dbgreq / restart on
* a core CTI.
* @trig_filter_enable: 1 if filtering enabled.
* @xtrig_rchan_sel: channel selection for xtrigger connection show.
* @ctiappset: CTI Software application channel set.
* @ctiinout_sel: register selector for INEN and OUTEN regs.
* @ctiinen: enable input trigger to a channel.
* @ctiouten: enable output trigger from a channel.
* @ctigate: gate channel output from CTI to CTM.
* @asicctl: asic control register.
*/
struct cti_config {
/* hardware description */
int nr_ctm_channels;
int nr_trig_max;
/* cti enable control */
atomic_t enable_req_count;
bool hw_enabled;
bool hw_powered;
/* registered triggers and filtering */
u32 trig_in_use;
u32 trig_out_use;
u32 trig_out_filter;
bool trig_filter_enable;
u8 xtrig_rchan_sel;
/* cti cross trig programmable regs */
u32 ctiappset;
u8 ctiinout_sel;
u32 ctiinen[CTIINOUTEN_MAX];
u32 ctiouten[CTIINOUTEN_MAX];
u32 ctigate;
u32 asicctl;
};
/**
* struct cti_drvdata - specifics for the CTI device
* @base: Memory mapped base address for this component.
* @csdev: Standard CoreSight device information.
* @ctidev: Extra information needed by the CTI/CTM framework.
* @spinlock: Control data access to one at a time.
* @config: Configuration data for this CTI device.
* @node: List entry of this device in the list of CTI devices.
* @csdev_release: release function for underlying coresight_device.
*/
struct cti_drvdata {
void __iomem *base;
struct coresight_device *csdev;
struct cti_device ctidev;
spinlock_t spinlock;
struct cti_config config;
struct list_head node;
void (*csdev_release)(struct device *dev);
};
/*
* Channel operation types.
*/
enum cti_chan_op {
CTI_CHAN_ATTACH,
CTI_CHAN_DETACH,
};
enum cti_trig_dir {
CTI_TRIG_IN,
CTI_TRIG_OUT,
};
enum cti_chan_gate_op {
CTI_GATE_CHAN_ENABLE,
CTI_GATE_CHAN_DISABLE,
};
enum cti_chan_set_op {
CTI_CHAN_SET,
CTI_CHAN_CLR,
CTI_CHAN_PULSE,
};
/* private cti driver fns & vars */
extern const struct attribute_group *coresight_cti_groups[];
int cti_add_default_connection(struct device *dev,
struct cti_drvdata *drvdata);
int cti_add_connection_entry(struct device *dev, struct cti_drvdata *drvdata,
struct cti_trig_con *tc,
struct coresight_device *csdev,
const char *assoc_dev_name);
struct cti_trig_con *cti_allocate_trig_con(struct device *dev, int in_sigs,
int out_sigs);
int cti_enable(struct coresight_device *csdev);
int cti_disable(struct coresight_device *csdev);
void cti_write_all_hw_regs(struct cti_drvdata *drvdata);
void cti_write_intack(struct device *dev, u32 ackval);
void cti_write_single_reg(struct cti_drvdata *drvdata, int offset, u32 value);
int cti_channel_trig_op(struct device *dev, enum cti_chan_op op,
enum cti_trig_dir direction, u32 channel_idx,
u32 trigger_idx);
int cti_channel_gate_op(struct device *dev, enum cti_chan_gate_op op,
u32 channel_idx);
int cti_channel_setop(struct device *dev, enum cti_chan_set_op op,
u32 channel_idx);
int cti_create_cons_sysfs(struct device *dev, struct cti_drvdata *drvdata);
struct coresight_platform_data *
coresight_cti_get_platform_data(struct device *dev);
const char *cti_plat_get_node_name(struct fwnode_handle *fwnode);
/* cti powered and enabled */
static inline bool cti_active(struct cti_config *cfg)
{
return cfg->hw_powered && cfg->hw_enabled;
}
#endif /* _CORESIGHT_CORESIGHT_CTI_H */


@ -57,6 +57,26 @@ coresight_find_device_by_fwnode(struct fwnode_handle *fwnode)
return bus_find_device_by_fwnode(&amba_bustype, fwnode);
}
/*
* Find a registered coresight device from a device fwnode.
* The node info is associated with the AMBA parent, but the
* csdev keeps a copy so iterate round the coresight bus to
* find the device.
*/
struct coresight_device *
coresight_find_csdev_by_fwnode(struct fwnode_handle *r_fwnode)
{
struct device *dev;
struct coresight_device *csdev = NULL;
dev = bus_find_device_by_fwnode(&coresight_bustype, r_fwnode);
if (dev) {
csdev = to_coresight_device(dev);
put_device(dev);
}
return csdev;
}
#ifdef CONFIG_OF
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
{


@ -22,6 +22,7 @@
#define CORESIGHT_CLAIMCLR 0xfa4
#define CORESIGHT_LAR 0xfb0
#define CORESIGHT_LSR 0xfb4
#define CORESIGHT_DEVARCH 0xfbc
#define CORESIGHT_AUTHSTATUS 0xfb8
#define CORESIGHT_DEVID 0xfc8
#define CORESIGHT_DEVTYPE 0xfcc
@ -161,6 +162,16 @@ static inline int etm_readl_cp14(u32 off, unsigned int *val) { return 0; }
static inline int etm_writel_cp14(u32 off, u32 val) { return 0; }
#endif
#ifdef CONFIG_CORESIGHT_CTI
extern void cti_add_assoc_to_csdev(struct coresight_device *csdev);
extern void cti_remove_assoc_from_csdev(struct coresight_device *csdev);
#else
static inline void cti_add_assoc_to_csdev(struct coresight_device *csdev) {}
static inline void
cti_remove_assoc_from_csdev(struct coresight_device *csdev) {}
#endif
/*
* Macros and inline functions to handle CoreSight UCI data and driver
* private data in AMBA ID table entries, and extract data values.
@ -201,5 +212,9 @@ static inline void *coresight_get_uci_data(const struct amba_id *id)
}
void coresight_release_platform_data(struct coresight_platform_data *pdata);
struct coresight_device *
coresight_find_csdev_by_fwnode(struct fwnode_handle *r_fwnode);
void coresight_set_assoc_ectdev_mutex(struct coresight_device *csdev,
struct coresight_device *ect_csdev);
#endif


@ -216,6 +216,44 @@ void coresight_disclaim_device(void __iomem *base)
CS_LOCK(base);
}
/* enable or disable an associated CTI device of the supplied CS device */
static int
coresight_control_assoc_ectdev(struct coresight_device *csdev, bool enable)
{
int ect_ret = 0;
struct coresight_device *ect_csdev = csdev->ect_dev;
if (!ect_csdev)
return 0;
if (enable) {
if (ect_ops(ect_csdev)->enable)
ect_ret = ect_ops(ect_csdev)->enable(ect_csdev);
} else {
if (ect_ops(ect_csdev)->disable)
ect_ret = ect_ops(ect_csdev)->disable(ect_csdev);
}
/* output warning if ECT enable is preventing trace operation */
if (ect_ret)
dev_info(&csdev->dev, "Associated ECT device (%s) %s failed\n",
dev_name(&ect_csdev->dev),
enable ? "enable" : "disable");
return ect_ret;
}
/*
* Set the associated ect / cti device while holding the coresight_mutex
* to avoid a race with coresight_enable that may try to use this value.
*/
void coresight_set_assoc_ectdev_mutex(struct coresight_device *csdev,
struct coresight_device *ect_csdev)
{
mutex_lock(&coresight_mutex);
csdev->ect_dev = ect_csdev;
mutex_unlock(&coresight_mutex);
}
static int coresight_enable_sink(struct coresight_device *csdev,
u32 mode, void *data)
{
@ -228,9 +266,14 @@ static int coresight_enable_sink(struct coresight_device *csdev,
if (!sink_ops(csdev)->enable)
return -EINVAL;
ret = sink_ops(csdev)->enable(csdev, mode, data);
ret = coresight_control_assoc_ectdev(csdev, true);
if (ret)
return ret;
ret = sink_ops(csdev)->enable(csdev, mode, data);
if (ret) {
coresight_control_assoc_ectdev(csdev, false);
return ret;
}
csdev->enable = true;
return 0;
@ -246,6 +289,7 @@ static void coresight_disable_sink(struct coresight_device *csdev)
ret = sink_ops(csdev)->disable(csdev);
if (ret)
return;
coresight_control_assoc_ectdev(csdev, false);
csdev->enable = false;
}
@ -269,8 +313,15 @@ static int coresight_enable_link(struct coresight_device *csdev,
if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT && outport < 0)
return outport;
if (link_ops(csdev)->enable)
ret = link_ops(csdev)->enable(csdev, inport, outport);
if (link_ops(csdev)->enable) {
ret = coresight_control_assoc_ectdev(csdev, true);
if (!ret) {
ret = link_ops(csdev)->enable(csdev, inport, outport);
if (ret)
coresight_control_assoc_ectdev(csdev, false);
}
}
if (!ret)
csdev->enable = true;
@ -300,8 +351,10 @@ static void coresight_disable_link(struct coresight_device *csdev,
nr_conns = 1;
}
if (link_ops(csdev)->disable)
if (link_ops(csdev)->disable) {
link_ops(csdev)->disable(csdev, inport, outport);
coresight_control_assoc_ectdev(csdev, false);
}
for (i = 0; i < nr_conns; i++)
if (atomic_read(&csdev->refcnt[i]) != 0)
@ -322,9 +375,14 @@ static int coresight_enable_source(struct coresight_device *csdev, u32 mode)
if (!csdev->enable) {
if (source_ops(csdev)->enable) {
ret = source_ops(csdev)->enable(csdev, NULL, mode);
ret = coresight_control_assoc_ectdev(csdev, true);
if (ret)
return ret;
ret = source_ops(csdev)->enable(csdev, NULL, mode);
if (ret) {
coresight_control_assoc_ectdev(csdev, false);
return ret;
}
}
csdev->enable = true;
}
@ -347,6 +405,7 @@ static bool coresight_disable_source(struct coresight_device *csdev)
if (atomic_dec_return(csdev->refcnt) == 0) {
if (source_ops(csdev)->disable)
source_ops(csdev)->disable(csdev, NULL);
coresight_control_assoc_ectdev(csdev, false);
csdev->enable = false;
}
return !csdev->enable;
@ -955,12 +1014,16 @@ static struct device_type coresight_dev_type[] = {
{
.name = "helper",
},
{
.name = "ect",
},
};
static void coresight_device_release(struct device *dev)
{
struct coresight_device *csdev = to_coresight_device(dev);
cti_remove_assoc_from_csdev(csdev);
fwnode_handle_put(csdev->dev.fwnode);
kfree(csdev->refcnt);
kfree(csdev);
@ -1027,17 +1090,11 @@ static void coresight_fixup_device_conns(struct coresight_device *csdev)
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_connection *conn = &csdev->pdata->conns[i];
struct device *dev = NULL;
dev = bus_find_device_by_fwnode(&coresight_bustype, conn->child_fwnode);
if (dev) {
conn->child_dev = to_coresight_device(dev);
/* and put reference from 'bus_find_device()' */
put_device(dev);
} else {
conn->child_dev =
coresight_find_csdev_by_fwnode(conn->child_fwnode);
if (!conn->child_dev)
csdev->orphan = true;
conn->child_dev = NULL;
}
}
}
@ -1249,6 +1306,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
coresight_fixup_device_conns(csdev);
coresight_fixup_orphan_conns(csdev);
cti_add_assoc_to_csdev(csdev);
mutex_unlock(&coresight_mutex);


@ -47,11 +47,13 @@ struct intel_th_output {
/**
* struct intel_th_drvdata - describes hardware capabilities and quirks
* @tscu_enable: device needs SW to enable time stamping unit
* @multi_is_broken: device's multiblock mode is broken
* @has_mintctl: device has interrupt control (MINTCTL) register
* @host_mode_only: device can only operate in 'host debugger' mode
*/
struct intel_th_drvdata {
unsigned int tscu_enable : 1,
multi_is_broken : 1,
has_mintctl : 1,
host_mode_only : 1;
};


@ -138,6 +138,7 @@ struct msc {
struct list_head win_list;
struct sg_table single_sgt;
struct msc_window *cur_win;
struct msc_window *switch_on_unlock;
unsigned long nr_pages;
unsigned long single_sz;
unsigned int single_wrap : 1;
@ -154,10 +155,13 @@ struct msc {
struct list_head iter_list;
bool stop_on_full;
/* config */
unsigned int enabled : 1,
wrap : 1,
do_irq : 1;
do_irq : 1,
multi_is_broken : 1;
unsigned int mode;
unsigned int burst_len;
unsigned int index;
@ -1665,7 +1669,7 @@ static int intel_th_msc_init(struct msc *msc)
{
atomic_set(&msc->user_count, -1);
msc->mode = MSC_MODE_MULTI;
msc->mode = msc->multi_is_broken ? MSC_MODE_SINGLE : MSC_MODE_MULTI;
mutex_init(&msc->buf_mutex);
INIT_LIST_HEAD(&msc->win_list);
INIT_LIST_HEAD(&msc->iter_list);
@ -1717,6 +1721,10 @@ void intel_th_msc_window_unlock(struct device *dev, struct sg_table *sgt)
return;
msc_win_set_lockout(win, WIN_LOCKED, WIN_READY);
if (msc->switch_on_unlock == win) {
msc->switch_on_unlock = NULL;
msc_win_switch(msc);
}
}
EXPORT_SYMBOL_GPL(intel_th_msc_window_unlock);
@ -1757,7 +1765,11 @@ static irqreturn_t intel_th_msc_interrupt(struct intel_th_device *thdev)
/* next window: if READY, proceed, if LOCKED, stop the trace */
if (msc_win_set_lockout(next_win, WIN_READY, WIN_INUSE)) {
schedule_work(&msc->work);
if (msc->stop_on_full)
schedule_work(&msc->work);
else
msc->switch_on_unlock = next_win;
return IRQ_HANDLED;
}
@ -1877,6 +1889,9 @@ mode_store(struct device *dev, struct device_attribute *attr, const char *buf,
return -EINVAL;
found:
if (i == MSC_MODE_MULTI && msc->multi_is_broken)
return -EOPNOTSUPP;
mutex_lock(&msc->buf_mutex);
ret = 0;
@ -2047,11 +2062,36 @@ win_switch_store(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR_WO(win_switch);
static ssize_t stop_on_full_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct msc *msc = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", msc->stop_on_full);
}
static ssize_t stop_on_full_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t size)
{
struct msc *msc = dev_get_drvdata(dev);
int ret;
ret = kstrtobool(buf, &msc->stop_on_full);
if (ret)
return ret;
return size;
}
static DEVICE_ATTR_RW(stop_on_full);
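/*
 * Illustrative usage note (not part of this patch): from user space the
 * new attribute reads and writes like any other MSC sysfs file, e.g.
 * "echo 1 > /sys/bus/intel_th/devices/<dev>-msc<n>/stop_on_full" to stop
 * the trace when the next window is locked, instead of deferring the
 * window switch until that window is unlocked (the path is an example;
 * device names vary by system).
 */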
static struct attribute *msc_output_attrs[] = {
&dev_attr_wrap.attr,
&dev_attr_mode.attr,
&dev_attr_nr_pages.attr,
&dev_attr_win_switch.attr,
&dev_attr_stop_on_full.attr,
NULL,
};
@ -2083,6 +2123,9 @@ static int intel_th_msc_probe(struct intel_th_device *thdev)
if (!res)
msc->do_irq = 1;
if (INTEL_TH_CAP(to_intel_th(thdev), multi_is_broken))
msc->multi_is_broken = 1;
msc->index = thdev->id;
msc->thdev = thdev;


@ -120,6 +120,10 @@ static void intel_th_pci_remove(struct pci_dev *pdev)
pci_free_irq_vectors(pdev);
}
static const struct intel_th_drvdata intel_th_1x_multi_is_broken = {
.multi_is_broken = 1,
};
static const struct intel_th_drvdata intel_th_2x = {
.tscu_enable = 1,
.has_mintctl = 1,
@ -152,7 +156,7 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
{
/* Kaby Lake PCH-H */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa2a6),
.driver_data = (kernel_ulong_t)0,
.driver_data = (kernel_ulong_t)&intel_th_1x_multi_is_broken,
},
{
/* Denverton */
@ -207,7 +211,7 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
{
/* Comet Lake PCH-V */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa3a6),
.driver_data = (kernel_ulong_t)&intel_th_2x,
.driver_data = (kernel_ulong_t)&intel_th_1x_multi_is_broken,
},
{
/* Ice Lake NNPI */


@ -5,6 +5,9 @@ config INTERCONNECT_QCOM
help
Support for Qualcomm's Network-on-Chip interconnect hardware.
config INTERCONNECT_QCOM_BCM_VOTER
tristate
config INTERCONNECT_QCOM_MSM8916
tristate "Qualcomm MSM8916 interconnect driver"
depends on INTERCONNECT_QCOM
@ -23,6 +26,13 @@ config INTERCONNECT_QCOM_MSM8974
This is a driver for the Qualcomm Network-on-Chip on msm8974-based
platforms.
config INTERCONNECT_QCOM_OSM_L3
tristate "Qualcomm OSM L3 interconnect driver"
depends on INTERCONNECT_QCOM || COMPILE_TEST
help
Say y here to support the Operating State Manager (OSM) interconnect
driver which controls the scaling of L3 caches on Qualcomm SoCs.
config INTERCONNECT_QCOM_QCS404
tristate "Qualcomm QCS404 interconnect driver"
depends on INTERCONNECT_QCOM
@ -32,10 +42,25 @@ config INTERCONNECT_QCOM_QCS404
This is a driver for the Qualcomm Network-on-Chip on qcs404-based
platforms.
config INTERCONNECT_QCOM_RPMH
tristate
config INTERCONNECT_QCOM_SC7180
tristate "Qualcomm SC7180 interconnect driver"
depends on INTERCONNECT_QCOM
depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
select INTERCONNECT_QCOM_RPMH
select INTERCONNECT_QCOM_BCM_VOTER
help
This is a driver for the Qualcomm Network-on-Chip on sc7180-based
platforms.
config INTERCONNECT_QCOM_SDM845
tristate "Qualcomm SDM845 interconnect driver"
depends on INTERCONNECT_QCOM
depends on (QCOM_RPMH && QCOM_COMMAND_DB && OF) || COMPILE_TEST
select INTERCONNECT_QCOM_RPMH
select INTERCONNECT_QCOM_BCM_VOTER
help
This is a driver for the Qualcomm Network-on-Chip on sdm845-based
platforms.


@ -1,13 +1,21 @@
# SPDX-License-Identifier: GPL-2.0
icc-bcm-voter-objs := bcm-voter.o
qnoc-msm8916-objs := msm8916.o
qnoc-msm8974-objs := msm8974.o
icc-osm-l3-objs := osm-l3.o
qnoc-qcs404-objs := qcs404.o
icc-rpmh-objs := icc-rpmh.o
qnoc-sc7180-objs := sc7180.o
qnoc-sdm845-objs := sdm845.o
icc-smd-rpm-objs := smd-rpm.o
obj-$(CONFIG_INTERCONNECT_QCOM_BCM_VOTER) += icc-bcm-voter.o
obj-$(CONFIG_INTERCONNECT_QCOM_MSM8916) += qnoc-msm8916.o
obj-$(CONFIG_INTERCONNECT_QCOM_MSM8974) += qnoc-msm8974.o
obj-$(CONFIG_INTERCONNECT_QCOM_OSM_L3) += icc-osm-l3.o
obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
obj-$(CONFIG_INTERCONNECT_QCOM_RPMH) += icc-rpmh.o
obj-$(CONFIG_INTERCONNECT_QCOM_SC7180) += qnoc-sc7180.o
obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += icc-smd-rpm.o


@ -0,0 +1,366 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#include <asm/div64.h>
#include <linux/interconnect-provider.h>
#include <linux/list_sort.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>
#include "bcm-voter.h"
#include "icc-rpmh.h"
static LIST_HEAD(bcm_voters);
static DEFINE_MUTEX(bcm_voter_lock);
/**
* struct bcm_voter - Bus Clock Manager voter
* @dev: reference to the device that communicates with the BCM
* @np: reference to the device node to match bcm voters
* @lock: mutex to protect commit and wake/sleep lists in the voter
* @commit_list: list containing bcms to be committed to hardware
* @ws_list: list containing bcms that have different wake/sleep votes
* @voter_node: list of bcm voters
*/
struct bcm_voter {
struct device *dev;
struct device_node *np;
struct mutex lock;
struct list_head commit_list;
struct list_head ws_list;
struct list_head voter_node;
};
static int cmp_vcd(void *priv, struct list_head *a, struct list_head *b)
{
const struct qcom_icc_bcm *bcm_a =
list_entry(a, struct qcom_icc_bcm, list);
const struct qcom_icc_bcm *bcm_b =
list_entry(b, struct qcom_icc_bcm, list);
if (bcm_a->aux_data.vcd < bcm_b->aux_data.vcd)
return -1;
else if (bcm_a->aux_data.vcd == bcm_b->aux_data.vcd)
return 0;
else
return 1;
}
static void bcm_aggregate(struct qcom_icc_bcm *bcm)
{
size_t i, bucket;
u64 agg_avg[QCOM_ICC_NUM_BUCKETS] = {0};
u64 agg_peak[QCOM_ICC_NUM_BUCKETS] = {0};
u64 temp;
for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) {
for (i = 0; i < bcm->num_nodes; i++) {
temp = bcm->nodes[i]->sum_avg[bucket] * bcm->aux_data.width;
do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels);
agg_avg[bucket] = max(agg_avg[bucket], temp);
temp = bcm->nodes[i]->max_peak[bucket] * bcm->aux_data.width;
do_div(temp, bcm->nodes[i]->buswidth);
agg_peak[bucket] = max(agg_peak[bucket], temp);
}
temp = agg_avg[bucket] * 1000ULL;
do_div(temp, bcm->aux_data.unit);
bcm->vote_x[bucket] = temp;
temp = agg_peak[bucket] * 1000ULL;
do_div(temp, bcm->aux_data.unit);
bcm->vote_y[bucket] = temp;
}
if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
bcm->vote_y[QCOM_ICC_BUCKET_AMC] == 0) {
bcm->vote_x[QCOM_ICC_BUCKET_AMC] = 1;
bcm->vote_x[QCOM_ICC_BUCKET_WAKE] = 1;
bcm->vote_y[QCOM_ICC_BUCKET_AMC] = 1;
bcm->vote_y[QCOM_ICC_BUCKET_WAKE] = 1;
}
}
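/*
 * Illustrative example (not part of this patch): assume hypothetical BCM
 * aux data of width = 4 and unit = 1000, and a single node with
 * buswidth = 16, channels = 1 and an aggregated sum_avg of 800000 for a
 * bucket. Then temp = 800000 * 4 / (16 * 1) = 200000, and the resulting
 * vote_x for that bucket is 200000 * 1000 / 1000 = 200000.
 */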
static inline void tcs_cmd_gen(struct tcs_cmd *cmd, u64 vote_x, u64 vote_y,
u32 addr, bool commit)
{
bool valid = true;
if (!cmd)
return;
if (vote_x == 0 && vote_y == 0)
valid = false;
if (vote_x > BCM_TCS_CMD_VOTE_MASK)
vote_x = BCM_TCS_CMD_VOTE_MASK;
if (vote_y > BCM_TCS_CMD_VOTE_MASK)
vote_y = BCM_TCS_CMD_VOTE_MASK;
cmd->addr = addr;
cmd->data = BCM_TCS_CMD(commit, valid, vote_x, vote_y);
/*
* Set the wait for completion flag on commands that need to be completed
* before the next command.
*/
if (commit)
cmd->wait = true;
}
static void tcs_list_gen(struct list_head *bcm_list, int bucket,
struct tcs_cmd tcs_list[MAX_BCMS],
int n[MAX_VCD + 1])
{
struct qcom_icc_bcm *bcm;
bool commit;
size_t idx = 0, batch = 0, cur_vcd_size = 0;
memset(n, 0, sizeof(int) * (MAX_VCD + 1));
list_for_each_entry(bcm, bcm_list, list) {
commit = false;
cur_vcd_size++;
if ((list_is_last(&bcm->list, bcm_list)) ||
bcm->aux_data.vcd != list_next_entry(bcm, list)->aux_data.vcd) {
commit = true;
cur_vcd_size = 0;
}
tcs_cmd_gen(&tcs_list[idx], bcm->vote_x[bucket],
bcm->vote_y[bucket], bcm->addr, commit);
idx++;
n[batch]++;
/*
* Batch the BCMs in such a way that we do not split them across
* multiple payloads when they are under the same VCD. This is
* to ensure that every BCM is committed since we only set the
* commit bit on the last BCM request of every VCD.
*/
if (n[batch] >= MAX_RPMH_PAYLOAD) {
if (!commit) {
n[batch] -= cur_vcd_size;
n[batch + 1] = cur_vcd_size;
}
batch++;
}
}
}
/**
* of_bcm_voter_get - gets a bcm voter handle from DT node
* @dev: device pointer for the consumer device
* @name: name for the bcm voter device
*
* This function will match a device_node pointer for the phandle
* specified in the device DT and return a bcm_voter handle on success.
*
* Returns bcm_voter pointer or ERR_PTR() on error. EPROBE_DEFER is returned
* when a matching bcm voter is yet to be found.
*/
struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name)
{
struct bcm_voter *voter = ERR_PTR(-EPROBE_DEFER);
struct bcm_voter *temp;
struct device_node *np, *node;
int idx = 0;
if (!dev || !dev->of_node)
return ERR_PTR(-ENODEV);
np = dev->of_node;
if (name) {
idx = of_property_match_string(np, "qcom,bcm-voter-names", name);
if (idx < 0)
return ERR_PTR(idx);
}
node = of_parse_phandle(np, "qcom,bcm-voters", idx);
mutex_lock(&bcm_voter_lock);
list_for_each_entry(temp, &bcm_voters, voter_node) {
if (temp->np == node) {
voter = temp;
break;
}
}
mutex_unlock(&bcm_voter_lock);
return voter;
}
EXPORT_SYMBOL_GPL(of_bcm_voter_get);
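/*
 * Illustrative sketch (not part of this patch): an RPMh interconnect
 * provider would typically resolve its voter at probe time, propagating
 * -EPROBE_DEFER until the voter device has registered ('qp' is a
 * hypothetical struct qcom_icc_provider being set up):
 *
 *	qp->voter = of_bcm_voter_get(qp->dev, NULL);
 *	if (IS_ERR(qp->voter))
 *		return PTR_ERR(qp->voter);
 */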
/**
* qcom_icc_bcm_voter_add - queues up the bcm nodes that require updates
* @voter: voter that the bcms are being added to
* @bcm: bcm to add to the commit and wake sleep list
*/
void qcom_icc_bcm_voter_add(struct bcm_voter *voter, struct qcom_icc_bcm *bcm)
{
if (!voter)
return;
mutex_lock(&voter->lock);
if (list_empty(&bcm->list))
list_add_tail(&bcm->list, &voter->commit_list);
if (list_empty(&bcm->ws_list))
list_add_tail(&bcm->ws_list, &voter->ws_list);
mutex_unlock(&voter->lock);
}
EXPORT_SYMBOL_GPL(qcom_icc_bcm_voter_add);
/**
* qcom_icc_bcm_voter_commit - generates and commits tcs cmds based on bcms
* @voter: voter that needs flushing
*
* This function generates a set of AMC commands and flushes to the BCM device
* associated with the voter. It conditionally generates WAKE and SLEEP commands
* based on deltas between WAKE/SLEEP requirements. The ws_list persists
* through multiple commit requests and bcm nodes are removed only when the
* requirements for WAKE match SLEEP.
*
* Returns 0 on success, or an appropriate error code otherwise.
*/
int qcom_icc_bcm_voter_commit(struct bcm_voter *voter)
{
struct qcom_icc_bcm *bcm;
struct qcom_icc_bcm *bcm_tmp;
int commit_idx[MAX_VCD + 1];
struct tcs_cmd cmds[MAX_BCMS];
int ret = 0;
if (!voter)
return 0;
mutex_lock(&voter->lock);
list_for_each_entry(bcm, &voter->commit_list, list)
bcm_aggregate(bcm);
/*
* Pre sort the BCMs based on VCD for ease of generating a command list
* that groups the BCMs with the same VCD together. VCDs are numbered
* with lowest being the most expensive time wise, ensuring that
* those commands are being sent the earliest in the queue. This needs
* to be sorted every commit since we can't guarantee the order in which
* the BCMs are added to the list.
*/
list_sort(NULL, &voter->commit_list, cmp_vcd);
/*
* Construct the command list based on a pre ordered list of BCMs
* based on VCD.
*/
tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx);
if (!commit_idx[0])
goto out;
ret = rpmh_invalidate(voter->dev);
if (ret) {
pr_err("Error invalidating RPMH client (%d)\n", ret);
goto out;
}
ret = rpmh_write_batch(voter->dev, RPMH_ACTIVE_ONLY_STATE,
cmds, commit_idx);
if (ret) {
pr_err("Error sending AMC RPMH requests (%d)\n", ret);
goto out;
}
list_for_each_entry_safe(bcm, bcm_tmp, &voter->commit_list, list)
list_del_init(&bcm->list);
list_for_each_entry_safe(bcm, bcm_tmp, &voter->ws_list, ws_list) {
/*
* Only generate WAKE and SLEEP commands if a resource's
* requirements change as the execution environment transitions
* between different power states.
*/
if (bcm->vote_x[QCOM_ICC_BUCKET_WAKE] !=
bcm->vote_x[QCOM_ICC_BUCKET_SLEEP] ||
bcm->vote_y[QCOM_ICC_BUCKET_WAKE] !=
bcm->vote_y[QCOM_ICC_BUCKET_SLEEP])
list_add_tail(&bcm->list, &voter->commit_list);
else
list_del_init(&bcm->ws_list);
}
if (list_empty(&voter->commit_list))
goto out;
list_sort(NULL, &voter->commit_list, cmp_vcd);
tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_WAKE, cmds, commit_idx);
ret = rpmh_write_batch(voter->dev, RPMH_WAKE_ONLY_STATE, cmds, commit_idx);
if (ret) {
pr_err("Error sending WAKE RPMH requests (%d)\n", ret);
goto out;
}
tcs_list_gen(&voter->commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx);
ret = rpmh_write_batch(voter->dev, RPMH_SLEEP_STATE, cmds, commit_idx);
if (ret) {
pr_err("Error sending SLEEP RPMH requests (%d)\n", ret);
goto out;
}
out:
list_for_each_entry_safe(bcm, bcm_tmp, &voter->commit_list, list)
list_del_init(&bcm->list);
mutex_unlock(&voter->lock);
return ret;
}
EXPORT_SYMBOL_GPL(qcom_icc_bcm_voter_commit);
static int qcom_icc_bcm_voter_probe(struct platform_device *pdev)
{
struct bcm_voter *voter;
voter = devm_kzalloc(&pdev->dev, sizeof(*voter), GFP_KERNEL);
if (!voter)
return -ENOMEM;
voter->dev = &pdev->dev;
voter->np = pdev->dev.of_node;
mutex_init(&voter->lock);
INIT_LIST_HEAD(&voter->commit_list);
INIT_LIST_HEAD(&voter->ws_list);
mutex_lock(&bcm_voter_lock);
list_add_tail(&voter->voter_node, &bcm_voters);
mutex_unlock(&bcm_voter_lock);
return 0;
}
static const struct of_device_id bcm_voter_of_match[] = {
{ .compatible = "qcom,bcm-voter" },
{ }
};
static struct platform_driver qcom_icc_bcm_voter_driver = {
.probe = qcom_icc_bcm_voter_probe,
.driver = {
.name = "bcm_voter",
.of_match_table = bcm_voter_of_match,
},
};
module_platform_driver(qcom_icc_bcm_voter_driver);
MODULE_AUTHOR("David Dai <daidavid1@codeaurora.org>");
MODULE_DESCRIPTION("Qualcomm BCM Voter interconnect driver");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,27 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#ifndef __DRIVERS_INTERCONNECT_QCOM_BCM_VOTER_H__
#define __DRIVERS_INTERCONNECT_QCOM_BCM_VOTER_H__
#include <soc/qcom/cmd-db.h>
#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>
#include "icc-rpmh.h"
#define DEFINE_QBCM(_name, _bcmname, _keepalive, ...) \
static struct qcom_icc_bcm _name = { \
.name = _bcmname, \
.keepalive = _keepalive, \
.num_nodes = ARRAY_SIZE(((struct qcom_icc_node *[]){ __VA_ARGS__ })), \
.nodes = { __VA_ARGS__ }, \
}
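/*
 * Illustrative example (not part of this patch): DEFINE_QBCM() expands to
 * a static struct qcom_icc_bcm definition. A platform file might declare,
 * with placeholder names not taken from a real SoC table:
 *
 *	DEFINE_QBCM(bcm_mc0, "MC0", true, &node_ebi);
 *
 * where &node_ebi is a previously defined struct qcom_icc_node and "MC0"
 * is the command db resource name looked up by qcom_icc_bcm_init().
 */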
struct bcm_voter *of_bcm_voter_get(struct device *dev, const char *name);
void qcom_icc_bcm_voter_add(struct bcm_voter *voter, struct qcom_icc_bcm *bcm);
int qcom_icc_bcm_voter_commit(struct bcm_voter *voter);
#endif


@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>
#include <linux/module.h>
#include "bcm-voter.h"
#include "icc-rpmh.h"
/**
* qcom_icc_pre_aggregate - cleans up stale values from prior icc_set
* @node: icc node to operate on
*/
void qcom_icc_pre_aggregate(struct icc_node *node)
{
size_t i;
struct qcom_icc_node *qn;
qn = node->data;
for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
qn->sum_avg[i] = 0;
qn->max_peak[i] = 0;
}
}
EXPORT_SYMBOL_GPL(qcom_icc_pre_aggregate);
/**
* qcom_icc_aggregate - aggregate bw for buckets indicated by tag
* @node: node to aggregate
* @tag: tag to indicate which buckets to aggregate
* @avg_bw: new bw to sum aggregate
* @peak_bw: new bw to max aggregate
* @agg_avg: existing aggregate avg bw val
* @agg_peak: existing aggregate peak bw val
*/
int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
{
size_t i;
struct qcom_icc_node *qn;
struct qcom_icc_provider *qp;
qn = node->data;
qp = to_qcom_provider(node->provider);
if (!tag)
tag = QCOM_ICC_TAG_ALWAYS;
for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
if (tag & BIT(i)) {
qn->sum_avg[i] += avg_bw;
qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw);
}
}
*agg_avg += avg_bw;
*agg_peak = max_t(u32, *agg_peak, peak_bw);
for (i = 0; i < qn->num_bcms; i++)
qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]);
return 0;
}
EXPORT_SYMBOL_GPL(qcom_icc_aggregate);
/**
* qcom_icc_set - set the constraints based on path
* @src: source node for the path to set constraints on
* @dst: destination node for the path to set constraints on
*
* Return: 0 on success, or an error code otherwise
*/
int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_icc_provider *qp;
struct icc_node *node;
if (!src)
node = dst;
else
node = src;
qp = to_qcom_provider(node->provider);
qcom_icc_bcm_voter_commit(qp->voter);
return 0;
}
EXPORT_SYMBOL_GPL(qcom_icc_set);
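/*
 * Illustrative sketch (not part of this patch): a provider driver built
 * on these helpers would typically wire them into its icc_provider
 * during probe (field names as in include/linux/interconnect-provider.h):
 *
 *	provider->set = qcom_icc_set;
 *	provider->pre_aggregate = qcom_icc_pre_aggregate;
 *	provider->aggregate = qcom_icc_aggregate;
 *	provider->xlate = of_icc_xlate_onecell;
 */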
/**
* qcom_icc_bcm_init - populates bcm aux data and connects qnodes
* @bcm: bcm to be initialized
* @dev: associated provider device
*
* Return: 0 on success, or an error code otherwise
*/
int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev)
{
struct qcom_icc_node *qn;
const struct bcm_db *data;
size_t data_count;
int i;
/* BCM is already initialised */
if (bcm->addr)
return 0;
bcm->addr = cmd_db_read_addr(bcm->name);
if (!bcm->addr) {
dev_err(dev, "%s could not find RPMh address\n",
bcm->name);
return -EINVAL;
}
data = cmd_db_read_aux_data(bcm->name, &data_count);
if (IS_ERR(data)) {
dev_err(dev, "%s command db read error (%ld)\n",
bcm->name, PTR_ERR(data));
return PTR_ERR(data);
}
if (!data_count) {
dev_err(dev, "%s command db missing or partial aux data\n",
bcm->name);
return -EINVAL;
}
bcm->aux_data.unit = le32_to_cpu(data->unit);
bcm->aux_data.width = le16_to_cpu(data->width);
bcm->aux_data.vcd = data->vcd;
bcm->aux_data.reserved = data->reserved;
INIT_LIST_HEAD(&bcm->list);
INIT_LIST_HEAD(&bcm->ws_list);
/* Link Qnodes to their respective BCMs */
for (i = 0; i < bcm->num_nodes; i++) {
qn = bcm->nodes[i];
qn->bcms[qn->num_bcms] = bcm;
qn->num_bcms++;
}
return 0;
}
EXPORT_SYMBOL_GPL(qcom_icc_bcm_init);
MODULE_LICENSE("GPL v2");


@ -0,0 +1,149 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#ifndef __DRIVERS_INTERCONNECT_QCOM_ICC_RPMH_H__
#define __DRIVERS_INTERCONNECT_QCOM_ICC_RPMH_H__
#define to_qcom_provider(_provider) \
container_of(_provider, struct qcom_icc_provider, provider)
/**
* struct qcom_icc_provider - Qualcomm specific interconnect provider
* @provider: generic interconnect provider
* @dev: reference to the NoC device
* @bcms: list of bcms that maps to the provider
* @num_bcms: number of @bcms
* @voter: bcm voter targeted by this provider
*/
struct qcom_icc_provider {
struct icc_provider provider;
struct device *dev;
struct qcom_icc_bcm **bcms;
size_t num_bcms;
struct bcm_voter *voter;
};
/**
* struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager (BCM)
* @unit: divisor used to convert bytes/sec bw value to an RPMh msg
* @width: multiplier used to convert bytes/sec bw value to an RPMh msg
* @vcd: virtual clock domain that this bcm belongs to
* @reserved: reserved field
*/
struct bcm_db {
__le32 unit;
__le16 width;
u8 vcd;
u8 reserved;
};
#define MAX_LINKS 128
#define MAX_BCMS 64
#define MAX_BCM_PER_NODE 3
#define MAX_VCD 10
/*
* The AMC bucket denotes constraints that are applied to hardware when
* icc_set_bw() completes, whereas the WAKE and SLEEP constraints are applied
* when the execution environment transitions between active and low power mode.
*/
#define QCOM_ICC_BUCKET_AMC 0
#define QCOM_ICC_BUCKET_WAKE 1
#define QCOM_ICC_BUCKET_SLEEP 2
#define QCOM_ICC_NUM_BUCKETS 3
#define QCOM_ICC_TAG_AMC BIT(QCOM_ICC_BUCKET_AMC)
#define QCOM_ICC_TAG_WAKE BIT(QCOM_ICC_BUCKET_WAKE)
#define QCOM_ICC_TAG_SLEEP BIT(QCOM_ICC_BUCKET_SLEEP)
#define QCOM_ICC_TAG_ACTIVE_ONLY (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE)
#define QCOM_ICC_TAG_ALWAYS (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE |\
QCOM_ICC_TAG_SLEEP)
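/*
 * Illustrative sketch (not part of this patch): a consumer can tag a path
 * so its vote only applies to selected buckets; e.g. restricting a vote
 * to the active/wake buckets drops the bandwidth while the system sleeps
 * ('path' is assumed to come from of_icc_get()):
 *
 *	icc_set_tag(path, QCOM_ICC_TAG_ACTIVE_ONLY);
 *	ret = icc_set_bw(path, kBps_to_icc(1000), kBps_to_icc(2000));
 */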
/**
* struct qcom_icc_node - Qualcomm specific interconnect nodes
* @name: the node name used in debugfs
* @links: an array of nodes where we can go next while traversing
* @id: a unique node identifier
* @num_links: the total number of @links
* @channels: num of channels at this node
* @buswidth: width of the interconnect between a node and the bus
* @sum_avg: current sum aggregate value of all avg bw requests
* @max_peak: current max aggregate value of all peak bw requests
* @bcms: list of bcms associated with this logical node
* @num_bcms: num of @bcms
*/
struct qcom_icc_node {
const char *name;
u16 links[MAX_LINKS];
u16 id;
u16 num_links;
u16 channels;
u16 buswidth;
u64 sum_avg[QCOM_ICC_NUM_BUCKETS];
u64 max_peak[QCOM_ICC_NUM_BUCKETS];
struct qcom_icc_bcm *bcms[MAX_BCM_PER_NODE];
size_t num_bcms;
};
/**
* struct qcom_icc_bcm - Qualcomm specific hardware accelerator nodes
* known as Bus Clock Manager (BCM)
* @name: the bcm node name used to fetch BCM data from command db
* @type: latency or bandwidth bcm
* @addr: address offsets used when voting to RPMH
* @vote_x: aggregated threshold values, represents sum_bw when @type is bw bcm
* @vote_y: aggregated threshold values, represents peak_bw when @type is bw bcm
* @dirty: flag used to indicate whether the bcm needs to be committed
* @keepalive: flag used to indicate whether a keepalive is required
* @aux_data: auxiliary data used when calculating threshold values and
* communicating with RPMh
* @list: used to link to other bcms when compiling lists for commit
* @ws_list: used to keep track of bcms that may transition between wake/sleep
* @num_nodes: total number of @nodes
* @nodes: list of qcom_icc_nodes that this BCM encapsulates
*/
struct qcom_icc_bcm {
const char *name;
u32 type;
u32 addr;
u64 vote_x[QCOM_ICC_NUM_BUCKETS];
u64 vote_y[QCOM_ICC_NUM_BUCKETS];
bool dirty;
bool keepalive;
struct bcm_db aux_data;
struct list_head list;
struct list_head ws_list;
size_t num_nodes;
struct qcom_icc_node *nodes[];
};
struct qcom_icc_fabric {
struct qcom_icc_node **nodes;
size_t num_nodes;
};
struct qcom_icc_desc {
struct qcom_icc_node **nodes;
size_t num_nodes;
struct qcom_icc_bcm **bcms;
size_t num_bcms;
};
#define DEFINE_QNODE(_name, _id, _channels, _buswidth, ...) \
static struct qcom_icc_node _name = { \
.id = _id, \
.name = #_name, \
.channels = _channels, \
.buswidth = _buswidth, \
.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })), \
.links = { __VA_ARGS__ }, \
}
int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
int qcom_icc_set(struct icc_node *src, struct icc_node *dst);
int qcom_icc_bcm_init(struct qcom_icc_bcm *bcm, struct device *dev);
void qcom_icc_pre_aggregate(struct icc_node *node);
#endif


@ -0,0 +1,276 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/interconnect-provider.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <dt-bindings/interconnect/qcom,osm-l3.h>
#include "sc7180.h"
#include "sdm845.h"
#define LUT_MAX_ENTRIES 40U
#define LUT_SRC GENMASK(31, 30)
#define LUT_L_VAL GENMASK(7, 0)
#define LUT_ROW_SIZE 32
#define CLK_HW_DIV 2
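/*
 * Illustrative example (not part of this patch): each LUT row encodes the
 * clock source in LUT_SRC and the PLL L-value in LUT_L_VAL. With a
 * hypothetical 19.2 MHz XO and lval = 48, qcom_osm_l3_probe() below
 * decodes the entry as 19.2 MHz * 48 = 921.6 MHz.
 */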
/* Register offsets */
#define REG_ENABLE 0x0
#define REG_FREQ_LUT 0x110
#define REG_PERF_STATE 0x920
#define OSM_L3_MAX_LINKS 1
#define to_qcom_provider(_provider) \
container_of(_provider, struct qcom_osm_l3_icc_provider, provider)
struct qcom_osm_l3_icc_provider {
void __iomem *base;
unsigned int max_state;
unsigned long lut_tables[LUT_MAX_ENTRIES];
struct icc_provider provider;
};
/**
* struct qcom_icc_node - Qualcomm specific interconnect nodes
* @name: the node name used in debugfs
* @links: an array of nodes where we can go next while traversing
* @id: a unique node identifier
* @num_links: the total number of @links
* @buswidth: width of the interconnect between a node and the bus
*/
struct qcom_icc_node {
const char *name;
u16 links[OSM_L3_MAX_LINKS];
u16 id;
u16 num_links;
u16 buswidth;
};
struct qcom_icc_desc {
struct qcom_icc_node **nodes;
size_t num_nodes;
};
#define DEFINE_QNODE(_name, _id, _buswidth, ...) \
static struct qcom_icc_node _name = { \
.name = #_name, \
.id = _id, \
.buswidth = _buswidth, \
.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })), \
.links = { __VA_ARGS__ }, \
}
DEFINE_QNODE(sdm845_osm_apps_l3, SDM845_MASTER_OSM_L3_APPS, 16, SDM845_SLAVE_OSM_L3);
DEFINE_QNODE(sdm845_osm_l3, SDM845_SLAVE_OSM_L3, 16);
static struct qcom_icc_node *sdm845_osm_l3_nodes[] = {
[MASTER_OSM_L3_APPS] = &sdm845_osm_apps_l3,
[SLAVE_OSM_L3] = &sdm845_osm_l3,
};
static const struct qcom_icc_desc sdm845_icc_osm_l3 = {
.nodes = sdm845_osm_l3_nodes,
.num_nodes = ARRAY_SIZE(sdm845_osm_l3_nodes),
};
DEFINE_QNODE(sc7180_osm_apps_l3, SC7180_MASTER_OSM_L3_APPS, 16, SC7180_SLAVE_OSM_L3);
DEFINE_QNODE(sc7180_osm_l3, SC7180_SLAVE_OSM_L3, 16);
static struct qcom_icc_node *sc7180_osm_l3_nodes[] = {
[MASTER_OSM_L3_APPS] = &sc7180_osm_apps_l3,
[SLAVE_OSM_L3] = &sc7180_osm_l3,
};
static const struct qcom_icc_desc sc7180_icc_osm_l3 = {
.nodes = sc7180_osm_l3_nodes,
.num_nodes = ARRAY_SIZE(sc7180_osm_l3_nodes),
};
static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
{
struct qcom_osm_l3_icc_provider *qp;
struct icc_provider *provider;
struct qcom_icc_node *qn;
struct icc_node *n;
unsigned int index;
u32 agg_peak = 0;
u32 agg_avg = 0;
u64 rate;
qn = src->data;
provider = src->provider;
qp = to_qcom_provider(provider);
list_for_each_entry(n, &provider->nodes, node_list)
provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
&agg_avg, &agg_peak);
rate = max(agg_avg, agg_peak);
rate = icc_units_to_bps(rate);
do_div(rate, qn->buswidth);
for (index = 0; index < qp->max_state - 1; index++) {
if (qp->lut_tables[index] >= rate)
break;
}
writel_relaxed(index, qp->base + REG_PERF_STATE);
return 0;
}
static int qcom_osm_l3_remove(struct platform_device *pdev)
{
struct qcom_osm_l3_icc_provider *qp = platform_get_drvdata(pdev);
icc_nodes_remove(&qp->provider);
return icc_provider_del(&qp->provider);
}
static int qcom_osm_l3_probe(struct platform_device *pdev)
{
u32 info, src, lval, i, prev_freq = 0, freq;
static unsigned long hw_rate, xo_rate;
struct qcom_osm_l3_icc_provider *qp;
const struct qcom_icc_desc *desc;
struct icc_onecell_data *data;
struct icc_provider *provider;
struct qcom_icc_node **qnodes;
struct icc_node *node;
size_t num_nodes;
struct clk *clk;
int ret;
clk = clk_get(&pdev->dev, "xo");
if (IS_ERR(clk))
return PTR_ERR(clk);
xo_rate = clk_get_rate(clk);
clk_put(clk);
clk = clk_get(&pdev->dev, "alternate");
if (IS_ERR(clk))
return PTR_ERR(clk);
hw_rate = clk_get_rate(clk) / CLK_HW_DIV;
clk_put(clk);
qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
if (!qp)
return -ENOMEM;
qp->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(qp->base))
return PTR_ERR(qp->base);
/* HW should be in enabled state to proceed */
if (!(readl_relaxed(qp->base + REG_ENABLE) & 0x1)) {
dev_err(&pdev->dev, "error hardware not enabled\n");
return -ENODEV;
}
for (i = 0; i < LUT_MAX_ENTRIES; i++) {
info = readl_relaxed(qp->base + REG_FREQ_LUT +
i * LUT_ROW_SIZE);
src = FIELD_GET(LUT_SRC, info);
lval = FIELD_GET(LUT_L_VAL, info);
if (src)
freq = xo_rate * lval;
else
freq = hw_rate;
/* Two consecutive identical frequencies signify the end of the table */
if (i > 0 && prev_freq == freq)
break;
dev_dbg(&pdev->dev, "index=%d freq=%d\n", i, freq);
qp->lut_tables[i] = freq;
prev_freq = freq;
}
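/*
 * Worked example with hypothetical values: a LUT row with LUT_SRC set
 * and LUT_L_VAL = 30 on a 19.2 MHz XO decodes to 19200000 * 30 = 576 MHz.
 */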
qp->max_state = i;
desc = device_get_match_data(&pdev->dev);
if (!desc)
return -EINVAL;
qnodes = desc->nodes;
num_nodes = desc->num_nodes;
data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
if (!data)
return -ENOMEM;
provider = &qp->provider;
provider->dev = &pdev->dev;
provider->set = qcom_icc_set;
provider->aggregate = icc_std_aggregate;
provider->xlate = of_icc_xlate_onecell;
INIT_LIST_HEAD(&provider->nodes);
provider->data = data;
ret = icc_provider_add(provider);
if (ret) {
dev_err(&pdev->dev, "error adding interconnect provider\n");
return ret;
}
for (i = 0; i < num_nodes; i++) {
size_t j;
node = icc_node_create(qnodes[i]->id);
if (IS_ERR(node)) {
ret = PTR_ERR(node);
goto err;
}
node->name = qnodes[i]->name;
node->data = qnodes[i];
icc_node_add(node, provider);
for (j = 0; j < qnodes[i]->num_links; j++)
icc_link_create(node, qnodes[i]->links[j]);
data->nodes[i] = node;
}
data->num_nodes = num_nodes;
platform_set_drvdata(pdev, qp);
return 0;
err:
icc_nodes_remove(provider);
icc_provider_del(provider);
return ret;
}
static const struct of_device_id osm_l3_of_match[] = {
{ .compatible = "qcom,sc7180-osm-l3", .data = &sc7180_icc_osm_l3 },
{ .compatible = "qcom,sdm845-osm-l3", .data = &sdm845_icc_osm_l3 },
{ }
};
MODULE_DEVICE_TABLE(of, osm_l3_of_match);
static struct platform_driver osm_l3_driver = {
.probe = qcom_osm_l3_probe,
.remove = qcom_osm_l3_remove,
.driver = {
.name = "osm-l3",
.of_match_table = osm_l3_of_match,
},
};
module_platform_driver(osm_l3_driver);
MODULE_DESCRIPTION("Qualcomm OSM L3 interconnect driver");
MODULE_LICENSE("GPL v2");

View File

@@ -0,0 +1,641 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*
*/
#include <linux/device.h>
#include <linux/interconnect.h>
#include <linux/interconnect-provider.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <dt-bindings/interconnect/qcom,sc7180.h>
#include "bcm-voter.h"
#include "icc-rpmh.h"
#include "sc7180.h"
DEFINE_QNODE(qhm_a1noc_cfg, SC7180_MASTER_A1NOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_A1NOC);
DEFINE_QNODE(qhm_qspi, SC7180_MASTER_QSPI, 1, 4, SC7180_SLAVE_A1NOC_SNOC);
DEFINE_QNODE(qhm_qup_0, SC7180_MASTER_QUP_0, 1, 4, SC7180_SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_sdc2, SC7180_MASTER_SDCC_2, 1, 8, SC7180_SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_emmc, SC7180_MASTER_EMMC, 1, 8, SC7180_SLAVE_A1NOC_SNOC);
DEFINE_QNODE(xm_ufs_mem, SC7180_MASTER_UFS_MEM, 1, 8, SC7180_SLAVE_A1NOC_SNOC);
DEFINE_QNODE(qhm_a2noc_cfg, SC7180_MASTER_A2NOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_A2NOC);
DEFINE_QNODE(qhm_qdss_bam, SC7180_MASTER_QDSS_BAM, 1, 4, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qhm_qup_1, SC7180_MASTER_QUP_1, 1, 4, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_crypto, SC7180_MASTER_CRYPTO, 1, 8, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_ipa, SC7180_MASTER_IPA, 1, 8, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(xm_qdss_etr, SC7180_MASTER_QDSS_ETR, 1, 8, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qhm_usb3, SC7180_MASTER_USB3, 1, 8, SC7180_SLAVE_A2NOC_SNOC);
DEFINE_QNODE(qxm_camnoc_hf0_uncomp, SC7180_MASTER_CAMNOC_HF0_UNCOMP, 1, 32, SC7180_SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qxm_camnoc_hf1_uncomp, SC7180_MASTER_CAMNOC_HF1_UNCOMP, 1, 32, SC7180_SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qxm_camnoc_sf_uncomp, SC7180_MASTER_CAMNOC_SF_UNCOMP, 1, 32, SC7180_SLAVE_CAMNOC_UNCOMP);
DEFINE_QNODE(qnm_npu, SC7180_MASTER_NPU, 2, 32, SC7180_SLAVE_CDSP_GEM_NOC);
DEFINE_QNODE(qxm_npu_dsp, SC7180_MASTER_NPU_PROC, 1, 8, SC7180_SLAVE_CDSP_GEM_NOC);
DEFINE_QNODE(qnm_snoc, SC7180_MASTER_SNOC_CNOC, 1, 8, SC7180_SLAVE_A1NOC_CFG, SC7180_SLAVE_A2NOC_CFG, SC7180_SLAVE_AHB2PHY_SOUTH, SC7180_SLAVE_AHB2PHY_CENTER, SC7180_SLAVE_AOP, SC7180_SLAVE_AOSS, SC7180_SLAVE_BOOT_ROM, SC7180_SLAVE_CAMERA_CFG, SC7180_SLAVE_CAMERA_NRT_THROTTLE_CFG, SC7180_SLAVE_CAMERA_RT_THROTTLE_CFG, SC7180_SLAVE_CLK_CTL, SC7180_SLAVE_RBCPR_CX_CFG, SC7180_SLAVE_RBCPR_MX_CFG, SC7180_SLAVE_CRYPTO_0_CFG, SC7180_SLAVE_DCC_CFG, SC7180_SLAVE_CNOC_DDRSS, SC7180_SLAVE_DISPLAY_CFG, SC7180_SLAVE_DISPLAY_RT_THROTTLE_CFG, SC7180_SLAVE_DISPLAY_THROTTLE_CFG, SC7180_SLAVE_EMMC_CFG, SC7180_SLAVE_GLM,
SC7180_SLAVE_GFX3D_CFG, SC7180_SLAVE_IMEM_CFG, SC7180_SLAVE_IPA_CFG, SC7180_SLAVE_CNOC_MNOC_CFG, SC7180_SLAVE_CNOC_MSS, SC7180_SLAVE_NPU_CFG, SC7180_SLAVE_NPU_DMA_BWMON_CFG, SC7180_SLAVE_NPU_PROC_BWMON_CFG, SC7180_SLAVE_PDM, SC7180_SLAVE_PIMEM_CFG, SC7180_SLAVE_PRNG, SC7180_SLAVE_QDSS_CFG, SC7180_SLAVE_QM_CFG, SC7180_SLAVE_QM_MPU_CFG, SC7180_SLAVE_QSPI_0, SC7180_SLAVE_QUP_0, SC7180_SLAVE_QUP_1, SC7180_SLAVE_SDCC_2, SC7180_SLAVE_SECURITY, SC7180_SLAVE_SNOC_CFG, SC7180_SLAVE_TCSR, SC7180_SLAVE_TLMM_WEST, SC7180_SLAVE_TLMM_NORTH, SC7180_SLAVE_TLMM_SOUTH, SC7180_SLAVE_UFS_MEM_CFG, SC7180_SLAVE_USB3, SC7180_SLAVE_VENUS_CFG, SC7180_SLAVE_VENUS_THROTTLE_CFG, SC7180_SLAVE_VSENSE_CTRL_CFG, SC7180_SLAVE_SERVICE_CNOC);
DEFINE_QNODE(xm_qdss_dap, SC7180_MASTER_QDSS_DAP, 1, 8, SC7180_SLAVE_A1NOC_CFG, SC7180_SLAVE_A2NOC_CFG, SC7180_SLAVE_AHB2PHY_SOUTH, SC7180_SLAVE_AHB2PHY_CENTER, SC7180_SLAVE_AOP, SC7180_SLAVE_AOSS, SC7180_SLAVE_BOOT_ROM, SC7180_SLAVE_CAMERA_CFG, SC7180_SLAVE_CAMERA_NRT_THROTTLE_CFG, SC7180_SLAVE_CAMERA_RT_THROTTLE_CFG, SC7180_SLAVE_CLK_CTL, SC7180_SLAVE_RBCPR_CX_CFG, SC7180_SLAVE_RBCPR_MX_CFG, SC7180_SLAVE_CRYPTO_0_CFG, SC7180_SLAVE_DCC_CFG, SC7180_SLAVE_CNOC_DDRSS, SC7180_SLAVE_DISPLAY_CFG, SC7180_SLAVE_DISPLAY_RT_THROTTLE_CFG, SC7180_SLAVE_DISPLAY_THROTTLE_CFG, SC7180_SLAVE_EMMC_CFG, SC7180_SLAVE_GLM, SC7180_SLAVE_GFX3D_CFG, SC7180_SLAVE_IMEM_CFG, SC7180_SLAVE_IPA_CFG, SC7180_SLAVE_CNOC_MNOC_CFG, SC7180_SLAVE_CNOC_MSS, SC7180_SLAVE_NPU_CFG, SC7180_SLAVE_NPU_DMA_BWMON_CFG,
SC7180_SLAVE_NPU_PROC_BWMON_CFG, SC7180_SLAVE_PDM, SC7180_SLAVE_PIMEM_CFG, SC7180_SLAVE_PRNG, SC7180_SLAVE_QDSS_CFG, SC7180_SLAVE_QM_CFG, SC7180_SLAVE_QM_MPU_CFG, SC7180_SLAVE_QSPI_0, SC7180_SLAVE_QUP_0, SC7180_SLAVE_QUP_1, SC7180_SLAVE_SDCC_2, SC7180_SLAVE_SECURITY, SC7180_SLAVE_SNOC_CFG, SC7180_SLAVE_TCSR, SC7180_SLAVE_TLMM_WEST, SC7180_SLAVE_TLMM_NORTH, SC7180_SLAVE_TLMM_SOUTH, SC7180_SLAVE_UFS_MEM_CFG, SC7180_SLAVE_USB3, SC7180_SLAVE_VENUS_CFG, SC7180_SLAVE_VENUS_THROTTLE_CFG, SC7180_SLAVE_VSENSE_CTRL_CFG, SC7180_SLAVE_SERVICE_CNOC);
DEFINE_QNODE(qhm_cnoc_dc_noc, SC7180_MASTER_CNOC_DC_NOC, 1, 4, SC7180_SLAVE_GEM_NOC_CFG, SC7180_SLAVE_LLCC_CFG);
DEFINE_QNODE(acm_apps0, SC7180_MASTER_APPSS_PROC, 1, 16, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
DEFINE_QNODE(acm_sys_tcu, SC7180_MASTER_SYS_TCU, 1, 8, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qhm_gemnoc_cfg, SC7180_MASTER_GEM_NOC_CFG, 1, 4, SC7180_SLAVE_MSS_PROC_MS_MPU_CFG, SC7180_SLAVE_SERVICE_GEM_NOC);
DEFINE_QNODE(qnm_cmpnoc, SC7180_MASTER_COMPUTE_NOC, 1, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qnm_mnoc_hf, SC7180_MASTER_MNOC_HF_MEM_NOC, 1, 32, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qnm_mnoc_sf, SC7180_MASTER_MNOC_SF_MEM_NOC, 1, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qnm_snoc_gc, SC7180_MASTER_SNOC_GC_MEM_NOC, 1, 8, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qnm_snoc_sf, SC7180_MASTER_SNOC_SF_MEM_NOC, 1, 16, SC7180_SLAVE_LLCC);
DEFINE_QNODE(qxm_gpu, SC7180_MASTER_GFX3D, 2, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC);
DEFINE_QNODE(ipa_core_master, SC7180_MASTER_IPA_CORE, 1, 8, SC7180_SLAVE_IPA_CORE);
DEFINE_QNODE(llcc_mc, SC7180_MASTER_LLCC, 2, 4, SC7180_SLAVE_EBI1);
DEFINE_QNODE(qhm_mnoc_cfg, SC7180_MASTER_CNOC_MNOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_MNOC);
DEFINE_QNODE(qxm_camnoc_hf0, SC7180_MASTER_CAMNOC_HF0, 2, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_camnoc_hf1, SC7180_MASTER_CAMNOC_HF1, 2, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_camnoc_sf, SC7180_MASTER_CAMNOC_SF, 1, 32, SC7180_SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_mdp0, SC7180_MASTER_MDP0, 1, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qxm_rot, SC7180_MASTER_ROTATOR, 1, 16, SC7180_SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_venus0, SC7180_MASTER_VIDEO_P0, 1, 32, SC7180_SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(qxm_venus_arm9, SC7180_MASTER_VIDEO_PROC, 1, 8, SC7180_SLAVE_MNOC_SF_MEM_NOC);
DEFINE_QNODE(amm_npu_sys, SC7180_MASTER_NPU_SYS, 2, 32, SC7180_SLAVE_NPU_COMPUTE_NOC);
DEFINE_QNODE(qhm_npu_cfg, SC7180_MASTER_NPU_NOC_CFG, 1, 4, SC7180_SLAVE_NPU_CAL_DP0, SC7180_SLAVE_NPU_CP, SC7180_SLAVE_NPU_INT_DMA_BWMON_CFG, SC7180_SLAVE_NPU_DPM, SC7180_SLAVE_ISENSE_CFG, SC7180_SLAVE_NPU_LLM_CFG, SC7180_SLAVE_NPU_TCM, SC7180_SLAVE_SERVICE_NPU_NOC);
DEFINE_QNODE(qup_core_master_1, SC7180_MASTER_QUP_CORE_0, 1, 4, SC7180_SLAVE_QUP_CORE_0);
DEFINE_QNODE(qup_core_master_2, SC7180_MASTER_QUP_CORE_1, 1, 4, SC7180_SLAVE_QUP_CORE_1);
DEFINE_QNODE(qhm_snoc_cfg, SC7180_MASTER_SNOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_SNOC);
DEFINE_QNODE(qnm_aggre1_noc, SC7180_MASTER_A1NOC_SNOC, 1, 16, SC7180_SLAVE_APPSS, SC7180_SLAVE_SNOC_CNOC, SC7180_SLAVE_SNOC_GEM_NOC_SF, SC7180_SLAVE_IMEM, SC7180_SLAVE_PIMEM, SC7180_SLAVE_QDSS_STM);
DEFINE_QNODE(qnm_aggre2_noc, SC7180_MASTER_A2NOC_SNOC, 1, 16, SC7180_SLAVE_APPSS, SC7180_SLAVE_SNOC_CNOC, SC7180_SLAVE_SNOC_GEM_NOC_SF, SC7180_SLAVE_IMEM, SC7180_SLAVE_PIMEM, SC7180_SLAVE_QDSS_STM, SC7180_SLAVE_TCU);
DEFINE_QNODE(qnm_gemnoc, SC7180_MASTER_GEM_NOC_SNOC, 1, 8, SC7180_SLAVE_APPSS, SC7180_SLAVE_SNOC_CNOC, SC7180_SLAVE_IMEM, SC7180_SLAVE_PIMEM, SC7180_SLAVE_QDSS_STM, SC7180_SLAVE_TCU);
DEFINE_QNODE(qxm_pimem, SC7180_MASTER_PIMEM, 1, 8, SC7180_SLAVE_SNOC_GEM_NOC_GC, SC7180_SLAVE_IMEM);
DEFINE_QNODE(qns_a1noc_snoc, SC7180_SLAVE_A1NOC_SNOC, 1, 16, SC7180_MASTER_A1NOC_SNOC);
DEFINE_QNODE(srvc_aggre1_noc, SC7180_SLAVE_SERVICE_A1NOC, 1, 4);
DEFINE_QNODE(qns_a2noc_snoc, SC7180_SLAVE_A2NOC_SNOC, 1, 16, SC7180_MASTER_A2NOC_SNOC);
DEFINE_QNODE(srvc_aggre2_noc, SC7180_SLAVE_SERVICE_A2NOC, 1, 4);
DEFINE_QNODE(qns_camnoc_uncomp, SC7180_SLAVE_CAMNOC_UNCOMP, 1, 32);
DEFINE_QNODE(qns_cdsp_gemnoc, SC7180_SLAVE_CDSP_GEM_NOC, 1, 32, SC7180_MASTER_COMPUTE_NOC);
DEFINE_QNODE(qhs_a1_noc_cfg, SC7180_SLAVE_A1NOC_CFG, 1, 4, SC7180_MASTER_A1NOC_CFG);
DEFINE_QNODE(qhs_a2_noc_cfg, SC7180_SLAVE_A2NOC_CFG, 1, 4, SC7180_MASTER_A2NOC_CFG);
DEFINE_QNODE(qhs_ahb2phy0, SC7180_SLAVE_AHB2PHY_SOUTH, 1, 4);
DEFINE_QNODE(qhs_ahb2phy2, SC7180_SLAVE_AHB2PHY_CENTER, 1, 4);
DEFINE_QNODE(qhs_aop, SC7180_SLAVE_AOP, 1, 4);
DEFINE_QNODE(qhs_aoss, SC7180_SLAVE_AOSS, 1, 4);
DEFINE_QNODE(qhs_boot_rom, SC7180_SLAVE_BOOT_ROM, 1, 4);
DEFINE_QNODE(qhs_camera_cfg, SC7180_SLAVE_CAMERA_CFG, 1, 4);
DEFINE_QNODE(qhs_camera_nrt_throttle_cfg, SC7180_SLAVE_CAMERA_NRT_THROTTLE_CFG, 1, 4);
DEFINE_QNODE(qhs_camera_rt_throttle_cfg, SC7180_SLAVE_CAMERA_RT_THROTTLE_CFG, 1, 4);
DEFINE_QNODE(qhs_clk_ctl, SC7180_SLAVE_CLK_CTL, 1, 4);
DEFINE_QNODE(qhs_cpr_cx, SC7180_SLAVE_RBCPR_CX_CFG, 1, 4);
DEFINE_QNODE(qhs_cpr_mx, SC7180_SLAVE_RBCPR_MX_CFG, 1, 4);
DEFINE_QNODE(qhs_crypto0_cfg, SC7180_SLAVE_CRYPTO_0_CFG, 1, 4);
DEFINE_QNODE(qhs_dcc_cfg, SC7180_SLAVE_DCC_CFG, 1, 4);
DEFINE_QNODE(qhs_ddrss_cfg, SC7180_SLAVE_CNOC_DDRSS, 1, 4, SC7180_MASTER_CNOC_DC_NOC);
DEFINE_QNODE(qhs_display_cfg, SC7180_SLAVE_DISPLAY_CFG, 1, 4);
DEFINE_QNODE(qhs_display_rt_throttle_cfg, SC7180_SLAVE_DISPLAY_RT_THROTTLE_CFG, 1, 4);
DEFINE_QNODE(qhs_display_throttle_cfg, SC7180_SLAVE_DISPLAY_THROTTLE_CFG, 1, 4);
DEFINE_QNODE(qhs_emmc_cfg, SC7180_SLAVE_EMMC_CFG, 1, 4);
DEFINE_QNODE(qhs_glm, SC7180_SLAVE_GLM, 1, 4);
DEFINE_QNODE(qhs_gpuss_cfg, SC7180_SLAVE_GFX3D_CFG, 1, 8);
DEFINE_QNODE(qhs_imem_cfg, SC7180_SLAVE_IMEM_CFG, 1, 4);
DEFINE_QNODE(qhs_ipa, SC7180_SLAVE_IPA_CFG, 1, 4);
DEFINE_QNODE(qhs_mnoc_cfg, SC7180_SLAVE_CNOC_MNOC_CFG, 1, 4, SC7180_MASTER_CNOC_MNOC_CFG);
DEFINE_QNODE(qhs_mss_cfg, SC7180_SLAVE_CNOC_MSS, 1, 4);
DEFINE_QNODE(qhs_npu_cfg, SC7180_SLAVE_NPU_CFG, 1, 4, SC7180_MASTER_NPU_NOC_CFG);
DEFINE_QNODE(qhs_npu_dma_throttle_cfg, SC7180_SLAVE_NPU_DMA_BWMON_CFG, 1, 4);
DEFINE_QNODE(qhs_npu_dsp_throttle_cfg, SC7180_SLAVE_NPU_PROC_BWMON_CFG, 1, 4);
DEFINE_QNODE(qhs_pdm, SC7180_SLAVE_PDM, 1, 4);
DEFINE_QNODE(qhs_pimem_cfg, SC7180_SLAVE_PIMEM_CFG, 1, 4);
DEFINE_QNODE(qhs_prng, SC7180_SLAVE_PRNG, 1, 4);
DEFINE_QNODE(qhs_qdss_cfg, SC7180_SLAVE_QDSS_CFG, 1, 4);
DEFINE_QNODE(qhs_qm_cfg, SC7180_SLAVE_QM_CFG, 1, 4);
DEFINE_QNODE(qhs_qm_mpu_cfg, SC7180_SLAVE_QM_MPU_CFG, 1, 4);
DEFINE_QNODE(qhs_qspi, SC7180_SLAVE_QSPI_0, 1, 4);
DEFINE_QNODE(qhs_qup0, SC7180_SLAVE_QUP_0, 1, 4);
DEFINE_QNODE(qhs_qup1, SC7180_SLAVE_QUP_1, 1, 4);
DEFINE_QNODE(qhs_sdc2, SC7180_SLAVE_SDCC_2, 1, 4);
DEFINE_QNODE(qhs_security, SC7180_SLAVE_SECURITY, 1, 4);
DEFINE_QNODE(qhs_snoc_cfg, SC7180_SLAVE_SNOC_CFG, 1, 4, SC7180_MASTER_SNOC_CFG);
DEFINE_QNODE(qhs_tcsr, SC7180_SLAVE_TCSR, 1, 4);
DEFINE_QNODE(qhs_tlmm_1, SC7180_SLAVE_TLMM_WEST, 1, 4);
DEFINE_QNODE(qhs_tlmm_2, SC7180_SLAVE_TLMM_NORTH, 1, 4);
DEFINE_QNODE(qhs_tlmm_3, SC7180_SLAVE_TLMM_SOUTH, 1, 4);
DEFINE_QNODE(qhs_ufs_mem_cfg, SC7180_SLAVE_UFS_MEM_CFG, 1, 4);
DEFINE_QNODE(qhs_usb3, SC7180_SLAVE_USB3, 1, 4);
DEFINE_QNODE(qhs_venus_cfg, SC7180_SLAVE_VENUS_CFG, 1, 4);
DEFINE_QNODE(qhs_venus_throttle_cfg, SC7180_SLAVE_VENUS_THROTTLE_CFG, 1, 4);
DEFINE_QNODE(qhs_vsense_ctrl_cfg, SC7180_SLAVE_VSENSE_CTRL_CFG, 1, 4);
DEFINE_QNODE(srvc_cnoc, SC7180_SLAVE_SERVICE_CNOC, 1, 4);
DEFINE_QNODE(qhs_gemnoc, SC7180_SLAVE_GEM_NOC_CFG, 1, 4, SC7180_MASTER_GEM_NOC_CFG);
DEFINE_QNODE(qhs_llcc, SC7180_SLAVE_LLCC_CFG, 1, 4);
DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SC7180_SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4);
DEFINE_QNODE(qns_gem_noc_snoc, SC7180_SLAVE_GEM_NOC_SNOC, 1, 8, SC7180_MASTER_GEM_NOC_SNOC);
DEFINE_QNODE(qns_llcc, SC7180_SLAVE_LLCC, 1, 16, SC7180_MASTER_LLCC);
DEFINE_QNODE(srvc_gemnoc, SC7180_SLAVE_SERVICE_GEM_NOC, 1, 4);
DEFINE_QNODE(ipa_core_slave, SC7180_SLAVE_IPA_CORE, 1, 8);
DEFINE_QNODE(ebi, SC7180_SLAVE_EBI1, 2, 4);
DEFINE_QNODE(qns_mem_noc_hf, SC7180_SLAVE_MNOC_HF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_HF_MEM_NOC);
DEFINE_QNODE(qns_mem_noc_sf, SC7180_SLAVE_MNOC_SF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_SF_MEM_NOC);
DEFINE_QNODE(srvc_mnoc, SC7180_SLAVE_SERVICE_MNOC, 1, 4);
DEFINE_QNODE(qhs_cal_dp0, SC7180_SLAVE_NPU_CAL_DP0, 1, 4);
DEFINE_QNODE(qhs_cp, SC7180_SLAVE_NPU_CP, 1, 4);
DEFINE_QNODE(qhs_dma_bwmon, SC7180_SLAVE_NPU_INT_DMA_BWMON_CFG, 1, 4);
DEFINE_QNODE(qhs_dpm, SC7180_SLAVE_NPU_DPM, 1, 4);
DEFINE_QNODE(qhs_isense, SC7180_SLAVE_ISENSE_CFG, 1, 4);
DEFINE_QNODE(qhs_llm, SC7180_SLAVE_NPU_LLM_CFG, 1, 4);
DEFINE_QNODE(qhs_tcm, SC7180_SLAVE_NPU_TCM, 1, 4);
DEFINE_QNODE(qns_npu_sys, SC7180_SLAVE_NPU_COMPUTE_NOC, 2, 32);
DEFINE_QNODE(srvc_noc, SC7180_SLAVE_SERVICE_NPU_NOC, 1, 4);
DEFINE_QNODE(qup_core_slave_1, SC7180_SLAVE_QUP_CORE_0, 1, 4);
DEFINE_QNODE(qup_core_slave_2, SC7180_SLAVE_QUP_CORE_1, 1, 4);
DEFINE_QNODE(qhs_apss, SC7180_SLAVE_APPSS, 1, 8);
DEFINE_QNODE(qns_cnoc, SC7180_SLAVE_SNOC_CNOC, 1, 8, SC7180_MASTER_SNOC_CNOC);
DEFINE_QNODE(qns_gemnoc_gc, SC7180_SLAVE_SNOC_GEM_NOC_GC, 1, 8, SC7180_MASTER_SNOC_GC_MEM_NOC);
DEFINE_QNODE(qns_gemnoc_sf, SC7180_SLAVE_SNOC_GEM_NOC_SF, 1, 16, SC7180_MASTER_SNOC_SF_MEM_NOC);
DEFINE_QNODE(qxs_imem, SC7180_SLAVE_IMEM, 1, 8);
DEFINE_QNODE(qxs_pimem, SC7180_SLAVE_PIMEM, 1, 8);
DEFINE_QNODE(srvc_snoc, SC7180_SLAVE_SERVICE_SNOC, 1, 4);
DEFINE_QNODE(xs_qdss_stm, SC7180_SLAVE_QDSS_STM, 1, 4);
DEFINE_QNODE(xs_sys_tcu_cfg, SC7180_SLAVE_TCU, 1, 8);
DEFINE_QBCM(bcm_acv, "ACV", false, &ebi);
DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi);
DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc);
DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf);
DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto);
DEFINE_QBCM(bcm_ip0, "IP0", false, &ipa_core_slave);
DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_ahb2phy0, &qhs_aop, &qhs_aoss, &qhs_boot_rom, &qhs_camera_cfg, &qhs_camera_nrt_throttle_cfg, &qhs_camera_rt_throttle_cfg, &qhs_clk_ctl, &qhs_cpr_cx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_display_rt_throttle_cfg, &qhs_display_throttle_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_mss_cfg, &qhs_npu_cfg, &qhs_npu_dma_throttle_cfg, &qhs_npu_dsp_throttle_cfg, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qm_cfg, &qhs_qm_mpu_cfg, &qhs_qup0, &qhs_qup1, &qhs_security, &qhs_snoc_cfg, &qhs_tcsr, &qhs_tlmm_1, &qhs_tlmm_2, &qhs_tlmm_3, &qhs_ufs_mem_cfg, &qhs_usb3, &qhs_venus_cfg, &qhs_venus_throttle_cfg, &qhs_vsense_ctrl_cfg, &srvc_cnoc);
DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qhm_mnoc_cfg, &qxm_mdp0, &qxm_rot, &qxm_venus0, &qxm_venus_arm9);
DEFINE_QBCM(bcm_sh2, "SH2", false, &acm_sys_tcu);
DEFINE_QBCM(bcm_mm2, "MM2", false, &qns_mem_noc_sf);
DEFINE_QBCM(bcm_qup0, "QUP0", false, &qup_core_master_1, &qup_core_master_2);
DEFINE_QBCM(bcm_sh3, "SH3", false, &qnm_cmpnoc);
DEFINE_QBCM(bcm_sh4, "SH4", false, &acm_apps0);
DEFINE_QBCM(bcm_sn0, "SN0", true, &qns_gemnoc_sf);
DEFINE_QBCM(bcm_co0, "CO0", false, &qns_cdsp_gemnoc);
DEFINE_QBCM(bcm_sn1, "SN1", false, &qxs_imem);
DEFINE_QBCM(bcm_cn1, "CN1", false, &qhm_qspi, &xm_sdc2, &xm_emmc, &qhs_ahb2phy2, &qhs_emmc_cfg, &qhs_pdm, &qhs_qspi, &qhs_sdc2);
DEFINE_QBCM(bcm_sn2, "SN2", false, &qxm_pimem, &qns_gemnoc_gc);
DEFINE_QBCM(bcm_co2, "CO2", false, &qnm_npu);
DEFINE_QBCM(bcm_sn3, "SN3", false, &qxs_pimem);
DEFINE_QBCM(bcm_co3, "CO3", false, &qxm_npu_dsp);
DEFINE_QBCM(bcm_sn4, "SN4", false, &xs_qdss_stm);
DEFINE_QBCM(bcm_sn7, "SN7", false, &qnm_aggre1_noc);
DEFINE_QBCM(bcm_sn9, "SN9", false, &qnm_aggre2_noc);
DEFINE_QBCM(bcm_sn12, "SN12", false, &qnm_gemnoc);
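/*
 * DEFINE_QBCM() comes from icc-rpmh.h and works like DEFINE_QNODE();
 * the first invocation above expands to roughly the following (shown
 * for illustration only):
 *
 *	static struct qcom_icc_bcm bcm_acv = {
 *		.name = "ACV",
 *		.keepalive = false,
 *		.num_nodes = 1,
 *		.nodes = { &ebi },
 *	};
 */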
static struct qcom_icc_bcm *aggre1_noc_bcms[] = {
&bcm_cn1,
};
static struct qcom_icc_node *aggre1_noc_nodes[] = {
[MASTER_A1NOC_CFG] = &qhm_a1noc_cfg,
[MASTER_QSPI] = &qhm_qspi,
[MASTER_QUP_0] = &qhm_qup_0,
[MASTER_SDCC_2] = &xm_sdc2,
[MASTER_EMMC] = &xm_emmc,
[MASTER_UFS_MEM] = &xm_ufs_mem,
[SLAVE_A1NOC_SNOC] = &qns_a1noc_snoc,
[SLAVE_SERVICE_A1NOC] = &srvc_aggre1_noc,
};
static struct qcom_icc_desc sc7180_aggre1_noc = {
.nodes = aggre1_noc_nodes,
.num_nodes = ARRAY_SIZE(aggre1_noc_nodes),
.bcms = aggre1_noc_bcms,
.num_bcms = ARRAY_SIZE(aggre1_noc_bcms),
};
static struct qcom_icc_bcm *aggre2_noc_bcms[] = {
&bcm_ce0,
};
static struct qcom_icc_node *aggre2_noc_nodes[] = {
[MASTER_A2NOC_CFG] = &qhm_a2noc_cfg,
[MASTER_QDSS_BAM] = &qhm_qdss_bam,
[MASTER_QUP_1] = &qhm_qup_1,
[MASTER_USB3] = &qhm_usb3,
[MASTER_CRYPTO] = &qxm_crypto,
[MASTER_IPA] = &qxm_ipa,
[MASTER_QDSS_ETR] = &xm_qdss_etr,
[SLAVE_A2NOC_SNOC] = &qns_a2noc_snoc,
[SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
};
static struct qcom_icc_desc sc7180_aggre2_noc = {
.nodes = aggre2_noc_nodes,
.num_nodes = ARRAY_SIZE(aggre2_noc_nodes),
.bcms = aggre2_noc_bcms,
.num_bcms = ARRAY_SIZE(aggre2_noc_bcms),
};
static struct qcom_icc_bcm *camnoc_virt_bcms[] = {
&bcm_mm1,
};
static struct qcom_icc_node *camnoc_virt_nodes[] = {
[MASTER_CAMNOC_HF0_UNCOMP] = &qxm_camnoc_hf0_uncomp,
[MASTER_CAMNOC_HF1_UNCOMP] = &qxm_camnoc_hf1_uncomp,
[MASTER_CAMNOC_SF_UNCOMP] = &qxm_camnoc_sf_uncomp,
[SLAVE_CAMNOC_UNCOMP] = &qns_camnoc_uncomp,
};
static struct qcom_icc_desc sc7180_camnoc_virt = {
.nodes = camnoc_virt_nodes,
.num_nodes = ARRAY_SIZE(camnoc_virt_nodes),
.bcms = camnoc_virt_bcms,
.num_bcms = ARRAY_SIZE(camnoc_virt_bcms),
};
static struct qcom_icc_bcm *compute_noc_bcms[] = {
&bcm_co0,
&bcm_co2,
&bcm_co3,
};
static struct qcom_icc_node *compute_noc_nodes[] = {
[MASTER_NPU] = &qnm_npu,
[MASTER_NPU_PROC] = &qxm_npu_dsp,
[SLAVE_CDSP_GEM_NOC] = &qns_cdsp_gemnoc,
};
static struct qcom_icc_desc sc7180_compute_noc = {
.nodes = compute_noc_nodes,
.num_nodes = ARRAY_SIZE(compute_noc_nodes),
.bcms = compute_noc_bcms,
.num_bcms = ARRAY_SIZE(compute_noc_bcms),
};
static struct qcom_icc_bcm *config_noc_bcms[] = {
&bcm_cn0,
&bcm_cn1,
};
static struct qcom_icc_node *config_noc_nodes[] = {
[MASTER_SNOC_CNOC] = &qnm_snoc,
[MASTER_QDSS_DAP] = &xm_qdss_dap,
[SLAVE_A1NOC_CFG] = &qhs_a1_noc_cfg,
[SLAVE_A2NOC_CFG] = &qhs_a2_noc_cfg,
[SLAVE_AHB2PHY_SOUTH] = &qhs_ahb2phy0,
[SLAVE_AHB2PHY_CENTER] = &qhs_ahb2phy2,
[SLAVE_AOP] = &qhs_aop,
[SLAVE_AOSS] = &qhs_aoss,
[SLAVE_BOOT_ROM] = &qhs_boot_rom,
[SLAVE_CAMERA_CFG] = &qhs_camera_cfg,
[SLAVE_CAMERA_NRT_THROTTLE_CFG] = &qhs_camera_nrt_throttle_cfg,
[SLAVE_CAMERA_RT_THROTTLE_CFG] = &qhs_camera_rt_throttle_cfg,
[SLAVE_CLK_CTL] = &qhs_clk_ctl,
[SLAVE_RBCPR_CX_CFG] = &qhs_cpr_cx,
[SLAVE_RBCPR_MX_CFG] = &qhs_cpr_mx,
[SLAVE_CRYPTO_0_CFG] = &qhs_crypto0_cfg,
[SLAVE_DCC_CFG] = &qhs_dcc_cfg,
[SLAVE_CNOC_DDRSS] = &qhs_ddrss_cfg,
[SLAVE_DISPLAY_CFG] = &qhs_display_cfg,
[SLAVE_DISPLAY_RT_THROTTLE_CFG] = &qhs_display_rt_throttle_cfg,
[SLAVE_DISPLAY_THROTTLE_CFG] = &qhs_display_throttle_cfg,
[SLAVE_EMMC_CFG] = &qhs_emmc_cfg,
[SLAVE_GLM] = &qhs_glm,
[SLAVE_GFX3D_CFG] = &qhs_gpuss_cfg,
[SLAVE_IMEM_CFG] = &qhs_imem_cfg,
[SLAVE_IPA_CFG] = &qhs_ipa,
[SLAVE_CNOC_MNOC_CFG] = &qhs_mnoc_cfg,
[SLAVE_CNOC_MSS] = &qhs_mss_cfg,
[SLAVE_NPU_CFG] = &qhs_npu_cfg,
[SLAVE_NPU_DMA_BWMON_CFG] = &qhs_npu_dma_throttle_cfg,
[SLAVE_NPU_PROC_BWMON_CFG] = &qhs_npu_dsp_throttle_cfg,
[SLAVE_PDM] = &qhs_pdm,
[SLAVE_PIMEM_CFG] = &qhs_pimem_cfg,
[SLAVE_PRNG] = &qhs_prng,
[SLAVE_QDSS_CFG] = &qhs_qdss_cfg,
[SLAVE_QM_CFG] = &qhs_qm_cfg,
[SLAVE_QM_MPU_CFG] = &qhs_qm_mpu_cfg,
[SLAVE_QSPI_0] = &qhs_qspi,
[SLAVE_QUP_0] = &qhs_qup0,
[SLAVE_QUP_1] = &qhs_qup1,
[SLAVE_SDCC_2] = &qhs_sdc2,
[SLAVE_SECURITY] = &qhs_security,
[SLAVE_SNOC_CFG] = &qhs_snoc_cfg,
[SLAVE_TCSR] = &qhs_tcsr,
[SLAVE_TLMM_WEST] = &qhs_tlmm_1,
[SLAVE_TLMM_NORTH] = &qhs_tlmm_2,
[SLAVE_TLMM_SOUTH] = &qhs_tlmm_3,
[SLAVE_UFS_MEM_CFG] = &qhs_ufs_mem_cfg,
[SLAVE_USB3] = &qhs_usb3,
[SLAVE_VENUS_CFG] = &qhs_venus_cfg,
[SLAVE_VENUS_THROTTLE_CFG] = &qhs_venus_throttle_cfg,
[SLAVE_VSENSE_CTRL_CFG] = &qhs_vsense_ctrl_cfg,
[SLAVE_SERVICE_CNOC] = &srvc_cnoc,
};
static struct qcom_icc_desc sc7180_config_noc = {
.nodes = config_noc_nodes,
.num_nodes = ARRAY_SIZE(config_noc_nodes),
.bcms = config_noc_bcms,
.num_bcms = ARRAY_SIZE(config_noc_bcms),
};
static struct qcom_icc_node *dc_noc_nodes[] = {
[MASTER_CNOC_DC_NOC] = &qhm_cnoc_dc_noc,
[SLAVE_GEM_NOC_CFG] = &qhs_gemnoc,
[SLAVE_LLCC_CFG] = &qhs_llcc,
};
static struct qcom_icc_desc sc7180_dc_noc = {
.nodes = dc_noc_nodes,
.num_nodes = ARRAY_SIZE(dc_noc_nodes),
};
static struct qcom_icc_bcm *gem_noc_bcms[] = {
&bcm_sh0,
&bcm_sh2,
&bcm_sh3,
&bcm_sh4,
};
static struct qcom_icc_node *gem_noc_nodes[] = {
[MASTER_APPSS_PROC] = &acm_apps0,
[MASTER_SYS_TCU] = &acm_sys_tcu,
[MASTER_GEM_NOC_CFG] = &qhm_gemnoc_cfg,
[MASTER_COMPUTE_NOC] = &qnm_cmpnoc,
[MASTER_MNOC_HF_MEM_NOC] = &qnm_mnoc_hf,
[MASTER_MNOC_SF_MEM_NOC] = &qnm_mnoc_sf,
[MASTER_SNOC_GC_MEM_NOC] = &qnm_snoc_gc,
[MASTER_SNOC_SF_MEM_NOC] = &qnm_snoc_sf,
[MASTER_GFX3D] = &qxm_gpu,
[SLAVE_MSS_PROC_MS_MPU_CFG] = &qhs_mdsp_ms_mpu_cfg,
[SLAVE_GEM_NOC_SNOC] = &qns_gem_noc_snoc,
[SLAVE_LLCC] = &qns_llcc,
[SLAVE_SERVICE_GEM_NOC] = &srvc_gemnoc,
};
static struct qcom_icc_desc sc7180_gem_noc = {
.nodes = gem_noc_nodes,
.num_nodes = ARRAY_SIZE(gem_noc_nodes),
.bcms = gem_noc_bcms,
.num_bcms = ARRAY_SIZE(gem_noc_bcms),
};
static struct qcom_icc_bcm *ipa_virt_bcms[] = {
&bcm_ip0,
};
static struct qcom_icc_node *ipa_virt_nodes[] = {
[MASTER_IPA_CORE] = &ipa_core_master,
[SLAVE_IPA_CORE] = &ipa_core_slave,
};
static struct qcom_icc_desc sc7180_ipa_virt = {
.nodes = ipa_virt_nodes,
.num_nodes = ARRAY_SIZE(ipa_virt_nodes),
.bcms = ipa_virt_bcms,
.num_bcms = ARRAY_SIZE(ipa_virt_bcms),
};
static struct qcom_icc_bcm *mc_virt_bcms[] = {
&bcm_acv,
&bcm_mc0,
};
static struct qcom_icc_node *mc_virt_nodes[] = {
[MASTER_LLCC] = &llcc_mc,
[SLAVE_EBI1] = &ebi,
};
static struct qcom_icc_desc sc7180_mc_virt = {
.nodes = mc_virt_nodes,
.num_nodes = ARRAY_SIZE(mc_virt_nodes),
.bcms = mc_virt_bcms,
.num_bcms = ARRAY_SIZE(mc_virt_bcms),
};
static struct qcom_icc_bcm *mmss_noc_bcms[] = {
&bcm_mm0,
&bcm_mm1,
&bcm_mm2,
};
static struct qcom_icc_node *mmss_noc_nodes[] = {
[MASTER_CNOC_MNOC_CFG] = &qhm_mnoc_cfg,
[MASTER_CAMNOC_HF0] = &qxm_camnoc_hf0,
[MASTER_CAMNOC_HF1] = &qxm_camnoc_hf1,
[MASTER_CAMNOC_SF] = &qxm_camnoc_sf,
[MASTER_MDP0] = &qxm_mdp0,
[MASTER_ROTATOR] = &qxm_rot,
[MASTER_VIDEO_P0] = &qxm_venus0,
[MASTER_VIDEO_PROC] = &qxm_venus_arm9,
[SLAVE_MNOC_HF_MEM_NOC] = &qns_mem_noc_hf,
[SLAVE_MNOC_SF_MEM_NOC] = &qns_mem_noc_sf,
[SLAVE_SERVICE_MNOC] = &srvc_mnoc,
};
static struct qcom_icc_desc sc7180_mmss_noc = {
.nodes = mmss_noc_nodes,
.num_nodes = ARRAY_SIZE(mmss_noc_nodes),
.bcms = mmss_noc_bcms,
.num_bcms = ARRAY_SIZE(mmss_noc_bcms),
};
static struct qcom_icc_node *npu_noc_nodes[] = {
[MASTER_NPU_SYS] = &amm_npu_sys,
[MASTER_NPU_NOC_CFG] = &qhm_npu_cfg,
[SLAVE_NPU_CAL_DP0] = &qhs_cal_dp0,
[SLAVE_NPU_CP] = &qhs_cp,
[SLAVE_NPU_INT_DMA_BWMON_CFG] = &qhs_dma_bwmon,
[SLAVE_NPU_DPM] = &qhs_dpm,
[SLAVE_ISENSE_CFG] = &qhs_isense,
[SLAVE_NPU_LLM_CFG] = &qhs_llm,
[SLAVE_NPU_TCM] = &qhs_tcm,
[SLAVE_NPU_COMPUTE_NOC] = &qns_npu_sys,
[SLAVE_SERVICE_NPU_NOC] = &srvc_noc,
};
static struct qcom_icc_desc sc7180_npu_noc = {
.nodes = npu_noc_nodes,
.num_nodes = ARRAY_SIZE(npu_noc_nodes),
};
static struct qcom_icc_bcm *qup_virt_bcms[] = {
&bcm_qup0,
};
static struct qcom_icc_node *qup_virt_nodes[] = {
[MASTER_QUP_CORE_0] = &qup_core_master_1,
[MASTER_QUP_CORE_1] = &qup_core_master_2,
[SLAVE_QUP_CORE_0] = &qup_core_slave_1,
[SLAVE_QUP_CORE_1] = &qup_core_slave_2,
};
static struct qcom_icc_desc sc7180_qup_virt = {
.nodes = qup_virt_nodes,
.num_nodes = ARRAY_SIZE(qup_virt_nodes),
.bcms = qup_virt_bcms,
.num_bcms = ARRAY_SIZE(qup_virt_bcms),
};
static struct qcom_icc_bcm *system_noc_bcms[] = {
&bcm_sn0,
&bcm_sn1,
&bcm_sn2,
&bcm_sn3,
&bcm_sn4,
&bcm_sn7,
&bcm_sn9,
&bcm_sn12,
};
static struct qcom_icc_node *system_noc_nodes[] = {
[MASTER_SNOC_CFG] = &qhm_snoc_cfg,
[MASTER_A1NOC_SNOC] = &qnm_aggre1_noc,
[MASTER_A2NOC_SNOC] = &qnm_aggre2_noc,
[MASTER_GEM_NOC_SNOC] = &qnm_gemnoc,
[MASTER_PIMEM] = &qxm_pimem,
[SLAVE_APPSS] = &qhs_apss,
[SLAVE_SNOC_CNOC] = &qns_cnoc,
[SLAVE_SNOC_GEM_NOC_GC] = &qns_gemnoc_gc,
[SLAVE_SNOC_GEM_NOC_SF] = &qns_gemnoc_sf,
[SLAVE_IMEM] = &qxs_imem,
[SLAVE_PIMEM] = &qxs_pimem,
[SLAVE_SERVICE_SNOC] = &srvc_snoc,
[SLAVE_QDSS_STM] = &xs_qdss_stm,
[SLAVE_TCU] = &xs_sys_tcu_cfg,
};
static struct qcom_icc_desc sc7180_system_noc = {
.nodes = system_noc_nodes,
.num_nodes = ARRAY_SIZE(system_noc_nodes),
.bcms = system_noc_bcms,
.num_bcms = ARRAY_SIZE(system_noc_bcms),
};
static int qnoc_probe(struct platform_device *pdev)
{
const struct qcom_icc_desc *desc;
struct icc_onecell_data *data;
struct icc_provider *provider;
struct qcom_icc_node **qnodes;
struct qcom_icc_provider *qp;
struct icc_node *node;
size_t num_nodes, i;
int ret;
desc = device_get_match_data(&pdev->dev);
if (!desc)
return -EINVAL;
qnodes = desc->nodes;
num_nodes = desc->num_nodes;
qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
if (!qp)
return -ENOMEM;
data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
if (!data)
return -ENOMEM;
provider = &qp->provider;
provider->dev = &pdev->dev;
provider->set = qcom_icc_set;
provider->pre_aggregate = qcom_icc_pre_aggregate;
provider->aggregate = qcom_icc_aggregate;
provider->xlate = of_icc_xlate_onecell;
INIT_LIST_HEAD(&provider->nodes);
provider->data = data;
qp->dev = &pdev->dev;
qp->bcms = desc->bcms;
qp->num_bcms = desc->num_bcms;
qp->voter = of_bcm_voter_get(qp->dev, NULL);
if (IS_ERR(qp->voter))
return PTR_ERR(qp->voter);
ret = icc_provider_add(provider);
if (ret) {
dev_err(&pdev->dev, "error adding interconnect provider\n");
return ret;
}
for (i = 0; i < num_nodes; i++) {
size_t j;
if (!qnodes[i])
continue;
node = icc_node_create(qnodes[i]->id);
if (IS_ERR(node)) {
ret = PTR_ERR(node);
goto err;
}
node->name = qnodes[i]->name;
node->data = qnodes[i];
icc_node_add(node, provider);
for (j = 0; j < qnodes[i]->num_links; j++)
icc_link_create(node, qnodes[i]->links[j]);
data->nodes[i] = node;
}
data->num_nodes = num_nodes;
for (i = 0; i < qp->num_bcms; i++)
qcom_icc_bcm_init(qp->bcms[i], &pdev->dev);
platform_set_drvdata(pdev, qp);
return 0;
err:
icc_nodes_remove(provider);
icc_provider_del(provider);
return ret;
}
static int qnoc_remove(struct platform_device *pdev)
{
struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
icc_nodes_remove(&qp->provider);
return icc_provider_del(&qp->provider);
}
static const struct of_device_id qnoc_of_match[] = {
{ .compatible = "qcom,sc7180-aggre1-noc",
.data = &sc7180_aggre1_noc},
{ .compatible = "qcom,sc7180-aggre2-noc",
.data = &sc7180_aggre2_noc},
{ .compatible = "qcom,sc7180-camnoc-virt",
.data = &sc7180_camnoc_virt},
{ .compatible = "qcom,sc7180-compute-noc",
.data = &sc7180_compute_noc},
{ .compatible = "qcom,sc7180-config-noc",
.data = &sc7180_config_noc},
{ .compatible = "qcom,sc7180-dc-noc",
.data = &sc7180_dc_noc},
{ .compatible = "qcom,sc7180-gem-noc",
.data = &sc7180_gem_noc},
{ .compatible = "qcom,sc7180-ipa-virt",
.data = &sc7180_ipa_virt},
{ .compatible = "qcom,sc7180-mc-virt",
.data = &sc7180_mc_virt},
{ .compatible = "qcom,sc7180-mmss-noc",
.data = &sc7180_mmss_noc},
{ .compatible = "qcom,sc7180-npu-noc",
.data = &sc7180_npu_noc},
{ .compatible = "qcom,sc7180-qup-virt",
.data = &sc7180_qup_virt},
{ .compatible = "qcom,sc7180-system-noc",
.data = &sc7180_system_noc},
{ }
};
MODULE_DEVICE_TABLE(of, qnoc_of_match);
static struct platform_driver qnoc_driver = {
.probe = qnoc_probe,
.remove = qnoc_remove,
.driver = {
.name = "qnoc-sc7180",
.of_match_table = qnoc_of_match,
},
};
module_platform_driver(qnoc_driver);
MODULE_DESCRIPTION("Qualcomm SC7180 NoC driver");
MODULE_LICENSE("GPL v2");
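/*
 * Hedged consumer-side sketch (not part of this diff): a client driver
 * sitting on one of these NoCs votes bandwidth through the generic
 * interconnect API. The "memory" path name and the bandwidth figures
 * below are hypothetical.
 */
static int example_vote_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	path = of_icc_get(dev, "memory");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* request 100 MB/s average, 200 MB/s peak */
	ret = icc_set_bw(path, MBps_to_icc(100), MBps_to_icc(200));
	icc_put(path);

	return ret;
}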

View File

@@ -0,0 +1,151 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Qualcomm SC7180 interconnect IDs
*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#ifndef __DRIVERS_INTERCONNECT_QCOM_SC7180_H
#define __DRIVERS_INTERCONNECT_QCOM_SC7180_H
#define SC7180_MASTER_APPSS_PROC 0
#define SC7180_MASTER_SYS_TCU 1
#define SC7180_MASTER_NPU_SYS 2
#define SC7180_MASTER_IPA_CORE 3
#define SC7180_MASTER_LLCC 4
#define SC7180_MASTER_A1NOC_CFG 5
#define SC7180_MASTER_A2NOC_CFG 6
#define SC7180_MASTER_CNOC_DC_NOC 7
#define SC7180_MASTER_GEM_NOC_CFG 8
#define SC7180_MASTER_CNOC_MNOC_CFG 9
#define SC7180_MASTER_NPU_NOC_CFG 10
#define SC7180_MASTER_QDSS_BAM 11
#define SC7180_MASTER_QSPI 12
#define SC7180_MASTER_QUP_0 13
#define SC7180_MASTER_QUP_1 14
#define SC7180_MASTER_SNOC_CFG 15
#define SC7180_MASTER_A1NOC_SNOC 16
#define SC7180_MASTER_A2NOC_SNOC 17
#define SC7180_MASTER_COMPUTE_NOC 18
#define SC7180_MASTER_GEM_NOC_SNOC 19
#define SC7180_MASTER_MNOC_HF_MEM_NOC 20
#define SC7180_MASTER_MNOC_SF_MEM_NOC 21
#define SC7180_MASTER_NPU 22
#define SC7180_MASTER_SNOC_CNOC 23
#define SC7180_MASTER_SNOC_GC_MEM_NOC 24
#define SC7180_MASTER_SNOC_SF_MEM_NOC 25
#define SC7180_MASTER_QUP_CORE_0 26
#define SC7180_MASTER_QUP_CORE_1 27
#define SC7180_MASTER_CAMNOC_HF0 28
#define SC7180_MASTER_CAMNOC_HF1 29
#define SC7180_MASTER_CAMNOC_HF0_UNCOMP 30
#define SC7180_MASTER_CAMNOC_HF1_UNCOMP 31
#define SC7180_MASTER_CAMNOC_SF 32
#define SC7180_MASTER_CAMNOC_SF_UNCOMP 33
#define SC7180_MASTER_CRYPTO 34
#define SC7180_MASTER_GFX3D 35
#define SC7180_MASTER_IPA 36
#define SC7180_MASTER_MDP0 37
#define SC7180_MASTER_NPU_PROC 38
#define SC7180_MASTER_PIMEM 39
#define SC7180_MASTER_ROTATOR 40
#define SC7180_MASTER_VIDEO_P0 41
#define SC7180_MASTER_VIDEO_PROC 42
#define SC7180_MASTER_QDSS_DAP 43
#define SC7180_MASTER_QDSS_ETR 44
#define SC7180_MASTER_SDCC_2 45
#define SC7180_MASTER_UFS_MEM 46
#define SC7180_MASTER_USB3 47
#define SC7180_MASTER_EMMC 48
#define SC7180_SLAVE_EBI1 49
#define SC7180_SLAVE_IPA_CORE 50
#define SC7180_SLAVE_A1NOC_CFG 51
#define SC7180_SLAVE_A2NOC_CFG 52
#define SC7180_SLAVE_AHB2PHY_SOUTH 53
#define SC7180_SLAVE_AHB2PHY_CENTER 54
#define SC7180_SLAVE_AOP 55
#define SC7180_SLAVE_AOSS 56
#define SC7180_SLAVE_APPSS 57
#define SC7180_SLAVE_BOOT_ROM 58
#define SC7180_SLAVE_NPU_CAL_DP0 59
#define SC7180_SLAVE_CAMERA_CFG 60
#define SC7180_SLAVE_CAMERA_NRT_THROTTLE_CFG 61
#define SC7180_SLAVE_CAMERA_RT_THROTTLE_CFG 62
#define SC7180_SLAVE_CLK_CTL 63
#define SC7180_SLAVE_NPU_CP 64
#define SC7180_SLAVE_RBCPR_CX_CFG 65
#define SC7180_SLAVE_RBCPR_MX_CFG 66
#define SC7180_SLAVE_CRYPTO_0_CFG 67
#define SC7180_SLAVE_DCC_CFG 68
#define SC7180_SLAVE_CNOC_DDRSS 69
#define SC7180_SLAVE_DISPLAY_CFG 70
#define SC7180_SLAVE_DISPLAY_RT_THROTTLE_CFG 71
#define SC7180_SLAVE_DISPLAY_THROTTLE_CFG 72
#define SC7180_SLAVE_NPU_INT_DMA_BWMON_CFG 73
#define SC7180_SLAVE_NPU_DPM 74
#define SC7180_SLAVE_EMMC_CFG 75
#define SC7180_SLAVE_GEM_NOC_CFG 76
#define SC7180_SLAVE_GLM 77
#define SC7180_SLAVE_GFX3D_CFG 78
#define SC7180_SLAVE_IMEM_CFG 79
#define SC7180_SLAVE_IPA_CFG 80
#define SC7180_SLAVE_ISENSE_CFG 81
#define SC7180_SLAVE_LLCC_CFG 82
#define SC7180_SLAVE_NPU_LLM_CFG 83
#define SC7180_SLAVE_MSS_PROC_MS_MPU_CFG 84
#define SC7180_SLAVE_CNOC_MNOC_CFG 85
#define SC7180_SLAVE_CNOC_MSS 86
#define SC7180_SLAVE_NPU_CFG 87
#define SC7180_SLAVE_NPU_DMA_BWMON_CFG 88
#define SC7180_SLAVE_NPU_PROC_BWMON_CFG 89
#define SC7180_SLAVE_PDM 90
#define SC7180_SLAVE_PIMEM_CFG 91
#define SC7180_SLAVE_PRNG 92
#define SC7180_SLAVE_QDSS_CFG 93
#define SC7180_SLAVE_QM_CFG 94
#define SC7180_SLAVE_QM_MPU_CFG 95
#define SC7180_SLAVE_QSPI_0 96
#define SC7180_SLAVE_QUP_0 97
#define SC7180_SLAVE_QUP_1 98
#define SC7180_SLAVE_SDCC_2 99
#define SC7180_SLAVE_SECURITY 100
#define SC7180_SLAVE_SNOC_CFG 101
#define SC7180_SLAVE_NPU_TCM 102
#define SC7180_SLAVE_TCSR 103
#define SC7180_SLAVE_TLMM_WEST 104
#define SC7180_SLAVE_TLMM_NORTH 105
#define SC7180_SLAVE_TLMM_SOUTH 106
#define SC7180_SLAVE_UFS_MEM_CFG 107
#define SC7180_SLAVE_USB3 108
#define SC7180_SLAVE_VENUS_CFG 109
#define SC7180_SLAVE_VENUS_THROTTLE_CFG 110
#define SC7180_SLAVE_VSENSE_CTRL_CFG 111
#define SC7180_SLAVE_A1NOC_SNOC 112
#define SC7180_SLAVE_A2NOC_SNOC 113
#define SC7180_SLAVE_CAMNOC_UNCOMP 114
#define SC7180_SLAVE_CDSP_GEM_NOC 115
#define SC7180_SLAVE_SNOC_CNOC 116
#define SC7180_SLAVE_GEM_NOC_SNOC 117
#define SC7180_SLAVE_SNOC_GEM_NOC_GC 118
#define SC7180_SLAVE_SNOC_GEM_NOC_SF 119
#define SC7180_SLAVE_LLCC 120
#define SC7180_SLAVE_MNOC_HF_MEM_NOC 121
#define SC7180_SLAVE_MNOC_SF_MEM_NOC 122
#define SC7180_SLAVE_NPU_COMPUTE_NOC 123
#define SC7180_SLAVE_QUP_CORE_0 124
#define SC7180_SLAVE_QUP_CORE_1 125
#define SC7180_SLAVE_IMEM 126
#define SC7180_SLAVE_PIMEM 127
#define SC7180_SLAVE_SERVICE_A1NOC 128
#define SC7180_SLAVE_SERVICE_A2NOC 129
#define SC7180_SLAVE_SERVICE_CNOC 130
#define SC7180_SLAVE_SERVICE_GEM_NOC 131
#define SC7180_SLAVE_SERVICE_MNOC 132
#define SC7180_SLAVE_SERVICE_NPU_NOC 133
#define SC7180_SLAVE_SERVICE_SNOC 134
#define SC7180_SLAVE_QDSS_STM 135
#define SC7180_SLAVE_TCU 136
#define SC7180_MASTER_OSM_L3_APPS 137
#define SC7180_SLAVE_OSM_L3 138
#endif

File diff suppressed because it is too large

View File

@@ -0,0 +1,142 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2020, The Linux Foundation. All rights reserved.
*/
#ifndef __DRIVERS_INTERCONNECT_QCOM_SDM845_H__
#define __DRIVERS_INTERCONNECT_QCOM_SDM845_H__
#define SDM845_MASTER_A1NOC_CFG 1
#define SDM845_MASTER_BLSP_1 2
#define SDM845_MASTER_TSIF 3
#define SDM845_MASTER_SDCC_2 4
#define SDM845_MASTER_SDCC_4 5
#define SDM845_MASTER_UFS_CARD 6
#define SDM845_MASTER_UFS_MEM 7
#define SDM845_MASTER_PCIE_0 8
#define SDM845_MASTER_A2NOC_CFG 9
#define SDM845_MASTER_QDSS_BAM 10
#define SDM845_MASTER_BLSP_2 11
#define SDM845_MASTER_CNOC_A2NOC 12
#define SDM845_MASTER_CRYPTO 13
#define SDM845_MASTER_IPA 14
#define SDM845_MASTER_PCIE_1 15
#define SDM845_MASTER_QDSS_ETR 16
#define SDM845_MASTER_USB3_0 17
#define SDM845_MASTER_USB3_1 18
#define SDM845_MASTER_CAMNOC_HF0_UNCOMP 19
#define SDM845_MASTER_CAMNOC_HF1_UNCOMP 20
#define SDM845_MASTER_CAMNOC_SF_UNCOMP 21
#define SDM845_MASTER_SPDM 22
#define SDM845_MASTER_TIC 23
#define SDM845_MASTER_SNOC_CNOC 24
#define SDM845_MASTER_QDSS_DAP 25
#define SDM845_MASTER_CNOC_DC_NOC 26
#define SDM845_MASTER_APPSS_PROC 27
#define SDM845_MASTER_GNOC_CFG 28
#define SDM845_MASTER_LLCC 29
#define SDM845_MASTER_TCU_0 30
#define SDM845_MASTER_MEM_NOC_CFG 31
#define SDM845_MASTER_GNOC_MEM_NOC 32
#define SDM845_MASTER_MNOC_HF_MEM_NOC 33
#define SDM845_MASTER_MNOC_SF_MEM_NOC 34
#define SDM845_MASTER_SNOC_GC_MEM_NOC 35
#define SDM845_MASTER_SNOC_SF_MEM_NOC 36
#define SDM845_MASTER_GFX3D 37
#define SDM845_MASTER_CNOC_MNOC_CFG 38
#define SDM845_MASTER_CAMNOC_HF0 39
#define SDM845_MASTER_CAMNOC_HF1 40
#define SDM845_MASTER_CAMNOC_SF 41
#define SDM845_MASTER_MDP0 42
#define SDM845_MASTER_MDP1 43
#define SDM845_MASTER_ROTATOR 44
#define SDM845_MASTER_VIDEO_P0 45
#define SDM845_MASTER_VIDEO_P1 46
#define SDM845_MASTER_VIDEO_PROC 47
#define SDM845_MASTER_SNOC_CFG 48
#define SDM845_MASTER_A1NOC_SNOC 49
#define SDM845_MASTER_A2NOC_SNOC 50
#define SDM845_MASTER_GNOC_SNOC 51
#define SDM845_MASTER_MEM_NOC_SNOC 52
#define SDM845_MASTER_ANOC_PCIE_SNOC 53
#define SDM845_MASTER_PIMEM 54
#define SDM845_MASTER_GIC 55
#define SDM845_SLAVE_A1NOC_SNOC 56
#define SDM845_SLAVE_SERVICE_A1NOC 57
#define SDM845_SLAVE_ANOC_PCIE_A1NOC_SNOC 58
#define SDM845_SLAVE_A2NOC_SNOC 59
#define SDM845_SLAVE_ANOC_PCIE_SNOC 60
#define SDM845_SLAVE_SERVICE_A2NOC 61
#define SDM845_SLAVE_CAMNOC_UNCOMP 62
#define SDM845_SLAVE_A1NOC_CFG 63
#define SDM845_SLAVE_A2NOC_CFG 64
#define SDM845_SLAVE_AOP 65
#define SDM845_SLAVE_AOSS 66
#define SDM845_SLAVE_CAMERA_CFG 67
#define SDM845_SLAVE_CLK_CTL 68
#define SDM845_SLAVE_CDSP_CFG 69
#define SDM845_SLAVE_RBCPR_CX_CFG 70
#define SDM845_SLAVE_CRYPTO_0_CFG 71
#define SDM845_SLAVE_DCC_CFG 72
#define SDM845_SLAVE_CNOC_DDRSS 73
#define SDM845_SLAVE_DISPLAY_CFG 74
#define SDM845_SLAVE_GLM 75
#define SDM845_SLAVE_GFX3D_CFG 76
#define SDM845_SLAVE_IMEM_CFG 77
#define SDM845_SLAVE_IPA_CFG 78
#define SDM845_SLAVE_CNOC_MNOC_CFG 79
#define SDM845_SLAVE_PCIE_0_CFG 80
#define SDM845_SLAVE_PCIE_1_CFG 81
#define SDM845_SLAVE_PDM 82
#define SDM845_SLAVE_SOUTH_PHY_CFG 83
#define SDM845_SLAVE_PIMEM_CFG 84
#define SDM845_SLAVE_PRNG 85
#define SDM845_SLAVE_QDSS_CFG 86
#define SDM845_SLAVE_BLSP_2 87
#define SDM845_SLAVE_BLSP_1 88
#define SDM845_SLAVE_SDCC_2 89
#define SDM845_SLAVE_SDCC_4 90
#define SDM845_SLAVE_SNOC_CFG 91
#define SDM845_SLAVE_SPDM_WRAPPER 92
#define SDM845_SLAVE_SPSS_CFG 93
#define SDM845_SLAVE_TCSR 94
#define SDM845_SLAVE_TLMM_NORTH 95
#define SDM845_SLAVE_TLMM_SOUTH 96
#define SDM845_SLAVE_TSIF 97
#define SDM845_SLAVE_UFS_CARD_CFG 98
#define SDM845_SLAVE_UFS_MEM_CFG 99
#define SDM845_SLAVE_USB3_0 100
#define SDM845_SLAVE_USB3_1 101
#define SDM845_SLAVE_VENUS_CFG 102
#define SDM845_SLAVE_VSENSE_CTRL_CFG 103
#define SDM845_SLAVE_CNOC_A2NOC 104
#define SDM845_SLAVE_SERVICE_CNOC 105
#define SDM845_SLAVE_LLCC_CFG 106
#define SDM845_SLAVE_MEM_NOC_CFG 107
#define SDM845_SLAVE_GNOC_SNOC 108
#define SDM845_SLAVE_GNOC_MEM_NOC 109
#define SDM845_SLAVE_SERVICE_GNOC 110
#define SDM845_SLAVE_EBI1 111
#define SDM845_SLAVE_MSS_PROC_MS_MPU_CFG 112
#define SDM845_SLAVE_MEM_NOC_GNOC 113
#define SDM845_SLAVE_LLCC 114
#define SDM845_SLAVE_MEM_NOC_SNOC 115
#define SDM845_SLAVE_SERVICE_MEM_NOC 116
#define SDM845_SLAVE_MNOC_SF_MEM_NOC 117
#define SDM845_SLAVE_MNOC_HF_MEM_NOC 118
#define SDM845_SLAVE_SERVICE_MNOC 119
#define SDM845_SLAVE_APPSS 120
#define SDM845_SLAVE_SNOC_CNOC 121
#define SDM845_SLAVE_SNOC_MEM_NOC_GC 122
#define SDM845_SLAVE_SNOC_MEM_NOC_SF 123
#define SDM845_SLAVE_IMEM 124
#define SDM845_SLAVE_PCIE_0 125
#define SDM845_SLAVE_PCIE_1 126
#define SDM845_SLAVE_PIMEM 127
#define SDM845_SLAVE_SERVICE_SNOC 128
#define SDM845_SLAVE_QDSS_STM 129
#define SDM845_SLAVE_TCU 130
#define SDM845_MASTER_OSM_L3_APPS 131
#define SDM845_SLAVE_OSM_L3 132
#endif /* __DRIVERS_INTERCONNECT_QCOM_SDM845_H__ */

View File

@@ -142,7 +142,7 @@ const struct file_operations anslcd_fops = {
};
static struct miscdevice anslcd_dev = {
ANSLCD_MINOR,
LCD_MINOR,
"anslcd",
&anslcd_fops
};

View File

@@ -2,8 +2,6 @@
#ifndef _PPC_ANS_LCD_H
#define _PPC_ANS_LCD_H
#define ANSLCD_MINOR 156
#define ANSLCD_CLEAR 0x01
#define ANSLCD_SENDCTRL 0x02
#define ANSLCD_SETSHORTDELAY 0x03

View File

@@ -75,9 +75,6 @@
/* Some compile options */
#undef DEBUG_SLEEP
/* Misc minor number allocated for /dev/pmu */
#define PMU_MINOR 154
/* How many iterations between battery polls */
#define BATTERY_POLLING_COUNT 2

View File

@@ -394,6 +394,7 @@ static const struct pcr_ops rts522a_pcr_ops = {
void rts522a_init_params(struct rtsx_pcr *pcr)
{
rts5227_init_params(pcr);
pcr->ops = &rts522a_pcr_ops;
pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 20, 11);
pcr->reg_pm_ctrl3 = RTS522A_PM_CTRL3;

View File

@@ -129,6 +129,8 @@ static int cs_parser(struct hl_fpriv *hpriv, struct hl_cs_job *job)
spin_unlock(&job->user_cb->lock);
hl_cb_put(job->user_cb);
job->user_cb = NULL;
} else if (!rc) {
job->job_cb_size = job->user_cb_size;
}
return rc;
@@ -507,7 +509,7 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
struct hl_cb *cb;
bool int_queues_only = true;
u32 size_to_copy;
int rc, i, parse_cnt;
int rc, i;
*cs_seq = ULLONG_MAX;
@@ -547,7 +549,7 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
hl_debugfs_add_cs(cs);
/* Validate ALL the CS chunks before submitting the CS */
for (i = 0, parse_cnt = 0 ; i < num_chunks ; i++, parse_cnt++) {
for (i = 0 ; i < num_chunks ; i++) {
struct hl_cs_chunk *chunk = &cs_chunk_array[i];
enum hl_queue_type queue_type;
bool is_kernel_allocated_cb;
@@ -585,10 +587,6 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
job->cs = cs;
job->user_cb = cb;
job->user_cb_size = chunk->cb_size;
if (is_kernel_allocated_cb)
job->job_cb_size = cb->size;
else
job->job_cb_size = chunk->cb_size;
job->hw_queue_id = chunk->queue_index;
cs->jobs_in_queue_cnt[job->hw_queue_id]++;
@@ -659,8 +657,8 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
struct hl_device *hdev = hpriv->hdev;
union hl_cs_args *args = data;
struct hl_ctx *ctx = hpriv->ctx;
void __user *chunks;
u32 num_chunks;
void __user *chunks_execute, *chunks_restore;
u32 num_chunks_execute, num_chunks_restore;
u64 cs_seq = ULLONG_MAX;
int rc, do_ctx_switch;
bool need_soft_reset = false;
@@ -673,13 +671,25 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
goto out;
}
chunks_execute = (void __user *) (uintptr_t) args->in.chunks_execute;
num_chunks_execute = args->in.num_chunks_execute;
if (!num_chunks_execute) {
dev_err(hdev->dev,
"Got execute CS with 0 chunks, context %d\n",
ctx->asid);
rc = -EINVAL;
goto out;
}
do_ctx_switch = atomic_cmpxchg(&ctx->thread_ctx_switch_token, 1, 0);
if (do_ctx_switch || (args->in.cs_flags & HL_CS_FLAGS_FORCE_RESTORE)) {
long ret;
chunks = (void __user *)(uintptr_t)args->in.chunks_restore;
num_chunks = args->in.num_chunks_restore;
chunks_restore =
(void __user *) (uintptr_t) args->in.chunks_restore;
num_chunks_restore = args->in.num_chunks_restore;
mutex_lock(&hpriv->restore_phase_mutex);
@@ -707,13 +717,13 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
hdev->asic_funcs->restore_phase_topology(hdev);
if (num_chunks == 0) {
if (!num_chunks_restore) {
dev_dbg(hdev->dev,
"Need to run restore phase but restore CS is empty\n");
rc = 0;
} else {
rc = _hl_cs_ioctl(hpriv, chunks, num_chunks,
&cs_seq);
rc = _hl_cs_ioctl(hpriv, chunks_restore,
num_chunks_restore, &cs_seq);
}
mutex_unlock(&hpriv->restore_phase_mutex);
@@ -726,7 +736,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
}
/* Need to wait for restore completion before execution phase */
if (num_chunks > 0) {
if (num_chunks_restore) {
ret = _hl_cs_wait_ioctl(hdev, ctx,
jiffies_to_usecs(hdev->timeout_jiffies),
cs_seq);
@@ -754,18 +764,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
}
}
chunks = (void __user *)(uintptr_t)args->in.chunks_execute;
num_chunks = args->in.num_chunks_execute;
if (num_chunks == 0) {
dev_err(hdev->dev,
"Got execute CS with 0 chunks, context %d\n",
ctx->asid);
rc = -EINVAL;
goto out;
}
rc = _hl_cs_ioctl(hpriv, chunks, num_chunks, &cs_seq);
rc = _hl_cs_ioctl(hpriv, chunks_execute, num_chunks_execute, &cs_seq);
out:
if (rc != -EAGAIN) {

View File

@@ -393,9 +393,10 @@ static int mmu_show(struct seq_file *s, void *data)
}
is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
prop->va_space_dram_start_address,
prop->va_space_dram_end_address);
prop->dmmu.start_addr,
prop->dmmu.end_addr);
/* shifts and masks are the same in PMMU and HPMMU, use one of them */
mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
mutex_lock(&ctx->mmu_lock);
@@ -547,12 +548,15 @@ static bool hl_is_device_va(struct hl_device *hdev, u64 addr)
goto out;
if (hdev->dram_supports_virtual_memory &&
addr >= prop->va_space_dram_start_address &&
addr < prop->va_space_dram_end_address)
(addr >= prop->dmmu.start_addr && addr < prop->dmmu.end_addr))
return true;
if (addr >= prop->va_space_host_start_address &&
addr < prop->va_space_host_end_address)
if (addr >= prop->pmmu.start_addr &&
addr < prop->pmmu.end_addr)
return true;
if (addr >= prop->pmmu_huge.start_addr &&
addr < prop->pmmu_huge.end_addr)
return true;
out:
return false;
@@ -575,9 +579,10 @@ static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
}
is_dram_addr = hl_mem_area_inside_range(virt_addr, prop->dmmu.page_size,
prop->va_space_dram_start_address,
prop->va_space_dram_end_address);
prop->dmmu.start_addr,
prop->dmmu.end_addr);
/* shifts and masks are the same in PMMU and HPMMU, use one of them */
mmu_prop = is_dram_addr ? &prop->dmmu : &prop->pmmu;
mutex_lock(&ctx->mmu_lock);
@@ -705,6 +710,65 @@ static ssize_t hl_data_write32(struct file *f, const char __user *buf,
return count;
}
static ssize_t hl_data_read64(struct file *f, char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
char tmp_buf[32];
u64 addr = entry->addr;
u64 val;
ssize_t rc;
if (*ppos)
return 0;
if (hl_is_device_va(hdev, addr)) {
rc = device_va_to_pa(hdev, addr, &addr);
if (rc)
return rc;
}
rc = hdev->asic_funcs->debugfs_read64(hdev, addr, &val);
if (rc) {
dev_err(hdev->dev, "Failed to read from 0x%010llx\n", addr);
return rc;
}
sprintf(tmp_buf, "0x%016llx\n", val);
return simple_read_from_buffer(buf, count, ppos, tmp_buf,
strlen(tmp_buf));
}
static ssize_t hl_data_write64(struct file *f, const char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
u64 addr = entry->addr;
u64 value;
ssize_t rc;
rc = kstrtoull_from_user(buf, count, 16, &value);
if (rc)
return rc;
if (hl_is_device_va(hdev, addr)) {
rc = device_va_to_pa(hdev, addr, &addr);
if (rc)
return rc;
}
rc = hdev->asic_funcs->debugfs_write64(hdev, addr, value);
if (rc) {
dev_err(hdev->dev, "Failed to write 0x%016llx to 0x%010llx\n",
value, addr);
return rc;
}
return count;
}
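/*
 * Hedged userspace sketch (not part of the driver): reading a 64-bit
 * word through the new data64 node. This assumes the companion "addr"
 * debugfs entry that the existing data32 path uses to select the target
 * address; the device index and address below are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[32];
	ssize_t n;
	int fd;

	fd = open("/sys/kernel/debug/habanalabs/hl0/addr", O_WRONLY);
	if (fd < 0)
		return 1;
	write(fd, "0x20000000", 10);
	close(fd);

	fd = open("/sys/kernel/debug/habanalabs/hl0/data64", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0)
		printf("%.*s", (int)n, buf);
	close(fd);

	return 0;
}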
static ssize_t hl_get_power_state(struct file *f, char __user *buf,
size_t count, loff_t *ppos)
{
@@ -912,6 +976,12 @@ static const struct file_operations hl_data32b_fops = {
.write = hl_data_write32
};
static const struct file_operations hl_data64b_fops = {
.owner = THIS_MODULE,
.read = hl_data_read64,
.write = hl_data_write64
};
static const struct file_operations hl_i2c_data_fops = {
.owner = THIS_MODULE,
.read = hl_i2c_data_read,
@@ -1025,6 +1095,12 @@ void hl_debugfs_add_device(struct hl_device *hdev)
dev_entry,
&hl_data32b_fops);
debugfs_create_file("data64",
0644,
dev_entry->root,
dev_entry,
&hl_data64b_fops);
debugfs_create_file("set_power_state",
0200,
dev_entry->root,

View File

@@ -36,7 +36,7 @@ enum hl_device_status hl_device_status(struct hl_device *hdev)
status = HL_DEVICE_STATUS_OPERATIONAL;
return status;
};
}
static void hpriv_release(struct kref *ref)
{

View File

@@ -324,7 +324,11 @@ static u32 goya_all_events[] = {
GOYA_ASYNC_EVENT_ID_DMA_BM_CH1,
GOYA_ASYNC_EVENT_ID_DMA_BM_CH2,
GOYA_ASYNC_EVENT_ID_DMA_BM_CH3,
GOYA_ASYNC_EVENT_ID_DMA_BM_CH4
GOYA_ASYNC_EVENT_ID_DMA_BM_CH4,
GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_S,
GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_E,
GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_S,
GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_E
};
static int goya_mmu_clear_pgt_range(struct hl_device *hdev);
@@ -393,19 +397,21 @@ void goya_get_fixed_properties(struct hl_device *hdev)
prop->dmmu.hop2_mask = HOP2_MASK;
prop->dmmu.hop3_mask = HOP3_MASK;
prop->dmmu.hop4_mask = HOP4_MASK;
prop->dmmu.huge_page_size = PAGE_SIZE_2MB;
/* No difference between PMMU and DMMU except of page size */
memcpy(&prop->pmmu, &prop->dmmu, sizeof(prop->dmmu));
prop->dmmu.start_addr = VA_DDR_SPACE_START;
prop->dmmu.end_addr = VA_DDR_SPACE_END;
prop->dmmu.page_size = PAGE_SIZE_2MB;
/* shifts and masks are the same in PMMU and DMMU */
memcpy(&prop->pmmu, &prop->dmmu, sizeof(prop->dmmu));
prop->pmmu.start_addr = VA_HOST_SPACE_START;
prop->pmmu.end_addr = VA_HOST_SPACE_END;
prop->pmmu.page_size = PAGE_SIZE_4KB;
prop->va_space_host_start_address = VA_HOST_SPACE_START;
prop->va_space_host_end_address = VA_HOST_SPACE_END;
prop->va_space_dram_start_address = VA_DDR_SPACE_START;
prop->va_space_dram_end_address = VA_DDR_SPACE_END;
prop->dram_size_for_default_page_mapping =
prop->va_space_dram_end_address;
/* PMMU and HPMMU are the same except of page size */
memcpy(&prop->pmmu_huge, &prop->pmmu, sizeof(prop->pmmu));
prop->pmmu_huge.page_size = PAGE_SIZE_2MB;
prop->dram_size_for_default_page_mapping = VA_DDR_SPACE_END;
prop->cfg_size = CFG_SIZE;
prop->max_asid = MAX_ASID;
prop->num_of_events = GOYA_ASYNC_EVENT_ID_SIZE;
@@ -2573,8 +2579,7 @@ static int goya_hw_init(struct hl_device *hdev)
* After CPU initialization is finished, change DDR bar mapping inside
* iATU to point to the start address of the MMU page tables
*/
if (goya_set_ddr_bar_base(hdev, DRAM_PHYS_BASE +
(MMU_PAGE_TABLES_ADDR &
if (goya_set_ddr_bar_base(hdev, (MMU_PAGE_TABLES_ADDR &
~(prop->dram_pci_bar_size - 0x1ull))) == U64_MAX) {
dev_err(hdev->dev,
"failed to map DDR bar to MMU page tables\n");
@@ -3443,12 +3448,13 @@ static int goya_validate_dma_pkt_mmu(struct hl_device *hdev,
/*
* WA for HW-23.
* We can't allow the user to read from the host using QMANs other than 1.
* PMMU and HPMMU addresses are equal, check only one of them.
*/
if (parser->hw_queue_id != GOYA_QUEUE_ID_DMA_1 &&
hl_mem_area_inside_range(le64_to_cpu(user_dma_pkt->src_addr),
le32_to_cpu(user_dma_pkt->tsize),
hdev->asic_prop.va_space_host_start_address,
hdev->asic_prop.va_space_host_end_address)) {
hdev->asic_prop.pmmu.start_addr,
hdev->asic_prop.pmmu.end_addr)) {
dev_err(hdev->dev,
"Can't DMA from host on queue other than 1\n");
return -EFAULT;
@@ -4178,6 +4184,96 @@ static int goya_debugfs_write32(struct hl_device *hdev, u64 addr, u32 val)
return rc;
}
static int goya_debugfs_read64(struct hl_device *hdev, u64 addr, u64 *val)
{
struct asic_fixed_properties *prop = &hdev->asic_prop;
u64 ddr_bar_addr;
int rc = 0;
if ((addr >= CFG_BASE) && (addr <= CFG_BASE + CFG_SIZE - sizeof(u64))) {
u32 val_l = RREG32(addr - CFG_BASE);
u32 val_h = RREG32(addr + sizeof(u32) - CFG_BASE);
*val = (((u64) val_h) << 32) | val_l;
} else if ((addr >= SRAM_BASE_ADDR) &&
(addr <= SRAM_BASE_ADDR + SRAM_SIZE - sizeof(u64))) {
*val = readq(hdev->pcie_bar[SRAM_CFG_BAR_ID] +
(addr - SRAM_BASE_ADDR));
} else if ((addr >= DRAM_PHYS_BASE) &&
(addr <=
DRAM_PHYS_BASE + hdev->asic_prop.dram_size - sizeof(u64))) {
u64 bar_base_addr = DRAM_PHYS_BASE +
(addr & ~(prop->dram_pci_bar_size - 0x1ull));
ddr_bar_addr = goya_set_ddr_bar_base(hdev, bar_base_addr);
if (ddr_bar_addr != U64_MAX) {
*val = readq(hdev->pcie_bar[DDR_BAR_ID] +
(addr - bar_base_addr));
ddr_bar_addr = goya_set_ddr_bar_base(hdev,
ddr_bar_addr);
}
if (ddr_bar_addr == U64_MAX)
rc = -EIO;
} else if (addr >= HOST_PHYS_BASE && !iommu_present(&pci_bus_type)) {
*val = *(u64 *) phys_to_virt(addr - HOST_PHYS_BASE);
} else {
rc = -EFAULT;
}
return rc;
}
static int goya_debugfs_write64(struct hl_device *hdev, u64 addr, u64 val)
{
struct asic_fixed_properties *prop = &hdev->asic_prop;
u64 ddr_bar_addr;
int rc = 0;
if ((addr >= CFG_BASE) && (addr <= CFG_BASE + CFG_SIZE - sizeof(u64))) {
WREG32(addr - CFG_BASE, lower_32_bits(val));
WREG32(addr + sizeof(u32) - CFG_BASE, upper_32_bits(val));
} else if ((addr >= SRAM_BASE_ADDR) &&
(addr <= SRAM_BASE_ADDR + SRAM_SIZE - sizeof(u64))) {
writeq(val, hdev->pcie_bar[SRAM_CFG_BAR_ID] +
(addr - SRAM_BASE_ADDR));
} else if ((addr >= DRAM_PHYS_BASE) &&
(addr <=
DRAM_PHYS_BASE + hdev->asic_prop.dram_size - sizeof(u64))) {
u64 bar_base_addr = DRAM_PHYS_BASE +
(addr & ~(prop->dram_pci_bar_size - 0x1ull));
ddr_bar_addr = goya_set_ddr_bar_base(hdev, bar_base_addr);
if (ddr_bar_addr != U64_MAX) {
writeq(val, hdev->pcie_bar[DDR_BAR_ID] +
(addr - bar_base_addr));
ddr_bar_addr = goya_set_ddr_bar_base(hdev,
ddr_bar_addr);
}
if (ddr_bar_addr == U64_MAX)
rc = -EIO;
} else if (addr >= HOST_PHYS_BASE && !iommu_present(&pci_bus_type)) {
*(u64 *) phys_to_virt(addr - HOST_PHYS_BASE) = val;
} else {
rc = -EFAULT;
}
return rc;
}
static u64 goya_read_pte(struct hl_device *hdev, u64 addr)
{
struct goya_device *goya = hdev->asic_specific;
@@ -4297,6 +4393,14 @@ static const char *_goya_get_event_desc(u16 event_type)
return "TPC%d_bmon_spmu";
case GOYA_ASYNC_EVENT_ID_DMA_BM_CH0 ... GOYA_ASYNC_EVENT_ID_DMA_BM_CH4:
return "DMA_bm_ch%d";
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_S:
return "POWER_ENV_S";
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_E:
return "POWER_ENV_E";
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_S:
return "THERMAL_ENV_S";
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_E:
return "THERMAL_ENV_E";
default:
return "N/A";
}
@@ -4388,22 +4492,22 @@ static void goya_get_event_desc(u16 event_type, char *desc, size_t size)
static void goya_print_razwi_info(struct hl_device *hdev)
{
if (RREG32(mmDMA_MACRO_RAZWI_LBW_WT_VLD)) {
dev_err(hdev->dev, "Illegal write to LBW\n");
dev_err_ratelimited(hdev->dev, "Illegal write to LBW\n");
WREG32(mmDMA_MACRO_RAZWI_LBW_WT_VLD, 0);
}
if (RREG32(mmDMA_MACRO_RAZWI_LBW_RD_VLD)) {
dev_err(hdev->dev, "Illegal read from LBW\n");
dev_err_ratelimited(hdev->dev, "Illegal read from LBW\n");
WREG32(mmDMA_MACRO_RAZWI_LBW_RD_VLD, 0);
}
if (RREG32(mmDMA_MACRO_RAZWI_HBW_WT_VLD)) {
dev_err(hdev->dev, "Illegal write to HBW\n");
dev_err_ratelimited(hdev->dev, "Illegal write to HBW\n");
WREG32(mmDMA_MACRO_RAZWI_HBW_WT_VLD, 0);
}
if (RREG32(mmDMA_MACRO_RAZWI_HBW_RD_VLD)) {
dev_err(hdev->dev, "Illegal read from HBW\n");
dev_err_ratelimited(hdev->dev, "Illegal read from HBW\n");
WREG32(mmDMA_MACRO_RAZWI_HBW_RD_VLD, 0);
}
}
@@ -4423,7 +4527,8 @@ static void goya_print_mmu_error_info(struct hl_device *hdev)
addr <<= 32;
addr |= RREG32(mmMMU_PAGE_ERROR_CAPTURE_VA);
dev_err(hdev->dev, "MMU page fault on va 0x%llx\n", addr);
dev_err_ratelimited(hdev->dev, "MMU page fault on va 0x%llx\n",
addr);
WREG32(mmMMU_PAGE_ERROR_CAPTURE, 0);
}
@@ -4435,7 +4540,7 @@ static void goya_print_irq_info(struct hl_device *hdev, u16 event_type,
char desc[20] = "";
goya_get_event_desc(event_type, desc, sizeof(desc));
dev_err(hdev->dev, "Received H/W interrupt %d [\"%s\"]\n",
dev_err_ratelimited(hdev->dev, "Received H/W interrupt %d [\"%s\"]\n",
event_type, desc);
if (razwi) {
@@ -4526,6 +4631,33 @@ static int goya_unmask_irq(struct hl_device *hdev, u16 event_type)
return rc;
}
static void goya_print_clk_change_info(struct hl_device *hdev, u16 event_type)
{
switch (event_type) {
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_S:
dev_info_ratelimited(hdev->dev,
"Clock throttling due to power consumption\n");
break;
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_E:
dev_info_ratelimited(hdev->dev,
"Power envelope is safe, back to optimal clock\n");
break;
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_S:
dev_info_ratelimited(hdev->dev,
"Clock throttling due to overheating\n");
break;
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_E:
dev_info_ratelimited(hdev->dev,
"Thermal envelope is safe, back to optimal clock\n");
break;
default:
dev_err(hdev->dev, "Received invalid clock change event %d\n",
event_type);
break;
}
}
void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
{
u32 ctl = le32_to_cpu(eq_entry->hdr.ctl);
@@ -4609,6 +4741,14 @@ void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry)
goya_unmask_irq(hdev, event_type);
break;
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_S:
case GOYA_ASYNC_EVENT_ID_FIX_POWER_ENV_E:
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_S:
case GOYA_ASYNC_EVENT_ID_FIX_THERMAL_ENV_E:
goya_print_clk_change_info(hdev, event_type);
goya_unmask_irq(hdev, event_type);
break;
default:
dev_err(hdev->dev, "Received invalid H/W interrupt %d\n",
event_type);
@@ -4776,7 +4916,8 @@ static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev)
for (off = 0 ; off < CPU_FW_IMAGE_SIZE ; off += PAGE_SIZE_2MB) {
rc = hl_mmu_map(hdev->kernel_ctx, prop->dram_base_address + off,
prop->dram_base_address + off, PAGE_SIZE_2MB);
prop->dram_base_address + off, PAGE_SIZE_2MB,
(off + PAGE_SIZE_2MB) == CPU_FW_IMAGE_SIZE);
if (rc) {
dev_err(hdev->dev, "Map failed for address 0x%llx\n",
prop->dram_base_address + off);
@@ -4786,7 +4927,7 @@ static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev)
if (!(hdev->cpu_accessible_dma_address & (PAGE_SIZE_2MB - 1))) {
rc = hl_mmu_map(hdev->kernel_ctx, VA_CPU_ACCESSIBLE_MEM_ADDR,
hdev->cpu_accessible_dma_address, PAGE_SIZE_2MB);
hdev->cpu_accessible_dma_address, PAGE_SIZE_2MB, true);
if (rc) {
dev_err(hdev->dev,
@@ -4799,7 +4940,7 @@ static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev)
rc = hl_mmu_map(hdev->kernel_ctx,
VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
hdev->cpu_accessible_dma_address + cpu_off,
PAGE_SIZE_4KB);
PAGE_SIZE_4KB, true);
if (rc) {
dev_err(hdev->dev,
"Map failed for CPU accessible memory\n");
@@ -4825,14 +4966,15 @@ unmap_cpu:
for (; cpu_off >= 0 ; cpu_off -= PAGE_SIZE_4KB)
if (hl_mmu_unmap(hdev->kernel_ctx,
VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
PAGE_SIZE_4KB))
PAGE_SIZE_4KB, true))
dev_warn_ratelimited(hdev->dev,
"failed to unmap address 0x%llx\n",
VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off);
unmap:
for (; off >= 0 ; off -= PAGE_SIZE_2MB)
if (hl_mmu_unmap(hdev->kernel_ctx,
prop->dram_base_address + off, PAGE_SIZE_2MB))
prop->dram_base_address + off, PAGE_SIZE_2MB,
true))
dev_warn_ratelimited(hdev->dev,
"failed to unmap address 0x%llx\n",
prop->dram_base_address + off);
@@ -4857,14 +4999,15 @@ void goya_mmu_remove_device_cpu_mappings(struct hl_device *hdev)
if (!(hdev->cpu_accessible_dma_address & (PAGE_SIZE_2MB - 1))) {
if (hl_mmu_unmap(hdev->kernel_ctx, VA_CPU_ACCESSIBLE_MEM_ADDR,
PAGE_SIZE_2MB))
PAGE_SIZE_2MB, true))
dev_warn(hdev->dev,
"Failed to unmap CPU accessible memory\n");
} else {
for (cpu_off = 0 ; cpu_off < SZ_2M ; cpu_off += PAGE_SIZE_4KB)
if (hl_mmu_unmap(hdev->kernel_ctx,
VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
PAGE_SIZE_4KB))
PAGE_SIZE_4KB,
(cpu_off + PAGE_SIZE_4KB) >= SZ_2M))
dev_warn_ratelimited(hdev->dev,
"failed to unmap address 0x%llx\n",
VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off);
@@ -4872,7 +5015,8 @@ void goya_mmu_remove_device_cpu_mappings(struct hl_device *hdev)
for (off = 0 ; off < CPU_FW_IMAGE_SIZE ; off += PAGE_SIZE_2MB)
if (hl_mmu_unmap(hdev->kernel_ctx,
prop->dram_base_address + off, PAGE_SIZE_2MB))
prop->dram_base_address + off, PAGE_SIZE_2MB,
(off + PAGE_SIZE_2MB) >= CPU_FW_IMAGE_SIZE))
dev_warn_ratelimited(hdev->dev,
"Failed to unmap address 0x%llx\n",
prop->dram_base_address + off);
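The new flush_pte argument visible in the loops above lets callers batch the PTE cache flush: every page in a run is mapped or unmapped with flush_pte false, and only the final iteration pays for the flush. A minimal sketch of the idiom, written against the hl_mmu_map() prototype declared later in this diff (the helper name and the unwind policy are assumptions):

/* Sketch: map a region 2MB at a time, flushing the PTE cache once. */
static int map_region_batched(struct hl_ctx *ctx, u64 va, u64 pa, u64 size)
{
	u64 off;
	int rc;

	/* assumes size is a multiple of PAGE_SIZE_2MB */
	for (off = 0 ; off < size ; off += PAGE_SIZE_2MB) {
		/* flush only when mapping the last page of the run */
		rc = hl_mmu_map(ctx, va + off, pa + off, PAGE_SIZE_2MB,
				(off + PAGE_SIZE_2MB) >= size);
		if (rc)
			return rc; /* caller unwinds, flushing on its final unmap */
	}

	return 0;
}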
@@ -5113,6 +5257,7 @@ static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask,
}
static void goya_hw_queues_lock(struct hl_device *hdev)
__acquires(&goya->hw_queues_lock)
{
struct goya_device *goya = hdev->asic_specific;
@@ -5120,6 +5265,7 @@ static void goya_hw_queues_lock(struct hl_device *hdev)
}
static void goya_hw_queues_unlock(struct hl_device *hdev)
__releases(&goya->hw_queues_lock)
{
struct goya_device *goya = hdev->asic_specific;
@@ -5180,6 +5326,8 @@ static const struct hl_asic_funcs goya_funcs = {
.restore_phase_topology = goya_restore_phase_topology,
.debugfs_read32 = goya_debugfs_read32,
.debugfs_write32 = goya_debugfs_write32,
.debugfs_read64 = goya_debugfs_read64,
.debugfs_write64 = goya_debugfs_write64,
.add_device_attr = goya_add_device_attr,
.handle_eqe = goya_handle_eqe,
.set_pll_profile = goya_set_pll_profile,


@@ -364,8 +364,8 @@ static int goya_etr_validate_address(struct hl_device *hdev, u64 addr,
u64 range_start, range_end;
if (hdev->mmu_enable) {
range_start = prop->va_space_dram_start_address;
range_end = prop->va_space_dram_end_address;
range_start = prop->dmmu.start_addr;
range_end = prop->dmmu.end_addr;
} else {
range_start = prop->dram_user_base_address;
range_end = prop->dram_end_address;


@@ -298,8 +298,8 @@ static ssize_t pm_mng_profile_store(struct device *dev,
/* Make sure we are in LOW PLL when changing modes */
if (hdev->pm_mng_profile == PM_MANUAL) {
hdev->curr_pll_profile = PLL_HIGH;
hl_device_set_frequency(hdev, PLL_LOW);
hdev->pm_mng_profile = PM_AUTO;
hl_device_set_frequency(hdev, PLL_LOW);
}
} else if (strncmp("manual", buf, strlen("manual")) == 0) {
if (hdev->pm_mng_profile == PM_AUTO) {


@@ -132,6 +132,8 @@ enum hl_device_hw_state {
/**
* struct hl_mmu_properties - ASIC specific MMU address translation properties.
* @start_addr: virtual start address of the memory region.
* @end_addr: virtual end address of the memory region.
* @hop0_shift: shift of hop 0 mask.
* @hop1_shift: shift of hop 1 mask.
* @hop2_shift: shift of hop 2 mask.
@@ -143,9 +145,10 @@ enum hl_device_hw_state {
* @hop3_mask: mask to get the PTE address in hop 3.
* @hop4_mask: mask to get the PTE address in hop 4.
* @page_size: default page size used to allocate memory.
* @huge_page_size: page size used to allocate memory with huge pages.
*/
struct hl_mmu_properties {
u64 start_addr;
u64 end_addr;
u64 hop0_shift;
u64 hop1_shift;
u64 hop2_shift;
@@ -157,7 +160,6 @@ enum hl_device_hw_state {
u64 hop3_mask;
u64 hop4_mask;
u32 page_size;
u32 huge_page_size;
};
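Each hop's shift/mask pair carves that level's table index out of a virtual address. A standalone illustration of the arithmetic (plain C; the two-field struct and the 8-byte PTE size are assumptions made for the sketch, the driver's real page walk lives in its MMU code):

#include <stdint.h>

/* Illustrative subset of the properties above, for one hop level. */
struct mmu_hop_props {
	uint64_t shift;	/* bit position of this hop's index field */
	uint64_t mask;	/* VA bits that select an entry in this hop */
};

/* Address of the PTE that a VA selects within one hop table. */
static uint64_t hop_pte_addr(const struct mmu_hop_props *hop,
			     uint64_t hop_table_base, uint64_t va)
{
	return hop_table_base +
	       ((va & hop->mask) >> hop->shift) * sizeof(uint64_t);
}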
/**
@@ -169,6 +171,8 @@ struct hl_mmu_properties {
* @preboot_ver: F/W Preboot version.
* @dmmu: DRAM MMU address translation properties.
* @pmmu: PCI (host) MMU address translation properties.
* @pmmu_huge: PCI (host) MMU address translation properties for memory
* allocated with huge pages.
* @sram_base_address: SRAM physical start address.
* @sram_end_address: SRAM physical end address.
* @sram_user_base_address: SRAM physical start address for user access.
@@ -178,14 +182,6 @@ struct hl_mmu_properties {
* @dram_size: DRAM total size.
* @dram_pci_bar_size: size of PCI bar towards DRAM.
* @max_power_default: max power of the device after reset
* @va_space_host_start_address: base address of virtual memory range for
* mapping host memory.
* @va_space_host_end_address: end address of virtual memory range for
* mapping host memory.
* @va_space_dram_start_address: base address of virtual memory range for
* mapping DRAM memory.
* @va_space_dram_end_address: end address of virtual memory range for
* mapping DRAM memory.
* @dram_size_for_default_page_mapping: DRAM size needed to map to avoid page
* fault.
* @pcie_dbi_base_address: Base address of the PCIE_DBI block.
@@ -218,6 +214,7 @@ struct asic_fixed_properties {
char preboot_ver[VERSION_MAX_LEN];
struct hl_mmu_properties dmmu;
struct hl_mmu_properties pmmu;
struct hl_mmu_properties pmmu_huge;
u64 sram_base_address;
u64 sram_end_address;
u64 sram_user_base_address;
@@ -227,10 +224,6 @@ struct asic_fixed_properties {
u64 dram_size;
u64 dram_pci_bar_size;
u64 max_power_default;
u64 va_space_host_start_address;
u64 va_space_host_end_address;
u64 va_space_dram_start_address;
u64 va_space_dram_end_address;
u64 dram_size_for_default_page_mapping;
u64 pcie_dbi_base_address;
u64 pcie_aux_dbi_reg_addr;
@@ -431,10 +424,12 @@ struct hl_eq {
* enum hl_asic_type - supported ASIC types.
* @ASIC_INVALID: Invalid ASIC type.
* @ASIC_GOYA: Goya device.
* @ASIC_GAUDI: Gaudi device.
*/
enum hl_asic_type {
ASIC_INVALID,
ASIC_GOYA
ASIC_GOYA,
ASIC_GAUDI
};
struct hl_cs_parser;
@@ -589,6 +584,8 @@ struct hl_asic_funcs {
void (*restore_phase_topology)(struct hl_device *hdev);
int (*debugfs_read32)(struct hl_device *hdev, u64 addr, u32 *val);
int (*debugfs_write32)(struct hl_device *hdev, u64 addr, u32 val);
int (*debugfs_read64)(struct hl_device *hdev, u64 addr, u64 *val);
int (*debugfs_write64)(struct hl_device *hdev, u64 addr, u64 val);
void (*add_device_attr)(struct hl_device *hdev,
struct attribute_group *dev_attr_grp);
void (*handle_eqe)(struct hl_device *hdev,
@@ -658,6 +655,8 @@ struct hl_va_range {
* this hits 0. It is incremented on CS and CS_WAIT.
* @cs_pending: array of DMA fence objects representing pending CS.
* @host_va_range: holds available virtual addresses for host mappings.
* @host_huge_va_range: holds available virtual addresses for host mappings
* with huge pages.
* @dram_va_range: holds available virtual addresses for DRAM mappings.
* @mem_hash_lock: protects the mem_hash.
* @mmu_lock: protects the MMU page tables. Any change to the PGT, modifying the
@@ -688,8 +687,9 @@ struct hl_ctx {
struct hl_device *hdev;
struct kref refcount;
struct dma_fence *cs_pending[HL_MAX_PENDING_CS];
struct hl_va_range host_va_range;
struct hl_va_range dram_va_range;
struct hl_va_range *host_va_range;
struct hl_va_range *host_huge_va_range;
struct hl_va_range *dram_va_range;
struct mutex mem_hash_lock;
struct mutex mmu_lock;
struct list_head debugfs_list;
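With the single host range split into regular and huge variants, map paths now have to pick a range per buffer. A hypothetical selector (the function name, the IS_ALIGNED test and the SZ_2M threshold are assumptions, not taken from this diff) captures the intent:

/* Hypothetical: steer fully 2MB-aligned host buffers to the huge range. */
static struct hl_va_range *host_range_for(struct hl_ctx *ctx, u64 size)
{
	if (ctx->hdev->pmmu_huge_range && IS_ALIGNED(size, SZ_2M))
		return ctx->host_huge_va_range;

	return ctx->host_va_range;
}

Falling back to the regular range keeps behavior unchanged on ASICs that leave pmmu_huge_range clear.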
@@ -763,7 +763,7 @@ struct hl_userptr {
* @aborted: true if CS was aborted due to some device error.
*/
struct hl_cs {
u8 jobs_in_queue_cnt[HL_MAX_QUEUES];
u16 jobs_in_queue_cnt[HL_MAX_QUEUES];
struct hl_ctx *ctx;
struct list_head job_list;
spinlock_t job_lock;
@@ -1291,6 +1291,8 @@ struct hl_device_idle_busy_ts {
* otherwise.
* @dram_supports_virtual_memory: is MMU enabled towards DRAM.
* @dram_default_page_mapping: is DRAM default page mapping enabled.
* @pmmu_huge_range: is a different virtual addresses range used for PMMU with
* huge pages.
* @init_done: is the initialization of the device done.
* @mmu_enable: is MMU enabled.
* @device_cpu_disabled: is the device CPU disabled (due to timeouts)
@@ -1372,6 +1374,7 @@ struct hl_device {
u8 reset_on_lockup;
u8 dram_supports_virtual_memory;
u8 dram_default_page_mapping;
u8 pmmu_huge_range;
u8 init_done;
u8 device_cpu_disabled;
u8 dma_mask;
@@ -1573,8 +1576,10 @@ int hl_mmu_init(struct hl_device *hdev);
void hl_mmu_fini(struct hl_device *hdev);
int hl_mmu_ctx_init(struct hl_ctx *ctx);
void hl_mmu_ctx_fini(struct hl_ctx *ctx);
int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr, u32 page_size);
int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size);
int hl_mmu_map(struct hl_ctx *ctx, u64 virt_addr, u64 phys_addr,
u32 page_size, bool flush_pte);
int hl_mmu_unmap(struct hl_ctx *ctx, u64 virt_addr, u32 page_size,
bool flush_pte);
void hl_mmu_swap_out(struct hl_ctx *ctx);
void hl_mmu_swap_in(struct hl_ctx *ctx);
@@ -1606,11 +1611,18 @@ int hl_pci_set_dma_mask(struct hl_device *hdev, u8 dma_mask);
long hl_get_frequency(struct hl_device *hdev, u32 pll_index, bool curr);
void hl_set_frequency(struct hl_device *hdev, u32 pll_index, u64 freq);
long hl_get_temperature(struct hl_device *hdev, int sensor_index, u32 attr);
long hl_get_voltage(struct hl_device *hdev, int sensor_index, u32 attr);
long hl_get_current(struct hl_device *hdev, int sensor_index, u32 attr);
long hl_get_fan_speed(struct hl_device *hdev, int sensor_index, u32 attr);
long hl_get_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr);
int hl_get_temperature(struct hl_device *hdev,
int sensor_index, u32 attr, long *value);
int hl_set_temperature(struct hl_device *hdev,
int sensor_index, u32 attr, long value);
int hl_get_voltage(struct hl_device *hdev,
int sensor_index, u32 attr, long *value);
int hl_get_current(struct hl_device *hdev,
int sensor_index, u32 attr, long *value);
int hl_get_fan_speed(struct hl_device *hdev,
int sensor_index, u32 attr, long *value);
int hl_get_pwm_info(struct hl_device *hdev,
int sensor_index, u32 attr, long *value);
void hl_set_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr,
long value);
u64 hl_get_max_power(struct hl_device *hdev);


@@ -40,12 +40,13 @@ MODULE_PARM_DESC(reset_on_lockup,
#define PCI_VENDOR_ID_HABANALABS 0x1da3
#define PCI_IDS_GOYA 0x0001
#define PCI_IDS_GAUDI 0x1000
static const struct pci_device_id ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_HABANALABS, PCI_IDS_GOYA), },
{ PCI_DEVICE(PCI_VENDOR_ID_HABANALABS, PCI_IDS_GAUDI), },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, ids);
/*
* get_asic_type - translate device id to asic type
@@ -63,6 +64,9 @@ static enum hl_asic_type get_asic_type(u16 device)
case PCI_IDS_GOYA:
asic_type = ASIC_GOYA;
break;
case PCI_IDS_GAUDI:
asic_type = ASIC_GAUDI;
break;
default:
asic_type = ASIC_INVALID;
break;
@@ -263,6 +267,11 @@ int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
dev_err(&pdev->dev, "Unsupported ASIC\n");
rc = -ENODEV;
goto free_hdev;
} else if (hdev->asic_type == ASIC_GAUDI) {
dev_err(&pdev->dev,
"GAUDI is not supported by the current kernel\n");
rc = -ENODEV;
goto free_hdev;
}
} else {
hdev->asic_type = asic_type;


@@ -113,6 +113,7 @@ static int hl_read(struct device *dev, enum hwmon_sensor_types type,
u32 attr, int channel, long *val)
{
struct hl_device *hdev = dev_get_drvdata(dev);
int rc;
if (hl_device_disabled_or_in_reset(hdev))
return -ENODEV;
@@ -125,36 +126,40 @@ static int hl_read(struct device *dev, enum hwmon_sensor_types type,
case hwmon_temp_crit:
case hwmon_temp_max_hyst:
case hwmon_temp_crit_hyst:
case hwmon_temp_offset:
case hwmon_temp_highest:
break;
default:
return -EINVAL;
}
*val = hl_get_temperature(hdev, channel, attr);
rc = hl_get_temperature(hdev, channel, attr, val);
break;
case hwmon_in:
switch (attr) {
case hwmon_in_input:
case hwmon_in_min:
case hwmon_in_max:
case hwmon_in_highest:
break;
default:
return -EINVAL;
}
*val = hl_get_voltage(hdev, channel, attr);
rc = hl_get_voltage(hdev, channel, attr, val);
break;
case hwmon_curr:
switch (attr) {
case hwmon_curr_input:
case hwmon_curr_min:
case hwmon_curr_max:
case hwmon_curr_highest:
break;
default:
return -EINVAL;
}
*val = hl_get_current(hdev, channel, attr);
rc = hl_get_current(hdev, channel, attr, val);
break;
case hwmon_fan:
switch (attr) {
@@ -165,7 +170,7 @@ static int hl_read(struct device *dev, enum hwmon_sensor_types type,
default:
return -EINVAL;
}
*val = hl_get_fan_speed(hdev, channel, attr);
rc = hl_get_fan_speed(hdev, channel, attr, val);
break;
case hwmon_pwm:
switch (attr) {
@@ -175,12 +180,12 @@ static int hl_read(struct device *dev, enum hwmon_sensor_types type,
default:
return -EINVAL;
}
*val = hl_get_pwm_info(hdev, channel, attr);
rc = hl_get_pwm_info(hdev, channel, attr, val);
break;
default:
return -EINVAL;
}
return 0;
return rc;
}
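The rework above exists because the old getters returned the reading itself as a long, so a failure had to masquerade as a value and hl_read() could not tell an error from a genuine zero. Returning an int status and passing the reading through a pointer separates the two channels; a condensed sketch of the new shape (query_fw() is a hypothetical stand-in for the ArmCP round-trip):

static int query_fw(struct hl_device *hdev, long *value); /* hypothetical */

static int get_sensor_value(struct hl_device *hdev, long *value)
{
	int rc = query_fw(hdev, value);

	if (rc)
		*value = 0;	/* deterministic output even on failure */

	return rc;	/* hl_read() can now propagate the real error */
}

On the sysfs side, the new hwmon_temp_offset case is exposed read-write (mode 0644 in hl_is_visible()), so the offset lands on the standard hwmon tempN_offset attribute.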
static int hl_write(struct device *dev, enum hwmon_sensor_types type,
@@ -192,6 +197,15 @@ static int hl_write(struct device *dev, enum hwmon_sensor_types type,
return -ENODEV;
switch (type) {
case hwmon_temp:
switch (attr) {
case hwmon_temp_offset:
break;
default:
return -EINVAL;
}
hl_set_temperature(hdev, channel, attr, val);
break;
case hwmon_pwm:
switch (attr) {
case hwmon_pwm_input:
@@ -219,7 +233,10 @@ static umode_t hl_is_visible(const void *data, enum hwmon_sensor_types type,
case hwmon_temp_max_hyst:
case hwmon_temp_crit:
case hwmon_temp_crit_hyst:
case hwmon_temp_highest:
return 0444;
case hwmon_temp_offset:
return 0644;
}
break;
case hwmon_in:
@@ -227,6 +244,7 @@ static umode_t hl_is_visible(const void *data, enum hwmon_sensor_types type,
case hwmon_in_input:
case hwmon_in_min:
case hwmon_in_max:
case hwmon_in_highest:
return 0444;
}
break;
@@ -235,6 +253,7 @@ static umode_t hl_is_visible(const void *data, enum hwmon_sensor_types type,
case hwmon_curr_input:
case hwmon_curr_min:
case hwmon_curr_max:
case hwmon_curr_highest:
return 0444;
}
break;
@@ -265,10 +284,10 @@ static const struct hwmon_ops hl_hwmon_ops = {
.write = hl_write
};
long hl_get_temperature(struct hl_device *hdev, int sensor_index, u32 attr)
int hl_get_temperature(struct hl_device *hdev,
int sensor_index, u32 attr, long *value)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
@@ -279,22 +298,47 @@ long hl_get_temperature(struct hl_device *hdev, int sensor_index, u32 attr)
pkt.type = __cpu_to_le16(attr);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, &result);
SENSORS_PKT_TIMEOUT, value);
if (rc) {
dev_err(hdev->dev,
"Failed to get temperature from sensor %d, error %d\n",
sensor_index, rc);
result = 0;
*value = 0;
}
return result;
return rc;
}
long hl_get_voltage(struct hl_device *hdev, int sensor_index, u32 attr)
int hl_set_temperature(struct hl_device *hdev,
int sensor_index, u32 attr, long value)
{
struct armcp_packet pkt;
int rc;
memset(&pkt, 0, sizeof(pkt));
pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEMPERATURE_SET <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
pkt.sensor_index = __cpu_to_le16(sensor_index);
pkt.type = __cpu_to_le16(attr);
pkt.value = __cpu_to_le64(value);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, NULL);
if (rc)
dev_err(hdev->dev,
"Failed to set temperature of sensor %d, error %d\n",
sensor_index, rc);
return rc;
}
int hl_get_voltage(struct hl_device *hdev,
int sensor_index, u32 attr, long *value)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
@@ -305,22 +349,22 @@ long hl_get_voltage(struct hl_device *hdev, int sensor_index, u32 attr)
pkt.type = __cpu_to_le16(attr);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, &result);
SENSORS_PKT_TIMEOUT, value);
if (rc) {
dev_err(hdev->dev,
"Failed to get voltage from sensor %d, error %d\n",
sensor_index, rc);
result = 0;
*value = 0;
}
return result;
return rc;
}
long hl_get_current(struct hl_device *hdev, int sensor_index, u32 attr)
int hl_get_current(struct hl_device *hdev,
int sensor_index, u32 attr, long *value)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
@@ -331,22 +375,22 @@ long hl_get_current(struct hl_device *hdev, int sensor_index, u32 attr)
pkt.type = __cpu_to_le16(attr);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, &result);
SENSORS_PKT_TIMEOUT, value);
if (rc) {
dev_err(hdev->dev,
"Failed to get current from sensor %d, error %d\n",
sensor_index, rc);
result = 0;
*value = 0;
}
return result;
return rc;
}
long hl_get_fan_speed(struct hl_device *hdev, int sensor_index, u32 attr)
int hl_get_fan_speed(struct hl_device *hdev,
int sensor_index, u32 attr, long *value)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
@@ -357,22 +401,22 @@ long hl_get_fan_speed(struct hl_device *hdev, int sensor_index, u32 attr)
pkt.type = __cpu_to_le16(attr);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, &result);
SENSORS_PKT_TIMEOUT, value);
if (rc) {
dev_err(hdev->dev,
"Failed to get fan speed from sensor %d, error %d\n",
sensor_index, rc);
result = 0;
*value = 0;
}
return result;
return rc;
}
long hl_get_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr)
int hl_get_pwm_info(struct hl_device *hdev,
int sensor_index, u32 attr, long *value)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
@@ -383,16 +427,16 @@ long hl_get_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr)
pkt.type = __cpu_to_le16(attr);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
SENSORS_PKT_TIMEOUT, &result);
SENSORS_PKT_TIMEOUT, value);
if (rc) {
dev_err(hdev->dev,
"Failed to get pwm info from sensor %d, error %d\n",
sensor_index, rc);
result = 0;
*value = 0;
}
return result;
return rc;
}
void hl_set_pwm_info(struct hl_device *hdev, int sensor_index, u32 attr,


@@ -189,6 +189,10 @@ enum pq_init_status {
* ArmCP to write to the structure, to prevent data corruption in case of
* mismatched driver/FW versions.
*
* ARMCP_PACKET_TEMPERATURE_SET -
* Set the value of the offset property of a specified thermal sensor.
* The packet's arguments specify the desired sensor and the field to
* set.
*/
enum armcp_packet_id {
@@ -214,6 +218,8 @@ enum armcp_packet_id {
ARMCP_PACKET_MAX_POWER_GET, /* sysfs */
ARMCP_PACKET_MAX_POWER_SET, /* sysfs */
ARMCP_PACKET_EEPROM_DATA_GET, /* sysfs */
ARMCP_RESERVED,
ARMCP_PACKET_TEMPERATURE_SET, /* sysfs */
};
#define ARMCP_PACKET_FENCE_VAL 0xFE8CE7A5
@@ -271,24 +277,32 @@ enum armcp_packet_rc {
armcp_packet_fault
};
/*
* armcp_temp_type should adhere to hwmon_temp_attributes
* defined in Linux kernel hwmon.h file
*/
enum armcp_temp_type {
armcp_temp_input,
armcp_temp_max = 6,
armcp_temp_max_hyst,
armcp_temp_crit,
armcp_temp_crit_hyst
armcp_temp_crit_hyst,
armcp_temp_offset = 19,
armcp_temp_highest = 22
};
enum armcp_in_attributes {
armcp_in_input,
armcp_in_min,
armcp_in_max
armcp_in_max,
armcp_in_highest = 7
};
enum armcp_curr_attributes {
armcp_curr_input,
armcp_curr_min,
armcp_curr_max
armcp_curr_max,
armcp_curr_highest = 7
};
enum armcp_fan_attributes {

Some files were not shown because too many files have changed in this diff.