
ARM/SoC: drivers for v5.7

These are updates to SoC specific drivers that did not have
 another subsystem maintainer tree to go through for some
 reason:
 
 - Some bus and memory drivers for the MIPS P5600 based
   Baikal-T1 SoC that is getting added through the MIPS tree.
 
 - There are new soc_device identification drivers for TI K3,
   Qualcomm MSM8939
 
 - New reset controller drivers for NXP i.MX8MP, Renesas
   RZ/G1H, and Hisilicon hi6220
 
 - The SCMI firmware interface can now work across ARM SMC/HVC
   as a transport.
 
 - Mediatek platforms now use a new driver for their "MMSYS"
   hardware block that controls clocks and some other aspects
    on behalf of the media and gpu drivers.
 
 - Some Tegra processors have improved power management
   support, including getting woken up by the PMIC and cluster
   power down during idle.
 
 - A new v4l staging driver for Tegra is added.
 
 - Cleanups and minor bugfixes for TI, NXP, Hisilicon,
   Mediatek, and Tegra.
 
 Signed-off-by: Arnd Bergmann <arnd@arndb.de>
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEo6/YBQwIrVS28WGKmmx57+YAGNkFAl7XvgAACgkQmmx57+YA
 GNmj/hAAnAJ/hYehLfgCe711HUntgeRkaoTVpCt8BJNMdxsa23sn3V6k5+WYn1uG
 PtlgpefZEMHLUEEVDegR4nZXLG0Pzu1SR12KW34YPcQKkNo/+vlQ9zYUajnJ/KX6
 10zdLSIzHfk1VtXKvvQQ8xFyE+S/trGmjC57E6gfoCUT3rl1maD+ccVXUBaz9oob
 wuMxGXQAl57mio5yT1OfSk6Fev39xRE2dN1hzP7KUYhsemZajBwBBW5wVJZCsCB8
 LCGmxVkavM7BV4r2NokbBDs5rlfedBl/P/IPd9Is5a5tuGUkSsVRG9zqShxYLGM3
 S06az6POQFwXKFJoUKW0dK/Koy0D7BK+vhUBPzFv4HZ8iDCVf6Jju2MJ02GMqHPj
 OOrXaCbLYrvN/edVUWeeFywqwMbYTRwC4DxyTq5m7HxEB004xTOhs0rX5aR0u4n1
 bbsR97LguolwH9iEMzd3F3jCiKBcMecH3lAh5WcrtwlFIRrNhbWoGDoA/4TuORFS
 b11rgsTRIJ5Vc++D1HnSnx0ZZvUzyluMvygdALnSgVah6xYe6KVw9Kg/wioAJ04G
 uSTidqP3qRhsyET2HQo7CxdVfZbKfP25iKCKrdhQziztKvhF8qrUmZKloXOodRw+
 ewYSRmv8c324OYYit1X43oAdW8dntq1XbSIauaqxEb4JC7x/xRY=
 =44Jc
 -----END PGP SIGNATURE-----

Merge tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM/SoC driver updates from Arnd Bergmann:
 "These are updates to SoC specific drivers that did not have another
  subsystem maintainer tree to go through for some reason:

   - Some bus and memory drivers for the MIPS P5600 based Baikal-T1 SoC
     that is getting added through the MIPS tree.

   - There are new soc_device identification drivers for TI K3, Qualcomm
     MSM8939

   - New reset controller drivers for NXP i.MX8MP, Renesas RZ/G1H, and
     Hisilicon hi6220

   - The SCMI firmware interface can now work across ARM SMC/HVC as a
     transport.

   - Mediatek platforms now use a new driver for their "MMSYS" hardware
     block that controls clocks and some other aspects on behalf of the
     media and gpu drivers.

   - Some Tegra processors have improved power management support,
     including getting woken up by the PMIC and cluster power down
     during idle.

   - A new v4l staging driver for Tegra is added.

   - Cleanups and minor bugfixes for TI, NXP, Hisilicon, Mediatek, and
     Tegra"

* tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (155 commits)
  clk: sprd: fix compile-testing
  bus: bt1-axi: Build the driver into the kernel
  bus: bt1-apb: Build the driver into the kernel
  bus: bt1-axi: Use sysfs_streq instead of strncmp
  bus: bt1-axi: Optimize the return points in the driver
  bus: bt1-apb: Use sysfs_streq instead of strncmp
  bus: bt1-apb: Use PTR_ERR_OR_ZERO to return from request-regs method
  bus: bt1-apb: Fix show/store callback identations
  bus: bt1-apb: Include linux/io.h
  dt-bindings: memory: Add Baikal-T1 L2-cache Control Block binding
  memory: Add Baikal-T1 L2-cache Control Block driver
  bus: Add Baikal-T1 APB-bus driver
  bus: Add Baikal-T1 AXI-bus driver
  dt-bindings: bus: Add Baikal-T1 APB-bus binding
  dt-bindings: bus: Add Baikal-T1 AXI-bus binding
  staging: tegra-video: fix V4L2 dependency
  tee: fix crypto select
  drivers: soc: ti: knav_qmss_queue: Make knav_gp_range_ops static
  soc: ti: add k3 platforms chipid module driver
  dt-bindings: soc: ti: add binding for k3 platforms chipid module
  ...
Linus Torvalds 2020-06-04 19:56:20 -07:00
commit 828f3e18e1
150 changed files with 8003 additions and 1238 deletions


@ -14,7 +14,7 @@ Required properties:
The scmi node with the following properties shall be under the /firmware/ node.
- compatible : shall be "arm,scmi"
- compatible : shall be "arm,scmi" or "arm,scmi-smc" for smc/hvc transports
- mboxes: List of phandle and mailbox channel specifiers. It should contain
exactly one or two mailboxes, one for transmitting messages("tx")
and another optional for receiving the notifications("rx") if
@ -25,6 +25,7 @@ The scmi node with the following properties shall be under the /firmware/ node.
protocol identifier for a given sub-node.
- #size-cells : should be '0' as 'reg' property doesn't have any size
associated with it.
- arm,smc-id : SMC id required when using smc or hvc transports
Optional properties:
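
For orientation, here is a hedged sketch of what the new "arm,scmi-smc" transport amounts to on the kernel side: the agent writes its request into the shared-memory payload and then uses the function ID from the arm,smc-id property as a synchronous doorbell into the firmware. This is an illustration only, not the in-tree SCMI smc transport driver; the raw memcpy_toio stands in for the real shared-memory channel-header handling.

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>

/* Illustrative doorbell: fill the SCMI shared memory, then trap into the
 * firmware with the SMC/HVC function ID taken from arm,smc-id. */
static int scmi_smc_doorbell(void __iomem *shmem, const void *msg,
                             size_t len, u32 smc_func_id)
{
        struct arm_smccc_res res;

        memcpy_toio(shmem, msg, len);

        /* The firmware processes the request during the call itself;
         * arm_smccc_1_1_invoke() picks the SMC or HVC conduit. */
        arm_smccc_1_1_invoke(smc_func_id, 0, 0, 0, 0, 0, 0, 0, &res);

        return res.a0 ? -EOPNOTSUPP : 0;
}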


@ -1,7 +1,8 @@
Mediatek mmsys controller
============================
The Mediatek mmsys controller provides various clocks to the system.
The Mediatek mmsys system controller provides clock control, routing control,
and miscellaneous control in mmsys partition.
Required Properties:
@ -15,13 +16,13 @@ Required Properties:
- "mediatek,mt8183-mmsys", "syscon"
- #clock-cells: Must be 1
The mmsys controller uses the common clk binding from
For the clock control, the mmsys controller uses the common clk binding from
Documentation/devicetree/bindings/clock/clock-bindings.txt
The available clocks are defined in dt-bindings/clock/mt*-clk.h.
Example:
mmsys: clock-controller@14000000 {
mmsys: syscon@14000000 {
compatible = "mediatek,mt8173-mmsys", "syscon";
reg = <0 0x14000000 0 0x1000>;
#clock-cells = <1>;
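
The clock portions of these mmsys blocks stop being standalone of_match-based drivers further down in this series (the clk-mt*-mm diffs below drop their of_device_id tables and read dev->parent->of_node instead). A hedged sketch of the resulting split, showing only the child-device registration; the routing and miscellaneous control the new binding text mentions is left out, and everything beyond the "clk-mt8173-mm" child name is illustrative.

#include <linux/err.h>
#include <linux/module.h>
#include <linux/platform_device.h>

/* The mmsys driver owns the "mediatek,mt8173-mmsys" node and spawns the
 * clock part as a child platform device; the clock driver then finds the
 * DT node via its parent rather than matching a compatible of its own. */
static int mtk_mmsys_probe(struct platform_device *pdev)
{
        struct platform_device *clks;

        clks = platform_device_register_data(&pdev->dev, "clk-mt8173-mm",
                                             PLATFORM_DEVID_AUTO, NULL, 0);
        if (IS_ERR(clks))
                return PTR_ERR(clks);

        /* Display/GPU routing control would be set up here as well. */
        return 0;
}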


@ -0,0 +1,90 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
%YAML 1.2
---
$id: http://devicetree.org/schemas/bus/baikal,bt1-apb.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Baikal-T1 APB-bus
maintainers:
- Serge Semin <fancer.lancer@gmail.com>
description: |
Baikal-T1 CPU or DMAC MMIO requests are handled by the AMBA 3 AXI Interconnect
which routes them to the AXI-APB bridge. This interface is a single master
multiple slaves bus in turn serializing IO accesses and routing them to the
addressed APB slave devices. In case of any APB protocol collisions, slave
device not responding on timeout an IRQ is raised with an erroneous address
reported to the APB terminator (APB Errors Handler Block).
allOf:
- $ref: /schemas/simple-bus.yaml#
properties:
compatible:
contains:
const: baikal,bt1-apb
reg:
items:
- description: APB EHB MMIO registers
- description: APB MMIO region with no any device mapped
reg-names:
items:
- const: ehb
- const: nodev
interrupts:
maxItems: 1
clocks:
items:
- description: APB reference clock
clock-names:
items:
- const: pclk
resets:
items:
- description: APB domain reset line
reset-names:
items:
- const: prst
unevaluatedProperties: false
required:
- compatible
- reg
- reg-names
- interrupts
- clocks
- clock-names
examples:
- |
#include <dt-bindings/interrupt-controller/mips-gic.h>
bus@1f059000 {
compatible = "baikal,bt1-apb", "simple-bus";
reg = <0 0x1f059000 0 0x1000>,
<0 0x1d000000 0 0x2040000>;
reg-names = "ehb", "nodev";
#address-cells = <1>;
#size-cells = <1>;
ranges;
interrupts = <GIC_SHARED 16 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&ccu_sys 1>;
clock-names = "pclk";
resets = <&ccu_sys 1>;
reset-names = "prst";
};
...


@ -0,0 +1,107 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
%YAML 1.2
---
$id: http://devicetree.org/schemas/bus/baikal,bt1-axi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Baikal-T1 AXI-bus
maintainers:
- Serge Semin <fancer.lancer@gmail.com>
description: |
AXI3-bus is the main communication bus of Baikal-T1 SoC connecting all
high-speed peripheral IP-cores with RAM controller and with MIPS P5600
cores. Traffic arbitration is done by means of DW AXI Interconnect (so
called AXI Main Interconnect) routing IO requests from one block to
another: from CPU to SoC peripherals and between some SoC peripherals
(mostly between peripheral devices and RAM, but also between DMA and
some peripherals). In case of any protocol error, device not responding
an IRQ is raised and a faulty situation is reported to the AXI EHB
(Errors Handler Block) embedded on top of the DW AXI Interconnect and
accessible by means of the Baikal-T1 System Controller.
allOf:
- $ref: /schemas/simple-bus.yaml#
properties:
compatible:
contains:
const: baikal,bt1-axi
reg:
minItems: 1
items:
- description: Synopsys DesignWare AXI Interconnect QoS registers
- description: AXI EHB MMIO system controller registers
reg-names:
minItems: 1
items:
- const: qos
- const: ehb
'#interconnect-cells':
const: 1
syscon:
$ref: /schemas/types.yaml#definitions/phandle
description: Phandle to the Baikal-T1 System Controller DT node
interrupts:
maxItems: 1
clocks:
items:
- description: Main Interconnect uplink reference clock
clock-names:
items:
- const: aclk
resets:
items:
- description: Main Interconnect reset line
reset-names:
items:
- const: arst
unevaluatedProperties: false
required:
- compatible
- reg
- reg-names
- syscon
- interrupts
- clocks
- clock-names
examples:
- |
#include <dt-bindings/interrupt-controller/mips-gic.h>
bus@1f05a000 {
compatible = "baikal,bt1-axi", "simple-bus";
reg = <0 0x1f05a000 0 0x1000>,
<0 0x1f04d110 0 0x8>;
reg-names = "qos", "ehb";
#address-cells = <1>;
#size-cells = <1>;
#interconnect-cells = <1>;
syscon = <&syscon>;
ranges;
interrupts = <GIC_SHARED 127 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&ccu_axi 0>;
clock-names = "aclk";
resets = <&ccu_axi 0>;
reset-names = "arst";
};
...


@ -0,0 +1,56 @@
Binding for NVIDIA Tegra20 CPUFreq
==================================
Required properties:
- clocks: Must contain an entry for the CPU clock.
See ../clocks/clock-bindings.txt for details.
- operating-points-v2: See ../bindings/opp/opp.txt for details.
- #cooling-cells: Should be 2. See ../thermal/thermal.txt for details.
For each opp entry in 'operating-points-v2' table:
- opp-supported-hw: Two bitfields indicating:
On Tegra20:
1. CPU process ID mask
2. SoC speedo ID mask
On Tegra30:
1. CPU process ID mask
2. CPU speedo ID mask
A bitwise AND is performed against these values and if any bit
matches, the OPP gets enabled.
- opp-microvolt: CPU voltage triplet.
Optional properties:
- cpu-supply: Phandle to the CPU power supply.
Example:
regulators {
cpu_reg: regulator0 {
regulator-name = "vdd_cpu";
};
};
cpu0_opp_table: opp_table0 {
compatible = "operating-points-v2";
opp@456000000 {
clock-latency-ns = <125000>;
opp-microvolt = <825000 825000 1125000>;
opp-supported-hw = <0x03 0x0001>;
opp-hz = /bits/ 64 <456000000>;
};
...
};
cpus {
cpu@0 {
compatible = "arm,cortex-a9";
clocks = <&tegra_car TEGRA20_CLK_CCLK>;
operating-points-v2 = <&cpu0_opp_table>;
cpu-supply = <&cpu_reg>;
#cooling-cells = <2>;
};
};
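
A hedged sketch of how a cpufreq driver can implement the opp-supported-hw matching rule spelled out above: build one mask per cell from the chip's process and speedo IDs and let the OPP core enable only the entries whose bits overlap. The ID values would come from the platform's fuse code; they are plain parameters here.

#include <linux/bits.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>
#include <linux/types.h>

static int enable_matching_opps(struct device *cpu_dev,
                                unsigned int process_id,
                                unsigned int speedo_id)
{
        u32 versions[2];
        struct opp_table *opp_table;

        versions[0] = BIT(process_id);  /* first cell: process ID mask */
        versions[1] = BIT(speedo_id);   /* second cell: speedo ID mask */

        /* The OPP core ANDs these against each opp-supported-hw entry and
         * keeps only the OPPs where every cell has at least one bit set. */
        opp_table = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2);
        if (IS_ERR(opp_table))
                return PTR_ERR(opp_table);

        return 0;
}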


@ -40,14 +40,30 @@ of the following host1x client modules:
Required properties:
- compatible: "nvidia,tegra<chip>-vi"
- reg: Physical base address and length of the controller's registers.
- reg: Physical base address and length of the controller registers.
- interrupts: The interrupt outputs from the controller.
- clocks: Must contain one entry, for the module clock.
- clocks: clocks: Must contain one entry, for the module clock.
See ../clocks/clock-bindings.txt for details.
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- vi
- Tegra20/Tegra30/Tegra114/Tegra124:
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- vi
- Tegra210:
- power-domains: Must include venc powergate node as vi is in VE partition.
- Tegra210 has CSI part of VI sharing same host interface and register space.
So, VI device node should have CSI child node.
- csi: mipi csi interface to vi
Required properties:
- compatible: "nvidia,tegra210-csi"
- reg: Physical base address offset to parent and length of the controller
registers.
- clocks: Must contain entries csi, cilab, cilcd, cile, csi_tpg clocks.
See ../clocks/clock-bindings.txt for details.
- power-domains: Must include sor powergate node as csicil is in
SOR partition.
- epp: encoder pre-processor
@ -309,13 +325,44 @@ Example:
reset-names = "mpe";
};
vi {
compatible = "nvidia,tegra20-vi";
reg = <0x54080000 0x00040000>;
interrupts = <0 69 0x04>;
clocks = <&tegra_car TEGRA20_CLK_VI>;
resets = <&tegra_car 100>;
reset-names = "vi";
vi@54080000 {
compatible = "nvidia,tegra210-vi";
reg = <0x0 0x54080000 0x0 0x700>;
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
assigned-clocks = <&tegra_car TEGRA210_CLK_VI>;
assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_C4_OUT0>;
clocks = <&tegra_car TEGRA210_CLK_VI>;
power-domains = <&pd_venc>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x0 0x54080000 0x2000>;
csi@838 {
compatible = "nvidia,tegra210-csi";
reg = <0x838 0x1300>;
assigned-clocks = <&tegra_car TEGRA210_CLK_CILAB>,
<&tegra_car TEGRA210_CLK_CILCD>,
<&tegra_car TEGRA210_CLK_CILE>,
<&tegra_car TEGRA210_CLK_CSI_TPG>;
assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_P>,
<&tegra_car TEGRA210_CLK_PLL_P>,
<&tegra_car TEGRA210_CLK_PLL_P>;
assigned-clock-rates = <102000000>,
<102000000>,
<102000000>,
<972000000>;
clocks = <&tegra_car TEGRA210_CLK_CSI>,
<&tegra_car TEGRA210_CLK_CILAB>,
<&tegra_car TEGRA210_CLK_CILCD>,
<&tegra_car TEGRA210_CLK_CILE>,
<&tegra_car TEGRA210_CLK_CSI_TPG>;
clock-names = "csi", "cilab", "cilcd", "cile", "csi_tpg";
power-domains = <&pd_sor>;
};
};
epp {


@ -35,6 +35,12 @@ Required properties:
Due to above changes, Tegra114 I2C driver makes incompatible with
previous hardware driver. Hence, tegra114 I2C controller is compatible
with "nvidia,tegra114-i2c".
nvidia,tegra210-i2c-vi: Tegra210 has one I2C controller that is part of the
host1x domain and typically used for camera use-cases. This VI I2C
controller is mostly compatible with the programming model of the
regular I2C controllers with a few exceptions. The I2C registers start
at an offset of 0xc00 (instead of 0), registers are 16 bytes apart
(rather than 4) and the controller does not support slave mode.
- reg: Should contain I2C controller registers physical address and length.
- interrupts: Should contain I2C controller interrupts.
- address-cells: Address cells for I2C device address.
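
A small illustration of the register layout difference described for nvidia,tegra210-i2c-vi: with the regular controllers at stride 4 starting at 0, and the VI instance at stride 16 starting at 0xc00, a driver can translate the usual byte offset before touching the hardware. This mirrors the text above only and is not lifted from the i2c-tegra driver.

#include <linux/io.h>
#include <linux/types.h>

/* 'reg' is the byte offset as used on the regular controllers (registers
 * 4 bytes apart). On the VI instance the same register sits at
 * 0xc00 + 16 * (reg / 4), i.e. 0xc00 + (reg << 2). */
static u32 tegra_vi_i2c_readl(void __iomem *base, unsigned long reg)
{
        return readl(base + 0xc00 + (reg << 2));
}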


@ -0,0 +1,63 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
%YAML 1.2
---
$id: http://devicetree.org/schemas/memory-controllers/baikal,bt1-l2-ctl.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Baikal-T1 L2-cache Control Block
maintainers:
- Serge Semin <fancer.lancer@gmail.com>
description: |
By means of the System Controller Baikal-T1 SoC exposes a few settings to
tune the MIPS P5600 CM2 L2 cache performance up. In particular it's possible
to change the Tag, Data and Way-select RAM access latencies. Baikal-T1
L2-cache controller block is responsible for the tuning. Its DT node is
supposed to be a child of the system controller.
properties:
compatible:
const: baikal,bt1-l2-ctl
reg:
maxItems: 1
baikal,l2-ws-latency:
$ref: /schemas/types.yaml#/definitions/uint32
description: Cycles of latency for Way-select RAM accesses
default: 0
minimum: 0
maximum: 3
baikal,l2-tag-latency:
$ref: /schemas/types.yaml#/definitions/uint32
description: Cycles of latency for Tag RAM accesses
default: 0
minimum: 0
maximum: 3
baikal,l2-data-latency:
$ref: /schemas/types.yaml#/definitions/uint32
description: Cycles of latency for Data RAM accesses
default: 1
minimum: 0
maximum: 3
additionalProperties: false
required:
- compatible
examples:
- |
l2@1f04d028 {
compatible = "baikal,bt1-l2-ctl";
reg = <0x1f04d028 0x004>;
baikal,l2-ws-latency = <1>;
baikal,l2-tag-latency = <1>;
baikal,l2-data-latency = <2>;
};
...
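
A hedged sketch of how a driver for this node could program the latencies: since the node is a child of the system controller, the driver can grab the parent syscon regmap and update the Tag/Data/Way-select stall fields. The register offset and field masks below are placeholders chosen for illustration, not the actual Baikal-T1 layout.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>
#include <linux/types.h>

#define L2_CTL_REG              0x0             /* placeholder offset */
#define L2_CTL_WS_MASK          GENMASK(1, 0)   /* placeholder fields */
#define L2_CTL_TAG_MASK         GENMASK(3, 2)
#define L2_CTL_DATA_MASK        GENMASK(5, 4)

static int bt1_l2_apply_latency(struct device_node *np, u32 ws, u32 tag, u32 data)
{
        struct regmap *sys_regs;

        /* The node sits under the system controller, so look up the
         * parent's regmap rather than mapping registers directly. */
        sys_regs = syscon_node_to_regmap(np->parent);
        if (IS_ERR(sys_regs))
                return PTR_ERR(sys_regs);

        return regmap_update_bits(sys_regs, L2_CTL_REG,
                                  L2_CTL_WS_MASK | L2_CTL_TAG_MASK | L2_CTL_DATA_MASK,
                                  FIELD_PREP(L2_CTL_WS_MASK, ws) |
                                  FIELD_PREP(L2_CTL_TAG_MASK, tag) |
                                  FIELD_PREP(L2_CTL_DATA_MASK, data));
}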


@ -0,0 +1,82 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra210-emc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: NVIDIA Tegra210 SoC External Memory Controller
maintainers:
- Thierry Reding <thierry.reding@gmail.com>
- Jon Hunter <jonathanh@nvidia.com>
description: |
The EMC interfaces with the off-chip SDRAM to service the request stream
sent from the memory controller.
properties:
compatible:
const: nvidia,tegra210-emc
reg:
maxItems: 3
clocks:
items:
- description: external memory clock
clock-names:
items:
- const: emc
interrupts:
items:
- description: EMC general interrupt
memory-region:
$ref: /schemas/types.yaml#/definitions/phandle
description:
phandle to a reserved memory region describing the table of EMC
frequencies trained by the firmware
nvidia,memory-controller:
$ref: /schemas/types.yaml#/definitions/phandle
description:
phandle of the memory controller node
required:
- compatible
- reg
- clocks
- clock-names
- nvidia,memory-controller
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/tegra210-car.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
reserved-memory {
#address-cells = <1>;
#size-cells = <1>;
ranges;
emc_table: emc-table@83400000 {
compatible = "nvidia,tegra210-emc-table";
reg = <0x83400000 0x10000>;
};
};
external-memory-controller@7001b000 {
compatible = "nvidia,tegra210-emc";
reg = <0x7001b000 0x1000>,
<0x7001e000 0x1000>,
<0x7001f000 0x1000>;
clocks = <&tegra_car TEGRA210_CLK_EMC>;
clock-names = "emc";
interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
memory-region = <&emc_table>;
nvidia,memory-controller = <&mc>;
};
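
A hedged sketch of how the memory-region phandle above can be consumed: resolve it to the reserved region and map it so the driver can read the frequency tables trained by firmware. Parsing the table format itself is driver-specific and omitted here.

#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

static void *map_emc_training_table(struct device_node *emc_np, size_t *size)
{
        struct device_node *mem_np;
        struct resource res;
        int err;

        mem_np = of_parse_phandle(emc_np, "memory-region", 0);
        if (!mem_np)
                return NULL;

        err = of_address_to_resource(mem_np, 0, &res);
        of_node_put(mem_np);
        if (err)
                return NULL;

        *size = resource_size(&res);
        /* A write-back mapping is fine: this is ordinary RAM that the
         * firmware set aside and filled with the trained tables. */
        return memremap(res.start, *size, MEMREMAP_WB);
}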


@ -23,33 +23,31 @@ description: |+
properties:
compatible:
enum:
- amlogic,meson8-pwrc
- amlogic,meson8b-pwrc
- amlogic,meson8m2-pwrc
- amlogic,meson-gxbb-pwrc
- amlogic,meson-g12a-pwrc
- amlogic,meson-sm1-pwrc
clocks:
minItems: 2
minItems: 1
maxItems: 2
clock-names:
minItems: 1
maxItems: 2
items:
- const: vpu
- const: vapb
resets:
minItems: 11
maxItems: 12
reset-names:
items:
- const: viu
- const: venc
- const: vcbus
- const: bt656
- const: rdma
- const: venci
- const: vencp
- const: vdac
- const: vdi6
- const: vencl
- const: vid_lock
minItems: 11
maxItems: 12
"#power-domain-cells":
const: 1
@ -59,12 +57,86 @@ properties:
allOf:
- $ref: /schemas/types.yaml#/definitions/phandle
allOf:
- if:
properties:
compatible:
enum:
- amlogic,meson8b-pwrc
- amlogic,meson8m2-pwrc
then:
properties:
reset-names:
items:
- const: dblk
- const: pic_dc
- const: hdmi_apb
- const: hdmi_system
- const: venci
- const: vencp
- const: vdac
- const: vencl
- const: viu
- const: venc
- const: rdma
required:
- resets
- reset-names
- if:
properties:
compatible:
enum:
- amlogic,meson-gxbb-pwrc
then:
properties:
reset-names:
items:
- const: viu
- const: venc
- const: vcbus
- const: bt656
- const: dvin
- const: rdma
- const: venci
- const: vencp
- const: vdac
- const: vdi6
- const: vencl
- const: vid_lock
required:
- resets
- reset-names
- if:
properties:
compatible:
enum:
- amlogic,meson-g12a-pwrc
- amlogic,meson-sm1-pwrc
then:
properties:
reset-names:
items:
- const: viu
- const: venc
- const: vcbus
- const: bt656
- const: rdma
- const: venci
- const: vencp
- const: vdac
- const: vdi6
- const: vencl
- const: vid_lock
required:
- resets
- reset-names
required:
- compatible
- clocks
- clock-names
- resets
- reset-names
- "#power-domain-cells"
- amlogic,ao-sysctrl


@ -23,6 +23,7 @@ properties:
- qcom,sc7180-rpmhpd
- qcom,sdm845-rpmhpd
- qcom,sm8150-rpmhpd
- qcom,sm8250-rpmhpd
'#power-domain-cells':
const: 1


@ -9,6 +9,8 @@ Required properties:
- For i.MX7 SoCs should be "fsl,imx7d-src", "syscon"
- For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon"
- For i.MX8MM SoCs should be "fsl,imx8mm-src", "fsl,imx8mq-src", "syscon"
- For i.MX8MN SoCs should be "fsl,imx8mn-src", "fsl,imx8mq-src", "syscon"
- For i.MX8MP SoCs should be "fsl,imx8mp-src", "syscon"
- reg: should be register base and length as documented in the
datasheet
- interrupts: Should contain SRC interrupt
@ -49,4 +51,6 @@ Example:
For list of all valid reset indices see
<dt-bindings/reset/imx7-reset.h> for i.MX7,
<dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ and
<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM
<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM and
<dt-bindings/reset/imx8mq-reset.h> for i.MX8MN and
<dt-bindings/reset/imx8mp-reset.h> for i.MX8MP


@ -19,6 +19,7 @@ power-domains.
"qcom,sc7180-aoss-qmp"
"qcom,sdm845-aoss-qmp"
"qcom,sm8150-aoss-qmp"
"qcom,sm8250-aoss-qmp"
- reg:
Usage: required


@ -65,30 +65,30 @@ which uses apr as communication between Apps and QDSP.
compatible = "qcom,apr-v2";
qcom,apr-domain = <APR_DOMAIN_ADSP>;
q6core@3 {
apr-service@3 {
compatible = "qcom,q6core";
reg = <APR_SVC_ADSP_CORE>;
};
q6afe@4 {
apr-service@4 {
compatible = "qcom,q6afe";
reg = <APR_SVC_AFE>;
dais {
#sound-dai-cells = <1>;
hdmi@1 {
reg = <1>;
dai@1 {
reg = <HDMI_RX>;
};
};
};
q6asm@7 {
apr-service@7 {
compatible = "qcom,q6asm";
reg = <APR_SVC_ASM>;
...
};
q6adm@8 {
apr-service@8 {
compatible = "qcom,q6adm";
reg = <APR_SVC_ADM>;
...
@ -106,26 +106,26 @@ have no such dependency.
qcom,glink-channels = "apr_audio_svc";
qcom,apr-domain = <APR_DOMAIN_ADSP>;
q6core {
apr-service@3 {
compatible = "qcom,q6core";
reg = <APR_SVC_ADSP_CORE>;
};
q6afe: q6afe {
q6afe: apr-service@4 {
compatible = "qcom,q6afe";
reg = <APR_SVC_AFE>;
qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
...
};
q6asm: q6asm {
q6asm: apr-service@7 {
compatible = "qcom,q6asm";
reg = <APR_SVC_ASM>;
qcom,protection-domain = "tms/servreg", "msm/slpi/sensor_pd";
...
};
q6adm: q6adm {
q6adm: apr-service@8 {
compatible = "qcom,q6adm";
reg = <APR_SVC_ADM>;
qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";


@ -0,0 +1,40 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/soc/ti/k3-socinfo.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments K3 Multicore SoC platforms chipid module
maintainers:
- Tero Kristo <t-kristo@ti.com>
- Nishanth Menon <nm@ti.com>
description: |
Texas Instruments (ARM64) K3 Multicore SoC platforms chipid module is
represented by CTRLMMR_xxx_JTAGID register which contains information about
SoC id and revision.
properties:
$nodename:
pattern: "^chipid@[0-9a-f]+$"
compatible:
items:
- const: ti,am654-chipid
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
chipid@43000014 {
compatible = "ti,am654-chipid";
reg = <0x43000014 0x4>;
};
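
A hedged sketch of what a socinfo driver built on this binding does: read the JTAGID register and publish the decoded identification through the soc_device framework. The bitfield split follows the generic JTAG IDCODE layout (variant in bits 31:28, part number in bits 27:12), and the revision/ID string formats are assumptions for illustration only.

#include <linux/err.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/sys_soc.h>

static int k3_chipid_register(void __iomem *jtagid_reg)
{
        struct soc_device_attribute *attr;
        struct soc_device *soc_dev;
        u32 idcode = readl(jtagid_reg);

        attr = kzalloc(sizeof(*attr), GFP_KERNEL);
        if (!attr)
                return -ENOMEM;

        /* JTAG IDCODE: [31:28] variant/revision, [27:12] part number. */
        attr->revision = kasprintf(GFP_KERNEL, "SR%u.0", (idcode >> 28) + 1);
        attr->soc_id = kasprintf(GFP_KERNEL, "0x%04x", (idcode >> 12) & 0xffff);

        soc_dev = soc_device_register(attr);
        if (IS_ERR(soc_dev)) {
                kfree(attr->revision);
                kfree(attr->soc_id);
                kfree(attr);
                return PTR_ERR(soc_dev);
        }

        return 0;
}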


@ -16729,6 +16729,16 @@ M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/spi/spi-tegra*
TEGRA VIDEO DRIVER
M: Thierry Reding <thierry.reding@gmail.com>
M: Jonathan Hunter <jonathanh@nvidia.com>
M: Sowjanya Komatineni <skomatineni@nvidia.com>
L: linux-media@vger.kernel.org
L: linux-tegra@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
F: drivers/staging/media/tegra-video/
TEGRA XUSB PADCTL DRIVER
M: JC Kuo <jckuo@nvidia.com>
S: Supported


@ -248,7 +248,7 @@ config ARCH_TEGRA
This enables support for the NVIDIA Tegra SoC family.
config ARCH_SPRD
tristate "Spreadtrum SoC platform"
bool "Spreadtrum SoC platform"
help
Support for Spreadtrum ARM based SoCs


@ -38,6 +38,36 @@ config BRCMSTB_GISB_ARB
arbiter. This driver provides timeout and target abort error handling
and internal bus master decoding.
config BT1_APB
bool "Baikal-T1 APB-bus driver"
depends on MIPS_BAIKAL_T1 || COMPILE_TEST
select REGMAP_MMIO
help
Baikal-T1 AXI-APB bridge is used to access the SoC subsystem CSRs.
IO requests are routed to this bus by means of the DW AMBA 3 AXI
Interconnect. In case of any APB protocol collisions, slave device
not responding on timeout an IRQ is raised with an erroneous address
reported to the APB terminator (APB Errors Handler Block). This
driver provides the interrupt handler to detect the erroneous
address, prints an error message about the address fault, updates an
errors counter. The counter and the APB-bus operations timeout can be
accessed via corresponding sysfs nodes.
config BT1_AXI
bool "Baikal-T1 AXI-bus driver"
depends on MIPS_BAIKAL_T1 || COMPILE_TEST
select MFD_SYSCON
help
AXI3-bus is the main communication bus connecting all high-speed
peripheral IP-cores with RAM controller and with MIPS P5600 cores on
Baikal-T1 SoC. Traffic arbitration is done by means of DW AMBA 3 AXI
Interconnect (so called AXI Main Interconnect) routing IO requests
from one SoC block to another. This driver provides a way to detect
any bus protocol errors and device not responding situations by
means of an embedded on top of the interconnect errors handler
block (EHB). AXI Interconnect QoS arbitration tuning is currently
unsupported.
config MOXTET
tristate "CZ.NIC Turris Mox module configuration bus"
depends on SPI_MASTER && OF


@ -13,6 +13,8 @@ obj-$(CONFIG_MOXTET) += moxtet.o
# DPAA2 fsl-mc bus
obj-$(CONFIG_FSL_MC_BUS) += fsl-mc/
obj-$(CONFIG_BT1_APB) += bt1-apb.o
obj-$(CONFIG_BT1_AXI) += bt1-axi.o
obj-$(CONFIG_IMX_WEIM) += imx-weim.o
obj-$(CONFIG_MIPS_CDMM) += mips_cdmm.o
obj-$(CONFIG_MVEBU_MBUS) += mvebu-mbus.o


@ -0,0 +1,421 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
*
* Authors:
* Serge Semin <Sergey.Semin@baikalelectronics.ru>
*
* Baikal-T1 APB-bus driver
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/device.h>
#include <linux/atomic.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/nmi.h>
#include <linux/of.h>
#include <linux/regmap.h>
#include <linux/clk.h>
#include <linux/reset.h>
#include <linux/time64.h>
#include <linux/clk.h>
#include <linux/sysfs.h>
#define APB_EHB_ISR 0x00
#define APB_EHB_ISR_PENDING BIT(0)
#define APB_EHB_ISR_MASK BIT(1)
#define APB_EHB_ADDR 0x04
#define APB_EHB_TIMEOUT 0x08
#define APB_EHB_TIMEOUT_MIN 0x000003FFU
#define APB_EHB_TIMEOUT_MAX 0xFFFFFFFFU
/*
* struct bt1_apb - Baikal-T1 APB EHB private data
* @dev: Pointer to the device structure.
* @regs: APB EHB registers map.
* @res: No-device error injection memory region.
* @irq: Errors IRQ number.
* @rate: APB-bus reference clock rate.
* @pclk: APB-reference clock.
* @prst: APB domain reset line.
* @count: Number of errors detected.
*/
struct bt1_apb {
struct device *dev;
struct regmap *regs;
void __iomem *res;
int irq;
unsigned long rate;
struct clk *pclk;
struct reset_control *prst;
atomic_t count;
};
static const struct regmap_config bt1_apb_regmap_cfg = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
.max_register = APB_EHB_TIMEOUT,
.fast_io = true
};
static inline unsigned long bt1_apb_n_to_timeout_us(struct bt1_apb *apb, u32 n)
{
u64 timeout = (u64)n * USEC_PER_SEC;
do_div(timeout, apb->rate);
return timeout;
}
static inline unsigned long bt1_apb_timeout_to_n_us(struct bt1_apb *apb,
unsigned long timeout)
{
u64 n = (u64)timeout * apb->rate;
do_div(n, USEC_PER_SEC);
return n;
}
static irqreturn_t bt1_apb_isr(int irq, void *data)
{
struct bt1_apb *apb = data;
u32 addr = 0;
regmap_read(apb->regs, APB_EHB_ADDR, &addr);
dev_crit_ratelimited(apb->dev,
"APB-bus fault %d: Slave access timeout at 0x%08x\n",
atomic_inc_return(&apb->count),
addr);
/*
* Print backtrace on each CPU. This might be pointless if the fault
* has happened on the same CPU as the IRQ handler is executed or
* the other core proceeded further execution despite the error.
* But if it's not, by looking at the trace we would get straight to
* the cause of the problem.
*/
trigger_all_cpu_backtrace();
regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING, 0);
return IRQ_HANDLED;
}
static void bt1_apb_clear_data(void *data)
{
struct bt1_apb *apb = data;
struct platform_device *pdev = to_platform_device(apb->dev);
platform_set_drvdata(pdev, NULL);
}
static struct bt1_apb *bt1_apb_create_data(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct bt1_apb *apb;
int ret;
apb = devm_kzalloc(dev, sizeof(*apb), GFP_KERNEL);
if (!apb)
return ERR_PTR(-ENOMEM);
ret = devm_add_action(dev, bt1_apb_clear_data, apb);
if (ret) {
dev_err(dev, "Can't add APB EHB data clear action\n");
return ERR_PTR(ret);
}
apb->dev = dev;
atomic_set(&apb->count, 0);
platform_set_drvdata(pdev, apb);
return apb;
}
static int bt1_apb_request_regs(struct bt1_apb *apb)
{
struct platform_device *pdev = to_platform_device(apb->dev);
void __iomem *regs;
regs = devm_platform_ioremap_resource_byname(pdev, "ehb");
if (IS_ERR(regs)) {
dev_err(apb->dev, "Couldn't map APB EHB registers\n");
return PTR_ERR(regs);
}
apb->regs = devm_regmap_init_mmio(apb->dev, regs, &bt1_apb_regmap_cfg);
if (IS_ERR(apb->regs)) {
dev_err(apb->dev, "Couldn't create APB EHB regmap\n");
return PTR_ERR(apb->regs);
}
apb->res = devm_platform_ioremap_resource_byname(pdev, "nodev");
if (IS_ERR(apb->res))
dev_err(apb->dev, "Couldn't map reserved region\n");
return PTR_ERR_OR_ZERO(apb->res);
}
static int bt1_apb_request_rst(struct bt1_apb *apb)
{
int ret;
apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst");
if (IS_ERR(apb->prst)) {
dev_warn(apb->dev, "Couldn't get reset control line\n");
return PTR_ERR(apb->prst);
}
ret = reset_control_deassert(apb->prst);
if (ret)
dev_err(apb->dev, "Failed to deassert the reset line\n");
return ret;
}
static void bt1_apb_disable_clk(void *data)
{
struct bt1_apb *apb = data;
clk_disable_unprepare(apb->pclk);
}
static int bt1_apb_request_clk(struct bt1_apb *apb)
{
int ret;
apb->pclk = devm_clk_get(apb->dev, "pclk");
if (IS_ERR(apb->pclk)) {
dev_err(apb->dev, "Couldn't get APB clock descriptor\n");
return PTR_ERR(apb->pclk);
}
ret = clk_prepare_enable(apb->pclk);
if (ret) {
dev_err(apb->dev, "Couldn't enable the APB clock\n");
return ret;
}
ret = devm_add_action_or_reset(apb->dev, bt1_apb_disable_clk, apb);
if (ret) {
dev_err(apb->dev, "Can't add APB EHB clocks disable action\n");
return ret;
}
apb->rate = clk_get_rate(apb->pclk);
if (!apb->rate) {
dev_err(apb->dev, "Invalid clock rate\n");
return -EINVAL;
}
return 0;
}
static void bt1_apb_clear_irq(void *data)
{
struct bt1_apb *apb = data;
regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_MASK, 0);
}
static int bt1_apb_request_irq(struct bt1_apb *apb)
{
struct platform_device *pdev = to_platform_device(apb->dev);
int ret;
apb->irq = platform_get_irq(pdev, 0);
if (apb->irq < 0)
return apb->irq;
ret = devm_request_irq(apb->dev, apb->irq, bt1_apb_isr, IRQF_SHARED,
"bt1-apb", apb);
if (ret) {
dev_err(apb->dev, "Couldn't request APB EHB IRQ\n");
return ret;
}
ret = devm_add_action(apb->dev, bt1_apb_clear_irq, apb);
if (ret) {
dev_err(apb->dev, "Can't add APB EHB IRQs clear action\n");
return ret;
}
/* Unmask IRQ and clear it' pending flag. */
regmap_update_bits(apb->regs, APB_EHB_ISR,
APB_EHB_ISR_PENDING | APB_EHB_ISR_MASK,
APB_EHB_ISR_MASK);
return 0;
}
static ssize_t count_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct bt1_apb *apb = dev_get_drvdata(dev);
return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&apb->count));
}
static DEVICE_ATTR_RO(count);
static ssize_t timeout_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct bt1_apb *apb = dev_get_drvdata(dev);
unsigned long timeout;
int ret;
u32 n;
ret = regmap_read(apb->regs, APB_EHB_TIMEOUT, &n);
if (ret)
return ret;
timeout = bt1_apb_n_to_timeout_us(apb, n);
return scnprintf(buf, PAGE_SIZE, "%lu\n", timeout);
}
static ssize_t timeout_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct bt1_apb *apb = dev_get_drvdata(dev);
unsigned long timeout;
int ret;
u32 n;
if (kstrtoul(buf, 0, &timeout) < 0)
return -EINVAL;
n = bt1_apb_timeout_to_n_us(apb, timeout);
n = clamp(n, APB_EHB_TIMEOUT_MIN, APB_EHB_TIMEOUT_MAX);
ret = regmap_write(apb->regs, APB_EHB_TIMEOUT, n);
return ret ?: count;
}
static DEVICE_ATTR_RW(timeout);
static ssize_t inject_error_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "Error injection: nodev irq\n");
}
static ssize_t inject_error_store(struct device *dev,
struct device_attribute *attr,
const char *data, size_t count)
{
struct bt1_apb *apb = dev_get_drvdata(dev);
/*
* Either dummy read from the unmapped address in the APB IO area
* or manually set the IRQ status.
*/
if (sysfs_streq(data, "nodev"))
readl(apb->res);
else if (sysfs_streq(data, "irq"))
regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING,
APB_EHB_ISR_PENDING);
else
return -EINVAL;
return count;
}
static DEVICE_ATTR_RW(inject_error);
static struct attribute *bt1_apb_sysfs_attrs[] = {
&dev_attr_count.attr,
&dev_attr_timeout.attr,
&dev_attr_inject_error.attr,
NULL
};
ATTRIBUTE_GROUPS(bt1_apb_sysfs);
static void bt1_apb_remove_sysfs(void *data)
{
struct bt1_apb *apb = data;
device_remove_groups(apb->dev, bt1_apb_sysfs_groups);
}
static int bt1_apb_init_sysfs(struct bt1_apb *apb)
{
int ret;
ret = device_add_groups(apb->dev, bt1_apb_sysfs_groups);
if (ret) {
dev_err(apb->dev, "Failed to create EHB APB sysfs nodes\n");
return ret;
}
ret = devm_add_action_or_reset(apb->dev, bt1_apb_remove_sysfs, apb);
if (ret)
dev_err(apb->dev, "Can't add APB EHB sysfs remove action\n");
return ret;
}
static int bt1_apb_probe(struct platform_device *pdev)
{
struct bt1_apb *apb;
int ret;
apb = bt1_apb_create_data(pdev);
if (IS_ERR(apb))
return PTR_ERR(apb);
ret = bt1_apb_request_regs(apb);
if (ret)
return ret;
ret = bt1_apb_request_rst(apb);
if (ret)
return ret;
ret = bt1_apb_request_clk(apb);
if (ret)
return ret;
ret = bt1_apb_request_irq(apb);
if (ret)
return ret;
ret = bt1_apb_init_sysfs(apb);
if (ret)
return ret;
return 0;
}
static const struct of_device_id bt1_apb_of_match[] = {
{ .compatible = "baikal,bt1-apb" },
{ }
};
MODULE_DEVICE_TABLE(of, bt1_apb_of_match);
static struct platform_driver bt1_apb_driver = {
.probe = bt1_apb_probe,
.driver = {
.name = "bt1-apb",
.of_match_table = bt1_apb_of_match
}
};
module_platform_driver(bt1_apb_driver);
MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
MODULE_DESCRIPTION("Baikal-T1 APB-bus driver");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,314 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
*
* Authors:
* Serge Semin <Sergey.Semin@baikalelectronics.ru>
*
* Baikal-T1 AXI-bus driver
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/atomic.h>
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include <linux/mfd/syscon.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/nmi.h>
#include <linux/of.h>
#include <linux/clk.h>
#include <linux/reset.h>
#include <linux/sysfs.h>
#define BT1_AXI_WERRL 0x110
#define BT1_AXI_WERRH 0x114
#define BT1_AXI_WERRH_TYPE BIT(23)
#define BT1_AXI_WERRH_ADDR_FLD 24
#define BT1_AXI_WERRH_ADDR_MASK GENMASK(31, BT1_AXI_WERRH_ADDR_FLD)
/*
* struct bt1_axi - Baikal-T1 AXI-bus private data
* @dev: Pointer to the device structure.
* @qos_regs: AXI Interconnect QoS tuning registers.
* @sys_regs: Baikal-T1 System Controller registers map.
* @irq: Errors IRQ number.
* @aclk: AXI reference clock.
* @arst: AXI Interconnect reset line.
* @count: Number of errors detected.
*/
struct bt1_axi {
struct device *dev;
void __iomem *qos_regs;
struct regmap *sys_regs;
int irq;
struct clk *aclk;
struct reset_control *arst;
atomic_t count;
};
static irqreturn_t bt1_axi_isr(int irq, void *data)
{
struct bt1_axi *axi = data;
u32 low = 0, high = 0;
regmap_read(axi->sys_regs, BT1_AXI_WERRL, &low);
regmap_read(axi->sys_regs, BT1_AXI_WERRH, &high);
dev_crit_ratelimited(axi->dev,
"AXI-bus fault %d: %s at 0x%x%08x\n",
atomic_inc_return(&axi->count),
high & BT1_AXI_WERRH_TYPE ? "no slave" : "slave protocol error",
high, low);
/*
* Print backtrace on each CPU. This might be pointless if the fault
* has happened on the same CPU as the IRQ handler is executed or
* the other core proceeded further execution despite the error.
* But if it's not, by looking at the trace we would get straight to
* the cause of the problem.
*/
trigger_all_cpu_backtrace();
return IRQ_HANDLED;
}
static void bt1_axi_clear_data(void *data)
{
struct bt1_axi *axi = data;
struct platform_device *pdev = to_platform_device(axi->dev);
platform_set_drvdata(pdev, NULL);
}
static struct bt1_axi *bt1_axi_create_data(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct bt1_axi *axi;
int ret;
axi = devm_kzalloc(dev, sizeof(*axi), GFP_KERNEL);
if (!axi)
return ERR_PTR(-ENOMEM);
ret = devm_add_action(dev, bt1_axi_clear_data, axi);
if (ret) {
dev_err(dev, "Can't add AXI EHB data clear action\n");
return ERR_PTR(ret);
}
axi->dev = dev;
atomic_set(&axi->count, 0);
platform_set_drvdata(pdev, axi);
return axi;
}
static int bt1_axi_request_regs(struct bt1_axi *axi)
{
struct platform_device *pdev = to_platform_device(axi->dev);
struct device *dev = axi->dev;
axi->sys_regs = syscon_regmap_lookup_by_phandle(dev->of_node, "syscon");
if (IS_ERR(axi->sys_regs)) {
dev_err(dev, "Couldn't find syscon registers\n");
return PTR_ERR(axi->sys_regs);
}
axi->qos_regs = devm_platform_ioremap_resource_byname(pdev, "qos");
if (IS_ERR(axi->qos_regs))
dev_err(dev, "Couldn't map AXI-bus QoS registers\n");
return PTR_ERR_OR_ZERO(axi->qos_regs);
}
static int bt1_axi_request_rst(struct bt1_axi *axi)
{
int ret;
axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst");
if (IS_ERR(axi->arst)) {
dev_warn(axi->dev, "Couldn't get reset control line\n");
return PTR_ERR(axi->arst);
}
ret = reset_control_deassert(axi->arst);
if (ret)
dev_err(axi->dev, "Failed to deassert the reset line\n");
return ret;
}
static void bt1_axi_disable_clk(void *data)
{
struct bt1_axi *axi = data;
clk_disable_unprepare(axi->aclk);
}
static int bt1_axi_request_clk(struct bt1_axi *axi)
{
int ret;
axi->aclk = devm_clk_get(axi->dev, "aclk");
if (IS_ERR(axi->aclk)) {
dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n");
return PTR_ERR(axi->aclk);
}
ret = clk_prepare_enable(axi->aclk);
if (ret) {
dev_err(axi->dev, "Couldn't enable the AXI clock\n");
return ret;
}
ret = devm_add_action_or_reset(axi->dev, bt1_axi_disable_clk, axi);
if (ret)
dev_err(axi->dev, "Can't add AXI clock disable action\n");
return ret;
}
static int bt1_axi_request_irq(struct bt1_axi *axi)
{
struct platform_device *pdev = to_platform_device(axi->dev);
int ret;
axi->irq = platform_get_irq(pdev, 0);
if (axi->irq < 0)
return axi->irq;
ret = devm_request_irq(axi->dev, axi->irq, bt1_axi_isr, IRQF_SHARED,
"bt1-axi", axi);
if (ret)
dev_err(axi->dev, "Couldn't request AXI EHB IRQ\n");
return ret;
}
static ssize_t count_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct bt1_axi *axi = dev_get_drvdata(dev);
return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&axi->count));
}
static DEVICE_ATTR_RO(count);
static ssize_t inject_error_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "Error injection: bus unaligned\n");
}
static ssize_t inject_error_store(struct device *dev,
struct device_attribute *attr,
const char *data, size_t count)
{
struct bt1_axi *axi = dev_get_drvdata(dev);
/*
* Performing unaligned read from the memory will cause the CM2 bus
* error while unaligned writing - the AXI bus write error handled
* by this driver.
*/
if (sysfs_streq(data, "bus"))
readb(axi->qos_regs);
else if (sysfs_streq(data, "unaligned"))
writeb(0, axi->qos_regs);
else
return -EINVAL;
return count;
}
static DEVICE_ATTR_RW(inject_error);
static struct attribute *bt1_axi_sysfs_attrs[] = {
&dev_attr_count.attr,
&dev_attr_inject_error.attr,
NULL
};
ATTRIBUTE_GROUPS(bt1_axi_sysfs);
static void bt1_axi_remove_sysfs(void *data)
{
struct bt1_axi *axi = data;
device_remove_groups(axi->dev, bt1_axi_sysfs_groups);
}
static int bt1_axi_init_sysfs(struct bt1_axi *axi)
{
int ret;
ret = device_add_groups(axi->dev, bt1_axi_sysfs_groups);
if (ret) {
dev_err(axi->dev, "Failed to add sysfs files group\n");
return ret;
}
ret = devm_add_action_or_reset(axi->dev, bt1_axi_remove_sysfs, axi);
if (ret)
dev_err(axi->dev, "Can't add AXI EHB sysfs remove action\n");
return ret;
}
static int bt1_axi_probe(struct platform_device *pdev)
{
struct bt1_axi *axi;
int ret;
axi = bt1_axi_create_data(pdev);
if (IS_ERR(axi))
return PTR_ERR(axi);
ret = bt1_axi_request_regs(axi);
if (ret)
return ret;
ret = bt1_axi_request_rst(axi);
if (ret)
return ret;
ret = bt1_axi_request_clk(axi);
if (ret)
return ret;
ret = bt1_axi_request_irq(axi);
if (ret)
return ret;
ret = bt1_axi_init_sysfs(axi);
if (ret)
return ret;
return 0;
}
static const struct of_device_id bt1_axi_of_match[] = {
{ .compatible = "baikal,bt1-axi" },
{ }
};
MODULE_DEVICE_TABLE(of, bt1_axi_of_match);
static struct platform_driver bt1_axi_driver = {
.probe = bt1_axi_probe,
.driver = {
.name = "bt1-axi",
.of_match_table = bt1_axi_of_match
}
};
module_platform_driver(bt1_axi_driver);
MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
MODULE_DESCRIPTION("Baikal-T1 AXI-bus driver");
MODULE_LICENSE("GPL v2");


@ -105,7 +105,7 @@ obj-$(CONFIG_CLK_SIFIVE) += sifive/
obj-$(CONFIG_ARCH_SIRF) += sirf/
obj-$(CONFIG_ARCH_SOCFPGA) += socfpga/
obj-$(CONFIG_PLAT_SPEAR) += spear/
obj-$(CONFIG_ARCH_SPRD) += sprd/
obj-y += sprd/
obj-$(CONFIG_ARCH_STI) += st/
obj-$(CONFIG_ARCH_STRATIX10) += socfpga/
obj-$(CONFIG_ARCH_SUNXI) += sunxi/


@ -274,6 +274,13 @@ config COMMON_CLK_MT8173
---help---
This driver supports MediaTek MT8173 clocks.
config COMMON_CLK_MT8173_MMSYS
bool "Clock driver for MediaTek MT8173 mmsys"
depends on COMMON_CLK_MT8173
default COMMON_CLK_MT8173
help
This driver supports MediaTek MT8173 mmsys clocks.
config COMMON_CLK_MT8183
bool "Clock driver for MediaTek MT8183"
depends on (ARCH_MEDIATEK && ARM64) || COMPILE_TEST


@ -41,6 +41,7 @@ obj-$(CONFIG_COMMON_CLK_MT7629_ETHSYS) += clk-mt7629-eth.o
obj-$(CONFIG_COMMON_CLK_MT7629_HIFSYS) += clk-mt7629-hif.o
obj-$(CONFIG_COMMON_CLK_MT8135) += clk-mt8135.o
obj-$(CONFIG_COMMON_CLK_MT8173) += clk-mt8173.o
obj-$(CONFIG_COMMON_CLK_MT8173_MMSYS) += clk-mt8173-mm.o
obj-$(CONFIG_COMMON_CLK_MT8183) += clk-mt8183.o
obj-$(CONFIG_COMMON_CLK_MT8183_AUDIOSYS) += clk-mt8183-audio.o
obj-$(CONFIG_COMMON_CLK_MT8183_CAMSYS) += clk-mt8183-cam.o


@ -79,16 +79,12 @@ static const struct mtk_gate mm_clks[] = {
GATE_DISP1(CLK_MM_TVE_FMM, "mm_tve_fmm", "mm_sel", 14),
};
static const struct of_device_id of_match_clk_mt2701_mm[] = {
{ .compatible = "mediatek,mt2701-mmsys", },
{}
};
static int clk_mt2701_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
struct clk_onecell_data *clk_data;
int r;
struct device_node *node = pdev->dev.of_node;
clk_data = mtk_alloc_clk_data(CLK_MM_NR);
@ -108,7 +104,6 @@ static struct platform_driver clk_mt2701_mm_drv = {
.probe = clk_mt2701_mm_probe,
.driver = {
.name = "clk-mt2701-mm",
.of_match_table = of_match_clk_mt2701_mm,
},
};


@ -128,9 +128,10 @@ static const struct mtk_gate mm_clks[] = {
static int clk_mt2712_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
struct clk_onecell_data *clk_data;
int r;
struct device_node *node = pdev->dev.of_node;
clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
@ -146,16 +147,10 @@ static int clk_mt2712_mm_probe(struct platform_device *pdev)
return r;
}
static const struct of_device_id of_match_clk_mt2712_mm[] = {
{ .compatible = "mediatek,mt2712-mmsys", },
{}
};
static struct platform_driver clk_mt2712_mm_drv = {
.probe = clk_mt2712_mm_probe,
.driver = {
.name = "clk-mt2712-mm",
.of_match_table = of_match_clk_mt2712_mm,
},
};


@ -84,15 +84,11 @@ static const struct mtk_gate mm_clks[] = {
GATE_MM1(CLK_MM_DISP_OVL_FBDC, "mm_disp_ovl_fbdc", "mm_sel", 16),
};
static const struct of_device_id of_match_clk_mt6779_mm[] = {
{ .compatible = "mediatek,mt6779-mmsys", },
{}
};
static int clk_mt6779_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
struct clk_onecell_data *clk_data;
struct device_node *node = pdev->dev.of_node;
clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
@ -106,7 +102,6 @@ static struct platform_driver clk_mt6779_mm_drv = {
.probe = clk_mt6779_mm_probe,
.driver = {
.name = "clk-mt6779-mm",
.of_match_table = of_match_clk_mt6779_mm,
},
};


@ -92,16 +92,12 @@ static const struct mtk_gate mm_clks[] = {
"clk26m", 3),
};
static const struct of_device_id of_match_clk_mt6797_mm[] = {
{ .compatible = "mediatek,mt6797-mmsys", },
{}
};
static int clk_mt6797_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
struct clk_onecell_data *clk_data;
int r;
struct device_node *node = pdev->dev.of_node;
clk_data = mtk_alloc_clk_data(CLK_MM_NR);
@ -121,7 +117,6 @@ static struct platform_driver clk_mt6797_mm_drv = {
.probe = clk_mt6797_mm_probe,
.driver = {
.name = "clk-mt6797-mm",
.of_match_table = of_match_clk_mt6797_mm,
},
};


@ -0,0 +1,146 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014 MediaTek Inc.
* Author: James Liao <jamesjj.liao@mediatek.com>
*/
#include <linux/clk-provider.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include "clk-gate.h"
#include "clk-mtk.h"
#include <dt-bindings/clock/mt8173-clk.h>
static const struct mtk_gate_regs mm0_cg_regs = {
.set_ofs = 0x0104,
.clr_ofs = 0x0108,
.sta_ofs = 0x0100,
};
static const struct mtk_gate_regs mm1_cg_regs = {
.set_ofs = 0x0114,
.clr_ofs = 0x0118,
.sta_ofs = 0x0110,
};
#define GATE_MM0(_id, _name, _parent, _shift) { \
.id = _id, \
.name = _name, \
.parent_name = _parent, \
.regs = &mm0_cg_regs, \
.shift = _shift, \
.ops = &mtk_clk_gate_ops_setclr, \
}
#define GATE_MM1(_id, _name, _parent, _shift) { \
.id = _id, \
.name = _name, \
.parent_name = _parent, \
.regs = &mm1_cg_regs, \
.shift = _shift, \
.ops = &mtk_clk_gate_ops_setclr, \
}
static const struct mtk_gate mt8173_mm_clks[] = {
/* MM0 */
GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0),
GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1),
GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2),
GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3),
GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4),
GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5),
GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6),
GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7),
GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8),
GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9),
GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11),
GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12),
GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13),
GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14),
GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15),
GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16),
GATE_MM0(CLK_MM_DISP_OVL1, "mm_disp_ovl1", "mm_sel", 17),
GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18),
GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19),
GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20),
GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21),
GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22),
GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23),
GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24),
GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25),
GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26),
GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27),
GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28),
GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29),
GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30),
GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31),
/* MM1 */
GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0),
GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1),
GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2),
GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3),
GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4),
GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5),
GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6),
GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7),
GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8),
GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9),
GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10),
GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11),
GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12),
GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13),
GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14),
GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15),
GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16),
GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17),
GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18),
GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19),
GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20),
};
struct clk_mt8173_mm_driver_data {
const struct mtk_gate *gates_clk;
int gates_num;
};
static const struct clk_mt8173_mm_driver_data mt8173_mmsys_driver_data = {
.gates_clk = mt8173_mm_clks,
.gates_num = ARRAY_SIZE(mt8173_mm_clks),
};
static int clk_mt8173_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
const struct clk_mt8173_mm_driver_data *data;
struct clk_onecell_data *clk_data;
int ret;
clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
if (!clk_data)
return -ENOMEM;
data = &mt8173_mmsys_driver_data;
ret = mtk_clk_register_gates(node, data->gates_clk, data->gates_num,
clk_data);
if (ret)
return ret;
ret = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
if (ret)
return ret;
return 0;
}
static struct platform_driver clk_mt8173_mm_drv = {
.driver = {
.name = "clk-mt8173-mm",
},
.probe = clk_mt8173_mm_probe,
};
builtin_platform_driver(clk_mt8173_mm_drv);


@ -753,93 +753,6 @@ static const struct mtk_gate img_clks[] __initconst = {
GATE_IMG(CLK_IMG_FD, "img_fd", "mm_sel", 11),
};
static const struct mtk_gate_regs mm0_cg_regs __initconst = {
.set_ofs = 0x0104,
.clr_ofs = 0x0108,
.sta_ofs = 0x0100,
};
static const struct mtk_gate_regs mm1_cg_regs __initconst = {
.set_ofs = 0x0114,
.clr_ofs = 0x0118,
.sta_ofs = 0x0110,
};
#define GATE_MM0(_id, _name, _parent, _shift) { \
.id = _id, \
.name = _name, \
.parent_name = _parent, \
.regs = &mm0_cg_regs, \
.shift = _shift, \
.ops = &mtk_clk_gate_ops_setclr, \
}
#define GATE_MM1(_id, _name, _parent, _shift) { \
.id = _id, \
.name = _name, \
.parent_name = _parent, \
.regs = &mm1_cg_regs, \
.shift = _shift, \
.ops = &mtk_clk_gate_ops_setclr, \
}
static const struct mtk_gate mm_clks[] __initconst = {
/* MM0 */
GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0),
GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1),
GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2),
GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3),
GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4),
GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5),
GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6),
GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7),
GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8),
GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9),
GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11),
GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12),
GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13),
GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14),
GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15),
GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16),
GATE_MM0(CLK_MM_DISP_OVL1, "mm_disp_ovl1", "mm_sel", 17),
GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18),
GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19),
GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20),
GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21),
GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22),
GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23),
GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24),
GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25),
GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26),
GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27),
GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28),
GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29),
GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30),
GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31),
/* MM1 */
GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0),
GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1),
GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2),
GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3),
GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4),
GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5),
GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6),
GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7),
GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8),
GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9),
GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10),
GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11),
GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12),
GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13),
GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14),
GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15),
GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16),
GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17),
GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18),
GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19),
GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20),
};
static const struct mtk_gate_regs vdec0_cg_regs __initconst = {
.set_ofs = 0x0000,
.clr_ofs = 0x0004,
@ -1144,23 +1057,6 @@ static void __init mtk_imgsys_init(struct device_node *node)
}
CLK_OF_DECLARE(mtk_imgsys, "mediatek,mt8173-imgsys", mtk_imgsys_init);
static void __init mtk_mmsys_init(struct device_node *node)
{
struct clk_onecell_data *clk_data;
int r;
clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
mtk_clk_register_gates(node, mm_clks, ARRAY_SIZE(mm_clks),
clk_data);
r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
if (r)
pr_err("%s(): could not register clock provider: %d\n",
__func__, r);
}
CLK_OF_DECLARE(mtk_mmsys, "mediatek,mt8173-mmsys", mtk_mmsys_init);
static void __init mtk_vdecsys_init(struct device_node *node)
{
struct clk_onecell_data *clk_data;


@ -84,8 +84,9 @@ static const struct mtk_gate mm_clks[] = {
static int clk_mt8183_mm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->parent->of_node;
struct clk_onecell_data *clk_data;
struct device_node *node = pdev->dev.of_node;
clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
@ -95,16 +96,10 @@ static int clk_mt8183_mm_probe(struct platform_device *pdev)
return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
}
static const struct of_device_id of_match_clk_mt8183_mm[] = {
{ .compatible = "mediatek,mt8183-mmsys", },
{}
};
static struct platform_driver clk_mt8183_mm_drv = {
.probe = clk_mt8183_mm_probe,
.driver = {
.name = "clk-mt8183-mm",
.of_match_table = of_match_clk_mt8183_mm,
},
};


@ -295,11 +295,11 @@ config ARM_TANGO_CPUFREQ
default y
config ARM_TEGRA20_CPUFREQ
tristate "Tegra20 CPUFreq support"
depends on ARCH_TEGRA
tristate "Tegra20/30 CPUFreq support"
depends on ARCH_TEGRA && CPUFREQ_DT
default y
help
This adds the CPUFreq driver support for Tegra20 SOCs.
This adds the CPUFreq driver support for Tegra20/30 SOCs.
config ARM_TEGRA124_CPUFREQ
bool "Tegra124 CPUFreq support"


@ -7,201 +7,96 @@
* Based on arch/arm/plat-omap/cpu-omap.c, (C) 2005 Nokia Corporation
*/
#include <linux/clk.h>
#include <linux/cpufreq.h>
#include <linux/bits.h>
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/types.h>
static struct cpufreq_frequency_table freq_table[] = {
{ .frequency = 216000 },
{ .frequency = 312000 },
{ .frequency = 456000 },
{ .frequency = 608000 },
{ .frequency = 760000 },
{ .frequency = 816000 },
{ .frequency = 912000 },
{ .frequency = 1000000 },
{ .frequency = CPUFREQ_TABLE_END },
};
#include <soc/tegra/common.h>
#include <soc/tegra/fuse.h>
struct tegra20_cpufreq {
struct device *dev;
struct cpufreq_driver driver;
struct clk *cpu_clk;
struct clk *pll_x_clk;
struct clk *pll_p_clk;
bool pll_x_prepared;
};
static unsigned int tegra_get_intermediate(struct cpufreq_policy *policy,
unsigned int index)
static bool cpu0_node_has_opp_v2_prop(void)
{
struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000;
struct device_node *np = of_cpu_device_node_get(0);
bool ret = false;
/*
* Don't switch to intermediate freq if:
* - we are already at it, i.e. policy->cur == ifreq
* - index corresponds to ifreq
*/
if (freq_table[index].frequency == ifreq || policy->cur == ifreq)
return 0;
return ifreq;
}
static int tegra_target_intermediate(struct cpufreq_policy *policy,
unsigned int index)
{
struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
int ret;
/*
* Take an extra reference to the main pll so it doesn't turn
* off when we move the cpu off of it as enabling it again while we
* switch to it from tegra_target() would take additional time.
*
* When target-freq is equal to intermediate freq we don't need to
* switch to an intermediate freq and so this routine isn't called.
* Also, we wouldn't be using pll_x anymore and must not take extra
* reference to it, as it can be disabled now to save some power.
*/
clk_prepare_enable(cpufreq->pll_x_clk);
ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk);
if (ret)
clk_disable_unprepare(cpufreq->pll_x_clk);
else
cpufreq->pll_x_prepared = true;
if (of_get_property(np, "operating-points-v2", NULL))
ret = true;
of_node_put(np);
return ret;
}
static int tegra_target(struct cpufreq_policy *policy, unsigned int index)
{
struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
unsigned long rate = freq_table[index].frequency;
unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000;
int ret;
/*
* target freq == pll_p, don't need to take extra reference to pll_x_clk
* as it isn't used anymore.
*/
if (rate == ifreq)
return clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk);
ret = clk_set_rate(cpufreq->pll_x_clk, rate * 1000);
/* Restore to earlier frequency on error, i.e. pll_x */
if (ret)
dev_err(cpufreq->dev, "Failed to change pll_x to %lu\n", rate);
ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_x_clk);
/* This shouldn't fail while changing or restoring */
WARN_ON(ret);
/*
* Drop count to pll_x clock only if we switched to intermediate freq
* earlier while transitioning to a target frequency.
*/
if (cpufreq->pll_x_prepared) {
clk_disable_unprepare(cpufreq->pll_x_clk);
cpufreq->pll_x_prepared = false;
}
return ret;
}
static int tegra_cpu_init(struct cpufreq_policy *policy)
{
struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
clk_prepare_enable(cpufreq->cpu_clk);
/* FIXME: what's the actual transition time? */
cpufreq_generic_init(policy, freq_table, 300 * 1000);
policy->clk = cpufreq->cpu_clk;
policy->suspend_freq = freq_table[0].frequency;
return 0;
}
static int tegra_cpu_exit(struct cpufreq_policy *policy)
{
struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
clk_disable_unprepare(cpufreq->cpu_clk);
return 0;
}
static int tegra20_cpufreq_probe(struct platform_device *pdev)
{
struct tegra20_cpufreq *cpufreq;
struct platform_device *cpufreq_dt;
struct opp_table *opp_table;
struct device *cpu_dev;
u32 versions[2];
int err;
cpufreq = devm_kzalloc(&pdev->dev, sizeof(*cpufreq), GFP_KERNEL);
if (!cpufreq)
return -ENOMEM;
cpufreq->cpu_clk = clk_get_sys(NULL, "cclk");
if (IS_ERR(cpufreq->cpu_clk))
return PTR_ERR(cpufreq->cpu_clk);
cpufreq->pll_x_clk = clk_get_sys(NULL, "pll_x");
if (IS_ERR(cpufreq->pll_x_clk)) {
err = PTR_ERR(cpufreq->pll_x_clk);
goto put_cpu;
if (!cpu0_node_has_opp_v2_prop()) {
dev_err(&pdev->dev, "operating points not found\n");
dev_err(&pdev->dev, "please update your device tree\n");
return -ENODEV;
}
cpufreq->pll_p_clk = clk_get_sys(NULL, "pll_p");
if (IS_ERR(cpufreq->pll_p_clk)) {
err = PTR_ERR(cpufreq->pll_p_clk);
goto put_pll_x;
if (of_machine_is_compatible("nvidia,tegra20")) {
versions[0] = BIT(tegra_sku_info.cpu_process_id);
versions[1] = BIT(tegra_sku_info.soc_speedo_id);
} else {
versions[0] = BIT(tegra_sku_info.cpu_process_id);
versions[1] = BIT(tegra_sku_info.cpu_speedo_id);
}
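/*
 * These two bitmasks are later handed to dev_pm_opp_set_supported_hw()
 * and matched against the "opp-supported-hw" values in the CPU's OPP
 * table, so only the OPPs valid for this process/speedo bin remain
 * enabled.
 */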
cpufreq->dev = &pdev->dev;
cpufreq->driver.get = cpufreq_generic_get;
cpufreq->driver.attr = cpufreq_generic_attr;
cpufreq->driver.init = tegra_cpu_init;
cpufreq->driver.exit = tegra_cpu_exit;
cpufreq->driver.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK;
cpufreq->driver.verify = cpufreq_generic_frequency_table_verify;
cpufreq->driver.suspend = cpufreq_generic_suspend;
cpufreq->driver.driver_data = cpufreq;
cpufreq->driver.target_index = tegra_target;
cpufreq->driver.get_intermediate = tegra_get_intermediate;
cpufreq->driver.target_intermediate = tegra_target_intermediate;
snprintf(cpufreq->driver.name, CPUFREQ_NAME_LEN, "tegra");
dev_info(&pdev->dev, "hardware version 0x%x 0x%x\n",
versions[0], versions[1]);
err = cpufreq_register_driver(&cpufreq->driver);
if (err)
goto put_pll_p;
cpu_dev = get_cpu_device(0);
if (WARN_ON(!cpu_dev))
return -ENODEV;
platform_set_drvdata(pdev, cpufreq);
opp_table = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2);
err = PTR_ERR_OR_ZERO(opp_table);
if (err) {
dev_err(&pdev->dev, "failed to set supported hw: %d\n", err);
return err;
}
cpufreq_dt = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
err = PTR_ERR_OR_ZERO(cpufreq_dt);
if (err) {
dev_err(&pdev->dev,
"failed to create cpufreq-dt device: %d\n", err);
goto err_put_supported_hw;
}
platform_set_drvdata(pdev, cpufreq_dt);
return 0;
put_pll_p:
clk_put(cpufreq->pll_p_clk);
put_pll_x:
clk_put(cpufreq->pll_x_clk);
put_cpu:
clk_put(cpufreq->cpu_clk);
err_put_supported_hw:
dev_pm_opp_put_supported_hw(opp_table);
return err;
}
static int tegra20_cpufreq_remove(struct platform_device *pdev)
{
struct tegra20_cpufreq *cpufreq = platform_get_drvdata(pdev);
struct platform_device *cpufreq_dt;
struct opp_table *opp_table;
cpufreq_unregister_driver(&cpufreq->driver);
cpufreq_dt = platform_get_drvdata(pdev);
platform_device_unregister(cpufreq_dt);
clk_put(cpufreq->pll_p_clk);
clk_put(cpufreq->pll_x_clk);
clk_put(cpufreq->cpu_clk);
opp_table = dev_pm_opp_get_opp_table(get_cpu_device(0));
dev_pm_opp_put_supported_hw(opp_table);
dev_pm_opp_put_opp_table(opp_table);
return 0;
}

View File

@ -365,7 +365,6 @@ static int tegra_cpuidle_probe(struct platform_device *pdev)
break;
case TEGRA30:
tegra_cpuidle_disable_state(TEGRA_CC6);
break;
case TEGRA114:

View File

@ -2,6 +2,8 @@
obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
scmi-bus-y = bus.o
scmi-driver-y = driver.o
scmi-transport-y = mailbox.o shmem.o
scmi-transport-y = shmem.o
scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
scmi-transport-$(CONFIG_ARM_PSCI_FW) += smc.o
scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o

View File

@ -14,6 +14,13 @@ enum scmi_base_protocol_cmd {
BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
BASE_DISCOVER_AGENT = 0x7,
BASE_NOTIFY_ERRORS = 0x8,
BASE_SET_DEVICE_PERMISSIONS = 0x9,
BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
BASE_RESET_AGENT_CONFIGURATION = 0xb,
};
enum scmi_base_protocol_notify {
BASE_ERROR_EVENT = 0x0,
};
struct scmi_msg_resp_base_attributes {

View File

@ -178,6 +178,8 @@ struct scmi_chan_info {
* @send_message: Callback to send a message
* @mark_txdone: Callback to mark tx as done
* @fetch_response: Callback to fetch response
* @fetch_notification: Callback to fetch notification
* @clear_channel: Callback to clear a channel
* @poll_done: Callback to poll transfer status
*/
struct scmi_transport_ops {
@ -190,6 +192,9 @@ struct scmi_transport_ops {
void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret);
void (*fetch_response)(struct scmi_chan_info *cinfo,
struct scmi_xfer *xfer);
void (*fetch_notification)(struct scmi_chan_info *cinfo,
size_t max_len, struct scmi_xfer *xfer);
void (*clear_channel)(struct scmi_chan_info *cinfo);
bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
};
@ -210,6 +215,9 @@ struct scmi_desc {
};
extern const struct scmi_desc scmi_mailbox_desc;
#ifdef CONFIG_HAVE_ARM_SMCCC
extern const struct scmi_desc scmi_smc_desc;
#endif
void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr);
void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id);
@ -222,5 +230,8 @@ void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);
void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
size_t max_len, struct scmi_xfer *xfer);
void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);

View File

@ -76,6 +76,7 @@ struct scmi_xfers_info {
* implementation version and (sub-)vendor identification.
* @handle: Instance of SCMI handle to send to clients
* @tx_minfo: Universal Transmit Message management info
* @rx_minfo: Universal Receive Message management info
* @tx_idr: IDR object to map protocol id to Tx channel info pointer
* @rx_idr: IDR object to map protocol id to Rx channel info pointer
* @protocols_imp: List of protocols implemented, currently maximum of
@ -89,6 +90,7 @@ struct scmi_info {
struct scmi_revision_info version;
struct scmi_handle handle;
struct scmi_xfers_info tx_minfo;
struct scmi_xfers_info rx_minfo;
struct idr tx_idr;
struct idr rx_idr;
u8 *protocols_imp;
@ -200,6 +202,83 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
spin_unlock_irqrestore(&minfo->xfer_lock, flags);
}
static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
{
struct scmi_xfer *xfer;
struct device *dev = cinfo->dev;
struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
struct scmi_xfers_info *minfo = &info->rx_minfo;
xfer = scmi_xfer_get(cinfo->handle, minfo);
if (IS_ERR(xfer)) {
dev_err(dev, "failed to get free message slot (%ld)\n",
PTR_ERR(xfer));
info->desc->ops->clear_channel(cinfo);
return;
}
unpack_scmi_header(msg_hdr, &xfer->hdr);
scmi_dump_header_dbg(dev, &xfer->hdr);
info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
xfer);
trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
MSG_TYPE_NOTIFICATION);
__scmi_xfer_put(minfo, xfer);
info->desc->ops->clear_channel(cinfo);
}
static void scmi_handle_response(struct scmi_chan_info *cinfo,
u16 xfer_id, u8 msg_type)
{
struct scmi_xfer *xfer;
struct device *dev = cinfo->dev;
struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
struct scmi_xfers_info *minfo = &info->tx_minfo;
/* Are we even expecting this? */
if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
dev_err(dev, "message for %d is not expected!\n", xfer_id);
info->desc->ops->clear_channel(cinfo);
return;
}
xfer = &minfo->xfer_block[xfer_id];
/*
* Even if a response was indeed expected on this slot at this point,
* a buggy platform could wrongly reply feeding us an unexpected
* delayed response we're not prepared to handle: bail-out safely
* blaming firmware.
*/
if (unlikely(msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done)) {
dev_err(dev,
"Delayed Response for %d not expected! Buggy F/W ?\n",
xfer_id);
info->desc->ops->clear_channel(cinfo);
/* It was unexpected, so nobody will clear the xfer if not us */
__scmi_xfer_put(minfo, xfer);
return;
}
scmi_dump_header_dbg(dev, &xfer->hdr);
info->desc->ops->fetch_response(cinfo, xfer);
trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
msg_type);
if (msg_type == MSG_TYPE_DELAYED_RESP) {
info->desc->ops->clear_channel(cinfo);
complete(xfer->async_done);
} else {
complete(&xfer->done);
}
}
/**
* scmi_rx_callback() - callback for receiving messages
*
@ -214,36 +293,21 @@ __scmi_xfer_put(struct scmi_xfers_info *minfo, struct scmi_xfer *xfer)
*/
void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
{
struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
struct scmi_xfers_info *minfo = &info->tx_minfo;
u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
struct device *dev = cinfo->dev;
struct scmi_xfer *xfer;
if (msg_type == MSG_TYPE_NOTIFICATION)
return; /* Notifications not yet supported */
/* Are we even expecting this? */
if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
dev_err(dev, "message for %d is not expected!\n", xfer_id);
return;
switch (msg_type) {
case MSG_TYPE_NOTIFICATION:
scmi_handle_notification(cinfo, msg_hdr);
break;
case MSG_TYPE_COMMAND:
case MSG_TYPE_DELAYED_RESP:
scmi_handle_response(cinfo, xfer_id, msg_type);
break;
default:
WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type);
break;
}
xfer = &minfo->xfer_block[xfer_id];
scmi_dump_header_dbg(dev, &xfer->hdr);
info->desc->ops->fetch_response(cinfo, xfer);
trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
msg_type);
if (msg_type == MSG_TYPE_DELAYED_RESP)
complete(xfer->async_done);
else
complete(&xfer->done);
}
/**
@ -525,13 +589,13 @@ int scmi_handle_put(const struct scmi_handle *handle)
return 0;
}
static int scmi_xfer_info_init(struct scmi_info *sinfo)
static int __scmi_xfer_info_init(struct scmi_info *sinfo,
struct scmi_xfers_info *info)
{
int i;
struct scmi_xfer *xfer;
struct device *dev = sinfo->dev;
const struct scmi_desc *desc = sinfo->desc;
struct scmi_xfers_info *info = &sinfo->tx_minfo;
/* Pre-allocated messages, no more than what hdr.seq can support */
if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
@ -566,6 +630,16 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
return 0;
}
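/*
 * The Rx transfer pool is only needed when the platform provides a
 * dedicated receive channel (used for notifications and delayed
 * responses), hence the idr_find() check below.
 */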
static int scmi_xfer_info_init(struct scmi_info *sinfo)
{
int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo);
if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE))
ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo);
return ret;
}
static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
int prot_id, bool tx)
{
@ -699,10 +773,6 @@ static int scmi_probe(struct platform_device *pdev)
info->desc = desc;
INIT_LIST_HEAD(&info->node);
ret = scmi_xfer_info_init(info);
if (ret)
return ret;
platform_set_drvdata(pdev, info);
idr_init(&info->tx_idr);
idr_init(&info->rx_idr);
@ -715,6 +785,10 @@ static int scmi_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = scmi_xfer_info_init(info);
if (ret)
return ret;
ret = scmi_base_protocol_init(handle);
if (ret) {
dev_err(dev, "unable to communicate with SCMI(%d)\n", ret);
@ -827,6 +901,9 @@ ATTRIBUTE_GROUPS(versions);
/* Each compatible listed below must have descriptor associated with it */
static const struct of_device_id scmi_of_match[] = {
{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
#ifdef CONFIG_ARM_PSCI_FW
{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
#endif
{ /* Sentinel */ },
};

View File

@ -158,6 +158,21 @@ static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
shmem_fetch_response(smbox->shmem, xfer);
}
static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
size_t max_len, struct scmi_xfer *xfer)
{
struct scmi_mailbox *smbox = cinfo->transport_info;
shmem_fetch_notification(smbox->shmem, max_len, xfer);
}
static void mailbox_clear_channel(struct scmi_chan_info *cinfo)
{
struct scmi_mailbox *smbox = cinfo->transport_info;
shmem_clear_channel(smbox->shmem);
}
static bool
mailbox_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
{
@ -173,6 +188,8 @@ static struct scmi_transport_ops scmi_mailbox_ops = {
.send_message = mailbox_send_message,
.mark_txdone = mailbox_mark_txdone,
.fetch_response = mailbox_fetch_response,
.fetch_notification = mailbox_fetch_notification,
.clear_channel = mailbox_clear_channel,
.poll_done = mailbox_poll_done,
};

View File

@ -27,6 +27,11 @@ enum scmi_performance_protocol_cmd {
PERF_DESCRIBE_FASTCHANNEL = 0xb,
};
enum scmi_performance_protocol_notify {
PERFORMANCE_LIMITS_CHANGED = 0x0,
PERFORMANCE_LEVEL_CHANGED = 0x1,
};
struct scmi_opp {
u32 perf;
u32 power;

View File

@ -12,6 +12,12 @@ enum scmi_power_protocol_cmd {
POWER_STATE_SET = 0x4,
POWER_STATE_GET = 0x5,
POWER_STATE_NOTIFY = 0x6,
POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7,
};
enum scmi_power_protocol_notify {
POWER_STATE_CHANGED = 0x0,
POWER_STATE_CHANGE_REQUESTED = 0x1,
};
struct scmi_msg_resp_power_attributes {

View File

@ -14,6 +14,10 @@ enum scmi_sensor_protocol_cmd {
SENSOR_READING_GET = 0x6,
};
enum scmi_sensor_protocol_notify {
SENSOR_TRIP_POINT_EVENT = 0x0,
};
struct scmi_msg_resp_sensor_attributes {
__le16 num_sensors;
u8 max_requests;

View File

@ -67,6 +67,21 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
}
void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
size_t max_len, struct scmi_xfer *xfer)
{
/* Skip only the length of the header in the shmem area, i.e. 4 bytes */
xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
/* Take a copy into the rx buffer */
memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
}
void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem)
{
iowrite32(SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE, &shmem->channel_status);
}
bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer)
{

View File

@ -0,0 +1,153 @@
// SPDX-License-Identifier: GPL-2.0
/*
* System Control and Management Interface (SCMI) Message SMC/HVC
* Transport driver
*
* Copyright 2020 NXP
*/
#include <linux/arm-smccc.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/mutex.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/slab.h>
#include "common.h"
/**
* struct scmi_smc - Structure representing a SCMI smc transport
*
* @cinfo: SCMI channel info
* @shmem: Transmit/Receive shared memory area
* @shmem_lock: Lock to protect access to Tx/Rx shared memory area
* @func_id: smc/hvc call function id
*/
struct scmi_smc {
struct scmi_chan_info *cinfo;
struct scmi_shared_mem __iomem *shmem;
struct mutex shmem_lock;
u32 func_id;
};
static bool smc_chan_available(struct device *dev, int idx)
{
struct device_node *np = of_parse_phandle(dev->of_node, "shmem", 0);
if (!np)
return false;
of_node_put(np);
return true;
}
static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
bool tx)
{
struct device *cdev = cinfo->dev;
struct scmi_smc *scmi_info;
resource_size_t size;
struct resource res;
struct device_node *np;
u32 func_id;
int ret;
if (!tx)
return -ENODEV;
scmi_info = devm_kzalloc(dev, sizeof(*scmi_info), GFP_KERNEL);
if (!scmi_info)
return -ENOMEM;
np = of_parse_phandle(cdev->of_node, "shmem", 0);
ret = of_address_to_resource(np, 0, &res);
of_node_put(np);
if (ret) {
dev_err(cdev, "failed to get SCMI Tx shared memory\n");
return ret;
}
size = resource_size(&res);
scmi_info->shmem = devm_ioremap(dev, res.start, size);
if (!scmi_info->shmem) {
dev_err(dev, "failed to ioremap SCMI Tx shared memory\n");
return -EADDRNOTAVAIL;
}
ret = of_property_read_u32(dev->of_node, "arm,smc-id", &func_id);
if (ret < 0)
return ret;
scmi_info->func_id = func_id;
scmi_info->cinfo = cinfo;
mutex_init(&scmi_info->shmem_lock);
cinfo->transport_info = scmi_info;
return 0;
}
static int smc_chan_free(int id, void *p, void *data)
{
struct scmi_chan_info *cinfo = p;
struct scmi_smc *scmi_info = cinfo->transport_info;
cinfo->transport_info = NULL;
scmi_info->cinfo = NULL;
scmi_free_channel(cinfo, data, id);
return 0;
}
static int smc_send_message(struct scmi_chan_info *cinfo,
struct scmi_xfer *xfer)
{
struct scmi_smc *scmi_info = cinfo->transport_info;
struct arm_smccc_res res;
mutex_lock(&scmi_info->shmem_lock);
shmem_tx_prepare(scmi_info->shmem, xfer);
arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);
scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem));
mutex_unlock(&scmi_info->shmem_lock);
/* SMCCC_RET_NOT_SUPPORTED is the only valid error code */
if (res.a0)
return -EOPNOTSUPP;
return 0;
}
static void smc_fetch_response(struct scmi_chan_info *cinfo,
struct scmi_xfer *xfer)
{
struct scmi_smc *scmi_info = cinfo->transport_info;
shmem_fetch_response(scmi_info->shmem, xfer);
}
static bool
smc_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
{
struct scmi_smc *scmi_info = cinfo->transport_info;
return shmem_poll_done(scmi_info->shmem, xfer);
}
static struct scmi_transport_ops scmi_smc_ops = {
.chan_available = smc_chan_available,
.chan_setup = smc_chan_setup,
.chan_free = smc_chan_free,
.send_message = smc_send_message,
.fetch_response = smc_fetch_response,
.poll_done = smc_poll_done,
};
const struct scmi_desc scmi_smc_desc = {
.ops = &scmi_smc_ops,
.max_rx_timeout_ms = 30,
.max_msg = 1,
.max_msg_size = 128,
};

View File

@ -8,7 +8,6 @@
*/
#include <linux/err.h>
#include <linux/firmware/imx/types.h>
#include <linux/firmware/imx/ipc.h>
#include <linux/firmware/imx/sci.h>
#include <linux/interrupt.h>
@ -38,6 +37,7 @@ struct imx_sc_ipc {
struct device *dev;
struct mutex lock;
struct completion done;
bool fast_ipc;
/* temporarily store the SCU msg */
u32 *msg;
@ -115,6 +115,7 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
struct imx_sc_ipc *sc_ipc = sc_chan->sc_ipc;
struct imx_sc_rpc_msg *hdr;
u32 *data = msg;
int i;
if (!sc_ipc->msg) {
dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n",
@ -122,6 +123,19 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
return;
}
if (sc_ipc->fast_ipc) {
hdr = msg;
sc_ipc->rx_size = hdr->size;
sc_ipc->msg[0] = *data++;
for (i = 1; i < sc_ipc->rx_size; i++)
sc_ipc->msg[i] = *data++;
complete(&sc_ipc->done);
return;
}
if (sc_chan->idx == 0) {
hdr = msg;
sc_ipc->rx_size = hdr->size;
@ -143,20 +157,22 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
{
struct imx_sc_rpc_msg *hdr = msg;
struct imx_sc_rpc_msg hdr = *(struct imx_sc_rpc_msg *)msg;
struct imx_sc_chan *sc_chan;
u32 *data = msg;
int ret;
int size;
int i;
/* Check size */
if (hdr->size > IMX_SC_RPC_MAX_MSG)
if (hdr.size > IMX_SC_RPC_MAX_MSG)
return -EINVAL;
dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr->svc,
hdr->func, hdr->size);
dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr.svc,
hdr.func, hdr.size);
for (i = 0; i < hdr->size; i++) {
size = sc_ipc->fast_ipc ? 1 : hdr.size;
for (i = 0; i < size; i++) {
sc_chan = &sc_ipc->chans[i % 4];
/*
@ -168,8 +184,10 @@ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
* Wait for tx_done before every send to ensure that no
* queueing happens at the mailbox channel level.
*/
wait_for_completion(&sc_chan->tx_done);
reinit_completion(&sc_chan->tx_done);
if (!sc_ipc->fast_ipc) {
wait_for_completion(&sc_chan->tx_done);
reinit_completion(&sc_chan->tx_done);
}
ret = mbox_send_message(sc_chan->ch, &data[i]);
if (ret < 0)
@ -246,6 +264,8 @@ static int imx_scu_probe(struct platform_device *pdev)
struct imx_sc_chan *sc_chan;
struct mbox_client *cl;
char *chan_name;
struct of_phandle_args args;
int num_channel;
int ret;
int i;
@ -253,11 +273,20 @@ static int imx_scu_probe(struct platform_device *pdev)
if (!sc_ipc)
return -ENOMEM;
for (i = 0; i < SCU_MU_CHAN_NUM; i++) {
if (i < 4)
ret = of_parse_phandle_with_args(pdev->dev.of_node, "mboxes",
"#mbox-cells", 0, &args);
if (ret)
return ret;
sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu");
num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM;
for (i = 0; i < num_channel; i++) {
if (i < num_channel / 2)
chan_name = kasprintf(GFP_KERNEL, "tx%d", i);
else
chan_name = kasprintf(GFP_KERNEL, "rx%d", i - 4);
chan_name = kasprintf(GFP_KERNEL, "rx%d",
i - num_channel / 2);
if (!chan_name)
return -ENOMEM;
@ -269,19 +298,22 @@ static int imx_scu_probe(struct platform_device *pdev)
cl->knows_txdone = true;
cl->rx_callback = imx_scu_rx_callback;
/* Initial tx_done completion as "done" */
cl->tx_done = imx_scu_tx_done;
init_completion(&sc_chan->tx_done);
complete(&sc_chan->tx_done);
if (!sc_ipc->fast_ipc) {
/* Initial tx_done completion as "done" */
cl->tx_done = imx_scu_tx_done;
init_completion(&sc_chan->tx_done);
complete(&sc_chan->tx_done);
}
sc_chan->sc_ipc = sc_ipc;
sc_chan->idx = i % 4;
sc_chan->idx = i % (num_channel / 2);
sc_chan->ch = mbox_request_channel_byname(cl, chan_name);
if (IS_ERR(sc_chan->ch)) {
ret = PTR_ERR(sc_chan->ch);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to request mbox chan %s ret %d\n",
chan_name, ret);
kfree(chan_name);
return ret;
}

View File

@ -56,7 +56,7 @@ struct scm_legacy_command {
__le32 buf_offset;
__le32 resp_hdr_offset;
__le32 id;
__le32 buf[0];
__le32 buf[];
};
/**

View File

@ -6,7 +6,6 @@
#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/export.h>
#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/types.h>
@ -806,8 +805,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
struct qcom_scm_mem_map_info *mem_to_map;
phys_addr_t mem_to_map_phys;
phys_addr_t dest_phys;
phys_addr_t ptr_phys;
dma_addr_t ptr_dma;
dma_addr_t ptr_phys;
size_t mem_to_map_sz;
size_t dest_sz;
size_t src_sz;
@ -824,10 +822,9 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
ALIGN(dest_sz, SZ_64);
ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
if (!ptr)
return -ENOMEM;
ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
/* Fill source vmid detail */
src = ptr;
@ -855,7 +852,7 @@ int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz,
ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
ptr_phys, src_sz, dest_phys, dest_sz);
dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);
if (ret) {
dev_err(__scm->dev,
"Assign memory protection call failed %d\n", ret);
@ -943,7 +940,7 @@ bool qcom_scm_hdcp_available(void)
qcom_scm_clk_disable();
return ret > 0 ? true : false;
return ret > 0;
}
EXPORT_SYMBOL(qcom_scm_hdcp_available);

View File

@ -176,7 +176,7 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0);
if (!priv->tx.pool) {
dev_err(bpmp->dev, "TX shmem pool not found\n");
return -ENOMEM;
return -EPROBE_DEFER;
}
priv->tx.virt = gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys);
@ -188,7 +188,7 @@ static int tegra186_bpmp_init(struct tegra_bpmp *bpmp)
priv->rx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1);
if (!priv->rx.pool) {
dev_err(bpmp->dev, "RX shmem pool not found\n");
err = -ENOMEM;
err = -EPROBE_DEFER;
goto free_tx;
}

View File

@ -11,6 +11,7 @@ config DRM_MEDIATEK
select DRM_MIPI_DSI
select DRM_PANEL
select MEMORY
select MTK_MMSYS
select MTK_SMI
select VIDEOMODE_HELPERS
help

View File

@ -119,7 +119,10 @@ static int mtk_disp_color_probe(struct platform_device *pdev)
ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id,
&mtk_disp_color_funcs);
if (ret) {
dev_err(dev, "Failed to initialize component: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to initialize component: %d\n",
ret);
return ret;
}

View File

@ -386,7 +386,10 @@ static int mtk_disp_ovl_probe(struct platform_device *pdev)
ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id,
&mtk_disp_ovl_funcs);
if (ret) {
dev_err(dev, "Failed to initialize component: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to initialize component: %d\n",
ret);
return ret;
}

View File

@ -294,7 +294,10 @@ static int mtk_disp_rdma_probe(struct platform_device *pdev)
ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id,
&mtk_disp_rdma_funcs);
if (ret) {
dev_err(dev, "Failed to initialize component: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to initialize component: %d\n",
ret);
return ret;
}

View File

@ -739,21 +739,27 @@ static int mtk_dpi_probe(struct platform_device *pdev)
dpi->engine_clk = devm_clk_get(dev, "engine");
if (IS_ERR(dpi->engine_clk)) {
ret = PTR_ERR(dpi->engine_clk);
dev_err(dev, "Failed to get engine clock: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get engine clock: %d\n", ret);
return ret;
}
dpi->pixel_clk = devm_clk_get(dev, "pixel");
if (IS_ERR(dpi->pixel_clk)) {
ret = PTR_ERR(dpi->pixel_clk);
dev_err(dev, "Failed to get pixel clock: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get pixel clock: %d\n", ret);
return ret;
}
dpi->tvd_clk = devm_clk_get(dev, "pll");
if (IS_ERR(dpi->tvd_clk)) {
ret = PTR_ERR(dpi->tvd_clk);
dev_err(dev, "Failed to get tvdpll clock: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get tvdpll clock: %d\n", ret);
return ret;
}

View File

@ -6,6 +6,7 @@
#include <linux/clk.h>
#include <linux/pm_runtime.h>
#include <linux/soc/mediatek/mtk-cmdq.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
#include <asm/barrier.h>
#include <soc/mediatek/smi.h>
@ -28,7 +29,7 @@
* @enabled: records whether crtc_enable succeeded
* @planes: array of 4 drm_plane structures, one for each overlay plane
* @pending_planes: whether any plane has pending changes to be applied
* @config_regs: memory mapped mmsys configuration register space
* @mmsys_dev: pointer to the mmsys device for configuration registers
* @mutex: handle to one of the ten disp_mutex streams
* @ddp_comp_nr: number of components in ddp_comp
* @ddp_comp: array of pointers the mtk_ddp_comp structures used by this crtc
@ -50,7 +51,7 @@ struct mtk_drm_crtc {
u32 cmdq_event;
#endif
void __iomem *config_regs;
struct device *mmsys_dev;
struct mtk_disp_mutex *mutex;
unsigned int ddp_comp_nr;
struct mtk_ddp_comp **ddp_comp;
@ -300,9 +301,9 @@ static int mtk_crtc_ddp_hw_init(struct mtk_drm_crtc *mtk_crtc)
DRM_DEBUG_DRIVER("mediatek_ddp_ddp_path_setup\n");
for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) {
mtk_ddp_add_comp_to_path(mtk_crtc->config_regs,
mtk_crtc->ddp_comp[i]->id,
mtk_crtc->ddp_comp[i + 1]->id);
mtk_mmsys_ddp_connect(mtk_crtc->mmsys_dev,
mtk_crtc->ddp_comp[i]->id,
mtk_crtc->ddp_comp[i + 1]->id);
mtk_disp_mutex_add_comp(mtk_crtc->mutex,
mtk_crtc->ddp_comp[i]->id);
}
@ -360,9 +361,9 @@ static void mtk_crtc_ddp_hw_fini(struct mtk_drm_crtc *mtk_crtc)
mtk_crtc->ddp_comp[i]->id);
mtk_disp_mutex_disable(mtk_crtc->mutex);
for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) {
mtk_ddp_remove_comp_from_path(mtk_crtc->config_regs,
mtk_crtc->ddp_comp[i]->id,
mtk_crtc->ddp_comp[i + 1]->id);
mtk_mmsys_ddp_disconnect(mtk_crtc->mmsys_dev,
mtk_crtc->ddp_comp[i]->id,
mtk_crtc->ddp_comp[i + 1]->id);
mtk_disp_mutex_remove_comp(mtk_crtc->mutex,
mtk_crtc->ddp_comp[i]->id);
}
@ -766,7 +767,7 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
if (!mtk_crtc)
return -ENOMEM;
mtk_crtc->config_regs = priv->config_regs;
mtk_crtc->mmsys_dev = priv->mmsys_dev;
mtk_crtc->ddp_comp_nr = path_len;
mtk_crtc->ddp_comp = devm_kmalloc_array(dev, mtk_crtc->ddp_comp_nr,
sizeof(*mtk_crtc->ddp_comp),

View File

@ -13,26 +13,6 @@
#include "mtk_drm_ddp.h"
#include "mtk_drm_ddp_comp.h"
#define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040
#define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044
#define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048
#define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c
#define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050
#define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084
#define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088
#define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4
#define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8
#define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac
#define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8
#define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4
#define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8
#define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100
#define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030
#define DISP_REG_CONFIG_OUT_SEL 0x04c
#define DISP_REG_CONFIG_DSI_SEL 0x050
#define DISP_REG_CONFIG_DPI_SEL 0x064
#define MT2701_DISP_MUTEX0_MOD0 0x2c
#define MT2701_DISP_MUTEX0_SOF0 0x30
@ -94,48 +74,6 @@
#define MUTEX_SOF_DSI2 5
#define MUTEX_SOF_DSI3 6
#define OVL0_MOUT_EN_COLOR0 0x1
#define OD_MOUT_EN_RDMA0 0x1
#define OD1_MOUT_EN_RDMA1 BIT(16)
#define UFOE_MOUT_EN_DSI0 0x1
#define COLOR0_SEL_IN_OVL0 0x1
#define OVL1_MOUT_EN_COLOR1 0x1
#define GAMMA_MOUT_EN_RDMA1 0x1
#define RDMA0_SOUT_DPI0 0x2
#define RDMA0_SOUT_DPI1 0x3
#define RDMA0_SOUT_DSI1 0x1
#define RDMA0_SOUT_DSI2 0x4
#define RDMA0_SOUT_DSI3 0x5
#define RDMA1_SOUT_DPI0 0x2
#define RDMA1_SOUT_DPI1 0x3
#define RDMA1_SOUT_DSI1 0x1
#define RDMA1_SOUT_DSI2 0x4
#define RDMA1_SOUT_DSI3 0x5
#define RDMA2_SOUT_DPI0 0x2
#define RDMA2_SOUT_DPI1 0x3
#define RDMA2_SOUT_DSI1 0x1
#define RDMA2_SOUT_DSI2 0x4
#define RDMA2_SOUT_DSI3 0x5
#define DPI0_SEL_IN_RDMA1 0x1
#define DPI0_SEL_IN_RDMA2 0x3
#define DPI1_SEL_IN_RDMA1 (0x1 << 8)
#define DPI1_SEL_IN_RDMA2 (0x3 << 8)
#define DSI0_SEL_IN_RDMA1 0x1
#define DSI0_SEL_IN_RDMA2 0x4
#define DSI1_SEL_IN_RDMA1 0x1
#define DSI1_SEL_IN_RDMA2 0x4
#define DSI2_SEL_IN_RDMA1 (0x1 << 16)
#define DSI2_SEL_IN_RDMA2 (0x4 << 16)
#define DSI3_SEL_IN_RDMA1 (0x1 << 16)
#define DSI3_SEL_IN_RDMA2 (0x4 << 16)
#define COLOR1_SEL_IN_OVL1 0x1
#define OVL_MOUT_EN_RDMA 0x1
#define BLS_TO_DSI_RDMA1_TO_DPI1 0x8
#define BLS_TO_DPI_RDMA1_TO_DSI 0x2
#define DSI_SEL_IN_BLS 0x0
#define DPI_SEL_IN_BLS 0x0
#define DSI_SEL_IN_RDMA 0x1
struct mtk_disp_mutex {
int id;
@ -246,200 +184,6 @@ static const struct mtk_ddp_data mt8173_ddp_driver_data = {
.mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0,
};
static unsigned int mtk_ddp_mout_en(enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next,
unsigned int *addr)
{
unsigned int value;
if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
*addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN;
value = OVL0_MOUT_EN_COLOR0;
} else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) {
*addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN;
value = OVL_MOUT_EN_RDMA;
} else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) {
*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
value = OD_MOUT_EN_RDMA0;
} else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN;
value = UFOE_MOUT_EN_DSI0;
} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
*addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN;
value = OVL1_MOUT_EN_COLOR1;
} else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) {
*addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN;
value = GAMMA_MOUT_EN_RDMA1;
} else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) {
*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
value = OD1_MOUT_EN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI3;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI3;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI3;
} else {
value = 0;
}
return value;
}
static unsigned int mtk_ddp_sel_in(enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next,
unsigned int *addr)
{
unsigned int value;
if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
*addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN;
value = COLOR0_SEL_IN_OVL0;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI0_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI1_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI0_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI1_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI2_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI3_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI0_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI1_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI0_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI1_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI2_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI3_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
*addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN;
value = COLOR1_SEL_IN_OVL1;
} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSI_SEL;
value = DSI_SEL_IN_BLS;
} else {
value = 0;
}
return value;
}
static void mtk_ddp_sout_sel(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1,
config_regs + DISP_REG_CONFIG_OUT_SEL);
} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) {
writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI,
config_regs + DISP_REG_CONFIG_OUT_SEL);
writel_relaxed(DSI_SEL_IN_RDMA,
config_regs + DISP_REG_CONFIG_DSI_SEL);
writel_relaxed(DPI_SEL_IN_BLS,
config_regs + DISP_REG_CONFIG_DPI_SEL);
}
}
void mtk_ddp_add_comp_to_path(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
unsigned int addr, value, reg;
value = mtk_ddp_mout_en(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) | value;
writel_relaxed(reg, config_regs + addr);
}
mtk_ddp_sout_sel(config_regs, cur, next);
value = mtk_ddp_sel_in(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) | value;
writel_relaxed(reg, config_regs + addr);
}
}
void mtk_ddp_remove_comp_from_path(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
unsigned int addr, value, reg;
value = mtk_ddp_mout_en(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) & ~value;
writel_relaxed(reg, config_regs + addr);
}
value = mtk_ddp_sel_in(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) & ~value;
writel_relaxed(reg, config_regs + addr);
}
}
struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id)
{
struct mtk_ddp *ddp = dev_get_drvdata(dev);
@ -628,7 +372,8 @@ static int mtk_ddp_probe(struct platform_device *pdev)
if (!ddp->data->no_clk) {
ddp->clk = devm_clk_get(dev, NULL);
if (IS_ERR(ddp->clk)) {
dev_err(dev, "Failed to get clock\n");
if (PTR_ERR(ddp->clk) != -EPROBE_DEFER)
dev_err(dev, "Failed to get clock\n");
return PTR_ERR(ddp->clk);
}
}

View File

@ -12,13 +12,6 @@ struct regmap;
struct device;
struct mtk_disp_mutex;
void mtk_ddp_add_comp_to_path(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next);
void mtk_ddp_remove_comp_from_path(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next);
struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id);
int mtk_disp_mutex_prepare(struct mtk_disp_mutex *mutex);
void mtk_disp_mutex_add_comp(struct mtk_disp_mutex *mutex,

View File

@ -10,6 +10,7 @@
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/pm_runtime.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
#include <linux/dma-mapping.h>
#include <drm/drm_atomic.h>
@ -418,11 +419,22 @@ static const struct of_device_id mtk_ddp_comp_dt_ids[] = {
{ }
};
static const struct of_device_id mtk_drm_of_ids[] = {
{ .compatible = "mediatek,mt2701-mmsys",
.data = &mt2701_mmsys_driver_data},
{ .compatible = "mediatek,mt2712-mmsys",
.data = &mt2712_mmsys_driver_data},
{ .compatible = "mediatek,mt8173-mmsys",
.data = &mt8173_mmsys_driver_data},
{ }
};
static int mtk_drm_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *phandle = dev->parent->of_node;
const struct of_device_id *of_id;
struct mtk_drm_private *private;
struct resource *mem;
struct device_node *node;
struct component_match *match = NULL;
int ret;
@ -433,18 +445,20 @@ static int mtk_drm_probe(struct platform_device *pdev)
return -ENOMEM;
private->data = of_device_get_match_data(dev);
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
private->config_regs = devm_ioremap_resource(dev, mem);
if (IS_ERR(private->config_regs)) {
ret = PTR_ERR(private->config_regs);
dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n",
ret);
return ret;
private->mmsys_dev = dev->parent;
if (!private->mmsys_dev) {
dev_err(dev, "Failed to get MMSYS device\n");
return -ENODEV;
}
of_id = of_match_node(mtk_drm_of_ids, phandle);
if (!of_id)
return -ENODEV;
private->data = of_id->data;
/* Iterate over sibling DISP function blocks */
for_each_child_of_node(dev->of_node->parent, node) {
for_each_child_of_node(phandle->parent, node) {
const struct of_device_id *of_id;
enum mtk_ddp_comp_type comp_type;
int comp_id;
@ -578,22 +592,11 @@ static int mtk_drm_sys_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(mtk_drm_pm_ops, mtk_drm_sys_suspend,
mtk_drm_sys_resume);
static const struct of_device_id mtk_drm_of_ids[] = {
{ .compatible = "mediatek,mt2701-mmsys",
.data = &mt2701_mmsys_driver_data},
{ .compatible = "mediatek,mt2712-mmsys",
.data = &mt2712_mmsys_driver_data},
{ .compatible = "mediatek,mt8173-mmsys",
.data = &mt8173_mmsys_driver_data},
{ }
};
static struct platform_driver mtk_drm_platform_driver = {
.probe = mtk_drm_probe,
.remove = mtk_drm_remove,
.driver = {
.name = "mediatek-drm",
.of_match_table = mtk_drm_of_ids,
.pm = &mtk_drm_pm_ops,
},
};

View File

@ -39,7 +39,7 @@ struct mtk_drm_private {
struct device_node *mutex_node;
struct device *mutex_dev;
void __iomem *config_regs;
struct device *mmsys_dev;
struct device_node *comp_node[DDP_COMPONENT_ID_MAX];
struct mtk_ddp_comp *ddp_comp[DDP_COMPONENT_ID_MAX];
const struct mtk_mmsys_driver_data *data;

View File

@ -1186,14 +1186,18 @@ static int mtk_dsi_probe(struct platform_device *pdev)
dsi->engine_clk = devm_clk_get(dev, "engine");
if (IS_ERR(dsi->engine_clk)) {
ret = PTR_ERR(dsi->engine_clk);
dev_err(dev, "Failed to get engine clock: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get engine clock: %d\n", ret);
goto err_unregister_host;
}
dsi->digital_clk = devm_clk_get(dev, "digital");
if (IS_ERR(dsi->digital_clk)) {
ret = PTR_ERR(dsi->digital_clk);
dev_err(dev, "Failed to get digital clock: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get digital clock: %d\n", ret);
goto err_unregister_host;
}

View File

@ -1470,7 +1470,9 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
ret = mtk_hdmi_get_all_clk(hdmi, np);
if (ret) {
dev_err(dev, "Failed to get clocks: %d\n", ret);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get clocks: %d\n", ret);
return ret;
}

View File

@ -46,6 +46,17 @@ config ATMEL_EBI
tree is used. This bus supports NANDs, external ethernet controller,
SRAMs, ATA devices, etc.
config BT1_L2_CTL
bool "Baikal-T1 CM2 L2-RAM Cache Control Block"
depends on MIPS_BAIKAL_T1 || COMPILE_TEST
select MFD_SYSCON
help
The Baikal-T1 CPU is based on the MIPS P5600 Warrior IP-core. The CPU
contains a Coherency Manager v2 with an embedded 1MB L2-cache. The
L2-cache performance can be tuned by setting the data, tag and
way-select latencies of RAM access. This driver provides a DT
properties-based and sysfs interface for doing so.
config TI_AEMIF
tristate "Texas Instruments AEMIF driver"
depends on (ARCH_DAVINCI || ARCH_KEYSTONE) && OF

View File

@ -11,6 +11,7 @@ obj-$(CONFIG_ARM_PL172_MPMC) += pl172.o
obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o
obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o
obj-$(CONFIG_ARCH_BRCMSTB) += brcmstb_dpfe.o
obj-$(CONFIG_BT1_L2_CTL) += bt1-l2-ctl.o
obj-$(CONFIG_TI_AEMIF) += ti-aemif.o
obj-$(CONFIG_TI_EMIF) += emif.o
obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o

View File

@ -0,0 +1,322 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
*
* Authors:
* Serge Semin <Sergey.Semin@baikalelectronics.ru>
*
* Baikal-T1 CM2 L2-cache Control Block driver.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/bitfield.h>
#include <linux/types.h>
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <linux/sysfs.h>
#include <linux/of.h>
#define L2_CTL_REG 0x028
#define L2_CTL_DATA_STALL_FLD 0
#define L2_CTL_DATA_STALL_MASK GENMASK(1, L2_CTL_DATA_STALL_FLD)
#define L2_CTL_TAG_STALL_FLD 2
#define L2_CTL_TAG_STALL_MASK GENMASK(3, L2_CTL_TAG_STALL_FLD)
#define L2_CTL_WS_STALL_FLD 4
#define L2_CTL_WS_STALL_MASK GENMASK(5, L2_CTL_WS_STALL_FLD)
#define L2_CTL_SET_CLKRATIO BIT(13)
#define L2_CTL_CLKRATIO_LOCK BIT(31)
#define L2_CTL_STALL_MIN 0
#define L2_CTL_STALL_MAX 3
#define L2_CTL_STALL_SET_DELAY_US 1
#define L2_CTL_STALL_SET_TOUT_US 1000
/*
* struct l2_ctl - Baikal-T1 L2 Control block private data.
* @dev: Pointer to the device structure.
* @sys_regs: Baikal-T1 System Controller registers map.
*/
struct l2_ctl {
struct device *dev;
struct regmap *sys_regs;
};
/*
* enum l2_ctl_stall - Baikal-T1 L2-cache-RAM stall identifier.
* @L2_WS_STALL: Way-select latency.
* @L2_TAG_STALL: Tag latency.
* @L2_DATA_STALL: Data latency.
*/
enum l2_ctl_stall {
L2_WS_STALL,
L2_TAG_STALL,
L2_DATA_STALL
};
/*
* struct l2_ctl_device_attribute - Baikal-T1 L2-cache device attribute.
* @dev_attr: Actual sysfs device attribute.
* @id: L2-cache stall field identifier.
*/
struct l2_ctl_device_attribute {
struct device_attribute dev_attr;
enum l2_ctl_stall id;
};
#define to_l2_ctl_dev_attr(_dev_attr) \
container_of(_dev_attr, struct l2_ctl_device_attribute, dev_attr)
#define L2_CTL_ATTR_RW(_name, _prefix, _id) \
struct l2_ctl_device_attribute l2_ctl_attr_##_name = \
{ __ATTR(_name, 0644, _prefix##_show, _prefix##_store), _id }
static int l2_ctl_get_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 *val)
{
u32 data = 0;
int ret;
ret = regmap_read(l2->sys_regs, L2_CTL_REG, &data);
if (ret)
return ret;
switch (id) {
case L2_WS_STALL:
*val = FIELD_GET(L2_CTL_WS_STALL_MASK, data);
break;
case L2_TAG_STALL:
*val = FIELD_GET(L2_CTL_TAG_STALL_MASK, data);
break;
case L2_DATA_STALL:
*val = FIELD_GET(L2_CTL_DATA_STALL_MASK, data);
break;
default:
return -EINVAL;
}
return 0;
}
static int l2_ctl_set_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 val)
{
u32 mask = 0, data = 0;
int ret;
val = clamp_val(val, L2_CTL_STALL_MIN, L2_CTL_STALL_MAX);
switch (id) {
case L2_WS_STALL:
data = FIELD_PREP(L2_CTL_WS_STALL_MASK, val);
mask = L2_CTL_WS_STALL_MASK;
break;
case L2_TAG_STALL:
data = FIELD_PREP(L2_CTL_TAG_STALL_MASK, val);
mask = L2_CTL_TAG_STALL_MASK;
break;
case L2_DATA_STALL:
data = FIELD_PREP(L2_CTL_DATA_STALL_MASK, val);
mask = L2_CTL_DATA_STALL_MASK;
break;
default:
return -EINVAL;
}
data |= L2_CTL_SET_CLKRATIO;
mask |= L2_CTL_SET_CLKRATIO;
ret = regmap_update_bits(l2->sys_regs, L2_CTL_REG, mask, data);
if (ret)
return ret;
return regmap_read_poll_timeout(l2->sys_regs, L2_CTL_REG, data,
data & L2_CTL_CLKRATIO_LOCK,
L2_CTL_STALL_SET_DELAY_US,
L2_CTL_STALL_SET_TOUT_US);
}
static void l2_ctl_clear_data(void *data)
{
struct l2_ctl *l2 = data;
struct platform_device *pdev = to_platform_device(l2->dev);
platform_set_drvdata(pdev, NULL);
}
static struct l2_ctl *l2_ctl_create_data(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct l2_ctl *l2;
int ret;
l2 = devm_kzalloc(dev, sizeof(*l2), GFP_KERNEL);
if (!l2)
return ERR_PTR(-ENOMEM);
ret = devm_add_action(dev, l2_ctl_clear_data, l2);
if (ret) {
dev_err(dev, "Can't add L2 CTL data clear action\n");
return ERR_PTR(ret);
}
l2->dev = dev;
platform_set_drvdata(pdev, l2);
return l2;
}
static int l2_ctl_find_sys_regs(struct l2_ctl *l2)
{
l2->sys_regs = syscon_node_to_regmap(l2->dev->of_node->parent);
if (IS_ERR(l2->sys_regs)) {
dev_err(l2->dev, "Couldn't get L2 CTL register map\n");
return PTR_ERR(l2->sys_regs);
}
return 0;
}
static int l2_ctl_of_parse_property(struct l2_ctl *l2, enum l2_ctl_stall id,
const char *propname)
{
int ret = 0;
u32 data;
if (!of_property_read_u32(l2->dev->of_node, propname, &data)) {
ret = l2_ctl_set_latency(l2, id, data);
if (ret)
dev_err(l2->dev, "Invalid value of '%s'\n", propname);
}
return ret;
}
static int l2_ctl_of_parse(struct l2_ctl *l2)
{
int ret;
ret = l2_ctl_of_parse_property(l2, L2_WS_STALL, "baikal,l2-ws-latency");
if (ret)
return ret;
ret = l2_ctl_of_parse_property(l2, L2_TAG_STALL, "baikal,l2-tag-latency");
if (ret)
return ret;
return l2_ctl_of_parse_property(l2, L2_DATA_STALL,
"baikal,l2-data-latency");
}
static ssize_t l2_ctl_latency_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr);
struct l2_ctl *l2 = dev_get_drvdata(dev);
u32 data;
int ret;
ret = l2_ctl_get_latency(l2, devattr->id, &data);
if (ret)
return ret;
return scnprintf(buf, PAGE_SIZE, "%u\n", data);
}
static ssize_t l2_ctl_latency_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr);
struct l2_ctl *l2 = dev_get_drvdata(dev);
u32 data;
int ret;
if (kstrtouint(buf, 0, &data) < 0)
return -EINVAL;
ret = l2_ctl_set_latency(l2, devattr->id, data);
if (ret)
return ret;
return count;
}
static L2_CTL_ATTR_RW(l2_ws_latency, l2_ctl_latency, L2_WS_STALL);
static L2_CTL_ATTR_RW(l2_tag_latency, l2_ctl_latency, L2_TAG_STALL);
static L2_CTL_ATTR_RW(l2_data_latency, l2_ctl_latency, L2_DATA_STALL);
static struct attribute *l2_ctl_sysfs_attrs[] = {
&l2_ctl_attr_l2_ws_latency.dev_attr.attr,
&l2_ctl_attr_l2_tag_latency.dev_attr.attr,
&l2_ctl_attr_l2_data_latency.dev_attr.attr,
NULL
};
ATTRIBUTE_GROUPS(l2_ctl_sysfs);
static void l2_ctl_remove_sysfs(void *data)
{
struct l2_ctl *l2 = data;
device_remove_groups(l2->dev, l2_ctl_sysfs_groups);
}
static int l2_ctl_init_sysfs(struct l2_ctl *l2)
{
int ret;
ret = device_add_groups(l2->dev, l2_ctl_sysfs_groups);
if (ret) {
dev_err(l2->dev, "Failed to create L2 CTL sysfs nodes\n");
return ret;
}
ret = devm_add_action_or_reset(l2->dev, l2_ctl_remove_sysfs, l2);
if (ret)
dev_err(l2->dev, "Can't add L2 CTL sysfs remove action\n");
return ret;
}
static int l2_ctl_probe(struct platform_device *pdev)
{
struct l2_ctl *l2;
int ret;
l2 = l2_ctl_create_data(pdev);
if (IS_ERR(l2))
return PTR_ERR(l2);
ret = l2_ctl_find_sys_regs(l2);
if (ret)
return ret;
ret = l2_ctl_of_parse(l2);
if (ret)
return ret;
ret = l2_ctl_init_sysfs(l2);
if (ret)
return ret;
return 0;
}
static const struct of_device_id l2_ctl_of_match[] = {
{ .compatible = "baikal,bt1-l2-ctl" },
{ }
};
MODULE_DEVICE_TABLE(of, l2_ctl_of_match);
static struct platform_driver l2_ctl_driver = {
.probe = l2_ctl_probe,
.driver = {
.name = "bt1-l2-ctl",
.of_match_table = l2_ctl_of_match
}
};
module_platform_driver(l2_ctl_driver);
MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
MODULE_DESCRIPTION("Baikal-T1 L2-cache driver");
MODULE_LICENSE("GPL v2");
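A minimal userspace sketch of the sysfs interface created above; the attribute names come from the driver itself, while the platform-device path is an assumption that depends on how the node is instantiated on a given board:

#include <stdio.h>

int main(void)
{
	/* Hypothetical sysfs path; the l2_ws_latency attribute name is real. */
	FILE *f = fopen("/sys/devices/platform/l2-ctl/l2_ws_latency", "w");

	if (!f) {
		perror("l2_ws_latency");
		return 1;
	}

	/* The driver clamps the written value to the 0..3 range. */
	fputs("2\n", f);
	fclose(f);

	return 0;
}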

View File

@ -1091,7 +1091,7 @@ static int create_timings_aligned(struct exynos5_dmc *dmc, u32 *reg_timing_row,
/* power related timings */
val = dmc->timings->tFAW / clk_period_ps;
val += dmc->timings->tFAW % clk_period_ps ? 1 : 0;
val = max(val, dmc->min_tck->tXP);
val = max(val, dmc->min_tck->tFAW);
reg = &timing_power[0];
*reg_timing_power |= TIMING_VAL2REG(reg, val);
@ -1346,15 +1346,13 @@ static irqreturn_t dmc_irq_thread(int irq, void *priv)
struct exynos5_dmc *dmc = priv;
mutex_lock(&dmc->df->lock);
exynos5_dmc_perf_events_check(dmc);
res = update_devfreq(dmc->df);
mutex_unlock(&dmc->df->lock);
if (res)
dev_warn(dmc->dev, "devfreq failed with %d\n", res);
mutex_unlock(&dmc->df->lock);
return IRQ_HANDLED;
}

View File

@ -357,6 +357,25 @@ int of_reserved_mem_device_init_by_idx(struct device *dev,
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
/**
* of_reserved_mem_device_init_by_name() - assign named reserved memory region
* to given device
* @dev: pointer to the device to configure
* @np: pointer to the device node with 'memory-region' property
* @name: name of the selected memory region
*
* Returns: 0 on success or a negative error-code on failure.
*/
int of_reserved_mem_device_init_by_name(struct device *dev,
struct device_node *np,
const char *name)
{
int idx = of_property_match_string(np, "memory-region-names", name);
return of_reserved_mem_device_init_by_idx(dev, np, idx);
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_name);
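A minimal usage sketch of the new helper, assuming a hypothetical consumer driver whose node carries matching memory-region and memory-region-names properties; the "dma-pool" name and foo_probe() are made up for illustration:

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	/* Resolve the region listed under memory-region-names as "dma-pool". */
	ret = of_reserved_mem_device_init_by_name(&pdev->dev,
						  pdev->dev.of_node,
						  "dma-pool");
	if (ret)
		return ret;

	/* Coherent/CMA allocations for this device now use that region. */
	return 0;
}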
/**
* of_reserved_mem_device_release() - release reserved memory device structures
* @dev: Pointer to the device to deconfigure
@ -366,24 +385,22 @@ EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
*/
void of_reserved_mem_device_release(struct device *dev)
{
struct rmem_assigned_device *rd;
struct reserved_mem *rmem = NULL;
struct rmem_assigned_device *rd, *tmp;
LIST_HEAD(release_list);
mutex_lock(&of_rmem_assigned_device_mutex);
list_for_each_entry(rd, &of_rmem_assigned_device_list, list) {
if (rd->dev == dev) {
rmem = rd->rmem;
list_del(&rd->list);
kfree(rd);
break;
}
list_for_each_entry_safe(rd, tmp, &of_rmem_assigned_device_list, list) {
if (rd->dev == dev)
list_move_tail(&rd->list, &release_list);
}
mutex_unlock(&of_rmem_assigned_device_mutex);
if (!rmem || !rmem->ops || !rmem->ops->device_release)
return;
list_for_each_entry_safe(rd, tmp, &release_list, list) {
if (rd->rmem && rd->rmem->ops && rd->rmem->ops->device_release)
rd->rmem->ops->device_release(rd->rmem, dev);
rmem->ops->device_release(rmem, dev);
kfree(rd);
}
}
EXPORT_SYMBOL_GPL(of_reserved_mem_device_release);

View File

@ -33,6 +33,7 @@
enum hi6220_reset_ctrl_type {
PERIPHERAL,
MEDIA,
AO,
};
struct hi6220_reset_data {
@ -92,6 +93,65 @@ static const struct reset_control_ops hi6220_media_reset_ops = {
.deassert = hi6220_media_deassert,
};
#define AO_SCTRL_SC_PW_CLKEN0 0x800
#define AO_SCTRL_SC_PW_CLKDIS0 0x804
#define AO_SCTRL_SC_PW_RSTEN0 0x810
#define AO_SCTRL_SC_PW_RSTDIS0 0x814
#define AO_SCTRL_SC_PW_ISOEN0 0x820
#define AO_SCTRL_SC_PW_ISODIS0 0x824
#define AO_MAX_INDEX 12
static int hi6220_ao_assert(struct reset_controller_dev *rc_dev,
unsigned long idx)
{
struct hi6220_reset_data *data = to_reset_data(rc_dev);
struct regmap *regmap = data->regmap;
int ret;
ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTEN0, BIT(idx));
if (ret)
return ret;
ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISOEN0, BIT(idx));
if (ret)
return ret;
ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKDIS0, BIT(idx));
return ret;
}
static int hi6220_ao_deassert(struct reset_controller_dev *rc_dev,
unsigned long idx)
{
struct hi6220_reset_data *data = to_reset_data(rc_dev);
struct regmap *regmap = data->regmap;
int ret;
/*
* It was suggested to disable isolation before enabling
* the clocks and deasserting reset, to avoid glitches.
* But this order is preserved to keep it matching the
* vendor code.
*/
ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTDIS0, BIT(idx));
if (ret)
return ret;
ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISODIS0, BIT(idx));
if (ret)
return ret;
ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKEN0, BIT(idx));
return ret;
}
static const struct reset_control_ops hi6220_ao_reset_ops = {
.assert = hi6220_ao_assert,
.deassert = hi6220_ao_deassert,
};
static int hi6220_reset_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
@ -117,9 +177,12 @@ static int hi6220_reset_probe(struct platform_device *pdev)
if (type == MEDIA) {
data->rc_dev.ops = &hi6220_media_reset_ops;
data->rc_dev.nr_resets = MEDIA_MAX_INDEX;
} else {
} else if (type == PERIPHERAL) {
data->rc_dev.ops = &hi6220_peripheral_reset_ops;
data->rc_dev.nr_resets = PERIPH_MAX_INDEX;
} else {
data->rc_dev.ops = &hi6220_ao_reset_ops;
data->rc_dev.nr_resets = AO_MAX_INDEX;
}
return reset_controller_register(&data->rc_dev);
@ -134,6 +197,10 @@ static const struct of_device_id hi6220_reset_match[] = {
.compatible = "hisilicon,hi6220-mediactrl",
.data = (void *)MEDIA,
},
{
.compatible = "hisilicon,hi6220-aoctrl",
.data = (void *)AO,
},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, hi6220_reset_match);


@ -15,6 +15,7 @@
#include <linux/regmap.h>
#include <dt-bindings/reset/imx7-reset.h>
#include <dt-bindings/reset/imx8mq-reset.h>
#include <dt-bindings/reset/imx8mp-reset.h>
struct imx7_src_signal {
unsigned int offset, bit;
@ -145,6 +146,18 @@ enum imx8mq_src_registers {
SRC_DDRC2_RCR = 0x1004,
};
enum imx8mp_src_registers {
SRC_SUPERMIX_RCR = 0x0018,
SRC_AUDIOMIX_RCR = 0x001c,
SRC_MLMIX_RCR = 0x0028,
SRC_GPU2D_RCR = 0x0038,
SRC_GPU3D_RCR = 0x003c,
SRC_VPU_G1_RCR = 0x0048,
SRC_VPU_G2_RCR = 0x004c,
SRC_VPUVC8KE_RCR = 0x0050,
SRC_NOC_RCR = 0x0054,
};
static const struct imx7_src_signal imx8mq_src_signals[IMX8MQ_RESET_NUM] = {
[IMX8MQ_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) },
[IMX8MQ_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) },
@ -253,6 +266,93 @@ static const struct imx7_src_variant variant_imx8mq = {
},
};
static const struct imx7_src_signal imx8mp_src_signals[IMX8MP_RESET_NUM] = {
[IMX8MP_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) },
[IMX8MP_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) },
[IMX8MP_RESET_A53_CORE_POR_RESET2] = { SRC_A53RCR0, BIT(2) },
[IMX8MP_RESET_A53_CORE_POR_RESET3] = { SRC_A53RCR0, BIT(3) },
[IMX8MP_RESET_A53_CORE_RESET0] = { SRC_A53RCR0, BIT(4) },
[IMX8MP_RESET_A53_CORE_RESET1] = { SRC_A53RCR0, BIT(5) },
[IMX8MP_RESET_A53_CORE_RESET2] = { SRC_A53RCR0, BIT(6) },
[IMX8MP_RESET_A53_CORE_RESET3] = { SRC_A53RCR0, BIT(7) },
[IMX8MP_RESET_A53_DBG_RESET0] = { SRC_A53RCR0, BIT(8) },
[IMX8MP_RESET_A53_DBG_RESET1] = { SRC_A53RCR0, BIT(9) },
[IMX8MP_RESET_A53_DBG_RESET2] = { SRC_A53RCR0, BIT(10) },
[IMX8MP_RESET_A53_DBG_RESET3] = { SRC_A53RCR0, BIT(11) },
[IMX8MP_RESET_A53_ETM_RESET0] = { SRC_A53RCR0, BIT(12) },
[IMX8MP_RESET_A53_ETM_RESET1] = { SRC_A53RCR0, BIT(13) },
[IMX8MP_RESET_A53_ETM_RESET2] = { SRC_A53RCR0, BIT(14) },
[IMX8MP_RESET_A53_ETM_RESET3] = { SRC_A53RCR0, BIT(15) },
[IMX8MP_RESET_A53_SOC_DBG_RESET] = { SRC_A53RCR0, BIT(20) },
[IMX8MP_RESET_A53_L2RESET] = { SRC_A53RCR0, BIT(21) },
[IMX8MP_RESET_SW_NON_SCLR_M7C_RST] = { SRC_M4RCR, BIT(0) },
[IMX8MP_RESET_OTG1_PHY_RESET] = { SRC_USBOPHY1_RCR, BIT(0) },
[IMX8MP_RESET_OTG2_PHY_RESET] = { SRC_USBOPHY2_RCR, BIT(0) },
[IMX8MP_RESET_SUPERMIX_RESET] = { SRC_SUPERMIX_RCR, BIT(0) },
[IMX8MP_RESET_AUDIOMIX_RESET] = { SRC_AUDIOMIX_RCR, BIT(0) },
[IMX8MP_RESET_MLMIX_RESET] = { SRC_MLMIX_RCR, BIT(0) },
[IMX8MP_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) },
[IMX8MP_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) },
[IMX8MP_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) },
[IMX8MP_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) },
[IMX8MP_RESET_HDMI_PHY_APB_RESET] = { SRC_HDMI_RCR, BIT(0) },
[IMX8MP_RESET_MEDIA_RESET] = { SRC_DISP_RCR, BIT(0) },
[IMX8MP_RESET_GPU2D_RESET] = { SRC_GPU2D_RCR, BIT(0) },
[IMX8MP_RESET_GPU3D_RESET] = { SRC_GPU3D_RCR, BIT(0) },
[IMX8MP_RESET_GPU_RESET] = { SRC_GPU_RCR, BIT(0) },
[IMX8MP_RESET_VPU_RESET] = { SRC_VPU_RCR, BIT(0) },
[IMX8MP_RESET_VPU_G1_RESET] = { SRC_VPU_G1_RCR, BIT(0) },
[IMX8MP_RESET_VPU_G2_RESET] = { SRC_VPU_G2_RCR, BIT(0) },
[IMX8MP_RESET_VPUVC8KE_RESET] = { SRC_VPUVC8KE_RCR, BIT(0) },
[IMX8MP_RESET_NOC_RESET] = { SRC_NOC_RCR, BIT(0) },
};
static int imx8mp_reset_set(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct imx7_src *imx7src = to_imx7_src(rcdev);
const unsigned int bit = imx7src->signals[id].bit;
unsigned int value = assert ? bit : 0;
switch (id) {
case IMX8MP_RESET_PCIEPHY:
/*
* wait for more than 10us to release phy g_rst and
* btnrst
*/
if (!assert)
udelay(10);
break;
case IMX8MP_RESET_PCIE_CTRL_APPS_EN:
value = assert ? 0 : bit;
break;
}
return imx7_reset_update(imx7src, id, value);
}
static int imx8mp_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return imx8mp_reset_set(rcdev, id, true);
}
static int imx8mp_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return imx8mp_reset_set(rcdev, id, false);
}
static const struct imx7_src_variant variant_imx8mp = {
.signals = imx8mp_src_signals,
.signals_num = ARRAY_SIZE(imx8mp_src_signals),
.ops = {
.assert = imx8mp_reset_assert,
.deassert = imx8mp_reset_deassert,
},
};
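A consumer drives these new i.MX8MP reset lines through the generic reset framework; the quirks (the 10 us delay before releasing the PCIe PHY resets and the inverted polarity of PCIE_CTRL_APPS_EN) stay hidden inside imx8mp_reset_set(). A minimal sketch, assuming a hypothetical PCIe glue driver whose device-tree node names this reset line "pciephy":

/* Hypothetical consumer sketch; the "pciephy" reset name is illustrative. */
#include <linux/reset.h>

static int foo_pcie_phy_power_up(struct device *dev)
{
	struct reset_control *phy_rst;
	int ret;

	phy_rst = devm_reset_control_get_exclusive(dev, "pciephy");
	if (IS_ERR(phy_rst))
		return PTR_ERR(phy_rst);

	ret = reset_control_assert(phy_rst);
	if (ret)
		return ret;

	/* The SRC driver inserts the required delay before releasing g_rst/btnrst */
	return reset_control_deassert(phy_rst);
}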
static int imx7_reset_probe(struct platform_device *pdev)
{
struct imx7_src *imx7src;
@ -283,6 +383,7 @@ static int imx7_reset_probe(struct platform_device *pdev)
static const struct of_device_id imx7_reset_dt_ids[] = {
{ .compatible = "fsl,imx7d-src", .data = &variant_imx7 },
{ .compatible = "fsl,imx8mq-src", .data = &variant_imx8mq },
{ .compatible = "fsl,imx8mp-src", .data = &variant_imx8mp },
{ /* sentinel */ },
};


@ -14,13 +14,23 @@
#include <linux/reset-controller.h>
#include <linux/reset.h>
#include <linux/clk.h>
#include <dt-bindings/power/meson8-power.h>
#include <dt-bindings/power/meson-g12a-power.h>
#include <dt-bindings/power/meson-gxbb-power.h>
#include <dt-bindings/power/meson-sm1-power.h>
/* AO Offsets */
#define AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2)
#define AO_RTI_GEN_PWR_ISO0 (0x3b << 2)
#define GX_AO_RTI_GEN_PWR_SLEEP0 (0x3a << 2)
#define GX_AO_RTI_GEN_PWR_ISO0 (0x3b << 2)
/*
* Meson8/Meson8b/Meson8m2 only expose the power management registers of the
* AO-bus as syscon. 0x3a from GX translates to 0x02, 0x3b translates to 0x03
* and so on.
*/
#define MESON8_AO_RTI_GEN_PWR_SLEEP0 (0x02 << 2)
#define MESON8_AO_RTI_GEN_PWR_ISO0 (0x03 << 2)
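The two mappings differ only in the base of the syscon window; expanding the shifts above gives, for example:

/* Illustrative expansion of the macros above, not part of the driver. */
#define GX_SLEEP0_BYTE_OFFSET		(0x3a << 2)	/* = 0xe8 */
#define MESON8_SLEEP0_BYTE_OFFSET	(0x02 << 2)	/* = 0x08 */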
/* HHI Offsets */
@ -66,18 +76,25 @@ struct meson_ee_pwrc_domain_data {
/* TOP Power Domains */
static struct meson_ee_pwrc_top_domain g12a_pwrc_vpu = {
.sleep_reg = AO_RTI_GEN_PWR_SLEEP0,
static struct meson_ee_pwrc_top_domain gx_pwrc_vpu = {
.sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0,
.sleep_mask = BIT(8),
.iso_reg = AO_RTI_GEN_PWR_SLEEP0,
.iso_reg = GX_AO_RTI_GEN_PWR_SLEEP0,
.iso_mask = BIT(9),
};
static struct meson_ee_pwrc_top_domain meson8_pwrc_vpu = {
.sleep_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0,
.sleep_mask = BIT(8),
.iso_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0,
.iso_mask = BIT(9),
};
#define SM1_EE_PD(__bit) \
{ \
.sleep_reg = AO_RTI_GEN_PWR_SLEEP0, \
.sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0, \
.sleep_mask = BIT(__bit), \
.iso_reg = AO_RTI_GEN_PWR_ISO0, \
.iso_reg = GX_AO_RTI_GEN_PWR_ISO0, \
.iso_mask = BIT(__bit), \
}
@ -124,10 +141,26 @@ static struct meson_ee_pwrc_mem_domain g12a_pwrc_mem_vpu[] = {
VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
};
static struct meson_ee_pwrc_mem_domain g12a_pwrc_mem_eth[] = {
static struct meson_ee_pwrc_mem_domain gxbb_pwrc_mem_vpu[] = {
VPU_MEMPD(HHI_VPU_MEM_PD_REG0),
VPU_MEMPD(HHI_VPU_MEM_PD_REG1),
VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
};
static struct meson_ee_pwrc_mem_domain meson_pwrc_mem_eth[] = {
{ HHI_MEM_PD_REG0, GENMASK(3, 2) },
};
static struct meson_ee_pwrc_mem_domain meson8_pwrc_audio_dsp_mem[] = {
{ HHI_MEM_PD_REG0, GENMASK(1, 0) },
};
static struct meson_ee_pwrc_mem_domain meson8_pwrc_mem_vpu[] = {
VPU_MEMPD(HHI_VPU_MEM_PD_REG0),
VPU_MEMPD(HHI_VPU_MEM_PD_REG1),
VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
};
static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_vpu[] = {
VPU_MEMPD(HHI_VPU_MEM_PD_REG0),
VPU_MEMPD(HHI_VPU_MEM_PD_REG1),
@ -199,9 +232,35 @@ static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_audio[] = {
static bool pwrc_ee_get_power(struct meson_ee_pwrc_domain *pwrc_domain);
static struct meson_ee_pwrc_domain_desc g12a_pwrc_domains[] = {
[PWRC_G12A_VPU_ID] = VPU_PD("VPU", &g12a_pwrc_vpu, g12a_pwrc_mem_vpu,
[PWRC_G12A_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, g12a_pwrc_mem_vpu,
pwrc_ee_get_power, 11, 2),
[PWRC_G12A_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth),
[PWRC_G12A_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
static struct meson_ee_pwrc_domain_desc gxbb_pwrc_domains[] = {
[PWRC_GXBB_VPU_ID] = VPU_PD("VPU", &gx_pwrc_vpu, gxbb_pwrc_mem_vpu,
pwrc_ee_get_power, 12, 2),
[PWRC_GXBB_ETHERNET_MEM_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
static struct meson_ee_pwrc_domain_desc meson8_pwrc_domains[] = {
[PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu,
meson8_pwrc_mem_vpu, pwrc_ee_get_power,
0, 1),
[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
meson_pwrc_mem_eth),
[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
meson8_pwrc_audio_dsp_mem),
};
static struct meson_ee_pwrc_domain_desc meson8b_pwrc_domains[] = {
[PWRC_MESON8_VPU_ID] = VPU_PD("VPU", &meson8_pwrc_vpu,
meson8_pwrc_mem_vpu, pwrc_ee_get_power,
11, 1),
[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
meson_pwrc_mem_eth),
[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
meson8_pwrc_audio_dsp_mem),
};
static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = {
@ -216,7 +275,7 @@ static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = {
[PWRC_SM1_GE2D_ID] = TOP_PD("GE2D", &sm1_pwrc_ge2d, sm1_pwrc_mem_ge2d,
pwrc_ee_get_power),
[PWRC_SM1_AUDIO_ID] = MEM_PD("AUDIO", sm1_pwrc_mem_audio),
[PWRC_SM1_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth),
[PWRC_SM1_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
};
struct meson_ee_pwrc_domain {
@ -470,12 +529,43 @@ static struct meson_ee_pwrc_domain_data meson_ee_g12a_pwrc_data = {
.domains = g12a_pwrc_domains,
};
static struct meson_ee_pwrc_domain_data meson_ee_gxbb_pwrc_data = {
.count = ARRAY_SIZE(gxbb_pwrc_domains),
.domains = gxbb_pwrc_domains,
};
static struct meson_ee_pwrc_domain_data meson_ee_m8_pwrc_data = {
.count = ARRAY_SIZE(meson8_pwrc_domains),
.domains = meson8_pwrc_domains,
};
static struct meson_ee_pwrc_domain_data meson_ee_m8b_pwrc_data = {
.count = ARRAY_SIZE(meson8b_pwrc_domains),
.domains = meson8b_pwrc_domains,
};
static struct meson_ee_pwrc_domain_data meson_ee_sm1_pwrc_data = {
.count = ARRAY_SIZE(sm1_pwrc_domains),
.domains = sm1_pwrc_domains,
};
static const struct of_device_id meson_ee_pwrc_match_table[] = {
{
.compatible = "amlogic,meson8-pwrc",
.data = &meson_ee_m8_pwrc_data,
},
{
.compatible = "amlogic,meson8b-pwrc",
.data = &meson_ee_m8b_pwrc_data,
},
{
.compatible = "amlogic,meson8m2-pwrc",
.data = &meson_ee_m8b_pwrc_data,
},
{
.compatible = "amlogic,meson-gxbb-pwrc",
.data = &meson_ee_gxbb_pwrc_data,
},
{
.compatible = "amlogic,meson-g12a-pwrc",
.data = &meson_ee_g12a_pwrc_data,


@ -58,7 +58,7 @@ static inline struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d,
* If cpu == -1, choose the current cpu, with no guarantee that the
* caller will not subsequently be migrated to another cpu.
*/
if (unlikely(cpu < 0))
if (cpu < 0)
cpu = smp_processor_id();
/* If a specific cpu was requested, pick it up immediately */
@ -70,6 +70,10 @@ static inline struct dpaa2_io *service_select(struct dpaa2_io *d)
if (d)
return d;
d = service_select_by_cpu(d, -1);
if (d)
return d;
spin_lock(&dpio_list_lock);
d = list_entry(dpio_list.next, struct dpaa2_io, node);
list_del(&d->node);


@ -572,18 +572,6 @@ void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, u32 qdid,
#define EQAR_VB(eqar) ((eqar) & 0x80)
#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
u8 idx)
{
if (idx < 16)
qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT + idx * 4,
QMAN_RT_MODE);
else
qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT2 +
(idx - 16) * 4,
QMAN_RT_MODE);
}
#define QB_RT_BIT ((u32)0x100)
/**
* qbman_swp_enqueue_direct() - Issue an enqueue command


@ -449,11 +449,6 @@ static inline int qm_eqcr_init(struct qm_portal *portal,
return 0;
}
static inline unsigned int qm_eqcr_get_ci_stashing(struct qm_portal *portal)
{
return (qm_in(portal, QM_REG_CFG) >> 28) & 0x7;
}
static inline void qm_eqcr_finish(struct qm_portal *portal)
{
struct qm_eqcr *eqcr = &portal->eqcr;


@ -448,7 +448,7 @@ int qe_upload_firmware(const struct qe_firmware *firmware)
unsigned int i;
unsigned int j;
u32 crc;
size_t calc_size = sizeof(struct qe_firmware);
size_t calc_size;
size_t length;
const struct qe_header *hdr;
@ -480,7 +480,7 @@ int qe_upload_firmware(const struct qe_firmware *firmware)
}
/* Validate the length and check if there's a CRC */
calc_size += (firmware->count - 1) * sizeof(struct qe_microcode);
calc_size = struct_size(firmware, microcode, firmware->count);
for (i = 0; i < firmware->count; i++)
/*

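The change above swaps the open-coded size arithmetic for struct_size() from <linux/overflow.h>, which evaluates to sizeof(*p) plus count elements of the trailing flexible array, saturating instead of wrapping if the multiplication would overflow. A hedged sketch with a hypothetical structure (qe_fw_example is not the real qe_firmware layout):

/* Illustrative only: a made-up structure with a flexible array member. */
#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct qe_fw_example {
	u32 count;
	struct qe_mc_example {
		u32 len;
	} microcode[];			/* flexible array member */
};

static struct qe_fw_example *alloc_fw(u32 count)
{
	struct qe_fw_example *fw;

	/*
	 * struct_size() = sizeof(*fw) + count * sizeof(fw->microcode[0]),
	 * with the overflow check folded in.
	 */
	fw = kzalloc(struct_size(fw, microcode, count), GFP_KERNEL);
	if (fw)
		fw->count = count;
	return fw;
}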

@ -519,7 +519,7 @@ int ucc_set_tdm_rxtx_clk(u32 tdm_num, enum qe_clock clock,
int clock_bits;
u32 shift;
struct qe_mux __iomem *qe_mux_reg;
__be32 __iomem *cmxs1cr;
__be32 __iomem *cmxs1cr;
qe_mux_reg = &qe_immr->qmx;


@ -53,11 +53,11 @@ static u32 __init imx8mq_soc_revision(void)
struct device_node *np;
void __iomem *ocotp_base;
u32 magic;
u32 rev = 0;
u32 rev;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
if (!np)
goto out;
return 0;
ocotp_base = of_iomap(np, 0);
WARN_ON(!ocotp_base);
@ -78,9 +78,8 @@ static u32 __init imx8mq_soc_revision(void)
soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
iounmap(ocotp_base);
out:
of_node_put(np);
return rev;
}


@ -44,4 +44,11 @@ config MTK_SCPSYS
Say yes here to add support for the MediaTek SCPSYS power domain
driver.
config MTK_MMSYS
bool "MediaTek MMSYS Support"
default ARCH_MEDIATEK
help
Say yes here to add support for the MediaTek Multimedia
Subsystem (MMSYS).
endmenu


@ -3,3 +3,4 @@ obj-$(CONFIG_MTK_CMDQ) += mtk-cmdq-helper.o
obj-$(CONFIG_MTK_INFRACFG) += mtk-infracfg.o
obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o
obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o


@ -0,0 +1,378 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014 MediaTek Inc.
* Author: James Liao <jamesjj.liao@mediatek.com>
*/
#include <linux/device.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/soc/mediatek/mtk-mmsys.h>
#include "../../gpu/drm/mediatek/mtk_drm_ddp.h"
#include "../../gpu/drm/mediatek/mtk_drm_ddp_comp.h"
#define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040
#define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044
#define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048
#define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c
#define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050
#define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084
#define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088
#define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4
#define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8
#define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac
#define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8
#define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4
#define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8
#define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100
#define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030
#define DISP_REG_CONFIG_OUT_SEL 0x04c
#define DISP_REG_CONFIG_DSI_SEL 0x050
#define DISP_REG_CONFIG_DPI_SEL 0x064
#define OVL0_MOUT_EN_COLOR0 0x1
#define OD_MOUT_EN_RDMA0 0x1
#define OD1_MOUT_EN_RDMA1 BIT(16)
#define UFOE_MOUT_EN_DSI0 0x1
#define COLOR0_SEL_IN_OVL0 0x1
#define OVL1_MOUT_EN_COLOR1 0x1
#define GAMMA_MOUT_EN_RDMA1 0x1
#define RDMA0_SOUT_DPI0 0x2
#define RDMA0_SOUT_DPI1 0x3
#define RDMA0_SOUT_DSI1 0x1
#define RDMA0_SOUT_DSI2 0x4
#define RDMA0_SOUT_DSI3 0x5
#define RDMA1_SOUT_DPI0 0x2
#define RDMA1_SOUT_DPI1 0x3
#define RDMA1_SOUT_DSI1 0x1
#define RDMA1_SOUT_DSI2 0x4
#define RDMA1_SOUT_DSI3 0x5
#define RDMA2_SOUT_DPI0 0x2
#define RDMA2_SOUT_DPI1 0x3
#define RDMA2_SOUT_DSI1 0x1
#define RDMA2_SOUT_DSI2 0x4
#define RDMA2_SOUT_DSI3 0x5
#define DPI0_SEL_IN_RDMA1 0x1
#define DPI0_SEL_IN_RDMA2 0x3
#define DPI1_SEL_IN_RDMA1 (0x1 << 8)
#define DPI1_SEL_IN_RDMA2 (0x3 << 8)
#define DSI0_SEL_IN_RDMA1 0x1
#define DSI0_SEL_IN_RDMA2 0x4
#define DSI1_SEL_IN_RDMA1 0x1
#define DSI1_SEL_IN_RDMA2 0x4
#define DSI2_SEL_IN_RDMA1 (0x1 << 16)
#define DSI2_SEL_IN_RDMA2 (0x4 << 16)
#define DSI3_SEL_IN_RDMA1 (0x1 << 16)
#define DSI3_SEL_IN_RDMA2 (0x4 << 16)
#define COLOR1_SEL_IN_OVL1 0x1
#define OVL_MOUT_EN_RDMA 0x1
#define BLS_TO_DSI_RDMA1_TO_DPI1 0x8
#define BLS_TO_DPI_RDMA1_TO_DSI 0x2
#define DSI_SEL_IN_BLS 0x0
#define DPI_SEL_IN_BLS 0x0
#define DSI_SEL_IN_RDMA 0x1
struct mtk_mmsys_driver_data {
const char *clk_driver;
};
static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = {
.clk_driver = "clk-mt2701-mm",
};
static const struct mtk_mmsys_driver_data mt2712_mmsys_driver_data = {
.clk_driver = "clk-mt2712-mm",
};
static const struct mtk_mmsys_driver_data mt6779_mmsys_driver_data = {
.clk_driver = "clk-mt6779-mm",
};
static const struct mtk_mmsys_driver_data mt6797_mmsys_driver_data = {
.clk_driver = "clk-mt6797-mm",
};
static const struct mtk_mmsys_driver_data mt8173_mmsys_driver_data = {
.clk_driver = "clk-mt8173-mm",
};
static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = {
.clk_driver = "clk-mt8183-mm",
};
static unsigned int mtk_mmsys_ddp_mout_en(enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next,
unsigned int *addr)
{
unsigned int value;
if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
*addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN;
value = OVL0_MOUT_EN_COLOR0;
} else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) {
*addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN;
value = OVL_MOUT_EN_RDMA;
} else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) {
*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
value = OD_MOUT_EN_RDMA0;
} else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN;
value = UFOE_MOUT_EN_DSI0;
} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
*addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN;
value = OVL1_MOUT_EN_COLOR1;
} else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) {
*addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN;
value = GAMMA_MOUT_EN_RDMA1;
} else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) {
*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
value = OD1_MOUT_EN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
value = RDMA0_SOUT_DSI3;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DSI3;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
value = RDMA1_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DPI0;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DPI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
value = RDMA2_SOUT_DSI3;
} else {
value = 0;
}
return value;
}
static unsigned int mtk_mmsys_ddp_sel_in(enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next,
unsigned int *addr)
{
unsigned int value;
if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
*addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN;
value = COLOR0_SEL_IN_OVL0;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI0_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI1_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI0_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI1_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI2_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI3_SEL_IN_RDMA1;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI0_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
*addr = DISP_REG_CONFIG_DPI_SEL_IN;
value = DPI1_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI0_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
value = DSI1_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI2_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
value = DSI3_SEL_IN_RDMA2;
} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
*addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN;
value = COLOR1_SEL_IN_OVL1;
} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
*addr = DISP_REG_CONFIG_DSI_SEL;
value = DSI_SEL_IN_BLS;
} else {
value = 0;
}
return value;
}
static void mtk_mmsys_ddp_sout_sel(void __iomem *config_regs,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1,
config_regs + DISP_REG_CONFIG_OUT_SEL);
} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) {
writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI,
config_regs + DISP_REG_CONFIG_OUT_SEL);
writel_relaxed(DSI_SEL_IN_RDMA,
config_regs + DISP_REG_CONFIG_DSI_SEL);
writel_relaxed(DPI_SEL_IN_BLS,
config_regs + DISP_REG_CONFIG_DPI_SEL);
}
}
void mtk_mmsys_ddp_connect(struct device *dev,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
void __iomem *config_regs = dev_get_drvdata(dev);
unsigned int addr, value, reg;
value = mtk_mmsys_ddp_mout_en(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) | value;
writel_relaxed(reg, config_regs + addr);
}
mtk_mmsys_ddp_sout_sel(config_regs, cur, next);
value = mtk_mmsys_ddp_sel_in(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) | value;
writel_relaxed(reg, config_regs + addr);
}
}
EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_connect);
void mtk_mmsys_ddp_disconnect(struct device *dev,
enum mtk_ddp_comp_id cur,
enum mtk_ddp_comp_id next)
{
void __iomem *config_regs = dev_get_drvdata(dev);
unsigned int addr, value, reg;
value = mtk_mmsys_ddp_mout_en(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) & ~value;
writel_relaxed(reg, config_regs + addr);
}
value = mtk_mmsys_ddp_sel_in(cur, next, &addr);
if (value) {
reg = readl_relaxed(config_regs + addr) & ~value;
writel_relaxed(reg, config_regs + addr);
}
}
EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_disconnect);
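These two exports form the routing API the display driver is expected to use when it builds or tears down a pipeline. A minimal sketch, assuming a hypothetical caller that already holds the mmsys struct device (as mediatek-drm would after being spawned by the probe below) and uses the component IDs from the DRM DDP headers included at the top of this file:

/* Hypothetical pipeline setup: route OVL0 into COLOR0 on enable. */
#include <linux/soc/mediatek/mtk-mmsys.h>

static void foo_pipeline_enable(struct device *mmsys_dev)
{
	mtk_mmsys_ddp_connect(mmsys_dev, DDP_COMPONENT_OVL0,
			      DDP_COMPONENT_COLOR0);
}

static void foo_pipeline_disable(struct device *mmsys_dev)
{
	mtk_mmsys_ddp_disconnect(mmsys_dev, DDP_COMPONENT_OVL0,
				 DDP_COMPONENT_COLOR0);
}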
static int mtk_mmsys_probe(struct platform_device *pdev)
{
const struct mtk_mmsys_driver_data *data;
struct device *dev = &pdev->dev;
struct platform_device *clks;
struct platform_device *drm;
void __iomem *config_regs;
struct resource *mem;
int ret;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
config_regs = devm_ioremap_resource(dev, mem);
if (IS_ERR(config_regs)) {
ret = PTR_ERR(config_regs);
dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n",
ret);
return ret;
}
platform_set_drvdata(pdev, config_regs);
data = of_device_get_match_data(&pdev->dev);
clks = platform_device_register_data(&pdev->dev, data->clk_driver,
PLATFORM_DEVID_AUTO, NULL, 0);
if (IS_ERR(clks))
return PTR_ERR(clks);
drm = platform_device_register_data(&pdev->dev, "mediatek-drm",
PLATFORM_DEVID_AUTO, NULL, 0);
if (IS_ERR(drm)) {
platform_device_unregister(clks);
return PTR_ERR(drm);
}
return 0;
}
static const struct of_device_id of_match_mtk_mmsys[] = {
{
.compatible = "mediatek,mt2701-mmsys",
.data = &mt2701_mmsys_driver_data,
},
{
.compatible = "mediatek,mt2712-mmsys",
.data = &mt2712_mmsys_driver_data,
},
{
.compatible = "mediatek,mt6779-mmsys",
.data = &mt6779_mmsys_driver_data,
},
{
.compatible = "mediatek,mt6797-mmsys",
.data = &mt6797_mmsys_driver_data,
},
{
.compatible = "mediatek,mt8173-mmsys",
.data = &mt8173_mmsys_driver_data,
},
{
.compatible = "mediatek,mt8183-mmsys",
.data = &mt8183_mmsys_driver_data,
},
{ }
};
static struct platform_driver mtk_mmsys_drv = {
.driver = {
.name = "mtk-mmsys",
.of_match_table = of_match_mtk_mmsys,
},
.probe = mtk_mmsys_probe,
};
builtin_platform_driver(mtk_mmsys_drv);


@ -107,7 +107,7 @@ config QCOM_RPMH
help apply the aggregated state on the resource.
config QCOM_RPMHPD
bool "Qualcomm RPMh Power domain driver"
tristate "Qualcomm RPMh Power domain driver"
depends on QCOM_RPMH && QCOM_COMMAND_DB
help
QCOM RPMh Power domain driver to support power-domains with
@ -116,8 +116,8 @@ config QCOM_RPMHPD
for the voltage rail.
config QCOM_RPMPD
bool "Qualcomm RPM Power domain driver"
depends on QCOM_SMD_RPM=y
tristate "Qualcomm RPM Power domain driver"
depends on QCOM_SMD_RPM
help
QCOM RPM Power domain driver to support power-domains with
performance states. The driver communicates a performance state


@ -1,12 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. */
#include <linux/debugfs.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/seq_file.h>
#include <linux/types.h>
#include <soc/qcom/cmd-db.h>
@ -236,6 +237,77 @@ enum cmd_db_hw_type cmd_db_read_slave_id(const char *id)
}
EXPORT_SYMBOL(cmd_db_read_slave_id);
#ifdef CONFIG_DEBUG_FS
static int cmd_db_debugfs_dump(struct seq_file *seq, void *p)
{
int i, j;
const struct rsc_hdr *rsc;
const struct entry_header *ent;
const char *name;
u16 len, version;
u8 major, minor;
seq_puts(seq, "Command DB DUMP\n");
for (i = 0; i < MAX_SLV_ID; i++) {
rsc = &cmd_db_header->header[i];
if (!rsc->slv_id)
break;
switch (le16_to_cpu(rsc->slv_id)) {
case CMD_DB_HW_ARC:
name = "ARC";
break;
case CMD_DB_HW_VRM:
name = "VRM";
break;
case CMD_DB_HW_BCM:
name = "BCM";
break;
default:
name = "Unknown";
break;
}
version = le16_to_cpu(rsc->version);
major = version >> 8;
minor = version;
seq_printf(seq, "Slave %s (v%u.%u)\n", name, major, minor);
seq_puts(seq, "-------------------------\n");
ent = rsc_to_entry_header(rsc);
for (j = 0; j < le16_to_cpu(rsc->cnt); j++, ent++) {
seq_printf(seq, "0x%05x: %*pEp", le32_to_cpu(ent->addr),
(int)sizeof(ent->id), ent->id);
len = le16_to_cpu(ent->len);
if (len) {
seq_printf(seq, " [%*ph]",
len, rsc_offset(rsc, ent));
}
seq_putc(seq, '\n');
}
}
return 0;
}
static int open_cmd_db_debugfs(struct inode *inode, struct file *file)
{
return single_open(file, cmd_db_debugfs_dump, inode->i_private);
}
#endif
static const struct file_operations cmd_db_debugfs_ops = {
#ifdef CONFIG_DEBUG_FS
.open = open_cmd_db_debugfs,
#endif
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int cmd_db_dev_probe(struct platform_device *pdev)
{
struct reserved_mem *rmem;
@ -259,12 +331,14 @@ static int cmd_db_dev_probe(struct platform_device *pdev)
return -EINVAL;
}
debugfs_create_file("cmd-db", 0400, NULL, NULL, &cmd_db_debugfs_ops);
return 0;
}
static const struct of_device_id cmd_db_match_table[] = {
{ .compatible = "qcom,cmd-db" },
{ },
{ }
};
static struct platform_driver cmd_db_dev_driver = {


@ -155,10 +155,6 @@ static int pdr_register_listener(struct pdr_handle *pdr,
return ret;
}
if ((int)resp.curr_state < INT_MIN || (int)resp.curr_state > INT_MAX)
pr_err("PDR: %s notification state invalid: 0x%x\n",
pds->service_path, resp.curr_state);
pds->state = resp.curr_state;
return 0;


@ -599,6 +599,7 @@ static const struct of_device_id qmp_dt_match[] = {
{ .compatible = "qcom,sc7180-aoss-qmp", },
{ .compatible = "qcom,sdm845-aoss-qmp", },
{ .compatible = "qcom,sm8150-aoss-qmp", },
{ .compatible = "qcom,sm8250-aoss-qmp", },
{}
};
MODULE_DEVICE_TABLE(of, qmp_dt_match);


@ -22,16 +22,23 @@ struct rsc_drv;
* struct tcs_group: group of Trigger Command Sets (TCS) to send state requests
* to the controller
*
* @drv: the controller
* @type: type of the TCS in this group - active, sleep, wake
* @mask: mask of the TCSes relative to all the TCSes in the RSC
* @offset: start of the TCS group relative to the TCSes in the RSC
* @num_tcs: number of TCSes in this type
* @ncpt: number of commands in each TCS
* @lock: lock for synchronizing this TCS writes
* @req: requests that are sent from the TCS
* @cmd_cache: flattened cache of cmds in sleep/wake TCS
* @slots: indicates which of @cmd_addr are occupied
* @drv: The controller.
* @type: Type of the TCS in this group - active, sleep, wake.
* @mask: Mask of the TCSes relative to all the TCSes in the RSC.
* @offset: Start of the TCS group relative to the TCSes in the RSC.
* @num_tcs: Number of TCSes in this type.
* @ncpt: Number of commands in each TCS.
* @req: Requests that are sent from the TCS; only used for ACTIVE_ONLY
* transfers (could be on a wake/sleep TCS if we are borrowing for
* an ACTIVE_ONLY transfer).
* Start: grab drv->lock, set req, set tcs_in_use, drop drv->lock,
* trigger
* End: get irq, access req,
* grab drv->lock, clear tcs_in_use, drop drv->lock
* @slots: Indicates which of @cmd_addr are occupied; only used for
* SLEEP / WAKE TCSs. Things are tightly packed in the
* case that (ncpt < MAX_CMDS_PER_TCS). That is if ncpt = 2 and
* MAX_CMDS_PER_TCS = 16 then bit[2] = the first bit in 2nd TCS.
*/
struct tcs_group {
struct rsc_drv *drv;
@ -40,9 +47,7 @@ struct tcs_group {
u32 offset;
int num_tcs;
int ncpt;
spinlock_t lock;
const struct tcs_request *req[MAX_TCS_PER_TYPE];
u32 *cmd_cache;
DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
};
@ -84,20 +89,32 @@ struct rpmh_ctrlr {
* struct rsc_drv: the Direct Resource Voter (DRV) of the
* Resource State Coordinator controller (RSC)
*
* @name: controller identifier
* @tcs_base: start address of the TCS registers in this controller
* @id: instance id in the controller (Direct Resource Voter)
* @num_tcs: number of TCSes in this DRV
* @tcs: TCS groups
* @tcs_in_use: s/w state of the TCS
* @lock: synchronize state of the controller
* @client: handle to the DRV's client.
* @name: Controller identifier.
* @tcs_base: Start address of the TCS registers in this controller.
* @id: Instance id in the controller (Direct Resource Voter).
* @num_tcs: Number of TCSes in this DRV.
* @rsc_pm: CPU PM notifier for controller.
* Used when solver mode is not present.
* @cpus_in_pm: Number of CPUs not in idle power collapse.
* Used when solver mode is not present.
* @tcs: TCS groups.
* @tcs_in_use: S/W state of the TCS; only set for ACTIVE_ONLY
* transfers, but might show a sleep/wake TCS in use if
* it was borrowed for an active_only transfer. You
* must hold the lock in this struct (AKA drv->lock) in
* order to update this.
* @lock: Synchronize state of the controller. If RPMH's cache
* lock will also be held, the order is: drv->lock then
* cache_lock.
* @client: Handle to the DRV's client.
*/
struct rsc_drv {
const char *name;
void __iomem *tcs_base;
int id;
int num_tcs;
struct notifier_block rsc_pm;
atomic_t cpus_in_pm;
struct tcs_group tcs[TCS_TYPE_NR];
DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
spinlock_t lock;
@ -107,7 +124,7 @@ struct rsc_drv {
int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
const struct tcs_request *msg);
int rpmh_rsc_invalidate(struct rsc_drv *drv);
void rpmh_rsc_invalidate(struct rsc_drv *drv);
void rpmh_tx_done(const struct tcs_request *msg, int r);
int rpmh_flush(struct rpmh_ctrlr *ctrlr);

File diff suppressed because it is too large.

View File

@ -9,6 +9,7 @@
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/lockdep.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
@ -119,6 +120,7 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
{
struct cache_req *req;
unsigned long flags;
u32 old_sleep_val, old_wake_val;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
req = __find_req(ctrlr, cmd->addr);
@ -133,26 +135,27 @@ static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
req->addr = cmd->addr;
req->sleep_val = req->wake_val = UINT_MAX;
INIT_LIST_HEAD(&req->list);
list_add_tail(&req->list, &ctrlr->cache);
existing:
old_sleep_val = req->sleep_val;
old_wake_val = req->wake_val;
switch (state) {
case RPMH_ACTIVE_ONLY_STATE:
if (req->sleep_val != UINT_MAX)
req->wake_val = cmd->data;
break;
case RPMH_WAKE_ONLY_STATE:
req->wake_val = cmd->data;
break;
case RPMH_SLEEP_STATE:
req->sleep_val = cmd->data;
break;
default:
break;
}
ctrlr->dirty = true;
ctrlr->dirty |= (req->sleep_val != old_sleep_val ||
req->wake_val != old_wake_val) &&
req->sleep_val != UINT_MAX &&
req->wake_val != UINT_MAX;
unlock:
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
@ -287,6 +290,7 @@ static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_add_tail(&req->list, &ctrlr->batch_cache);
ctrlr->dirty = true;
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}
@ -294,12 +298,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
{
struct batch_cache_req *req;
const struct rpmh_request *rpm_msg;
unsigned long flags;
int ret = 0;
int i;
/* Send Sleep/Wake requests to the controller, expect no response */
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry(req, &ctrlr->batch_cache, list) {
for (i = 0; i < req->count; i++) {
rpm_msg = req->rpm_msgs + i;
@ -309,23 +311,10 @@ static int flush_batch(struct rpmh_ctrlr *ctrlr)
break;
}
}
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
return ret;
}
static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
{
struct batch_cache_req *req, *tmp;
unsigned long flags;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
kfree(req);
INIT_LIST_HEAD(&ctrlr->batch_cache);
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
}
/**
* rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
* batch to finish.
@ -442,36 +431,42 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
}
/**
* rpmh_flush: Flushes the buffered active and sleep sets to TCS
* rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
*
* @ctrlr: controller making request to flush cached data
* @ctrlr: Controller making request to flush cached data
*
* Return: -EBUSY if the controller is busy, probably waiting on a response
* to a RPMH request sent earlier.
*
* This function is always called from the sleep code from the last CPU
* that is powering down the entire system. Since no other RPMH API would be
* executing at this time, it is safe to run lockless.
* Return:
* * 0 - Success
* * Error code - Otherwise
*/
int rpmh_flush(struct rpmh_ctrlr *ctrlr)
{
struct cache_req *p;
int ret;
int ret = 0;
lockdep_assert_irqs_disabled();
/*
* Currently rpmh_flush() is only called when we think we're running
* on the last processor. If the lock is busy it means another
* processor is up and it's better to abort than spin.
*/
if (!spin_trylock(&ctrlr->cache_lock))
return -EBUSY;
if (!ctrlr->dirty) {
pr_debug("Skipping flush, TCS has latest data.\n");
return 0;
goto exit;
}
/* Invalidate the TCSes first to avoid stale data */
rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
/* First flush the cached batch requests */
ret = flush_batch(ctrlr);
if (ret)
return ret;
goto exit;
/*
* Nobody else should be calling this function other than system PM,
* hence we can run without locks.
*/
list_for_each_entry(p, &ctrlr->cache, list) {
if (!is_req_valid(p)) {
pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
@ -481,38 +476,40 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr,
p->sleep_val);
if (ret)
return ret;
goto exit;
ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr,
p->wake_val);
if (ret)
return ret;
goto exit;
}
ctrlr->dirty = false;
return 0;
exit:
spin_unlock(&ctrlr->cache_lock);
return ret;
}
/**
* rpmh_invalidate: Invalidate all sleep and active sets
* sets.
* rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
*
* @dev: The device making the request
*
* Invalidate the sleep and active values in the TCS blocks.
* Invalidate the sleep and wake values in batch_cache.
*/
int rpmh_invalidate(const struct device *dev)
{
struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
int ret;
struct batch_cache_req *req, *tmp;
unsigned long flags;
invalidate_batch(ctrlr);
spin_lock_irqsave(&ctrlr->cache_lock, flags);
list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
kfree(req);
INIT_LIST_HEAD(&ctrlr->batch_cache);
ctrlr->dirty = true;
spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
do {
ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
} while (ret == -EAGAIN);
return ret;
return 0;
}
EXPORT_SYMBOL(rpmh_invalidate);


@ -4,6 +4,7 @@
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/pm_domain.h>
#include <linux/slab.h>
@ -166,6 +167,24 @@ static const struct rpmhpd_desc sm8150_desc = {
.num_pds = ARRAY_SIZE(sm8150_rpmhpds),
};
static struct rpmhpd *sm8250_rpmhpds[] = {
[SM8250_CX] = &sdm845_cx,
[SM8250_CX_AO] = &sdm845_cx_ao,
[SM8250_EBI] = &sdm845_ebi,
[SM8250_GFX] = &sdm845_gfx,
[SM8250_LCX] = &sdm845_lcx,
[SM8250_LMX] = &sdm845_lmx,
[SM8250_MMCX] = &sm8150_mmcx,
[SM8250_MMCX_AO] = &sm8150_mmcx_ao,
[SM8250_MX] = &sdm845_mx,
[SM8250_MX_AO] = &sdm845_mx_ao,
};
static const struct rpmhpd_desc sm8250_desc = {
.rpmhpds = sm8250_rpmhpds,
.num_pds = ARRAY_SIZE(sm8250_rpmhpds),
};
/* SC7180 RPMH powerdomains */
static struct rpmhpd *sc7180_rpmhpds[] = {
[SC7180_CX] = &sdm845_cx,
@ -187,8 +206,10 @@ static const struct of_device_id rpmhpd_match_table[] = {
{ .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
{ .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc },
{ .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc },
{ }
};
MODULE_DEVICE_TABLE(of, rpmhpd_match_table);
static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
unsigned int corner, bool sync)
@ -460,3 +481,6 @@ static int __init rpmhpd_init(void)
return platform_driver_register(&rpmhpd_driver);
}
core_initcall(rpmhpd_init);
MODULE_DESCRIPTION("Qualcomm Technologies, Inc. RPMh Power Domain Driver");
MODULE_LICENSE("GPL v2");


@ -4,6 +4,7 @@
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/pm_domain.h>
#include <linux/of.h>
@ -226,6 +227,7 @@ static const struct of_device_id rpmpd_match_table[] = {
{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
{ }
};
MODULE_DEVICE_TABLE(of, rpmpd_match_table);
static int rpmpd_send_enable(struct rpmpd *pd, bool enable)
{
@ -422,3 +424,6 @@ static int __init rpmpd_init(void)
return platform_driver_register(&rpmpd_driver);
}
core_initcall(rpmpd_init);
MODULE_DESCRIPTION("Qualcomm Technologies, Inc. RPM Power Domain Driver");
MODULE_LICENSE("GPL v2");


@ -474,10 +474,8 @@ static int qcom_smp2p_probe(struct platform_device *pdev)
goto report_read_failure;
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(&pdev->dev, "unable to acquire smp2p interrupt\n");
if (irq < 0)
return irq;
}
smp2p->mbox_client.dev = &pdev->dev;
smp2p->mbox_client.knows_txdone = true;


@ -188,6 +188,10 @@ static const struct soc_id soc_id[] = {
{ 216, "MSM8674PRO" },
{ 217, "MSM8974-AA" },
{ 218, "MSM8974-AB" },
{ 233, "MSM8936" },
{ 239, "MSM8939" },
{ 240, "APQ8036" },
{ 241, "APQ8039" },
{ 246, "MSM8996" },
{ 247, "APQ8016" },
{ 248, "MSM8216" },
@ -430,6 +434,8 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
qs->attr.family = "Snapdragon";
qs->attr.machine = socinfo_machine(&pdev->dev,
le32_to_cpu(info->id));
qs->attr.soc_id = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u",
le32_to_cpu(info->id));
qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u",
SOCINFO_MAJOR(le32_to_cpu(info->ver)),
SOCINFO_MINOR(le32_to_cpu(info->ver)));


@ -83,6 +83,13 @@ config ARCH_R8A7740
select ARM_ERRATA_754322
select RENESAS_INTC_IRQPIN
config ARCH_R8A7742
bool "RZ/G1H (R8A77420)"
select ARCH_RCAR_GEN2
select ARM_ERRATA_798181 if SMP
select ARM_ERRATA_814220
select SYSC_R8A7742
config ARCH_R8A7743
bool "RZ/G1M (R8A77430)"
select ARCH_RCAR_GEN2
@ -261,6 +268,10 @@ config ARCH_R8A77995
endif # ARM64
# SoC
config SYSC_R8A7742
bool "RZ/G1H System Controller support" if COMPILE_TEST
select SYSC_RCAR
config SYSC_R8A7743
bool "RZ/G1M System Controller support" if COMPILE_TEST
select SYSC_RCAR


@ -3,6 +3,7 @@
obj-$(CONFIG_SOC_RENESAS) += renesas-soc.o
# SoC
obj-$(CONFIG_SYSC_R8A7742) += r8a7742-sysc.o
obj-$(CONFIG_SYSC_R8A7743) += r8a7743-sysc.o
obj-$(CONFIG_SYSC_R8A7745) += r8a7745-sysc.o
obj-$(CONFIG_SYSC_R8A77470) += r8a77470-sysc.o


@ -0,0 +1,42 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Renesas RZ/G1H System Controller
*
* Copyright (C) 2020 Renesas Electronics Corp.
*/
#include <linux/kernel.h>
#include <dt-bindings/power/r8a7742-sysc.h>
#include "rcar-sysc.h"
static const struct rcar_sysc_area r8a7742_areas[] __initconst = {
{ "always-on", 0, 0, R8A7742_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
{ "ca15-scu", 0x180, 0, R8A7742_PD_CA15_SCU, R8A7742_PD_ALWAYS_ON,
PD_SCU },
{ "ca15-cpu0", 0x40, 0, R8A7742_PD_CA15_CPU0, R8A7742_PD_CA15_SCU,
PD_CPU_NOCR },
{ "ca15-cpu1", 0x40, 1, R8A7742_PD_CA15_CPU1, R8A7742_PD_CA15_SCU,
PD_CPU_NOCR },
{ "ca15-cpu2", 0x40, 2, R8A7742_PD_CA15_CPU2, R8A7742_PD_CA15_SCU,
PD_CPU_NOCR },
{ "ca15-cpu3", 0x40, 3, R8A7742_PD_CA15_CPU3, R8A7742_PD_CA15_SCU,
PD_CPU_NOCR },
{ "ca7-scu", 0x100, 0, R8A7742_PD_CA7_SCU, R8A7742_PD_ALWAYS_ON,
PD_SCU },
{ "ca7-cpu0", 0x1c0, 0, R8A7742_PD_CA7_CPU0, R8A7742_PD_CA7_SCU,
PD_CPU_NOCR },
{ "ca7-cpu1", 0x1c0, 1, R8A7742_PD_CA7_CPU1, R8A7742_PD_CA7_SCU,
PD_CPU_NOCR },
{ "ca7-cpu2", 0x1c0, 2, R8A7742_PD_CA7_CPU2, R8A7742_PD_CA7_SCU,
PD_CPU_NOCR },
{ "ca7-cpu3", 0x1c0, 3, R8A7742_PD_CA7_CPU3, R8A7742_PD_CA7_SCU,
PD_CPU_NOCR },
{ "rgx", 0xc0, 0, R8A7742_PD_RGX, R8A7742_PD_ALWAYS_ON },
};
const struct rcar_sysc_info r8a7742_sysc_info __initconst = {
.areas = r8a7742_areas,
.num_areas = ARRAY_SIZE(r8a7742_areas),
};


@ -39,6 +39,7 @@ static const struct rst_config rcar_rst_gen3 __initconst = {
static const struct of_device_id rcar_rst_matches[] __initconst = {
/* RZ/G1 is handled like R-Car Gen2 */
{ .compatible = "renesas,r8a7742-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a7743-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a7744-rst", .data = &rcar_rst_gen2 },
{ .compatible = "renesas,r8a7745-rst", .data = &rcar_rst_gen2 },


@ -273,6 +273,9 @@ finalize:
}
static const struct of_device_id rcar_sysc_matches[] __initconst = {
#ifdef CONFIG_SYSC_R8A7742
{ .compatible = "renesas,r8a7742-sysc", .data = &r8a7742_sysc_info },
#endif
#ifdef CONFIG_SYSC_R8A7743
{ .compatible = "renesas,r8a7743-sysc", .data = &r8a7743_sysc_info },
/* RZ/G1N is identical to RZ/G2M w.r.t. power domains. */


@ -49,6 +49,7 @@ struct rcar_sysc_info {
u32 extmask_val; /* SYSCEXTMASK register mask value */
};
extern const struct rcar_sysc_info r8a7742_sysc_info;
extern const struct rcar_sysc_info r8a7743_sysc_info;
extern const struct rcar_sysc_info r8a7745_sysc_info;
extern const struct rcar_sysc_info r8a77470_sysc_info;


@ -133,6 +133,7 @@ config SOC_TEGRA_FLOWCTRL
config SOC_TEGRA_PMC
bool
select GENERIC_PINCONF
config SOC_TEGRA_POWERGATE_BPMP
def_bool y


@ -300,6 +300,59 @@ static void tegra_enable_fuse_clk(void __iomem *base)
writel(reg, base + 0x14);
}
static ssize_t major_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", tegra_get_major_rev());
}
static DEVICE_ATTR_RO(major);
static ssize_t minor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", tegra_get_minor_rev());
}
static DEVICE_ATTR_RO(minor);
static struct attribute *tegra_soc_attr[] = {
&dev_attr_major.attr,
&dev_attr_minor.attr,
NULL,
};
const struct attribute_group tegra_soc_attr_group = {
.attrs = tegra_soc_attr,
};
#ifdef CONFIG_ARCH_TEGRA_194_SOC
static ssize_t platform_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
/*
* Displays the value in the 'pre_si_platform' field of the HIDREV
* register for Tegra194 devices. A value of 0 indicates that the
* platform is silicon; any other value indicates the type of
* simulation platform being used.
*/
return sprintf(buf, "%d\n", (tegra_read_chipid() >> 20) & 0xf);
}
static DEVICE_ATTR_RO(platform);
static struct attribute *tegra194_soc_attr[] = {
&dev_attr_major.attr,
&dev_attr_minor.attr,
&dev_attr_platform.attr,
NULL,
};
const struct attribute_group tegra194_soc_attr_group = {
.attrs = tegra194_soc_attr,
};
#endif
struct device * __init tegra_soc_device_register(void)
{
struct soc_device_attribute *attr;
@ -310,8 +363,10 @@ struct device * __init tegra_soc_device_register(void)
return NULL;
attr->family = kasprintf(GFP_KERNEL, "Tegra");
attr->revision = kasprintf(GFP_KERNEL, "%d", tegra_sku_info.revision);
attr->revision = kasprintf(GFP_KERNEL, "%s",
tegra_revision_name[tegra_sku_info.revision]);
attr->soc_id = kasprintf(GFP_KERNEL, "%u", tegra_get_chip_id());
attr->custom_attr_group = fuse->soc->soc_attr_group;
dev = soc_device_register(attr);
if (IS_ERR(dev)) {


@ -164,4 +164,5 @@ const struct tegra_fuse_soc tegra20_fuse_soc = {
.speedo_init = tegra20_init_speedo_data,
.probe = tegra20_fuse_probe,
.info = &tegra20_fuse_info,
.soc_attr_group = &tegra_soc_attr_group,
};


@ -111,6 +111,7 @@ const struct tegra_fuse_soc tegra30_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra30_init_speedo_data,
.info = &tegra30_fuse_info,
.soc_attr_group = &tegra_soc_attr_group,
};
#endif
@ -125,6 +126,7 @@ const struct tegra_fuse_soc tegra114_fuse_soc = {
.init = tegra30_fuse_init,
.speedo_init = tegra114_init_speedo_data,
.info = &tegra114_fuse_info,
.soc_attr_group = &tegra_soc_attr_group,
};
#endif
@ -205,6 +207,7 @@ const struct tegra_fuse_soc tegra124_fuse_soc = {
.info = &tegra124_fuse_info,
.lookups = tegra124_fuse_lookups,
.num_lookups = ARRAY_SIZE(tegra124_fuse_lookups),
.soc_attr_group = &tegra_soc_attr_group,
};
#endif
@ -290,6 +293,7 @@ const struct tegra_fuse_soc tegra210_fuse_soc = {
.info = &tegra210_fuse_info,
.lookups = tegra210_fuse_lookups,
.num_lookups = ARRAY_SIZE(tegra210_fuse_lookups),
.soc_attr_group = &tegra_soc_attr_group,
};
#endif
@ -319,6 +323,7 @@ const struct tegra_fuse_soc tegra186_fuse_soc = {
.info = &tegra186_fuse_info,
.lookups = tegra186_fuse_lookups,
.num_lookups = ARRAY_SIZE(tegra186_fuse_lookups),
.soc_attr_group = &tegra_soc_attr_group,
};
#endif
@ -348,5 +353,6 @@ const struct tegra_fuse_soc tegra194_fuse_soc = {
.info = &tegra194_fuse_info,
.lookups = tegra194_fuse_lookups,
.num_lookups = ARRAY_SIZE(tegra194_fuse_lookups),
.soc_attr_group = &tegra194_soc_attr_group,
};
#endif

View File

@ -32,6 +32,8 @@ struct tegra_fuse_soc {
const struct nvmem_cell_lookup *lookups;
unsigned int num_lookups;
const struct attribute_group *soc_attr_group;
};
struct tegra_fuse {
@ -64,6 +66,11 @@ void tegra_init_apbmisc(void);
bool __init tegra_fuse_read_spare(unsigned int spare);
u32 __init tegra_fuse_read_early(unsigned int offset);
u8 tegra_get_major_rev(void);
u8 tegra_get_minor_rev(void);
extern const struct attribute_group tegra_soc_attr_group;
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
void tegra20_init_speedo_data(struct tegra_sku_info *sku_info);
#endif
@ -110,6 +117,7 @@ extern const struct tegra_fuse_soc tegra186_fuse_soc;
#ifdef CONFIG_ARCH_TEGRA_194_SOC
extern const struct tegra_fuse_soc tegra194_fuse_soc;
extern const struct attribute_group tegra194_soc_attr_group;
#endif
#endif


@ -37,6 +37,16 @@ u8 tegra_get_chip_id(void)
return (tegra_read_chipid() >> 8) & 0xff;
}
u8 tegra_get_major_rev(void)
{
return (tegra_read_chipid() >> 4) & 0xf;
}
u8 tegra_get_minor_rev(void)
{
return (tegra_read_chipid() >> 16) & 0xf;
}
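Together with tegra_get_chip_id() above, the HIDREV word is carved up as chip ID in bits [15:8], major revision in bits [7:4], minor revision in bits [19:16], and, on Tegra194, the pre_si_platform field in bits [23:20] read by the sysfs attribute earlier in this series. A small standalone decode of a made-up value, for illustration only:

/* Illustrative decode of a hypothetical HIDREV value, not real hardware data. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t hidrev = 0x00021413;	/* made-up value for illustration */

	printf("chip id: 0x%02x\n", (hidrev >> 8) & 0xff);	/* 0x14 */
	printf("major:   %u\n", (hidrev >> 4) & 0xf);		/* 1 */
	printf("minor:   %u\n", (hidrev >> 16) & 0xf);		/* 2 */
	return 0;
}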
u32 tegra_read_straps(void)
{
WARN(!chipid, "Tegra ABP MISC not yet available\n");
@ -65,36 +75,32 @@ static const struct of_device_id apbmisc_match[] __initconst = {
void __init tegra_init_revision(void)
{
u32 id, chip_id, minor_rev;
int rev;
u8 chip_id, minor_rev;
id = tegra_read_chipid();
chip_id = (id >> 8) & 0xff;
minor_rev = (id >> 16) & 0xf;
chip_id = tegra_get_chip_id();
minor_rev = tegra_get_minor_rev();
switch (minor_rev) {
case 1:
rev = TEGRA_REVISION_A01;
tegra_sku_info.revision = TEGRA_REVISION_A01;
break;
case 2:
rev = TEGRA_REVISION_A02;
tegra_sku_info.revision = TEGRA_REVISION_A02;
break;
case 3:
if (chip_id == TEGRA20 && (tegra_fuse_read_spare(18) ||
tegra_fuse_read_spare(19)))
rev = TEGRA_REVISION_A03p;
tegra_sku_info.revision = TEGRA_REVISION_A03p;
else
rev = TEGRA_REVISION_A03;
tegra_sku_info.revision = TEGRA_REVISION_A03;
break;
case 4:
rev = TEGRA_REVISION_A04;
tegra_sku_info.revision = TEGRA_REVISION_A04;
break;
default:
rev = TEGRA_REVISION_UNKNOWN;
tegra_sku_info.revision = TEGRA_REVISION_UNKNOWN;
}
tegra_sku_info.revision = rev;
tegra_sku_info.sku_id = tegra_fuse_read_early(FUSE_SKU_INFO);
}

Some files were not shown because too many files have changed in this diff.