Power management updates for 5.4-rc1

Merge tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These include a rework of the main suspend-to-idle code flow (related
  to the handling of spurious wakeups), a switch over of several users
  of cpufreq notifiers to QoS-based limits, a new devfreq driver for
  Tegra20, a new cpuidle driver and governor for virtualized guests, an
  extension of the wakeup sources framework to expose wakeup sources as
  device objects in sysfs, and more.

  Specifics:

   - Rework the main suspend-to-idle control flow to avoid repeating
     "noirq" device resume and suspend operations in case of spurious
     wakeups from the ACPI EC and decouple the ACPI EC wakeups support
     from the LPS0 _DSM support (Rafael Wysocki).

   - Extend the wakeup sources framework to expose wakeup sources as
     device objects in sysfs (Tri Vo, Stephen Boyd).

   - Expose system suspend statistics in sysfs (Kalesh Singh).

   - Introduce a new haltpoll cpuidle driver and a new matching governor
     for virtualized guests wanting to do guest-side polling in the idle
     loop (Marcelo Tosatti, Joao Martins, Wanpeng Li, Stephen Rothwell).

   - Fix the menu and teo cpuidle governors to allow the scheduler tick
     to be stopped if PM QoS is used to limit the CPU idle state exit
     latency in some cases (Rafael Wysocki).

   - Increase the resolution of the play_idle() argument to microseconds
     for more fine-grained injection of CPU idle cycles (Daniel
     Lezcano).

    - Switch over some users of cpufreq notifiers to the new QoS-based
     frequency limits and drop the CPUFREQ_ADJUST and CPUFREQ_NOTIFY
     policy notifier events (Viresh Kumar).

   - Add new cpufreq driver based on nvmem for sun50i (Yangtao Li).

   - Add support for MT8183 and MT8516 to the mediatek cpufreq driver
     (Andrew-sh.Cheng, Fabien Parent).

   - Add i.MX8MN support to the imx-cpufreq-dt cpufreq driver (Anson
     Huang).

   - Add qcs404 to cpufreq-dt-platdev blacklist (Jorge Ramirez-Ortiz).

   - Update the qcom cpufreq driver (among other things, to make it
     easier to extend and to use kryo cpufreq for other nvmem-based
     SoCs) and add qcs404 support to it (Niklas Cassel, Douglas
     RAILLARD, Sibi Sankar, Sricharan R).

   - Fix assorted issues and make assorted minor improvements in the
     cpufreq code (Colin Ian King, Douglas RAILLARD, Florian Fainelli,
     Gustavo Silva, Hariprasad Kelam).

    - Add new devfreq driver for NVIDIA Tegra20 (Dmitry Osipenko, Arnd
     Bergmann).

   - Add new Exynos PPMU events to devfreq events and extend that
     mechanism (Lukasz Luba).

   - Fix and clean up the exynos-bus devfreq driver (Kamil Konieczny).

   - Improve devfreq documentation and governor code, fix spelling typos
     in devfreq (Ezequiel Garcia, Krzysztof Kozlowski, Leonard Crestez,
     MyungJoo Ham, Gaël PORTAY).

   - Add regulators enable and disable to the OPP (operating performance
     points) framework (Kamil Konieczny).

   - Update the OPP framework to support multiple opp-suspend properties
     (Anson Huang).

   - Fix assorted issues and make assorted minor improvements in the OPP
     code (Niklas Cassel, Viresh Kumar, Yue Hu).

   - Clean up the generic power domains (genpd) framework (Ulf Hansson).

   - Clean up assorted pieces of power management code and documentation
     (Akinobu Mita, Amit Kucheria, Chuhong Yuan).

   - Update the pm-graph tool to version 5.5 including multiple fixes
     and improvements (Todd Brandt).

   - Update the cpupower utility (Benjamin Weis, Geert Uytterhoeven,
     Sébastien Szymanski)"

* tag 'pm-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (126 commits)
  cpuidle-haltpoll: Enable kvm guest polling when dedicated physical CPUs are available
  cpuidle-haltpoll: do not set an owner to allow modunload
  cpuidle-haltpoll: return -ENODEV on modinit failure
  cpuidle-haltpoll: set haltpoll as preferred governor
  cpuidle: allow governor switch on cpuidle_register_driver()
  PM: runtime: Documentation: add runtime_status ABI document
  pm-graph: make setVal unbuffered again for python2 and python3
  powercap: idle_inject: Use higher resolution for idle injection
  cpuidle: play_idle: Increase the resolution to usec
  cpuidle-haltpoll: vcpu hotplug support
  cpufreq: Add qcs404 to cpufreq-dt-platdev blacklist
  cpufreq: qcom: Add support for qcs404 on nvmem driver
  cpufreq: qcom: Refactor the driver to make it easier to extend
  cpufreq: qcom: Re-organise kryo cpufreq to use it for other nvmem based qcom socs
  dt-bindings: opp: Add qcom-opp bindings with properties needed for CPR
  dt-bindings: opp: qcom-nvmem: Support pstates provided by a power domain
  Documentation: cpufreq: Update policy notifier documentation
  cpufreq: Remove CPUFREQ_ADJUST and CPUFREQ_NOTIFY policy notifier events
  PM / Domains: Verify PM domain type in dev_pm_genpd_set_performance_state()
  PM / Domains: Simplify genpd_lookup_dev()
  ...
Linus Torvalds 2019-09-17 19:15:14 -07:00
commit 77dcfe2b9e
118 changed files with 4055 additions and 1927 deletions


@ -0,0 +1,76 @@
What:           /sys/class/wakeup/
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                The /sys/class/wakeup/ directory contains pointers to all
                wakeup sources in the kernel at that moment in time.

What:           /sys/class/wakeup/.../name
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the name of the wakeup source.

What:           /sys/class/wakeup/.../active_count
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the number of times the wakeup source was
                activated.

What:           /sys/class/wakeup/.../event_count
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the number of signaled wakeup events
                associated with the wakeup source.

What:           /sys/class/wakeup/.../wakeup_count
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the number of times the wakeup source might
                abort suspend.

What:           /sys/class/wakeup/.../expire_count
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the number of times the wakeup source's
                timeout has expired.

What:           /sys/class/wakeup/.../active_time_ms
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the amount of time the wakeup source has
                been continuously active, in milliseconds. If the wakeup
                source is not active, this file contains '0'.

What:           /sys/class/wakeup/.../total_time_ms
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the total amount of time this wakeup source
                has been active, in milliseconds.

What:           /sys/class/wakeup/.../max_time_ms
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the maximum amount of time this wakeup
                source has been continuously active, in milliseconds.

What:           /sys/class/wakeup/.../last_change_ms
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                This file contains the monotonic clock time when the wakeup
                source was touched last time, in milliseconds.

What:           /sys/class/wakeup/.../prevent_suspend_time_ms
Date:           June 2019
Contact:        Tri Vo <trong@android.com>
Description:
                The file contains the total amount of time this wakeup source
                has been preventing autosleep, in milliseconds.
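
The class interface above is straightforward to consume from user space. The
following is a minimal C sketch, not part of this series: it walks
/sys/class/wakeup and prints each source's name and activation count. Only the
directory and attribute names come from the ABI entries above; everything else
is illustrative.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Read one sysfs attribute of a wakeup source into buf. */
static int read_attr(const char *dir, const char *attr, char *buf, int len)
{
	char path[512];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/wakeup/%s/%s", dir, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f))
		buf[0] = '\0';
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';	/* strip trailing newline */
	return 0;
}

int main(void)
{
	DIR *d = opendir("/sys/class/wakeup");
	struct dirent *de;
	char name[128], count[32];

	if (!d)
		return 1;
	while ((de = readdir(d))) {
		if (de->d_name[0] == '.')
			continue;
		if (read_attr(de->d_name, "name", name, sizeof(name)) ||
		    read_attr(de->d_name, "active_count", count, sizeof(count)))
			continue;
		printf("%-32s active_count=%s\n", name, count);
	}
	closedir(d);
	return 0;
}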


@ -260,3 +260,12 @@ Description:
This attribute has no effect on system-wide suspend/resume and
hibernation.

What:           /sys/devices/.../power/runtime_status
Date:           April 2010
Contact:        Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
                The /sys/devices/.../power/runtime_status attribute contains
                the current runtime PM status of the device, which may be
                "suspended", "suspending", "resuming", "active", "error" (fatal
                error), or "unsupported" (runtime PM is disabled).


@ -301,3 +301,109 @@ Description:
Using this sysfs file will override any values that were
set using the kernel command line for disk offset.

What:           /sys/power/suspend_stats
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats directory contains suspend related
                statistics.

What:           /sys/power/suspend_stats/success
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/success file contains the number
                of times entering system sleep state succeeded.

What:           /sys/power/suspend_stats/fail
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/fail file contains the number
                of times entering system sleep state failed.

What:           /sys/power/suspend_stats/failed_freeze
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_freeze file contains the
                number of times freezing processes failed.

What:           /sys/power/suspend_stats/failed_prepare
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_prepare file contains the
                number of times preparing all non-sysdev devices for
                a system PM transition failed.

What:           /sys/power/suspend_stats/failed_resume
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_resume file contains the
                number of times executing "resume" callbacks of
                non-sysdev devices failed.

What:           /sys/power/suspend_stats/failed_resume_early
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_resume_early file contains
                the number of times executing "early resume" callbacks
                of devices failed.

What:           /sys/power/suspend_stats/failed_resume_noirq
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_resume_noirq file contains
                the number of times executing "noirq resume" callbacks
                of devices failed.

What:           /sys/power/suspend_stats/failed_suspend
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_suspend file contains
                the number of times executing "suspend" callbacks
                of all non-sysdev devices failed.

What:           /sys/power/suspend_stats/failed_suspend_late
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_suspend_late file contains
                the number of times executing "late suspend" callbacks
                of all devices failed.

What:           /sys/power/suspend_stats/failed_suspend_noirq
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/failed_suspend_noirq file contains
                the number of times executing "noirq suspend" callbacks
                of all devices failed.

What:           /sys/power/suspend_stats/last_failed_dev
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/last_failed_dev file contains
                the last device for which a suspend/resume callback failed.

What:           /sys/power/suspend_stats/last_failed_errno
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/last_failed_errno file contains
                the errno of the last failed attempt at entering
                system sleep state.

What:           /sys/power/suspend_stats/last_failed_step
Date:           July 2019
Contact:        Kalesh Singh <kaleshsingh96@gmail.com>
Description:
                The /sys/power/suspend_stats/last_failed_step file contains
                the last failed step in the suspend/resume path.
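
The same pattern works for the new suspend statistics. A short illustrative
C reader, again not from the series; only the file names are taken from the
entries above:

#include <stdio.h>

/* Print one attribute from /sys/power/suspend_stats (the value keeps
 * the trailing newline it was read with from sysfs). */
static void show(const char *attr)
{
	char path[256], val[64] = "unavailable\n";
	FILE *f;

	snprintf(path, sizeof(path), "/sys/power/suspend_stats/%s", attr);
	f = fopen(path, "r");
	if (f) {
		if (!fgets(val, sizeof(val), f))
			val[0] = '\0';
		fclose(f);
	}
	printf("%s: %s", attr, val);
}

int main(void)
{
	show("success");
	show("fail");
	show("last_failed_dev");
	return 0;
}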


@ -57,19 +57,11 @@ transition notifiers.
2.1 CPUFreq policy notifiers
----------------------------
These are notified when a new policy is intended to be set. Each
CPUFreq policy notifier is called twice for a policy transition:
These are notified when a new policy is created or removed.
1.) During CPUFREQ_ADJUST all CPUFreq notifiers may change the limit if
they see a need for this - may it be thermal considerations or
hardware limitations.
2.) And during CPUFREQ_NOTIFY all notifiers are informed of the new policy
- if two hardware drivers failed to agree on a new policy before this
stage, the incompatible hardware shall be shut down, and the user
informed of this.
The phase is specified in the second argument to the notifier.
The phase is specified in the second argument to the notifier. The phase is
CPUFREQ_CREATE_POLICY when the policy is first created and it is
CPUFREQ_REMOVE_POLICY when the policy is removed.
The third argument, a void *pointer, points to a struct cpufreq_policy
consisting of several values, including min, max (the lower and upper
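
To make the new two-event scheme concrete, here is a hedged sketch of a
policy notifier written as a module; my_per_policy_init()/my_per_policy_exit()
are hypothetical stand-ins for driver-specific work (the
acpi_processor_notifier added later in this series follows the same shape):

#include <linux/cpufreq.h>
#include <linux/module.h>

static void my_per_policy_init(unsigned int cpu) { /* driver-specific */ }
static void my_per_policy_exit(unsigned int cpu) { /* driver-specific */ }

/* Called only for CPUFREQ_CREATE_POLICY and CPUFREQ_REMOVE_POLICY now;
 * the CPUFREQ_ADJUST/CPUFREQ_NOTIFY phases no longer exist. */
static int my_policy_notifier(struct notifier_block *nb,
			      unsigned long event, void *data)
{
	struct cpufreq_policy *policy = data;

	if (event == CPUFREQ_CREATE_POLICY)
		my_per_policy_init(policy->cpu);
	else if (event == CPUFREQ_REMOVE_POLICY)
		my_per_policy_exit(policy->cpu);

	return 0;
}

static struct notifier_block my_nb = {
	.notifier_call = my_policy_notifier,
};

static int __init my_init(void)
{
	return cpufreq_register_notifier(&my_nb, CPUFREQ_POLICY_NOTIFIER);
}
module_init(my_init);
MODULE_LICENSE("GPL");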


@ -140,8 +140,8 @@ Optional properties:
frequency for a short duration of time limited by the device's power, current
and thermal limits.
- opp-suspend: Marks the OPP to be used during device suspend. Only one OPP in
the table should have this.
- opp-suspend: Marks the OPP to be used during device suspend. If multiple OPPs
in the table have this, the OPP with highest opp-hz will be used.
- opp-supported-hw: This enables us to select only a subset of OPPs from the
larger OPP table, based on what version of the hardware we are running on. We
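
Returning to the opp-suspend relaxation above: consumers need not resolve the
multiple-OPP case themselves. A minimal sketch, assuming the existing
dev_pm_opp_get_suspend_opp_freq() helper:

#include <linux/pm_opp.h>

/* Sketch only: ask the OPP core for the suspend rate. When several OPPs
 * carry opp-suspend, the core picks the one with the highest opp-hz;
 * a return value of 0 means no OPP in the table was marked. */
static unsigned long my_suspend_rate(struct device *dev)
{
	return dev_pm_opp_get_suspend_opp_freq(dev);
}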


@ -1,25 +1,38 @@
Qualcomm Technologies, Inc. KRYO CPUFreq and OPP bindings
Qualcomm Technologies, Inc. NVMEM CPUFreq and OPP bindings
===================================
In Certain Qualcomm Technologies, Inc. SoCs like apq8096 and msm8996
that have KRYO processors, the CPU ferequencies subset and voltage value
of each OPP varies based on the silicon variant in use.
In Certain Qualcomm Technologies, Inc. SoCs like apq8096 and msm8996,
the CPU frequencies subset and voltage value of each OPP varies based on
the silicon variant in use.
Qualcomm Technologies, Inc. Process Voltage Scaling Tables
defines the voltage and frequency value based on the msm-id in SMEM
and speedbin blown in the efuse combination.
The qcom-cpufreq-kryo driver reads the msm-id and efuse value from the SoC
The qcom-cpufreq-nvmem driver reads the msm-id and efuse value from the SoC
to provide the OPP framework with required information (existing HW bitmap).
This is used to determine the voltage and frequency value for each OPP of
operating-points-v2 table when it is parsed by the OPP framework.
Required properties:
--------------------
In 'cpus' nodes:
In 'cpu' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.
In 'operating-points-v2' table:
- compatible: Should be
- 'operating-points-v2-kryo-cpu' for apq8096 and msm8996.
Optional properties:
--------------------
In 'cpu' nodes:
- power-domains: A phandle pointing to the PM domain specifier which provides
the performance states available for active state management.
Please refer to the power-domains bindings
Documentation/devicetree/bindings/power/power_domain.txt
and also examples below.
- power-domain-names: Should be
- 'cpr' for qcs404.
In 'operating-points-v2' table:
- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
efuse registers that have information about the
speedbin that is used to select the right frequency/voltage
@ -678,3 +691,105 @@ soc {
};
};
};
Example 2:
---------
cpus {
#address-cells = <1>;
#size-cells = <0>;
CPU0: cpu@100 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x100>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU1: cpu@101 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x101>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU2: cpu@102 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x102>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
CPU3: cpu@103 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <0x103>;
....
clocks = <&apcs_glb>;
operating-points-v2 = <&cpu_opp_table>;
power-domains = <&cpr>;
power-domain-names = "cpr";
};
};
cpu_opp_table: cpu-opp-table {
compatible = "operating-points-v2-kryo-cpu";
opp-shared;
opp-1094400000 {
opp-hz = /bits/ 64 <1094400000>;
required-opps = <&cpr_opp1>;
};
opp-1248000000 {
opp-hz = /bits/ 64 <1248000000>;
required-opps = <&cpr_opp2>;
};
opp-1401600000 {
opp-hz = /bits/ 64 <1401600000>;
required-opps = <&cpr_opp3>;
};
};
cpr_opp_table: cpr-opp-table {
compatible = "operating-points-v2-qcom-level";
cpr_opp1: opp1 {
opp-level = <1>;
qcom,opp-fuse-level = <1>;
};
cpr_opp2: opp2 {
opp-level = <2>;
qcom,opp-fuse-level = <2>;
};
cpr_opp3: opp3 {
opp-level = <3>;
qcom,opp-fuse-level = <3>;
};
};
....
soc {
....
cpr: power-controller@b018000 {
compatible = "qcom,qcs404-cpr", "qcom,cpr";
reg = <0x0b018000 0x1000>;
....
vdd-apc-supply = <&pms405_s3>;
#power-domain-cells = <0>;
operating-points-v2 = <&cpr_opp_table>;
....
};
};


@ -0,0 +1,19 @@
Qualcomm OPP bindings to describe OPP nodes
The bindings are based on top of the operating-points-v2 bindings
described in Documentation/devicetree/bindings/opp/opp.txt
Additional properties are described below.
* OPP Table Node
Required properties:
- compatible: Allow OPPs to express their compatibility. It should be:
"operating-points-v2-qcom-level"
* OPP Node
Required properties:
- qcom,opp-fuse-level: A positive value representing the fuse corner/level
associated with this OPP node. Sometimes several corners/levels share
a certain fuse corner/level. A fuse corner/level contains e.g. ref uV,
min uV, and max uV.


@ -0,0 +1,167 @@
Allwinner Technologies, Inc. NVMEM CPUFreq and OPP bindings
===================================
For some SoCs, the CPU frequency subset and voltage value of each OPP
varies based on the silicon variant in use. Allwinner Process Voltage
Scaling Tables defines the voltage and frequency value based on the
speedbin blown in the efuse combination. The sun50i-cpufreq-nvmem driver
reads the efuse value from the SoC to provide the OPP framework with
required information.
Required properties:
--------------------
In 'cpus' nodes:
- operating-points-v2: Phandle to the operating-points-v2 table to use.
In 'operating-points-v2' table:
- compatible: Should be
- 'allwinner,sun50i-h6-operating-points'.
- nvmem-cells: A phandle pointing to a nvmem-cells node representing the
efuse registers that have information about the speedbin
that is used to select the right frequency/voltage value
pair. Please refer to the nvmem-cells bindings in
Documentation/devicetree/bindings/nvmem/nvmem.txt and
also examples below.
In every OPP node:
- opp-microvolt-<name>: Voltage in micro Volts.
At runtime, the platform can pick a <name> and
matching opp-microvolt-<name> property.
[See: opp.txt]
HW: <name>:
sun50i-h6 speed0 speed1 speed2
Example 1:
---------
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu0: cpu@0 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <0>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu1: cpu@1 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <1>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu2: cpu@2 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <2>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
cpu3: cpu@3 {
compatible = "arm,cortex-a53";
device_type = "cpu";
reg = <3>;
enable-method = "psci";
clocks = <&ccu CLK_CPUX>;
clock-latency-ns = <244144>; /* 8 32k periods */
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
};
};
cpu_opp_table: opp_table {
compatible = "allwinner,sun50i-h6-operating-points";
nvmem-cells = <&speedbin_efuse>;
opp-shared;
opp@480000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <480000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@720000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <720000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@816000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <816000000>;
opp-microvolt-speed0 = <880000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@888000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <888000000>;
opp-microvolt-speed0 = <940000>;
opp-microvolt-speed1 = <820000>;
opp-microvolt-speed2 = <800000>;
};
opp@1080000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1080000000>;
opp-microvolt-speed0 = <1060000>;
opp-microvolt-speed1 = <880000>;
opp-microvolt-speed2 = <840000>;
};
opp@1320000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1320000000>;
opp-microvolt-speed0 = <1160000>;
opp-microvolt-speed1 = <940000>;
opp-microvolt-speed2 = <900000>;
};
opp@1488000000 {
clock-latency-ns = <244144>; /* 8 32k periods */
opp-hz = /bits/ 64 <1488000000>;
opp-microvolt-speed0 = <1160000>;
opp-microvolt-speed1 = <1000000>;
opp-microvolt-speed2 = <960000>;
};
};
....
soc {
....
sid: sid@3006000 {
compatible = "allwinner,sun50i-h6-sid";
reg = <0x03006000 0x400>;
#address-cells = <1>;
#size-cells = <1>;
....
speedbin_efuse: speed@1c {
reg = <0x1c 4>;
};
};
};


@ -46,7 +46,7 @@ We can represent these as three OPPs as the following {Hz, uV} tuples:
----------------------------------------
OPP library provides a set of helper functions to organize and query the OPP
information. The library is located in drivers/base/power/opp.c and the header
information. The library is located in drivers/opp/ directory and the header
is located in include/linux/pm_opp.h. OPP library can be enabled by enabling
CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to


@ -7,8 +7,7 @@ performance expectations by drivers, subsystems and user space applications on
one of the parameters.
Two different PM QoS frameworks are available:
1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput,
memory_bandwidth.
1. PM QoS classes for cpu_dma_latency
2. the per-device PM QoS framework provides the API to manage the per-device latency
constraints and PM QoS flags.
@ -79,7 +78,7 @@ cleanup of a process, the interface requires the process to register its
parameter requests in the following way:
To register the default pm_qos target for the specific parameter, the process
must open one of /dev/[cpu_dma_latency, network_latency, network_throughput]
must open /dev/cpu_dma_latency
As long as the device node is held open that process has a registered
request on the parameter.
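
A user-space sketch of that registration: open /dev/cpu_dma_latency, write a
signed 32-bit latency bound (in microseconds), and keep the file descriptor
open for as long as the constraint should hold; closing it drops the request.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
	int32_t target_us = 10;	/* requested max C-state exit latency */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0)
		return 1;
	if (write(fd, &target_us, sizeof(target_us)) != sizeof(target_us))
		return 1;
	pause();	/* request stays active while the fd is open */
	close(fd);
	return 0;
}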


@ -0,0 +1,78 @@
Guest halt polling
==================
The cpuidle_haltpoll driver, with the haltpoll governor, allows
the guest vcpus to poll for a specified amount of time before
halting.
This provides the following benefits over host side polling:
1) The POLL flag is set while polling is performed, which allows
a remote vCPU to avoid sending an IPI (and the associated
cost of handling the IPI) when performing a wakeup.
2) The VM-exit cost can be avoided.
The downside of guest side polling is that polling is performed
even with other runnable tasks in the host.
The basic logic is as follows: A global value, guest_halt_poll_ns,
is configured by the user, indicating the maximum amount of
time polling is allowed. This value is fixed.
Each vcpu has an adjustable guest_halt_poll_ns
("per-cpu guest_halt_poll_ns"), which is adjusted by the algorithm
in response to events (explained below).
Module Parameters
=================
The haltpoll governor has 5 tunable module parameters:
1) guest_halt_poll_ns:
Maximum amount of time, in nanoseconds, that polling is
performed before halting.
Default: 200000
2) guest_halt_poll_shrink:
Division factor used to shrink per-cpu guest_halt_poll_ns when
wakeup event occurs after the global guest_halt_poll_ns.
Default: 2
3) guest_halt_poll_grow:
Multiplication factor used to grow per-cpu guest_halt_poll_ns
when event occurs after per-cpu guest_halt_poll_ns
but before global guest_halt_poll_ns.
Default: 2
4) guest_halt_poll_grow_start:
The per-cpu guest_halt_poll_ns eventually reaches zero
in case of an idle system. This value sets the initial
per-cpu guest_halt_poll_ns when growing. This can
be increased from 10000, to avoid misses during the initial
growth stage:
10k, 20k, 40k, ... (example assumes guest_halt_poll_grow=2).
Default: 50000
5) guest_halt_poll_allow_shrink:
Bool parameter which allows shrinking. Set to N
to avoid it (per-cpu guest_halt_poll_ns will remain
high once it reaches the global guest_halt_poll_ns value).
Default: Y
The module parameters can be set from the sysfs files in:
/sys/module/haltpoll/parameters/
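
As a rough illustration of how these parameters interact, here is a condensed
C sketch of the grow/shrink policy described above. It is not the kernel
implementation; the guest_halt_poll_* variables merely stand in for the
module parameters:

/* Assumed defaults, mirroring the parameter list above. */
static unsigned int guest_halt_poll_ns = 200000;
static unsigned int guest_halt_poll_shrink = 2;
static unsigned int guest_halt_poll_grow = 2;
static unsigned int guest_halt_poll_grow_start = 50000;
static int guest_halt_poll_allow_shrink = 1;

/* Adjust one vcpu's poll window after a wakeup that arrived
 * wakeup_delay_ns after the vcpu went idle. */
static unsigned int adjust_poll_ns(unsigned int poll_ns,
				   unsigned int wakeup_delay_ns)
{
	if (wakeup_delay_ns > guest_halt_poll_ns) {
		/* Event came after the global limit: polling was wasted. */
		if (guest_halt_poll_allow_shrink)
			poll_ns /= guest_halt_poll_shrink;
	} else if (wakeup_delay_ns > poll_ns) {
		/* Missed the event while polling too briefly: grow. */
		if (poll_ns == 0)
			poll_ns = guest_halt_poll_grow_start;
		else
			poll_ns *= guest_halt_poll_grow;
		if (poll_ns > guest_halt_poll_ns)
			poll_ns = guest_halt_poll_ns;
	}
	return poll_ns;
}
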
Further Notes
=============
- Care should be taken when setting the guest_halt_poll_ns parameter as a
large value has the potential to drive the cpu usage to 100% on a machine which
would be almost entirely idle otherwise.


@ -668,6 +668,13 @@ L: linux-media@vger.kernel.org
S: Maintained
F: drivers/staging/media/allegro-dvt/
ALLWINNER CPUFREQ DRIVER
M: Yangtao Li <tiny.windzz@gmail.com>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/sun50i-nvmem-cpufreq.txt
F: drivers/cpufreq/sun50i-cpufreq-nvmem.c
ALLWINNER SECURITY SYSTEM
M: Corentin Labbe <clabbe.montjoie@gmail.com>
L: linux-crypto@vger.kernel.org
@ -13311,8 +13318,8 @@ QUALCOMM CPUFREQ DRIVER MSM8996/APQ8096
M: Ilia Lin <ilia.lin@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt
F: drivers/cpufreq/qcom-cpufreq-kryo.c
F: Documentation/devicetree/bindings/opp/qcom-nvmem-cpufreq.txt
F: drivers/cpufreq/qcom-cpufreq-nvmem.c
QUALCOMM EMAC GIGABIT ETHERNET DRIVER
M: Timur Tabi <timur@kernel.org>


@ -794,6 +794,7 @@ config KVM_GUEST
bool "KVM Guest support (including kvmclock)"
depends on PARAVIRT
select PARAVIRT_CLOCK
select ARCH_CPUIDLE_HALTPOLL
default y
---help---
This option enables various optimizations for running under the KVM
@ -802,6 +803,12 @@ config KVM_GUEST
underlying device model, the host provides the guest with
timing infrastructure such as time of day, and system time
config ARCH_CPUIDLE_HALTPOLL
def_bool n
prompt "Disable host haltpoll when loading haltpoll driver"
help
If virtualized under KVM, disable host haltpoll.
config PVH
bool "Support for running PVH guests"
---help---


@ -0,0 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ARCH_HALTPOLL_H
#define _ARCH_HALTPOLL_H
void arch_haltpoll_enable(unsigned int cpu);
void arch_haltpoll_disable(unsigned int cpu);
#endif


@ -705,6 +705,7 @@ unsigned int kvm_arch_para_hints(void)
{
return cpuid_edx(kvm_cpuid_base() | KVM_CPUID_FEATURES);
}
EXPORT_SYMBOL_GPL(kvm_arch_para_hints);
static uint32_t __init kvm_detect(void)
{
@ -867,3 +868,39 @@ void __init kvm_spinlock_init(void)
}
#endif /* CONFIG_PARAVIRT_SPINLOCKS */
#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
static void kvm_disable_host_haltpoll(void *i)
{
wrmsrl(MSR_KVM_POLL_CONTROL, 0);
}
static void kvm_enable_host_haltpoll(void *i)
{
wrmsrl(MSR_KVM_POLL_CONTROL, 1);
}
void arch_haltpoll_enable(unsigned int cpu)
{
if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL)) {
pr_err_once("kvm: host does not support poll control\n");
pr_err_once("kvm: host upgrade recommended\n");
return;
}
/* Enable guest halt poll disables host halt poll */
smp_call_function_single(cpu, kvm_disable_host_haltpoll, NULL, 1);
}
EXPORT_SYMBOL_GPL(arch_haltpoll_enable);
void arch_haltpoll_disable(unsigned int cpu)
{
if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL))
return;
/* Enable guest halt poll disables host halt poll */
smp_call_function_single(cpu, kvm_enable_host_haltpoll, NULL, 1);
}
EXPORT_SYMBOL_GPL(arch_haltpoll_disable);
#endif


@ -580,7 +580,7 @@ void __cpuidle default_idle(void)
safe_halt();
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
}
#ifdef CONFIG_APM_MODULE
#if defined(CONFIG_APM_MODULE) || defined(CONFIG_HALTPOLL_CPUIDLE_MODULE)
EXPORT_SYMBOL(default_idle);
#endif


@ -644,17 +644,17 @@ ACPI_EXPORT_SYMBOL(acpi_get_gpe_status)
* PARAMETERS: gpe_device - Parent GPE Device. NULL for GPE0/GPE1
* gpe_number - GPE level within the GPE block
*
* RETURN: None
* RETURN: INTERRUPT_HANDLED or INTERRUPT_NOT_HANDLED
*
* DESCRIPTION: Detect and dispatch a General Purpose Event to either a function
* (e.g. EC) or method (e.g. _Lxx/_Exx) handler.
*
******************************************************************************/
void acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number)
u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number)
{
ACPI_FUNCTION_TRACE(acpi_dispatch_gpe);
acpi_ev_detect_gpe(gpe_device, NULL, gpe_number);
return acpi_ev_detect_gpe(gpe_device, NULL, gpe_number);
}
ACPI_EXPORT_SYMBOL(acpi_dispatch_gpe)


@ -166,6 +166,10 @@ int acpi_device_set_power(struct acpi_device *device, int state)
|| (state < ACPI_STATE_D0) || (state > ACPI_STATE_D3_COLD))
return -EINVAL;
acpi_handle_debug(device->handle, "Power state change: %s -> %s\n",
acpi_power_state_string(device->power.state),
acpi_power_state_string(state));
/* Make sure this is a valid target state */
/* There is a special case for D0 addressed below. */
@ -497,7 +501,8 @@ acpi_status acpi_add_pm_notifier(struct acpi_device *adev, struct device *dev,
goto out;
mutex_lock(&acpi_pm_notifier_lock);
adev->wakeup.ws = wakeup_source_register(dev_name(&adev->dev));
adev->wakeup.ws = wakeup_source_register(&adev->dev,
dev_name(&adev->dev));
adev->wakeup.context.dev = dev;
adev->wakeup.context.func = func;
adev->wakeup.flags.notifier_present = true;


@ -25,6 +25,7 @@
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/suspend.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <asm/io.h>
@ -1048,24 +1049,6 @@ void acpi_ec_unblock_transactions(void)
acpi_ec_start(first_ec, true);
}
void acpi_ec_mark_gpe_for_wake(void)
{
if (first_ec && !ec_no_wakeup)
acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
}
void acpi_ec_set_gpe_wake_mask(u8 action)
{
if (first_ec && !ec_no_wakeup)
acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}
void acpi_ec_dispatch_gpe(void)
{
if (first_ec)
acpi_dispatch_gpe(NULL, first_ec->gpe);
}
/* --------------------------------------------------------------------------
Event Management
-------------------------------------------------------------------------- */
@ -1931,7 +1914,7 @@ static int acpi_ec_suspend(struct device *dev)
struct acpi_ec *ec =
acpi_driver_data(to_acpi_device(dev));
if (acpi_sleep_no_ec_events() && ec_freeze_events)
if (!pm_suspend_no_platform() && ec_freeze_events)
acpi_ec_disable_event(ec);
return 0;
}
@ -1948,8 +1931,7 @@ static int acpi_ec_suspend_noirq(struct device *dev)
ec->reference_count >= 1)
acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);
if (acpi_sleep_no_ec_events())
acpi_ec_enter_noirq(ec);
acpi_ec_enter_noirq(ec);
return 0;
}
@ -1958,8 +1940,7 @@ static int acpi_ec_resume_noirq(struct device *dev)
{
struct acpi_ec *ec = acpi_driver_data(to_acpi_device(dev));
if (acpi_sleep_no_ec_events())
acpi_ec_leave_noirq(ec);
acpi_ec_leave_noirq(ec);
if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
ec->reference_count >= 1)
@ -1976,7 +1957,35 @@ static int acpi_ec_resume(struct device *dev)
acpi_ec_enable_event(ec);
return 0;
}
#endif
void acpi_ec_mark_gpe_for_wake(void)
{
if (first_ec && !ec_no_wakeup)
acpi_mark_gpe_for_wake(NULL, first_ec->gpe);
}
EXPORT_SYMBOL_GPL(acpi_ec_mark_gpe_for_wake);
void acpi_ec_set_gpe_wake_mask(u8 action)
{
if (pm_suspend_no_platform() && first_ec && !ec_no_wakeup)
acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
}
bool acpi_ec_dispatch_gpe(void)
{
u32 ret;
if (!first_ec)
return false;
ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
if (ret == ACPI_INTERRUPT_HANDLED) {
pm_pr_dbg("EC GPE dispatched\n");
return true;
}
return false;
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops acpi_ec_pm = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(acpi_ec_suspend_noirq, acpi_ec_resume_noirq)


@ -194,9 +194,6 @@ void acpi_ec_ecdt_probe(void);
void acpi_ec_dsdt_probe(void);
void acpi_ec_block_transactions(void);
void acpi_ec_unblock_transactions(void);
void acpi_ec_mark_gpe_for_wake(void);
void acpi_ec_set_gpe_wake_mask(u8 action);
void acpi_ec_dispatch_gpe(void);
int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
acpi_handle handle, acpi_ec_query_func func,
void *data);
@ -204,6 +201,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
#ifdef CONFIG_PM_SLEEP
void acpi_ec_flush_work(void);
bool acpi_ec_dispatch_gpe(void);
#endif
@ -212,11 +210,9 @@ void acpi_ec_flush_work(void);
-------------------------------------------------------------------------- */
#ifdef CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT
extern bool acpi_s2idle_wakeup(void);
extern bool acpi_sleep_no_ec_events(void);
extern int acpi_sleep_init(void);
#else
static inline bool acpi_s2idle_wakeup(void) { return false; }
static inline bool acpi_sleep_no_ec_events(void) { return true; }
static inline int acpi_sleep_init(void) { return -ENXIO; }
#endif


@ -284,6 +284,29 @@ static int acpi_processor_stop(struct device *dev)
return 0;
}
bool acpi_processor_cpufreq_init;
static int acpi_processor_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
int cpu = policy->cpu;
if (event == CPUFREQ_CREATE_POLICY) {
acpi_thermal_cpufreq_init(cpu);
acpi_processor_ppc_init(cpu);
} else if (event == CPUFREQ_REMOVE_POLICY) {
acpi_processor_ppc_exit(cpu);
acpi_thermal_cpufreq_exit(cpu);
}
return 0;
}
static struct notifier_block acpi_processor_notifier_block = {
.notifier_call = acpi_processor_notifier,
};
/*
* We keep the driver loaded even when ACPI is not running.
* This is needed for the powernow-k8 driver, that works even without
@ -310,8 +333,12 @@ static int __init acpi_processor_driver_init(void)
cpuhp_setup_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD, "acpi/cpu-drv:dead",
NULL, acpi_soft_cpu_dead);
acpi_thermal_cpufreq_init();
acpi_processor_ppc_init();
if (!cpufreq_register_notifier(&acpi_processor_notifier_block,
CPUFREQ_POLICY_NOTIFIER)) {
acpi_processor_cpufreq_init = true;
acpi_processor_ignore_ppc_init();
}
acpi_processor_throttling_init();
return 0;
err:
@ -324,8 +351,12 @@ static void __exit acpi_processor_driver_exit(void)
if (acpi_disabled)
return;
acpi_processor_ppc_exit();
acpi_thermal_cpufreq_exit();
if (acpi_processor_cpufreq_init) {
cpufreq_unregister_notifier(&acpi_processor_notifier_block,
CPUFREQ_POLICY_NOTIFIER);
acpi_processor_cpufreq_init = false;
}
cpuhp_remove_state_nocalls(hp_online);
cpuhp_remove_state_nocalls(CPUHP_ACPI_CPUDRV_DEAD);
driver_unregister(&acpi_processor_driver);


@ -50,57 +50,13 @@ module_param(ignore_ppc, int, 0644);
MODULE_PARM_DESC(ignore_ppc, "If the frequency of your machine gets wrongly" \
"limited by BIOS, this should help");
#define PPC_REGISTERED 1
#define PPC_IN_USE 2
static int acpi_processor_ppc_status;
static int acpi_processor_ppc_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
struct acpi_processor *pr;
unsigned int ppc = 0;
if (ignore_ppc < 0)
ignore_ppc = 0;
if (ignore_ppc)
return 0;
if (event != CPUFREQ_ADJUST)
return 0;
mutex_lock(&performance_mutex);
pr = per_cpu(processors, policy->cpu);
if (!pr || !pr->performance)
goto out;
ppc = (unsigned int)pr->performance_platform_limit;
if (ppc >= pr->performance->state_count)
goto out;
cpufreq_verify_within_limits(policy, 0,
pr->performance->states[ppc].
core_frequency * 1000);
out:
mutex_unlock(&performance_mutex);
return 0;
}
static struct notifier_block acpi_ppc_notifier_block = {
.notifier_call = acpi_processor_ppc_notifier,
};
static bool acpi_processor_ppc_in_use;
static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
{
acpi_status status = 0;
unsigned long long ppc = 0;
int ret;
if (!pr)
return -EINVAL;
@ -112,7 +68,7 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
status = acpi_evaluate_integer(pr->handle, "_PPC", NULL, &ppc);
if (status != AE_NOT_FOUND)
acpi_processor_ppc_status |= PPC_IN_USE;
acpi_processor_ppc_in_use = true;
if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) {
ACPI_EXCEPTION((AE_INFO, status, "Evaluating _PPC"));
@ -124,6 +80,17 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
pr->performance_platform_limit = (int)ppc;
if (ppc >= pr->performance->state_count ||
unlikely(!dev_pm_qos_request_active(&pr->perflib_req)))
return 0;
ret = dev_pm_qos_update_request(&pr->perflib_req,
pr->performance->states[ppc].core_frequency * 1000);
if (ret < 0) {
pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n",
pr->id, ret);
}
return 0;
}
@ -184,23 +151,32 @@ int acpi_processor_get_bios_limit(int cpu, unsigned int *limit)
}
EXPORT_SYMBOL(acpi_processor_get_bios_limit);
void acpi_processor_ppc_init(void)
void acpi_processor_ignore_ppc_init(void)
{
if (!cpufreq_register_notifier
(&acpi_ppc_notifier_block, CPUFREQ_POLICY_NOTIFIER))
acpi_processor_ppc_status |= PPC_REGISTERED;
else
printk(KERN_DEBUG
"Warning: Processor Platform Limit not supported.\n");
if (ignore_ppc < 0)
ignore_ppc = 0;
}
void acpi_processor_ppc_exit(void)
void acpi_processor_ppc_init(int cpu)
{
if (acpi_processor_ppc_status & PPC_REGISTERED)
cpufreq_unregister_notifier(&acpi_ppc_notifier_block,
CPUFREQ_POLICY_NOTIFIER);
struct acpi_processor *pr = per_cpu(processors, cpu);
int ret;
acpi_processor_ppc_status &= ~PPC_REGISTERED;
ret = dev_pm_qos_add_request(get_cpu_device(cpu),
&pr->perflib_req, DEV_PM_QOS_MAX_FREQUENCY,
INT_MAX);
if (ret < 0) {
pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
ret);
return;
}
}
void acpi_processor_ppc_exit(int cpu)
{
struct acpi_processor *pr = per_cpu(processors, cpu);
dev_pm_qos_remove_request(&pr->perflib_req);
}
static int acpi_processor_get_performance_control(struct acpi_processor *pr)
@ -477,7 +453,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
static int is_done = 0;
int result;
if (!(acpi_processor_ppc_status & PPC_REGISTERED))
if (!acpi_processor_cpufreq_init)
return -EBUSY;
if (!try_module_get(calling_module))
@ -513,7 +489,7 @@ int acpi_processor_notify_smm(struct module *calling_module)
* we can allow the cpufreq driver to be rmmod'ed. */
is_done = 1;
if (!(acpi_processor_ppc_status & PPC_IN_USE))
if (!acpi_processor_ppc_in_use)
module_put(calling_module);
return 0;
@ -742,7 +718,7 @@ acpi_processor_register_performance(struct acpi_processor_performance
{
struct acpi_processor *pr;
if (!(acpi_processor_ppc_status & PPC_REGISTERED))
if (!acpi_processor_cpufreq_init)
return -EINVAL;
mutex_lock(&performance_mutex);


@ -35,7 +35,6 @@ ACPI_MODULE_NAME("processor_thermal");
#define CPUFREQ_THERMAL_MAX_STEP 3
static DEFINE_PER_CPU(unsigned int, cpufreq_thermal_reduction_pctg);
static unsigned int acpi_thermal_cpufreq_is_init = 0;
#define reduction_pctg(cpu) \
per_cpu(cpufreq_thermal_reduction_pctg, phys_package_first_cpu(cpu))
@ -61,35 +60,11 @@ static int phys_package_first_cpu(int cpu)
static int cpu_has_cpufreq(unsigned int cpu)
{
struct cpufreq_policy policy;
if (!acpi_thermal_cpufreq_is_init || cpufreq_get_policy(&policy, cpu))
if (!acpi_processor_cpufreq_init || cpufreq_get_policy(&policy, cpu))
return 0;
return 1;
}
static int acpi_thermal_cpufreq_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
unsigned long max_freq = 0;
if (event != CPUFREQ_ADJUST)
goto out;
max_freq = (
policy->cpuinfo.max_freq *
(100 - reduction_pctg(policy->cpu) * 20)
) / 100;
cpufreq_verify_within_limits(policy, 0, max_freq);
out:
return 0;
}
static struct notifier_block acpi_thermal_cpufreq_notifier_block = {
.notifier_call = acpi_thermal_cpufreq_notifier,
};
static int cpufreq_get_max_state(unsigned int cpu)
{
if (!cpu_has_cpufreq(cpu))
@ -108,7 +83,10 @@ static int cpufreq_get_cur_state(unsigned int cpu)
static int cpufreq_set_cur_state(unsigned int cpu, int state)
{
int i;
struct cpufreq_policy *policy;
struct acpi_processor *pr;
unsigned long max_freq;
int i, ret;
if (!cpu_has_cpufreq(cpu))
return 0;
@ -121,33 +99,53 @@ static int cpufreq_set_cur_state(unsigned int cpu, int state)
* frequency.
*/
for_each_online_cpu(i) {
if (topology_physical_package_id(i) ==
if (topology_physical_package_id(i) !=
topology_physical_package_id(cpu))
cpufreq_update_policy(i);
continue;
pr = per_cpu(processors, i);
if (unlikely(!dev_pm_qos_request_active(&pr->thermal_req)))
continue;
policy = cpufreq_cpu_get(i);
if (!policy)
return -EINVAL;
max_freq = (policy->cpuinfo.max_freq * (100 - reduction_pctg(i) * 20)) / 100;
cpufreq_cpu_put(policy);
ret = dev_pm_qos_update_request(&pr->thermal_req, max_freq);
if (ret < 0) {
pr_warn("Failed to update thermal freq constraint: CPU%d (%d)\n",
pr->id, ret);
}
}
return 0;
}
void acpi_thermal_cpufreq_init(void)
void acpi_thermal_cpufreq_init(int cpu)
{
int i;
struct acpi_processor *pr = per_cpu(processors, cpu);
int ret;
i = cpufreq_register_notifier(&acpi_thermal_cpufreq_notifier_block,
CPUFREQ_POLICY_NOTIFIER);
if (!i)
acpi_thermal_cpufreq_is_init = 1;
ret = dev_pm_qos_add_request(get_cpu_device(cpu),
&pr->thermal_req, DEV_PM_QOS_MAX_FREQUENCY,
INT_MAX);
if (ret < 0) {
pr_err("Failed to add freq constraint for CPU%d (%d)\n", cpu,
ret);
return;
}
}
void acpi_thermal_cpufreq_exit(void)
void acpi_thermal_cpufreq_exit(int cpu)
{
if (acpi_thermal_cpufreq_is_init)
cpufreq_unregister_notifier
(&acpi_thermal_cpufreq_notifier_block,
CPUFREQ_POLICY_NOTIFIER);
struct acpi_processor *pr = per_cpu(processors, cpu);
acpi_thermal_cpufreq_is_init = 0;
dev_pm_qos_remove_request(&pr->thermal_req);
}
#else /* ! CONFIG_CPU_FREQ */
static int cpufreq_get_max_state(unsigned int cpu)
{


@ -89,6 +89,10 @@ bool acpi_sleep_state_supported(u8 sleep_state)
}
#ifdef CONFIG_ACPI_SLEEP
static bool sleep_no_lps0 __read_mostly;
module_param(sleep_no_lps0, bool, 0644);
MODULE_PARM_DESC(sleep_no_lps0, "Do not use the special LPS0 device interface");
static u32 acpi_target_sleep_state = ACPI_STATE_S0;
u32 acpi_target_system_state(void)
@ -158,11 +162,11 @@ static int __init init_nvs_nosave(const struct dmi_system_id *d)
return 0;
}
static bool acpi_sleep_no_lps0;
static bool acpi_sleep_default_s3;
static int __init init_no_lps0(const struct dmi_system_id *d)
static int __init init_default_s3(const struct dmi_system_id *d)
{
acpi_sleep_no_lps0 = true;
acpi_sleep_default_s3 = true;
return 0;
}
@ -363,7 +367,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
* S0 Idle firmware interface.
*/
{
.callback = init_no_lps0,
.callback = init_default_s3,
.ident = "Dell XPS13 9360",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
@ -376,7 +380,7 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = {
* https://bugzilla.kernel.org/show_bug.cgi?id=199057).
*/
{
.callback = init_no_lps0,
.callback = init_default_s3,
.ident = "ThinkPad X1 Tablet(2016)",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
@ -524,8 +528,9 @@ static void acpi_pm_end(void)
acpi_sleep_tts_switch(acpi_target_sleep_state);
}
#else /* !CONFIG_ACPI_SLEEP */
#define sleep_no_lps0 (1)
#define acpi_target_sleep_state ACPI_STATE_S0
#define acpi_sleep_no_lps0 (false)
#define acpi_sleep_default_s3 (1)
static inline void acpi_sleep_dmi_check(void) {}
#endif /* CONFIG_ACPI_SLEEP */
@ -691,7 +696,6 @@ static const struct platform_suspend_ops acpi_suspend_ops_old = {
.recover = acpi_pm_finish,
};
static bool s2idle_in_progress;
static bool s2idle_wakeup;
/*
@ -904,42 +908,43 @@ static int lps0_device_attach(struct acpi_device *adev,
if (lps0_device_handle)
return 0;
if (acpi_sleep_no_lps0) {
acpi_handle_info(adev->handle,
"Low Power S0 Idle interface disabled\n");
return 0;
}
if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0))
return 0;
guid_parse(ACPI_LPS0_DSM_UUID, &lps0_dsm_guid);
/* Check if the _DSM is present and as expected. */
out_obj = acpi_evaluate_dsm(adev->handle, &lps0_dsm_guid, 1, 0, NULL);
if (out_obj && out_obj->type == ACPI_TYPE_BUFFER) {
char bitmask = *(char *)out_obj->buffer.pointer;
lps0_dsm_func_mask = bitmask;
lps0_device_handle = adev->handle;
/*
* Use suspend-to-idle by default if the default
* suspend mode was not set from the command line.
*/
if (mem_sleep_default > PM_SUSPEND_MEM)
mem_sleep_current = PM_SUSPEND_TO_IDLE;
acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
bitmask);
acpi_ec_mark_gpe_for_wake();
} else {
if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER) {
acpi_handle_debug(adev->handle,
"_DSM function 0 evaluation failed\n");
return 0;
}
lps0_dsm_func_mask = *(char *)out_obj->buffer.pointer;
ACPI_FREE(out_obj);
acpi_handle_debug(adev->handle, "_DSM function mask: 0x%x\n",
lps0_dsm_func_mask);
lps0_device_handle = adev->handle;
lpi_device_get_constraints();
/*
* Use suspend-to-idle by default if the default suspend mode was not
* set from the command line.
*/
if (mem_sleep_default > PM_SUSPEND_MEM && !acpi_sleep_default_s3)
mem_sleep_current = PM_SUSPEND_TO_IDLE;
/*
* Some LPS0 systems, like ASUS Zenbook UX430UNR/i7-8550U, require the
* EC GPE to be enabled while suspended for certain wakeup devices to
* work, so mark it as wakeup-capable.
*/
acpi_ec_mark_gpe_for_wake();
return 0;
}
@ -951,98 +956,110 @@ static struct acpi_scan_handler lps0_handler = {
static int acpi_s2idle_begin(void)
{
acpi_scan_lock_acquire();
s2idle_in_progress = true;
return 0;
}
static int acpi_s2idle_prepare(void)
{
if (lps0_device_handle) {
acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
if (acpi_sci_irq_valid()) {
enable_irq_wake(acpi_sci_irq);
acpi_ec_set_gpe_wake_mask(ACPI_GPE_ENABLE);
}
if (acpi_sci_irq_valid())
enable_irq_wake(acpi_sci_irq);
acpi_enable_wakeup_devices(ACPI_STATE_S0);
/* Change the configuration of GPEs to avoid spurious wakeup. */
acpi_enable_all_wakeup_gpes();
acpi_os_wait_events_complete();
s2idle_wakeup = true;
return 0;
}
static int acpi_s2idle_prepare_late(void)
{
if (!lps0_device_handle || sleep_no_lps0)
return 0;
if (pm_debug_messages_on)
lpi_check_constraints();
acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_OFF);
acpi_sleep_run_lps0_dsm(ACPI_LPS0_ENTRY);
return 0;
}
static void acpi_s2idle_wake(void)
{
if (!lps0_device_handle)
/*
* If IRQD_WAKEUP_ARMED is set for the SCI at this point, the SCI has
* not triggered while suspended, so bail out.
*/
if (!acpi_sci_irq_valid() ||
irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))
return;
if (pm_debug_messages_on)
lpi_check_constraints();
/*
* If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means
* that the SCI has triggered while suspended, so cancel the wakeup in
* case it has not been a wakeup event (the GPEs will be checked later).
* If there are EC events to process, the wakeup may be a spurious one
* coming from the EC.
*/
if (acpi_sci_irq_valid() &&
!irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq))) {
pm_system_cancel_wakeup();
s2idle_wakeup = true;
if (acpi_ec_dispatch_gpe()) {
/*
* On some platforms with the LPS0 _DSM device noirq resume
* takes too much time for EC wakeup events to survive, so look
* for them now.
* Cancel the wakeup and process all pending events in case
* there are any wakeup ones in there.
*
* Note that if any non-EC GPEs are active at this point, the
* SCI will retrigger after the rearming below, so no events
* should be missed by canceling the wakeup here.
*/
acpi_ec_dispatch_gpe();
pm_system_cancel_wakeup();
/*
* The EC driver uses the system workqueue and an additional
* special one, so those need to be flushed too.
*/
acpi_os_wait_events_complete(); /* synchronize EC GPE processing */
acpi_ec_flush_work();
acpi_os_wait_events_complete(); /* synchronize Notify handling */
rearm_wake_irq(acpi_sci_irq);
}
}
static void acpi_s2idle_sync(void)
static void acpi_s2idle_restore_early(void)
{
/*
* Process all pending events in case there are any wakeup ones.
*
* The EC driver uses the system workqueue and an additional special
* one, so those need to be flushed too.
*/
acpi_os_wait_events_complete(); /* synchronize SCI IRQ handling */
acpi_ec_flush_work();
acpi_os_wait_events_complete(); /* synchronize Notify handling */
s2idle_wakeup = false;
if (!lps0_device_handle || sleep_no_lps0)
return;
acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
}
static void acpi_s2idle_restore(void)
{
s2idle_wakeup = false;
acpi_enable_all_runtime_gpes();
acpi_disable_wakeup_devices(ACPI_STATE_S0);
if (acpi_sci_irq_valid())
disable_irq_wake(acpi_sci_irq);
if (lps0_device_handle) {
if (acpi_sci_irq_valid()) {
acpi_ec_set_gpe_wake_mask(ACPI_GPE_DISABLE);
acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT);
acpi_sleep_run_lps0_dsm(ACPI_LPS0_SCREEN_ON);
disable_irq_wake(acpi_sci_irq);
}
}
static void acpi_s2idle_end(void)
{
s2idle_in_progress = false;
acpi_scan_lock_release();
}
static const struct platform_s2idle_ops acpi_s2idle_ops = {
.begin = acpi_s2idle_begin,
.prepare = acpi_s2idle_prepare,
.prepare_late = acpi_s2idle_prepare_late,
.wake = acpi_s2idle_wake,
.sync = acpi_s2idle_sync,
.restore_early = acpi_s2idle_restore_early,
.restore = acpi_s2idle_restore,
.end = acpi_s2idle_end,
};
@ -1063,7 +1080,6 @@ static void acpi_sleep_suspend_setup(void)
}
#else /* !CONFIG_SUSPEND */
#define s2idle_in_progress (false)
#define s2idle_wakeup (false)
#define lps0_device_handle (NULL)
static inline void acpi_sleep_suspend_setup(void) {}
@ -1074,11 +1090,6 @@ bool acpi_s2idle_wakeup(void)
return s2idle_wakeup;
}
bool acpi_sleep_no_ec_events(void)
{
return !s2idle_in_progress || !lps0_device_handle;
}
#ifdef CONFIG_PM_SLEEP
static u32 saved_bm_rld;


@ -179,7 +179,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
if (!raw_capacity)
return 0;
if (val != CPUFREQ_NOTIFY)
if (val != CPUFREQ_CREATE_POLICY)
return 0;
pr_debug("cpu_capacity: init cpu capacity for CPUs [%*pbl] (to_visit=%*pbl)\n",


@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o wakeup_stats.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o domain_governor.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o

View File

@ -149,29 +149,24 @@ static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
return ret;
}
static int genpd_runtime_suspend(struct device *dev);
/*
* Get the generic PM domain for a particular struct device.
* This validates the struct device pointer, the PM domain pointer,
* and checks that the PM domain pointer is a real generic PM domain.
* Any failure results in NULL being returned.
*/
static struct generic_pm_domain *genpd_lookup_dev(struct device *dev)
static struct generic_pm_domain *dev_to_genpd_safe(struct device *dev)
{
struct generic_pm_domain *genpd = NULL, *gpd;
if (IS_ERR_OR_NULL(dev) || IS_ERR_OR_NULL(dev->pm_domain))
return NULL;
mutex_lock(&gpd_list_lock);
list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
if (&gpd->domain == dev->pm_domain) {
genpd = gpd;
break;
}
}
mutex_unlock(&gpd_list_lock);
/* A genpd's always have its ->runtime_suspend() callback assigned. */
if (dev->pm_domain->ops.runtime_suspend == genpd_runtime_suspend)
return pd_to_genpd(dev->pm_domain);
return genpd;
return NULL;
}
/*
@ -385,8 +380,8 @@ int dev_pm_genpd_set_performance_state(struct device *dev, unsigned int state)
unsigned int prev;
int ret;
genpd = dev_to_genpd(dev);
if (IS_ERR(genpd))
genpd = dev_to_genpd_safe(dev);
if (!genpd)
return -ENODEV;
if (unlikely(!genpd->set_performance_state))
@ -1610,7 +1605,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
*/
int pm_genpd_remove_device(struct device *dev)
{
struct generic_pm_domain *genpd = genpd_lookup_dev(dev);
struct generic_pm_domain *genpd = dev_to_genpd_safe(dev);
if (!genpd)
return -EINVAL;


@ -716,7 +716,7 @@ static void async_resume_noirq(void *data, async_cookie_t cookie)
put_device(dev);
}
void dpm_noirq_resume_devices(pm_message_t state)
static void dpm_noirq_resume_devices(pm_message_t state)
{
struct device *dev;
ktime_t starttime = ktime_get();
@ -760,13 +760,6 @@ void dpm_noirq_resume_devices(pm_message_t state)
trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
}
void dpm_noirq_end(void)
{
resume_device_irqs();
device_wakeup_disarm_wake_irqs();
cpuidle_resume();
}
/**
* dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
* @state: PM transition of the system being carried out.
@ -777,7 +770,11 @@ void dpm_noirq_end(void)
void dpm_resume_noirq(pm_message_t state)
{
dpm_noirq_resume_devices(state);
dpm_noirq_end();
resume_device_irqs();
device_wakeup_disarm_wake_irqs();
cpuidle_resume();
}
static pm_callback_t dpm_subsys_resume_early_cb(struct device *dev,
@ -1291,11 +1288,6 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
if (async_error)
goto Complete;
if (pm_wakeup_pending()) {
async_error = -EBUSY;
goto Complete;
}
if (dev->power.syscore || dev->power.direct_complete)
goto Complete;
@ -1362,14 +1354,7 @@ static int device_suspend_noirq(struct device *dev)
return __device_suspend_noirq(dev, pm_transition, false);
}
void dpm_noirq_begin(void)
{
cpuidle_pause();
device_wakeup_arm_wake_irqs();
suspend_device_irqs();
}
int dpm_noirq_suspend_devices(pm_message_t state)
static int dpm_noirq_suspend_devices(pm_message_t state)
{
ktime_t starttime = ktime_get();
int error = 0;
@ -1426,7 +1411,11 @@ int dpm_suspend_noirq(pm_message_t state)
{
int ret;
dpm_noirq_begin();
cpuidle_pause();
device_wakeup_arm_wake_irqs();
suspend_device_irqs();
ret = dpm_noirq_suspend_devices(state);
if (ret)
dpm_resume_noirq(resume_event(state));

View File

@ -149,3 +149,21 @@ static inline void device_pm_init(struct device *dev)
device_pm_sleep_init(dev);
pm_runtime_init(dev);
}
#ifdef CONFIG_PM_SLEEP
/* drivers/base/power/wakeup_stats.c */
extern int wakeup_source_sysfs_add(struct device *parent,
struct wakeup_source *ws);
extern void wakeup_source_sysfs_remove(struct wakeup_source *ws);
extern int pm_wakeup_source_sysfs_add(struct device *parent);
#else /* !CONFIG_PM_SLEEP */
static inline int pm_wakeup_source_sysfs_add(struct device *parent)
{
return 0;
}
#endif /* CONFIG_PM_SLEEP */

View File

@ -5,6 +5,7 @@
#include <linux/export.h>
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeup.h>
#include <linux/atomic.h>
#include <linux/jiffies.h>
#include "power.h"
@ -667,8 +668,13 @@ int dpm_sysfs_add(struct device *dev)
if (rc)
goto err_wakeup;
}
rc = pm_wakeup_source_sysfs_add(dev);
if (rc)
goto err_latency;
return 0;
err_latency:
sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
err_wakeup:
sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
err_runtime:

View File

@ -72,22 +72,7 @@ static struct wakeup_source deleted_ws = {
.lock = __SPIN_LOCK_UNLOCKED(deleted_ws.lock),
};
/**
* wakeup_source_prepare - Prepare a new wakeup source for initialization.
* @ws: Wakeup source to prepare.
* @name: Pointer to the name of the new wakeup source.
*
* Callers must ensure that the @name string won't be freed when @ws is still in
* use.
*/
void wakeup_source_prepare(struct wakeup_source *ws, const char *name)
{
if (ws) {
memset(ws, 0, sizeof(*ws));
ws->name = name;
}
}
EXPORT_SYMBOL_GPL(wakeup_source_prepare);
static DEFINE_IDA(wakeup_ida);
/**
* wakeup_source_create - Create a struct wakeup_source object.
@ -96,13 +81,31 @@ EXPORT_SYMBOL_GPL(wakeup_source_prepare);
struct wakeup_source *wakeup_source_create(const char *name)
{
struct wakeup_source *ws;
const char *ws_name;
int id;
ws = kmalloc(sizeof(*ws), GFP_KERNEL);
ws = kzalloc(sizeof(*ws), GFP_KERNEL);
if (!ws)
return NULL;
goto err_ws;
ws_name = kstrdup_const(name, GFP_KERNEL);
if (!ws_name)
goto err_name;
ws->name = ws_name;
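/* The id allocated below becomes the "wakeup%d" name of the sysfs node. */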
id = ida_alloc(&wakeup_ida, GFP_KERNEL);
if (id < 0)
goto err_id;
ws->id = id;
wakeup_source_prepare(ws, name ? kstrdup_const(name, GFP_KERNEL) : NULL);
return ws;
err_id:
kfree_const(ws->name);
err_name:
kfree(ws);
err_ws:
return NULL;
}
EXPORT_SYMBOL_GPL(wakeup_source_create);
@ -134,6 +137,13 @@ static void wakeup_source_record(struct wakeup_source *ws)
spin_unlock_irqrestore(&deleted_ws.lock, flags);
}
static void wakeup_source_free(struct wakeup_source *ws)
{
ida_free(&wakeup_ida, ws->id);
kfree_const(ws->name);
kfree(ws);
}
/**
* wakeup_source_destroy - Destroy a struct wakeup_source object.
* @ws: Wakeup source to destroy.
@ -147,8 +157,7 @@ void wakeup_source_destroy(struct wakeup_source *ws)
__pm_relax(ws);
wakeup_source_record(ws);
kfree_const(ws->name);
kfree(ws);
wakeup_source_free(ws);
}
EXPORT_SYMBOL_GPL(wakeup_source_destroy);
@ -200,16 +209,26 @@ EXPORT_SYMBOL_GPL(wakeup_source_remove);
/**
* wakeup_source_register - Create wakeup source and add it to the list.
* @dev: Device this wakeup source is associated with (or NULL if virtual).
* @name: Name of the wakeup source to register.
*/
struct wakeup_source *wakeup_source_register(const char *name)
struct wakeup_source *wakeup_source_register(struct device *dev,
const char *name)
{
struct wakeup_source *ws;
int ret;
ws = wakeup_source_create(name);
if (ws)
if (ws) {
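/* If the device is not registered yet, the sysfs entry is added later via dpm_sysfs_add(). */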
if (!dev || device_is_registered(dev)) {
ret = wakeup_source_sysfs_add(dev, ws);
if (ret) {
wakeup_source_free(ws);
return NULL;
}
}
wakeup_source_add(ws);
}
return ws;
}
EXPORT_SYMBOL_GPL(wakeup_source_register);
@ -222,6 +241,7 @@ void wakeup_source_unregister(struct wakeup_source *ws)
{
if (ws) {
wakeup_source_remove(ws);
wakeup_source_sysfs_remove(ws);
wakeup_source_destroy(ws);
}
}
@ -265,7 +285,7 @@ int device_wakeup_enable(struct device *dev)
if (pm_suspend_target_state != PM_SUSPEND_ON)
dev_dbg(dev, "Suspicious %s() during system transition!\n", __func__);
ws = wakeup_source_register(dev_name(dev));
ws = wakeup_source_register(dev, dev_name(dev));
if (!ws)
return -ENOMEM;
@ -859,7 +879,7 @@ EXPORT_SYMBOL_GPL(pm_system_wakeup);
void pm_system_cancel_wakeup(void)
{
atomic_dec(&pm_abort_suspend);
atomic_dec_if_positive(&pm_abort_suspend);
}
void pm_wakeup_clear(bool reset)

View File

@ -0,0 +1,214 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Wakeup statistics in sysfs
*
* Copyright (c) 2019 Linux Foundation
* Copyright (c) 2019 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Copyright (c) 2019 Google Inc.
*/
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/init.h>
#include <linux/kdev_t.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/slab.h>
#include <linux/timekeeping.h>
#include "power.h"
static struct class *wakeup_class;
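/* Generate a read-only sysfs show() function for a numeric wakeup_source field. */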
#define wakeup_attr(_name) \
static ssize_t _name##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
{ \
struct wakeup_source *ws = dev_get_drvdata(dev); \
\
return sprintf(buf, "%lu\n", ws->_name); \
} \
static DEVICE_ATTR_RO(_name)
wakeup_attr(active_count);
wakeup_attr(event_count);
wakeup_attr(wakeup_count);
wakeup_attr(expire_count);
static ssize_t active_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time =
ws->active ? ktime_sub(ktime_get(), ws->last_time) : 0;
return sprintf(buf, "%lld\n", ktime_to_ms(active_time));
}
static DEVICE_ATTR_RO(active_time_ms);
static ssize_t total_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time;
ktime_t total_time = ws->total_time;
if (ws->active) {
active_time = ktime_sub(ktime_get(), ws->last_time);
total_time = ktime_add(total_time, active_time);
}
return sprintf(buf, "%lld\n", ktime_to_ms(total_time));
}
static DEVICE_ATTR_RO(total_time_ms);
static ssize_t max_time_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t active_time;
ktime_t max_time = ws->max_time;
if (ws->active) {
active_time = ktime_sub(ktime_get(), ws->last_time);
if (active_time > max_time)
max_time = active_time;
}
return sprintf(buf, "%lld\n", ktime_to_ms(max_time));
}
static DEVICE_ATTR_RO(max_time_ms);
static ssize_t last_change_ms_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
return sprintf(buf, "%lld\n", ktime_to_ms(ws->last_time));
}
static DEVICE_ATTR_RO(last_change_ms);
static ssize_t name_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", ws->name);
}
static DEVICE_ATTR_RO(name);
static ssize_t prevent_suspend_time_ms_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct wakeup_source *ws = dev_get_drvdata(dev);
ktime_t prevent_sleep_time = ws->prevent_sleep_time;
if (ws->active && ws->autosleep_enabled) {
prevent_sleep_time = ktime_add(prevent_sleep_time,
ktime_sub(ktime_get(), ws->start_prevent_time));
}
return sprintf(buf, "%lld\n", ktime_to_ms(prevent_sleep_time));
}
static DEVICE_ATTR_RO(prevent_suspend_time_ms);
static struct attribute *wakeup_source_attrs[] = {
&dev_attr_name.attr,
&dev_attr_active_count.attr,
&dev_attr_event_count.attr,
&dev_attr_wakeup_count.attr,
&dev_attr_expire_count.attr,
&dev_attr_active_time_ms.attr,
&dev_attr_total_time_ms.attr,
&dev_attr_max_time_ms.attr,
&dev_attr_last_change_ms.attr,
&dev_attr_prevent_suspend_time_ms.attr,
NULL,
};
ATTRIBUTE_GROUPS(wakeup_source);
static void device_create_release(struct device *dev)
{
kfree(dev);
}
static struct device *wakeup_source_device_create(struct device *parent,
struct wakeup_source *ws)
{
struct device *dev = NULL;
int retval = -ENODEV;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev) {
retval = -ENOMEM;
goto error;
}
device_initialize(dev);
dev->devt = MKDEV(0, 0);
dev->class = wakeup_class;
dev->parent = parent;
dev->groups = wakeup_source_groups;
dev->release = device_create_release;
dev_set_drvdata(dev, ws);
device_set_pm_not_required(dev);
retval = kobject_set_name(&dev->kobj, "wakeup%d", ws->id);
if (retval)
goto error;
retval = device_add(dev);
if (retval)
goto error;
return dev;
error:
put_device(dev);
return ERR_PTR(retval);
}
/**
* wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs.
* @parent: Device the given wakeup source is associated with (or NULL if virtual).
* @ws: Wakeup source to be added to sysfs.
*/
int wakeup_source_sysfs_add(struct device *parent, struct wakeup_source *ws)
{
struct device *dev;
dev = wakeup_source_device_create(parent, ws);
if (IS_ERR(dev))
return PTR_ERR(dev);
ws->dev = dev;
return 0;
}
/**
* pm_wakeup_source_sysfs_add - Add wakeup_source attributes to sysfs
* for a device if they're missing.
* @parent: Device the given wakeup source is associated with.
*/
int pm_wakeup_source_sysfs_add(struct device *parent)
{
if (!parent->power.wakeup || parent->power.wakeup->dev)
return 0;
return wakeup_source_sysfs_add(parent, parent->power.wakeup);
}
/**
* wakeup_source_sysfs_remove - Remove wakeup_source attributes from sysfs.
* @ws: Wakeup source to be removed from sysfs.
*/
void wakeup_source_sysfs_remove(struct wakeup_source *ws)
{
device_unregister(ws->dev);
}
static int __init wakeup_sources_sysfs_init(void)
{
wakeup_class = class_create(THIS_MODULE, "wakeup");
return PTR_ERR_OR_ZERO(wakeup_class);
}
postcore_initcall(wakeup_sources_sysfs_init);
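A usage sketch of the reworked API (hypothetical driver code, not part of this commit): registering a wakeup source against a device now also creates the /sys/class/wakeup/wakeup<id>/ directory that carries the statistics above.

/* Hypothetical example, not part of this commit. */
#include <linux/device.h>
#include <linux/pm_wakeup.h>

static struct wakeup_source *example_ws;

static int example_probe(struct device *dev)
{
	/* Also creates the /sys/class/wakeup/wakeup<id>/ node. */
	example_ws = wakeup_source_register(dev, dev_name(dev));
	if (!example_ws)
		return -ENOMEM;

	__pm_stay_awake(example_ws);	/* active_count++, active_time_ms runs */
	__pm_relax(example_ws);		/* folds into total_time_ms and max_time_ms */

	return 0;
}

static void example_remove(void)
{
	/* Drops the sysfs node along with the wakeup source itself. */
	wakeup_source_unregister(example_ws);
}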

View File

@ -19,6 +19,18 @@ config ACPI_CPPC_CPUFREQ
If in doubt, say N.
config ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM
tristate "Allwinner nvmem based SUN50I CPUFreq driver"
depends on ARCH_SUNXI
depends on NVMEM_SUNXI_SID
select PM_OPP
help
This adds the nvmem-based CPUFreq driver for the Allwinner
H6 SoC.
To compile this driver as a module, choose M here: the
module will be called sun50i-cpufreq-nvmem.
config ARM_ARMADA_37XX_CPUFREQ
tristate "Armada 37xx CPUFreq support"
depends on ARCH_MVEBU && CPUFREQ_DT
@ -120,8 +132,8 @@ config ARM_OMAP2PLUS_CPUFREQ
depends on ARCH_OMAP2PLUS
default ARCH_OMAP2PLUS
config ARM_QCOM_CPUFREQ_KRYO
tristate "Qualcomm Kryo based CPUFreq"
config ARM_QCOM_CPUFREQ_NVMEM
tristate "Qualcomm nvmem based CPUFreq"
depends on ARM64
depends on QCOM_QFPROM
depends on QCOM_SMEM

View File

@ -64,7 +64,7 @@ obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o
obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
obj-$(CONFIG_ARM_QCOM_CPUFREQ_HW) += qcom-cpufreq-hw.o
obj-$(CONFIG_ARM_QCOM_CPUFREQ_KRYO) += qcom-cpufreq-kryo.o
obj-$(CONFIG_ARM_QCOM_CPUFREQ_NVMEM) += qcom-cpufreq-nvmem.o
obj-$(CONFIG_ARM_RASPBERRYPI_CPUFREQ) += raspberrypi-cpufreq.o
obj-$(CONFIG_ARM_S3C2410_CPUFREQ) += s3c2410-cpufreq.o
obj-$(CONFIG_ARM_S3C2412_CPUFREQ) += s3c2412-cpufreq.o
@ -80,6 +80,7 @@ obj-$(CONFIG_ARM_SCMI_CPUFREQ) += scmi-cpufreq.o
obj-$(CONFIG_ARM_SCPI_CPUFREQ) += scpi-cpufreq.o
obj-$(CONFIG_ARM_SPEAR_CPUFREQ) += spear-cpufreq.o
obj-$(CONFIG_ARM_STI_CPUFREQ) += sti-cpufreq.o
obj-$(CONFIG_ARM_ALLWINNER_SUN50I_CPUFREQ_NVMEM) += sun50i-cpufreq-nvmem.o
obj-$(CONFIG_ARM_TANGO_CPUFREQ) += tango-cpufreq.o
obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o

View File

@ -136,6 +136,8 @@ static int __init armada_8k_cpufreq_init(void)
nb_cpus = num_possible_cpus();
freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
if (!freq_tables)
return -ENOMEM;
cpumask_copy(&cpus, cpu_possible_mask);
/*

View File

@ -101,12 +101,15 @@ static const struct of_device_id whitelist[] __initconst = {
* platforms using "operating-points-v2" property.
*/
static const struct of_device_id blacklist[] __initconst = {
{ .compatible = "allwinner,sun50i-h6", },
{ .compatible = "calxeda,highbank", },
{ .compatible = "calxeda,ecx-2000", },
{ .compatible = "fsl,imx7d", },
{ .compatible = "fsl,imx8mq", },
{ .compatible = "fsl,imx8mm", },
{ .compatible = "fsl,imx8mn", },
{ .compatible = "marvell,armadaxp", },
@ -117,12 +120,14 @@ static const struct of_device_id blacklist[] __initconst = {
{ .compatible = "mediatek,mt817x", },
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
{ .compatible = "nvidia,tegra124", },
{ .compatible = "nvidia,tegra210", },
{ .compatible = "qcom,apq8096", },
{ .compatible = "qcom,msm8996", },
{ .compatible = "qcom,qcs404", },
{ .compatible = "st,stih407", },
{ .compatible = "st,stih410", },

View File

@ -1266,7 +1266,17 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy)
DEV_PM_QOS_MAX_FREQUENCY);
dev_pm_qos_remove_notifier(dev, &policy->nb_min,
DEV_PM_QOS_MIN_FREQUENCY);
dev_pm_qos_remove_request(policy->max_freq_req);
if (policy->max_freq_req) {
/*
* CPUFREQ_CREATE_POLICY notification is sent only after
* successfully adding the max_freq_req request.
*/
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_REMOVE_POLICY, policy);
dev_pm_qos_remove_request(policy->max_freq_req);
}
dev_pm_qos_remove_request(policy->min_freq_req);
kfree(policy->min_freq_req);
@ -1391,6 +1401,9 @@ static int cpufreq_online(unsigned int cpu)
ret);
goto out_destroy_policy;
}
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_CREATE_POLICY, policy);
}
if (cpufreq_driver->get && has_target()) {
@ -1807,8 +1820,8 @@ void cpufreq_suspend(void)
}
if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy))
pr_err("%s: Failed to suspend driver: %p\n", __func__,
policy);
pr_err("%s: Failed to suspend driver: %s\n", __func__,
cpufreq_driver->name);
}
suspend:
@ -2140,7 +2153,7 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
int ret = -EINVAL;
int ret;
down_write(&policy->rwsem);
@ -2347,15 +2360,13 @@ EXPORT_SYMBOL(cpufreq_get_policy);
* @policy: Policy object to modify.
* @new_policy: New policy data.
*
* Pass @new_policy to the cpufreq driver's ->verify() callback, run the
* installed policy notifiers for it with the CPUFREQ_ADJUST value, pass it to
* the driver's ->verify() callback again and run the notifiers for it again
* with the CPUFREQ_NOTIFY value. Next, copy the min and max parameters
* of @new_policy to @policy and either invoke the driver's ->setpolicy()
* callback (if present) or carry out a governor update for @policy. That is,
* run the current governor's ->limits() callback (if the governor field in
* @new_policy points to the same object as the one in @policy) or replace the
* governor for @policy with the new one stored in @new_policy.
* Pass @new_policy to the cpufreq driver's ->verify() callback. Next, copy the
* min and max parameters of @new_policy to @policy and either invoke the
* driver's ->setpolicy() callback (if present) or carry out a governor update
* for @policy. That is, run the current governor's ->limits() callback (if the
* governor field in @new_policy points to the same object as the one in
* @policy) or replace the governor for @policy with the new one stored in
* @new_policy.
*
* The cpuinfo part of @policy is not updated by this function.
*/
@ -2383,26 +2394,6 @@ int cpufreq_set_policy(struct cpufreq_policy *policy,
if (ret)
return ret;
/*
* The notifier-chain shall be removed once all the users of
* CPUFREQ_ADJUST are moved to use the QoS framework.
*/
/* adjust if necessary - all reasons */
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_ADJUST, new_policy);
/*
* verify the cpu speed can be set within this limit, which might be
* different to the first one
*/
ret = cpufreq_driver->verify(new_policy);
if (ret)
return ret;
/* notification of the new policy */
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_NOTIFY, new_policy);
policy->min = new_policy->min;
policy->max = new_policy->max;
trace_cpu_frequency_limits(policy);
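A hedged sketch of what a remaining policy-notifier user looks like once CPUFREQ_ADJUST and CPUFREQ_NOTIFY are gone (the notifier below is hypothetical; the CPUFREQ_CREATE_POLICY and CPUFREQ_REMOVE_POLICY events are the ones introduced by this series):

/* Hypothetical sketch, not part of this commit. */
#include <linux/cpufreq.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

static int example_policy_notifier(struct notifier_block *nb,
				   unsigned long event, void *data)
{
	struct cpufreq_policy *policy = data;

	switch (event) {
	case CPUFREQ_CREATE_POLICY:
		/* Per-policy setup, e.g. adding a dev_pm_qos request. */
		pr_debug("policy created for CPU%u\n", policy->cpu);
		break;
	case CPUFREQ_REMOVE_POLICY:
		/* Matching teardown before the policy is freed. */
		pr_debug("policy removed for CPU%u\n", policy->cpu);
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block example_policy_nb = {
	.notifier_call = example_policy_notifier,
};

/* Registered/unregistered with:
 * cpufreq_register_notifier(&example_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 * cpufreq_unregister_notifier(&example_policy_nb, CPUFREQ_POLICY_NOTIFIER);
 */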

View File

@ -16,6 +16,7 @@
#define OCOTP_CFG3_SPEED_GRADE_SHIFT 8
#define OCOTP_CFG3_SPEED_GRADE_MASK (0x3 << 8)
#define IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK (0xf << 8)
#define OCOTP_CFG3_MKT_SEGMENT_SHIFT 6
#define OCOTP_CFG3_MKT_SEGMENT_MASK (0x3 << 6)
@ -34,7 +35,12 @@ static int imx_cpufreq_dt_probe(struct platform_device *pdev)
if (ret)
return ret;
speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK) >> OCOTP_CFG3_SPEED_GRADE_SHIFT;
if (of_machine_is_compatible("fsl,imx8mn"))
speed_grade = (cell_value & IMX8MN_OCOTP_CFG3_SPEED_GRADE_MASK)
>> OCOTP_CFG3_SPEED_GRADE_SHIFT;
else
speed_grade = (cell_value & OCOTP_CFG3_SPEED_GRADE_MASK)
>> OCOTP_CFG3_SPEED_GRADE_SHIFT;
mkt_segment = (cell_value & OCOTP_CFG3_MKT_SEGMENT_MASK) >> OCOTP_CFG3_MKT_SEGMENT_SHIFT;
/*

View File

@ -24,6 +24,7 @@
#include <linux/fs.h>
#include <linux/acpi.h>
#include <linux/vmalloc.h>
#include <linux/pm_qos.h>
#include <trace/events/power.h>
#include <asm/div64.h>
@ -1085,6 +1086,47 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
return count;
}
static struct cpufreq_driver intel_pstate;
static void update_qos_request(enum dev_pm_qos_req_type type)
{
int max_state, turbo_max, freq, i, perf_pct;
struct dev_pm_qos_request *req;
struct cpufreq_policy *policy;
for_each_possible_cpu(i) {
struct cpudata *cpu = all_cpu_data[i];
policy = cpufreq_cpu_get(i);
if (!policy)
continue;
req = policy->driver_data;
cpufreq_cpu_put(policy);
if (!req)
continue;
if (hwp_active)
intel_pstate_get_hwp_max(i, &turbo_max, &max_state);
else
turbo_max = cpu->pstate.turbo_pstate;
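/*
 * policy->driver_data points to the pair of requests added in
 * intel_cpufreq_cpu_init(): req[0] is the min-frequency request and
 * req[1] the max-frequency one.
 */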
if (type == DEV_PM_QOS_MIN_FREQUENCY) {
perf_pct = global.min_perf_pct;
} else {
req++;
perf_pct = global.max_perf_pct;
}
freq = DIV_ROUND_UP(turbo_max * perf_pct, 100);
freq *= cpu->pstate.scaling;
if (dev_pm_qos_update_request(req, freq) < 0)
pr_warn("Failed to update freq constraint: CPU%d\n", i);
}
}
static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
const char *buf, size_t count)
{
@ -1108,7 +1150,10 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
mutex_unlock(&intel_pstate_limits_lock);
intel_pstate_update_policies();
if (intel_pstate_driver == &intel_pstate)
intel_pstate_update_policies();
else
update_qos_request(DEV_PM_QOS_MAX_FREQUENCY);
mutex_unlock(&intel_pstate_driver_lock);
@ -1139,7 +1184,10 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
mutex_unlock(&intel_pstate_limits_lock);
intel_pstate_update_policies();
if (intel_pstate_driver == &intel_pstate)
intel_pstate_update_policies();
else
update_qos_request(DEV_PM_QOS_MIN_FREQUENCY);
mutex_unlock(&intel_pstate_driver_lock);
@ -2332,8 +2380,16 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,
static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
int ret = __intel_pstate_cpu_init(policy);
int max_state, turbo_max, min_freq, max_freq, ret;
struct dev_pm_qos_request *req;
struct cpudata *cpu;
struct device *dev;
dev = get_cpu_device(policy->cpu);
if (!dev)
return -ENODEV;
ret = __intel_pstate_cpu_init(policy);
if (ret)
return ret;
@ -2342,7 +2398,63 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
/* This reflects the intel_pstate_get_cpu_pstates() setting. */
policy->cur = policy->cpuinfo.min_freq;
req = kcalloc(2, sizeof(*req), GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto pstate_exit;
}
cpu = all_cpu_data[policy->cpu];
if (hwp_active)
intel_pstate_get_hwp_max(policy->cpu, &turbo_max, &max_state);
else
turbo_max = cpu->pstate.turbo_pstate;
min_freq = DIV_ROUND_UP(turbo_max * global.min_perf_pct, 100);
min_freq *= cpu->pstate.scaling;
max_freq = DIV_ROUND_UP(turbo_max * global.max_perf_pct, 100);
max_freq *= cpu->pstate.scaling;
ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_MIN_FREQUENCY,
min_freq);
if (ret < 0) {
dev_err(dev, "Failed to add min-freq constraint (%d)\n", ret);
goto free_req;
}
ret = dev_pm_qos_add_request(dev, req + 1, DEV_PM_QOS_MAX_FREQUENCY,
max_freq);
if (ret < 0) {
dev_err(dev, "Failed to add max-freq constraint (%d)\n", ret);
goto remove_min_req;
}
policy->driver_data = req;
return 0;
remove_min_req:
dev_pm_qos_remove_request(req);
free_req:
kfree(req);
pstate_exit:
intel_pstate_exit_perf_limits(policy);
return ret;
}
static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct dev_pm_qos_request *req;
req = policy->driver_data;
dev_pm_qos_remove_request(req + 1);
dev_pm_qos_remove_request(req);
kfree(req);
return intel_pstate_cpu_exit(policy);
}
static struct cpufreq_driver intel_cpufreq = {
@ -2351,7 +2463,7 @@ static struct cpufreq_driver intel_cpufreq = {
.target = intel_cpufreq_target,
.fast_switch = intel_cpufreq_fast_switch,
.init = intel_cpufreq_cpu_init,
.exit = intel_pstate_cpu_exit,
.exit = intel_cpufreq_cpu_exit,
.stop_cpu = intel_cpufreq_stop_cpu,
.update_limits = intel_pstate_update_limits,
.name = "intel_cpufreq",

View File

@ -338,7 +338,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
goto out_free_resources;
}
proc_reg = regulator_get_exclusive(cpu_dev, "proc");
proc_reg = regulator_get_optional(cpu_dev, "proc");
if (IS_ERR(proc_reg)) {
if (PTR_ERR(proc_reg) == -EPROBE_DEFER)
pr_warn("proc regulator for cpu%d not ready, retry.\n",
@ -535,6 +535,8 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
{ .compatible = "mediatek,mt817x", },
{ .compatible = "mediatek,mt8173", },
{ .compatible = "mediatek,mt8176", },
{ .compatible = "mediatek,mt8183", },
{ .compatible = "mediatek,mt8516", },
{ }
};

View File

@ -110,6 +110,13 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
#endif
policy->freq_table = cbe_freqs;
cbe_cpufreq_pmi_policy_init(policy);
return 0;
}
static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
cbe_cpufreq_pmi_policy_exit(policy);
return 0;
}
@ -129,6 +136,7 @@ static struct cpufreq_driver cbe_cpufreq_driver = {
.verify = cpufreq_generic_frequency_table_verify,
.target_index = cbe_cpufreq_target,
.init = cbe_cpufreq_cpu_init,
.exit = cbe_cpufreq_cpu_exit,
.name = "cbe-cpufreq",
.flags = CPUFREQ_CONST_LOOPS,
};
@ -139,15 +147,24 @@ static struct cpufreq_driver cbe_cpufreq_driver = {
static int __init cbe_cpufreq_init(void)
{
int ret;
if (!machine_is(cell))
return -ENODEV;
return cpufreq_register_driver(&cbe_cpufreq_driver);
cbe_cpufreq_pmi_init();
ret = cpufreq_register_driver(&cbe_cpufreq_driver);
if (ret)
cbe_cpufreq_pmi_exit();
return ret;
}
static void __exit cbe_cpufreq_exit(void)
{
cpufreq_unregister_driver(&cbe_cpufreq_driver);
cbe_cpufreq_pmi_exit();
}
module_init(cbe_cpufreq_init);

View File

@ -20,6 +20,14 @@ int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode);
#if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI)
extern bool cbe_cpufreq_has_pmi;
void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy);
void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy);
void cbe_cpufreq_pmi_init(void);
void cbe_cpufreq_pmi_exit(void);
#else
#define cbe_cpufreq_has_pmi (0)
static inline void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) {}
static inline void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) {}
static inline void cbe_cpufreq_pmi_init(void) {}
static inline void cbe_cpufreq_pmi_exit(void) {}
#endif

View File

@ -12,6 +12,7 @@
#include <linux/timer.h>
#include <linux/init.h>
#include <linux/of_platform.h>
#include <linux/pm_qos.h>
#include <asm/processor.h>
#include <asm/prom.h>
@ -24,8 +25,6 @@
#include "ppc_cbe_cpufreq.h"
static u8 pmi_slow_mode_limit[MAX_CBE];
bool cbe_cpufreq_has_pmi = false;
EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi);
@ -65,64 +64,89 @@ EXPORT_SYMBOL_GPL(cbe_cpufreq_set_pmode_pmi);
static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg)
{
struct cpufreq_policy *policy;
struct dev_pm_qos_request *req;
u8 node, slow_mode;
int cpu, ret;
BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE);
node = pmi_msg.data1;
slow_mode = pmi_msg.data2;
pmi_slow_mode_limit[node] = slow_mode;
cpu = cbe_node_to_cpu(node);
pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode);
}
static int pmi_notifier(struct notifier_block *nb,
unsigned long event, void *data)
{
struct cpufreq_policy *policy = data;
struct cpufreq_frequency_table *cbe_freqs = policy->freq_table;
u8 node;
/* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
* policy events?
*/
node = cbe_cpu_to_node(policy->cpu);
pr_debug("got notified, event=%lu, node=%u\n", event, node);
if (pmi_slow_mode_limit[node] != 0) {
pr_debug("limiting node %d to slow mode %d\n",
node, pmi_slow_mode_limit[node]);
cpufreq_verify_within_limits(policy, 0,
cbe_freqs[pmi_slow_mode_limit[node]].frequency);
policy = cpufreq_cpu_get(cpu);
if (!policy) {
pr_warn("cpufreq policy not found cpu%d\n", cpu);
return;
}
return 0;
}
req = policy->driver_data;
static struct notifier_block pmi_notifier_block = {
.notifier_call = pmi_notifier,
};
ret = dev_pm_qos_update_request(req,
policy->freq_table[slow_mode].frequency);
if (ret < 0)
pr_warn("Failed to update freq constraint: %d\n", ret);
else
pr_debug("limiting node %d to slow mode %d\n", node, slow_mode);
cpufreq_cpu_put(policy);
}
static struct pmi_handler cbe_pmi_handler = {
.type = PMI_TYPE_FREQ_CHANGE,
.handle_pmi_message = cbe_cpufreq_handle_pmi,
};
static int __init cbe_cpufreq_pmi_init(void)
void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy)
{
cbe_cpufreq_has_pmi = pmi_register_handler(&cbe_pmi_handler) == 0;
struct dev_pm_qos_request *req;
int ret;
if (!cbe_cpufreq_has_pmi)
return -ENODEV;
return;
cpufreq_register_notifier(&pmi_notifier_block, CPUFREQ_POLICY_NOTIFIER);
req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req)
return;
return 0;
ret = dev_pm_qos_add_request(get_cpu_device(policy->cpu), req,
DEV_PM_QOS_MAX_FREQUENCY,
policy->freq_table[0].frequency);
if (ret < 0) {
pr_err("Failed to add freq constraint (%d)\n", ret);
kfree(req);
return;
}
policy->driver_data = req;
}
device_initcall(cbe_cpufreq_pmi_init);
EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_init);
void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy)
{
struct dev_pm_qos_request *req = policy->driver_data;
if (cbe_cpufreq_has_pmi) {
dev_pm_qos_remove_request(req);
kfree(req);
}
}
EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_exit);
void cbe_cpufreq_pmi_init(void)
{
if (!pmi_register_handler(&cbe_pmi_handler))
cbe_cpufreq_has_pmi = true;
}
EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_init);
void cbe_cpufreq_pmi_exit(void)
{
pmi_unregister_handler(&cbe_pmi_handler);
cbe_cpufreq_has_pmi = false;
}
EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_exit);

View File

@ -20,6 +20,7 @@
#define LUT_VOLT GENMASK(11, 0)
#define LUT_ROW_SIZE 32
#define CLK_HW_DIV 2
#define LUT_TURBO_IND 1
/* Register offsets */
#define REG_ENABLE 0x0
@ -34,9 +35,12 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
unsigned int index)
{
void __iomem *perf_state_reg = policy->driver_data;
unsigned long freq = policy->freq_table[index].frequency;
writel_relaxed(index, perf_state_reg);
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return 0;
}
@ -63,6 +67,7 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
{
void __iomem *perf_state_reg = policy->driver_data;
int index;
unsigned long freq;
index = policy->cached_resolved_idx;
if (index < 0)
@ -70,16 +75,19 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
writel_relaxed(index, perf_state_reg);
return policy->freq_table[index].frequency;
freq = policy->freq_table[index].frequency;
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return freq;
}
static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
struct cpufreq_policy *policy,
void __iomem *base)
{
u32 data, src, lval, i, core_count, prev_cc = 0, prev_freq = 0, freq;
u32 data, src, lval, i, core_count, prev_freq = 0, freq;
u32 volt;
unsigned int max_cores = cpumask_weight(policy->cpus);
struct cpufreq_frequency_table *table;
table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
@ -102,12 +110,12 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
else
freq = cpu_hw_rate / 1000;
if (freq != prev_freq && core_count == max_cores) {
if (freq != prev_freq && core_count != LUT_TURBO_IND) {
table[i].frequency = freq;
dev_pm_opp_add(cpu_dev, freq * 1000, volt);
dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
freq, core_count);
} else {
} else if (core_count == LUT_TURBO_IND) {
table[i].frequency = CPUFREQ_ENTRY_INVALID;
}
@ -115,14 +123,14 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
* Two of the same frequencies means the end of the table.
*/
if (i > 0 && prev_freq == freq && prev_cc == core_count) {
if (i > 0 && prev_freq == freq) {
struct cpufreq_frequency_table *prev = &table[i - 1];
/*
* Only treat the last frequency that might be a boost
* as the boost frequency
*/
if (prev_cc != max_cores) {
if (prev->frequency == CPUFREQ_ENTRY_INVALID) {
prev->frequency = prev_freq;
prev->flags = CPUFREQ_BOOST_FREQ;
dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt);
@ -131,7 +139,6 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
break;
}
prev_cc = core_count;
prev_freq = freq;
}

View File

@ -1,249 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018, The Linux Foundation. All rights reserved.
*/
/*
* In Certain QCOM SoCs like apq8096 and msm8996 that have KRYO processors,
* the CPU frequency subset and voltage value of each OPP varies
* based on the silicon variant in use. Qualcomm Process Voltage Scaling Tables
* defines the voltage and frequency value based on the msm-id in SMEM
* and speedbin blown in the efuse combination.
* The qcom-cpufreq-kryo driver reads the msm-id and efuse value from the SoC
* to provide the OPP framework with required information.
* This is used to determine the voltage and frequency value for each OPP of
* operating-points-v2 table when it is parsed by the OPP framework.
*/
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>
#define MSM_ID_SMEM 137
enum _msm_id {
MSM8996V3 = 0xF6ul,
APQ8096V3 = 0x123ul,
MSM8996SG = 0x131ul,
APQ8096SG = 0x138ul,
};
enum _msm8996_version {
MSM8996_V3,
MSM8996_SG,
NUM_OF_MSM8996_VERSIONS,
};
static struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
{
size_t len;
u32 *msm_id;
enum _msm8996_version version;
msm_id = qcom_smem_get(QCOM_SMEM_HOST_ANY, MSM_ID_SMEM, &len);
if (IS_ERR(msm_id))
return NUM_OF_MSM8996_VERSIONS;
/* The first 4 bytes are format, next to them is the actual msm-id */
msm_id++;
switch ((enum _msm_id)*msm_id) {
case MSM8996V3:
case APQ8096V3:
version = MSM8996_V3;
break;
case MSM8996SG:
case APQ8096SG:
version = MSM8996_SG;
break;
default:
version = NUM_OF_MSM8996_VERSIONS;
}
return version;
}
static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
{
struct opp_table **opp_tables;
enum _msm8996_version msm8996_version;
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
unsigned cpu;
u8 *speedbin;
u32 versions;
size_t len;
int ret;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -ENODEV;
msm8996_version = qcom_cpufreq_kryo_get_msm_id();
if (NUM_OF_MSM8996_VERSIONS == msm8996_version) {
dev_err(cpu_dev, "Not Snapdragon 820/821!");
return -ENODEV;
}
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu");
if (!ret) {
of_node_put(np);
return -ENOENT;
}
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
dev_err(cpu_dev, "Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
return PTR_ERR(speedbin_nvmem);
}
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
nvmem_cell_put(speedbin_nvmem);
if (IS_ERR(speedbin))
return PTR_ERR(speedbin);
switch (msm8996_version) {
case MSM8996_V3:
versions = 1 << (unsigned int)(*speedbin);
break;
case MSM8996_SG:
versions = 1 << ((unsigned int)(*speedbin) + 4);
break;
default:
BUG();
break;
}
kfree(speedbin);
opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
if (!opp_tables)
return -ENOMEM;
for_each_possible_cpu(cpu) {
cpu_dev = get_cpu_device(cpu);
if (NULL == cpu_dev) {
ret = -ENODEV;
goto free_opp;
}
opp_tables[cpu] = dev_pm_opp_set_supported_hw(cpu_dev,
&versions, 1);
if (IS_ERR(opp_tables[cpu])) {
ret = PTR_ERR(opp_tables[cpu]);
dev_err(cpu_dev, "Failed to set supported hardware\n");
goto free_opp;
}
}
cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
if (!IS_ERR(cpufreq_dt_pdev)) {
platform_set_drvdata(pdev, opp_tables);
return 0;
}
ret = PTR_ERR(cpufreq_dt_pdev);
dev_err(cpu_dev, "Failed to register platform device\n");
free_opp:
for_each_possible_cpu(cpu) {
if (IS_ERR_OR_NULL(opp_tables[cpu]))
break;
dev_pm_opp_put_supported_hw(opp_tables[cpu]);
}
kfree(opp_tables);
return ret;
}
static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
{
struct opp_table **opp_tables = platform_get_drvdata(pdev);
unsigned int cpu;
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu)
dev_pm_opp_put_supported_hw(opp_tables[cpu]);
kfree(opp_tables);
return 0;
}
static struct platform_driver qcom_cpufreq_kryo_driver = {
.probe = qcom_cpufreq_kryo_probe,
.remove = qcom_cpufreq_kryo_remove,
.driver = {
.name = "qcom-cpufreq-kryo",
},
};
static const struct of_device_id qcom_cpufreq_kryo_match_list[] __initconst = {
{ .compatible = "qcom,apq8096", },
{ .compatible = "qcom,msm8996", },
{}
};
/*
* Since the driver depends on smem and nvmem drivers, which may
* return EPROBE_DEFER, all the real activity is done in the probe,
* which may be defered as well. The init here is only registering
* the driver and the platform device.
*/
static int __init qcom_cpufreq_kryo_init(void)
{
struct device_node *np = of_find_node_by_path("/");
const struct of_device_id *match;
int ret;
if (!np)
return -ENODEV;
match = of_match_node(qcom_cpufreq_kryo_match_list, np);
of_node_put(np);
if (!match)
return -ENODEV;
ret = platform_driver_register(&qcom_cpufreq_kryo_driver);
if (unlikely(ret < 0))
return ret;
kryo_cpufreq_pdev = platform_device_register_simple(
"qcom-cpufreq-kryo", -1, NULL, 0);
ret = PTR_ERR_OR_ZERO(kryo_cpufreq_pdev);
if (0 == ret)
return 0;
platform_driver_unregister(&qcom_cpufreq_kryo_driver);
return ret;
}
module_init(qcom_cpufreq_kryo_init);
static void __exit qcom_cpufreq_kryo_exit(void)
{
platform_device_unregister(kryo_cpufreq_pdev);
platform_driver_unregister(&qcom_cpufreq_kryo_driver);
}
module_exit(qcom_cpufreq_kryo_exit);
MODULE_DESCRIPTION("Qualcomm Technologies, Inc. Kryo CPUfreq driver");
MODULE_LICENSE("GPL v2");

View File

@ -0,0 +1,352 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018, The Linux Foundation. All rights reserved.
*/
/*
* In certain QCOM SoCs like apq8096 and msm8996 that have KRYO processors,
* the CPU frequency subset and voltage value of each OPP varies
* based on the silicon variant in use. Qualcomm Process Voltage Scaling Tables
* defines the voltage and frequency value based on the msm-id in SMEM
* and speedbin blown in the efuse combination.
* The qcom-cpufreq-nvmem driver reads the msm-id and efuse value from the SoC
* to provide the OPP framework with required information.
* This is used to determine the voltage and frequency value for each OPP of
* operating-points-v2 table when it is parsed by the OPP framework.
*/
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#include <linux/soc/qcom/smem.h>
#define MSM_ID_SMEM 137
enum _msm_id {
MSM8996V3 = 0xF6ul,
APQ8096V3 = 0x123ul,
MSM8996SG = 0x131ul,
APQ8096SG = 0x138ul,
};
enum _msm8996_version {
MSM8996_V3,
MSM8996_SG,
NUM_OF_MSM8996_VERSIONS,
};
struct qcom_cpufreq_drv;
struct qcom_cpufreq_match_data {
int (*get_version)(struct device *cpu_dev,
struct nvmem_cell *speedbin_nvmem,
struct qcom_cpufreq_drv *drv);
const char **genpd_names;
};
struct qcom_cpufreq_drv {
struct opp_table **opp_tables;
struct opp_table **genpd_opp_tables;
u32 versions;
const struct qcom_cpufreq_match_data *data;
};
static struct platform_device *cpufreq_dt_pdev, *cpufreq_pdev;
static enum _msm8996_version qcom_cpufreq_get_msm_id(void)
{
size_t len;
u32 *msm_id;
enum _msm8996_version version;
msm_id = qcom_smem_get(QCOM_SMEM_HOST_ANY, MSM_ID_SMEM, &len);
if (IS_ERR(msm_id))
return NUM_OF_MSM8996_VERSIONS;
/* The first 4 bytes are format, next to them is the actual msm-id */
msm_id++;
switch ((enum _msm_id)*msm_id) {
case MSM8996V3:
case APQ8096V3:
version = MSM8996_V3;
break;
case MSM8996SG:
case APQ8096SG:
version = MSM8996_SG;
break;
default:
version = NUM_OF_MSM8996_VERSIONS;
}
return version;
}
static int qcom_cpufreq_kryo_name_version(struct device *cpu_dev,
struct nvmem_cell *speedbin_nvmem,
struct qcom_cpufreq_drv *drv)
{
size_t len;
u8 *speedbin;
enum _msm8996_version msm8996_version;
msm8996_version = qcom_cpufreq_get_msm_id();
if (NUM_OF_MSM8996_VERSIONS == msm8996_version) {
dev_err(cpu_dev, "Not Snapdragon 820/821!");
return -ENODEV;
}
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
if (IS_ERR(speedbin))
return PTR_ERR(speedbin);
switch (msm8996_version) {
case MSM8996_V3:
drv->versions = 1 << (unsigned int)(*speedbin);
break;
case MSM8996_SG:
drv->versions = 1 << ((unsigned int)(*speedbin) + 4);
break;
default:
BUG();
break;
}
kfree(speedbin);
return 0;
}
static const struct qcom_cpufreq_match_data match_data_kryo = {
.get_version = qcom_cpufreq_kryo_name_version,
};
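/* qcs404 has no speedbin to read; its CPU OPPs are scaled through the "cpr" power domain instead. */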
static const char *qcs404_genpd_names[] = { "cpr", NULL };
static const struct qcom_cpufreq_match_data match_data_qcs404 = {
.genpd_names = qcs404_genpd_names,
};
static int qcom_cpufreq_probe(struct platform_device *pdev)
{
struct qcom_cpufreq_drv *drv;
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
unsigned cpu;
const struct of_device_id *match;
int ret;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -ENODEV;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu");
if (!ret) {
of_node_put(np);
return -ENOENT;
}
drv = kzalloc(sizeof(*drv), GFP_KERNEL);
if (!drv)
return -ENOMEM;
match = pdev->dev.platform_data;
drv->data = match->data;
if (!drv->data) {
ret = -ENODEV;
goto free_drv;
}
if (drv->data->get_version) {
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
dev_err(cpu_dev,
"Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
ret = PTR_ERR(speedbin_nvmem);
goto free_drv;
}
ret = drv->data->get_version(cpu_dev, speedbin_nvmem, drv);
if (ret) {
nvmem_cell_put(speedbin_nvmem);
goto free_drv;
}
nvmem_cell_put(speedbin_nvmem);
}
of_node_put(np);
drv->opp_tables = kcalloc(num_possible_cpus(), sizeof(*drv->opp_tables),
GFP_KERNEL);
if (!drv->opp_tables) {
ret = -ENOMEM;
goto free_drv;
}
drv->genpd_opp_tables = kcalloc(num_possible_cpus(),
sizeof(*drv->genpd_opp_tables),
GFP_KERNEL);
if (!drv->genpd_opp_tables) {
ret = -ENOMEM;
goto free_opp;
}
for_each_possible_cpu(cpu) {
cpu_dev = get_cpu_device(cpu);
if (NULL == cpu_dev) {
ret = -ENODEV;
goto free_genpd_opp;
}
if (drv->data->get_version) {
drv->opp_tables[cpu] =
dev_pm_opp_set_supported_hw(cpu_dev,
&drv->versions, 1);
if (IS_ERR(drv->opp_tables[cpu])) {
ret = PTR_ERR(drv->opp_tables[cpu]);
dev_err(cpu_dev,
"Failed to set supported hardware\n");
goto free_genpd_opp;
}
}
if (drv->data->genpd_names) {
drv->genpd_opp_tables[cpu] =
dev_pm_opp_attach_genpd(cpu_dev,
drv->data->genpd_names,
NULL);
if (IS_ERR(drv->genpd_opp_tables[cpu])) {
ret = PTR_ERR(drv->genpd_opp_tables[cpu]);
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev,
"Could not attach to pm_domain: %d\n",
ret);
goto free_genpd_opp;
}
}
}
cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
if (!IS_ERR(cpufreq_dt_pdev)) {
platform_set_drvdata(pdev, drv);
return 0;
}
ret = PTR_ERR(cpufreq_dt_pdev);
dev_err(cpu_dev, "Failed to register platform device\n");
free_genpd_opp:
for_each_possible_cpu(cpu) {
if (IS_ERR_OR_NULL(drv->genpd_opp_tables[cpu]))
break;
dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
}
kfree(drv->genpd_opp_tables);
free_opp:
for_each_possible_cpu(cpu) {
if (IS_ERR_OR_NULL(drv->opp_tables[cpu]))
break;
dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
}
kfree(drv->opp_tables);
free_drv:
kfree(drv);
return ret;
}
static int qcom_cpufreq_remove(struct platform_device *pdev)
{
struct qcom_cpufreq_drv *drv = platform_get_drvdata(pdev);
unsigned int cpu;
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu) {
if (drv->opp_tables[cpu])
dev_pm_opp_put_supported_hw(drv->opp_tables[cpu]);
if (drv->genpd_opp_tables[cpu])
dev_pm_opp_detach_genpd(drv->genpd_opp_tables[cpu]);
}
kfree(drv->opp_tables);
kfree(drv->genpd_opp_tables);
kfree(drv);
return 0;
}
static struct platform_driver qcom_cpufreq_driver = {
.probe = qcom_cpufreq_probe,
.remove = qcom_cpufreq_remove,
.driver = {
.name = "qcom-cpufreq-nvmem",
},
};
static const struct of_device_id qcom_cpufreq_match_list[] __initconst = {
{ .compatible = "qcom,apq8096", .data = &match_data_kryo },
{ .compatible = "qcom,msm8996", .data = &match_data_kryo },
{ .compatible = "qcom,qcs404", .data = &match_data_qcs404 },
{},
};
/*
* Since the driver depends on smem and nvmem drivers, which may
* return EPROBE_DEFER, all the real activity is done in the probe,
* which may be deferred as well. The init here is only registering
* the driver and the platform device.
*/
static int __init qcom_cpufreq_init(void)
{
struct device_node *np = of_find_node_by_path("/");
const struct of_device_id *match;
int ret;
if (!np)
return -ENODEV;
match = of_match_node(qcom_cpufreq_match_list, np);
of_node_put(np);
if (!match)
return -ENODEV;
ret = platform_driver_register(&qcom_cpufreq_driver);
if (unlikely(ret < 0))
return ret;
cpufreq_pdev = platform_device_register_data(NULL, "qcom-cpufreq-nvmem",
-1, match, sizeof(*match));
ret = PTR_ERR_OR_ZERO(cpufreq_pdev);
if (0 == ret)
return 0;
platform_driver_unregister(&qcom_cpufreq_driver);
return ret;
}
module_init(qcom_cpufreq_init);
static void __exit qcom_cpufreq_exit(void)
{
platform_device_unregister(cpufreq_pdev);
platform_driver_unregister(&qcom_cpufreq_driver);
}
module_exit(qcom_cpufreq_exit);
MODULE_DESCRIPTION("Qualcomm Technologies, Inc. CPUfreq driver");
MODULE_LICENSE("GPL v2");

View File

@ -0,0 +1,226 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Allwinner CPUFreq nvmem based driver
*
* The sun50i-cpufreq-nvmem driver reads the efuse value from the SoC to
* provide the OPP framework with required information.
*
* Copyright (C) 2019 Yangtao Li <tiny.windzz@gmail.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#define MAX_NAME_LEN 7
#define NVMEM_MASK 0x7
#define NVMEM_SHIFT 5
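/* The speed bin lives in bits [7:5] of the efuse cell. */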
static struct platform_device *cpufreq_dt_pdev, *sun50i_cpufreq_pdev;
/**
* sun50i_cpufreq_get_efuse() - Parse and return efuse value present on SoC
* @versions: Set to the value parsed from efuse
*
* Returns 0 on success.
*/
static int sun50i_cpufreq_get_efuse(u32 *versions)
{
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
struct device *cpu_dev;
u32 *speedbin, efuse_value;
size_t len;
int ret;
cpu_dev = get_cpu_device(0);
if (!cpu_dev)
return -ENODEV;
np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
if (!np)
return -ENOENT;
ret = of_device_is_compatible(np,
"allwinner,sun50i-h6-operating-points");
if (!ret) {
of_node_put(np);
return -ENOENT;
}
speedbin_nvmem = of_nvmem_cell_get(np, NULL);
of_node_put(np);
if (IS_ERR(speedbin_nvmem)) {
if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER)
pr_err("Could not get nvmem cell: %ld\n",
PTR_ERR(speedbin_nvmem));
return PTR_ERR(speedbin_nvmem);
}
speedbin = nvmem_cell_read(speedbin_nvmem, &len);
nvmem_cell_put(speedbin_nvmem);
if (IS_ERR(speedbin))
return PTR_ERR(speedbin);
efuse_value = (*speedbin >> NVMEM_SHIFT) & NVMEM_MASK;
switch (efuse_value) {
case 0b0001:
*versions = 1;
break;
case 0b0011:
*versions = 2;
break;
default:
/*
* Treat any other value as bin 0; this voltage/frequency
* table can be run on any good CPU.
*/
*versions = 0;
break;
}
kfree(speedbin);
return 0;
};
static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
{
struct opp_table **opp_tables;
char name[MAX_NAME_LEN];
unsigned int cpu;
u32 speed = 0;
int ret;
opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables),
GFP_KERNEL);
if (!opp_tables)
return -ENOMEM;
ret = sun50i_cpufreq_get_efuse(&speed);
if (ret)
return ret;
snprintf(name, MAX_NAME_LEN, "speed%d", speed);
for_each_possible_cpu(cpu) {
struct device *cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
ret = -ENODEV;
goto free_opp;
}
opp_tables[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name);
if (IS_ERR(opp_tables[cpu])) {
ret = PTR_ERR(opp_tables[cpu]);
pr_err("Failed to set prop name\n");
goto free_opp;
}
}
cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
if (!IS_ERR(cpufreq_dt_pdev)) {
platform_set_drvdata(pdev, opp_tables);
return 0;
}
ret = PTR_ERR(cpufreq_dt_pdev);
pr_err("Failed to register platform device\n");
free_opp:
for_each_possible_cpu(cpu) {
if (IS_ERR_OR_NULL(opp_tables[cpu]))
break;
dev_pm_opp_put_prop_name(opp_tables[cpu]);
}
kfree(opp_tables);
return ret;
}
static int sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
{
struct opp_table **opp_tables = platform_get_drvdata(pdev);
unsigned int cpu;
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu)
dev_pm_opp_put_prop_name(opp_tables[cpu]);
kfree(opp_tables);
return 0;
}
static struct platform_driver sun50i_cpufreq_driver = {
.probe = sun50i_cpufreq_nvmem_probe,
.remove = sun50i_cpufreq_nvmem_remove,
.driver = {
.name = "sun50i-cpufreq-nvmem",
},
};
static const struct of_device_id sun50i_cpufreq_match_list[] = {
{ .compatible = "allwinner,sun50i-h6" },
{}
};
static const struct of_device_id *sun50i_cpufreq_match_node(void)
{
const struct of_device_id *match;
struct device_node *np;
np = of_find_node_by_path("/");
match = of_match_node(sun50i_cpufreq_match_list, np);
of_node_put(np);
return match;
}
/*
* Since the driver depends on nvmem drivers, which may return EPROBE_DEFER,
* all the real activity is done in the probe, which may be deferred as well.
* The init here is only registering the driver and the platform device.
*/
static int __init sun50i_cpufreq_init(void)
{
const struct of_device_id *match;
int ret;
match = sun50i_cpufreq_match_node();
if (!match)
return -ENODEV;
ret = platform_driver_register(&sun50i_cpufreq_driver);
if (unlikely(ret < 0))
return ret;
sun50i_cpufreq_pdev =
platform_device_register_simple("sun50i-cpufreq-nvmem",
-1, NULL, 0);
ret = PTR_ERR_OR_ZERO(sun50i_cpufreq_pdev);
if (ret == 0)
return 0;
platform_driver_unregister(&sun50i_cpufreq_driver);
return ret;
}
module_init(sun50i_cpufreq_init);
static void __exit sun50i_cpufreq_exit(void)
{
platform_device_unregister(sun50i_cpufreq_pdev);
platform_driver_unregister(&sun50i_cpufreq_driver);
}
module_exit(sun50i_cpufreq_exit);
MODULE_DESCRIPTION("Sun50i-h6 cpufreq driver");
MODULE_LICENSE("GPL v2");

View File

@ -77,6 +77,7 @@ static unsigned long dra7_efuse_xlate(struct ti_cpufreq_data *opp_data,
case DRA7_EFUSE_HAS_ALL_MPU_OPP:
case DRA7_EFUSE_HAS_HIGH_MPU_OPP:
calculated_efuse |= DRA7_EFUSE_HIGH_MPU_OPP;
/* Fall through */
case DRA7_EFUSE_HAS_OD_MPU_OPP:
calculated_efuse |= DRA7_EFUSE_OD_MPU_OPP;
}

View File

@ -33,6 +33,17 @@ config CPU_IDLE_GOV_TEO
Some workloads benefit from using it and it generally should be safe
to use. Say Y here if you are not happy with the alternatives.
config CPU_IDLE_GOV_HALTPOLL
bool "Haltpoll governor (for virtualized systems)"
depends on KVM_GUEST
help
This governor implements haltpoll idle state selection, to be
used in conjunction with the haltpoll cpuidle driver, allowing
the guest to poll for a certain amount of time before entering
an idle state.
Some virtualized workloads benefit from using it.
config DT_IDLE_STATES
bool
@ -51,6 +62,15 @@ depends on PPC
source "drivers/cpuidle/Kconfig.powerpc"
endmenu
config HALTPOLL_CPUIDLE
tristate "Halt poll cpuidle driver"
depends on X86 && KVM_GUEST
default y
help
This option enables the halt poll cpuidle driver, which allows the
guest to poll before halting (more efficient than polling in the
host via halt_poll_ns in some scenarios).
endif
config ARCH_NEEDS_CPU_IDLE_COUPLED

View File

@ -7,6 +7,7 @@ obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
obj-$(CONFIG_DT_IDLE_STATES) += dt_idle_states.o
obj-$(CONFIG_ARCH_HAS_CPU_RELAX) += poll_state.o
obj-$(CONFIG_HALTPOLL_CPUIDLE) += cpuidle-haltpoll.o
##################################################################################
# ARM SoC drivers

View File

@ -0,0 +1,134 @@
// SPDX-License-Identifier: GPL-2.0
/*
* cpuidle driver for haltpoll governor.
*
* Copyright 2019 Red Hat, Inc. and/or its affiliates.
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Authors: Marcelo Tosatti <mtosatti@redhat.com>
*/
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/module.h>
#include <linux/sched/idle.h>
#include <linux/kvm_para.h>
#include <linux/cpuidle_haltpoll.h>
static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;
static enum cpuhp_state haltpoll_hp_state;
static int default_enter_idle(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int index)
{
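/* If a reschedule became pending while polling, bail out without halting. */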
if (current_clr_polling_and_test()) {
local_irq_enable();
return index;
}
default_idle();
return index;
}
static struct cpuidle_driver haltpoll_driver = {
.name = "haltpoll",
.governor = "haltpoll",
.states = {
{ /* entry 0 is for polling */ },
{
.enter = default_enter_idle,
.exit_latency = 1,
.target_residency = 1,
.power_usage = -1,
.name = "haltpoll idle",
.desc = "default architecture idle",
},
},
.safe_state_index = 0,
.state_count = 2,
};
static int haltpoll_cpu_online(unsigned int cpu)
{
struct cpuidle_device *dev;
dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
if (!dev->registered) {
dev->cpu = cpu;
if (cpuidle_register_device(dev)) {
pr_notice("cpuidle_register_device %d failed!\n", cpu);
return -EIO;
}
arch_haltpoll_enable(cpu);
}
return 0;
}
static int haltpoll_cpu_offline(unsigned int cpu)
{
struct cpuidle_device *dev;
dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);
if (dev->registered) {
arch_haltpoll_disable(cpu);
cpuidle_unregister_device(dev);
}
return 0;
}
static void haltpoll_uninit(void)
{
if (haltpoll_hp_state)
cpuhp_remove_state(haltpoll_hp_state);
cpuidle_unregister_driver(&haltpoll_driver);
free_percpu(haltpoll_cpuidle_devices);
haltpoll_cpuidle_devices = NULL;
}
static int __init haltpoll_init(void)
{
int ret;
struct cpuidle_driver *drv = &haltpoll_driver;
cpuidle_poll_state_init(drv);
if (!kvm_para_available() ||
!kvm_para_has_hint(KVM_HINTS_REALTIME))
return -ENODEV;
ret = cpuidle_register_driver(drv);
if (ret < 0)
return ret;
haltpoll_cpuidle_devices = alloc_percpu(struct cpuidle_device);
if (haltpoll_cpuidle_devices == NULL) {
cpuidle_unregister_driver(drv);
return -ENOMEM;
}
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online",
haltpoll_cpu_online, haltpoll_cpu_offline);
if (ret < 0) {
haltpoll_uninit();
} else {
haltpoll_hp_state = ret;
ret = 0;
}
return ret;
}
static void __exit haltpoll_exit(void)
{
haltpoll_uninit();
}
module_init(haltpoll_init);
module_exit(haltpoll_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Marcelo Tosatti <mtosatti@redhat.com>");

View File

@ -361,6 +361,36 @@ void cpuidle_reflect(struct cpuidle_device *dev, int index)
cpuidle_curr_governor->reflect(dev, index);
}
/**
* cpuidle_poll_time - return the amount of time to poll for;
* governors can override dev->poll_limit_ns if necessary
*
* @drv: the cpuidle driver tied to the CPU
* @dev: the cpuidle device
*
*/
u64 cpuidle_poll_time(struct cpuidle_driver *drv,
struct cpuidle_device *dev)
{
int i;
u64 limit_ns;
if (dev->poll_limit_ns)
return dev->poll_limit_ns;
limit_ns = TICK_NSEC;
for (i = 1; i < drv->state_count; i++) {
if (drv->states[i].disabled || dev->states_usage[i].disable)
continue;
/* Use the first enabled state's target residency as the poll limit. */
limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
break;
}
dev->poll_limit_ns = limit_ns;
return dev->poll_limit_ns;
}
/**
* cpuidle_install_idle_handler - installs the cpuidle idle loop handler
*/

View File

@ -9,6 +9,7 @@
/* For internal use only */
extern char param_governor[];
extern struct cpuidle_governor *cpuidle_curr_governor;
extern struct cpuidle_governor *cpuidle_prev_governor;
extern struct list_head cpuidle_governors;
extern struct list_head cpuidle_detected_devices;
extern struct mutex cpuidle_lock;
@ -22,6 +23,7 @@ extern void cpuidle_install_idle_handler(void);
extern void cpuidle_uninstall_idle_handler(void);
/* governors */
extern struct cpuidle_governor *cpuidle_find_governor(const char *str);
extern int cpuidle_switch_governor(struct cpuidle_governor *gov);
/* sysfs */

View File

@ -254,12 +254,25 @@ static void __cpuidle_unregister_driver(struct cpuidle_driver *drv)
*/
int cpuidle_register_driver(struct cpuidle_driver *drv)
{
struct cpuidle_governor *gov;
int ret;
spin_lock(&cpuidle_driver_lock);
ret = __cpuidle_register_driver(drv);
spin_unlock(&cpuidle_driver_lock);
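/*
 * If the driver prefers a governor and none was forced via the
 * cpuidle.governor= command line parameter, switch to it now and
 * remember the previous governor so it can be restored when this
 * driver is unregistered.
 */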
if (!ret && !strlen(param_governor) && drv->governor &&
(cpuidle_get_driver() == drv)) {
mutex_lock(&cpuidle_lock);
gov = cpuidle_find_governor(drv->governor);
if (gov) {
cpuidle_prev_governor = cpuidle_curr_governor;
if (cpuidle_switch_governor(gov) < 0)
cpuidle_prev_governor = NULL;
}
mutex_unlock(&cpuidle_lock);
}
return ret;
}
EXPORT_SYMBOL_GPL(cpuidle_register_driver);
@ -274,9 +287,21 @@ EXPORT_SYMBOL_GPL(cpuidle_register_driver);
*/
void cpuidle_unregister_driver(struct cpuidle_driver *drv)
{
bool enabled = (cpuidle_get_driver() == drv);
spin_lock(&cpuidle_driver_lock);
__cpuidle_unregister_driver(drv);
spin_unlock(&cpuidle_driver_lock);
if (!enabled)
return;
mutex_lock(&cpuidle_lock);
if (cpuidle_prev_governor) {
if (!cpuidle_switch_governor(cpuidle_prev_governor))
cpuidle_prev_governor = NULL;
}
mutex_unlock(&cpuidle_lock);
}
EXPORT_SYMBOL_GPL(cpuidle_unregister_driver);

View File

@ -20,14 +20,15 @@ char param_governor[CPUIDLE_NAME_LEN];
LIST_HEAD(cpuidle_governors);
struct cpuidle_governor *cpuidle_curr_governor;
struct cpuidle_governor *cpuidle_prev_governor;
/**
* __cpuidle_find_governor - finds a governor of the specified name
* cpuidle_find_governor - finds a governor of the specified name
* @str: the name
*
* Must be called with cpuidle_lock acquired.
*/
static struct cpuidle_governor * __cpuidle_find_governor(const char *str)
struct cpuidle_governor *cpuidle_find_governor(const char *str)
{
struct cpuidle_governor *gov;
@ -87,7 +88,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
return -ENODEV;
mutex_lock(&cpuidle_lock);
if (__cpuidle_find_governor(gov->name) == NULL) {
if (cpuidle_find_governor(gov->name) == NULL) {
ret = 0;
list_add_tail(&gov->governor_list, &cpuidle_governors);
if (!cpuidle_curr_governor ||


@ -6,3 +6,4 @@
obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
obj-$(CONFIG_CPU_IDLE_GOV_HALTPOLL) += haltpoll.o


@ -0,0 +1,150 @@
// SPDX-License-Identifier: GPL-2.0
/*
* haltpoll.c - haltpoll idle governor
*
* Copyright 2019 Red Hat, Inc. and/or its affiliates.
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Authors: Marcelo Tosatti <mtosatti@redhat.com>
*/
#include <linux/kernel.h>
#include <linux/cpuidle.h>
#include <linux/time.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>
#include <linux/tick.h>
#include <linux/sched.h>
#include <linux/module.h>
#include <linux/kvm_para.h>
static unsigned int guest_halt_poll_ns __read_mostly = 200000;
module_param(guest_halt_poll_ns, uint, 0644);
/* division factor to shrink halt_poll_ns */
static unsigned int guest_halt_poll_shrink __read_mostly = 2;
module_param(guest_halt_poll_shrink, uint, 0644);
/* multiplication factor to grow per-cpu poll_limit_ns */
static unsigned int guest_halt_poll_grow __read_mostly = 2;
module_param(guest_halt_poll_grow, uint, 0644);
/* value in us to start growing per-cpu halt_poll_ns */
static unsigned int guest_halt_poll_grow_start __read_mostly = 50000;
module_param(guest_halt_poll_grow_start, uint, 0644);
/* allow shrinking guest halt poll */
static bool guest_halt_poll_allow_shrink __read_mostly = true;
module_param(guest_halt_poll_allow_shrink, bool, 0644);
/**
* haltpoll_select - selects the next idle state to enter
* @drv: cpuidle driver containing state data
* @dev: the CPU
* @stop_tick: indication on whether or not to stop the tick
*/
static int haltpoll_select(struct cpuidle_driver *drv,
struct cpuidle_device *dev,
bool *stop_tick)
{
int latency_req = cpuidle_governor_latency_req(dev->cpu);
if (!drv->state_count || latency_req == 0) {
*stop_tick = false;
return 0;
}
if (dev->poll_limit_ns == 0)
return 1;
/* Last state was poll? */
if (dev->last_state_idx == 0) {
/* Halt if no event occurred on poll window */
if (dev->poll_time_limit == true)
return 1;
*stop_tick = false;
/* Otherwise, poll again */
return 0;
}
*stop_tick = false;
/* Last state was halt: poll */
return 0;
}
static void adjust_poll_limit(struct cpuidle_device *dev, unsigned int block_us)
{
unsigned int val;
u64 block_ns = block_us*NSEC_PER_USEC;
/* Grow cpu_halt_poll_us if
* cpu_halt_poll_us < block_ns < guest_halt_poll_us
*/
if (block_ns > dev->poll_limit_ns && block_ns <= guest_halt_poll_ns) {
val = dev->poll_limit_ns * guest_halt_poll_grow;
if (val < guest_halt_poll_grow_start)
val = guest_halt_poll_grow_start;
if (val > guest_halt_poll_ns)
val = guest_halt_poll_ns;
dev->poll_limit_ns = val;
} else if (block_ns > guest_halt_poll_ns &&
guest_halt_poll_allow_shrink) {
unsigned int shrink = guest_halt_poll_shrink;
val = dev->poll_limit_ns;
if (shrink == 0)
val = 0;
else
val /= shrink;
dev->poll_limit_ns = val;
}
}
/**
* haltpoll_reflect - update variables and update poll time
* @dev: the CPU
* @index: the index of actual entered state
*/
static void haltpoll_reflect(struct cpuidle_device *dev, int index)
{
dev->last_state_idx = index;
if (index != 0)
adjust_poll_limit(dev, dev->last_residency);
}
/**
* haltpoll_enable_device - scans a CPU's states and does setup
* @drv: cpuidle driver
* @dev: the CPU
*/
static int haltpoll_enable_device(struct cpuidle_driver *drv,
struct cpuidle_device *dev)
{
dev->poll_limit_ns = 0;
return 0;
}
static struct cpuidle_governor haltpoll_governor = {
.name = "haltpoll",
.rating = 9,
.enable = haltpoll_enable_device,
.select = haltpoll_select,
.reflect = haltpoll_reflect,
};
static int __init init_haltpoll(void)
{
if (kvm_para_available())
return cpuidle_register_governor(&haltpoll_governor);
return 0;
}
postcore_initcall(init_haltpoll);
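
To see how adjust_poll_limit() behaves over time, here is a stand-alone user-space replay of the grow/shrink rules against an invented sequence of block times; the guest_halt_poll_allow_shrink switch and the shrink==0 "reset to zero" case are left out to keep the sketch short.

/*
 * Stand-alone replay of the adjust_poll_limit() rules with invented
 * block times; guest_halt_poll_allow_shrink and the shrink==0 case
 * are omitted for brevity.
 */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC	1000ULL

static unsigned int guest_halt_poll_ns = 200000;
static unsigned int guest_halt_poll_shrink = 2;
static unsigned int guest_halt_poll_grow = 2;
static unsigned int guest_halt_poll_grow_start = 50000;

static unsigned int poll_limit_ns;	/* per-CPU in the real governor */

static void adjust(unsigned int block_us)
{
	uint64_t block_ns = block_us * NSEC_PER_USEC;
	unsigned int val;

	if (block_ns > poll_limit_ns && block_ns <= guest_halt_poll_ns) {
		/* Short halt: grow the window, bounded by the params. */
		val = poll_limit_ns * guest_halt_poll_grow;
		if (val < guest_halt_poll_grow_start)
			val = guest_halt_poll_grow_start;
		if (val > guest_halt_poll_ns)
			val = guest_halt_poll_ns;
		poll_limit_ns = val;
	} else if (block_ns > guest_halt_poll_ns) {
		/* Long halt: polling was wasted effort, shrink. */
		poll_limit_ns /= guest_halt_poll_shrink;
	}
}

int main(void)
{
	unsigned int blocks_us[] = { 30, 80, 120, 5000, 40 };	/* invented */
	unsigned int i;

	for (i = 0; i < 5; i++) {
		adjust(blocks_us[i]);
		printf("block %4u us -> poll_limit %6u ns\n",
		       blocks_us[i], poll_limit_ns);
	}
	return 0;
}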


@ -38,7 +38,6 @@ struct ladder_device_state {
struct ladder_device {
struct ladder_device_state states[CPUIDLE_STATE_MAX];
int last_state_idx;
};
static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
@ -49,12 +48,13 @@ static DEFINE_PER_CPU(struct ladder_device, ladder_devices);
* @old_idx: the current state index
* @new_idx: the new target state index
*/
static inline void ladder_do_selection(struct ladder_device *ldev,
static inline void ladder_do_selection(struct cpuidle_device *dev,
struct ladder_device *ldev,
int old_idx, int new_idx)
{
ldev->states[old_idx].stats.promotion_count = 0;
ldev->states[old_idx].stats.demotion_count = 0;
ldev->last_state_idx = new_idx;
dev->last_state_idx = new_idx;
}
/**
@ -68,13 +68,13 @@ static int ladder_select_state(struct cpuidle_driver *drv,
{
struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
struct ladder_device_state *last_state;
int last_residency, last_idx = ldev->last_state_idx;
int last_residency, last_idx = dev->last_state_idx;
int first_idx = drv->states[0].flags & CPUIDLE_FLAG_POLLING ? 1 : 0;
int latency_req = cpuidle_governor_latency_req(dev->cpu);
/* Special case when user has set very strict latency requirement */
if (unlikely(latency_req == 0)) {
ladder_do_selection(ldev, last_idx, 0);
ladder_do_selection(dev, ldev, last_idx, 0);
return 0;
}
@ -91,7 +91,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
last_state->stats.promotion_count++;
last_state->stats.demotion_count = 0;
if (last_state->stats.promotion_count >= last_state->threshold.promotion_count) {
ladder_do_selection(ldev, last_idx, last_idx + 1);
ladder_do_selection(dev, ldev, last_idx, last_idx + 1);
return last_idx + 1;
}
}
@ -107,7 +107,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
if (drv->states[i].exit_latency <= latency_req)
break;
}
ladder_do_selection(ldev, last_idx, i);
ladder_do_selection(dev, ldev, last_idx, i);
return i;
}
@ -116,7 +116,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
last_state->stats.demotion_count++;
last_state->stats.promotion_count = 0;
if (last_state->stats.demotion_count >= last_state->threshold.demotion_count) {
ladder_do_selection(ldev, last_idx, last_idx - 1);
ladder_do_selection(dev, ldev, last_idx, last_idx - 1);
return last_idx - 1;
}
}
@ -139,7 +139,7 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
struct ladder_device_state *lstate;
struct cpuidle_state *state;
ldev->last_state_idx = first_idx;
dev->last_state_idx = first_idx;
for (i = first_idx; i < drv->state_count; i++) {
state = &drv->states[i];
@ -167,9 +167,8 @@ static int ladder_enable_device(struct cpuidle_driver *drv,
*/
static void ladder_reflect(struct cpuidle_device *dev, int index)
{
struct ladder_device *ldev = this_cpu_ptr(&ladder_devices);
if (index > 0)
ldev->last_state_idx = index;
dev->last_state_idx = index;
}
static struct cpuidle_governor ladder_governor = {


@ -117,7 +117,6 @@
*/
struct menu_device {
int last_state_idx;
int needs_update;
int tick_wakeup;
@ -302,9 +301,10 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
!drv->states[0].disabled && !dev->states_usage[0].disable)) {
/*
* In this case state[0] will be used no matter what, so return
* it right away and keep the tick running.
* it right away and keep the tick running if state[0] is a
* polling one.
*/
*stop_tick = false;
*stop_tick = !(drv->states[0].flags & CPUIDLE_FLAG_POLLING);
return 0;
}
@ -395,16 +395,9 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
return idx;
}
if (s->exit_latency > latency_req) {
/*
* If we break out of the loop for latency reasons, use
* the target residency of the selected state as the
* expected idle duration so that the tick is retained
* as long as that target residency is low enough.
*/
predicted_us = drv->states[idx].target_residency;
if (s->exit_latency > latency_req)
break;
}
idx = i;
}
@ -455,7 +448,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
{
struct menu_device *data = this_cpu_ptr(&menu_devices);
data->last_state_idx = index;
dev->last_state_idx = index;
data->needs_update = 1;
data->tick_wakeup = tick_nohz_idle_got_tick();
}
@ -468,7 +461,7 @@ static void menu_reflect(struct cpuidle_device *dev, int index)
static void menu_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
struct menu_device *data = this_cpu_ptr(&menu_devices);
int last_idx = data->last_state_idx;
int last_idx = dev->last_state_idx;
struct cpuidle_state *target = &drv->states[last_idx];
unsigned int measured_us;
unsigned int new_factor;
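
The *stop_tick change above is small but worth spelling out: when state[0] is the forced choice, the tick now only has to stay running when state[0] is a polling state, so that polling cannot go on unchecked; a genuine idle state can let the tick stop. A trivial stand-alone rendering of that predicate (the flag bit value is a placeholder):

#include <stdio.h>
#include <stdbool.h>

#define CPUIDLE_FLAG_POLLING	(1 << 0)	/* placeholder bit */

int main(void)
{
	unsigned int state0_flags = CPUIDLE_FLAG_POLLING;	/* invented */
	bool stop_tick = !(state0_flags & CPUIDLE_FLAG_POLLING);

	/* Polling state 0 -> keep the tick so polling stays bounded. */
	printf("stop_tick = %s\n", stop_tick ? "true" : "false");
	return 0;
}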


@ -96,7 +96,6 @@ struct teo_idle_state {
* @time_span_ns: Time between idle state selection and post-wakeup update.
* @sleep_length_ns: Time till the closest timer event (at the selection time).
* @states: Idle states data corresponding to this CPU.
* @last_state: Idle state entered by the CPU last time.
* @interval_idx: Index of the most recent saved idle interval.
* @intervals: Saved idle duration values.
*/
@ -104,7 +103,6 @@ struct teo_cpu {
u64 time_span_ns;
u64 sleep_length_ns;
struct teo_idle_state states[CPUIDLE_STATE_MAX];
int last_state;
int interval_idx;
unsigned int intervals[INTERVALS];
};
@ -125,12 +123,15 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
/*
* One of the safety nets has triggered or this was a timer
* wakeup (or equivalent).
* One of the safety nets has triggered or the wakeup was close
* enough to the closest timer event expected at the idle state
* selection time to be discarded.
*/
measured_us = sleep_length_us;
measured_us = UINT_MAX;
} else {
unsigned int lat = drv->states[cpu_data->last_state].exit_latency;
unsigned int lat;
lat = drv->states[dev->last_state_idx].exit_latency;
measured_us = ktime_to_us(cpu_data->time_span_ns);
/*
@ -188,15 +189,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
cpu_data->states[idx_timer].hits = hits;
}
/*
* If the total time span between idle state selection and the "reflect"
* callback is greater than or equal to the sleep length determined at
* the idle state selection time, the wakeup is likely to be due to a
* timer event.
*/
if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
measured_us = UINT_MAX;
/*
* Save idle duration values corresponding to non-timer wakeups for
* pattern detection.
@ -242,12 +234,12 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
int latency_req = cpuidle_governor_latency_req(dev->cpu);
unsigned int duration_us, count;
int max_early_idx, idx, i;
int max_early_idx, constraint_idx, idx, i;
ktime_t delta_tick;
if (cpu_data->last_state >= 0) {
if (dev->last_state_idx >= 0) {
teo_update(drv, dev);
cpu_data->last_state = -1;
dev->last_state_idx = -1;
}
cpu_data->time_span_ns = local_clock();
@ -257,6 +249,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
count = 0;
max_early_idx = -1;
constraint_idx = drv->state_count;
idx = -1;
for (i = 0; i < drv->state_count; i++) {
@ -286,16 +279,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
if (s->target_residency > duration_us)
break;
if (s->exit_latency > latency_req) {
/*
* If we break out of the loop for latency reasons, use
* the target residency of the selected state as the
* expected idle duration to avoid stopping the tick
* as long as that target residency is low enough.
*/
duration_us = drv->states[idx].target_residency;
goto refine;
}
if (s->exit_latency > latency_req && constraint_idx > i)
constraint_idx = i;
idx = i;
@ -321,7 +306,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
duration_us = drv->states[idx].target_residency;
}
refine:
/*
* If there is a latency constraint, it may be necessary to use a
* shallower idle state than the one selected so far.
*/
if (constraint_idx < idx)
idx = constraint_idx;
if (idx < 0) {
idx = 0; /* No states enabled. Must use 0. */
} else if (idx > 0) {
@ -331,13 +322,12 @@ refine:
/*
* Count and sum the most recent idle duration values less than
* the target residency of the state selected so far, find the
* max.
* the current expected idle duration value.
*/
for (i = 0; i < INTERVALS; i++) {
unsigned int val = cpu_data->intervals[i];
if (val >= drv->states[idx].target_residency)
if (val >= duration_us)
continue;
count++;
@ -356,8 +346,10 @@ refine:
* would be too shallow.
*/
if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) {
idx = teo_find_shallower_state(drv, dev, idx, avg_us);
duration_us = avg_us;
if (drv->states[idx].target_residency > avg_us)
idx = teo_find_shallower_state(drv, dev,
idx, avg_us);
}
}
}
@ -394,7 +386,7 @@ static void teo_reflect(struct cpuidle_device *dev, int state)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
cpu_data->last_state = state;
dev->last_state_idx = state;
/*
* If the wakeup was not "natural", but triggered by one of the safety
* nets, assume that the CPU might have been idle for the entire sleep


@ -20,16 +20,9 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
local_irq_enable();
if (!current_set_polling_and_test()) {
unsigned int loop_count = 0;
u64 limit = TICK_NSEC;
int i;
u64 limit;
for (i = 1; i < drv->state_count; i++) {
if (drv->states[i].disabled || dev->states_usage[i].disable)
continue;
limit = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
break;
}
limit = cpuidle_poll_time(drv, dev);
while (!need_resched()) {
cpu_relax();


@ -334,6 +334,7 @@ struct cpuidle_state_kobj {
struct cpuidle_state_usage *state_usage;
struct completion kobj_unregister;
struct kobject kobj;
struct cpuidle_device *device;
};
#ifdef CONFIG_SUSPEND
@ -391,6 +392,7 @@ static inline void cpuidle_remove_s2idle_attr_group(struct cpuidle_state_kobj *k
#define kobj_to_state_obj(k) container_of(k, struct cpuidle_state_kobj, kobj)
#define kobj_to_state(k) (kobj_to_state_obj(k)->state)
#define kobj_to_state_usage(k) (kobj_to_state_obj(k)->state_usage)
#define kobj_to_device(k) (kobj_to_state_obj(k)->device)
#define attr_to_stateattr(a) container_of(a, struct cpuidle_state_attr, attr)
static ssize_t cpuidle_state_show(struct kobject *kobj, struct attribute *attr,
@ -414,10 +416,14 @@ static ssize_t cpuidle_state_store(struct kobject *kobj, struct attribute *attr,
struct cpuidle_state *state = kobj_to_state(kobj);
struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj);
struct cpuidle_state_attr *cattr = attr_to_stateattr(attr);
struct cpuidle_device *dev = kobj_to_device(kobj);
if (cattr->store)
ret = cattr->store(state, state_usage, buf, size);
/* reset poll time cache */
dev->poll_limit_ns = 0;
return ret;
}
@ -468,6 +474,7 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
}
kobj->state = &drv->states[i];
kobj->state_usage = &device->states_usage[i];
kobj->device = device;
init_completion(&kobj->kobj_unregister);
ret = kobject_init_and_add(&kobj->kobj, &ktype_state_cpuidle,


@ -93,15 +93,28 @@ config ARM_EXYNOS_BUS_DEVFREQ
This does not yet operate with optimal voltages.
config ARM_TEGRA_DEVFREQ
tristate "Tegra DEVFREQ Driver"
depends on ARCH_TEGRA_124_SOC
select DEVFREQ_GOV_SIMPLE_ONDEMAND
tristate "NVIDIA Tegra30/114/124/210 DEVFREQ Driver"
depends on ARCH_TEGRA_3x_SOC || ARCH_TEGRA_114_SOC || \
ARCH_TEGRA_132_SOC || ARCH_TEGRA_124_SOC || \
ARCH_TEGRA_210_SOC || \
COMPILE_TEST
select PM_OPP
help
This adds the DEVFREQ driver for the Tegra family of SoCs.
It reads ACTMON counters of memory controllers and adjusts the
operating frequencies and voltages with OPP support.
config ARM_TEGRA20_DEVFREQ
tristate "NVIDIA Tegra20 DEVFREQ Driver"
depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST
depends on COMMON_CLK
select DEVFREQ_GOV_SIMPLE_ONDEMAND
select PM_OPP
help
This adds the DEVFREQ driver for the Tegra20 family of SoCs.
It reads Memory Controller counters and adjusts the operating
frequencies and voltages with OPP support.
config ARM_RK3399_DMC_DEVFREQ
tristate "ARM RK3399 DMC DEVFREQ Driver"
depends on ARCH_ROCKCHIP


@ -10,7 +10,8 @@ obj-$(CONFIG_DEVFREQ_GOV_PASSIVE) += governor_passive.o
# DEVFREQ Drivers
obj-$(CONFIG_ARM_EXYNOS_BUS_DEVFREQ) += exynos-bus.o
obj-$(CONFIG_ARM_RK3399_DMC_DEVFREQ) += rk3399_dmc.o
obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra-devfreq.o
obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra30-devfreq.o
obj-$(CONFIG_ARM_TEGRA20_DEVFREQ) += tegra20-devfreq.o
# DEVFREQ Event Drivers
obj-$(CONFIG_PM_DEVFREQ_EVENT) += event/


@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
/* Restore previous state before return */
mutex_lock(&devfreq_list_lock);
if (err)
return ERR_PTR(err);
return (err < 0) ? ERR_PTR(err) : ERR_PTR(-EINVAL);
governor = find_devfreq_governor(name);
}
@ -402,7 +402,7 @@ static void devfreq_monitor(struct work_struct *work)
* devfreq_monitor_start() - Start load monitoring of devfreq instance
* @devfreq: the devfreq instance.
*
* Helper function for starting devfreq device load monitoing. By
* Helper function for starting devfreq device load monitoring. By
* default delayed work based monitoring is supported. Function
* to be called from governor in response to DEVFREQ_GOV_START
* event when device is added to devfreq framework.
@ -420,7 +420,7 @@ EXPORT_SYMBOL(devfreq_monitor_start);
* devfreq_monitor_stop() - Stop load monitoring of a devfreq instance
* @devfreq: the devfreq instance.
*
* Helper function to stop devfreq device load monitoing. Function
* Helper function to stop devfreq device load monitoring. Function
* to be called from governor in response to DEVFREQ_GOV_STOP
* event when device is removed from devfreq framework.
*/
@ -434,7 +434,7 @@ EXPORT_SYMBOL(devfreq_monitor_stop);
* devfreq_monitor_suspend() - Suspend load monitoring of a devfreq instance
* @devfreq: the devfreq instance.
*
* Helper function to suspend devfreq device load monitoing. Function
* Helper function to suspend devfreq device load monitoring. Function
* to be called from governor in response to DEVFREQ_GOV_SUSPEND
* event or when polling interval is set to zero.
*
@ -461,7 +461,7 @@ EXPORT_SYMBOL(devfreq_monitor_suspend);
* devfreq_monitor_resume() - Resume load monitoring of a devfreq instance
* @devfreq: the devfreq instance.
*
* Helper function to resume devfreq device load monitoing. Function
* Helper function to resume devfreq device load monitoring. Function
* to be called from governor in response to DEVFREQ_GOV_RESUME
* event or when polling interval is set to non-zero.
*/
@ -867,7 +867,7 @@ EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);
/**
* devm_devfreq_remove_device() - Resource-managed devfreq_remove_device()
* @dev: the device to add devfreq feature.
* @dev: the device from which to remove devfreq feature.
* @devfreq: the devfreq instance to be removed
*/
void devm_devfreq_remove_device(struct device *dev, struct devfreq *devfreq)
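
The one-line try_then_request_governor() fix above deserves a note: request_module() may return a positive exit status, and a positive value wrapped in ERR_PTR() is not caught by IS_ERR(), so positive returns are now mapped to -EINVAL. A sketch of the rule (the wrapper function is hypothetical):

#include <linux/kmod.h>
#include <linux/err.h>

/*
 * Hypothetical wrapper spelling out the rule: a positive value stuffed
 * into ERR_PTR() is not caught by IS_ERR(), so it must not escape.
 */
static void *load_governor_module(const char *name)
{
	int err = request_module("governor_%s", name);

	if (err)
		return (err < 0) ? ERR_PTR(err) : ERR_PTR(-EINVAL);

	return NULL;	/* success; the caller re-does the lookup */
}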


@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/suspend.h>
@ -20,6 +21,11 @@
#include "exynos-ppmu.h"
enum exynos_ppmu_type {
EXYNOS_TYPE_PPMU,
EXYNOS_TYPE_PPMU_V2,
};
struct exynos_ppmu_data {
struct clk *clk;
};
@ -33,6 +39,7 @@ struct exynos_ppmu {
struct regmap *regmap;
struct exynos_ppmu_data ppmu;
enum exynos_ppmu_type ppmu_type;
};
#define PPMU_EVENT(name) \
@ -86,6 +93,12 @@ static struct __exynos_ppmu_events {
PPMU_EVENT(d1-cpu),
PPMU_EVENT(d1-general),
PPMU_EVENT(d1-rt),
/* For Exynos5422 SoC */
PPMU_EVENT(dmc0_0),
PPMU_EVENT(dmc0_1),
PPMU_EVENT(dmc1_0),
PPMU_EVENT(dmc1_1),
};
static int exynos_ppmu_find_ppmu_id(struct devfreq_event_dev *edev)
@ -151,9 +164,9 @@ static int exynos_ppmu_set_event(struct devfreq_event_dev *edev)
if (ret < 0)
return ret;
/* Set the event of Read/Write data count */
/* Set the event of proper data type monitoring */
ret = regmap_write(info->regmap, PPMU_BEVTxSEL(id),
PPMU_RO_DATA_CNT | PPMU_WO_DATA_CNT);
edev->desc->event_type);
if (ret < 0)
return ret;
@ -365,23 +378,11 @@ static int exynos_ppmu_v2_set_event(struct devfreq_event_dev *edev)
if (ret < 0)
return ret;
/* Set the event of Read/Write data count */
switch (id) {
case PPMU_PMNCNT0:
case PPMU_PMNCNT1:
case PPMU_PMNCNT2:
ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
PPMU_V2_RO_DATA_CNT | PPMU_V2_WO_DATA_CNT);
if (ret < 0)
return ret;
break;
case PPMU_PMNCNT3:
ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
PPMU_V2_EVT3_RW_DATA_CNT);
if (ret < 0)
return ret;
break;
}
/* Set the event of proper data type monitoring */
ret = regmap_write(info->regmap, PPMU_V2_CH_EVx_TYPE(id),
edev->desc->event_type);
if (ret < 0)
return ret;
/* Reset cycle counter/performance counter and enable PPMU */
ret = regmap_read(info->regmap, PPMU_V2_PMNC, &pmnc);
@ -480,31 +481,24 @@ static const struct devfreq_event_ops exynos_ppmu_v2_ops = {
static const struct of_device_id exynos_ppmu_id_match[] = {
{
.compatible = "samsung,exynos-ppmu",
.data = (void *)&exynos_ppmu_ops,
.data = (void *)EXYNOS_TYPE_PPMU,
}, {
.compatible = "samsung,exynos-ppmu-v2",
.data = (void *)&exynos_ppmu_v2_ops,
.data = (void *)EXYNOS_TYPE_PPMU_V2,
},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, exynos_ppmu_id_match);
static struct devfreq_event_ops *exynos_bus_get_ops(struct device_node *np)
{
const struct of_device_id *match;
match = of_match_node(exynos_ppmu_id_match, np);
return (struct devfreq_event_ops *)match->data;
}
static int of_get_devfreq_events(struct device_node *np,
struct exynos_ppmu *info)
{
struct devfreq_event_desc *desc;
struct devfreq_event_ops *event_ops;
struct device *dev = info->dev;
struct device_node *events_np, *node;
int i, j, count;
const struct of_device_id *of_id;
int ret;
events_np = of_get_child_by_name(np, "events");
if (!events_np) {
@ -512,7 +506,6 @@ static int of_get_devfreq_events(struct device_node *np,
"failed to get child node of devfreq-event devices\n");
return -EINVAL;
}
event_ops = exynos_bus_get_ops(np);
count = of_get_child_count(events_np);
desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
@ -520,6 +513,12 @@ static int of_get_devfreq_events(struct device_node *np,
return -ENOMEM;
info->num_events = count;
of_id = of_match_device(exynos_ppmu_id_match, dev);
if (of_id)
info->ppmu_type = (enum exynos_ppmu_type)of_id->data;
else
return -EINVAL;
j = 0;
for_each_child_of_node(events_np, node) {
for (i = 0; i < ARRAY_SIZE(ppmu_events); i++) {
@ -537,10 +536,51 @@ static int of_get_devfreq_events(struct device_node *np,
continue;
}
desc[j].ops = event_ops;
switch (info->ppmu_type) {
case EXYNOS_TYPE_PPMU:
desc[j].ops = &exynos_ppmu_ops;
break;
case EXYNOS_TYPE_PPMU_V2:
desc[j].ops = &exynos_ppmu_v2_ops;
break;
}
desc[j].driver_data = info;
of_property_read_string(node, "event-name", &desc[j].name);
ret = of_property_read_u32(node, "event-data-type",
&desc[j].event_type);
if (ret) {
/* Set the event of proper data type counting.
* Check if the data type has been defined in DT,
* use default if not.
*/
if (info->ppmu_type == EXYNOS_TYPE_PPMU_V2) {
struct devfreq_event_dev edev;
int id;
/* Not all registers take the same value for
* read+write data count.
*/
edev.desc = &desc[j];
id = exynos_ppmu_find_ppmu_id(&edev);
switch (id) {
case PPMU_PMNCNT0:
case PPMU_PMNCNT1:
case PPMU_PMNCNT2:
desc[j].event_type = PPMU_V2_RO_DATA_CNT
| PPMU_V2_WO_DATA_CNT;
break;
case PPMU_PMNCNT3:
desc[j].event_type =
PPMU_V2_EVT3_RW_DATA_CNT;
break;
}
} else {
desc[j].event_type = PPMU_RO_DATA_CNT |
PPMU_WO_DATA_CNT;
}
}
j++;
}


@ -22,7 +22,6 @@
#include <linux/slab.h>
#define DEFAULT_SATURATION_RATIO 40
#define DEFAULT_VOLTAGE_TOLERANCE 2
struct exynos_bus {
struct device *dev;
@ -34,9 +33,8 @@ struct exynos_bus {
unsigned long curr_freq;
struct regulator *regulator;
struct opp_table *opp_table;
struct clk *clk;
unsigned int voltage_tolerance;
unsigned int ratio;
};
@ -90,62 +88,29 @@ static int exynos_bus_get_event(struct exynos_bus *bus,
}
/*
* Must necessary function for devfreq simple-ondemand governor
* devfreq function for both simple-ondemand and passive governor
*/
static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
{
struct exynos_bus *bus = dev_get_drvdata(dev);
struct dev_pm_opp *new_opp;
unsigned long old_freq, new_freq, new_volt, tol;
int ret = 0;
/* Get new opp-bus instance according to new bus clock */
/* Get correct frequency for bus. */
new_opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(new_opp)) {
dev_err(dev, "failed to get recommended opp instance\n");
return PTR_ERR(new_opp);
}
new_freq = dev_pm_opp_get_freq(new_opp);
new_volt = dev_pm_opp_get_voltage(new_opp);
dev_pm_opp_put(new_opp);
old_freq = bus->curr_freq;
if (old_freq == new_freq)
return 0;
tol = new_volt * bus->voltage_tolerance / 100;
/* Change voltage and frequency according to new OPP level */
mutex_lock(&bus->lock);
ret = dev_pm_opp_set_rate(dev, *freq);
if (!ret)
bus->curr_freq = *freq;
if (old_freq < new_freq) {
ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
if (ret < 0) {
dev_err(bus->dev, "failed to set voltage\n");
goto out;
}
}
ret = clk_set_rate(bus->clk, new_freq);
if (ret < 0) {
dev_err(dev, "failed to change clock of bus\n");
clk_set_rate(bus->clk, old_freq);
goto out;
}
if (old_freq > new_freq) {
ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
if (ret < 0) {
dev_err(bus->dev, "failed to set voltage\n");
goto out;
}
}
bus->curr_freq = new_freq;
dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
old_freq, new_freq, clk_get_rate(bus->clk));
out:
mutex_unlock(&bus->lock);
return ret;
@ -191,57 +156,12 @@ static void exynos_bus_exit(struct device *dev)
if (ret < 0)
dev_warn(dev, "failed to disable the devfreq-event devices\n");
if (bus->regulator)
regulator_disable(bus->regulator);
dev_pm_opp_of_remove_table(dev);
clk_disable_unprepare(bus->clk);
}
/*
* Must necessary function for devfreq passive governor
*/
static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
u32 flags)
{
struct exynos_bus *bus = dev_get_drvdata(dev);
struct dev_pm_opp *new_opp;
unsigned long old_freq, new_freq;
int ret = 0;
/* Get new opp-bus instance according to new bus clock */
new_opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(new_opp)) {
dev_err(dev, "failed to get recommended opp instance\n");
return PTR_ERR(new_opp);
if (bus->opp_table) {
dev_pm_opp_put_regulators(bus->opp_table);
bus->opp_table = NULL;
}
new_freq = dev_pm_opp_get_freq(new_opp);
dev_pm_opp_put(new_opp);
old_freq = bus->curr_freq;
if (old_freq == new_freq)
return 0;
/* Change the frequency according to new OPP level */
mutex_lock(&bus->lock);
ret = clk_set_rate(bus->clk, new_freq);
if (ret < 0) {
dev_err(dev, "failed to set the clock of bus\n");
goto out;
}
*freq = new_freq;
bus->curr_freq = new_freq;
dev_dbg(dev, "Set the frequency of bus (%luHz -> %luHz, %luHz)\n",
old_freq, new_freq, clk_get_rate(bus->clk));
out:
mutex_unlock(&bus->lock);
return ret;
}
static void exynos_bus_passive_exit(struct device *dev)
@ -256,21 +176,19 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
struct exynos_bus *bus)
{
struct device *dev = bus->dev;
struct opp_table *opp_table;
const char *vdd = "vdd";
int i, ret, count, size;
/* Get the regulator to provide each bus with the power */
bus->regulator = devm_regulator_get(dev, "vdd");
if (IS_ERR(bus->regulator)) {
dev_err(dev, "failed to get VDD regulator\n");
return PTR_ERR(bus->regulator);
}
ret = regulator_enable(bus->regulator);
if (ret < 0) {
dev_err(dev, "failed to enable VDD regulator\n");
opp_table = dev_pm_opp_set_regulators(dev, &vdd, 1);
if (IS_ERR(opp_table)) {
ret = PTR_ERR(opp_table);
dev_err(dev, "failed to set regulators %d\n", ret);
return ret;
}
bus->opp_table = opp_table;
/*
* Get the devfreq-event devices to get the current utilization of
* buses. This raw data will be used in devfreq ondemand governor.
@ -311,14 +229,11 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
if (of_property_read_u32(np, "exynos,saturation-ratio", &bus->ratio))
bus->ratio = DEFAULT_SATURATION_RATIO;
if (of_property_read_u32(np, "exynos,voltage-tolerance",
&bus->voltage_tolerance))
bus->voltage_tolerance = DEFAULT_VOLTAGE_TOLERANCE;
return 0;
err_regulator:
regulator_disable(bus->regulator);
dev_pm_opp_put_regulators(bus->opp_table);
bus->opp_table = NULL;
return ret;
}
@ -383,6 +298,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
struct exynos_bus *bus;
int ret, max_state;
unsigned long min_freq, max_freq;
bool passive = false;
if (!np) {
dev_err(dev, "failed to find devicetree node\n");
@ -396,27 +312,27 @@ static int exynos_bus_probe(struct platform_device *pdev)
bus->dev = &pdev->dev;
platform_set_drvdata(pdev, bus);
/* Parse the device-tree to get the resource information */
ret = exynos_bus_parse_of(np, bus);
if (ret < 0)
return ret;
profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
if (!profile) {
ret = -ENOMEM;
goto err;
}
if (!profile)
return -ENOMEM;
node = of_parse_phandle(dev->of_node, "devfreq", 0);
if (node) {
of_node_put(node);
goto passive;
passive = true;
} else {
ret = exynos_bus_parent_parse_of(np, bus);
if (ret < 0)
return ret;
}
/* Parse the device-tree to get the resource information */
ret = exynos_bus_parse_of(np, bus);
if (ret < 0)
goto err;
goto err_reg;
if (passive)
goto passive;
/* Initialize the struct profile and governor data for parent device */
profile->polling_ms = 50;
@ -468,7 +384,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
goto out;
passive:
/* Initialize the struct profile and governor data for passive device */
profile->target = exynos_bus_passive_target;
profile->target = exynos_bus_target;
profile->exit = exynos_bus_passive_exit;
/* Get the instance of parent devfreq device */
@ -507,6 +423,11 @@ out:
err:
dev_pm_opp_of_remove_table(dev);
clk_disable_unprepare(bus->clk);
err_reg:
if (!passive) {
dev_pm_opp_put_regulators(bus->opp_table);
bus->opp_table = NULL;
}
return ret;
}
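
The conversion above is the heart of the exynos-bus cleanup: dev_pm_opp_set_rate() sequences the regulator and clock updates internally (raising the voltage before a frequency increase and lowering it after a decrease), which is what lets the driver drop its hand-rolled voltage-tolerance handling and fold the ondemand and passive target routines into one. A minimal sketch of the resulting callback shape, not the literal driver code:

#include <linux/devfreq.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

/* Sketch only; the real exynos_bus_target() also tracks curr_freq. */
static int example_bus_target(struct device *dev, unsigned long *freq,
			      u32 flags)
{
	struct dev_pm_opp *opp;

	/* Snap the requested rate to a valid OPP first. */
	opp = devfreq_recommended_opp(dev, freq, flags);
	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	/* Clock and regulator updates are ordered internally. */
	return dev_pm_opp_set_rate(dev, *freq);
}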


@ -149,7 +149,6 @@ static int devfreq_passive_notifier_call(struct notifier_block *nb,
static int devfreq_passive_event_handler(struct devfreq *devfreq,
unsigned int event, void *data)
{
struct device *dev = devfreq->dev.parent;
struct devfreq_passive_data *p_data
= (struct devfreq_passive_data *)devfreq->data;
struct devfreq *parent = (struct devfreq *)p_data->parent;
@ -165,12 +164,12 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
p_data->this = devfreq;
nb->notifier_call = devfreq_passive_notifier_call;
ret = devm_devfreq_register_notifier(dev, parent, nb,
ret = devfreq_register_notifier(parent, nb,
DEVFREQ_TRANSITION_NOTIFIER);
break;
case DEVFREQ_GOV_STOP:
devm_devfreq_unregister_notifier(dev, parent, nb,
DEVFREQ_TRANSITION_NOTIFIER);
WARN_ON(devfreq_unregister_notifier(parent, nb,
DEVFREQ_TRANSITION_NOTIFIER));
break;
default:
break;


@ -351,7 +351,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
/*
* Get dram timing and pass it to arm trust firmware,
* the dram drvier in arm trust firmware will get these
* the dram driver in arm trust firmware will get these
* timing and to do dram initial.
*/
if (!of_get_ddr_timings(&data->timing, np)) {


@ -0,0 +1,212 @@
// SPDX-License-Identifier: GPL-2.0
/*
* NVIDIA Tegra20 devfreq driver
*
* Copyright (C) 2019 GRATE-DRIVER project
*/
#include <linux/clk.h>
#include <linux/devfreq.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#include <soc/tegra/mc.h>
#include "governor.h"
#define MC_STAT_CONTROL 0x90
#define MC_STAT_EMC_CLOCK_LIMIT 0xa0
#define MC_STAT_EMC_CLOCKS 0xa4
#define MC_STAT_EMC_CONTROL 0xa8
#define MC_STAT_EMC_COUNT 0xb8
#define EMC_GATHER_CLEAR (1 << 8)
#define EMC_GATHER_ENABLE (3 << 8)
struct tegra_devfreq {
struct devfreq *devfreq;
struct clk *emc_clock;
void __iomem *regs;
};
static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
u32 flags)
{
struct tegra_devfreq *tegra = dev_get_drvdata(dev);
struct devfreq *devfreq = tegra->devfreq;
struct dev_pm_opp *opp;
unsigned long rate;
int err;
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp))
return PTR_ERR(opp);
rate = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
err = clk_set_min_rate(tegra->emc_clock, rate);
if (err)
return err;
err = clk_set_rate(tegra->emc_clock, 0);
if (err)
goto restore_min_rate;
return 0;
restore_min_rate:
clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
return err;
}
static int tegra_devfreq_get_dev_status(struct device *dev,
struct devfreq_dev_status *stat)
{
struct tegra_devfreq *tegra = dev_get_drvdata(dev);
/*
* EMC_COUNT returns number of memory events, that number is lower
* than the number of clocks. Conversion ratio of 1/8 results in a
* bit higher bandwidth than actually needed, it is good enough for
* the time being because drivers don't support requesting minimum
* needed memory bandwidth yet.
*
* TODO: adjust the ratio value once relevant drivers will support
* memory bandwidth management.
*/
stat->busy_time = readl_relaxed(tegra->regs + MC_STAT_EMC_COUNT);
stat->total_time = readl_relaxed(tegra->regs + MC_STAT_EMC_CLOCKS) / 8;
stat->current_frequency = clk_get_rate(tegra->emc_clock);
writel_relaxed(EMC_GATHER_CLEAR, tegra->regs + MC_STAT_CONTROL);
writel_relaxed(EMC_GATHER_ENABLE, tegra->regs + MC_STAT_CONTROL);
return 0;
}
static struct devfreq_dev_profile tegra_devfreq_profile = {
.polling_ms = 500,
.target = tegra_devfreq_target,
.get_dev_status = tegra_devfreq_get_dev_status,
};
static struct tegra_mc *tegra_get_memory_controller(void)
{
struct platform_device *pdev;
struct device_node *np;
struct tegra_mc *mc;
np = of_find_compatible_node(NULL, NULL, "nvidia,tegra20-mc-gart");
if (!np)
return ERR_PTR(-ENOENT);
pdev = of_find_device_by_node(np);
of_node_put(np);
if (!pdev)
return ERR_PTR(-ENODEV);
mc = platform_get_drvdata(pdev);
if (!mc)
return ERR_PTR(-EPROBE_DEFER);
return mc;
}
static int tegra_devfreq_probe(struct platform_device *pdev)
{
struct tegra_devfreq *tegra;
struct tegra_mc *mc;
unsigned long max_rate;
unsigned long rate;
int err;
mc = tegra_get_memory_controller();
if (IS_ERR(mc)) {
err = PTR_ERR(mc);
dev_err(&pdev->dev, "failed to get memory controller: %d\n",
err);
return err;
}
tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
if (!tegra)
return -ENOMEM;
/* EMC is a system-critical clock that is always enabled */
tegra->emc_clock = devm_clk_get(&pdev->dev, "emc");
if (IS_ERR(tegra->emc_clock)) {
err = PTR_ERR(tegra->emc_clock);
dev_err(&pdev->dev, "failed to get emc clock: %d\n", err);
return err;
}
tegra->regs = mc->regs;
max_rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
for (rate = 0; rate <= max_rate; rate++) {
rate = clk_round_rate(tegra->emc_clock, rate);
err = dev_pm_opp_add(&pdev->dev, rate, 0);
if (err) {
dev_err(&pdev->dev, "failed to add opp: %d\n", err);
goto remove_opps;
}
}
/*
* Reset statistic gathers state, select global bandwidth for the
* statistics collection mode and set clocks counter saturation
* limit to maximum.
*/
writel_relaxed(0x00000000, tegra->regs + MC_STAT_CONTROL);
writel_relaxed(0x00000000, tegra->regs + MC_STAT_EMC_CONTROL);
writel_relaxed(0xffffffff, tegra->regs + MC_STAT_EMC_CLOCK_LIMIT);
platform_set_drvdata(pdev, tegra);
tegra->devfreq = devfreq_add_device(&pdev->dev, &tegra_devfreq_profile,
DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
if (IS_ERR(tegra->devfreq)) {
err = PTR_ERR(tegra->devfreq);
goto remove_opps;
}
return 0;
remove_opps:
dev_pm_opp_remove_all_dynamic(&pdev->dev);
return err;
}
static int tegra_devfreq_remove(struct platform_device *pdev)
{
struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
devfreq_remove_device(tegra->devfreq);
dev_pm_opp_remove_all_dynamic(&pdev->dev);
return 0;
}
static struct platform_driver tegra_devfreq_driver = {
.probe = tegra_devfreq_probe,
.remove = tegra_devfreq_remove,
.driver = {
.name = "tegra20-devfreq",
},
};
module_platform_driver(tegra_devfreq_driver);
MODULE_ALIAS("platform:tegra20-devfreq");
MODULE_AUTHOR("Dmitry Osipenko <digetx@gmail.com>");
MODULE_DESCRIPTION("NVIDIA Tegra20 devfreq driver");
MODULE_LICENSE("GPL v2");


@ -132,7 +132,6 @@ static struct tegra_devfreq_device_config actmon_device_configs[] = {
struct tegra_devfreq_device {
const struct tegra_devfreq_device_config *config;
void __iomem *regs;
spinlock_t lock;
/* Average event count sampled in the last interrupt */
u32 avg_count;
@ -160,6 +159,8 @@ struct tegra_devfreq {
struct notifier_block rate_change_nb;
struct tegra_devfreq_device devices[ARRAY_SIZE(actmon_device_configs)];
int irq;
};
struct tegra_actmon_emc_ratio {
@ -179,23 +180,23 @@ static struct tegra_actmon_emc_ratio actmon_emc_ratios[] = {
static u32 actmon_readl(struct tegra_devfreq *tegra, u32 offset)
{
return readl(tegra->regs + offset);
return readl_relaxed(tegra->regs + offset);
}
static void actmon_writel(struct tegra_devfreq *tegra, u32 val, u32 offset)
{
writel(val, tegra->regs + offset);
writel_relaxed(val, tegra->regs + offset);
}
static u32 device_readl(struct tegra_devfreq_device *dev, u32 offset)
{
return readl(dev->regs + offset);
return readl_relaxed(dev->regs + offset);
}
static void device_writel(struct tegra_devfreq_device *dev, u32 val,
u32 offset)
{
writel(val, dev->regs + offset);
writel_relaxed(val, dev->regs + offset);
}
static unsigned long do_percent(unsigned long val, unsigned int pct)
@ -231,18 +232,14 @@ static void tegra_devfreq_update_wmark(struct tegra_devfreq *tegra,
static void actmon_write_barrier(struct tegra_devfreq *tegra)
{
/* ensure the update has reached the ACTMON */
wmb();
actmon_readl(tegra, ACTMON_GLB_STATUS);
readl(tegra->regs + ACTMON_GLB_STATUS);
}
static void actmon_isr_device(struct tegra_devfreq *tegra,
struct tegra_devfreq_device *dev)
{
unsigned long flags;
u32 intr_status, dev_ctrl;
spin_lock_irqsave(&dev->lock, flags);
dev->avg_count = device_readl(dev, ACTMON_DEV_AVG_COUNT);
tegra_devfreq_update_avg_wmark(tegra, dev);
@ -291,26 +288,6 @@ static void actmon_isr_device(struct tegra_devfreq *tegra,
device_writel(dev, ACTMON_INTR_STATUS_CLEAR, ACTMON_DEV_INTR_STATUS);
actmon_write_barrier(tegra);
spin_unlock_irqrestore(&dev->lock, flags);
}
static irqreturn_t actmon_isr(int irq, void *data)
{
struct tegra_devfreq *tegra = data;
bool handled = false;
unsigned int i;
u32 val;
val = actmon_readl(tegra, ACTMON_GLB_STATUS);
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
if (val & tegra->devices[i].config->irq_mask) {
actmon_isr_device(tegra, tegra->devices + i);
handled = true;
}
}
return handled ? IRQ_WAKE_THREAD : IRQ_NONE;
}
static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra,
@ -337,15 +314,12 @@ static void actmon_update_target(struct tegra_devfreq *tegra,
unsigned long cpu_freq = 0;
unsigned long static_cpu_emc_freq = 0;
unsigned int avg_sustain_coef;
unsigned long flags;
if (dev->config->avg_dependency_threshold) {
cpu_freq = cpufreq_get(0);
static_cpu_emc_freq = actmon_cpu_to_emc_rate(tegra, cpu_freq);
}
spin_lock_irqsave(&dev->lock, flags);
dev->target_freq = dev->avg_count / ACTMON_SAMPLING_PERIOD;
avg_sustain_coef = 100 * 100 / dev->config->boost_up_threshold;
dev->target_freq = do_percent(dev->target_freq, avg_sustain_coef);
@ -353,19 +327,31 @@ static void actmon_update_target(struct tegra_devfreq *tegra,
if (dev->avg_count >= dev->config->avg_dependency_threshold)
dev->target_freq = max(dev->target_freq, static_cpu_emc_freq);
spin_unlock_irqrestore(&dev->lock, flags);
}
static irqreturn_t actmon_thread_isr(int irq, void *data)
{
struct tegra_devfreq *tegra = data;
bool handled = false;
unsigned int i;
u32 val;
mutex_lock(&tegra->devfreq->lock);
update_devfreq(tegra->devfreq);
val = actmon_readl(tegra, ACTMON_GLB_STATUS);
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
if (val & tegra->devices[i].config->irq_mask) {
actmon_isr_device(tegra, tegra->devices + i);
handled = true;
}
}
if (handled)
update_devfreq(tegra->devfreq);
mutex_unlock(&tegra->devfreq->lock);
return IRQ_HANDLED;
return handled ? IRQ_HANDLED : IRQ_NONE;
}
static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
@ -375,7 +361,6 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
struct tegra_devfreq *tegra;
struct tegra_devfreq_device *dev;
unsigned int i;
unsigned long flags;
if (action != POST_RATE_CHANGE)
return NOTIFY_OK;
@ -387,9 +372,7 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
dev = &tegra->devices[i];
spin_lock_irqsave(&dev->lock, flags);
tegra_devfreq_update_wmark(tegra, dev);
spin_unlock_irqrestore(&dev->lock, flags);
}
actmon_write_barrier(tegra);
@ -397,48 +380,6 @@ static int tegra_actmon_rate_notify_cb(struct notifier_block *nb,
return NOTIFY_OK;
}
static void tegra_actmon_enable_interrupts(struct tegra_devfreq *tegra)
{
struct tegra_devfreq_device *dev;
u32 val;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
dev = &tegra->devices[i];
val = device_readl(dev, ACTMON_DEV_CTRL);
val |= ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
val |= ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
val |= ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
device_writel(dev, val, ACTMON_DEV_CTRL);
}
actmon_write_barrier(tegra);
}
static void tegra_actmon_disable_interrupts(struct tegra_devfreq *tegra)
{
struct tegra_devfreq_device *dev;
u32 val;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
dev = &tegra->devices[i];
val = device_readl(dev, ACTMON_DEV_CTRL);
val &= ~ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
val &= ~ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
val &= ~ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
device_writel(dev, val, ACTMON_DEV_CTRL);
}
actmon_write_barrier(tegra);
}
static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
struct tegra_devfreq_device *dev)
{
@ -462,34 +403,80 @@ static void tegra_actmon_configure_device(struct tegra_devfreq *tegra,
<< ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_NUM_SHIFT;
val |= (ACTMON_ABOVE_WMARK_WINDOW - 1)
<< ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_NUM_SHIFT;
val |= ACTMON_DEV_CTRL_AVG_ABOVE_WMARK_EN;
val |= ACTMON_DEV_CTRL_AVG_BELOW_WMARK_EN;
val |= ACTMON_DEV_CTRL_CONSECUTIVE_BELOW_WMARK_EN;
val |= ACTMON_DEV_CTRL_CONSECUTIVE_ABOVE_WMARK_EN;
val |= ACTMON_DEV_CTRL_ENB;
device_writel(dev, val, ACTMON_DEV_CTRL);
}
static void tegra_actmon_start(struct tegra_devfreq *tegra)
{
unsigned int i;
disable_irq(tegra->irq);
actmon_writel(tegra, ACTMON_SAMPLING_PERIOD - 1,
ACTMON_GLB_PERIOD_CTRL);
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++)
tegra_actmon_configure_device(tegra, &tegra->devices[i]);
actmon_write_barrier(tegra);
enable_irq(tegra->irq);
}
static void tegra_actmon_stop(struct tegra_devfreq *tegra)
{
unsigned int i;
disable_irq(tegra->irq);
for (i = 0; i < ARRAY_SIZE(tegra->devices); i++) {
device_writel(&tegra->devices[i], 0x00000000, ACTMON_DEV_CTRL);
device_writel(&tegra->devices[i], ACTMON_INTR_STATUS_CLEAR,
ACTMON_DEV_INTR_STATUS);
}
actmon_write_barrier(tegra);
enable_irq(tegra->irq);
}
static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
u32 flags)
{
struct tegra_devfreq *tegra = dev_get_drvdata(dev);
struct devfreq *devfreq = tegra->devfreq;
struct dev_pm_opp *opp;
unsigned long rate = *freq * KHZ;
unsigned long rate;
int err;
opp = devfreq_recommended_opp(dev, &rate, flags);
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp)) {
dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
dev_err(dev, "Failed to find opp for %lu Hz\n", *freq);
return PTR_ERR(opp);
}
rate = dev_pm_opp_get_freq(opp);
dev_pm_opp_put(opp);
clk_set_min_rate(tegra->emc_clock, rate);
clk_set_rate(tegra->emc_clock, 0);
err = clk_set_min_rate(tegra->emc_clock, rate);
if (err)
return err;
*freq = rate;
err = clk_set_rate(tegra->emc_clock, 0);
if (err)
goto restore_min_rate;
return 0;
restore_min_rate:
clk_set_min_rate(tegra->emc_clock, devfreq->previous_freq);
return err;
}
static int tegra_devfreq_get_dev_status(struct device *dev,
@ -497,13 +484,15 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
{
struct tegra_devfreq *tegra = dev_get_drvdata(dev);
struct tegra_devfreq_device *actmon_dev;
unsigned long cur_freq;
stat->current_frequency = tegra->cur_freq;
cur_freq = READ_ONCE(tegra->cur_freq);
/* To be used by the tegra governor */
stat->private_data = tegra;
/* The below are to be used by the other governors */
stat->current_frequency = cur_freq * KHZ;
actmon_dev = &tegra->devices[MCALL];
@ -514,7 +503,7 @@ static int tegra_devfreq_get_dev_status(struct device *dev,
stat->busy_time *= 100 / BUS_SATURATION_RATIO;
/* Number of cycles in a sampling period */
stat->total_time = ACTMON_SAMPLING_PERIOD * tegra->cur_freq;
stat->total_time = ACTMON_SAMPLING_PERIOD * cur_freq;
stat->busy_time = min(stat->busy_time, stat->total_time);
@ -553,7 +542,7 @@ static int tegra_governor_get_target(struct devfreq *devfreq,
target_freq = max(target_freq, dev->target_freq);
}
*freq = target_freq;
*freq = target_freq * KHZ;
return 0;
}
@ -566,22 +555,22 @@ static int tegra_governor_event_handler(struct devfreq *devfreq,
switch (event) {
case DEVFREQ_GOV_START:
devfreq_monitor_start(devfreq);
tegra_actmon_enable_interrupts(tegra);
tegra_actmon_start(tegra);
break;
case DEVFREQ_GOV_STOP:
tegra_actmon_disable_interrupts(tegra);
tegra_actmon_stop(tegra);
devfreq_monitor_stop(devfreq);
break;
case DEVFREQ_GOV_SUSPEND:
tegra_actmon_disable_interrupts(tegra);
tegra_actmon_stop(tegra);
devfreq_monitor_suspend(devfreq);
break;
case DEVFREQ_GOV_RESUME:
devfreq_monitor_resume(devfreq);
tegra_actmon_enable_interrupts(tegra);
tegra_actmon_start(tegra);
break;
}
@ -592,25 +581,22 @@ static struct devfreq_governor tegra_devfreq_governor = {
.name = "tegra_actmon",
.get_target_freq = tegra_governor_get_target,
.event_handler = tegra_governor_event_handler,
.immutable = true,
};
static int tegra_devfreq_probe(struct platform_device *pdev)
{
struct tegra_devfreq *tegra;
struct tegra_devfreq_device *dev;
struct resource *res;
unsigned int i;
unsigned long rate;
int irq;
int err;
tegra = devm_kzalloc(&pdev->dev, sizeof(*tegra), GFP_KERNEL);
if (!tegra)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
tegra->regs = devm_ioremap_resource(&pdev->dev, res);
tegra->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(tegra->regs))
return PTR_ERR(tegra->regs);
@ -632,13 +618,10 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
return PTR_ERR(tegra->emc_clock);
}
clk_set_rate(tegra->emc_clock, ULONG_MAX);
tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb;
err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb);
if (err) {
dev_err(&pdev->dev,
"Failed to register rate change notifier\n");
tegra->irq = platform_get_irq(pdev, 0);
if (tegra->irq < 0) {
err = tegra->irq;
dev_err(&pdev->dev, "Failed to get IRQ: %d\n", err);
return err;
}
@ -656,73 +639,94 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
tegra->max_freq = clk_round_rate(tegra->emc_clock, ULONG_MAX) / KHZ;
tegra->cur_freq = clk_get_rate(tegra->emc_clock) / KHZ;
actmon_writel(tegra, ACTMON_SAMPLING_PERIOD - 1,
ACTMON_GLB_PERIOD_CTRL);
for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
dev = tegra->devices + i;
dev->config = actmon_device_configs + i;
dev->regs = tegra->regs + dev->config->offset;
spin_lock_init(&dev->lock);
tegra_actmon_configure_device(tegra, dev);
}
for (rate = 0; rate <= tegra->max_freq * KHZ; rate++) {
rate = clk_round_rate(tegra->emc_clock, rate);
dev_pm_opp_add(&pdev->dev, rate, 0);
}
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
return irq;
err = dev_pm_opp_add(&pdev->dev, rate, 0);
if (err) {
dev_err(&pdev->dev, "Failed to add OPP: %d\n", err);
goto remove_opps;
}
}
platform_set_drvdata(pdev, tegra);
err = devm_request_threaded_irq(&pdev->dev, irq, actmon_isr,
actmon_thread_isr, IRQF_SHARED,
"tegra-devfreq", tegra);
tegra->rate_change_nb.notifier_call = tegra_actmon_rate_notify_cb;
err = clk_notifier_register(tegra->emc_clock, &tegra->rate_change_nb);
if (err) {
dev_err(&pdev->dev, "Interrupt request failed\n");
return err;
dev_err(&pdev->dev,
"Failed to register rate change notifier\n");
goto remove_opps;
}
err = devfreq_add_governor(&tegra_devfreq_governor);
if (err) {
dev_err(&pdev->dev, "Failed to add governor: %d\n", err);
goto unreg_notifier;
}
tegra_devfreq_profile.initial_freq = clk_get_rate(tegra->emc_clock);
tegra->devfreq = devm_devfreq_add_device(&pdev->dev,
&tegra_devfreq_profile,
"tegra_actmon",
NULL);
tegra->devfreq = devfreq_add_device(&pdev->dev,
&tegra_devfreq_profile,
"tegra_actmon",
NULL);
if (IS_ERR(tegra->devfreq)) {
err = PTR_ERR(tegra->devfreq);
goto remove_governor;
}
err = devm_request_threaded_irq(&pdev->dev, tegra->irq, NULL,
actmon_thread_isr, IRQF_ONESHOT,
"tegra-devfreq", tegra);
if (err) {
dev_err(&pdev->dev, "Interrupt request failed: %d\n", err);
goto remove_devfreq;
}
return 0;
remove_devfreq:
devfreq_remove_device(tegra->devfreq);
remove_governor:
devfreq_remove_governor(&tegra_devfreq_governor);
unreg_notifier:
clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);
remove_opps:
dev_pm_opp_remove_all_dynamic(&pdev->dev);
reset_control_reset(tegra->reset);
clk_disable_unprepare(tegra->clock);
return err;
}
static int tegra_devfreq_remove(struct platform_device *pdev)
{
struct tegra_devfreq *tegra = platform_get_drvdata(pdev);
int irq = platform_get_irq(pdev, 0);
u32 val;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(actmon_device_configs); i++) {
val = device_readl(&tegra->devices[i], ACTMON_DEV_CTRL);
val &= ~ACTMON_DEV_CTRL_ENB;
device_writel(&tegra->devices[i], val, ACTMON_DEV_CTRL);
}
actmon_write_barrier(tegra);
devm_free_irq(&pdev->dev, irq, tegra);
devfreq_remove_device(tegra->devfreq);
devfreq_remove_governor(&tegra_devfreq_governor);
clk_notifier_unregister(tegra->emc_clock, &tegra->rate_change_nb);
dev_pm_opp_remove_all_dynamic(&pdev->dev);
reset_control_reset(tegra->reset);
clk_disable_unprepare(tegra->clock);
return 0;
}
static const struct of_device_id tegra_devfreq_of_match[] = {
{ .compatible = "nvidia,tegra30-actmon" },
{ .compatible = "nvidia,tegra124-actmon" },
{ },
};
@ -737,36 +741,7 @@ static struct platform_driver tegra_devfreq_driver = {
.of_match_table = tegra_devfreq_of_match,
},
};
static int __init tegra_devfreq_init(void)
{
int ret = 0;
ret = devfreq_add_governor(&tegra_devfreq_governor);
if (ret) {
pr_err("%s: failed to add governor: %d\n", __func__, ret);
return ret;
}
ret = platform_driver_register(&tegra_devfreq_driver);
if (ret)
devfreq_remove_governor(&tegra_devfreq_governor);
return ret;
}
module_init(tegra_devfreq_init)
static void __exit tegra_devfreq_exit(void)
{
int ret = 0;
platform_driver_unregister(&tegra_devfreq_driver);
ret = devfreq_remove_governor(&tegra_devfreq_governor);
if (ret)
pr_err("%s: failed to remove governor: %d\n", __func__, ret);
}
module_exit(tegra_devfreq_exit)
module_platform_driver(tegra_devfreq_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Tegra devfreq driver");


@ -3,9 +3,11 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/wait.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <asm/prom.h>
@ -16,36 +18,24 @@
static int clamped;
static struct wf_control *clamp_control;
static int clamp_notifier_call(struct notifier_block *self,
unsigned long event, void *data)
{
struct cpufreq_policy *p = data;
unsigned long max_freq;
if (event != CPUFREQ_ADJUST)
return 0;
max_freq = clamped ? (p->cpuinfo.min_freq) : (p->cpuinfo.max_freq);
cpufreq_verify_within_limits(p, 0, max_freq);
return 0;
}
static struct notifier_block clamp_notifier = {
.notifier_call = clamp_notifier_call,
};
static struct dev_pm_qos_request qos_req;
static unsigned int min_freq, max_freq;
static int clamp_set(struct wf_control *ct, s32 value)
{
if (value)
unsigned int freq;
if (value) {
freq = min_freq;
printk(KERN_INFO "windfarm: Clamping CPU frequency to "
"minimum !\n");
else
} else {
freq = max_freq;
printk(KERN_INFO "windfarm: CPU frequency unclamped !\n");
}
clamped = value;
cpufreq_update_policy(0);
return 0;
return dev_pm_qos_update_request(&qos_req, freq);
}
static int clamp_get(struct wf_control *ct, s32 *value)
@ -74,27 +64,60 @@ static const struct wf_control_ops clamp_ops = {
static int __init wf_cpufreq_clamp_init(void)
{
struct cpufreq_policy *policy;
struct wf_control *clamp;
struct device *dev;
int ret;
policy = cpufreq_cpu_get(0);
if (!policy) {
pr_warn("%s: cpufreq policy not found cpu0\n", __func__);
return -EPROBE_DEFER;
}
min_freq = policy->cpuinfo.min_freq;
max_freq = policy->cpuinfo.max_freq;
cpufreq_cpu_put(policy);
dev = get_cpu_device(0);
if (unlikely(!dev)) {
pr_warn("%s: No cpu device for cpu0\n", __func__);
return -ENODEV;
}
clamp = kmalloc(sizeof(struct wf_control), GFP_KERNEL);
if (clamp == NULL)
return -ENOMEM;
cpufreq_register_notifier(&clamp_notifier, CPUFREQ_POLICY_NOTIFIER);
ret = dev_pm_qos_add_request(dev, &qos_req, DEV_PM_QOS_MAX_FREQUENCY,
max_freq);
if (ret < 0) {
pr_err("%s: Failed to add freq constraint (%d)\n", __func__,
ret);
goto free;
}
clamp->ops = &clamp_ops;
clamp->name = "cpufreq-clamp";
if (wf_register_control(clamp))
ret = wf_register_control(clamp);
if (ret)
goto fail;
clamp_control = clamp;
return 0;
fail:
dev_pm_qos_remove_request(&qos_req);
free:
kfree(clamp);
return -ENODEV;
return ret;
}
static void __exit wf_cpufreq_clamp_exit(void)
{
if (clamp_control)
if (clamp_control) {
wf_unregister_control(clamp_control);
dev_pm_qos_remove_request(&qos_req);
}
}
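
This windfarm conversion is one of the CPUFREQ_ADJUST removals mentioned in the changelog: instead of vetoing policy updates from a notifier, the clamp is now expressed as a device PM QoS maximum-frequency request on CPU0. A condensed sketch of the mechanism (the device pointer and kHz values are placeholders):

#include <linux/pm_qos.h>

static struct dev_pm_qos_request example_req;

/* 'cpu_dev' would come from get_cpu_device(0); kHz values invented. */
static int example_init(struct device *cpu_dev)
{
	/* Start unconstrained at the hardware maximum. */
	return dev_pm_qos_add_request(cpu_dev, &example_req,
				      DEV_PM_QOS_MAX_FREQUENCY, 2000000);
}

static int example_clamp(bool clamp)
{
	/* Move the cap; cpufreq re-evaluates the policy for us. */
	return dev_pm_qos_update_request(&example_req,
					 clamp ? 800000 : 2000000);
}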


@ -401,6 +401,54 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
/**
* dev_pm_opp_find_level_exact() - search for an exact level
* @dev: device for which we do this operation
* @level: level to search for
*
* Return: Searches for exact match in the opp table and returns pointer to the
* matching opp if found, else returns ERR_PTR in case of error and should
* be handled using IS_ERR. Error return values can be:
* EINVAL: for bad pointer
* ERANGE: no match found for search
* ENODEV: if device not found in list of registered devices
*
* The callers are required to call dev_pm_opp_put() for the returned OPP after
* use.
*/
struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level)
{
struct opp_table *opp_table;
struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
opp_table = _find_opp_table(dev);
if (IS_ERR(opp_table)) {
int r = PTR_ERR(opp_table);
dev_err(dev, "%s: OPP table not found (%d)\n", __func__, r);
return ERR_PTR(r);
}
mutex_lock(&opp_table->lock);
list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
if (temp_opp->level == level) {
opp = temp_opp;
/* Increment the reference count of OPP */
dev_pm_opp_get(opp);
break;
}
}
mutex_unlock(&opp_table->lock);
dev_pm_opp_put_opp_table(opp_table);
return opp;
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_level_exact);
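
For orientation, a hypothetical caller of the new helper; note that the reference taken by the lookup must be dropped with dev_pm_opp_put():

#include <linux/err.h>
#include <linux/pm_opp.h>

/* Hypothetical caller; 'level' would come from platform knowledge. */
static int example_use_level(struct device *dev, unsigned int level)
{
	struct dev_pm_opp *opp;

	opp = dev_pm_opp_find_level_exact(dev, level);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	/* ... inspect it, e.g. dev_pm_opp_get_voltage(opp) ... */

	dev_pm_opp_put(opp);	/* balance the reference the lookup took */
	return 0;
}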
static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
unsigned long *freq)
{
@ -940,6 +988,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
INIT_LIST_HEAD(&opp_table->opp_list);
kref_init(&opp_table->kref);
kref_init(&opp_table->list_kref);
/* Secure the device table modification */
list_add(&opp_table->node, &opp_tables);
@ -1577,6 +1626,12 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
goto free_regulators;
}
ret = regulator_enable(reg);
if (ret < 0) {
regulator_put(reg);
goto free_regulators;
}
opp_table->regulators[i] = reg;
}
@ -1590,8 +1645,10 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
return opp_table;
free_regulators:
while (i != 0)
regulator_put(opp_table->regulators[--i]);
while (i--) {
regulator_disable(opp_table->regulators[i]);
regulator_put(opp_table->regulators[i]);
}
kfree(opp_table->regulators);
opp_table->regulators = NULL;
@ -1617,8 +1674,10 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
-	for (i = opp_table->regulator_count - 1; i >= 0; i--)
+	for (i = opp_table->regulator_count - 1; i >= 0; i--) {
+		regulator_disable(opp_table->regulators[i]);
		regulator_put(opp_table->regulators[i]);
+	}
_free_set_opp_data(opp_table);
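With this change the OPP core also owns the regulator enable count: supplies are enabled in dev_pm_opp_set_regulators() and disabled again in dev_pm_opp_put_regulators(). A hedged sketch of the expected driver pattern (supply names are illustrative):

	static const char * const reg_names[] = { "vdd", "vddio" };
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_set_regulators(dev, reg_names,
					      ARRAY_SIZE(reg_names));
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* dev_pm_opp_set_rate() may now adjust the enabled supplies */

	dev_pm_opp_put_regulators(opp_table);	/* disables, then releases */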
@ -1771,6 +1830,7 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
* dev_pm_opp_attach_genpd - Attach genpd(s) for the device and save virtual device pointer
* @dev: Consumer device for which the genpd is getting attached.
* @names: Null terminated array of pointers containing names of genpd to attach.
* @virt_devs: Pointer to return the array of virtual devices.
*
* Multiple generic power domains for a device are supported with the help of
* virtual genpd devices, which are created for each consumer device - genpd
@ -1784,12 +1844,16 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
*
* This helper needs to be called once with a list of all genpd to attach.
* Otherwise the original device structure will be used instead by the OPP core.
*
* The order of entries in the names array must match the order in which
* "required-opps" are added in DT.
*/
-struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names)
+struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
+		const char **names, struct device ***virt_devs)
{
struct opp_table *opp_table;
struct device *virt_dev;
-	int index, ret = -EINVAL;
+	int index = 0, ret = -EINVAL;
const char **name = names;
opp_table = dev_pm_opp_get_opp_table(dev);
@ -1815,14 +1879,6 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
goto unlock;
while (*name) {
-		index = of_property_match_string(dev->of_node,
-						 "power-domain-names", *name);
-		if (index < 0) {
-			dev_err(dev, "Failed to find power domain: %s (%d)\n",
-				*name, index);
-			goto err;
-		}
if (index >= opp_table->required_opp_count) {
dev_err(dev, "Index can't be greater than required-opp-count - 1, %s (%d : %d)\n",
*name, opp_table->required_opp_count, index);
@ -1843,9 +1899,12 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
}
opp_table->genpd_virt_devs[index] = virt_dev;
index++;
name++;
}
if (virt_devs)
*virt_devs = opp_table->genpd_virt_devs;
mutex_unlock(&opp_table->genpd_virt_dev_lock);
return opp_table;
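Under the new signature a caller can optionally get hold of the virtual genpd devices it needs for per-domain runtime PM; a sketch (the domain names are illustrative and must follow the DT "required-opps" order, per the comment above):

	const char *names[] = { "cx", "mx", NULL };
	struct device **virt_devs;
	struct opp_table *opp_table;

	opp_table = dev_pm_opp_attach_genpd(dev, names, &virt_devs);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* virt_devs[0] and virt_devs[1] now track the "cx" and "mx" domains */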


@ -617,9 +617,12 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
/* OPP to select on device suspend */
if (of_property_read_bool(np, "opp-suspend")) {
if (opp_table->suspend_opp) {
dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n",
__func__, opp_table->suspend_opp->rate,
new_opp->rate);
/* Pick the OPP with higher rate as suspend OPP */
if (new_opp->rate > opp_table->suspend_opp->rate) {
opp_table->suspend_opp->suspend = false;
new_opp->suspend = true;
opp_table->suspend_opp = new_opp;
}
} else {
new_opp->suspend = true;
opp_table->suspend_opp = new_opp;
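With duplicates now resolved in favour of the fastest OPP rather than warned about, a consumer keeps querying the result the usual way; a small sketch:

	unsigned long suspend_freq;

	/* 0 means no OPP in the table was marked with "opp-suspend" */
	suspend_freq = dev_pm_opp_get_suspend_opp_freq(dev);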
@ -662,8 +665,6 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
return 0;
}
-	kref_init(&opp_table->list_kref);
/* We have opp-table node now, iterate over it and add OPPs */
for_each_available_child_of_node(opp_table->np, np) {
opp = _opp_add_static_v2(opp_table, dev, np);
@ -672,17 +673,15 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
ret);
of_node_put(np);
-			goto put_list_kref;
+			return ret;
} else if (opp) {
count++;
}
}
	/* There should be one or more OPPs defined */
-	if (WARN_ON(!count)) {
-		ret = -ENOENT;
-		goto put_list_kref;
-	}
+	if (WARN_ON(!count))
+		return -ENOENT;
list_for_each_entry(opp, &opp_table->opp_list, node)
pstate_count += !!opp->pstate;
@ -691,8 +690,7 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
if (pstate_count && pstate_count != count) {
dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
count, pstate_count);
-		ret = -ENOENT;
-		goto put_list_kref;
+		return -ENOENT;
}
if (pstate_count)
@ -701,11 +699,6 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
opp_table->parsed_static_opps = true;
return 0;
-put_list_kref:
-	_put_opp_list_kref(opp_table);
-	return ret;
}
/* Initializes OPP tables based on old-deprecated bindings */
@ -731,8 +724,6 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
return -EINVAL;
}
-	kref_init(&opp_table->list_kref);
val = prop->value;
while (nr) {
unsigned long freq = be32_to_cpup(val++) * 1000;
@ -742,7 +733,6 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
if (ret) {
dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
__func__, freq, ret);
-			_put_opp_list_kref(opp_table);
return ret;
}
nr -= 2;


@ -252,36 +252,46 @@ static void intel_button_array_enable(struct device *device, bool enable)
}
static int intel_hid_pm_prepare(struct device *device)
{
+	if (device_may_wakeup(device)) {
+		struct intel_hid_priv *priv = dev_get_drvdata(device);
+
+		priv->wakeup_mode = true;
+	}
+	return 0;
+}
+
+static void intel_hid_pm_complete(struct device *device)
+{
	struct intel_hid_priv *priv = dev_get_drvdata(device);

-	priv->wakeup_mode = true;
-	return 0;
+	priv->wakeup_mode = false;
}
static int intel_hid_pl_suspend_handler(struct device *device)
{
-	if (pm_suspend_via_firmware()) {
+	intel_button_array_enable(device, false);
+
+	if (!pm_suspend_no_platform())
		intel_hid_set_enable(device, false);
-		intel_button_array_enable(device, false);
-	}
return 0;
}
static int intel_hid_pl_resume_handler(struct device *device)
{
-	struct intel_hid_priv *priv = dev_get_drvdata(device);
+	intel_hid_pm_complete(device);

-	priv->wakeup_mode = false;
-	if (pm_resume_via_firmware()) {
+	if (!pm_suspend_no_platform())
		intel_hid_set_enable(device, true);
-		intel_button_array_enable(device, true);
-	}
+
+	intel_button_array_enable(device, true);
return 0;
}
static const struct dev_pm_ops intel_hid_pl_pm_ops = {
.prepare = intel_hid_pm_prepare,
.complete = intel_hid_pm_complete,
.freeze = intel_hid_pl_suspend_handler,
.thaw = intel_hid_pl_resume_handler,
.restore = intel_hid_pl_resume_handler,
@ -491,6 +501,12 @@ static int intel_hid_probe(struct platform_device *device)
}
device_init_wakeup(&device->dev, true);
/*
* In order for system wakeup to work, the EC GPE has to be marked as
* a wakeup one, so do that here (this setting will persist, but it has
* no effect until the wakeup mask is set for the EC GPE).
*/
acpi_ec_mark_gpe_for_wake();
return 0;
err_remove_notify:


@ -176,6 +176,12 @@ static int intel_vbtn_probe(struct platform_device *device)
return -EBUSY;
device_init_wakeup(&device->dev, true);
/*
* In order for system wakeup to work, the EC GPE has to be marked as
* a wakeup one, so do that here (this setting will persist, but it has
* no effect until the wakeup mask is set for the EC GPE).
*/
acpi_ec_mark_gpe_for_wake();
return 0;
}
@ -195,22 +201,30 @@ static int intel_vbtn_remove(struct platform_device *device)
static int intel_vbtn_pm_prepare(struct device *dev)
{
-	struct intel_vbtn_priv *priv = dev_get_drvdata(dev);
+	if (device_may_wakeup(dev)) {
+		struct intel_vbtn_priv *priv = dev_get_drvdata(dev);

-	priv->wakeup_mode = true;
+		priv->wakeup_mode = true;
+	}
return 0;
}
-static int intel_vbtn_pm_resume(struct device *dev)
+static void intel_vbtn_pm_complete(struct device *dev)
{
	struct intel_vbtn_priv *priv = dev_get_drvdata(dev);

	priv->wakeup_mode = false;
+}
+
+static int intel_vbtn_pm_resume(struct device *dev)
+{
+	intel_vbtn_pm_complete(dev);
	return 0;
}
static const struct dev_pm_ops intel_vbtn_pm_ops = {
.prepare = intel_vbtn_pm_prepare,
.complete = intel_vbtn_pm_complete,
.resume = intel_vbtn_pm_resume,
.restore = intel_vbtn_pm_resume,
.thaw = intel_vbtn_pm_resume,


@ -59,14 +59,14 @@ struct idle_inject_thread {
/**
* struct idle_inject_device - idle injection data
* @timer: idle injection period timer
-* @idle_duration_ms: duration of CPU idle time to inject
-* @run_duration_ms: duration of CPU run time to allow
+* @idle_duration_us: duration of CPU idle time to inject
+* @run_duration_us: duration of CPU run time to allow
* @cpumask: mask of CPUs affected by idle injection
*/
struct idle_inject_device {
struct hrtimer timer;
-	unsigned int idle_duration_ms;
-	unsigned int run_duration_ms;
+	unsigned int idle_duration_us;
+	unsigned int run_duration_us;
unsigned long int cpumask[0];
};
@ -104,16 +104,16 @@ static void idle_inject_wakeup(struct idle_inject_device *ii_dev)
*/
static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer)
{
-	unsigned int duration_ms;
+	unsigned int duration_us;
struct idle_inject_device *ii_dev =
container_of(timer, struct idle_inject_device, timer);
-	duration_ms = READ_ONCE(ii_dev->run_duration_ms);
-	duration_ms += READ_ONCE(ii_dev->idle_duration_ms);
+	duration_us = READ_ONCE(ii_dev->run_duration_us);
+	duration_us += READ_ONCE(ii_dev->idle_duration_us);
idle_inject_wakeup(ii_dev);
-	hrtimer_forward_now(timer, ms_to_ktime(duration_ms));
+	hrtimer_forward_now(timer, ns_to_ktime(duration_us * NSEC_PER_USEC));
return HRTIMER_RESTART;
}
@ -138,35 +138,35 @@ static void idle_inject_fn(unsigned int cpu)
*/
iit->should_run = 0;
-	play_idle(READ_ONCE(ii_dev->idle_duration_ms));
+	play_idle(READ_ONCE(ii_dev->idle_duration_us));
}
/**
* idle_inject_set_duration - idle and run duration update helper
-* @run_duration_ms: CPU run time to allow in milliseconds
-* @idle_duration_ms: CPU idle time to inject in milliseconds
+* @run_duration_us: CPU run time to allow in microseconds
+* @idle_duration_us: CPU idle time to inject in microseconds
*/
void idle_inject_set_duration(struct idle_inject_device *ii_dev,
-			      unsigned int run_duration_ms,
-			      unsigned int idle_duration_ms)
+			      unsigned int run_duration_us,
+			      unsigned int idle_duration_us)
{
-	if (run_duration_ms && idle_duration_ms) {
-		WRITE_ONCE(ii_dev->run_duration_ms, run_duration_ms);
-		WRITE_ONCE(ii_dev->idle_duration_ms, idle_duration_ms);
+	if (run_duration_us && idle_duration_us) {
+		WRITE_ONCE(ii_dev->run_duration_us, run_duration_us);
+		WRITE_ONCE(ii_dev->idle_duration_us, idle_duration_us);
}
}
/**
* idle_inject_get_duration - idle and run duration retrieval helper
-* @run_duration_ms: memory location to store the current CPU run time
-* @idle_duration_ms: memory location to store the current CPU idle time
+* @run_duration_us: memory location to store the current CPU run time
+* @idle_duration_us: memory location to store the current CPU idle time
*/
void idle_inject_get_duration(struct idle_inject_device *ii_dev,
-			      unsigned int *run_duration_ms,
-			      unsigned int *idle_duration_ms)
+			      unsigned int *run_duration_us,
+			      unsigned int *idle_duration_us)
{
-	*run_duration_ms = READ_ONCE(ii_dev->run_duration_ms);
-	*idle_duration_ms = READ_ONCE(ii_dev->idle_duration_ms);
+	*run_duration_us = READ_ONCE(ii_dev->run_duration_us);
+	*idle_duration_us = READ_ONCE(ii_dev->idle_duration_us);
}
/**
@ -181,10 +181,10 @@ void idle_inject_get_duration(struct idle_inject_device *ii_dev,
*/
int idle_inject_start(struct idle_inject_device *ii_dev)
{
-	unsigned int idle_duration_ms = READ_ONCE(ii_dev->idle_duration_ms);
-	unsigned int run_duration_ms = READ_ONCE(ii_dev->run_duration_ms);
+	unsigned int idle_duration_us = READ_ONCE(ii_dev->idle_duration_us);
+	unsigned int run_duration_us = READ_ONCE(ii_dev->run_duration_us);

-	if (!idle_duration_ms || !run_duration_ms)
+	if (!idle_duration_us || !run_duration_us)
return -EINVAL;
pr_debug("Starting injecting idle cycles on CPUs '%*pbl'\n",
@ -193,7 +193,8 @@ int idle_inject_start(struct idle_inject_device *ii_dev)
idle_inject_wakeup(ii_dev);
hrtimer_start(&ii_dev->timer,
-		      ms_to_ktime(idle_duration_ms + run_duration_ms),
+		      ns_to_ktime((idle_duration_us + run_duration_us) *
+				  NSEC_PER_USEC),
HRTIMER_MODE_REL);
return 0;
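After the switch to microseconds, users of the framework pass µs values directly, which is what makes finer-grained injection possible. A hedged usage sketch (the CPU choice and durations are made up):

	struct idle_inject_device *ii_dev;
	static struct cpumask mask;
	int ret;

	cpumask_set_cpu(0, &mask);		/* inject on CPU0 only */

	ii_dev = idle_inject_register(&mask);
	if (!ii_dev)
		return -ENODEV;

	/* run for 20000 us, then inject 5000 us of idle, repeatedly */
	idle_inject_set_duration(ii_dev, 20000, 5000);

	ret = idle_inject_start(ii_dev);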


@ -16,6 +16,7 @@
#include <linux/err.h>
#include <linux/idr.h>
#include <linux/pm_opp.h>
#include <linux/pm_qos.h>
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
@ -66,8 +67,6 @@ struct time_in_idle {
* @last_load: load measured by the latest call to cpufreq_get_requested_power()
* @cpufreq_state: integer value representing the current state of cpufreq
* cooling devices.
-* @clipped_freq: integer value representing the absolute value of the clipped
-*	frequency.
* @max_level: maximum cooling level. One less than total number of valid
* cpufreq frequencies.
* @freq_table: Freq table in descending order of frequencies
@ -84,12 +83,12 @@ struct cpufreq_cooling_device {
int id;
u32 last_load;
unsigned int cpufreq_state;
-	unsigned int clipped_freq;
unsigned int max_level;
struct freq_table *freq_table; /* In descending order */
struct cpufreq_policy *policy;
struct list_head node;
struct time_in_idle *idle_time;
struct dev_pm_qos_request qos_req;
};
static DEFINE_IDA(cpufreq_ida);
@ -118,59 +117,6 @@ static unsigned long get_level(struct cpufreq_cooling_device *cpufreq_cdev,
return level - 1;
}
-/**
- * cpufreq_thermal_notifier - notifier callback for cpufreq policy change.
- * @nb: struct notifier_block * with callback info.
- * @event: value showing cpufreq event for which this function invoked.
- * @data: callback-specific data
- *
- * Callback to hijack the notification on cpufreq policy transition.
- * Every time there is a change in policy, we will intercept and
- * update the cpufreq policy with thermal constraints.
- *
- * Return: 0 (success)
- */
-static int cpufreq_thermal_notifier(struct notifier_block *nb,
-				    unsigned long event, void *data)
-{
-	struct cpufreq_policy *policy = data;
-	unsigned long clipped_freq;
-	struct cpufreq_cooling_device *cpufreq_cdev;
-
-	if (event != CPUFREQ_ADJUST)
-		return NOTIFY_DONE;
-
-	mutex_lock(&cooling_list_lock);
-	list_for_each_entry(cpufreq_cdev, &cpufreq_cdev_list, node) {
-		/*
-		 * A new copy of the policy is sent to the notifier and can't
-		 * compare that directly.
-		 */
-		if (policy->cpu != cpufreq_cdev->policy->cpu)
-			continue;
-
-		/*
-		 * policy->max is the maximum allowed frequency defined by user
-		 * and clipped_freq is the maximum that thermal constraints
-		 * allow.
-		 *
-		 * If clipped_freq is lower than policy->max, then we need to
-		 * readjust policy->max.
-		 *
-		 * But, if clipped_freq is greater than policy->max, we don't
-		 * need to do anything.
-		 */
-		clipped_freq = cpufreq_cdev->clipped_freq;
-		if (policy->max > clipped_freq)
-			cpufreq_verify_within_limits(policy, 0, clipped_freq);
-		break;
-	}
-	mutex_unlock(&cooling_list_lock);
-
-	return NOTIFY_OK;
-}
/**
* update_freq_table() - Update the freq table with power numbers
* @cpufreq_cdev: the cpufreq cooling device in which to update the table
@ -374,7 +320,6 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
unsigned long state)
{
struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
-	unsigned int clip_freq;
/* Request state should be less than max_level */
if (WARN_ON(state > cpufreq_cdev->max_level))
@ -384,13 +329,10 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
if (cpufreq_cdev->cpufreq_state == state)
return 0;
-	clip_freq = cpufreq_cdev->freq_table[state].frequency;
	cpufreq_cdev->cpufreq_state = state;
-	cpufreq_cdev->clipped_freq = clip_freq;
-
-	cpufreq_update_policy(cpufreq_cdev->policy->cpu);
-
-	return 0;
+	return dev_pm_qos_update_request(&cpufreq_cdev->qos_req,
+				cpufreq_cdev->freq_table[state].frequency);
}
/**
@ -554,11 +496,6 @@ static struct thermal_cooling_device_ops cpufreq_power_cooling_ops = {
.power2state = cpufreq_power2state,
};
-/* Notifier for cpufreq policy change */
-static struct notifier_block thermal_cpufreq_notifier_block = {
-	.notifier_call = cpufreq_thermal_notifier,
-};
static unsigned int find_next_max(struct cpufreq_frequency_table *table,
unsigned int prev_max)
{
@ -596,9 +533,16 @@ __cpufreq_cooling_register(struct device_node *np,
struct cpufreq_cooling_device *cpufreq_cdev;
char dev_name[THERMAL_NAME_LENGTH];
unsigned int freq, i, num_cpus;
struct device *dev;
int ret;
struct thermal_cooling_device_ops *cooling_ops;
-	bool first;
dev = get_cpu_device(policy->cpu);
if (unlikely(!dev)) {
pr_warn("No cpu device for cpu %d\n", policy->cpu);
return ERR_PTR(-ENODEV);
}
if (IS_ERR_OR_NULL(policy)) {
pr_err("%s: cpufreq policy isn't valid: %p\n", __func__, policy);
@ -671,25 +615,29 @@ __cpufreq_cooling_register(struct device_node *np,
cooling_ops = &cpufreq_cooling_ops;
}
ret = dev_pm_qos_add_request(dev, &cpufreq_cdev->qos_req,
DEV_PM_QOS_MAX_FREQUENCY,
cpufreq_cdev->freq_table[0].frequency);
if (ret < 0) {
pr_err("%s: Failed to add freq constraint (%d)\n", __func__,
ret);
cdev = ERR_PTR(ret);
goto remove_ida;
}
cdev = thermal_of_cooling_device_register(np, dev_name, cpufreq_cdev,
cooling_ops);
	if (IS_ERR(cdev))
-		goto remove_ida;
-
-	cpufreq_cdev->clipped_freq = cpufreq_cdev->freq_table[0].frequency;
+		goto remove_qos_req;

	mutex_lock(&cooling_list_lock);
-	/* Register the notifier for first cpufreq cooling device */
-	first = list_empty(&cpufreq_cdev_list);
	list_add(&cpufreq_cdev->node, &cpufreq_cdev_list);
	mutex_unlock(&cooling_list_lock);

-	if (first)
-		cpufreq_register_notifier(&thermal_cpufreq_notifier_block,
-					  CPUFREQ_POLICY_NOTIFIER);
return cdev;
remove_qos_req:
dev_pm_qos_remove_request(&cpufreq_cdev->qos_req);
remove_ida:
ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
free_table:
@ -777,7 +725,6 @@ EXPORT_SYMBOL_GPL(of_cpufreq_cooling_register);
void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
{
struct cpufreq_cooling_device *cpufreq_cdev;
-	bool last;
if (!cdev)
return;
@ -786,15 +733,10 @@ void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
mutex_lock(&cooling_list_lock);
list_del(&cpufreq_cdev->node);
-	/* Unregister the notifier for the last cpufreq cooling device */
-	last = list_empty(&cpufreq_cdev_list);
	mutex_unlock(&cooling_list_lock);

-	if (last)
-		cpufreq_unregister_notifier(&thermal_cpufreq_notifier_block,
-					    CPUFREQ_POLICY_NOTIFIER);
-
	thermal_cooling_device_unregister(cdev);
+	dev_pm_qos_remove_request(&cpufreq_cdev->qos_req);
ida_simple_remove(&cpufreq_ida, cpufreq_cdev->id);
kfree(cpufreq_cdev->idle_time);
kfree(cpufreq_cdev->freq_table);


@ -430,7 +430,7 @@ static void clamp_idle_injection_func(struct kthread_work *work)
if (should_skip)
goto balance;
-	play_idle(jiffies_to_msecs(w_data->duration_jiffies));
+	play_idle(jiffies_to_usecs(w_data->duration_jiffies));
balance:
if (clamping && w_data->clamping && cpu_online(w_data->cpu))


@ -1678,24 +1678,6 @@ pxafb_freq_transition(struct notifier_block *nb, unsigned long val, void *data)
}
return 0;
}
-static int
-pxafb_freq_policy(struct notifier_block *nb, unsigned long val, void *data)
-{
-	struct pxafb_info *fbi = TO_INF(nb, freq_policy);
-	struct fb_var_screeninfo *var = &fbi->fb.var;
-	struct cpufreq_policy *policy = data;
-
-	switch (val) {
-	case CPUFREQ_ADJUST:
-		pr_debug("min dma period: %d ps, "
-			"new clock %d kHz\n", pxafb_display_dma_period(var),
-			policy->max);
-		/* TODO: fill in min/max values */
-		break;
-	}
-	return 0;
-}
#endif
#ifdef CONFIG_PM
@ -2400,11 +2382,8 @@ static int pxafb_probe(struct platform_device *dev)
#ifdef CONFIG_CPU_FREQ
fbi->freq_transition.notifier_call = pxafb_freq_transition;
-	fbi->freq_policy.notifier_call = pxafb_freq_policy;
cpufreq_register_notifier(&fbi->freq_transition,
CPUFREQ_TRANSITION_NOTIFIER);
-	cpufreq_register_notifier(&fbi->freq_policy,
-				  CPUFREQ_POLICY_NOTIFIER);
#endif
/*


@ -162,7 +162,6 @@ struct pxafb_info {
#ifdef CONFIG_CPU_FREQ
struct notifier_block freq_transition;
-	struct notifier_block freq_policy;
#endif
struct regulator *lcd_supply;


@ -1005,31 +1005,6 @@ sa1100fb_freq_transition(struct notifier_block *nb, unsigned long val,
}
return 0;
}
-static int
-sa1100fb_freq_policy(struct notifier_block *nb, unsigned long val,
-		     void *data)
-{
-	struct sa1100fb_info *fbi = TO_INF(nb, freq_policy);
-	struct cpufreq_policy *policy = data;
-
-	switch (val) {
-	case CPUFREQ_ADJUST:
-		dev_dbg(fbi->dev, "min dma period: %d ps, "
-			"new clock %d kHz\n", sa1100fb_min_dma_period(fbi),
-			policy->max);
-		/* todo: fill in min/max values */
-		break;
-	case CPUFREQ_NOTIFY:
-		do {} while(0);
-		/* todo: panic if min/max values aren't fulfilled
-		 * [can't really happen unless there's a bug in the
-		 * CPU policy verification process]
-		 */
-		break;
-	}
-	return 0;
-}
#endif
#ifdef CONFIG_PM
@ -1242,9 +1217,7 @@ static int sa1100fb_probe(struct platform_device *pdev)
#ifdef CONFIG_CPU_FREQ
fbi->freq_transition.notifier_call = sa1100fb_freq_transition;
-	fbi->freq_policy.notifier_call = sa1100fb_freq_policy;
	cpufreq_register_notifier(&fbi->freq_transition, CPUFREQ_TRANSITION_NOTIFIER);
-	cpufreq_register_notifier(&fbi->freq_policy, CPUFREQ_POLICY_NOTIFIER);
#endif
/* This driver cannot be unloaded at the moment */


@ -64,7 +64,6 @@ struct sa1100fb_info {
#ifdef CONFIG_CPU_FREQ
struct notifier_block freq_transition;
-	struct notifier_block freq_policy;
#endif
const struct sa1100fb_mach_info *inf;


@ -1459,13 +1459,13 @@ static int ep_create_wakeup_source(struct epitem *epi)
struct wakeup_source *ws;
if (!epi->ep->ws) {
-		epi->ep->ws = wakeup_source_register("eventpoll");
+		epi->ep->ws = wakeup_source_register(NULL, "eventpoll");
if (!epi->ep->ws)
return -ENOMEM;
}
name = epi->ffd.file->f_path.dentry->d_name.name;
-	ws = wakeup_source_register(name);
+	ws = wakeup_source_register(NULL, name);
if (!ws)
return -ENOMEM;
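The extra device argument is what ties a wakeup source to a device object in sysfs; passing NULL, as eventpoll does here, keeps the old anonymous behaviour. A sketch of the device-backed form (the name is illustrative):

	struct wakeup_source *ws;

	/* exposed under the given device in sysfs; NULL instead of dev
	 * would keep the old anonymous behaviour
	 */
	ws = wakeup_source_register(dev, "mydev-wakeup");
	if (!ws)
		return -ENOMEM;

	__pm_stay_awake(ws);	/* used exactly as before */
	__pm_relax(ws);

	wakeup_source_unregister(ws);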


@ -297,6 +297,9 @@ ACPI_GLOBAL(u8, acpi_gbl_system_awake_and_running);
#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \
ACPI_EXTERNAL_RETURN_OK(prototype)
#define ACPI_HW_DEPENDENT_RETURN_UINT32(prototype) \
ACPI_EXTERNAL_RETURN_UINT32(prototype)
#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \
ACPI_EXTERNAL_RETURN_VOID(prototype)
@ -307,6 +310,9 @@ ACPI_GLOBAL(u8, acpi_gbl_system_awake_and_running);
#define ACPI_HW_DEPENDENT_RETURN_OK(prototype) \
static ACPI_INLINE prototype {return(AE_OK);}
#define ACPI_HW_DEPENDENT_RETURN_UINT32(prototype) \
static ACPI_INLINE prototype {return(0);}
#define ACPI_HW_DEPENDENT_RETURN_VOID(prototype) \
static ACPI_INLINE prototype {return;}
@ -738,7 +744,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
u32 gpe_number,
acpi_event_status
*event_status))
-ACPI_HW_DEPENDENT_RETURN_VOID(void acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
+ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))


@ -4,6 +4,8 @@
#include <linux/kernel.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/pm_qos.h>
#include <linux/thermal.h>
#include <asm/acpi.h>
@ -230,6 +232,8 @@ struct acpi_processor {
struct acpi_processor_limit limit;
struct thermal_cooling_device *cdev;
struct device *dev; /* Processor device. */
struct dev_pm_qos_request perflib_req;
struct dev_pm_qos_request thermal_req;
};
struct acpi_processor_errata {
@ -296,16 +300,22 @@ static inline void acpi_processor_ffh_cstate_enter(struct acpi_processor_cx
/* in processor_perflib.c */
#ifdef CONFIG_CPU_FREQ
-void acpi_processor_ppc_init(void);
-void acpi_processor_ppc_exit(void);
+extern bool acpi_processor_cpufreq_init;
+void acpi_processor_ignore_ppc_init(void);
+void acpi_processor_ppc_init(int cpu);
+void acpi_processor_ppc_exit(int cpu);
void acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag);
extern int acpi_processor_get_bios_limit(int cpu, unsigned int *limit);
#else
-static inline void acpi_processor_ppc_init(void)
+static inline void acpi_processor_ignore_ppc_init(void)
{
return;
}
-static inline void acpi_processor_ppc_exit(void)
+static inline void acpi_processor_ppc_init(int cpu)
{
	return;
}
+static inline void acpi_processor_ppc_exit(int cpu)
{
return;
}
@ -421,14 +431,14 @@ static inline int acpi_processor_hotplug(struct acpi_processor *pr)
int acpi_processor_get_limit_info(struct acpi_processor *pr);
extern const struct thermal_cooling_device_ops processor_cooling_ops;
#if defined(CONFIG_ACPI_CPU_FREQ_PSS) & defined(CONFIG_CPU_FREQ)
-void acpi_thermal_cpufreq_init(void);
-void acpi_thermal_cpufreq_exit(void);
+void acpi_thermal_cpufreq_init(int cpu);
+void acpi_thermal_cpufreq_exit(int cpu);
#else
-static inline void acpi_thermal_cpufreq_init(void)
+static inline void acpi_thermal_cpufreq_init(int cpu)
{
return;
}
-static inline void acpi_thermal_cpufreq_exit(void)
+static inline void acpi_thermal_cpufreq_exit(int cpu)
{
return;
}


@ -931,6 +931,8 @@ int acpi_subsys_suspend_noirq(struct device *dev);
int acpi_subsys_suspend(struct device *dev);
int acpi_subsys_freeze(struct device *dev);
int acpi_subsys_poweroff(struct device *dev);
void acpi_ec_mark_gpe_for_wake(void);
void acpi_ec_set_gpe_wake_mask(u8 action);
#else
static inline int acpi_subsys_prepare(struct device *dev) { return 0; }
static inline void acpi_subsys_complete(struct device *dev) {}
@ -939,6 +941,8 @@ static inline int acpi_subsys_suspend_noirq(struct device *dev) { return 0; }
static inline int acpi_subsys_suspend(struct device *dev) { return 0; }
static inline int acpi_subsys_freeze(struct device *dev) { return 0; }
static inline int acpi_subsys_poweroff(struct device *dev) { return 0; }
static inline void acpi_ec_mark_gpe_for_wake(void) {}
static inline void acpi_ec_set_gpe_wake_mask(u8 action) {}
#endif
#ifdef CONFIG_ACPI


@ -179,7 +179,7 @@ void arch_cpu_idle_dead(void);
int cpu_report_state(int cpu);
int cpu_check_up_prepare(int cpu);
void cpu_set_state_online(int cpu);
-void play_idle(unsigned long duration_ms);
+void play_idle(unsigned long duration_us);
#ifdef CONFIG_HOTPLUG_CPU
bool cpu_wait_death(unsigned int cpu, int seconds);


@ -456,8 +456,8 @@ static inline void cpufreq_resume(void) {}
#define CPUFREQ_POSTCHANGE (1)
/* Policy Notifiers */
-#define CPUFREQ_ADJUST			(0)
-#define CPUFREQ_NOTIFY			(1)
+#define CPUFREQ_CREATE_POLICY		(0)
+#define CPUFREQ_REMOVE_POLICY		(1)
#ifdef CONFIG_CPU_FREQ
int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);


@ -85,7 +85,9 @@ struct cpuidle_device {
unsigned int cpu;
ktime_t next_hrtimer;
int last_state_idx;
int last_residency;
u64 poll_limit_ns;
struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX];
struct cpuidle_state_kobj *kobjs[CPUIDLE_STATE_MAX];
struct cpuidle_driver_kobj *kobj_driver;
@ -119,6 +121,9 @@ struct cpuidle_driver {
/* the driver handles the cpus in cpumask */
struct cpumask *cpumask;
/* preferred governor to switch at register time */
const char *governor;
};
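A cpuidle driver opts in by naming its preferred governor, which the core switches to at registration time; the new haltpoll driver in this series does roughly the following (a sketch, state setup elided):

	static struct cpuidle_driver haltpoll_driver = {
		.name = "haltpoll",
		.governor = "haltpoll",	/* switched to when the driver registers */
		/* .states[] are filled in as usual */
	};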
#ifdef CONFIG_CPU_IDLE
@ -132,6 +137,8 @@ extern int cpuidle_select(struct cpuidle_driver *drv,
extern int cpuidle_enter(struct cpuidle_driver *drv,
struct cpuidle_device *dev, int index);
extern void cpuidle_reflect(struct cpuidle_device *dev, int index);
extern u64 cpuidle_poll_time(struct cpuidle_driver *drv,
struct cpuidle_device *dev);
extern int cpuidle_register_driver(struct cpuidle_driver *drv);
extern struct cpuidle_driver *cpuidle_get_driver(void);
@ -166,6 +173,9 @@ static inline int cpuidle_enter(struct cpuidle_driver *drv,
struct cpuidle_device *dev, int index)
{return -ENODEV; }
static inline void cpuidle_reflect(struct cpuidle_device *dev, int index) { }
static inline u64 cpuidle_poll_time(struct cpuidle_driver *drv,
struct cpuidle_device *dev)
{return 0; }
static inline int cpuidle_register_driver(struct cpuidle_driver *drv)
{return -ENODEV; }
static inline struct cpuidle_driver *cpuidle_get_driver(void) {return NULL; }


@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _CPUIDLE_HALTPOLL_H
#define _CPUIDLE_HALTPOLL_H
#ifdef CONFIG_ARCH_CPUIDLE_HALTPOLL
#include <asm/cpuidle_haltpoll.h>
#else
static inline void arch_haltpoll_enable(unsigned int cpu)
{
}
static inline void arch_haltpoll_disable(unsigned int cpu)
{
}
#endif
#endif
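The no-op fallbacks mean only architectures that select CONFIG_ARCH_CPUIDLE_HALTPOLL (x86 KVM guests in this series) have to provide real hooks; their asm/cpuidle_haltpoll.h is expected to declare something like:

	/* arch-side contract, a sketch of asm/cpuidle_haltpoll.h */
	void arch_haltpoll_enable(unsigned int cpu);
	void arch_haltpoll_disable(unsigned int cpu);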


@ -78,14 +78,20 @@ struct devfreq_event_ops {
* struct devfreq_event_desc - the descriptor of devfreq-event device
*
* @name : the name of devfreq-event device.
* @event_type : the type of the event determined and used by driver
* @driver_data : the private data for devfreq-event driver.
* @ops : the operation to control devfreq-event device.
*
 * Each devfreq-event device is described with this structure.
 * This structure contains the various data for the devfreq-event device.
 * The event_type describes which quantity is counted in the register,
 * e.g. read requests or write data in bytes.
 * The full list of supported event types can be found in the specific
 * headers under include/dt-bindings/pmu/.
*/
struct devfreq_event_desc {
const char *name;
u32 event_type;
void *driver_data;
const struct devfreq_event_ops *ops;
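A PPMU driver might then describe a counter like this (the event constant and ops name are illustrative; the real constants live in the dt-bindings headers mentioned above):

	#include <dt-bindings/pmu/exynos_ppmu.h>

	static struct devfreq_event_desc ppmu_event_desc = {
		.name = "ppmu-event3-dmc0",
		.event_type = PPMU_RO_DATA_CNT,	/* count read data, in bytes */
		.ops = &exynos_ppmu_ops,	/* driver-provided ops */
	};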


@ -20,10 +20,10 @@ int idle_inject_start(struct idle_inject_device *ii_dev);
void idle_inject_stop(struct idle_inject_device *ii_dev);
void idle_inject_set_duration(struct idle_inject_device *ii_dev,
-			      unsigned int run_duration_ms,
-			      unsigned int idle_duration_ms);
+			      unsigned int run_duration_us,
+			      unsigned int idle_duration_us);
void idle_inject_get_duration(struct idle_inject_device *ii_dev,
-			      unsigned int *run_duration_ms,
-			      unsigned int *idle_duration_ms);
+			      unsigned int *run_duration_us,
+			      unsigned int *idle_duration_us);
#endif /* __IDLE_INJECT_H__ */


@ -238,6 +238,7 @@ extern void teardown_percpu_nmi(unsigned int irq);
/* The following three functions are for the core kernel use only. */
extern void suspend_device_irqs(void);
extern void resume_device_irqs(void);
extern void rearm_wake_irq(unsigned int irq);
/**
* struct irq_affinity_notify - context for notification of IRQ affinity changes


@ -712,8 +712,6 @@ struct dev_pm_domain {
extern void device_pm_lock(void);
extern void dpm_resume_start(pm_message_t state);
extern void dpm_resume_end(pm_message_t state);
-extern void dpm_noirq_resume_devices(pm_message_t state);
-extern void dpm_noirq_end(void);
extern void dpm_resume_noirq(pm_message_t state);
extern void dpm_resume_early(pm_message_t state);
extern void dpm_resume(pm_message_t state);
@ -722,8 +720,6 @@ extern void dpm_complete(pm_message_t state);
extern void device_pm_unlock(void);
extern int dpm_suspend_end(pm_message_t state);
extern int dpm_suspend_start(pm_message_t state);
-extern void dpm_noirq_begin(void);
-extern int dpm_noirq_suspend_devices(pm_message_t state);
extern int dpm_suspend_noirq(pm_message_t state);
extern int dpm_suspend_late(pm_message_t state);
extern int dpm_suspend(pm_message_t state);


@ -197,9 +197,9 @@ static inline struct generic_pm_domain_data *dev_gpd_data(struct device *dev)
int pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev);
int pm_genpd_remove_device(struct device *dev);
int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
-			   struct generic_pm_domain *new_subdomain);
+			   struct generic_pm_domain *subdomain);
int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
-			      struct generic_pm_domain *target);
+			      struct generic_pm_domain *subdomain);
int pm_genpd_init(struct generic_pm_domain *genpd,
struct dev_power_governor *gov, bool is_off);
int pm_genpd_remove(struct generic_pm_domain *genpd);
@ -226,12 +226,12 @@ static inline int pm_genpd_remove_device(struct device *dev)
return -ENOSYS;
}
static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
-					 struct generic_pm_domain *new_sd)
+					 struct generic_pm_domain *subdomain)
{
return -ENOSYS;
}
static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
-					    struct generic_pm_domain *target)
+					    struct generic_pm_domain *subdomain)
{
return -ENOSYS;
}
@ -282,8 +282,8 @@ int of_genpd_add_provider_onecell(struct device_node *np,
struct genpd_onecell_data *data);
void of_genpd_del_provider(struct device_node *np);
int of_genpd_add_device(struct of_phandle_args *args, struct device *dev);
-int of_genpd_add_subdomain(struct of_phandle_args *parent,
-			   struct of_phandle_args *new_subdomain);
+int of_genpd_add_subdomain(struct of_phandle_args *parent_spec,
+			   struct of_phandle_args *subdomain_spec);
struct generic_pm_domain *of_genpd_remove_last(struct device_node *np);
int of_genpd_parse_idle_states(struct device_node *dn,
struct genpd_power_state **states, int *n);
@ -316,8 +316,8 @@ static inline int of_genpd_add_device(struct of_phandle_args *args,
return -ENODEV;
}
-static inline int of_genpd_add_subdomain(struct of_phandle_args *parent,
-					 struct of_phandle_args *new_subdomain)
+static inline int of_genpd_add_subdomain(struct of_phandle_args *parent_spec,
+					 struct of_phandle_args *subdomain_spec)
{
return -ENODEV;
}


@ -96,6 +96,8 @@ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev);
struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq,
bool available);
struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level);
struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq);
@ -128,7 +130,7 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char * name);
void dev_pm_opp_put_clkname(struct opp_table *opp_table);
struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
void dev_pm_opp_unregister_set_opp_helper(struct opp_table *opp_table);
-struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names);
+struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs);
void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
@ -200,6 +202,12 @@ static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
return ERR_PTR(-ENOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_level_exact(struct device *dev,
unsigned int level)
{
return ERR_PTR(-ENOTSUPP);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq)
{
@ -292,7 +300,7 @@ static inline struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const
static inline void dev_pm_opp_put_clkname(struct opp_table *opp_table) {}
-static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names)
+static inline struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names, struct device ***virt_devs)
{
return ERR_PTR(-ENOTSUPP);
}


@ -13,9 +13,6 @@
enum {
PM_QOS_RESERVED = 0,
PM_QOS_CPU_DMA_LATENCY,
-	PM_QOS_NETWORK_LATENCY,
-	PM_QOS_NETWORK_THROUGHPUT,
-	PM_QOS_MEMORY_BANDWIDTH,
/* insert new class ID */
PM_QOS_NUM_CLASSES,
@ -33,9 +30,6 @@ enum pm_qos_flags_status {
#define PM_QOS_LATENCY_ANY_NS ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
-#define PM_QOS_NETWORK_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
-#define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE	0
-#define PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE	0
#define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE PM_QOS_LATENCY_ANY
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY
#define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS PM_QOS_LATENCY_ANY_NS


@ -21,6 +21,7 @@ struct wake_irq;
* struct wakeup_source - Representation of wakeup sources
*
* @name: Name of the wakeup source
* @id: Wakeup source id
* @entry: Wakeup source list entry
* @lock: Wakeup source lock
* @wakeirq: Optional device specific wakeirq
@ -35,11 +36,13 @@ struct wake_irq;
* @relax_count: Number of times the wakeup source was deactivated.
* @expire_count: Number of times the wakeup source's timeout has expired.
* @wakeup_count: Number of times the wakeup source might abort suspend.
* @dev: Struct device for sysfs statistics about the wakeup source.
* @active: Status of the wakeup source.
* @autosleep_enabled: Autosleep is active, so update @prevent_sleep_time.
*/
struct wakeup_source {
const char *name;
int id;
struct list_head entry;
spinlock_t lock;
struct wake_irq *wakeirq;
@ -55,6 +58,7 @@ struct wakeup_source {
unsigned long relax_count;
unsigned long expire_count;
unsigned long wakeup_count;
struct device *dev;
bool active:1;
bool autosleep_enabled:1;
};
@ -81,12 +85,12 @@ static inline void device_set_wakeup_path(struct device *dev)
}
/* drivers/base/power/wakeup.c */
-extern void wakeup_source_prepare(struct wakeup_source *ws, const char *name);
extern struct wakeup_source *wakeup_source_create(const char *name);
extern void wakeup_source_destroy(struct wakeup_source *ws);
extern void wakeup_source_add(struct wakeup_source *ws);
extern void wakeup_source_remove(struct wakeup_source *ws);
-extern struct wakeup_source *wakeup_source_register(const char *name);
+extern struct wakeup_source *wakeup_source_register(struct device *dev,
+						    const char *name);
extern void wakeup_source_unregister(struct wakeup_source *ws);
extern int device_wakeup_enable(struct device *dev);
extern int device_wakeup_disable(struct device *dev);
@ -112,9 +116,6 @@ static inline bool device_can_wakeup(struct device *dev)
return dev->power.can_wakeup;
}
-static inline void wakeup_source_prepare(struct wakeup_source *ws,
-					 const char *name) {}
static inline struct wakeup_source *wakeup_source_create(const char *name)
{
return NULL;
@ -126,7 +127,8 @@ static inline void wakeup_source_add(struct wakeup_source *ws) {}
static inline void wakeup_source_remove(struct wakeup_source *ws) {}
-static inline struct wakeup_source *wakeup_source_register(const char *name)
+static inline struct wakeup_source *wakeup_source_register(struct device *dev,
+							    const char *name)
{
return NULL;
}
@ -181,13 +183,6 @@ static inline void pm_wakeup_dev_event(struct device *dev, unsigned int msec,
#endif /* !CONFIG_PM_SLEEP */
-static inline void wakeup_source_init(struct wakeup_source *ws,
-				      const char *name)
-{
-	wakeup_source_prepare(ws, name);
-	wakeup_source_add(ws);
-}
static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec)
{
return pm_wakeup_ws_event(ws, msec, false);


@ -190,8 +190,9 @@ struct platform_suspend_ops {
struct platform_s2idle_ops {
int (*begin)(void);
int (*prepare)(void);
+	int (*prepare_late)(void);
	void (*wake)(void);
-	void (*sync)(void);
+	void (*restore_early)(void);
void (*restore)(void);
void (*end)(void);
};
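The two extra hooks bracket the "noirq" phases, which is what lets ACPI handle spurious EC wakeups without repeating a full noirq resume/suspend cycle. The ACPI s2idle implementation in this series wires the struct up roughly as follows (a sketch of the shape, not the full code):

	static const struct platform_s2idle_ops acpi_s2idle_ops = {
		.begin = acpi_s2idle_begin,
		.prepare = acpi_s2idle_prepare,
		.prepare_late = acpi_s2idle_prepare_late,	/* after noirq suspend */
		.wake = acpi_s2idle_wake,
		.restore_early = acpi_s2idle_restore_early,	/* before noirq resume */
		.restore = acpi_s2idle_restore,
		.end = acpi_s2idle_end,
	};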
@ -336,6 +337,7 @@ static inline void pm_set_suspend_via_firmware(void) {}
static inline void pm_set_resume_via_firmware(void) {}
static inline bool pm_suspend_via_firmware(void) { return false; }
static inline bool pm_resume_via_firmware(void) { return false; }
static inline bool pm_suspend_no_platform(void) { return false; }
static inline bool pm_suspend_default_s2idle(void) { return false; }
static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
