Power management updates for 5.10-rc1

  - Rework cpufreq statistics collection to allow it to take place
    when fast frequency switching is enabled in the governor (Viresh
    Kumar).
 
  - Make the cpufreq core set the frequency scale on behalf of the
    driver and update several cpufreq drivers accordingly (Ionela
    Voinescu, Valentin Schneider).
 
  - Add new hardware support to the STI and qcom cpufreq drivers and
    improve them (Alain Volmat, Manivannan Sadhasivam).
 
  - Fix multiple assorted issues in cpufreq drivers (Jon Hunter,
    Krzysztof Kozlowski, Matthias Kaehlcke, Pali Rohár, Stephan
    Gerhold, Viresh Kumar).
 
  - Fix several assorted issues in the operating performance points
    (OPP) framework (Stephan Gerhold, Viresh Kumar).
 
  - Allow devfreq drivers to fetch devfreq instances by DT enumeration
    instead of using explicit phandles and modify the devfreq core
    code to support driver-specific devfreq DT bindings (Leonard
    Crestez, Chanwoo Choi).
 
  - Improve initial hardware resetting in the tegra30 devfreq driver
    and clean up the tegra cpuidle driver (Dmitry Osipenko).
 
  - Update the cpuidle core to collect state entry rejection
    statistics and expose them via sysfs (Lina Iyer).
 
  - Improve the diagnostic messages in the ACPI _CST handling code
    (Chen Yu).
 
  - Update the PSCI cpuidle driver to allow the PM domain
    initialization to occur in the OSI mode as well as in the PC
    mode (Ulf Hansson).
 
  - Rework the generic power domains (genpd) core code to allow
    domain power off transition to be aborted in the absence of the
    "power off" domain callback (Ulf Hansson).
 
  - Fix two suspend-to-idle issues in the ACPI EC driver (Rafael
    Wysocki).
 
  - Fix the handling of timer_expires in the PM-runtime framework on
    32-bit systems and the handling of device links in it (Grygorii
    Strashko, Xiang Chen).
 
  - Add I/O request batching support to the hibernate image saving and
    reading code and drop a bogus get_gendisk() from there (Xiaoyi
    Chen, Christoph Hellwig).
 
  - Allow PCIe ports to be put into the D3cold power state if they
    are power-manageable via ACPI (Lukas Wunner).
 
  - Add missing header file include to a power capping driver (Pujin
    Shi).
 
  - Clean up the qcom-cpr AVS driver a bit (Liu Shixin).
 
  - Kevin Hilman steps down as designated reviewer of adaptive voltage
    scaling (AVS) drivers (Kevin Hilman).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAl+F4A4SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxX6QP/iELq9/OsH0aJdDQlY9tnh2Oa13+HB/Y
 w1e6W+ZR/YjPgUpMVARwRLKf/gn7dUEwRDHVpGvDOyun+HACCPHB2hg8iktbxdVl
 NFAVGZCCRezXqz3opL1hl8C3Dh0CqUPUjWXGMr+Lw2TZQKT+hx9K1dm9Epe3ivyT
 RlVH/wifei80cFRcUUj7DI5KLCAyk+uKkZIFnZHAGKK6qOHMqRL5sDZsMUwWpd2i
 AdghABjePbaiLTAoZuUsJINAGY4DnIt6ASRdMJ4iksiD6pFITwFs0HSOPe7hZLlv
 zbwDPI5+TIkrOy9/aWoMaEIH1OQiFN/O++Slvdjn7gMsRgoW4d300ru4Jo1pOHxb
 5twxagCCqlOf4YAaSrMCH4HT+c6fOWoGj2AKzX3DMJyO3/WN+8XNvUxKtC5Px1u+
 pWRASjfQMO2j6nNjTCTwDJdYzggiKa54rYH2k7svX7XnTIAf+2E1gv8b4rMTgQrZ
 0rq9kULYlhgk3EYjd/DndkvxunRlmiqhzrYB4jc9eDSPNzB8FZEbw1ZMRQTFfjK0
 kp0vaEpTJ7JfKSCfluB4UmTuQoGogLl0xbzc+2NNIpwdNmrH2Srvq6wbj35jEDTU
 tqsTsBP+XZFOWyFOw/L2J47LTOp0TJnz8z4aycLfrmdNUVnXJoU1sXgFlDzETMgT
 0E6cTVwLF7Zi
 =rGhy
 -----END PGP SIGNATURE-----

Merge tag 'pm-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These rework the collection of cpufreq statistics to allow it to take
  place if fast frequency switching is enabled in the governor, rework
  the frequency invariance handling in the cpufreq core and drivers, add
  new hardware support to a couple of cpufreq drivers, fix a number of
  assorted issues and clean up the code all over.

  Specifics:

   - Rework cpufreq statistics collection to allow it to take place when
     fast frequency switching is enabled in the governor (Viresh Kumar).

   - Make the cpufreq core set the frequency scale on behalf of the
     driver and update several cpufreq drivers accordingly (Ionela
     Voinescu, Valentin Schneider).

   - Add new hardware support to the STI and qcom cpufreq drivers and
     improve them (Alain Volmat, Manivannan Sadhasivam).

   - Fix multiple assorted issues in cpufreq drivers (Jon Hunter,
     Krzysztof Kozlowski, Matthias Kaehlcke, Pali Rohár, Stephan
     Gerhold, Viresh Kumar).

   - Fix several assorted issues in the operating performance points
     (OPP) framework (Stephan Gerhold, Viresh Kumar).

   - Allow devfreq drivers to fetch devfreq instances by DT enumeration
     instead of using explicit phandles and modify the devfreq core code
     to support driver-specific devfreq DT bindings (Leonard Crestez,
     Chanwoo Choi).

   - Improve initial hardware resetting in the tegra30 devfreq driver
     and clean up the tegra cpuidle driver (Dmitry Osipenko).

   - Update the cpuidle core to collect state entry rejection statistics
     and expose them via sysfs (Lina Iyer).

   - Improve the diagnostic messages in the ACPI _CST handling code
     (Chen Yu).

   - Update the PSCI cpuidle driver to allow the PM domain
     initialization to occur in the OSI mode as well as in the PC mode
     (Ulf Hansson).

   - Rework the generic power domains (genpd) core code to allow domain
     power off transition to be aborted in the absence of the "power
     off" domain callback (Ulf Hansson).

   - Fix two suspend-to-idle issues in the ACPI EC driver (Rafael
     Wysocki).

   - Fix the handling of timer_expires in the PM-runtime framework on
     32-bit systems and the handling of device links in it (Grygorii
     Strashko, Xiang Chen).

   - Add I/O request batching support to the hibernate image saving and
     reading code and drop a bogus get_gendisk() from there (Xiaoyi
     Chen, Christoph Hellwig).

   - Allow PCIe ports to be put into the D3cold power state if they are
     power-manageable via ACPI (Lukas Wunner).

   - Add missing header file include to a power capping driver (Pujin
     Shi).

   - Clean up the qcom-cpr AVS driver a bit (Liu Shixin).

   - Kevin Hilman steps down as designated reviewer of adaptive voltage
     scaling (AVS) drivers (Kevin Hilman)"

* tag 'pm-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (65 commits)
  cpufreq: stats: Fix string format specifier mismatch
  arm: disable frequency invariance for CONFIG_BL_SWITCHER
  cpufreq,arm,arm64: restructure definitions of arch_set_freq_scale()
  cpufreq: stats: Add memory barrier to store_reset()
  cpufreq: schedutil: Simplify sugov_fast_switch()
  ACPI: EC: PM: Drop ec_no_wakeup check from acpi_ec_dispatch_gpe()
  ACPI: EC: PM: Flush EC work unconditionally after wakeup
  PCI/ACPI: Whitelist hotplug ports for D3 if power managed by ACPI
  PM: hibernate: remove the bogus call to get_gendisk() in software_resume()
  cpufreq: Move traces and update to policy->cur to cpufreq core
  cpufreq: stats: Enable stats for fast-switch as well
  cpufreq: stats: Mark few conditionals with unlikely()
  cpufreq: stats: Remove locking
  cpufreq: stats: Defer stats update to cpufreq_stats_record_transition()
  PM: domains: Allow to abort power off when no ->power_off() callback
  PM: domains: Rename power state enums for genpd
  PM / devfreq: tegra30: Improve initial hardware resetting
  PM / devfreq: event: Change prototype of devfreq_event_get_edev_by_phandle function
  PM / devfreq: Change prototype of devfreq_get_devfreq_by_phandle function
  PM / devfreq: Add devfreq_get_devfreq_by_node function
  ...
Linus Torvalds 2020-10-14 10:45:41 -07:00
commit 0b8417c141
60 changed files with 1018 additions and 2062 deletions

View File

@ -528,6 +528,10 @@ object corresponding to it, as follows:
Total number of times the hardware has been asked by the given CPU to
enter this idle state.
``rejected``
Total number of times a request to enter this idle state on the given
CPU was rejected.
The :file:`desc` and :file:`name` files both contain strings. The difference
between them is that the name is expected to be more concise, while the
description may be longer and it may contain white space or special characters.
@ -572,6 +576,11 @@ particular case. For these reasons, the only reliable way to find out how
much time has been spent by the hardware in different idle states supported by
it is to use idle state residency counters in the hardware, if available.
Generally, an interrupt received when trying to enter an idle state causes the
idle state entry request to be rejected, in which case the ``CPUIdle`` driver
may return an error code to indicate that this was the case. The :file:`usage`
and :file:`rejected` files report the number of times the given idle state
was entered successfully or rejected, respectively.
.. _cpu-pm-qos:
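
As a quick way to see the new counters described above in action, they can be
read straight from sysfs. The following is a minimal userspace sketch (not part
of this series; it only assumes the /sys/devices/system/cpu/cpu<N>/cpuidle/state<M>/
layout documented above) that prints the usage and rejected counts for every
idle state of CPU 0:

/* Sketch: dump the "usage" and "rejected" cpuidle counters for CPU 0. */
#include <stdio.h>

static long read_counter(int state, const char *file)
{
	char path[128];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu0/cpuidle/state%d/%s",
		 state, file);
	f = fopen(path, "r");
	if (!f)
		return -1;	/* file (or state) does not exist */
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	for (int state = 0; ; state++) {
		long usage = read_counter(state, "usage");
		long rejected = read_counter(state, "rejected");

		if (usage < 0)
			break;	/* no more idle states */
		printf("state%d: usage=%ld rejected=%ld\n",
		       state, usage, rejected);
	}
	return 0;
}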

View File

@ -8,7 +8,7 @@ Properties:
- compatible
Usage: required
Value type: <string>
Definition: must be "qcom,cpufreq-hw".
Definition: must be "qcom,cpufreq-hw" or "qcom,cpufreq-epss".
- clocks
Usage: required

View File

@ -154,25 +154,27 @@ Optional properties:
- opp-suspend: Marks the OPP to be used during device suspend. If multiple OPPs
in the table have this, the OPP with highest opp-hz will be used.
- opp-supported-hw: This enables us to select only a subset of OPPs from the
larger OPP table, based on what version of the hardware we are running on. We
still can't have multiple nodes with the same opp-hz value in OPP table.
- opp-supported-hw: This property allows a platform to enable only a subset of
the OPPs from the larger set present in the OPP table, based on the current
version of the hardware (already known to the operating system).
It's a user defined array containing a hierarchy of hardware version numbers,
supported by the OPP. For example: a platform with hierarchy of three levels
of versions (A, B and C), this field should be like <X Y Z>, where X
corresponds to Version hierarchy A, Y corresponds to version hierarchy B and Z
corresponds to version hierarchy C.
Each block present in the array of blocks in this property, represents a
sub-group of hardware versions supported by the OPP. i.e. <sub-group A>,
<sub-group B>, etc. The OPP will be enabled if _any_ of these sub-groups match
the hardware's version.
Each level of hierarchy is represented by a 32 bit value, and so there can be
only 32 different supported version per hierarchy. i.e. 1 bit per version. A
value of 0xFFFFFFFF will enable the OPP for all versions for that hierarchy
level. And a value of 0x00000000 will disable the OPP completely, and so we
never want that to happen.
Each sub-group is a platform defined array representing the hierarchy of
hardware versions supported by the platform. For a platform with three
hierarchical levels of version (X.Y.Z), this field shall look like
If 32 values aren't sufficient for a version hierarchy, than that version
hierarchy can be contained in multiple 32 bit values. i.e. <X Y Z1 Z2> in the
above example, Z1 & Z2 refer to the version hierarchy Z.
opp-supported-hw = <X1 Y1 Z1>, <X2 Y2 Z2>, <X3 Y3 Z3>.
Each level (eg. X1) in version hierarchy is represented by a 32 bit value, one
bit per version and so there can be maximum 32 versions per level. Logical AND
(&) operation is performed for each level with the hardware's level version
and a non-zero output for _all_ the levels in a sub-group means the OPP is
supported by hardware. A value of 0xFFFFFFFF for each level in the sub-group
will enable the OPP for all versions for the hardware.
- status: Marks the node enabled/disabled.
@ -503,7 +505,6 @@ Example 5: opp-supported-hw
*/
opp-supported-hw = <0xF 0xFFFFFFFF 0xFFFFFFFF>
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <915000 900000 925000>;
...
};
@ -516,7 +517,17 @@ Example 5: opp-supported-hw
*/
opp-supported-hw = <0x20 0xff0000ff 0x0000f4f0>
opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <915000 900000 925000>;
...
};
opp-900000000 {
/*
* Supports:
* - All cuts and substrate where process version is 0x2.
* - All cuts and process where substrate version is 0x2.
*/
opp-supported-hw = <0xFFFFFFFF 0xFFFFFFFF 0x02>, <0xFFFFFFFF 0x01 0xFFFFFFFF>
opp-hz = /bits/ 64 <900000000>;
...
};
};
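
To make the matching rule described earlier in this binding text concrete, here
is a small, hypothetical C sketch of the check (an illustration of the
documented algorithm, not the kernel's actual OPP code): every 32-bit level of
a sub-group is ANDed with the hardware's value for that level, and the OPP is
enabled if any sub-group yields a non-zero result for all of its levels.

/* Hypothetical illustration of the opp-supported-hw matching rule. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if any sub-group matches, i.e. the bitwise AND with the hardware
 * version is non-zero for every level of that sub-group. */
static bool opp_supported(const uint32_t *subgroups, int nr_subgroups,
			  const uint32_t *hw_version, int nr_levels)
{
	for (int g = 0; g < nr_subgroups; g++) {
		bool match = true;

		for (int l = 0; l < nr_levels; l++)
			if (!(subgroups[g * nr_levels + l] & hw_version[l]))
				match = false;
		if (match)
			return true;
	}
	return false;
}

int main(void)
{
	/* opp-900000000 from Example 5 above:
	 * <0xFFFFFFFF 0xFFFFFFFF 0x02>, <0xFFFFFFFF 0x01 0xFFFFFFFF> */
	const uint32_t subgroups[] = {
		0xFFFFFFFF, 0xFFFFFFFF, 0x02,
		0xFFFFFFFF, 0x01, 0xFFFFFFFF,
	};
	/* Hardware reporting version bits <0x04 0x01 0x02>: the first
	 * sub-group matches (all three ANDs are non-zero), so the OPP
	 * is enabled. */
	const uint32_t hw_version[] = { 0x04, 0x01, 0x02 };

	printf("opp-900000000 %s\n",
	       opp_supported(subgroups, 2, hw_version, 3) ?
	       "enabled" : "disabled");
	return 0;
}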

View File

@ -5388,7 +5388,6 @@ F: include/linux/kobj*
F: lib/kobj*
DRIVERS FOR ADAPTIVE VOLTAGE SCALING (AVS)
M: Kevin Hilman <khilman@kernel.org>
M: Nishanth Menon <nm@ti.com>
L: linux-pm@vger.kernel.org
S: Maintained

View File

@ -26,14 +26,6 @@ opp@456000000,800 {
opp-microvolt = <800000 800000 1125000>;
};
opp@456000000,800,2,2 {
opp-microvolt = <800000 800000 1125000>;
};
opp@456000000,800,3,2 {
opp-microvolt = <800000 800000 1125000>;
};
opp@456000000,825 {
opp-microvolt = <825000 825000 1125000>;
};
@ -46,10 +38,6 @@ opp@608000000,800 {
opp-microvolt = <800000 800000 1125000>;
};
opp@608000000,800,3,2 {
opp-microvolt = <800000 800000 1125000>;
};
opp@608000000,825 {
opp-microvolt = <825000 825000 1125000>;
};
@ -78,18 +66,6 @@ opp@760000000,875 {
opp-microvolt = <875000 875000 1125000>;
};
opp@760000000,875,1,1 {
opp-microvolt = <875000 875000 1125000>;
};
opp@760000000,875,0,2 {
opp-microvolt = <875000 875000 1125000>;
};
opp@760000000,875,1,2 {
opp-microvolt = <875000 875000 1125000>;
};
opp@760000000,900 {
opp-microvolt = <900000 900000 1125000>;
};
@ -134,14 +110,6 @@ opp@912000000,950 {
opp-microvolt = <950000 950000 1125000>;
};
opp@912000000,950,0,2 {
opp-microvolt = <950000 950000 1125000>;
};
opp@912000000,950,2,2 {
opp-microvolt = <950000 950000 1125000>;
};
opp@912000000,1000 {
opp-microvolt = <1000000 1000000 1125000>;
};
@ -170,10 +138,6 @@ opp@1000000000,1000 {
opp-microvolt = <1000000 1000000 1125000>;
};
opp@1000000000,1000,0,2 {
opp-microvolt = <1000000 1000000 1125000>;
};
opp@1000000000,1025 {
opp-microvolt = <1025000 1025000 1125000>;
};

View File

@ -37,19 +37,8 @@ opp@456000000,750 {
opp@456000000,800 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x03 0x0006>;
opp-hz = /bits/ 64 <456000000>;
};
opp@456000000,800,2,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <456000000>;
};
opp@456000000,800,3,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x08 0x0004>;
opp-supported-hw = <0x03 0x0006>, <0x04 0x0004>,
<0x08 0x0004>;
opp-hz = /bits/ 64 <456000000>;
};
@ -67,13 +56,7 @@ opp@608000000,750 {
opp@608000000,800 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x04 0x0006>;
opp-hz = /bits/ 64 <608000000>;
};
opp@608000000,800,3,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x08 0x0004>;
opp-supported-hw = <0x04 0x0006>, <0x08 0x0004>;
opp-hz = /bits/ 64 <608000000>;
};
@ -115,25 +98,8 @@ opp@760000000,850 {
opp@760000000,875 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x04 0x0001>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,875,1,1 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x02 0x0002>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,875,0,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x01 0x0004>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,875,1,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x02 0x0004>;
opp-supported-hw = <0x04 0x0001>, <0x02 0x0002>,
<0x01 0x0004>, <0x02 0x0004>;
opp-hz = /bits/ 64 <760000000>;
};
@ -199,19 +165,8 @@ opp@912000000,925 {
opp@912000000,950 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x02 0x0006>;
opp-hz = /bits/ 64 <912000000>;
};
opp@912000000,950,0,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x01 0x0004>;
opp-hz = /bits/ 64 <912000000>;
};
opp@912000000,950,2,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x04 0x0004>;
opp-supported-hw = <0x02 0x0006>, <0x01 0x0004>,
<0x04 0x0004>;
opp-hz = /bits/ 64 <912000000>;
};
@ -253,13 +208,7 @@ opp@1000000000,975 {
opp@1000000000,1000 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x02 0x0006>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,1000,0,2 {
clock-latency-ns = <400000>;
opp-supported-hw = <0x01 0x0004>;
opp-supported-hw = <0x02 0x0006>, <0x01 0x0004>;
opp-hz = /bits/ 64 <1000000000>;
};

View File

@ -74,22 +74,6 @@ opp@475000000,850 {
opp-microvolt = <850000 850000 1250000>;
};
opp@475000000,850,0,1 {
opp-microvolt = <850000 850000 1250000>;
};
opp@475000000,850,0,4 {
opp-microvolt = <850000 850000 1250000>;
};
opp@475000000,850,0,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@475000000,850,0,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@608000000,850 {
opp-microvolt = <850000 850000 1250000>;
};
@ -106,62 +90,6 @@ opp@640000000,850 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,1,1 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,2,1 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,3,1 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,1,4 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,2,4 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,3,4 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,1,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,2,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,3,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,4,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,1,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,2,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,3,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,850,4,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@640000000,900 {
opp-microvolt = <900000 900000 1250000>;
};
@ -170,94 +98,10 @@ opp@760000000,850 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,1 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,2 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,3 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,4 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,4,7 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,3,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,4,8 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,850,0,10 {
opp-microvolt = <850000 850000 1250000>;
};
opp@760000000,900 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,1 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,1 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,2 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,2 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,3 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,3 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,4 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,4 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,7 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,7 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,1,8 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,900,2,8 {
opp-microvolt = <900000 900000 1250000>;
};
opp@760000000,912 {
opp-microvolt = <912000 912000 1250000>;
};
@ -282,90 +126,10 @@ opp@860000000,900 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,1 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,1 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,2 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,2 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,3 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,3 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,4 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,4 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,7 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,7 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,4,7 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,2,8 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,3,8 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,900,4,8 {
opp-microvolt = <900000 900000 1250000>;
};
opp@860000000,975 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,1 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,2 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,3 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,4 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,975,1,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@860000000,1000 {
opp-microvolt = <1000000 1000000 1250000>;
};
@ -382,62 +146,6 @@ opp@1000000000,975 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,1 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,1 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,2 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,2 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,3 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,3 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,4 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,4 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,4,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,2,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,3,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,975,4,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1000000000,1000 {
opp-microvolt = <1000000 1000000 1250000>;
};
@ -454,66 +162,10 @@ opp@1100000000,975 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,1 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,2 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,3 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,4 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,4,7 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,3,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,975,4,8 {
opp-microvolt = <975000 975000 1250000>;
};
opp@1100000000,1000 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,1 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,2 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,3 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,4 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,7 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1000,2,8 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1100000000,1025 {
opp-microvolt = <1025000 1025000 1250000>;
};
@ -534,66 +186,10 @@ opp@1200000000,1000 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,1 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,2 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,3 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,4 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,7 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,4,7 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,3,8 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1000,4,8 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1200000000,1025 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,1 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,2 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,3 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,4 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,7 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1025,2,8 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1200000000,1050 {
opp-microvolt = <1050000 1050000 1250000>;
};
@ -610,90 +206,18 @@ opp@1300000000,1000 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1300000000,1000,4,7 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1300000000,1000,4,8 {
opp-microvolt = <1000000 1000000 1250000>;
};
opp@1300000000,1025 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1300000000,1025,3,1 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1300000000,1025,3,7 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1300000000,1025,3,8 {
opp-microvolt = <1025000 1025000 1250000>;
};
opp@1300000000,1050 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,2,1 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,2 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,3 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,4 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,5 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,6 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,2,7 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,2,8 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,12 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1050,3,13 {
opp-microvolt = <1050000 1050000 1250000>;
};
opp@1300000000,1075 {
opp-microvolt = <1075000 1075000 1250000>;
};
opp@1300000000,1075,2,2 {
opp-microvolt = <1075000 1075000 1250000>;
};
opp@1300000000,1075,2,3 {
opp-microvolt = <1075000 1075000 1250000>;
};
opp@1300000000,1075,2,4 {
opp-microvolt = <1075000 1075000 1250000>;
};
opp@1300000000,1100 {
opp-microvolt = <1100000 1100000 1250000>;
};
@ -722,10 +246,6 @@ opp@1400000000,1150 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1400000000,1150,2,4 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1400000000,1175 {
opp-microvolt = <1175000 1175000 1250000>;
};
@ -738,42 +258,10 @@ opp@1500000000,1125 {
opp-microvolt = <1125000 1125000 1250000>;
};
opp@1500000000,1125,4,5 {
opp-microvolt = <1125000 1125000 1250000>;
};
opp@1500000000,1125,4,6 {
opp-microvolt = <1125000 1125000 1250000>;
};
opp@1500000000,1125,4,12 {
opp-microvolt = <1125000 1125000 1250000>;
};
opp@1500000000,1125,4,13 {
opp-microvolt = <1125000 1125000 1250000>;
};
opp@1500000000,1150 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1500000000,1150,3,5 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1500000000,1150,3,6 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1500000000,1150,3,12 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1500000000,1150,3,13 {
opp-microvolt = <1150000 1150000 1250000>;
};
opp@1500000000,1200 {
opp-microvolt = <1200000 1200000 1250000>;
};

View File

@ -109,31 +109,9 @@ opp@475000000,800 {
opp@475000000,850 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x0F 0x0001>;
opp-hz = /bits/ 64 <475000000>;
};
opp@475000000,850,0,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0002>;
opp-hz = /bits/ 64 <475000000>;
};
opp@475000000,850,0,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0010>;
opp-hz = /bits/ 64 <475000000>;
};
opp@475000000,850,0,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0080>;
opp-hz = /bits/ 64 <475000000>;
};
opp@475000000,850,0,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0100>;
opp-supported-hw = <0x0F 0x0001>, <0x01 0x0002>,
<0x01 0x0010>, <0x01 0x0080>,
<0x01 0x0100>;
opp-hz = /bits/ 64 <475000000>;
};
@ -157,91 +135,14 @@ opp@620000000,850 {
opp@640000000,850 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x0F 0x0001>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,1,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0002>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,1,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0010>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,1,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0080>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,1,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0100>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <640000000>;
};
opp@640000000,850,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x0F 0x0001>, <0x02 0x0002>,
<0x04 0x0002>, <0x08 0x0002>,
<0x02 0x0010>, <0x04 0x0010>,
<0x08 0x0010>, <0x02 0x0080>,
<0x04 0x0080>, <0x08 0x0080>,
<0x10 0x0080>, <0x02 0x0100>,
<0x04 0x0100>, <0x08 0x0100>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <640000000>;
};
@ -253,139 +154,23 @@ opp@640000000,900 {
opp@760000000,850 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x1E 0x3461>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,850,0,10 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0400>;
opp-supported-hw = <0x1E 0x3461>, <0x08 0x0002>,
<0x08 0x0004>, <0x08 0x0008>,
<0x08 0x0010>, <0x08 0x0080>,
<0x10 0x0080>, <0x08 0x0100>,
<0x10 0x0100>, <0x01 0x0400>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0001>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0002>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0004>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0008>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0010>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0080>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,1,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0100>;
opp-hz = /bits/ 64 <760000000>;
};
opp@760000000,900,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-supported-hw = <0x01 0x0001>, <0x02 0x0002>,
<0x04 0x0002>, <0x02 0x0004>,
<0x04 0x0004>, <0x02 0x0008>,
<0x04 0x0008>, <0x02 0x0010>,
<0x04 0x0010>, <0x02 0x0080>,
<0x04 0x0080>, <0x02 0x0100>,
<0x04 0x0100>;
opp-hz = /bits/ 64 <760000000>;
};
@ -421,133 +206,23 @@ opp@860000000,850 {
opp@860000000,900 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0001>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,900,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x02 0x0001>, <0x04 0x0002>,
<0x08 0x0002>, <0x04 0x0004>,
<0x08 0x0004>, <0x04 0x0008>,
<0x08 0x0008>, <0x04 0x0010>,
<0x08 0x0010>, <0x04 0x0080>,
<0x08 0x0080>, <0x10 0x0080>,
<0x04 0x0100>, <0x08 0x0100>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0001>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0002>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0004>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0008>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0010>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0080>;
opp-hz = /bits/ 64 <860000000>;
};
opp@860000000,975,1,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0100>;
opp-supported-hw = <0x01 0x0001>, <0x02 0x0002>,
<0x02 0x0004>, <0x02 0x0008>,
<0x02 0x0010>, <0x02 0x0080>,
<0x02 0x0100>;
opp-hz = /bits/ 64 <860000000>;
};
@ -571,91 +246,14 @@ opp@1000000000,900 {
opp@1000000000,975 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x03 0x0001>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <1000000000>;
};
opp@1000000000,975,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x03 0x0001>, <0x04 0x0002>,
<0x08 0x0002>, <0x04 0x0004>,
<0x08 0x0004>, <0x04 0x0008>,
<0x08 0x0008>, <0x04 0x0010>,
<0x08 0x0010>, <0x04 0x0080>,
<0x08 0x0080>, <0x10 0x0080>,
<0x04 0x0100>, <0x08 0x0100>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <1000000000>;
};
@ -679,97 +277,20 @@ opp@1100000000,900 {
opp@1100000000,975 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x06 0x0001>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,975,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x06 0x0001>, <0x08 0x0002>,
<0x08 0x0004>, <0x08 0x0008>,
<0x08 0x0010>, <0x08 0x0080>,
<0x10 0x0080>, <0x08 0x0100>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x01 0x0001>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <1100000000>;
};
opp@1100000000,1000,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-supported-hw = <0x01 0x0001>, <0x04 0x0002>,
<0x04 0x0004>, <0x04 0x0008>,
<0x04 0x0010>, <0x04 0x0080>,
<0x04 0x0100>;
opp-hz = /bits/ 64 <1100000000>;
};
@ -799,97 +320,20 @@ opp@1200000000,975 {
opp@1200000000,1000 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0001>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1000,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x04 0x0001>, <0x08 0x0002>,
<0x08 0x0004>, <0x08 0x0008>,
<0x08 0x0010>, <0x08 0x0080>,
<0x10 0x0080>, <0x08 0x0100>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0001>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <1200000000>;
};
opp@1200000000,1025,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-supported-hw = <0x02 0x0001>, <0x04 0x0002>,
<0x04 0x0004>, <0x04 0x0008>,
<0x04 0x0010>, <0x04 0x0080>,
<0x04 0x0100>;
opp-hz = /bits/ 64 <1200000000>;
};
@ -913,133 +357,33 @@ opp@1200000000,1100 {
opp@1300000000,1000 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0001>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1000,4,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0080>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1000,4,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0100>;
opp-supported-hw = <0x08 0x0001>, <0x10 0x0080>,
<0x10 0x0100>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1025 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0001>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1025,3,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0002>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1025,3,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0080>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1025,3,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0100>;
opp-supported-hw = <0x04 0x0001>, <0x08 0x0002>,
<0x08 0x0080>, <0x08 0x0100>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x12 0x3061>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,2,1 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0002>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0004>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0008>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,5 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0020>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,6 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0040>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,2,7 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0080>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,2,8 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0100>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,12 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x1000>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1050,3,13 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x2000>;
opp-supported-hw = <0x12 0x3061>, <0x04 0x0002>,
<0x08 0x0004>, <0x08 0x0008>,
<0x08 0x0010>, <0x08 0x0020>,
<0x08 0x0040>, <0x04 0x0080>,
<0x04 0x0100>, <0x08 0x1000>,
<0x08 0x2000>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1075 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x0182>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1075,2,2 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0004>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1075,2,3 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0008>;
opp-hz = /bits/ 64 <1300000000>;
};
opp@1300000000,1075,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-supported-hw = <0x02 0x0182>, <0x04 0x0004>,
<0x04 0x0008>, <0x04 0x0010>;
opp-hz = /bits/ 64 <1300000000>;
};
@ -1081,13 +425,7 @@ opp@1400000000,1125 {
opp@1400000000,1150 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x02 0x000C>;
opp-hz = /bits/ 64 <1400000000>;
};
opp@1400000000,1150,2,4 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-supported-hw = <0x02 0x000C>, <0x04 0x0010>;
opp-hz = /bits/ 64 <1400000000>;
};
@ -1105,61 +443,17 @@ opp@1400000000,1237 {
opp@1500000000,1125 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0010>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1125,4,5 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0020>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1125,4,6 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x0040>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1125,4,12 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x1000>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1125,4,13 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x10 0x2000>;
opp-supported-hw = <0x08 0x0010>, <0x10 0x0020>,
<0x10 0x0040>, <0x10 0x1000>,
<0x10 0x2000>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1150 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x04 0x0010>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1150,3,5 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0020>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1150,3,6 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x0040>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1150,3,12 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x1000>;
opp-hz = /bits/ 64 <1500000000>;
};
opp@1500000000,1150,3,13 {
clock-latency-ns = <100000>;
opp-supported-hw = <0x08 0x2000>;
opp-supported-hw = <0x04 0x0010>, <0x08 0x0020>,
<0x08 0x0040>, <0x08 0x1000>,
<0x08 0x2000>;
opp-hz = /bits/ 64 <1500000000>;
};

View File

@ -7,8 +7,13 @@
#include <linux/cpumask.h>
#include <linux/arch_topology.h>
/* big.LITTLE switcher is incompatible with frequency invariance */
#ifndef CONFIG_BL_SWITCHER
/* Replace task scheduler's default frequency-invariant accounting */
#define arch_set_freq_scale topology_set_freq_scale
#define arch_scale_freq_capacity topology_get_freq_scale
#define arch_scale_freq_invariant topology_scale_freq_invariant
#endif
/* Replace task scheduler's default cpu-invariant accounting */
#define arch_scale_cpu_capacity topology_get_cpu_scale

View File

@ -26,7 +26,9 @@ void topology_scale_freq_tick(void);
#endif /* CONFIG_ARM64_AMU_EXTN */
/* Replace task scheduler's default frequency-invariant accounting */
#define arch_set_freq_scale topology_set_freq_scale
#define arch_scale_freq_capacity topology_get_freq_scale
#define arch_scale_freq_invariant topology_scale_freq_invariant
/* Replace task scheduler's default cpu-invariant accounting */
#define arch_scale_cpu_capacity topology_get_cpu_scale

View File

@ -248,6 +248,13 @@ static int __init init_amu_fie(void)
static_branch_enable(&amu_fie_key);
}
/*
* If the system is not fully invariant after AMU init, disable
* partial use of counters for frequency invariance.
*/
if (!topology_scale_freq_invariant())
static_branch_disable(&amu_fie_key);
free_valid_mask:
free_cpumask_var(valid_cpus);
@ -255,7 +262,7 @@ static int __init init_amu_fie(void)
}
late_initcall_sync(init_amu_fie);
bool arch_freq_counters_available(struct cpumask *cpus)
bool arch_freq_counters_available(const struct cpumask *cpus)
{
return amu_freq_invariant() &&
cpumask_subset(cpus, amu_fie_cpus);

View File

@ -798,22 +798,34 @@ int acpi_processor_evaluate_cst(acpi_handle handle, u32 cpu,
memset(&cx, 0, sizeof(cx));
element = &cst->package.elements[i];
if (element->type != ACPI_TYPE_PACKAGE)
if (element->type != ACPI_TYPE_PACKAGE) {
acpi_handle_info(handle, "_CST C%d type(%x) is not package, skip...\n",
i, element->type);
continue;
}
if (element->package.count != 4)
if (element->package.count != 4) {
acpi_handle_info(handle, "_CST C%d package count(%d) is not 4, skip...\n",
i, element->package.count);
continue;
}
obj = &element->package.elements[0];
if (obj->type != ACPI_TYPE_BUFFER)
if (obj->type != ACPI_TYPE_BUFFER) {
acpi_handle_info(handle, "_CST C%d package element[0] type(%x) is not buffer, skip...\n",
i, obj->type);
continue;
}
reg = (struct acpi_power_register *)obj->buffer.pointer;
obj = &element->package.elements[1];
if (obj->type != ACPI_TYPE_INTEGER)
if (obj->type != ACPI_TYPE_INTEGER) {
acpi_handle_info(handle, "_CST C[%d] package element[1] type(%x) is not integer, skip...\n",
i, obj->type);
continue;
}
cx.type = obj->integer.value;
/*
@ -850,6 +862,8 @@ int acpi_processor_evaluate_cst(acpi_handle handle, u32 cpu,
cx.entry_method = ACPI_CSTATE_HALT;
snprintf(cx.desc, ACPI_CX_DESC_LEN, "ACPI HLT");
} else {
acpi_handle_info(handle, "_CST C%d declares FIXED_HARDWARE C-state but not supported in hardware, skip...\n",
i);
continue;
}
} else if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
@ -857,6 +871,8 @@ int acpi_processor_evaluate_cst(acpi_handle handle, u32 cpu,
snprintf(cx.desc, ACPI_CX_DESC_LEN, "ACPI IOPORT 0x%x",
cx.address);
} else {
acpi_handle_info(handle, "_CST C%d space_id(%x) neither FIXED_HARDWARE nor SYSTEM_IO, skip...\n",
i, reg->space_id);
continue;
}
@ -864,14 +880,20 @@ int acpi_processor_evaluate_cst(acpi_handle handle, u32 cpu,
cx.valid = 1;
obj = &element->package.elements[2];
if (obj->type != ACPI_TYPE_INTEGER)
if (obj->type != ACPI_TYPE_INTEGER) {
acpi_handle_info(handle, "_CST C%d package element[2] type(%x) not integer, skip...\n",
i, obj->type);
continue;
}
cx.latency = obj->integer.value;
obj = &element->package.elements[3];
if (obj->type != ACPI_TYPE_INTEGER)
if (obj->type != ACPI_TYPE_INTEGER) {
acpi_handle_info(handle, "_CST C%d package element[3] type(%x) not integer, skip...\n",
i, obj->type);
continue;
}
memcpy(&info->states[++last_index], &cx, sizeof(cx));
}

View File

@ -2011,20 +2011,16 @@ bool acpi_ec_dispatch_gpe(void)
if (acpi_any_gpe_status_set(first_ec->gpe))
return true;
if (ec_no_wakeup)
return false;
/*
* Dispatch the EC GPE in-band, but do not report wakeup in any case
* to allow the caller to process events properly after that.
*/
ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
if (ret == ACPI_INTERRUPT_HANDLED) {
if (ret == ACPI_INTERRUPT_HANDLED)
pm_pr_dbg("ACPI EC GPE dispatched\n");
/* Flush the event and query workqueues. */
acpi_ec_flush_work();
}
/* Flush the event and query workqueues. */
acpi_ec_flush_work();
return false;
}

View File

@ -21,18 +21,27 @@
#include <linux/sched.h>
#include <linux/smp.h>
__weak bool arch_freq_counters_available(struct cpumask *cpus)
bool topology_scale_freq_invariant(void)
{
return cpufreq_supports_freq_invariance() ||
arch_freq_counters_available(cpu_online_mask);
}
__weak bool arch_freq_counters_available(const struct cpumask *cpus)
{
return false;
}
DEFINE_PER_CPU(unsigned long, freq_scale) = SCHED_CAPACITY_SCALE;
void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
unsigned long max_freq)
void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
unsigned long max_freq)
{
unsigned long scale;
int i;
if (WARN_ON_ONCE(!cur_freq || !max_freq))
return;
/*
* If the use of counters for FIE is enabled, just return as we don't
* want to update the scale factor with information from CPUFREQ.
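
For context, the hunk above is cut off inside topology_set_freq_scale(); the
remainder of that function (pre-existing code, not shown in this diff) computes
the per-CPU scale factor roughly as in the simplified sketch below:

	/* Scale factor: current frequency as a fraction of the maximum,
	 * in units of SCHED_CAPACITY_SCALE (1024). */
	scale = (cur_freq << SCHED_CAPACITY_SHIFT) / max_freq;

	for_each_cpu(i, cpus)
		per_cpu(freq_scale, i) = scale;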

View File

@ -123,7 +123,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
#define genpd_lock_interruptible(p) p->lock_ops->lock_interruptible(p)
#define genpd_unlock(p) p->lock_ops->unlock(p)
#define genpd_status_on(genpd) (genpd->status == GPD_STATE_ACTIVE)
#define genpd_status_on(genpd) (genpd->status == GENPD_STATE_ON)
#define genpd_is_irq_safe(genpd) (genpd->flags & GENPD_FLAG_IRQ_SAFE)
#define genpd_is_always_on(genpd) (genpd->flags & GENPD_FLAG_ALWAYS_ON)
#define genpd_is_active_wakeup(genpd) (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
@ -222,7 +222,7 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
* out of off and so update the idle time and vice
* versa.
*/
if (genpd->status == GPD_STATE_ACTIVE) {
if (genpd->status == GENPD_STATE_ON) {
int state_idx = genpd->state_idx;
genpd->states[state_idx].idle_time =
@ -497,6 +497,7 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
struct pm_domain_data *pdd;
struct gpd_link *link;
unsigned int not_suspended = 0;
int ret;
/*
* Do not try to power off the domain in the following situations:
@ -544,26 +545,15 @@ static int genpd_power_off(struct generic_pm_domain *genpd, bool one_dev_on,
if (!genpd->gov)
genpd->state_idx = 0;
if (genpd->power_off) {
int ret;
/* Don't power off, if a child domain is waiting to power on. */
if (atomic_read(&genpd->sd_count) > 0)
return -EBUSY;
if (atomic_read(&genpd->sd_count) > 0)
return -EBUSY;
ret = _genpd_power_off(genpd, true);
if (ret)
return ret;
/*
* If sd_count > 0 at this point, one of the subdomains hasn't
* managed to call genpd_power_on() for the parent yet after
* incrementing it. In that case genpd_power_on() will wait
* for us to drop the lock, so we can call .power_off() and let
* the genpd_power_on() restore power for us (this shouldn't
* happen very often).
*/
ret = _genpd_power_off(genpd, true);
if (ret)
return ret;
}
genpd->status = GPD_STATE_POWER_OFF;
genpd->status = GENPD_STATE_OFF;
genpd_update_accounting(genpd);
list_for_each_entry(link, &genpd->child_links, child_node) {
@ -616,7 +606,7 @@ static int genpd_power_on(struct generic_pm_domain *genpd, unsigned int depth)
if (ret)
goto err;
genpd->status = GPD_STATE_ACTIVE;
genpd->status = GENPD_STATE_ON;
genpd_update_accounting(genpd);
return 0;
@ -961,7 +951,7 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock,
if (_genpd_power_off(genpd, false))
return;
genpd->status = GPD_STATE_POWER_OFF;
genpd->status = GENPD_STATE_OFF;
list_for_each_entry(link, &genpd->child_links, child_node) {
genpd_sd_counter_dec(link->parent);
@ -1007,8 +997,7 @@ static void genpd_sync_power_on(struct generic_pm_domain *genpd, bool use_lock,
}
_genpd_power_on(genpd, false);
genpd->status = GPD_STATE_ACTIVE;
genpd->status = GENPD_STATE_ON;
}
/**
@ -1287,7 +1276,7 @@ static int genpd_restore_noirq(struct device *dev)
* so make it appear as powered off to genpd_sync_power_on(),
* so that it tries to power it on in case it was really off.
*/
genpd->status = GPD_STATE_POWER_OFF;
genpd->status = GENPD_STATE_OFF;
genpd_sync_power_on(genpd, true, 0);
genpd_unlock(genpd);
@ -1777,7 +1766,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
genpd->gov = gov;
INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
atomic_set(&genpd->sd_count, 0);
genpd->status = is_off ? GPD_STATE_POWER_OFF : GPD_STATE_ACTIVE;
genpd->status = is_off ? GENPD_STATE_OFF : GENPD_STATE_ON;
genpd->device_count = 0;
genpd->max_off_time_ns = -1;
genpd->max_off_time_changed = true;
@ -2044,8 +2033,9 @@ int of_genpd_add_provider_simple(struct device_node *np,
if (genpd->set_performance_state) {
ret = dev_pm_opp_of_add_table(&genpd->dev);
if (ret) {
dev_err(&genpd->dev, "Failed to add OPP table: %d\n",
ret);
if (ret != -EPROBE_DEFER)
dev_err(&genpd->dev, "Failed to add OPP table: %d\n",
ret);
goto unlock;
}
@ -2054,7 +2044,7 @@ int of_genpd_add_provider_simple(struct device_node *np,
* state.
*/
genpd->opp_table = dev_pm_opp_get_opp_table(&genpd->dev);
WARN_ON(!genpd->opp_table);
WARN_ON(IS_ERR(genpd->opp_table));
}
ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
@ -2111,8 +2101,9 @@ int of_genpd_add_provider_onecell(struct device_node *np,
if (genpd->set_performance_state) {
ret = dev_pm_opp_of_add_table_indexed(&genpd->dev, i);
if (ret) {
dev_err(&genpd->dev, "Failed to add OPP table for index %d: %d\n",
i, ret);
if (ret != -EPROBE_DEFER)
dev_err(&genpd->dev, "Failed to add OPP table for index %d: %d\n",
i, ret);
goto error;
}
@ -2121,7 +2112,7 @@ int of_genpd_add_provider_onecell(struct device_node *np,
* performance state.
*/
genpd->opp_table = dev_pm_opp_get_opp_table_indexed(&genpd->dev, i);
WARN_ON(!genpd->opp_table);
WARN_ON(IS_ERR(genpd->opp_table));
}
genpd->provider = &np->fwnode;
@ -2802,8 +2793,8 @@ static int genpd_summary_one(struct seq_file *s,
struct generic_pm_domain *genpd)
{
static const char * const status_lookup[] = {
[GPD_STATE_ACTIVE] = "on",
[GPD_STATE_POWER_OFF] = "off"
[GENPD_STATE_ON] = "on",
[GENPD_STATE_OFF] = "off"
};
struct pm_domain_data *pm_data;
const char *kobj_path;
@ -2881,8 +2872,8 @@ static int summary_show(struct seq_file *s, void *data)
static int status_show(struct seq_file *s, void *data)
{
static const char * const status_lookup[] = {
[GPD_STATE_ACTIVE] = "on",
[GPD_STATE_POWER_OFF] = "off"
[GENPD_STATE_ON] = "on",
[GENPD_STATE_OFF] = "off"
};
struct generic_pm_domain *genpd = s->private;
@ -2895,7 +2886,7 @@ static int status_show(struct seq_file *s, void *data)
if (WARN_ON_ONCE(genpd->status >= ARRAY_SIZE(status_lookup)))
goto exit;
if (genpd->status == GPD_STATE_POWER_OFF)
if (genpd->status == GENPD_STATE_OFF)
seq_printf(s, "%s-%u\n", status_lookup[genpd->status],
genpd->state_idx);
else
@ -2938,7 +2929,7 @@ static int idle_states_show(struct seq_file *s, void *data)
ktime_t delta = 0;
s64 msecs;
if ((genpd->status == GPD_STATE_POWER_OFF) &&
if ((genpd->status == GENPD_STATE_OFF) &&
(genpd->state_idx == i))
delta = ktime_sub(ktime_get(), genpd->accounting_time);
@ -2961,7 +2952,7 @@ static int active_time_show(struct seq_file *s, void *data)
if (ret)
return -ERESTARTSYS;
if (genpd->status == GPD_STATE_ACTIVE)
if (genpd->status == GENPD_STATE_ON)
delta = ktime_sub(ktime_get(), genpd->accounting_time);
seq_printf(s, "%lld ms\n", ktime_to_ms(
@ -2984,7 +2975,7 @@ static int total_idle_time_show(struct seq_file *s, void *data)
for (i = 0; i < genpd->state_count; i++) {
if ((genpd->status == GPD_STATE_POWER_OFF) &&
if ((genpd->status == GENPD_STATE_OFF) &&
(genpd->state_idx == i))
delta = ktime_sub(ktime_get(), genpd->accounting_time);
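
The WARN_ON() changes above follow from the OPP core now reporting failures as
ERR_PTR() values instead of NULL (see the OPP core hunks later in this diff).
A minimal sketch of the resulting calling convention, which stays quiet on
probe deferral; the my_attach_opp_table() helper is illustrative only and not
part of the patch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/pm_opp.h>

/* Illustrative helper: fetch and keep a reference to a device's OPP table. */
static int my_attach_opp_table(struct device *dev, struct opp_table **table)
{
        struct opp_table *opp_table;

        opp_table = dev_pm_opp_get_opp_table(dev);
        if (IS_ERR(opp_table)) {
                /* Probe deferral is expected and retried, so don't log it. */
                if (PTR_ERR(opp_table) != -EPROBE_DEFER)
                        dev_err(dev, "failed to get OPP table: %ld\n",
                                PTR_ERR(opp_table));
                return PTR_ERR(opp_table);
        }

        *table = opp_table;     /* balance with dev_pm_opp_put_opp_table() */
        return 0;
}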

View File

@ -291,8 +291,7 @@ static int rpm_get_suppliers(struct device *dev)
device_links_read_lock_held()) {
int retval;
if (!(link->flags & DL_FLAG_PM_RUNTIME) ||
READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
if (!(link->flags & DL_FLAG_PM_RUNTIME))
continue;
retval = pm_runtime_get_sync(link->supplier);
@ -312,8 +311,6 @@ static void rpm_put_suppliers(struct device *dev)
list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
device_links_read_lock_held()) {
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
continue;
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);

View File

@ -283,7 +283,7 @@ config ARM_SPEAR_CPUFREQ
config ARM_STI_CPUFREQ
tristate "STi CPUFreq support"
depends on SOC_STIH407
depends on CPUFREQ_DT && SOC_STIH407
help
This driver uses the generic OPP framework to match the running
platform with a predefined set of suitable values. If not provided

View File

@ -484,6 +484,12 @@ static int __init armada37xx_cpufreq_driver_init(void)
/* late_initcall, to guarantee the driver is loaded after A37xx clock driver */
late_initcall(armada37xx_cpufreq_driver_init);
static const struct of_device_id __maybe_unused armada37xx_cpufreq_of_match[] = {
{ .compatible = "marvell,armada-3700-nb-pm" },
{ },
};
MODULE_DEVICE_TABLE(of, armada37xx_cpufreq_of_match);
MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>");
MODULE_DESCRIPTION("Armada 37xx cpufreq driver");
MODULE_LICENSE("GPL");

View File

@ -137,6 +137,7 @@ static const struct of_device_id blacklist[] __initconst = {
{ .compatible = "st,stih407", },
{ .compatible = "st,stih410", },
{ .compatible = "st,stih418", },
{ .compatible = "sigma,tango4", },

View File

@ -13,6 +13,7 @@
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/pm_opp.h>
@ -24,32 +25,41 @@
#include "cpufreq-dt.h"
struct private_data {
struct opp_table *opp_table;
struct list_head node;
cpumask_var_t cpus;
struct device *cpu_dev;
const char *reg_name;
struct opp_table *opp_table;
struct opp_table *reg_opp_table;
bool have_static_opps;
};
static LIST_HEAD(priv_list);
static struct freq_attr *cpufreq_dt_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL, /* Extra space for boost-attr if required */
NULL,
};
static struct private_data *cpufreq_dt_find_data(int cpu)
{
struct private_data *priv;
list_for_each_entry(priv, &priv_list, node) {
if (cpumask_test_cpu(cpu, priv->cpus))
return priv;
}
return NULL;
}
static int set_target(struct cpufreq_policy *policy, unsigned int index)
{
struct private_data *priv = policy->driver_data;
unsigned long freq = policy->freq_table[index].frequency;
int ret;
ret = dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000);
if (!ret) {
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
}
return ret;
return dev_pm_opp_set_rate(priv->cpu_dev, freq * 1000);
}
/*
@ -90,83 +100,24 @@ static const char *find_supply_name(struct device *dev)
return name;
}
static int resources_available(void)
{
struct device *cpu_dev;
struct regulator *cpu_reg;
struct clk *cpu_clk;
int ret = 0;
const char *name;
cpu_dev = get_cpu_device(0);
if (!cpu_dev) {
pr_err("failed to get cpu0 device\n");
return -ENODEV;
}
cpu_clk = clk_get(cpu_dev, NULL);
ret = PTR_ERR_OR_ZERO(cpu_clk);
if (ret) {
/*
* If cpu's clk node is present, but clock is not yet
* registered, we should try deferring probe.
*/
if (ret == -EPROBE_DEFER)
dev_dbg(cpu_dev, "clock not ready, retry\n");
else
dev_err(cpu_dev, "failed to get clock: %d\n", ret);
return ret;
}
clk_put(cpu_clk);
ret = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
if (ret)
return ret;
name = find_supply_name(cpu_dev);
/* Platform doesn't require regulator */
if (!name)
return 0;
cpu_reg = regulator_get_optional(cpu_dev, name);
ret = PTR_ERR_OR_ZERO(cpu_reg);
if (ret) {
/*
* If cpu's regulator supply node is present, but regulator is
* not yet registered, we should try deferring probe.
*/
if (ret == -EPROBE_DEFER)
dev_dbg(cpu_dev, "cpu0 regulator not ready, retry\n");
else
dev_dbg(cpu_dev, "no regulator for cpu0: %d\n", ret);
return ret;
}
regulator_put(cpu_reg);
return 0;
}
static int cpufreq_init(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *freq_table;
struct opp_table *opp_table = NULL;
struct private_data *priv;
struct device *cpu_dev;
struct clk *cpu_clk;
unsigned int transition_latency;
bool fallback = false;
const char *name;
int ret;
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
pr_err("failed to get cpu%d device\n", policy->cpu);
priv = cpufreq_dt_find_data(policy->cpu);
if (!priv) {
pr_err("failed to find data for cpu%d\n", policy->cpu);
return -ENODEV;
}
cpu_dev = priv->cpu_dev;
cpumask_copy(policy->cpus, priv->cpus);
cpu_clk = clk_get(cpu_dev, NULL);
if (IS_ERR(cpu_clk)) {
ret = PTR_ERR(cpu_clk);
@ -174,45 +125,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
return ret;
}
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, policy->cpus);
if (ret) {
if (ret != -ENOENT)
goto out_put_clk;
/*
* operating-points-v2 not supported, fallback to old method of
* finding shared-OPPs for backward compatibility if the
* platform hasn't set sharing CPUs.
*/
if (dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus))
fallback = true;
}
/*
* OPP layer will be taking care of regulators now, but it needs to know
* the name of the regulator first.
*/
name = find_supply_name(cpu_dev);
if (name) {
opp_table = dev_pm_opp_set_regulators(cpu_dev, &name, 1);
if (IS_ERR(opp_table)) {
ret = PTR_ERR(opp_table);
dev_err(cpu_dev, "Failed to set regulator for cpu%d: %d\n",
policy->cpu, ret);
goto out_put_clk;
}
}
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
ret = -ENOMEM;
goto out_put_regulator;
}
priv->reg_name = name;
priv->opp_table = opp_table;
/*
* Initialize OPP tables for all policy->cpus. They will be shared by
* all CPUs which have marked their CPUs shared with OPP bindings.
@ -232,31 +144,17 @@ static int cpufreq_init(struct cpufreq_policy *policy)
*/
ret = dev_pm_opp_get_opp_count(cpu_dev);
if (ret <= 0) {
dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
ret = -EPROBE_DEFER;
dev_err(cpu_dev, "OPP table can't be empty\n");
ret = -ENODEV;
goto out_free_opp;
}
if (fallback) {
cpumask_setall(policy->cpus);
/*
* OPP tables are initialized only for policy->cpu, do it for
* others as well.
*/
ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
if (ret)
dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
__func__, ret);
}
ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
if (ret) {
dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
goto out_free_opp;
}
priv->cpu_dev = cpu_dev;
policy->driver_data = priv;
policy->clk = cpu_clk;
policy->freq_table = freq_table;
@ -288,11 +186,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
out_free_opp:
if (priv->have_static_opps)
dev_pm_opp_of_cpumask_remove_table(policy->cpus);
kfree(priv);
out_put_regulator:
if (name)
dev_pm_opp_put_regulators(opp_table);
out_put_clk:
clk_put(cpu_clk);
return ret;
@ -320,12 +213,7 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
if (priv->have_static_opps)
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
if (priv->reg_name)
dev_pm_opp_put_regulators(priv->opp_table);
clk_put(policy->clk);
kfree(priv);
return 0;
}
@ -344,21 +232,119 @@ static struct cpufreq_driver dt_cpufreq_driver = {
.suspend = cpufreq_generic_suspend,
};
static int dt_cpufreq_early_init(struct device *dev, int cpu)
{
struct private_data *priv;
struct device *cpu_dev;
const char *reg_name;
int ret;
/* Check if this CPU is already covered by some other policy */
if (cpufreq_dt_find_data(cpu))
return 0;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev)
return -EPROBE_DEFER;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
if (!alloc_cpumask_var(&priv->cpus, GFP_KERNEL))
return -ENOMEM;
priv->cpu_dev = cpu_dev;
/* Try to get OPP table early to ensure resources are available */
priv->opp_table = dev_pm_opp_get_opp_table(cpu_dev);
if (IS_ERR(priv->opp_table)) {
ret = PTR_ERR(priv->opp_table);
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev, "failed to get OPP table: %d\n", ret);
goto free_cpumask;
}
/*
* OPP layer will be taking care of regulators now, but it needs to know
* the name of the regulator first.
*/
reg_name = find_supply_name(cpu_dev);
if (reg_name) {
priv->reg_opp_table = dev_pm_opp_set_regulators(cpu_dev,
&reg_name, 1);
if (IS_ERR(priv->reg_opp_table)) {
ret = PTR_ERR(priv->reg_opp_table);
if (ret != -EPROBE_DEFER)
dev_err(cpu_dev, "failed to set regulators: %d\n",
ret);
goto put_table;
}
}
/* Find OPP sharing information so we can fill priv->cpus here */
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, priv->cpus);
if (ret) {
if (ret != -ENOENT)
goto put_reg;
/*
* operating-points-v2 not supported, fall back to all CPUs sharing
* the OPP table for backward compatibility if the platform hasn't
* set sharing CPUs.
*/
if (dev_pm_opp_get_sharing_cpus(cpu_dev, priv->cpus)) {
cpumask_setall(priv->cpus);
/*
* OPP tables are initialized only for cpu, do it for
* others as well.
*/
ret = dev_pm_opp_set_sharing_cpus(cpu_dev, priv->cpus);
if (ret)
dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
__func__, ret);
}
}
list_add(&priv->node, &priv_list);
return 0;
put_reg:
if (priv->reg_opp_table)
dev_pm_opp_put_regulators(priv->reg_opp_table);
put_table:
dev_pm_opp_put_opp_table(priv->opp_table);
free_cpumask:
free_cpumask_var(priv->cpus);
return ret;
}
static void dt_cpufreq_release(void)
{
struct private_data *priv, *tmp;
list_for_each_entry_safe(priv, tmp, &priv_list, node) {
if (priv->reg_opp_table)
dev_pm_opp_put_regulators(priv->reg_opp_table);
dev_pm_opp_put_opp_table(priv->opp_table);
free_cpumask_var(priv->cpus);
list_del(&priv->node);
}
}
static int dt_cpufreq_probe(struct platform_device *pdev)
{
struct cpufreq_dt_platform_data *data = dev_get_platdata(&pdev->dev);
int ret;
int ret, cpu;
/*
* All per-cluster (CPUs sharing clock/voltages) initialization is done
* from ->init(). In probe(), we just need to make sure that clk and
* regulators are available. Else defer probe and retry.
*
* FIXME: Is checking this only for CPU0 sufficient ?
*/
ret = resources_available();
if (ret)
return ret;
/* Request resources early so we can return in case of -EPROBE_DEFER */
for_each_possible_cpu(cpu) {
ret = dt_cpufreq_early_init(&pdev->dev, cpu);
if (ret)
goto err;
}
if (data) {
if (data->have_governor_per_policy)
@ -374,15 +360,21 @@ static int dt_cpufreq_probe(struct platform_device *pdev)
}
ret = cpufreq_register_driver(&dt_cpufreq_driver);
if (ret)
if (ret) {
dev_err(&pdev->dev, "failed register driver: %d\n", ret);
goto err;
}
return 0;
err:
dt_cpufreq_release();
return ret;
}
static int dt_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&dt_cpufreq_driver);
dt_cpufreq_release();
return 0;
}
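
The net effect of the cpufreq-dt refactoring above is that clock, regulator
and OPP lookups for every possible CPU now happen in the platform driver's
probe(), where -EPROBE_DEFER can be handled by the driver core, and ->init()
only looks up data that has already been resolved. A compressed sketch of that
probe() shape, with my_early_init(), my_release() and my_driver standing in
for dt_cpufreq_early_init(), dt_cpufreq_release() and the driver structure:

#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
        int ret, cpu;

        /* Resolve per-CPU resources up front; -EPROBE_DEFER is fine here. */
        for_each_possible_cpu(cpu) {
                ret = my_early_init(&pdev->dev, cpu);
                if (ret)
                        goto err;
        }

        ret = cpufreq_register_driver(&my_driver);
        if (!ret)
                return 0;
err:
        my_release();           /* undo whatever was set up so far */
        return ret;
}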

View File

@ -61,6 +61,12 @@ static struct cpufreq_driver *cpufreq_driver;
static DEFINE_PER_CPU(struct cpufreq_policy *, cpufreq_cpu_data);
static DEFINE_RWLOCK(cpufreq_driver_lock);
static DEFINE_STATIC_KEY_FALSE(cpufreq_freq_invariance);
bool cpufreq_supports_freq_invariance(void)
{
return static_branch_likely(&cpufreq_freq_invariance);
}
/* Flag to suspend/resume CPUFreq governors */
static bool cpufreq_suspended;
@ -154,12 +160,6 @@ u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
}
EXPORT_SYMBOL_GPL(get_cpu_idle_time);
__weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
unsigned long max_freq)
{
}
EXPORT_SYMBOL_GPL(arch_set_freq_scale);
/*
* This is a generic cpufreq init() routine which can be used by cpufreq
* drivers of SMP systems. It will do following:
@ -446,6 +446,10 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
cpufreq_notify_post_transition(policy, freqs, transition_failed);
arch_set_freq_scale(policy->related_cpus,
policy->cur,
policy->cpuinfo.max_freq);
policy->transition_ongoing = false;
policy->transition_task = NULL;
@ -2056,9 +2060,26 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
target_freq = clamp_val(target_freq, policy->min, policy->max);
unsigned int freq;
int cpu;
return cpufreq_driver->fast_switch(policy, target_freq);
target_freq = clamp_val(target_freq, policy->min, policy->max);
freq = cpufreq_driver->fast_switch(policy, target_freq);
if (!freq)
return 0;
policy->cur = freq;
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
cpufreq_stats_record_transition(policy, freq);
if (trace_cpu_frequency_enabled()) {
for_each_cpu(cpu, policy->cpus)
trace_cpu_frequency(freq, cpu);
}
return freq;
}
EXPORT_SYMBOL_GPL(cpufreq_driver_fast_switch);
@ -2710,6 +2731,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
cpufreq_driver = driver_data;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
/*
* Mark support for the scheduler's frequency invariance engine for
* drivers that implement target(), target_index() or fast_switch().
*/
if (!cpufreq_driver->setpolicy) {
static_branch_enable_cpuslocked(&cpufreq_freq_invariance);
pr_debug("supports frequency invariance");
}
if (driver_data->setpolicy)
driver_data->flags |= CPUFREQ_CONST_LOOPS;
@ -2779,6 +2809,7 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
cpus_read_lock();
subsys_interface_unregister(&cpufreq_interface);
remove_boost_sysfs_file();
static_branch_disable_cpuslocked(&cpufreq_freq_invariance);
cpuhp_remove_state_nocalls_cpuslocked(hp_online);
write_lock_irqsave(&cpufreq_driver_lock, flags);
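
Two things fall out of the core changes above: cpufreq_driver_fast_switch()
now updates policy->cur, the frequency scale, the statistics and the
tracepoints itself, and the new cpufreq_supports_freq_invariance() helper lets
other code (the arch topology layer, for example) ask whether such a
scale-setting driver is registered. A driver's ->fast_switch() callback
therefore only programs the hardware and reports the frequency it actually
set, or 0 on failure. A minimal sketch of that contract; my_hw_write() and
struct my_data are invented for illustration:

#include <linux/cpufreq.h>

static unsigned int my_fast_switch(struct cpufreq_policy *policy,
                                   unsigned int target_freq)
{
        struct my_data *data = policy->driver_data;     /* driver-private */

        /*
         * Just program the hardware; the core records statistics, emits
         * the trace event and updates the frequency scale on success.
         */
        if (my_hw_write(data, target_freq))
                return 0;                       /* switch failed */

        return target_freq;                     /* frequency in kHz */
}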

View File

@ -19,64 +19,104 @@ struct cpufreq_stats {
unsigned int state_num;
unsigned int last_index;
u64 *time_in_state;
spinlock_t lock;
unsigned int *freq_table;
unsigned int *trans_table;
/* Deferred reset */
unsigned int reset_pending;
unsigned long long reset_time;
};
static void cpufreq_stats_update(struct cpufreq_stats *stats)
static void cpufreq_stats_update(struct cpufreq_stats *stats,
unsigned long long time)
{
unsigned long long cur_time = get_jiffies_64();
stats->time_in_state[stats->last_index] += cur_time - stats->last_time;
stats->time_in_state[stats->last_index] += cur_time - time;
stats->last_time = cur_time;
}
static void cpufreq_stats_clear_table(struct cpufreq_stats *stats)
static void cpufreq_stats_reset_table(struct cpufreq_stats *stats)
{
unsigned int count = stats->max_state;
spin_lock(&stats->lock);
memset(stats->time_in_state, 0, count * sizeof(u64));
memset(stats->trans_table, 0, count * count * sizeof(int));
stats->last_time = get_jiffies_64();
stats->total_trans = 0;
spin_unlock(&stats->lock);
/* Adjust for the time elapsed since reset was requested */
WRITE_ONCE(stats->reset_pending, 0);
/*
* Prevent the reset_time read from being reordered before the
* reset_pending accesses in cpufreq_stats_record_transition().
*/
smp_rmb();
cpufreq_stats_update(stats, READ_ONCE(stats->reset_time));
}
static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf)
{
return sprintf(buf, "%d\n", policy->stats->total_trans);
struct cpufreq_stats *stats = policy->stats;
if (READ_ONCE(stats->reset_pending))
return sprintf(buf, "%d\n", 0);
else
return sprintf(buf, "%u\n", stats->total_trans);
}
cpufreq_freq_attr_ro(total_trans);
static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
{
struct cpufreq_stats *stats = policy->stats;
bool pending = READ_ONCE(stats->reset_pending);
unsigned long long time;
ssize_t len = 0;
int i;
if (policy->fast_switch_enabled)
return 0;
spin_lock(&stats->lock);
cpufreq_stats_update(stats);
spin_unlock(&stats->lock);
for (i = 0; i < stats->state_num; i++) {
if (pending) {
if (i == stats->last_index) {
/*
* Prevent the reset_time read from occurring
* before the reset_pending read above.
*/
smp_rmb();
time = get_jiffies_64() - READ_ONCE(stats->reset_time);
} else {
time = 0;
}
} else {
time = stats->time_in_state[i];
if (i == stats->last_index)
time += get_jiffies_64() - stats->last_time;
}
len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
(unsigned long long)
jiffies_64_to_clock_t(stats->time_in_state[i]));
jiffies_64_to_clock_t(time));
}
return len;
}
cpufreq_freq_attr_ro(time_in_state);
/* We don't care what is written to the attribute */
static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
size_t count)
{
/* We don't care what is written to the attribute. */
cpufreq_stats_clear_table(policy->stats);
struct cpufreq_stats *stats = policy->stats;
/*
* Defer resetting of stats to cpufreq_stats_record_transition() to
* avoid races.
*/
WRITE_ONCE(stats->reset_time, get_jiffies_64());
/*
* The memory barrier below is to prevent the readers of reset_time from
* seeing a stale or partially updated value.
*/
smp_wmb();
WRITE_ONCE(stats->reset_pending, 1);
return count;
}
cpufreq_freq_attr_wo(reset);
@ -84,11 +124,9 @@ cpufreq_freq_attr_wo(reset);
static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
{
struct cpufreq_stats *stats = policy->stats;
bool pending = READ_ONCE(stats->reset_pending);
ssize_t len = 0;
int i, j;
if (policy->fast_switch_enabled)
return 0;
int i, j, count;
len += scnprintf(buf + len, PAGE_SIZE - len, " From : To\n");
len += scnprintf(buf + len, PAGE_SIZE - len, " : ");
@ -113,8 +151,13 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
for (j = 0; j < stats->state_num; j++) {
if (len >= PAGE_SIZE)
break;
len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ",
stats->trans_table[i*stats->max_state+j]);
if (pending)
count = 0;
else
count = stats->trans_table[i * stats->max_state + j];
len += scnprintf(buf + len, PAGE_SIZE - len, "%9u ", count);
}
if (len >= PAGE_SIZE)
break;
@ -208,7 +251,6 @@ void cpufreq_stats_create_table(struct cpufreq_policy *policy)
stats->state_num = i;
stats->last_time = get_jiffies_64();
stats->last_index = freq_table_get_index(stats, policy->cur);
spin_lock_init(&stats->lock);
policy->stats = stats;
ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
@ -228,23 +270,22 @@ void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
struct cpufreq_stats *stats = policy->stats;
int old_index, new_index;
if (!stats) {
pr_debug("%s: No stats found\n", __func__);
if (unlikely(!stats))
return;
}
if (unlikely(READ_ONCE(stats->reset_pending)))
cpufreq_stats_reset_table(stats);
old_index = stats->last_index;
new_index = freq_table_get_index(stats, new_freq);
/* We can't do stats->time_in_state[-1]= .. */
if (old_index == -1 || new_index == -1 || old_index == new_index)
if (unlikely(old_index == -1 || new_index == -1 || old_index == new_index))
return;
spin_lock(&stats->lock);
cpufreq_stats_update(stats);
cpufreq_stats_update(stats, stats->last_time);
stats->last_index = new_index;
stats->trans_table[old_index * stats->max_state + new_index]++;
stats->total_trans++;
spin_unlock(&stats->lock);
}
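
The lockless reset above depends on the reset_time store being ordered before
the reset_pending store, with the reader mirroring that ordering before it
consumes reset_time. A small, self-contained user-space analogue of the same
publish/consume pattern, using C11 atomics where the kernel code uses
WRITE_ONCE()/READ_ONCE() with smp_wmb()/smp_rmb() (illustration only, not
kernel code):

#include <stdatomic.h>
#include <stdio.h>

static unsigned long long reset_time;
static atomic_int reset_pending;

static void request_reset(unsigned long long now)
{
        reset_time = now;                               /* payload first */
        atomic_store_explicit(&reset_pending, 1,
                              memory_order_release);    /* then the flag */
}

static void record_transition(unsigned long long now)
{
        if (atomic_load_explicit(&reset_pending, memory_order_acquire)) {
                /* Safe: the acquire above pairs with the release store. */
                printf("reset requested %llu ticks ago\n", now - reset_time);
                atomic_store_explicit(&reset_pending, 0,
                                      memory_order_relaxed);
        }
}

int main(void)
{
        request_reset(100);
        record_transition(250);
        return 0;
}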

View File

@ -48,7 +48,6 @@ static struct clk_bulk_data clks[] = {
};
static struct device *cpu_dev;
static bool free_opp;
static struct cpufreq_frequency_table *freq_table;
static unsigned int max_freq;
static unsigned int transition_latency;
@ -390,9 +389,6 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
goto put_reg;
}
/* Because we have added the OPPs here, we must free them */
free_opp = true;
if (of_machine_is_compatible("fsl,imx6ul") ||
of_machine_is_compatible("fsl,imx6ull")) {
ret = imx6ul_opp_check_speed_grading(cpu_dev);
@ -507,8 +503,7 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
free_freq_table:
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
out_free_opp:
if (free_opp)
dev_pm_opp_of_remove_table(cpu_dev);
dev_pm_opp_of_remove_table(cpu_dev);
put_reg:
if (!IS_ERR(arm_reg))
regulator_put(arm_reg);
@ -528,8 +523,7 @@ static int imx6q_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&imx6q_cpufreq_driver);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
if (free_opp)
dev_pm_opp_of_remove_table(cpu_dev);
dev_pm_opp_of_remove_table(cpu_dev);
regulator_put(arm_reg);
if (!IS_ERR(pu_reg))
regulator_put(pu_reg);

View File

@ -19,18 +19,23 @@
#define LUT_L_VAL GENMASK(7, 0)
#define LUT_CORE_COUNT GENMASK(18, 16)
#define LUT_VOLT GENMASK(11, 0)
#define LUT_ROW_SIZE 32
#define CLK_HW_DIV 2
#define LUT_TURBO_IND 1
/* Register offsets */
#define REG_ENABLE 0x0
#define REG_FREQ_LUT 0x110
#define REG_VOLT_LUT 0x114
#define REG_PERF_STATE 0x920
struct qcom_cpufreq_soc_data {
u32 reg_enable;
u32 reg_freq_lut;
u32 reg_volt_lut;
u32 reg_perf_state;
u8 lut_row_size;
};
struct qcom_cpufreq_data {
void __iomem *base;
const struct qcom_cpufreq_soc_data *soc_data;
};
static unsigned long cpu_hw_rate, xo_rate;
static struct platform_device *global_pdev;
static bool icc_scaling_enabled;
static int qcom_cpufreq_set_bw(struct cpufreq_policy *policy,
@ -77,22 +82,22 @@ static int qcom_cpufreq_update_opp(struct device *cpu_dev,
static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
unsigned int index)
{
void __iomem *perf_state_reg = policy->driver_data;
struct qcom_cpufreq_data *data = policy->driver_data;
const struct qcom_cpufreq_soc_data *soc_data = data->soc_data;
unsigned long freq = policy->freq_table[index].frequency;
writel_relaxed(index, perf_state_reg);
writel_relaxed(index, data->base + soc_data->reg_perf_state);
if (icc_scaling_enabled)
qcom_cpufreq_set_bw(policy, freq);
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return 0;
}
static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
{
void __iomem *perf_state_reg;
struct qcom_cpufreq_data *data;
const struct qcom_cpufreq_soc_data *soc_data;
struct cpufreq_policy *policy;
unsigned int index;
@ -100,9 +105,10 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
if (!policy)
return 0;
perf_state_reg = policy->driver_data;
data = policy->driver_data;
soc_data = data->soc_data;
index = readl_relaxed(perf_state_reg);
index = readl_relaxed(data->base + soc_data->reg_perf_state);
index = min(index, LUT_MAX_ENTRIES - 1);
return policy->freq_table[index].frequency;
@ -111,23 +117,18 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
void __iomem *perf_state_reg = policy->driver_data;
struct qcom_cpufreq_data *data = policy->driver_data;
const struct qcom_cpufreq_soc_data *soc_data = data->soc_data;
unsigned int index;
unsigned long freq;
index = policy->cached_resolved_idx;
writel_relaxed(index, perf_state_reg);
writel_relaxed(index, data->base + soc_data->reg_perf_state);
freq = policy->freq_table[index].frequency;
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return freq;
return policy->freq_table[index].frequency;
}
static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
struct cpufreq_policy *policy,
void __iomem *base)
struct cpufreq_policy *policy)
{
u32 data, src, lval, i, core_count, prev_freq = 0, freq;
u32 volt;
@ -135,6 +136,8 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
struct dev_pm_opp *opp;
unsigned long rate;
int ret;
struct qcom_cpufreq_data *drv_data = policy->driver_data;
const struct qcom_cpufreq_soc_data *soc_data = drv_data->soc_data;
table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
if (!table)
@ -161,14 +164,14 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
}
for (i = 0; i < LUT_MAX_ENTRIES; i++) {
data = readl_relaxed(base + REG_FREQ_LUT +
i * LUT_ROW_SIZE);
data = readl_relaxed(drv_data->base + soc_data->reg_freq_lut +
i * soc_data->lut_row_size);
src = FIELD_GET(LUT_SRC, data);
lval = FIELD_GET(LUT_L_VAL, data);
core_count = FIELD_GET(LUT_CORE_COUNT, data);
data = readl_relaxed(base + REG_VOLT_LUT +
i * LUT_ROW_SIZE);
data = readl_relaxed(drv_data->base + soc_data->reg_volt_lut +
i * soc_data->lut_row_size);
volt = FIELD_GET(LUT_VOLT, data) * 1000;
if (src)
@ -177,10 +180,15 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
freq = cpu_hw_rate / 1000;
if (freq != prev_freq && core_count != LUT_TURBO_IND) {
table[i].frequency = freq;
qcom_cpufreq_update_opp(cpu_dev, freq, volt);
dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
if (!qcom_cpufreq_update_opp(cpu_dev, freq, volt)) {
table[i].frequency = freq;
dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
freq, core_count);
} else {
dev_warn(cpu_dev, "failed to update OPP for freq=%d\n", freq);
table[i].frequency = CPUFREQ_ENTRY_INVALID;
}
} else if (core_count == LUT_TURBO_IND) {
table[i].frequency = CPUFREQ_ENTRY_INVALID;
}
@ -197,9 +205,13 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
* as the boost frequency
*/
if (prev->frequency == CPUFREQ_ENTRY_INVALID) {
prev->frequency = prev_freq;
prev->flags = CPUFREQ_BOOST_FREQ;
qcom_cpufreq_update_opp(cpu_dev, prev_freq, volt);
if (!qcom_cpufreq_update_opp(cpu_dev, prev_freq, volt)) {
prev->frequency = prev_freq;
prev->flags = CPUFREQ_BOOST_FREQ;
} else {
dev_warn(cpu_dev, "failed to update OPP for freq=%d\n",
freq);
}
}
break;
@ -238,14 +250,38 @@ static void qcom_get_related_cpus(int index, struct cpumask *m)
}
}
static const struct qcom_cpufreq_soc_data qcom_soc_data = {
.reg_enable = 0x0,
.reg_freq_lut = 0x110,
.reg_volt_lut = 0x114,
.reg_perf_state = 0x920,
.lut_row_size = 32,
};
static const struct qcom_cpufreq_soc_data epss_soc_data = {
.reg_enable = 0x0,
.reg_freq_lut = 0x100,
.reg_volt_lut = 0x200,
.reg_perf_state = 0x320,
.lut_row_size = 4,
};
static const struct of_device_id qcom_cpufreq_hw_match[] = {
{ .compatible = "qcom,cpufreq-hw", .data = &qcom_soc_data },
{ .compatible = "qcom,cpufreq-epss", .data = &epss_soc_data },
{}
};
MODULE_DEVICE_TABLE(of, qcom_cpufreq_hw_match);
static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
{
struct device *dev = &global_pdev->dev;
struct platform_device *pdev = cpufreq_get_driver_data();
struct device *dev = &pdev->dev;
struct of_phandle_args args;
struct device_node *cpu_np;
struct device *cpu_dev;
struct resource *res;
void __iomem *base;
struct qcom_cpufreq_data *data;
int ret, index;
cpu_dev = get_cpu_device(policy->cpu);
@ -267,16 +303,21 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
index = args.args[0];
res = platform_get_resource(global_pdev, IORESOURCE_MEM, index);
if (!res)
return -ENODEV;
base = devm_platform_ioremap_resource(pdev, index);
if (IS_ERR(base))
return PTR_ERR(base);
base = devm_ioremap(dev, res->start, resource_size(res));
if (!base)
return -ENOMEM;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data) {
ret = -ENOMEM;
goto error;
}
data->soc_data = of_device_get_match_data(&pdev->dev);
data->base = base;
/* HW should be in enabled state to proceed */
if (!(readl_relaxed(base + REG_ENABLE) & 0x1)) {
if (!(readl_relaxed(base + data->soc_data->reg_enable) & 0x1)) {
dev_err(dev, "Domain-%d cpufreq hardware not enabled\n", index);
ret = -ENODEV;
goto error;
@ -289,9 +330,9 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
goto error;
}
policy->driver_data = base + REG_PERF_STATE;
policy->driver_data = data;
ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy, base);
ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy);
if (ret) {
dev_err(dev, "Domain-%d failed to read LUT\n", index);
goto error;
@ -315,12 +356,13 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
void __iomem *base = policy->driver_data - REG_PERF_STATE;
struct qcom_cpufreq_data *data = policy->driver_data;
struct platform_device *pdev = cpufreq_get_driver_data();
dev_pm_opp_remove_all_dynamic(cpu_dev);
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
kfree(policy->freq_table);
devm_iounmap(&global_pdev->dev, base);
devm_iounmap(&pdev->dev, data->base);
return 0;
}
@ -365,7 +407,7 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
cpu_hw_rate = clk_get_rate(clk) / CLK_HW_DIV;
clk_put(clk);
global_pdev = pdev;
cpufreq_qcom_hw_driver.driver_data = pdev;
/* Check for optional interconnect paths on CPU0 */
cpu_dev = get_cpu_device(0);
@ -390,12 +432,6 @@ static int qcom_cpufreq_hw_driver_remove(struct platform_device *pdev)
return cpufreq_unregister_driver(&cpufreq_qcom_hw_driver);
}
static const struct of_device_id qcom_cpufreq_hw_match[] = {
{ .compatible = "qcom,cpufreq-hw" },
{}
};
MODULE_DEVICE_TABLE(of, qcom_cpufreq_hw_match);
static struct platform_driver qcom_cpufreq_hw_driver = {
.probe = qcom_cpufreq_hw_driver_probe,
.remove = qcom_cpufreq_hw_driver_remove,
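
With the register layout carried in per-SoC match data, supporting another
frequency-domain block is mostly a matter of describing its offsets and adding
a compatible entry, reusing struct qcom_cpufreq_soc_data from above. A
hypothetical example; the "qcom,cpufreq-newsoc" string and its offsets are
invented for illustration:

static const struct qcom_cpufreq_soc_data newsoc_soc_data = {
        .reg_enable     = 0x0,
        .reg_freq_lut   = 0x140,        /* invented offsets */
        .reg_volt_lut   = 0x240,
        .reg_perf_state = 0x340,
        .lut_row_size   = 4,
};

/* plus one more entry in qcom_cpufreq_hw_match[]:
 *      { .compatible = "qcom,cpufreq-newsoc", .data = &newsoc_soc_data },
 */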

View File

@ -590,6 +590,7 @@ static struct notifier_block s5pv210_cpufreq_reboot_notifier = {
static int s5pv210_cpufreq_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np;
int id, result = 0;
@ -602,28 +603,20 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
* cpufreq-dt driver.
*/
arm_regulator = regulator_get(NULL, "vddarm");
if (IS_ERR(arm_regulator)) {
if (PTR_ERR(arm_regulator) == -EPROBE_DEFER)
pr_debug("vddarm regulator not ready, defer\n");
else
pr_err("failed to get regulator vddarm\n");
return PTR_ERR(arm_regulator);
}
if (IS_ERR(arm_regulator))
return dev_err_probe(dev, PTR_ERR(arm_regulator),
"failed to get regulator vddarm\n");
int_regulator = regulator_get(NULL, "vddint");
if (IS_ERR(int_regulator)) {
if (PTR_ERR(int_regulator) == -EPROBE_DEFER)
pr_debug("vddint regulator not ready, defer\n");
else
pr_err("failed to get regulator vddint\n");
result = PTR_ERR(int_regulator);
result = dev_err_probe(dev, PTR_ERR(int_regulator),
"failed to get regulator vddint\n");
goto err_int_regulator;
}
np = of_find_compatible_node(NULL, NULL, "samsung,s5pv210-clock");
if (!np) {
pr_err("%s: failed to find clock controller DT node\n",
__func__);
dev_err(dev, "failed to find clock controller DT node\n");
result = -ENODEV;
goto err_clock;
}
@ -631,7 +624,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
clk_base = of_iomap(np, 0);
of_node_put(np);
if (!clk_base) {
pr_err("%s: failed to map clock registers\n", __func__);
dev_err(dev, "failed to map clock registers\n");
result = -EFAULT;
goto err_clock;
}
@ -639,8 +632,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
for_each_compatible_node(np, NULL, "samsung,s5pv210-dmc") {
id = of_alias_get_id(np, "dmc");
if (id < 0 || id >= ARRAY_SIZE(dmc_base)) {
pr_err("%s: failed to get alias of dmc node '%pOFn'\n",
__func__, np);
dev_err(dev, "failed to get alias of dmc node '%pOFn'\n", np);
of_node_put(np);
result = id;
goto err_clk_base;
@ -648,8 +640,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
dmc_base[id] = of_iomap(np, 0);
if (!dmc_base[id]) {
pr_err("%s: failed to map dmc%d registers\n",
__func__, id);
dev_err(dev, "failed to map dmc%d registers\n", id);
of_node_put(np);
result = -EFAULT;
goto err_dmc;
@ -658,7 +649,7 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
for (id = 0; id < ARRAY_SIZE(dmc_base); ++id) {
if (!dmc_base[id]) {
pr_err("%s: failed to find dmc%d node\n", __func__, id);
dev_err(dev, "failed to find dmc%d node\n", id);
result = -ENODEV;
goto err_dmc;
}
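
dev_err_probe() folds the usual "log unless the error is -EPROBE_DEFER, then
return it" boilerplate into one call, logging probe deferrals at debug level
only. A generic sketch of the conversion pattern used above; the "vddfoo"
supply name and my_get_supply() helper are made up:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int my_get_supply(struct device *dev, struct regulator **reg)
{
        *reg = regulator_get(dev, "vddfoo");    /* hypothetical supply */
        if (IS_ERR(*reg))
                return dev_err_probe(dev, PTR_ERR(*reg),
                                     "failed to get regulator vddfoo\n");

        return 0;
}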

View File

@ -48,16 +48,11 @@ static unsigned int scmi_cpufreq_get_rate(unsigned int cpu)
static int
scmi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
{
int ret;
struct scmi_data *priv = policy->driver_data;
struct scmi_perf_ops *perf_ops = handle->perf_ops;
u64 freq = policy->freq_table[index].frequency;
ret = perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
if (!ret)
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return ret;
return perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false);
}
static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
@ -67,11 +62,8 @@ static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
struct scmi_perf_ops *perf_ops = handle->perf_ops;
if (!perf_ops->freq_set(handle, priv->domain_id,
target_freq * 1000, true)) {
arch_set_freq_scale(policy->related_cpus, target_freq,
policy->cpuinfo.max_freq);
target_freq * 1000, true))
return target_freq;
}
return 0;
}

View File

@ -47,9 +47,8 @@ static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
static int
scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
{
unsigned long freq = policy->freq_table[index].frequency;
u64 rate = policy->freq_table[index].frequency * 1000;
struct scpi_data *priv = policy->driver_data;
u64 rate = freq * 1000;
int ret;
ret = clk_set_rate(priv->clk, rate);
@ -60,9 +59,6 @@ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
if (clk_get_rate(priv->clk) != rate)
return -EIO;
arch_set_freq_scale(policy->related_cpus, freq,
policy->cpuinfo.max_freq);
return 0;
}

View File

@ -141,7 +141,8 @@ static const struct reg_field sti_stih407_dvfs_regfields[DVFS_MAX_REGFIELDS] = {
static const struct reg_field *sti_cpufreq_match(void)
{
if (of_machine_is_compatible("st,stih407") ||
of_machine_is_compatible("st,stih410"))
of_machine_is_compatible("st,stih410") ||
of_machine_is_compatible("st,stih418"))
return sti_stih407_dvfs_regfields;
return NULL;
@ -258,7 +259,8 @@ static int sti_cpufreq_init(void)
int ret;
if ((!of_machine_is_compatible("st,stih407")) &&
(!of_machine_is_compatible("st,stih410")))
(!of_machine_is_compatible("st,stih410")) &&
(!of_machine_is_compatible("st,stih418")))
return -ENODEV;
ddata.cpu = get_cpu_device(0);

View File

@ -14,6 +14,7 @@
#define EDVD_CORE_VOLT_FREQ(core) (0x20 + (core) * 0x4)
#define EDVD_CORE_VOLT_FREQ_F_SHIFT 0
#define EDVD_CORE_VOLT_FREQ_F_MASK 0xffff
#define EDVD_CORE_VOLT_FREQ_V_SHIFT 16
struct tegra186_cpufreq_cluster_info {
@ -91,10 +92,39 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
return 0;
}
static unsigned int tegra186_cpufreq_get(unsigned int cpu)
{
struct cpufreq_frequency_table *tbl;
struct cpufreq_policy *policy;
void __iomem *edvd_reg;
unsigned int i, freq = 0;
u32 ndiv;
policy = cpufreq_cpu_get(cpu);
if (!policy)
return 0;
tbl = policy->freq_table;
edvd_reg = policy->driver_data;
ndiv = readl(edvd_reg) & EDVD_CORE_VOLT_FREQ_F_MASK;
for (i = 0; tbl[i].frequency != CPUFREQ_TABLE_END; i++) {
if ((tbl[i].driver_data & EDVD_CORE_VOLT_FREQ_F_MASK) == ndiv) {
freq = tbl[i].frequency;
break;
}
}
cpufreq_cpu_put(policy);
return freq;
}
static struct cpufreq_driver tegra186_cpufreq_driver = {
.name = "tegra186",
.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.get = tegra186_cpufreq_get,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = tegra186_cpufreq_set_target,
.init = tegra186_cpufreq_init,

View File

@ -182,7 +182,6 @@ static int ve_spc_cpufreq_set_target(struct cpufreq_policy *policy,
{
u32 cpu = policy->cpu, cur_cluster, new_cluster, actual_cluster;
unsigned int freqs_new;
int ret;
cur_cluster = cpu_to_cluster(cpu);
new_cluster = actual_cluster = per_cpu(physical_cluster, cpu);
@ -197,15 +196,8 @@ static int ve_spc_cpufreq_set_target(struct cpufreq_policy *policy,
new_cluster = A15_CLUSTER;
}
ret = ve_spc_cpufreq_set_rate(cpu, actual_cluster, new_cluster,
freqs_new);
if (!ret) {
arch_set_freq_scale(policy->related_cpus, freqs_new,
policy->cpuinfo.max_freq);
}
return ret;
return ve_spc_cpufreq_set_rate(cpu, actual_cluster, new_cluster,
freqs_new);
}
static inline u32 get_table_count(struct cpufreq_frequency_table *table)

View File

@ -105,7 +105,7 @@ static void psci_pd_free_states(struct genpd_power_state *states,
kfree(states);
}
static int psci_pd_init(struct device_node *np)
static int psci_pd_init(struct device_node *np, bool use_osi)
{
struct generic_pm_domain *pd;
struct psci_pd_provider *pd_provider;
@ -135,11 +135,16 @@ static int psci_pd_init(struct device_node *np)
pd->free_states = psci_pd_free_states;
pd->name = kbasename(pd->name);
pd->power_off = psci_pd_power_off;
pd->states = states;
pd->state_count = state_count;
pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
/* Allow power off when OSI has been successfully enabled. */
if (use_osi)
pd->power_off = psci_pd_power_off;
else
pd->flags |= GENPD_FLAG_ALWAYS_ON;
/* Use governor for CPU PM domains if it has some states to manage. */
pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
@ -190,7 +195,7 @@ static void psci_pd_remove(void)
}
}
static int psci_pd_init_topology(struct device_node *np, bool add)
static int psci_pd_init_topology(struct device_node *np)
{
struct device_node *node;
struct of_phandle_args child, parent;
@ -203,9 +208,7 @@ static int psci_pd_init_topology(struct device_node *np, bool add)
child.np = node;
child.args_count = 0;
ret = add ? of_genpd_add_subdomain(&parent, &child) :
of_genpd_remove_subdomain(&parent, &child);
ret = of_genpd_add_subdomain(&parent, &child);
of_node_put(parent.np);
if (ret) {
of_node_put(node);
@ -216,14 +219,20 @@ static int psci_pd_init_topology(struct device_node *np, bool add)
return 0;
}
static int psci_pd_add_topology(struct device_node *np)
static bool psci_pd_try_set_osi_mode(void)
{
return psci_pd_init_topology(np, true);
}
int ret;
static void psci_pd_remove_topology(struct device_node *np)
{
psci_pd_init_topology(np, false);
if (!psci_has_osi_support())
return false;
ret = psci_set_osi_mode(true);
if (ret) {
pr_warn("failed to enable OSI mode: %d\n", ret);
return false;
}
return true;
}
static void psci_cpuidle_domain_sync_state(struct device *dev)
@ -244,14 +253,14 @@ static int psci_cpuidle_domain_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct device_node *node;
bool use_osi;
int ret = 0, pd_count = 0;
if (!np)
return -ENODEV;
/* Currently limit the hierarchical topology to be used in OSI mode. */
if (!psci_has_osi_support())
return 0;
/* If OSI mode is supported, let's try to enable it. */
use_osi = psci_pd_try_set_osi_mode();
/*
* Parse child nodes for the "#power-domain-cells" property and
@ -261,7 +270,7 @@ static int psci_cpuidle_domain_probe(struct platform_device *pdev)
if (!of_find_property(node, "#power-domain-cells", NULL))
continue;
ret = psci_pd_init(node);
ret = psci_pd_init(node, use_osi);
if (ret)
goto put_node;
@ -270,30 +279,24 @@ static int psci_cpuidle_domain_probe(struct platform_device *pdev)
/* Bail out if not using the hierarchical CPU topology. */
if (!pd_count)
return 0;
goto no_pd;
/* Link genpd masters/subdomains to model the CPU topology. */
ret = psci_pd_add_topology(np);
ret = psci_pd_init_topology(np);
if (ret)
goto remove_pd;
/* Try to enable OSI mode. */
ret = psci_set_osi_mode();
if (ret) {
pr_warn("failed to enable OSI mode: %d\n", ret);
psci_pd_remove_topology(np);
goto remove_pd;
}
pr_info("Initialized CPU PM domain topology\n");
return 0;
put_node:
of_node_put(node);
remove_pd:
if (pd_count)
psci_pd_remove();
psci_pd_remove();
pr_err("failed to create CPU PM domains ret=%d\n", ret);
no_pd:
if (use_osi)
psci_set_osi_mode(false);
return ret;
}

View File

@ -172,7 +172,7 @@ static int tegra_cpuidle_coupled_barrier(struct cpuidle_device *dev)
static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
int index, unsigned int cpu)
{
int ret;
int err;
/*
* CC6 state is the "CPU cluster power-off" state. In order to
@ -183,9 +183,9 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
* CPU cores, GIC and L2 cache).
*/
if (index == TEGRA_CC6) {
ret = tegra_cpuidle_coupled_barrier(dev);
if (ret)
return ret;
err = tegra_cpuidle_coupled_barrier(dev);
if (err)
return err;
}
local_fiq_disable();
@ -194,15 +194,15 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
switch (index) {
case TEGRA_C7:
ret = tegra_cpuidle_c7_enter();
err = tegra_cpuidle_c7_enter();
break;
case TEGRA_CC6:
ret = tegra_cpuidle_cc6_enter(cpu);
err = tegra_cpuidle_cc6_enter(cpu);
break;
default:
ret = -EINVAL;
err = -EINVAL;
break;
}
@ -210,7 +210,7 @@ static int tegra_cpuidle_state_enter(struct cpuidle_device *dev,
tegra_pm_clear_cpu_in_lp2();
local_fiq_enable();
return ret;
return err ?: index;
}
static int tegra_cpuidle_adjust_state_index(int index, unsigned int cpu)
@ -236,21 +236,27 @@ static int tegra_cpuidle_enter(struct cpuidle_device *dev,
int index)
{
unsigned int cpu = cpu_logical_map(dev->cpu);
int err;
int ret;
index = tegra_cpuidle_adjust_state_index(index, cpu);
if (dev->states_usage[index].disable)
return -1;
if (index == TEGRA_C1)
err = arm_cpuidle_simple_enter(dev, drv, index);
ret = arm_cpuidle_simple_enter(dev, drv, index);
else
err = tegra_cpuidle_state_enter(dev, index, cpu);
ret = tegra_cpuidle_state_enter(dev, index, cpu);
if (err && (err != -EINTR || index != TEGRA_CC6))
pr_err_once("failed to enter state %d err: %d\n", index, err);
if (ret < 0) {
if (ret != -EINTR || index != TEGRA_CC6)
pr_err_once("failed to enter state %d err: %d\n",
index, ret);
index = -1;
} else {
index = ret;
}
return err ? -1 : index;
return index;
}
static int tegra114_enter_s2idle(struct cpuidle_device *dev,

View File

@ -297,6 +297,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
}
} else {
dev->last_residency_ns = 0;
dev->states_usage[index].rejected++;
}
return entered_state;

View File

@ -256,6 +256,7 @@ define_show_state_time_function(exit_latency)
define_show_state_time_function(target_residency)
define_show_state_function(power_usage)
define_show_state_ull_function(usage)
define_show_state_ull_function(rejected)
define_show_state_str_function(name)
define_show_state_str_function(desc)
define_show_state_ull_function(above)
@ -312,6 +313,7 @@ define_one_state_ro(latency, show_state_exit_latency);
define_one_state_ro(residency, show_state_target_residency);
define_one_state_ro(power, show_state_power_usage);
define_one_state_ro(usage, show_state_usage);
define_one_state_ro(rejected, show_state_rejected);
define_one_state_ro(time, show_state_time);
define_one_state_rw(disable, show_state_disable, store_state_disable);
define_one_state_ro(above, show_state_above);
@ -325,6 +327,7 @@ static struct attribute *cpuidle_state_default_attrs[] = {
&attr_residency.attr,
&attr_power.attr,
&attr_usage.attr,
&attr_rejected.attr,
&attr_time.attr,
&attr_disable.attr,
&attr_above.attr,
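
The new counter is exposed next to the existing usage and time attributes,
for example as /sys/devices/system/cpu/cpu0/cpuidle/state1/rejected. A trivial
user-space reader; the path and state index are chosen only as an example:

#include <stdio.h>

int main(void)
{
        const char *path =
                "/sys/devices/system/cpu/cpu0/cpuidle/state1/rejected";
        unsigned long long rejected;
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return 1;
        }
        if (fscanf(f, "%llu", &rejected) == 1)
                printf("state1 entry rejections: %llu\n", rejected);
        fclose(f);
        return 0;
}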

View File

@ -213,20 +213,21 @@ EXPORT_SYMBOL_GPL(devfreq_event_reset_event);
* devfreq_event_get_edev_by_phandle() - Get the devfreq-event dev from
* devicetree.
* @dev : the pointer to the given device
* @phandle_name: name of property holding a phandle value
* @index : the index into list of devfreq-event device
*
* Note that this function returns a pointer to the devfreq-event device.
*/
struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(struct device *dev,
int index)
const char *phandle_name, int index)
{
struct device_node *node;
struct devfreq_event_dev *edev;
if (!dev->of_node)
if (!dev->of_node || !phandle_name)
return ERR_PTR(-EINVAL);
node = of_parse_phandle(dev->of_node, "devfreq-events", index);
node = of_parse_phandle(dev->of_node, phandle_name, index);
if (!node)
return ERR_PTR(-ENODEV);
@ -258,19 +259,20 @@ EXPORT_SYMBOL_GPL(devfreq_event_get_edev_by_phandle);
/**
* devfreq_event_get_edev_count() - Get the count of devfreq-event dev
* @dev : the pointer to the given device
* @phandle_name: name of property holding a phandle value
*
* Note that this function returns the count of devfreq-event devices.
*/
int devfreq_event_get_edev_count(struct device *dev)
int devfreq_event_get_edev_count(struct device *dev, const char *phandle_name)
{
int count;
if (!dev->of_node) {
if (!dev->of_node || !phandle_name) {
dev_err(dev, "device does not have a device node entry\n");
return -EINVAL;
}
count = of_property_count_elems_of_size(dev->of_node, "devfreq-events",
count = of_property_count_elems_of_size(dev->of_node, phandle_name,
sizeof(u32));
if (count < 0) {
dev_err(dev,
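
Passing the property name explicitly lets a consumer use a binding of its own
rather than the fixed "devfreq-events" phandle list. A hedged sketch of such a
consumer; the "vendor,perf-events" property and my_probe_events() helper are
hypothetical:

#include <linux/devfreq-event.h>
#include <linux/err.h>

static int my_probe_events(struct device *dev,
                           struct devfreq_event_dev **edevs, int max)
{
        int i, count;

        count = devfreq_event_get_edev_count(dev, "vendor,perf-events");
        if (count < 0)
                return count;
        if (count > max)
                count = max;

        for (i = 0; i < count; i++) {
                edevs[i] = devfreq_event_get_edev_by_phandle(dev,
                                                "vendor,perf-events", i);
                if (IS_ERR(edevs[i]))
                        return -EPROBE_DEFER;   /* provider not ready yet */
        }

        return count;
}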

View File

@ -984,47 +984,74 @@ EXPORT_SYMBOL(devm_devfreq_add_device);
#ifdef CONFIG_OF
/*
* devfreq_get_devfreq_by_phandle - Get the devfreq device from devicetree
* @dev - instance to the given device
* @index - index into list of devfreq
* devfreq_get_devfreq_by_node - Get the devfreq device from devicetree
* @node - pointer to device_node
*
* return the instance of the devfreq device
*/
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
{
struct device_node *node;
struct devfreq *devfreq;
if (!dev)
return ERR_PTR(-EINVAL);
if (!dev->of_node)
return ERR_PTR(-EINVAL);
node = of_parse_phandle(dev->of_node, "devfreq", index);
if (!node)
return ERR_PTR(-ENODEV);
return ERR_PTR(-EINVAL);
mutex_lock(&devfreq_list_lock);
list_for_each_entry(devfreq, &devfreq_list, node) {
if (devfreq->dev.parent
&& devfreq->dev.parent->of_node == node) {
mutex_unlock(&devfreq_list_lock);
of_node_put(node);
return devfreq;
}
}
mutex_unlock(&devfreq_list_lock);
return ERR_PTR(-ENODEV);
}
/*
* devfreq_get_devfreq_by_phandle - Get the devfreq device from devicetree
* @dev - instance to the given device
* @phandle_name - name of property holding a phandle value
* @index - index into list of devfreq
*
* return the instance of the devfreq device
*/
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
const char *phandle_name, int index)
{
struct device_node *node;
struct devfreq *devfreq;
if (!dev || !phandle_name)
return ERR_PTR(-EINVAL);
if (!dev->of_node)
return ERR_PTR(-EINVAL);
node = of_parse_phandle(dev->of_node, phandle_name, index);
if (!node)
return ERR_PTR(-ENODEV);
devfreq = devfreq_get_devfreq_by_node(node);
of_node_put(node);
return ERR_PTR(-EPROBE_DEFER);
return devfreq;
}
#else
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
{
return ERR_PTR(-ENODEV);
}
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
const char *phandle_name, int index)
{
return ERR_PTR(-ENODEV);
}
#endif /* CONFIG_OF */
EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_node);
EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);
/**
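
devfreq_get_devfreq_by_node() lets a driver that already holds a device_node,
for instance one parsed from its own driver-specific binding, resolve the
devfreq instance directly, while devfreq_get_devfreq_by_phandle() remains the
phandle-based convenience wrapper. A minimal sketch; the "vendor,dram-devfreq"
property is invented:

#include <linux/devfreq.h>
#include <linux/err.h>
#include <linux/of.h>

static struct devfreq *my_get_dram_devfreq(struct device *dev)
{
        struct device_node *node;
        struct devfreq *df;

        node = of_parse_phandle(dev->of_node, "vendor,dram-devfreq", 0);
        if (!node)
                return ERR_PTR(-ENODEV);

        df = devfreq_get_devfreq_by_node(node);         /* ERR_PTR on failure */
        of_node_put(node);

        return df;
}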

View File

@ -193,7 +193,7 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
* Get the devfreq-event devices to get the current utilization of
* buses. This raw data will be used in devfreq ondemand governor.
*/
count = devfreq_event_get_edev_count(dev);
count = devfreq_event_get_edev_count(dev, "devfreq-events");
if (count < 0) {
dev_err(dev, "failed to get the count of devfreq-event dev\n");
ret = count;
@ -209,7 +209,8 @@ static int exynos_bus_parent_parse_of(struct device_node *np,
}
for (i = 0; i < count; i++) {
bus->edev[i] = devfreq_event_get_edev_by_phandle(dev, i);
bus->edev[i] = devfreq_event_get_edev_by_phandle(dev,
"devfreq-events", i);
if (IS_ERR(bus->edev[i])) {
ret = -EPROBE_DEFER;
goto err_regulator;
@ -360,7 +361,7 @@ static int exynos_bus_profile_init_passive(struct exynos_bus *bus,
profile->exit = exynos_bus_passive_exit;
/* Get the instance of parent devfreq device */
parent_devfreq = devfreq_get_devfreq_by_phandle(dev, 0);
parent_devfreq = devfreq_get_devfreq_by_phandle(dev, "devfreq", 0);
if (IS_ERR(parent_devfreq))
return -EPROBE_DEFER;

View File

@ -341,7 +341,7 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
return PTR_ERR(data->dmc_clk);
}
data->edev = devfreq_event_get_edev_by_phandle(dev, 0);
data->edev = devfreq_event_get_edev_by_phandle(dev, "devfreq-events", 0);
if (IS_ERR(data->edev))
return -EPROBE_DEFER;

View File

@ -822,8 +822,6 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
return err;
}
reset_control_assert(tegra->reset);
err = clk_prepare_enable(tegra->clock);
if (err) {
dev_err(&pdev->dev,
@ -831,7 +829,11 @@ static int tegra_devfreq_probe(struct platform_device *pdev)
return err;
}
reset_control_deassert(tegra->reset);
err = reset_control_reset(tegra->reset);
if (err) {
dev_err(&pdev->dev, "Failed to reset hardware: %d\n", err);
goto disable_clk;
}
rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
if (rate < 0) {

View File

@ -151,12 +151,15 @@ static u32 psci_get_version(void)
return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0);
}
int psci_set_osi_mode(void)
int psci_set_osi_mode(bool enable)
{
unsigned long suspend_mode;
int err;
err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
PSCI_1_0_SUSPEND_MODE_OSI, 0, 0);
suspend_mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI :
PSCI_1_0_SUSPEND_MODE_PC;
err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE, suspend_mode, 0, 0);
return psci_to_linux_errno(err);
}
@ -546,8 +549,7 @@ static int __init psci_1_0_init(struct device_node *np)
pr_info("OSI mode supported.\n");
/* Default to PC mode. */
invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
PSCI_1_0_SUSPEND_MODE_PC, 0, 0);
psci_set_osi_mode(false);
}
return 0;

View File

@ -1293,7 +1293,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
int counters_size;
int ret, i;
dmc->num_counters = devfreq_event_get_edev_count(dmc->dev);
dmc->num_counters = devfreq_event_get_edev_count(dmc->dev,
"devfreq-events");
if (dmc->num_counters < 0) {
dev_err(dmc->dev, "could not get devfreq-event counters\n");
return dmc->num_counters;
@ -1306,7 +1307,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc)
for (i = 0; i < dmc->num_counters; i++) {
dmc->counter[i] =
devfreq_event_get_edev_by_phandle(dmc->dev, i);
devfreq_event_get_edev_by_phandle(dmc->dev,
"devfreq-events", i);
if (IS_ERR_OR_NULL(dmc->counter[i]))
return -EPROBE_DEFER;
}

View File

@ -703,12 +703,10 @@ static int _generic_set_opp_regulator(struct opp_table *opp_table,
* Enable the regulator after setting its voltages, otherwise it breaks
* some boot-enabled regulators.
*/
if (unlikely(!opp_table->regulator_enabled)) {
if (unlikely(!opp_table->enabled)) {
ret = regulator_enable(reg);
if (ret < 0)
dev_warn(dev, "Failed to enable regulator: %d", ret);
else
opp_table->regulator_enabled = true;
}
return 0;
@ -781,29 +779,39 @@ static int _set_opp_custom(const struct opp_table *opp_table,
return opp_table->set_opp(data);
}
static int _set_required_opp(struct device *dev, struct device *pd_dev,
struct dev_pm_opp *opp, int i)
{
unsigned int pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
int ret;
if (!pd_dev)
return 0;
ret = dev_pm_genpd_set_performance_state(pd_dev, pstate);
if (ret) {
dev_err(dev, "Failed to set performance rate of %s: %d (%d)\n",
dev_name(pd_dev), pstate, ret);
}
return ret;
}
/* This is only called for PM domain for now */
static int _set_required_opps(struct device *dev,
struct opp_table *opp_table,
struct dev_pm_opp *opp)
struct dev_pm_opp *opp, bool up)
{
struct opp_table **required_opp_tables = opp_table->required_opp_tables;
struct device **genpd_virt_devs = opp_table->genpd_virt_devs;
unsigned int pstate;
int i, ret = 0;
if (!required_opp_tables)
return 0;
/* Single genpd case */
if (!genpd_virt_devs) {
pstate = likely(opp) ? opp->required_opps[0]->pstate : 0;
ret = dev_pm_genpd_set_performance_state(dev, pstate);
if (ret) {
dev_err(dev, "Failed to set performance state of %s: %d (%d)\n",
dev_name(dev), pstate, ret);
}
return ret;
}
if (!genpd_virt_devs)
return _set_required_opp(dev, dev, opp, 0);
/* Multiple genpd case */
@ -813,19 +821,21 @@ static int _set_required_opps(struct device *dev,
*/
mutex_lock(&opp_table->genpd_virt_dev_lock);
for (i = 0; i < opp_table->required_opp_count; i++) {
pstate = likely(opp) ? opp->required_opps[i]->pstate : 0;
if (!genpd_virt_devs[i])
continue;
ret = dev_pm_genpd_set_performance_state(genpd_virt_devs[i], pstate);
if (ret) {
dev_err(dev, "Failed to set performance rate of %s: %d (%d)\n",
dev_name(genpd_virt_devs[i]), pstate, ret);
break;
/* Scaling up? Set required OPPs in normal order, else reverse */
if (up) {
for (i = 0; i < opp_table->required_opp_count; i++) {
ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i);
if (ret)
break;
}
} else {
for (i = opp_table->required_opp_count - 1; i >= 0; i--) {
ret = _set_required_opp(dev, genpd_virt_devs[i], opp, i);
if (ret)
break;
}
}
mutex_unlock(&opp_table->genpd_virt_dev_lock);
return ret;
@ -862,6 +872,34 @@ int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_set_bw);
static int _opp_set_rate_zero(struct device *dev, struct opp_table *opp_table)
{
int ret;
if (!opp_table->enabled)
return 0;
/*
* Some drivers need to support cases where some platforms may
* have an OPP table for the device, while others don't, and
* opp_set_rate() just needs to behave like clk_set_rate().
*/
if (!_get_opp_count(opp_table))
return 0;
ret = _set_opp_bw(opp_table, NULL, dev, true);
if (ret)
return ret;
if (opp_table->regulators)
regulator_disable(opp_table->regulators[0]);
ret = _set_required_opps(dev, opp_table, NULL, false);
opp_table->enabled = false;
return ret;
}
/**
* dev_pm_opp_set_rate() - Configure new OPP based on frequency
* @dev: device for which we do this operation
@ -888,33 +926,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
}
if (unlikely(!target_freq)) {
/*
* Some drivers need to support cases where some platforms may
* have an OPP table for the device, while others don't, and
* opp_set_rate() just needs to behave like clk_set_rate().
*/
if (!_get_opp_count(opp_table)) {
ret = 0;
goto put_opp_table;
}
if (!opp_table->required_opp_tables && !opp_table->regulators &&
!opp_table->paths) {
dev_err(dev, "target frequency can't be 0\n");
ret = -EINVAL;
goto put_opp_table;
}
ret = _set_opp_bw(opp_table, NULL, dev, true);
if (ret)
goto put_opp_table;
if (opp_table->regulator_enabled) {
regulator_disable(opp_table->regulators[0]);
opp_table->regulator_enabled = false;
}
ret = _set_required_opps(dev, opp_table, NULL);
ret = _opp_set_rate_zero(dev, opp_table);
goto put_opp_table;
}
@ -933,14 +945,11 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
old_freq = clk_get_rate(clk);
/* Return early if nothing to do */
if (old_freq == freq) {
if (!opp_table->required_opp_tables && !opp_table->regulators &&
!opp_table->paths) {
dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
__func__, freq);
ret = 0;
goto put_opp_table;
}
if (opp_table->enabled && old_freq == freq) {
dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
__func__, freq);
ret = 0;
goto put_opp_table;
}
/*
@ -976,7 +985,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
/* Scaling up? Configure required OPPs before frequency */
if (freq >= old_freq) {
ret = _set_required_opps(dev, opp_table, opp);
ret = _set_required_opps(dev, opp_table, opp, true);
if (ret)
goto put_opp;
}
@ -996,13 +1005,16 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
/* Scaling down? Configure required OPPs after frequency */
if (!ret && freq < old_freq) {
ret = _set_required_opps(dev, opp_table, opp);
ret = _set_required_opps(dev, opp_table, opp, false);
if (ret)
dev_err(dev, "Failed to set required opps: %d\n", ret);
}
if (!ret)
if (!ret) {
ret = _set_opp_bw(opp_table, opp, dev, false);
if (!ret)
opp_table->enabled = true;
}
put_opp:
dev_pm_opp_put(opp);
@ -1068,7 +1080,7 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
*/
opp_table = kzalloc(sizeof(*opp_table), GFP_KERNEL);
if (!opp_table)
return NULL;
return ERR_PTR(-ENOMEM);
mutex_init(&opp_table->lock);
mutex_init(&opp_table->genpd_virt_dev_lock);
@ -1079,8 +1091,8 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
opp_dev = _add_opp_dev(dev, opp_table);
if (!opp_dev) {
kfree(opp_table);
return NULL;
ret = -ENOMEM;
goto err;
}
_of_init_opp_table(opp_table, dev, index);
@ -1089,16 +1101,21 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
opp_table->clk = clk_get(dev, NULL);
if (IS_ERR(opp_table->clk)) {
ret = PTR_ERR(opp_table->clk);
if (ret != -EPROBE_DEFER)
dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__,
ret);
if (ret == -EPROBE_DEFER)
goto err;
dev_dbg(dev, "%s: Couldn't find clock: %d\n", __func__, ret);
}
/* Find interconnect path(s) for the device */
ret = dev_pm_opp_of_find_icc_paths(dev, opp_table);
if (ret)
if (ret) {
if (ret == -EPROBE_DEFER)
goto err;
dev_warn(dev, "%s: Error finding interconnect paths: %d\n",
__func__, ret);
}
BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
INIT_LIST_HEAD(&opp_table->opp_list);
@ -1107,6 +1124,10 @@ static struct opp_table *_allocate_opp_table(struct device *dev, int index)
/* Secure the device table modification */
list_add(&opp_table->node, &opp_tables);
return opp_table;
err:
kfree(opp_table);
return ERR_PTR(ret);
}
void _get_opp_table_kref(struct opp_table *opp_table)
@ -1129,7 +1150,7 @@ static struct opp_table *_opp_get_opp_table(struct device *dev, int index)
if (opp_table) {
if (!_add_opp_dev_unlocked(dev, opp_table)) {
dev_pm_opp_put_opp_table(opp_table);
opp_table = NULL;
opp_table = ERR_PTR(-ENOMEM);
}
goto unlock;
}
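
A minimal sketch of the calling convention the rest of this series converts to (driver name hypothetical): dev_pm_opp_get_opp_table() and its wrappers now report failures as ERR_PTR() values, including -EPROBE_DEFER, instead of returning NULL.

    #include <linux/err.h>
    #include <linux/pm_opp.h>

    static int foo_init_opp(struct device *dev)
    {
            struct opp_table *opp_table;

            opp_table = dev_pm_opp_get_opp_table(dev);
            if (IS_ERR(opp_table))
                    return PTR_ERR(opp_table); /* may be -EPROBE_DEFER or -ENOMEM */

            /* ... use the table ... */

            dev_pm_opp_put_opp_table(opp_table);
            return 0;
    }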
@ -1581,8 +1602,8 @@ struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
struct opp_table *opp_table;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@ -1640,8 +1661,8 @@ struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
struct opp_table *opp_table;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
@ -1733,8 +1754,8 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
int ret, i;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
/* This should be called before OPPs are initialized */
if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@ -1804,11 +1825,9 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
/* Make sure there are no concurrent readers while updating opp_table */
WARN_ON(!list_empty(&opp_table->opp_list));
if (opp_table->regulator_enabled) {
if (opp_table->enabled) {
for (i = opp_table->regulator_count - 1; i >= 0; i--)
regulator_disable(opp_table->regulators[i]);
opp_table->regulator_enabled = false;
}
for (i = opp_table->regulator_count - 1; i >= 0; i--)
@ -1843,8 +1862,8 @@ struct opp_table *dev_pm_opp_set_clkname(struct device *dev, const char *name)
int ret;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
/* This should be called before OPPs are initialized */
if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@ -1911,8 +1930,8 @@ struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
return ERR_PTR(-EINVAL);
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
/* This should be called before OPPs are initialized */
if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@ -1949,6 +1968,9 @@ static void _opp_detach_genpd(struct opp_table *opp_table)
{
int index;
if (!opp_table->genpd_virt_devs)
return;
for (index = 0; index < opp_table->required_opp_count; index++) {
if (!opp_table->genpd_virt_devs[index])
continue;
@ -1992,8 +2014,11 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
const char **name = names;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return ERR_PTR(-ENOMEM);
if (IS_ERR(opp_table))
return opp_table;
if (opp_table->genpd_virt_devs)
return opp_table;
/*
* If the genpd's OPP table isn't already initialized, parsing of the
@ -2020,12 +2045,6 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev,
goto err;
}
if (opp_table->genpd_virt_devs[index]) {
dev_err(dev, "Genpd virtual device already set %s\n",
*name);
goto err;
}
virt_dev = dev_pm_domain_attach_by_name(dev, *name);
if (IS_ERR(virt_dev)) {
ret = PTR_ERR(virt_dev);
@ -2098,9 +2117,6 @@ int dev_pm_opp_xlate_performance_state(struct opp_table *src_table,
int dest_pstate = -EINVAL;
int i;
if (!pstate)
return 0;
/*
* Normally the src_table will have the "required_opps" property set to
* point to one of the OPPs in the dst_table, but in some cases the
@ -2163,8 +2179,8 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
int ret;
opp_table = dev_pm_opp_get_opp_table(dev);
if (!opp_table)
return -ENOMEM;
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
/* Fix regulator count for dynamic OPPs */
opp_table->regulator_count = 1;
@ -2405,7 +2421,14 @@ int dev_pm_opp_unregister_notifier(struct device *dev,
}
EXPORT_SYMBOL(dev_pm_opp_unregister_notifier);
void _dev_pm_opp_find_and_remove_table(struct device *dev)
/**
* dev_pm_opp_remove_table() - Free all OPPs associated with the device
* @dev: device pointer used to lookup OPP table.
*
* Free both OPPs created using static entries present in DT and the
* dynamically added entries.
*/
void dev_pm_opp_remove_table(struct device *dev)
{
struct opp_table *opp_table;
@ -2432,16 +2455,4 @@ void _dev_pm_opp_find_and_remove_table(struct device *dev)
/* Drop reference taken by _find_opp_table() */
dev_pm_opp_put_opp_table(opp_table);
}
/**
* dev_pm_opp_remove_table() - Free all OPPs associated with the device
* @dev: device pointer used to lookup OPP table.
*
* Free both OPPs created using static entries present in DT and the
* dynamically added entries.
*/
void dev_pm_opp_remove_table(struct device *dev)
{
_dev_pm_opp_find_and_remove_table(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
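
A short usage sketch (hypothetical platform driver): with _dev_pm_opp_find_and_remove_table() folded into it, dev_pm_opp_remove_table() is the single teardown call for both DT-provided and dynamically added OPPs.

    #include <linux/platform_device.h>
    #include <linux/pm_opp.h>

    static int foo_platform_remove(struct platform_device *pdev)
    {
            /* Drops static (DT) and dynamic OPPs alike */
            dev_pm_opp_remove_table(&pdev->dev);
            return 0;
    }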


@ -124,7 +124,7 @@ void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask,
continue;
}
_dev_pm_opp_find_and_remove_table(cpu_dev);
dev_pm_opp_remove_table(cpu_dev);
}
}


@ -434,9 +434,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_find_icc_paths);
static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
struct device_node *np)
{
unsigned int count = opp_table->supported_hw_count;
u32 version;
int ret;
unsigned int levels = opp_table->supported_hw_count;
int count, versions, ret, i, j;
u32 val;
if (!opp_table->supported_hw) {
/*
@ -451,21 +451,40 @@ static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
return true;
}
while (count--) {
ret = of_property_read_u32_index(np, "opp-supported-hw", count,
&version);
if (ret) {
dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
__func__, count, ret);
return false;
}
/* Both of these are bitwise masks of the versions */
if (!(version & opp_table->supported_hw[count]))
return false;
count = of_property_count_u32_elems(np, "opp-supported-hw");
if (count <= 0 || count % levels) {
dev_err(dev, "%s: Invalid opp-supported-hw property (%d)\n",
__func__, count);
return false;
}
return true;
versions = count / levels;
/* All levels in at least one of the versions should match */
for (i = 0; i < versions; i++) {
bool supported = true;
for (j = 0; j < levels; j++) {
ret = of_property_read_u32_index(np, "opp-supported-hw",
i * levels + j, &val);
if (ret) {
dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
__func__, i * levels + j, ret);
return false;
}
/* Check if the level is supported */
if (!(val & opp_table->supported_hw[j])) {
supported = false;
break;
}
}
if (supported)
return true;
}
return false;
}
static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
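
To illustrate the reworked matching (the property values and driver name below are made up): with supported_hw_count == 2, opp-supported-hw is read as tuples of two cells, and an OPP is kept if every cell of at least one tuple intersects the masks the driver registered.

    #include <linux/bits.h>
    #include <linux/err.h>
    #include <linux/kernel.h>
    #include <linux/pm_opp.h>

    /* Hypothetical platform: level 0 is a speed-bin mask, level 1 a fab
     * revision mask. An OPP node with
     *         opp-supported-hw = <0x02 0x01>, <0x04 0x03>;
     * is enabled here because its second tuple intersects both masks. */
    static const u32 foo_supported_hw[] = { BIT(2), BIT(0) };

    static int foo_register_supported_hw(struct device *dev)
    {
            struct opp_table *opp_table;

            opp_table = dev_pm_opp_set_supported_hw(dev, foo_supported_hw,
                                                    ARRAY_SIZE(foo_supported_hw));
            return PTR_ERR_OR_ZERO(opp_table);
    }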
@ -616,7 +635,7 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
*/
void dev_pm_opp_of_remove_table(struct device *dev)
{
_dev_pm_opp_find_and_remove_table(dev);
dev_pm_opp_remove_table(dev);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
@ -823,7 +842,7 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
{
struct device_node *np;
int ret, count = 0, pstate_count = 0;
int ret, count = 0;
struct dev_pm_opp *opp;
/* OPP table is already initialized for the device */
@ -857,20 +876,14 @@ static int _of_add_opp_table_v2(struct device *dev, struct opp_table *opp_table)
goto remove_static_opp;
}
list_for_each_entry(opp, &opp_table->opp_list, node)
pstate_count += !!opp->pstate;
/* Either all or none of the nodes shall have performance state set */
if (pstate_count && pstate_count != count) {
dev_err(dev, "Not all nodes have performance state set (%d: %d)\n",
count, pstate_count);
ret = -ENOENT;
goto remove_static_opp;
list_for_each_entry(opp, &opp_table->opp_list, node) {
/* Any non-zero performance state would enable the feature */
if (opp->pstate) {
opp_table->genpd_performance_state = true;
break;
}
}
if (pstate_count)
opp_table->genpd_performance_state = true;
return 0;
remove_static_opp:
@ -886,11 +899,25 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
const __be32 *val;
int nr, ret = 0;
mutex_lock(&opp_table->lock);
if (opp_table->parsed_static_opps) {
opp_table->parsed_static_opps++;
mutex_unlock(&opp_table->lock);
return 0;
}
opp_table->parsed_static_opps = 1;
mutex_unlock(&opp_table->lock);
prop = of_find_property(dev->of_node, "operating-points", NULL);
if (!prop)
return -ENODEV;
if (!prop->value)
return -ENODATA;
if (!prop) {
ret = -ENODEV;
goto remove_static_opp;
}
if (!prop->value) {
ret = -ENODATA;
goto remove_static_opp;
}
/*
* Each OPP is a set of tuples consisting of frequency and
@ -899,13 +926,10 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
nr = prop->length / sizeof(u32);
if (nr % 2) {
dev_err(dev, "%s: Invalid OPP table\n", __func__);
return -EINVAL;
ret = -EINVAL;
goto remove_static_opp;
}
mutex_lock(&opp_table->lock);
opp_table->parsed_static_opps = 1;
mutex_unlock(&opp_table->lock);
val = prop->value;
while (nr) {
unsigned long freq = be32_to_cpup(val++) * 1000;
@ -915,12 +939,14 @@ static int _of_add_opp_table_v1(struct device *dev, struct opp_table *opp_table)
if (ret) {
dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
__func__, freq, ret);
_opp_remove_all_static(opp_table);
return ret;
goto remove_static_opp;
}
nr -= 2;
}
return 0;

remove_static_opp:
_opp_remove_all_static(opp_table);
return ret;
}
@ -947,8 +973,8 @@ int dev_pm_opp_of_add_table(struct device *dev)
int ret;
opp_table = dev_pm_opp_get_opp_table_indexed(dev, 0);
if (!opp_table)
return -ENOMEM;
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
/*
* OPPs have two version of bindings now. Also try the old (v1)
@ -1002,8 +1028,8 @@ int dev_pm_opp_of_add_table_indexed(struct device *dev, int index)
}
opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
if (!opp_table)
return -ENOMEM;
if (IS_ERR(opp_table))
return PTR_ERR(opp_table);
ret = _of_add_opp_table_v2(dev, opp_table);
if (ret)


@ -147,11 +147,11 @@ enum opp_table_access {
* @clk: Device's clock handle
* @regulators: Supply regulators
* @regulator_count: Number of power supply regulators. Its value can be -1
* @regulator_enabled: Set to true if regulators were previously enabled.
* (uninitialized), 0 (no opp-microvolt property) or > 0 (has opp-microvolt
* property).
* @paths: Interconnect path handles
* @path_count: Number of interconnect paths
* @enabled: Set to true if the device's resources are enabled/configured.
* @genpd_performance_state: Device's power domain support performance state.
* @is_genpd: Marks if the OPP table belongs to a genpd.
* @set_opp: Platform specific set_opp callback
@ -195,9 +195,9 @@ struct opp_table {
struct clk *clk;
struct regulator **regulators;
int regulator_count;
bool regulator_enabled;
struct icc_path **paths;
unsigned int path_count;
bool enabled;
bool genpd_performance_state;
bool is_genpd;
@ -217,7 +217,6 @@ void _get_opp_table_kref(struct opp_table *opp_table);
int _get_opp_count(struct opp_table *opp_table);
struct opp_table *_find_opp_table(struct device *dev);
struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
void _dev_pm_opp_find_and_remove_table(struct device *dev);
struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
void _opp_free(struct dev_pm_opp *opp);
int _opp_compare_key(struct dev_pm_opp *opp1, struct dev_pm_opp *opp2);


@ -944,6 +944,16 @@ static bool acpi_pci_bridge_d3(struct pci_dev *dev)
if (!dev->is_hotplug_bridge)
return false;
/* Assume D3 support if the bridge is power-manageable by ACPI. */
adev = ACPI_COMPANION(&dev->dev);
if (!adev && !pci_dev_is_added(dev)) {
adev = acpi_pci_find_companion(&dev->dev);
ACPI_COMPANION_SET(&dev->dev, adev);
}
if (adev && acpi_device_power_manageable(adev))
return true;
/*
* Look for a special _DSD property for the root port and if it
* is set we know the hierarchy behind it supports D3 just fine.


@ -665,8 +665,6 @@ static int cpr_enable(struct cpr_drv *drv)
static int cpr_disable(struct cpr_drv *drv)
{
int ret;
mutex_lock(&drv->lock);
if (cpr_is_allowed(drv)) {
@ -676,11 +674,7 @@ static int cpr_disable(struct cpr_drv *drv)
mutex_unlock(&drv->lock);
ret = regulator_disable(drv->vdd_apc);
if (ret)
return ret;
return 0;
return regulator_disable(drv->vdd_apc);
}
static int cpr_config(struct cpr_drv *drv)


@ -43,6 +43,7 @@
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smpboot.h>
#include <linux/idle_inject.h>
#include <uapi/linux/sched/types.h>


@ -93,7 +93,7 @@ static int exynos_asv_update_opps(struct exynos_asv *asv)
continue;
opp_table = dev_pm_opp_get_opp_table(cpu);
if (IS_ERR_OR_NULL(opp_table))
if (IS_ERR(opp_table))
continue;
if (!last_opp_table || opp_table != last_opp_table) {


@ -30,7 +30,11 @@ static inline unsigned long topology_get_freq_scale(int cpu)
return per_cpu(freq_scale, cpu);
}
bool arch_freq_counters_available(struct cpumask *cpus);
void topology_set_freq_scale(const struct cpumask *cpus, unsigned long cur_freq,
unsigned long max_freq);
bool topology_scale_freq_invariant(void);
bool arch_freq_counters_available(const struct cpumask *cpus);
DECLARE_PER_CPU(unsigned long, thermal_pressure);


@ -217,6 +217,7 @@ void refresh_frequency_limits(struct cpufreq_policy *policy);
void cpufreq_update_policy(unsigned int cpu);
void cpufreq_update_limits(unsigned int cpu);
bool have_governor_per_policy(void);
bool cpufreq_supports_freq_invariance(void);
struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy);
void cpufreq_enable_fast_switch(struct cpufreq_policy *policy);
void cpufreq_disable_fast_switch(struct cpufreq_policy *policy);
@ -237,6 +238,10 @@ static inline unsigned int cpufreq_get_hw_max_freq(unsigned int cpu)
{
return 0;
}
static inline bool cpufreq_supports_freq_invariance(void)
{
return false;
}
static inline void disable_cpufreq(void) { }
#endif
@ -1006,8 +1011,14 @@ static inline void sched_cpufreq_governor_change(struct cpufreq_policy *policy,
extern void arch_freq_prepare_all(void);
extern unsigned int arch_freq_get_on_cpu(int cpu);
extern void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
unsigned long max_freq);
#ifndef arch_set_freq_scale
static __always_inline
void arch_set_freq_scale(const struct cpumask *cpus,
unsigned long cur_freq,
unsigned long max_freq)
{
}
#endif
/* the following are really really optional */
extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
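
How an architecture is expected to override the empty default above (a hedged sketch mirroring what the arm/arm64 topology headers do; it relies on the macro being visible before linux/cpufreq.h tests the #ifndef):

    #include <linux/arch_topology.h>

    /* In the architecture's asm/topology.h: route the cpufreq core's scale
     * updates to the arch_topology helper declared earlier in this series. */
    #define arch_set_freq_scale topology_set_freq_scale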


@ -38,6 +38,7 @@ struct cpuidle_state_usage {
u64 time_ns;
unsigned long long above; /* Number of times it's been too deep */
unsigned long long below; /* Number of times it's been too shallow */
unsigned long long rejected; /* Number of times idle entry was rejected */
#ifdef CONFIG_SUSPEND
unsigned long long s2idle_usage;
unsigned long long s2idle_time; /* in US */


@ -106,8 +106,11 @@ extern int devfreq_event_get_event(struct devfreq_event_dev *edev,
struct devfreq_event_data *edata);
extern int devfreq_event_reset_event(struct devfreq_event_dev *edev);
extern struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
struct device *dev, int index);
extern int devfreq_event_get_edev_count(struct device *dev);
struct device *dev,
const char *phandle_name,
int index);
extern int devfreq_event_get_edev_count(struct device *dev,
const char *phandle_name);
extern struct devfreq_event_dev *devfreq_event_add_edev(struct device *dev,
struct devfreq_event_desc *desc);
extern int devfreq_event_remove_edev(struct devfreq_event_dev *edev);
@ -152,12 +155,15 @@ static inline int devfreq_event_reset_event(struct devfreq_event_dev *edev)
}
static inline struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(
struct device *dev, int index)
struct device *dev,
const char *phandle_name,
int index)
{
return ERR_PTR(-EINVAL);
}
static inline int devfreq_event_get_edev_count(struct device *dev)
static inline int devfreq_event_get_edev_count(struct device *dev,
const char *phandle_name)
{
return -EINVAL;
}
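
A usage sketch of the extended lookup (the property name and driver are hypothetical): callers now name the DT property holding the event-device phandles instead of relying on a hard-coded one.

    #include <linux/devfreq-event.h>
    #include <linux/err.h>

    static int foo_get_event_dev(struct device *dev,
                                 struct devfreq_event_dev **edev)
    {
            /* "devfreq-events" is this example binding's property name */
            *edev = devfreq_event_get_edev_by_phandle(dev, "devfreq-events", 0);
            return PTR_ERR_OR_ZERO(*edev);
    }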


@ -261,7 +261,9 @@ void devm_devfreq_unregister_notifier(struct device *dev,
struct devfreq *devfreq,
struct notifier_block *nb,
unsigned int list);
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index);
struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node);
struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
const char *phandle_name, int index);
#if IS_ENABLED(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND)
/**
@ -414,8 +416,13 @@ static inline void devm_devfreq_unregister_notifier(struct device *dev,
{
}
static inline struct devfreq *devfreq_get_devfreq_by_node(struct device_node *node)
{
return ERR_PTR(-ENODEV);
}
static inline struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev,
int index)
const char *phandle_name, int index)
{
return ERR_PTR(-ENODEV);
}
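
Similarly, a hedged sketch of the new node-based lookup, which lets a driver resolve a devfreq instance through its own driver-specific binding ("foo,parent" is a made-up property):

    #include <linux/devfreq.h>
    #include <linux/err.h>
    #include <linux/of.h>

    static struct devfreq *foo_parent_devfreq(struct device *dev)
    {
            struct device_node *np;
            struct devfreq *parent;

            np = of_parse_phandle(dev->of_node, "foo,parent", 0);
            if (!np)
                    return ERR_PTR(-ENODEV);

            parent = devfreq_get_devfreq_by_node(np);
            of_node_put(np);
            return parent;
    }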


@ -590,7 +590,7 @@ struct dev_pm_info {
#endif
#ifdef CONFIG_PM
struct hrtimer suspend_timer;
unsigned long timer_expires;
u64 timer_expires;
struct work_struct work;
wait_queue_head_t wait_queue;
struct wake_irq *wakeirq;


@ -64,8 +64,8 @@
#define GENPD_FLAG_RPM_ALWAYS_ON (1U << 5)
enum gpd_status {
GPD_STATE_ACTIVE = 0, /* PM domain is active */
GPD_STATE_POWER_OFF, /* PM domain is off */
GENPD_STATE_ON = 0, /* PM domain is on */
GENPD_STATE_OFF, /* PM domain is off */
};
struct dev_power_governor {


@ -18,7 +18,7 @@ bool psci_tos_resident_on(int cpu);
int psci_cpu_suspend_enter(u32 state);
bool psci_power_state_is_valid(u32 state);
int psci_set_osi_mode(void);
int psci_set_osi_mode(bool enable);
bool psci_has_osi_support(void);
struct psci_operations {


@ -946,17 +946,6 @@ static int software_resume(void)
/* Check if the device is there */
swsusp_resume_device = name_to_dev_t(resume_file);
/*
* name_to_dev_t() cannot verify the partition if resume_file is in
* integer format (e.g. major:minor).
*/
if (isdigit(resume_file[0]) && resume_wait) {
int partno;
while (!get_gendisk(swsusp_resume_device, &partno))
msleep(10);
}
if (!swsusp_resume_device) {
/*
* Some device discovery might still be in progress; we need


@ -226,6 +226,7 @@ struct hib_bio_batch {
atomic_t count;
wait_queue_head_t wait;
blk_status_t error;
struct blk_plug plug;
};
static void hib_init_batch(struct hib_bio_batch *hb)
@ -233,6 +234,12 @@ static void hib_init_batch(struct hib_bio_batch *hb)
atomic_set(&hb->count, 0);
init_waitqueue_head(&hb->wait);
hb->error = BLK_STS_OK;
blk_start_plug(&hb->plug);
}
static void hib_finish_batch(struct hib_bio_batch *hb)
{
blk_finish_plug(&hb->plug);
}
static void hib_end_io(struct bio *bio)
@ -294,6 +301,10 @@ static int hib_submit_io(int op, int op_flags, pgoff_t page_off, void *addr,
static blk_status_t hib_wait_io(struct hib_bio_batch *hb)
{
/*
* We are relying on the behavior of blk_plug that a thread with
* a plug will flush the plug list before sleeping.
*/
wait_event(hb->wait, atomic_read(&hb->count) == 0);
return blk_status_to_errno(hb->error);
}
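
For reference, a stripped-down sketch of the plugging pattern the hibernate I/O path now uses (function and variable names are this example's): bios submitted while the plug is held are queued per task and dispatched as one batch when the plug is finished, or earlier if the task sleeps, which is what the comment above relies on.

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    static void foo_submit_batch(struct bio **bios, int nr)
    {
            struct blk_plug plug;
            int i;

            blk_start_plug(&plug);
            for (i = 0; i < nr; i++)
                    submit_bio(bios[i]);    /* queued on the per-task plug list */
            blk_finish_plug(&plug);         /* flushes the batch to the driver */
    }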
@ -558,6 +569,7 @@ static int save_image(struct swap_map_handle *handle,
nr_pages++;
}
err2 = hib_wait_io(&hb);
hib_finish_batch(&hb);
stop = ktime_get();
if (!ret)
ret = err2;
@ -851,6 +863,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
pr_info("Image saving done\n");
swsusp_show_speed(start, stop, nr_to_write, "Wrote");
out_clean:
hib_finish_batch(&hb);
if (crc) {
if (crc->thr)
kthread_stop(crc->thr);
@ -1081,6 +1094,7 @@ static int load_image(struct swap_map_handle *handle,
nr_pages++;
}
err2 = hib_wait_io(&hb);
hib_finish_batch(&hb);
stop = ktime_get();
if (!ret)
ret = err2;
@ -1444,6 +1458,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
}
swsusp_show_speed(start, stop, nr_to_read, "Read");
out_clean:
hib_finish_batch(&hb);
for (i = 0; i < ring_size; i++)
free_page((unsigned long)page[i]);
if (crc) {


@ -114,22 +114,8 @@ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
unsigned int next_freq)
{
struct cpufreq_policy *policy = sg_policy->policy;
int cpu;
if (!sugov_update_next_freq(sg_policy, time, next_freq))
return;
next_freq = cpufreq_driver_fast_switch(policy, next_freq);
if (!next_freq)
return;
policy->cur = next_freq;
if (trace_cpu_frequency_enabled()) {
for_each_cpu(cpu, policy->cpus)
trace_cpu_frequency(next_freq, cpu);
}
if (sugov_update_next_freq(sg_policy, time, next_freq))
cpufreq_driver_fast_switch(sg_policy->policy, next_freq);
}
static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,