Add suffix ULL to constant 500 in order to give the compiler complete
information about the proper arithmetic to use. Notice that this
constant is used in a context that expects an expression of type
u64 (64 bits, unsigned).
The expression NUM_RETRIES * cppc_ss->latency at line 578, which at
preprocessing time translates to 500 * cppc_ss->latency, is currently
being evaluated using 32-bit arithmetic.
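A minimal userspace sketch of the difference (the latency value below
is made up purely for illustration):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t latency = 10000000;    /* illustrative value > 2^32 / 500 */

      /* Both operands are 32-bit, so the product wraps around before
       * it is widened to 64 bits. */
      uint64_t wrapped = 500 * latency;

      /* The ULL suffix forces the whole multiplication to be done in
       * 64-bit arithmetic. */
      uint64_t full = 500ULL * latency;

      printf("%llu vs %llu\n",
             (unsigned long long)wrapped,
             (unsigned long long)full);
      return 0;
  }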
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The initialization of pcc_ss_data from pcc_data[pcc_ss_id] before
pcc_ss_id is range checked could lead to an out-of-bounds array
read. The very same initialization is also performed after the range
check on pcc_ss_id, so we can simply remove the problematic and
redundant early assignment to fix the issue.
Detected by cppcheck:
warning: Value stored to 'pcc_ss_data' during its initialization is never
read
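A simplified sketch of the pattern being removed (the surrounding
function and error handling are illustrative, not the actual driver
code):

  #include <linux/err.h>
  #include <linux/errno.h>

  #define MAX_PCC_SUBSPACES    256

  struct cppc_pcc_data;                   /* opaque for this sketch */
  extern struct cppc_pcc_data *pcc_data[MAX_PCC_SUBSPACES];

  static struct cppc_pcc_data *get_pcc_data(int pcc_ss_id)
  {
      /* BAD: indexes the array before pcc_ss_id is range checked,
       * so a negative id reads out of bounds. */
      struct cppc_pcc_data *pcc_ss_data = pcc_data[pcc_ss_id];

      if (pcc_ss_id < 0)
          return ERR_PTR(-ENODEV);

      /* The same assignment is repeated after the check, so the early
       * one above is both unsafe and redundant; the fix drops it. */
      pcc_ss_data = pcc_data[pcc_ss_id];
      return pcc_ss_data;
  }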
Fixes: 85b1407bf6 (ACPI / CPPC: Make CPPC ACPI driver aware of PCC subspace IDs)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Based on ACPI 6.2 Section 8.4.7.1.9, if the PCC register space is used,
all PCC registers, for all processors in the same performance domain
(as defined by _PSD), must be defined to be in the same subspace.
Based on Section 14.1 of the ACPI specification, it is possible to have
a maximum of 256 PCC subspace IDs. Add support for multiple PCC
subspace IDs instead of using a single global pcc_data structure.
While at it, fix the time_delta check in send_pcc_cmd() so that
last_mpar_reset and mpar_count are initialized properly.
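A rough sketch of the direction of the change; the array and per-CPU
index names below are illustrative rather than taken from the patch:

  #include <linux/percpu.h>

  #define MAX_PCC_SUBSPACES    256   /* ACPI section 14.1: up to 256 IDs */

  struct cppc_pcc_data;    /* per-subspace PCC state, opaque here */

  /* One PCC context per subspace, selected via a per-CPU subspace
   * index, instead of a single global pcc_data structure. */
  static struct cppc_pcc_data *pcc_data[MAX_PCC_SUBSPACES];
  static DEFINE_PER_CPU(int, cpu_pcc_subspace_idx);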
Signed-off-by: George Cherian <george.cherian@cavium.com>
Reviewed-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
This typo is quite common. Fix it and add it to the spelling file so
that checkpatch catches it earlier.
Link: http://lkml.kernel.org/r/20170317011131.6881-2-sboyd@codeaurora.org
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The delivered performance computed using CPPC feedback counters is in
the CPPC abstract scale, whereas the cppc_cpufreq driver operates on
the KHz scale. Exposing the CPPC performance capabilities (highest,
lowest, nominal, lowest non-linear) will allow userspace to figure out
the conversion factor from the CPPC abstract scale to KHz.
Also rename ctr_wrap_time to wraparound_time so that show_cppc_data()
macro will work with it.
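For illustration, one linear conversion userspace could apply, under
the assumption that the exposed highest_perf corresponds to a known
maximum frequency; this particular mapping is an assumption, not
something mandated by the interface:

  #include <stdint.h>

  static uint64_t cppc_perf_to_khz(uint64_t perf, uint64_t highest_perf,
                                   uint64_t max_freq_khz)
  {
      /* Scale the abstract performance value into KHz. */
      return perf * max_freq_khz / highest_perf;
  }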
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Read the lowest non-linear perf in cppc_get_perf_caps() so that it can
be exposed via sysfs to userspace. Lowest non-linear perf is the lowest
performance level at which non-linear power savings are achieved.
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Fix a possible use-after-free scenario in acpi_cppc_processor_probe()
that can happen if the function returns without cleaning up the
per-CPU pointer set by it previously.
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
We should return -EINVAL (instead of 0) if get_cpu_device() fails.
Fixes: 158c998ea4 (ACPI / CPPC: add sysfs support to compute delivered performance)
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
For PCC mailboxes with the interrupt flag set, CPPC should call the
mbox_chan_txdone() function to notify the mailbox framework about TX
completion.
Signed-off-by: Hoan Tran <hotran@apm.com>
Reviewed-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Since struct cpudata is defined in a header file, add the prefix cppc_
so that its name is not generic. Otherwise it causes a compile issue
when a structure with the same name is defined locally.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The CPPC registers can also be accessed via Functional Fixed Hardware
addresses (FFH) on x86. Add support by modifying cpc_read and cpc_write
to be able to read/write MSRs on x86 platforms on a per-CPU basis.
Also, with this change, acpi_cppc_processor_probe doesn't bail out if
the address space id is neither the PCC nor the system memory address
space and FFH is supported on the system.
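A simplified sketch of the resulting dispatch in the read path;
cpc_read_sysmem() here is a hypothetical placeholder for the existing
non-FFH access path:

  #include <acpi/cppc_acpi.h>

  static int cpc_read_reg(int cpu, struct cpc_reg *reg, u64 *val)
  {
      /* Functional Fixed Hardware registers are handled by an
       * arch-specific helper (per-CPU MSR accesses on x86). */
      if (reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE)
          return cpc_read_ffh(cpu, reg, val);

      /* Otherwise fall back to the PCC / system memory paths. */
      return cpc_read_sysmem(cpu, reg, val);    /* hypothetical helper */
  }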
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The PCC status field exposes an error bit (bit 2) to indicate any
errors during the execution of the last command. This patch checks the
error bit before notifying success/failure to the cpufreq driver.
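A sketch of the status check; the bit masks mirror the PCC generic
communication channel status field layout (bit 0: command complete,
bit 2: error), and the function name is illustrative:

  #include <linux/bitops.h>
  #include <linux/errno.h>
  #include <linux/types.h>

  #define PCC_CMD_COMPLETE_MASK    BIT(0)
  #define PCC_ERROR_MASK           BIT(2)

  /* Return 0 only when the platform completed the last command without
   * flagging an error in the shared-memory status field. */
  static int check_pcc_chan_status(u16 status)
  {
      if (!(status & PCC_CMD_COMPLETE_MASK))
          return -EIO;
      if (status & PCC_ERROR_MASK)
          return -EIO;
      return 0;
  }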
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There are several global variables in the CPPC driver that are related
to the PCC channel used for CPPC. This patch collects all such
information into a single consolidated structure (cppc_pcc_data).
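Roughly, the consolidated structure looks like the sketch below (the
field list is partial and illustrative):

  #include <linux/ktime.h>
  #include <linux/mailbox_client.h>
  #include <linux/spinlock.h>
  #include <linux/types.h>

  struct cppc_pcc_data {
      struct mbox_chan *pcc_channel;      /* mailbox channel for PCC */
      void __iomem *pcc_comm_addr;        /* mapped shared-memory region */
      bool pcc_channel_acquired;
      ktime_t deadline;                   /* command completion deadline */
      unsigned int pcc_mpar, pcc_mrtt;    /* rate-limit parameters */
      spinlock_t pcc_lock;                /* serializes PCC channel access */
  };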
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The CPPC tables contain entries for per-CPU feedback counters, which
allow us to compute the delivered performance over a given interval
of time.
The math for delivered performance per the CPPCv5.0+ spec is:
reference perf * delta(delivered perf ctr)/delta(ref perf ctr)
Maintaining deltas of the counters in the kernel is messy, as it
depends on when the reads are triggered (e.g. via the cpufreq
->get() interface). Also, the ->get() interface only returns one
value, so it can't return raw values. So instead, leave it to
userspace to keep track of the raw values and do the math for the
CPUs it cares about.
The delivered and reference perf counters are exposed via the same
sysfs file to avoid the potential "skid" that could occur if these
values were read individually from userspace.
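A userspace sketch of the intended usage; the sysfs path and the
"ref:<val> del:<val>" format are assumptions about the new
feedback_ctrs file and should be checked against the running kernel,
and reference_perf below is a placeholder value:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  static int read_ctrs(int cpu, uint64_t *ref, uint64_t *del)
  {
      char path[128];
      FILE *f;
      int n;

      snprintf(path, sizeof(path),
               "/sys/devices/system/cpu/cpu%d/acpi_cppc/feedback_ctrs",
               cpu);
      f = fopen(path, "r");
      if (!f)
          return -1;
      n = fscanf(f, "ref:%" SCNu64 " del:%" SCNu64, ref, del);
      fclose(f);
      return n == 2 ? 0 : -1;
  }

  int main(void)
  {
      uint64_t ref0, del0, ref1, del1;
      uint64_t reference_perf = 100;   /* placeholder; read from sysfs */

      if (read_ctrs(0, &ref0, &del0))
          return 1;
      sleep(1);                        /* sample over a one second window */
      if (read_ctrs(0, &ref1, &del1))
          return 1;
      if (ref1 == ref0)                /* avoid dividing by a zero delta */
          return 1;

      /* delivered = reference perf * delta(delivered)/delta(reference) */
      printf("delivered perf: %" PRIu64 "\n",
             reference_perf * (del1 - del0) / (ref1 - ref0));
      return 0;
  }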
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Compute the expected transition latency for frequency transitions
using the values from the PCCT tables when the desired perf
register is in PCC.
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Reviewed-by: Alexey Klimov <alexey.klimov@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
CPPC, defined in section 8.4.7 of the ACPI 6.0 specification, suggests
"To amortize the cost of PCC transactions, OSPM should read or write
all PCC registers via a single read or write command when possible".
This patch enables opportunistic batching of frequency transition
requests whenever the requests happen to overlap in time.
Currently the access to PCC is serialized by a spin lock which does
not scale well as we increase the number of cores in the system. This
patch improves the scalability by allowing different CPU cores to
update the PCC subspace in parallel and by batching requests, which
reduces certain types of operations (checking the command completion
bit, ringing the doorbell) by a significant margin.
Profiling shows significant improvement in the overall efficiency of
servicing frequency transition requests. With this patch we observe
close to 30% of the frequency transition requests being batched with
other requests while running apache bench on an ARM platform with 6
independent domains (or sets of related CPUs).
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
We need to acquire pcc_lock only when we are accessing registers
that are in the PCC subspace.
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
For cases where system memory mapped CPC registers need to be accessed
frequently, it helps immensely to pre-map them rather than map and
unmap for each operation, e.g. the case where feedback counters are
system memory mapped registers.
Restructure cpc_read/write and the cpc_regs structure to allow
pre-mapping the system addresses and unmapping them when the CPU exits.
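A simplified sketch of the pre-mapping idea; the structure below is a
stand-in for the real per-register bookkeeping, not the driver's
actual layout:

  #include <linux/acpi.h>
  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/types.h>

  struct cpc_reg_sketch {
      u8 space_id;
      u8 bit_width;
      u64 address;
      void __iomem *vaddr;    /* cached mapping, filled in once at init */
  };

  static int cpc_premap_reg(struct cpc_reg_sketch *r)
  {
      if (r->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
          return 0;

      /* Map once up front so cpc_read/cpc_write can use r->vaddr. */
      r->vaddr = ioremap(r->address, r->bit_width / 8);
      return r->vaddr ? 0 : -ENOMEM;
  }

  static void cpc_unmap_reg(struct cpc_reg_sketch *r)
  {
      if (r->vaddr)
          iounmap(r->vaddr);    /* when the CPU exits */
  }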
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
When CPPC fails to request a PCC channel, the CPC data is freed
and cpc_desc_ptr points to invalid data.
Avoid this issue by moving the cpc_desc_ptr assignment after the PCC
channel request.
Signed-off-by: Hoan Tran <hotran@apm.com>
Acked-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Based on section 8.4.7.1 of the ACPI 6.1 specification, if the platform
supports CPPC, the _CPC object must exist under all processor objects.
If the cpc_desc_ptr pointer is invalid on any CPU, acpi_get_psd_map()
should return an error and the CPPC cpufreq driver cannot be registered.
Signed-off-by: Hoan Tran <hotran@apm.com>
Reviewed-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The ACPI spec defines the Minimum Request Turnaround Time (MRTT) and
the Maximum Periodic Access Rate (MPAR) to prevent the OSPM from
sending more requests than the platform can handle. For further details
on these parameters please refer to section 14.1.3 of the ACPI 6.0 spec.
This patch includes MRTT/MPAR in deciding if or when a CPPC request
can be sent to the platform, to make sure the CPPC implementation is
compliant with the spec.
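A sketch of the admission check; variable names and the exact
bookkeeping are illustrative (MRTT is a minimum gap between commands,
MPAR a per-minute command budget):

  #include <linux/ktime.h>
  #include <linux/types.h>

  static bool pcc_cmd_allowed(ktime_t last_cmd_cmpl, unsigned int mrtt_us,
                              unsigned int mpar, unsigned int *mpar_count,
                              ktime_t *last_mpar_reset)
  {
      ktime_t now = ktime_get();

      /* Minimum Request Turnaround Time: leave enough of a gap after
       * the previous command completed. */
      if (mrtt_us && ktime_us_delta(now, last_cmd_cmpl) < mrtt_us)
          return false;

      /* Maximum Periodic Access Rate: refill the budget once a minute
       * and refuse to send once it is exhausted. */
      if (mpar) {
          if (ktime_ms_delta(now, *last_mpar_reset) > 60000) {
              *last_mpar_reset = now;
              *mpar_count = mpar;
          }
          if (!*mpar_count)
              return false;
          (*mpar_count)--;
      }

      return true;
  }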
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Acked-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
We do not have a strict read/write order requirement while accessing
the PCC subspace. The only requirement is that all accesses should be
committed before triggering the PCC doorbell to transfer ownership of
PCC to the platform, and this requirement is enforced by the PCC
driver.
Profiling on a many-core system shows an improvement of about 1.8us on
average per frequency change request (about 10% improvement on
average). Since these operations are executed while holding the
pcc_lock, reducing this time helps the CPPC implementation scale much
better as the number of cores increases.
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Acked-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
cpc_read and cpc_write are used while holding the pcc_lock spinlock,
so they need to be as fast as possible. The acpi_os_read/write_memory
APIs linearly search through a list for a cached mapping, which is
quite expensive. Since the PCC subspace is already mapped into the
virtual address space during initialization, we can just add the
offset and access the necessary CPPC registers.
This patch plus similar changes to the PCC driver reduce the time per
frequency transition from around 200us to about 20us for the CPPC
cpufreq driver.
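A simplified sketch of the resulting fast path (parameter names are
illustrative); readq assumes a 64-bit architecture:

  #include <linux/io.h>
  #include <linux/types.h>

  static u64 cpc_read_pcc(void __iomem *pcc_comm_base, u64 pcc_base_phys,
                          u64 reg_addr, u8 bit_width)
  {
      /* The PCC shared memory was ioremapped at init, so the register
       * is just an offset from the cached virtual base address. */
      void __iomem *vaddr = pcc_comm_base + (reg_addr - pcc_base_phys);

      switch (bit_width) {
      case 8:
          return readb(vaddr);
      case 16:
          return readw(vaddr);
      case 32:
          return readl(vaddr);
      case 64:
          return readq(vaddr);
      default:
          return 0;
      }
  }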
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Acked-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Previously the send_pcc_cmd() code checked if the PCC operation
had completed before returning from the function. This check was
performed regardless of the PCC op type (i.e. Read/Write). Knowing
the type of cmd can be used to optimize the check and avoid
needless waiting, e.g. with Write ops, the actual writing is done
before calling send_pcc_cmd(), and subsequent Writes will check if
the channel is free at the entry of send_pcc_cmd() anyway.
However, for Read cmds, we need to wait for the cmd completion bit
to be flipped, since the actual Read ops follow after returning
from send_pcc_cmd(). So, only do the looping check at the end for
Read ops.
Also, instead of using udelay() calls, use ktime as a means to
check for deadlines. The current deadline in which the remote
should flip the cmd completion bit is defined as N * Nominal
latency, where N is arbitrary and large enough to work on slow
emulators, and Nominal latency comes from the ACPI table (PCCT).
This helps in working around the CONFIG_HZ effects on udelay()
and also avoids needing different ACPI tables for Silicon and
Emulation platforms.
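A sketch of the ktime-based polling loop (simplified; the real code
also factors in the op type as described above, and the names here are
illustrative):

  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/ktime.h>

  #define PCC_CMD_COMPLETE    0x1    /* bit 0 of the PCC status field */

  static int check_pcc_cmd_complete(void __iomem *status_reg,
                                    unsigned int deadline_us)
  {
      /* Deadline = N * Nominal latency from the PCCT, precomputed by
       * the caller and passed in as microseconds. */
      ktime_t deadline = ktime_add_us(ktime_get(), deadline_us);

      while (!ktime_after(ktime_get(), deadline)) {
          if (readw(status_reg) & PCC_CMD_COMPLETE)
              return 0;
      }
      return -ETIMEDOUT;
  }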
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* pm-cpufreq:
Revert "Documentation: kernel_parameters for Intel P state driver"
cpufreq: mediatek: fix build error
cpufreq: intel_pstate: Add separate support for Airmont cores
cpufreq: intel_pstate: Replace BYT with ATOM
Revert "cpufreq: intel_pstate: Use ACPI perf configuration"
Revert "cpufreq: intel_pstate: Avoid calculation for max/min"
* acpi-cppc:
ACPI / CPPC: Use h/w reduced version of the PCCT structure
CPPC is enabled only on platforms which support the h/w reduced
ACPI specification, so use the h/w reduced version of the PCCT
consistently when dereferencing PCCT contents.
Fixes: 337aadff8e (ACPI: Introduce CPU performance controls using CPPC)
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Commit 337aadff8e (ACPI: Introduce CPU performance controls using CPPC)
leads to the following static checker warning:
drivers/acpi/cppc_acpi.c:527 acpi_cppc_processor_probe()
warn: overwrite may leak 'cpc_ptr'
Fix the warning by removing the bogus per-CPU pointer dereference.
Fixes: 337aadff8e (ACPI: Introduce CPU performance controls using CPPC)
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The "pcc_subspace_idx" is -1 if it hasn't been initialized yet. We need
it to be signed.
Fixes: 337aadff8e (ACPI: Introduce CPU performance controls using CPPC)
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
CPPC stands for Collaborative Processor Performance Controls
and is defined in the ACPI v5.0+ spec. It describes CPU
performance controls on an abstract and continuous scale
allowing the platform (e.g. remote power processor) to flexibly
optimize CPU performance with its knowledge of power budgets
and other architecture specific knowledge.
This patch adds a shim which exports commonly used functions
to get and set CPPC-specific controls for each CPU. This enables
CPUFreq drivers to gather per-CPU performance data and use it
with existing governors, or even allows for customized governors
implemented inside CPUFreq drivers.
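A sketch of how a cpufreq driver might call into the shim (error
handling trimmed; the request for highest_perf is just an example
policy):

  #include <acpi/cppc_acpi.h>

  static int cppc_example_go_fast(int cpu)
  {
      struct cppc_perf_caps caps;
      struct cppc_perf_ctrls ctrls = {};
      int ret;

      /* Read the per-CPU performance capabilities from _CPC. */
      ret = cppc_get_perf_caps(cpu, &caps);
      if (ret)
          return ret;

      /* Ask the platform for the highest advertised perf level. */
      ctrls.desired_perf = caps.highest_perf;
      return cppc_set_perf(cpu, &ctrls);
  }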
Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>