* pm-acpi: (24 commits)
olpc-xo15-sci: Use struct dev_pm_ops for power management
ACPI / PM: Drop PM callbacks from the ACPI bus type
ACPI / PM: Drop legacy driver PM callbacks that are not used any more
ACPI / PM: Do not execute legacy driver PM callbacks
acpi_power_meter: Use struct dev_pm_ops for power management
fujitsu-tablet: Use struct dev_pm_ops for power management
classmate-laptop: Use struct dev_pm_ops for power management
xo15-ebook: Use struct dev_pm_ops for power management
toshiba_bluetooth: Use struct dev_pm_ops for power management
panasonic-laptop: Use struct dev_pm_ops for power management
sony-laptop: Use struct dev_pm_ops for power management
hp_accel: Use struct dev_pm_ops for power management
toshiba_acpi: Use struct dev_pm_ops for power management
ACPI: Use struct dev_pm_ops for power management in the SBS driver
ACPI: Use struct dev_pm_ops for power management in the power driver
ACPI: Use struct dev_pm_ops for power management in the button driver
ACPI: Use struct dev_pm_ops for power management in the battery driver
ACPI: Use struct dev_pm_ops for power management in the AC driver
ACPI: Use struct dev_pm_ops for power management in processor driver
ACPI: Use struct dev_pm_ops for power management in the thermal driver
...
Remove the latency_ticks field as it is not used.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Fixes issues like this:
i_aSL -> iASL
00-7_f -> 00-7F
local_fADT -> local_FADT
execute_oSI -> execute_OSI
Also, in function headers, the parameters are now translated to
lower case (with underscores if necessary.)
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
On certain BIOSes, resume hangs if cpus are allowed to enter idle states
during suspend [1].
This was fixed in the acpi idle driver [2], but the intel_idle driver does not
have this fix. Thus, instead of replicating the fix in both idle
drivers, or in more platform-specific idle drivers if needed, the
more general cpuidle infrastructure can handle this.
A suspend callback in cpuidle_driver could handle this fix. But
a cpuidle_driver provides only basic functionality like platform idle
state detection and the mechanisms to support entry into and exit
from CPU idle states. All other cpuidle functions are found in the
generic cpuidle infrastructure, for the good reason that all cpuidle
drivers, irrespective of their platforms, support these functions.
One option therefore would be to register a suspend callback in cpuidle
which handles this fix. This could be called through a PM_SUSPEND_PREPARE
notifier. But this is too generic a notifier for a driver to handle.
Also, ideally the job of cpuidle is not to handle side effects of suspend.
It should expose the interfaces which "handle cpuidle 'during' suspend"
or any other operation, which the subsystems call during that respective
operation.
The fix demands that during suspend no cpus be allowed to enter
deep C-states. The interface cpuidle_uninstall_idle_handler() in cpuidle
ensures that. Not only that, it also kicks all the cpus which are already
in idle out of their idle states, which used to be done during cpu hotplug
through the CPU_DYING_FROZEN callbacks.
Now the question arises about when during suspend
cpuidle_uninstall_idle_handler() should be called. Since we are dealing with
drivers, it seems best to call this function during dpm_suspend().
Delaying the call until dpm_suspend_noirq() does no harm, as long as it is
before cpu_hotplug_begin(), to avoid race conditions with cpu hotplug
operations. In dpm_suspend_noirq(), it would be wise to place this call
before suspend_device_irqs() to avoid ugly interactions with it.
Analogously during resume.
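A rough sketch of the idea described above (illustrative only, not the
actual patch; the cpuidle_pause()/cpuidle_resume() wrapper names and the
call-site placement are assumptions here):

/* drivers/cpuidle/cpuidle.c -- sketch */
void cpuidle_pause(void)
{
	mutex_lock(&cpuidle_lock);
	cpuidle_uninstall_idle_handler();	/* also kicks CPUs already in idle */
	mutex_unlock(&cpuidle_lock);
}

void cpuidle_resume(void)
{
	mutex_lock(&cpuidle_lock);
	cpuidle_install_idle_handler();
	mutex_unlock(&cpuidle_lock);
}

/* call sites (sketch):
 *   dpm_suspend_noirq(): cpuidle_pause();  then suspend_device_irqs();
 *   dpm_resume_noirq():  resume_device_irqs();  then cpuidle_resume();
 */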
References:
[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/674075.
[2] http://marc.info/?l=linux-pm&m=133958534231884&w=2
Reported-and-tested-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Make the ACPI processor driver define its PM callbacks through
a struct dev_pm_ops object rather than by using legacy PM hooks
in struct acpi_device_ops.
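A minimal sketch of the pattern used here (an assumed shape, not the exact
diff; in particular the way the dev_pm_ops object gets attached to the ACPI
driver is an assumption):

static int acpi_processor_suspend(struct device *dev)
{
	/* save whatever needs saving; details omitted */
	return 0;
}

static int acpi_processor_resume(struct device *dev)
{
	/* restore state on wakeup; details omitted */
	return 0;
}

static SIMPLE_DEV_PM_OPS(acpi_processor_pm,
			 acpi_processor_suspend, acpi_processor_resume);

static struct acpi_driver acpi_processor_driver = {
	.name	= "processor",
	/* legacy .ops.suspend/.ops.resume hooks are gone ... */
	.drv.pm	= &acpi_processor_pm,
};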
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
None of the drivers implementing the ACPI device suspend callback
uses the pm_message_t argument of it, so this argument may be dropped
entirely from that callback. This will simplify switching the ACPI
bus type to PM handling based on struct dev_pm_ops.
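Sketched, the resulting change to the legacy suspend callback type
(simplified; treat the exact declaration as illustrative):

	/* before */
	int (*suspend)(struct acpi_device *device, pm_message_t state);
	/* after */
	int (*suspend)(struct acpi_device *device);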
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Commit e978aa7d7d (cpuidle: Move
dev->last_residency update to driver enter routine; remove dev->last_state)
broke suspend on laptops, as reported in the link below.
- https://lkml.org/lkml/2011/11/11/164
This was fixed in commit 3439a8da16
(ACPI / cpuidle: Remove acpi_idle_suspend (to fix suspend regression))
by removing the acpi_idle_suspend flag.
- https://lkml.org/lkml/2011/11/14/74
But this fix did not work on all systems,
as a suspend/resume regression was reported on a Lenovo S10-3
recently by Dave.
- https://lkml.org/lkml/2012/5/27/115
It looked like commit e978aa7d broke suspend, and
with commit 3439a8da resume was not working with the acpi_idle driver.
This patch fixes the regression that caused this issue
in the first place. The acpi_idle_suspend flag is essential on
some x86 systems to prevent the cpus from going into deeper C-states
when suspend is triggered (commit b04e7bdb98).
So reverting commit 3439a8da is essential.
By default, irqs are disabled in the arch-specific cpu_idle call
and re-enabled on the idle-state return path. During suspend,
the acpi_idle_suspend flag is set, which
prevents the cpus from going into deeper idle states, so
it is essential to enable the irqs on this return path too.
The suspend issue was that
we were not re-enabling the interrupts while returning from the
acpi_idle_enter_bm() routine when the acpi_idle_suspend flag was set,
and this caused the suspend failure.
In addition to the above, to improve the readability of the code,
the return of -EINVAL is replaced with -EBUSY on the acpi_idle_suspend
return path, implying that the system is currently busy while suspend
is in progress, which prevents the cpus from entering deeper C-states.
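A sketch of the resulting early-return path (based on the description
above, not the literal diff):

static int acpi_idle_enter_bm(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int index)
{
	local_irq_disable();

	if (acpi_idle_suspend) {
		/* Suspend in progress: refuse deep C-state entry, but
		 * re-enable the interrupts we just disabled, otherwise
		 * suspend hangs. */
		local_irq_enable();
		cpu_relax();
		return -EBUSY;
	}

	/* ... normal C3 / bus-master-avoidance entry continues here ... */
	return index;
}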
Reported-and-Tested-by: Dave Hansen <dave@linux.vnet.ibm.com>
Tested-by: Preeti Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Reviewed-by: Srivatsa S Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
ACPI code is shared by arch/x86 and arch/ia64. ia64 doesn't provide a plain
"halt()" function. Use safe_halt() instead.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The acpi_processor_cst_has_changed() function is invoked from a
CPU_ONLINE or CPU_DEAD function, which might well execute on CPU 0
even though the CPU being hotplugged is some other CPU. In addition,
acpi_processor_cst_has_changed() invokes smp_processor_id() without
protection, resulting in splats when onlining CPUs.
This commit therefore changes the smp_processor_id() to pr->id, as is
used elsewhere in the code, for example, in acpi_processor_add().
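The gist of the change, sketched (the surrounding code is elided and the
helper shown is hypothetical):

	struct acpi_processor *pr;	/* processor the notification is about */

	/* before: identifies whichever CPU runs the notifier (often CPU 0),
	 * and trips the preemption debug checks */
	do_cst_work(smp_processor_id());

	/* after: act on the CPU being hotplugged */
	do_cst_work(pr->id);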
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Tested-by: Yong Zhang <yong.zhang0@gmail.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Len Brown <len.brown@intel.com>
Currently when a CPU is off-lined it enters either MWAIT-based idle or,
if MWAIT is not desired or supported, HLT-based idle (which places the
processor in C1 state). This patch allows processors without MWAIT
support to stay in states deeper than C1.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
After commit e978aa7d7d ("cpuidle: Move dev->last_residency update to
driver enter routine; remove dev->last_state") setting acpi_idle_suspend
to 1 by acpi_processor_suspend() causes the ACPI cpuidle routines to
return error codes continuously, which in turn causes cpuidle to lock up
(hard).
However, acpi_idle_suspend doesn't appear to be useful for any
particular purpose (it's racy and doesn't really provide any real
protection), so it can be removed, which makes the problem go away.
Reported-and-tested-by: Tomas M. <tmezzadra@gmail.com>
Reported-and-tested-by: Ferenc Wagner <wferi@niif.hu>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
cpuidle: Single/Global registration of idle states
cpuidle: Split cpuidle_state structure and move per-cpu statistics fields
cpuidle: Remove CPUIDLE_FLAG_IGNORE and dev->prepare()
cpuidle: Move dev->last_residency update to driver enter routine; remove dev->last_state
ACPI: Fix CONFIG_ACPI_DOCK=n compiler warning
ACPI: Export FADT pm_profile integer value to userspace
thermal: Prevent polling from happening during system suspend
ACPI: Drop ACPI_NO_HARDWARE_INIT
ACPI atomicio: Convert width in bits to bytes in __acpi_ioremap_fast()
PNPACPI: Simplify disabled resource registration
ACPI: Fix possible recursive locking in hwregs.c
ACPI: use kstrdup()
mrst pmu: update comment
tools/power turbostat: less verbose debugging
This patch makes the cpuidle_states structure global (a single copy)
instead of per-cpu. The statistics needed on a per-cpu basis
by the governor are kept per-cpu. This simplifies the cpuidle
subsystem, as state registration is done by a single cpu only.
Having a single copy of cpuidle_states saves memory. The rare case
of asymmetric C-states can be handled within the cpuidle driver,
and architectures such as POWER do not have asymmetric C-states.
With single/global registration of all the idle states,
dynamic C-state transitions on x86 are handled by
the boot cpu. Here, the boot cpu disables all the devices,
re-populates the states and later enables all the devices,
irrespective of which cpu receives the notification first.
Reference:
https://lkml.org/lkml/2011/4/25/83
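A hedged sketch of what global registration looks like from a driver's
point of view (the driver name, state names, enter callbacks and the
per-cpu device array below are hypothetical):

static struct cpuidle_driver my_idle_driver = {
	.name		= "my_idle",
	.owner		= THIS_MODULE,
	.states		= {
		{ .name = "C1", .exit_latency = 1,
		  .target_residency = 1,  .enter = my_enter_c1 },
		{ .name = "C2", .exit_latency = 20,
		  .target_residency = 80, .enter = my_enter_c2 },
	},
	.state_count	= 2,
};

	/* registration happens once, from a single CPU */
	cpuidle_register_driver(&my_idle_driver);

	/* each CPU still registers its own device, which now carries
	 * only the per-cpu statistics */
	for_each_online_cpu(cpu)
		cpuidle_register_device(&per_cpu(my_idle_devices, cpu));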
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
This is the first step towards global registration of cpuidle
states. The statistics used primarily by the governor are per-cpu
and have to be split from the rest of the fields inside cpuidle_state,
which will be made global, i.e. a single copy. The driver_data field
is also per-cpu and is moved as well.
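A sketch of the resulting split (field names follow the description above
and should be treated as illustrative):

struct cpuidle_state {			/* global: one copy per state */
	char		name[CPUIDLE_NAME_LEN];
	char		desc[CPUIDLE_DESC_LEN];
	unsigned int	flags;
	unsigned int	exit_latency;		/* us */
	unsigned int	target_residency;	/* us */
	/* entry method etc. also stay here */
};

struct cpuidle_state_usage {		/* per-cpu, inside cpuidle_device */
	void			*driver_data;
	unsigned long long	usage;	/* number of entries */
	unsigned long long	time;	/* total residency, us */
};

struct cpuidle_device {
	/* ... */
	struct cpuidle_state_usage states_usage[CPUIDLE_STATE_MAX];
};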
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Cpuidle governor only suggests the state to enter using the
governor->select() interface, but allows the low level driver to
override the recommended state. The actual entered state
may be different because of software or hardware demotion. Software
demotion is done by the back-end cpuidle driver and can be accounted
correctly. The current cpuidle code uses the last_state field to capture
the actual state entered and, based on that, updates the statistics for
the state entered.
Ideally the driver enter routine should update the counters,
and it should return the state actually entered rather than the time
spent there. The generic cpuidle code should simply handle where
the counters live in the sysfs namespace, not updating the counters.
Reference:
https://lkml.org/lkml/2011/3/25/52
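Sketched, per the description above: the driver's enter routine measures
the residency, stores it in dev->last_residency, and returns the index of
the state actually entered. The body below belongs to a hypothetical
driver enter routine; the demotion check and hardware-entry helpers are
assumptions for illustration.

	ktime_t t1 = ktime_get();

	if (bus_master_activity())		/* hypothetical demotion check */
		index = MY_SHALLOWER_STATE;	/* software demotion */

	enter_hw_state(index);			/* hypothetical hardware entry */

	dev->last_residency =
		(int)ktime_to_us(ktime_sub(ktime_get(), t1));

	return index;	/* the state actually entered, not the time spent */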
Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Trinabh Gupta <g.trinabh@gmail.com>
Tested-by: Jean Pihet <j-pihet@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Len Brown <len.brown@intel.com>
* 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
lockdep: Comment all warnings
lib: atomic64: Change the type of local lock to raw_spinlock_t
locking, lib/atomic64: Annotate atomic64_lock::lock as raw
locking, x86, iommu: Annotate qi->q_lock as raw
locking, x86, iommu: Annotate irq_2_ir_lock as raw
locking, x86, iommu: Annotate iommu->register_lock as raw
locking, dma, ipu: Annotate bank_lock as raw
locking, ARM: Annotate low level hw locks as raw
locking, drivers/dca: Annotate dca_lock as raw
locking, powerpc: Annotate uic->lock as raw
locking, x86: mce: Annotate cmci_discover_lock as raw
locking, ACPI: Annotate c3_lock as raw
locking, oprofile: Annotate oprofilefs lock as raw
locking, video: Annotate vga console lock as raw
locking, latencytop: Annotate latency_lock as raw
locking, timer_stats: Annotate table_lock as raw
locking, rwsem: Annotate inner lock as raw
locking, semaphores: Annotate inner lock as raw
locking, sched: Annotate thread_group_cputimer as raw
...
Fix up conflicts in kernel/posix-cpu-timers.c manually: making
cputimer->cputime a raw lock conflicted with the ABBA fix in commit
bcd5cff721 ("cputimer: Cure lock inversion").
We cannot preempt this lock on -rt as we are in an
interrupt disabled region and about to go into deep sleep.
In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The PM QoS implementation files are better named
kernel/power/qos.c and include/linux/pm_qos.h.
The PM QoS support is compiled under the CONFIG_PM option.
Signed-off-by: Jean Pihet <j-pihet@ti.com>
Acked-by: markgross <markgross@thegnar.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
The workaround for AMD erratum 400 uses the term "c1e" falsely suggesting:
1. Intel C1E is somehow involved
2. All AMD processors with C1E are involved
Use the string "amd_c1e" instead of simply "c1e" to clarify that
this workaround is specific to AMD's version of C1E.
Use the string "e400" to clarify that the workaround is specific
to AMD processors with Erratum 400.
This patch is text-substitution only, with no functional change.
cc: x86@kernel.org
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Len Brown <len.brown@intel.com>
CPUIDLE_FLAG_SHALLOW
CPUIDLE_FLAG_BALANCED
CPUIDLE_FLAG_DEEP
CPUIDLE_FLAG_CHECK_BM
were set by acpi_processor_setup_cpuidle(),
but never used by cpuidle or by acpi_idle.
So stop setting them.
Signed-off-by: Len Brown <len.brown@intel.com>
Having four variables for the same thing:
idle_halt, idle_nomwait, force_mwait and boot_option_idle_overrides
is rather confusing and unnecessarily complex.
If an idle= boot param is passed, only set up one variable:
boot_option_idle_overrides
This introduces the following functional changes/fixes:
- The intel_idle driver does not register if any idle=xy
boot param is passed.
- processor_idle.c will also not register a cpuidle driver
and become active if idle=halt is passed.
Previously, a cpuidle driver with one (C1, halt) state got registered.
Now the default_idle function will be used, which finally uses
the same idle call to enter the sleep state (safe_halt()), but
without registering a whole cpuidle driver.
That means the idle= param will always prevent cpuidle drivers from registering,
with one exception (same behavior as before):
idle=nomwait
may still register the acpi_idle cpuidle driver, but C1 will not use
mwait, using hlt instead. This can be a workaround for IO-based deeper sleep
states where C1 mwait causes problems.
Signed-off-by: Thomas Renninger <trenn@suse.de>
cc: x86@kernel.org
Signed-off-by: Len Brown <len.brown@intel.com>
__get_cpu_var() can be replaced with this_cpu_read() and will then use a single
read instruction with implied address calculation to access the correct per-cpu
instance.
However, the address of a per-cpu variable passed to __this_cpu_read() cannot be
determined (since it's an implied address conversion through segment prefixes).
Therefore apply this only to uses of __get_cpu_var() where the address of the
variable is not used.
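A sketch of the substitution, where only the value (not the address) of
the per-cpu variable is needed (the per-cpu variable below is hypothetical):

DEFINE_PER_CPU(int, foo);

	/* before: computes the per-cpu address, then dereferences it */
	val = __get_cpu_var(foo);

	/* after: a single read with an implied (segment-based) address */
	val = __this_cpu_read(foo);

	/* not converted: here the address itself is needed */
	ptr = &__get_cpu_var(foo);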
V3->V4:
- Move one instance of this_cpu_inc_return to a later patch
so that this one can go in without percpu infrastructure
changes.
Sedat: fixed compile failure caused by an extra ')'.
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (53 commits)
ACPI: install ACPI table handler before any dynamic tables being loaded
ACPI / PM: Blacklist another machine that needs acpi_sleep=nonvs
ACPI: Page based coalescing of I/O remappings optimization
ACPI: Convert simple locking to RCU based locking
ACPI: Pre-map 'system event' related register blocks
ACPI: Add interfaces for ioremapping/iounmapping ACPI registers
ACPI: Maintain a list of ACPI memory mapped I/O remappings
ACPI: Fix ioremap size for MMIO reads and writes
ACPI / Battery: Return -ENODEV for unknown values in get_property()
ACPI / PM: Fix reference counting of power resources
ACPICA: Fix Scope() op in module level code
ACPI battery: support percentage battery remaining capacity
ACPI: Make Embedded Controller command timeout delay configurable
ACPI dock: move some functions to .init.text
ACPI: thermal: remove unused limit code
ACPI: static sleep_states[] and acpi_gts_bfs_check
ACPI: remove dead code
ACPI: delete dedicated MAINTAINERS entries for ACPI EC and BATTERY drivers
ACPI: Only processor needs CPU_IDLE
ACPICA: Update version to 20101013
...
The mW data in this field comes from AML _CST,
which was typed in by a BIOS writer, and is thus
considered unreliable.
Linux does not use it for making any decisions.
We do display it in sysfs where somebody might
read it and assume it is meaningful, so delete it.
Signed-off-by: Len Brown <len.brown@intel.com>
Looks like a leftover from /proc/acpi/processor/*/power, which got removed.
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Len Brown <len.brown@intel.com>
Some minor improvements in error handling, but overall it was mostly dead
code.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Remove deprecated ACPI processor procfs I/F, including:
/proc/acpi/processor/CPUX/power
/proc/acpi/processor/CPUX/limit
/proc/acpi/processor/CPUX/info
/proc/acpi/processor/CPUX/throttling still exists,
as we don't have sysfs I/F available for now.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
* 'timers-timekeeping-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
um: Fix read_persistent_clock fallout
kgdb: Do not access xtime directly
powerpc: Clean up obsolete code relating to decrementer and timebase
powerpc: Rework VDSO gettimeofday to prevent time going backwards
clocksource: Add __clocksource_updatefreq_hz/khz methods
x86: Convert common clocksources to use clocksource_register_hz/khz
timekeeping: Make xtime and wall_to_monotonic static
hrtimer: Cleanup direct access to wall_to_monotonic
um: Convert to use read_persistent_clock
timkeeping: Fix update_vsyscall to provide wall_to_monotonic offset
powerpc: Cleanup xtime usage
powerpc: Simplify update_vsyscall
time: Kill off CONFIG_GENERIC_TIME
time: Implement timespec_add
x86: Fix vtime/file timestamp inconsistencies
Trivial conflicts in Documentation/feature-removal-schedule.txt
Much less trivial conflicts in arch/powerpc/kernel/time.c resolved as
per Thomas' earlier merge commit 47916be4e2 ("Merge branch
'powerpc.cherry-picks' into timers/clocksource")
Accommodate the original C1E-aware idle routine to the different times
during boot when the BIOS enables C1E. While at it, remove the synthetic
CPUID flag in favor of a single global setting which denotes C1E status
on the system.
[ hpa: changed c1e_enabled to be a bool; clarified cpu bit 3:21 comment ]
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
LKML-Reference: <20100727165335.GA11630@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Now that all arches have been converted over to use generic time via
clocksources or arch_gettimeoffset(), we can remove the GENERIC_TIME
config option and simplify the generic code.
Signed-off-by: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <1279068988-21864-4-git-send-email-johnstul@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
"processor.bm_check_disable=1" prevents Linux from checking BM_STS
before entering C3-type cpu power states.
This may be useful for a system running acpi_idle
where the BIOS exports FADT C-states, _CST IO C-states,
or _CST FFH C-states with the BM_STS bit set;
while configuring the chipset to set BM_STS
more frequently than perhaps is optimal.
Note that such systems may have been developed
using a tickful OS that would quickly clear BM_STS,
rather than a tickless OS that may go for some time
between checking and clearing BM_STS.
Note also that an alternative for newer systems
is to use the intel_idle driver, which always
ignores BM_STS, relying on Linux device drivers
to register constraints explicitly via PM_QOS.
https://bugzilla.kernel.org/show_bug.cgi?id=15886
Signed-off-by: Len Brown <len.brown@intel.com>
It turns out that there is a bit in the _CST for Intel FFH C3
that tells the OS if we should be checking BM_STS or not.
Linux has been unconditionally checking BM_STS.
If the chip-set is configured to enable BM_STS,
it can retard or completely prevent entry into
deep C-states -- as illustrated by turbostat:
http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/
ref: Intel Processor Vendor-Specific ACPI Interface Specification
table 4 "_CST FFH GAS Field Encoding"
Bit 1: Set to 1 if OSPM should use Bus Master avoidance for this C-state
https://bugzilla.kernel.org/show_bug.cgi?id=15886
Signed-off-by: Len Brown <len.brown@intel.com>
CONFIG_ACPI_PROCFS=n:
drivers/acpi/processor_idle.c:83: warning: 'us_to_pm_timer_ticks' defined but not used.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
The acpi_enter_[simple|bm] routines do a us-to-pm-tick conversion on every
idle wakeup, and the value is only used in the /proc/acpi display. We can
instead store the time in us and convert it into pm ticks just before printing,
avoiding the conversion in the common path.
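Sketched (the helper body is an assumption; PM_TIMER_FREQUENCY is the
3.579545 MHz ACPI PM timer rate):

static u32 us_to_pm_timer_ticks(s64 t)
{
	return div64_u64(t * PM_TIMER_FREQUENCY, 1000000);
}

	/* hot path (idle wakeup): just accumulate microseconds */
	cx->time += idle_time_us;

	/* /proc/acpi display path: convert once, at print time */
	seq_printf(seq, "%u ticks\n", us_to_pm_timer_ticks(cx->time));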
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The C-state idle time is not calculated correctly, which returns the wrong
residency time for the C-state. This has the following effects:
1. The system can't choose a deeper C-state the next time it is idle,
so system power is increased. E.g. on one server machine idle power
increases by about 40W.
2. powertop shows that the system stays in the C0 running state about 95% of the
time, although the system is idle most of the time.
This is a 2.6.35-rc1 regression caused by commit 2da513f582
(ACPI: Minor cleanup eliminating redundant PMTIMER_TICKS to NS conversion).
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reported-by: Yu Zhidong <zhidong.yu@intel.com>
Tested-by: Yu Zhidong <zhidong.yu@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Len Brown <len.brown@intel.com>
* 'idle-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6:
intel_idle: native hardware cpuidle driver for latest Intel processors
ACPI: acpi_idle: touch TS_POLLING only in the non-MWAIT case
acpi_pad: uses MONITOR/MWAIT, so it doesn't need to clear TS_POLLING
sched: clarify commment for TS_POLLING
ACPI: allow a native cpuidle driver to displace ACPI
cpuidle: make cpuidle_curr_driver static
cpuidle: add cpuidle_unregister_driver() error check
cpuidle: fail to register if !CONFIG_CPU_IDLE
acpi_enter_[simple,bm] does
the idle timing in ns, converts it to a timeval, then to us, then to
pmtimer_ticks, and then back to ns.
This patch changes things to:
do the idle timing in ns, convert it to us, and then to pmtimer_ticks.
This just saves an imul along this path, but makes the code cleaner.
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Len Brown <len.brown@intel.com>
commit d306ebc286
(ACPI: Be in TS_POLLING state during mwait based C-state entry)
fixed an important power & performance issue where ACPI c2 and c3 C-states
were clearing TS_POLLING even when using MWAIT (ACPI_STATE_FFH).
That bug had been causing us to receive redundant scheduling interrupts
when we had already been woken up by MONITOR/MWAIT.
Following up on that...
In the MWAIT case, we don't have to subsequently
check need_resched(), as that check was there
for the TS_POLLING-clearing case.
Note that not only does the cpuidle calling function
already check need_resched() before calling us, the
low-level entry into monitor/mwait calls it twice --
guaranteeing that a write to the trigger address
cannot go unnoticed.
Also, in this case, we don't have to set TS_POLLING
when we wake, because we never cleared it.
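A simplified sketch of the resulting entry path (condensed from the
description above; error handling and timing omitted):

	local_irq_disable();

	if (cx->entry_method != ACPI_CSTATE_FFH) {
		/* IO-port based C-state: the CPU needs a resched IPI,
		 * so clear TS_POLLING and re-check need_resched() */
		current_thread_info()->status &= ~TS_POLLING;
		smp_mb();
		if (unlikely(need_resched())) {
			current_thread_info()->status |= TS_POLLING;
			local_irq_enable();
			return 0;
		}
	}
	/* MWAIT (ACPI_CSTATE_FFH) case: TS_POLLING stays set, and the
	 * monitor/mwait entry path re-checks need_resched() itself */

	acpi_idle_do_entry(cx);

	if (cx->entry_method != ACPI_CSTATE_FFH)
		current_thread_info()->status |= TS_POLLING;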
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>
These were used before cpuidle by the native ACPI idle driver,
which tracked promotion and demotion between states.
The code was referenced by CONFIG_ACPI_PROCFS
for /proc/acpi/processor/*/power,
but as we no longer do promotion/demotion, that
reference has been a NOP since the transition.
Signed-off-by: Len Brown <len.brown@intel.com>
This patch changes the string-based list management to a handle-based
implementation to help with hot-path use of pm-qos; it also renames
much of the API to use "request" as opposed to "requirement", which was
used in the initial implementation. I did this because "request" more
accurately represents what it actually does.
Also, I added a string-based ABI for users wanting to use a string
interface. So if the user writes 0xDDDDDDDD-formatted hex it will be
accepted by the interface. (Someone asked me for it and I don't think
it hurts anything.)
This patch updates some documentation input I got from Randy.
Signed-off-by: markgross <mgross@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the following
script is used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition, while for others adding it to an
implementation .h or embedding .c file was more appropriate. This step
added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
* 'acpica' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6:
ACPI: replace acpi_integer by u64
ACPICA: Update version to 20100121.
ACPICA: Remove unused uint32_struct type
ACPICA: Disassembler: Remove obsolete "Integer64" field in parse object
ACPICA: Remove obsolete ACPI_INTEGER (acpi_integer) type
ACPICA: Predefined name repair: fix NULL package elements
ACPICA: AcpiGetDevices: Eliminate unnecessary _STA calls
ACPICA: Update all ACPICA copyrights and signons to 2010
ACPICA: Update for new gcc-4 warning options
ACPI deep C-state entry had a long-standing bug/missing feature, wherein we were sending
resched IPIs when an idle CPU is in an mwait-based deep C-state. Only mwait-based C1 was using
the write to the monitored address to wake up the mwait'ing CPU.
This patch changes the code to retain the TS_POLLING bit if we are entering an mwait-based
deep C-state.
The patch has been verified to reduce the number of resched IPIs in general and also
improves performance/power on workloads with low system utilization (i.e., when mwait-based
deep C-states are being used).
Fixes "netperf ~50% regression with 2.6.33-rc1, bisect to 1b9508f"
http://marc.info/?l=linux-kernel&m=126441481427331&w=4
Reported-by: Lin Ming <ming.m.lin@intel.com>
Tested-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Since the rewrite of the CPU idle governor in 2.6.32, two laptops have
surfaced where the BIOS advertises a C2 power state, but for some reason
this state is not functioning (as verified in both cases by powertop
before the patch in .32).
The old governor had the accidental behavior that if a non-working state
was chosen too many times, it would end up falling back to C1. The new
governor works differently and this accidental behavior is no longer
there; the result is a high temperature on these two machines.
This patch adds these 2 machines to the DMI table for C state anomalies;
by just not using C2 both these machines are better off (the TSC can be
used instead of the pm timer, giving a performance boost for example).
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=14742
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Reported-by: <akwatts@ymail.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
acpi_integer is now obsolete and removed from the ACPICA code base,
replaced by u64.
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
no functional change -- cleanup only.
acpi_processor_power_verify_c2() was nearly empty due to a previous patch,
so expand its remains into its one caller and delete it.
Signed-off-by: Len Brown <len.brown@intel.com>
Do for C3 what the previous patch did for C2.
The C2 patch was in response to a highly visible
and multiply reported C-state/turbo failure,
while this change has no bug report in-hand.
This will enable C3 in Linux on systems where BIOS
overstates C3 latency in _CST. It will also enable
future systems which may actually have C3 > 1000usec.
Linux has always ignored ACPI BIOS C3 with exit latency > 1000 usec,
and the ACPI spec is clear that this is correct for FADT-supplied C3.
However, the ACPI spec explicitly states that _CST-supplied C-states
have no latency limits.
So move the 1000usec C3 test out of the code shared
by FADT and _CST code-paths, and into the FADT-specific path.
Signed-off-by: Len Brown <len.brown@intel.com>
Linux has always ignored ACPI BIOS C2 with exit latency > 100 usec,
and the ACPI spec is clear that this is correct for FADT-supplied C2.
However, the ACPI spec explicitly states that _CST-supplied C-states
have no latency limits.
So move the 100usec C2 test out of the code shared
by FADT and _CST code-paths, and into the FADT-specific path.
This bug has not been visible until Nehalem, which advertises
a CPU-C2 worst case exit latency on servers of 205usec.
That (incorrect) figure is being used by BIOS writers
on mobile Nehalem systems for the AC configuration.
Thus, Linux ignores C2 leaving just C1, which
saves less power, and also impacts performance
by preventing the use of turbo mode.
http://bugzilla.kernel.org/show_bug.cgi?id=15064
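Sketched, per the description in this and the previous patch (FADT path
only; the _CST path keeps whatever latency the BIOS reports). Invalidating
a state by zeroing its address is an assumption about the mechanism:

	/* in acpi_processor_get_power_info_fadt() only */
	if (acpi_gbl_FADT.c2_latency > ACPI_PROCESSOR_MAX_C2_LATENCY)
		pr->power.states[ACPI_STATE_C2].address = 0;	/* drop C2 */

	if (acpi_gbl_FADT.c3_latency > ACPI_PROCESSOR_MAX_C3_LATENCY)
		pr->power.states[ACPI_STATE_C3].address = 0;	/* drop C3 */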
Tested-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
I got the following warning on an ia64 box:
In function 'acpi_processor_power_verify':
642: warning: passing argument 2 of 'smp_call_function_single' from
incompatible pointer type
This smp_call_function_single() was introduced by a commit
f833bab87f:
> @@ -162,8 +162,9 @@
> pr->power.timer_broadcast_on_state = state;
> }
>
> -static void lapic_timer_propagate_broadcast(struct acpi_processor *pr)
> +static void lapic_timer_propagate_broadcast(void *arg)
> {
> + struct acpi_processor *pr = (struct acpi_processor *) arg;
> unsigned long reason;
>
> reason = pr->power.timer_broadcast_on_state < INT_MAX ?
> @@ -635,7 +636,8 @@
> working++;
> }
>
> - lapic_timer_propagate_broadcast(pr);
> + smp_call_function_single(pr->id, lapic_timer_propagate_broadcast,
> + pr, 1);
>
> return (working);
> }
The problem is that lapic_timer_propagate_broadcast() has 2 versions:
one is the real code that was modified in the above commit, and the other
is NOP code used when !ARCH_APICTIMER_STOPS_ON_C3:
static void lapic_timer_propagate_broadcast(struct acpi_processor *pr) { }
So I got the warning because of !ARCH_APICTIMER_STOPS_ON_C3.
We really want to do nothing here on !ARCH_APICTIMER_STOPS_ON_C3, so
modify the real version of lapic_timer_propagate_broadcast() to use
smp_call_function_single() internally.
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Commit 3d5b6fb47a ("ACPI: Kill overly
verbose "power state" log messages") removed the actual use of this
variable, but didn't remove the variable itself, resulting in build
warnings like
drivers/acpi/processor_idle.c: In function ‘acpi_processor_power_init’:
drivers/acpi/processor_idle.c:1169: warning: unused variable ‘i’
Just get rid of the now unused variable.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I was recently lucky enough to get a 64-CPU system, so my kernel log
ends up with 64 lines like:
ACPI: CPU0 (power states: C1[C1] C2[C3])
This is pretty useless clutter because this info is already available
after boot from both /sys/devices/system/cpu/cpu*/cpuidle/state?/ as
well as /proc/acpi/processor/CPU*/power.
So just delete the code that prints the C-states in processor_idle.c.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Linux/ACPI core files using internal.h all get PREFIX "ACPI: ";
however, not all ACPI drivers use/want it -- and they
should not have to #undef PREFIX to define their own.
Add a GPL comment to internal.h while we are there.
This does not change any actual console output,
aside from a whitespace fix.
Signed-off-by: Len Brown <len.brown@intel.com>
Currently clockevents_notify() is called with interrupts enabled at
some places and interrupts disabled at some other places.
This results in a deadlock in this scenario.
cpu A holds clockevents_lock in clockevents_notify() with irqs enabled
cpu B waits for clockevents_lock in clockevents_notify() with irqs disabled
cpu C doing set_mtrr(), which will try to rendezvous all the cpus.
This will result in C and A coming to the rendezvous point and waiting
for B. B is stuck forever waiting for the spinlock and thus never
reaches the rendezvous point.
Fix the clockevents code so that clockevents_lock is taken with
interrupts disabled and thus avoid the above deadlock.
Also call lapic_timer_propagate_broadcast() on the destination cpu so
that we avoid calling smp_call_function() in the clockevents notifier
chain.
This issue left us wondering if we need to change the MTRR rendezvous
logic to use stop machine logic (instead of smp_call_function) or add
a check in spinlock debug code to see if there are other spinlocks
which get taken under both interrupts enabled/disabled conditions.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
Cc: "Brown Len" <len.brown@intel.com>
LKML-Reference: <1250544899.2709.210.camel@sbs-t61.sc.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Currently, whether the ACPI processor proc I/F is registered depends on
CONFIG_PROC. It should instead depend on CONFIG_ACPI_PROCFS:
when CONFIG_ACPI_PROCFS is unset in the kernel configuration, the
ACPI processor proc I/F won't be registered.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
cosmetic only. The lapic_timer workaround routines
are specific to the lapic_timer, and are not acpi-generic.
old:
acpi_timer_check_state()
acpi_propagate_timer_broadcast()
acpi_state_timer_broadcast()
new:
lapic_timer_check_state()
lapic_timer_propagate_broadcast()
lapic_timer_state_broadcast()
also, simplify the code in acpi_processor_power_verify()
so that lapic_timer_check_state() is simply called
from one place for all valid C-states, including C1.
Signed-off-by: Len Brown <len.brown@intel.com>
ARB_DISABLE is a NOP on all of the recent Intel platforms.
For such platforms, reduce contention on c3_lock
by skipping the fake ARB_DISABLE.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
When AMD C1E is enabled, the local APIC timer will stop even in C1. To avoid
a suspend/resume hang, this patch removes C1 and replaces it with a cpu_relax()
in the suspend/resume path. This has no impact on the runtime path.
http://bugzilla.kernel.org/show_bug.cgi?id=13233
[ impact: avoid suspend/resume hang in AMD CPU with C1E enabled ]
Tested-by: Dmitry Lyzhyn <thisistempbox@yahoo.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
When AMD C1E is enabled, the local APIC timer will stop even in C1.
This patch uses a broadcast IPI to replace the local APIC timer in C1.
http://bugzilla.kernel.org/show_bug.cgi?id=13233
[ impact: avoid boot hang in AMD CPU with C1E enabled ]
Tested-by: Dmitry Lyzhyn <thisistempbox@yahoo.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Processor idle power states C2 and C3 stop the TSC on many machines.
Linux recognizes this situation and marks the TSC as unstable:
Marking TSC unstable due to TSC halts in idle
But if those same machines are booted with "processor.max_cstate=1",
then there is no need to validate C2 and C3, and no need to
disable the TSC, which can be reliably used as a clocksource.
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
A previous 2.6.30 patch, a71e4917dc,
(ACPI: idle: mark_tsc_unstable() at init-time, not run-time)
erroneously disabled the TSC on systems that did not actually
have valid deep C-states.
Move the check to after the deep C-states are validated,
via a new helper, tsc_check_state(), which replaces tsc_halts_in_c().
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Frans Pop <elendil@planet.nl>
In 2.6.29,
31878dd86b
"ACPI: remove BM_RLD access from idle entry path"
moved BM_RLD initialization to init-time from run time.
But we discovered that some BIOS do not restore BM_RLD
after suspend, causing device errors on C3 and C4
after resume. So now the kernel restores BM_RLD.
http://bugzilla.kernel.org/show_bug.cgi?id=13032
Signed-off-by: Len Brown <len.brown@intel.com>
As processor.max_cstate is an init-time-only modparam,
sanity checking it at init-time is sufficient.
http://bugzilla.kernel.org/show_bug.cgi?id=13142
Signed-off-by: Len Brown <len.brown@intel.com>
Linux tells ICH4 users that they can (manually) invoke
"hpet=force" to enable the undocumented ICH-4M HPET.
The HPET becomes available for both clocksource and clockevents.
But as of ff69f2bba6
(acpi: fix of pmtimer overflow that make Cx states time incorrect)
the HPET may be used via clocksource for idle accounting, and
hpet=force on an ICH4 box hangs boot.
It turns out that touching the MMIO HPET within
the ARB_DIS part of C3 will hang the hardware.
The fix is to simply move the timer access outside
the ARB_DIS region. This is a no-op on modern hardware
because ARB_DIS is no longer used.
http://bugzilla.kernel.org/show_bug.cgi?id=13087
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Linux-2.6.29 deleted the legacy ACPI idle handler, leaving
the CPU_IDLE handler, which does not track bus master activity.
So delete the unused bm_activity field -- it is confusing to
print an always zero value.
This patch could break programs that parse
/proc/acpi/processor/*/power, since it deletes this
line from that file:
bus master activity: 00000000
http://bugzilla.kernel.org/show_bug.cgi?id=13145
is not fixed by this patch, but provoked this patch.
Signed-off-by: Len Brown <len.brown@intel.com>
The c2 and c3 idle handlers check tsc_halts_in_c()
after every time they return from idle. Um, when?:-)
Move this check to init-time to remove the unnecessary
run-time overhead, and also to have the check complete before
the first entry into the idle handler.
ff69f2bba6
(acpi: fix of pmtimer overflow that make Cx states time incorrect)
replaced the hard-coded use of the PM-timer inside idle,
with ktime_get_real(), which possibly uses the TSC --
so it is now especially prudent to detect a broken TSC
before entering idle.
http://bugzilla.kernel.org/show_bug.cgi?id=13087
Signed-off-by: Len Brown <len.brown@intel.com>
Add support for the Always Running APIC timer, CPUID_0x6_EAX_Bit2.
This bit means the APIC timer continues to run even when the CPU is
in deep C-states.
The advantage is that we can always use the LAPIC timer on these CPUs,
and there is no need for "slow to read and program"
external timers (HPET/PIT) or the timer broadcast logic
and related code in C-state entry and exit.
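Sketched (the placement of the detection code is an assumption):

	/* CPUID leaf 6, EAX bit 2: Always Running APIC Timer */
	if (c->cpuid_level >= 6 && (cpuid_eax(6) & (1 << 2)))
		set_cpu_cap(c, X86_FEATURE_ARAT);

/* consumer side (sketch): the broadcast workaround can be skipped */
static void lapic_timer_check_state(int state, struct acpi_processor *pr,
				    struct acpi_processor_cx *cx)
{
	if (boot_cpu_has(X86_FEATURE_ARAT))
		return;
	/* ... otherwise arrange for timer broadcast in deep C-states ... */
}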
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The recent ACPICA patch
(ACPICA: FADT: Favor 32-bit register addresses for compatibility)
makes the machine use the right FADT HW addresses,
and C-states now work fine.
http://bugzilla.kernel.org/show_bug.cgi?id=8246
Signed-off-by: Thomas Renninger <trenn@suse.de>
Tested-by: Mark Doughty <me@markdoughty.co.uk>
Signed-off-by: Len Brown <len.brown@intel.com>
Rename acpi_get_register and acpi_set_register to clarify the
purpose of these functions. New names are acpi_read_bit_register
and acpi_write_bit_register.
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Removed locking for reads from the ACPI bit registers in PM1
Status, Enable, Control, and PM2 Control. The lock is not required
when reading the single-bit registers. The acpi_get_register_unlocked
function is no longer needed and has been removed. This will
improve performance for reads on these registers. ACPICA BZ 760.
http://www.acpica.org/bugzilla/show_bug.cgi?id=760
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
We found the Cx state times abnormal on some of our machines which have 16
LCPUs: C0 takes too much time while the system is really idle when the kernel
enables tickless and highres. powertop output is below:
PowerTOP version 1.9 (C) 2007 Intel Corporation
Cn Avg residency P-states (frequencies)
C0 (cpu running) (40.5%) 2.53 Ghz 0.0%
C1 0.0ms ( 0.0%) 2.53 Ghz 0.0%
C2 128.8ms (59.5%) 2.40 Ghz 0.0%
1.60 Ghz 100.0%
Wakeups-from-idle per second : 4.7 interval: 20.0s
no ACPI power usage estimate available
Top causes for wakeups:
41.4% ( 24.9) <interrupt> : extra timer interrupt
20.2% ( 12.2) <kernel core> : usb_hcd_poll_rh_status (rh_timer_func)
After tracking this issue in detail, Yakui and I found it is due to the
24-bit PM timer overflowing when some cpus sleep for more than 4 seconds.
With a tickless kernel, the CPU wants to sleep as much as possible when the
system is idle. But the Cx sleep times are recorded by the pmtimer, whose
length is determined by the BIOS. The current Cx time is obtained in the
following function from drivers/acpi/processor_idle.c:
static inline u32 ticks_elapsed(u32 t1, u32 t2)
{
	if (t2 >= t1)
		return (t2 - t1);
	else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER))
		return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
	else
		return ((0xFFFFFFFF - t1) + t2);
}
If the pmtimer is 24 bits and it takes 5 seconds from t1 to t2, the above
function records only about 1 second's worth of ticks. So the Cx time will be
reduced by about 4 seconds, and this is why we see the above powertop output.
To resolve this problem, Yakui and I use ktime_get() to record the Cx
state times instead of the PM timer, as in the following patch. The patch was
tested in i386/x86_64 modes on several platforms.
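The shape of the fix, sketched (simplified; the real patch touches the
acpi_idle_enter_* paths, which are not reproduced here):

	ktime_t kt1, kt2;
	s64 idle_time_us;

	kt1 = ktime_get();
	acpi_idle_do_entry(cx);			/* enter the C-state */
	kt2 = ktime_get();

	idle_time_us = ktime_to_us(ktime_sub(kt2, kt1));
	cx->time += idle_time_us;		/* no 24-bit wrap-around */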
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Tested-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Yakui.zhao <yakui.zhao@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
CPU_IDLE=y has been default for ACPI=y since Nov-2007,
and has shipped in many distributions since then.
Here we delete the CPU_IDLE=n ACPI idle code, since
nobody should be using it, and we don't want to
maintain two versions.
Signed-off-by: Len Brown <len.brown@intel.com>
It is true that BM_RLD needs to be set to enable
bus master activity to wake an older chipset (eg PIIX4) from C3.
This is contrary to the erroneous wording of the ACPI 2.0 and 3.0
specifications, which suggests that BM_RLD is an indicator
rather than a control bit.
ACPI 1.0's correct wording should be restored in ACPI 4.0:
http://www.acpica.org/bugzilla/show_bug.cgi?id=689
But the kernel should not have to clear BM_RLD
when entering a non C3-type state just to set
it again when entering a C3-type C-state.
We should be able to set BM_RLD at boot time
and leave it alone -- removing the overhead of
accessing this IO register from the idle entry path.
Signed-off-by: Len Brown <len.brown@intel.com>
PM1a_STS and PM1b_STS are twins that get OR'd together
on reads, and all writes are repeated to both.
The fields in PM1x_STS are single bits only,
there are no multi-bit fields.
So it is not necessary to lock PM1x_STS reads against
writes because it is impossible to read an intermediate
value of a single bit. It will either be 0 or 1,
even if a write is in progress during the read.
Reads are asynchronous to writes no matter if a lock
is used or not.
Signed-off-by: Len Brown <len.brown@intel.com>
While looking at reducing the amount of architecture namespace pollution
in the generic kernel, I found that asm/irq.h is included in the vast
majority of compilations on ARM (around 650 files.)
Since asm/irq.h includes a sub-architecture include file on ARM, this
causes a negative impact on the ccache's ability to re-use the build
results from other sub-architectures, so we have a desire to reduce the
dependencies on asm/irq.h.
It turns out that a major cause of this is the needless include of
linux/hardirq.h into asm-generic/local.h. The patch below removes this
include, resulting in some 250 to 300 files (around half) of the kernel
then omitting asm/irq.h.
My test builds still succeed, provided two ARM files are fixed
(arch/arm/kernel/traps.c and arch/arm/mm/fault.c) - so there may be
negative impacts for this on other architectures.
Note that x86 does not include asm/irq.h nor linux/hardirq.h in its
asm/local.h, so this patch can be viewed as bringing the generic version
into line with the x86 version.
[kosaki.motohiro@jp.fujitsu.com: add #include <linux/irqflags.h> to acpi/processor_idle.c]
[adobriyan@gmail.com: fix sparc64]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Impact: reward non-stop TSCs with good TSC-based clocksources, etc.
Add support for CPUID_0x80000007_Bit8 on Intel CPUs as well. This bit means
that the TSC is invariant with C/P/T states and always runs at constant
frequency.
With Intel CPUs, we have 3 classes
* CPUs where TSC runs at constant rate and does not stop in C-states
* CPUs where TSC runs at constant rate, but will stop in deep C-states
* CPUs where TSC rate will vary based on P/T-states and TSC will stop in deep
C-states.
To cover these 3, one feature bit (CONSTANT_TSC) is not enough. So, add a
second bit (NONSTOP_TSC). CONSTANT_TSC indicates that the TSC runs at
constant frequency irrespective of P/T-states, and NONSTOP_TSC indicates
that TSC does not stop in deep C-states.
CPUID_0x80000007_Bit8 indicates that both these feature bits can be set.
We still have CONSTANT_TSC _set_ and NONSTOP_TSC _not_set_ on some older Intel
CPUs, based on model checks. We can use TSC on such CPUs for time, as long as
those CPUs do not support/enter deep C-states.
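Sketched (the exact hook in the Intel CPU setup code is an assumption):

	/* CPUID.80000007H:EDX[8] == invariant TSC */
	if (c->extended_cpuid_level >= 0x80000007 &&
	    (cpuid_edx(0x80000007) & (1 << 8))) {
		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
	}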
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Move all the component definitions for drivers to a single shared place,
include/acpi/acpi_drivers.h.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Reflect the actual state entered in dev->last_state, when the actual state
entered is different from the intended one.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
pm_idle_save resp. pm_idle_old can be NULL when the restore code in
acpi_processor_cst_has_changed() resp. cpuidle_uninstall_idle_handler()
is called. This can set pm_idle unconditionally to NULL, which causes the
kernel to panic when calling pm_idle in the x86 idle code. This was
covered by an extra check for !pm_idle in the x86 idle code, which was
removed during the x86 idle code refactoring.
Instead of restoring the pm_idle check in the x86 code, prevent the
acpi/cpuidle code from setting pm_idle to NULL.
Reported by: Dhaval Giani http://lkml.org/lkml/2008/7/2/309
Based on a debug patch from Ingo Molnar
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The acpi idle code calls local_irq_save and then uses mwait to go into
idle. The tracer gets re-enabled at local_irq_save but does not detect that
the idle state allows for wake-ups.
This patch adds code to disable the tracing when acpi puts the CPU into idle.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
"idle=nomwait" disables the use of the MWAIT
instruction from both C1 (C1_FFH) and deeper (C2C3_FFH)
C-states.
When MWAIT is unavailable, the BIOS and OS generally
negotiate to use the HALT instruction for C1,
and use IO accesses for deeper C-states.
This option is useful for power and performance
comparisons, and also to work around BIOS bugs
where broken MWAIT support is advertised.
http://bugzilla.kernel.org/show_bug.cgi?id=10807
http://bugzilla.kernel.org/show_bug.cgi?id=10914
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
"idle=halt" limits the idle loop to using
the halt instruction. No MWAIT, no IO accesses,
no C-states deeper than C1.
If something is broken in the idle code,
"idle=halt" is a less severe workaround
than "idle=poll" which disables all power savings.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Change processors from an array sized by NR_CPUS to a per_cpu variable.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
It's never used and the comments refer to nonatomic and retry
interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
There is a cpuidle and acpi driver interaction bug in the way
cpuidle_register_driver() is called. Due to this bug, there will be an oops on
AC<->DC transitions on some systems which support C-states on DC but not on AC.
The current code does:
ON BOOT:
Look at CST and other C-state info to see whether more than C1 is
supported. If it is, then acpi processor_idle does a
cpuidle_register_driver() call, which internally enables the device.
ON CST change notification (AC<->DC) and on suspend-resume:
The acpi driver temporarily disables the device, updates the device with
any new C-states, and re-enables the device.
The problem is that on boot there are no C2, C3 states supported and we skip
the register. Later, on AC<->DC, we may get a CST notification and we try
to re-evaluate CST and enable the device, without actually registering it.
This causes breakage as we try to create a /sys fs sub-directory without the
parent directory, which is created at register time.
Thanks to Sanjeev for reporting the problem here.
http://bugzilla.kernel.org/show_bug.cgi?id=10394
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>