Currently clockevents_notify() is called with interrupts enabled in
some places and with interrupts disabled in others.
This results in a deadlock in the following scenario:
cpu A holds clockevents_lock in clockevents_notify() with irqs enabled
cpu B waits for clockevents_lock in clockevents_notify() with irqs disabled
cpu C doing set_mtrr(), which will try to rendezvous all the cpus.
This results in C and A coming to the rendezvous point and waiting
for B. B is stuck forever waiting for the spinlock and thus never
reaches the rendezvous point.
Fix the clockevents code so that clockevents_lock is taken with
interrupts disabled and thus avoid the above deadlock.
Also call lapic_timer_propagate_broadcast() on the destination cpu so
that we avoid calling smp_call_function() in the clockevents notifier
chain.
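As a minimal sketch of the intended locking pattern (simplified; the reason-specific
handling is elided, so this is illustrative rather than the literal patch):

void clockevents_notify(unsigned long reason, void *arg)
{
	unsigned long flags;

	/*
	 * Take clockevents_lock with interrupts disabled on every path,
	 * so no CPU can spin on the lock with irqs off while another
	 * holder sits at an MTRR rendezvous point (the scenario above).
	 */
	spin_lock_irqsave(&clockevents_lock, flags);
	clockevents_do_notify(reason, arg);
	/* ... reason-specific handling ... */
	spin_unlock_irqrestore(&clockevents_lock, flags);
}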
This issue left us wondering if we need to change the MTRR rendezvous
logic to use stop machine logic (instead of smp_call_function) or add
a check in the spinlock debug code to see if there are other spinlocks
which get taken under both interrupts enabled/disabled conditions.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: "Pallipadi Venkatesh" <venkatesh.pallipadi@intel.com>
Cc: "Brown Len" <len.brown@intel.com>
LKML-Reference: <1250544899.2709.210.camel@sbs-t61.sc.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cosmetic only. The lapic_timer workaround routines
are specific to the lapic_timer, and are not acpi-generic.
old:
acpi_timer_check_state()
acpi_propagate_timer_broadcast()
acpi_state_timer_broadcast()
new:
lapic_timer_check_state()
lapic_timer_propagate_broadcast()
lapic_timer_state_broadcast()
also, simplify the code in acpi_processor_power_verify()
so that lapic_timer_check_state() is simply called
from one place for all valid C-states, including C1.
Signed-off-by: Len Brown <len.brown@intel.com>
ARB_DISABLE is a NOP on all of the recent Intel platforms.
For such platforms, reduce contention on c3_lock
by skipping the fake ARB_DISABLE.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
When AMD C1E is enabled, the local APIC timer will stop even in C1. To avoid
a suspend/resume hang, this patch removes C1 and replaces it with cpu_relax() in
the suspend/resume path. This has no impact on the runtime path.
http://bugzilla.kernel.org/show_bug.cgi?id=13233
[ impact: avoid suspend/resume hang in AMD CPU with C1E enabled ]
Tested-by: Dmitry Lyzhyn <thisistempbox@yahoo.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
When AMD C1E is enabled, the local APIC timer will stop even in C1.
This patch uses a broadcast IPI to replace the local APIC timer in C1.
http://bugzilla.kernel.org/show_bug.cgi?id=13233
[ impact: avoid boot hang in AMD CPU with C1E enabled ]
Tested-by: Dmitry Lyzhyn <thisistempbox@yahoo.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Processor idle power states C2 and C3 stop the TSC on many machines.
Linux recognizes this situation and marks the TSC as unstable:
Marking TSC unstable due to TSC halts in idle
But if those same machines are booted with "processor.max_cstate=1",
then there is no need to validate C2 and C3, and no need to
disable the TSC, which can be reliably used as a clocksource.
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
A previous 2.6.30 patch, a71e4917dc,
(ACPI: idle: mark_tsc_unstable() at init-time, not run-time)
erroneously disabled the TSC on systems that did not actually
have valid deep C-states.
Move the check to after the deep C-states are validated,
via a new helper, tsc_check_state(), which replaces tsc_halts_in_c().
Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Frans Pop <elendil@planet.nl>
In 2.6.29,
31878dd86b
"ACPI: remove BM_RLD access from idle entry path"
moved BM_RLD initialization to init-time from run time.
But we discovered that some BIOS do not restore BM_RLD
after suspend, causing device errors on C3 and C4
after resume. So now the kernel restores BM_RLD.
http://bugzilla.kernel.org/show_bug.cgi?id=13032
Signed-off-by: Len Brown <len.brown@intel.com>
As processor.max_cstate is an init-time-only modparam,
sanity checking it at init-time is sufficient.
http://bugzilla.kernel.org/show_bug.cgi?id=13142
Signed-off-by: Len Brown <len.brown@intel.com>
Linux tells ICH4 users that they can (manually) invoke
"hpet=force" to enable the undocumented ICH-4M HPET.
The HPET becomes available for both clocksource and clockevents.
But as of ff69f2bba6
(acpi: fix of pmtimer overflow that make Cx states time incorrect)
the HPET may be used via clocksource for idle accounting, and
hpet=force on an ICH4 box hangs boot.
It turns out that touching the MMIO HPET within
the ARB_DIS part of C3 will hang the hardware.
The fix is to simply move the timer access outside
the ARB_DIS region. This is a no-op on modern hardware
because ARB_DIS is no longer used.
http://bugzilla.kernel.org/show_bug.cgi?id=13087
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Linux-2.6.29 deleted the legacy ACPI idle handler, leaving
the CPU_IDLE handler, which does not track bus master activity.
So delete the unused bm_activity field -- it is confusing to
print an always zero value.
This patch could break programs that parse
/proc/acpi/processor/*/power, since it deletes this
line from that file:
bus master activity: 00000000
http://bugzilla.kernel.org/show_bug.cgi?id=13145
is not fixed by this patch, but provoked this patch.
Signed-off-by: Len Brown <len.brown@intel.com>
The c2 and c3 idle handlers check tsc_halts_in_c()
every time they return from idle. Um, when? :-)
Move this check to init-time to remove the unnecessary
run-time overhead, and also to have the check complete before
the first entry into the idle handler.
ff69f2bba6
(acpi: fix of pmtimer overflow that make Cx states time incorrect)
replaced the hard-coded use of the PM-timer inside idle
with ktime_get_real(), which possibly uses the TSC --
so it is now especially prudent to detect a broken TSC
before entering idle.
http://bugzilla.kernel.org/show_bug.cgi?id=13087
Signed-off-by: Len Brown <len.brown@intel.com>
Add support for the Always Running APIC Timer, CPUID_0x6_EAX_Bit2.
This bit means the APIC timer continues to run even when the CPU is
in deep C-states.
The advantage is that we can always use the LAPIC timer on these CPUs,
and there is no need for "slow to read and program"
external timers (HPET/PIT), the timer broadcast logic,
and related code in C-state entry and exit.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The recent ACPICA patch
(ACPICA: FADT: Favor 32-bit register addresses for compatibility)
makes the machine use the right FADT HW addresses,
and C-states now work fine.
http://bugzilla.kernel.org/show_bug.cgi?id=8246
Signed-off-by: Thomas Renninger <trenn@suse.de>
Tested-by: Mark Doughty <me@markdoughty.co.uk>
Signed-off-by: Len Brown <len.brown@intel.com>
Rename acpi_get_register and acpi_set_register to clarify the
purpose of these functions. New names are acpi_read_bit_register
and acpi_write_bit_register.
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Removed locking for reads from the ACPI bit registers in PM1
Status, Enable, Control, and PM2 Control. The lock is not required
when reading the single-bit registers. The acpi_get_register_unlocked
function is no longer needed and has been removed. This will
improve performance for reads on these registers. ACPICA BZ 760.
http://www.acpica.org/bugzilla/show_bug.cgi?id=760
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lin Ming <ming.m.lin@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
We found the Cx state times abnormal on some of our machines which have 16
LCPUs: C0 takes too much time while the system is really idle, with the kernel's
tickless and highres options enabled. powertop output is below:
PowerTOP version 1.9 (C) 2007 Intel Corporation
Cn                 Avg residency          P-states (frequencies)
C0 (cpu running)          (40.5%)         2.53 Ghz     0.0%
C1                  0.0ms ( 0.0%)         2.53 Ghz     0.0%
C2                128.8ms (59.5%)         2.40 Ghz     0.0%
                                          1.60 Ghz   100.0%
Wakeups-from-idle per second :  4.7       interval: 20.0s
no ACPI power usage estimate available
Top causes for wakeups:
41.4% ( 24.9) <interrupt> : extra timer interrupt
20.2% ( 12.2) <kernel core> : usb_hcd_poll_rh_status
(rh_timer_func)
After tracking this issue in detail, Yakui and I found it is due to the 24-bit
PM timer overflowing when some CPUs sleep for more than 4 seconds. With a
tickless kernel, the CPU wants to sleep as much as possible when the system is
idle. But the Cx sleep times are recorded by the PM timer, whose width is
determined by the BIOS. The current Cx time was obtained in the following
function from drivers/acpi/processor_idle.c:
static inline u32 ticks_elapsed(u32 t1, u32 t2)
{
	if (t2 >= t1)
		return (t2 - t1);
	else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER))
		return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
	else
		return ((0xFFFFFFFF - t1) + t2);
}
If the PM timer is 24 bits and it takes 5 seconds to get from t1 to t2, the above
function records only about 1 second's worth of ticks. So the Cx time will be
reduced by about 4 seconds, and this is why we see the above powertop output.
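The following standalone program (illustrative only, not part of the patch) runs a
5-second interval through the 24-bit branch of ticks_elapsed(); one full wrap of the
counter (about 4.69 seconds at the 3.579545 MHz PM timer rate) is silently lost:

#include <stdio.h>
#include <stdint.h>

#define PMTMR_HZ 3579545u			/* ACPI PM timer frequency */

/* the 24-bit case of ticks_elapsed() from processor_idle.c */
static uint32_t ticks_elapsed_24(uint32_t t1, uint32_t t2)
{
	if (t2 >= t1)
		return t2 - t1;
	return ((0x00FFFFFF - t1) + t2) & 0x00FFFFFF;
}

int main(void)
{
	uint32_t t1 = 0x00ABCDEF;			/* arbitrary start count */
	uint32_t real_ticks = 5 * PMTMR_HZ;		/* the CPU really slept 5 s */
	uint32_t t2 = (t1 + real_ticks) & 0x00FFFFFF;	/* 24-bit counter wraps */

	printf("slept %.2f s, accounted %.2f s\n",
	       (double)real_ticks / PMTMR_HZ,
	       (double)ticks_elapsed_24(t1, t2) / PMTMR_HZ);
	return 0;
}

This prints roughly "slept 5.00 s, accounted 0.31 s": whatever exceeds one 24-bit
wrap never shows up in the Cx accounting.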
To resolve this problem, Yakui and I use ktime_get() to record the Cx
state times instead of the PM timer, as in the following patch. The patch was
tested in i386/x86_64 modes on several platforms.
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Tested-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Yakui.zhao <yakui.zhao@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
CPU_IDLE=y has been default for ACPI=y since Nov-2007,
and has shipped in many distributions since then.
Here we delete the CPU_IDLE=n ACPI idle code, since
nobody should be using it, and we don't want to
maintain two versions.
Signed-off-by: Len Brown <len.brown@intel.com>
It is true that BM_RLD needs to be set to enable
bus master activity to wake an older chipset (eg PIIX4) from C3.
This is contrary to the erroneous wording in the ACPI 2.0 and 3.0
specifications, which suggests that BM_RLD is an indicator
rather than a control bit.
ACPI 1.0's correct wording should be restored in ACPI 4.0:
http://www.acpica.org/bugzilla/show_bug.cgi?id=689
But the kernel should not have to clear BM_RLD
when entering a non C3-type state just to set
it again when entering a C3-type C-state.
We should be able to set BM_RLD at boot time
and leave it alone -- removing the overhead of
accessing this IO register from the idle entry path.
Signed-off-by: Len Brown <len.brown@intel.com>
PM1a_STS and PM1b_STS are twins that get OR'd together
on reads, and all writes are repeated to both.
The fields in PM1x_STS are single bits only,
there are no multi-bit fields.
So it is not necessary to lock PM1x_STS reads against
writes because it is impossible to read an intermediate
value of a single bit. It will either be 0 or 1,
even if a write is in progress during the read.
Reads are asynchronous to writes whether or not a lock
is used.
Signed-off-by: Len Brown <len.brown@intel.com>
While looking at reducing the amount of architecture namespace pollution
in the generic kernel, I found that asm/irq.h is included in the vast
majority of compilations on ARM (around 650 files.)
Since asm/irq.h includes a sub-architecture include file on ARM, this
causes a negative impact on the ccache's ability to re-use the build
results from other sub-architectures, so we have a desire to reduce the
dependencies on asm/irq.h.
It turns out that a major cause of this is the needless include of
linux/hardirq.h into asm-generic/local.h. The patch below removes this
include, resulting in some 250 to 300 files (around half) of the kernel
then omitting asm/irq.h.
My test builds still succeed, provided two ARM files are fixed
(arch/arm/kernel/traps.c and arch/arm/mm/fault.c) - so there may be
negative impacts for this on other architectures.
Note that x86 does not include asm/irq.h nor linux/hardirq.h in its
asm/local.h, so this patch can be viewed as bringing the generic version
into line with the x86 version.
[kosaki.motohiro@jp.fujitsu.com: add #include <linux/irqflags.h> to acpi/processor_idle.c]
[adobriyan@gmail.com: fix sparc64]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Impact: reward non-stop TSCs with good TSC-based clocksources, etc.
Add support for CPUID_0x80000007_Bit8 on Intel CPUs as well. This bit means
that the TSC is invariant with C/P/T states and always runs at constant
frequency.
With Intel CPUs, we have 3 classes:
* CPUs where TSC runs at constant rate and does not stop in C-states
* CPUs where TSC runs at constant rate, but will stop in deep C-states
* CPUs where TSC rate will vary based on P/T-states and TSC will stop in deep
C-states.
To cover these 3, one feature bit (CONSTANT_TSC) is not enough. So, add a
second bit (NONSTOP_TSC). CONSTANT_TSC indicates that the TSC runs at
constant frequency irrespective of P/T-states, and NONSTOP_TSC indicates
that TSC does not stop in deep C-states.
CPUID_0x80000007_Bit8 indicates that both these feature bits can be set.
We still have CONSTANT_TSC _set_ and NONSTOP_TSC _not_set_ on some older Intel
CPUs, based on model checks. We can use TSC on such CPUs for time, as long as
those CPUs do not support/enter deep C-states.
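A small userspace probe for the bit in question (illustrative; per the Intel SDM the
invariant-TSC indication is bit 8 of EDX for CPUID leaf 0x80000007):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid() returns 0 if the extended leaf is unsupported */
	if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0x80000007 not available");
		return 1;
	}

	/*
	 * EDX bit 8: TSC is invariant across P-, C- and T-states, i.e.
	 * both CONSTANT_TSC and NONSTOP_TSC behavior can be assumed.
	 */
	printf("Invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
	return 0;
}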
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Move all the component definitions for drivers to a single shared place,
include/acpi/acpi_drivers.h.
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Reflect the actual state entered in dev->last_state when the actual state entered
is different from the intended one.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
pm_idle_save resp. pm_idle_old can be NULL when the restore code in
acpi_processor_cst_has_changed() resp. cpuidle_uninstall_idle_handler()
is called. This can set pm_idle unconditionally to NULL, which causes the
kernel to panic when calling pm_idle in the x86 idle code. This was
covered by an extra check for !pm_idle in the x86 idle code, which was
removed during the x86 idle code refactoring.
Instead of restoring the pm_idle check in the x86 code, prevent the
acpi/cpuidle code from setting pm_idle to NULL.
Reported by: Dhaval Giani http://lkml.org/lkml/2008/7/2/309
Based on a debug patch from Ingo Molnar
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The acpi idle code calls local_irq_save and then uses mwait to go into
idle. The tracer gets reenabled at local_irq_save but does not detect that
the idle allows for wake ups.
This patch adds code to disable the tracing when acpi puts the CPU to idle.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
"idle=nomwait" disables the use of the MWAIT
instruction from both C1 (C1_FFH) and deeper (C2C3_FFH)
C-states.
When MWAIT is unavailable, the BIOS and OS generally
negotiate to use the HALT instruction for C1,
and use IO accesses for deeper C-states.
This option is useful for power and performance
comparisons, and also to work around BIOS bugs
where broken MWAIT support is advertised.
http://bugzilla.kernel.org/show_bug.cgi?id=10807
http://bugzilla.kernel.org/show_bug.cgi?id=10914
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
"idle=halt" limits the idle loop to using
the halt instruction. No MWAIT, no IO accesses,
no C-states deeper than C1.
If something is broken in the idle code,
"idle=halt" is a less severe workaround
than "idle=poll" which disables all power savings.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Change processors from an array sized by NR_CPUS to a per_cpu variable.
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
It's never used, and the comments refer to nonatomic and retry
interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
There is a cpuidle and acpi driver interaction bug in the way cpuidle_register_driver()
is called. Due to this bug, there will be an oops on
AC<->DC on some systems which support C-states in DC but not in AC.
The current code does
ON BOOT:
Look at CST and other C-state info to see whether more than C1 is
supported. If it is, then acpi processor_idle does a
cpuidle_register_driver() call, which internally enables the device.
ON CST change notification (AC<->DC) and on suspend-resume:
acpi driver temporarily disables device, updates the device with
any new C-states, and reenables the device.
The problem is that on boot, there are no C2, C3 states supported and we skip
the registration. Later, on AC<->DC, we may get a CST notification and we try
to reevaluate CST and enable the device, without actually registering it.
This causes breakage as we try to create the /sys fs subdirectory without the
parent directory, which is created at register time.
Thanks to Sanjeev for reporting the problem here.
http://bugzilla.kernel.org/show_bug.cgi?id=10394
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (179 commits)
ACPI: Fix acpi_processor_idle and idle= boot parameters interaction
acpi: fix section mismatch warning in pnpacpi
intel_menlo: fix build warning
ACPI: Cleanup: Remove unneeded, multiple local dummy variables
ACPI: video - fix permissions on some proc entries
ACPI: video - properly handle errors when registering proc elements
ACPI: video - do not store invalid entries in attached_array list
ACPI: re-name acpi_pm_ops to acpi_suspend_ops
ACER_WMI/ASUS_LAPTOP: fix build bug
thinkpad_acpi: fix possible NULL pointer dereference if kstrdup failed
ACPI: check a return value correctly in acpi_power_get_context()
#if 0 acpi/bay.c:eject_removable_drive()
eeepc-laptop: add hwmon fan control
eeepc-laptop: add backlight
eeepc-laptop: add base driver
ACPI: thinkpad-acpi: bump up version to 0.20
ACPI: thinkpad-acpi: fix selects in Kconfig
ACPI: thinkpad-acpi: use a private workqueue
ACPI: thinkpad-acpi: fluff really minor fix
ACPI: thinkpad-acpi: use uppercase for "LED" on user documentation
...
Fixed conflicts in drivers/acpi/video.c and drivers/misc/intel_menlow.c
manually.
acpi_processor_idle and "idle=" boot parameter interaction is broken.
The problem is that, at boot time acpi driver is checking for "idle=" boot
option and not registering the acpi idle handler. But, when there is a CST
changed callback (typically when switching AC <-> battery or suspend-resume)
there are no checks for boot_option_idle_override and acpi idle handler tries
to get installed with nasty side effects.
With CPU_IDLE configured this issue causes results in a nasty oops on CST
change callback and without CPU_IDLE there is no oops, but boot option
of "idle=" gets ignored and acpi idle handler gets installed.
Change the behavior to not do anything in acpi idle handler when there is a
"idle=" boot option.
Note that the problem is only there when "idle=" boot option is used.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Use proc_create()/proc_create_data() to make sure that ->proc_fops and ->data
are set up before gluing the PDE to the main tree.
Add correct ->owner to proc_fops to fix reading/module unloading race.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
OK, so 25-mm1 gave a lockdep error which made me look into this.
The first thing that I noticed was the horrible mess; the second thing I
saw was hacks like: 71e93d1561
The problem is that arch idle routines are somewhat inconsistent with
their IRQ state handling, and instead of fixing _that_, we paper over
the problem.
So the thing I've tried to do is set a standard for idle routines and
fix them all up to adhere to that. So the rules are:
idle routines are entered with IRQs disabled
idle routines will exit with IRQs enabled
Nearly all already did this in one form or another.
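A sketch of an idle routine that follows both rules (simplified x86 flavor, not the
literal merged code):

static void my_idle(void)
{
	/* rule 1: we are entered with IRQs disabled */
	if (!need_resched())
		safe_halt();		/* sti; hlt - enables IRQs and halts atomically */
	else
		local_irq_enable();
	/* rule 2: we return with IRQs enabled on every path */
}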
Merge the 32 and 64 bit bits so they no longer have different bugs.
As for the actual lockdep warning: __sti_mwait() did a plainly un-annotated
irq-enable.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Patch to fix the huge number of wakeups reported due to recent changes in
processor_idle.c. The problem was that the entry_method determination was
broken by one of the recent commits (bc71bec91f), causing
C1 entry not to go to halt.
http://lkml.org/lkml/2008/3/22/124
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
commit 9b12e18cdc
'ACPI: cpuidle: Support C1 idle time accounting'
was implicated in a 100% C0 idle regression.
http://bugzilla.kernel.org/show_bug.cgi?id=10076
It pointed out a potential problem where the menu governor
may get confused by the C-state residency time from poll
idle or C1 idle, where this timing info is not accurate.
This inaccuracy is due to interrupts being handled
before we account for C-state exit.
Do not mark TIME_VALID for the C0 poll state.
Mark C1 time as valid only with the MWAIT (CSTATE_FFH) entry method.
This makes governors use the timing information only when it is correct and
eliminates any wrong policy decisions that may result from invalid timing
information.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
This original patch
http://ussg.iu.edu/hypermail/linux/kernel/0712.2/1451.html
was intending to add acpi_unlazy_tlb() to acpi_idle_enter_bm(),
which is used for C3 entry.
But it was merged incorrectly as commit
bde6f5f59c
'x86: voluntary leave_mm before entering ACPI C3'
so the call was instead added to acpi_idle_enter_simple()
(which is the C2 entry routine), probably due to identical
context in that function.
Move the call back to acpi_idle_enter_bm().
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
TSC is used even on machines where CONFIG_X86_TSC is not set (X86_TSC
means _require_ TSC), but it is not properly disabled when it is
unusable, because the ACPI code understood the config switch as "may use
TSC".
This actually fixes suspend problems on my x60.
Signed-off-by: Pavel Machek <pavel@suse.cz>
Signed-off-by: Len Brown <len.brown@intel.com>
Add a new sysfs entry under cpuidle states: desc - can be used by the driver to
communicate to userspace any specific information about the state.
This helps in identifying the exact hardware C-states behind the ACPI C-state
definition.
The idea is to export this through powertop, which will help map the C-state
reported by powertop to the actual hardware C-state.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
An earlier patch (bc71bec91f) broke
suspend/resume on many laptops. The problem was reported by
Carlos R. Mafra and Calvin Walton, who bisected the issue to the above patch.
The problem was that the C2 and C3 code were calling acpi_idle_enter_c1
directly, with C2 or C3 as the state parameter, while suspend/resume was in
progress. The patch bc71bec started making use of that state information,
assuming that it would always refer to the C1 state. This caused the
problem with suspend/resume, as we ended up using the C2/C3 state indirectly.
Fix this by adding an acpi_idle_suspend check in enter_c1.
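Roughly, the guard looks like this (simplified signatures; the exact form in the real
patch may differ):

static int acpi_idle_enter_c1(struct cpuidle_device *dev,
			      struct cpuidle_state *state)
{
	/*
	 * During suspend/resume the C2/C3 handlers funnel into this
	 * function with their own 'state'; don't trust that state's
	 * entry_method here, just do a plain safe halt.
	 */
	if (unlikely(acpi_idle_suspend)) {
		acpi_safe_halt();
		return 0;
	}

	/* ... normal C1 entry using the state's entry_method ... */
	return 0;
}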
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>