* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/selinux-2.6:
SELinux: more GFP_NOFS fixups to prevent selinux from re-entering the fs code
Fix a broken build caused by a patch-ordering dependency: a future patch
requires lines that break the build today, so disable those lines for now.
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Acked-by: Mauro Carvalho Chehab <mchehab@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
More cases where SELinux must not re-enter the fs code. Called from the
d_instantiate security hook.
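A minimal sketch of the pattern (illustrative only, with a made-up hook and
allocation site - the real hunks just switch the gfp mask):

  static int example_inode_alloc_security(struct inode *inode)
  {
          void *isec;

          /* GFP_KERNEL may recurse into fs writeback under memory
           * pressure; GFP_NOFS forbids that while we are in fs code. */
          isec = kzalloc(64, GFP_NOFS);   /* was: GFP_KERNEL */
          if (!isec)
                  return -ENOMEM;
          inode->i_security = isec;
          return 0;
  }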
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: James Morris <jmorris@namei.org>
flush_cache_vmap / flush_cache_vunmap were calling flush_cache_all which,
having been deprecated, had turned into a no-op ...
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix a kernel oops due to a machine check occurring in init_chipset_siimage()
on PPC 44x platforms. These 32-bit CPUs have 36-bit physical addresses, and
the PCI I/O and memory spaces are mapped beyond 4 GB; arch/ppc/ code has a
fixup in ioremap() that creates an illusion of the PCI I/O and memory
resources being mapped below 4 GB, while arch/powerpc/ code got rid of this
fixup, with PPC 44x instead having CONFIG_RESOURCES_64BIT=y. This causes the
resources to be truncated to the 32-bit 'unsigned long' type in this driver,
so non-existent memory was being ioremap'ed and then accessed...
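The class of fix involved, sketched (simplified; resource_size_t is the type
that stays 64-bit-safe with CONFIG_RESOURCES_64BIT=y):

  static void __iomem *sil_map_bar5(struct pci_dev *dev)
  {
          /* was: unsigned long base = pci_resource_start(dev, 5);
           * which truncates a >4GB address to 32 bits on PPC 44x */
          resource_size_t base = pci_resource_start(dev, 5);

          return ioremap(base, pci_resource_len(dev, 5));
  }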
Thanks to Valentine Barshak for providing an initial patch and explanations.
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
The 'disable_cb' is really just a hint and as such, it's possible for more
work to get queued up while callbacks are disabled. Under stress with an
SMP guest, this printk triggers very frequently. There is no race here; this
is how things are designed to work, so let's just remove the printk.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit 9e6db60825, which was
merged without the API it needed, causing build breakage.
Reported-by: Bryan Wu <cooloney@kernel.org>
Acked-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86:
x86: fix 64-bit asm NOPS for CONFIG_GENERIC_CPU
x86: fix call to set_cyc2ns_scale() from time_cpufreq_notifier()
revert "x86: tsc prevent time going backwards"
The 'disable_cb' callback is designed as an optimization to tell the host
we don't need callbacks now. As the hint is not reliable, the debug check is
overzealous: a callback can still fire, even on two CPUs at the same time.
Document this.
Even if it were reliable, the virtio_net driver doesn't disable callbacks on
transmit, so the START_USE/END_USE debugging reentrance protection can easily
be tripped even on UP.
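The intended usage pattern, sketched against the vq_ops interface of this era
(illustrative, not the driver's exact code; consume() is assumed):

  static void drain_vq(struct virtqueue *vq)
  {
          unsigned int len;
          void *buf;

  again:
          vq->vq_ops->disable_cb(vq);     /* a hint: callbacks may still fire */
          while ((buf = vq->vq_ops->get_buf(vq, &len)) != NULL)
                  consume(buf, len);
          if (!vq->vq_ops->enable_cb(vq))
                  goto again;             /* more work raced in meanwhile */
  }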
Thanks to Balaji Rao for the bug report and testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
CC: Balaji Rao <balajirrao@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In time_cpufreq_notifier() the cpu id to act upon is held in freq->cpu. Use it
instead of smp_processor_id() in the call to set_cyc2ns_scale().
This makes the preempt_disable()/preempt_enable() pair unnecessary and lets
set_cyc2ns_scale() update the intended CPU's cyc2ns.
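Roughly (a sketch of the change, not the exact diff):

  static void sketch_time_cpufreq_notifier(struct cpufreq_freqs *freq)
  {
          /* was:
           *      preempt_disable();
           *      set_cyc2ns_scale(cpu_khz, smp_processor_id());
           *      preempt_enable();
           */
          set_cyc2ns_scale(cpu_khz, freq->cpu);
  }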
Related mail/thread: http://lkml.org/lkml/2007/12/7/130
Signed-off-by: Karsten Wiese <fzu@wemgehoertderstaat.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
revert:
| commit 47001d6033
| Author: Thomas Gleixner <tglx@linutronix.de>
| Date: Tue Apr 1 19:45:18 2008 +0200
|
| x86: tsc prevent time going backwards
It has been identified as causing a suspend regression - and since the
commit fixes a longstanding bug that already existed before 2.6.25 was
opened, the fix can wait some more until its effects are better understood.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6:
fix endian lossage in forcedeth
net/tokenring/olympic.c section fixes
net: marvell.c fix sparse shadowed variable warning
[VLAN]: Fix egress priority mappings leak.
[TG3]: Add PHY workaround for 5784
[NET]: srandom32 fixes for networking v2
[IPV6]: Fix refcounting for anycast dst entries.
[IPV6]: inet6_dev on loopback should be kept until namespace stop.
[IPV6]: Event type in addrconf_ifdown is mis-used.
[ICMP]: Ensure that ICMP relookup maintains status quo
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
[SPARC64]: Fix user accesses in regset code.
[SPARC64]: Fix FPU saving in 64-bit signal handling.
* 'pci_id_updates' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/v4l-dvb:
V4L/DVB (7497): pvrusb2: add new usb pid for 73xxx models
V4L/DVB (7496): pvrusb2: add new usb pid for 75xxx models
We handle a broken TSC these days, so there is no need to panic. We clear the
TSC bit when tsc_init decides it's unreliable (e.g. under lguest with a bad
host TSC), which led to a bogus panic.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Now that we're mapping registers in the DRM driver at load time, the
driver actually checks the PCI ID, so we need to make sure the macros
have all the right bits (and longer term use the DRM headers as the sole
copy of the PCI & register definitions).
This patch adds 945GME support to the DRM headers, fixing a regression
reported in http://bugzilla.kernel.org/show_bug.cgi?id=10395.
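The header change amounts to teaching the ID macros about the new device; a
sketch (0x27AE is the 945GME IGD PCI ID; the macro names here are assumed):

  #define PCI_CHIP_I945_GME       0x27AE

  #define IS_I945GME(dev) ((dev)->pci_device == PCI_CHIP_I945_GME)

  /* ...and the family macro must include it so probing succeeds: */
  #define IS_I9XX(dev)    (IS_I915G(dev) || IS_I915GM(dev) || \
                           IS_I945G(dev) || IS_I945GM(dev) || \
                           IS_I945GME(dev) || IS_I965G(dev))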
Tested-by: Alexander Oltu <alexander@all-2.com>
Signed-off-by: Jesse Barnes <jesse.barnes@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since 2.6.25-rc7, I've been seeing an occasional livelock on one x86_64
machine, copying kernel trees to tmpfs, paging out to swap.
Signature: 6000 pages under writeback but never getting written; most
tasks of interest trying to reclaim, but each get_swap_bio waiting for a
bio in mempool_alloc's io_schedule_timeout(5*HZ); every five seconds an
atomic page allocation failure report from kblockd failing to allocate a
sense_buffer in __scsi_get_command.
__scsi_get_command has a (one item) free_list to protect against this,
but rc1's commit de25deb180 ("[SCSI] use dynamically allocated sense
buffer") upset that slightly. When it
fails to allocate from the separate sense_slab, instead of giving up, it
must fall back to the command free_list, which is sure to have a
sense_buffer attached.
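A sketch of that fallback (simplified from __scsi_get_command; the names of
the pool fields and the free_list helper are approximate):

  struct scsi_cmnd *cmd;

  cmd = kmem_cache_alloc(pool->cmd_slab, gfp_mask);
  if (cmd) {
          cmd->sense_buffer = kmem_cache_alloc(pool->sense_slab, gfp_mask);
          if (!cmd->sense_buffer) {
                  /* don't give up: the free_list entry below always
                   * has a sense_buffer already attached */
                  kmem_cache_free(pool->cmd_slab, cmd);
                  cmd = NULL;
          }
  }
  if (!cmd)
          cmd = take_from_host_free_list(shost);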
Either my earlier -rc testing missed this, or there's some recent
contributory factor. One very significant factor is SLUB, which merges
slab caches when it can, and on 64-bit happens to merge both bio cache
and sense_slab cache into kmalloc's 128-byte cache: so that under this
swapping load, bios above are liable to gobble up all the slots needed
for scsi_cmnd sense_buffers below.
That's disturbing behaviour, and I tried a few things to fix it. Adding
a no-op constructor to the sense_slab inhibits SLUB from merging it, and
stops all the allocation failures I was seeing; but it's rather a hack,
and perhaps in different configurations we have other caches on the
swapout path which are ill-merged.
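For reference, that hack looked something like this (not applied; ctor
signature as of 2.6.25):

  /* a no-op constructor: its mere presence stops SLUB merging the cache */
  static void scsi_sense_nop_ctor(struct kmem_cache *cache, void *data)
  {
  }

  sense_slab = kmem_cache_create("scsi_sense_cache", SCSI_SENSE_BUFFERSIZE,
                                 0, 0, scsi_sense_nop_ctor);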
Another alternative is to revert the separate sense_slab, using
cache-line-aligned sense_buffer allocated beyond scsi_cmnd from the one
kmem_cache; but that might waste more memory, and is only a way of
diverting around the known problem.
While I don't like seeing the allocation failures, and hate the idea of
all those bios piled up above a scsi host working one by one, the system
does seem to emerge from it fairly soon with the livelock fix. So, lacking
better ideas, stick with that one clear fix for now.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.ziljstra@chello.nl>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Preserve all other bits when setting gpio.
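i.e. a read-modify-write instead of a blind write; sketched (register name
and accessors assumed):

  u32 val = cx_read(MO_GP0_IO);   /* current state of all bits */

  val &= ~mask;                   /* clear only the bits we own */
  val |= (state & mask);          /* set the requested levels */
  cx_write(MO_GP0_IO, val);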
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Steven Toth <stoth@hauppauge.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@infradead.org>
I noticed this while testing the latest code. I'm not sure if it is
required, but the normal (LSB) timeout value is set to zero, so the MSB
should be as well, for consistency.
If the chip revision is >= 8, set MSB of the 16-bit timeout value to zero
when disabling the watchdog in it8712f_wdt_disable().
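Sketched (register names as in the driver; the revision test is assumed):

  superio_outb(0, WDT_TIMEOUT);             /* LSB, already cleared today */
  if (revision >= 0x08)
          superio_outb(0, WDT_TIMEOUT + 1); /* now clear the MSB as well */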
Signed-off-by: Andrew Paprocki <andrew@ishiboo.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This reverts commit 7c0ea45be4 which
caused a regression with the backlight being set to off when a laptop
doesn't have a _BQC entry to query the actual backlight value. The code
then blindly falls back on a value of 0.
See
http://bugzilla.kernel.org/show_bug.cgi?id=10387
and
http://lkml.org/lkml/2008/4/2/366
for details.
Bisected-and-reported-by: Andrey Borzenkov <arvidjaar@mail.ru>
Cc: Zhao Yakui <yakui.zhao@intel.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
In 2.6.14 a patch was merged which switched the order of the ipmi device
naming from in-order-of-discovery to reverse-order-of-discovery.
So on systems with multiple BMC interfaces, the ipmi device names are being
created in reverse order relative to how they are discovered on the system
(e.g. on an IBM x3950 multinode server with N nodes, the device name for the
BMC in the first node is /dev/ipmi(N-1) and the device name for the BMC in
the last node is /dev/ipmi0, etc.).
The problem is caused by the list handling routines chosen in dmi_scan.c.
Using list_add() causes the multiple ipmi devices to be added to the device
list using a stack-paradigm and so the ipmi driver subsequently pulls them off
during initialization in LIFO order. This patch changes the
dmi_save_ipmi_device() list handling paradigm to a queue, thereby allowing the
ipmi driver to build the ipmi device names in the order in which they are
found on the system.
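The change itself is a one-liner in dmi_save_ipmi_device(), sketched:

  /* was: list_add(&dev->list, &dmi_devices);
   * a stack (LIFO), so the ipmi driver later walks the devices in
   * reverse discovery order */
  list_add_tail(&dev->list, &dmi_devices); /* FIFO: keep discovery order */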
Signed-off-by: Carol Hebert <cah@us.ibm.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The CFI driver in the 2.6.24 kernel is broken. Even not-so-intensive
read/write operations cause incomplete writes, which lead to kernel panics
in JFFS2. We investigated the issue - it is caused by a bug in the
FL_SHUTDOWN parsing code. Sometimes the chip returns -EIO as if it were in
the FL_SHUTDOWN state when it should wait in FL_POINT (an error in the order
of conditions).
The following patch fixes the bug in the state parsing code of CFI. I've
also added comments to alert developers who want to add new cases in the
future.
Signed-off-by: Alexey Korolev <akorolev@infradead.org>
Reviewed-by: Joern Engel <joern@logfs.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A boot option for the memory controller was discussed on lkml. It is a good
idea to add it, since it saves memory for people who want to turn off the
memory controller.
By default the option is on for the following two reasons:
1. It provides compatibility with the current scheme where the memory
controller turns on if the config option is enabled
2. It allows for wider testing of the memory controller, once the config
option is enabled
We still allow the create, destroy callbacks to succeed, since they are not
aware of boot options. We do not populate the directory with memory resource
controller-specific files.
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The effects of cgroup_disable=foo are:
- foo isn't auto-mounted if you mount all cgroups in a single hierarchy
- foo isn't visible as an individually mountable subsystem
As a result there will only ever be one call to foo->create(), at init time;
all processes will stay in this group, and the group will never be mounted on
a visible hierarchy. Any additional effects (e.g. not allocating metadata)
are up to the foo subsystem.
This doesn't handle early_init subsystems (their "disabled" bit isn't set),
but it could easily be extended to do so if any of the early_init subsystems
wanted it - I think it would just involve some nastier parameter processing,
since it would occur before the command-line argument parser has been run.
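A sketch of the command-line hook (close to, but not guaranteed to match,
the actual kernel/cgroup.c code):

  static int __init cgroup_disable(char *str)
  {
          struct cgroup_subsys *ss;
          char *token;
          int i;

          while ((token = strsep(&str, ",")) != NULL) {
                  if (!*token)
                          continue;
                  /* mark any subsystem whose name matches as disabled */
                  for (i = 0; i < CGROUP_SUBSYS_COUNT; i++) {
                          ss = subsys[i];
                          if (!strcmp(token, ss->name)) {
                                  ss->disabled = 1;
                                  break;
                          }
                  }
          }
          return 1;
  }
  __setup("cgroup_disable=", cgroup_disable);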
Hugh said:
Ballpark figures, I'm trying to get this question out rather than
processing the exact numbers: CONFIG_CGROUP_MEM_RES_CTLR adds 15% overhead
to the affected paths, booting with cgroup_disable=memory cuts that back to
1% overhead (due to slightly bigger struct page).
I'm no expert on distros, they may have no interest whatever in
CONFIG_CGROUP_MEM_RES_CTLR=y; and the rest of us can easily build with or
without it, or apply the cgroup_disable=memory patches.
Unix bench's execl test result on x86_64 was:
== just after boot, without mounting any cgroup fs ==
mem_cgroup=off : Execl Throughput 43.0 3150.1 732.6
mem_cgroup=on  : Execl Throughput 43.0 2932.6 682.0
==
[lizf@cn.fujitsu.com: fix boot option parsing]
Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Sudhir Kumar <skumar@linux.vnet.ibm.com>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Building UP kernel with KGDB enabled produces the following errors and warning
(fatal due to -Werror in arch/mips/kernel/Makefile):
In file included from arch/mips/kernel/gdb-stub.c:142:
include/asm/smp.h:25:1: "raw_smp_processor_id" redefined
In file included from include/linux/sched.h:69,
from arch/mips/kernel/gdb-stub.c:126:
include/linux/smp.h:88:1: this is the location of the previous definition
In file included from arch/mips/kernel/gdb-stub.c:142:
include/asm/smp.h:62: error: redefinition of 'smp_send_reschedule'
include/linux/smp.h:102: error: previous definition of 'smp_send_reschedule' was here
include/asm/smp.h: In function `smp_send_reschedule':
include/asm/smp.h:65: error: dereferencing pointer to incomplete type
arch/mips/kernel/gdb-stub.c: At top level:
arch/mips/kernel/gdb-stub.c:660: warning: 'kgdb_wait' defined but not used
Fix the errors by not directly including <asm/smp.h> (which is already included
by <linux/smp.h>) and the warning by enclosing kgdb_wait() in #ifdef CONFIG_SMP.
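In arch/mips/kernel/gdb-stub.c, something like (sketch):

  #include <linux/smp.h>  /* already pulls in <asm/smp.h> when needed;
                             the direct #include <asm/smp.h> is dropped */

  #ifdef CONFIG_SMP
  static void kgdb_wait(void *arg)
  {
          /* ... unchanged body, now compiled only on SMP ... */
  }
  #endif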
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Long overdue update of the m68k defconfigs
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The default defconfig should be one from arch/m68k/configs/
arch/m68k/defconfig was not exactly identical to amiga_defconfig, but
considering how long both have gone without any update, that difference does
not seem to have been intentional.
Signed-off-by: Adrian Bunk <adrian.bunk@movial.fi>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
pata_ali: disable ATAPI DMA
libata: ATA_12/16 doesn't fall into ATAPI_MISC
libata: uninline atapi_cmd_type()
libata: fix IDENTIFY order in ata_bus_probe()
Mikulas Patocka noted that the optimization where we check if a buffer
was already dirty (and we avoid re-dirtying it) was not really SMP-safe.
Since the read of the old status was not synchronized with anything, an
aggressive CPU re-ordering of memory accesses might have moved that read up
to before the data was even written to the buffer; another CPU cleaning the
buffer again in that window could cause the newly dirty state to never
actually hit the disk.
Admittedly this would probably never trigger in practice, but it's still
wrong.
Mikulas sent a patch that fixed the problem, but I dislike the subtlety
of the whole optimization, so this is an alternate fix that is more
explicit about the particular SMP ordering for the optimization, and
separates out the speculative reads of the buffer state into its own
conditional (and makes the memory barrier only happen if we are likely
to actually hit the optimized case in the first place).
I considered removing the optimization entirely, but Andrew argued for its
continued existence. I'm a push-over.
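The resulting shape (a sketch, close to but not necessarily identical to the
final code):

  void mark_buffer_dirty(struct buffer_head *bh)
  {
          WARN_ON_ONCE(!buffer_uptodate(bh));

          /*
           * Speculative, unlocked read: only take the fast path out if
           * the buffer already looks dirty.  The barrier orders our
           * data writes against the re-read of the dirty bit.
           */
          if (buffer_dirty(bh)) {
                  smp_mb();
                  if (buffer_dirty(bh))
                          return;
          }

          if (!test_set_buffer_dirty(bh))
                  __set_page_dirty(bh->b_page, page_mapping(bh->b_page), 0);
  }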
Cc: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit f63fd7e299 ("parport_pc: detection
for SuperIO IT87XX POST") only released the IO port region on success,
not when the probe for the IT87XX chip failed.
That caused not only a reserved region to leak, but also caused an oops
when the driver module was unloaded and somebody tried to cat
/proc/ioports - because the string that was assigned to the IO port
region was a static string in the module virtual address area.
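A sketch of the failure path being fixed (probe details elided; chip_found()
is a stand-in):

  if (!request_region(base, 2, "parport_pc"))  /* the static string */
          return;
  if (!chip_found(base)) {
          release_region(base, 2);  /* previously leaked on failure */
          return;
  }
  /* success: the region stays reserved, as before */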
Reported-by: Lubos Lunak <l.lunak@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Petr Cvek <petr.cvek@tul.cz>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
a) if you initialize something with le32_to_cpu(...), then |= it
with host-endian and feed to cpu_to_le32(), it's most definitely
*not* __le32. As sparse would've told you...
b) the whole sequence collapses to a single |= cpu_to_le32(host-endian constant)
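In other words (hypothetical register/constant names):

  __le32 reg;
  u32 tmp;

  tmp  = le32_to_cpu(reg);        /* host-endian from here on */
  tmp |= NVREG_FLAG;              /* host-endian constant */
  reg  = cpu_to_le32(tmp);        /* back to __le32 exactly once */

  /* ...which, for a constant, collapses to the single operation: */
  reg |= cpu_to_le32(NVREG_FLAG);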
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
My previous section fix only turned one section problem into another
section problem.
This patch fixes it for real.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
The other if blocks don't redeclare temp; remove the redeclaration in
the final if() block.
drivers/net/phy/marvell.c:214:7: warning: symbol 'temp' shadows an earlier one
drivers/net/phy/marvell.c:160:6: originally declared here
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
These entries are allocated in vlan_dev_set_egress_priority,
but are never released and leak on vlan device removal.
Drop these in vlan's ->uninit callback - after the device is
brought down and everyone has been notified that it is going to
be unregistered.
Found while testing the vlan netnsization patchset.
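A sketch of the ->uninit callback doing that cleanup (field names as in the
vlan code, simplified):

  static void vlan_dev_uninit(struct net_device *dev)
  {
          struct vlan_dev_info *vlan = VLAN_DEV_INFO(dev);
          struct vlan_priority_tci_mapping *pm;
          int i;

          for (i = 0; i < ARRAY_SIZE(vlan->egress_priority_map); i++) {
                  while ((pm = vlan->egress_priority_map[i]) != NULL) {
                          vlan->egress_priority_map[i] = pm->next;
                          kfree(pm);
                  }
          }
  }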
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The commits:
commit 37a47db8d7
Author: Balaji Rao <balajirrao@gmail.com>
Date: Wed Jan 30 13:30:03 2008 +0100
x86: assign IRQs to HPET timers, fix
and
commit e3f37a54f6
Author: Balaji Rao <balajirrao@gmail.com>
Date: Wed Jan 30 13:30:03 2008 +0100
x86: assign IRQs to HPET timers
have been identified as causing a regression on some platforms due to
the assignment of legacy IRQs, which makes the legacy devices
connected to those IRQs dysfunctional.
Revert them.
This fixes http://bugzilla.kernel.org/show_bug.cgi?id=10382
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We already catch most of the TSC problems by sanity checks, but there
is a subtle bug which has been in the code for ever. This can cause
time jumps in the range of hours.
This was reported in:
http://lkml.org/lkml/2007/8/23/96
and
http://lkml.org/lkml/2008/3/31/23
I was able to reproduce the problem with a gettimeofday loop test on a
dual core and a quad core machine, both of which have synchronized
TSCs. The TSCs seem not to be perfectly in sync though, and the
kernel is not able to detect the slight delta in the sync check. Still,
there exists an extremely small window where this delta can be observed
as a real big time jump. So far I was only able to reproduce this
with the vsyscall gettimeofday implementation, but in theory this
might be observable with the syscall-based version as well.
CPU 0 updates the clock source variables under the xtime/vsyscall lock,
and CPU 1, where the TSC is slightly behind CPU 0, reads the time right
after the seqlock is unlocked.
The clocksource reference data was updated with the TSC from CPU0 and
the value which is read from TSC on CPU1 is less than the reference
data. This results in a huge delta value due to the unsigned
subtraction of the TSC value and the reference value. This algorithm
cannot be changed due to the support of wrapping clock sources like the
pm timer.
The huge delta is converted to nanoseconds and added to xtime, which
is then observable by the caller. The next gettimeofday call on CPU1
will show the correct time again as now the TSC has advanced above the
reference value.
To prevent this TSC specific wreckage we need to compare the TSC value
against the reference value and return the latter when it is larger
than the actual TSC value.
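The guard, sketched (close to the actual read_tsc(); cycle_last is the
clocksource reference value):

  static cycle_t read_tsc(void)
  {
          cycle_t ret = (cycle_t)get_cycles();

          /* Never return a value behind the reference: a slightly
           * lagging TSC would underflow into a huge unsigned delta. */
          return ret >= clocksource_tsc.cycle_last ?
                  ret : clocksource_tsc.cycle_last;
  }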
I pondered marking the TSC unstable when the readout is smaller than
the reference value, but that would render an otherwise good and fast
clocksource unusable without a really good reason.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>