Because newly offlined CPUs continue executing after completing the
CPU_DYING notifiers, they legitimately enter the scheduler and use
RCU while appearing to be offline. This calls for a more sophisticated
approach as follows:
1. RCU marks the CPU online during the CPU_UP_PREPARE phase.
2. RCU marks the CPU offline during the CPU_DEAD phase.
3. Diagnostics regarding use of read-side RCU by offline CPUs use
RCU's accounting rather than the cpu_online_map. (Note that
__call_rcu() still uses cpu_online_map to detect illegal
invocations within CPU_DYING notifiers.)
4. Offline CPUs are prevented from hanging the system by
force_quiescent_state(), which pays attention to cpu_online_map.
Some additional work (in a later commit) will be needed to
guarantee that force_quiescent_state() waits a full jiffy before
assuming that a CPU is offline, for example, when called from
idle entry. (This commit also makes the one-jiffy wait
explicit, since the old-style implicit wait can now be defeated
by RCU_FAST_NO_HZ and by rcutorture.)
This approach avoids the false positives encountered when attempting to
use more exact classification of CPU online/offline state.
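For illustration, here is a minimal userspace model of the new accounting
(all names, types, and the NR_CPUS value are illustrative stand-ins, not
the kernel's):

    #include <assert.h>
    #include <stdbool.h>

    #define NR_CPUS 4                       /* illustrative */

    static bool cpu_online_map[NR_CPUS];    /* cleared before CPU_DYING runs */
    static bool rcu_cpu_online[NR_CPUS];    /* RCU's own accounting */

    static void cpu_up_prepare(int cpu) { rcu_cpu_online[cpu] = true; }  /* step 1 */
    static void cpu_dead(int cpu)       { rcu_cpu_online[cpu] = false; } /* step 2 */

    /* Step 3: diagnostics consult RCU's accounting rather than
     * cpu_online_map, so a CPU_DYING CPU may still legally use RCU. */
    static void assert_rcu_usable(int cpu)
    {
        assert(rcu_cpu_online[cpu]);
    }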
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
It is illegal to use RCU from a CPU that has reported itself idle or
offline to RCU. However, it can be quite difficult to determine
from a stack trace whether or not a given CPU is idle or offline.
Therefore, this commit adds idle/offline diagnostics to the lockdep-RCU
error message.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The rcu_prepare_for_idle() function is always called with interrupts
disabled, so there is no reason to disable interrupts again within
rcu_prepare_for_idle(). Therefore, this commit removes all of the
interrupt disabling, also removing a latent disabling-unbalance bug.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Now that TREE_RCU and TREE_PREEMPT_RCU no longer do anything different
for the single-CPU case, there is no need for multiple definitions of
synchronize_sched_expedited(). It is no longer in any sense a plug-in,
so move it from kernel/rcutree_plugin.h to kernel/rcutree.c.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Although it is legal to use RCU during early boot, it is anything
but legal to use RCU at runtime from an offlined CPU. After all, RCU
explicitly ignores offlined CPUs. This commit therefore adds checks
for runtime use of RCU from offlined CPUs.
These checks are not perfect; in particular, they can be subverted
through use of things like rcu_dereference_raw(). Note that it is not
possible to put checks in rcu_read_lock() and friends, because these
primitives are used in code that might be protected by either RCU or
lock-based mechanisms, so checking rcu_read_lock() gets you fat piles
of false positives.
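As a rough sketch of where such a check can live instead, consider the
update-side diagnostics such as rcu_dereference_check(). The userspace
stand-ins below only model the shape of the check, not the kernel code:

    #include <assert.h>
    #include <stdbool.h>

    static _Thread_local bool rcu_cpu_offline;  /* set once RCU is told */

    static bool rcu_lockdep_current_cpu_online(void)
    {
        return !rcu_cpu_offline;
    }

    /* Model of the added diagnostic: the protection condition must hold
     * AND the current CPU must not have told RCU that it is offline. */
    #define rcu_dereference_check(p, c) \
        (assert((c) && rcu_lockdep_current_cpu_online()), (p))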
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Add documentation of CONFIG_RCU_CPU_STALL_VERBOSE, CONFIG_RCU_CPU_STALL_INFO,
and RCU_STALL_DELAY_DELTA. Describe multiple stall-warning messages from
a single stall, and the timing of the subsequent messages. Add headings.
Remove RCU_SECONDS_TILL_STALL_RECHECK because this value is now computed
at runtime from RCU_CPU_STALL_TIMEOUT, so that sysfs changes to the timeout
value now directly affect the RCU_SECONDS_TILL_STALL_RECHECK value.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Add module parameters to rcutorture that induce a CPU stall.
The stall_cpu parameter specifies how long to stall in seconds,
defaulting to zero, which indicates no stalling is to be undertaken.
The stall_cpu_holdoff parameter specifies how many seconds after
insmod (or boot, if rcutorture is built into the kernel) that this
stall is to start. The default value for stall_cpu_holdoff is ten
seconds.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The torture.txt documentation gives an example rcutorture run with a
100-second duration. This is ridiculously short, unless maybe testing
a fix for an egregious bug. Use a more-realistic one-hour duration for
the example.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
When rcutorture is started automatically at boot time, it might well
also start CPU-hotplug operations at that time, which might not be
desirable. This commit therefore adds an rcutorture parameter that
allows CPU-hotplug operations to be held off for the specified number
of seconds after the start of boot.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
There have been situations where RCU CPU stall warnings were caused by
issues in scheduling-clock timer initialization. To make it easier to
track these down, this commit causes the RCU CPU stall-warning messages
to print out the number of scheduling-clock interrupts taken in the
current grace period for each stalled CPU.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The default CONFIG_RCU_CPU_STALL_TIMEOUT value of 60 seconds has served
Linux users well for production use for quite some time. However, for
debugging, there will be more than three minutes between subsequent
stall-warning messages. This can be an annoyingly long wait if you
are trying to work out where the offending infinite loop is hiding.
Therefore, this commit provides an rcu_cpu_stall_timeout sysfs
parameter that may be adjusted at boot time and at runtime to speed
up debugging.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Now that both TINY_RCU and TINY_PREEMPT_RCU have been in place for a while,
it is time to remove UP support from TREE_RCU, which is what this commit
does.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
There is no convenient expression for rcu_dereference_protected()
when it is used in tearing down multilinked structures following
a grace period. For example, suppose that an element containing an
RCU-protected pointer to a second element is removed from an enclosing
RCU-protected data structure, then the write-side lock is released,
and finally synchronize_rcu() is invoked to wait for a grace period.
Then it is necessary to traverse the pointer in order to free up the
second element. But we are not in an RCU read-side critical section
and we are holding no locks, so the usual rcu_dereference_check() and
rcu_dereference_protected() primitives are not appropriate. Neither
is rcu_dereference_raw(), as it is intended for use in data structures
where the user defines the locking design (for example, list_head).
So this responsibility is added to rcu_access_pointer()'s list, and
this commit updates rcu_access_pointer()'s header comment accordingly.
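A sketch of the pattern in question, with invented struct names and
stubbed RCU primitives so that only the shape matters:

    #include <stdlib.h>

    struct inner { int data; };
    struct outer { struct inner *child; };  /* RCU-protected pointer */

    static void synchronize_rcu(void) { /* stub: wait for a grace period */ }

    /* Stand-in for rcu_access_pointer(): fetch the pointer without
     * asserting any protection. */
    #define rcu_access_pointer(p) (p)

    static void teardown(struct outer *removed)
    {
        /* "removed" was unlinked under the write-side lock, which
         * has since been released; wait out pre-existing readers. */
        synchronize_rcu();

        /* No read-side critical section and no locks held: exactly
         * the case that rcu_access_pointer() now covers. */
        free(rcu_access_pointer(removed->child));
        free(removed);
    }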
Suggested-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Although it is OK to be preempted in an RCU read-side critical section
for TREE_PREEMPT_RCU, it is definitely not OK to be preempted, to block,
or to call might_sleep() within an RCU read-side critical section for
TREE_RCU. Unfortunately, rcu_might_sleep() currently checks only for
RCU-bh and RCU-sched read-side critical sections. This commit therefore
makes rcu_might_sleep() also check for RCU read-side critical sections,
but only in TREE_RCU builds.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The inner idle loop is an extended quiescent state for all flavors
of RCU, but there have been recent bugs involving use of RCU read-side
primitives from within the idle loop. Therefore, this commit enlists
lockdep-RCU to detect attempts to enter the inner idle loop while in
an RCU read-side critical section, emitting a lockdep-RCU splat if so.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The recent updates to RCU_FAST_NO_HZ have left rcu_needs_cpu() doing
more than just checking for callbacks, so make the name of
rcu_preempt_needs_cpu() reflect what it actually checks by renaming it
to rcu_preempt_cpu_has_callbacks().
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This is a port of commit #82e78d80 from TREE_PREEMPT_RCU to
TINY_PREEMPT_RCU.
This commit uses the fact that current->rcu_boost_mutex is set
any time that the RCU_READ_UNLOCK_BOOSTED flag is set in the
current->rcu_read_unlock_special bitmask. This allows tests of
the bit to be changed to tests of the pointer, which in turn allows
the RCU_READ_UNLOCK_BOOSTED flag to be eliminated.
Please note that the check of current->rcu_read_unlock_special need not
change because any time that RCU_READ_UNLOCK_BOOSTED was set, so was
RCU_READ_UNLOCK_BLOCKED. Therefore, __rcu_read_unlock() can continue
testing current->rcu_read_unlock_special for non-zero, as before.
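A self-contained sketch of the change (the struct, the stub, and the
function are illustrative, not the TINY_PREEMPT_RCU code):

    #include <stddef.h>

    struct rt_mutex { int dummy; };
    static void rt_mutex_unlock(struct rt_mutex *m) { (void)m; /* stub */ }

    struct task {
        int rcu_read_unlock_special;
        struct rt_mutex *rcu_boost_mutex;
    };

    static void deboost(struct task *t)
    {
        /* Old form tested the flag bit:
         *   if (t->rcu_read_unlock_special & RCU_READ_UNLOCK_BOOSTED)
         * New form tests the pointer, which is non-NULL exactly when
         * the bit used to be set, so the bit can be eliminated: */
        if (t->rcu_boost_mutex != NULL) {
            rt_mutex_unlock(t->rcu_boost_mutex);
            t->rcu_boost_mutex = NULL;
        }
    }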
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This is a port to TINY_RCU of Peter Zijlstra's commit #ec433f0c5.
The rcu_read_unlock_special() function relies on in_irq() to exclude
scheduler activity from interrupt level. This fails because irq_exit()
can invoke the scheduler after clearing the preempt_count() bits that
in_irq() uses to determine that it is at interrupt level. This situation
can result in failures as follows:
    $task                       IRQ                     SoftIRQ

    rcu_read_lock()

    /* do stuff */

    <preempt> |= UNLOCK_BLOCKED

    rcu_read_unlock()
      --t->rcu_read_lock_nesting

                                irq_enter();
                                /* do stuff, don't use RCU */
                                irq_exit();
                                  sub_preempt_count(IRQ_EXIT_OFFSET);
                                invoke_softirq()

                                                        ttwu();
                                                          spin_lock_irq(&pi->lock)
                                                          rcu_read_lock();
                                                          /* do stuff */
                                                          rcu_read_unlock();
                                                            rcu_read_unlock_special()
                                                              rcu_report_exp_rnp()
                                                                ttwu()
                                                                  spin_lock_irq(&pi->lock) /* deadlock */

      rcu_read_unlock_special(t);
This can be triggered 'easily' because invoke_softirq() immediately does
a ttwu() of ksoftirqd/# instead of doing the in-place softirq stuff first,
but even without that the above happens.
Cure this by also excluding softirqs from the rcu_read_unlock_special()
handler and ensuring the force_irqthreads ksoftirqd/# wakeup is done
from full softirq context.
It is also necessary to delay the ->rcu_read_lock_nesting decrement until
after rcu_read_unlock_special(). This delay is handled by the commit
"Protect __rcu_read_unlock() against scheduler-using irq handlers".
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This is a port of commit #b0d3041 from TREE_RCU to TREE_PREEMPT_RCU.
Under some rare but real combinations of configuration parameters, RCU
callbacks are posted during early boot that use kernel facilities that are
not yet initialized. Therefore, when these callbacks are invoked, hard
hangs and crashes ensue. This commit therefore prevents RCU callbacks
from being invoked until after the scheduler is fully up and running,
as in after multiple tasks have been spawned.
It might well turn out that a better approach is to identify the specific
RCU callbacks that are causing this problem, but that discussion will
wait until such time as someone really needs an RCU callback to be invoked
(as opposed to merely registered) during early boot.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This is a port of commit #be0e1e21 to TINY_PREEMPT_RCU. This uses
noinline to prevent rcu_read_unlock_special() from being inlined into
__rcu_read_unlock().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
This commit ports commit #10f39bb1b2 (rcu: protect __rcu_read_unlock()
against scheduler-using irq handlers) from TREE_PREEMPT_RCU to
TINY_PREEMPT_RCU. The following is a corresponding port of that
commit message.
The addition of RCU read-side critical sections within runqueue and
priority-inheritance critical sections introduced some deadlocks,
for example, involving interrupts from __rcu_read_unlock() where the
interrupt handlers call wake_up(). This situation can cause the
instance of __rcu_read_unlock() invoked from interrupt to do some
of the processing that would otherwise have been carried out by the
task-level instance of __rcu_read_unlock(). When the interrupt-level
instance of __rcu_read_unlock() is called with a scheduler lock held from
interrupt-entry/exit situations where in_irq() returns false, deadlock can
result. Of course, in a UP kernel, there are not really any deadlocks,
but the upper-level critical section can still be fatally confused
by the lower-level critical section changing things out from under it.
This commit resolves these deadlocks by using negative values of the
per-task ->rcu_read_lock_nesting counter to indicate that an instance of
__rcu_read_unlock() is in flight, which in turn prevents instances
invoked from interrupt handlers from doing any special processing. Note that nested
rcu_read_lock()/rcu_read_unlock() pairs are still permitted, but they will
never see ->rcu_read_lock_nesting go to zero, and will therefore never
invoke rcu_read_unlock_special(), thus preventing them from seeing the
RCU_READ_UNLOCK_BLOCKED bit should it be set in ->rcu_read_unlock_special.
This patch also adds a check for ->rcu_read_lock_nesting being negative
in rcu_check_callbacks(), thus preventing the RCU_READ_UNLOCK_NEED_QS
bit from being set should a scheduling-clock interrupt occur while
__rcu_read_unlock() is exiting from an outermost RCU read-side critical
section.
Of course, __rcu_read_unlock() can be preempted during the time that
->rcu_read_lock_nesting is negative. This could result in the setting
of the RCU_READ_UNLOCK_BLOCKED bit after __rcu_read_unlock() checks it,
and would also result in this task being queued on the corresponding
rcu_node structure's blkd_tasks list. Therefore, some later RCU read-side
critical section would enter rcu_read_unlock_special() to clean up --
which could result in deadlock (OK, OK, fatal confusion) if that RCU
read-side critical section happened to be in the scheduler where the
runqueue or priority-inheritance locks were held.
To prevent the possibility of fatal confusion that might result from
preemption during the time that ->rcu_read_lock_nesting is negative,
this commit also makes rcu_preempt_note_context_switch() check for
negative ->rcu_read_lock_nesting, thus refraining from queuing the task
(and from setting RCU_READ_UNLOCK_BLOCKED) if we are already exiting
from the outermost RCU read-side critical section (in other words,
we really are no longer actually in that RCU read-side critical
section). In addition, rcu_preempt_note_context_switch() invokes
rcu_read_unlock_special() to carry out the cleanup in this case, which
clears out the ->rcu_read_unlock_special bits and dequeues the task
(if necessary), in turn avoiding needless delay of the current RCU grace
period and needless RCU priority boosting.
It is still illegal to call rcu_read_unlock() while holding a scheduler
lock if the prior RCU read-side critical section has ever had both
preemption and irqs enabled. However, the common use case is legal,
namely where the entire RCU read-side critical section executes with
irqs disabled, for example, when the scheduler lock is held across the
entire lifetime of the RCU read-side critical section.
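A compact userspace model of the negative-nesting trick (using INT_MIN
as the in-flight marker and a stubbed cleanup function; memory barriers
are omitted):

    #include <limits.h>

    static _Thread_local int rcu_read_lock_nesting;

    static void rcu_read_unlock_special(void) { /* stub: cleanup */ }

    static void rcu_read_lock(void) { rcu_read_lock_nesting++; }

    static void rcu_read_unlock(void)
    {
        if (rcu_read_lock_nesting != 1) {
            /* Nested: never reaches zero, so never triggers
             * special processing. */
            rcu_read_lock_nesting--;
        } else {
            /* Outermost: go negative first so an interrupt-level
             * instance (or the context-switch path) sees "unlock
             * in flight" and stands down. */
            rcu_read_lock_nesting = INT_MIN;
            rcu_read_unlock_special();
            rcu_read_lock_nesting = 0;
        }
    }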
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The grace-period initialization sequence in rcu_start_gp() has a special
case for systems where the rcu_node tree is a single rcu_node structure.
This made sense some years ago when systems were smaller and up to 64
CPUs could share a single rcu_node structure, but now that large systems
are common and a given leaf rcu_node structure can support only 16 CPUs
(due to lock contention on the rcu_node's ->lock field), this optimization
is almost never taken. And even the small mobile platforms that might
otherwise take it would likely prefer the resulting reduction in kernel text.
Therefore, this commit removes the check for single-rcu_node trees.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
RCU's current CPU-offline code path dumps all of the outgoing CPU's
callbacks onto the RCU_NEXT_TAIL portion of the surviving CPU's
callback list. This means that all the ready-to-invoke callbacks from
the outgoing CPU must wait for another full RCU grace period. This was
just fine when CPU-hotplug events were rare, but there is increasing
evidence that users are planning to make much heavier use of CPU hotplug.
Therefore, this commit changes the callback-dumping procedure so that
callbacks that are ready to invoke are moved to the RCU_DONE_TAIL
portion of the surviving CPU's callback list. This avoids running
these callbacks through a second unnecessary grace period.
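The list surgery, modeled on a simplified segmented callback list (an
illustrative model, not the rcu_data layout; both tail pointers start
out pointing at ->head when the list is empty):

    struct rcu_head { struct rcu_head *next; };

    struct cb_list {
        struct rcu_head *head;        /* all callbacks, done ones first */
        struct rcu_head **done_tail;  /* after last ready-to-invoke cb */
        struct rcu_head **next_tail;  /* after last callback overall */
    };

    /* Splice the outgoing CPU's ready callbacks ("ready" through
     * *ready_tail) in after the survivor's done segment rather than
     * appending them at next_tail, so they skip the extra grace period. */
    static void adopt_ready_cbs(struct cb_list *surv,
                                struct rcu_head *ready,
                                struct rcu_head **ready_tail)
    {
        *ready_tail = *surv->done_tail;   /* pending cbs now follow */
        *surv->done_tail = ready;
        if (surv->next_tail == surv->done_tail)
            surv->next_tail = ready_tail; /* there were no pending cbs */
        surv->done_tail = ready_tail;
    }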
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Because quiescent states are now reported from offline CPUs in
CPU_DYING state, there is some possibility that such a CPU might
note the end of a grace period and attempt to start invoking
callbacks. This would be a very bad thing, and is supposed to
be prevented by the fact that the CPU_DYING CPU gets rid of all
its callbacks before reporting the quiescent state. However,
there is other CPU-offline code in the kernel, and it is quite
possible that someone will invoke RCU core processing from that
code. Therefore, this commit adds a warning for this case.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Currently, a given CPU is permitted to remain in dyntick-idle mode
indefinitely if it has only lazy RCU callbacks queued. This is vulnerable
to corner cases in NUMA systems, so limit the time to six seconds by
default. (Currently controlled by a cpp macro.)
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Make rcutorture check for CPU-hotplug failures and complain if there
were any.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Move ->qsmaskinit and blkd_tasks[] manipulation to the CPU_DYING
notifier. This simplifies the code by eliminating a potential
deadlock and by reducing the responsibilities of force_quiescent_state().
Also rename functions to make their connection to the CPU-hotplug
stages explicit.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The call_rcu() in mesh_gate_del() invokes mesh_gate_node_reclaim(),
which simply calls kfree(). So convert the call_rcu() to kfree_rcu(),
allowing mesh_gate_node_reclaim() to be eliminated.
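The general shape of this and the following conversions, sketched with
an invented struct (call_rcu(), kfree_rcu(), and container_of() are the
real kernel APIs; the fragment assumes kernel context with
<linux/rcupdate.h> and <linux/slab.h>):

    struct gate_node {          /* illustrative stand-in */
        int payload;
        struct rcu_head rcu;
    };

    /* Before: a callback whose only job is kfree(). */
    static void gate_node_reclaim(struct rcu_head *rp)
    {
        kfree(container_of(rp, struct gate_node, rcu));
    }
    /*    call_rcu(&node->rcu, gate_node_reclaim);    */

    /* After: kfree_rcu() names the rcu_head field directly, and
     * the reclaim callback above can be deleted. */
    /*    kfree_rcu(node, rcu);                       */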
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: "John W. Linville" <linville@tuxdriver.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-wireless@vger.kernel.org
Cc: netdev@vger.kernel.org
The call_rcu() in do_ip_setsockopt() invokes opt_kfree_rcu(), which just
calls kfree(). So convert the call_rcu() to kfree_rcu(), which allows
opt_kfree_rcu() to be eliminated.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Because opt_kfree_rcu() just calls kfree(), all call_rcu() uses of it
may be converted to kfree_rcu(). This permits opt_kfree_rcu() to
be eliminated.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
The call_rcu() in ft_tport_delete() invokes ft_tport_rcu_free(),
which just does a kfree(). So convert the call_rcu() to kfree_rcu(),
allowing ft_tport_rcu_free() to be eliminated.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Jesper Juhl <jj@chaosbits.net>
Cc: linux-scsi@vger.kernel.org
Cc: target-devel@vger.kernel.org
The call_rcu() in unregister_external_interrupt() invokes
ext_int_hash_update(), which just does a kfree(). Convert the
call_rcu() to kfree_rcu(), allowing ext_int_hash_update() to
be eliminated.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
The RCU_TRACE kernel parameter has always been intended for debugging,
not for production use. Formalize this by moving RCU_TRACE from
init/Kconfig to lib/Kconfig.debug.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
When CONFIG_RCU_FAST_NO_HZ is enabled, RCU will allow a given CPU to
enter dyntick-idle mode even if it still has RCU callbacks queued.
RCU avoids system hangs in this case by scheduling a timer for several
jiffies in the future. However, if all of the callbacks on that CPU
are from kfree_rcu(), there is no reason to wake the CPU up, as it is
not a problem to defer freeing of memory.
This commit therefore tracks the number of callbacks on a given CPU
that are from kfree_rcu(), and avoids scheduling the timer if all of
a given CPU's callbacks are from kfree_rcu().
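In outline (the per-CPU fields below are illustrative bookkeeping, not
the actual rcu_data/rcu_dynticks layout):

    #include <stdbool.h>

    struct cpu_cbs {           /* illustrative per-CPU bookkeeping */
        long qlen;             /* total callbacks queued */
        long qlen_lazy;        /* of which are from kfree_rcu() */
    };

    /* Only a non-lazy callback justifies arming the wakeup timer:
     * deferred kfree() can simply wait for the CPU to wake up. */
    static bool needs_wakeup_timer(const struct cpu_cbs *c)
    {
        return c->qlen != c->qlen_lazy;
    }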
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
The push for energy efficiency will require that RCU tag rcu_head
structures to indicate whether or not their invocation is time critical.
This tagging is best carried out in the bottom bits of the ->next
pointers in the rcu_head structures. This tagging requires that the
rcu_head structures be properly aligned, so this commit adds the required
diagnostics.
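The required check amounts to something like the following (expressed
as a userspace assert rather than the kernel's warning machinery):

    #include <assert.h>
    #include <stdint.h>

    struct rcu_head {
        struct rcu_head *next;
        void (*func)(struct rcu_head *);
    };

    /* Two tag bits fit in ->next only if every rcu_head is at least
     * four-byte aligned. */
    static void check_rcu_head_alignment(struct rcu_head *head)
    {
        assert(((uintptr_t)head & 0x3) == 0);
    }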
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
It is illegal to have a grace period within a same-flavor RCU read-side
critical section, so this commit adds lockdep-RCU checks to splat when
such abuse is encountered. This commit does not detect more elaborate
RCU deadlock situations. These situations might be a job for lockdep
enhancements.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Although TREE_PREEMPT_RCU indirectly uses might_sleep() to detect illegal
use of synchronize_sched() and synchronize_rcu_bh() from within an RCU
read-side critical section, this might_sleep() check is bypassed when
there is only a single CPU (for example, when running an SMP kernel on
a single-CPU system). This patch therefore adds a might_sleep() call
to the rcu_blocking_is_gp() check that is unconditionally invoked from
both synchronize_sched() and synchronize_rcu_bh().
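The fix amounts to something like the following sketch (might_sleep()
and num_online_cpus() are stubbed so the fragment stands alone):

    static void might_sleep(void)    { /* stub: debug sleep check */ }
    static int num_online_cpus(void) { return 1; /* stub */ }

    static int rcu_blocking_is_gp(void)
    {
        might_sleep();  /* runs even on the single-CPU fast path */
        return num_online_cpus() == 1;
    }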
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Merge tag 'fixes-3.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
These are the bug fixes that have accumulated since 3.3-rc3 in arm-soc.
The majority of them are regression fixes for stuff that broke during
the 3.3 merge window.
The notable ones are:
* The at91 ata drivers both broke because of an earlier cleanup patch that
some other patches were based on. Jean-Christophe decided to remove
the legacy at91_ide driver and fix the new-style at91-pata driver while
keeping the cleanup patch. I almost rejected the patches for being too
late and too big but in the end decided to accept them because they
fix a regression.
* A patch fixing build breakage from the sysdev-to-device conversion
colliding with other changes touches a number of mach-s3c files.
* b0654037 "ARM: orion: Fix Orion5x GPIO regression from MPP cleanup"
is a mechanical change that unfortunately touches a lot of lines
that show up in the diffstat.
* tag 'fixes-3.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (28 commits)
ARM: at91: drop ide driver in favor of the pata one
pata/at91: use newly introduced SMC accessors
ARM: at91: add accessor to manage SMC
ARM: at91:rtc/rtc-at91sam9: ioremap register bank
ARM: at91: USB AT91 gadget registration for module
ep93xx: fix build of vision_ep93xx.c
ARM: OMAP2xxx: PM: fix OMAP2xxx-specific UART idle bug in v3.3
ARM: orion: Fix USB phy for orion5x.
ARM: orion: Fix Orion5x GPIO regression from MPP cleanup
ARM: EXYNOS: Add cpu-offset property in gic device tree node
ARM: EXYNOS: Bring exynos4-dt up to date
ARM: OMAP3: cm-t35: fix section mismatch warning
ARM: OMAP2: Fix the OMAP2 only build break seen with 2011+ ARM tool-chains
ARM: tegra: paz00: fix wrong UART port on mini-pcie plug
ARM: tegra: paz00: fix wrong SD1 power gpio
i2c: tegra: Add devexit_p() for remove
ARM: EXYNOS: Correct M-5MOLS sensor clock frequency on Universal C210 board
ARM: EXYNOS: Correct framebuffer window size on Nuri board
ARM: SAMSUNG: Fix missing api-change from subsys_interface change
ARM: EXYNOS: Fix "warning: initialization from incompatible pointer type"
...
1) VETH_INFO_PEER netlink attribute needs to have its size validated,
from Thomas Graf.
2) 'poll' module option of bnx2x driver crashes the machine, just remove
it. From Michal Schmidt.
3) ks8851_mll driver reads the irq number from two places, but only
initializes one of them, oops. Use only one location and fix this
problem, from Jan Weitzel.
4) Fix buffer overrun and unicast steering bugs in Mellanox mlx4 driver,
from Eugenia Emantayev.
5) Swapped kcalloc() args in RxRPC and mlx4, from Axel Lin.
6) PHY MDIO device name regression fixes from Florian Fainelli.
7) If the wake event IRQ line is different from the netdevice one, we
have to properly route it to the stmmac interrupt handler. From
Francesco Virlinzi.
8) Fix rwlock lock initialization ordering bug in mac80211, from
Mohammed Shafi Shajakhan.
9) TCP lost_cnt can get out of sync, and in fact go negative, in certain
circumstances. Fix the way we specify what sequence range to operate
on in tcp_sacktag_one() to fix this bug. From Neal Cardwell.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (27 commits)
net/ethernet: ks8851_mll fix irq handling
veth: Enforce minimum size of VETH_INFO_PEER
stmmac: update the driver version to Feb 2012 (v2)
stmmac: move hw init in the probe (v2)
stmmac: request_irq when use an ext wake irq line (v2)
stmmac: do not discard frame on dribbling bit assert
ipheth: Add iPhone 4S
mlx4: add unicast steering entries to resource_tracker
mlx4: fix QP tree trashing
mlx4: fix buffer overrun
3c59x: shorten timer period for slave devices
netpoll: netpoll_poll_dev() should access dev->flags
RxRPC: Fix kcalloc parameters swapped
bnx2x: remove the 'poll' module option
tcp: fix tcp_shifted_skb() adjustment of lost_cnt_hint for FACK
ks8851: Fix NOHZ local_softirq_pending 08 warning
bnx2x: fix bnx2x_storm_stats_update() on big endian
ixp4xx-eth: fix PHY name to match MDIO bus name
octeon: fix PHY name to match MDIO bus name
fec: fix PHY name to match fixed MDIO bus name
...
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Fixes a bootstrapping issue for some registers when a less commonly used
method for register cache initialisation is used. Only affects a fairly
small proportion of users that both don't use explicit register defaults
and do use the cache.
* tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: Fix cache defaults initialization from raw cache defaults
Merge tag 'ecryptfs-3.3-rc4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs
Fixes maximum filename length and filesystem type reporting in statfs() calls
and also fixes stale inode mode bits on eCryptfs inodes after a POSIX ACL was
set on the lower filesystem's inode.
* tag 'ecryptfs-3.3-rc4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs:
ecryptfs: remove the second argument of k[un]map_atomic()
eCryptfs: Copy up lower inode attrs after setting lower xattr
eCryptfs: Improve statfs reporting
Here are a few more fixes for powerpc. Some are regression fixes; the rest
are simple/obvious/nasty enough that I deemed them good to go now.
Here's also step one of deprecating legacy iSeries support: we are
removing it from the main defconfig.
Nobody seems to be using it anymore, and the code is nasty to maintain
(it involves horrible hacks in various low-level areas of the kernel), so we
plan to actually rip it out at some point. For now, let's just avoid
building it by default. Stephen will proceed to do the actual removal
later (probably 3.4 or 3.5).
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc:
powerpc/perf: power_pmu_start restores incorrect values, breaking frequency events
powerpc/adb: Use set_current_state()
powerpc: Disable interrupts early in Program Check
powerpc: Remove legacy iSeries from ppc64_defconfig
powerpc/fsl/pci: Fix PCIe fixup regression
powerpc: Fix kernel log of oops/panic instruction dump
One regression fix for SR-IOV on PPC and a couple of misc fixes from
Yinghai.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci:
PCI: Fix pci cardbus removal
PCI: set pci sriov page size before reading SRIOV BAR
PCI: workaround hard-wired bus number V2
Three radeon fixes. I have some exynos fixes to push later, but I'll queue
them separately once I've looked them over a bit.
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/radeon/kms: fix MSI re-arm on rv370+
drm/radeon/kms/atom: bios scratch reg handling updates
drm/radeon/kms: drop lock in return path of radeon_fence_count_emitted.
After all the FPU state cleanups and finally finding the problem that
caused all our FPU save/restore problems, this re-introduces the
preloading of FPU state that was removed in commit b3b0870ef3 ("i387:
do not preload FPU state at task switch time").
However, instead of simply reverting the removal, this reimplements
preloading with several fixes, most notably
- properly abstracted as a true FPU state switch, rather than as
open-coded save and restore with various hacks.
In particular, implementing it as a proper FPU state switch allows us
to optimize the CR0.TS flag accesses: there is no reason to set the
TS bit only to then almost immediately clear it again. CR0 accesses
are quite slow and expensive, don't flip the bit back and forth for
no good reason.
- Make sure that the same model works for both x86-32 and x86-64, so
that there are no gratuitous differences between the two due to the
way they handle segment state, which differs for architectural reasons
that really don't matter to the FPU state.
- Avoid exposing the "preload" state to the context switch routines,
and in particular allow the concept of lazy state restore: if nothing
else has used the FPU in the meantime, and the process is still on
the same CPU, we can avoid restoring state from memory entirely, just
re-expose the state that is still in the FPU unit.
That optimized lazy restore isn't actually implemented here, but the
infrastructure is set up for it. Of course, older CPU's that use
'fnsave' to save the state cannot take advantage of this, since the
state saving also trashes the state.
In other words, there is now an actual _design_ to the FPU state saving,
rather than just random historical baggage. Hopefully it's easier to
follow as a result.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This moves the bit that indicates whether a thread has ownership of the
FPU from the TS_USEDFPU bit in thread_info->status to a word of its own
(called 'has_fpu') in task_struct->thread.has_fpu.
This fixes two independent bugs at the same time:
- changing 'thread_info->status' from the scheduler causes nasty
problems for the other users of that variable, since it is defined to
be thread-synchronous (that's what the "TS_" part of the naming was
supposed to indicate).
So perfectly valid code could (and did) do

        ti->status |= TS_RESTORE_SIGMASK;

and the compiler was free to do that as separate load, or, and store
instructions. Which can cause problems with preemption, since a task
switch could happen in between, and change the TS_USEDFPU bit. The
change to TS_USEDFPU would be overwritten by the final store.
In practice, this seldom happened, though, because the 'status' field
was seldom used more than once, so gcc would generally tend to
generate code that used a read-modify-write instruction and thus
happened to avoid this problem - RMW instructions are naturally low
fat and preemption-safe.
- On x86-32, the current_thread_info() pointer would, during interrupts
and softirqs, point to a *copy* of the real thread_info, because
x86-32 uses %esp to calculate the thread_info address, and thus the
separate irq (and softirq) stacks would cause these kinds of odd
thread_info copy aliases.
This is normally not a problem, since interrupts aren't supposed to
look at thread information anyway (what thread is running at
interrupt time really isn't very well-defined), but it confused the
heck out of irq_fpu_usable() and the code that tried to squirrel
away the FPU state.
(It also caused untold confusion for us poor kernel developers).
It also turns out that using 'task_struct' is actually much more natural
for most of the call sites that care about the FPU state, since they
tend to work with the task struct for other reasons anyway (ie
scheduling). And the FPU data that we are going to save/restore is
found there too.
Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to
the %esp issue.
Cc: Arjan van de Ven <arjan@linux.intel.com>
Reported-and-tested-by: Raphael Prevost <raphael@buro.asia>
Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com>
Tested-by: Peter Anvin <hpa@zytor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>