With the removal of 31 bit support a couple of defines became unused.
Remove them.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We removed the only user of this define in the rtmutex code. Get rid
of it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Since sizeof(long) == 4 is always false now, simplify cmpxchg_double a bit.
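As a sketch of the kind of simplification this allows (the
__cmpxchg_double_4/_8 helpers are made-up names for illustration):

  /* before: dispatch on the operand size at compile time */
  #define cmpxchg_double(p1, p2, o1, o2, n1, n2)                \
          (sizeof(long) == 4 ?                                  \
           __cmpxchg_double_4(p1, p2, o1, o2, n1, n2) :         \
           __cmpxchg_double_8(p1, p2, o1, o2, n1, n2))

  /* after: sizeof(long) is always 8 now, so only the 8 byte
   * variant is left */
  #define cmpxchg_double(p1, p2, o1, o2, n1, n2)                \
          __cmpxchg_double_8(p1, p2, o1, o2, n1, n2)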
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The kernel build for s390 fails with gcc compilers of version 3.x.
Set the minimum required version of gcc to version 4.3.
As the atomic builtins are available with all gcc 4.x compilers,
use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap
functions to replace the complex macro and inline assembler magic
in include/asm/cmpxchg.h. The compiler can just do it and generates
better code with the builtins.
While we are at it use __sync_bool_compare_and_swap for the
_raw_compare_and_swap function in the spinlock code as well.
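As a sketch of what the builtin based variants boil down to (the real
macros in include/asm/cmpxchg.h additionally handle the different
operand sizes):

  #define cmpxchg(ptr, o, n)                                    \
  ({                                                            \
          __typeof__(*(ptr)) __o = (o);                         \
          __typeof__(*(ptr)) __n = (n);                         \
          (__typeof__(*(ptr)))                                  \
                  __sync_val_compare_and_swap((ptr), __o, __n); \
  })

  /* spinlock code: returns nonzero if the swap was performed */
  static inline int _raw_compare_and_swap(unsigned int *lock,
                                          unsigned int old, unsigned int new)
  {
          return __sync_bool_compare_and_swap(lock, old, new);
  }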
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For the 1 and 2 byte variants of xchg and cmpxchg, the old and new
values get or'ed into the larger 4 byte value read from memory before
the compare and swap instruction gets executed. This is done without
applying the proper byte mask to the values first.
If the caller passed in negative old or new values, these got sign
extended by the caller, which in turn means that either the old value
never matches or, even worse, unrelated bytes in memory get changed.
Luckily there don't seem to be any callers around yet, since using
cmpxchg on such operands would have resulted in the specification
exception fixed by an earlier patch.
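A sketch of the fixed 1 byte case in plain C (illustrative only; the
kernel code does the same with an inline assembly compare and swap
loop):

  static inline unsigned long __cmpxchg_u8(volatile unsigned char *ptr,
                                           unsigned long old, unsigned long new)
  {
          /* operate on the containing, 4 byte aligned word */
          volatile unsigned int *word =
                  (volatile unsigned int *)((unsigned long)ptr & ~3UL);
          int shift = (3 - ((unsigned long)ptr & 3)) * 8; /* big endian */
          unsigned int mask = 0xffU << shift;
          /* the & 0xff below is the missing byte mask: without it a
           * sign extended value would clobber neighbouring bytes or
           * make the compare never succeed */
          unsigned int o = ((unsigned int)old & 0xff) << shift;
          unsigned int n = ((unsigned int)new & 0xff) << shift;
          unsigned int cur, prev;

          cur = *word;
          for (;;) {
                  prev = __sync_val_compare_and_swap(word,
                                  (cur & ~mask) | o, (cur & ~mask) | n);
                  if (prev == ((cur & ~mask) | o))
                          return old & 0xff;              /* swapped */
                  if ((prev & mask) != o)
                          return (prev & mask) >> shift;  /* mismatch */
                  cur = prev;     /* unrelated bytes changed, retry */
          }
  }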
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When accessing a 1 or 2 byte memory operand we cannot use the
passed address since the compare and swap instruction only works
for 4 byte aligned memory operands.
Hence we calculate an aligned address so that compare and swap works
correctly. However, we don't pass the calculated address to the inline
assembly. This results in incorrect memory accesses and in a
specification exception if used on memory operands that are not 4 byte
aligned.
Since this apparently hasn't happened until now, there don't seem to be
many users of cmpxchg on unaligned addresses.
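The essential part of the fix, sketched for the 2 byte case with an
illustrative helper name (old_w/new_w are assumed to already contain
the merged 4 byte values):

  static inline unsigned int cas_word_of(volatile unsigned short *ptr,
                                         unsigned int old_w, unsigned int new_w)
  {
          /* cs only accepts 4 byte aligned operands, so round the
           * address down to the containing word ... */
          volatile unsigned int *word =
                  (volatile unsigned int *)((unsigned long)ptr & ~3UL);

          /* ... and make sure this calculated address is what the
           * instruction operates on; handing it the original ptr is
           * exactly the bug: unaligned operands cause a
           * specification exception */
          return __sync_val_compare_and_swap(word, old_w, new_w);
  }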
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The cmpxchg macros and functions are a bit different from those on
other architectures. In particular, the macros do not store the return
value of a __cmpxchg function call in a variable before returning the
value.
This causes compile warnings that only occur on s390 like this one:
net/ipv4/af_inet.c: In function 'build_ehash_secret':
net/ipv4/af_inet.c:241:2: warning: value computed is not used [-Wunused-value]
To get rid of these warnings, use the same construct that we already
use for the xchg macro, which was introduced for the same reason.
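The construct assigns the __cmpxchg() result to a correctly typed
variable inside a statement expression, so gcc no longer warns when a
caller ignores the result. A sketch, mirroring the xchg() macro:

  #define cmpxchg(ptr, o, n)                                    \
  ({                                                            \
          __typeof__(*(ptr)) __ret;                             \
          __ret = (__typeof__(*(ptr)))                          \
                  __cmpxchg((ptr), (unsigned long)(o),          \
                            (unsigned long)(n), sizeof(*(ptr)));\
          __ret;                                                \
  })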
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
All cmpxchg functions imply a memory barrier.
cmpxchg64() did not have one for 31 bit code, so add it.
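The guarantee matters whenever a 64 bit compare and swap is used to
publish data. A minimal illustration, with a hypothetical flag/data
pair (kernel context assumed):

  #include <linux/types.h>

  static u64 shared_data;
  static u64 flag;

  static void publish(u64 val)
  {
          shared_data = val;
          /* cmpxchg64() implies a full memory barrier, so the store
           * above is visible to other CPUs before the flag flips -
           * on 31 bit kernels just as on 64 bit ones */
          (void) cmpxchg64(&flag, 0, 1);
  }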
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Implement arch specific irqsafe_cpu ops. The arch specific ops do not
disable/enable interrupts since that is an expensive operation. Instead
we disable preemption and perform a compare and swap loop.
Since preemption is disabled on the server distros (the ones we care
about), the preempt_disable()/preempt_enable() pair is a nop.
In the end this code should be faster than the generic one.
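A sketch of the pattern (the macro name is made up; the real header
generates one such op per operation and operand size):

  #define irqsafe_cpu_add_sketch(pcp, val)                      \
  do {                                                          \
          typeof(pcp) *ptr__;                                   \
          typeof(pcp) old__, new__, prev__;                     \
          preempt_disable();                                    \
          ptr__ = this_cpu_ptr(&(pcp));                         \
          prev__ = *ptr__;                                      \
          do {                                                  \
                  old__ = prev__;                               \
                  new__ = old__ + (val);                        \
                  /* cmpxchg makes the read-modify-write atomic */ \
                  prev__ = cmpxchg(ptr__, old__, new__);        \
          } while (prev__ != old__);                            \
          preempt_enable();                                     \
  } while (0)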
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We have a cmpxchg64_local() implementation, but strangely enough the
SMP capable variant cmpxchg64() is missing. So implement it.
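For 64 bit the implementation is trivial; a sketch:

  /* longs are 8 bytes here, so the regular cmpxchg() already does
   * an 8 byte compare and swap with full barrier semantics */
  #define cmpxchg64(ptr, o, n)  cmpxchg((ptr), (o), (n))

The 31 bit variant has to use the compare double and swap (cds)
instruction on a register pair instead.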
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Move the xchg() and cmpxchg() functions to their own header file, like
some other architectures have done.
With this we make sure that system.h now really looks like the place
where everything that doesn't fit anywhere else is gathered.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>