Commit Graph

13 Commits

Heiko Carstens
e4165dcbc0 s390/cmpxchg: remove dead code
With the removal of 31 bit support, a couple of defines became unused.
Remove them.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-10-14 14:32:15 +02:00
Thomas Gleixner
a22e5f579b arch: Remove __ARCH_HAVE_CMPXCHG
We removed the only user of this define in the rtmutex code. Get rid
of it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2015-05-13 10:55:42 +02:00
Heiko Carstens
26f15caaf9 s390/cmpxchg: simplify cmpxchg_double
Since sizeof(long) == 4 is always false now, simplify cmpxchg_double a bit.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-03-25 11:49:36 +01:00
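
For flavor, a hedged, self-contained illustration of the underlying technique (not the kernel code): a sizeof() comparison is a compile-time constant, so the compiler removes the untaken branch entirely.

#include <stdio.h>

/* illustration only: on 64 bit builds sizeof(long) == 4 is constant
 * false, so the first branch is discarded at build time */
static const char *wordsize_path(void)
{
	if (sizeof(long) == 4)
		return "31 bit path";	/* dead code on 64 bit builds */
	return "64 bit path";
}

int main(void)
{
	printf("%s\n", wordsize_path());
	return 0;
}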
Martin Schwidefsky
f318a1229b s390/cmpxchg: use compiler builtins
The kernel build for s390 fails for gcc compilers with version 3.x,
so set the minimum required version of gcc to version 4.3.

As the atomic builtins are available with all gcc 4.x compilers,
use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap
functions to replace the complex macro and inline assembler magic
in include/asm/cmpxchg.h. The compiler can just do it and generates
better code with the builtins.

While we are at it use __sync_bool_compare_and_swap for the
_raw_compare_and_swap function in the spinlock code as well.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-11-03 13:29:47 +01:00
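
As a rough idea of what the builtin-based variant looks like, a minimal sketch follows; the real kernel macros additionally dispatch on sizeof(*ptr), so treat this as a shape, not the actual code.

/* sketch only: gcc's __sync builtins are type-generic and imply a
 * full memory barrier, matching cmpxchg() semantics */
static inline unsigned long cas_val(unsigned long *ptr,
				    unsigned long old, unsigned long new)
{
	/* returns the value found at *ptr before the operation */
	return __sync_val_compare_and_swap(ptr, old, new);
}

static inline int cas_bool(unsigned long *ptr,
			   unsigned long old, unsigned long new)
{
	/* returns nonzero on success, the form used for the
	 * spinlock _raw_compare_and_swap() mentioned above */
	return __sync_bool_compare_and_swap(ptr, old, new);
}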
Heiko Carstens
38ea1f358b s390/32bit: fix cmpxchg64
Fix broken inline assembly constraints for cmpxchg64 on 32bit.

Fixes this crash:

specification exception: 0006 [#1] SMP
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0 #4
task: 005a16c8 ti: 00592000 task.ti: 00592000
Krnl PSW : 070ce000 8029abd6 (lockref_get+0x3e/0x9c)
...
Krnl Code: 8029abcc: a71a0001           ahi     %r1,1
           8029abd0: 1852               lr      %r5,%r2
          #8029abd2: bb40f064           cds     %r4,%r0,100(%r15)
          >8029abd6: 1943               cr      %r4,%r3
           8029abd8: 1815               lr      %r1,%r5
Call Trace:
([<0000000078e01870>] 0x78e01870)
 [<000000000021105a>] sysfs_mount+0xd2/0x1c8
 [<00000000001b551e>] mount_fs+0x3a/0x134
 [<00000000001ce768>] vfs_kern_mount+0x44/0x11c
 [<00000000001ce864>] kern_mount_data+0x24/0x3c
 [<00000000005cc4b8>] sysfs_init+0x74/0xd4
 [<00000000005cb5b4>] mnt_init+0xe0/0x1fc
 [<00000000005cb16a>] vfs_caches_init+0xb6/0x14c
 [<00000000005be794>] start_kernel+0x318/0x33c
 [<000000000010001c>] _stext+0x1c/0x80

Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-01-22 14:02:15 +01:00
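
The crashing lockref_get() above is at heart a cmpxchg64() retry loop. A hedged, caller-side sketch of that pattern (the struct is illustrative, not the kernel's lockref):

struct ref64 {
	unsigned long long val;
};

static void ref64_get(struct ref64 *r)
{
	unsigned long long old, new;

	do {
		old = r->val;
		new = old + 1;
		/* cmpxchg64() returns the previous value; retry on contention */
	} while (cmpxchg64(&r->val, old, new) != old);
}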
Heiko Carstens
b1d6b40cbd s390/cmpxchg,percpu: implement cmpxchg_double()
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-09-26 15:45:25 +02:00
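
A hedged usage sketch of cmpxchg_double(), which atomically compares and swaps two adjacent long-sized words (struct, alignment, and return convention as I understand them; the names are illustrative):

struct lpair {
	unsigned long first;
	unsigned long second;
} __attribute__((aligned(2 * sizeof(unsigned long))));

static int lpair_update(struct lpair *p,
			unsigned long o1, unsigned long o2,
			unsigned long n1, unsigned long n2)
{
	/* nonzero return: both words matched and were replaced atomically */
	return cmpxchg_double(&p->first, &p->second, o1, o2, n1, n2);
}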
Heiko Carstens
1896d256d3 s390/cmpxchg: fix sign extension bugs
For 1 and 2 byte operands for xchg and cmpxchg the old and new values
get or'ed into the larger 4 byte old value before the compare and swap
instruction gets executed. This is done without using the proper byte
mask before or'ing the values.
If the caller passed in negative old or new values, these got sign
extended by the caller, which in turn means that either the old value
never matches or, even worse, unrelated bytes would be changed in memory.

Luckily there don't seem to be any callers around yet, since that would
have resulted in the specification exception fixed in an earlier patch.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-30 09:07:58 +02:00
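
A hedged sketch of the fixed emulation (illustrative code, not the kernel's): a 1 byte cmpxchg built on a 4 byte CAS, where old and new are masked to one byte before being or'ed into the surrounding word, so a sign extended caller value can no longer corrupt the compare:

static unsigned long cmpxchg_u8_sketch(unsigned char *ptr,
				       unsigned long old, unsigned long new)
{
	unsigned int *word = (unsigned int *)((unsigned long)ptr & ~3UL);
	int shift = (3 - ((unsigned long)ptr & 3)) * 8;	/* big endian */
	unsigned int mask = 0xffU << shift;
	unsigned int cur, wold, wnew, prev;

	old &= 0xff;	/* the fix: strip sign extension */
	new &= 0xff;
	cur = *word;
	for (;;) {
		wold = (cur & ~mask) | ((unsigned int)old << shift);
		wnew = (cur & ~mask) | ((unsigned int)new << shift);
		prev = __sync_val_compare_and_swap(word, wold, wnew);
		if (prev == wold)
			return old;			/* swap succeeded */
		if ((prev ^ wold) & mask)
			return (prev >> shift) & 0xff;	/* byte mismatch */
		cur = prev;	/* neighbouring bytes changed, retry */
	}
}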
Heiko Carstens
bf3db85311 s390/cmpxchg: fix 1 and 2 byte memory accesses
When accessing a 1 or 2 byte memory operand we cannot use the
passed address since the compare and swap instruction only works
for 4 byte aligned memory operands.
Hence we calculate an aligned address so that compare and swap works
correctly. However, we don't pass the calculated address to the inline
assembly. This results in incorrect memory accesses and in a
specification exception if used on non 4 byte aligned memory operands.

Since this didn't happen until now, there don't seem to be
too many users of cmpxchg on unaligned addresses.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-30 09:07:57 +02:00
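
The essential detail, as a small hedged fragment (names illustrative): derive the 4 byte aligned word and the byte's position within it, and hand that aligned address, not the caller's pointer, to the compare and swap:

static inline unsigned int *aligned_word(void *ptr, int *shift)
{
	/* byte position inside the aligned word, big endian as on s390 */
	*shift = (3 - ((unsigned long)ptr & 3)) * 8;
	return (unsigned int *)((unsigned long)ptr & ~3UL);
}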
Heiko Carstens
6b894a409e s390/cmpxchg: fix compile warnings specific to s390
The cmpxchg macros and functions are a bit different than on other
architectures. In particular the macros do not store the return
value of a __cmpxchg function call in a variable before returning the
value.

This causes compile warnings that only occur on s390 like this one:

net/ipv4/af_inet.c: In function 'build_ehash_secret':
net/ipv4/af_inet.c:241:2: warning: value computed is not used [-Wunused-value]

To get rid of these warnings use the same construct that we already use
for the xchg macro, which was introduced for the same reason.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-30 09:07:56 +02:00
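
A hedged sketch of that construct, a GNU statement expression whose value is a typed temporary; callers that discard the result no longer trigger -Wunused-value (helper and macro names are made up):

static inline int cas_int(int *ptr, int old, int new)
{
	return __sync_val_compare_and_swap(ptr, old, new);
}

#define cmpxchg_int_sketch(ptr, o, n)				\
({								\
	int __ret = cas_int((ptr), (o), (n));			\
	__ret;	/* value of the whole expression */		\
})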
Heiko Carstens
0c44ca71f5 s390/cmpxchg: add missing memory barrier to cmpxchg64
All cmpxchg functions imply a memory barrier.
cmpxchg64 did not have one for 31 bit code, so add it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2012-05-30 09:07:55 +02:00
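
On s390 the compare and swap instruction itself serializes execution; what the added barrier buys on top is a compiler barrier. A hedged sketch using the 32 bit CS instruction (the 31 bit cmpxchg64 uses CDS with a 64 bit register pair, omitted here for brevity):

static inline unsigned int cs_sketch(unsigned int *ptr,
				     unsigned int old, unsigned int new)
{
	asm volatile(
		"	cs	%0,%2,%1"
		: "+d" (old), "+Q" (*ptr)
		: "d" (new)
		: "memory", "cc");	/* "memory" is the compiler barrier */
	return old;	/* previous contents of *ptr */
}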
Heiko Carstens
4c2241fd42 [S390] percpu: implement arch specific irqsafe_cpu_ops
Implement arch specific irqsafe_cpu ops. The arch specific ops do not
disable/enable interrupts since that is an expensive operation. Instead
we disable preemption and perform a compare and swap loop.
Since on server distros (the ones we care about) preemption is disabled,
the preempt_disable()/preempt_enable() pair is a nop.
In the end this code should be faster than the generic one.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2011-05-23 10:24:29 +02:00
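
A hedged sketch of the described scheme, an irqsafe per-cpu add built from preempt_disable() and a compare and swap loop (the macro name is made up; __get_cpu_var() and cmpxchg() as in kernels of that era):

#define irqsafe_cpu_add_sketch(pcp, val)				\
do {									\
	typeof(pcp) *ptr__, old__, new__;				\
	preempt_disable();	/* a nop on non-preemptible kernels */	\
	ptr__ = &__get_cpu_var(pcp);					\
	do {								\
		old__ = *ptr__;						\
		new__ = old__ + (val);					\
	} while (cmpxchg(ptr__, old__, new__) != old__);		\
	preempt_enable();						\
} while (0)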
Heiko Carstens
54eaae3028 [S390] cmpxchg: implement cmpxchg64()
We have a cmpxchg64_local() implementation, but strangely enough the
SMP capable variant cmpxchg64() is missing. So implement it.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2011-03-23 10:16:00 +01:00
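
On 64 bit this reduces to the existing full-width cmpxchg(); a hedged one-line sketch (the 31 bit side needs a CDS based implementation and is not shown):

#ifdef CONFIG_64BIT
#define cmpxchg64_sketch(ptr, o, n)	cmpxchg((ptr), (o), (n))
#endif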
Heiko Carstens
a2c9dbe8db [S390] xchg/cmpxchg: move to own header file
Move the xchg() and cmpxchg() functions to their own header file, like
some other architectures have done.
With this we make sure that system.h now really looks like a place
where everything that doesn't fit anywhere else is gathered.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2011-03-23 10:15:59 +01:00