The system call table generation script must be run to generate the
unistd_32/64.h and syscall_table_32/64/c32.h files. This patch adds the
changes which invoke the script.
The unistd_32/64.h and syscall_table_32/64/c32.h files are now generated
by the syscall table generation script invoked from parisc/Makefile, and
the generated files must be identical to the removed ones.
The generated uapi header is included by uapi/asm/unistd.h, and the
generated system call table header is included by kernel/syscall.S.
Signed-off-by: Firoz Khan <firoz.khan@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
This reverts commit d27dfa13b9.
Unfortunately, this patch needs to be reverted. We need the full sync
barrier and not the limited barrier provided by using an ordered store.
The sync ensures that all accesses and cache purge instructions that
follow the sync are performed after all such instructions prior to the sync
instruction have completed executing.
The patch breaks the rwlock implementation in glibc. This caused the
test-lock application in the libprelude testsuite to hang. With the
change reverted, the test runs correctly and the libprelude package
builds successfully.
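As a rough C-level illustration of the two orderings being contrasted here
(a hedged sketch using GCC builtins, not the parisc assembler touched by
this patch; the function names are made up):

  /* Full barrier followed by a plain store - the behaviour this revert
   * restores - versus a release-ordered store, the behaviour reverted. */
  static inline void unlock_with_full_barrier(volatile unsigned int *lock)
  {
          __sync_synchronize();   /* full barrier, analogous to "sync" */
          *lock = 1;              /* plain store releases the lock */
  }

  static inline void unlock_with_release_store(volatile unsigned int *lock)
  {
          /* release ordering only; weaker than the sequence above */
          __atomic_store_n(lock, 1, __ATOMIC_RELEASE);
  }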
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Pull XArray conversion from Matthew Wilcox:
"The XArray provides an improved interface to the radix tree data
structure, providing locking as part of the API, specifying GFP flags
at allocation time, eliminating preloading, less re-walking the tree,
more efficient iterations and not exposing RCU-protected pointers to
its users.
This patch set
1. Introduces the XArray implementation
2. Converts the pagecache to use it
3. Converts memremap to use it
The page cache is the most complex and important user of the radix
tree, so converting it was most important. Converting the memremap
code removes the only other user of the multiorder code, which allows
us to remove the radix tree code that supported it.
I have 40+ followup patches to convert many other users of the radix
tree over to the XArray, but I'd like to get this part in first. The
other conversions haven't been in linux-next and aren't suitable for
applying yet, but you can see them in the xarray-conv branch if you're
interested"
* 'xarray' of git://git.infradead.org/users/willy/linux-dax: (90 commits)
radix tree: Remove multiorder support
radix tree test: Convert multiorder tests to XArray
radix tree tests: Convert item_delete_rcu to XArray
radix tree tests: Convert item_kill_tree to XArray
radix tree tests: Move item_insert_order
radix tree test suite: Remove multiorder benchmarking
radix tree test suite: Remove __item_insert
memremap: Convert to XArray
xarray: Add range store functionality
xarray: Move multiorder_check to in-kernel tests
xarray: Move multiorder_shrink to kernel tests
xarray: Move multiorder account test in-kernel
radix tree test suite: Convert iteration test to XArray
radix tree test suite: Convert tag_tagged_items to XArray
radix tree: Remove radix_tree_clear_tags
radix tree: Remove radix_tree_maybe_preload_order
radix tree: Remove split/join code
radix tree: Remove radix_tree_update_node_t
page cache: Finish XArray conversion
dax: Convert page fault handlers to XArray
...
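For readers unfamiliar with the XArray API summarized in the message above,
here is a small, hedged usage sketch (illustrative only, not part of this
merge; the array name is made up):

  #include <linux/xarray.h>

  static DEFINE_XARRAY(example_array);

  static int example_store(unsigned long index, void *item)
  {
          /* xa_store() takes the GFP flags directly and does its own
           * locking, so no preload/lock dance is needed around it. */
          return xa_err(xa_store(&example_array, index, item, GFP_KERNEL));
  }

  static void *example_load(unsigned long index)
  {
          /* xa_load() hands back the entry itself rather than exposing
           * RCU-protected internal pointers to the caller. */
          return xa_load(&example_array, index);
  }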
This patch updates the spin unlock code to use an ordered store with
release semantics. All prior accesses are guaranteed to be performed
before an ordered store is performed.
Using an ordered store is significantly faster than using the sync
memory barrier.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
We do support running 64-bit userspace processes, although there isn't
yet full gcc and glibc support. Anyway, fix the comments to reflect the
reality.
Signed-off-by: Helge Deller <deller@gmx.de>
Now that we use a sync prior to releasing the locks in syscall.S, we don't need
the PA 2.0 ordered stores used to release some locks. Using an ordered store
potentially slows the release and subsequent code.
There are a number of other ordered stores and loads that serve no purpose. I
have converted these to normal stores.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # 4.0+
Signed-off-by: Helge Deller <deller@gmx.de>
For years I thought all parisc machines executed loads and stores in
order. However, Jeff Law recently indicated on gcc-patches that this is
not correct. There are various degrees of out-of-order execution all the
way back to the PA7xxx processor series (hit-under-miss). The PA8xxx
series has full out-of-order execution for both integer operations, and
loads and stores.
This is described in the following article:
http://web.archive.org/web/20040214092531/http://www.cpus.hp.com/technical_references/advperf.shtml
For this reason, we need to define mb() and to insert a memory barrier
before the store unlocking spinlocks. This ensures that all memory
accesses are complete prior to unlocking. The ldcw instruction performs
the same function on entry.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # 4.0+
Signed-off-by: Helge Deller <deller@gmx.de>
As noted by Christoph Biedl, passing a pointer size of 4 in the new CAS
implementation causes a kernel crash. The attached patch corrects the
off-by-one error in the argument validity check.
In reviewing the code, I noticed that we only perform word operations
with the pointer size argument. The subi instruction intentionally uses
a word condition on 64-bit kernels. Nullification was used instead of a
cmpib instruction as the branch should never be taken. The shlw
pseudo-operation generates a depw,z instruction and it clears the target
before doing a shift left word deposit. Thus, we don't need to clip the
upper 32 bits of this argument on 64-bit kernels.
Tested with a gcc testsuite run with a 64-bit kernel. The gcc atomic
code in libgcc is the only direct user of the new CAS implementation
that I am aware of.
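To illustrate that userspace side, the following hedged sketch shows the
kind of GCC atomic builtin whose libgcc fallback ends up using the LWS CAS
on parisc (an example caller only, not the kernel implementation):

  #include <stdint.h>
  #include <stdbool.h>

  static bool try_swap64(uint64_t *p, uint64_t expected, uint64_t desired)
  {
          /* On parisc this builtin is lowered to a libgcc helper, which
           * in turn invokes the kernel's LWS CAS. */
          return __atomic_compare_exchange_n(p, &expected, desired,
                                             false /* strong */,
                                             __ATOMIC_SEQ_CST,
                                             __ATOMIC_SEQ_CST);
  }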
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
As discussed on the debian-hppa list, double-word compare and exchange
operations fail on 32-bit kernels. Looking at the code, I realized that
the ",ma" completer does the wrong thing in the "ldw,ma 4(%r26), %r29"
instruction. This increments %r26 and causes the following store to
write to the wrong location.
Note by Helge Deller:
The patch applies cleanly to stable kernel series if this upstream
commit is merged in advance:
f4125cfdb3 ("parisc: Avoid trashing sr2 and sr3 in LWS code").
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Tested-by: Christoph Biedl <debian.axhn@manchmal.in-ulm.de>
Fixes: 8920649120 ("parisc: Implement new LWS CAS supporting 64 bit operations.")
Cc: stable@vger.kernel.org # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
We have one critical section in the syscall entry path in which we switch from
the userspace stack to kernel stack. In the event of an external interrupt, the
interrupt code distinguishes between those two states by analyzing the value of
sr7. If sr7 is zero, it uses the kernel stack. Therefore it's important, that
the value of sr7 is in sync with the currently enabled stack.
This patch now disables interrupts while executing the critical section. This
prevents the interrupt handler from seeing an inconsistent state which in
the worst case can lead to crashes.
Interestingly, in the syscall exit path interrupts were already disabled in the
critical section which switches back to the userspace stack.
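As a hedged C-level analogue of the change (the real critical section is
assembler in syscall.S and manipulates the PSW I-bit directly), the shape
is simply:

  #include <linux/irqflags.h>

  static void sketch_switch_to_kernel_stack(void)
  {
          unsigned long flags;

          local_irq_save(flags);
          /* ... switch from the userspace stack to the kernel stack,
           *     keeping sr7 in sync with whichever stack is live ... */
          local_irq_restore(flags);
  }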
Cc: <stable@vger.kernel.org>
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
There is no need to trash sr2 and sr3 in the Light-weight syscall (LWS). sr2
already points to kernel space (it's zero in userspace, otherwise syscalls
wouldn't work), and since the LWS code is executed in userspace, we can simply
skip preloading sr3.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
This patch adds support for the TIF_SYSCALL_TRACEPOINT on the parisc
architecture. Basically, it calls the appropriate tracepoints on syscall
entry and exit.
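As a hedged sketch of the usual entry-side pattern (simplified and with a
made-up function name, not a copy of this patch; the exit side calls
trace_sys_exit() in the same way):

  #include <linux/sched.h>
  #include <trace/events/syscalls.h>

  static void sketch_trace_enter(struct pt_regs *regs, long syscall_nr)
  {
          /* Only fire the tracepoint when the task is flagged for it. */
          if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
                  trace_sys_enter(regs, syscall_nr);
  }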
Signed-off-by: Helge Deller <deller@gmx.de>
Do not load one entry beyond the end of the syscall table when the
syscall number of a traced process equals __NR_Linux_syscalls.
A similar bug with regular processes was fixed by commit 3bb457af4f
("[PARISC] Fix bug when syscall nr is __NR_Linux_syscalls").
This bug was found by strace test suite.
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
The seccomp filter support requires careful handling of task registers. This
includes reloading of the return value (%r28) and proper syscall exit if
secure_computing() returned -1.
Additionally we need to sign-extend the syscall number from signed 32bit to
signed 64bit in do_syscall_trace_enter() since the ptrace interface only allows
storing 32bit values in compat mode.
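A minimal sketch of that widening (illustrative only, not the patch
itself):

  /* gr[20] holds the syscall number; a tracer running in compat mode can
   * only have written the low 32 bits, so widen it as a signed quantity
   * (a stored -1 must stay -1, not become 0xffffffff). */
  static long sketch_widen_syscall_nr(struct pt_regs *regs)
  {
          return (long)(int)regs->gr[20];
  }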
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org # v4.5
Mike Frysinger reported that his ptrace testcase showed strange
behaviour on parisc: It was not possible to avoid a syscall and the
return value of a syscall couldn't be changed.
When the tracer modified the syscall number, we failed to save the new
syscall number to gr20, from where it is picked up again later in assembly.
The effect that the return value couldn't be changed is a side-effect of
another bug in the assembly code. When a process is ptraced, userspace
expects to be notified on both entry and exit of each syscall. If a
syscall number was given which doesn't exist, we jumped to the normal
syscall exit code instead of informing userspace that the (non-existent)
syscall exits. This unexpected behaviour confuses userspace and thus the
bug was misinterpreted as if we can't change the return value.
This patch fixes both problems and was tested on 64bit kernel with
32bit userspace.
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: stable@vger.kernel.org # v4.0+
Tested-by: Mike Frysinger <vapier@gentoo.org>
Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
to reach the do_syscall_trace_exit function from the gateway page.
A huge page enabled kernel may need the additional branch distance bits.
Signed-off-by: Helge Deller <deller@gmx.de>
The attached change fixes the condition used in the "sub" instruction.
A double word comparison is needed. This fixes the 64-bit LWS CAS
operation on 64-bit kernels.
I can now enable 64-bit atomic support in GCC.
Cc: <stable@vger.kernel.org>
Signed-off-by: John David Anglin <dave.anglin>
Signed-off-by: Helge Deller <deller@gmx.de>
The current LWS CAS only works correctly for 32bit. The new LWS allows
for CAS operations of variable size.
Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
Cc: <stable@vger.kernel.org> # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
The attached change significantly improves the performance of the LWS-CAS code
in syscall.S.
This allows a number of packages to build (e.g., zeromq3, gtest and libxs)
that previously failed because of slow LWS-CAS performance under contention. In
particular, interrupts taken while the lock was taken degraded performance
significantly.
The change does the following:
1) Disables interrupts around the CAS operation, and
2) Changes the loads and stores to use the ordered completer, "o", on
PA 2.0. "o" and "ma" with a zero offset are equivalent. The latter is
accepted on both PA 1.X and 2.0.
The use of ordered loads and stores probably makes no difference on all
existing hardware, but it seemed pedantically correct. In particular, the CAS
operation must complete before LDCW lock is released. As written before, a
processor could reorder the operations.
I don't believe the period during which interrupts are disabled is long enough to
significantly increase interrupt latency. For example, the TLB insert code is
longer. Worst case is a memory fault in the CAS operation.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # 3.13+
Signed-off-by: Helge Deller <deller@gmx.de>
Provide a macro ASM_EXCEPTIONTABLE_ENTRY() to create exception table
entries and convert all open-coded places to use that macro.
This patch is a first step toward creating an exception table which only
holds 32bit pointers even on a 64bit kernel. That way in my own kernel
I was able to reduce the in-kernel exception table from 44kB to 22kB.
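For illustration, a macro of this kind boils down to emitting one
(fault address, fixup address) pair into the __ex_table section per
faulting instruction. A hedged sketch of the shape, with a made-up name
and not the exact parisc definition (which, as noted above, still stores
full-width pointers at this point):

  #define SKETCH_EXCEPTIONTABLE_ENTRY(fault_addr, except_addr)   \
          ".section __ex_table,\"aw\"\n"                         \
          ".word " #fault_addr ", " #except_addr "\n"            \
          ".previous\n"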
Signed-off-by: Helge Deller <deller@gmx.de>
Include some documentation about how the parisc gateway page technically
works and how it is used from userspace.
James Bottomley is the original author of this description and it was
copied here out of an email thread from Apr 12 2013 titled:
man2 : syscall.2 : document syscall calling conventions
CC: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Helge Deller <deller@gmx.de>
This patch partly fixes PAGE_SIZEs of 16K or 64K by adjusting the
assembler PTE lookup code and the assembler TEMPALIAS code. Furthermore
some data alignments for PAGE_SIZE have been limited to 4K (or less) to
not waste too much memory with greater page sizes. As a side note, the
palo loader can (currently) only handle up to 10 ELF segments, which is
also addressed by the tighter alignment.
My testing indicated that the ldci command in the sba iommu code
needed adjustment by the PAGE_SHIFT value and that the I/O PDIR Page
size was only set to 4K for my machine (C3000).
All this partly fixes the boot, but there are still quite some caching
problems left. Examples are e.g. the symbios logic driver which is
failing:
sym0: <896> rev 0x7 at pci 0000:00:0f.0 irq 69
sym0: PA-RISC Firmware, ID 7, Fast-40, SE, parity checking
CACHE TEST FAILED: DMA error (dstat=0x81).sym0: CACHE INCORRECTLY CONFIGURED.
and the tulip network driver which doesn't seem to work correctly
either:
Sending BOOTP requests .net eth0: Setting full-duplex based on MII#1
link partner capability of 05e1
..... timed out!
Beside those kernel fixes glibc will need fixes too to be able to handle
>4K page sizes.
Signed-off-by: Helge Deller <deller@gmx.de>
1) PTRACE_SYSCALL doesn't work for 64bit process on parisc64.
Compat syscall table is used instead of 64bit one. IMO we should either
refuse to allow PTRACE_SYSCALL for 64bit processes or duplicate the
logic for choosing the right syscall table into .Ltracesys.
2) if you have let the tracee run with PTRACE_SYSCALL and
it had stopped, you can use PTRACE_POKEUSR to modify syscall number
(r20) and arguments 1--4 (r26--r23). Modifications will have effect.
However, modifying arguments 5 and 6 (r22 and r21, respectively) works
only when a 32bit process runs on a 64bit host - on a 32bit host it has
no effect. AFAICS, the diff below should fix that one.
Signed-off-by: Al Viro <viro@ZenIV.linux.org.uk>
Tested-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Don't bother restoring r28 on syscall restarts; it's clobbered by
syscall anyway. Reuse (now unused) ->orig_r28 as "no restarts allowed"
flag.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
1) Gate immediately and save a branch.
2) Fix off-by-one error in checking entry number.
3) Use sr7 instead of sr3 in error return path as sr3 might not
contain the correct value.
4) Enable locking on UP systems to prevent incorrect operation of
the cas_action critical region on page faults.
Tested on several systems, including UP c3750 with 2.6.33.2 kernel.
Signed-off-by: John David Anglin <dave.anglin@nrc-cnrc.gc.ca>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Instead of fiddling with gr[20], restructure the code to return whether
or not -ENOSYS should be returned. (Also do a bit of fiddling to let them take
pt_regs directly instead of re-computing it.)
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
This makes parisc use the standard tracehook_report_syscall_entry
and tracehook_report_syscall_exit hooks in <linux/tracehook.h>.
To do this, we need to access current->thread.regs, and to know
whether we're entering or exiting the syscall, so add this to
syscall_trace.
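A hedged sketch of the resulting shape (simplified, with a made-up name;
the real function and its file layout may differ):

  #include <linux/tracehook.h>

  static long sketch_syscall_trace(struct pt_regs *regs, int entry)
  {
          /* A non-zero return at entry means the tracer wants the
           * syscall aborted. */
          if (entry)
                  return tracehook_report_syscall_entry(regs);

          tracehook_report_syscall_exit(regs, 0);
          return 0;
  }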
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Document the LWS ABI including implementation notes for
userspace, and comment cleanup.
Remove extraneous .align 16 after lws_lock_start.
Signed-off-by: Carlos O'Donell <carlos@systemhalted.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
This trivial patch fixes the following section warnings on PARISC:
> WARNING: vmlinux.o (.text.1): unexpected section name.
> The (.[number]+) following section name are ld generated and not expected.
> Did you forget to use "ax"/"aw" in a .S file?
> Note that for example <linux/init.h> contains
> section definitions for use in .S files.
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
We have the macro _AC() generally available now
so the calculation of PAGE_SIZE can be made
assembler compatible.
Introduce use of _AC() and kill all users of
ASM_PAGE_SIZE.
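For reference, the _AC() idiom looks roughly like this (the shift value
below is only an example, not the parisc one):

  #include <linux/const.h>

  /* _AC(X, Y) pastes the suffix Y onto X in C, but expands to plain X
   * when __ASSEMBLY__ is defined, so one definition serves both .c and
   * .S files. */
  #define EXAMPLE_PAGE_SHIFT      12
  #define EXAMPLE_PAGE_SIZE       (_AC(1, UL) << EXAMPLE_PAGE_SHIFT)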
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
The bug was that we were checking that __NR_syscalls is greater than or
equal to the syscall number stored in %r20. __NR_syscalls is one greater than
the last syscall though, so we're loading one entry beyond the end of the
syscall table, and trying to jump to it.
Fix this by only checking that we're strictly greater; alternatively, we
could have compared against (__NR_Linux_syscalls - 1).
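In C terms the check being fixed is simply the following (hedged sketch;
the real test is in assembler):

  /* The table has __NR_Linux_syscalls entries, so valid indices are
   * 0 .. __NR_Linux_syscalls - 1 and equality must be rejected too. */
  static int sketch_syscall_nr_is_valid(unsigned long nr)
  {
          return nr < __NR_Linux_syscalls;  /* not nr <= __NR_Linux_syscalls */
  }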
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Amazingly, parisc was the only arch affected by this...
Convert register-sized loads/stores to always be 32-bit for these fields.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
The LWS debugging code on parisc is wrongly enabled due to a bug in the
use of the preprocessor directives. This debugging code is not thread
safe and causes problems with a recent glibc on SMP kernels.
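One common way such a preprocessor mix-up happens is sketched below; this
illustrates the class of bug, not necessarily the exact directives in
syscall.S:

  /* A debug switch defined to 0 is still "defined", so #ifdef enables
   * the debug code even though its value says it should be off. */
  #define ENABLE_LWS_DEBUG 0

  #ifdef ENABLE_LWS_DEBUG         /* wrong: true even though the value is 0 */
  /* ... debug code would be compiled in here ... */
  #endif

  #if ENABLE_LWS_DEBUG            /* right: tests the value itself */
  /* ... debug code stays out when the value is 0 ... */
  #endif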
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
- this macro unifies the code to add exception table entries
- additionally use ENTRY()/ENDPROC() in more places
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
ldcw,co should always be used on pa2.0, otherwise the strict cache
width alignment requirement is not relaxed.
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
More work towards supporting multiple page sizes on 64-bit. Convert
some assumptions that 64bit uses 3 level page tables into testing
PT_NLEVELS. Also some BUG() to BUG_ON() conversions and some cleanups
to assembler.
Signed-off-by: Helge Deller <deller@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Add the parisc version of the "mark rodata section read only" patches.
Based on code from and Signed-off-by Arjan van de Ven
<arjan@infradead.org>, Ingo Molnar <mingo@elte.hu>, Andi Kleen <ak@muc.de>,
Andrew Morton <akpm@osdl.org>, Linus Torvalds <torvalds@osdl.org>.
Signed-off-by: Helge Deller <deller@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
Document clobbers and args in entry.S and syscall.S.
entry.S: Add comment to indicate that cr27 may recycle and EDEADLOCK
detection is not 100% correct. Since this is only enabled when using
ENABLE_LWS_DEBUG, the user is warned by the comment.
Signed-off-by: Carlos O'Donell <carlos@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>