linux_dsm_epyc7002/arch/sparc/include/asm/ttable.h

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _SPARC64_TTABLE_H
#define _SPARC64_TTABLE_H
#include <asm/utrap.h>
#include <asm/pil.h>
#ifdef __ASSEMBLY__
#include <asm/thread_info.h>
#endif
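/* Each normal trap table entry is eight instructions (32 bytes);
 * window spill/fill traps get four consecutive slots (32
 * instructions).  The macros below each expand to exactly one such
 * entry, padded out with nops where needed.
 */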
#define BOOT_KERNEL b sparc64_boot; nop; nop; nop; nop; nop; nop; nop;
/* We need a "cleaned" instruction... */
#define CLEAN_WINDOW \
rdpr %cleanwin, %l0; add %l0, 1, %l0; \
wrpr %l0, 0x0, %cleanwin; \
clr %o0; clr %o1; clr %o2; clr %o3; \
clr %o4; clr %o5; clr %o6; clr %o7; \
clr %l0; clr %l1; clr %l2; clr %l3; \
clr %l4; clr %l5; clr %l6; clr %l7; \
retry; \
nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;
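/* Generic trap entry: point %g7 at the return location, enter the
 * kernel via etrap (which builds a pt_regs frame on the kernel
 * stack), call the C handler with the pt_regs pointer in %o0, and
 * return through rtrap.
 */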
#define TRAP(routine) \
sethi %hi(109f), %g7; \
ba,pt %xcc, etrap; \
109: or %g7, %lo(109b), %g7; \
call routine; \
add %sp, PTREGS_OFF, %o0; \
ba,pt %xcc, rtrap; \
nop; \
nop;
#define TRAP_7INSNS(routine) \
sethi %hi(109f), %g7; \
ba,pt %xcc, etrap; \
109: or %g7, %lo(109b), %g7; \
call routine; \
add %sp, PTREGS_OFF, %o0; \
ba,pt %xcc, rtrap; \
nop;
#define TRAP_SAVEFPU(routine) \
sethi %hi(109f), %g7; \
ba,pt %xcc, do_fptrap; \
109: or %g7, %lo(109b), %g7; \
call routine; \
add %sp, PTREGS_OFF, %o0; \
ba,pt %xcc, rtrap; \
nop; \
nop;
#define TRAP_NOSAVE(routine) \
ba,pt %xcc, routine; \
nop; \
nop; nop; nop; nop; nop; nop;
#define TRAP_NOSAVE_7INSNS(routine) \
ba,pt %xcc, routine; \
nop; \
nop; nop; nop; nop; nop;
#define TRAPTL1(routine) \
sethi %hi(109f), %g7; \
ba,pt %xcc, etraptl1; \
109: or %g7, %lo(109b), %g7; \
call routine; \
add %sp, PTREGS_OFF, %o0; \
ba,pt %xcc, rtrap; \
nop; \
nop;
#define TRAP_ARG(routine, arg) \
sethi %hi(109f), %g7; \
ba,pt %xcc, etrap; \
109: or %g7, %lo(109b), %g7; \
add %sp, PTREGS_OFF, %o0; \
call routine; \
mov arg, %o1; \
ba,pt %xcc, rtrap; \
nop;
#define TRAPTL1_ARG(routine, arg) \
sethi %hi(109f), %g7; \
ba,pt %xcc, etraptl1; \
109: or %g7, %lo(109b), %g7; \
add %sp, PTREGS_OFF, %o0; \
call routine; \
mov arg, %o1; \
ba,pt %xcc, rtrap; \
nop;
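/* System call entry: save the current %pil and TSTATE_SYSCALL in
 * %g2/%g3 for etrap_syscall, load the address of the system call
 * table into %l7, and branch to the syscall dispatcher.
 */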
#define SYSCALL_TRAP(routine, systbl) \
rdpr %pil, %g2; \
mov TSTATE_SYSCALL, %g3; \
sethi %hi(109f), %g7; \
ba,pt %xcc, etrap_syscall; \
109: or %g7, %lo(109b), %g7; \
sethi %hi(systbl), %l7; \
ba,pt %xcc, routine; \
or %l7, %lo(systbl), %l7;
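/* User trap (utrap) dispatch: pass the registered handler in %g3
 * and the trap level in %g4 to utrap_trap.
 */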
#define TRAP_UTRAP(handler,lvl) \
mov handler, %g3; \
ba,pt %xcc, utrap_trap; \
mov lvl, %g4; \
nop; \
nop; \
nop; \
nop; \
nop;
#ifdef CONFIG_COMPAT
#define LINUX_32BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall32, sys_call_table32)
#else
#define LINUX_32BIT_SYSCALL_TRAP BTRAP(0x110)
#endif
#define LINUX_64BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall, sys_call_table64)
#define GETCC_TRAP TRAP(getcc)
#define SETCC_TRAP TRAP(setcc)
#define BREAKPOINT_TRAP TRAP(breakpoint_trap)
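/* Interrupt traps raise %pil to PIL_NORMAL_MAX before entering
 * etrap_irq.  With CONFIG_TRACE_IRQFLAGS the real work lives in
 * .subsection 2 so that trace_hardirqs_off() can be called before
 * the handler runs.
 */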
#ifdef CONFIG_TRACE_IRQFLAGS
#define TRAP_IRQ(routine, level) \
rdpr %pil, %g2; \
wrpr %g0, PIL_NORMAL_MAX, %pil; \
sethi %hi(1f-4), %g7; \
ba,pt %xcc, etrap_irq; \
or %g7, %lo(1f-4), %g7; \
nop; \
nop; \
nop; \
.subsection 2; \
1: call trace_hardirqs_off; \
nop; \
mov level, %o0; \
call routine; \
add %sp, PTREGS_OFF, %o1; \
ba,a,pt %xcc, rtrap_irq; \
.previous;
#else
#define TRAP_IRQ(routine, level) \
rdpr %pil, %g2; \
wrpr %g0, PIL_NORMAL_MAX, %pil; \
ba,pt %xcc, etrap_irq; \
rd %pc, %g7; \
mov level, %o0; \
call routine; \
add %sp, PTREGS_OFF, %o1; \
ba,a,pt %xcc, rtrap_irq;
#endif
#define TRAP_NMI_IRQ(routine, level) \
rdpr %pil, %g2; \
wrpr %g0, PIL_NMI, %pil; \
ba,pt %xcc, etrap_irq; \
rd %pc, %g7; \
mov level, %o0; \
call routine; \
add %sp, PTREGS_OFF, %o1; \
ba,a,pt %xcc, rtrap_nmi;
#define TRAP_IVEC TRAP_NOSAVE(do_ivec)
#define BTRAP(lvl) TRAP_ARG(bad_trap, lvl)
#define BTRAPTL1(lvl) TRAPTL1_ARG(bad_trap_tl1, lvl)
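/* Window flush trap: flush all register windows to the stack, then
 * step %tpc/%tnpc past the trapping instruction before returning.
 */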
#define FLUSH_WINDOW_TRAP \
ba,pt %xcc, etrap; \
rd %pc, %g7; \
flushw; \
ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %l1; \
add %l1, 4, %l2; \
stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]; \
ba,pt %xcc, rtrap; \
stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC];
#ifdef CONFIG_KPROBES
#define KPROBES_TRAP(lvl) TRAP_IRQ(kprobe_trap, lvl)
#else
#define KPROBES_TRAP(lvl) TRAP_ARG(bad_trap, lvl)
#endif
#ifdef CONFIG_UPROBES
#define UPROBES_TRAP(lvl) TRAP_ARG(uprobe_trap, lvl)
#else
#define UPROBES_TRAP(lvl) TRAP_ARG(bad_trap, lvl)
#endif
#ifdef CONFIG_KGDB
#define KGDB_TRAP(lvl) TRAP_IRQ(kgdb_trap, lvl)
#else
#define KGDB_TRAP(lvl) TRAP_ARG(bad_trap, lvl)
#endif
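/* sun4v TSB miss handlers: read the hypervisor fault status area
 * (its address is held in the scratchpad register) to get the
 * faulting address and context, then branch to the common miss
 * handler.
 */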
#define SUN4V_ITSB_MISS \
ldxa [%g0] ASI_SCRATCHPAD, %g2; \
ldx [%g2 + HV_FAULT_I_ADDR_OFFSET], %g4; \
ldx [%g2 + HV_FAULT_I_CTX_OFFSET], %g5; \
srlx %g4, 22, %g6; \
ba,pt %xcc, sun4v_itsb_miss; \
nop; \
nop; \
nop;
#define SUN4V_DTSB_MISS \
ldxa [%g0] ASI_SCRATCHPAD, %g2; \
ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \
ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \
srlx %g4, 22, %g6; \
ba,pt %xcc, sun4v_dtsb_miss; \
nop; \
nop; \
nop;
#define SUN4V_MCD_PRECISE \
ldxa [%g0] ASI_SCRATCHPAD, %g2; \
ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \
ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \
ba,pt %xcc, etrap; \
rd %pc, %g7; \
ba,pt %xcc, sun4v_mcd_detect_precise; \
nop; \
nop;
/* Before touching these macros, you owe it to yourself to go and
* see how arch/sparc64/kernel/winfixup.S works... -DaveM
*
* For the user cases we used to use the %asi register, but
* it turns out that the "wr xxx, %asi" costs ~5 cycles, so
* now we use immediate ASI loads and stores instead. Kudos
* to Greg Onufer for pointing out this performance anomaly.
*
* Further note that we cannot use the g2, g4, g5, and g7 alternate
* globals in the spill routines, check out the save instruction in
* arch/sparc64/kernel/etrap.S to see what I mean about g2, and
* g4/g5 are the globals which are preserved by etrap processing
* for the caller of it. The g7 register is the return pc for
* etrap. Finally, g6 is the current thread register so we cannot
* use it in the spill handlers either. Most of these rules do not
* apply to fill processing, only g6 is not usable.
*/
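/* The generic user spill/fill handlers end with three branches that
 * the fault handler revectors to when a user stack access faults:
 * one each for data access exception, memory address not aligned,
 * and normal faults.
 */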
/* Normal kernel spill */
#define SPILL_0_NORMAL \
stx %l0, [%sp + STACK_BIAS + 0x00]; \
stx %l1, [%sp + STACK_BIAS + 0x08]; \
stx %l2, [%sp + STACK_BIAS + 0x10]; \
stx %l3, [%sp + STACK_BIAS + 0x18]; \
stx %l4, [%sp + STACK_BIAS + 0x20]; \
stx %l5, [%sp + STACK_BIAS + 0x28]; \
stx %l6, [%sp + STACK_BIAS + 0x30]; \
stx %l7, [%sp + STACK_BIAS + 0x38]; \
stx %i0, [%sp + STACK_BIAS + 0x40]; \
stx %i1, [%sp + STACK_BIAS + 0x48]; \
stx %i2, [%sp + STACK_BIAS + 0x50]; \
stx %i3, [%sp + STACK_BIAS + 0x58]; \
stx %i4, [%sp + STACK_BIAS + 0x60]; \
stx %i5, [%sp + STACK_BIAS + 0x68]; \
stx %i6, [%sp + STACK_BIAS + 0x70]; \
stx %i7, [%sp + STACK_BIAS + 0x78]; \
saved; retry; nop; nop; nop; nop; nop; nop; \
nop; nop; nop; nop; nop; nop; nop; nop;
#define SPILL_0_NORMAL_ETRAP \
etrap_kernel_spill: \
stx %l0, [%sp + STACK_BIAS + 0x00]; \
stx %l1, [%sp + STACK_BIAS + 0x08]; \
stx %l2, [%sp + STACK_BIAS + 0x10]; \
stx %l3, [%sp + STACK_BIAS + 0x18]; \
stx %l4, [%sp + STACK_BIAS + 0x20]; \
stx %l5, [%sp + STACK_BIAS + 0x28]; \
stx %l6, [%sp + STACK_BIAS + 0x30]; \
stx %l7, [%sp + STACK_BIAS + 0x38]; \
stx %i0, [%sp + STACK_BIAS + 0x40]; \
stx %i1, [%sp + STACK_BIAS + 0x48]; \
stx %i2, [%sp + STACK_BIAS + 0x50]; \
stx %i3, [%sp + STACK_BIAS + 0x58]; \
stx %i4, [%sp + STACK_BIAS + 0x60]; \
stx %i5, [%sp + STACK_BIAS + 0x68]; \
stx %i6, [%sp + STACK_BIAS + 0x70]; \
stx %i7, [%sp + STACK_BIAS + 0x78]; \
saved; \
sub %g1, 2, %g1; \
ba,pt %xcc, etrap_save; \
wrpr %g1, %cwp; \
nop; nop; nop; nop; nop; nop; nop; nop; \
nop; nop; nop; nop;
/* Normal 64bit spill */
#define SPILL_1_GENERIC(ASI) \
add %sp, STACK_BIAS + 0x00, %g1; \
stxa %l0, [%g1 + %g0] ASI; \
mov 0x08, %g3; \
stxa %l1, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %l2, [%g1 + %g0] ASI; \
stxa %l3, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %l4, [%g1 + %g0] ASI; \
stxa %l5, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %l6, [%g1 + %g0] ASI; \
stxa %l7, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %i0, [%g1 + %g0] ASI; \
stxa %i1, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %i2, [%g1 + %g0] ASI; \
stxa %i3, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %i4, [%g1 + %g0] ASI; \
stxa %i5, [%g1 + %g3] ASI; \
add %g1, 0x10, %g1; \
stxa %i6, [%g1 + %g0] ASI; \
stxa %i7, [%g1 + %g3] ASI; \
saved; \
retry; nop; nop; \
b,a,pt %xcc, spill_fixup_dax; \
b,a,pt %xcc, spill_fixup_mna; \
b,a,pt %xcc, spill_fixup;
#define SPILL_1_GENERIC_ETRAP \
etrap_user_spill_64bit: \
stxa %l0, [%sp + STACK_BIAS + 0x00] %asi; \
stxa %l1, [%sp + STACK_BIAS + 0x08] %asi; \
stxa %l2, [%sp + STACK_BIAS + 0x10] %asi; \
stxa %l3, [%sp + STACK_BIAS + 0x18] %asi; \
stxa %l4, [%sp + STACK_BIAS + 0x20] %asi; \
stxa %l5, [%sp + STACK_BIAS + 0x28] %asi; \
stxa %l6, [%sp + STACK_BIAS + 0x30] %asi; \
stxa %l7, [%sp + STACK_BIAS + 0x38] %asi; \
stxa %i0, [%sp + STACK_BIAS + 0x40] %asi; \
stxa %i1, [%sp + STACK_BIAS + 0x48] %asi; \
stxa %i2, [%sp + STACK_BIAS + 0x50] %asi; \
stxa %i3, [%sp + STACK_BIAS + 0x58] %asi; \
stxa %i4, [%sp + STACK_BIAS + 0x60] %asi; \
stxa %i5, [%sp + STACK_BIAS + 0x68] %asi; \
stxa %i6, [%sp + STACK_BIAS + 0x70] %asi; \
stxa %i7, [%sp + STACK_BIAS + 0x78] %asi; \
saved; \
sub %g1, 2, %g1; \
ba,pt %xcc, etrap_save; \
wrpr %g1, %cwp; \
nop; nop; nop; nop; nop; \
nop; nop; nop; nop; \
ba,a,pt %xcc, etrap_spill_fixup_64bit; \
ba,a,pt %xcc, etrap_spill_fixup_64bit; \
ba,a,pt %xcc, etrap_spill_fixup_64bit;
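/* If the user stack cannot be touched during an etrap-time spill,
 * stash the window in the thread_info save area (TI_RWIN_SPTRS /
 * TI_REG_WINDOW) and bump TI_WSAVED instead.
 */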
#define SPILL_1_GENERIC_ETRAP_FIXUP \
etrap_spill_fixup_64bit: \
ldub [%g6 + TI_WSAVED], %g1; \
sll %g1, 3, %g3; \
add %g6, %g3, %g3; \
stx %sp, [%g3 + TI_RWIN_SPTRS]; \
sll %g1, 7, %g3; \
add %g6, %g3, %g3; \
stx %l0, [%g3 + TI_REG_WINDOW + 0x00]; \
stx %l1, [%g3 + TI_REG_WINDOW + 0x08]; \
stx %l2, [%g3 + TI_REG_WINDOW + 0x10]; \
stx %l3, [%g3 + TI_REG_WINDOW + 0x18]; \
stx %l4, [%g3 + TI_REG_WINDOW + 0x20]; \
stx %l5, [%g3 + TI_REG_WINDOW + 0x28]; \
stx %l6, [%g3 + TI_REG_WINDOW + 0x30]; \
stx %l7, [%g3 + TI_REG_WINDOW + 0x38]; \
stx %i0, [%g3 + TI_REG_WINDOW + 0x40]; \
stx %i1, [%g3 + TI_REG_WINDOW + 0x48]; \
stx %i2, [%g3 + TI_REG_WINDOW + 0x50]; \
stx %i3, [%g3 + TI_REG_WINDOW + 0x58]; \
stx %i4, [%g3 + TI_REG_WINDOW + 0x60]; \
stx %i5, [%g3 + TI_REG_WINDOW + 0x68]; \
stx %i6, [%g3 + TI_REG_WINDOW + 0x70]; \
stx %i7, [%g3 + TI_REG_WINDOW + 0x78]; \
add %g1, 1, %g1; \
stb %g1, [%g6 + TI_WSAVED]; \
saved; \
rdpr %cwp, %g1; \
sub %g1, 2, %g1; \
ba,pt %xcc, etrap_save; \
wrpr %g1, %cwp; \
nop; nop; nop
/* Normal 32bit spill */
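/* A 32-bit task running on a 64-bit (biased) stack has bit zero of
 * %sp set; in that case branch back to the 64-bit spill handler one
 * spill slot (0x80 bytes) earlier, otherwise zero-extend %sp and
 * store 32-bit words.
 */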
#define SPILL_2_GENERIC(ASI) \
and %sp, 1, %g3; \
brnz,pn %g3, (. - (128 + 4)); \
srl %sp, 0, %sp; \
stwa %l0, [%sp + %g0] ASI; \
mov 0x04, %g3; \
stwa %l1, [%sp + %g3] ASI; \
add %sp, 0x08, %g1; \
stwa %l2, [%g1 + %g0] ASI; \
stwa %l3, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %l4, [%g1 + %g0] ASI; \
stwa %l5, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %l6, [%g1 + %g0] ASI; \
stwa %l7, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %i0, [%g1 + %g0] ASI; \
stwa %i1, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %i2, [%g1 + %g0] ASI; \
stwa %i3, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %i4, [%g1 + %g0] ASI; \
stwa %i5, [%g1 + %g3] ASI; \
add %g1, 0x08, %g1; \
stwa %i6, [%g1 + %g0] ASI; \
stwa %i7, [%g1 + %g3] ASI; \
saved; \
retry; \
b,a,pt %xcc, spill_fixup_dax; \
b,a,pt %xcc, spill_fixup_mna; \
b,a,pt %xcc, spill_fixup;
#define SPILL_2_GENERIC_ETRAP \
etrap_user_spill_32bit: \
and %sp, 1, %g3; \
brnz,pn %g3, etrap_user_spill_64bit; \
srl %sp, 0, %sp; \
stwa %l0, [%sp + 0x00] %asi; \
stwa %l1, [%sp + 0x04] %asi; \
stwa %l2, [%sp + 0x08] %asi; \
stwa %l3, [%sp + 0x0c] %asi; \
stwa %l4, [%sp + 0x10] %asi; \
stwa %l5, [%sp + 0x14] %asi; \
stwa %l6, [%sp + 0x18] %asi; \
stwa %l7, [%sp + 0x1c] %asi; \
stwa %i0, [%sp + 0x20] %asi; \
stwa %i1, [%sp + 0x24] %asi; \
stwa %i2, [%sp + 0x28] %asi; \
stwa %i3, [%sp + 0x2c] %asi; \
stwa %i4, [%sp + 0x30] %asi; \
stwa %i5, [%sp + 0x34] %asi; \
stwa %i6, [%sp + 0x38] %asi; \
stwa %i7, [%sp + 0x3c] %asi; \
saved; \
sub %g1, 2, %g1; \
ba,pt %xcc, etrap_save; \
wrpr %g1, %cwp; \
nop; nop; nop; nop; \
nop; nop; \
ba,a,pt %xcc, etrap_spill_fixup_32bit; \
ba,a,pt %xcc, etrap_spill_fixup_32bit; \
ba,a,pt %xcc, etrap_spill_fixup_32bit;
#define SPILL_2_GENERIC_ETRAP_FIXUP \
etrap_spill_fixup_32bit: \
ldub [%g6 + TI_WSAVED], %g1; \
sll %g1, 3, %g3; \
add %g6, %g3, %g3; \
stx %sp, [%g3 + TI_RWIN_SPTRS]; \
sll %g1, 7, %g3; \
add %g6, %g3, %g3; \
stw %l0, [%g3 + TI_REG_WINDOW + 0x00]; \
stw %l1, [%g3 + TI_REG_WINDOW + 0x04]; \
stw %l2, [%g3 + TI_REG_WINDOW + 0x08]; \
stw %l3, [%g3 + TI_REG_WINDOW + 0x0c]; \
stw %l4, [%g3 + TI_REG_WINDOW + 0x10]; \
stw %l5, [%g3 + TI_REG_WINDOW + 0x14]; \
stw %l6, [%g3 + TI_REG_WINDOW + 0x18]; \
stw %l7, [%g3 + TI_REG_WINDOW + 0x1c]; \
stw %i0, [%g3 + TI_REG_WINDOW + 0x20]; \
stw %i1, [%g3 + TI_REG_WINDOW + 0x24]; \
stw %i2, [%g3 + TI_REG_WINDOW + 0x28]; \
stw %i3, [%g3 + TI_REG_WINDOW + 0x2c]; \
stw %i4, [%g3 + TI_REG_WINDOW + 0x30]; \
stw %i5, [%g3 + TI_REG_WINDOW + 0x34]; \
stw %i6, [%g3 + TI_REG_WINDOW + 0x38]; \
stw %i7, [%g3 + TI_REG_WINDOW + 0x3c]; \
add %g1, 1, %g1; \
stb %g1, [%g6 + TI_WSAVED]; \
saved; \
rdpr %cwp, %g1; \
sub %g1, 2, %g1; \
ba,pt %xcc, etrap_save; \
wrpr %g1, %cwp; \
nop; nop; nop
#define SPILL_1_NORMAL SPILL_1_GENERIC(ASI_AIUP)
#define SPILL_2_NORMAL SPILL_2_GENERIC(ASI_AIUP)
#define SPILL_3_NORMAL SPILL_0_NORMAL
#define SPILL_4_NORMAL SPILL_0_NORMAL
#define SPILL_5_NORMAL SPILL_0_NORMAL
#define SPILL_6_NORMAL SPILL_0_NORMAL
#define SPILL_7_NORMAL SPILL_0_NORMAL
#define SPILL_0_OTHER SPILL_0_NORMAL
#define SPILL_1_OTHER SPILL_1_GENERIC(ASI_AIUS)
#define SPILL_2_OTHER SPILL_2_GENERIC(ASI_AIUS)
#define SPILL_3_OTHER SPILL_3_NORMAL
#define SPILL_4_OTHER SPILL_4_NORMAL
#define SPILL_5_OTHER SPILL_5_NORMAL
#define SPILL_6_OTHER SPILL_6_NORMAL
#define SPILL_7_OTHER SPILL_7_NORMAL
/* Normal kernel fill */
#define FILL_0_NORMAL \
ldx [%sp + STACK_BIAS + 0x00], %l0; \
ldx [%sp + STACK_BIAS + 0x08], %l1; \
ldx [%sp + STACK_BIAS + 0x10], %l2; \
ldx [%sp + STACK_BIAS + 0x18], %l3; \
ldx [%sp + STACK_BIAS + 0x20], %l4; \
ldx [%sp + STACK_BIAS + 0x28], %l5; \
ldx [%sp + STACK_BIAS + 0x30], %l6; \
ldx [%sp + STACK_BIAS + 0x38], %l7; \
ldx [%sp + STACK_BIAS + 0x40], %i0; \
ldx [%sp + STACK_BIAS + 0x48], %i1; \
ldx [%sp + STACK_BIAS + 0x50], %i2; \
ldx [%sp + STACK_BIAS + 0x58], %i3; \
ldx [%sp + STACK_BIAS + 0x60], %i4; \
ldx [%sp + STACK_BIAS + 0x68], %i5; \
ldx [%sp + STACK_BIAS + 0x70], %i6; \
ldx [%sp + STACK_BIAS + 0x78], %i7; \
restored; retry; nop; nop; nop; nop; nop; nop; \
nop; nop; nop; nop; nop; nop; nop; nop;
#define FILL_0_NORMAL_RTRAP \
kern_rtt_fill: \
rdpr %cwp, %g1; \
sub %g1, 1, %g1; \
wrpr %g1, %cwp; \
ldx [%sp + STACK_BIAS + 0x00], %l0; \
ldx [%sp + STACK_BIAS + 0x08], %l1; \
ldx [%sp + STACK_BIAS + 0x10], %l2; \
ldx [%sp + STACK_BIAS + 0x18], %l3; \
ldx [%sp + STACK_BIAS + 0x20], %l4; \
ldx [%sp + STACK_BIAS + 0x28], %l5; \
ldx [%sp + STACK_BIAS + 0x30], %l6; \
ldx [%sp + STACK_BIAS + 0x38], %l7; \
ldx [%sp + STACK_BIAS + 0x40], %i0; \
ldx [%sp + STACK_BIAS + 0x48], %i1; \
ldx [%sp + STACK_BIAS + 0x50], %i2; \
ldx [%sp + STACK_BIAS + 0x58], %i3; \
ldx [%sp + STACK_BIAS + 0x60], %i4; \
ldx [%sp + STACK_BIAS + 0x68], %i5; \
ldx [%sp + STACK_BIAS + 0x70], %i6; \
ldx [%sp + STACK_BIAS + 0x78], %i7; \
restored; \
add %g1, 1, %g1; \
ba,pt %xcc, kern_rtt_restore; \
wrpr %g1, %cwp; \
nop; nop; nop; nop; nop; \
nop; nop; nop; nop;
/* Normal 64bit fill */
#define FILL_1_GENERIC(ASI) \
add %sp, STACK_BIAS + 0x00, %g1; \
ldxa [%g1 + %g0] ASI, %l0; \
mov 0x08, %g2; \
mov 0x10, %g3; \
ldxa [%g1 + %g2] ASI, %l1; \
mov 0x18, %g5; \
ldxa [%g1 + %g3] ASI, %l2; \
ldxa [%g1 + %g5] ASI, %l3; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %l4; \
ldxa [%g1 + %g2] ASI, %l5; \
ldxa [%g1 + %g3] ASI, %l6; \
ldxa [%g1 + %g5] ASI, %l7; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %i0; \
ldxa [%g1 + %g2] ASI, %i1; \
ldxa [%g1 + %g3] ASI, %i2; \
ldxa [%g1 + %g5] ASI, %i3; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %i4; \
ldxa [%g1 + %g2] ASI, %i5; \
ldxa [%g1 + %g3] ASI, %i6; \
ldxa [%g1 + %g5] ASI, %i7; \
restored; \
retry; nop; nop; nop; nop; \
b,a,pt %xcc, fill_fixup_dax; \
b,a,pt %xcc, fill_fixup_mna; \
b,a,pt %xcc, fill_fixup;
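/* The _RTRAP variants are branched to directly from rtrap_64.S to
 * perform the return-from-trap window fill by hand.  Their three
 * trailing branches go to separate fixups for data access
 * exception, address-not-aligned, and normal faults.
 */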
#define FILL_1_GENERIC_RTRAP \
user_rtt_fill_64bit: \
ldxa [%sp + STACK_BIAS + 0x00] %asi, %l0; \
ldxa [%sp + STACK_BIAS + 0x08] %asi, %l1; \
ldxa [%sp + STACK_BIAS + 0x10] %asi, %l2; \
ldxa [%sp + STACK_BIAS + 0x18] %asi, %l3; \
ldxa [%sp + STACK_BIAS + 0x20] %asi, %l4; \
ldxa [%sp + STACK_BIAS + 0x28] %asi, %l5; \
ldxa [%sp + STACK_BIAS + 0x30] %asi, %l6; \
ldxa [%sp + STACK_BIAS + 0x38] %asi, %l7; \
ldxa [%sp + STACK_BIAS + 0x40] %asi, %i0; \
ldxa [%sp + STACK_BIAS + 0x48] %asi, %i1; \
ldxa [%sp + STACK_BIAS + 0x50] %asi, %i2; \
ldxa [%sp + STACK_BIAS + 0x58] %asi, %i3; \
ldxa [%sp + STACK_BIAS + 0x60] %asi, %i4; \
ldxa [%sp + STACK_BIAS + 0x68] %asi, %i5; \
ldxa [%sp + STACK_BIAS + 0x70] %asi, %i6; \
ldxa [%sp + STACK_BIAS + 0x78] %asi, %i7; \
ba,pt %xcc, user_rtt_pre_restore; \
restored; \
nop; nop; nop; nop; nop; nop; \
nop; nop; nop; nop; nop; \
ba,a,pt %xcc, user_rtt_fill_fixup_dax; \
ba,a,pt %xcc, user_rtt_fill_fixup_mna; \
ba,a,pt %xcc, user_rtt_fill_fixup;
/* Normal 32bit fill */
#define FILL_2_GENERIC(ASI) \
and %sp, 1, %g3; \
brnz,pn %g3, (. - (128 + 4)); \
srl %sp, 0, %sp; \
lduwa [%sp + %g0] ASI, %l0; \
mov 0x04, %g2; \
mov 0x08, %g3; \
lduwa [%sp + %g2] ASI, %l1; \
mov 0x0c, %g5; \
lduwa [%sp + %g3] ASI, %l2; \
lduwa [%sp + %g5] ASI, %l3; \
add %sp, 0x10, %g1; \
lduwa [%g1 + %g0] ASI, %l4; \
lduwa [%g1 + %g2] ASI, %l5; \
lduwa [%g1 + %g3] ASI, %l6; \
lduwa [%g1 + %g5] ASI, %l7; \
add %g1, 0x10, %g1; \
lduwa [%g1 + %g0] ASI, %i0; \
lduwa [%g1 + %g2] ASI, %i1; \
lduwa [%g1 + %g3] ASI, %i2; \
lduwa [%g1 + %g5] ASI, %i3; \
add %g1, 0x10, %g1; \
lduwa [%g1 + %g0] ASI, %i4; \
lduwa [%g1 + %g2] ASI, %i5; \
lduwa [%g1 + %g3] ASI, %i6; \
lduwa [%g1 + %g5] ASI, %i7; \
restored; \
retry; nop; nop; \
b,a,pt %xcc, fill_fixup_dax; \
b,a,pt %xcc, fill_fixup_mna; \
b,a,pt %xcc, fill_fixup;
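/* As with the spill path, a biased %sp from a 32-bit task redirects
 * to the 64-bit fill handler.
 */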
#define FILL_2_GENERIC_RTRAP \
user_rtt_fill_32bit: \
and %sp, 1, %g3; \
brnz,pn %g3, user_rtt_fill_64bit; \
srl %sp, 0, %sp; \
lduwa [%sp + 0x00] %asi, %l0; \
lduwa [%sp + 0x04] %asi, %l1; \
lduwa [%sp + 0x08] %asi, %l2; \
lduwa [%sp + 0x0c] %asi, %l3; \
lduwa [%sp + 0x10] %asi, %l4; \
lduwa [%sp + 0x14] %asi, %l5; \
lduwa [%sp + 0x18] %asi, %l6; \
lduwa [%sp + 0x1c] %asi, %l7; \
lduwa [%sp + 0x20] %asi, %i0; \
lduwa [%sp + 0x24] %asi, %i1; \
lduwa [%sp + 0x28] %asi, %i2; \
lduwa [%sp + 0x2c] %asi, %i3; \
lduwa [%sp + 0x30] %asi, %i4; \
lduwa [%sp + 0x34] %asi, %i5; \
lduwa [%sp + 0x38] %asi, %i6; \
lduwa [%sp + 0x3c] %asi, %i7; \
ba,pt %xcc, user_rtt_pre_restore; \
restored; \
nop; nop; nop; nop; nop; \
nop; nop; nop; \
ba,a,pt %xcc, user_rtt_fill_fixup_dax; \
ba,a,pt %xcc, user_rtt_fill_fixup_mna; \
ba,a,pt %xcc, user_rtt_fill_fixup;
#define FILL_1_NORMAL FILL_1_GENERIC(ASI_AIUP)
#define FILL_2_NORMAL FILL_2_GENERIC(ASI_AIUP)
#define FILL_3_NORMAL FILL_0_NORMAL
#define FILL_4_NORMAL FILL_0_NORMAL
#define FILL_5_NORMAL FILL_0_NORMAL
#define FILL_6_NORMAL FILL_0_NORMAL
#define FILL_7_NORMAL FILL_0_NORMAL
#define FILL_0_OTHER FILL_0_NORMAL
#define FILL_1_OTHER FILL_1_GENERIC(ASI_AIUS)
#define FILL_2_OTHER FILL_2_GENERIC(ASI_AIUS)
#define FILL_3_OTHER FILL_3_NORMAL
#define FILL_4_OTHER FILL_4_NORMAL
#define FILL_5_OTHER FILL_5_NORMAL
#define FILL_6_OTHER FILL_6_NORMAL
#define FILL_7_OTHER FILL_7_NORMAL
#endif /* !(_SPARC64_TTABLE_H) */