Commit Graph

27 Commits

Author SHA1 Message Date
Vineet Gupta
616c05d110 sh: move fpu_counter into ARCH specific thread_struct
Only a couple of arches (sh/x86) use fpu_counter in task_struct, so it
can be moved out into the arch-specific thread_struct, reducing the
size of task_struct for the other arches.

Compile tested sh defconfig + sh4-linux-gcc (4.6.3)
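
A rough sketch of the shape of the change on the sh side (the
surrounding thread_struct fields are elided; the comment reflects
fpu_counter's known role rather than text from the patch):

    struct thread_struct {
        /* ... existing pc/sp/FPU state fields ... */

        /*
         * Counts consecutive context switches during which the task
         * used the FPU; past a threshold the FPU state is restored
         * eagerly at switch time. Previously lived in task_struct.
         */
        unsigned char fpu_counter;
    };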

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Paul Mundt <paul.mundt@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-11-13 12:09:13 +09:00
Kuninori Morimoto
30c254ff5c sh: define TASK_UNMAPPED_BASE as a page aligned constant
Commit b4265f1234 ("mm: use vm_unmapped_area() on sh architecture")
broke sh boot. This patch defines TASK_UNMAPPED_BASE as a page-aligned
constant to solve this issue.
Special thanks to Michel.
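
A minimal sketch of what a page-aligned definition looks like (the
TASK_SIZE / 3 base expression is assumed for illustration, not quoted
from the patch):

    /* Keep the userspace mmap search base page aligned. */
    #define TASK_UNMAPPED_BASE  PAGE_ALIGN(TASK_SIZE / 3)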

Acked-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2013-01-11 20:52:35 +09:00
Al Viro
7147e21548 sh: switch to generic kernel_thread()/kernel_execve()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-10-22 22:31:01 -04:00
Suresh Siddha
55ccf3fe3f fork: move the real prepare_to_copy() users to arch_dup_task_struct()
The historical prepare_to_copy() is mostly a no-op, duplicated across
the majority of architectures, with the rest following the x86 model
of flushing the extended register state (such as the FPU) there.

Remove it and use arch_dup_task_struct() instead.
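
For sh, a sketch of what the replacement hook amounts to, assuming the
x86-style FPU flush described above (helper names are taken from the
existing sh FPU code, not from this patch):

    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
    #ifdef CONFIG_SUPERH32
        /* Flush the parent's lazy FPU state before duplicating it. */
        unlazy_fpu(src, task_pt_regs(src));
    #endif
        *dst = *src;
        return 0;
    }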

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1336692811-30576-1-git-send-email-suresh.b.siddha@intel.com
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2012-05-16 15:16:26 -07:00
Paul Mundt
c81fc38925 sh: constify prefetch pointers.
prefetch()/prefetchw() are supposed to take a const void * rather than
a plain void *, which the build recently started complaining about;
fix them up.
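
The resulting signatures end up looking roughly like this:

    static inline void prefetch(const void *x);
    static inline void prefetchw(const void *x);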

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2011-01-11 14:39:35 +09:00
Giuseppe CAVALLARO
d53e4307c2 sh: Use GCC __builtin_prefetch() to implement prefetch().
GCC's __builtin_prefetch() was introduced a long time ago, and all
supported GCC versions have it, so this patch uses it to implement
prefetch() on SH2A and SH4.

The current prefetch implementation is almost equivalent to
__builtin_prefetch(). The third parameter of __builtin_prefetch() is
the locality hint, which is not supported on SH architectures. It has
been set to three; whether that is also suitable for SH2A should be
verified, as I did not test on that architecture.

Using the builtin should be more efficient than an __asm__ because
there are fewer barriers, and because the compiler no longer treats
the instruction as a "black box", allowing better code generation.

This has already been done on other architectures (see commit
0453fb3c52).

Many thanks to Christian Bruel <christain.bruel@st.com> for his
support in evaluating the impact of the GCC built-in on the SH4
architecture.

No regressions found while testing with LMbench on STLinux targets.
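
A sketch of the resulting helpers (the locality argument of 3 matches
the text above; the config guard and the read/write argument used for
prefetchw() are assumptions):

    #if defined(CONFIG_CPU_SH2A) || defined(CONFIG_CPU_SH4)
    #define ARCH_HAS_PREFETCH
    #define ARCH_HAS_PREFETCHW

    static inline void prefetch(void *x)
    {
        __builtin_prefetch(x, 0, 3);    /* read, locality 3 */
    }

    static inline void prefetchw(void *x)
    {
        __builtin_prefetch(x, 1, 3);    /* write, locality 3 */
    }
    #endif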

Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-11-18 14:53:18 +09:00
Paul Mundt
eaaaeef392 sh: Add kprobe-based event tracer.
This follows the x86/ppc changes for kprobe-based event tracing on sh.
While kprobes is only supported on 32-bit sh, we provide the API for
HAVE_REGS_AND_STACK_ACCESS_API for both 32 and 64-bit.
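
The regs-and-stack access API the tracer relies on boils down to
offset-based accessors over pt_regs; a representative sketch (the
bounds check and exact shape are assumptions, the name follows the
generic API):

    static inline unsigned long regs_get_register(struct pt_regs *regs,
                                                  unsigned int offset)
    {
        if (unlikely(offset > MAX_REG_OFFSET))
            return 0;
        return *(unsigned long *)((unsigned long)regs + offset);
    }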

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-06-14 15:16:53 +09:00
Paul Mundt
4a6feab0ee sh: __cpuinit annotate the CPU init path.
All of the regular CPU init path needs to be __cpuinit annotated for CPU
hotplug.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-21 12:20:42 +09:00
Paul Mundt
94ea5e449a sh: wire up SET/GET_UNALIGN_CTL.
This hooks up the SET/GET_UNALIGN_CTL knobs, cribbing the bulk of it
from the PPC and ia64 implementations. The thread flags happen to be the
logical inverse of what the global fault mode is set to, so this works
out pretty cleanly. By default the global fault mode is used, with tasks
now being able to override their own settings via prctl().
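
From userspace, the per-task override then goes through the standard
prctl(2) interface, e.g.:

    #include <sys/prctl.h>

    int main(void)
    {
        /* Fix up this task's unaligned accesses silently;
         * PR_UNALIGN_SIGBUS would deliver SIGBUS instead. */
        return prctl(PR_SET_UNALIGN, PR_UNALIGN_NOPRINT);
    }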

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23 12:56:30 +09:00
Paul Mundt
3ef2932b8c sh64: Fix up the build for the thread_xstate changes.
This updates the sh64 processor info with the sh32 changes in order to
tie in to the generic task_xstate management code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 15:40:03 +09:00
Paul Mundt
644755e786 Merge branches 'sh/xstate', 'sh/hw-breakpoints' and 'sh/stable-updates' 2010-01-13 13:02:55 +09:00
Paul Mundt
0ea820cf9b sh: Move over to dynamically allocated FPU context.
This follows the x86 xstate changes and implements a task_xstate slab
cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.

This also tidies up and consolidates some of the SH-2A/SH-4 FPU
fragmentation. Now fpu state restorers are commonly defined, with the
init_fpu()/fpu_init() mess reworked to follow the x86 convention.
The fpu_init() register initialization has been replaced by xstate setup
followed by writing out to hardware via the standard restore path.

As init_fpu() now performs a slab allocation, a secondary,
lighter-weight restorer is also introduced for the context switch.

In the future the DSP state will be rolled in here, too.

More work remains for math emulation and the SH-5 FPU, which presently
uses its own special (UP-only) interfaces.
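
A sketch of the dynamically sized slab cache this introduces (cache
name and helper shapes follow the x86 pattern referenced above and are
assumptions, not quotes from the patch):

    struct kmem_cache *task_xstate_cachep;

    void __init arch_task_cache_init(void)
    {
        /* xstate_size is chosen at boot: hard FP, soft FP or FPU-less. */
        task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
                                               __alignof__(union thread_xstate),
                                               SLAB_PANIC, NULL);
    }

    int init_fpu(struct task_struct *tsk)
    {
        if (!tsk->thread.xstate) {
            tsk->thread.xstate = kmem_cache_alloc(task_xstate_cachep,
                                                  GFP_KERNEL);
            if (!tsk->thread.xstate)
                return -ENOMEM;
        }
        return 0;
    }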

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 12:51:40 +09:00
Paul Mundt
70e068eef9 sh: Move start_thread() out of line.
start_thread() will become a bit heavier with the xstate freeing to be
added in, so move it out-of-line in preparation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-12 18:52:00 +09:00
Paul Mundt
4352fc1b12 sh: Abstracted SH-4A UBC support on hw-breakpoint core.
This is the next big chunk of hw_breakpoint support. This decouples
the SH-4A support from the core and moves it out in to its own stub,
following many of the conventions established with the perf events
layering.

In addition to extending SH-4A support to encapsulate the remainder
of the UBC channels, clock framework support for handling the UBC
interface clock is added as well, allowing for dynamic clock gating.

This also fixes up a regression introduced by the SIGTRAP handling that
broke the ksym_tracer, to the extent that the current support works well
with all of the ksym_tracer/ptrace/kgdb. The kprobes singlestep code will
follow in turn.

With this in place, the remaining UBC variants (SH-2A and SH-4) can now
be trivially plugged in.
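
The abstraction amounts to a per-UBC operations structure that each
CPU family fills in and registers; roughly (the field list here is
reconstructed from the description above, not quoted from the patch):

    struct sh_ubc {
        const char *name;
        unsigned int num_events;
        unsigned int trap_nr;
        void (*enable)(struct arch_hw_breakpoint *, int);
        void (*disable)(struct arch_hw_breakpoint *, int);
        void (*enable_all)(unsigned long);
        void (*disable_all)(void);
        struct clk *clk;    /* UBC interface clock, for dynamic gating */
    };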

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-05 19:06:45 +09:00
Paul Mundt
6424db52e2 Merge branch 'master' into sh/hw-breakpoints
Conflict between FPU thread flag migration and debug
thread flag addition.

Conflicts:
	arch/sh/include/asm/thread_info.h
	arch/sh/include/asm/ubc.h
	arch/sh/kernel/process_32.c
2009-12-08 15:47:12 +09:00
Paul Mundt
09a0729477 sh: hw-breakpoints: Add preliminary support for SH-4A UBC.
This adds preliminary support for the SH-4A UBC to the hw-breakpoints API.
Presently only a single channel is implemented, and the ptrace interface
still needs to be converted. This is the first step to cleaning up the
long-standing UBC mess, making the UBC more generally accessible, and
finally making it SMP safe.

An additional abstraction will be layered on top of this as with the perf
events code to permit the various CPU families to wire up support for
their own specific UBCs, as many variations exist.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-08 15:02:27 +09:00
Stuart Menefy
d3ea9fa0a5 sh: Minor optimisations to FPU handling
A number of small optimisations to FPU handling, in particular:

 - move the task USEDFPU flag from the thread_info flags field (which
   is accessed asynchronously to the thread) to a new status field,
   which is only accessed by the thread itself. This allows locking to
   be removed in most cases, or can be reduced to a preempt_lock().
   This mimics the i386 behaviour.

 - move the modification of regs->sr and thread_info->status flags out
   of save_fpu() to __unlazy_fpu(). This gives the compiler a better
   chance to optimise things, as well as making save_fpu() symmetrical
   with restore_fpu() and init_fpu().

 - implement prepare_to_copy(), so that when creating a thread, we can
   unlazy the FPU prior to copying the thread data structures.

Also make sure that the FPU is disabled while in the kernel, in
particular while booting, and for newly created kernel threads.

In a very artificial benchmark, the execution time for 2500000
context switches was reduced from 50 to 45 seconds.
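
A sketch of the reworked lazy-save path described in the second bullet
above (the helper and flag names mirror the existing sh/i386 FPU code
and are assumptions here):

    static inline void __unlazy_fpu(struct task_struct *tsk, struct pt_regs *regs)
    {
        if (task_thread_info(tsk)->status & TS_USEDFPU) {
            task_thread_info(tsk)->status &= ~TS_USEDFPU;
            save_fpu(tsk);      /* no longer touches sr/status itself */
            release_fpu(regs);  /* set SR.FD: the next FPU use will trap */
        }
    }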

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-11-24 17:45:38 +09:00
Michael Trimarchi
01ab10393c sh: Fix up DSP context save/restore.
There were a number of issues with the DSP context save/restore code,
mostly left-over relics from when it was introduced on SH3-DSP with
little follow-up testing, resulting in things like task_pt_dspregs()
referencing incorrect state on the stack.

This follows the MIPS convention of tracking the DSP state in the
thread_struct and handling the state save/restore in switch_to() and
finish_arch_switch() respectively. The regset interface is also updated,
which allows us to finally be rid of task_pt_dspregs() and the special
cased task_pt_regs().
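
Conceptually, the switch path then looks like this (the helper names
are illustrative, mirroring the MIPS convention):

    /* switch_to(): park the outgoing task's DSP registers ... */
    if (is_dsp_enabled(prev))
        __save_dsp(prev);       /* into prev->thread DSP state */

    /* ... and reload them when a DSP-using task comes back in. */
    if (is_dsp_enabled(next))
        __restore_dsp(next);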

Signed-off-by: Michael Trimarchi <michael@evidence.eu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-04-04 11:48:11 -04:00
Paul Mundt
a73090ffaf sh: Disable unsupportable prefetching on SH-3.
The SH-3 does not support 'pref'-based prefetching; only SH-2A and
SH-4A parts do. Remove SH-3 from the list.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-02-27 16:42:05 +09:00
Roel Kluin
dc66ff6220 SH: fix start_thread and user_stack_pointer macros
Fix the start_thread and user_stack_pointer macros. When these macros
are not called with a variable named regs as the second argument, the
result is a build failure.
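
The fix boils down to referencing the macro parameter instead of a
hard-coded regs, roughly:

    /* r15 is the stack pointer on sh. */
    #define user_stack_pointer(_regs)   ((unsigned long)(_regs)->regs[15])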

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-01-29 15:41:15 +09:00
Paul Mundt
5d2685d0b3 sh: Conditionalize the code dumper on CONFIG_DUMP_CODE.
We don't really want this enabled by default, but it is still quite
useful for debugging. So, make it conditional and leave it off by
default.
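
A sketch of how the conditional ends up looking (the exact placement
of the stub is an assumption):

    #ifdef CONFIG_DUMP_CODE
    void show_code(struct pt_regs *regs);
    #else
    static inline void show_code(struct pt_regs *regs)
    {
        /* Code dumping compiled out by default. */
    }
    #endif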

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:44:47 +09:00
Paul Mundt
eb67cf14ae sh: Consolidate cpu_relax()/cpu_sleep() definitions across _32/_64.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:43:50 +09:00
Paul Mundt
9cfc9a9b6f sh: Add a simple code dumper for SUPERH32 show_regs().
This implements a simple show_code() that is in turn plugged in to
show_regs() to provide minimal code dumping at the end of the trace.

Built on top of a simple instruction disassembler derived from the
binutils opcode table.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-12-22 18:43:49 +09:00
Paul Mundt
81b669952e sh: Consolidate struct sh_cpuinfo definitions across _32/_64 split.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-17 23:24:02 +09:00
Paul Mundt
9996b42ac0 sh: provide user_stack_pointer(), needed for tracehook support.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-12 22:11:36 +09:00
Paul Mundt
fa43972fab sh: fixup many sparse errors.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Paul Mundt
f15cbe6f1a sh: migrate to arch/sh/include/
This follows the sparc changes a439fe51a1.

Most of the moving about was done with Sam's directions at:

http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-29 08:09:44 +09:00