Commit Graph

2608 Commits

Author SHA1 Message Date
Linus Torvalds
581bfce969 Merge branch 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull more set_fs removal from Al Viro:
 "Christoph's 'use kernel_read and friends rather than open-coding
  set_fs()' series"

* 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  fs: unexport vfs_readv and vfs_writev
  fs: unexport vfs_read and vfs_write
  fs: unexport __vfs_read/__vfs_write
  lustre: switch to kernel_write
  gadget/f_mass_storage: stop messing with the address limit
  mconsole: switch to kernel_read
  btrfs: switch write_buf to kernel_write
  net/9p: switch p9_fd_read to kernel_write
  mm/nommu: switch do_mmap_private to kernel_read
  serial2002: switch serial2002_tty_write to kernel_{read/write}
  fs: make the buf argument to __kernel_write a void pointer
  fs: fix kernel_write prototype
  fs: fix kernel_read prototype
  fs: move kernel_read to fs/read_write.c
  fs: move kernel_write to fs/read_write.c
  autofs4: switch autofs4_write to __kernel_write
  ashmem: switch to ->read_iter
2017-09-14 18:13:32 -07:00
Linus Torvalds
dd198ce714 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull namespace updates from Eric Biederman:
 "Life has been busy and I have not gotten half as much done this round
  as I would have liked. I delayed it so that a minor conflict
  resolution with the mips tree could spend a little time in linux-next
  before I sent this pull request.

  This includes two long delayed user namespace changes from Kirill
  Tkhai. It also includes a very useful change from Serge Hallyn that
  allows the security capability attribute to be used inside of user
  namespaces. The practical effect of this is people can now untar
  tarballs and install rpms in user namespaces. It had been suggested to
  generalize this and encode some of the namespace information in the
  xattr name. Upon close inspection that makes the
  things that should be hard easy and the things that should be easy
  more expensive.

  Then there is my bugfix/cleanup for signal injection that removes the
  magic encoding of the siginfo union member from the kernel internal
  si_code. The mips folks reported that the case where I had used
  FPE_FIXME is impossible, so I have removed FPE_FIXME from mips, while
  at the same time including a return statement in that case to keep
  gcc from complaining about uninitialized variables.

  I almost finished the work to make copy_siginfo_to_user a trivial
  copy to user. The code is available at:

     git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace.git neuter-copy_siginfo_to_user-v3

  But I did not have time/energy to get the code posted and reviewed
  before the merge window opened.

  I was able to see that the security excuse for just copying fields
  that we know are initialized doesn't work in practice: there are buggy
  initializations that don't initialize the proper fields in siginfo. So
  we still sometimes copy uninitialized data to userspace"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  Introduce v3 namespaced file capabilities
  mips/signal: In force_fcr31_sig return in the impossible case
  signal: Remove kernel internal si_code magic
  fcntl: Don't use ambiguous SIG_POLL si_codes
  prctl: Allow local CAP_SYS_ADMIN changing exe_file
  security: Use user_namespace::level to avoid redundant iterations in cap_capable()
  userns,pidns: Verify the userns for new pid namespaces
  signal/testing: Don't look for __SI_FAULT in userspace
  signal/mips: Document a conflict with SI_USER with SIGFPE
  signal/sparc: Document a conflict with SI_USER with SIGFPE
  signal/ia64: Document a conflict with SI_USER with SIGFPE
  signal/alpha: Document a conflict with SI_USER for SIGTRAP
2017-09-11 18:34:47 -07:00
Christoph Hellwig
bdd1d2d3d2 fs: fix kernel_read prototype
Use proper ssize_t and size_t types for the return value and count
argument, move the offset last and make it an in/out argument like
all other read/write helpers, and make the buf argument a void pointer
to get rid of lots of casts in the callers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2017-09-04 19:05:15 -04:00
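
A sketch of the prototype change described in the commit above (old and new signatures as implied by the message; not a verbatim diff):

/* before: int return, offset by value, char * buffer */
int kernel_read(struct file *file, loff_t offset, char *addr,
		unsigned long count);

/* after: proper ssize_t/size_t types, void * buffer, in/out offset last */
ssize_t kernel_read(struct file *file, void *buf, size_t count,
		    loff_t *pos);
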
Ingo Molnar
edc2988c54 Merge branch 'linus' into locking/core, to fix up conflicts
Conflicts:
	mm/page_alloc.c

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-09-04 11:01:18 +02:00
James Cowgill
5af2ed3669 MIPS: Remove pt_regs adjustments in indirect syscall handler
If a restartable syscall is called using the indirect o32 syscall
handler - eg: syscall(__NR_waitid, ...), then it is possible for the
incorrect arguments to be passed to the syscall after it has been
restarted. This is because the syscall handler tries to shift all the
registers down one place in pt_regs so that when the syscall is restarted,
the "real" syscall is called instead. Unfortunately it only shifts the
arguments passed in registers, not the arguments on the user stack. This
causes the 4th argument to be duplicated when the syscall is restarted.

Fix by removing all the pt_regs shifting so that the indirect syscall
handler is called again when the syscall is restarted. The comment "some
syscalls like execve get their arguments from struct pt_regs" is long
out of date so this should now be safe.

Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15856/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-08-29 15:48:34 +02:00
James Hogan
3d729deaf2 MIPS: seccomp: Fix indirect syscall args
Since commit 669c409222 ("MIPS: Give __secure_computing() access to
syscall arguments."), upon syscall entry when seccomp is enabled,
syscall_trace_enter() passes a carefully prepared struct seccomp_data
containing syscall arguments to __secure_computing(). Unfortunately it
directly uses mips_get_syscall_arg() and fails to take into account the
indirect O32 system calls (i.e. syscall(2)) which put the system call
number in a0 and have the arguments shifted up by one entry.

We can't just revert that commit as samples/bpf/tracex5 would break
again, so use syscall_get_arguments() which already takes indirect
syscalls into account instead of directly using mips_get_syscall_arg(),
similar to what populate_seccomp_data() does.

This also removes the redundant error checking of the
mips_get_syscall_arg() return value (get_user() already zeroes the
result if an argument from the stack can't be loaded).

Reported-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: 669c409222 ("MIPS: Give __secure_computing() access to syscall arguments.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Will Drewry <wad@chromium.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16994/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-08-29 15:42:44 +02:00
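
To illustrate the approach in the commit above, a simplified sketch of building the struct seccomp_data from syscall_get_arguments() in the tracing path (surrounding variables such as `syscall' and `regs' are assumed):

	struct seccomp_data sd;
	unsigned long args[6];
	unsigned int i;

	sd.nr = syscall;
	sd.arch = syscall_get_arch();
	/* Unlike mips_get_syscall_arg(), this already accounts for the
	 * indirect O32 syscall(2) argument shift. */
	syscall_get_arguments(current, regs, 0, 6, args);
	for (i = 0; i < 6; i++)
		sd.args[i] = args[i];
	sd.instruction_pointer = KSTK_EIP(current);

	if (__secure_computing(&sd) == -1)
		return -1;	/* seccomp rejected the syscall */
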
Ying Huang
966a967116 smp: Avoid using two cache lines for struct call_single_data
struct call_single_data is used in IPIs to transfer information between
CPUs.  Its size is bigger than sizeof(unsigned long) and less than
cache line size.  Currently it is not allocated with any explicit alignment
requirements.  This makes it possible for allocated call_single_data to
cross two cache lines, which results in double the number of cache lines
that need to be transferred among CPUs.

This can be fixed by requiring call_single_data to be aligned with the
size of call_single_data. Currently the size of call_single_data is a
power of 2.  If we add new fields to call_single_data, we may need to
add padding to make sure the size of the new definition is a power of 2
as well.

Fortunately, this is enforced by GCC, which will report bad sizes.

To set the alignment requirement of call_single_data to the size of
call_single_data, a struct definition and a typedef are used.

To test the effect of the patch, I used the vm-scalability multiple
thread swap test case (swap-w-seq-mt).  The test will create multiple
threads and each thread will eat memory until all RAM and part of swap
is used, so that huge number of IPIs are triggered when unmapping
memory.  In the test, the throughput of memory writing improves ~5%
compared with misaligned call_single_data, because of faster IPIs.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Huang, Ying <ying.huang@intel.com>
[ Add call_single_data_t and align with size of call_single_data. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/87bmnqd6lz.fsf@yhuang-mobile.sh.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-29 15:14:38 +02:00
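
A sketch of the struct/typedef arrangement the message describes (field list may differ from the tree; the __aligned() attribute on the typedef is the point of the change):

struct __call_single_data {
	struct llist_node llist;
	smp_call_func_t func;
	void *info;
	unsigned int flags;
};

/* As long as the struct size stays a power of 2, aligning instances to
 * that size guarantees each one fits in a single cache line. */
typedef struct __call_single_data call_single_data_t
	__aligned(sizeof(struct __call_single_data));
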
Eric W. Biederman
20229305af mips/signal: In force_fcr31_sig return in the impossible case
In a recent discussion Maciej Rozycki reported that this case is
impossible.

Handle the impossible case by just returning instead of trying to
handle it.  This makes static analysis simpler as it means nothing
needs to consider the impossible case after the return statement.

As the code no longer has to deal with this case, remove FPE_FIXME from
the mips siginfo.h

Cc: "Maciej W. Rozycki" <macro@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Link: http://lkml.kernel.org/r/20170718140651.15973-4-ebiederm@xmission.com
Ref: ea1b75cf91 ("signal/mips: Document a conflict with SI_USER with SIGFPE")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2017-08-17 17:31:27 -05:00
Matija Glavinic Pecotic
6f542ebeae MIPS: Fix race on setting and getting cpu_online_mask
While testing cpu hotplug (cpu down and up in loops) on kernel 4.4, it was
observed that occasionally check for cpu online will fail in kernel/cpu.c,
_cpu_up:

https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/kernel/cpu.c?h=v4.4.79#n485
 518        /* Arch-specific enabling code. */
 519        ret = __cpu_up(cpu, idle);
 520
 521        if (ret != 0)
 522                goto out_notify;
 523        BUG_ON(!cpu_online(cpu));

The reason is a race between start_secondary and _cpu_up. cpu_callin_map
is set before cpu_online_mask. In __cpu_up, cpu_callin_map is waited for,
but the cpu online mask is not, resulting in a race in which the secondary
processor has started and set cpu_callin_map, but not yet set the online
mask, resulting in the above BUG being hit.

Upstream differs in this area: the cpu_online check is in
bringup_wait_for_ap, which runs after the cpu has reached AP_ONLINE_IDLE,
where the secondary has passed its start function. Nonetheless, the fix
makes start_secondary safe and not dependent on other locks throughout
the code. It also protects against cpu_online checks that may be added
in between in the future.

Fix this by moving completion after all flags are set.

Signed-off-by: Matija Glavinic Pecotic <matija.glavinic-pecotic.ext@nokia.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16925/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-08-07 11:57:30 +02:00
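
A sketch of the reordering the fix describes, in start_secondary() (flag names follow the message; the completion primitive is an assumption):

	/* Publish every piece of "this CPU is up" state first ... */
	set_cpu_online(cpu, true);
	cpumask_set_cpu(cpu, &cpu_callin_map);

	/* ... and only then wake the CPU spinning in __cpu_up(), so it can
	 * never observe cpu_callin_map set while cpu_online(cpu) is false. */
	complete(&cpu_running);
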
Eric W. Biederman
cc731525f2 signal: Remove kernel internal si_code magic
struct siginfo is a union and the kernel since 2.4 has been hiding a union
tag in the high 16bits of si_code using the values:
__SI_KILL
__SI_TIMER
__SI_POLL
__SI_FAULT
__SI_CHLD
__SI_RT
__SI_MESGQ
__SI_SYS

While this looks plausible on the surface, in practice this situation has
not worked well.

- Injected positive signals are not copied to user space properly
  unless they have these magic high bits set.

- Injected positive signals are not reported properly by signalfd
  unless they have these magic high bits set.

- These kernel internal values leaked to userspace via ptrace_peek_siginfo

- It was possible to inject these kernel internal values and cause the
  kernel to misbehave.

- Kernel developers got confused and expected these kernel internal values
  in userspace in kernel self tests.

- Kernel developers got confused and set si_code to __SI_FAULT which
  is SI_USER in userspace which causes userspace to think an ordinary user
  sent the signal and that it was not kernel generated.

- The values make it impossible to reorganize the code to transform
  siginfo_copy_to_user into a plain copy_to_user, as si_code must
  be massaged before being passed to userspace.

So remove these kernel internal si codes and make the kernel code simpler
and more maintainable.

To replace these kernel internal magic si_codes introduce the helper
function siginfo_layout, that takes a signal number and an si_code and
computes which union member of siginfo is being used.  Have
siginfo_layout return an enumeration so that gcc will have enough
information to warn if a switch statement does not handle all of union
members.

A couple of architectures have a messed up ABI that defines signal
specific duplications of SI_USER which causes more special cases in
siginfo_layout than I would like.  The good news is only problem
architectures pay the cost.

Update all of the code that used the previous magic __SI_ values to
use the new SIL_ values and to call siginfo_layout to get those
values.  Except where not all of the cases are handled, remove the
defaults in the switch statements so that if a new case is missed in
the future the lack will show up at compile time.

Modify the code that copies siginfo si_code to userspace to just copy
the value and not cast si_code to a short first.  The high bits are no
longer used to hold a magic union member.

Fixup the siginfo header files to stop including the __SI_ values in
their constants and for the headers that were missing it to properly
update the number of si_codes for each signal type.

The fixes to the copy_siginfo_from_user32 implementations have the
interesting property that several of them previously should never have
worked, as the __SI_ values they depended upon were kernel internal.
With that dependency gone those implementations should work much
better.

The idea of not passing the __SI_ values out to userspace and then
not reinserting them has been tested with criu and criu worked without
changes.

Ref: 2.4.0-test1
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2017-07-24 14:30:28 -05:00
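
A heavily simplified sketch of the siginfo_layout() idea from the message (the real classification handles more si_code cases; the SIL_* names are the ones referred to above):

enum siginfo_layout {
	SIL_KILL, SIL_TIMER, SIL_POLL, SIL_FAULT, SIL_CHLD, SIL_RT, SIL_SYS,
};

/* Derive the union member in use from (sig, si_code) instead of hiding
 * a union tag in the high bits of si_code. */
enum siginfo_layout siginfo_layout(int sig, int si_code)
{
	if (si_code == SI_TIMER)
		return SIL_TIMER;
	if (si_code <= 0)	/* user-sent: kill(), sigqueue(), ... (simplified) */
		return SIL_KILL;

	switch (sig) {
	case SIGILL:
	case SIGFPE:
	case SIGSEGV:
	case SIGBUS:
	case SIGTRAP:
		return SIL_FAULT;
	case SIGCHLD:
		return SIL_CHLD;
	case SIGPOLL:
		return SIL_POLL;
	case SIGSYS:
		return SIL_SYS;
	default:
		return SIL_RT;
	}
}
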
Eric W. Biederman
ea1b75cf91 signal/mips: Document a conflict with SI_USER with SIGFPE
Setting si_code to __SI_FAULT results in userspace seeing
an si_code of 0.  This is the same si_code as SI_USER.  Posix
and common sense require that SI_USER not be a signal specific
si_code.  As such this use of 0 for the si_code is a pretty
horribly broken ABI.

This use of __SI_FAULT is only a decade old, which compared
to the other pieces of kernel code that have made this mistake
is almost yesterday.

This is probably worth fixing but I don't know mips well enough
to know what si_code would be the proper one to use.

Cc: Ralf Baechle <ralf@linux-mips.org>
Ref: 948a34cf39 ("[MIPS] Maintain si_code field properly for FP exceptions")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2017-07-19 19:13:15 -05:00
Linus Torvalds
568d135d33 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS updates from Ralf Baechle:
 "Boston platform support:
   - Document DT bindings
   - Add CLK driver for board clocks

  CM:
   - Avoid per-core locking with CM3 & higher
   - WARN on attempt to lock invalid VP, not BUG

  CPS:
   - Select CONFIG_SYS_SUPPORTS_SCHED_SMT for MIPSr6
   - Prevent multi-core with dcache aliasing
   - Handle cores not powering down more gracefully
   - Handle spurious VP starts more gracefully

  DSP:
   - Add lwx & lhx misaligned access support

  eBPF:
   - Add MIPS support along with many supporting change to add the
     required infrastructure

  Generic arch code:
   - Misc sysmips MIPS_ATOMIC_SET fixes
   - Drop duplicate HAVE_SYSCALL_TRACEPOINTS
   - Negate error syscall return in trace
   - Correct forced syscall errors
   - Traced negative syscalls should return -ENOSYS
   - Allow samples/bpf/tracex5 to access syscall arguments for sane
     traces
   - Cleanup from old Kconfig options in defconfigs
   - Fix PREF instruction usage by memcpy for MIPS R6
   - Fix various special cases in the FPU emulation
   - Fix some special cases in MIPS16e2 support
   - Fix MIPS I ISA /proc/cpuinfo reporting
   - Sort MIPS Kconfig alphabetically
   - Fix minimum alignment requirement of IRQ stack as required by
     ABI / GCC
   - Fix special cases in the module loader
   - Perform post-DMA cache flushes on systems with MAARs
   - Probe the I6500 CPU
   - Cleanup cmpxchg and add support for 1 and 2 byte operations
   - Use queued read/write locks (qrwlock)
   - Use queued spinlocks (qspinlock)
   - Add CPU shared FTLB feature detection
   - Handle tlbex-tlbp race condition
   - Allow storing pgd in C0_CONTEXT for MIPSr6
   - Use current_cpu_type() in m4kc_tlbp_war()
   - Support Boston in the generic kernel

  Generic platform:
   - yamon-dt: Pull YAMON DT shim code out of SEAD-3 board
   - yamon-dt: Support > 256MB of RAM
   - yamon-dt: Use serial* rather than uart* aliases
   - Abstract FDT fixup application
   - Set RTC_ALWAYS_BCD to 0
   - Add a MAINTAINERS entry

  core kernel:
   - qspinlock.c: include linux/prefetch.h

  Loongson 3:
   - Add support

  Perf:
   - Add I6500 support

  SEAD-3:
   - Remove GIC timer from DT
   - Set interrupt-parent per-device, not at root node
   - Fix GIC interrupt specifiers

  SMP:
   - Skip IPI setup if we only have a single CPU

  VDSO:
   - Make comment match reality
   - Improvements to time code in VDSO"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (86 commits)
  locking/qspinlock: Include linux/prefetch.h
  MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
  MIPS: Fix minimum alignment requirement of IRQ stack
  MIPS: generic: Support MIPS Boston development boards
  MIPS: DTS: img: Don't attempt to build-in all .dtb files
  clk: boston: Add a driver for MIPS Boston board clocks
  dt-bindings: Document img,boston-clock binding
  MIPS: Traced negative syscalls should return -ENOSYS
  MIPS: Correct forced syscall errors
  MIPS: Negate error syscall return in trace
  MIPS: Drop duplicate HAVE_SYSCALL_TRACEPOINTS select
  MIPS16e2: Provide feature overrides for non-MIPS16 systems
  MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo
  MIPS: MIPS16e2: Subdecode extended LWSP/SWSP instructions
  MIPS: MIPS16e2: Identify ASE presence
  MIPS: VDSO: Fix a mismatch between comment and preprocessor constant
  MIPS: VDSO: Add implementation of gettimeofday() fallback
  MIPS: VDSO: Add implementation of clock_gettime() fallback
  MIPS: VDSO: Fix conversions in do_monotonic()/do_monotonic_coarse()
  MIPS: Use current_cpu_type() in m4kc_tlbp_war()
  ...
2017-07-15 10:59:54 -07:00
Maciej W. Rozycki
e5f5a5b06e MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
Correct a commit 515a6393db ("MIPS: kernel: proc: Add MIPS R6 support
to /proc/cpuinfo") regression that caused MIPS I systems to show no ISA
levels supported in /proc/cpuinfo, e.g.:

system type		: Digital DECstation 2100/3100
machine			: Unknown
processor		: 0
cpu model		: R3000 V2.0  FPU V2.0
BogoMIPS		: 10.69
wait instruction	: no
microsecond timers	: no
tlb_entries		: 64
extra interrupt vector	: no
hardware watchpoint	: no
isa			:
ASEs implemented	:
shadow register sets	: 1
kscratch registers	: 0
package			: 0
core			: 0
VCED exceptions		: not available
VCEI exceptions		: not available

and similarly exclude `mips1' from the ISA list for any processors below
MIPSr1.  This is because the condition to show `mips1' has been made
`cpu_has_mips_r1' rather than the newly-introduced `cpu_has_mips_1'.  Use
the correct condition then.

Fixes: 515a6393db ("MIPS: kernel: proc: Add MIPS R6 support to /proc/cpuinfo")
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16758/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11 14:13:50 +02:00
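
The corrected conditional, roughly (macro names from the message; the exact seq_file call in proc.c may differ):

	/* "isa" line in /proc/cpuinfo: MIPS I support is indicated by
	 * cpu_has_mips_1, not cpu_has_mips_r1 (false on pre-R1 CPUs). */
	if (cpu_has_mips_1)
		seq_printf(m, "%s", " mips1");
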
James Hogan
828db212bf MIPS: Traced negative syscalls should return -ENOSYS
If a negative system call number is used when system call tracing is
enabled, syscall_trace_enter() will return that negative system call
number without having written the return value and error flag into the
pt_regs.

The caller then treats it as a cancelled system call and assumes that
the return value and error flag are already written, leaving the
negative system call number in the return register ($v0), and the 4th
system call argument in the error register ($a3).

Add a special case to detect this at the end of syscall_trace_enter(),
to set the return value to error -ENOSYS when this happens.

Fixes: d218af7849 ("MIPS: scall: Always run the seccomp syscall filters")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16653/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11 14:13:06 +02:00
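
A sketch of the special case added at the end of syscall_trace_enter(), using the generic syscall_set_return_value() helper (surrounding code assumed):

	/* A negative syscall number from the tracer means "cancelled";
	 * make the registers reflect a real error instead of stale values. */
	if (syscall < 0)
		syscall_set_return_value(current, regs, -ENOSYS, 0);
	return syscall;
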
James Hogan
4f32a39d49 MIPS: Negate error syscall return in trace
The sys_exit trace event takes a single return value for the system
call, for which MIPS passes the value of the $v0 (result) register.
However, MIPS returns positive error codes in $v0 with $a3 specifying
that $v0 contains an error code. As a result, erroring system calls are
traced as returning positive error numbers that can't always be
distinguished from
success.

Use regs_return_value() to negate the error code if $a3 is set.

Fixes: 1d7bf993e0 ("MIPS: ftrace: Add support for syscall tracepoints.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.13+
Patchwork: https://patchwork.linux-mips.org/patch/16651/
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11 14:13:06 +02:00
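
The helper this relies on folds the MIPS error convention ($a3 set, positive errno in $v0) into a conventional negative return; roughly (the in-tree definition may differ in detail):

static inline long regs_return_value(struct pt_regs *regs)
{
	/* $a3 (regs[7]) non-zero marks an error; $v0 (regs[2]) then holds a
	 * positive errno, which callers expect negated. */
	if (regs->regs[7])
		return -regs->regs[2];
	return regs->regs[2];
}
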
Maciej W. Rozycki
92ecd19a7e MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo
Only now that both feature determination and unaligned emulation are in
place, add reporting to /proc/cpuinfo, so that the presence of "mips16e2"
there not only indicates our recognition of the hardware feature, but
correct unaligned emulation as well.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16757/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-11 13:48:49 +02:00
Thomas Meyer
a94c33dd1f lib/extable.c: use bsearch() library function in search_extable()
[thomas@m3y3r.de: v3: fix arch specific implementations]
  Link: http://lkml.kernel.org/r/1497890858.12931.7.camel@m3y3r.de
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-10 16:32:35 -07:00
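
Since the log message is terse, a sketch of what search_extable() looks like on top of bsearch() (simplified to absolute `insn' addresses; architectures with relative extables compare against the decoded address instead):

#include <linux/bsearch.h>
#include <linux/extable.h>

static int cmp_ex_search(const void *key, const void *elt)
{
	const struct exception_table_entry *e = elt;
	unsigned long ip = *(const unsigned long *)key;

	if (ip < e->insn)
		return -1;
	if (ip > e->insn)
		return 1;
	return 0;
}

const struct exception_table_entry *
search_extable(const struct exception_table_entry *base, const size_t num,
	       unsigned long value)
{
	return bsearch(&value, base, num, sizeof(*base), cmp_ex_search);
}
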
Maciej W. Rozycki
f3235d3207 MIPS: MIPS16e2: Subdecode extended LWSP/SWSP instructions
Implement extended LWSP/SWSP instruction subdecoding for the purpose of
unaligned GP-relative memory access emulation.

With the introduction of the MIPS16e2 ASE[1] the previously must-be-zero
3-bit field at bits 7..5 of the extended encodings of the instructions
selected with the LWSP and SWSP major opcodes has become a `sel' field,
acting as an opcode extension for additional operations.  In both cases
the `sel' value of 0 has retained the original operation, that is:

	LW	rx, offset(sp)

and:

	SW	rx, offset(sp)

for LWSP and SWSP respectively.  In hardware predating the MIPS16e2 ASE
other values may or may not have been decoded, architecturally yielding
unpredictable results, and in our unaligned memory access emulation we
have treated the 3-bit field as a don't-care, that is effectively making
all the possible encodings of the field alias to the architecturally
defined encoding of 0.

For the non-zero values of the `sel' field the MIPS16e2 ASE has in
particular defined these GP-relative operations:

	LW	rx, offset(gp)		# sel = 1
	LH	rx, offset(gp)		# sel = 2
	LHU	rx, offset(gp)		# sel = 4

and

	SW	rx, offset(gp)		# sel = 1
	SH	rx, offset(gp)		# sel = 2

for LWSP and SWSP respectively, which will trap with an Address Error
exception if the effective address calculated is not naturally-aligned
for the operation requested.  These operations have been selected for
unaligned access emulation, for consistency with the corresponding
regular MIPS and microMIPS operations.

For other non-zero values of the `sel' field the MIPS16e2 ASE has
defined further operations, which however either never trap with an
Address Error exception, such as LWL or GP-relative SB, or are not
supposed to be emulated, such as LL or SC.  These operations have been
excluded from unaligned access emulation, should an Address
Error exception ever happen with them.

Subdecode the `sel' field in unaligned access emulation then for the
extended encodings of the instructions selected with the LWSP and SWSP
major opcodes, whenever support for the MIPS16e2 ASE has been detected
in hardware, and either emulate the operation requested or send SIGBUS
to the originating process, according to the selection described above.
For hardware implementing the MIPS16 ASE but lacking MIPS16e2 ASE
support, retain the original interpretation of the `sel' field.

The effects of this change are illustrated with the following user
program:

$ cat mips16e2-test.c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	int64_t scratch[16] = { 0 };
	int32_t *tmp0, *tmp1, *tmp2;
	int i;

	scratch[0] = 0xc8c7c6c5c4c3c2c1;
	scratch[1] = 0xd0cfcecdcccbcac9;

	asm volatile(
		"move	%0, $sp\n\t"
		"move	%1, $gp\n\t"
		"move	$sp, %4\n\t"
		"addiu	%2, %4, 8\n\t"
		"move	$gp, %2\n\t"

		"lw	%2, 2($sp)\n\t"
		"sw	%2, 16(%4)\n\t"
		"lw	%2, 2($gp)\n\t"
		"sw	%2, 24(%4)\n\t"

		"lw	%2, 1($sp)\n\t"
		"sw	%2, 32(%4)\n\t"
		"lh	%2, 1($gp)\n\t"
		"sw	%2, 40(%4)\n\t"

		"lw	%2, 3($sp)\n\t"
		"sw	%2, 48(%4)\n\t"
		"lhu	%2, 3($gp)\n\t"
		"sw	%2, 56(%4)\n\t"

		"lw	%2, 0(%4)\n\t"
		"sw	%2, 66($sp)\n\t"
		"lw	%2, 8(%4)\n\t"
		"sw	%2, 82($gp)\n\t"

		"lw	%2, 0(%4)\n\t"
		"sw	%2, 97($sp)\n\t"
		"lw	%2, 8(%4)\n\t"
		"sh	%2, 113($gp)\n\t"

		"move	$gp, %1\n\t"
		"move	$sp, %0"
		: "=&d" (tmp0), "=&d" (tmp1), "=&d" (tmp2), "=m" (scratch)
		: "d" (scratch));

	for (i = 0; i < sizeof(scratch) / sizeof(*scratch); i += 2)
		printf("%016" PRIx64 "\t%016" PRIx64 "\n",
		       scratch[i], scratch[i + 1]);

	return 0;
}
$

to be compiled with:

$ gcc -mips16 -mips32r2 -Wa,-mmips16e2 -o mips16e2-test mips16e2-test.c
$

With 74Kf hardware, which does not implement the MIPS16e2 ASE, this
program produces the following output:

$ ./mips16e2-test
c8c7c6c5c4c3c2c1        d0cfcecdcccbcac9
00000000c6c5c4c3        00000000c6c5c4c3
00000000c5c4c3c2        00000000c5c4c3c2
00000000c7c6c5c4        00000000c7c6c5c4
0000c4c3c2c10000        0000000000000000
0000cccbcac90000        0000000000000000
000000c4c3c2c100        0000000000000000
000000cccbcac900        0000000000000000
$

regardless of whether the change has been applied or not.

With the change not applied and interAptiv MR2 hardware[2], which does
implement the MIPS16e2 ASE, it produces the following output:

$ ./mips16e2-test
c8c7c6c5c4c3c2c1        d0cfcecdcccbcac9
00000000c6c5c4c3        00000000cecdcccb
00000000c5c4c3c2        00000000cdcccbca
00000000c7c6c5c4        00000000cfcecdcc
0000c4c3c2c10000        0000000000000000
0000000000000000        0000cccbcac90000
000000c4c3c2c100        0000000000000000
0000000000000000        000000cccbcac900
$

which shows that for GP-relative operations the correct trapping address
calculated from $gp has been obtained from the CP0 BadVAddr register and
so has data from the source operand, however masking and extension has
not been applied for halfword operations.

With the change applied and interAptiv MR2 hardware, the program
produces the following output:

$ ./mips16e2-test
c8c7c6c5c4c3c2c1        d0cfcecdcccbcac9
00000000c6c5c4c3        00000000cecdcccb
00000000c5c4c3c2        00000000ffffcbca
00000000c7c6c5c4        000000000000cdcc
0000c4c3c2c10000        0000000000000000
0000000000000000        0000cccbcac90000
000000c4c3c2c100        0000000000000000
0000000000000000        0000000000cac900
$

as expected.

References:

[1] "MIPS32 Architecture for Programmers: MIPS16e2 Application-Specific
    Extension Technical Reference Manual", Imagination Technologies
    Ltd., Document Number: MD01172, Revision 01.00, April 26, 2016

[2] "MIPS32 interAptiv Multiprocessing System Software User's Manual",
    Imagination Technologies Ltd., Document Number: MD00904, Revision
    02.01, June 15, 2016, Chapter 24 "MIPS16e Application-Specific
    Extension to the MIPS32 Instruction Set", pp. 871-883

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16095/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-05 14:07:20 +02:00
Maciej W. Rozycki
8d1630f137 MIPS: MIPS16e2: Identify ASE presence
Identify the presence of the MIPS16e2 ASE as per the architecture
specification[1], by checking for CP0 Config5.CA2 bit being 1[2].

References:

[1] "MIPS32 Architecture for Programmers: MIPS16e2 Application-Specific
    Extension Technical Reference Manual", Imagination Technologies
    Ltd., Document Number: MD01172, Revision 01.00, April 26, 2016,
    Section 1.2 "Software Detection of the ASE", p. 5

[2] "MIPS32 interAptiv Multiprocessing System Software User's Manual",
    Imagination Technologies Ltd., Document Number: MD00904, Revision
    02.01, June 15, 2016, Section 2.2.1.6 "Device Configuration 5 --
    Config5 (CP0 Register 16, Select 5)", pp. 71-72

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16094/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-07-05 14:06:44 +02:00
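
A sketch of the probe described above, in the CPU-probe path (the CA2 bit mask and ASE flag names are assumed to be the ones this change introduces):

	/* decode_config5(), sketch: Config5.CA2 == 1 advertises MIPS16e2 */
	unsigned int config5 = read_c0_config5();

	if (config5 & MIPS_CONF5_CA2)
		c->ases |= MIPS_ASE_MIPS16E2;
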
Linus Torvalds
9a9594efe5 Merge branch 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull SMP hotplug updates from Thomas Gleixner:
 "This update is primarily a cleanup of the CPU hotplug locking code.

  The hotplug locking mechanism is an open coded RWSEM, which allows
  recursive locking. The main problem with that is the recursive nature
  as it evades the full lockdep coverage and hides potential deadlocks.

  The rework replaces the open coded RWSEM with a percpu RWSEM and
  establishes full lockdep coverage that way.

  The bulk of the changes fix up recursive locking issues and address
  the now fully reported potential deadlocks all over the place. Some of
  these deadlocks have been observed in the RT tree, but on mainline the
  probability was low enough to hide them away."

* 'smp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
  cpu/hotplug: Constify attribute_group structures
  powerpc: Only obtain cpu_hotplug_lock if called by rtasd
  ARM/hw_breakpoint: Fix possible recursive locking for arch_hw_breakpoint_init
  cpu/hotplug: Remove unused check_for_tasks() function
  perf/core: Don't release cred_guard_mutex if not taken
  cpuhotplug: Link lock stacks for hotplug callbacks
  acpi/processor: Prevent cpu hotplug deadlock
  sched: Provide is_percpu_thread() helper
  cpu/hotplug: Convert hotplug locking to percpu rwsem
  s390: Prevent hotplug rwsem recursion
  arm: Prevent hotplug rwsem recursion
  arm64: Prevent cpu hotplug rwsem recursion
  kprobes: Cure hotplug lock ordering issues
  jump_label: Reorder hotplug lock and jump_label_lock
  perf/tracing/cpuhotplug: Fix locking order
  ACPI/processor: Use cpu_hotplug_disable() instead of get_online_cpus()
  PCI: Replace the racy recursion prevention
  PCI: Use cpu_hotplug_disable() instead of get_online_cpus()
  perf/x86/intel: Drop get_online_cpus() in intel_snb_check_microcode()
  x86/perf: Drop EXPORT of perf_check_microcode
  ...
2017-07-03 18:08:06 -07:00
James Hogan
8542363633 MIPS: Avoid accidental raw backtrace
Since commit 81a76d7119 ("MIPS: Avoid using unwind_stack() with
usermode") show_backtrace() invokes the raw backtracer when
cp0_status & ST0_KSU indicates user mode to fix issues on EVA kernels
where user and kernel address spaces overlap.

However this is used by show_stack() which creates its own pt_regs on
the stack and leaves cp0_status uninitialised in most of the code paths.
This results in non-deterministic use of the raw backtracer
depending on the previous stack content.

show_stack() deals exclusively with kernel mode stacks anyway, so
explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure
we get a useful backtrace.

Fixes: 81a76d7119 ("MIPS: Avoid using unwind_stack() with usermode")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/16656/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:42:15 +02:00
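
A sketch of the initialisation described above, in show_stack() (KSU_KERNEL is the existing kernel-mode value, numerically 0):

	struct pt_regs regs;

	/* show_stack() only ever walks kernel stacks; set the otherwise
	 * uninitialised status word so cp0_status & ST0_KSU reads as kernel
	 * mode and the proper unwinder is used instead of the raw one. */
	regs.cp0_status = KSU_KERNEL;
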
Paul Burton
d8550860d9 MIPS: Fix IRQ tracing & lockdep when rescheduling
When the scheduler sets TIF_NEED_RESCHED & we call into the scheduler
from arch/mips/kernel/entry.S we disable interrupts. This is true
regardless of whether we reach work_resched from syscall_exit_work,
resume_userspace or by looping after calling schedule(). Although we
disable interrupts in these paths we don't call trace_hardirqs_off()
before calling into C code which may acquire locks, and we therefore
leave lockdep with an inconsistent view of whether interrupts are
disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are
both enabled.

Without tracing this interrupt state lockdep will print warnings such
as the following once a task returns from a syscall via
syscall_exit_partial with TIF_NEED_RESCHED set:

[   49.927678] ------------[ cut here ]------------
[   49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8
[   49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
[   49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197
[   49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4
[   49.974431]         0000000000000000 0000000000000000 0000000000000000 000000000000004a
[   49.985300]         ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8
[   49.996194]         0000000000000001 0000000000000000 0000000000000000 0000000077c8030c
[   50.007063]         000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88
[   50.017945]         0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498
[   50.028827]         0000000000000000 0000000000000001 0000000000000000 0000000000000000
[   50.039688]         0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc
[   50.050575]         00000000140084e0 0000000000000000 0000000000000000 0000000000040a00
[   50.061448]         0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc
[   50.072327]         ...
[   50.076087] Call Trace:
[   50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8
[   50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190
[   50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108
[   50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48
[   50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8
[   50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0
[   50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8
[   50.129221] [<ffffffff80946a60>] schedule+0x30/0x98
[   50.135659] [<ffffffff80106278>] work_resched+0x8/0x34
[   50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]---
[   50.148405] possible reason: unannotated irqs-off.
[   50.154600] irq event stamp: 400463
[   50.159566] hardirqs last  enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8
[   50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0
[   50.183897] softirqs last  enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8
[   50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128

Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off()
when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking
schedule() following the work_resched label because:

 1) Interrupts are disabled regardless of the path we take to reach
    work_resched() & schedule().

 2) Performing the tracing here avoids the need to do it in paths which
    disable interrupts but don't call out to C code before hitting a
    path which uses the RESTORE_SOME macro that will call
    trace_hardirqs_on() or trace_hardirqs_off() as appropriate.

We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling
syscall_trace_leave() for similar reasons, ensuring that lockdep has a
consistent view of state after we re-enable interrupts.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 1da177e4c3 ("Linux-2.6.12-rc2")
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/15385/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:40:18 +02:00
Paul Burton
161c51ccb7 MIPS: pm-cps: Drop manual cache-line alignment of ready_count
We allocate memory for a ready_count variable per-CPU, which is accessed
via a cached non-coherent TLB mapping to perform synchronisation between
threads within the core using LL/SC instructions. In order to ensure
that the variable is contained within its own data cache line we
allocate 2 lines worth of memory & align the resulting pointer to a line
boundary. This is however unnecessary, since kmalloc is guaranteed to
return memory which is at least cache-line aligned (see
ARCH_DMA_MINALIGN). Stop the redundant manual alignment.

Besides cleaning up the code & avoiding needless work, this has the side
effect of avoiding an arithmetic error found by Bryan on 64 bit systems
due to the 32 bit size of the former dlinesz. This led the ready_count
variable to have its upper 32b cleared erroneously for MIPS64 kernels,
causing problems when ready_count was later used on MIPS64 via cpuidle.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 3179d37ee1 ("MIPS: pm-cps: add PM state entry code for CPS systems")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v3.16+
Patchwork: https://patchwork.linux-mips.org/patch/15383/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:38:55 +02:00
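
A before/after sketch of the allocation described above (variable names approximate):

	u32 *ready_count;

	/* before: over-allocate and align by hand; the 32-bit dlinesz is
	 * what caused the arithmetic error on 64-bit kernels */
	unsigned int dlinesz = cpu_dcache_line_size();
	void *ptr = kmalloc(2 * dlinesz, GFP_KERNEL);
	ready_count = (u32 *)ALIGN((unsigned long)ptr, dlinesz);

	/* after: kmalloc() already returns memory aligned to at least
	 * ARCH_DMA_MINALIGN, i.e. a cache line */
	ready_count = kmalloc(sizeof(*ready_count), GFP_KERNEL);
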
Paul Burton
e7bc855742 MIPS: Add CPU shared FTLB feature detection
Some systems share FTLB RAMs or entries between sibling CPUs (ie.
hardware threads, or VP(E)s, within a core). These properties require
kernel handling in various places. As a start this patch introduces
cpu_has_shared_ftlb_ram & cpu_has_shared_ftlb_entries feature macros
which we set appropriately for I6400 & I6500 CPUs. Further patches will
make use of these macros as appropriate.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16202/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:29 +02:00
Paul Burton
fa7a3b4a72 MIPS: CPS: Handle spurious VP starts more gracefully
On pre-r6 systems with the MT ASE the CPS SMP code included checks to
halt the VPE running mips_cps_boot_vpes() if its bit in the struct
core_boot_config vpe_mask field is clear. This was largely done in order
to allow us to start arbitrary VPEs within a core despite the fact that
hardware is typically configured to run only VPE0 after powering up a
core. VPE0 would start the desired other VPEs, halt itself, and the fact
that VPE0 started would be largely hidden & irrelevant.

In MIPSr6 multithreading we have control over which VPs start executing
when a core powers up via the core's CPC registers, accessed remotely
through the redirect block. For this reason the MIPSr6 multithreading
path in mips_cps_boot_vpes() hasn't bothered up until now to handle
halting the VP running it.

However it is possible to power up cores entirely in hardware by using a
pwr_up pin associated with the core. Unfortunately some systems wire
this pin to a logic 1, which means that it is possible for a core to
power up at a point that software doesn't expect. The result is that we
generally go execute the kernel on a CPU that ought not to be running &
the results can be unpredictable.

Handle this case by stopping VPs that we don't expect to be running in
mips_cps_boot_vpes() - with this change even if a core powers up it will
do nothing useful & all VPs within it will stop running before they
proceed to run general kernel code & do any damage. Ideally we would
produce some sort of warning here, but given the stage of core bringup
this happens at that would be non-trivial. We also will only hit this if
a core starts up after being offlined via hotplug, and when that happens
we will already produce a warning that the CPU didn't power down in
cps_cpu_die() which seems sufficient.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16198/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:28 +02:00
Paul Burton
4ad755c9e3 MIPS: CPS: Handle cores not powering down more gracefully
If we get into a state where a core that ought to power down isn't doing
so then the current result is that another CPU gets stuck inside
cps_cpu_die() waiting for the CPU that ought to be powering down to do so.
The best case scenario is that we then trigger RCU stall messages or
lockup messages, but neither makes it particularly clear what's
happening.

Handle this more gracefully by introducing a timeout beyond which we
warn the user that the core didn't power down & stop waiting for it.
This at least allows the CPU running cps_cpu_die() to continue normally,
and, presuming the CPU that powered back up is doing nothing harmful,
the system will hopefully continue functioning as normal.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16197/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:28 +02:00
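
A sketch of the bounded wait described above, in cps_cpu_die() (the power-state helper and the timeout length are assumptions):

	unsigned long timeout = jiffies + msecs_to_jiffies(100);

	/* Instead of spinning forever on a core that never gates its power,
	 * give up after a while and warn. */
	while (!core_powered_down(core)) {		/* assumed helper */
		if (time_after(jiffies, timeout)) {
			pr_warn("CPU%u didn't power down, continuing anyway\n",
				cpu);
			break;
		}
		cpu_relax();
	}
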
Paul Burton
5570ba2ee9 MIPS: CPS: Prevent multi-core with dcache aliasing
Systems using the MIPS Coherence Manager (CM) cannot support multi-core
SMP with dcache aliasing. This is because CPU caches are VIPT, but
interventions in CM-based systems provide only the physical address to
remote caches. This means that interventions may behave incorrectly in
the presence of an aliasing dcache, since the physical address used
when handling an intervention may lead to operation on an aliased cache
line rather than the correct line.

Prevent us from running into this issue by refusing to boot secondary
cores in systems where dcache aliasing may occur.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16196/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:28 +02:00
Paul Burton
2f93a60c3d MIPS: CM: WARN on attempt to lock invalid VP, not BUG
Rather than using BUG_ON in the case of an invalid attempt to lock
access to a non-zero VP on a pre-CM3 system, use WARN_ON so that we have
even the slightest chance of recovery.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16194/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:28 +02:00
Paul Burton
516db1c61f MIPS: CM: Avoid per-core locking with CM3 & higher
CM3 provides a GCR_CL_OTHER register per VP, rather than only per core.
This means that we don't need to prevent other VPs within a core from
racing with code that makes use of the core-other register region.

Reduce locking overhead by demoting the per-core spinlock providing
protection for CM2.5 & lower to a per-CPU/per-VP spinlock for CM3 &
higher.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16193/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:28 +02:00
Paul Burton
9b03d8abe0 MIPS: Skip IPI setup if we only have 1 CPU
If we're running on a system with only 1 possible CPU then it makes no
sense to reserve or initialise IPIs since we'll never use them. Avoid
doing so.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16192/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:27 +02:00
Maciej W. Rozycki
f259fe295e MIPS: Use `pr_debug' for messages from `__compute_return_epc_for_insn'
Reduce the log level for branch emulation error messages issued before
sending SIGILL by `__compute_return_epc_for_insn' as these are triggered
by user software and are not an event that would normally require any
attention.  The same signal sent from elsewhere does not actually leave
any trace in the kernel log at all.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16402/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:27 +02:00
Maciej W. Rozycki
27fe2200da MIPS: Fix a typo: s/preset/present/ in r2-to-r6 emulation error message
This is a user-visible message, so we want it to be spelled correctly.

Fixes: 5f9f41c474 ("MIPS: kernel: Prepare the JR instruction for emulation on MIPS R6")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16400/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:27 +02:00
Maciej W. Rozycki
a60b1a5bf8 MIPS: Send SIGILL for R6 branches in `__compute_return_epc_for_insn'
Fix:

* commit 8467ca0122 ("MIPS: Emulate the new MIPS R6 branch compact
(BC) instruction"),

* commit 84fef63012 ("MIPS: Emulate the new MIPS R6 BALC
instruction"),

* commit 69b9a2fd05 ("MIPS: Emulate the new MIPS R6 BEQZC and JIC
instructions"),

* commit 28d6f93d20 ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC
instructions"),

* commit c893ce38b2 ("MIPS: Emulate the new MIPS R6 BOVC, BEQC and
BEQZALC instructions")

and send SIGILL rather than returning -SIGILL for R6 branch and jump
instructions.  Returning -SIGILL is never correct as the API defines
this function's result upon error to be -EFAULT and a signal actually
issued.

Fixes: 8467ca0122 ("MIPS: Emulate the new MIPS R6 branch compact (BC) instruction")
Fixes: 84fef63012 ("MIPS: Emulate the new MIPS R6 BALC instruction")
Fixes: 69b9a2fd05 ("MIPS: Emulate the new MIPS R6 BEQZC and JIC instructions")
Fixes: 28d6f93d20 ("MIPS: Emulate the new MIPS R6 BNEZC and JIALC instructions")
Fixes: c893ce38b2 ("MIPS: Emulate the new MIPS R6 BOVC, BEQC and BEQZALC instructions")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16399/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:27 +02:00
Maciej W. Rozycki
fef40be6da MIPS: Send SIGILL for linked branches in `__compute_return_epc_for_insn'
Fix commit 319824eabc ("MIPS: kernel: branch: Do not emulate the
branch likelies on MIPS R6") and also send SIGILL rather than returning
-SIGILL for BLTZAL, BLTZALL, BGEZAL and BGEZALL instruction encodings no
longer supported in R6, except where emulated.  Returning -SIGILL is
never correct as the API defines this function's result upon error to be
-EFAULT and a signal actually issued.

Fixes: 319824eabc ("MIPS: kernel: branch: Do not emulate the branch likelies on MIPS R6")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16398/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:27 +02:00
Maciej W. Rozycki
1f4edde422 MIPS: Rename `sigill_r6' to `sigill_r2r6' in `__compute_return_epc_for_insn'
Use the more accurate `sigill_r2r6' name for the label used in the case
of sending SIGILL in the absence of the instruction emulator for an
earlier ISA level instruction that has been removed as from the R6 ISA,
so that the `sigill_r6' name is freed for the situation where an R6
instruction is not supposed to be interpreted, because the executing
processor does not support the R6 ISA.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.19+
Patchwork: https://patchwork.linux-mips.org/patch/16397/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:26 +02:00
Maciej W. Rozycki
7b82c1058a MIPS: Send SIGILL for BPOSGE32 in `__compute_return_epc_for_insn'
Fix commit e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP ASE.") and
send SIGILL rather than SIGBUS whenever an unimplemented BPOSGE32 DSP
ASE instruction has been encountered in `__compute_return_epc_for_insn'
as our Reserved Instruction exception handler would in response to an
attempt to actually execute the instruction.  Sending SIGBUS only makes
sense for the unaligned PC case, since moved to `__compute_return_epc'.
Adjust function documentation accordingly, correct formatting and use
`pr_info' rather than `printk' as the other exit path already does.

Fixes: e50c0a8fa6 ("Support the MIPS32 / MIPS64 DSP ASE.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 2.6.14+
Patchwork: https://patchwork.linux-mips.org/patch/16396/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:26 +02:00
Maciej W. Rozycki
a9db101b73 MIPS: Actually decode JALX in `__compute_return_epc_for_insn'
Complement commit fb6883e580 ("MIPS: microMIPS: Support handling of
delay slots.") and actually decode the regular MIPS JALX major
instruction opcode, the handling of which has been added with the said
commit for EPC calculation in `__compute_return_epc_for_insn'.

Fixes: fb6883e580 ("MIPS: microMIPS: Support handling of delay slots.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16394/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:26 +02:00
Paul Burton
3ba7f44d2b MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()
Implement support for 1 & 2 byte cmpxchg() using read-modify-write atop
a 4 byte cmpxchg(). This allows us to support these atomic operations
despite the MIPS ISA only providing 4 & 8 byte atomic operations.

This is required in order to support queued rwlocks (qrwlock) in a later
patch, since these make use of a 1 byte cmpxchg() in their slow path.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:25 +02:00
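
A sketch of the read-modify-write idea described above, shown for a 1-byte cmpxchg built on the native 4-byte cmpxchg() (little-endian byte-lane selection only; the kernel version also handles big-endian and the 2-byte case):

static inline u8 cmpxchg_u8_emulated(volatile u8 *ptr, u8 old, u8 new)
{
	/* Aligned 32-bit word containing *ptr, and the byte's bit offset
	 * within it (little-endian assumed for brevity). */
	volatile u32 *word = (volatile u32 *)((unsigned long)ptr & ~0x3UL);
	unsigned int shift = ((unsigned long)ptr & 0x3) * 8;
	u32 mask = 0xffu << shift;
	u32 loaded, replacement;

	do {
		loaded = *word;
		if (((loaded & mask) >> shift) != old)
			return (loaded & mask) >> shift;  /* comparison failed */
		replacement = (loaded & ~mask) | ((u32)new << shift);
	} while (cmpxchg(word, loaded, replacement) != loaded);

	return old;
}
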
Paul Burton
b70eb30056 MIPS: cmpxchg: Implement 1 byte & 2 byte xchg()
Implement 1 & 2 byte xchg() using read-modify-write atop a 4 byte
cmpxchg(). This allows us to support these atomic operations despite the
MIPS ISA only providing for 4 & 8 byte atomic operations.

This is required in order to support queued spinlocks (qspinlock) in a
later patch, since these make use of a 2 byte xchg() in their slow path.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:25 +02:00
Miodrag Dinic
3f88ec6333 MIPS: unaligned: Add DSP lwx & lhx missaligned access support
Add handling of misaligned access for DSP load instructions
lwx & lhx.

Since DSP instructions share SPECIAL3 opcode with other non-DSP
instructions, necessary logic was inserted for distinguishing
between instructions with SPECIAL3 opcode. For that purpose,
the instruction format for DSP instructions is added to
arch/mips/include/uapi/asm/inst.h.

Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtech.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: Goran.Ferenc@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16511/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:24 +02:00
Miodrag Dinic
296a7624f5 MIPS: cmdline: Add support for 'memmap' parameter
Implement support for parsing 'memmap' kernel command line parameter.

This patch covers parsing of the following two formats for 'memmap'
parameter values:

  - nn[KMG]@ss[KMG]
  - nn[KMG]$ss[KMG]

  ([KMG] = K M or G (kilo, mega, giga))

These two allowed formats for parameter value are already documented
in file kernel-parameters.txt in Documentation/admin-guide folder.
Some architectures already support them, but MIPS did not prior to
this patch.

Excerpt from Documentation/admin-guide/kernel-parameters.txt:

memmap=nn[KMG]@ss[KMG]
    [KNL] Force usage of a specific region of memory.
    Region of memory to be used is from ss to ss+nn.

memmap=nn[KMG]$ss[KMG]
    Mark specific memory as reserved.
    Region of memory to be reserved is from ss to ss+nn.
    Example: Exclude memory from 0x18690000-0x1869ffff
        memmap=64K$0x18690000
        or
        memmap=0x10000$0x18690000

There is no need to update this documentation file with respect to
this patch.

Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16508/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-29 02:42:23 +02:00
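
A sketch of the parsing this adds, following the common early_param() pattern (add_memory_region() and the BOOT_MEM_* constants are the usual MIPS boot-mem interfaces, assumed here):

static int __init early_parse_memmap(char *p)
{
	u64 start_at, mem_size;

	if (!p)
		return -EINVAL;

	mem_size = memparse(p, &p);		/* nn[KMG] */
	if (*p == '@') {			/* nn[KMG]@ss[KMG]: force use as RAM */
		start_at = memparse(p + 1, &p);
		add_memory_region(start_at, mem_size, BOOT_MEM_RAM);
	} else if (*p == '$') {			/* nn[KMG]$ss[KMG]: reserve */
		start_at = memparse(p + 1, &p);
		add_memory_region(start_at, mem_size, BOOT_MEM_RESERVED);
	} else {
		pr_warn("Unrecognized memmap syntax: %s\n", p);
		return -EINVAL;
	}
	return 0;
}
early_param("memmap", early_parse_memmap);
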
Huacai Chen
0a00024d7a MIPS: Loongson: Add Loongson-3A R3 basic support
Loongson-3A R3 is very similar to Loongson-3A R2.

All members of the Loongson-3 CPU family:

Code-name       Brand-name       PRId
Loongson-3A R1  Loongson-3A1000  0x6305
Loongson-3A R2  Loongson-3A2000  0x6308
Loongson-3A R3  Loongson-3A3000  0x6309
Loongson-3B R1  Loongson-3B1000  0x6306
Loongson-3B R2  Loongson-3B1500  0x6307

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16585/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:42 +02:00
James Hogan
203e090ade MIPS: Branch straight to ll in mips_atomic_set()
Adjust the atomic loop in the MIPS_ATOMIC_SET operation of the sysmips
system call to branch straight back to the linked load rather than
jumping via a different subsection (whose purpose remains a mystery to
me).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16150/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:40 +02:00
James Hogan
4915e1b043 MIPS: Fix mips_atomic_set() with EVA
EVA linked loads (LLE) and conditional stores (SCE) should be used on
EVA kernels for the MIPS_ATOMIC_SET operation of the sysmips system
call, or else the atomic set will apply to the kernel view of the
virtual address space (potentially unmapped on EVA kernels) rather than
the user view (TLB mapped).

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15.x-
Patchwork: https://patchwork.linux-mips.org/patch/16151/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:40 +02:00
James Hogan
49955d84cd MIPS: Save static registers before sysmips
The MIPS sysmips system call handler may return directly from the
MIPS_ATOMIC_SET case (mips_atomic_set()) to syscall_exit. This path
restores the static (callee saved) registers, however they won't have
been saved on entry to the system call.

Use the save_static_function() macro to create a __sys_sysmips wrapper
function which saves the static registers before calling sys_sysmips, so
that the correct static register state is restored by syscall_exit.

Fixes: f1e39a4a61 ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16149/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:40 +02:00
James Hogan
2ec420b26f MIPS: Fix mips_atomic_set() retry condition
The inline asm retry check in the MIPS_ATOMIC_SET operation of the
sysmips system call has been backwards since commit f1e39a4a61 ("MIPS:
Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler")
merged in v2.6.32, resulting in the non R10000_LLSC_WAR case retrying
until the operation was not atomic, before returning the new value that
was probably just written multiple times instead of the old value.

Invert the branch condition to fix that particular issue.

Fixes: f1e39a4a61 ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16148/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:40 +02:00
Marcin Nowakowski
736add2412 MIPS: perf: add I6500 handling
Add a definition of the perf registers for the new I6500 core.

Since I6500 has the same event definitions as I6400, re-use the existing
i6400 map structures by renaming them to a slightly more generic
'i6x00_***_map'.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16362/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:39 +02:00
Paul Burton
859aeb1b0d MIPS: Probe the I6500 CPU
Introduce the I6500 PRID & probe it just the same way as I6400. The MIPS
I6500 is the latest in Imagination Technologies' I-Class range of CPUs,
with a focus on scalability & heterogeneity. It introduces the notion of
multiple clusters to the MIPS Coherent Processing System, allowing for a
far higher total number of cores & threads in a system when compared
with its predecessors. Clusters don't need to be identical, and may
contain differing numbers of cores & IOCUs, or cores with differing
properties.

This patch alone adds the basic support for booting Linux on an I6500
CPU without support for any of its new functionality, for which support
will be introduced in further patches.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16190/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:39 +02:00
David Daney
669c409222 MIPS: Give __secure_computing() access to syscall arguments.
KProbes of __seccomp_filter() are not very useful without access to
the syscall arguments.

Do what x86 does, and populate a struct seccomp_data to be passed to
__secure_computing().  This allows samples/bpf/tracex5 to extract a
sensible trace.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16368/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:39 +02:00
Paul Burton
430d0b8894 MIPS: module: Unify rel & rela reloc handling
The module load code has previously had entirely separate
implementations for rel & rela style relocs, which unnecessarily
duplicates a whole lot of code. Unify the implementations of both types
of reloc, sharing the bulk of the code.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15832/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 12:22:38 +02:00