On MIPS the GLOBAL bit of the PTE must have the same value in any
aligned pair of PTEs. These pairs of PTEs are referred to as
"buddies". In a SMP system is is possible for two CPUs to be calling
set_pte() on adjacent PTEs at the same time. There is a race between
setting the PTE and a different CPU setting the GLOBAL bit in its
buddy PTE.
This race can be observed when multiple CPUs are executing
vmap()/vfree() at the same time.
Make setting the buddy PTE's GLOBAL bit an atomic operation to close
the race condition.
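For illustration only, a minimal C sketch of why the read-modify-write must be atomic (generic C11 atomics and a hypothetical bit position, not the actual MIPS ll/sc code):

#include <stdatomic.h>
#include <stdint.h>

#define PTE_GLOBAL (1u << 0)	/* hypothetical bit position */

/* Racy: a concurrent set_pte() on the buddy between the load and the
 * store is silently overwritten by the stale value written back. */
static void set_global_racy(uint32_t *buddy)
{
	*buddy |= PTE_GLOBAL;
}

/* Atomic read-modify-write: only the GLOBAL bit is touched, so a
 * concurrent set_pte() of the buddy PTE cannot be lost. */
static void set_global_atomic(_Atomic uint32_t *buddy)
{
	atomic_fetch_or_explicit(buddy, PTE_GLOBAL, memory_order_relaxed);
}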
The case of CONFIG_64BIT_PHYS_ADDR && CONFIG_CPU_MIPS32 is *not*
handled.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10835/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
KGDB fails to build after f51e2f1911 ("ARC: make sure instruction_pointer()
returns unsigned value")
The hack to force one specific reg to unsigned backfired. There's no
reason to keep the regs signed after all.
| CC arch/arc/kernel/kgdb.o
|../arch/arc/kernel/kgdb.c: In function 'kgdb_trap':
| ../arch/arc/kernel/kgdb.c:180:29: error: lvalue required as left operand of assignment
| instruction_pointer(regs) -= BREAK_INSTR_SIZE;
Reported-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Fixes: f51e2f1911 ("ARC: make sure instruction_pointer() returns unsigned value")
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This register is required to be passed to the SATA PHY driver
to work around erratum i783 (SATA Lockup After SATA DPLL Unlock/Relock).
Signed-off-by: Roger Quadros <rogerq@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Vince Weaver and Stephane Eranian reported warnings in the PEBS
code when running the perf fuzzer. Stephane wrote:
> I can reproduce the problem on my HSW running the fuzzer.
>
> I can see why this could be happening if you are mixing PEBS and non PEBS events
> in the bottom 4 counters. I suspect:
> for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) {
> if ((counts[bit] == 0) && (error[bit] == 0))
> continue;
>
> This test is not correct when you have non-PEBS events mixed with
> PEBS events and they overflow at the same time. They will have
> counts[i] != 0 but error[i] == 0, and thus you fall thru the loop
> and hit the assert. Or it is something along those lines.
The only way I can make this work is if ->status only has !PEBS events
set, because if it has both set we'll take that slow path which masks
out the !PEBS bits.
After masking there are 3 options:
- there is one bit set, and it's @bit: we increment counts[bit].
- there are multiple bits set, we increment error[] for each set bit,
we do not increment counts[].
- there are no bits set, we do nothing.
The intent was to never increment counts[] for !PEBS events.
Now if we start out with only a single !PEBS event set, we'll pass the
test and increment counts[] for a !PEBS and hit the warn.
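A simplified sketch of the accounting rules described above (illustrative C, not the actual drain_pebs() code; 'status' is assumed to be the per-record overflow mask already restricted to the PEBS-capable counters):

#include <stdint.h>

static void account_record(uint64_t status, int counts[], int error[])
{
	int set = __builtin_popcountll(status);

	if (set == 1) {
		counts[__builtin_ctzll(status)]++;	/* unambiguous owner */
	} else if (set > 1) {
		while (status) {			/* ambiguous record */
			error[__builtin_ctzll(status)]++;
			status &= status - 1;		/* clear lowest set bit */
		}
	}
	/* set == 0: not a PEBS event, do nothing */
}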
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
When disabling a PEBS event, we need to drain the buffer. Doing so
requires a correct cpuc->pebs_active mask.
The current code clears the pebs_active bit before draining the
buffer. Fix that.
Signed-off-by: "Liang, Kan" <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver<vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F07701885A65@SHSMSX103.ccr.corp.intel.com
[ Fixed the SOB. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This patch adds an MSR PMU to support free-running MSR counters, such
as time- and frequency-related counters (TSC, IA32_APERF, IA32_MPERF
and IA32_PPERF), as well as SMI_COUNT.
The events are exposed in sysfs for use by perf stat and other tools.
The files are under /sys/devices/msr/events/
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kan Liang <kan.liang@intel.com>
[ s/freq/msr/, added SMI_COUNT, fixed bugs. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: adrian.hunter@intel.com
Cc: dsahern@gmail.com
Cc: eranian@google.com
Cc: jolsa@kernel.org
Cc: mark.rutland@arm.com
Cc: namhyung@kernel.org
Link: http://lkml.kernel.org/r/1437407346-31186-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The next patch adds a new perf extra register where 0x1ff is not a valid
value. Use 0x11 instead.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435707205-6676-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
merge_attr() allows merging two sysfs attribute tables.
Export it to be usable by other files too.
The next patch is going to use this to extend the sysfs format
attributes for a CPU.
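For illustration, a generic standalone sketch of what merging two NULL-terminated pointer arrays involves (not the kernel's merge_attr() implementation):

#include <stdlib.h>
#include <string.h>

/* Concatenate two NULL-terminated pointer arrays into a freshly
 * allocated, NULL-terminated array. */
static void **merge_ptr_arrays(void **a, void **b)
{
	size_t na = 0, nb = 0;
	void **out;

	while (a[na])
		na++;
	while (b[nb])
		nb++;
	out = malloc((na + nb + 1) * sizeof(*out));
	if (!out)
		return NULL;
	memcpy(out, a, na * sizeof(*out));
	memcpy(out + na, b, (nb + 1) * sizeof(*out));	/* copies the NULL */
	return out;
}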
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1435612935-24425-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In callstack mode the LBR is not a ring buffer, but a stack that grows up
and down. This means in this case we don't need to access all LBRs, only the
ones up to TOS. Do this optimization for the normal LBR read, and the context
switch save/restore code. For save/restore it can be done unconditionally, as
it only runs when call stack mode is active.
This recovers some of the cost of going to 32 LBRs on Skylake.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Use the correct index to save/restore the LBR_INFO_x MSR in
callstack mode. This is more of a cleanup, as even with the wrong
index the register was correctly saved/restored, and LBR callgraph
mode in the perf tools does not really need anything in LBR_INFO.
But it is still better to use the right index.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add perf core PMU support for future Intel Skylake CPU cores.
The code is based on Haswell/Broadwell.
There is a new cache event list, based on the updated Haswell
event list.
Skylake has removed most counter constraints on basic
events, so the basic constraints table now only has a single
entry (plus the fixed counters).
TSX support and various other setups are all shared with Haswell.
Skylake has 32 LBR entries. Add a new LBR init function
to set this up. The filters are all the same as Haswell.
It also has a new LBR format with a separate LBR_INFO_* MSR,
but that has been already added earlier.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-7-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
In Arch perfmon v4 the GLOBAL_STATUS reset automatically unfreezes
LBRs. So no need to do it manually in the LBR code. Add a check
to skip it.
v2: Move test up to beginning of function.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-9-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
With Arch Perfmon v4 the PMU ack unfreezes the LBRs. So we need to do
the PMU ack after the LBR reading, otherwise the LBRs would be polluted
by the PMI handler.
This is a minimal change. In principle the ACK could be moved much later.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-10-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
ArchPerfmon v4 has some new status bits in GLOBAL_STATUS.
These need to be ignored when deciding whether an NMI
was a PMU NMI, to avoid eating all NMIs when they
stay set, see:
b292d7a104 ("perf/x86/intel: ignore CondChgd bit to avoid false NMI handling")
This patch ignores the new ASIF bit, which indicates
that SGX interfered with the PMU, and also the new
LBR freezing bits, which are set when the LBRs get
frozen, plus the existing CondChgd bit (set by JTAG
debuggers and some buggy BIOSes).
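An illustrative sketch of the idea (the bit positions here are assumptions, not the authoritative GLOBAL_STATUS layout):

#include <stdbool.h>
#include <stdint.h>

#define STATUS_COND_CHGD	(1ULL << 63)	/* assumed positions */
#define STATUS_ASIF		(1ULL << 60)
#define STATUS_LBRS_FROZEN	(1ULL << 58)

/* Mask out the purely informational bits before deciding whether the
 * NMI was caused by a real counter overflow. */
static bool pmu_owns_nmi(uint64_t global_status)
{
	global_status &= ~(STATUS_COND_CHGD | STATUS_ASIF | STATUS_LBRS_FROZEN);
	return global_status != 0;
}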
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-8-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add support for the new LBRv5 format used on Intel Skylake CPUs.
The flags for mispredict, abort, in_tx etc. moved to range of separate
LBR_INFO_* MSRs. Teach the LBR code to read those. The original
LBR registers stay the same, except they have full sign
extension now.
LBR_INFO also reports a cycle count to the last branch.
Report the cycle information using the new "cycles" branch_info
output field.
In addition we have to context switch and clear the new INFO
MSRs to avoid any information leaks.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add new MSRs (LBR_INFO) and some new MSR bits used by the Intel Skylake
PMU driver.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-4-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
With PEBSv3 the PEBS record contains a time stamp. That means we can allow
free-running PEBS without a PMI even if the user program requested a time stamp.
This avoids the need to use -T to get free running PEBS, and also avoids
any problems with mis-identifying MMAPs later.
Move the free_running_flags state into a variable in x86_pmu and use it.
This only works when no explicit clock_id is set.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1432786398-23861-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
PEBSv3 is the same as the existing PEBSv2 used on Haswell,
but it adds a new TSC field. Add support to the generic
PEBS handler to handle the new format, and overwrite
the perf time stamp using the new native_sched_clock_from_tsc().
Right now the time stamp is just slightly more accurate,
as it is nearer the actual event trigger point. With
the PEBS threshold > 1 patchkit it will be much more accurate,
avoiding the problems with MMAP mismatches mentioned earlier.
The accurate time stamping is only implemented for
the default trace clock for now.
v2: Use _skl prefix. Check for default clock_id.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
PEBSv3 has a raw TSC time stamp in its memory buffer that
later needs to be converted to perf_clock.
Add a native_sched_clock_from_tsc() that works the same
as native_sched_clock(), but starts with an already given
TSC value.
Paravirt is ignored; it will just get the native clock.
But there isn't a paravirtualized PEBS anyway.
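A minimal standalone sketch of the conversion such a helper performs (the kernel version would reuse its existing cycles-to-nanoseconds scaling; the arithmetic below is a simplified illustration that ignores overflow):

#include <stdint.h>

/* Convert an already-captured TSC value (e.g. from a PEBS record) to
 * nanoseconds, instead of reading the TSC at call time. */
static uint64_t ns_from_tsc(uint64_t tsc, uint64_t tsc_khz)
{
	return (tsc * 1000000ULL) / tsc_khz;	/* cycles -> ns */
}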
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The Intel PT chapter in the new Intel Architecture SDM adds several packets,
corresponding enable bits and registers that control packet generation.
Also, additional bits in the Intel PT CPUID leaf were added to enumerate the
presence and parameters of these new packets and features.
The packets and enables are:
* CYC: cycle accurate mode, provides the number of cycles elapsed since
previous CYC packet; its presence and available threshold values are
enumerated via CPUID;
* MTC: mini time counter packets, used for tracking TSC time between
full TSC packets; its presence and available resolution options are
enumerated via CPUID;
* PSB packet period is now configurable, available period values are
enumerated via CPUID.
This patch adds the corresponding bit and register definitions, PMU
driver capabilities based on CPUID enumeration, new attribute format
bits for the new features, and extends the event configuration
validation function to take these into account.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438262131-12725-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, the PT driver zeroes out the status register every time before
starting the event. However, all the writable bits are already taken care
of in the pt_handle_status() function, except the new PacketByteCnt field,
which in new versions of PT contains the number of packet bytes written
since the last sync (PSB) packet. Zeroing it out before enabling PT forces
a sync packet to be written. This means that, with the existing code, a
sync packet (PSB and PSBEND, 18 bytes in total) will be generated every
time a PT event is scheduled in.
To avoid these unnecessary syncs and save a WRMSR in the fast path, this
patch changes the default behavior to not clear PacketByteCnt field, so
that the sync packets will be generated with the period specified as
"psb_period" attribute config field. This has little impact on the trace
data as the other packets that are normally sent within PSB+ (between PSB
and PSBEND) have their own generation scenarios which do not depend on the
sync packets.
One exception where we do need to force a PSB like this is when tracing
starts, so that the decoder has a clear sync point in the trace. For this
purpose we already have the hw::itrace_started flag, which we are currently
using to output PERF_RECORD_ITRACE_START. This patch moves the setting of
itrace_started from the perf core to pmu::start, where it should still be 0
on the very first run.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438264104-16189-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The check looked wrong, although I think it was actually safe. TASK_SIZE
is unnecessarily small for compat tasks, and it wasn't possible to make
a range breakpoint so large it started in user space and ended in kernel
space.
Nonetheless, let's fix up the check for the benefit of future
readers. A breakpoint is in the kernel if either end is in the
kernel.
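A sketch of the corrected check (hypothetical standalone form, with the TASK_SIZE_MAX-style user/kernel boundary passed in as a parameter):

#include <stdbool.h>
#include <stdint.h>

/* A breakpoint covering [va, va + len) is in the kernel if either its
 * first or last byte lies at or above the user/kernel boundary. */
static bool bp_in_kernelspace(uint64_t va, uint64_t len, uint64_t boundary)
{
	return va >= boundary || (va + len - 1) >= boundary;
}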
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/136be387950e78f18cea60e9d1bef74465d0ee8f.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Range breakpoints will do the wrong thing if the address isn't
aligned. While we're there, add comments about why it's safe for
instruction breakpoints.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ae25d14d61f2f43b78e0a247e469f3072df7e201.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Code on the kprobe blacklist doesn't want unexpected int3
exceptions. It probably doesn't want unexpected debug exceptions
either. Be safe: disallow breakpoints in nokprobes code.
On non-CONFIG_KPROBES kernels, there is no kprobe blacklist. In
that case, disallow kernel breakpoints entirely.
It will be particularly important to keep hw breakpoints out of the
entry and NMI code once we move debug exceptions off the IST stack.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e14b152af99640448d895e3c2a8c2d5ee19a1325.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
AVG_LATENCY (bit 38) is only available on MSR_OFFCORE_RSP0,
so the bit should be removed from the RSP1 valid_mask.
Since RSP0 and RSP1 may have different valid_mask, intel_alt_er should
validate the config on the alternate offcore reg before replacing it.
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435170215-5017-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The x86_lbr_exclusive commit (4807034248 "perf/x86: Mark Intel PT and
LBR/BTS as mutually exclusive") mistakenly moved intel_pmu_needs_lbr_smpl()
to perf_event.h, while another commit (a46a230001 "perf: Simplify the
branch stack check") removed it in favor of needs_branch_stack().
This patch gets rid of intel_pmu_needs_lbr_smpl() for good.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-3-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Both intel_pmu_enable_bts() and intel_pmu_disable_bts() are in perf_event.h
header file, no need to have them declared again in the driver.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-2-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Haswell and Broadwell have the same uncore CBOX/ARB PMU as Sandy Bridge.
Add the respective model numbers to enable the SNB uncore PMU.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add a new "ARB" uncore PMU that is used to monitor the uncore queue
arbiter. This is useful to measure uncore queue occupancy and similar
statistics. The registers all have the same format as the
existing CBOX PMU.
Also move the event constraints from the CBOX to ARB. The 0x80+
events are ARB events and cannot be scheduled on a CBOX PMU.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The DEFINE_PCI_DEVICE_TABLE() macro is deprecated. Use
'struct pci_device_id' instead of DEFINE_PCI_DEVICE_TABLE(),
with the goal of getting rid of this macro completely.
This Coccinelle semantic patch performs this transformation:
@@
identifier a;
declarer name DEFINE_PCI_DEVICE_TABLE;
initializer i;
@@
- DEFINE_PCI_DEVICE_TABLE(a)
+ const struct pci_device_id a[] = i;
Signed-off-by: Vaishali Thakkar <vthakkar1994@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150717052759.GA6265@vaishali-Ideapad-Z570
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Knights Landing supports the PKG and DRAM RAPL domains.
Its DRAM RAPL has a different fixed energy unit (2^-16 J), similar to
that of HSW.
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Stephane Eranian <eranian@google.com>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jacob Pan Jun <jacob.jun.pan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nikhil Rao <nikhil.rao@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/aa63b4a3af3160152fea1a10c807f4200527280c.1432665809.git.dasaratharaman.chandramouli@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The previous commit for delayed retry of SCOND needs some fine tuning
for spin locks.
The backoff from the delayed retry, in conjunction with the spin looping on
the lock itself, can potentially cause the delay counter to reach high values.
So to provide fairness to any lock operation, after a lock "seems"
available (i.e. just before the first SCOND try), reset the delay counter
back to the starting value of 1.
Essentially, reset the delay to 1 for each new spin-wait-loop-acquire cycle.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is to work around the llock/scond livelock:
HS38x4 could get into a LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among contending cores, leading to a never-ending cycle. So break the cycle
by deferring the retry of failed exclusive access (SCOND). The actual delay
needed is function of number of contending cores as well as the unrelated
coherency traffic from other cores. To keep the code simple, start off with
small delay of 1 which would suffice most cases and in case of contention
double the delay. Eventually the delay is sufficient such that the coherency
pipeline is drained, thus a subsequent exclusive access would succeed.
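A portable C sketch of the retry policy described above (an illustration of the idea, not the ARC LLOCK/SCOND assembly in the patch):

#include <stdatomic.h>

static void atomic_inc_backoff(_Atomic int *v)
{
	unsigned int delay = 1;

	for (;;) {
		int old = atomic_load_explicit(v, memory_order_relaxed);
		/* The CAS plays the role of the SCOND: it fails if another
		 * core touched the line in between. */
		if (atomic_compare_exchange_weak_explicit(v, &old, old + 1,
				memory_order_relaxed, memory_order_relaxed))
			return;
		for (volatile unsigned int i = 0; i < delay; i++)
			;			/* back off before retrying */
		if (delay < (1u << 16))
			delay *= 2;		/* double the delay on failure */
	}
}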
Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With LLOCK/SCOND, the rwlock counter can be atomically updated w/o need
for a guarding spin lock.
This in turn elides the EXchange-instruction-based spinning, which forces
the cacheline into the exclusive state, so that concurrent spinning
across cores would cause the line to keep bouncing around.
LLOCK/SCOND based implementation is superior as spinning on LLOCK keeps
the cacheline in shared state.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The current spin_lock uses the EXchange instruction to implement the atomic
test-and-set of the lock location (it reads the original value and stores 1).
This however forces the cacheline into the exclusive state (because of the ST)
and concurrent loops on multiple cores will bounce the line around between
cores.
Instead, use LLOCK/SCOND to implement the atomic test-and-set, which is
better as the line stays in the shared state while the lock is spinning on
LLOCK.
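A portable C analogue of the difference (spin with plain loads so the line can stay shared, and only attempt the atomic store once the lock looks free); this is an illustration, not the ARC assembly used here:

#include <stdatomic.h>

static void spin_lock_sketch(_Atomic int *lock)
{
	for (;;) {
		/* Read-only spin: analogous to spinning on LLOCK, the
		 * cacheline can remain in the shared state. */
		while (atomic_load_explicit(lock, memory_order_relaxed))
			;
		int expected = 0;
		/* Analogous to SCOND: only now attempt the store that
		 * needs the line exclusively. */
		if (atomic_compare_exchange_weak_explicit(lock, &expected, 1,
				memory_order_acquire, memory_order_relaxed))
			return;
	}
}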
The real motivation of this change however is to make way for future
changes in atomics to implement delayed retry (with backoff).
An initial experiment with delayed retry in atomics combined with the
original EX-based spinlock was a total disaster (it broke even LMBench), as
struct sock has a cache line sharing an atomic_t and a spinlock. The
tight spinning on the lock caused the atomic retry to keep backing off
such that it would never finish.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This reduces the diff in forth-coming patches and also helps understand
better the incremental changes to inline asm.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Extended testing of the quad core configuration revealed that this fix was
insufficient. Specifically, LTP open posix shm_op/23-1 would cause a
hardware livelock in the llock/scond loop in update_cpu_load_active().
So remove this and make way for a proper workaround.
This reverts commit a5c8b52abe.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With HS 2.1 release, the peripheral space register no longer contains
the uncached space specifics, causing the kernel to panic early on.
So read the newer NON VOLATILE AUX register to get that info.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
When EVA is enabled, flush the Return Prediction Stack (RPS) present on
some MIPS cores on entry to the kernel from user mode.
This is important specifically for interAptiv with EVA enabled,
otherwise kernel mode RPS mispredicts may trigger speculative fetches of
user return addresses, which may be sensitive in the kernel address
space due to EVA's overlapping user/kernel address spaces.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15.x-
Patchwork: https://patchwork.linux-mips.org/patch/10812/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This reverts commit 3cf2954341 ("MIPS:
BCM63xx: Provide a plat_post_dma_flush hook") since this commit was
found to prevent BCM6358 (early BMIPS4350 cores) and some BCM6368
(BMIPS4380 cores) from booting reliably.
Alvaro was able to track this down to an issue specifically located to
devices that use the second thread (TP1) when booting. Since BCM63xx did
not have a need for plat_post_dma_flush() hook before, let's just keep
things the way they were.
Reported-by: Álvaro Fernández Rojas <noltari@gmail.com>
Reported-by: Jonas Gorski <jogo@openwrt.org>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Nicolas Schichan <nschichan@freebox.fr>
Cc: linux-mips@linux-mips.org
Cc: blogic@openwrt.org
Cc: noltari@gmail.com
Cc: jogo@openwrt.org
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10804/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This was left over from an earlier iteration of the BMIPS irqchip changes.
It doesn't actually have an effect, so let's nuke it.
Reported-by: Valentin Rothberg <valentinrothberg@gmail.com>
Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9910/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The show_stack() function deals exclusively with kernel contexts, but if
it gets called in user context with EVA enabled, show_stacktrace() will
attempt to access the stack using EVA accesses, which will either read
other user mapped data, or more likely cause an exception which will be
handled by __get_user().
This is easily reproduced using SysRq t to show all task states, which
results in the following stack dump output:
Stack : (Bad stack address)
Fix by setting the current user access mode to kernel around the call to
show_stacktrace(). This causes __get_user() to use normal loads to read
the kernel stack.
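The pattern being described looks roughly like this (a sketch using the get_fs()/set_fs() interface of that era, not necessarily the exact hunk):

	mm_segment_t old_fs = get_fs();

	set_fs(KERNEL_DS);	/* __get_user() now uses normal kernel loads */
	show_stacktrace(task, regs);
	set_fs(old_fs);		/* restore the previous access mode */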
Now we get the correct output, like this:
Stack : 00000000 80168960 00000000 004a0000 00000000 00000000 8060016c 1f3abd0c
1f172cd8 8056f09c 7ff1e450 8014fc3c 00000001 806dd0b0 0000001d 00000002
1f17c6a0 1f17c804 1f17c6a0 8066f6e0 00000000 0000000a 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 0110e800 1f3abd6c 1f17c6a0
...
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10778/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If a machine check exception is raised in kernel mode, user context,
with EVA enabled, then the do_mcheck handler will attempt to read the
code around the EPC using EVA load instructions, i.e. as if the reads
were from user mode. This will either read random user data if the
process has anything mapped at the same address, or it will cause an
exception which is handled by __get_user, resulting in this output:
Code: (Bad address in epc)
Fix by setting the current user access mode to kernel if the saved
register context indicates the exception was taken in kernel mode. This
causes __get_user to use normal loads to read the kernel code.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/10777/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The majority of SMP platforms handle their IPIs through do_IRQ(),
which calls irq_{enter,exit}(). When a call function IPI is received,
smp_call_function_interrupt() is called which also calls
irq_{enter,exit}(), meaning irq_count is raised twice.
When tick broadcasting is used (which is implemented via a call
function IPI), this incorrectly causes all CPU idle time on the core
receiving broadcast ticks to be accounted as time spent servicing
IRQs, as account_process_tick() will account as such if irq_count is
greater than 1. This results in 100% CPU usage being reported on a
core which receives its ticks via broadcast.
This patch removes the SMP smp_call_function_interrupt() wrapper which
calls irq_{enter,exit}(). Platforms which handle their IPIs through
do_IRQ() now call generic_smp_call_function_interrupt() directly to
avoid incrementing irq_count a second time. Platforms which don't
(loongson, sgi-ip27, sibyte) call generic_smp_call_function_interrupt()
wrapped in irq_{enter,exit}().
Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10770/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Execution of a break instruction, trap instructions, emulation of unaligned
loads or floating point instructions - anything that tries to read the
instruction's opcode from userspace - needs read access to a page.
RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of
pages that are executable but not readable. On such a mapping the attempted
load of the opcode by the kernel is going to cause an endless loop of
page faults.
The quick workaround for this is to disable the combination that the kernel
currently isn't able to handle, that is, mappings that are executable but
not readable.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On Malta, since commit a87ea88d8f ("MIPS: Malta: initialise the RTC at
boot"), the RTC is reinitialised and forced into binary coded decimal
(BCD) mode during init, even if the bootloader has already initialised
it, and may even have already put it into binary mode (as YAMON does).
This corrupts the current time and can result in the RTC seconds being an
invalid BCD value (e.g. 0x1a..0x1f) for up to 6 seconds, as well as confusing
YAMON for a while after reset, enough for it to report timeouts when
attempting to load from TFTP (it actually uses the RTC in that code).
Therefore only initialise the RTC to the extent that is necessary so
that Linux avoids interfering with the bootloader setup, while also
allowing it to estimate the CPU frequency without hanging, without a
bootloader necessarily having done anything with the RTC (for example
when the kernel is loaded via EJTAG).
The divider control is configured for a 32 kHz reference clock if
necessary, and the SET bit of the RTC_CONTROL register is cleared if
necessary without changing any other bits (this bit will be set when
coming out of reset if the battery has been disconnected).
Fixes: a87ea88d8f ("MIPS: Malta: initialise the RTC at boot")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.14+
Patchwork: https://patchwork.linux-mips.org/patch/10739/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel
unaligned accesses") renamed the Load* and Store* defines in unaligned.c
to _Load* and _Store* as part of its fix. One define was missed out which
causes big endian R6 kernels to fail to build.
arch/mips/kernel/unaligned.c:880:35:
error: implicit declaration of function '_StoreDW'
#define StoreDW(addr, value, res) _StoreDW(addr, value, res)
^
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel unaligned accesses")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10575/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
p->thread.user_cpus_allowed is zero-initialized and is only filled on
the first sched_setaffinity call.
To avoid adding overhead in the task initialization codepath, simply OR
the returned mask in sched_getaffinity with p->cpus_allowed.
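The idea is essentially a one-liner in sched_getaffinity (sketch only; the surrounding mask variable is assumed, field names as described above):

	/* Report the union, so a never-initialised (all-zero)
	 * user_cpus_allowed still yields the task's effective affinity. */
	cpumask_or(&mask, &p->thread.user_cpus_allowed, &p->cpus_allowed);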
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10740/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit 01306aeadd ("MIPS: prepare for user enabling of CONFIG_OF")
changed the guards in asm/prom.h from CONFIG_OF to CONFIG_USE_OF, but
missed the actual function declarations in kernel/prom.c, which have
additional dependencies.
Fixes the following build error:
CC arch/mips/kernel/prom.o
arch/mips/kernel/prom.c: In function '__dt_setup_arch':
arch/mips/kernel/prom.c:54:2: error: implicit declaration of function 'early_init_dt_scan' [-Werror=implicit-function-declaration]
if (!early_init_dt_scan(bph))
^
Fixes: 01306aeadd ("MIPS: prepare for user enabling of CONFIG_OF")
Signed-off-by: Jonas Gorski <jogo@openwrt.org>
Acked-by: Rob Herring <robh@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: devicetree@vger.kernel.org
Cc: Grant Likely <grant.likely@linaro.org>
Patchwork: https://patchwork.linux-mips.org/patch/10741/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Merge tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"Things are calming down nicely here w.r.t. fixes. This batch
includes two week's worth since I missed to send before -rc4.
Nothing particularly scary to point out, smaller fixes here and there.
Shortlog describes it pretty well"
* tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: dts: keystone: fix dt bindings to use post div register for mainpll
ARM: nomadik: disable UART0 on Nomadik boards
ARM: dts: i.MX35: Fix can support.
ARM: OMAP2+: hwmod: Fix _wait_target_ready() for hwmods without sysc
ARM: dts: add CPU OPP and regulator supply property for exynos4210
ARM: dts: Update video-phy node with syscon phandle for exynos3250
ARM: DRA7: hwmod: fix gpmc hwmod
Merge tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma
Pull dmaengine fixes from Vinod Koul:
"We had a regression due to reuse of descriptor so we have reverted
that.
The rest are driver fixes:
- at_hdmac and at_xdmac for residue, transfer width, and channel config
- pl330 final fix for dma fails and overflow issue
- xgene resource map fix
- mv_xor big endian op fix"
* tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma:
Revert "dmaengine: virt-dma: don't always free descriptor upon completion"
dmaengine: mv_xor: fix big endian operation in register mode
dmaengine: xgene-dma: Fix the resource map to handle overlapping
dmaengine: at_xdmac: fix transfer data width in at_xdmac_prep_slave_sg()
dmaengine: at_hdmac: fix residue computation
dmaengine: at_xdmac: fix bug about channel configuration
dmaengine: pl330: Really fix choppy sound because of wrong residue calculation
dmaengine: pl330: Fix overflow when reporting residue in memcpy
Pull x86 fixes from Ingo Molnar:
"Fallout from the recent NMI fixes: make x86 LDT handling more robust.
Also some EFI fixes"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/ldt: Make modify_ldt synchronous
x86/xen: Probe target addresses in set_aliased_prot() before the hypercall
x86/irq: Use the caller provided polarity setting in mp_check_pin_attr()
efi: Check for NULL efi kernel parameters
x86/efi: Use all 64 bit of efi_memmap in setup_e820()
Pull networking fixes from David Miller:
1) Must teardown SR-IOV before unregistering netdev in igb driver, from
Alex Williamson.
2) Fix ipv6 route unreachable crash in IPVS, from Alex Gartrell.
3) Default route selection in ipv4 should take the prefix length, table
ID, and TOS into account, from Julian Anastasov.
4) sch_plug must have a reset method in order to purge all buffered
packets when the qdisc is reset, likewise for sch_choke, from WANG
Cong.
5) Fix deadlock and races in slave_changelink/br_setport in bridging.
From Nikolay Aleksandrov.
6) mlx4 bug fixes (wrong index in port even propagation to VFs,
overzealous BUG_ON assertion, etc.) from Ido Shamay, Jack
Morgenstein, and Or Gerlitz.
7) Turn off klog message about SCTP userspace interface compat that
makes no sense at all, from Daniel Borkmann.
8) Fix unbounded restarts of inet frag eviction process, causing NMI
watchdog soft lockup messages, from Florian Westphal.
9) Suspend/resume fixes for r8152 from Hayes Wang.
10) Fix busy loop when MSG_WAITALL|MSG_PEEK is used in TCP recv, from
Sabrina Dubroca.
11) Fix performance regression when removing a lot of routes from the
ipv4 routing tables, from Alexander Duyck.
12) Fix device leak in AF_PACKET, from Lars Westerhoff.
13) AF_PACKET also has a header length comparison bug due to signedness,
from Alexander Drozdov.
14) Fix bug in EBPF tail call generation on x86, from Daniel Borkmann.
15) Memory leaks, TSO stats, watchdog timeout and other fixes to
thunderx driver from Sunil Goutham and Thanneeru Srinivasulu.
16) act_bpf can leak memory when replacing programs, from Daniel
Borkmann.
17) WOL packet fixes in gianfar driver, from Claudiu Manoil.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (79 commits)
stmmac: fix missing MODULE_LICENSE in stmmac_platform
gianfar: Enable device wakeup when appropriate
gianfar: Fix suspend/resume for wol magic packet
gianfar: Fix warning when CONFIG_PM off
act_pedit: check binding before calling tcf_hash_release()
net: sk_clone_lock() should only do get_net() if the parent is not a kernel socket
net: sched: fix refcount imbalance in actions
r8152: reset device when tx timeout
r8152: add pre_reset and post_reset
qlcnic: Fix corruption while copying
act_bpf: fix memory leaks when replacing bpf programs
net: thunderx: Fix for crash while BGX teardown
net: thunderx: Add PCI driver shutdown routine
net: thunderx: Fix crash when changing rss with mutliple traffic flows
net: thunderx: Set watchdog timeout value
net: thunderx: Wakeup TXQ only if CQE_TX are processed
net: thunderx: Suppress alloc_pages() failure warnings
net: thunderx: Fix TSO packet statistic
net: thunderx: Fix memory leak when changing queue count
net: thunderx: Fix RQ_DROP miscalculation
...
All of the keystone devices have a separate register to hold the post
divider value for the main pll clock. Currently the fixed-postdiv
value used for k2hk/l/e SoCs works by sheer luck, as u-boot happens to
use a value of 2 for this. Now that we have fixed this in the pll
clock driver, change the dt bindings accordingly.
Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
The Sourcery CodeBench Lite 2014.05 toolchain (gcc 4.8.3, binutils
2.24.51) has a GCC which implements -fuse-ld, and it doesn't include
the gold linker, but it lacks an ld.bfd executable in its
installation. This means that passing -fuse-ld=bfd fails with:
VDSO arch/arm/vdso/vdso.so.raw
collect2: fatal error: cannot find 'ld'
Arguably this is a deficiency in the toolchain, but I suspect it's
commonly used enough that it's worth accommodating: just use
cc-ldoption (to cause a link attempt) instead of cc-option to test
whether we can use -fuse-ld. So -fuse-ld=bfd won't be used with this
toolchain, but the build will rightly succeed, just as it does for
toolchains which don't implement -fuse-ld (and don't use gold as the
default linker).
Note: this will change the failure mode for a corner case I was trying
to handle in d2b30cd4b7, where the toolchain defaults to the gold
linker and the BFD linker is not found in PATH, from:
VDSO arch/arm/vdso/vdso.so.raw
collect2: fatal error: cannot find 'ld'
i.e. the BFD linker is not found, to:
OBJCOPY arch/arm/vdso/vdso.so
BFD: arch/arm/vdso/vdso.so: Not enough room for program headers, try
linking with -N
that is, we fail to prevent gold from being used as the linker, and it
produces an object that objcopy can't digest.
Reported-by: Baruch Siach <baruch@tkos.co.il>
Tested-by: Baruch Siach <baruch@tkos.co.il>
Tested-by: Raphaël Poggi <poggi.raph@gmail.com>
Fixes: d2b30cd4b7 ("ARM: 8384/1: VDSO: force use of BFD linker")
Cc: stable@vger.kernel.org
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is an overlap in the dma ring cmd csr region due to sharing of the
ethernet ring cmd csr region. This patch fixes the resource overlap by
mapping the entire dma ring cmd csr region.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The previous change documents that cleanup_return_instances()
can't always detect the dead frames, the stack can grow. But
there is one special case which imho worth fixing:
arch_uretprobe_is_alive() can return true when the stack didn't
actually grow, but the next "call" insn uses the already
invalidated frame.
Test-case:
#include <stdio.h>
#include <setjmp.h>

jmp_buf jmp;
int nr = 1024;

void func_2(void)
{
	if (--nr == 0)
		return;
	longjmp(jmp, 1);
}

void func_1(void)
{
	setjmp(jmp);
	func_2();
}

int main(void)
{
	func_1();
	return 0;
}
If you ret-probe func_1() and func_2(), prepare_uretprobe() hits
the MAX_URETPROBE_DEPTH limit and the "return" from func_2() is not
reported.
When we know that the new call is not chained, we can do a
stricter check. In this case "sp" points to the new ret-addr,
so every frame which uses the same "sp" must be dead. The only
complication is that arch_uretprobe_is_alive() needs to know whether
it was chained or not, so we add the new RP_CHECK_CHAIN_CALL enum
and change prepare_uretprobe() to pass RP_CHECK_CALL only if
!chained.
Note: arch_uretprobe_is_alive() could also re-read *sp and check
if this word is still trampoline_vaddr. This could obviously
improve the logic, but I would like to avoid another
copy_from_user() especially in the case when we can't avoid the
false "alive == T" positives.
Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134028.GA4786@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86 doesn't care (so far), but as Pratyush Anand pointed
out, other architectures might want to know why arch_uretprobe_is_alive()
was called and use different checks depending on the context.
Add a new argument to distinguish the 2 callers.
Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134026.GA4779@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Add the x86 specific version of arch_uretprobe_is_alive()
helper. It returns true if the stack frame mangled by
prepare_uretprobe() is still on stack. So if it returns false,
we know that the probed function has already returned.
We add the new return_instance->stack member and change the
generic code to initialize it in prepare_uretprobe, but it
should be equally useful for other architectures.
TODO: this assumes that the probed application can't use
multiple stacks (say sigaltstack). We will try to improve
this logic later.
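Conceptually the check boils down to a stack-pointer comparison (an illustrative standalone sketch, not the exact x86 helper; x86 stacks grow down):

#include <stdbool.h>
#include <stdint.h>

/* The frame recorded in ret_stack is still live as long as the current
 * stack pointer has not moved past (above) it. */
static bool uretprobe_frame_alive(uint64_t current_sp, uint64_t ret_stack)
{
	return current_sp <= ret_stack;
}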
Tested-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Anton Arapov <arapov@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134018.GA4766@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
modify_ldt() has questionable locking and does not synchronize
threads. Improve it: redesign the locking and synchronize all
threads' LDTs using an IPI on all modifications.
This will dramatically slow down modify_ldt in multithreaded
programs, but there shouldn't be any multithreaded programs that
care about modify_ldt's performance in the first place.
This fixes some fallout from the CVE-2015-5157 fixes.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: security@kernel.org <security@kernel.org>
Cc: <stable@vger.kernel.org>
Cc: xen-devel <xen-devel@lists.xen.org>
Link: http://lkml.kernel.org/r/4c6978476782160600471bd865b318db34c7b628.1438291540.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The update_va_mapping hypercall can fail if the VA isn't present
in the guest's page tables. Under certain loads, this can
result in an OOPS when the target address is in unpopulated vmap
space.
While we're at it, add comments to help explain what's going on.
This isn't a great long-term fix. This code should probably be
changed to use something like set_memory_ro.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Vrabel <dvrabel@cantab.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: security@kernel.org <security@kernel.org>
Cc: <stable@vger.kernel.org>
Cc: xen-devel <xen-devel@lists.xen.org>
Link: http://lkml.kernel.org/r/0b0e55b995cda11e7829f140b833ef932fcabe3a.1438291540.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'efi-urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/mfleming/efi into x86/urgent
Pull EFI fixes from Matt Fleming:
* Fix an EFI boot issue preventing a Parallels virtual machine from
booting because the upper 32-bits of the EFI memmap pointer were
being discarded in setup_e820(). (Dmitry Skorodumov)
* Validate that the "efi" kernel parameter gets used with an argument,
otherwise we will oops. (Ricardo Neri)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
The change removes the second of_node_put() in the case where the
for_each_compatible_node() body runs to completion without being
terminated early. This prevents the object reference counter from
being decremented below zero in OF_DYNAMIC builds.
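For illustration, a minimal sketch of the refcounting rule (the compatible string and the error condition are hypothetical): for_each_compatible_node() drops the previous node's reference on every iteration, so an extra of_node_put() is only needed when the loop is left early:

	struct device_node *np;

	for_each_compatible_node(np, NULL, "vendor,example-pd") {	/* hypothetical */
		if (setup_failed) {		/* hypothetical error condition */
			of_node_put(np);	/* early exit: drop the held reference */
			break;
		}
		/* on normal completion the iterator already dropped the
		 * reference; a second of_node_put() here would underflow it */
	}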
Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
The change fixes a bug introduced by 2be2a3ff42: memory allocated by
kstrdup_const() must always be deallocated with kfree_const(),
otherwise there is a risk of kfree()'ing read-only memory in the power
domain error exit path.
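A minimal sketch of the pairing rule (src and register_domain() are hypothetical placeholders):

	const char *name = kstrdup_const(src, GFP_KERNEL);
	if (!name)
		return -ENOMEM;

	if (register_domain(name) < 0) {	/* hypothetical failure path */
		kfree_const(name);	/* not kfree(): name may point into .rodata */
		return -EINVAL;
	}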
Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Cc: <stable@vger.kernel.org>
Fixes: 2be2a3ff42 ("ARM: EXYNOS: register power domain driver from core_initcall")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Commit d32932d02e ("x86/irq: Convert IOAPIC to use hierarchical
irqdomain interfaces") introduced a regression which causes
malfunction of interrupt lines.
The reason is that the conversion of mp_check_pin_attr() missed
updating the polarity selection of the interrupt pin with the caller
provided setting and instead uses a stale attribute value. That in
turn results in choosing the wrong interrupt flow handler.
Use the caller supplied setting to configure the pin correctly, which
also chooses the correct interrupt flow handler.
This restores the original behaviour and on the affected
machine/driver (Surface Pro 3, i2c controller) all IOAPIC IRQ
configurations are identical to v4.1.
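A hedged sketch of the idea with hypothetical structure and field names (the real logic lives in mp_check_pin_attr()): the pin attributes must come from the caller-supplied info, not from a previously cached value:

	/* Hypothetical names; illustrates "use the caller's trigger/polarity". */
	static void apply_pin_attr(struct pin_attr *cached,
				   const struct pin_attr *info)
	{
		cached->trigger  = info->trigger;	/* caller-supplied, not stale */
		cached->polarity = info->polarity;
		/* the flow handler (edge vs. level) is then chosen from these */
	}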
Fixes: d32932d02e ("x86/irq: Convert IOAPIC to use hierarchical irqdomain interfaces")
Reported-and-tested-by: Matt Fleming <matt@codeblueprint.co.uk>
Reported-and-tested-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Chen Yu <yu.c.chen@intel.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1438242695-23531-1-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Even though it is documented how to specify efi parameters, it is
possible to cause a kernel panic due to a NULL pointer dereference when
parsing such parameters if "efi" alone is given:
PANIC: early exception 0e rip 10:ffffffff812fb361 error 0 cr2 0
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.2.0-rc1+ #450
[ 0.000000] ffffffff81fe20a9 ffffffff81e03d50 ffffffff8184bb0f 00000000000003f8
[ 0.000000] 0000000000000000 ffffffff81e03e08 ffffffff81f371a1 64656c62616e6520
[ 0.000000] 0000000000000069 000000000000005f 0000000000000000 0000000000000000
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff8184bb0f>] dump_stack+0x45/0x57
[ 0.000000] [<ffffffff81f371a1>] early_idt_handler_common+0x81/0xae
[ 0.000000] [<ffffffff812fb361>] ? parse_option_str+0x11/0x90
[ 0.000000] [<ffffffff81f4dd69>] arch_parse_efi_cmdline+0x15/0x42
[ 0.000000] [<ffffffff81f376e1>] do_early_param+0x50/0x8a
[ 0.000000] [<ffffffff8106b1b3>] parse_args+0x1e3/0x400
[ 0.000000] [<ffffffff81f37a43>] parse_early_options+0x24/0x28
[ 0.000000] [<ffffffff81f37691>] ? loglevel+0x31/0x31
[ 0.000000] [<ffffffff81f37a78>] parse_early_param+0x31/0x3d
[ 0.000000] [<ffffffff81f3ae98>] setup_arch+0x2de/0xc08
[ 0.000000] [<ffffffff8109629a>] ? vprintk_default+0x1a/0x20
[ 0.000000] [<ffffffff81f37b20>] start_kernel+0x90/0x423
[ 0.000000] [<ffffffff81f37495>] x86_64_start_reservations+0x2a/0x2c
[ 0.000000] [<ffffffff81f37582>] x86_64_start_kernel+0xeb/0xef
[ 0.000000] RIP 0xffffffff81ba2efc
This panic is not reproducible with "efi=" as this will result in a non-NULL
zero-length string.
Thus, verify that the pointer to the parameter string is not NULL. This is
consistent with other parameter-parsing functions which check for NULL pointers.
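A minimal sketch of the guard in arch_parse_efi_cmdline() (the message text and the parsing shown in the comment are illustrative):

	static int __init arch_parse_efi_cmdline(char *str)
	{
		if (!str) {
			pr_crit("Config string not provided\n");
			return -EINVAL;
		}

		/* e.g. parse_option_str(str, "old_map") and friends ... */
		return 0;
	}
	early_param("efi", arch_parse_efi_cmdline);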
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
The efi_info structure stores the low 32 bits of the memory map
pointer in efi_memmap and the high 32 bits in efi_memmap_hi.
When constructing the pointer in setup_e820(), all 64 bits must be
taken into account: on a 64-bit machine efi_get_memory_map() may
return a full 64-bit pointer, and before this patch that pointer was
truncated.
The issue is triggered on a Parallels virtual machine and is fixed by
this patch.
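A sketch of the pointer construction (e and pmap are illustrative names; efi_memmap and efi_memmap_hi are the real efi_info fields):

	u64 pmap;

	pmap  = e->efi_memmap;			/* low 32 bits */
	pmap |= (u64)e->efi_memmap_hi << 32;	/* high 32 bits, previously discarded */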
Signed-off-by: Dmitry Skorodumov <sdmitry@parallels.com>
Cc: Denis V. Lunev <den@openvz.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Merge tag 'kvm-s390-master-20150730' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvm-master
KVM: s390: bugfix for kvm/master (4.2)
Here is a bugfix for a regression that was introduced after 4.1
with commit 785dbef407 ("KVM: s390: optimize round trip time in
request handling"). After lots of CPU hotplug operations in the guest
(online/offline), a guest CPU would sometimes loop within host KVM
code. The reason was that PROG_REQUEST was set in the SIE control
block, but no request was pending. This made commit 785dbef407 the
suspect, and changing that area to always reset PROG_REQUEST did
indeed fix the problem.
Special thanks to David Hildenbrand, who helped understand the exact
sequence that led to the problem.
commit 785dbef407 ("KVM: s390: optimize round trip time in request
handling") introduced a regression. This regression was seen with
CPU hotplug in the guest and switching between 1 or 2 CPUs. This will
set/reset the IBS control via synced request.
Whenever we make a synced request, we first set the vcpu->requests
bit and then block the vcpu. The handler, on the other hand, unblocks
itself, processes vcpu->requests (by clearing them) and unblocks itself
once again.
Now, if the requester sleeps between setting vcpu->requests and
blocking, the handler will clear the vcpu->requests bit and try to
unblock itself (although no block bit is set). When the requester
wakes up, it blocks the VCPU and we end up with a blocked VCPU but no
pending request.
The solution is to always unset the block bit.
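In rough pseudo-C (all names hypothetical, not the s390 implementation), the fix amounts to clearing the block indication unconditionally in the handler, so the race described above cannot leave the VCPU blocked with no request pending:

	/* hypothetical sketch of the handler side */
	process_requests(vcpu);		/* clears vcpu->requests bits, if any */
	clear_block_indication(vcpu);	/* always, even if no bit was seen as set */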
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Fixes: 785dbef407 ("KVM: s390: optimize round trip time in request handling")
pnv_eeh_next_error() re-enables the EEH OPAL event interrupt, but it
gets called from a loop if there are more outstanding events to
process, resulting in a warning due to enabling an already enabled
interrupt. Instead, the interrupt should only be re-enabled once the
last outstanding event has been processed.
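A hedged sketch of the intended flow (the event helper and the irq variable are hypothetical; enable_irq() is the real kernel API):

	for (;;) {
		bool more = handle_one_eeh_event();	/* hypothetical */

		if (!more) {
			/* re-arm only after the last outstanding event */
			enable_irq(eeh_event_irq);	/* hypothetical irq number */
			break;
		}
	}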
Tested-by: Daniel Axtens <dja@axtens.net>
Reported-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull s390 fixes from Martin Schwidefsky:
"Two bug fixes:
- fix a crash on pre-z10 hardware due to cache-info
- fix an issue with classic BPF programs in the eBPF JIT"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/cachinfo: add missing facility check to init_cache_level()
s390/bpf: clear correct BPF accumulator register
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fix from Catalin Marinas:
"Fix buffer overflow when UTF-16 UEFI vendor string is copied from the
system table into a char array with a size of 100 bytes"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64/efi: map the entire UEFI vendor string before reading it
Merge tag 'imx-fixes-4.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into fixes
The i.MX fixes for 4.2, 2nd round:
- Add the required second clock for i.MX35 FlexCAN in device tree,
so that the device can be probed by kernel successfully.
* tag 'imx-fixes-4.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
ARM: dts: i.MX35: Fix can support.
Signed-off-by: Olof Johansson <olof@lixom.net>
At boot, the UTF-16 UEFI vendor string is copied from the system
table into a char array with a size of 100 bytes. However, this
size of 100 bytes is also used for memremapping() the source,
which may not be sufficient if the vendor string exceeds 50
UTF-16 characters, and the placement of the vendor string inside
a 4 KB page happens to leave the end unmapped.
So use the correct '100 * sizeof(efi_char16_t)' for the size of
the mapping.
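A sketch of the corrected mapping size (surrounding code simplified; fw_vendor_phys is an illustrative address variable, early_memremap()/early_memunmap() are the real APIs):

	char vendor[100] = "unknown";
	efi_char16_t *c16;
	int i;

	c16 = early_memremap(fw_vendor_phys,
			     sizeof(vendor) * sizeof(efi_char16_t));
	if (c16) {
		for (i = 0; i < (int)sizeof(vendor) - 1 && c16[i]; i++)
			vendor[i] = c16[i];	/* naive UTF-16 -> ASCII narrowing */
		vendor[i] = '\0';
		early_memunmap(c16, sizeof(vendor) * sizeof(efi_char16_t));
	}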
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f84d02755f ("arm64: add EFI runtime services")
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Since NULL is used as a valid clock object for optional clocks, we
have to handle this case in the avr32 implementation as well.
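A minimal sketch, assuming the avr32 clk_enable()-style entry points simply treat NULL as an absent optional clock:

	int clk_enable(struct clk *clk)
	{
		if (!clk)
			return 0;	/* optional clock not present: nothing to do */

		/* ... existing enable path ... */
		return 0;
	}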
Fixes: e1824dfe0d (net: macb: Adjust tx_clk when link speed changes)
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Currently we assume the following BPF to eBPF register mapping:
- BPF_REG_A -> BPF_REG_7
- BPF_REG_X -> BPF_REG_8
Unfortunately this mapping is wrong. The correct mapping is:
- BPF_REG_A -> BPF_REG_0
- BPF_REG_X -> BPF_REG_7
So clear the correct registers and use the BPF_REG_A and BPF_REG_X
macros instead of BPF_REG_0/7.
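Illustratively (the emit helper and the condition are hypothetical; BPF_REG_A and BPF_REG_X are the real aliases from linux/filter.h), the JIT prologue should clear the registers via the aliases so the mapping stays correct:

	if (prog_was_classic_bpf) {		/* hypothetical condition */
		emit_clear(jit, BPF_REG_A);	/* alias of BPF_REG_0 */
		emit_clear(jit, BPF_REG_X);	/* alias of BPF_REG_7 */
	}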
Fixes: 0546231057 ("s390/bpf: Add s390x eBPF JIT compiler backend")
Cc: stable@vger.kernel.org # 4.0+
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
UART0 is not used on these boards, yet it is active and blocks other
use. Fix this by disabling UART0 and setting port aliases to keep the
port enumeration seen by userspace unchanged.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
Merge tag 'for-v4.2-rc/omap-fixes-a' of git://git.kernel.org/pub/scm/linux/kernel/git/pjw/omap-pending into fixes
Merge "ARM: OMAP2+: hwmod fixes for v4.2-rc" from Paul Walmsley:
ARM: OMAP2+: hwmod fixes for v4.2-rc
Two fixes against v4.2-rc1. The first, for DRA7xx platforms,
corrects some incorrect GPMC hardware description data. The
second one will ensure that the hwmod code will wait for any
module with CPU-accessible registers to become ready before
attempting to access it.
Basic build, boot, and PM test logs are available here:
http://www.pwsan.com/omap/testlogs/omap-hwmod-a-for-v4.2-rc/20150723065408/
Note that I do not have a DRA7xx or AM43xx board, and therefore
cannot test on those platforms.
* tag 'for-v4.2-rc/omap-fixes-a' of git://git.kernel.org/pub/scm/linux/kernel/git/pjw/omap-pending:
ARM: OMAP2+: hwmod: Fix _wait_target_ready() for hwmods without sysc
ARM: DRA7: hwmod: fix gpmc hwmod
Signed-off-by: Olof Johansson <olof@lixom.net>
Pull perf fix from Thomas Gleixner:
"A single fix for the intel cqm perf facility to prevent IPIs from
interrupt context"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86/intel/cqm: Return cached counter value from IRQ context
Pull x86 fixes from Thomas Gleixner:
"This update contains:
- the manual revert of the SYSCALL32 changes which caused a
regression
- a fix for the MPX vma handling
- three fixes for the ioremap 'is ram' checks
- PAT warning fixes
- a trivial fix for the size calculation of TLB tracepoints
- handle old EFI structures gracefully
This also contains a PAT fix from Jan plus a revert thereof. Toshi
explained why the code is correct"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm/pat: Revert 'Adjust default caching mode translation tables'
x86/asm/entry/32: Revert 'Do not use R9 in SYSCALL32' commit
x86/mm: Fix newly introduced printk format warnings
mm: Fix bugs in region_is_ram()
x86/mm: Remove region_is_ram() call from ioremap
x86/mm: Move warning from __ioremap_check_ram() to the call site
x86/mm/pat, drivers/media/ivtv: Move the PAT warning and replace WARN() with pr_warn()
x86/mm/pat, drivers/infiniband/ipath: Replace WARN() with pr_warn()
x86/mm/pat: Adjust default caching mode translation tables
x86/fpu: Disable dependent CPU features on "noxsave"
x86/mpx: Do not set ->vm_ops on MPX VMAs
x86/mm: Add parenthesis for TLB tracepoint size calculation
efi: Handle memory error structures produced based on old versions of standard
Toshi explains:
"No, the default values need to be set to the fallback types,
i.e. minimal supported mode. For WC and WT, UC is the fallback type.
When PAT is disabled, pat_init() does update the tables below to
enable WT per the default BIOS setup. However, when PAT is enabled,
but CPU has PAT -errata, WT falls back to UC per the default values."
Revert: ca1fec58bc 'x86/mm/pat: Adjust default caching mode translation tables'
Requested-by: Toshi Kani <toshi.kani@hp.com>
Cc: Jan Beulich <jbeulich@suse.de>
Link: http://lkml.kernel.org/r/1437577776.3214.252.camel@hp.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Merge tag 'tty-4.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial driver fixes from Greg KH:
"Here are a number of small serial and tty fixes for reported issues.
All have been in linux-next successfully"
* tag 'tty-4.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
tty: vt: Fix !TASK_RUNNING diagnostic warning from paste_selection()
serial: core: Fix crashes while echoing when closing
m32r: Add ioreadXX/iowriteXX big-endian mmio accessors
Revert "serial: imx: initialized DMA w/o HW flow enabled"
sc16is7xx: fix FIFO address of secondary UART
sc16is7xx: fix Kconfig dependencies
serial: etraxfs-uart: Fix release etraxfs_uart_ports
tty/vt: Fix the memory leak in visual_init
serial: amba-pl011: Fix devm_ioremap_resource return value check
n_tty: signal and flush atomically
Since commit 3d42a379b6
("can: flexcan: add 2nd clock to support imx53 and newer")
the CAN driver requires the device tree nodes to have a second clock.
Add them to imx35 to fix probing of the FlexCAN driver on the
respective platforms.
Signed-off-by: Denis Carikli <denis@eukrea.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Merge tag 'mmc-4.2-rc3' of git://git.linaro.org/people/ulf.hansson/mmc
Pull MMC fixes from Ulf Hansson:
"Here are some mmc fixes intended for v4.2 rc4.
Note, most of the changes are for the sdhci-esdhc-imx controller,
which also required us to modify some related DTS files. Those
changes have been acked by the SoC maintainer.
MMC core:
- Fix a reference imbalance issue for power_ro_lock_show() sysfs handler
MMC host:
- omap_hsmmc: Fix IRQ errorhandling for CD, DTO, and CRC
- sdhci: Prevent a kernel panic while using DMA
- mtk-sd: Let it depend on HAS_DMA to prevent build errors
- sdhci-esdhc: Make 8BIT bus work
- sdhci-esdhc-imx: Fix some regressions for DT based platforms
- sdhci-pxav3: Fix a regression for DT based platforms"
* tag 'mmc-4.2-rc3' of git://git.linaro.org/people/ulf.hansson/mmc:
mmc: sdhci-pxav3: fix platform_data is not initialized
dts: mmc: fsl-imx-esdhc: remove fsl,cd-controller support
mmc: sdhci-esdhc-imx: clear f_max in boarddata
mmc: sdhci-esdhc-imx: remove duplicated dts parsing
mmc: sdhci: make max-frequency property in device tree work
mmc: sdhci-esdhc-imx: move all non dt probe code into one function
mmc: sdhci-esdhc-imx: fix cd regression for dt platform
dts: imx7: fix sd card gpio polarity specified in device tree
dts: imx25: fix sd card gpio polarity specified in device tree
dts: imx6: fix sd card gpio polarity specified in device tree
dts: imx53: fix sd card gpio polarity specified in device tree
dts: imx51: fix sd card gpio polarity specified in device tree
mmc: sdhci-esdhc: Make 8BIT bus work
mmc: block: Add missing mmc_blk_put() in power_ro_lock_show()
mmc: MMC_MTK should depend on HAS_DMA
mmc: sdhci check parameters before call dma_free_coherent
mmc: omap_hsmmc: Handle BADA, DEB and CEB interrupts
mmc: omap_hsmmc: Fix DTO and DCRC handling
Pull arch/tile bugfix from Chris Metcalf:
"This fixes a bug in freeing the initramfs memory"
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
tile: use free_bootmem_late() for initrd
This change reverts most of commit 53e9accf0f 'Do not use R9 in
SYSCALL32'. I don't yet understand how, but code in that commit
sometimes fails to preserve EBP.
See https://bugzilla.kernel.org/show_bug.cgi?id=101061
"Problems while executing 32-bit code on AMD64"
Reported-and-tested-by: Krzysztof A. Sobiecki <sobkas@gmail.com>
Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Kees Cook <keescook@chromium.org>
CC: x86@kernel.org
Link: http://lkml.kernel.org/r/1437740203-11552-1-git-send-email-dvlasenk@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
cd-gpios polarity should be changed to GPIO_ACTIVE_LOW and wp-gpios
should be changed to GPIO_ACTIVE_HIGH.
Otherwise, the SD card may not work properly due to the wrong polarity
inversion specified in DT after switching to the common parsing
function mmc_of_parse().
Signed-off-by: Dong Aisheng <aisheng.dong@freescale.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
cd-gpios polarity should be changed to GPIO_ACTIVE_LOW and wp-gpios
should be changed to GPIO_ACTIVE_HIGH.
Otherwise, the SD card may not work properly due to the wrong polarity
inversion specified in DT after switching to the common parsing
function mmc_of_parse().
Signed-off-by: Dong Aisheng <aisheng.dong@freescale.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
cd-gpios polarity should be changed to GPIO_ACTIVE_LOW and wp-gpios
should be changed to GPIO_ACTIVE_HIGH.
Otherwise, the SD card may not work properly due to the wrong polarity
inversion specified in DT after switching to the common parsing
function mmc_of_parse().
Signed-off-by: Dong Aisheng <aisheng.dong@freescale.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
cd-gpios polarity should be changed to GPIO_ACTIVE_LOW and wp-gpios
should be changed to GPIO_ACTIVE_HIGH.
Otherwise, the SD card may not work properly due to the wrong polarity
inversion specified in DT after switching to the common parsing
function mmc_of_parse().
Signed-off-by: Dong Aisheng <aisheng.dong@freescale.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>