Returning to a delay slot, riding an interrupt, had one loose end.
AUX_USER_SP, used for restoring the user mode SP upon RTIE, was not being
set up from the original task's saved value, causing the task to use the
wrong SP and leading to ProtV errors.
The reason being:
- INTERRUPT_EPILOGUE returns to a kernel trampoline, thus it is not expected to restore it
- EXCEPTION_EPILOGUE is not used at all
Fix this by restoring AUX_USER_SP explicitly in the trampoline.
This was broken in the original workaround, but the error scenarios became
considerably rarer since v3.14 due to the following:
1. The Linuxthreads.old based userspace at the time caused many more
   exceptions in delay slots than the current NPTL based one does.
   In fact, with current userspace the error doesn't happen at all.
2. Return from interrupt (delay slot or otherwise) doesn't get exercised much
   after commit 4de0e52867 ("Really Re-enable interrupts to avoid deadlocks"),
   since IRQ_ACTIVE.active being clear means most returns are as if from pure
   kernel mode (even for active interrupts).
   In fact, the issue only showed up in an experimental branch where I was
   tinkering with a reverted 4de0e52867.
Cc: stable@kernel.org # v4.2+
Fixes: 4255b07f2c ("ARCv2: STAR 9000793984: Handle return from intr to Delay Slot")
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Pull livepatching updates from Jiri Kosina:
- RO/NX attribute fixes for patch module relocations from Josh
Poimboeuf. As part of this effort, module.c has been cleaned up as
well and livepatching is piggy-backing on this cleanup. Rusty is OK
with this whole lot going through livepatching tree.
- symbol disambiguation support from Chris J Arges. That series also has
  Reviewed-by: Miroslav Benes <mbenes@suse.cz>
  but it came in only after I'd already pushed out. I didn't want to
  rebase because of that, hence I am mentioning it here.
- symbol lookup fix from Miroslav Benes
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
livepatch: Cleanup module page permission changes
module: keep percpu symbols in module's symtab
module: clean up RO/NX handling.
module: use a structure to encapsulate layout.
gcov: use within_module() helper.
module: Use the same logic for setting and unsetting RO/NX
livepatch: function,sympos scheme in livepatch sysfs directory
livepatch: add sympos as disambiguator field to klp_reloc
livepatch: add old_sympos as disambiguator field to klp_func
Blindly ignoring CIE.version != 1 was a bad idea.
Handling those entries is still desirable when running perf with callgraphing,
where libgcc symbols might show up in a hotspot.
More importantly, basic CIE.version == 3 support already exists in code:
|
| retAddrReg = state.version <= 1 ? *ptr++ : get_uleb128(&ptr, end);
|
The next commit will simply add continue-not-bail for CIE.version != 1.
This reverts commit 323f41f9e7.
This will better reflect its description i.e. "any needed setup..."
and not just do an "IPI request".
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The ARC dwarf unwinder only supports CIE version == 1.
The boot time dwarf sanitizer (part of the binary lookup table constructor)
would simply bail if it saw CIE version == 3, leaving the unwinder with a
NULL lookup table.
It seems the libgcc linked with the kernel does have such entries.
With the fallback linear search removed, and a NULL binary lookup table, the
unwinder fails to generate any stack trace.
So allow graceful ignoring of unsupported CIE entries.
This problem was initially seen in Alexey's setup (and not mine) as he was
using a buildroot built toolchain (libgcc) which doesn't get built with
CFLAGS_FOR_TARGET="-gdwarf-2", which is my default.
Fixes STAR 9000985048: "kernel unwinder broken with stock tools"
Fixes: 2e22502c08 ("ARC: dw2 unwind: Remove falllback linear search thru FDE entries")
Reported-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The fix which removed linear searching of dwarf info (because binary lookup
data always exists) missed the fact that modules don't get the binary
lookup table info. This caused unwinding out of modules to stop working.
So add binary lookup header setup (equivalent of eh_frame_hdr setup) to
modules as well.
While at it, confine the header setup to within unwinder code,
reducing one API exposed out of unwinder code.
Fixes: 2e22502c08 ("ARC: dw2 unwind: Remove falllback linear search thru FDE entries")
Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This was the second perf interrupt issue.
perf sampling on multicore requires the interrupt to be enabled on all cores.
The ARC perf probe code used the helper arc_request_percpu_irq(), which calls
- request_percpu_irq() on core0
- enable_percpu_irq() on all cores (including core0)
genirq requires that the request be made ahead of the enable call.
However if the perf probe happened on a core other than core0 (observed on a
3.18 kernel), enable would get called ahead of request, obviously failing and
leaving the perf interrupt disabled on all such cores.
[ 11.120000] 1 ARC perf : 8 counters (48 bits), 113 conditions, [overflow IRQ support]
[ 11.130000] 1 -----> enable_percpu_irq() IRQ 20 failed
[ 11.140000] 3 -----> enable_percpu_irq() IRQ 20 failed
[ 11.140000] 2 -----> enable_percpu_irq() IRQ 20 failed
[ 11.140000] 0 =====> request_percpu_irq() IRQ 20
[ 11.140000] 0 -----> enable_percpu_irq() IRQ 20
Fix this fragility by calling request_percpu_irq() on whatever core calls
probe (there is no requirement on which core calls this anyway) and then
calling enable on each core.
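A minimal C sketch of the resulting ordering (identifier names such as
arc_pmu_intr and arc_pmu_cpu are illustrative, not necessarily the exact
driver code):

| static void arc_cpu_pmu_irq_init(void *data)    /* runs on every core */
| {
|         int irq = *(int *)data;
|
|         enable_percpu_irq(irq, IRQ_TYPE_NONE);
| }
|
| /* probe path: request once, on whichever core runs probe, then enable everywhere */
| ret = request_percpu_irq(irq, arc_pmu_intr, "ARC perf counters",
|                          this_cpu_ptr(&arc_pmu_cpu));
| if (!ret)
|         on_each_cpu(arc_cpu_pmu_irq_init, &irq, 1);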
Interestingly, this started as an investigation of STAR 9000838902:
"sporadically IRQs enabled on perf prob"
which was about occasional boot spew as request_percpu_irq got called
non-locally (from an IPI), and re-enabled interrupts in the following path
proc_mkdir -> spin_unlock_irq()
which the irq work code didn't like.
| ARC perf : 8 counters (48 bits), 113 conditions, [overflow IRQ support]
|
| BUG: failure at ../kernel/irq_work.c:135/irq_work_run_list()!
| CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.18.10-01127-g285efb8e66d1 #2
|
| Stack Trace:
| arc_unwind_core.constprop.1+0x94/0x104
| dump_stack+0x62/0x98
| irq_work_run_list+0xb0/0xb4
| irq_work_run+0x22/0x3c
| do_IPI+0x74/0x9c
| handle_irq_event_percpu+0x34/0x164
| handle_percpu_irq+0x58/0x78
| generic_handle_irq+0x1e/0x2c
| arch_do_IRQ+0x3c/0x60
| ret_from_exception+0x0/0x8
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: <stable@vger.kernel.org> #4.2+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
arc_request_percpu_irq() is called by all cores to request/enable the percpu
irq. It has some "prep" calls needed by genirq:
- setup percpu devid
- disable IRQ_NOAUTOEN
However, given that enable_percpu_irq() is called anyway, the latter can be
avoided.
We are now left with the irq_set_percpu_devid() quirk, and that too for
ARCompact builds only, since the previous patch updated the ARCv2 intc to do
this in the "right" place, i.e. the irq map function.
By the next release, this will ultimately be fixed for ARCompact as well.
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Makes it easier to handle init vs core cleanly, though the change is
fairly invasive across random architectures.
It simplifies the rbtree code immediately, however, while keeping the
core data together in the same cacheline (now iff the rbtree code is
enabled).
Acked-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Fixes STAR 9000953410: "perf callgraph profiling causing RCU stalls"
| perf record -g -c 15000 -e cycles /sbin/hackbench
|
| INFO: rcu_preempt self-detected stall on CPU
| 1: (1 GPs behind) idle=609/140000000000002/0 softirq=2914/2915 fqs=603
| Task dump for CPU 1:
The in-kernel dwarf unwinder has a fast binary lookup and a fallback linear
search (which iterates through each of the ~11K entries), thus taking 2 orders
of magnitude longer (~3 million cycles vs. 2000). Routines written in hand
assembler lack dwarf info (as we don't support assembler CFI pseudo-ops yet),
fail the unwinder binary lookup, hit the linear search, and nevertheless fail
in the end.
However the linear search is pointless, as the binary lookup tables are created
from the very same data in the first place. It is impossible for the binary
lookup to fail while the linear search succeeds. It is a pure waste of cycles
and is thus removed by this patch.
This manifested as RCU stalls / NMI watchdog splats when running hackbench
under perf with callgraph profiling. The triggering condition was a perf
counter overflowing in a routine lacking dwarf info (like memset), leading to
the pathetic 3 million cycle unwinder slow path, and by the time it returned
new interrupts were already pending (Timer, IPI) and were taken right away.
The original memset didn't make forward progress, the system kept accruing
more interrupts and more unwinder delays in a vicious feedback loop,
ultimately triggering the NMI diagnostic.
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
SYNC in __switch_to() is a historic relic and not needed at all.
- In a UP context it is obviously useless; why would we want to stall
  the core for all updates to the stack memory of t0 to complete before
  loading kernel mode callee registers from t1's stack memory.
- In SMP, there is a potential race in which the outgoing task could be
  concurrently picked for running on a different core, so the writes to
  the stack here need to be visible before the reads from the stack on the
  other core. Peter confirmed that the generic scheduler already has the
  needed barriers (by way of the rq lock), so there is no need for an
  additional arch barrier.
This came up when Noam was trying to replace this SYNC with an EZChip
specific hardware thread scheduling instruction for their platform support.
Link: http://lkml.kernel.org/r/20151102092654.GM17308@twins.programming.kicks-ass.net
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Bus errors from userspace on ARCompact based cores are handled by the core
as a high priority L2 interrupt, but current code treated them as an
exception. Handling an interrupt like an exception is certainly not going to
go unnoticed.
(It worked so far only because we never saw a Bus error from userspace, until
the IPPK guys tested a DDR controller with ECC error detection etc. and hence
needed to explicitly trigger/handle such errors.)
- So move the mem_service exception handler from common code into ARCv2 code.
- In ARCompact code, define mem_service as an L2 interrupt handler which
  just drops down to pure kernel mode and goes off to enqueue SIGBUS
Reported-by: Nelson Pereira <npereira@synopsys.com>
Tested-by: Ana Martins <amartins@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With the previous fixes, all cores now start via the common entry point
@stext, which already calls EARLY_CPU_SETUP for all cores - so there is no
need to invoke it again.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
MCIP now registers its own per-cpu setup routine (for the IPI IRQ request)
using smp_ops.init_irq_cpu().
So there is no need for platforms to do that. This now completely decouples
platforms from MCIP.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Note this is not part of the platform owned static machine_desc, but rather
of the device owned plat_smp_ops (rather misnamed) which an IPI provider or
some such typically defines.
This will help us separate the IPI registration out of the platform specific
init_cpu_smp() into the device specific init_irq_cpu().
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
MCIP now registers its own probe callback with smp_ops.init_early_smp(),
which is called by ARC common code, so there is no need for platforms to do
that. This decouples the platforms and MCIP and helps confine MCIP details
to its own file.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This adds a platform agnostic early SMP init hook which is called on the
Master core before setup_processor():
  setup_arch()
    smp_init_cpus()
      smp_ops.init_early_smp()
    ...
    setup_processor()
How this helps:
- Used for one-time init of certain SMP centric IP blocks, before calling
  setup_processor(), which probes various bits of the core, possibly
  including this block
- Currently platforms need to call this IP block init from their init
  routines, which doesn't make sense as this is specific to the ARC core and
  not the platform, and otherwise requires copy/paste in all of them (and
  hence is a possible point of failure)
  e.g. MCIP init is called from 2 platforms currently (axs10x and sim),
  which will go away once we have this.
This change only adds the hooks but they are empty for now. Next commit
will populate them and remove the explicit init calls from platforms.
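A sketch of where the hook slots in (the exact set of plat_smp_ops members
varies across these patches; the layout below is purely for orientation):

| struct plat_smp_ops {
|         const char      *info;
|         void            (*init_early_smp)(void);   /* the new hook: one-time, before setup_processor() */
|         void            (*init_irq_cpu)(int cpu);   /* related per-cpu IRQ/IPI setup hook */
|         void            (*cpu_kick)(int cpu, unsigned long pc);
|         void            (*ipi_send)(int cpu);
|         void            (*ipi_clear)(int irq);
| };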
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
These are not in use for ARC platforms. Moreover, DT mechanisms exist to
probe them w/o explicit platform calls:
- clocksource drivers can use CLOCKSOURCE_OF_DECLARE()
- intc IRQCHIP_DECLARE() calls + cascading inside DT allow an external
  intc to be probed automatically (see the sketch below)
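A minimal sketch of the DT-declared probing this relies on (the compatible
string and init function are made up for illustration; CLOCKSOURCE_OF_DECLARE()
works analogously for clocksource drivers):

| #include <linux/irqchip.h>
|
| static int __init my_intc_of_init(struct device_node *node,
|                                   struct device_node *parent)
| {
|         /* map registers, set up the irq domain, etc. */
|         return 0;
| }
| IRQCHIP_DECLARE(my_intc, "vendor,my-intc", my_intc_of_init);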
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The reason this was not done so far was the lack of a genuine IPI_IRQ for
ARC700, as we don't have an SMP version of the core yet (which might change
soon thanks to EZChip). Nevertheless, to increase the build coverage, we
need to allow CONFIG_SMP for ARC700 and still be able to run it on a
UP platform (nsim or AXS101) with a UP Device Tree (SMP-on-UP).
The build itself requires some define for IPI_IRQ, and even a dummy
value is fine since that code won't run anyway.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For Run-on-reset, non-masters need to spin-wait. For Halt-on-reset they
can jump to the entry point directly.
Also while at it, made the reset vector handler "the" entry point for the
kernel, including host debugger based boot (which uses the ELF header
entry point).
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For the non Halt-on-reset case, all cores start off simultaneously in @stext.
Master core0 proceeds with kernel boot, while the others spin-wait on
@wake_flag being set by the master once it is ready. So NO hardware assist
is needed for the master to "kick" the others.
This patch moves this soft implementation out of mcip.c (as there is no
hardware assist) into common smp.c.
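Roughly the shape of the soft "kick" that now lives in smp.c (a simplified
sketch; the real wait loop ends by jumping to the secondary entry point):

| static volatile int wake_flag;
|
| static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)  /* master */
| {
|         wake_flag = cpu;                /* tell core @cpu it may proceed */
| }
|
| void arc_platform_smp_wait_to_boot(int cpu)                      /* non-masters */
| {
|         while (wake_flag != cpu)        /* spin until the master kicks us */
|                 ;
|         wake_flag = 0;
| }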
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is done by improving the laddering logic!
Before:
if Exception
goto excep_or_pure_k_ret
if !Interrupt(L2)
goto l1_chk
else
INTERRUPT_EPILOGUE 2
l1_chk:
if !Interrupt(L1) (i.e. pure kernel mode)
goto excep_or_pure_k_ret
else
INTERRUPT_EPILOGUE 1
excep_or_pure_k_ret:
EXCEPTION_EPILOGUE
Now:
if !Interrupt(L1 or L2) (i.e. exception or pure kernel mode)
goto excep_or_pure_k_ret
; guaranteed to be an interrupt
if !Interrupt(L2)
goto l1_ret
else
INTERRUPT_EPILOGUE 2
; by virtue of above, no need to chk for L1 active
l1_ret:
INTERRUPT_EPILOGUE 1
excep_or_pure_k_ret:
EXCEPTION_EPILOGUE
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Historically this was done by the ARC IDE driver, which is long gone.
The IRQ core is pretty robust now and already checks if IRQs are enabled
in hard ISRs. Thus there is no point in checking this in arch code for every
irq enable call.
Further, if some driver does do that - let it bring down the system so we
notice/fix it sooner, rather than covering up for the offender.
This makes local_irq_enable() - for the L1-only case at least - simple enough
that we can inline it.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Most interrupt flow handlers do not use the irq argument. Those few
which use it can retrieve the irq number from the irq descriptor.
Remove the argument.
Search and replace was done with coccinelle and some extra helper
scripts around it. Thanks to Julia for her help!
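An illustrative before/after of a flow handler under this change (the handler
name is made up; where the number is still needed it comes from the
descriptor):

| /* before */
| static void foo_demux(unsigned int irq, struct irq_desc *desc)
| {
|         pr_debug("demux of irq %u\n", irq);
| }
|
| /* after */
| static void foo_demux(struct irq_desc *desc)
| {
|         pr_debug("demux of irq %u\n", irq_desc_get_irq(desc));
| }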
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Pull irq updates from Thomas Gleixner:
"This updated pull request does not contain the last few GIC related
patches which were reported to cause a regression. There is a fix
available, but I let it breed for a couple of days first.
The irq department provides:
- new infrastructure to support non PCI based MSI interrupts
- a couple of new irq chip drivers
- the usual pile of fixlets and updates to irq chip drivers
- preparatory changes for removal of the irq argument from interrupt
flow handlers
- preparatory changes to remove IRQF_VALID"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (129 commits)
irqchip/imx-gpcv2: IMX GPCv2 driver for wakeup sources
irqchip: Add bcm2836 interrupt controller for Raspberry Pi 2
irqchip: Add documentation for the bcm2836 interrupt controller
irqchip/bcm2835: Add support for being used as a second level controller
irqchip/bcm2835: Refactor handle_IRQ() calls out of MAKE_HWIRQ
PCI: xilinx: Fix typo in function name
irqchip/gic: Ensure gic_cpu_if_up/down() programs correct GIC instance
irqchip/gic: Only allow the primary GIC to set the CPU map
PCI/MSI: pci-xgene-msi: Consolidate chained IRQ handler install/remove
unicore32/irq: Prepare puv3_gpio_handler for irq argument removal
tile/pci_gx: Prepare trio_handle_level_irq for irq argument removal
m68k/irq: Prepare irq handlers for irq argument removal
C6X/megamode-pic: Prepare megamod_irq_cascade for irq argument removal
blackfin: Prepare irq handlers for irq argument removal
arc/irq: Prepare idu_cascade_isr for irq argument removal
sparc/irq: Use access helper irq_data_get_affinity_mask()
sparc/irq: Use helper irq_data_get_irq_handler_data()
parisc/irq: Use access helper irq_data_get_affinity_mask()
mn10300/irq: Use access helper irq_data_get_affinity_mask()
irqchip/i8259: Prepare i8259_irq_dispatch for irq argument removal
...
With all features in place, the ARC HS PCT block can now be allowed to be
probed/used.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* split off pmu info into singleton and per-cpu bits
* setup PMU on all cores
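A sketch of the split (member lists trimmed and illustrative, not the exact
driver structs):

| static struct arc_pmu *arc_pmu;                /* singleton: counter/condition info */
|
| struct arc_pmu_cpu {                           /* per-cpu bits, e.g. currently active events */
|         struct perf_event *act_counter[32];    /* 32 == hw max number of counters */
| };
| static DEFINE_PER_CPU(struct arc_pmu_cpu, arc_pmu_cpu);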
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In ARC 700 times the performance counters lacked interrupt support, and so
for ARC we only supported non-sampling events. Put simply, only "perf stat"
was functional.
Now with ARC HS we have interrupt support in the performance counters, which
this change introduces support for.
ARC performance counters behave as follows with regard to interrupt
generation:
[1] A counter counts starting from the value set in the PCT_COUNT register pair
[2] Once the counter reaches the value set in PCT_INT_CNT, an interrupt is raised
The basic setup looks like this:
[1] PCT_COUNT = 0;
[2] PCT_INT_CNT = __limit_value__;
[3] Enable interrupts for that counter and let it run
[4] Let counter reach its limit
[5] Handle interrupt when it happens
Note that the PCT HW block is built into the CPU core and so its interrupt
line (which is basically an OR of all counters' IRQs) is wired directly to the
top-level IRQC. That means that to de-assert the PCT interrupt it is required
to reset the IRQs from all counters that have reached their limit values.
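A minimal C sketch of that setup, using the PCT aux register names from the
driver (the exact sequence, and ORing into the existing interrupt enable mask,
is left out for brevity; treat this as illustrative, not the final handler
code):

| write_aux_reg(ARC_REG_PCT_INDEX, idx);                      /* select counter idx */
| write_aux_reg(ARC_REG_PCT_COUNTL, 0);                       /* [1] start counting from 0 */
| write_aux_reg(ARC_REG_PCT_COUNTH, 0);
| write_aux_reg(ARC_REG_PCT_INT_CNTL, lower_32_bits(limit));  /* [2] overflow limit */
| write_aux_reg(ARC_REG_PCT_INT_CNTH, upper_32_bits(limit));
| write_aux_reg(ARC_REG_PCT_INT_CTRL, 1 << idx);              /* [3] enable this counter's irq */
| /* [5] in the overflow handler: ack the counter to de-assert the line */
| write_aux_reg(ARC_REG_PCT_INT_ACT, 1 << idx);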
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This generalization prepares for support of overflow interrupts.
Hardware event counters on ARC work this way:
Each counter counts from a programmed start value (set in
ARC_REG_PCT_COUNT) to a limit value (set in ARC_REG_PCT_INT_CNT), and
once the limit value is reached the counter generates an interrupt.
Even though this hardware implementation allows for more flexibility, in
the Linux kernel we decided to mimic the behavior of other architectures
this way:
[1] Set the limit value as half of the counter's max value (to allow the
    counter to keep running after reaching its limit, see below for more
    explanation):
---------->8-----------
arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
---------->8-----------
[2] Set the start value as "arc_pmu->max_period - sample_period" and then
    count up to the limit.
Our event counters don't stop on reaching the max value (the one we set in
ARC_REG_PCT_INT_CNT) but continue to count until the kernel explicitly
stops each of them.
Setting the limit to half of the counter's capacity is done to allow
capturing the additional events that occur between the moment the interrupt
is triggered and the moment we actually process the PMU interrupt. That way
we're trying to be more precise.
For example, if we count CPU cycles, we keep track of cycles while
running through the generic IRQ handling code:
[1] We set the counter period as, say, 100_000 events of type "crun"
[2] The counter reaches that limit and raises its interrupt
[3] Once we get into the PMU IRQ handler we read the current counter value from
    ARC_REG_PCT_SNAP and see there something like 105_000.
If counters stopped on reaching the limit value we would miss the
additional 5000 cycles.
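A small C sketch of the period math above (arc_pmu_write_counter() is an
illustrative stand-in for programming the PCT_COUNT register pair; variable
names follow the snippet):

| /* [1] limit at half the counter range, so it can keep counting past it */
| arc_pmu->max_period = (1ULL << counter_size) / 2 - 1ULL;
|
| /* [2] start the counter so it hits the limit after @sample_period events */
| u64 start = arc_pmu->max_period - sample_period;
| arc_pmu_write_counter(idx, start);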
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The number of counters in PCT can never be more than 32 (while
countable conditions could be 100+) for both ARCompact and ARCv2
And while at it update copyright dates.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
When the kernel binary becomes large enough (32M and more), errors
may occur during the final link stage. This happens because
the build system uses short relocations for ARC by default.
This problem can easily be resolved by passing the -mlong-calls
option to GCC, to use long absolute jumps (j) instead of short
relative branches (b).
But there are fragments of pure assembler code which use
branches in inappropriate places and cause a link error because
of relocation overflow.
The first of these fragments is the .fixup insertion in futex.h and
unaligned.c. It inserts code into a separate section (.fixup)
with a branch instruction, which leads to a link error when the
kernel becomes large.
The second of these fragments is the calling of scheduler functions
(common kernel code) from ARC's entry.S. When the kernel
binary becomes large this may lead to a link error because the
scheduler may end up far enough from ARC's code in the final
binary.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This is to work around the llock/scond livelock.
HS38x4 could get into an LLOCK/SCOND livelock in case of multiple overlapping
coherency transactions in the SCU. The exclusive line state keeps rotating
among the contending cores, leading to a never ending cycle. So break the cycle
by deferring the retry of a failed exclusive access (SCOND). The actual delay
needed is a function of the number of contending cores as well as the unrelated
coherency traffic from other cores. To keep the code simple, start off with a
small delay of 1, which suffices for most cases, and in case of contention
double the delay. Eventually the delay is sufficient for the coherency
pipeline to drain, so a subsequent exclusive access will succeed.
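A C-level sketch of the backoff scheme (the real code is inline asm around
LLOCK/SCOND; llsc_try_update() is a purely illustrative stand-in for one
exclusive attempt):

| static inline void set_with_backoff(int *addr, int newval)
| {
|         int i, delay = 1;
|
|         while (!llsc_try_update(addr, newval)) {  /* SCOND failed */
|                 for (i = 0; i < delay; i++)
|                         cpu_relax();              /* defer the retry */
|                 delay *= 2;                       /* double on continued contention */
|         }
| }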
Link: http://lkml.kernel.org/r/1438612568-28265-1-git-send-email-vgupta@synopsys.com
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With the HS 2.1 release, the peripheral space register no longer contains
the uncached space specifics, causing the kernel to panic early on.
So read the newer NON VOLATILE AUX register to get that info.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The irq argument of most interrupt flow handlers is unused or merely
used instead of a local variable. The handlers which need the irq
argument can retrieve the irq number from the irq descriptor.
Search and update was done with coccinelle and the invaluable help of
Julia Lawall.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Vineet Gupta <vgupta@synopsys.com>
Migrate the arc driver to the new 'set-state' interface provided by the
clockevents core; the earlier 'set-mode' interface is marked obsolete
now.
This also enables us to implement callbacks for new states of clockevent
devices, for example: ONESHOT_STOPPED.
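A minimal sketch of the interface shape (the callback/field names come from
the clockevents core; the arc_* identifiers are illustrative):

| static int arc_clkevt_shutdown(struct clock_event_device *evt)
| {
|         /* stop the timer */
|         return 0;
| }
|
| static int arc_clkevt_set_periodic(struct clock_event_device *evt)
| {
|         /* program the periodic tick */
|         return 0;
| }
|
| static struct clock_event_device arc_clkevt = {
|         .features           = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
|         .set_state_shutdown = arc_clkevt_shutdown,      /* was set_mode(CLOCK_EVT_MODE_SHUTDOWN) */
|         .set_state_periodic = arc_clkevt_set_periodic,  /* was set_mode(CLOCK_EVT_MODE_PERIODIC) */
| };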
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The IRQCHIP_DECLARE macro migrated to 'include/linux/irqchip.h'.
See commit 91e20b5040
("irqchip: Move IRQCHIP_DECLARE macro to include/linux/irqchip.h").
This patch removes the inclusions of private header 'drivers/irqchip/irqchip.h'
and if necessary replaces them with inclusions of 'include/linux/irqchip.h'.
Signed-off-by: Joel Porquet <joel@porquet.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
With this, nsim standalone / OSCI have working irq affinity - AXS103 still
needs some work as the IDU is not visible in the intc hierarchy yet!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Pull more vfs updates from Al Viro:
"Assorted VFS fixes and related cleanups (IMO the most interesting in
that part are f_path-related things and Eric's descriptor-related
stuff). UFS regression fixes (it got broken last cycle). 9P fixes.
fs-cache series, DAX patches, Jan's file_remove_suid() work"
[ I'd say this is much more than "fixes and related cleanups". The
file_table locking rule change by Eric Dumazet is a rather big and
fundamental update even if the patch isn't huge. - Linus ]
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (49 commits)
9p: cope with bogus responses from server in p9_client_{read,write}
p9_client_write(): avoid double p9_free_req()
9p: forgetting to cancel request on interrupted zero-copy RPC
dax: bdev_direct_access() may sleep
block: Add support for DAX reads/writes to block devices
dax: Use copy_from_iter_nocache
dax: Add block size note to documentation
fs/file.c: __fget() and dup2() atomicity rules
fs/file.c: don't acquire files->file_lock in fd_install()
fs:super:get_anon_bdev: fix race condition could cause dev exceed its upper limitation
vfs: avoid creation of inode number 0 in get_next_ino
namei: make set_root_rcu() return void
make simple_positive() public
ufs: use dir_pages instead of ufs_dir_pages()
pagemap.h: move dir_pages() over there
remove the pointless include of lglock.h
fs: cleanup slight list_entry abuse
xfs: Correctly lock inode when removing suid and file capabilities
fs: Call security_ops->inode_killpriv on truncate
fs: Provide function telling whether file_remove_privs() will do anything
...
- CONFIG_ARC_UBOOT_SUPPORT to handle arguments passed in r0, r1, r2
- CONFIG_DEVTMPFS_MOUNT for mounting rootfs since it uses an external cpio
  for rootfs
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Ruud Derwig <rderwig@synopsys.com>
[vgupta: folded the Main board DT files for smp/up into one]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This allows platforms to provide their own cpu wakeup routines
as well as IPI send / clear backends, while allowing an SMP kernel w/o
any such backend to build/boot.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Reported by Anton as LTP:munmap01 failing with Illegal Instruction
Exception.
--------------------->8--------------------------------------
mmap2(NULL, 24576, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x200d2000
munmap(0x200d2000, 24576) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x200d2000}
---
potentially unexpected fatal signal 4.
Path: /munmap01
CPU: 0 PID: 61 Comm: munmap01 Not tainted 3.13.0-g5d5c46d9a556 #8
task: 9f1a8000 ti: 9f154000 task.ti: 9f154000
[ECR ]: 0x00020100 => Illegal Insn
[EFA ]: 0x0001354c
[BLINK ]: 0x200515d4
[ERET ]: 0x1354c
@off 0x1354c in [/munmap01]
VMA: 0x00010000 to 0x00018000
[STAT32]: 0x800802c0
...
--------------------->8--------------------------------------
The issue was
1. munmap01 accessed unmapped memory (on purpose) with signal handler
installed for SIGSEGV
2. The faulting instruction happened to be in a Delay Slot
00011864 <main>:
11908: bl.d 13284 <tst_resm>
1190c: stb r16,[r2]
3. The kernel sets up the register file for the signal handler and correctly
   clears the DE bit in the pt_regs->status32 placeholder
4. However, the RESTORE_CALLEE_SAVED_USER macro was not adjusted for ARCv2,
   and it over-writes the above with the orig/stale value of status32
5. After RTIE, the userspace signal handler executes a non-branch
   instruction with the DE bit set, triggering an Illegal Instruction Exception.
Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
These have been register compatible so far. However, ARCv2 mandates a
different pt_regs layout (due to h/w auto save). To keep pt_regs the same
for both, we start by removing the assumption - used mainly for block
copies between the 2 structs in signal handling and ptrace.
Returning from pure kernel mode and from exception mode uses the same code
anyway. Remove one of the duplicate blocks.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Elide the need to re-read ECR in the Trap handler by ensuring that
EXCEPTION_PROLOGUE does that at the very end, just before returning
to the Trap handler.
The ARCv2 EXCEPTION_PROLOGUE already did that, so do the same for ARCompact,
and adjust the common trap handler to use the cached ECR.
This fixes possible link/relocation errors, since restore_regs will be
provided by ISA code, but called from ARC common code.
The .L prefix reassures binutils that it will be in the same compilation
unit.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Earlycon calculates the UART clock as "BASE_BAUD * 16". In the case of ARC,
"BASE_BAUD" is computed dynamically at runtime; basically it is an
alias for arc_early_base_baud(), which in turn just does
"arc_base_baud/16".
The 8250 UART on the AXS/SDP board uses a 33.3MHz clock source, which is
stored in "arc_base_baud" with this change.
An additional compatible string "snps,arc-sdp" is introduced as well
because there are different flavours of AXS boards, but they all share the
same motherboard, so it's possible to re-use the same code for the
motherboard even if the CPU daughterboard changes.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Currently, it doesn't invoke the callback but continues to unwind
Also while at it - simplify the code a bit
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Directly return the result of perf_pmu_register() in
arc_pmu_device_probe() instead of assigning and returning variable ret.
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The static arc_pmu in arch/arc/kernel/perf_event.c is not initialized as
it's shadowed by a local variable of the same name in
arc_pmu_device_probe().
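An illustrative shape of the bug (simplified; the local declaration shadows
the file-scope pointer, which then never gets set):

| struct arc_pmu { int n_counters; /* ...trimmed... */ };
| static struct arc_pmu *arc_pmu;                /* file scope: meant to be set by probe */
|
| static int arc_pmu_device_probe(struct platform_device *pdev)
| {
|         struct arc_pmu *arc_pmu;               /* BUG: shadows the global, which stays NULL */
|
|         arc_pmu = devm_kzalloc(&pdev->dev, sizeof(struct arc_pmu), GFP_KERNEL);
|         if (!arc_pmu)
|                 return -ENOMEM;
|
|         return 0;                              /* the file-scope arc_pmu was never set */
| }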
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Fixes: 03c94fcf95 ("ARC: perf: make @arc_pmu static global")
CC: <stable@vger.kernel.org> # 4.1
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>