Currently, whenever an erratum workaround is enabled, it is applied
whether or not the erratum is relevant for the CPU. This patch changes
that: we check the variant and revision fields in the main ID register
to determine which errata to apply.
We also avoid re-applying the workaround for erratum 460075 if it has
already been applied.
Applying this fix in non-secure mode results in the kernel failing to
boot (or even do anything at all).
This fixes booting on some ARMv7-based platforms which would otherwise
silently fail.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
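As a rough illustration of the check described above (a sketch only: the
helper names are made up, and the field layout is the standard ARMv7 Main
ID Register one, with the variant in bits [23:20] and the revision in
bits [3:0]):

/*
 * Sketch: read the Main ID Register and decide whether an erratum that
 * affects, say, r1p0..r1p2 parts actually applies to this CPU.
 */
static inline unsigned int read_main_id_register(void)
{
	unsigned int midr;

	asm volatile("mrc p15, 0, %0, c0, c0, 0" : "=r" (midr));
	return midr;
}

static int erratum_applies(unsigned int midr,
			   unsigned int min_rev, unsigned int max_rev)
{
	/* Combine variant ("rN") and revision ("pM") into one rNpM value. */
	unsigned int rev = ((midr >> 20) & 0xf) << 4 | (midr & 0xf);

	return rev >= min_rev && rev <= max_rev;
}

For r1p0..r1p2 the bounds would be 0x10 and 0x12 in this encoding.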
* master.kernel.org:/home/rmk/linux-2.6-arm: (45 commits)
[ARM] 5489/1: ARM errata: Data written to the L2 cache can be overwritten with stale data
[ARM] 5490/1: ARM errata: Processor deadlock when a false hazard is created
[ARM] 5487/1: ARM errata: Stale prediction on replaced interworking branch
[ARM] 5488/1: ARM errata: Invalidation of the Instruction Cache operation can fail
davinci: DM644x: NAND: update partitioning
davinci: update DM644x support in preparation for more SoCs
davinci: DM644x: rename board file
davinci: update pin-multiplexing support
davinci: serial: generalize for more SoCs
davinci: DM355 IRQ Definitions
davinci: DM646x: add interrupt number and priorities
davinci: PSC: Clear bits in MDCTL reg before setting new bits
davinci: gpio bugfixes
davinci: add EDMA driver
davinci: timers: use clk_get_rate()
[ARM] pxa/littleton: add missing da9034 touchscreen support
[ARM] pxa/zylonite: configure GPIO18/19 correctly, used by 2 GPIO expanders
[ARM] pxa/zylonite: fix the issue of unused SDATA_IN_1 pin get AC97 not working
[ARM] pxa: make ads7846 on corgi and spitz to sync on HSYNC
[ARM] pxa: remove unused CPU_FREQ_PXA Kconfig symbol
...
This patch is a workaround for the 460075 Cortex-A8 (r2p0) erratum. It
configures the L2 cache auxiliary control register so that the Write
Allocate mode for the L2 cache is disabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
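A rough C-level sketch of such a workaround follows; the register
encoding (the L2 Cache Auxiliary Control Register at CP15 c9, opc1=1,
opc2=2) and the position of the Write Allocate disable bit (bit 22) are
stated here as assumptions rather than quoted from the patch:

/*
 * Sketch of the 460075 workaround: disable L2 Write Allocate.
 * The guard avoids re-applying the workaround if the bit is already set.
 */
static void cortex_a8_disable_l2_write_allocate(void)
{
	unsigned int l2aux;

	asm volatile("mrc p15, 1, %0, c9, c0, 2" : "=r" (l2aux));
	if (!(l2aux & (1 << 22))) {		/* not applied yet */
		l2aux |= 1 << 22;		/* Write Allocate disable */
		asm volatile("mcr p15, 1, %0, c9, c0, 2" : : "r" (l2aux));
	}
}

The not-already-set check mirrors the note in the first commit above
about not re-applying 460075 once it has taken effect.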
This patch adds a workaround for the 458693 Cortex-A8 (r2p0)
erratum. It sets the corresponding bits in the auxiliary control
register so that the PLD instruction becomes a NOP.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds the workaround for the 430973 Cortex-A8 (r1p0..r1p2)
erratum. The BTAC/BTB is now flushed at every context switch.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
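For illustration, flushing the BTAC/BTB amounts to a branch predictor
invalidate-all. A minimal sketch (the CP15 encoding used here is the
architectural BPIALL operation; the helper name is made up):

/*
 * Sketch: invalidate the entire branch predictor array (BTAC/BTB),
 * as would be done on every context switch for erratum 430973.
 */
static inline void flush_branch_predictor(void)
{
	unsigned int zero = 0;

	/* BPIALL: Branch Predictor Invalidate All */
	asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero) : "memory");
	asm volatile("isb" : : : "memory");	/* complete before running new code */
}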
This patch implements the recommended workaround for erratum 411920
(ARM1136, ARM1156, ARM1176).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arm is placing some code in the .text.init section, but it does not
reference that section in its linker scripts.
This change moves this code from the .text.init section to the
.init.text section, which is presumably where it belongs.
Signed-off-by: Tim Abbott <tabbott@mit.edu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
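For readers unfamiliar with how code ends up in .init.text, here is a
simplified sketch (the real __init definition in <linux/init.h> carries
extra annotations; the function name is hypothetical):

/* Simplified: functions tagged __init are emitted into .init.text, which
 * the linker script collects and the kernel later frees. */
#define __init __attribute__((__section__(".init.text")))

static int __init example_board_setup(void)
{
	return 0;
}

Code left in an unreferenced .text.init section instead ends up wherever
the linker's orphan-section placement puts it, which is why moving it
into .init.text matters.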
Because of an ARM1136 erratum (326103), the current v6_early_abort
function needs to set the correct FSR[11] value which determines whether
the data abort was caused by a read or a write. For legacy reasons (bit 10
not handled by software), bit 10 was also cleared, masking out imprecise
aborts on ARMv6 CPUs. This patch removes the clearing of bit 10 of the FSR.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
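To make the bit layout concrete, here is a small sketch of the
distinction the patch relies on (helper names are made up; the bit
positions are the ones described above, with bit 10 acting as the fifth
fault-status bit):

/* FSR bit 11: set when the aborting access was a write. */
static inline int abort_was_write(unsigned long fsr)
{
	return (fsr >> 11) & 1;
}

/* Full 5-bit fault status: bits [3:0] plus bit 10 as FS[4].  Clearing
 * bit 10 would fold imprecise-abort encodings into other statuses. */
static inline unsigned long abort_fault_status(unsigned long fsr)
{
	return (fsr & 0xf) | ((fsr & (1 << 10)) >> 6);
}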
I get random oopses on my Kirkwood board at startup when the L2 cache is
enabled (FYI, I'm using Marvell U-Boot version 3.4.16).
Each boot produces the same oops, but anything that changes the kernel
size (even only changing the initramfs) makes the oops different.
I noticed that nothing invalidates the L2 cache before enabling it;
doing so fixes my problem.
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Compiling recent 2.6.29-rc kernels for ARM gives me the following warning:
arch/arm/mm/mmu.c: In function 'sanity_check_meminfo':
arch/arm/mm/mmu.c:697: warning: comparison between pointer and integer
This is because commit 3fd9825c42
"[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()"
in 2.6.29-rc5-git4 added a comparison of a pointer with PAGE_OFFSET,
which is an integer.
Fixed by casting PAGE_OFFSET to void *.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
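The change itself boils down to a type fix in the comparison; a
standalone illustration (values made up):

#include <stdio.h>

#define PAGE_OFFSET 0xc0000000UL	/* typical ARM value, for illustration */

int main(void)
{
	void *vaddr = (void *)0xbff00000UL;	/* made-up wrapped address */

	/* was: if (vaddr < PAGE_OFFSET) -- pointer vs. integer, warns */
	if (vaddr < (void *)PAGE_OFFSET)
		printf("bank lies below PAGE_OFFSET\n");
	return 0;
}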
Adds support for Faraday FA526 core. This core is used at least by:
Cortina Systems Gemini and Centroid family
Cavium Networks ECONA family
Grain Media GM8120
Pixelplus ImageARM
Prolific PL-1029
Faraday IP evaluation boards
v2:
- move TLB_BTB to separate patch
- update copyrights
Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
"""The Marvell® PXA168 processor is the first in a family of application
processors targeted at mass market opportunities in computing and consumer
devices. It balances high computing and multimedia performance with low
power consumption to support extended battery life, and includes a wealth
of integrated peripherals to reduce overall BOM cost .... """
See http://www.marvell.com/featured/pxa168.jsp for more information.
1. The Marvell Mohawk core is a hybrid of xscale3 and Marvell's own ARM
   core; there are many enhancements, such as instructions for flushing
   the whole D-cache, and so on.
2. The clock code reuses Russell's common clkdev, and basic support has
   been added for UART1/2.
3. Devices are handled a bit differently from the 'mach-pxa' way: the
   platform devices are now dynamically allocated only when necessary
   (i.e. when pxa_register_device() is called). The description of each
   device is stored in an array of 'struct pxa_device_desc' (a rough
   sketch of this scheme follows below). Note that:
   a. this array of device descriptions is marked with __initdata and
      can be freed once the system is fully up
   b. which means board code has to add all needed devices early in
      its initialization function
   c. platform-specific data can now be marked as __initdata, since
      it is allocated and copied by platform_device_add_data()
4. Only the basic UART1/2/3 are added; more devices will come later.
Signed-off-by: Jason Chagas <chagas@marvell.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
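A rough sketch of the registration scheme from point 3 above (only
pxa_register_device() and struct pxa_device_desc are named in the text;
every other name, field and detail below is an assumption):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/platform_device.h>

struct pxa_device_desc {
	const char	*name;
	int		id;
	struct resource	*resources;	/* assumed fields */
	int		num_resources;
};

static struct pxa_device_desc pxa168_device_uart1 __initdata = {
	.name	= "pxa2xx-uart",	/* illustrative driver name */
	.id	= 0,
};

/*
 * Allocate the platform_device only when a board actually asks for it.
 * Platform data is copied by platform_device_add_data(), so the board's
 * copy can itself live in __initdata.
 */
static int __init pxa_register_device(struct pxa_device_desc *d,
				      void *data, size_t size)
{
	struct platform_device *pdev;
	int ret;

	pdev = platform_device_alloc(d->name, d->id);
	if (!pdev)
		return -ENOMEM;

	ret = platform_device_add_resources(pdev, d->resources,
					    d->num_resources);
	if (ret == 0)
		ret = platform_device_add_data(pdev, data, size);
	if (ret == 0)
		ret = platform_device_add(pdev);
	if (ret)
		platform_device_put(pdev);
	return ret;
}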
VIPT aliasing caches have issues of their own which are not yet handled.
Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
kmap/fixmap stuff doesn't take account of cache colouring, etc.
If/when those issues are handled then this could be reverted.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
On xsc3, L2 cache ops are possible only on virtual addresses. The code
is rearranged so as to have a linear progression requiring the fewest
pte setups in the highmem case. To protect the virtual mapping so
created, interrupts are currently disabled for up to a page's worth of
address range at a time.
The interrupt disabling is done in a way that minimizes the overhead
within the inner loop. The alternative would be separate code for the
highmem and non-highmem builds, which is less preferable.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The choice is between looping over the physical range and performing
single cache line operations, or to map highmem pages somewhere, as
cache range ops are possible only on virtual addresses.
Because L2 range ops are much faster, we go with the latter, factoring
out the physical-to-virtual address conversion and using a fixmap entry
for it in the HIGHMEM case.
Possible future optimizations to avoid the pte setup cost:
- do the pte setup for highmem pages only
- determine a threshold for doing a line-by-line processing on physical
addresses when the range is small
Signed-off-by: Nicolas Pitre <nico@marvell.com>
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address. Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped then there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
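A sketch of that logic in C (every function name below is a placeholder;
only the decision structure reflects the description above):

#include <linux/highmem.h>
#include <linux/mm.h>

/* Placeholders standing in for the real L1/outer maintenance routines. */
extern void l1_cache_maint(const void *vaddr, size_t len, int dir);
extern void outer_cache_maint(unsigned long paddr, size_t len, int dir);

static void dma_cache_maint_one_page(struct page *page, unsigned long off,
				     size_t len, int dir)
{
	unsigned long paddr = page_to_phys(page) + off;
	void *vaddr;

	if (!PageHighMem(page)) {
		/* Lowmem is always mapped: handle the L1 cache by vaddr. */
		l1_cache_maint(page_address(page) + off, len, dir);
	} else {
		/* Highmem: only worry about L1 if a kernel mapping exists. */
		vaddr = page_address(page);	/* NULL when not kmapped */
		if (vaddr)
			l1_cache_maint(vaddr + off, len, dir);
	}

	/*
	 * The L2/outer cache is physically indexed and must always be
	 * maintained: unmapped highmem pages can still be L2 cached.
	 */
	outer_cache_maint(paddr, len, dir);
}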
The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel. This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction. Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).
Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.
The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
This is the minimum fixmap interface expected to be implemented by
architectures supporting highmem.
We have a second level page table already allocated and covering
0xfff00000-0xffffffff because the exception vector page is located
at 0xffff0000, and various cache tricks already use some entries above
0xffff0000. Therefore the PTEs covering 0xfff00000-0xfffeffff are free
to be used.
However the XScale cache flushing code already uses virtual addresses
between 0xfffe0000 and 0xfffeffff.
So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.
The Documentation/arm/memory.txt information is updated accordingly,
including the information about the actual top of DMA memory mapping
region which didn't match the code.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
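A minimal sketch of what such a fixmap interface looks like; the
addresses come straight from the text above, while the macro names
follow the usual fixmap convention and are not quoted from the patch:

#ifndef PAGE_SHIFT
#define PAGE_SHIFT	12		/* assuming 4K pages */
#endif

#define FIXADDR_START	0xfff00000UL
#define FIXADDR_TOP	0xfffe0000UL	/* XScale flush area begins here */
#define FIXADDR_SIZE	(FIXADDR_TOP - FIXADDR_START)

#define FIX_KMAP_BEGIN	0
#define FIX_KMAP_END	(FIXADDR_SIZE >> PAGE_SHIFT)

#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)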
The current use of these macros works well when the conversion is
entirely linear. In this case, we can be assured that the following
holds true:
__va(p + s) - s = __va(p)
However, this is not always the case, especially when there is a
non-linear conversion (e.g. when there is a 3.5GB hole in memory).
In this case, if 's' is the size of the region (e.g. PAGE_SIZE) and
'p' is the address of the final page, the above is most definitely not true.
So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses. This patch tweaks the code
to achieve this.
Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
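A small standalone illustration of the failure mode with made-up
numbers, assuming a piecewise-linear phys-to-virt conversion of the kind
a hole in the physical map produces:

#include <stdio.h>

/*
 * Made-up numbers.  Pretend the platform's phys->virt conversion is
 * piecewise linear because a hole in the physical map was squeezed
 * out of the direct mapping:
 *
 *   phys 0x00000000-0x1fffffff -> virt = phys + 0xc0000000
 *   phys 0x60000000-0x7fffffff -> virt = phys + 0x80000000
 */
static unsigned long my_va(unsigned long phys)
{
	if (phys < 0x20000000UL)
		return phys + 0xc0000000UL;
	return phys + 0x80000000UL;
}

int main(void)
{
	unsigned long p = 0x1ffff000UL;	/* final page of the first bank */
	unsigned long s = 0x1000UL;	/* PAGE_SIZE */

	/*
	 * With a purely linear conversion the two values below would be
	 * equal.  Here they differ, because p + s is not a valid
	 * direct-mapped RAM address at all.
	 */
	printf("va(p)       = %#lx\n", my_va(p));
	printf("va(p+s) - s = %#lx\n", my_va(p + s) - s);
	return 0;
}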
This is a fix for the following crash observed in 2.6.29-rc3:
http://lkml.org/lkml/2009/1/29/150
On ARM it doesn't make sense to trace a naked function, because mcount
is then called without the stack and frame pointer being set up, and
there is no chance to restore the lr register to the value it had before
mcount was called.
Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: Steven Rostedt <rostedt@home.goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
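One way to express the resulting rule in code (a sketch of the idea, not
necessarily the exact patch): a naked function must also opt out of
mcount instrumentation.

/* notrace uses GCC's no_instrument_function, which also suppresses the
 * -pg mcount call; combining it with naked keeps mcount out of functions
 * that have no prologue to save lr in. */
#define notrace	__attribute__((no_instrument_function))
#define __naked	__attribute__((naked)) notrace

void __naked example_return(void)	/* hypothetical ARM-only example */
{
	__asm__ __volatile__("mov pc, lr");
}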
This patch adds a Non-cacheable Normal ARM executable memory type,
MT_MEMORY_NONCACHED.
On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
VDD2 voltage domain, and its clock frequency must change along with
voltage. The SDRC clock change code cannot run from SDRAM itself,
since SDRAM accesses are paused during the clock change. So the
current implementation of the DVFS code executes from OMAP on-chip
SRAM, aka "OCM RAM."
If the OCM RAM pages are marked as Cacheable, the ARM cache controller
will attempt to flush dirty cache lines to the SDRC, so it can fill
those lines with OCM RAM instruction code. The problem is that the
SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
subsystem to hang.
TI's original solution to this problem was to mark the OCM RAM
sections as Strongly Ordered memory, thus preventing caching. This is
overkill: since the memory is marked as non-bufferable, OCM RAM writes
become needlessly slow. The idea of "Strongly Ordered SRAM" is also
conceptually disturbing. Previous LAKML list discussion is here:
http://www.spinics.net/lists/arm-kernel/msg54312.html
This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
patch.
Cc: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
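As an illustration of how a platform might use the new type (the
static-mapping API shown is ARM's standard struct map_desc/iotable_init;
the addresses and sizes are made up):

#include <linux/init.h>
#include <linux/kernel.h>
#include <asm/mach/map.h>
#include <asm/memory.h>

static struct map_desc sram_io_desc[] __initdata = {
	{
		.virtual	= 0xfe400000,			/* made-up virtual address */
		.pfn		= __phys_to_pfn(0x40200000),	/* made-up SRAM base */
		.length		= 0x10000,			/* 64K, made up */
		.type		= MT_MEMORY_NONCACHED,
	},
};

static void __init sram_map_io(void)
{
	iotable_init(sram_io_desc, ARRAY_SIZE(sram_io_desc));
}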
The target of the strex instruction used to clear the exclusive monitor
is currently the top of the stack. If the store succeeds, this
corrupts r0 in pt_regs. Use the next stack location instead of
the current one to prevent any chance of corrupting an in-use
address.
Signed-off-by: Seth Forshee <seth.forshee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In the non-highmem case, if two memory banks of 1GB each are provided,
the second bank would evade suppression since its virtual base would
be 0. Fix this by disallowing any memory bank whose virtual base
address is found to be lower than PAGE_OFFSET.
Reported-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
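A standalone model of the described check, with made-up constants
reproducing the two-1GB-bank case (names and values are illustrative
only):

#include <stdio.h>

#define PAGE_OFFSET	0xc0000000UL
#define PHYS_OFFSET	0x00000000UL
#define BANK_SIZE	0x40000000UL	/* two 1GB banks, as in the report */

/* 32-bit direct-map translation, wrapping just like the real one does. */
static unsigned long virt_base(unsigned long phys)
{
	return (unsigned long)(unsigned int)(phys - PHYS_OFFSET + PAGE_OFFSET);
}

int main(void)
{
	unsigned long start = 0;
	int i;

	for (i = 0; i < 2; i++, start += BANK_SIZE) {
		unsigned long vbase = virt_base(start);

		if (vbase < PAGE_OFFSET)
			printf("bank %d: vbase %#lx is below PAGE_OFFSET, drop it\n",
			       i, vbase);
		else
			printf("bank %d: vbase %#lx is usable\n", i, vbase);
	}
	return 0;
}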
When there are multiple L1-aliasing userland mappings of the same physical
page, we currently remap each of them uncached, to prevent VIVT cache
aliasing issues. (E.g. writes to one of the mappings not being immediately
visible via another mapping.) However, when we do this remapping, there
could still be stale data in the L2 cache, and an uncached mapping might
bypass L2 and go straight to RAM. This would cause reads from such
mappings to see old data (until the dirty L2 line is eventually evicted.)
This issue is solved by forcing an L2 cache flush whenever the shared page
is made L1 uncacheable.
Ideally, we would make L1 uncacheable and L2 cacheable as L2 is PIPT. But
Feroceon does not support that combination, and the TEX=5 C=0 B=0 encoding
for XSc3 doesn't appear to work in practice.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tomi Valkeinen reports:
Running with latest linux-omap kernel on OMAP3 SDP board, I have
problem with iounmap(). It looks like iounmap() does not properly
free large areas. Below is a test which fails for me in 6-7 loops.
for (i = 0; i < 200; ++i) {
	vaddr = ioremap(paddr, size);
	if (!vaddr) {
		printk("couldn't ioremap\n");
		break;
	}
	iounmap(vaddr);
}
The changes to vmalloc.c weren't reflected in the ARM ioremap
implementation. Turns out the fix is rather simple.
Tested-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Tested-by: Matt Gerassimoff <mgeras@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Without this, the pxa2xx-flash driver cannot be used as a module.
Reported-by: Chris Lawrence <chrisdl@netspace.net.au>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rename ARM's struct vm_region so that I can introduce my own global version
for NOMMU. It's feasible that the ARM version may wish to use my global one
instead.
The NOMMU vm_region struct defines areas of the physical memory map that are
under mmap. This may include chunks of RAM or regions of memory mapped
devices, such as flash. It is also used to retain copies of file content so
that shareable private memory mappings of files can be made. As such, it may
be compatible with what is described in the banner comment for ARM's vm_region
struct.
Signed-off-by: David Howells <dhowells@redhat.com>
As noted by Akinobu Mita in patch b1fceac2b9,
alloc_bootmem and related functions never return NULL and always return a
zeroed region of memory. Thus a NULL test or memset after calls to these
functions is unnecessary.
This was fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression E;
statement S;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)
@@
expression E,E1;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARMv6 and later CPUs, it is possible for userspace processes to
get stuck on a misaligned load or store due to the "ignore fault"
setting; unlike previous CPUs, retrying the instruction without
the 'A' bit set does not always cause the load to succeed.
We have no real option but to default to fixing up alignment faults
on these CPUs, and having the CPU fix up those misaligned accesses
which it can.
Reported-by: Wolfgang Grandegger <wg@grandegger.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PXA935 has changed its implementor ID from Intel to Marvell; this
patch modifies arch/arm/boot/compressed/head.S and proc-xsc3.S to
support a smooth bootup.
Signed-off-by: Eric Miao <eric.miao@marvell.com>
This patch adds the necessary definitions and Kconfig entries to enable
Cortex-A9 (ARMv7 SMP) tiles on the RealView/EB board.
Signed-off-by: Jon Callan <Jon.Callan@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Update to use the asm/sections.h header rather than declaring these
symbols ourselves. Change __data_start to _data to conform with the
naming found within asm/sections.h.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>