If running in non-secure mode, accessing some l2x0 registers will
fault. So check whether l2x0 is already enabled; if so, do not
access those secure registers.
Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The zero page is read-only, and has its cache state cleared during
boot. No further maintenance for this page is required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
page_address() is a function call rather than a macro, and so:
	if (page_address(page))
		do_something(page_address(page));
results in two calls to this function. This is unnecessary; remove
the duplication.
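A minimal sketch of the deduplicated form, with do_something() standing
in for the hypothetical caller from the example above:

	void *addr = page_address(page);	/* call once */

	if (addr)
		do_something(addr);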
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We had two copies of the wrapper code for VIVT cache flushing - one in
asm/cacheflush.h and one in arch/arm/mm/flush.c. Reduce this down to
one common copy.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Support for the Tauros2 L2 cache controller as used with the PJ1
and PJ4 CPUs.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The Marvell Dove (88AP510) is a high-performance, highly integrated,
low power SoC with high-end ARM-compatible processor (known as PJ4),
graphics processing unit, high-definition video decoding acceleration
hardware, and a broad range of peripherals.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
On ARMv7, it is invalid to map the same physical address multiple times
with different memory types. Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.
However, the DMA mapping code was remapping it as "strongly ordered".
Fix this by introducing
'pgprot_dmacoherent()' which provides the necessary page table bits for
DMA mappings.
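A hedged sketch of such a macro; __pgprot_modify() and the L_PTE_MT_*
memory-type constants are assumptions based on the ARM page-table
headers of this era, not quoted from the patch:

	/* keep a Normal (non-cacheable) memory type rather than
	 * strongly ordered, and make the mapping non-executable */
	#define pgprot_dmacoherent(prot) \
		__pgprot_modify(prot, L_PTE_MT_MASK|L_PTE_EXEC, L_PTE_MT_BUFFERABLE)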
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
It's unnecessary; x86 doesn't do it, and ALSA doesn't require it
anymore.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
This entirely separates the DMA coherent buffer remapping code from
the allocation code, and gets rid of the duplicate copy in the !MMU
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
IXP23xx added support for coherent DMA architectures via a special
case in dma_alloc_coherent(). This is a subset of what goes on
in __dma_alloc(), and there is no reason why dma_alloc_writecombine()
should not be given the same treatment (except, maybe, that IXP23xx
doesn't use it.)
We can better deal with this by moving the arch_is_coherent() test
inside __dma_alloc() and killing the code duplication.
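A hedged sketch of the resulting shape of __dma_alloc(); the
__dma_alloc_buffer() and __dma_alloc_remap() helpers are assumed names
from this refactoring series:

static void *
__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
	    gfp_t gfp, pgprot_t prot)
{
	struct page *page;

	size = PAGE_ALIGN(size);
	page = __dma_alloc_buffer(dev, size, gfp);
	if (!page)
		return NULL;

	*handle = page_to_dma(dev, page);

	/* coherent arches need no uncached remapping */
	if (arch_is_coherent())
		return page_address(page);

	return __dma_alloc_remap(page, size, gfp, prot);
}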
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
No point wrapping the contents of this function with #ifdef CONFIG_MMU
when we can place it and the core_initcall() entirely within the
existing conditional block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
We effectively have three implementations of dma_free_coherent() mixed up
in the code; the incoherent MMU, coherent MMU and noMMU versions.
The coherent MMU and noMMU versions are actually functionally identical.
The incoherent MMU version is almost the same, but with the additional
step of unmapping the secondary mapping.
Separate out this additional step into __dma_free_remap() and simplify
the resulting dma_free_coherent() code.
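A hedged sketch of the simplified result; __dma_free_buffer() and
dma_to_page() are assumed helper names, not taken from the text above:

void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		       dma_addr_t handle)
{
	WARN_ON(irqs_disabled());

	size = PAGE_ALIGN(size);

	/* the incoherent MMU case additionally removes the
	 * secondary (uncached) mapping */
	if (!arch_is_coherent())
		__dma_free_remap(cpu_addr, size);

	__dma_free_buffer(dma_to_page(dev, handle), size);
}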
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
the memory. dma_alloc_coherent() is expected to work with a granularity
of a page, so this is wrong. Fix it by using the helper functions now
provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
The coherent architecture dma_alloc_coherent was using kmalloc/kfree to
manage the memory. dma_alloc_coherent() is expected to work with a
granularity of a page, so this is wrong. Fix it by using the helper
functions now provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Samsung S5PC1xx SoCs are based on the ARM Cortex-A8, which has a
64-byte L1 cache line size. Enable proper handling of the L1 cache
on these SoCs.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If running in non-secure mode, enabling this register will fault.
Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Acked-by: Srinidhi Kasagar <srinidhikasagar@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mapping the same memory using two different attributes (memory
type, shareability, cacheability) is unpredictable. During boot,
we encounter such a situation while updating the kernel's page
tables, which can leave dirty lines in the cache that are
subsequently missed. This causes stack corruption, and therefore
a crash.
Therefore, ensure that the shared and cacheability settings
match the configuration that will be used later; this, together
with the restriction in early_cachepolicy(), ensures that we won't
create a mismatch during boot.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Errata 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present. This is not
limited to the operations in flush.c, but occurs elsewhere too. Place the
workaround in the already existing __flush_icache_all() function
instead.
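A hedged sketch of the resulting helper; the v6_icache_inval_all symbol
is an assumption modeled on the existing erratum workaround code:

static inline void __flush_icache_all(void)
{
#ifdef CONFIG_ARM_ERRATA_411920
	extern void v6_icache_inval_all(void);
	v6_icache_inval_all();	/* erratum-safe full I-cache invalidate */
#else
	asm("mcr	p15, 0, %0, c7, c5, 0	@ invalidate I-cache\n"
	    :
	    : "r" (0));
#endif
}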
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds a better sched_clock() to the IOP platform,
implemented using its new clocksource support.
Tested on n2100, compile-tested for all plat-iop machines.
[dan.j.williams@intel.com: allow early cp6 access]
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When SPARSEMEM_EXTREME is enabled, memory_present() wants to use bootmem
to allocate data structures. However, we call memory_present() after
declaring memory to bootmem, but before we've reserved areas.
This leads to sparsemem data structures being overwritten later in the
kernel's initialization (when slab initializes.)
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We were using GFP_DMA for masks other than 0xffffffff, which is
wrong when some masks are initialized to 0xffffffffffffffff.
This caused such masks to obtain memory from the precious DMA
pool.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Remove the URL listed for Maverick EP9312 since it is not available
and modify the help text appropriately.
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Ryan Mallon <ryan@bluewatersys.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM, update_mmu_cache() does dcache flush for a page only if
it has a kernel mapping (page_mapping(page) != NULL). The correct
behavior would be to force the flush based on dcache_dirty bit only.
One of the cases where the present logic would be a problem is when
a RAM-based block device[1] is used as a swap disk. In this case,
we would have in-memory data corruption as shown in the steps below:

do_swap_page()
{
	- Allocate a new page (if not already in swap cache)
	- Issue read from swap disk
	- Block driver issues flush_dcache_page()
	- flush_dcache_page() simply sets the PG_dcache_dirty bit and does
	  not actually issue a flush since this page has no user space
	  mapping yet.
	- Now, if the swap disk is almost full, this newly read page is
	  removed from the swap cache and the corresponding swap slot is
	  freed.
	- Map this page anonymously in user space.
	- update_mmu_cache()
		- Since this page does not have a kernel mapping (it's not
		  in the page/swap cache and is mapped anonymously), it
		  does not issue a dcache flush even if the dcache_dirty
		  bit was set by flush_dcache_page() above.

	<user now gets stale data since the dcache was never flushed>
}

The same problem exists on MIPS too.
[1] example:
- brd (RAM based block device)
- ramzswap (RAM based compressed swap device)
Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Seemingly this support was missed when highmem was added, so
DEBUG_HIGHMEM wouldn't have checked the kmap_atomic type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If sparsemem is enabled, the start_pfn passed to the free_memmap()
function corresponds to an area of memory not known to the kernel and
pfn_to_page returns a wrong value. The (start_pfn - 1), however, is
known to the kernel.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is needed because applications using the sys_cacheflush system call
can pass a memory range which isn't mapped yet even though the
corresponding vma is valid. The patch also adds unwinding annotations
for correct backtraces from the coherent_user_range() functions.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
According to the following code in arch/arm/mm/fault.c, page faults
from kernel mode are invalid if mmap_sem is already held and there is
no exception handler defined for the faulting instruction:
	/*
	 * As per x86, we may deadlock here.  However, since the kernel only
	 * validly references user space from well defined areas of the code,
	 * we can bug out early if this is from code which shouldn't.
	 */
	if (!down_read_trylock(&mm->mmap_sem)) {
		if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
			goto no_context;
Since mmap_sem can be held at arbitrary times by another thread this
also means that any page faults from kernel mode are invalid if no
exception handler is defined for them, regardless whether mmap_sem is
held at the time of fault.
To more easily detect code that can trigger the above error, add a
check also for the case where mmap_sem is acquired. As this has an
overhead, make it a VM debug check.
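A hedged sketch of how the added check might sit next to the quoted
code; the else-branch structure is reconstructed, not quoted:

	if (!down_read_trylock(&mm->mmap_sem)) {
		if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
			goto no_context;
		down_read(&mm->mmap_sem);
	} else {
		/*
		 * mmap_sem may be held by another thread at fault time,
		 * so the same "no exception handler" rule applies even
		 * when the trylock succeeds.  Catch offenders here.
		 */
#ifdef CONFIG_DEBUG_VM
		if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
			goto no_context;
#endif
	}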
Signed-off-by: Imre Deak <imre.deak@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Steven Walter <stevenrwalter@gmail.com> writes:
> I've been tracking down an instance of userspace data corruption,
> and I believe I have found a window during fork where data can be
> lost. The corruption is occurring on an ARMv5 system with VIVT
> caches. Here's the scenario in question. Thread A is forking,
> Thread B is running in userspace:
>
> Thread A: flush_cache_mm() (dup_mmap)
> Thread B: writes to a page in the above mm
> Thread A: pte_wrprotect() the above page (copy_one_pte)
> Thread B: writes to the same page again
>
> During thread B's second write, he'll take a fault and enter the
> do_wp_page() case. We'll end up calling copy_page(), which notably
> uses the kernel virtual addresses for the old and new pages. This
> means that the new page does not necessarily have the data from the
> first write. Now there are two conflicting copies of the same
> cache-line in dcache. If the userspace cache-line flushes before
> the kernel cache-line, we lose the changes made during the first
> write. do_wp_page does call flush_dcache_page on the newly-copied
> page, but there's still a window where the CPU could flush the
> userspace cache-line before then.
Resolve this by flushing the user mapping before copying the page
on processors with a writeback VIVT cache.
Note: this does have a performance impact, and so needs further
consideration before being merged - can we optimize out some of
the cache flushes if, eg, we know that the page isn't yet mapped?
Thread: <e06498070903061426o5875ad13hc6328aa0d3f08ed7@mail.gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Our copy_user_highpage() implementations may require cache maintenance.
Ensure that implementations have all the necessary details to perform
this maintenance.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, on ARMv6 and ARMv7, if an application tries to execute
code (or garbage) on a non-executable page, it hangs. This is caused
by incorrect prefetch abort handling: currently, every prefetch abort
is processed as a translation fault.
To fix this we have to analyze the instruction fault status register
to figure out the reason for the abort and process it accordingly.
To make IFSR values distinguishable from DFSR values, we set bit 31,
which is reserved in both IFSR and DFSR.
This patch also tries to protect against future hangs on unexpected
exceptions. An application will be killed if an unexpected exception
type is received.
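A hedged sketch of the bit-31 tagging; the FSR_LNX_PF name matches the
flag later mainline kernels use for this, but is an assumption here:

/* software-defined flag: bit 31 is reserved in both IFSR and DFSR */
#define FSR_LNX_PF	(1 << 31)

asmlinkage void __exception
do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs)
{
	/* tag the status so the fault code knows this is a prefetch abort */
	do_page_fault(addr, ifsr | FSR_LNX_PF, regs);
}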
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The instruction fault status register, IFSR, was introduced on ARMv6 to
provide status information about the last instruction fault. It is
needed for proper prefetch abort handling.
Now we have three prefetch abort models:
* legacy - for CPUs before ARMv6. They provide neither IFSR
  nor IFAR. We simulate IFSR with a section translation fault
  status for them to generalize the code;
* ARMv6 - provides IFSR, but not IFAR;
* ARMv7 - provides both IFSR and IFAR.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 1522ac3ec9
("Fix virtual to physical translation macro corner cases")
breaks the end of memory check in valid_phys_addr_range().
The modified expression results in the apparent /dev/mem size
being 2 bytes smaller than what it actually is.
This patch reworks the expression to correctly check the address,
while maintaining use of a valid address to __pa().
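A hedged sketch of the reworked check, reconstructed from the
description rather than quoted from the patch:

static inline int valid_phys_addr_range(unsigned long addr, size_t size)
{
	if (addr < PHYS_OFFSET)
		return 0;
	/* (high_memory - 1) is a valid direct-mapped address for __pa() */
	if (addr + size > __pa(high_memory - 1) + 1)
		return 0;
	return 1;
}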
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We suffer an unfortunate combination of "features" which makes highmem
support on platforms without hardware TLB maintenance broadcast difficult:
- we need kmap_high_get() support for DMA cache coherence
- this requires kmap_high() to take a spinlock with IRQs disabled
- kmap_high() occasionally calls flush_all_zero_pkmaps() to clear
out old mappings
- flush_all_zero_pkmaps() calls flush_tlb_kernel_range(), which
on s/w IPI'd systems eventually calls smp_call_function_many()
- smp_call_function_many() must not be called with IRQs disabled:
WARNING: at kernel/smp.c:380 smp_call_function_many+0xc4/0x240()
Modules linked in:
Backtrace:
[<c00306f0>] (dump_backtrace+0x0/0x108) from [<c0286e6c>] (dump_stack+0x18/0x1c)
r6:c007cd18 r5:c02ff228 r4:0000017c
[<c0286e54>] (dump_stack+0x0/0x1c) from [<c0053e08>] (warn_slowpath_common+0x50/0x80)
[<c0053db8>] (warn_slowpath_common+0x0/0x80) from [<c0053e50>] (warn_slowpath_null+0x18/0x1c)
r7:00000003 r6:00000001 r5:c1ff4000 r4:c035fa34
[<c0053e38>] (warn_slowpath_null+0x0/0x1c) from [<c007cd18>] (smp_call_function_many+0xc4/0x240)
[<c007cc54>] (smp_call_function_many+0x0/0x240) from [<c007cec0>] (smp_call_function+0x2c/0x38)
[<c007ce94>] (smp_call_function+0x0/0x38) from [<c005980c>] (on_each_cpu+0x1c/0x38)
[<c00597f0>] (on_each_cpu+0x0/0x38) from [<c0031788>] (flush_tlb_kernel_range+0x50/0x58)
r6:00000001 r5:00000800 r4:c05f3590
[<c0031738>] (flush_tlb_kernel_range+0x0/0x58) from [<c009c600>] (flush_all_zero_pkmaps+0xc0/0xe8)
[<c009c540>] (flush_all_zero_pkmaps+0x0/0xe8) from [<c009c6b4>] (kmap_high+0x8c/0x1e0)
[<c009c628>] (kmap_high+0x0/0x1e0) from [<c00364a8>] (kmap+0x44/0x5c)
[<c0036464>] (kmap+0x0/0x5c) from [<c0109dfc>] (cramfs_readpage+0x3c/0x194)
[<c0109dc0>] (cramfs_readpage+0x0/0x194) from [<c0090c14>] (__do_page_cache_readahead+0x1f0/0x290)
[<c0090a24>] (__do_page_cache_readahead+0x0/0x290) from [<c0090ce4>] (ra_submit+0x30/0x38)
[<c0090cb4>] (ra_submit+0x0/0x38) from [<c0089384>] (filemap_fault+0x3dc/0x438)
r4:c1819988
[<c0088fa8>] (filemap_fault+0x0/0x438) from [<c009d21c>] (__do_fault+0x58/0x43c)
[<c009d1c4>] (__do_fault+0x0/0x43c) from [<c009e8cc>] (handle_mm_fault+0x104/0x318)
[<c009e7c8>] (handle_mm_fault+0x0/0x318) from [<c0033c98>] (do_page_fault+0x188/0x1e4)
[<c0033b10>] (do_page_fault+0x0/0x1e4) from [<c0033ddc>] (do_translation_fault+0x7c/0x84)
[<c0033d60>] (do_translation_fault+0x0/0x84) from [<c002b474>] (do_DataAbort+0x40/0xa4)
r8:c1ff5e20 r7:c0340120 r6:00000805 r5:c1ff5e54 r4:c03400d0
[<c002b434>] (do_DataAbort+0x0/0xa4) from [<c002bcac>] (__dabt_svc+0x4c/0x60)
...
So we disable highmem support on these systems.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'. This made the casts to 'unsigned long' in most callers superfluous,
so remove them.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ARMv6 introduces non-executable mappings, which can cause prefetch aborts
when an attempt is made to execute from such a mapping. Currently, this
causes us to loop in the page fault handler since we don't correctly
check for proper permissions.
Fix this by checking that VMAs have VM_EXEC set for prefetch aborts.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since we get notified separately about prefetch aborts, which may be
permission faults, we need to check for appropriate access permissions
when handling a fault. This patch prepares us for doing this by
separating out the access error checking.
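A hedged sketch of the separated check, including the VM_EXEC test for
prefetch aborts from the previous patch; FSR_WRITE and the exact flag
values are assumptions:

static int access_error(unsigned int fsr, struct vm_area_struct *vma)
{
	unsigned int mask = VM_READ | VM_WRITE | VM_EXEC;

	if (fsr & FSR_WRITE)
		mask = VM_WRITE;
	if (fsr & FSR_LNX_PF)
		mask = VM_EXEC;	/* prefetch abort: need execute permission */

	/* non-zero means the access was not permitted */
	return !(vma->vm_flags & mask);
}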
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds the TCM interface to Linux. When active, it will
detect and report TCM memories and sizes early in boot if
present, introduce generic TCM memory handling, provide a
generic TCM memory pool and select TCM memory for the U300
platform.
See Documentation/arm/tcm.txt for documentation.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently the kernel believes that all ARM CPUs have L1_CACHE_SHIFT == 5.
This is not true, at least for CPUs based on the Cortex-A8.
The list of CPUs with a cache line size != 32 bytes should be expanded later.
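A hedged sketch of making the shift configurable; the
CONFIG_ARM_L1_CACHE_SHIFT symbol is an assumption modeled on how
mainline eventually solved this:

/* asm/cache.h: derive the line size from a Kconfig-selected shift,
 * e.g. 6 (64 bytes) for Cortex-A8 based SoCs, 5 (32 bytes) otherwise */
#define L1_CACHE_SHIFT	CONFIG_ARM_L1_CACHE_SHIFT
#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)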
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Due to problems at cam.org, my nico@cam.org email address is no longer
valid. From now on, nico@fluxnic.net should be used instead.
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
On OMAP platforms, some people want to segment the memory
between the kernel and a separate application such that there is a hole
in the middle of the memory as far as Linux is concerned. However,
they want to be able to mmap() the hole.
This currently causes problems, because update_mmu_cache() thinks that
there are valid struct pages for the "hole". Fix this by making
pfn_valid() slightly more expensive, by checking whether the PFN is
contained within the meminfo array.
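A hedged sketch of a meminfo-based pfn_valid(); the bank_pfn_start()/
bank_pfn_end() accessors are assumptions from the membank API of this
era, and a real implementation might binary-search instead:

int pfn_valid(unsigned long pfn)
{
	struct meminfo *mi = &meminfo;
	int i;

	/* valid only if the pfn falls inside a registered memory bank */
	for (i = 0; i < mi->nr_banks; i++)
		if (pfn >= bank_pfn_start(&mi->bank[i]) &&
		    pfn < bank_pfn_end(&mi->bank[i]))
			return 1;
	return 0;
}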
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
Let's suppose a highmem page is kmap'd with kmap(). A pkmap entry is
used, the page mapped to it, and the virtual cache is dirtied. Then
kunmap() is used which does virtually nothing except for decrementing a
usage count.
Then, let's suppose the _same_ page gets mapped using kmap_atomic().
It is therefore mapped onto a fixmap entry instead, which has a
different virtual address unaware of the dirty cache data for that page
sitting in the pkmap mapping.
Fortunately it is easy to know if a pkmap mapping still exists for that
page and use it directly with kmap_atomic(), thanks to kmap_high_get().
And actual testing with a printk in the added code path shows that this
condition is actually met *extremely* frequently. Seems that we've been
quite lucky that things have worked so well with highmem so far.
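A hedged sketch of the reuse path; it assumes kmap_high_get() returns
the existing pkmap virtual address (with its count held) or NULL, and
kmap_atomic_to_fixmap() is a hypothetical stand-in for the pre-existing
fixmap code:

void *kmap_atomic(struct page *page, enum km_type type)
{
	void *kmap;

	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);

	/* reuse a live pkmap mapping (and its dirty cache lines)
	 * instead of creating an aliasing fixmap entry */
	kmap = kmap_high_get(page);
	if (kmap)
		return kmap;

	return kmap_atomic_to_fixmap(page, type);	/* hypothetical helper */
}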
Cc: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In xdr_partial_copy_from_skb() there is that sequence:
	kaddr = kmap_atomic(*ppage, KM_SKB_SUNRPC_DATA);
	[...]
	flush_dcache_page(*ppage);
	kunmap_atomic(kaddr, KM_SKB_SUNRPC_DATA);
Mixing flush_dcache_page() and kmap_atomic() is a bit odd,
especially since kunmap_atomic() must deal with cache issues
already. OTOH the non-highmem case must use flush_dcache_page()
as kunmap_atomic() becomes a no op with no cache maintenance.
Problem is that with highmem the implementation of kmap_atomic()
doesn't set page->virtual, and page_address(page) returns 0 in
that case. Here flush_dcache_page() calls __flush_dcache_page()
which calls __cpuc_flush_dcache_page(page_address(page)) resulting
in a kernel oops.
None of the kmap_atomic() implementations uses set_page_address().
Hence we can assume page_address() is always expected to return 0 in
that case. Let's conditionally call __cpuc_flush_dcache_page() only
when the page address is non-zero, and perform that test only when
highmem is configured.
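A hedged sketch of the guarded flush; the surrounding function is
abbreviated and its exact structure is an assumption:

static void __flush_dcache_page(struct address_space *mapping,
				struct page *page)
{
	void *addr = page_address(page);

#ifdef CONFIG_HIGHMEM
	/*
	 * kmap_atomic() doesn't set page->virtual, so addr can be
	 * NULL here; kunmap_atomic() already handled the L1 cache.
	 */
	if (addr)
#endif
		__cpuc_flush_dcache_page(addr);

	/* ... aliasing VIPT handling elided ... */
}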
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add the ARM implementation of highpte, which allows PTE tables to be
placed in highmem. Unfortunately, we do not offer highpte support
when support for L2 cache is enabled.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, highmem is selectable, and you can request an increased
vmalloc area. However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.
The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6 for details on
this).
We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.
Currently, memory nodes which are completely located in high memory
are not supported. This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.
[ A similar patch was meant to be merged before commit 5f0fbf9eca
and be available in Linux v2.6.30, however some git rebase screw-up
of mine dropped the first commit of the series, and that goofage
escaped testing somehow as well. -- Nico ]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
The patch adds the necessary ifdefs around functions that only make
sense when the MMU is enabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
These old symbols are meaningless now that we have memory type
support implemented. The entire memory type field needs to be
modified rather than just a few bits twiddled.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now required for libsas:
Kernel: arch/arm/boot/Image is ready
Kernel: arch/arm/boot/zImage is ready
Building modules, stage 2.
MODPOST 1096 modules
ERROR: "xscale_flush_kern_dcache_page" [drivers/scsi/libsas/libsas.ko] undefined!
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Alessandro Rubini <rubini@unipv.it>
Acked-by: Andrea Gallo <andrea.gallo@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (49 commits)
[ARM] idle: clean up pm_idle calling, obey hlt_counter
[ARM] S3C: Fix gpio-config off-by-one bug
[ARM] S3C64XX: add to_irq() support for EINT() GPIO
[ARM] S3C64XX: clock.c: fix typo in usb-host clock ctrlbit
[ARM] S3C64XX: fix HCLK gate defines
[ARM] Update mach-types
[ARM] wire up rt_tgsigqueueinfo and perf_counter_open
OMAP2 clock/powerdomain: off by 1 error in loop timeout comparisons
OMAP3 SDRC: set FIXEDDELAY when disabling SDRC DLL
OMAP3: Add support for DPLL3 divisor values higher than 2
OMAP3 SRAM: convert SRAM code to use macros rather than magic numbers
OMAP3 SRAM: add more comments on the SRAM code
OMAP3 clock/SDRC: program SDRC_MR register during SDRC clock change
OMAP3 clock: add a short delay when lowering CORE clk rate
OMAP3 clock: initialize SDRC timings at kernel start
OMAP3 clock: remove wait for DPLL3 M2 clock to stabilize
[ARM] Add old Feroceon support to compressed/head.S
[ARM] 5559/1: Limit the stack unwinding caused by a kthread exit
[ARM] 5558/1: Add extra checks to ARM unwinder to avoid tracing corrupt stacks
[ARM] 5557/1: Discard some ARM.ex*.*exit.text sections when !HOTPLUG or !HOTPLUG_CPU
...
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
From: Min Zhang <mzhang@mvista.com>
Add alignment fault fixup support for 32-bit Thumb-2 LDM, LDRD, POP,
PUSH, STM and STRD instructions. Alignment fault fixup support for
the remaining 32-bit Thumb-2 load/store instruction cases is not
included since ARMv6 and later processors include hardware support
for loads and stores of unaligned words and halfwords.
Signed-off-by: Min Zhang <mzhang@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (417 commits)
MAINTAINERS: EB110ATX is not ebsa110
MAINTAINERS: update Eric Miao's email address and status
fb: add support of LCD display controller on pxa168/910 (base layer)
[ARM] 5552/1: ep93xx get_uart_rate(): use EP93XX_SYSCON_PWRCNT and EP93XX_SYSCON_PWRCN
[ARM] pxa/sharpsl_pm: zaurus needs generic pxa suspend/resume routines
[ARM] 5544/1: Trust PrimeCell resource sizes
[ARM] pxa/sharpsl_pm: cleanup of gpio-related code.
[ARM] pxa/sharpsl_pm: drop set_irq_type calls
[ARM] pxa/sharpsl_pm: merge pxa-specific code into generic one
[ARM] pxa/sharpsl_pm: merge the two sharpsl_pm.c since it's now pxa specific
[ARM] sa1100: remove unused collie_pm.c
[ARM] pxa: fix the conflicting non-static declarations of global_gpios[]
[ARM] 5550/1: Add default configure file for w90p910 platform
[ARM] 5549/1: Add clock api for w90p910 platform.
[ARM] 5548/1: Add gpio api for w90p910 platform
[ARM] 5551/1: Add multi-function pin api for w90p910 platform.
[ARM] Make ARM_VIC_NR depend on ARM_VIC
[ARM] 5546/1: ARM PL022 SSP/SPI driver v3
ARM: OMAP4: SMP: Update defconfig for OMAP4430
ARM: OMAP4: SMP: Enable SMP support for OMAP4430
...
Currently, whenever an erratum workaround is enabled, it will be
applied whether or not the erratum is relevant for the CPU. This
patch changes this - we check the variant and revision fields in the
main ID register to determine which errata to apply.
We also avoid re-applying erratum 460075 if it has already been applied.
Applying this fix in non-secure mode results in the kernel failing to
boot (or even do anything.)
This fixes booting on some ARMv7 based platforms which otherwise
silently fail.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:
- setting of the BE-8 mode via the CPSR.E register for both kernel and
user threads
- big-endian page table walking
- REV is used to byte-reverse instructions read from memory during fault
  processing, as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
to the final linking stage to convert the instructions to
little-endian
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch adds a comment to the proc-v7.S file for the setting of the
PRRR and NMRR registers. It also sets the PRRR[13:12] bits to 0
(corresponding to the reserved TEX[0]CB encoding 110) to be consistent
with the documentation.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The SWP instruction has been deprecated starting with the ARMv6
architecture. On ARMv7 processors with the multiprocessor extensions
(like Cortex-A9), this instruction is disabled by default but it can be
enabled by setting bit 10 in the System Control register. Note that
setting this bit is safe even if the ARMv7 processor has the SWP
instruction enabled by default.
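A hedged sketch of setting bit 10 from C; the actual change would live
in the assembly CPU setup path, and v7_enable_swp is a hypothetical
name:

static inline void v7_enable_swp(void)
{
	unsigned long sctlr;

	asm volatile("mrc	p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
	sctlr |= 1 << 10;	/* SCTLR.SW: enable the SWP instruction */
	asm volatile("mcr	p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
}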
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
There are additional bits to set for the ARMv7 SMP extensions in the
TTBR registers. The IRGN bits order is counter-intuitive but it allows
software built for the ARMv7 base architecture to run on an
implementation with the MP extensions.
Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
ARMv7 SMP hardware can handle the TLB maintenance operations
broadcasting in hardware so that the software can avoid the costly IPIs.
This patch adds the necessary checks (the MMFR3 CPUID register) to avoid
the broadcasting if already supported by the hardware.
(this patch is based on the work done by Tony Thompson @ ARM)
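A hedged sketch of the CPUID check; the maintenance-broadcast field in
ID_MMFR3 bits [15:12] follows the ARMv7 ARM, while the helper name is
an assumption:

static inline int tlb_ops_need_broadcast(void)
{
	unsigned int mmfr3;

	/* ID_MMFR3: CRn=c0, opc1=0, CRm=c1, opc2=7 */
	asm("mrc	p15, 0, %0, c0, c1, 7" : "=r" (mmfr3));

	/* maintenance-broadcast field < 2: the hardware does not
	 * broadcast TLB operations, so fall back to IPIs */
	return ((mmfr3 >> 12) & 0xf) < 2;
}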
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This is a RealView platform supporting core tiles with ARM11MPCore,
Cortex-A8 or Cortex-A9 (multicore) processors. It has support for MMC,
CompactFlash, PCI-E.
Signed-off-by: Colin Tuckley <colin.tuckley@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This patch provides device drivers that have an OMAP IOMMU with
address mapping APIs between the device virtual address (iommu),
the physical address and the MPU virtual address.
There are 4 possible patterns for iommu virtual address (iova/da) mapping:
	  | iova/                mapping         iommu_                page
	  |  da     pa     va    (d)-(p)-(v)     function              type
	---------------------------------------------------------------------
	1 |  c      c      c      1 - 1 - 1      _kmap() / _kunmap()    s
	2 |  c      c,a    c      1 - 1 - 1      _kmalloc()/ _kfree()   s
	3 |  c      d      c      1 - n - 1      _vmap() / _vunmap()    s
	4 |  c      d,a    c      1 - n - 1      _vmalloc()/ _vfree()   n*

'iova': device iommu virtual address
'da':   alias of 'iova'
'pa':   physical address
'va':   mpu virtual address
'c':    contiguous memory area
'd':    discontiguous memory area
'a':    anonymous memory allocation
'()':   optional feature
'n':    a normal page (4KB) size is used.
's':    multiple iommu superpage (16MB, 1MB, 64KB, 4KB) sizes are used.
'*':    not yet, but feasible.
Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
* master.kernel.org:/home/rmk/linux-2.6-arm: (45 commits)
[ARM] 5489/1: ARM errata: Data written to the L2 cache can be overwritten with stale data
[ARM] 5490/1: ARM errata: Processor deadlock when a false hazard is created
[ARM] 5487/1: ARM errata: Stale prediction on replaced interworking branch
[ARM] 5488/1: ARM errata: Invalidation of the Instruction Cache operation can fail
davinci: DM644x: NAND: update partitioning
davinci: update DM644x support in preparation for more SoCs
davinci: DM644x: rename board file
davinci: update pin-multiplexing support
davinci: serial: generalize for more SoCs
davinci: DM355 IRQ Definitions
davinci: DM646x: add interrupt number and priorities
davinci: PSC: Clear bits in MDCTL reg before setting new bits
davinci: gpio bugfixes
davinci: add EDMA driver
davinci: timers: use clk_get_rate()
[ARM] pxa/littleton: add missing da9034 touchscreen support
[ARM] pxa/zylonite: configure GPIO18/19 correctly, used by 2 GPIO expanders
[ARM] pxa/zylonite: fix the issue of unused SDATA_IN_1 pin get AC97 not working
[ARM] pxa: make ads7846 on corgi and spitz to sync on HSYNC
[ARM] pxa: remove unused CPU_FREQ_PXA Kconfig symbol
...
This patch is a workaround for the 460075 Cortex-A8 (r2p0) erratum. It
configures the L2 cache auxiliary control register so that the Write
Allocate mode for the L2 cache is disabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds a workaround for the 458693 Cortex-A8 (r2p0)
erratum. It sets the corresponding bits in the auxiliary control
register so that the PLD instruction becomes a NOP.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds the workaround for the 430973 Cortex-A8 (r1p0..r1p2)
erratum. The BTAC/BTB is now flushed at every context switch.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements the recommended workaround for erratum 411920
(ARM1136, ARM1156, ARM1176).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This hooks the U300 support into Kbuild and makes a small hook
in mmu.c for supporting an odd memory alignment with shared memory
on these systems.
This is rebased to RMK's git HEAD. This patch tries to add the
Kconfig option in alphabetical order by option text and the Makefile
entry after the config symbol.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arm is placing some code in the .text.init section, but it does not
reference that section in its linker scripts.
This change moves this code from the .text.init section to the
.init.text section, which is presumably where it belongs.
Signed-off-by: Tim Abbott <tabbott@mit.edu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Because of an ARM1136 erratum (326103), the current v6_early_abort
function needs to set the correct FSR[11] value which determines whether
the data abort was caused by a read or write. For legacy reasons (bit 10
not handled by software), bit 10 was also cleared masking out imprecise
aborts on ARMv6 CPUs. This patch removes the clearing of bit 10 of FSR.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
I get random oopses on my Kirkwood board at startup when L2 cache is
enabled. FYI I'm using Marvell uboot version 3.4.16
Each boot produces the same oops, but anything that changes the kernel
size (even only changing initramfs) makes the oops different.
I noticed that nothing invalidates the L2 cache before enabling it,
doing so fixes my problem.
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Compiling recent 2.6.29-rc kernels for ARM gives me the following warning:
arch/arm/mm/mmu.c: In function 'sanity_check_meminfo':
arch/arm/mm/mmu.c:697: warning: comparison between pointer and integer
This is because commit 3fd9825c42
"[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()"
in 2.6.29-rc5-git4 added a comparison of a pointer with PAGE_OFFSET,
which is an integer.
Fixed by casting PAGE_OFFSET to void *.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Adds support for the Faraday FA526 core. This core is used at least by:
Cortina Systems Gemini and Centroid family
Cavium Networks ECONA family
Grain Media GM8120
Pixelplus ImageARM
Prolific PL-1029
Faraday IP evaluation boards
v2:
- move TLB_BTB to separate patch
- update copyrights
Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
"""The Marvell® PXA168 processor is the first in a family of application
processors targeted at mass market opportunities in computing and consumer
devices. It balances high computing and multimedia performance with low
power consumption to support extended battery life, and includes a wealth
of integrated peripherals to reduce overall BOM cost .... """
See http://www.marvell.com/featured/pxa168.jsp for more information.
1. The Marvell Mohawk core is a hybrid of xscale3 and its own ARM core;
there are many enhancements like instructions for flushing the
whole D-cache, and so on
2. Clock reuses Russell's common clkdev, and added the basic support
for UART1/2.
3. Devices are a bit different from the 'mach-pxa' way: the platform
devices are now dynamically allocated only when necessary (i.e.
when pxa_register_device() is called). Descriptions for each device
are stored in an array of 'struct pxa_device_desc'. Note that:
a. this array of device descriptions is marked with __initdata and
can be freed once the system is fully up
b. this means board code has to add all needed devices early in
its init function
c. platform specific data can now be marked as __initdata since
they are allocated and copied by platform_device_add_data()
4. only the basic UART1/2/3 are added, more devices will come later.
Signed-off-by: Jason Chagas <chagas@marvell.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
VIPT aliasing caches have issues of their own which are not yet handled.
Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
kmap/fixmap stuff doesn't take account of cache colouring, etc.
If/when those issues are handled then this could be reverted.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
On xsc3, L2 cache ops are possible only on virtual addresses. The code
is rearranged so as to have a linear progression requiring the least
amount of pte setups in the highmem case. To protect the virtual
mapping so created, interrupts must be disabled, currently for up to a
page worth of address range.
The interrupt disabling is done in a way that minimizes the overhead
within the inner loop. The alternative would consist of separate code
for the highmem and non-highmem compilations, which is less preferable.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The choice is between looping over the physical range and performing
single cache line operations, or to map highmem pages somewhere, as
cache range ops are possible only on virtual addresses.
Because L2 range ops are much faster, we go with the latter by factoring
out the physical-to-virtual address conversion and using a fixmap entry
for it in the HIGHMEM case.
Possible future optimizations to avoid the pte setup cost:
- do the pte setup for highmem pages only
- determine a threshold for doing a line-by-line processing on physical
addresses when the range is small
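A hedged sketch of the factored conversion; l2_map_phys is a
hypothetical name, and kmap_atomic_pfn() stands in for the fixmap
handling described above:

static inline unsigned long l2_map_phys(unsigned long paddr)
{
#ifdef CONFIG_HIGHMEM
	/* map the page holding paddr through a fixmap slot */
	return (unsigned long)kmap_atomic_pfn(paddr >> PAGE_SHIFT) +
		(paddr & ~PAGE_MASK);
#else
	/* lowmem: the direct mapping already provides a virtual address */
	return __phys_to_virt(paddr);
#endif
}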
Signed-off-by: Nicolas Pitre <nico@marvell.com>
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address. Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped then there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel. This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction. Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).
Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.
The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
This is the minimum fixmap interface expected to be implemented by
architectures supporting highmem.
We have a second level page table already allocated and covering
0xfff00000-0xffffffff because the exception vector page is located
at 0xffff0000, and various cache tricks already use some entries above
0xffff0000. Therefore the PTEs covering 0xfff00000-0xfffeffff are free
to be used.
However the XScale cache flushing code already uses virtual addresses
between 0xfffe0000 and 0xfffeffff.
So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.
The Documentation/arm/memory.txt information is updated accordingly,
including the information about the actual top of DMA memory mapping
region which didn't match the code.
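A hedged sketch of the reserved range expressed as fixmap constants;
the macro names follow the generic fixmap convention and are
assumptions here:

#define FIXADDR_START		0xfff00000UL	/* bottom of the free PTE range */
#define FIXADDR_TOP		0xfffe0000UL	/* XScale flush area begins here */
#define FIXADDR_SIZE		(FIXADDR_TOP - FIXADDR_START)

#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)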
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The current use of these macros works well when the conversion is
entirely linear. In this case, we can be assured that the following
holds true:
	__va(p + s) - s = __va(p)
However, this is not always the case, especially when there is a
non-linear conversion (eg, when there is a 3.5GB hole in memory.)
In this case, if 's' is the size of the region (eg, PAGE_SIZE) and
'p' is the final page, the above is most definitely not true.
So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses. This patch tweaks the code
to achieve this.
Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is a fix for the following crash observed in 2.6.29-rc3:
http://lkml.org/lkml/2009/1/29/150
On ARM it doesn't make sense to trace a naked function because then
mcount is called without stack and frame pointer being set up and there
is no chance to restore the lr register to the value before mcount was
called.
Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: Steven Rostedt <rostedt@home.goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds a Non-cacheable Normal ARM executable memory type,
MT_MEMORY_NONCACHED.
On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
VDD2 voltage domain, and its clock frequency must change along with
voltage. The SDRC clock change code cannot run from SDRAM itself,
since SDRAM accesses are paused during the clock change. So the
current implementation of the DVFS code executes from OMAP on-chip
SRAM, aka "OCM RAM."
If the OCM RAM pages are marked as Cacheable, the ARM cache controller
will attempt to flush dirty cache lines to the SDRC, so it can fill
those lines with OCM RAM instruction code. The problem is that the
SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
subsystem to hang.
TI's original solution to this problem was to mark the OCM RAM
sections as Strongly Ordered memory, thus preventing caching. This is
overkill: since the memory is marked as non-bufferable, OCM RAM writes
become needlessly slow. The idea of "Strongly Ordered SRAM" is also
conceptually disturbing. Previous LAKML list discussion is here:
http://www.spinics.net/lists/arm-kernel/msg54312.html
This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
patch.
Cc: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The target of the strex instruction used to clear the exclusive monitor
is currently the top of the stack. If the store succeeds, this
corrupts r0 in pt_regs. Use the next stack location instead of
the current one to prevent any chance of corrupting an in-use
address.
Signed-off-by: Seth Forshee <seth.forshee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In the non highmem case, if two memory banks of 1GB each are provided,
the second bank would evade suppression since its virtual base would
be 0. Fix this by disallowing any memory bank whose virtual base
address is found to be lower than PAGE_OFFSET.
Reported-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When there are multiple L1-aliasing userland mappings of the same physical
page, we currently remap each of them uncached, to prevent VIVT cache
aliasing issues. (E.g. writes to one of the mappings not being immediately
visible via another mapping.) However, when we do this remapping, there
could still be stale data in the L2 cache, and an uncached mapping might
bypass L2 and go straight to RAM. This would cause reads from such
mappings to see old data (until the dirty L2 line is eventually evicted.)
This issue is solved by forcing a L2 cache flush whenever the shared page
is made L1 uncacheable.
Ideally, we would make L1 uncacheable and L2 cacheable as L2 is PIPT. But
Feroceon does not support that combination, and the TEX=5 C=0 B=0 encoding
for XSc3 doesn't appear to work in practice.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tomi Valkeinen reports:
Running with latest linux-omap kernel on OMAP3 SDP board, I have
problem with iounmap(). It looks like iounmap() does not properly
free large areas. Below is a test which fails for me in 6-7 loops.
	for (i = 0; i < 200; ++i) {
		vaddr = ioremap(paddr, size);
		if (!vaddr) {
			printk("couldn't ioremap\n");
			break;
		}
		iounmap(vaddr);
	}
The changes to vmalloc.c weren't reflected in the ARM ioremap
implementation. Turns out the fix is rather simple.
Tested-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Tested-by: Matt Gerassimoff <mgeras@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Without this, the pxa2xx-flash driver cannot be used as a module.
Reported-by: Chris Lawrence <chrisdl@netspace.net.au>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Rename ARM's struct vm_region so that I can introduce my own global version
for NOMMU. It's feasible that the ARM version may wish to use my global one
instead.
The NOMMU vm_region struct defines areas of the physical memory map that are
under mmap. This may include chunks of RAM or regions of memory mapped
devices, such as flash. It is also used to retain copies of file content so
that shareable private memory mappings of files can be made. As such, it may
be compatible with what is described in the banner comment for ARM's vm_region
struct.
Signed-off-by: David Howells <dhowells@redhat.com>
As noted by Akinobu Mita in patch b1fceac2b9,
alloc_bootmem and related functions never return NULL and always return a
zeroed region of memory. Thus a NULL test or memset after calls to these
functions is unnecessary.
This was fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression E;
statement S;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)
@@
expression E,E1;
@@
E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARMv6 and later CPUs, it is possible for userspace processes to
get stuck on a misaligned load or store due to the "ignore fault"
setting; unlike previous CPUs, retrying the instruction without
the 'A' bit set does not always cause the load to succeed.
We have no real option but to default to fixing up alignment faults
on these CPUs, and having the CPU fix up those misaligned accesses
which it can.
Reported-by: Wolfgang Grandegger <wg@grandegger.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
PXA935 has changed its implementor ID from Intel to Marvell, this
patch modifies arch/arm/boot/compressed/head.S and proc-xsc3.S to
support a smooth bootup.
Signed-off-by: Eric Miao <eric.miao@marvell.com>
This patch adds the necessary definitions and Kconfig entries to enable
Cortex-A9 (ARMv7 SMP) tiles on the RealView/EB board.
Signed-off-by: Jon Callan <Jon.Callan@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Update to use the asm/sections.h header rather than declaring these
symbols ourselves. Change __data_start to _data to conform with the
naming found within asm/sections.h.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The restriction on !CONFIG_HIGHMEM is unneeded since page tables are
currently never allocated with highmem pages, and it actually disables
the PTE dump whenever highmem is configured. Let's have a dynamic test
to better describe the current limitation instead.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 8d5796d2ec allows for the vmalloc
area to be resized from the kernel cmdline. Make sure it cannot overlap
with RAM entirely.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make free_area() arguments pfn based, and return number of freed pages.
This will simplify highmem initialization later.
Also, codepages, datapages and initpages are actually codesize, datasize
and initsize.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Doing so will greatly simplify the bootmem initialization code as each
bank is therefore entirely lowmem or highmem with no crossing between
those zones.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently there are two instances of struct meminfo: one in
kernel/setup.c marked __initdata, and another in mm/init.c with
permanent storage. Let's keep only the latter, directly populating
the permanent version from arm_add_memory().
Also move common validation tests between the MMU and non-MMU cases
into arm_add_memory() to remove some duplication. Protection against
overflowing the membank array is also moved in there in order to cover
the kernel cmdline parsing path as well.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In all cases the kaddr is assigned an input register even though it is
modified in the assembly code. Let's assign a new variable to the
modified value and mark those inline asm with volatile otherwise they
get optimized away because the output variable is otherwise not used.
Also fix a few conversion errors in copypage-feroceon.c and
copypage-v4mc.c.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For similar reasons as copy_user_page(), we want to avoid the
additional kmap_atomic if it's unnecessary.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We used to override the copy_user_page() function. However, this
is not only inefficient, it also causes additional complexity for
highmem support, since we convert from a struct page to a kernel
direct mapped address and back to a struct page again.
Moreover, with highmem support, we end up pointlessly setting up
kmap entries for pages which we're going to remap. So, push the
kmapping down into the copypage implementation files where it's
required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>