Commit Graph

107 Commits

Russell King
1fb333489f Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next 2014-06-05 12:35:52 +01:00
Rob Herring
1dc5455f6f ARM: 8028/1: move __fixup_smp out of init section
With large kernel builds such as allyesconfig, branch targets can
exceed the maximum relative branch offset, leaving the init section
too far away to branch to directly. This causes veneers to be added by
the linker, but veneers don't work before the MMU is enabled. Fix this
by moving __fixup_smp to the .head.text section, as it is not very big.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-05-25 23:46:51 +01:00
Victor Kamensky
e3892e9160 ARM: 8033/1: fix big endian __pv_phys_pfn_offset size related issue
Commit e26a9e00af ("ARM: Better virt_to_page() handling") replaced
__pv_phys_offset with __pv_phys_pfn_offset. Note that the size of
__pv_phys_offset was a quad (64 bits), while __pv_phys_pfn_offset is a
word (32 bits). The instruction that updated __pv_phys_offset, whose
address is in r6, had to update the low word of __pv_phys_offset, so
it used the #LOW_OFFSET macro as the store offset. Now that
__pv_phys_pfn_offset is a word, no difference between little endian
and big endian should exist - i.e. no offset should be used when
__pv_phys_pfn_offset is stored.

Note that for a little endian image the proposed change is a no-op,
since in the little endian case #LOW_OFFSET is defined as 0 anyway.
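
A minimal C illustration of the layout point (illustrative only, not
kernel code): the word holding the low 32 bits of a 64-bit variable
sits at offset 0 on little endian but offset 4 on big endian, which is
why the quad-sized __pv_phys_offset needed #LOW_OFFSET, while a
word-sized variable has no such split.

    #include <stdint.h>

    union quad {
        uint64_t q;
        uint32_t w[2];
    };

    /* Byte offset of the low 32 bits within a 64-bit slot:
     * 0 on a little-endian build, 4 on a big-endian build. */
    int low_word_offset(void)
    {
        union quad v = { .q = 0x1122334455667788ULL };
        return (v.w[0] == 0x55667788u) ? 0 : 4;
    }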

Reported-by: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-22 22:24:00 +01:00
Russell King
95959e6a06 Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-next 2014-04-04 00:33:32 +01:00
Russell King
e26a9e00af ARM: Better virt_to_page() handling
virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled.  This is because we end up with this calculation:

  page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly.  The asm virt_to_phys() is equivalent to this operation:

  addr - PAGE_OFFSET + __pv_phys_offset

and we can see that because this is assembly, the compiler has no chance
to optimise some of that away.  This should reduce down to:

  page = &mem_map[(addr - PAGE_OFFSET) >> 12]

for the common cases.  Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.
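
A sketch of the resulting macros, along the lines of what the commit
adds (the exact macro spellings and a PAGE_SHIFT of 12 are assumptions
here):

    #define virt_to_pfn(kaddr) \
        ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
         PHYS_PFN_OFFSET)

    /* virt_to_page() can then reduce to plain C arithmetic that the
     * compiler is free to fold: &mem_map[virt_to_pfn(addr) - pfn_base] */
    #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))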

Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms.  This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having
to deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:

     a4c:       e3009000        movw    r9, #0
                        a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
     a50:       e3409000        movt    r9, #0          ; r9 = &__pv_phys_offset
                        a50: R_ARM_MOVT_ABS     __pv_phys_offset
     a54:       e3002000        movw    r2, #0
                        a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
     a58:       e3402000        movt    r2, #0          ; r2 = &__pv_phys_offset
                        a58: R_ARM_MOVT_ABS     __pv_phys_offset
     a5c:       e5999004        ldr     r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
     a60:       e3001000        movw    r1, #0
                        a60: R_ARM_MOVW_ABS_NC  mem_map
     a64:       e592c000        ldr     ip, [r2]        ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-03 22:46:34 +01:00
Thomas Petazzoni
b363457593 ARM: 7980/1: kernel: improve error message when LPAE config doesn't match CPU
Currently, when the kernel is configured with LPAE support, but the
CPU doesn't support it, the error message is fairly cryptic:

  Error: unrecognized/unsupported processor variant (0x561f5811).

This message is normally shown when there is an issue comparing
the processor ID (CP15 0, c0, c0) with the values/masks described in
proc-v7.S. However, the same message is displayed when LPAE support is
enabled in the kernel configuration, but not available in the CPU,
after looking at ID_MMFR0 (CP15 0, c0, c1, 4). Having the same error
message is highly misleading.

This commit improves this by showing a different error message when
this situation occurs:

  Error: Kernel with LPAE support, but CPU does not support LPAE.
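
For reference, a hedged C sketch of the kind of check involved:
ID_MMFR0[3:0] is the VMSA support field, and a value of 5 indicates
support for the long-descriptor (LPAE) translation table format.

    #include <stdint.h>

    static int cpu_supports_lpae(void)
    {
        uint32_t mmfr0;
        /* ID_MMFR0: CP15 0, c0, c1, 4 */
        __asm__("mrc p15, 0, %0, c0, c1, 4" : "=r"(mmfr0));
        return (mmfr0 & 0xf) >= 5;  /* VMSA support field */
    }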

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-21 11:56:32 +00:00
Christopher Covington
2ab4e8c06d ARM: 7947/1: Make pgtbl macro more robust
The pgtbl macro couldn't handle the specific
(TEXT_OFFSET - PG_DIR_SIZE) value that the combination of
MSM platforms and LPAE created:

head.S:163: Error: invalid constant (203000) after fixup

Regardless of whether this combination of configuration options
will work on currently supported platforms at run time, make it
at least assemble properly.
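
The underlying limitation: an ARM data-processing immediate must be an
8-bit value rotated right by an even amount, and 0x203000 cannot be
encoded that way. A small C checker, for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    static bool arm_valid_imm(uint32_t val)
    {
        if (val <= 0xff)
            return true;
        for (int rot = 2; rot < 32; rot += 2) {
            /* rotating left by 'rot' undoes a ROR by 'rot' */
            uint32_t v = (val << rot) | (val >> (32 - rot));
            if (v <= 0xff)
                return true;
        }
        return false;
    }
    /* arm_valid_imm(0x203000) == false, hence the assembler error */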

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-01-28 14:34:03 +00:00
Russell King
b713aa0b15 ARM: fix asm/memory.h build error
Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
not defined:

In file included from arch/arm/include/asm/page.h:163:0,
                 from include/linux/mm_types.h:16,
                 from include/linux/sched.h:24,
                 from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c06c ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-13 20:25:30 +00:00
Victor Kamensky
d9a790df8e ARM: 7883/1: fix mov to mvn conversion in case of 64 bit phys_addr_t and BE
Fix the patching code to convert a mov instruction into an mvn
instruction in the case of CONFIG_ARCH_PHYS_ADDR_T_64BIT and
CONFIG_ARM_PATCH_PHYS_VIRT.

In the BE case, store the proper bits into r0 so that the byte-swapped
instruction can be modified correctly.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: R Sricharan <r.sricharan@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-11-14 11:13:09 +00:00
Victor Kamensky
10593b2e49 ARM: 7881/1: __fixup_smp read of SCU config should do byteswap in BE case
Commit "bc41b8724f24b9a27d1dcc6c974b8f686b38d554 ARM: 7846/1:
Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices"
added read of SCU config register into __fixup_smp function.
Such read should be followed by byteswap, if kernel runs in
BE mode.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-11-14 11:03:01 +00:00
Russell King
df762eccba Merge branch 'devel-stable' into for-next
Conflicts:
	arch/arm/include/asm/atomic.h
	arch/arm/include/asm/hardirq.h
	arch/arm/kernel/smp.c
2013-11-12 10:58:59 +00:00
Russell King
2098990e7c Merge branch 'baserock/bjdooks/312-rc4/be/core-v3' of git://git.baserock.org/delta/linux into devel-stable
Conflicts:
	arch/arm/kernel/head.S

This series has been well tested and it would be great to get this
merged now.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-30 22:20:26 +00:00
Sricharan R
830fd4d6de ARM: 7870/1: head: Fix the missing underscore in __ARMEB__ macro and .align keyword
Commit f52bb722547f ("ARM: mm: Correct virt_to_phys patching for
64 bit physical addresses") introduced a __ARMEB__ macro usage in a
new place, but missed the second underscore. So correct it here.

Also, an explicit .align directive is needed for the label with a
.long data type to be aligned on a 4 byte boundary. Otherwise this
can cause problems for a thumb2 build. So add it here.

Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-29 10:58:52 +00:00
Ben Dooks
97bcb0fea5 ARM: set BE8 if LE in head code
If we are booting in LE and compiled for BE8, then add code to
set the state to BE8. Since the instruction stream is always LE,
we do not need to do anything special to the instructions.

Also ensure that the secondary processors are started in the same mode.

Note, we do add about 20 bytes to the kernel image, but it seems easier
to do this than adding another configuration to change.

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2013-10-19 20:46:33 +01:00
Ben Dooks
2f9bf9bedd ARM: fixup_pv_table bug when CPU_ENDIAN_BE8
The fixup_pv_table assumes that the instructions are in the same
endian configuration as the data, but when the CPU is running in
BE8 the instructions stay in little-endian format.

Make sure that if CONFIG_CPU_ENDIAN_BE8 is set, we do all the
alterations to the instructions taking into account that LDR/STR
will be swapping the data endianness.

Since the code is only modifying a byte, we avoid dual-swapping
the data, and just change the bits we clear and ORR in (in the
case where the code is not thumb2).

For thumb2, we add the necessary rev16 instructions to ensure that
the instructions are processed in the correct format, as it was
easier than re-writing the code to contain a mask and shift.
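
A hedged C sketch of the single-byte trick (hypothetical helper, not
the actual kernel code): in BE8, an LDR of the little-endian
instruction stream returns a byte-swapped word, so the immediate's low
byte appears in the top byte of the loaded register.

    #include <stdint.h>

    static uint32_t patch_imm8(uint32_t loaded, uint8_t imm)
    {
    #ifdef __ARMEB__
        /* BE8: instruction bits [7:0] land in register bits [31:24] */
        loaded &= ~0xff000000u;
        loaded |= (uint32_t)imm << 24;
    #else
        loaded &= ~0x000000ffu;
        loaded |= imm;
    #endif
        return loaded;  /* stored back with STR; no extra swap needed */
    }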

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2013-10-19 20:46:33 +01:00
Sricharan R
f52bb72254 ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses
The current phys_to_virt patching mechanism works only for 32 bit
physical addresses and this patch extends the idea for 64bit physical
addresses.

The 64bit v2p patching mechanism patches the higher 8 bits of the
physical address with a constant using a 'mov' instruction, and the
lower 32 bits are patched using 'add'. While this is correct, on those
platforms where the lowmem addressable physical memory spans the 4GB
boundary, a carry bit can be produced by the addition of the lower 32
bits. This has to be taken into account and added into the upper 32
bits. The patched __pv_offset and va are added in the lower 32 bits,
where __pv_offset can be in two's complement form when PA_START <
VA_START, and that can result in a false carry bit.

e.g
    1) PA = 0x80000000; VA = 0xC0000000
       __pv_offset = PA - VA = 0xC0000000 (2's complement)

    2) PA = 0x2 80000000; VA = 0xC0000000
       __pv_offset = PA - VA = 0x1 C0000000

So for (1), adding __pv_offset + VA should never result in a true
overflow. In order to differentiate a true carry, __pv_offset is
extended to 64 bits, and the upper 32 bits will hold 0xffffffff when
__pv_offset is in two's complement form. An 'mvn #0' is patched in
instead of 'mov' for the same reason. Since the mov, add and sub
instructions are to be patched with different constants inside the
same stub, the rotation field of the opcode is used to differentiate
between them.

So the above examples for the v2p translation become, for VA=0xC0000000:
    1) PA[63:32] = 0xffffffff
       PA[31:0] = VA + 0xC0000000 --> results in a carry
       PA[63:32] = PA[63:32] + carry

       PA[63:0] = 0x0 80000000

    2) PA[63:32] = 0x1
       PA[31:0] = VA + 0xC0000000 --> results in a carry
       PA[63:32] = PA[63:32] + carry

       PA[63:0] = 0x2 80000000

The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
part of the review of the first and second versions of the subject patch.

There is no corresponding change on the phys_to_virt() side, because
computations on the upper 32-bits would be discarded anyway.
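
The arithmetic above, restated as a hedged C sketch (names are
illustrative): the low 32-bit add produces a carry that is folded into
the patched high word, and the 0xffffffff extension makes a false
carry wrap the high word back to the correct value.

    #include <stdint.h>

    static uint64_t v2p_64(uint32_t va, uint64_t pv_offset)
    {
        uint32_t lo = va + (uint32_t)pv_offset;            /* patched 'add' */
        uint32_t carry = lo < va;                          /* carry out */
        uint32_t hi = (uint32_t)(pv_offset >> 32) + carry; /* 'mov'/'mvn' + adc */
        return ((uint64_t)hi << 32) | lo;
    }
    /* v2p_64(0xC0000000, 0xffffffffC0000000ULL) == 0x0 80000000 (case 1)
     * v2p_64(0xC0000000, 0x1C0000000ULL)        == 0x2 80000000 (case 2) */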

Cc: Russell King <linux@arm.linux.org.uk>

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
2013-10-10 20:25:06 -04:00
Santosh Shilimkar
bc41b8724f ARM: 7846/1: Update SMP_ON_UP code to detect A9MPCore with 1 CPU devices
The generic code is well equipped to differentiate between
SMP and UP configurations. However, there are some devices which
use the Cortex-A9 MPCore IP configured with a single CPU. To let
these SoCs co-exist in a CONFIG_SMP=y build by leveraging
the SMP_ON_UP support, we need to additionally check the
number of cores in the Cortex-A9 MPCore configuration. Without
such a check in place, the startup code tries to execute the
ALT_SMP() set of instructions, which leads to CPU faults.

The issue was spotted on TI's Aegis device and this patch
now makes the device work with omap2plus_defconfig, which
enables SMP by default. The change is kept limited to the
Cortex-A9 MPCore detection code only.

Note that if any future SoC *does* use 0x0 as the PERIPH_BASE, then
the SCU address check code needs to be #ifdef'd for the Aegis
platform.
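
A hedged sketch of the detection (register layout per the Cortex-A9
MPCore TRM; where scu_base comes from is an assumption):

    #include <stdint.h>

    /* SCU configuration register at offset 0x04: bits [1:0] hold the
     * number of CPUs minus one. scu_base would come from CP15 CBAR. */
    static unsigned int a9_num_cpus(volatile uint32_t *scu_base)
    {
        return (scu_base[1] & 3) + 1;
    }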

Acked-by: Sricharan R <r.sricharan@ti.com>
Signed-off-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-03 10:39:44 +01:00
Russell King
2449189bb7 ARM: Add .text annotations where required after __CPUINIT removal
Commit 8bd26e3a7 (arm: delete __cpuinit/__CPUINIT usage from all ARM
users) caused some code to leak into sections which are discarded
through the removal of __CPUINIT annotations.  Add appropriate .text
annotations to bring these back into the kernel text.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-08-01 14:41:40 +01:00
Paul Gortmaker
8bd26e3a7e arm: delete __cpuinit/__CPUINIT usage from all ARM users
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications.  For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out.  Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit  -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings.  In any case, they are temporary and harmless.

This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code.  It also had two ".previous"
section statements that were paired off against __CPUINIT
(aka .section ".cpuinit.text") that also get removed here.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-07-14 19:36:52 -04:00
Cyril Chemparathy
4756dcbfd3 ARM: LPAE: accomodate >32-bit addresses for page table base
This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems.  This allows for up to
38-bit physical addresses.
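
A rough illustration of the packing, assuming the page table base is
aligned to at least 1 << ARCH_PGD_SHIFT (the value 6 below is an
assumption) so its low bits carry no information:

    #include <stdint.h>

    #define ARCH_PGD_SHIFT 6  /* assumed; 32 + 6 = 38 addressable bits */

    /* The aligned base's low ARCH_PGD_SHIFT bits are zero, so they need
     * not be stored; r4 holds the address shifted down, freeing room
     * for physical address bits above 32. */
    static uint32_t pack_pgdir(uint64_t pgdir_phys)
    {
        return (uint32_t)(pgdir_phys >> ARCH_PGD_SHIFT);
    }

    static uint64_t unpack_pgdir(uint32_t r4)
    {
        return (uint64_t)r4 << ARCH_PGD_SHIFT;
    }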

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30 16:02:15 +01:00
Paul Bolle
4e1db26a0b ARM: 7690/1: mm: fix CONFIG_LPAE typos
CONFIG_LPAE doesn't exist: the correct option is CONFIG_ARM_LPAE, so fix
up the two typos under arch/arm/.

The fix to head.S is slightly scary, but this is just for setting up
an early io-mapping for the serial port when running on a big-endian,
LPAE system. Since these systems don't exist in the wild (at least, I
have no access to one outside of kvmtool, which doesn't provide a serial
port suitable for earlyprintk), then we can revisit the code later if it
causes any problems.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-03 16:45:51 +01:00
Will Deacon
d61947a164 ARM: 7657/1: head: fix swapper and idmap population with LPAE and big-endian
The LPAE page table format uses 64-bit descriptors, so we need to take
endianness into account when populating the swapper and idmap tables
during early initialisation.

This patch ensures that we store the two words making up each page table
entry in the correct order when running big-endian.

Cc: <stable@vger.kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-03 22:54:13 +00:00
Russell King
210b1847b3 Merge branch 'for-rmk/virt/hyp-boot/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into fixes 2013-01-19 15:27:30 +00:00
Nicolas Pitre
6f16f4998f ARM: 7628/1: head.S: map one extra section for the ATAG/DTB area
We currently use a temporary 1MB section aligned to a 1MB boundary for
mapping the provided device tree until the final page table is created.
However, if the device tree happens to cross that 1MB boundary, the end
of it remains unmapped and the kernel crashes when it attempts to access
it.  Given no restriction on the location of that DTB, it could end up
with only a few bytes mapped at the end of a section.

Solve this issue by mapping two consecutive sections.
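
In outline (a sketch with a hypothetical map_section() helper):

    #include <stdint.h>

    #define SECTION_SIZE 0x100000u  /* 1MB */

    static void map_dtb(uint32_t dtb_phys, void (*map_section)(uint32_t))
    {
        uint32_t base = dtb_phys & ~(SECTION_SIZE - 1);
        map_section(base);                 /* section holding the DTB start */
        map_section(base + SECTION_SIZE);  /* next one, in case it crosses */
    }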

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Sascha Hauer <s.hauer@pengutronix.de>
Tested-by: Tomasz Figa <t.figa@samsung.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-01-16 16:51:13 +00:00
Marc Zyngier
6e484be1cc ARM: virt: boot secondary CPUs through the right entry point
Secondary CPUs should use the __hyp_stub_install_secondary entry
point, so boot mode inconsistencies can be detected.

Cc: <stable@vger.kernel.org>
Acked-by: Dave Martin <dave.martin@linaro.org>
Reported-by: Ian Molton <ian.molton@collabora.co.uk>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-01-10 21:08:51 +00:00
Russell King
a0f0dd57f4 Merge branch 'fixes' into for-linus
Conflicts:
	arch/arm/kernel/smp.c
2012-10-11 10:55:04 +01:00
Dave Martin
80c59dafb1 ARM: virt: allow the kernel to be entered in HYP mode
This patch does two things:

  * Ensure that asynchronous aborts are masked at kernel entry.
    The bootloader should be masking these anyway, but this reduces
    the damage window just in case it doesn't.

  * Enter svc mode via exception return to ensure that CPU state is
    properly serialised.  This does not matter when switching from
    an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
parlance), but it potentially does matter when switching from
another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2012-09-19 08:32:50 +01:00
Rob Herring
91a9fec022 ARM: move debug macros to common location
Based on a suggestion by Russell King, create a common location for
debug macros and select the included debug macro file using a config
option.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
2012-09-14 09:22:00 -05:00
Nicolas Pitre
9fa16b7755 ARM: 7439/1: head.S: simplify initial page table mapping
Let's map the initial RAM up to the end of the kernel .bss instead of
the strict kernel image area.  This simplifies the code as the kernel
image only needs to be handled specially in the XIP case.  That covers
the legacy ATAG location as well.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-07-09 17:39:39 +01:00
Nicolas Pitre
f67860a76f ARM: 7363/1: DEBUG_LL: limit early mapping to the minimum
There is just no point mapping up to 512MB for a serial port.
A single 1MB entry is more than sufficient for all users.
This will create less interference for the following debugging patch.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-05-04 16:56:46 +01:00
Nicolas Pitre
9b5a146a43 ARM: 7338/1: add support for early console output via semihosting
This is a very simple method for code running in an emulator, or under
the supervision of a debugger, to use I/O facilities on the controlling
host.

Tested with OpenOCD, and ARM's Fast Models.

Details on semihosting can be found in chapter 8 of
DUI0203I_rvct_developer_guide.pdf from ARM Ltd.
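
A hedged C sketch of such a call: in ARM state, semihosting traps via
SVC 0x123456 with the operation code in r0 and a parameter in r1;
SYS_WRITE0 (0x04) prints a NUL-terminated string on the host.

    static int semihost(int op, const void *arg)
    {
        register int r0 __asm__("r0") = op;
        register const void *r1 __asm__("r1") = arg;
        __asm__ volatile("svc 0x123456" : "+r"(r0) : "r"(r1) : "memory");
        return r0;
    }

    /* e.g. semihost(0x04, "hello from early boot\n");  SYS_WRITE0 */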

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-03-24 09:38:55 +00:00
Russell King
195864cf3d ARM: move CP15 definitions to separate header file
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-03-24 09:38:51 +00:00
Catalin Marinas
294064f589 ARM: 7275/1: LPAE: Check the CPU support for the long descriptor format
This patch adds a check for the presence of the LPAE feature during the
CPU initialisation. If not present, it reports an error when
CONFIG_DEBUG_LL is enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-13 08:56:41 +00:00
Russell King
6ae25a5b9d Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux into devel-stable
Conflicts:
	arch/arm/mm/ioremap.c
2011-12-08 18:02:04 +00:00
Catalin Marinas
1b6ba46b7e ARM: LPAE: MMU setup for the 3-level page table format
This patch adds the MMU initialisation for the LPAE page table format.
The swapper_pg_dir size with LPAE is 5 rather than 4 pages. A new
proc-v7-3level.S file contains the TTB initialisation, context switch
and PTE setting code with the LPAE. The TTBRx split is based on the
PAGE_OFFSET with TTBR1 used for the kernel mappings. The 36-bit mappings
(supersections) and a few other memory types in mmu.c are conditionally
compiled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2011-12-08 10:30:39 +00:00
Will Deacon
d675d0bc47 ARM: LPAE: add ISBs around MMU enabling code
Before we enable the MMU, we must ensure that the TTBR registers contain
sane values. After the MMU has been enabled, we jump to the *virtual*
address of the following function, so we also need to ensure that the
SCTLR write has taken effect.

This patch adds ISB instructions around the SCTLR write to ensure the
visibility of the above.
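
The ordering being enforced, as a hedged inline-assembly sketch:

    #include <stdint.h>

    static inline void enable_mmu(uint32_t sctlr)
    {
        __asm__ volatile("isb");  /* TTBR writes visible before enabling */
        __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr));
        __asm__ volatile("isb");  /* SCTLR write effective before the jump */
    }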

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2011-12-08 10:30:38 +00:00
Russell King
3ee0fc5ca1 Merge branch 'kexec/idmap' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable 2011-12-06 20:27:54 +00:00
Will Deacon
4e8ee7de22 ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting
The ARM SMP booting code allocates a temporary set of page tables
containing an identity mapping of the kernel image and provides this
to secondary CPUs for initial booting.

In reality, we only need to include the __turn_mmu_on function in the
identity mapping since the rest of the kernel is executing from virtual
addresses after this point.

This patch adds __turn_mmu_on to the .idmap.text section, allowing the
SMP booting code to use the idmap_pgd directly and not have to populate
its own set of page tables.

As a result of this patch, we can make the identity_mapping_add function
static (since it is only used within mm/idmap.c) and also remove the
identity_mapping_del function. The identity map population is moved to
an early initcall so that it is set up in time for secondary CPU bringup.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-12-06 14:04:15 +00:00
Will Deacon
72662e0108 ARM: head.S: only include __turn_mmu_on in the initial identity mapping
__create_page_tables identity maps the region of memory from
__enable_mmu to the end of __turn_mmu_on.

In preparation for including __turn_mmu_on in the .idmap.text section,
this patch modifies the identity mapping so that it only includes the
__turn_mmu_on code.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-12-06 14:04:15 +00:00
Catalin Marinas
8428e84d42 ARM: 7150/1: Allow kernel unaligned accesses on ARMv6+ processors
Recent gcc versions generate unaligned accesses by default on ARMv6 and
later processors. This patch ensures that the SCTLR.A bit is always
cleared on such processors to avoid the kernel trapping before
alignment_init() is called.
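
A sketch of the effect (SCTLR.A is bit 1; illustrative C rather than
the actual head.S code):

    #include <stdint.h>

    static inline void allow_unaligned(void)
    {
        uint32_t sctlr;
        __asm__("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
        sctlr &= ~(1u << 1);  /* clear A: no alignment faults on v6+ */
        __asm__ volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr));
    }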

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: John Linn <John.Linn@xilinx.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-11-08 18:25:04 +00:00
Linus Torvalds
1fdb24e969 Merge branch 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm
* 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (178 commits)
  ARM: 7139/1: fix compilation with CONFIG_ARM_ATAG_DTB_COMPAT and large TEXT_OFFSET
  ARM: gic, local timers: use the request_percpu_irq() interface
  ARM: gic: consolidate PPI handling
  ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H
  ARM: mach-s5p64x0: remove mach/memory.h
  ARM: mach-s3c64xx: remove mach/memory.h
  ARM: plat-mxc: remove mach/memory.h
  ARM: mach-prima2: remove mach/memory.h
  ARM: mach-zynq: remove mach/memory.h
  ARM: mach-bcmring: remove mach/memory.h
  ARM: mach-davinci: remove mach/memory.h
  ARM: mach-pxa: remove mach/memory.h
  ARM: mach-ixp4xx: remove mach/memory.h
  ARM: mach-h720x: remove mach/memory.h
  ARM: mach-vt8500: remove mach/memory.h
  ARM: mach-s5pc100: remove mach/memory.h
  ARM: mach-tegra: remove mach/memory.h
  ARM: plat-tcc: remove mach/memory.h
  ARM: mach-mmp: remove mach/memory.h
  ARM: mach-cns3xxx: remove mach/memory.h
  ...

Fix up mostly pretty trivial conflicts in:
 - arch/arm/Kconfig
 - arch/arm/include/asm/localtimer.h
 - arch/arm/kernel/Makefile
 - arch/arm/mach-shmobile/board-ap4evb.c
 - arch/arm/mach-u300/core.c
 - arch/arm/mm/dma-mapping.c
 - arch/arm/mm/proc-v7.S
 - arch/arm/plat-omap/Kconfig
largely due to some CONFIG option renaming (ie CONFIG_PM_SLEEP ->
CONFIG_ARM_CPU_SUSPEND for the arm-specific suspend code etc) and
addition of NEED_MACH_MEMORY_H next to HAVE_IDE.
2011-10-28 12:02:27 -07:00
Russell King
06afb1a087 Merge branches 'arnd-randcfg-fixes', 'debug', 'io' (early part), 'l2x0', 'p2v', 'pgt' (early part) and 'smp' into for-linus 2011-10-25 08:19:29 +01:00
Nicolas Pitre
1b9f95f8ad ARM: prepare for removal of a bunch of <mach/memory.h> files
When the CONFIG_NO_MACH_MEMORY_H symbol is selected by a particular
machine class, the machine specific memory.h include file is no longer
used and can be removed.  In that case the equivalent information can
be obtained dynamically at runtime by enabling CONFIG_ARM_PATCH_PHYS_VIRT
or by specifying the physical memory address at kernel configuration time.

If/when all instances of mach/memory.h are removed then this symbol could
be removed.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2011-09-26 10:11:58 -04:00
Nicolas Pitre
639da5ee37 ARM: add an extra temp register to the low level debugging addruart macro
Some platforms (like OMAP not to name it) are doing rather complicated
hacks just to determine the base UART address to use.  Let's give their
addruart macro some slack by providing an extra work register which will
allow for much needed cleanups.

This is basically a no-op as this commit is only adding the extra argument
to the macro but no one is using it yet.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Reviewed-by: Kevin Hilman <khilman@ti.com>
2011-09-26 10:11:25 -04:00
Catalin Marinas
e73fc88e19 ARM: 7059/1: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_*
PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have
the same value (21). This patch converts the PGDIR_* uses in the kernel
to the PMD_* equivalent so that LPAE builds can reuse the same code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-08-23 15:30:33 +01:00
Nicolas Pitre
daece59689 ARM: 7013/1: P2V: Remove ARM_PATCH_PHYS_VIRT_16BIT
This code can be removed now that MSM targets no longer need the 16-bit
offsets for P2V.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-08-13 11:26:40 +01:00
Dave Martin
540b573875 ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state
Currently, the documented kernel entry requirements are not
explicit about whether the kernel should be entered in ARM or
Thumb, leading to an ambiguity about how to enter Thumb-2
kernels.  As a result, the kernel is reliant on the zImage
decompressor to enter the kernel proper in the correct instruction
set state.

This patch changes the boot entry protocol for head.S and Image to
be the same as for zImage: in all cases, the kernel is now entered
in ARM.

Documentation/arm/Booting is updated to reflect this new policy.

A different rule will be needed for Cortex-M class CPUs as and when
support for those lands in mainline, since these CPUs don't support
the ARM instruction set at all: a note is added to the effect that
the kernel must be entered in Thumb on such systems.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-19 12:00:53 +01:00
Russell King
239df0fd5e Merge branches 'devel', 'devel-stable' and 'fixes' into for-linus 2011-05-27 22:59:57 +01:00
Catalin Marinas
d427958a46 ARM: 6942/1: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:32 +01:00
Grant Likely
4c2896e88d arm/dt: Make __vet_atags also accept a dtb image
The dtb is passed to the kernel via register r2, which is the same
method that is used to pass an atags pointer.  This patch modifies
__vet_atags to not clear r2 when it encounters a dtb image.
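
In outline, a hedged C rendering of the vetting logic (a flattened
device tree begins with the big-endian magic 0xd00dfeed, while an ATAG
list has ATAG_CORE as its first tag; details here are assumptions):

    #include <stdint.h>

    #define FDT_MAGIC 0xd00dfeedu
    #define ATAG_CORE 0x54410001u

    static int r2_is_valid(const uint32_t *r2)
    {
        if (__builtin_bswap32(r2[0]) == FDT_MAGIC)  /* assuming LE CPU */
            return 1;                               /* dtb image: keep r2 */
        return r2[1] == ATAG_CORE;                  /* tag word at offset 4 */
    }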

v2: fixed bugs pointed out by Nicolas Pitre

Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2011-05-11 15:12:32 +02:00