Commit Graph

93 Commits

Author SHA1 Message Date
Arnd Bergmann
91c617d7a3 ARM: 8536/1: mm: hide __start_rodata_section_aligned for non-debug builds
The __start_rodata_section_aligned symbol is only referenced by the
DEBUG_RODATA code, which is only used when the MMU is enabled,
but its definition fails on !MMU builds:

arch/arm/kernel/vmlinux.lds:702: undefined symbol `SECTION_SHIFT' referenced in expression

This hides the symbol whenever DEBUG_RODATA is disabled.
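
The shape of the fix, sketched (guard placement and the ALIGN expression
are illustrative rather than the verbatim patch; ALIGN(exp, align) is
standard ld syntax):

  #ifdef CONFIG_DEBUG_RODATA
  /* only the DEBUG_RODATA code references this, and SECTION_SHIFT
   * is not defined for !MMU builds */
  __start_rodata_section_aligned = ALIGN(__start_rodata, 1 << SECTION_SHIFT);
  #endif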

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 64ac2e74f0 ("ARM: 8502/1: mm: mark section-aligned portion of rodata NX")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-22 11:39:43 +00:00
Kees Cook
64ac2e74f0 ARM: 8502/1: mm: mark section-aligned portion of rodata NX
When rodata is large enough that it crosses a section boundary after the
kernel text, mark the rest NX. This is as close to full NX of rodata as
we can get without splitting page tables or doing section alignment via
CONFIG_DEBUG_ALIGN_RODATA.

When the config is:

 CONFIG_DEBUG_RODATA=y
 # CONFIG_DEBUG_ALIGN_RODATA is not set

Before:

---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80a00000           9M     ro x  SHD
0x80a00000-0xa0000000         502M     RW NX SHD

After:

---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80700000           6M     ro x  SHD
0x80700000-0x80a00000           3M     ro NX SHD
0x80a00000-0xa0000000         502M     RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-11 15:44:10 +00:00
Ard Biesheuvel
31b96cae5c ARM: 8515/2: move .vectors and .stubs sections back into the kernel VMA
Commit b9b32bf70f ("ARM: use linker magic for vectors and vector stubs")
updated the linker script to emit the .vectors and .stubs sections into a
VMA range that is zero based and disjoint from the normal static kernel
region. The reason was that this way, the sections can be placed
exactly 4 KB apart, while the payload of the .vectors section is only 32
bytes.

Since the symbols that are part of the .stubs section are emitted into the
kallsyms table, they appear with zero based addresses as well, e.g.,

  00001004 t vector_rst
  00001020 t vector_irq
  000010a0 t vector_dabt
  00001120 t vector_pabt
  000011a0 t vector_und
  00001220 t vector_addrexcptn
  00001240 t vector_fiq
  00001240 T vector_fiq_offset

As this confuses perf when it accesses the kallsyms tables, commit
7122c3e915 ("scripts/link-vmlinux.sh: only filter kernel symbols for
arm") implemented a somewhat ugly special case for ARM, where the value
of CONFIG_PAGE_OFFSET is passed to scripts/kallsyms, and symbols whose
addresses are below it are filtered out. Note that this special case only
applies to CONFIG_XIP_KERNEL=n, not because the issue the patch addresses
exists only in that case, but because finding a limit below which to apply
the filtering is not entirely straightforward.

Since the .vectors and .stubs sections contain position independent code
that is never executed in place, we can emit it at its most likely runtime
VMA (for more recent CPUs), which is 0xffff0000 for the vector table and
0xffff1000 for the stubs. Not only does this fix the perf issue with
kallsyms, allowing us to drop the special case in scripts/kallsyms
entirely, it also gives debuggers a more realistic view of the address
space, and setting breakpoints or single stepping through code in the
vector table or the stubs is more likely to work as expected on CPUs that
use a high vector address. E.g.,

  00001240 A vector_fiq_offset
  ...
  c0c35000 T __init_begin
  c0c35000 T __vectors_start
  c0c35020 T __stubs_start
  c0c35020 T __vectors_end
  c0c352e0 T _sinittext
  c0c352e0 T __stubs_end
  ...
  ffff1004 t vector_rst
  ffff1020 t vector_irq
  ffff10a0 t vector_dabt
  ffff1120 t vector_pabt
  ffff11a0 t vector_und
  ffff1220 t vector_addrexcptn
  ffff1240 T vector_fiq

(Note that vector_fiq_offset is now an absolute symbol, which kallsyms
already ignores by default)

The LMA footprint is identical with or without this change, only the VMAs
are different:

  Before:
  Idx Name          Size      VMA       LMA       File off  Algn
   ...
   14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   15 .vectors      00000020  00000000  c0c35000  00a40000  2**1
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   16 .stubs        000002c0  00001000  c0c35020  00a41000  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   ...

  After:
  Idx Name          Size      VMA       LMA       File off  Algn
   ...
   14 .notes        00000024  c0c34020  c0c34020  00a34020  2**2
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   15 .vectors      00000020  ffff0000  c0c35000  00a40000  2**1
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   16 .stubs        000002c0  ffff1000  c0c35020  00a41000  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   17 .init.text    0006b1b8  c0c352e0  c0c352e0  00a452e0  2**5
                    CONTENTS, ALLOC, LOAD, READONLY, CODE
   ...
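
A sketch of the technique: each section gets a fixed VMA while AT()
keeps its LMA inside the kernel image (simplified from the actual
script):

  __vectors_start = .;
  .vectors 0xffff0000 : AT(__vectors_start) {
      *(.vectors)
  }
  . = __vectors_start + SIZEOF(.vectors);
  __vectors_end = .;

  __stubs_start = .;
  .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) {
      *(.stubs)
  }
  . = __stubs_start + SIZEOF(.stubs);
  __stubs_end = .;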

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-11 15:33:39 +00:00
Chris Brandt
538bf46948 ARM: 8513/1: xip: Move XIP linking to a separate file
When building an XIP kernel, the linker script needs to be quite different
from a conventional kernel's script. Over time, it's been difficult to
maintain both XIP and non-XIP layouts in one linker script. Therefore,
this patch separates the two procedures into two completely different
files.

The new linker script is essentially a straight copy of the current script
with all the non-CONFIG_XIP_KERNEL portions removed.

Additionally, all CONFIG_XIP_KERNEL portions have been removed from the
existing linker script...never to return again.

It should be noted that this does not fix any current XIP issues, but
rather is the first move in fixing them properly with subsequent patches.

Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-11 15:33:39 +00:00
Kees Cook
25362dc496 ARM: 8501/1: mm: flip priority of CONFIG_DEBUG_RODATA
The use of CONFIG_DEBUG_RODATA is generally seen as an essential part of
kernel self-protection:
http://www.openwall.com/lists/kernel-hardening/2015/11/30/13
Additionally, its name has grown to mean things beyond just rodata. To
get ARM closer to this, we ought to rearrange the names of the configs
that control how the kernel protects its memory. What was called
CONFIG_ARM_KERNMEM_PERMS is really doing the work that other architectures
call CONFIG_DEBUG_RODATA.

This redefines CONFIG_DEBUG_RODATA to actually do the bulk of the
ROing (and NXing). In the place of the old CONFIG_DEBUG_RODATA, use
CONFIG_DEBUG_ALIGN_RODATA, since that's what the option does: adds
section alignment for making rodata explicitly NX, as arm does not split
the page tables like arm64 does without _ALIGN_RODATA.

Also adds human readable names to the sections so I could more easily
debug my typos, and makes CONFIG_DEBUG_RODATA default "y" for CPU_V7.

Results in /sys/kernel/debug/kernel_page_tables for each config state:

 # CONFIG_DEBUG_RODATA is not set
 # CONFIG_DEBUG_ALIGN_RODATA is not set

---[ Kernel Mapping ]---
0x80000000-0x80900000           9M     RW x  SHD
0x80900000-0xa0000000         503M     RW NX SHD

 CONFIG_DEBUG_RODATA=y
 CONFIG_DEBUG_ALIGN_RODATA=y

---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80700000           6M     ro x  SHD
0x80700000-0x80a00000           3M     ro NX SHD
0x80a00000-0xa0000000         502M     RW NX SHD

 CONFIG_DEBUG_RODATA=y
 # CONFIG_DEBUG_ALIGN_RODATA is not set

---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80a00000           9M     ro x  SHD
0x80a00000-0xa0000000         502M     RW NX SHD

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-02-08 15:56:45 +00:00
Linus Torvalds
714d8e7e27 arm64 updates for 4.1:
The main change here is a significant head.S rework that allows us to
 boot on machines with physical memory at a really high address without
 having to increase our mapped VA range. Other changes include:
 
 - AES performance boost for Cortex-A57
 - AArch32 (compat) userspace with 64k pages
 - Cortex-A53 erratum workaround for #845719
 - defconfig updates (new platforms, PCI, ...)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABCgAGBQJVLnQpAAoJELescNyEwWM03RIH/iwcDc0MBZgkwfD5cnY+29p4
 m89lMDo3SyGQT4NynHSw7P3R7c3zULmI+9hmJMw/yfjjjL6m7X+vVAF3xj1Am4Al
 OzCqYLHyFnlRktzJ6dWeF1Ese7tWqPpxn+OCXgYNpz/r5MfF/HhlyX/qNzAQPKrw
 ZpDvnt44DgUfweqjTbwQUg2wkyCRjmz57MQYxDcmJStdpHIu24jWOvDIo3OJGjyS
 L49I9DU6DGUhkISZmmBE0T7vmKMD1BcgI7OIzX2WIqn521QT+GSLMhRxaHmK1s1V
 A8gaMTwpo0xFhTAt7sbw/5+2663WmfRdZI+FtduvORsoxX6KdDn7DH1NQixIm8s=
 =+F0I
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "Here are the core arm64 updates for 4.1.

  Highlights include a significant rework to head.S (allowing us to boot
  on machines with physical memory at a really high address), an AES
  performance boost on Cortex-A57 and the ability to run a 32-bit
  userspace with 64k pages (although this requires said userspace to be
  built with a recent binutils).

  The head.S rework spilt over into KVM, so there are some changes under
  arch/arm/ which have been acked by Marc Zyngier (KVM co-maintainer).
  In particular, the linker script changes caused us some issues in
  -next, so there are a few merge commits where we had to apply fixes on
  top of a stable branch.

  Other changes include:

   - AES performance boost for Cortex-A57
   - AArch32 (compat) userspace with 64k pages
   - Cortex-A53 erratum workaround for #845719
   - defconfig updates (new platforms, PCI, ...)"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (39 commits)
  arm64: fix midr range for Cortex-A57 erratum 832075
  arm64: errata: add workaround for cortex-a53 erratum #845719
  arm64: Use bool function return values of true/false not 1/0
  arm64: defconfig: updates for 4.1
  arm64: Extract feature parsing code from cpu_errata.c
  arm64: alternative: Allow immediate branch as alternative instruction
  arm64: insn: Add aarch64_insn_decode_immediate
  ARM: kvm: round HYP section to page size instead of log2 upper bound
  ARM: kvm: assert on HYP section boundaries not actual code size
  arm64: head.S: ensure idmap_t0sz is visible
  arm64: pmu: add support for interrupt-affinity property
  dt: pmu: extend ARM PMU binding to allow for explicit interrupt affinity
  arm64: head.S: ensure visibility of page tables
  arm64: KVM: use ID map with increased VA range if required
  arm64: mm: increase VA range of identity map
  ARM: kvm: implement replacement for ld's LOG2CEIL()
  arm64: proc: remove unused cpu_get_pgd macro
  arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
  arm64: remove __calc_phys_offset
  arm64: merge __enable_mmu and __turn_mmu_on
  ...
2015-04-16 13:58:29 -05:00
Ard Biesheuvel
c4a84ae39b ARM: 8322/1: keep .text and .fixup regions closer together
This moves all fixup snippets to the .text.fixup section, which is
a special section that gets emitted along with the .text section
for each input object file, i.e., the snippets are kept much closer
to the code they refer to, which helps prevent linker failure on
large kernels.
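
The trick is ld's handling of several section names in one pattern
list: matching input sections are emitted intermixed, in input order,
so each object's fixup snippets land next to that object's code. A
simplified sketch:

  .text : {
      /* ... */
      *(.text .text.fixup)    /* intermixed, per input file */
      /* ... */
  }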

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-29 23:11:56 +01:00
Ard Biesheuvel
eb765c1ceb ARM: 8317/1: move the .idmap.text section closer to .head.text
This moves the .idmap.text section closer to .head.text, so that
relative branches are less likely to go out of range if the kernel
text gets bigger.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-28 15:46:14 +00:00
Ard Biesheuvel
a9fea8b388 ARM: kvm: round HYP section to page size instead of log2 upper bound
Older binutils do not support expressions involving the values of
external symbols, so just round up the HYP region to the page size.

Tested-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: when will this ever end?!]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-27 12:21:27 +00:00
Ard Biesheuvel
12eb3e8339 ARM: kvm: assert on HYP section boundaries not actual code size
Using ASSERT() with an expression that involves a symbol that
is only supplied through a PROVIDE() definition in the linker
script itself is apparently not supported by some older versions
of binutils.

So instead, rewrite the expression so that only the section
boundaries __hyp_idmap_text_start and __hyp_idmap_text_end
are used. Note that this reverts the fix in 06f75a1f62
("ARM, arm64: kvm: get rid of the bounce page") for the ASSERT()
being triggered erroneously when unrelated linker emitted veneers
happen to end up in the HYP idmap region.
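
The reworked assertion looks roughly like this (the mask expression is
illustrative):

  ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(PAGE_SIZE - 1))
         <= PAGE_SIZE, "HYP init code too big or misaligned")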

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:43:46 +00:00
Ard Biesheuvel
e60a1fec44 ARM: kvm: implement replacement for ld's LOG2CEIL()
Commit 06f75a1f62 ("ARM, arm64: kvm: get rid of the bounce
page") uses ld's builtin function LOG2CEIL() to align the
KVM init code to a log2 upper bound of its size. However,
this function turns out to be a fairly recent addition to
binutils, which breaks the build for older toolchains.

So instead, implement a replacement LOG2_ROUNDUP() using
the C preprocessor.
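
Such a replacement can be a macro that expands to a ternary chain the
linker evaluates at link time; a truncated sketch (the real chain
covers more sizes, and the size symbol name is illustrative):

  #define LOG2_ROUNDUP(size) (      \
      (size) <=  0x40 ?  6 :        \
      (size) <=  0x80 ?  7 :        \
      (size) <= 0x100 ?  8 :        \
      (size) <= 0x200 ?  9 : 10)

  . = ALIGN(1 << LOG2_ROUNDUP(__hyp_idmap_size));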

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-23 11:16:50 +00:00
Ard Biesheuvel
06f75a1f62 ARM, arm64: kvm: get rid of the bounce page
The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.

For arm64, we just align to 4 KB, and enforce that the code size is less
than 4 KB, regardless of the chosen page size.

For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power of 2 upper bound of the code size.

Note that this also fixes a benign off-by-one error in the original bounce
page code, where a bounce page would be allocated unnecessarily if the code
was exactly 1 page in size.

On ARM, it also fixes an issue with very large kernels reported by Arnd
Bergmann, where stub sections with linker emitted veneers could erroneously
trigger the size/alignment ASSERT() in the linker script.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:21:56 +00:00
Russell King
06e944b8e5 generic fixmaps
ARM support for CONFIG_DEBUG_RODATA
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 Comment: Kees Cook <kees@outflux.net>
 
 iQIcBAABCgAGBQJUQDtHAAoJEIly9N/cbcAmS/gQAJnkmOBN5WLymkYE02QMcRZM
 l70c131QOGimFWUZpPxsyHdUFwASogxGet6LsGcbB40ayCrQw4tzXBlcFOvbY/dM
 HC+1I3CqRDphU/q2Mm/2bpg2F+VPbwIyxACPtsqW824muTHK47qDs3R9vVYDtfPV
 xupOpqz6qNFxgE5/TpDjsjVeJol/i+ygEQzIxo70m0FnLVv5t/deGjDM6bvfqwdm
 po/+hUlkW8lpyQspuucBCxfGagaCSkd67hyHMvq5zDjz1+6T1XljdA7jc7rSa9bI
 eCWuJgmM51AaA1lr+Eu1raSOduk0x1GU33wf1Y0z+qZ0A6ELTDmiY/SLMK6o8n6T
 4kPmzigRRT9a4B9kTR/mj9IyG+LAKu8Bvppl5CedNp2xEtIEGppnHU3d7bFZIVq3
 gCIL0ZG26D467hgUEgJwdJwnIU2GqirR4XyD3Ml+hjGCq6L5QzZpUWiskgh+EoHw
 dIhbJnGBxB2MkVJW0zE+ajVdiFAeU7P3voR74B73kwC9H+S6Fo10pff9LZbs8BKd
 R5RH5xCPDXgjnkjDbXW8e9Zkr58IrT0ffFcwE1IuDQpTMKKQPIlH5DUlRLQSHau9
 abDZuCZpvAPvGO4korTsu0dAXYJHlilG5Ftd856Hmrs+32eMDX2DV+RT4m7YX2Dt
 7tLXNu6P1yRFnjFdGxtR
 =u3lA
 -----END PGP SIGNATURE-----

Merge tag 'ronx-next' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux into devel-stable

generic fixmaps
ARM support for CONFIG_DEBUG_RODATA
2014-11-03 10:12:13 +00:00
Kees Cook
80d6b0c2ee ARM: mm: allow text and rodata sections to be read-only
This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
read-only. Additionally, this splits rodata from text so that rodata can
also be NX, which may lead to wasted memory when aligning to SECTION_SIZE.
The read-only areas are made writable during ftrace updates and kexec.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2014-10-16 14:38:54 -07:00
Kees Cook
1e6b48116a ARM: mm: allow non-text sections to be non-executable
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions
into section-sized areas that can have different permissions. Performs
the NX permission changes during free_initmem, so that init memory can be
reclaimed.

This uses section size instead of PMD size to reduce memory lost to
padding on non-LPAE systems.

Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2014-10-16 14:38:54 -07:00
Yalin Wang
562c85cadb ARM: 8168/1: extend __init_end to a page align address
This patch changes the __init_end address to a
page-aligned address, so that free_initmem() can
free the whole .init section: if the end address
is not page aligned, it is rounded down to a
page-aligned address, and the trailing unaligned
page is never freed.
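
In linker-script terms the fix is just an alignment before the end
symbol (sketch):

  . = ALIGN(PAGE_SIZE);
  __init_end = .;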

Signed-off-by: wang <yalin.wang2010@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-10-02 21:28:16 +01:00
Mark Rutland
76e7d5c4fd ARM: 8088/1: vmlinux.lds.S: drop redundant .comment
Commit 78d7530ac3 ("ARM: Clean up linker script using new linker script
macros.") modified the arm kernel linker script to use the STABS_DEBUG
macro, but left a .comment section definition. As STABS_DEBUG defines
the .comment section in an identical way, the second section definition
is redundant and can be removed.

This patch removes the redundant .comment section definition.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18 12:29:23 +01:00
Russell King
b9b32bf70f ARM: use linker magic for vectors and vector stubs
Use linker magic to create the vectors and vector stubs: we can tell the
linker to place them at an appropriate VMA, but keep the LMA within the
kernel.  This gets rid of some unnecessary symbol manipulation, and
lets the linker calculate the relocations appropriately.

Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-07-31 21:34:24 +01:00
Stephen Rothwell
40b313608a Finally eradicate CONFIG_HOTPLUG
Ever since commit 45f035ab9b ("CONFIG_HOTPLUG should be always on"),
it has been basically impossible to build a kernel with CONFIG_HOTPLUG
turned off.  Remove all the remaining references to it.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Doug Thompson <dougthompson@xmission.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Acked-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-06-03 14:20:18 -07:00
Marc Zyngier
0394e1f605 ARM: KVM: enforce maximum size for identity mapped code
We're about to move to an init procedure where we rely on the
fact that the init code fits in a single page. Make sure we
align the idmap text on a vector alignment, and that the code is
not bigger than a single page.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-04-28 22:23:09 -07:00
Christoffer Dall
9e9a367c29 ARM: Section based HYP idmap
Add a method (hyp_idmap_setup) to populate a hyp pgd with an
identity mapping of the code contained in the .hyp.idmap.text
section.

Offer a method to drop this identity mapping through
hyp_idmap_teardown.

Make all the above depend on CONFIG_ARM_VIRT_EXT and CONFIG_ARM_LPAE.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-23 13:29:09 -05:00
Pawel Moll
dad5451a32 ARM: 7605/1: vmlinux.lds: Move .notes section next to the rodata
The .notes section, being read-only data by nature, was placed between
the read-write .data and .bss sections. This was harmful for the XIP
kernel: being placed in the RAM range, most likely far from the ROM
address, it inflated the XIP images.

Moving the .notes at the end of the read-only section
(consisting of .text, .rodata and unwind info) fixes the problem.
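
Schematically, the read-only end of the image then looks like this
(NOTES is the generic helper that emits the .notes output section;
layout simplified):

  .rodata : { /* ... */ }
  /* ARM unwind tables: .ARM.exidx, .ARM.extab */
  NOTES
  /* read-write .data follows, placed in RAM for XIP */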

Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-12-16 10:04:24 +00:00
Stephen Boyd
ee951c630c ARM: 7568/1: Sort exception table at compile time
Add the ARM machine identifier to sortextable and select the
config option so that we can sort the exception table at compile
time. sortextable relies on a section named __ex_table existing
in the vmlinux, but ARM's linker script places the exception
table in the data section. Give the exception table its own
section so that sortextable can find it.

This allows us to skip the sorting step during boot.
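
A sketch of giving the table its own output section so sortextable can
find it by name (simplified; the boundary symbols match the generic
scripts):

  . = ALIGN(4);
  __ex_table : {
      __start___ex_table = .;
      *(__ex_table)
      __stop___ex_table = .;
  }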

Cc: David Daney <david.daney@cavium.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-11-04 10:31:16 +00:00
David Brown
9973290ce2 ARM: 7428/1: Prevent KALLSYM size mismatch on ARM.
ARM builds seem to be plagued by an occasional build error:

    Inconsistent kallsyms data
    This is a bug - please report about it
    Try "make KALLSYMS_EXTRA_PASS=1" as a workaround

The problem has to do with alignment of some sections by the linker.
The kallsyms data is built in two passes by first linking the kernel
without it, and then linking the kernel again with the symbols
included.  Normally, this just shifts the symbols, without changing
their order, and the compression used by the kallsyms gives the same
result.

On non-SMP, the per CPU data is empty.  Depending on where the
alignment ends up, it can come out as either:

   +-------------------+
   | last text segment |
   +-------------------+
   /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
   | per cpu (empty)   |
   +-------------------+
__per_cpu_end:
   /* padding */
__data_loc:
   +-------------------+     <- THREAD_SIZE alignment
   | data              |
   +-------------------+

or

   +-------------------+
   | last text segment |
   +-------------------+
   /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
   | per cpu (empty)   |
   +-------------------+
__per_cpu_end:
   /* no padding */
__data_loc:
   +-------------------+     <- THREAD_SIZE alignment
   | data              |
   +-------------------+

if the alignment satisfies both.  Because symbols that have the same
address are sorted by 'nm -n', the second case will be in a different
order than the first case.  This changes the compression, changing the
size of the kallsym data, causing the build failure.

The KALLSYMS_EXTRA_PASS=1 workaround usually works, but it is still
possible to have the alignment change between the second and third
pass.  It's probably even possible for it to never reach a fixed point.

The problem only occurs on non-SMP, when the per-cpu data is empty,
and when the data segment has alignment (and immediately follows the
text segments).  Fix this by including the per_cpu section only on
SMP, where it is not empty.
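
In the linker script that reduces to a config guard around the percpu
output section (sketch; macro name per the generic linker headers of
that era):

  #ifdef CONFIG_SMP
      PERCPU_SECTION(L1_CACHE_BYTES)
  #endif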

Signed-off-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-06-22 22:54:18 +01:00
Marc Zyngier
b8b9987ffd ARM: 7320/1: Fix proc_info table alignment
With an admittedly exotic choice of configuration options
(CC_OPTIMIZE_FOR_SIZE, THUMB2, some other size-minimizing ones)
and compiler, the proc_info table can end up being misaligned,
leaving the kernel unbootable (Error: unrecognized/unsupported
processor variant).

Forcing the alignment to 4 bytes in the linker script fixes the
issue.
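
The shape of the fix, with the boundary symbols ARM's script uses
(sketch):

  . = ALIGN(4);
  __proc_info_begin = .;
      *(.proc.info.init)
  __proc_info_end = .;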

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-02-09 16:25:37 +00:00
Will Deacon
972da06470 ARM: 7290/1: vmlinux.lds.S: align the exception fixup table to a 4-byte boundary
The exception fixup table is currently aligned to a 32-byte boundary.
Whilst this won't cause any problems, the exception_table_entry
structures contain only a pair of unsigned longs, so 4-byte alignment
is all that is required. If the table was walked from start to end,
cacheline alignment may bring some performance benefits, but since a
binary search is used, the access pattern is random and will not benefit
from a stricter alignment.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23 10:20:04 +00:00
Will Deacon
f0d5375e3c ARM: 7289/1: vmlinux.lds.S: do not hardcode cacheline size as 32 bytes
The linker script assumes a cacheline size of 32 bytes when aligning
the .data..cacheline_aligned and .data..percpu sections.

This patch updates the script to use L1_CACHE_BYTES, which should be set
to 64 on platforms that require it.
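
With the generic helpers this amounts to passing the arch cacheline
size instead of a hardcoded 32 (sketch, assuming the asm-generic macros
of that era):

  CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
  PERCPU_SECTION(L1_CACHE_BYTES)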

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23 10:20:04 +00:00
Will Deacon
8903826d0c ARM: idmap: populate identity map pgd at init time using .init.text
When disabling and re-enabling the MMU, it is necessary to take out an
identity mapping for the code that manipulates the SCTLR in order to
avoid it disappearing from under our feet. This is useful when soft
rebooting and returning from CPU suspend.

This patch allocates a set of page tables during boot and populates them
with an identity mapping for the .idmap.text section. This means that
users of the identity map do not need to manage their own pgd and can
instead annotate their functions with __idmap or, in the case of assembly
code, place them in the correct section.
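
A sketch of the section collection behind this, with the boundary
symbols the C code uses when building the identity mapping (placement
simplified):

  .text : {
      /* ... */
      __idmap_text_start = .;
      *(.idmap.text)
      __idmap_text_end = .;
      /* ... */
  }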

Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-12-06 14:04:14 +00:00
Russell King
bdf4e94823 Merge branch 'misc' into for-linus
Conflicts:
	arch/arm/mach-integrator/integrator_ap.c
2011-10-25 08:19:59 +01:00
Simon Glass
87e040b645 ARM: 7017/1: Use generic BUG() handler
ARM uses its own BUG() handler which makes its output slightly different
from other archtectures.

One of the problems is that the ARM implementation doesn't report the function
with the BUG() in it, but always reports the PC being in __bug(). The generic
implementation doesn't have this problem.

Currently we get something like:

kernel BUG at fs/proc/breakme.c:35!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
...
PC is at __bug+0x20/0x2c

With this patch it displays:

kernel BUG at fs/proc/breakme.c:35!
Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
...
PC is at write_breakme+0xd0/0x1b4

This implementation uses an undefined instruction to implement BUG, and sets up
a bug table containing the relevant information. Many versions of gcc do not
support %c properly for ARM (inserting a # when they shouldn't) so we work
around this using distasteful macro magic.

v1: Initial version to replace existing ARM BUG() implementation with something
more similar to other architectures.

v2: Add Thumb support, remove backtrace whitespace output changes. Change to
use macros instead of requiring the asm %d flag to work (thanks to
Dave Martin <dave.martin@linaro.org>)

v3: Remove old BUG() implementation in favor of this one.
Remove the Backtrace: message (will submit this separately).
Use ARM_EXIT_KEEP() so that some architectures can dump exit text at link time
thanks to Stephen Boyd <sboyd@codeaurora.org> (although since we always
define GENERIC_BUG this might be academic.)
Rebase to linux-2.6.git master.

v4: Allow BUGS in modules (these were not reported correctly in v3)
(thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.)
Remove __bug() as this is no longer needed.

v5: Add %progbits as the section flags.

Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:13:41 +01:00
Russell King
6760b10960 ARM: fix vmlinux.lds.S discarding sections
We are seeing linker errors caused by sections being discarded, despite
the linker script trying to keep them.  The result is (eg):

`.exit.text' referenced in section `.alt.smp.init' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
`.exit.text' referenced in section `.alt.smp.init' of net/built-in.o: defined in discarded section `.exit.text' of net/built-in.o

This is the relevant part of the linker script (reformatted to make it
clearer):
| SECTIONS
| {
| /*
| * unwind exit sections must be discarded before the rest of the
| * unwind sections get included.
| */
| /DISCARD/ : {
| *(.ARM.exidx.exit.text)
| *(.ARM.extab.exit.text)
| }
| ...
| .exit.text : {
| *(.exit.text)
| *(.memexit.text)
| }
| ...
| /DISCARD/ : {
| *(.exit.text)
| *(.memexit.text)
| *(.exit.data)
| *(.memexit.data)
| *(.memexit.rodata)
| *(.exitcall.exit)
| *(.discard)
| *(.discard.*)
| }
| }

Now, this is what the linker manual says about discarded output sections:

|    The special output section name `/DISCARD/' may be used to discard
| input sections.  Any input sections which are assigned to an output
| section named `/DISCARD/' are not included in the output file.

No questions, no exceptions. It doesn't say "unless they are listed
before the /DISCARD/ section." Now, this is what asm-generic/vmlinux.lds.h
says:
| /*
|  * Default discarded sections.
|  *
|  * Some archs want to discard exit text/data at runtime rather than
|  * link time due to cross-section references such as alt instructions,
|  * bug table, eh_frame, etc. DISCARDS must be the last of output
|  * section definitions so that such archs put those in earlier section
|  * definitions.
|  */

And guess what - the list _always_ includes .exit.text etc.

Now, what's actually happening is that the linker is reading the script,
and it finds the first /DISCARD/ output section at the beginning of the
script. It continues reading the script, and finds the 'DISCARD' macro
at the end, which having been postprocessed results in another
/DISCARD/ output section. As the linker already contains the earlier
/DISCARD/ output section, it adds it to that existing section, so it
effectively is placed at the start. This can be seen by using the -M
option to ld:

| Linker script and memory map
|
|                 0xc037c080                jiffies = jiffies_64
|
| /DISCARD/
|  *(.ARM.exidx.exit.text)
|  *(.ARM.extab.exit.text)
|  *(.exit.text)
|  *(.memexit.text)
|  *(.exit.data)
|  *(.memexit.data)
|  *(.memexit.rodata)
|  *(.exitcall.exit)
|  *(.discard)
|  *(.discard.*)
|
|                 0xc0008000                . = 0xc0008000
|
| .head.text      0xc0008000      0x1d0
|                 0xc0008000                _text = .
|  *(.head.text)
|  .head.text     0xc0008000      0x1d0 arch/arm/kernel/head.o
|                 0xc0008000                stext
|
| .text           0xc0008200   0x2d78d0
|                 0xc0008200                _stext = .
|                 0xc0008200                __exception_text_start = .
|  *(.exception.text)
|  .exception.text
| ...

As you can see, all the discarded sections are grouped together - and
as a result of it being the first output section, they all appear before
any other section.

The result is that not only is the unwind information discarded (as
intended), but also the .exit.text, despite us wanting to have the
.exit.text preserved.

We can't move the unwind information elsewhere, because it'll then be
included even when we do actually discard the .exit.text (and similar)
sections.

So, work around this by avoiding the generic DISCARDS macro, and instead
conditionalize the sections to be discarded ourselves.  This avoids the
ambiguity in how the linker assigns input sections to output sections,
making our script less dependent on undocumented linker behaviour.
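
The resulting script shape, sketched: one explicit /DISCARD/ list, with
the exit sections dropped only via macros that expand to nothing when
that code must be kept (names from the ARM script; list abridged):

  /DISCARD/ : {
      *(.ARM.exidx.exit.text)
      *(.ARM.extab.exit.text)
      ARM_EXIT_DISCARD(EXIT_TEXT)
      ARM_EXIT_DISCARD(EXIT_DATA)
      EXIT_CALL
      *(.discard)
      *(.discard.*)
  }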

Reported-by: Rob Herring <robherring2@gmail.com>
Tested-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-09-20 23:42:31 +01:00
Russell King
e2f81844ef ARM: vmlinux.lds: use _text and _stext the same way as x86
x86 uses _text to mark the start of the kernel image including the
head text, and _stext to mark the start of the .text section.  Change
our vmlinux.lds to conform.  An audit of the places which use _stext
and _text in arch/arm indicates no users of either symbol are impacted
by this change.  It does mean a slight change to /proc/iomem output.
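
In the script this looks roughly like the following (sketch; HEAD_TEXT
is the generic helper that collects the head text):

  .head.text : {
      _text = .;
      HEAD_TEXT
  }
  .text : {
      _stext = .;
      /* ... */
  }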

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:34 +01:00
Russell King
3835d69a6c ARM: vmlinux.lds: move init sections between text and data sections
Place the init sections between the text and data sections.  This
means all code is grouped together at the beginning of the kernel
image, and all data is at the end of the image.  This avoids problems
with the 24-bit branch instruction relocations becoming invalid with
large initramfs images.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:31 +01:00
Russell King
43fc9d2fa5 ARM: vmlinux.lds: remove .rodata/.rodata1 from main .text segment
RODATA() already handles these sections, so allow it to take care
of them for us.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:28 +01:00
Russell King
1604d79d37 ARM: vmlinux.lds: rearrange .init output section
Keep the various linker tables as separate output sections rather
than combining them together into one big .init section.  This makes
it easier to see in 'vmlinux' what is placed where.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:35:41 +01:00
Russell King
39df88872f ARM: vmlinux.lds: move discarded sections to beginning
Rather than scattering the discarded sections throughout the linker
file, move them to the start.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:35:38 +01:00
Tejun Heo
0415b00d17 percpu: Always align percpu output section to PAGE_SIZE
Percpu allocator honors alignment requests up to PAGE_SIZE and both the
percpu addresses in the percpu address space and the translated kernel
addresses should be aligned accordingly.  The calculation of the
former depends on the alignment of the percpu output section in the kernel
image.

The linker script macros PERCPU_VADDR() and PERCPU() are used to
define this output section and the latter takes @align parameter.
Several architectures are using @align smaller than PAGE_SIZE, breaking
percpu memory alignment.

This patch removes @align parameter from PERCPU(), renames it to
PERCPU_SECTION() and makes it always align to PAGE_SIZE.  While at it,
add PCPU_SETUP_BUG_ON() checks such that alignment problems are
reliably detected and remove percpu alignment comment recently added
in workqueue.c as the condition would trigger BUG way before reaching
there.

For um, this patch raises the alignment of the percpu area.  As the area
is in .init, there shouldn't be any noticeable difference.

This problem was discovered by David Howells while debugging boot
failure on mn10300.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Cc: uclinux-dist-devel@blackfin.uclinux.org
Cc: David Howells <dhowells@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: user-mode-linux-devel@lists.sourceforge.net
2011-03-24 18:50:09 +01:00
Linus Torvalds
16d8775700 Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (91 commits)
  ARM: 6806/1: irq: introduce entry and exit functions for chained handlers
  ARM: 6781/1: Thumb-2: Work around buggy Thumb-2 short branch relocations in gas
  ARM: 6747/1: P2V: Thumb2 support
  ARM: 6798/1: aout-core: zero thread debug registers in a.out core dump
  ARM: 6796/1: Footbridge: Fix I/O mappings for NOMMU mode
  ARM: 6784/1: errata: no automatic Store Buffer drain on Cortex-A9
  ARM: 6772/1: errata: possible fault MMU translations following an ASID switch
  ARM: 6776/1: mach-ux500: activate fix for errata 753970
  ARM: 6794/1: SPEAr: Append UL to device address macros.
  ARM: 6793/1: SPEAr: Remove unused *_SIZE macros from spear*.h files
  ARM: 6792/1: SPEAr: Replace SIZE macro's with SZ_4K macros
  ARM: 6791/1: SPEAr3xx: Declare device structures after shirq code
  ARM: 6790/1: SPEAr: Clock Framework: Rename usbd clock and align apb_clk entry
  ARM: 6789/1: SPEAr3xx: Rename sdio to sdhci
  ARM: 6788/1: SPEAr: Include mach/hardware.h instead of mach/spear.h
  ARM: 6787/1: SPEAr: Reorder #includes in .h & .c files.
  ARM: 6681/1: SPEAr: add debugfs support to clk API
  ARM: 6703/1: SPEAr: update clk API support
  ARM: 6679/1: SPEAr: make clk API functions more generic
  ARM: 6737/1: SPEAr: formalized timer support
  ...
2011-03-16 19:03:06 -07:00
Russell King
05e3475451 Merge branch 'p2v' into devel
Conflicts:
	arch/arm/kernel/module.c
	arch/arm/mach-s5pv210/sleep.S
2011-03-16 23:35:27 +00:00
Linus Torvalds
79d8a8f736 Merge branch 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
* 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
  percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support
  percpu: Generic support for this_cpu_cmpxchg_double()
  alpha: use L1_CACHE_BYTES for cacheline size in the linker script
  percpu: align percpu readmostly subsection to cacheline

Fix up trivial conflict in arch/x86/kernel/vmlinux.lds.S due to the
percpu alignment having changed ("x86: Reduce back the alignment of the
per-CPU data section")
2011-03-16 08:22:41 -07:00
Russell King
a9ad21fed0 ARM: Keep exit text/data around for SMP_ON_UP
When SMP_ON_UP is used and the spinlocks are inlined, we end up with
inline spinlocks in the exit code, with references from the SMP
alternatives section to the exit sections.  This causes link time
errors.  Avoid this by placing the exit sections in the init-discarded
region.
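
One way to express this, sketched (the exact config guard and section
layout in the ARM script are more involved; illustrative):

  #ifdef CONFIG_SMP_ON_UP
  #define ARM_EXIT_KEEP(x)    x
  #else
  #define ARM_EXIT_KEEP(x)
  #endif

  .init.text : {
      /* ... */
      ARM_EXIT_KEEP(EXIT_TEXT)
  }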

Cc: <stable@kernel.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-21 19:29:27 +00:00
Pawel Moll
dc810efb0c ARM: 6740/1: Place correctly notes section in the linker script
Commit 18991197b4 added the --build-id
linker option when the toolchain supports it. The ARM one does, but for
some reason places the section at 0 when the linker script doesn't
mention it explicitly.

Commit 1e621a8e37 worked around the problem by
removing this section from the binary image with explicit objcopy options,
but it still exists in vmlinux, confusing tools like debuggers and perf.

This problem was discussed here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-May/015994.html
but the proposed changes to the linker script were substantial.

This patch simply places NOTES (36 bytes long, at least when compiled
with the CodeSourcery toolchain) between data and bss, which seems to be
the right place (as suggested by the sample linker script in
include/asm-generic/vmlinux.lds.h).

It is enough to place it correctly in vmlinux (so debuggers are happy):

Section Headers:
  [11] .data             PROGBITS        c07ce000 7ce000 020fc0 00  WA  0   0 32
  [12] .notes            NOTE            c07eefc0 7eefc0 000024 00  AX  0   0  4
  [13] .bss              NOBITS          c07ef000 7eefe4 01e628 00  WA  0   0 32
Program Headers:
  LOAD           0x008000 0xc0008000 0xc0008000 0x7e6fe4 0x805628 RWE 0x8000
  NOTE           0x7eefc0 0xc07eefc0 0xc07eefc0 0x00024 0x00024 R E 0x4
Section to Segment mapping:
  Segment Sections...
   00     <...> .data .notes .bss
   01     .notes

and to get it exposed as /sys/kernel/notes used by perf tools.

Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-21 19:29:25 +00:00
Russell King
dc21af99fa ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching
This idea came from Nicolas; Eric Miao produced an initial version,
which was then rewritten into this.

Patch the physical to virtual translations at runtime.  As we modify
the code, this makes it incompatible with XIP kernels, but allows us
to achieve this with minimal loss of performance.

As many translations are of the form:

	physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
	virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)

we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt().  We calculate at run time (PHYS_OFFSET
- PAGE_OFFSET) by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.

Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.

At present, we are unable to support Realview with Sparsemem enabled
as this uses a complex mapping function, and MSM as this requires a
constant which will not fit in our math instruction.

Add a module version magic string for this feature to prevent
incompatible modules being loaded.
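
The patch locations live in a table that the linker script collects so
the boot code can walk and patch them (sketch; symbol names as used by
the ARM fixup code):

  .init.pv_table : {
      __pv_table_begin = .;
      *(.pv_table)
      __pv_table_end = .;
  }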

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-17 23:27:32 +00:00
Tejun Heo
19df0c2fef percpu: align percpu readmostly subsection to cacheline
Currently the percpu readmostly subsection may share cachelines with other
percpu subsections which may result in unnecessary cacheline bounce
and performance degradation.

This patch adds @cacheline parameter to PERCPU() and PERCPU_VADDR()
linker macros, makes each arch linker scripts specify its cacheline
size and use it to align percpu subsections.

This is based on Shaohua's x86 only patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Shaohua Li <shaohua.li@intel.com>
2011-01-25 14:26:50 +01:00
Russell King
4073723acb Merge branch 'misc' into devel
Conflicts:
	arch/arm/Kconfig
	arch/arm/common/Makefile
	arch/arm/kernel/Makefile
	arch/arm/kernel/smp.c
2011-01-06 22:32:52 +00:00
Russell King
daf8741675 ARM: implement support for read-mostly sections
As our SMP implementation uses MESI protocols, grouping together data
which is mostly only read means that we avoid unnecessary cache line
bouncing when this data shares a cache line with other data.

In other words, cache lines associated with read-mostly data are
expected to spend most of their time in shared state.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-05 08:39:36 +00:00
Rabin Vincent
61b5cb1c3b ARM: place C irq handlers in IRQ_ENTRY for ftrace
When FUNCTION_GRAPH_TRACER is enabled, place do_IRQ() and friends in the
IRQ_ENTRY section so that the irq-related features of the function graph
tracer work.
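
On the linker-script side this is the generic IRQENTRY_TEXT helper
inside .text, which collects .irqentry.text and brackets it with the
symbols the tracer checks (placement sketch):

  .text : {
      /* ... */
      IRQENTRY_TEXT
      /* ... */
  }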

Signed-off-by: Rabin Vincent <rabin@rab.in>
2010-11-19 21:43:26 +05:30
Tony Lindgren
88d927e948 ARM: 6465/1: Fix data abort accessing proc_info from __lookup_processor_type
Commit 5085f3ff45 added better support for
CONFIG_HOTPLUG_CPU by keeping proc_info around. However, depending on
the Kconfig options selected, this can make booting fail mysteriously
early on.

Turns out a data abort can happen in __lookup_processor_type, in
'ldmia r5, {r3, r4}'. When it happens, the address loaded into r5 is not
aligned. Fix the problem by
aligning proc_info.

Reported-by: Anand Gadiyar <gadiyar@ti.com>
Tested-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-27 21:40:29 +01:00
Russell King
a0a55682b8 Merge branch 'hotplug' into devel
Conflicts:
	arch/arm/kernel/head-common.S
2010-10-18 22:34:47 +01:00
Russell King
5085f3ff45 ARM: hotplug cpu: Keep processor information, startup code & __lookup_processor_type
When hotplug CPU is enabled, we need to keep the list of supported CPUs,
their setup functions, and __lookup_processor_type in place so that we
can find and initialize secondary CPUs.  Move these into the __CPUINIT
section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08 10:07:32 +01:00