Add SD Card support to the kfr2r09 board using the
sh_mobile_sdhi driver hooked up to SDHI0 and YC304.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add SD Card support to the se7724 board using the
sh_mobile_sdhi driver hooked up to SDHI0 and CN7.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the AP325 board to use sh_mobile_sdhi for the
SD Card connected to CN3 instead of mmc_spi.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the Migo-R board to use sh_mobile_sdhi for the
SD Card connected to CN9 instead of mmc_spi.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When CONFIG_FUNCTION_GRAPH_TRACER is enabled the function graph tracer
may patch return addresses on the stack with the address of
return_to_handler(). This really confuses the DWARF unwinder because it
will try to find the caller of return_to_handler(), not the caller of the
real return address.
So teach the DWARF unwinder how to find the real return address whenever
it encounters return_to_handler().
This patch does not cope very well when multiple return addresses on the
stack have been patched. To make it work properly it would require state
to track how many return_to_handler()'s have been seen so that we'd know
where to look in current->curr_ret_stack[]. So for now, instead of
trying to handle this, just moan if more than one return address on the
stack has been patched.
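As a rough sketch of the approach (not the literal patch; the frame field,
the helper name, and the ftrace return-stack layout referenced here are
assumptions based on the function graph tracer of this era):

static void dwarf_fixup_return_to_handler(struct dwarf_frame *frame)
{
	static int seen;	/* moan if more than one frame was patched */

	if (frame->return_addr != (unsigned long)return_to_handler)
		return;

	WARN_ON(seen++);

	/* Substitute the real return address stashed away by ftrace. */
	frame->return_addr = current->ret_stack[current->curr_ret_stack].ret;
}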
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds an __irq_entry annotation for do_IRQ() so that the IRQ
annotation in the function graph tracer works as advertised. We already
have the IRQENTRY section wired up, so this is just a trivial addition
to actually make use of it.
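The change amounts to tagging the handler's prototype (a sketch; __irq_entry
comes from linux/ftrace.h, and the body is elided):

asmlinkage __irq_entry int do_IRQ(unsigned int irq, struct pt_regs *regs)
{
	/* ... existing irq_enter()/demux/handle/irq_exit() path ... */
	return 1;
}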
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.
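Roughly, the nommu default then just points the arch dma_map_ops at the
generic helpers (a sketch; field and helper names follow the conventions of
this era, and the remaining ops are elided):

static struct dma_map_ops nommu_dma_ops = {
	.alloc_coherent	= dma_generic_alloc_coherent,
	.free_coherent	= dma_generic_free_coherent,
	/* .map_page, .map_sg, .sync_* ... unchanged */
};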
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the past these were simply wrapping to barrier() which was sufficient
on SH SMP platforms predating SH-4A. Unfortunately due to ll/sc semantics
an explicit synco is needed in these cases, which is sorted for us by
just switching these over to smp_mb(). smp_mb() also has the benefit of
being wrapped to barrier() in the UP and non-SH4A cases, so old behaviour
is maintained for those parts.
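Schematically (simplified; the real definitions go through mb()/rmb()/wmb()
and the SMP/UP split in the system headers):

#if defined(CONFIG_SMP) && defined(CONFIG_CPU_SH4A)
#define smp_mb()	__asm__ __volatile__ ("synco" : : : "memory")
#else
#define smp_mb()	barrier()	/* old behaviour for UP and pre-SH-4A */
#endif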
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
SCIF2 and the FPU exceptions happen to share vector numbers, one in
EXPEVT and the other in INTEVT. This is a violation of the interface and
should never have made it into silicon. On top of that, the demux hack
that was added for special dispatch is rather error prone, and introduces
more problems than it solves. Kill all of it off, and just refuse to deal
with SCIF2 outright.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This simplifies the irqflags support by switching over to the asm-generic
version. The necessary support functions are brought out-of-line for both
SHcompact and SHmedia instruction sets.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This code was added for some ancient SH-4 solution engines with peculiar
boot ROMs that did silly things to the UBC MSTP bits. None of these have
been in the wild for years, and these days the clock framework wraps up
the MSTP bits, meaning that the UBC code is one of the few interfaces
that is stomping MSTP bits underneath the clock framework. At this point
the risks far outweigh any benefit this code provided, so just kill it
off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the case where need_resched() is set in between the cpu_idle() and
pm_idle() calls we were missing an else case for just re-enabling local
IRQs and bailing out. This was noticed by the irqs_disabled() warning,
even though IRQs were being re-enabled elsewhere.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the x86 change and moves check_pgt_cache() up under the
!need_resched() tight loop, rather than simply calling in to it when
exiting idle.
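Schematically, the loop ends up looking something like this (a sketch
modelled on the x86 loop rather than the literal sh code):

	while (!need_resched()) {
		check_pgt_cache();	/* now inside the tight loop */
		rmb();

		local_irq_disable();
		/* ... enter the selected sleep state ... */
	}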
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs in to sleep
mode with them not coming back if they didn't have their own local
timers. Given that we use clockevents broadcasting by default, the CPU
managing the clockevents can't have IRQs disabled before entering its
sleep state.
This unfortunately leaves us with the age-old need_resched() race in
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to layer
SR.BL bit manipulation on top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All of the secondary CPUs are forced in to light sleep mode, but we were
missing the same initialization for the boot CPU. This resulted in
inconsistent sleep modes depending on which CPU we were on, confusing the
idle loop when not polling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The open function got the BKL via the big push down. Replace it with
preempt_disable()/preempt_enable(), as this is sufficient for a UP machine.
The ioctl can be unlocked because there is no functionality which
requires serialization. The usage by multiple callers is broken with
and without the BKL due to the local static variable addr.
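As a sketch of the open() side of the change (hypothetical names, not the
driver's real code):

static int foo_open(struct inode *inode, struct file *file)
{
	int ret;

	preempt_disable();		/* was: lock_kernel() */
	ret = foo_hw_setup();		/* hypothetical setup helper */
	preempt_enable();		/* was: unlock_kernel() */

	return ret;
}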
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add code to handle the cache disabled case. Fixes breakage introduced by
37443ef3f0 ("sh: Migrate SH-4 cacheflush
ops to function pointers."). Without this patch, configuring caches off
with CONFIG_CACHE_OFF=y makes kfr2r09 and Migo-R lock up in fbdev
deferred I/O or early user space.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096 unroll and
increment. Not only is this sub-optimal for larger page sizes, it's also
uncovered a bug in sh4_flush_dcache_page() when large page sizes are used
and we have no cache aliases -- resulting in only a part of the page's
D-cache lines being written back.
Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in support for NMI counting per-CPU via irq_cpustat_t.
Modelled after the x86 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
set_restore_sigmask() function. This saves the costly SMP-safe set_bit
operation, which we do not need for the sigmask flag since TIF_SIGPENDING
always has to be set too.
Based on the x86 and powerpc change.
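A minimal sketch of the helper, modelled on the x86 version of this era
(thread_info::status is a plain, CPU-local field, so no atomic bitop is
needed for it):

#define TS_RESTORE_SIGMASK	0x0001	/* restore signal mask in do_signal() */

static inline void set_restore_sigmask(void)
{
	struct thread_info *ti = current_thread_info();

	ti->status |= TS_RESTORE_SIGMASK;
	set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
}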
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The resume_userspace path had TRACE_IRQS_OFF written incorrectly and so
never handled the transition properly. This was fixed once before but
seems to have made it back in the tree. Fix it for good.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only needs to flush the return code via the legacy path, and just
invalidates uselessly otherwise. This makes the behaviour consistent for
all of the trampoline setup paths.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The secondary CPU info was seeing corrupted results due to not entering
all of the setup paths taken by the boot CPU. So we just memcpy() the
boot cpu data over directly, and then fix up the per-CPU bits.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We do not want to use smp_processor_id() from these paths, as they trip
preempt BUGs. Switch the test over to the boot cpu directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Secondary CPUs already take care of the D-cache bits through the common
cache initialization path, and the only thing that is necessary after
twiddling around with stack_start is ensuring that the I-cache changes
are visible (particularly since this tends to be the only part lacking
coherency).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Despite being located in the ftrace header, the CALLER_ADDRx definitions
are used by generic code. As such, we have to provide them generically, and
given that there is no real dependence on ftrace in the first place, the
definitions can just be moved out.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This cribs the x86 implementation of ftrace_nmi_enter() and friends to
make ftrace_modify_code() NMI safe, particularly on SMP configurations.
For additional notes on the problems involved, see the comment below
ftrace_call_replace().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds return_address.c to the -pg exclusion list; as this is the
building block for CALLER_ADDRx, we do not want to profile it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables us to build the DWARF unwinder both with modules enabled and
disabled in addition to reducing code size in the latter case. The
helpers are also consolidated, and modified to resemble the BUG module
helpers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the unwinder implementation and adds a new
return_address() abstraction modelled after the ARM code. The DWARF
unwinder is tied in to this, returning NULL otherwise in the case of
being unable to support arbitrary depths.
This enables us to get correct behaviour with the unwinder enabled,
as well as disabling the arbitrary depth support when frame pointers are
enabled, as arbitrary depths with __builtin_return_address() are not
supported regardless.
With this abstraction it's also possible to layer on a simplified
implementation with frame pointers in the event that the unwinder isn't
enabled, although this is left as a future exercise.
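In rough form (a sketch; the unwinder-backed helper name is an assumption):

void *return_address(unsigned int depth)
{
#ifdef CONFIG_DWARF_UNWINDER
	/* Arbitrary depths are fine when the DWARF unwinder is around. */
	return dwarf_return_address(depth);	/* hypothetical helper */
#else
	/* __builtin_return_address(n) is unusable for n > 0 on SH. */
	return depth ? NULL : __builtin_return_address(0);
#endif
}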
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Sync up with latest core changes in the syscalls tracing area:
- tracing: Map syscall name to number (syscall_name_to_nr())
- tracing: Call arch_init_ftrace_syscalls at boot
- tracing: add support tracepoint ids (set_syscall_{enter,exit}_id())
Taken from the s390 change.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.
This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.
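Conceptually, update_mmu_cache() then reduces to something like this
(a simplified sketch, not the exact sh implementation):

	unsigned long pfn = pte_pfn(pte);

	if (pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		/* No page_mapping() test any more -- the mapping may already
		 * be gone, e.g. freed swap cache. */
		if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
			__flush_wback_region(page_address(page), PAGE_SIZE);
	}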
Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the ARM change, as SH had all of the same issues:
Make die() better match x86:
- add printing of the last accessed sysfs file
- ensure console_verbose() is called under the lock
- ensure we panic outside of oops_exit()
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The major reason for implementing the DWARF unwinder in the first place
was so that we could stop using __builtin_return_address(n), which
doesn't work on SH for n > 0.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Originally, dwarf_unwind_stack() was a recursive function and it seems
that some of the old comments were never updated.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
If we broke out of the while (1) loop because the return address of
"frame" was zero, then "frame" needs to be free'd before we return.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Pass a module's .eh_frame section to the DWARF unwinder at module load
time so that the section's FDEs and CIEs can be registered with the
DWARF unwinder. This allows us to unwind the stack through module code
when generating backtraces.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
In the multi-evt conversion for the SH-X3 proto CPU, IRLs were dropped
down to a single unique masking source, which ended up blowing up on
ILSEL-based IRQs which have special semantics that otherwise confuse the
intc code. While this does result in intc spewing about not having a
unique masking source, we don't really care.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB,
as well as RAM.
With this change my 7785LCR board can switch to 32bit MMU mode at
runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Unfortunately, at the point during boot when we want to be setting up
the PMB entries, the kmem subsystem hasn't been initialised.
We now match pmb_map slots with pmb_entry_list slots. When we find an
empty slot in pmb_map, we set the bit, thereby acquiring the
corresponding pmb_entry_list entry. There is a benefit in using this
static array of struct pmb_entry's; we don't need to acquire any locks
in order to traverse the list of struct pmb_entry's.
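A sketch of the slot scheme (the entry count and names approximate the sh
PMB code of this era):

#define NR_PMB_ENTRIES	16

static unsigned long pmb_map;				/* one bit per slot */
static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES];	/* no kmalloc needed */

static struct pmb_entry *pmb_alloc_entry(void)
{
	int pos;

	do {
		pos = find_first_zero_bit(&pmb_map, NR_PMB_ENTRIES);
		if (pos >= NR_PMB_ENTRIES)
			return NULL;			/* all slots in use */
	} while (test_and_set_bit(pos, &pmb_map));	/* claim the slot */

	return &pmb_entry_list[pos];
}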
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.
Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
To allow the MMU to be switched between 29bit and 32bit mode at runtime
some constants need to be swapped for functions that return a runtime
value.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.
Likewise, replace a P1SEGADDR() use with virt_to_phys().
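For reference, the classic definition being replaced simply strips the
segment bits, which is only valid for untranslated P1SEG addresses; the
offset-based form of __pa() shown below is an approximation of the
PMB_FIXED variant:

/* 29-bit assumption: physical address == P1SEG offset. */
#define PHYSADDR(a)	(((unsigned long)(a)) & 0x1fffffff)

/* Offset-based form used when the kernel sits behind a PMB mapping. */
#define __pa(x)		((unsigned long)(x) - PAGE_OFFSET + __MEMORY_START)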
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Simplify set_pmb_entry() by removing the possibility of not finding a
free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so
that if there are no free slots we fail at allocation time, rather than
in set_pmb_entry().
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Currently, we've got the less than ideal situation where if we need to
allocate a 256MB mapping we'll allocate four entries like so,
entry 1: 128MB
entry 2: 64MB
entry 3: 16MB
entry 4: 16MB
This is because as we execute the loop in pmb_remap() we will
progressively try mapping the remaining address space with smaller and
smaller sizes. This isn't good because the size we use on one iteration
may be the perfect size to use on the next iteration, for instance when
the initial size is divisible by one of the PMB mapping sizes.
With this patch, we now only need two entries in the PMB to map 256MB of
address space,
entry 1: 128MB
entry 2: 128MB
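A sketch of the fixed selection logic (the hardware PMB sizes are 16MB,
64MB, 128MB and 512MB; the loop is illustrative, not the literal
pmb_remap()):

static const unsigned long pmb_sizes[] = {
	512 << 20, 128 << 20, 64 << 20, 16 << 20,
};

/* Assumes 'size' is a non-zero multiple of 16MB, as PMB mappings must be. */
static void pmb_map_region(unsigned long size)
{
	while (size) {
		int i = 0;

		/* Re-pick the largest size that still fits on every pass,
		 * instead of walking down the table only once. */
		while (pmb_sizes[i] > size)
			i++;

		/* ... set up one PMB entry of pmb_sizes[i] here ... */
		size -= pmb_sizes[i];
	}
}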
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We should favour PMB mappings when the physical address cannot be
reached with 29-bits.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
If we fail to allocate a PMB entry in pmb_remap() we must remember to
clear and free any PMB entries that we may have previously allocated,
e.g. if we were allocating a multiple entry mapping.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Fix some callers of jump_to_uncached() and back_to_cached() that were
not annotated with __uses_jump_to_uncached.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Extend the ecovec24 board code to enable Power
Management LEDs showing the current sh7724 sleep state.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Certain networking and USB workloads generate floods of these accesses,
so just disable it by default (thereby restoring the old behaviour). The
option remains configurable from userspace, and can still be used as a
debugging aid.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There is now no need for the magicpanelr2 and dreamcast platforms to set
their own I/O port base values, given that the generic machvec code now
sets this to P2SEG for everyone.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's already code in do_page_fault() to conditionally enable
interrupts, so we don't need to unconditionally enable them before
calling it. This fixes a lockdep warning where we called
trace_hardirqs_off() but with irqs still enabled.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This bumps up the default I/O base to P2SEG, which allows legacy probing
to bail out gracefully rather than oopsing. Platforms that have a real
PIO offset still need to fix this up on their own, although most
platforms are content with P2SEG already.
The previous change to teach ioport_map() about >= P1SEG offsets in
combination with this patch allows both the already remapped and the
legacy address probing to pass through and succeed.
Fixes up an oops with i8042 on the sh7785lcr board.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the case where certain drivers already do their own
remapping and subsequently attempt to use the PIO calls for I/O. In this
case there is no additional remapping that needs to be done, and the
address can be cast into the cookie directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* 'for-linus' of git://neil.brown.name/md: (97 commits)
md: raid-1/10: fix RW bits manipulation
md: remove unnecessary memset from multipath.
md: report device as congested when suspended
md: Improve name of threads created by md_register_thread
md: remove sparse warnings about lock context.
md: remove sparse waring "symbol xxx shadows an earlier one"
async_tx/raid6: add missing dma_unmap calls to the async fail case
ioat3: fix uninitialized var warnings
drivers/dma/ioat/dma_v2.c: fix warnings
raid6test: fix stack overflow
ioat2: clarify ring size limits
md/raid6: cleanup ops_run_compute6_2
md/raid6: eliminate BUG_ON with side effect
dca: module load should not be an error message
ioat: driver version 4.0
dca: registering requesters in multiple dca domains
async_tx: remove HIGHMEM64G restriction
dmaengine: sh: Add Support SuperH DMA Engine driver
dmaengine: Move all map_sg/unmap_sg for slave channel to its client
fsldma: Add DMA_SLAVE support
...
In the unaligned kernel exception fixup case the printk() was ordered
before the copy_from_user(), resulting in a nonsensical instruction
value. This fixes up the ordering properly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds some sanity checking in the unaligned instruction handler to
verify the instruction size, which enables basic support for 16-bit
fixups on SH-2A parts.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
fix the following 'make includecheck' warning:
arch/sh/kernel/dwarf.c: asm/dwarf.h is included more than once.
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus: (39 commits)
cpumask: Move deprecated functions to end of header.
cpumask: remove unused deprecated functions, avoid accusations of insanity
cpumask: use new-style cpumask ops in mm/quicklist.
cpumask: use mm_cpumask() wrapper: x86
cpumask: use mm_cpumask() wrapper: um
cpumask: use mm_cpumask() wrapper: mips
cpumask: use mm_cpumask() wrapper: mn10300
cpumask: use mm_cpumask() wrapper: m32r
cpumask: use mm_cpumask() wrapper: arm
cpumask: Use accessors for cpu_*_mask: um
cpumask: Use accessors for cpu_*_mask: powerpc
cpumask: Use accessors for cpu_*_mask: mips
cpumask: Use accessors for cpu_*_mask: m32r
cpumask: remove arch_send_call_function_ipi
cpumask: arch_send_call_function_ipi_mask: s390
cpumask: arch_send_call_function_ipi_mask: powerpc
cpumask: arch_send_call_function_ipi_mask: mips
cpumask: arch_send_call_function_ipi_mask: m32r
cpumask: arch_send_call_function_ipi_mask: alpha
cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64
...
* remove asm/atomic.h inclusion from linux/utsname.h --
not needed after kref conversion
* remove linux/utsname.h inclusion from files which do not need it
NOTE: it looks like fs/binfmt_elf.c does not need utsname.h, however
due to some personality stuff it _is_ needed -- cowardly leave ELF-related
headers and files alone.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-next: (30 commits)
Use macros for .data.page_aligned section.
Use macros for .bss.page_aligned section.
Use new __init_task_data macro in arch init_task.c files.
kbuild: Don't define ALIGN and ENTRY when preprocessing linker scripts.
arm, cris, mips, sparc, powerpc, um, xtensa: fix build with bash 4.0
kbuild: add static to prototypes
kbuild: fail build if recordmcount.pl fails
kbuild: set -fconserve-stack option for gcc 4.5
kbuild: echo the record_mcount command
gconfig: disable "typeahead find" search in treeviews
kbuild: fix cc1 options check to ensure we do not use -fPIC when compiling
checkincludes.pl: add option to remove duplicates in place
markup_oops: use modinfo to avoid confusion with underscored module names
checkincludes.pl: provide usage helper
checkincludes.pl: close file as soon as we're done with it
ctags: usability fix
kernel hacking: move STRIP_ASM_SYMS from General
gitignore usr/initramfs_data.cpio.bz2 and usr/initramfs_data.cpio.lzma
kbuild: Check if linker supports the -X option
kbuild: introduce ld-option
...
Fix trivial conflict in scripts/basic/fixdep.c
For /proc/kcore, each arch registers its memory range by kclist_add().
Usually,
- range of physical memory
- range of vmalloc area
- text, etc...
are registered, but the "range of physical memory" entry has some problems.
It isn't updated at memory hotplug and it tends to include unnecessary
memory holes. Now, /proc/iomem (kernel/resource.c) includes the required
physical memory range information and is properly updated at memory
hotplug. So it's better to avoid duplicating that information in arch code
and to rebuild the kclist for physical memory based on /proc/iomem.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range of [VMALLOC_START...VMALLOC_END). This patch
unifies them. With this, arches which have no kclist_add() hooks can see
the vmalloc area correctly.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>