This adopts a trimmed down version of the MIPS port mangling interface
limited to the I/O swabbing for platforms that can't use little endian
accessors. For platforms with mixed I/O spaces involving PCI it will
still be necessary to enable byte swapping at the host controller level.
Attention needs to be paid to all of host controller endianness, CPU
endianness, and whether I/O accesses are explicitly swapped or not via
SWAP_IO_SPACE. Fortunately the platforms that need this are in the
minority.
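As a rough illustration of what the swabbed accessors amount to (the wrapper name and config symbol below are stand-ins borrowed from the MIPS convention, not the actual SH definitions):

  #include <linux/io.h>
  #include <linux/swab.h>

  #ifdef CONFIG_SWAP_IO_SPACE
  static inline u32 sample_readl(const volatile void __iomem *addr)
  {
          /* bus is wired big-endian: swab the raw value */
          return swab32(__raw_readl(addr));
  }
  #else
  static inline u32 sample_readl(const volatile void __iomem *addr)
  {
          /* bus already presents little-endian data */
          return __raw_readl(addr);
  }
  #endif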
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When p3_ioremap() was converted to ioremap_prot() there was some breakage
introduced where the 29-bit segmentation logic would trap the area range
and return an identity mapping without having allowed the area
specification to force mapping through page tables. This wires up a PCC
mask for pgprot verification to work out whether to short-circuit the
identity mapping on legacy parts, restoring the previous behaviour.
Reported-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that all of the in-tree drivers have been converted to portable I/O
accessors, we can kill off the legacy ones with extreme prejudice.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This takes a bit of a sledgehammer to the machvec I/O routines. The
iomem case requires no special casing and so can just be dropped
outright. This leaves only the ioport special casing for PCI and SuperIO
mangling. With the SuperIO case going through the standard ioport
mapping, it's possible to replace everything with generic routines.
With this done the standard I/O routines are tidied up and NO_IOPORT
now gets default-enabled for the vast majority of boards.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This extends some of the existing special casing for HAS_IOPORT
platforms and gets it to the point where platforms can begin to
conditionally select it.
The major changes here are that the PIO routines themselves go away
completely, including all of the machvec port mapping wrappers. With this
in place it's possible for any platform that isn't abusing the machvec to
disable PIO completely. At present this is left as an opt-in until the abusers
are the odd ones out instead of the majority.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a fairly significant overhaul of the dynamic PMB mapping
code. The primary change here is that the PMB gets its own VMA that
follows the uncached mapping and we attempt to be a bit more intelligent
with dynamic sizing, multi-entry mapping, and so forth.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There are lots of registers that can only be updated from the uncached
mapping, so we add some helpers for those cases in order to make it
easier to ensure that we only make the jump when it's absolutely
necessary.
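A minimal sketch of the kind of helper this adds, built on the existing
jump_to_uncached()/back_to_cached() primitives (the wrapper name is
illustrative):

  #include <linux/io.h>
  #include <asm/system.h>       /* jump_to_uncached()/back_to_cached() */

  static inline void sample_writel_uncached(u32 val, volatile void __iomem *addr)
  {
          jump_to_uncached();
          __raw_writel(val, addr);
          back_to_cached();
  }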
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These routines are unsuitable for cross-platform use and no new code
should be using them, so flag them as deprecated in order to give drivers
sufficient time to migrate over.
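For example (illustrative prototype only, not one of the actual routines
being flagged):

  #include <linux/compiler.h>
  #include <linux/types.h>

  /* users now get a build-time deprecation warning */
  extern u8 sample_legacy_inb(unsigned long port) __deprecated;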
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently ioremap_prot() uses an unsigned long to pass the pgprot value
around. This results in the upper half of the pgprot being chomped when
using 64-bit pgprots on a 32-bit ABI (X2TLB and SH-5).
As the only users of ioremap_prot() are presently legacy parts, this
doesn't cause too much of an issue. In the future when the interface is
converted to use pgprot_t directly this can be re-enabled for the other
parts, too.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is already taken care of in the top-level ioremap, and now that
no one should be calling ioremap_fixed() directly we can simply throw the
mapping displacement in as an additional argument.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently 'flags' gets passed around a lot between the various ioremap
helpers and implementations, which is only 32 bits wide. In the X2TLB case
we use 64-bit pgprots, which presently results in the upper 32 bits being
chopped off (and those handily include our read/write/exec permissions).
As such, we convert everything internally to using pgprot_t directly and
simply convert over with pgprot_val() where needed. With this in place,
transparent fixmap utilization for early ioremap works as expected.
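A hedged sketch of the resulting plumbing, with an illustrative helper name;
the point is that the full pgprot survives all the way down to where the PTE
is built:

  #include <asm/pgtable.h>

  static void sample_set_mapping_pte(pte_t *ptep, unsigned long pfn,
                                     pgprot_t prot)
  {
          /* pfn_pte() consumes the full pgprot; nothing truncated under X2TLB */
          set_pte(ptep, pfn_pte(pfn, prot));
  }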
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This converts iounmap_fixed() to return success or error depending on whether
it handled the unmap request. At the same time, drop the __init annotation,
as this can also be called later in boot.
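Roughly speaking, the calling convention becomes (body illustrative, with a
hypothetical predicate standing in for the real fixmap range check):

  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/types.h>

  /* hypothetical predicate standing in for the real fixmap range check */
  static bool sample_is_fixed_mapping(void __iomem *addr);

  int iounmap_fixed(void __iomem *addr)
  {
          if (!sample_is_fixed_mapping(addr))
                  return -EINVAL; /* not ours, let iounmap() deal with it */

          /* ... tear down the fixmap slot ... */
          return 0;
  }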
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently the fixed ioremap API is only defined when CONFIG_IOREMAP_FIXED
is set. As we want to call in to it unconditionally, provide a stubbed
out interface.
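Something along these lines (prototypes approximate):

  #ifdef CONFIG_IOREMAP_FIXED
  void __iomem *ioremap_fixed(phys_addr_t phys, unsigned long size, pgprot_t prot);
  int iounmap_fixed(void __iomem *addr);
  #else
  static inline void __iomem *ioremap_fixed(phys_addr_t phys, unsigned long size,
                                            pgprot_t prot)
  {
          return NULL;
  }

  static inline int iounmap_fixed(void __iomem *addr)
  {
          return -EINVAL;
  }
  #endif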
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Some devices need to be ioremap'd and accessed very early in the boot
process. It is not possible to use the standard ioremap() function in
this case because that requires kmalloc()'ing some virtual address space
and kmalloc() may not be available so early in boot.
This patch provides fixmap mappings that allow physical address ranges
to be remapped into the kernel address space during the early boot
stages.
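An illustrative early-boot caller (the device address, size, and prototype
are placeholders, not taken from the patch):

  void __init sample_early_device_setup(void)
  {
          void __iomem *regs;

          /* remap a hypothetical device before kmalloc() is available */
          regs = ioremap_fixed(0xfe600000, PAGE_SIZE, PAGE_KERNEL_NOCACHE);
          if (regs)
                  __raw_writel(0x1, regs);
  }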
Signed-off-by: Matt Fleming <matt@console-pimps.org>
This introduces some much overdue chainsawing of the fixed PMB support.
Fixed PMB was introduced initially to work around the fact that dynamic
PMB mode was relatively broken, though the two were never intended to
converge. The main areas where there are differences are whether the
system is booted in 29-bit mode or 32-bit mode, and whether legacy
mappings are to be preserved. Any system booting in true 32-bit mode will
not care about legacy mappings, so these are roughly decoupled.
Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.
With legacy mappings iterated through and applied in the initialization
path it's now possible to finally merge the two implementations and
permit dynamic remapping on top of the remaining entries regardless of
whether boot mappings are crafted by hand or inherited from the boot
loader.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
p3_ioremap() references __ioremap() which is presently undefined on
nommu. This provides a trivial stub to fix the build up.
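The stub amounts to something like this (exact form may differ):

  #define __ioremap(offset, size, prot)  ((void __iomem *)(offset))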
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up the caller information for the ioremap VMA, which allows
for more helpful caller tracking via /proc/vmallocinfo. Follows the x86
and powerpc changes of the same nature.
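The pattern, sketched with simplified prototypes (mirroring the x86/powerpc
approach):

  void __iomem *ioremap(phys_addr_t phys_addr, unsigned long size)
  {
          /* record the caller so /proc/vmallocinfo can attribute the VMA */
          return __ioremap_caller(phys_addr, size, PAGE_KERNEL_NOCACHE,
                                  __builtin_return_address(0));
  }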
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the build and behaviour for various configurations, namely
the CONFIG_32BIT cases where legacy mappings do not exist, as well as the
sh64 build.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.
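For reference, the 29-bit-style semantics being reproduced, shown as
simplified forms of the segment helpers from <asm/addrspace.h> (valid for a
sub-512MB physical address):

  #define SAMPLE_P1ADDR(phys)  ((void *)((phys) | P1SEG))  /* cached   */
  #define SAMPLE_P2ADDR(phys)  ((void *)((phys) | P2SEG))  /* uncached */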
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Reading from the ROM is not a good idea as it could disturb a flash
operation that is in progress.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The Synopsys PCI cell used in the later STMicro chips requires code to
be run in order to do I/O cycles, rather than just memory mapping the I/O
space. Rather than extending the existing SH infrastructure to allow
this, use the GENERIC_IOMAP implementation to save re-inventing the
wheel.
This set of changes allows SH to be built with GENERIC_IOMAP enabled; it
simply ifdefs out the functions now provided by the GENERIC_IOMAP
implementation and adds a few required functions that were missing.
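The shape of the change, roughly (the generic prototypes are those of
lib/iomap.c; the surrounding header layout is simplified):

  #ifndef CONFIG_GENERIC_IOMAP
  /* arch-private versions, compiled out when the generic ones are used */
  void __iomem *ioport_map(unsigned long port, unsigned int nr);
  void ioport_unmap(void __iomem *addr);
  #endif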
Signed-off-by: David McKay <david.mckay@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These are presently only defined for sh32, so use the plain unoptimized
versions for sh64. Fixes up the smsc911x build.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently this is special-cased for early initialization. While there are
situations where these static early initializations are still necessary,
with minor changes it is possible to use this for the regular ioremap
implementation as well. This allows us to kill off the special-casing for
the remap completely and to start tidying up all of the SH-5
special-casing in drivers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All 32-bit SuperH processors currently go through __ioremap_mode()
and check for IO_TRAPPED and directly mapped segments. With this
patch we simplify the MMU-less case with a pass-through version of
__ioremap_mode() which just returns the physical address.
The effects of this change are:
- fix non-MMU ioremap() of high address hardware blocks (sh7203 CMT)
- make sure IO_TRAPPED is not selected
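A hedged sketch of the pass-through (prototype simplified):

  static inline void __iomem *
  __ioremap_mode(phys_addr_t offset, unsigned long size, unsigned long flags)
  {
          /* no MMU: the physical address is the cookie */
          return (void __iomem *)(unsigned long)offset;
  }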
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a method for supporting fixed PMB mappings inherited from
the bootloader, as an alternative to the dynamic PMB mapping currently
used by the kernel. In the future these methods will be combined.
The P1/P2 area is handled like a regular 29-bit physical address, and local
bus devices are assigned P3 area addresses.
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds a pass-through case when ioremapping P4 addresses.
Addresses passed to ioremap() should be physical addresses, so the
best option is usually to convert the virtual address to a physical
address before calling ioremap. This will give you a virtual address
in P2 which matches the physical address and this works well for
most internal hardware blocks on the SuperH architecture.
However, some hardware blocks must be accessed through P4. Converting
the P4 address to a physical address and then back to P2 does not work. One
example of this is the sh7722 TMU block, which must be accessed through P4.
Without this patch P4 addresses will be mapped using PTEs which
requires the page allocator to be up and running.
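The check itself boils down to something like this (using the existing
PXSEG()/P4SEG helpers from <asm/addrspace.h>; the wrapper is illustrative):

  static inline void __iomem *sample_p4_passthrough(unsigned long phys_addr)
  {
          /* P4 control space cannot go through page tables; hand it back as-is */
          if (PXSEG(phys_addr) == P4SEG)
                  return (void __iomem *)phys_addr;

          return NULL;    /* fall through to the normal ioremap path */
  }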
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
With the PMB enabled, only P1SEG and up are covered by the PMB mappings,
meaning that reads from out-of-bounds physical addresses will lead to a
TLB reset after the PMB miss, allowing something as simple as
dd if=/dev/mem to reset the TLB.
Fix this up to make sure the reference is between __MEMORY_START (phys)
and __pa(high_memory). This is coherent across all variants of sh/sh64
with and without MMU, though the PMB bug itself is only applicable to
SH-4A parts.
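Sketched out, the check is along these lines (prototype approximate):

  #include <linux/mm.h>

  int valid_phys_addr_range(unsigned long addr, size_t count)
  {
          if (addr < __MEMORY_START)
                  return 0;
          if (addr + count > __pa(high_memory))
                  return 0;

          return 1;
  }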
Reported-by: Hideo Saito <saito@densan.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up a lot of the PIO/MMIO split. No in-tree platforms were
making use of the MMIO overloading through the machvec (nor have any of
them been in some time), so we just kill all of that off. The ISA I/O
routine wrapping is unaffected, and remains the only special casing
outside of the iomap API that boards need to think about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were doing largely bogus things and using the wrong typing for
the address. Bring these in line with the ARM definitions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the sparc changes in commit a439fe51a1.
Most of the moving about was done with Sam's directions at:
http://marc.info/?l=linux-sh&m=121724823706062&w=2
with subsequent hacking and fixups entirely my fault.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>