Commit Graph

28 Commits

Vineet Gupta
a6416f57ce ARC: use ASL assembler mnemonic
ARCompact and ARCv2 only have ASL, while binutils used to support LSL as
an alias mnemonic.

Newer upstream binutils no longer accepts the alias, so replace it.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-14 13:12:21 +05:30
Vineet Gupta
5a364c2a17 ARC: mm: PAE40 support
This is the first working implementation of 40-bit physical address
extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-29 18:41:30 +05:30
Vineet Gupta
25d464183c ARC: mm: PAE40: tlbex.S: Explicitify the size of pte_t
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 19:50:29 +05:30
Vineet Gupta
fe6c1b8611 ARCv2: mm: THP support
MMUv4 in HS38x cores supports Super Pages, which are the basis for Linux
THP support.

Normal and Super pages can co-exist (though of course not overlap) in the
TLB, with a new bit "SZ" in the TLB page descriptor to distinguish between
them. Super Page size is configurable in hardware (4K to 16M), but fixed
once the RTL is built.

The exact THP size a Linux configuration will support is a function of:
 - MMU page size (typically 8K, RTL-fixed)
 - software page walker address split between PGD:PTE:PFN (typically
   11:8:13, but can be changed with 1 line)

So for the above defaults, the supported THP size is 8K * 256 = 2M, as the
sketch below illustrates.
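
(A rough illustration; the macro names are illustrative, not the kernel's
actual definitions.)

  /* 8K pages and 8 bits of PTE index, per the 11:8:13 split above */
  #define PAGE_SHIFT        13
  #define PTE_INDEX_BITS     8

  /* a PMD-mapped huge page covers one full PTE table's worth of memory */
  #define PMD_SHIFT         (PAGE_SHIFT + PTE_INDEX_BITS)  /* 13 + 8 = 21 */
  #define HPAGE_PMD_SIZE    (1UL << PMD_SHIFT)             /* 2^21 = 2M   */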

The default page walker is 2 levels, PGD:PTE:PFN, which in the THP regime
reduces to 1 level (as the PTE is folded into the PGD and canonically
referred to as PMD).

Thus the THP PMD accessors are implemented in terms of PTE (just like
sparc); a minimal sketch follows.
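
(Hypothetical stand-in types, following the generic/sparc pattern rather
than the exact ARC definitions.)

  typedef unsigned long pte_t;                  /* hypothetical stand-in */
  typedef struct { pte_t pte; } pmd_t;          /* a PMD wraps a PTE     */

  #define _PAGE_WRITE   (1UL << 1)              /* illustrative flag bit */

  static inline pte_t pmd_pte(pmd_t pmd)   { return pmd.pte; }
  static inline int   pte_write(pte_t pte) { return !!(pte & _PAGE_WRITE); }

  /* the THP PMD accessor simply delegates to its PTE counterpart */
  static inline int pmd_write(pmd_t pmd)   { return pte_write(pmd_pte(pmd)); }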

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 17:48:18 +05:30
Vineet Gupta
129cbed54a ARC: mm: pte flags cosmetic cleanups, comments
No semantic changes

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-09 17:04:22 +05:30
Ruud Derwig
2924cd18c4 ARCv2: [vdk] dts files and defconfig for HS38 VDK
- CONFIG_ARC_UBOOT_SUPPORT to handle arguments passed in r0, r1, r2
- CONFIG_DEVTMPFS_MOUNT for mounting rootfs, since the VDK uses an
  external cpio archive for rootfs

Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Ruud Derwig <rderwig@synopsys.com>
[vgupta: folded the Main board DT files for smp/up into one]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-25 06:00:21 +05:30
Vineet Gupta
d7a512bfe0 ARCv2: MMUv4: TLB programming Model changes
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 14:06:55 +05:30
Vineet Gupta
1f6ccfff63 ARCv2: Support for ARCv2 ISA and HS38x cores
The notable features are:
    - SMP configurations of up to 4 cores with coherency
    - Optional L2 Cache and IO-Coherency
    - Revised Interrupt Architecture (multiple priorities, reg banks,
        auto stack switch, auto regfile save/restore)
    - MMUv4 (PIPT dcache, Huge Pages)
    - Instructions for
	* 64bit load/store: LDD, STD
	* Hardware assisted divide/remainder: DIV, REM
	* Function prologue/epilogue: ENTER_S, LEAVE_S
	* IRQ enable/disable: CLRI, SETI
	* find first / last set: FFS, FLS
	* SETcc, BMSKN, XBFU...

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 14:06:55 +05:30
Vineet Gupta
a615b47dbf ARC: entry.S: confine EXCEPTION_* macros to one file
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-19 18:09:35 +05:30
Vineet Gupta
c9a98e1849 ARC: update some comments
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-07-23 11:17:12 +05:30
Vineet Gupta
ec7ac6afd0 ARC: switch to generic ENTRY/END assembler annotations
With commit 9df62f0544 "arch: use ASM_NL instead of ';'" the generic
macros can handle the arch-specific newline quirk. Hence we can get rid
of the ARC asm macros and use the "C" style macros, sketched below.
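
A trimmed sketch of the generic annotations (simplified from
include/linux/linkage.h, not the verbatim definitions):

  /* ASM_NL lets an arch pick its own assembler statement separator;
   * most arches use ';', ARC overrides it with its backtick separator */
  #ifndef ASM_NL
  #define ASM_NL  ;
  #endif

  #define ENTRY(name) \
          .globl name ASM_NL \
          name:

  #define END(name) \
          .size name, .-name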

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-03-26 14:31:28 +05:30
Vineet Gupta
21a63b5604 ARC: Change calling convention of do_page_fault()
Switch the args to (address, pt_regs) to match all the other "C"
exception handlers.

This removes the awkwardness in EV_ProtV for page fault vs. unaligned
access.
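
As a sketch of the change (ARC's do_page_fault() lives in
arch/arc/mm/fault.c):

  void do_page_fault(struct pt_regs *regs, unsigned long address);  /* old */
  void do_page_fault(unsigned long address, struct pt_regs *regs);  /* new */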

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 10:41:39 +05:30
Vineet Gupta
947bf103fc ARC: [ASID] Track ASID allocation cycles/generations
This helps remove the asid-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID
allocator also maintained the asid_mm_map[] reverse map. In a new allocation
cycle (mm->ASID >= @asid_cache), the Round Robin ASID allocator used this
to check if the new @asid_cache belonged to some mm2 (from a prev cycle).
If so, it could locate that mm using the ASID reverse map, and mark that
mm as having no allocated ASID, to force a refresh at the time of switch_mm().

However, for SMP, the reverse map has to be maintained per CPU, so it
becomes 2-dimensional; hence we got rid of it.

With the reverse map gone, it is NOT possible to reach out to the current
assignee. So we track the ASID allocation generation/cycle, and on
every switch_mm() check if the CPU's current ASID generation is the
same as the mm's; if not, the mm's ASID is refreshed.

(Based loosely on arch/sh implementation)
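
A loose sketch of the generation scheme (hypothetical stand-in types and
names, much simplified from the real code): asid_cache packs
[generation | hw-asid], and an mm's ASID stays valid only while its
generation bits match asid_cache's:

  #define ASID_BITS    8                          /* assume 8-bit hw ASID  */
  #define ASID_MASK    ((1UL << ASID_BITS) - 1)   /* hw ASID: low bits     */
  #define CYCLE_MASK   (~ASID_MASK)               /* generation: high bits */

  struct mm_ctxt { unsigned long asid; };         /* hypothetical stand-in */

  static unsigned long asid_cache = 1UL << ASID_BITS;   /* generation #1   */

  static void get_new_mmu_context(struct mm_ctxt *ctx)
  {
          /* same generation as the allocator: ASID still valid, reuse it */
          if (!((ctx->asid ^ asid_cache) & CYCLE_MASK))
                  return;

          /* otherwise hand out the next ASID; when the low bits wrap, a
           * new generation starts (the real code also flushes the TLB
           * and skips the reserved ASID value at that point) */
          ctx->asid = ++asid_cache;
  }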

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:19 +05:30
Vineet Gupta
5bd87adf9b ARC: [ASID] Refactor the TLB paranoid debug code
The asm code already has the SW and HW ASID values, so they can be
passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 21:42:18 +05:30
Vineet Gupta
d091fcb97f ARC: MMUv4 preps/2 - Reshuffle PTE bits
With previous commit freeing up PTE bits, reassign them so as to:

- Match the bit to H/w counterpart where possible
  (e.g. MMUv2 GLOBAL/PRESENT, this avoids a shift in create_tlb())
- Avoid holes in _PAGE_xxx definitions

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 10:19:12 +05:30
Vineet Gupta
64b703ef27 ARC: MMUv4 preps/1 - Fold PTE K/U access flags
The current ARC VM code has 13 flags in the Page Table entry: some software
(accessed/dirty/non-linear-maps) and the rest hardware-specific. With an 8K
MMU page, we need 19 bits for addressing the page frame, so the remaining 13
bits are just about enough to accommodate the current flags.

In MMUv4 there are 2 additional flags, SZ (normal or super page) and WT
(cache access mode write-thru) - and additionally the PFN is 20 bits (vs. 19
before for 8K). Thus these can't be held in the current PTE w/o making each
entry 64-bit wide.

It turns out there is some scope for compressing the current PTE flags (and
freeing up a few bits). Currently the PTE contains fully orthogonal, distinct
access permissions for kernel and user mode (Kr, Kw, Kx; Ur, Uw, Ux),
which can be folded into one set (R, W, X). The translation of 3 PTE
bits into 6 TLB bits (when programming the MMU) can be done based on the
following prerequisites/assumptions:

1. For kernel-mode-only translations (vmalloc: 0x7000_0000 to
   0x7FFF_FFFF), PTE additionally has PAGE_GLOBAL flag set (and user
   space entries can never be global). Thus such a PTE can translate
   to Kr, Kw, Kx (as appropriate) and zero for User mode counterparts.

2. For non-global entries, the PTE flags can be used to create mirrored
   K and U TLB bits. This is true after commit a950549c67
   "ARC: copy_(to|from)_user() to honor usermode-access permissions",
   which ensured that user-space translations MUST have the same access
   permissions for both U/K mode accesses, so that copy_{to,from}_user()
   plays fair with fault-based CoW break and such...

There is no such thing as a free lunch - the cost is slightly inflated
TLB-Miss handlers. The fold/unfold idea is sketched below.
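
(Bit positions and names here are illustrative, not the actual ARC layout:
a PTE carries a single R/W/X set plus GLOBAL, and programming the TLB
expands that into the six Kr/Kw/Kx/Ur/Uw/Ux bits.)

  #define _PAGE_READ     (1U << 0)
  #define _PAGE_WRITE    (1U << 1)
  #define _PAGE_EXECUTE  (1U << 2)
  #define _PAGE_GLOBAL   (1U << 3)   /* kernel-only (e.g. vmalloc) mapping */

  #define TLB_KR         (1U << 0)
  #define TLB_KW         (1U << 1)
  #define TLB_KX         (1U << 2)
  #define TLB_UR         (1U << 3)   /* U bits sit 3 above the K bits */
  #define TLB_UW         (1U << 4)
  #define TLB_UX         (1U << 5)

  static unsigned int pte_to_tlb_perms(unsigned int pte)
  {
          unsigned int k = 0;

          if (pte & _PAGE_READ)    k |= TLB_KR;
          if (pte & _PAGE_WRITE)   k |= TLB_KW;
          if (pte & _PAGE_EXECUTE) k |= TLB_KX;

          /* global (kernel-only) pages get no user bits; non-global
           * (user) pages mirror the K bits into the U bits */
          return (pte & _PAGE_GLOBAL) ? k : (k | (k << 3));
  }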

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-29 17:51:36 +05:30
Vineet Gupta
4b06ff35fb ARC: Code cosmetics (nothing semantic)
* reduce editor lines taken by pt_regs
* club the ARCompact ISA-specific parts of the TLB Miss handlers together
* clean up some comments

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-29 17:51:15 +05:30
Vineet Gupta
37f3ac498c ARC: Exception Handlers Code consolidation
After the recent cleanups, all the exception handlers now have the same
boilerplate prologue code. Move that into a common macro.

This reduces readability but helps greatly with sharing/duplicating
entry code with the ARCv2 ISA, where the handlers are pretty much the
same and just the entry prologue differs (due to hardware assist).

Also, while at it, add the missing FAKE_RET_FROM_EXCPN calls in a couple
of places to drop down to pure kernel mode (from exception mode) before
jumping off into "C" code.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-26 09:40:25 +05:30
Vineet Gupta
dc81df2440 ARC: [tlb-miss] Fix bug with CONFIG_ARC_DBG_TLB_MISS_COUNT
The LOAD_FAULT_PTE macro is expected to set r2 to the faulting vaddr.
However, in case of CONFIG_ARC_DBG_TLB_MISS_COUNT, it was getting
clobbered by the statistics collection code.

Fix the latter by using a different register.

Note that only I-TLB Miss handler was potentially affected.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-27 14:37:57 +05:30
Vineet Gupta
c3e757a77c ARC: [tlb-miss] Extraneous PTE bit testing/setting
* No need to check for READ access in the I-TLB Miss handler

* Redundant PAGE_PRESENT update in the PTE

Post TLB-entry installation, when updating the PTE for the software
accessed/dirty bits, there is no need to update PAGE_PRESENT since it will
already be set. In fact the entry wouldn't have been installed at all if
!PAGE_PRESENT, as the sketch below shows.
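
(Flag values and the helper name are illustrative, not the actual ARC
definitions.)

  typedef unsigned long pte_t;          /* hypothetical stand-in */

  #define _PAGE_PRESENT   (1UL << 0)
  #define _PAGE_ACCESSED  (1UL << 1)

  /* mark a PTE accessed after its TLB entry is installed; PRESENT is a
   * precondition of ever getting here, so it need not be re-set */
  static void pte_mark_accessed(pte_t *ptep)
  {
          *ptep |= _PAGE_ACCESSED;  /* not: _PAGE_ACCESSED | _PAGE_PRESENT */
  }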

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-27 14:37:57 +05:30
Vineet Gupta
38a9ff6d24 ARC: Remove explicit passing around of ECR
With ECR now part of pt_regs:

* No need to propagate it from the lowest asm handlers as an arg
* No need to save it in tsk->thread.cause_code
* Avoid bit chopping to access the bit-fields

More code consolidation, cleanup

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-26 15:30:50 +05:30
Vineet Gupta
3e1ae44188 ARC: [mm] Remove @write argument to do_page_fault()
This can be ascertained within do_page_fault() since it gets the full
ECR (Exception Cause Register).

Further, for both the callers of do_page_fault(), Prot-V / D-TLB-Miss,
the cause sub-fields in ECR are the same for the same type of access,
making the code much simpler.

D-TLB-Miss [LD] 0x00_21_01_00
Prot-V     [LD] 0x00_23_01_00
                        ^^
D-TLB-Miss [ST] 0x00_21_02_00
Prot-V     [ST] 0x00_23_02_00
                        ^^
D-TLB-Miss [EX] 0x00_21_03_00
Prot-V     [EX] 0x00_23_03_00
                        ^^

This helps code consolidation, which is even better when moving code from
assembler to "C". The decoding is sketched below.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-22 19:23:20 +05:30
Vineet Gupta
da1677b02d ARC: Disintegrate arcregs.h
* Move the various sub-system defines/types into relevant files/functions
  (reduces compilation time)

* move CPU specific stuff out of asm/tlb.h into asm/mmu.h

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-06-22 13:46:42 +05:30
Vineet Gupta
a950549c67 ARC: copy_(to|from)_user() to honor usermode-access permissions
This manifested as grep failing pseudo-randomly:

-------------->8---------------------
[ARCLinux]$ ip address show lo | grep inet
[ARCLinux]$ ip address show lo | grep inet
[ARCLinux]$ ip address show lo | grep inet
[ARCLinux]$
[ARCLinux]$ ip address show lo | grep inet
    inet 127.0.0.1/8 scope host lo
-------------->8---------------------

ARC700 MMU provides fully orthogonal permission bits per page:
Ur, Uw, Ux, Kr, Kw, Kx

The user-mode page permission templates used to have all kernel-mode
access bits enabled.
This caused a tricky race condition observed with uClibc buffered file
read and UNIX pipes.

1. Read access to an anon mapped page in libc .bss: write-protected
   zero_page mapped: TLB Entry installed with Ur + K[rwx]

2. grep calls libc:getc() -> buffered read layer calls read(2) with the
   internal read buffer in same .bss page.
   The read() call is on STDIN which has been redirected to a pipe.
   read(2) => sys_read() => pipe_read() => copy_to_user()

3. Since the page has kernel-write permission (despite being user-mode
   write-protected), copy_to_user() succeeds w/o taking an MMU TLB-Miss
   Exception (page-fault for ARC). core-MM is unaware that the kernel
   erroneously wrote to the reserved read-only zero-page (BUG #1)

4. Control returns to userspace, which now does a write to the same .bss
   page. Since Linux MM is not aware that the page has been modified by
   the kernel, it simply reassigns a new writable zero-init page to the
   mapping, losing the prior write by the kernel - effectively zero'ing
   out the libc read buffer under the hood - hence grep doesn't see the
   right data (BUG #2)

The fix is to make all kernel-mode access permissions mirror the
user-mode ones. Note that the kernel still has full access to pages
when accessed directly (w/o the MMU) - this fix ensures that kernel-mode
access in the copy_{to,from}_user() path uses the same faulting access
model as pure user accesses, to keep MM fully aware of the page state.

The issue is pseudo-random because it only shows up if the TLB entry
installed in #1 is present at the time of #3. If it has been evicted, due
to TLB pressure or some-such, then copy_to_user() does take a TLB Miss
Exception, and routine write-to-anon CoW processing installs a fresh page
for kernel writes, which is also usable as-is in userspace.

Further, the issue was dormant for so long because it depends on where the
libc internal read buffer (in .bss) is mapped at runtime.
If it happens to reside in the file-backed data mapping of libc (in the
page-aligned slack space trailing the file-backed data), the loader's
zero-padding of the slack space does the CoW page replacement early,
setting things up at the very beginning itself.

With gcc 4.8 based builds, the libc buffer got pushed out to a real
anon mapping, which triggers the issue. The old and new protection
templates are contrasted below.
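
(Bit names are illustrative, not the actual arch/arc definitions.)

  #define UR  (1U << 0)   /* user read    */
  #define UW  (1U << 1)   /* user write   */
  #define KR  (1U << 2)   /* kernel read  */
  #define KW  (1U << 3)   /* kernel write */

  /* Before: user read-only page, but the kernel could still write it */
  #define PAGE_RDONLY_OLD   (UR | KR | KW)

  /* After: kernel bits mirror user bits, so copy_to_user() faults just
   * like a user-mode write would, keeping core-MM aware of the page */
  #define PAGE_RDONLY_NEW   (UR | KR)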

Reported-by: Anton Kolesov <akolesov@synopsys.com>
Cc: <stable@vger.kernel.org> # 3.9
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-05-23 10:33:03 +05:30
Vineet Gupta
8b5850f8ac ARC: Support for single cycle Close Coupled Mem (CCM)
* Includes mapping of CCMs in address space
* Annotations to move arbitrary code/data into CCM
* Moving some of the critical code/data into CCM
* Runtime detection/reporting

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-02-15 23:16:10 +05:30
Vineet Gupta
41195d236e ARC: SMP support
ARC common code to enable an SMP system + ISS-provided SMP extensions.

ARC700 natively lacks SMP support, hence some of the core features are
only enabled if SoCs have the necessary h/w pixie-dust. This
includes:
-Inter Processor Interrupts (IPI)
-Cache coherency
-load-locked/store-conditional
...

The low-level exception handling would be completely broken in SMP
because we don't have hardware-assisted stack switching. Thus a fair bit
of this code repurposes the MMU_SCRATCH reg for event handler
prologues to keep them re-entrant.

Many thanks to Rajeshwar Ranga for his initial "major" contributions to
SMP Port (back in 2008), and to Noam Camus and Gilad Ben-Yossef for help
with resurrecting that in 3.2 kernel (2012).

Note that this platform code again follows the singleton design pattern,
so multiple SMP platforms won't build at the moment; this deficiency is
addressed in subsequent patches within this series.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rajeshwar Ranga <rajeshwar.ranga@gmail.com>
Cc: Noam Camus <noamc@ezchip.com>
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
2013-02-15 23:16:02 +05:30
Vineet Gupta
0ef88a54aa ARC: Diagnostics: show_regs() etc
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-02-15 23:16:02 +05:30
Vineet Gupta
cc562d2eae ARC: MMU Exception Handling
* MMU I-TLB / D-TLB Miss Exceptions
  - Fast Path TLB Refill Handler
  - slowpath TLB creation via do_page_fault() -> update_mmu_cache()
* Duplicate PD Exception Handler

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-02-15 23:15:52 +05:30