Commit Graph

15 Commits

Author SHA1 Message Date
Eugeniy Paltsev
55c0c4c793 ARC: memset: fix build with L1_CACHE_SHIFT != 6
In the 'L1_CACHE_SHIFT != 6' case we define the dummy assembly macros
PREALLOC_INSTR and PREFETCHW_INSTR without arguments. However, the
code passes arguments to them, which causes build errors. Fix that
(see the sketch below).
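
A C-preprocessor analogue of the mismatch and the fix (a sketch only:
the real code uses GNU as ".macro" definitions in the string routines,
and the prefetchw()/prealloc() helpers here are invented for
illustration):

    /* Dummy variants must accept the same arguments as the real
     * ones, otherwise call sites that pass arguments fail to build. */
    #if L1_CACHE_SHIFT == 6
    #define PREFETCHW_INSTR(reg, off)   prefetchw(reg, off)
    #define PREALLOC_INSTR(reg, off)    prealloc(reg, off)
    #else
    /* no-op variants: same arity, empty expansion -- this is the fix;
     * defining them with no parameter list is what broke the build */
    #define PREFETCHW_INSTR(reg, off)
    #define PREALLOC_INSTR(reg, off)
    #endif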

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: <stable@vger.kernel.org>    [5.0]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-04-08 08:41:44 -07:00
Eugeniy Paltsev
7655146883 ARCv2: Add explicit unaligned access support (and ability to disable too)
As of today we enable unaligned access unconditionally on ARCv2.
Do this under a Kconfig option so it can be disabled for testing,
benchmarking etc. Also while at it:

  - Select HAVE_EFFICIENT_UNALIGNED_ACCESS (see the sketch below)
  - Although gcc defaults to unaligned access (since GNU 2018.03), add the
    right toggles for enabling or disabling it as appropriate
  - Update the bootlog to print both HW feature status (exists,
    enabled/disabled) and SW status (used / not used)
  - Wire up the relaxed memcpy for unaligned access
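
To illustrate what the option buys, a minimal C sketch (assumptions:
the config symbol spelling and the load_u32() helper below are
illustrative, not lifted from the patch):

    #include <stdint.h>
    #include <string.h>

    /* With HW unaligned access enabled, a direct (possibly misaligned)
     * load is fine; without it, the value must be assembled
     * byte-safely. */
    static uint32_t load_u32(const void *p)
    {
    #ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
        return *(const uint32_t *)p;    /* HW handles misalignment */
    #else
        uint32_t v;
        memcpy(&v, p, sizeof(v));       /* byte-safe fallback */
        return v;
    #endif
    }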

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: squashed patches, handle gcc -mno-unaligned-access quirk]
2019-02-25 12:10:58 -08:00
Eugeniy Paltsev
4d1e7918aa ARCv2: lib: introduce memcpy optimized for unaligned access
Optimise the code to use the efficient unaligned memory access
available on ARCv2. This allows us to really simplify the memcpy code
and speed it up by about 1.5x when the source or destination is
unaligned.
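
The idea in a minimal C sketch (illustrative only; the actual
implementation is hand-written ARC assembly, and memcpy_sketch is a
name invented here):

    #include <stddef.h>
    #include <stdint.h>

    /* Word-at-a-time copy that never bothers aligning src/dst first --
     * only a win when the CPU handles unaligned loads/stores
     * efficiently, as ARCv2 can. */
    static void *memcpy_sketch(void *dst, const void *src, size_t n)
    {
        uint8_t *d = dst;
        const uint8_t *s = src;
        uint32_t w;

        while (n >= sizeof(w)) {
            __builtin_memcpy(&w, s, sizeof(w)); /* unaligned load */
            __builtin_memcpy(d, &w, sizeof(w)); /* unaligned store */
            s += sizeof(w);
            d += sizeof(w);
            n -= sizeof(w);
        }
        while (n--)                             /* tail bytes */
            *d++ = *s++;
        return dst;
    }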

Don't wire it up yet!

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-25 08:52:16 -08:00
Eugeniy Paltsev
f8a15f9766 ARCv2: lib: memcpy: fix doing prefetchw outside of buffer
ARCv2 optimized memcpy uses the PREFETCHW instruction to prefetch the
next cache line, but doesn't ensure that the line is not past the end
of the buffer. PREFETCHW changes the line ownership and marks it
dirty, which can cause data corruption if this area is used for DMA
I/O.
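
The hazardous pattern, in a minimal C sketch (illustrative; the real
code is ARC assembly, copy_loop is a name invented here, and n is
assumed to be a multiple of 32):

    #include <stddef.h>
    #include <stdint.h>

    #define L1_LINE 64

    /* Buggy shape: write-prefetch one line ahead on every iteration.
     * On the final iteration the prefetch lands just past dst + n; on
     * ARC, PREFETCHW takes ownership of that line and dirties it,
     * which corrupts data if the line is concurrently used for DMA. */
    static void copy_loop(void *dst, const void *src, size_t n)
    {
        uint8_t *d = dst;
        const uint8_t *s = src;

        for (size_t i = 0; i < n; i += 32) {
            __builtin_prefetch(d + i + L1_LINE, 1); /* may be past end */
            __builtin_memcpy(d + i, s + i, 32);
        }
    }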

Fix the issue by avoiding the PREFETCHW. This leads to some
performance degradation, but that is OK as we'll introduce a new
memcpy implementation optimized for unaligned memory access.

We also removed all the PREFETCH instructions as they are quite
useless here:
 * we issue PREFETCH right before the LOAD instruction itself.
 * the main loop copies 16 or 32 bytes of data per iteration
   (depending on CONFIG_ARC_HAS_LL64), so we issue PREFETCH 4 times
   (64/16) or 2 times (64/32) for each L1 cache line (with the default
   64B L1 cache line). Obviously this is not optimal.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-02-21 11:03:16 -08:00
Eugeniy Paltsev
e6a72b7dae ARCv2: lib: memset: fix doing prefetchw outside of buffer
ARCv2 optimized memset uses the PREFETCHW instruction to prefetch the
next cache line, but doesn't ensure that the line is not past the end
of the buffer. PREFETCHW changes the line ownership and marks it
dirty, which can cause issues in SMP configurations when the next line
is already owned by another core. Fix the issue by avoiding the
PREFETCHW.

Some more details:

The current code has 3 logical loops (ignoring the unaligned part):
  (a) Big loop for doing aligned 64 bytes per iteration with PREALLOC
  (b) Loop for 32 x 2 bytes with PREFETCHW
  (c) any left over bytes

Loop (a) was already eliding the last 64 bytes, so PREALLOC was safe.
The fix was removing PREFETCHW from (b); the loop structure is
sketched below.
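
A minimal C sketch of the three loops (illustrative only; the real
code is ARC assembly, handles the unaligned head separately, and
memset_sketch/L1_LINE are names invented here):

    #include <stddef.h>
    #include <stdint.h>

    #define L1_LINE 64

    static void *memset_sketch(void *buf, int c, size_t n)
    {
        uint8_t *p = buf;
        uint8_t *end = p + n;

        /* (a) 64 bytes per iteration; runs only while another full
         *     line remains, so preallocating the line at p + L1_LINE
         *     stays inside the buffer */
        while ((size_t)(end - p) >= 2 * L1_LINE) {
            /* prealloc of the next line would be safe here */
            __builtin_memset(p, c, L1_LINE);
            p += L1_LINE;
        }

        /* (b) 32-byte chunks; the PREFETCHW that used to sit here
         *     could touch the line just past 'end' -- removed by the
         *     fix */
        while ((size_t)(end - p) >= 32) {
            __builtin_memset(p, c, 32);
            p += 32;
        }

        /* (c) leftover bytes */
        while (p < end)
            *p++ = (uint8_t)c;

        return buf;
    }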

Another potential issue (applicable to configs with a 32 or 128 byte
L1 cache line) is that PREALLOC assumes a 64 byte cache line and may
not do the right thing, especially for 32B. While it would be easy to
adapt, there are no known configs with those line sizes, so for now,
just compile out PREALLOC in such cases.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]
2019-01-17 16:24:39 -08:00
Vineet Gupta
86effd0dc6 ARC: dw2 unwind: enable cfi pseudo ops in string lib
This uses a new set of annotations viz. ENTRY_CFI/END_CFI to enable
cfi ops generation.

Note that we didn't change the normal ENTRY/EXIT as we don't actually
want unwind info in the trap/exception/interrupt handlers which use
these, since the unwinder then gets confused (it keeps recursing vs.
stopping). Semantically these are leaf routines and unwinding should
stop when it hits them.
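
A sketch of the shape of such annotations (an assumption; the exact
upstream macro bodies differ, and ASM_NL stands for the arch's
assembler statement separator):

    /* Open/close an annotated asm function: same visibility/label
     * work as plain ENTRY/END, plus CFI directives so the dw2
     * unwinder gets proper frame info for these routines. */
    #define ENTRY_CFI(name)         \
        .globl name ASM_NL          \
        name: ASM_NL                \
        .cfi_startproc ASM_NL

    #define END_CFI(name)           \
        .cfi_endproc ASM_NL         \
        .size name, .-name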

Before
------

    28.52%     1.19%          9929  hackbench  libuClibc-1.0.17.so   [.] __write_nocancel
            |
            ---__write_nocancel
               |--8.95%--EV_Trap
               |           --8.25%--sys_write
               |                     |--3.93%--sock_write_iter
     ...
               |--2.62%--memset   <==== [LEAF entry as no unwind info]
                         ^^^^^^

After
-----

    29.46%     1.24%         13622  hackbench  libuClibc-1.0.17.so   [.] __write_nocancel
            |
            ---__write_nocancel
               |--9.31%--EV_Trap
               |           --8.62%--sys_write
               |                     |--4.17%--sock_write_iter
     ...
               |--6.19%--sys_write
               |           --6.19%--sock_write_iter
               |                     unix_stream_sendmsg
               |                     |--1.62%--sock_alloc_send_pskb
               |                     |--0.89%--sock_def_readable
               |                     |--0.88%--_raw_spin_unlock_irqrestore
               |                     |--0.69%--memset
               |                     |         ^^^^^^     <==== [now in proper callframe]
               |                     |
               |                      --0.52%--skb_copy_datagram_from_iter

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-09-30 14:48:22 -07:00
Vineet Gupta
ac506b7f22 ARCv2: lib: memcpy: use local symbols
Otherwise perf profiles don't charge time to memcpy.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-03 17:33:00 +05:30
Vineet Gupta
262137bca7 ARCv2: lib: memset: Don't assume 64-bit load/stores
There are configurations which may not have LDD/STD

Signed-off-by: Claudiu Zissulescu <claziss@synopsys.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-07-20 17:44:37 +03:00
Vineet Gupta
21481f2cfe ARCv2: lib: memcpy: Missing PREFETCHW
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-07-20 17:27:35 +03:00
Vineet Gupta
8922bc3058 ARCv2: Adhere to Zero Delay loop restriction
Branch insn can't be scheduled as last insn of Zero Overhead loop

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 14:06:56 +05:30
Claudiu Zissulescu
1f7e3dc0ba ARCv2: optimised string/mem lib routines
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 14:06:56 +05:30
Vineet Gupta
ec7ac6afd0 ARC: switch to generic ENTRY/END assembler annotations
With commit 9df62f0544 "arch: use ASM_NL instead of ';'" the generic
macros can handle the arch-specific newline quirk. Hence we can get
rid of the ARC asm macros and use the "C" style macros (sketched
below).
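
For context, a sketch of the mechanism (simplified; the macro bodies
here are an approximation of the generic linkage header, not
verbatim):

    /* Generic linkage macro: statements are separated by ASM_NL,
     * which defaults to ';' but an arch can override it. On ARC, ';'
     * starts an assembler comment, hence the override this commit
     * relies on. */
    #ifndef ASM_NL
    #define ASM_NL  ;
    #endif

    #define ENTRY(name)     \
        .globl name ASM_NL  \
        name: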

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-03-26 14:31:28 +05:30
Joern Rennecke
b0f55f2a1a ARC: [lib] strchr breakage in Big-endian configuration
For a 2-byte-aligned search buffer, strchr() was returning a pointer
outside of the buffer (buf - 1):

------------->8----------------
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        // Input buffer (default 4 byte aligned)
        char *buffer = "1AA_";

        // actual search start (to mimic 2 byte alignment)
        char *current_line = &(buffer[2]);

        // Character to search for
        char c = 'A';

        char *c_pos = strchr(current_line, c);

        printf("%s\n", c_pos);  // buggy: prints 'AA_' as opposed to 'A_'
        return 0;
    }
------------->8----------------

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Debugged-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Cc: <stable@vger.kernel.org> # [3.9 and 3.10]
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Joern Rennecke  <joern.rennecke@embecosm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-08-24 11:24:53 -07:00
Vineet Gupta
5210d1e688 ARC: String library
Hand optimised asm code for ARC700 pipeline.
Originally written/optimized by Joern Rennecke

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Joern Rennecke <joern.rennecke@embecosm.com>
2013-02-11 20:00:35 +05:30
Vineet Gupta
cfdbc2e16e ARC: Build system: Makefiles, Kconfig, Linker script
Arnd in his review pointed out that the arch Kconfig organisation has
several deficiencies:

* Build time entries for things which can be runtime extracted from DT
  (e.g. SDRAM size, core clk frequency..)
* Not multi-platform-image-build friendly (choice .. endchoice constructs)
* CPU variant support (750/770) is exclusive.

The first 2 have been fixed in subsequent patches.
Due to the nature of the 750 and 770, it is not possible to build for
both together, w/o special runtime glue code which would hurt
performance.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
2013-02-11 20:00:25 +05:30