Unless the address range matters, symbols can be ignored earlier,
which avoids unneeded memory allocation.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
The callers of this function expect (unsigned char *). I do not see
a good reason to make this function return (void *).
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
You can do the equivalent with strspn(). I do not see a noticeable
performance difference.
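A standalone illustration of the idea (the exact loop being replaced is
not shown in this excerpt, so the underscore-counting example below is
only an assumption):

  #include <stdio.h>
  #include <string.h>

  /* hand-rolled variant: count the leading run of '_' characters */
  static int count_leading_underscores(const char *s)
  {
          int n = 0;

          while (s[n] == '_')
                  n++;
          return n;
  }

  int main(void)
  {
          const char *name = "__initcall_start";

          /* strspn() returns the length of the initial run of '_' */
          printf("%d %zu\n", count_leading_underscores(name), strspn(name, "_"));
          return 0;       /* prints "2 2" */
  }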
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
sym_entry::sym is (unsigned char *) instead of (char *) because
kallsyms exploits the MSB for compression and the characters are used
as indices into the token_profit array.
However, this requires casting (unsigned char *) to (char *) in some
places, since standard library functions such as strcmp() and strlen()
expect (char *).
Introduce a new helper, sym_name(), which advances the given pointer
by 1 and casts it to (char *).
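A minimal sketch of such a helper (the struct is abridged; sym[0] holds
the symbol type byte and the name follows):

  struct sym_entry {
          unsigned long long addr;
          unsigned int len;
          unsigned char *sym;     /* sym[0] = type, sym[1..] = name */
  };

  /* skip the type byte and hand callers a plain (char *) name */
  static char *sym_name(const struct sym_entry *s)
  {
          return (char *)s->sym + 1;
  }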
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Since commit 6f00df24ee ("[PATCH] Strip local symbols from kallsyms"),
all symbols starting with '$' have been ignored.
is_arm_mapping_symbol(), which specifically ignores $a, $t, etc., is
therefore redundant.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Currently, record_relative_base() iterates over the entire table to
find the minimum address, but it is not efficient because we sort
the table anyway.
After sort_symbols(), the table is sorted by address. (kallsyms parses
the 'nm -n' output, so the data is already sorted by address, but this
commit does not rely on it.)
Move record_relative_base() after sort_symbols(), and take the first
non-absolute symbol value.
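A sketch of the resulting function, assuming a symbol_absolute()
predicate as mentioned elsewhere in this series:

  static void record_relative_base(void)
  {
          unsigned int i;

          /* table[] is already sorted by address, so the first
           * non-absolute symbol is the minimum address we need */
          for (i = 0; i < table_cnt; i++) {
                  if (!symbol_absolute(&table[i])) {
                          relative_base = table[i].addr;
                          return;
                  }
          }
  }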
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Currently, build_initial_tok_table() trims unused symbols, but it is
called after sort_symbols().
It is inefficient to sort a huge table that contains unused entries.
Shrink the table before sorting it.
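A sketch of the resulting call order in main() (shrink_table() is a
hypothetical name here for the filtering moved out of
build_initial_tok_table(); the other functions are named in the
surrounding commit messages):

  read_map(stdin);
  shrink_table();         /* drop unused entries first ...            */
  sort_symbols();         /* ... so the sort runs on a smaller table  */
  record_relative_base(); /* sorted table makes finding the base cheap */
  optimize_token_table();
  write_src();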
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
build_initial_tok_table() overwrites unused sym_entry instances to shrink
the table. Before an entry is overwritten, table[i].sym must be freed
since it is malloc'ed data.
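A sketch of the fix inside the loop that compacts the table
(illustrative shape, not the exact hunk):

  unsigned int i, pos = 0;

  for (i = 0; i < table_cnt; i++) {
          if (symbol_valid(&table[i])) {
                  table[pos++] = table[i];
          } else {
                  /* the entry is dropped, so release its malloc'ed
                   * name buffer instead of leaking it */
                  free(table[i].sym);
          }
  }
  table_cnt = pos;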
This fixes the 'definitely lost' report from valgrind. I ran valgrind
against x86_64_defconfig of v5.4-rc8 kernel, and here is the summary:
[Before the fix]
LEAK SUMMARY:
definitely lost: 53,184 bytes in 2,874 blocks
[After the fix]
LEAK SUMMARY:
definitely lost: 0 bytes in 0 blocks
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
gcc asan instrumentation emits the following sequence to store frame pc
when the kernel is built with CONFIG_RELOCATABLE:
debug/vsprintf.s:
.section .data.rel.ro.local,"aw"
.align 8
.LC3:
.quad .LASANPC4826@GOTOFF
.text
.align 8
.type number, @function
number:
.LASANPC4826:
and if a relocation is issued for the .LASANPC label, it also gets into
.symtab with the same address as the actual function symbol:
$ nm -n vmlinux | grep 0000000001397150
0000000001397150 t .LASANPC4826
0000000001397150 t number
In the end kernel backtraces are almost unreadable:
[ 143.748476] Call Trace:
[ 143.748484] ([<000000002da3e62c>] .LASANPC2671+0x114/0x190)
[ 143.748492] [<000000002eca1a58>] .LASANPC2612+0x110/0x160
[ 143.748502] [<000000002de9d830>] print_address_description+0x80/0x3b0
[ 143.748511] [<000000002de9dd64>] __kasan_report+0x15c/0x1c8
[ 143.748521] [<000000002ecb56d4>] strrchr+0x34/0x60
[ 143.748534] [<000003ff800a9a40>] kasan_strings+0xb0/0x148 [test_kasan]
[ 143.748547] [<000003ff800a9bba>] kmalloc_tests_init+0xe2/0x528 [test_kasan]
[ 143.748555] [<000000002da2117c>] .LASANPC4069+0x354/0x748
[ 143.748563] [<000000002dbfbb16>] do_init_module+0x136/0x3b0
[ 143.748571] [<000000002dbff3f4>] .LASANPC3191+0x2164/0x25d0
[ 143.748580] [<000000002dbffc4c>] .LASANPC3196+0x184/0x1b8
[ 143.748587] [<000000002ecdf2ec>] system_call+0xd8/0x2d8
Since .LASANPC labels are not even unique and get into .symtab only due
to relocations, filter them out in kallsyms.
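A hypothetical sketch of the filter (the real patch may hook it into the
existing ignore checks differently):

  /* compiler-generated ASAN PC labels carry no useful information */
  static int is_lasanpc_label(const char *name)
  {
          return strncmp(name, ".LASANPC", 8) == 0;
  }

  /* ... and in the symbol parsing path: */
  if (is_lasanpc_label(name))
          return -1;      /* drop the symbol */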
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Merge tag 'kbuild-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild updates from Masahiro Yamada:
- do not generate unneeded top-level built-in.a
- let git ignore O= directory entirely
- optimize scripts/kallsyms slightly
- exclude DWARF info from *.s regardless of config options
- fix GCC toolchain search path for Clang to prepare ld.lld support
- do not generate modules.order when CONFIG_MODULES is disabled
- simplify single target rules and remove VPATH for external module
build
- allow to add optional flags to dpkg-buildpackage when building
deb-pkg
- move some compiler option tests from Makefile to Kconfig
- various Makefile cleanups
* tag 'kbuild-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (40 commits)
kbuild: remove scripts/basic/% build target
kbuild: use -Werror=implicit-... instead of -Werror-implicit-...
kbuild: clean up scripts/gcc-version.sh
kbuild: remove cc-version macro
kbuild: update comment block of scripts/clang-version.sh
kbuild: remove commented-out INITRD_COMPRESS
kbuild: move -gsplit-dwarf, -gdwarf-4 option tests to Kconfig
kbuild: [bin]deb-pkg: add DPKG_FLAGS variable
kbuild: move ".config not found!" message from Kconfig to Makefile
kbuild: invoke syncconfig if include/config/auto.conf.cmd is missing
kbuild: simplify single target rules
kbuild: remove empty rules for makefiles
kbuild: make -r/-R effective in top Makefile for old Make versions
kbuild: move tools_silent to a more relevant place
kbuild: compute false-positive -Wmaybe-uninitialized cases in Kconfig
kbuild: refactor cc-cross-prefix implementation
kbuild: hardcode genksyms path and remove GENKSYMS variable
scripts/gdb: refactor rules for symlink creation
kbuild: create symlink to vmlinux-gdb.py in scripts_gdb target
scripts/gdb: do not descend into scripts/gdb from scripts
...
Fix the following sparse warnings:
scripts/kallsyms.c:65:5: warning: symbol 'token_profit' was not declared. Should it be static?
scripts/kallsyms.c:68:15: warning: symbol 'best_table' was not declared. Should it be static?
scripts/kallsyms.c:69:15: warning: symbol 'best_table_len' was not declared. Should it be static?
Also, remove 'inline' from is_arm_mapping_symbol(). The compiler
will inline it anyway when it is appropriate to do so.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
As mentioned in the gas info pages, the '.align' pseudo-op's
interpretation of the alignment value is architecture specific:
it may either be a byte count or the exponent of a power of two.
On ARM it is actually the latter, which leads to unnecessarily large
alignments of 16 bytes for 32-bit builds or 256 bytes for 64-bit
builds.
Fix this by switching to '.balign' instead, which is interpreted
consistently across all architectures.
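The directives are emitted from C via printf(); a sketch of the change
(the real helper name and alignment value may differ):

  static void output_label(const char *label)
  {
          printf(".globl %s\n", label);
          printf("\t.balign 8\n");    /* was ".align 8": 8 bytes everywhere,
                                         not 2^8 = 256 bytes on ARM */
          printf("%s:\n", label);
  }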
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
These symbols were added by commit 028f042613 ("kallsyms: support
kernel symbols in Blackfin on-chip memory") for Blackfin.
The Blackfin support was removed by commit 4ba66a9760 ("arch: remove
blackfin port").
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Neither kallsyms_num_syms nor kallsyms_markers[] really needs unsigned
long as its (base) type; unsigned int fully suffices.
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
scripts/kallsyms.c: function write_src:
"printf": the #1 format specifier "d" needs an argument of type "int",
but the corresponding argument "table_cnt" has type "unsigned int"
scripts/recordmcount.c: function do_file:
"fprintf": the #1 format specifier "d" needs an argument of type "int",
but the corresponding argument "(*w2)(ehdr->e_machine)" has type "unsigned int"
scripts/recordmcount.h: function find_secsym_ndx:
"fprintf": the #1 format specifier "d" needs an argument of type "int",
but the corresponding argument "txtndx" has type "unsigned int"
Signed-off-by: nixiaoming <nixiaoming@huawei.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX was selected by BLACKFIN, METAG.
They were removed by commit 4ba66a9760 ("arch: remove blackfin port"),
commit bb6fb6dfcc ("metag: Remove arch/metag/"), respectively.
No architecture enables CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX any more,
hence the --symbol-prefix option is unnecessary.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Nothing particularly stands out here, probably because people were
tied up with spectre/meltdown stuff last time around. Still, the main
pieces are:
- Rework of our CPU features framework so that we can whitelist CPUs
that don't require kpti even in a heterogeneous system
- Support for the IDC/DIC architecture extensions, which allow us to
elide instruction and data cache maintenance when writing out
instructions
- Removal of the large memory model which resulted in suboptimal
codegen by the compiler and increased the use of literal pools,
which could potentially be used as ROP gadgets since they are
mapped as executable
- Rework of forced signal delivery so that the siginfo_t is
well-formed and handling of show_unhandled_signals is consolidated
and made consistent between different fault types
- More siginfo cleanup based on the initial patches from Eric
Biederman
- Workaround for Cortex-A55 erratum #1024718
- Some small ACPI IORT updates and cleanups from Lorenzo Pieralisi
- Misc cleanups and non-critical fixes"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (70 commits)
arm64: uaccess: Fix omissions from usercopy whitelist
arm64: fpsimd: Split cpu field out from struct fpsimd_state
arm64: tlbflush: avoid writing RES0 bits
arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h
arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h
arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
arm64: fpsimd: include <linux/init.h> in fpsimd.h
drivers/perf: arm_pmu_platform: do not warn about affinity on uniprocessor
perf: arm_spe: include linux/vmalloc.h for vmap()
Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)"
arm64: cpufeature: Avoid warnings due to unused symbols
arm64: Add work around for Arm Cortex-A55 Erratum 1024718
arm64: Delay enabling hardware DBM feature
arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35
arm64: capabilities: Handle shared entries
arm64: capabilities: Add support for checks based on a list of MIDRs
arm64: Add helpers for checking CPU MIDR against a range
arm64: capabilities: Clean up midr range helpers
arm64: capabilities: Change scope of VHE to Boot CPU feature
...
On arm64, the EFI stub and the kernel proper are essentially the same
binary, although the EFI stub executes at a different virtual address
than the kernel. For this reason, the EFI stub is restricted in the
symbols it can link to, which is ensured by prefixing all EFI stub
symbols with __efistub_ (and emitting __efistub_ prefixed aliases for
routines that may be shared between the core kernel and the stub).
These symbols are leaking into kallsyms, polluting the namespace, so
let's filter them explicitly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
gcc on aarch64 may emit symbols of type 'n' if the kernel is built with
'-frecord-gcc-switches'. In most cases, those symbols are reported by
nm as
000000000000000e n $d
and with objdump as
0000000000000000 l d .GCC.command.line 0000000000000000 .GCC.command.line
000000000000000e l .GCC.command.line 0000000000000000 $d
Those symbols are detected in is_arm_mapping_symbol() and ignored.
However, if "--prefix-symbols=<prefix>" is configured as well, the
situation is different. For example, in efi/libstub, arm64 images are
built with
'--prefix-alloc-sections=.init --prefix-symbols=__efistub_'.
In combination with '-frecord-gcc-switches', the symbols are now reported
by nm as:
000000000000000e n __efistub_$d
and by objdump as:
0000000000000000 l d .GCC.command.line 0000000000000000 .GCC.command.line
000000000000000e l .GCC.command.line 0000000000000000 __efistub_$d
Those symbols are no longer ignored and are included in the base address
calculation. This results in a base address of 000000000000000e, which
in turn causes kallsyms to abort with
kallsyms failure:
relative symbol value 0xffffff900800a000 out of range in relative mode
The problem is seen in little endian arm64 builds with CONFIG_EFI
enabled and with '-frecord-gcc-switches' set in KCFLAGS.
Explicitly ignore symbols of type 'n' since those are clearly debug
symbols.
Link: http://lkml.kernel.org/r/1507136063-3139-1-git-send-email-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This adds the kbuild infrastructure that will allow architectures to emit
vmlinux symbol CRCs as 32-bit offsets to another location in the kernel
where the actual value is stored. This works around problems with CRCs
being mistaken for relocatable symbols on kernels that self-relocate at
runtime (i.e., powerpc with CONFIG_RELOCATABLE=y).
For the kbuild side of things, this comes down to the following:
- introducing a Kconfig symbol MODULE_REL_CRCS
- adding a -R switch to genksyms to instruct it to emit the CRC symbols
as references into the .rodata section
- making modpost distinguish such references from absolute CRC symbols
by the section index (SHN_ABS)
- making kallsyms disregard non-absolute symbols with a __crc_ prefix
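A hypothetical sketch of the kallsyms side (the real check in
scripts/kallsyms.c may be structured differently):

  /* disregard __crc_ symbols that are not absolute: with genksyms -R
   * they are section-relative references into .rodata, not real symbols */
  if (strncmp(name, "__crc_", 6) == 0 && type != 'A' && type != 'a')
          return -1;      /* skip this symbol */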
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The implementation of the --page-offset kallsyms command line option has
been removed, so remove it from the usage string as well.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.com>
The --page-offset command line option was only used for ARM, to filter
symbol addresses below CONFIG_PAGE_OFFSET. This is no longer needed, so
remove the functionality altogether.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM, the linker may emit veneers to deal with relative branch
instructions that appear too far away from their targets. Since the
second kallsyms pass results in an increase of the kernel size, it may
result in additional veneers being emitted, potentially affecting the
output of kallsyms itself if these symbols are visible to it, and for
that reason, symbols whose names end in '_veneer' are ignored explicitly.
However, when building Thumb2 kernels, such veneers are named differently
if they also incur a mode switch, and since they are not filtered by
kallsyms, they may cause the build to fail. So filter symbols whose names
end in '_from_arm' or '_from_thumb' as well.
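A sketch close to the shape such a suffix filter takes in
scripts/kallsyms.c (shown as a standalone helper here):

  #include <string.h>

  static const char * const special_suffixes[] = {
          "_veneer",          /* arm */
          "_from_arm",        /* arm/thumb interworking veneers */
          "_from_thumb",
          NULL
  };

  /* reject any symbol whose name ends in one of the suffixes above */
  static int has_special_suffix(const char *name)
  {
          size_t nlen = strlen(name);
          int i;

          for (i = 0; special_suffixes[i]; i++) {
                  size_t slen = strlen(special_suffixes[i]);

                  if (nlen >= slen &&
                      !strcmp(name + nlen - slen, special_suffixes[i]))
                          return 1;
          }
          return 0;
  }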
Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Similar to how relative extables are implemented, it is possible to emit
the kallsyms table in such a way that it contains offsets relative to
some anchor point in the kernel image rather than absolute addresses.
On 64-bit architectures, it cuts the size of the kallsyms address table
in half, since offsets between kernel symbols can typically be expressed
in 32 bits. This saves several hundred kilobytes of permanent
.rodata on average. In addition, the kallsyms address table is no
longer subject to dynamic relocation when CONFIG_RELOCATABLE is in
effect, so the relocation work done after decompression now doesn't have
to do relocation updates for all these values. This saves up to 24
bytes (i.e., the size of an ELF64 RELA relocation table entry) per value,
which easily adds up to a couple of megabytes of uncompressed __init
data on ppc64 or arm64. Even if these relocation entries typically
compress well, the combined size reduction of 2.8 MB uncompressed for a
ppc64_defconfig build (of which 2.4 MB is __init data) results in a ~500
KB space saving in the compressed image.
Since it is useful for some architectures (like x86) to retain the
ability to emit absolute values as well, this patch also adds support
for capturing both absolute and relative values when
KALLSYMS_ABSOLUTE_PERCPU is in effect, by emitting absolute per-cpu
addresses as positive 32-bit values, and addresses relative to the
lowest encountered relative symbol as negative values, which are
subtracted from the runtime address of this base symbol to produce the
actual address.
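The runtime side then reconstructs addresses roughly as follows (a
paraphrase of the decoding described above for the
KALLSYMS_ABSOLUTE_PERCPU case; names follow kernel/kallsyms.c but treat
this as a sketch, not the verbatim code):

  static unsigned long kallsyms_sym_address(int idx)
  {
          /* positive offsets are absolute values (the per-cpu symbols) */
          if (kallsyms_offsets[idx] >= 0)
                  return kallsyms_offsets[idx];

          /* negative offsets are relative to the lowest relative symbol;
           * the extra -1 keeps the stored value strictly negative even
           * for the base symbol itself */
          return kallsyms_relative_base - 1 - kallsyms_offsets[idx];
  }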
Support for the above is enabled by default for all architectures except
IA-64 and Tile-GX, whose symbols are too far apart to capture in this
manner.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commit c6bda7c988 ("kallsyms: fix percpu vars on x86-64 with
relocation") overloaded the 'A' (absolute) symbol type to signify that a
symbol is not subject to dynamic relocation. However, the original A
type does not imply that at all, and depending on the version of the
toolchain, many A type symbols are emitted that are in fact relative to
the kernel text, i.e., if the kernel is relocated at runtime, these
symbols should be updated as well.
For instance, on sparc32, the following symbols are emitted as absolute
(kindly provided by Guenter Roeck):
f035a420 A _etext
f03d9000 A _sdata
f03de8c4 A jiffies
f03f8860 A _edata
f03fc000 A __init_begin
f041bdc8 A __init_text_end
f0423000 A __bss_start
f0423000 A __init_end
f044457d A __bss_stop
f044457d A _end
On x86_64, similar behavior can be observed:
ffffffff81a00000 A __end_rodata_hpage_align
ffffffff81b19000 A __vvar_page
ffffffff81d3d000 A _end
Even if only a couple of them pass the symbol range check that results
in them being taken into account for the final kallsyms symbol table, it
is obvious that 'A' does not mean the symbol does not need to be updated
at relocation time, and overloading its meaning to signify that is
perhaps not a good idea.
So instead, add a new percpu_absolute member to struct sym_entry, and
when --absolute-percpu is in effect, use it to record symbols whose
addresses should be emitted as final values rather than values that
still require relocation at runtime. That way, we can drop the check
against the 'A' type.
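A sketch of the new state (struct fields abridged):

  struct sym_entry {
          unsigned long long addr;
          unsigned int len;
          unsigned int percpu_absolute;   /* set while handling --absolute-percpu */
          unsigned char *sym;
  };

  /* "absolute for output purposes" no longer keys off the 'A' type */
  static int symbol_absolute(const struct sym_entry *s)
  {
          return s->percpu_absolute;
  }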
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Since we have required at least GCC v3.2 for some time now, we
can drop the special handling of the 'gcc[0-9]_compiled.' label
which is not emitted anymore since GCC v3.0.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>
When linking large kernels on ARM, the linker will insert veneers
(i.e., PLT like stubs) when function symbols are out of reach for
the ordinary relative branch/branch-and-link instructions.
However, because the kallsyms region sits in .rodata, which is between
.text and .init.text, additional veneers may be emitted in the second
pass: the size of the kallsyms region itself pushes the .init.text
section further away, requiring even more veneers.
So ignore the veneers when generating the symbol table. Veneers
have no corresponding source code, and they will not turn up in
backtraces anyway.
This patch also lightly refactors the symbol_valid() function
to use a local 'sym_name' rather than the obfuscated 'sym + 1'
and 'sym + offset'.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Similar to ARM, AArch64 generates $x and $d symbols, which isn't
terribly helpful when looking at %pF output and the like. Filter those
out in kallsyms, in modpost, and when looking at module symbols.
Since none of these check EM_ARM anyway, it seems simplest to just add
the characters to the strchr() call already used, rather than trying to
make things overly complicated.
initcall_debug improves:
dmesg_before.txt: initcall $x+0x0/0x154 [sg] returned 0 after 26331 usecs
dmesg_after.txt: initcall init_sg+0x0/0x154 [sg] returned 0 after 15461 usecs
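The resulting check ends up roughly like this (a sketch of the helper in
scripts/kallsyms.c; the same idea is mirrored in modpost):

  /* ARM/AArch64 mapping symbols: $a, $t, $d and now $x, optionally
   * followed by a dot-separated suffix */
  static int is_arm_mapping_symbol(const char *str)
  {
          return str[0] == '$' && strchr("axtd", str[1]) &&
                 (str[2] == '\0' || str[2] == '.');
  }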
Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
x86-64 has a problem: per-cpu variables are actually represented by
their absolute offsets within the per-cpu area, but the symbols are
not emitted as absolute. Thus kallsyms naively creates them as offsets
from _text, meaning their values change if the kernel is relocated
(especially noticeable with CONFIG_RANDOMIZE_BASE):
$ egrep ' (gdt_|_(stext|_per_cpu_))' /root/kallsyms.nokaslr
0000000000000000 D __per_cpu_start
0000000000004000 D gdt_page
0000000000014280 D __per_cpu_end
ffffffff810001c8 T _stext
ffffffff81ee53c0 D __per_cpu_offset
$ egrep ' (gdt_|_(stext|_per_cpu_))' /root/kallsyms.kaslr1
000000001f200000 D __per_cpu_start
000000001f204000 D gdt_page
000000001f214280 D __per_cpu_end
ffffffffa02001c8 T _stext
ffffffffa10e53c0 D __per_cpu_offset
Making them absolute symbols is the Right Thing, but requires fixes to
the relocs tool. So for the moment, we add a --absolute-percpu option
which makes them absolute from a kallsyms perspective:
$ egrep ' (gdt_|_(stext|_per_cpu_))' /proc/kallsyms # no KASLR
0000000000000000 A __per_cpu_start
000000000000a000 A gdt_page
0000000000013040 A __per_cpu_end
ffffffff802001c8 T _stext
ffffffff8099b180 D __per_cpu_offset
ffffffff809a3000 D __per_cpu_load
$ egrep ' (gdt_|_(stext|_per_cpu_))' /proc/kallsyms # With KASLR
0000000000000000 A __per_cpu_start
000000000000a000 A gdt_page
0000000000013040 A __per_cpu_end
ffffffff89c001c8 T _stext
ffffffff8a39d180 D __per_cpu_offset
ffffffff8a3a5000 D __per_cpu_load
Based-on-the-original-screenplay-by: Andy Honig <ahonig@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Kees Cook <keescook@chromium.org>
This refactors the address range checks to be generalized instead of
specific to text range checks, in preparation for other range checks.
Also extracts logic for "is the symbol absolute" into a function.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Revert the recently applied 0f55159d09 ("kallsyms: fix absolute
addresses for kASLR"). Kees said
: This got NAKed, please don't apply -- this patch works for x86 and
: ARM, but may cause problems for others:
:
: https://lkml.org/lkml/2014/2/24/718
It appears that Kees will be fixing all this up for 3.15.
Cc: Andy Honig <ahonig@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently, symbols with absolute addresses are incorrectly displayed in
/proc/kallsyms if the kernel is loaded with kASLR.
The problem was that scripts/kallsyms.c, which generates the array of
symbol names and addresses, used a relocatable value for all symbols,
even absolute ones. This patch fixes that.
Several kallsyms outputs in different boot states, for comparison:
$ egrep '_(stext|_per_cpu_(start|end))' /root/kallsyms.nokaslr
0000000000000000 D __per_cpu_start
0000000000014280 D __per_cpu_end
ffffffff810001c8 T _stext
$ egrep '_(stext|_per_cpu_(start|end))' /root/kallsyms.kaslr1
000000001f200000 D __per_cpu_start
000000001f214280 D __per_cpu_end
ffffffffa02001c8 T _stext
$ egrep '_(stext|_per_cpu_(start|end))' /root/kallsyms.kaslr2
000000000d400000 D __per_cpu_start
000000000d414280 D __per_cpu_end
ffffffff8e4001c8 T _stext
$ egrep '_(stext|_per_cpu_(start|end))' /root/kallsyms.kaslr-fixed
0000000000000000 D __per_cpu_start
0000000000014280 D __per_cpu_end
ffffffffadc001c8 T _stext
Signed-off-by: Andy Honig <ahonig@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull kbuild changes from Michal Marek:
- LTO fixes, but the kallsyms part had to be reverted
- Pass -Werror=implicit-int and -Werror=strict-prototypes to the
compiler by default
- snprintf fix in modpost
- remove GREP_OPTIONS from the environment to be immune against exotic
grep option settings
* 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
kallsyms: Revert back to 128 max symbol length
Kbuild: Ignore GREP_OPTIONS env variable
scripts: kallsyms: Use %zu to print 'size_t'
scripts/bloat-o-meter: use .startswith rather than fragile slicing
scripts/bloat-o-meter: ignore changes in the size of linux_banner
kbuild: replace unbounded sprintf call in modpost
kbuild, bloat-o-meter: fix static detection
Kbuild: Handle longer symbols in kallsyms.c
kbuild: Increase kallsyms max symbol length
Makefile: enable -Werror=implicit-int and -Werror=strict-prototypes by default
This reverts commits
f3462aa (Kbuild: Handle longer symbols in kallsyms.c) and
eea0e9c (kbuild: Increase kallsyms max symbol length)
except for the added overflow check. The reason is a regression caused
by increasing the buffer:
http://marc.info/?l=linux-kernel&m=138387700415675.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Joe Mario <jmario@redhat.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Commit f3462aa95 (Kbuild: Handle longer symbols in kallsyms.c) introduced the
following warning on ARM:
scripts/kallsyms.c:121:4: warning: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'size_t' [-Wformat]
Use %zu to print 'size_t'.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Also warn about overly long symbols.
v2: Add missing newline. Use 255 max (Joe Perches)
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
This patch uses CONFIG_PAGE_OFFSET to filter out symbols which are not
in the kernel address space, because such symbols are generally only
used for code generation and cannot run in kernel mode, so we need not
keep them in /proc/kallsyms.
For example, on ARM some symbols may be linked in a relocatable code
section, and perf then can no longer parse symbols from /proc/kallsyms;
this patch fixes the problem (introduced by commit b9b32bf70f).
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Michal Marek <mmarek@suse.cz>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@vger.kernel.org
Description:
This bug rarely appears when compiling a real kernel, because the
vmlinux symbol table is huge. But we can still reproduce it under an
artificial condition, as follows.
$ echo "c101b97b T do_fork" | ./scripts/kallsyms --all-symbols
#include <asm/types.h>
......
......
.globl kallsyms_token_table
ALGN
kallsyms_token_table:
Segmentation fault (core dumped)
$
If the symbol table is small, all entries in token_profit[0x10000] may
drop to 0 after several calls to compress_symbols() in optimize_result().
In that case, find_best_token() always returns 0, best_table[i] is set to
"\0\0", and best_table_len[i] is set to 2.
As a result, expand_symbol(best_table[0]="\0\0", best_table_len[0]=2, buf)
in write_src() recurses infinitely until the stack overflows, causing the
segfault.
This patch checks the return value of find_best_token(). If it indicates
that all entries in token_profit[0x10000] have become 0, the loop in
optimize_result() is terminated. expand_symbol() works fine when
best_table_len[i] is 0.
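A sketch of the added check (shape only):

  /* inside the loop of optimize_result() that fills unused slots: */
  best = find_best_token();
  if (token_profit[best] == 0)
          break;          /* every remaining token has zero profit: stop */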
Signed-off-by: Xiaochen Wang <wangxiaochen0@gmail.com>
Acked-by: Paulo Marques <pmarques@grupopie.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Since no error was handled, we would not be able to know when an error
actually occurs. The fix preserves the error messages while keeping
unnecessary compiler warnings from showing up.
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Suppress a warn_unused_result warning.
fgets() is called as part of error handling, just to drop a line and
return immediately. read_map() reads the file in a loop and read_symbol()
reads it line by line, so there is no point in doing any useful checking
with the return value. Other checks, such as whether 3 items were
returned or !EOF, have already been done.
Signed-off-by: Himanshu Chauhan <hschauhan@nulltrace.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>
Commit b478b782e1 "kallsyms, tracing: output more proper symbol name"
introduces a "bugfix" that causes a segfault in kallsyms in my
configurations.
The cause is the introduction of prefix_underscores_count() which attempts
to count underscores, even in symbols that do not have them. As a result,
it just uselessly runs past the end of the buffer until it crashes:
CC init/version.o
LD init/built-in.o
LD .tmp_vmlinux1
KSYM .tmp_kallsyms1.S
/bin/sh: line 1: 16934 Done sh-linux-gnu-nm -n .tmp_vmlinux1
16935 Segmentation fault | scripts/kallsyms > .tmp_kallsyms1.S
make: *** [.tmp_kallsyms1.S] Error 139
This simplifies the logic and just does a straightforward count.
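The fixed helper is essentially a plain loop (sketch):

  static int prefix_underscores_count(const char *str)
  {
          const char *tail = str;

          /* count leading underscores without ever reading past the
           * first non-underscore character */
          while (*tail == '_')
                  tail++;

          return tail - str;
  }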
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Paulo Marques <pmarques@grupopie.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: <stable@kernel.org> [2.6.30.x, 2.6.31.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The previous commit (17b1f0de) introduced a slightly broken consolidation
of the memory text range checking.
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>