The s390 architecture can store the CPU registers of the crashed system
after the kdump kernel has been started, and this is the preferred way.
Remove the remaining code fragments that deal with storing CPU registers
while the crashed system is still active.
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove dead code, since this could only happen on a 31 bit machine
where the kernel wouldn't IPL.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The novx parameter disables the vector facility, but the HWCAP_S390_VXRS
bit in the ELF hardware capabilities is always set if the machine has
the vector facility. If a user space program relies on the "vx" string
in the features field of /proc/cpuinfo to decide whether to use vector
instructions, it will crash when the novx kernel parameter is set.
Convert setup_hwcaps to an arch_initcall and use MACHINE_HAS_VX to
decide if the HWCAP_S390_VXRS bit needs to be set.
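For illustration only (not part of the patch), a user space program is better
off checking the ELF hardware capabilities instead of parsing /proc/cpuinfo;
a minimal sketch, assuming glibc's getauxval() and using an assumed fallback
value for HWCAP_S390_VXRS:

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_S390_VXRS
  #define HWCAP_S390_VXRS 2048    /* assumed bit value if the libc headers lack it */
  #endif

  int main(void)
  {
          unsigned long hwcap = getauxval(AT_HWCAP);

          if (hwcap & HWCAP_S390_VXRS)
                  puts("vx: vector facility usable");
          else
                  puts("vx: not advertised, do not use vector instructions");
          return 0;
  }

With the arch_initcall conversion this bit is only advertised when the kernel
actually enables the vector facility.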
Cc: stable@vger.kernel.org # 3.18+
Reported-by: Ulrich Weigand <uweigand@de.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Enable core NUMA support for s390 and add one simple default mode "plain"
that creates one single NUMA node.
This patch contains several changes from Michael Holzheu.
Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for the generic CPU feature modalias implementation that wires
up optional CPU features to udev-based module autoprobing.
The <asm/cpufeature.h> file provides definitions to map CPU features to
s390 ELF hardware capabilities.
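As a hedged sketch of how a module could make use of this (the MSA feature
name and all driver names below are illustrative assumptions, not taken from
the patch):

  #include <linux/module.h>
  #include <linux/cpufeature.h>

  static int __init example_init(void)
  {
          pr_info("required CPU feature present, module loaded\n");
          return 0;
  }

  static void __exit example_exit(void)
  {
  }

  /* emits a CPU feature modalias so udev can autoload the module */
  module_cpu_feature_match(MSA, example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");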
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Heiko noticed that the current check for hugepage support on s390 is a
little bit too harsh: systems which do not support hugepages will crash.
The reason is that pageblock_order can now get negative when we set
HPAGE_SHIFT to 0. To avoid all this and to avoid opening another can of
worms with enabling HUGETLB_PAGE_SIZE_VARIABLE I think it would be best
to simply allow architectures to define their own hugepages_supported().
Revert bea41197ea ("s390/mm: make hugepages_supported a boot time
decision") in preparation.
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull more s390 updates from Martin Schwidefsky:
"There is one larger patch for the AP bus code to make it work with the
longer reset periods of the latest crypto cards.
A new default configuration, a naming cleanup for SMP and a few fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/kdump: fix compile for !SMP
s390/kdump: fix nosmt kernel parameter
s390: new default configuration
s390/smp: cleanup core vs. cpu in the SCLP interface
s390/smp: fix sigp cpu detection loop
s390/zcrypt: Fixed reset and interrupt handling of AP queues
s390/kdump: fix REGSET_VX_LOW vector register ELF notes
s390/bpf: Fix backward jumps
There is a potential bug with KVM and hugetlbfs if the hardware does not
support hugepages (EDAT1). We fix this by making EDAT1 a hard requirement
for hugepages, which also allows removing and simplifying code.
As s390, with its software-emulated hugepages, was the only user of
arch_prepare/release_hugepage, I also removed these calls from common and
other architecture code.
This patch (of 5):
By dropping support for hugepages on machines which do not have the
hardware feature EDAT1, we fix a potential s390 KVM bug.
The bug would happen if a guest is backed by hugetlbfs (not supported
currently), but does not get pagetables with PGSTE. This would lead to
random memory overwrites.
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It turned out that SIGP set-multi-threading can only be issued once.
Therefore switching to a different MT level after having switched to
sclp.mtid_prev in the dump case fails.
As a symptom, specifying the "nosmt" parameter currently fails for
the kdump kernel and the kernel starts with multi-threading enabled.
Fix this by issuing a diag 308 subcode 1 call after collecting the
CPU states for the dump. Also enhance the diag308_reset() function so
that it can be used with lowcore protection enabled and a prefix
register != 0.
After the reset it is possible to switch the MT level again. We have
to do the reset very early in order not to kill the already initialized
console. Therefore the corresponding memblock functions have to be used
instead of kmalloc(). To avoid copying the sclp cpu code into
sclp_early, we now use the simple sigp loop method for CPU detection.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Addresses from the usable space in [_ehead, _stext] lead to false
positives in DMA_API_DEBUG code (which will complain when an address
is in [_text, _etext]).
Avoid these warnings by not using that memory in case of
CONFIG_DMA_API_DEBUG=y.
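One possible shape of the change, shown only as a sketch (assuming the range
is simply kept reserved when DMA debugging is built in):

  #ifdef CONFIG_DMA_API_DEBUG
          /* keep [_ehead, _stext] away from the allocator to avoid false
           * positive warnings from the DMA debug code */
          memblock_reserve(__pa(_ehead), _stext - _ehead);
  #endif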
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Let's unify basic access to sclp fields by storing the data in an external
struct in asm/sclp.h.
The values can now directly be accessed by other components, so there is
no need for most accessor functions and external variables anymore.
The mtid, mtid_max and facility part will be cleaned up separately.
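An illustrative sketch of such an externally visible structure (the field
selection below is an assumption, not the exact layout of the patch):

  /* asm/sclp.h */
  struct sclp_info {
          unsigned char has_linemode : 1;
          unsigned char has_vt220 : 1;
          unsigned long rzm;      /* memory increment size */
          unsigned long rnmax;    /* maximum increment number */
          unsigned long hsa_size;
  };
  extern struct sclp_info sclp;

Consumers then read the fields directly, e.g. the memory size as
sclp.rzm * sclp.rnmax, instead of calling accessor functions.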
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove the 31 bit support in order to reduce maintenance cost and
effectively remove dead code. For a couple of years there has been no
distribution left that comes with a 31 bit kernel.
The 31 bit kernel had also been broken for more than a year before
anybody noticed. In addition I added a removal warning to the kernel,
shown at ipl for 5 minutes: a960062e58 ("s390: add 31 bit warning
message"), which let everybody know about the plan to remove 31 bit
code. We didn't get any response.
Given that the last 31 bit only machine was introduced in 1999 let's
remove the code.
Anybody with 31 bit user space code can still use the compat mode.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There is no reason to initialize the topology cpu masks already while
setup_arch() is being called. It is sufficient to initialize the masks
before the scheduler becomes SMP aware.
Therefore a pre-SMP initcall aka early_initcall is sufficient.
This also allows converting the cpu_topology array into a per cpu
variable with a later patch. Without this patch that wouldn't be
possible since the per cpu memory areas are not yet allocated while
setup_arch is executed.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If the function tracer is enabled, allow setting kprobes on the first
instruction of a function (which is the function trace caller):
If no kprobe is set, enabling or disabling function tracing for a
function simply patches the first instruction. Either it is a nop
(right now it's an unconditional branch, which skips the mcount block),
or it's a branch to the ftrace_caller() function.
If a kprobe is being placed on a function tracer calling instruction,
we encode whether the site actually contains a nop or a branch in the
remaining bytes after the breakpoint instruction (illegal opcode).
This is possible, since the size of the instruction used for the nop
and branch is six bytes, while the size of the breakpoint is only
two bytes.
Therefore the first two bytes contain the illegal opcode and the last
four bytes contain either "0" for nop or "1" for branch. The kprobes
code will then execute/simulate the correct instruction.
Instruction patching for kprobes and function tracer is always done
with stop_machine(). Therefore we don't have any races where an
instruction is patched concurrently on a different cpu.
Besides that, the program check handler which executes the function
trace caller instruction also won't be executed concurrently with any
stop_machine() execution.
This allows keeping the full fault-based kprobes handling, which
generates correct pt_regs contents automatically.
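A sketch of the encoding described above (struct and function names are
illustrative; the real code patches the site via stop_machine() only):

  struct ftrace_insn {
          unsigned short opc;     /* 2 bytes: breakpoint (illegal opcode) once a kprobe is armed */
          int disp;               /* 4 bytes: 0 == site held the nop, 1 == branch to ftrace_caller() */
  } __attribute__((packed));

  static void encode_kprobe_on_ftrace_site(struct ftrace_insn *insn, int was_branch)
  {
          insn->opc = 0x0002;                     /* kprobes breakpoint instruction */
          insn->disp = was_branch ? 1 : 0;        /* remember what to execute/simulate */
  }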
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The vector extension introduces 32 128-bit vector registers and a set of
instructions to operate on the vector registers.
The kernel can control the use of vector registers for the problem state
program with a bit in control register 0. Once enabled for a process the
kernel needs to retain the content of the vector registers on context
switch. The signal frame is extended to include the vector registers.
Two new register sets NT_S390_VXRS_LOW and NT_S390_VXRS_HIGH are added
to the regset interface for the debugger and core dumps.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Fix calculation to decide if a 4-level kernel page table is required.
Git commit c972cc60c2 "s390/vmalloc: have separate modules area"
added the separate module area, which reduces the size of the vmalloc
area but failed to take it into account in the 3 vs 4 level page table
decision.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The virtual-machine cpu information data block and the cpu-id of
the boot cpu can be used as source of device randomness.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We only have to check kdump memory for the MEM_GOING_OFFLINE action.
Therefore skip the test and return NOTIFY_OK for all other memory
hotplug actions.
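A minimal sketch of the resulting notifier logic (the overlap check helper is
a placeholder, not an existing function):

  static int kdump_mem_notifier(struct notifier_block *nb,
                                unsigned long action, void *data)
  {
          struct memory_notify *arg = data;

          if (action != MEM_GOING_OFFLINE)
                  return NOTIFY_OK;
          /* only offlining needs the check against the crash kernel memory */
          if (range_overlaps_crashkernel(arg->start_pfn, arg->nr_pages))
                  return NOTIFY_BAD;
          return NOTIFY_OK;
  }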
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently there are two s390 kernel dump config options, "CONFIG_ZFCPDUMP"
and "CONFIG_CRASH_DUMP". In order to keep things simple, and because
"CONFIG_ZFCPDUMP" already depends on "CONFIG_CRASH_DUMP", remove the
CONFIG_ZFCPDUMP option.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use a lowcore constant to improve the code generated for spinlocks.
[ Martin Schwidefsky: patch breakdown and code beautification ]
Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The original bootmem allocator is getting replaced by memblock. To
cover the needs of the s390 kdump implementation the physical memory
list is used.
With this patch the bootmem allocator and its bitmaps are completely
removed from s390.
Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Remove another leftover from the time when we supported running
user space in either home or primary address space.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
There are only two uaccess variants on s390 left: the version that is used
if the mvcos instruction is available, and the page table walk variant.
So there is no need for expensive indirect function calls.
By default the mvcos variant will be called. If the mvcos instruction is not
available it will call the page table walk variant.
For minimal performance impact the "if (mvcos_is_available)" is implemented
with a jump label, which will be a six byte nop on machines with mvcos.
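A sketch of the technique using today's static-branch API (the patch itself
used the jump label interface of that time; the wrapper name is illustrative,
the two callees are the variants mentioned above):

  static DEFINE_STATIC_KEY_TRUE(have_mvcos);

  unsigned long s390_copy_from_user(void *to, const void __user *from, unsigned long n)
  {
          if (static_branch_likely(&have_mvcos))  /* compiles to a nop on mvcos machines */
                  return copy_from_user_mvcos(to, from, n);
          return copy_from_user_pt(to, from, n);
  }

  static int __init uaccess_init(void)
  {
          if (!MACHINE_HAS_MVCOS)
                  static_branch_disable(&have_mvcos);
          return 0;
  }
  early_initcall(uaccess_init);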
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Pull s390 updates from Martin Schwidefsky:
"The bulk of the s390 updates for v3.14.
New features are the perf support for the CPU-Measurement Sample
Facility and the EP11 support for the crypto cards. And the normal
cleanups and bug-fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (44 commits)
s390/cpum_sf: fix printk format warnings
s390: Fix misspellings using 'codespell' tool
s390/qdio: bridgeport support - CHSC part
s390: delete new instances of __cpuinit usage
s390/compat: fix PSW32_USER_BITS definition
s390/zcrypt: add support for EP11 coprocessor cards
s390/mm: optimize randomize_et_dyn for !PF_RANDOMIZE
s390: use IS_ENABLED to check if a CONFIG is set to y or m
s390/cio: use device_lock to synchronize calls to the ccwgroup driver
s390/cio: use device_lock to synchronize calls to the ccw driver
s390/cio: fix unlocked access of online member
s390/cpum_sf: Add flag to process full SDBs only
s390/cpum_sf: Add raw data sampling to support the diagnostic-sampling function
s390/cpum_sf: Filter perf events based event->attr.exclude_* settings
s390/cpum_sf: Detect KVM guest samples
s390/cpum_sf: Add helper to read TOD from trailer entries
s390/cpum_sf: Atomically reset trailer entry fields of sample-data-blocks
s390/cpum_sf: Dynamically extend the sampling buffer if overflows occur
s390/pci: reenable per default
s390/pci/dma: fix accounting of allocated_pages
...
Since under z/VM we cannot have more than 64 cpus, make sure the
cpu_possible_mask does not contain more bits.
This avoids wasting memory for dynamic per-cpu allocations if
CONFIG_NR_CPUS is larger than 64.
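A sketch of the idea (the function name is illustrative):

  void __init smp_fill_possible_mask(void)
  {
          unsigned int possible, cpu;

          possible = MACHINE_IS_VM ? 64U : nr_cpu_ids;
          for (cpu = 0; cpu < possible && cpu < nr_cpu_ids; cpu++)
                  set_cpu_possible(cpu, true);
  }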
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently we have hardcoded the HSA size to 32 MiB. With this patch the
HSA size is determined dynamically via SCLP in early.c.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Simplify the uaccess code by removing the user_mode=home option.
The kernel will now always run in the home space mode.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Just add the new model number where appropriate.
Cc: stable@vger.kernel.org # v3.10
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The only user of saved_max_pfn on s390 was the read_oldmem interface, but
that interface has been removed. saved_max_pfn is therefore no longer
needed on s390 and we do not have to set it anymore.
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Hansen <dave@sr71.net>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Matt Fleming <matt.fleming@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Simplify the memory detection code a bit by removing the CHUNK_OLDMEM
and CHUNK_CRASHK memory types.
They are not needed. Everything that is needed is a mechanism to
insert holes into the detected memory.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The current memory detection loop will detect all present memory of
a machine. This is true even if the user specified the "mem=" parameter
on the kernel command line.
This can be a problem since the memory detection may cause a fully
populated host page table for the guest, even for those parts of the
memory that the guest will never use afterwards.
So fix this and only detect memory up to a user supplied "mem=" limit
if specified.
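The changed loop shape, as a sketch only (helper names are illustrative):

  /* stop probing storage once the "mem=" limit has been reached */
  for (addr = 0; addr < memory_end && storage_present(addr); addr += increment)
          add_memory_chunk(addr, increment);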
Reported-by: Michael Johanssen <johanssn@de.ibm.com>
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When in kdump mode the kernel may only use the first couple of
megabytes for execution; the rest contains the dump. However
the size of the bitmap used by the bootmem allocator was calculated
for the whole amount of memory of the machine.
For very large machines this can lead to the situation that the kdump
kernel will not come up because not enough memory is available.
So fix this and calculate the size of the bitmap only for the piece
of memory that the kdump kernel actually uses.
Call reserve_oldmem() before setup_memory_end() so that the memory_chunk
array already has been updated with respect to oldmem chunks.
Afterwards setup_memory_end() will ignore those chunks.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The variable real_memory_size has odd semantics and has been used in
a broken way by e.g. the old kvm code.
Therefore get rid of it before anybody else makes use of it.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use the 'ipldev' and 'condev' cio_ignore keywords to setup the
command line for zfcpdump.
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Store the stack pointers in the lowcore for the kernel stack, the async
stack and the panic stack with the offset required for the first user.
This avoids an unnecessary add instruction on the system call path.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The size of the vmemmap must be a multiple of PAGES_PER_SECTION, since the
common code always initializes the vmemmap in such pieces.
So we must round up in order to avoid a vmemmap that is too small.
Fixes an IPL crash on 31 bit with more than 1920MB.
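The rounding itself is essentially a one-liner; as a sketch (using the common
SECTION_ALIGN_UP helper):

  /* the vmemmap has to cover full sections */
  pages = SECTION_ALIGN_UP(pages);
  vmemmap_size = pages * sizeof(struct page);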
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Export pm_power_off symbol. Needed by at least one of the new device
drivers that come with CONFIG_PCI.
And all other architectures export that symbol as well.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Allow generating code that only runs on zEC12 machines.
Also add a check which prevents the kernel from running on machines
that lack any of the following new facilities:
- (48) decimal-floating-point zoned-conversion
- (49) execution-hint
- (49) load-and-trap
- (49) miscellaneous-instruction-extensions
- (49) processor-assist
- (50) constrained transactional-execution
- (73) transactional-execution
48, 49, 50 and 73 are the bit numbers of the facility indications for
each of the required facilities.
Note that we assume that user-space gets compiled with the same
compiler options, therefore we also test for a dfp facility even
if the kernel doesn't make use of it.
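A hedged C sketch of such a check (the real test runs very early during
startup; the function name and wait code are illustrative):

  static const int required_facilities[] = { 48, 49, 50, 73 };

  static void __init check_zec12_facilities(void)
  {
          int i;

          for (i = 0; i < ARRAY_SIZE(required_facilities); i++)
                  if (!test_facility(required_facilities[i]))
                          disabled_wait(0x8badcc12);      /* wait code is illustrative */
  }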
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Move and rename init_storage_keys() to pageattr.c, so it can also be
used from the sclp memory hotplug code in order to initialize
storage keys.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add a special module area on top of the vmalloc area, which may only be
used for modules and bpf jit generated code.
This makes sure that inter module branches will always happen without a
trampoline and in addition having all the code within a 2GB frame is
branch prediction unit friendly.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make use of the pfmf instruction, if available, to initialize storage keys
of whole 1MB or 2GB frames instead of initializing every single page with
the sske instruction.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Since "Kconfig: split the s390 base menu" CONFIG_KEXEC gets always selected.
Therefore there is no point in keeping CONFIG_KEXEC anywhere.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Shorten the code for addressing mode initialization. Also add missing
__init annotations, since this code is only used during kernel initialization.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Renaming the globally visible variable "user_mode" to "addressing_mode" in
order to fix a name clash was not a good idea. (Commit 37fe1d73 "s390/mm:
rename user_mode variable to addressing_mode")
Looking at the code after a couple of weeks one thinks: addressing mode of
what?
So rename the variable again, this time to s390_user_mode, which
hopefully makes more sense.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Change the default addressing mode so that user space runs in primary space
and the kernel runs in home space.
In addition remove the "switch_amode" kernel parameter, so that users who
already specified it to get what is now the default behaviour do not end
up with the opposite of what they intended.
If there is a need to switch addressing modes, this can be done with the
"user_mode" kernel parameter: user_mode=home
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Allow user-space processes to use transactional execution (TX).
If the TX facility is available user space programs can use
transactions for fine-grained serialization based on the data
objects that are referenced during a transaction. This is
useful for lockless data structures and speculative compiler
optimizations.
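A user-space sketch of such fine-grained serialization (assumes GCC's s390
HTM builtins, i.e. compiling with -mhtm on a TX-capable machine):

  #include <stdio.h>

  static long counter;

  int main(void)
  {
          if (__builtin_tbegin(0) == 0) {         /* 0 == transaction started */
                  counter++;                      /* isolated, speculative update */
                  __builtin_tend();
          } else {
                  /* transaction aborted: fall back to a conventional atomic update */
                  __sync_fetch_and_add(&counter, 1);
          }
          printf("counter=%ld\n", counter);
          return 0;
  }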
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The bit for high gprs in the AT_HWCAP auxiliary vector field and the
highgprs tag in the output of /proc/cpuinfo should not be set for
31 bit kernels.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>