On ppc64 we support 4K hash ptes with a 64K page size. That requires
us to track the hash pte slot information on a per-4K basis. We do that
by storing the slot details in the second half of the pte page. The pte
bit _PAGE_COMBO is used to indicate whether the second half needs to be
looked at while building the real_pte. We need a read memory barrier
there so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO
check. On the store side we already do an lwsync in __hash_page_4K.
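A minimal sketch of the resulting reader, assuming the usual layout
where the slot array lives one pte page above ptep:

    static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
    {
        real_pte_t rpte;

        rpte.pte = pte;
        rpte.hidx = 0;
        if (pte_val(pte) & _PAGE_COMBO) {
            /*
             * Make sure we order the hidx load against the
             * _PAGE_COMBO check. The store side ordering is
             * done with the lwsync in __hash_page_4K.
             */
            smp_rmb();
            rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
        }
        return rpte;
    }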
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We would get wrong results if the compiler recomputed old_pmd by
re-reading *pmdp. Avoid that by using ACCESS_ONCE.
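The pattern, roughly (pmdp as in the surrounding hashing code):

    /*
     * Read the pmd exactly once; without ACCESS_ONCE the compiler is
     * free to re-load *pmdp later and hand us a different old_pmd.
     */
    pmd_t old_pmd = ACCESS_ONCE(*pmdp);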
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
As per the ISA, for a 4k base page size we compare bits 14..65 of the
VA specified with the entry VA in the TLB. That implies we need to make
sure we do a tlbie with every possible 4k VA we used to access the 16MB
hugepage. With a 64k base page size we compare bits 14..57 of the VA.
Hence we cannot ignore the lower 24 bits of the VA when doing the
tlbie. We also cannot invalidate a 16MB TLB entry with just one tlbie
instruction, because we don't track which VA was used to instantiate
the TLB entry.
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
If we change the base page size of a segment, either via
sub_page_protect or via remap_4k_pfn, we do a demote_segment, which
doesn't flush the hash table entries. We instead do a lazy hash page
table flush for all mapped pages in the demoted segment; this happens
when we handle the hash page fault for these pages.
We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether
a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the
pte, that implies we could have older 64K hash pte entries in the hash
page table, and we need to invalidate those entries.
Use _PAGE_COMBO to determine the page size with which we should
invalidate the hash table entries on unmap.
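A sketch of the decision, with names as in the powerpc hash MMU code
(simplified; vpn/rpte/ssize/local come from the unmap path):

    if (pte_val(pte) & _PAGE_COMBO)
        flush_hash_page(vpn, rpte, MMU_PAGE_4K, ssize, local);
    else
        flush_hash_page(vpn, rpte, MMU_PAGE_64K, ssize, local);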
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
If we change the base page size of a segment, either via
sub_page_protect or via remap_4k_pfn, we do a demote_segment, which
doesn't flush the hash table entries. We instead do a lazy hash page
table flush for all mapped pages in the demoted segment; this happens
when we handle the hash page fault for these pages.
We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether
a pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the
pte, that implies we could have older 64K hash pte entries in the hash
page table, and we need to invalidate those entries.
Handle this correctly for 16M pages.
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The segment identifier and segment size remain the same across the
loop, so we can compute them outside it. We also change the
hugepage_invalidate() interface so that we can use it in a later patch.
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
With hugepages, we store the hpte valid information in the pte page
whose address is stored in the second half of the PMD. Use a write
barrier to make sure that clearing the pmd busy bit and updating the
hpte valid info are properly ordered.
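A sketch of the required ordering (mark_hpte_slot_valid() as in the
hugepage hashing code; the pmd update is simplified):

    mark_hpte_slot_valid(hpte_slot_array, index, slot);
    /*
     * Make the hpte-valid update visible before the busy bit is
     * cleared and other users are allowed to look at the slot array.
     */
    smp_wmb();
    *pmdp = __pmd(new_pmd & ~_PAGE_BUSY);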
CC: <stable@vger.kernel.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There is an issue currently where NUMA information is used on powerpc
(and possibly ia64) before it has been read from the device-tree, which
leads to large slab consumption with CONFIG_SLUB and memoryless nodes.
On NUMA powerpc, a non-boot CPU's cpu_to_node/cpu_to_mem is only
accurate after start_secondary() (similar to ia64), which is invoked
via smp_init().
Commit 6ee0578b4d ("workqueue: mark init_workqueues() as
early_initcall()") made init_workqueues() be invoked via
do_pre_smp_initcalls(), which is obviously before the secondary
processors are online.
Additionally, the following commits changed init_workqueues() to use
cpu_to_node to determine the node to use for kthread_create_on_node:
bce903809a ("workqueue: add wq_numa_tbl_len and
wq_numa_possible_cpumask[]")
f3f90ad469 ("workqueue: determine NUMA node of workers accourding to
the allowed cpumask")
Therefore, when init_workqueues() runs, it sees all CPUs as being on
Node 0. On LPARs or KVM guests where Node 0 is memoryless, this leads to
a high number of slab deactivations
(http://www.spinics.net/lists/linux-mm/msg67489.html).
Fix this by initializing the powerpc-specific CPU<->node/local memory
node mapping as early as possible, which on powerpc is
do_init_bootmem(). Currently that function initializes the mapping for
the boot CPU, but we extend it to setup the mapping for all possible
CPUs. Then, in smp_prepare_cpus(), we can correspondingly set the
per-cpu values for all possible CPUs. That ensures that before the
early_initcalls run (and really as early as possible), the per-cpu NUMA
mapping is accurate.
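A sketch of the shape of the fix, using the function and table names
from the powerpc NUMA code (simplified):

    for_each_possible_cpu(cpu) {
        /* numa_cpu_lookup_table[] is filled in do_init_bootmem() */
        int nid = numa_cpu_lookup_table[cpu];

        set_cpu_numa_node(cpu, nid);
        set_cpu_numa_mem(cpu, local_memory_node(nid));
    }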
While testing memoryless nodes on PowerKVM guests with a fix to the
workqueue logic to use cpu_to_mem() instead of cpu_to_node(), with a
guest topology of:
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
node 0 size: 0 MB
node 0 free: 0 MB
node 1 cpus: 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
node 1 size: 16336 MB
node 1 free: 15329 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10
the slab consumption decreases from
Slab: 932416 kB
SUnreclaim: 902336 kB
to
Slab: 395264 kB
SUnreclaim: 359424 kB
And we see a corresponding increase in slab efficiency, from

    slab                  mem     objs    slabs
                          used    active  active
    ------------------------------------------------------------
    kmalloc-16384         337 MB  11.28%  100.00%
    task_struct           288 MB   9.93%  100.00%

to

    slab                  mem     objs    slabs
                          used    active  active
    ------------------------------------------------------------
    kmalloc-16384          37 MB  100.00% 100.00%
    task_struct            31 MB  100.00% 100.00%
Powerpc didn't support memoryless nodes until recently (64bb80d87f
"powerpc/numa: Enable CONFIG_HAVE_MEMORYLESS_NODES" and 8c27226119
"powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"). Those commits also
helped improve memory consumption in these kinds of environments.
Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Free memory allocated using kmem_cache_zalloc using kmem_cache_free
rather than kfree.
The Coccinelle semantic patch that makes this change is as follows:
// <smpl>
@@
expression x,E,c;
@@
x = \(kmem_cache_alloc\|kmem_cache_zalloc\|kmem_cache_alloc_node\)(c,...)
... when != x = E
when != &x
?-kfree(x)
+kmem_cache_free(c,x)
// </smpl>
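For example, on a hypothetical allocation site (obj and my_cache are
illustrative names), the rule rewrites:

    obj = kmem_cache_zalloc(my_cache, GFP_KERNEL);
    ...
    kfree(obj);

into:

    obj = kmem_cache_zalloc(my_cache, GFP_KERNEL);
    ...
    kmem_cache_free(my_cache, obj);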
Signed-off-by: Himangi Saraogi <himangi774@gmail.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
A buffer returned by H_VTERM_PARTNER_INFO contains device information
in big endian format, causing problems for little endian architectures.
This patch ensures that the values are converted to CPU endianness.
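A sketch of the conversion (the exact layout of the returned buffer is
assumed here):

    partition_id = be32_to_cpu(((__be32 *)pi_buff)[0]);
    unit_address = be32_to_cpu(((__be32 *)pi_buff)[1]);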
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
xmon only soft disables interrupts. This seems like a bad idea - we
certainly don't want decrementer and PMU exceptions going off when
we are debugging something inside xmon.
This issue was uncovered when the hard lockup detector went off
inside xmon. To ensure we won't get a spurious hard lockup warning,
I also call touch_nmi_watchdog() when exiting xmon.
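The entry/exit paths then look roughly like this (simplified from
xmon_core(); error paths omitted):

    local_irq_save(flags);
    hard_irq_disable();    /* also stop decrementer and PMU exceptions */
    ...
    touch_nmi_watchdog();  /* avoid a spurious hard lockup warning */
    local_irq_restore(flags);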
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
It appears that commits 7f06f21d40 ("powerpc/tm: Add checking to
treclaim/trechkpt") and e4e3812150 ("KVM: PPC: Book3S HV: Add
transactional memory support") both added definitions of TEXASR_FS.
Remove one of them. At the same time, fix the alignment of the remaining
definition (should be tab-separated like the rest of the #defines).
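The surviving definition then takes a shape along these lines
(illustrative; see arch/powerpc/include/asm/reg.h for the real one):

    #define TEXASR_FS	__MASK(63-36)	/* TEXASR Failure Summary */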
Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Avoids this warning:
arch/powerpc/boot/gunzip_util.c:118:9: warning: comparison of distinct pointer types lacks a cast
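This is the classic min() type-mismatch warning; the fix is to give
both arguments the same type, along these lines (a sketch, field names
assumed):

    len = min(len, (unsigned long)state->s.avail_out);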
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The PowerNV platform is capable of capturing a host memory region when
the system crashes (because of a host/firmware failure). We have a new
OPAL API to register/unregister the memory region to be captured when
the system crashes.
This patch adds support for the new API. During boot we register the
kernel log buffer, and we unregister it before doing a kexec.
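A sketch of the boot-time registration (close to the powernv code;
error handling abbreviated):

    rc = opal_register_dump_region(OPAL_DUMP_REGION_LOG_BUF,
                                   __pa(log_buf_addr_get()),
                                   log_buf_len_get());
    if (rc && rc != OPAL_UNSUPPORTED)
        pr_warn("DUMP: Failed to register kernel log buffer. rc = %d\n", rc);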
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We have been a bit slack about updating the CPU_FTRS_POSSIBLE and
CPU_FTRS_ALWAYS masks. When we added POWER8, and also POWER8E we forgot
to update the ALWAYS mask. And when we added POWER8_DD1 we forgot to
update both the POSSIBLE and ALWAYS masks.
Luckily this hasn't caused any actual bugs AFAICS. Failing to update the
ALWAYS mask just forgoes a potential optimisation opportunity. Failing
to update the POSSIBLE mask for POWER8_DD1 is also OK because it only
removes a bit rather than adding any.
Regardless, they should all be in both masks so as to avoid any future
bugs when the set of ALWAYS/POSSIBLE bits changes, or the masks
themselves change.
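The shape of the fix in cputable.h, trimmed to the POWER8 entries for
illustration (the real mask lists are much longer):

    #define CPU_FTRS_POSSIBLE	(CPU_FTRS_POWER8 | CPU_FTRS_POWER8E | \
				 CPU_FTRS_POWER8_DD1)
    #define CPU_FTRS_ALWAYS	(CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \
				 CPU_FTRS_POWER8_DD1)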
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Michael Neuling <mikey@neuling.org>
Acked-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This patch disables the branch target address CAM which under specific
circumstances may cause the processor to skip execution of 1-4
instructions. This fixes IBM Erratum #47.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When we take a full hotplug to recover from EEH errors, PCI buses
could be involved. In that case, the child devices of the involved
PCI buses can't be attached to their IOMMU group properly, which is
caused by commit 3f28c5a ("powerpc/powernv: Reduce multi-hit of
iommu_add_device()").
When adding the PCI devices of the newly created PCI buses to the
system, the IOMMU group is expected to be added in (C). (A) fails to
bind the IOMMU group because bus->is_added is false. (B) fails because
the device doesn't have its IOMMU table bound yet. bus->is_added is set
to true at the end of (C), and pdev->is_added is set to true at (D).
pcibios_add_pci_devices()
  pci_scan_bridge()
    pci_scan_child_bus()
      pci_scan_slot()
        pci_scan_single_device()
          pci_scan_device()
            pci_device_add()
              pcibios_add_device()        A: Ignore
              device_add()                B: Ignore
      pcibios_fixup_bus()
        pcibios_setup_bus_devices()
          pcibios_setup_device()          C: Hit
  pcibios_finish_adding_to_bus()
    pci_bus_add_devices()
      pci_bus_add_device()                D: Add device
If the parent PCI bus isn't involved in hotplug, the IOMMU
group is expected to be bound in (B). (A) should fail as the
sysfs entries aren't populated.
The patch fixes the issue by reverting commit 3f28c5a and removing the
WARN_ON() in iommu_add_device(), to allow calling the function even
when the specified device already has an associated IOMMU group.
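A sketch of the resulting check (the real function lives in
arch/powerpc/kernel/iommu.c; error handling abbreviated):

    int iommu_add_device(struct device *dev)
    {
        struct iommu_table *tbl;

        /* was a WARN_ON(); now just bail out quietly so the EEH
         * full-hotplug path may call in again later */
        if (dev->iommu_group)
            return -EBUSY;

        tbl = get_iommu_table_base(dev);
        if (!tbl)
            return 0;

        return iommu_group_add_device(tbl->it_group, dev);
    }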
Cc: <stable@vger.kernel.org> # 3.16+
Reported-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
As with the previous commit, which described why we need to add a
barrier to arch_spin_is_locked(), we have a similar problem with
spin_unlock_wait().
We need a barrier on entry to ensure any spinlock we have previously
taken is visibly locked prior to the load of lock->slock.
It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
semantics. For now be conservative and add a barrier on exit to give it
ACQUIRE semantics.
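A simplified sketch of the result (the real powerpc version also
yields to the hypervisor while spinning under shared processors):

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
        smp_mb();   /* make our prior lock acquisitions visible */
        while (lock->slock)
            cpu_relax();
        smp_mb();   /* conservative ACQUIRE semantics on exit */
    }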
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The kernel defines the function spin_is_locked(), which can be used to
check if a spinlock is currently locked.
Using spin_is_locked() on a lock you don't hold is obviously racy. That
is, even though you may observe that the lock is unlocked, it may become
locked at any time.
There is (at least) one exception to that, which is if two locks are
used as a pair, and the holder of each checks the status of the other
before doing any update.
Assuming *A and *B are two locks, and *COUNTER is a shared non-atomic
value:
The first CPU does:
    spin_lock(*A)

    if spin_is_locked(*B)
        # nothing
    else
        smp_mb()
        LOAD r = *COUNTER
        r++
        STORE *COUNTER = r

    spin_unlock(*A)
And the second CPU does:
    spin_lock(*B)

    if spin_is_locked(*A)
        # nothing
    else
        smp_mb()
        LOAD r = *COUNTER
        r++
        STORE *COUNTER = r

    spin_unlock(*B)
Although this is a strange locking construct, it should work.
It seems to be understood, but not documented, that spin_is_locked() is
not a memory barrier, so in the examples above and below the caller
inserts its own memory barrier before acting on the result of
spin_is_locked().
For now we assume spin_is_locked() is implemented as below, and we break
it out in our examples:
    bool spin_is_locked(*LOCK) {
        LOAD l = *LOCK
        return l.locked
    }
Our intuition is that there should be no problem even if the two code
sequences run simultaneously such as:
    CPU 0                       CPU 1
    ==================================================
    spin_lock(*A)               spin_lock(*B)
    LOAD b = *B                 LOAD a = *A
    if b.locked # true          if a.locked # true
    # nothing                   # nothing
    spin_unlock(*A)             spin_unlock(*B)
If one CPU gets the lock before the other then it will do the update and
the other CPU will back off:
    CPU 0                       CPU 1
    ==================================================
    spin_lock(*A)
    LOAD b = *B
                                spin_lock(*B)
    if b.locked # false         LOAD a = *A
    else                        if a.locked # true
    smp_mb()                    # nothing
    LOAD r1 = *COUNTER          spin_unlock(*B)
    r1++
    STORE *COUNTER = r1
    spin_unlock(*A)
However in reality spin_lock() itself is not indivisible. On powerpc we
implement it as a load-and-reserve and store-conditional.
Ignoring the retry logic for the lost reservation case, it boils down to:
    spin_lock(*LOCK) {
        LOAD l = *LOCK
        l.locked = true
        STORE *LOCK = l
        ACQUIRE_BARRIER
    }
The ACQUIRE_BARRIER is required to give spin_lock() ACQUIRE semantics as
defined in memory-barriers.txt:
    This acts as a one-way permeable barrier. It guarantees that all
    memory operations after the ACQUIRE operation will appear to happen
    after the ACQUIRE operation with respect to the other components of
    the system.
On modern powerpc systems we use lwsync for ACQUIRE_BARRIER. lwsync is
also known as "lightweight sync", or "sync 1".
As described in Power ISA v2.07 section B.2.1.1, in this scenario the
lwsync is not the barrier itself. It instead causes the LOAD of *LOCK to
act as the barrier, preventing any loads or stores in the locked region
from occurring prior to the load of *LOCK.
Whether this behaviour is in accordance with the definition of ACQUIRE
semantics in memory-barriers.txt is open to discussion, we may switch to
a different barrier in future.
What this means in practice is that the following can occur:
    CPU 0                       CPU 1
    ==================================================
    LOAD a = *A                 LOAD b = *B
    a.locked = true             b.locked = true
    LOAD b = *B                 LOAD a = *A
    STORE *A = a                STORE *B = b
    if b.locked # false         if a.locked # false
    else                        else
    smp_mb()                    smp_mb()
    LOAD r1 = *COUNTER          LOAD r2 = *COUNTER
    r1++                        r2++
    STORE *COUNTER = r1
                                STORE *COUNTER = r2  # Lost update
    spin_unlock(*A)             spin_unlock(*B)
That is, the load of *B can occur prior to the store that makes *A
visibly locked. And similarly for CPU 1. The result is both CPUs hold
their lock and believe the other lock is unlocked.
The easiest fix for this is to add a full memory barrier to the start of
spin_is_locked(), so adding to our previous definition would give us:
    bool spin_is_locked(*LOCK) {
        smp_mb()
        LOAD l = *LOCK
        return l.locked
    }
The new barrier orders the store to the lock we are locking vs the load
of the other lock:
    CPU 0                       CPU 1
    ==================================================
    LOAD a = *A                 LOAD b = *B
    a.locked = true             b.locked = true
    STORE *A = a                STORE *B = b
    smp_mb()                    smp_mb()
    LOAD b = *B                 LOAD a = *A
    if b.locked # true          if a.locked # true
    # nothing                   # nothing
    spin_unlock(*A)             spin_unlock(*B)
Although the above example is theoretical, there is code similar to this
example in sem_lock() in ipc/sem.c. This commit in addition to the next
commit appears to be a fix for crashes we are seeing in that code where
we believe this race happens in practice.
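In C, the fixed powerpc primitive looks essentially like this (as in
arch/powerpc/include/asm/spinlock.h after the change):

    static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
    {
        return lock.slock == 0;
    }

    static inline int arch_spin_is_locked(arch_spinlock_t *lock)
    {
        smp_mb();   /* order our lock's store vs the load below */
        return !arch_spin_value_unlocked(*lock);
    }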
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Once again, we see
arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
arch/powerpc/kernel/exceptions-64s.S:865: Error: attempt to move .org backwards
arch/powerpc/kernel/exceptions-64s.S:866: Error: attempt to move .org backwards
arch/powerpc/kernel/exceptions-64s.S:890: Error: attempt to move .org backwards
when compiling ppc:allmodconfig.
This time the problem has been caused by commit 0869b6fd20
("powerpc/book3s: Add basic infrastructure to handle HMI in Linux"),
which adds the functions hmi_exception_early and
hmi_exception_after_realmode into a critical (size-limited) code area,
even though that does not appear to be necessary.
Move those functions to a non-critical area of the file.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
__early_init_mmu() does some things that are really only needed by the
boot cpu. On FSL booke, this includes calling
memblock_enforce_memory_limit(), which is labelled __init. Secondary
cpu init code can't be __init, as that would break CPU hotplug.
While it's probably a bug that memblock_enforce_memory_limit() isn't
__init_memblock instead, there's no reason why we should be doing this
stuff for secondary cpus in the first place.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
- A short branch of OMAP fixes that we didn't merge before the window opened.
- A small cleanup that sorts the rk3288 dts entries properly
- A build fix due to a reference to a removed DT node on exynos
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
- a short branch of OMAP fixes that we didn't merge before the window
opened.
- a small cleanup that sorts the rk3288 dts entries properly
- a build fix due to a reference to a removed DT node on exynos
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: dts: exynos5420: remove disp_pd
ARM: EXYNOS: Fix suspend/resume sequences
ARM: dts: Fix the sort ordering of EHCI and HSIC in rk3288.dtsi
ARM: OMAP3: Fix coding style problems in arch/arm/mach-omap2/control.c
ARM: OMAP3: Fix choice of omap3_restore_es function in OMAP34XX rev3.1.2 case.
ARM: OMAP2+: clock: allow omap2_dpll_round_rate() to round to next-lowest rate
Nicolas Pitre added generic tracepoints for tracing IPIs and updated
the arm and arm64 architectures. It required some minor updates to the
generic tracepoint system, so it had to wait for me to implement them.
Merge tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull IPI tracepoints for ARM from Steven Rostedt:
"Nicolas Pitre added generic tracepoints for tracing IPIs and updated
the arm and arm64 architectures. It required some minor updates to
the generic tracepoint system, so it had to wait for me to implement
them"
* tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ARM64: add IPI tracepoints
ARM: add IPI tracepoints
tracepoint: add generic tracepoint definitions for IPI tracing
tracing: Do not do anything special with tracepoint_string when tracing is disabled
Pull arch signal handling cleanup from Richard Weinberger:
"This patch series moves all remaining archs to the get_signal(),
signal_setup_done() and sigsp() functions.
Currently these archs use open coded variants of the said functions.
Further, unused parameters get removed from get_signal_to_deliver(),
tracehook_signal_handler() and signal_delivered().
At the end of the day we save around 500 lines of code."
* 'signal-cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (43 commits)
powerpc: Use sigsp()
openrisc: Use sigsp()
mn10300: Use sigsp()
mips: Use sigsp()
microblaze: Use sigsp()
metag: Use sigsp()
m68k: Use sigsp()
m32r: Use sigsp()
hexagon: Use sigsp()
frv: Use sigsp()
cris: Use sigsp()
c6x: Use sigsp()
blackfin: Use sigsp()
avr32: Use sigsp()
arm64: Use sigsp()
arc: Use sigsp()
sas_ss_flags: Remove nested ternary if
Rip out get_signal_to_deliver()
Clean up signal_delivered()
tracehook_signal_handler: Remove sig, info, ka and regs
...
Pull ARM fixes from Russell King:
"A number of small fixes:
- fix loading of the translation table base registers for LPAE
- add two new syscalls to the ARM syscall tables"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: wire up memfd_create syscall
ARM: wire up getrandom syscall
ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1
Mostly cleanup/refactoring in core intc, cache flush, IPI send...
Merge tag 'arc-v3.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC changes from Vineet Gupta:
"Mostly cleanup/refactoring in core intc, cache flush, IPI send..."
* tag 'arc-v3.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
mm, arc: remove obsolete pagefault oom killer comment
ARC: help gcc elide icache helper for !SMP
ARC: move common ops for line/full cache into helpers
ARC: cache boot reporting updates
ARC: [intc] mask/unmask can be hidden again
ARC: [plat-arcfpga] No need for init_irq hack
ARC: [intc] don't mask all IRQ by default
ARC: prune extra header includes from smp.c
ARC: update some comments
ARC: [SMP] unify cpu private IRQ requests (TIMER/IPI)
Merge tag 'please-pull-getrandom' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux
Pull ia64 system call update from Tony Luck:
"Wire up getrandom system call for ia64"
* tag 'please-pull-getrandom' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux:
[IA64] Wire up getrandom() system call
This was caused by commit 5a8da52404 ("ARM: dts: exynos5420: add dsi
node"), which conflicted with d51cad7df8 ("ARM: dts: remove display
power domain for exynos5420").
The DTS addition should never have been merged through the DRM tree in
the first place, and it lacked an ack from the platform maintainer
(who would have known that the disp_pd reference got removed).
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Olof Johansson <olof@lixom.net>
Due to recent consolidation of Exynos suspend and cpuidle code, some
parts of suspend and resume sequences are executed two times, once from
exynos_pm_syscore_ops and then from exynos_cpu_pm_notifier() and thus it
breaks suspend, at least on Exynos4-based boards. In addition, a
simple core power down from the cpuidle driver could, in the case of
CPU 0, result in calling functions that are specific to suspend and
deeper idle states.
This patch fixes the issue by moving those operations outside the CPU PM
notifier into suspend and AFTR code paths. This leads to a bit of code
duplication, but allows additional code simplification, so in the end
more code is removed than added.
Fixes: 85f9f90808 ("ARM: EXYNOS: Use the cpu_pm notifier for pm")
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Olof Johansson <olof@lixom.net>
Cc: arm@kernel.org
Signed-off-by: Tomasz Figa <t.figa@samsung.com>
[b.zolnierkie: ported patch over current changes]
[b.zolnierkie: fixed exynos_aftr_finisher() return value]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
- Fix for DPLL rate rounding
- Fix for omap3 ES3.1.2 suspend
- Few coding style fixes
Note that these are based on the earlier omap-for-v3.17/soc branch.
There is no strict dependency on it, but I already merged in the pull
request from Paul for the first fix, and these are all SoC related.
Merge tag 'omap-for-v3.17/soc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Merge "few omap fixes for v3.17 merge window" from Tony Lindgren:
Few fixes for the v3.17 merge window:
- Fix for DPLL rate rounding
- Fix for omap3 ES3.1.2 suspend
- Few coding style fixes
* tag 'omap-for-v3.17/soc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP3: Fix coding style problems in arch/arm/mach-omap2/control.c
ARM: OMAP3: Fix choice of omap3_restore_es function in OMAP34XX rev3.1.2 case.
ARM: OMAP2+: clock: allow omap2_dpll_round_rate() to round to next-lowest rate
Signed-off-by: Olof Johansson <olof@lixom.net>
The EHCI and HSIC device tree nodes were added in the wrong place.
Fix them.
Signed-off-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Kever Yang <kever.yang@rock-chips.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Olof Johansson <olof@lixom.net>
This patch fixes booting when the idmap pgd lies above 4GB. Commit
4756dcbfd3 mostly fixed this, but it failed to load the upper bits.
This also fixes adding TTBR1_OFFSET to TTBR1: if the lower part
overflows, the carry must be added to the upper part.
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is the bulk of GPIO changes for the v3.17 development cycle, and
this time we got a lot of action going on and it will continue:
- The core GPIO library implementation has been split up in
three different files:
- gpiolib.c for the latest and greatest and shiny GPIO
library code using GPIO descriptors only
- gpiolib-legacy.c for the old integer number space API
that we are phasing out gradually
- gpiolib-sysfs.c for the sysfs interface that we are
not entirely happy with, but has to live on for
ABI compatibility
- Add a flags argument to *gpiod_get* functions, with some
backward-compatibility macros to ease transitions. We
should have had the flags there from the beginning it
seems, now we need to clean up the mess. There is a plan
on how to move forward here devised by Alexandre Courbot
and Mark Brown.
- Split off a special <linux/gpio/machine.h> header for the
board gpio table registration, as per example from the
regulator subsystem.
- Start to kill off the return value from gpiochip_remove()
by removing the __must_check attribute and removing all
checks inside the drivers/gpio directory. The rationale
is: well what were we supposed to do if there is an error
code? Not much: print an error message. And gpiolib already
does that. So make this function return void eventually.
- Some cleanups of hairy gpiolib code, make some functions
not to be used outside the library private and make sure
they are not exported, remove gpiod_lock/unlock_as_irq()
as the existing function is for driver-internal use and
fine as it is, delete gpio_ensure_requested() as it is
not meaningful anymore.
- Support the GPIOF_ACTIVE_LOW flag from gpio_request_one()
function calls, which is logical since this is already
supported when referencing GPIOs from e.g. device trees.
- Switch STMPE, intel-mid, lynxpoint and ACPI (!) to use
the gpiolib irqchip helpers cutting down on GPIO irqchip
boilerplate a bit more.
- New driver for the Zynq GPIO block.
- The usual incremental improvements around a bunch of
drivers.
- Janitorial syntactic and semantic cleanups by Jingoo Han,
and Rickard Strandqvist especially.
Merge tag 'gpio-v3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio
Pull GPIO update from Linus Walleij:
"This is the bulk of GPIO changes for the v3.17 development cycle, and
this time we got a lot of action going on and it will continue:
- The core GPIO library implementation has been split up in three
different files:
- gpiolib.c for the latest and greatest and shiny GPIO library code
using GPIO descriptors only
- gpiolib-legacy.c for the old integer number space API that we are
phasing out gradually
- gpiolib-sysfs.c for the sysfs interface that we are not entirely
happy with, but has to live on for ABI compatibility
- Add a flags argument to *gpiod_get* functions, with some
backward-compatibility macros to ease transitions. We should have
had the flags there from the beginning it seems, now we need to
clean up the mess. There is a plan on how to move forward here
devised by Alexandre Courbot and Mark Brown
- Split off a special <linux/gpio/machine.h> header for the board
gpio table registration, as per example from the regulator
subsystem
- Start to kill off the return value from gpiochip_remove() by
removing the __must_check attribute and removing all checks inside
the drivers/gpio directory. The rationale is: well what were we
supposed to do if there is an error code? Not much: print an error
message. And gpiolib already does that. So make this function
return void eventually
- Some cleanups of hairy gpiolib code, make some functions not to be
used outside the library private and make sure they are not
exported, remove gpiod_lock/unlock_as_irq() as the existing
function is for driver-internal use and fine as it is, delete
gpio_ensure_requested() as it is not meaningful anymore
- Support the GPIOF_ACTIVE_LOW flag from gpio_request_one() function
calls, which is logical since this is already supported when
referencing GPIOs from e.g. device trees
- Switch STMPE, intel-mid, lynxpoint and ACPI (!) to use the gpiolib
irqchip helpers cutting down on GPIO irqchip boilerplate a bit more
- New driver for the Zynq GPIO block
- The usual incremental improvements around a bunch of drivers
- Janitorial syntactic and semantic cleanups by Jingoo Han, and
Rickard Strandqvist especially"
* tag 'gpio-v3.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio: (37 commits)
MAINTAINERS: update GPIO include files
gpio: add missing includes in machine.h
gpio: add flags argument to gpiod_get*() functions
MAINTAINERS: Update Samsung pin control entry
gpio / ACPI: Move event handling registration to gpiolib irqchip helpers
gpio: lynxpoint: Convert to use gpiolib irqchip
gpio: split gpiod board registration into machine header
gpio: remove gpio_ensure_requested()
gpio: remove useless check in gpiolib_sysfs_init()
gpiolib: Export gpiochip_request_own_desc and gpiochip_free_own_desc
gpio: move gpio_ensure_requested() into legacy C file
gpio: remove gpiod_lock/unlock_as_irq()
gpio: make gpiochip_get_desc() gpiolib-private
gpio: simplify gpiochip_export()
gpio: remove export of private of_get_named_gpio_flags()
gpio: Add support for GPIOF_ACTIVE_LOW to gpio_request_one functions
gpio: zynq: Clear pending interrupt when enabling a IRQ
gpio: drop retval check enforcing from gpiochip_remove()
gpio: remove all usage of gpio_remove retval in driver/gpio
devicetree: Add Zynq GPIO devicetree bindings documentation
...
Pull input updates from Dmitry Torokhov:
- big update to Wacom driver by Benjamin Tissoires, converting it to
HID infrastructure and unifying USB and Bluetooth models
- large update to ALPS driver by Hans de Goede, which adds support for
newer touchpad models as well as cleans up and restructures the code
- more changes to Atmel MXT driver, including device tree support
- new driver for iPaq x3xxx touchscreen
- driver for serial Wacom tablets
- driver for Microchip's CAP1106
- assorted cleanups and improvements to existing drivers and input core
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input: (93 commits)
Input: wacom - update the ABI doc according to latest changes
Input: wacom - only register once the MODULE_* macros
Input: HID - remove hid-wacom Bluetooth driver
Input: wacom - add copyright note and bump version to 2.0
Input: wacom - remove passing id for wacom_set_report
Input: wacom - check for bluetooth protocol while setting OLEDs
Input: wacom - handle Intuos 4 BT in wacom.ko
Input: wacom - handle Graphire BT tablets in wacom.ko
Input: wacom - prepare the driver to include BT devices
Input: hyperv-keyboard - register as a wakeup source
Input: imx_keypad - remove ifdef round PM methods
Input: jornada720_ts - get rid of space indentation and use tab
Input: jornada720_ts - switch to using managed resources
Input: alps - Rushmore and v7 resolution support
Input: mcs5000_ts - remove ifdef around power management methods
Input: mcs5000_ts - protect PM functions with CONFIG_PM_SLEEP
Input: ads7846 - release resources on failure for clean exit
Input: wacom - add support for 0x12C ISDv4 sensor
Input: atmel_mxt_ts - use deep sleep mode when stopped
ARM: dts: am437x-gp-evm: Update binding for touchscreen size
...
Merge more incoming from Andrew Morton:
"Two new syscalls:
memfd_create in "shm: add memfd_create() syscall"
kexec_file_load in "kexec: implementation of new syscall kexec_file_load"
And:
- Most (all?) of the rest of MM
- Lots of the usual misc bits
- fs/autofs4
- drivers/rtc
- fs/nilfs
- procfs
- fork.c, exec.c
- more in lib/
- rapidio
- Janitorial work in filesystems: fs/ufs, fs/reiserfs, fs/adfs,
fs/cramfs, fs/romfs, fs/qnx6.
- initrd/initramfs work
- "file sealing" and the memfd_create() syscall, in tmpfs
- add pci_zalloc_consistent, use it in lots of places
- MAINTAINERS maintenance
- kexec feature work"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (193 commits)
MAINTAINERS: update nomadik patterns
MAINTAINERS: update usb/gadget patterns
MAINTAINERS: update DMA BUFFER SHARING patterns
kexec: verify the signature of signed PE bzImage
kexec: support kexec/kdump on EFI systems
kexec: support for kexec on panic using new system call
kexec-bzImage64: support for loading bzImage using 64bit entry
kexec: load and relocate purgatory at kernel load time
purgatory: core purgatory functionality
purgatory/sha256: provide implementation of sha256 in purgaotory context
kexec: implementation of new syscall kexec_file_load
kexec: new syscall kexec_file_load() declaration
kexec: make kexec_segment user buffer pointer a union
resource: provide new functions to walk through resources
kexec: use common function for kimage_normal_alloc() and kimage_crash_alloc()
kexec: move segment verification code in a separate function
kexec: rename unusebale_pages to unusable_pages
kernel: build bin2c based on config option CONFIG_BUILD_BIN2C
bin2c: move bin2c in scripts/basic
shm: wait for pins to be released when sealing
...
This is the final piece of the puzzle of verifying kernel image signature
during kexec_file_load() syscall.
This patch calls into the PE file routines to verify the signature of
the bzImage. If the signature is valid, kexec_file_load() succeeds;
otherwise it fails.
Two new config options have been introduced. The first one is
CONFIG_KEXEC_VERIFY_SIG. This option enforces that the kernel has to
be validly signed, otherwise the kernel load will fail. If this option
is not set, no signature verification will be done. The only exception
will be when secureboot is enabled; in that case signature verification
should be automatically enforced. But that will happen when the
secureboot patches are merged.
The second config option is CONFIG_KEXEC_BZIMAGE_VERIFY_SIG. This
option enables signature verification support for bzImage. If this
option is not set and the previous one is set, kernel image loading
will fail, because the kernel does not have support to verify the
signature of a bzImage.
I tested these patches with both "pesign" and "sbsign" signed bzImages.
I used signing_key.priv key and signing_key.x509 cert for signing as
generated during kernel build process (if module signing is enabled).
Used following method to sign bzImage.
pesign
======
- Convert DER format cert to PEM format cert
  openssl x509 -in signing_key.x509 -inform DER -out signing_key.x509.PEM -outform PEM
- Generate a .p12 file from existing cert and private key file
  openssl pkcs12 -export -out kernel-key.p12 -inkey signing_key.priv -in signing_key.x509.PEM
- Import .p12 file into pesign db
  pk12util -i /tmp/kernel-key.p12 -d /etc/pki/pesign
- Sign bzImage
  pesign -i /boot/vmlinuz-3.16.0-rc3+ -o /boot/vmlinuz-3.16.0-rc3+.signed.pesign -c "Glacier signing key - Magrathea" -s

sbsign
======
sbsign --key signing_key.priv --cert signing_key.x509.PEM --output /boot/vmlinuz-3.16.0-rc3+.signed.sbsign /boot/vmlinuz-3.16.0-rc3+
Patch details:
Well, all the hard work is done in the previous patches. Now the
bzImage loader just has to call into that code and verify whether the
bzImage signature is valid or not.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Matt Fleming <matt@console-pimps.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch does two things. It passes the EFI runtime mappings to the
second kernel in bootparams' efi_info. The second kernel parses this
info and creates the same mappings, so the mappings in the first and
second kernel will be identical. This paves the way to enabling EFI in
the kexec kernel.
This patch also prepares and passes EFI setup data through bootparams.
This contains a bunch of information about various tables and their
addresses.
This information gathering and passing has been written along the lines
of what the current kexec-tools does to make kexec work with UEFI.
[akpm@linux-foundation.org: s/get_efi/efi_get/g, per Matt]
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch adds support for loading a kexec-on-panic (kdump) kernel
using the new system call.
It prepares ELF headers for the memory areas to be dumped and for the
saved cpu registers. It also prepares the memory map for the second
kernel and limits its boot to the reserved areas only.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is the loader-specific code which can load a bzImage and set it
up for 64bit entry. This does not take care of 32bit entry or real-mode
entry. 32bit entry can be implemented if somebody needs it.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Load the purgatory code in RAM and relocate it based on its location.
The relocation code has been inspired by the module relocation code and
the purgatory relocation code in kexec-tools.
Also compute the checksums of the loaded kexec segments and store them
in purgatory.
The arch-independent code provides this functionality so that
arch-dependent bootloaders can make use of it.
Helper functions are provided to get/set symbol values in purgatory,
which are used by bootloaders later to set things like the stack and
the entry point of the second kernel.
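For instance, an arch loader can poke a value into a purgatory symbol
roughly like this (the symbol name "stack_end" is illustrative):

    /* value computed by the arch loader; names illustrative */
    unsigned long stack = stack_load_addr + stack_size;

    ret = kexec_purgatory_get_set_symbol(image, "stack_end", &stack,
                                         sizeof(stack), 0 /* set */);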
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Create a stand-alone relocatable object, purgatory, which runs between
the two kernels. The name, concept and some code have been taken from
kexec-tools. The idea is that this code runs after a crash, in a
minimal environment. So keep it separate from the rest of the kernel;
in the long term we should have to do practically no maintenance of
this code.
This code also has the logic to verify the sha256 hashes of the various
segments which have been loaded into memory. So we first verify that
the kernel we are jumping to is fine and has not been corrupted, and
make progress only if the checksums verify.
This code also takes care of copying some memory contents to the backup
region.
[sfr@canb.auug.org.au: run host built programs from objtree]
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The next two patches provide code for purgatory. This is code which
does not link against the kernel and runs stand-alone, between the two
kernels. One of the primary purposes of this code is to verify the
digest of the newly loaded kernel, making sure it matches the digest
computed at kernel load time.
We use sha256 for calculating the digest of the kexec segments.
Purgatory can't use the standard crypto API, as that API is not
available in the purgatory context. Hence, I have copied code from
crypto/sha256_generic.c and compiled it with the purgatory code so that
it could be used. I could not #include the sha256_generic.c file here,
as some of the function signatures required a little tweaking. The
original functions work with the crypto API, but these ones don't.
So instead of doing #include on sha256_generic.c I just copied the
relevant portions of code into arch/x86/purgatory/sha256.c. Now we
shouldn't have to touch this code at all. Do let me know if there are
better ways to handle it.
This patch does not enable compiling of this code; that happens in the
next patch. I wanted to highlight this change in a separate patch for
easy review.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The previous patch provided the interface definition, and this patch
provides the implementation of the new syscall.
Previously the segment list was prepared in user space. Now user space
just passes a kernel fd, an initrd fd and a command line, and the
kernel will create the segment list internally.
This patch contains the generic part of the code. The actual segment
preparation and loading is done by the arch- and image-specific loader,
which comes in the next patch.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is the new syscall kexec_file_load() declaration/interface. I
have reserved the syscall number only for x86_64 so far. Other
architectures (including i386) can reserve a syscall number when they
enable support for this new syscall.
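The declaration takes the following shape (five arguments; see the
patch for the exact prototype):

    SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
                    unsigned long, cmdline_len,
                    const char __user *, cmdline_ptr,
                    unsigned long, flags)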
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently bin2c builds only if CONFIG_IKCONFIG=y, but bin2c will now
be used by kexec too. So make its compilation dependent on
CONFIG_BUILD_BIN2C, a config option which can be selected by
CONFIG_KEXEC and CONFIG_IKCONFIG.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: WANG Chao <chaowang@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
memfd_create() is similar to mmap(MAP_ANON), but returns a file
descriptor that you can pass to mmap(). It can support sealing and
avoids any connection to user-visible mount points. Thus, it's not
subject to quotas on mounted file systems, but can be used like
malloc()'ed memory, with a file descriptor to it.
memfd_create() returns the raw shmem file, so calls like ftruncate()
can be used to modify the underlying inode. Calls like fstat() will
also return proper information and mark the file as a regular file. If
you want sealing, you can specify MFD_ALLOW_SEALING. Otherwise, sealing
is not supported (like on all other regular files).
Compared to O_TMPFILE, it does not require a tmpfs mount-point and is not
subject to a filesystem size limit. It is still properly accounted to
memcg limits, though, and to the same overcommit or no-overcommit
accounting as all user memory.
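A minimal usage sketch (assumes a memfd_create() wrapper is available;
error handling omitted):

    int fd = memfd_create("example", MFD_ALLOW_SEALING);
    ftruncate(fd, 4096);
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    /* ... fill the memory ... */
    munmap(p, 4096);
    /* F_SEAL_WRITE requires that no writable mappings remain */
    fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);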
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Ryan Lortie <desrt@desrt.ca>
Cc: Lennart Poettering <lennart@poettering.net>
Cc: Daniel Mack <zonque@gmail.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>