Usually segment and region tables are 16k aligned due to the way the
buddy allocator works. This is not true for the vmem code, which only
asks for a 4k alignment. In order to be consistent, enforce a 16k
alignment here as well.
This alignment will be assumed, and is therefore required, by the
pageattr code.
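As a side note, here is a minimal sketch of why whole-order allocations give
this alignment for free (illustrative only; the vmem code allocates its
tables through its own helpers):
	/* the buddy allocator returns blocks aligned to their own size, so an
	 * order-2 allocation (4 x 4k = 16k) is always 16k aligned */
	unsigned long table = __get_free_pages(GFP_KERNEL | __GFP_ZERO, 2);

	if (table)
		WARN_ON(table & (SZ_16K - 1));	/* never fires */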
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We already have two inline assemblies which make use of the csp
instruction. Since I need a third instance, let's introduce a generic
inline assembly which can be used by everyone.
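For reference, a rough sketch of the shape such a generic helper could take;
the register pairing and the use of the low address bit are assumptions
here, not verified details of the final code:
static inline void csp(unsigned int *ptr, unsigned int old, unsigned int new)
{
	register unsigned long reg2 asm("2") = old;
	register unsigned long reg3 asm("3") = new;
	unsigned long address = (unsigned long) ptr | 1;

	asm volatile(
		"	csp	%0,%3"
		: "+d" (reg2), "+m" (*ptr)
		: "d" (reg3), "d" (address)
		: "cc");
}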
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Commit 1e133ab296 ("s390/mm: split arch/s390/mm/pgtable.c") factored
out the page table handling code from __gmap_zap and __s390_reset_cmma
into ptep_zap_unused and added a simple flag that selects which of the
two behaviours (reset or not) is wanted. This also changed the behaviour,
as it now also zaps unused page table entries on reset.
Turns out that this is wrong, as s390_reset_cmma uses the page walker,
which DOES NOT take the ptl lock.
The simplest fix is to not do the zapping part on reset (which uses
the walker).
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: 1e133ab296 ("s390/mm: split arch/s390/mm/pgtable.c")
Cc: stable@vger.kernel.org # 4.6+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Merge tag 'powerpc-4.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- ptrace: Fix out of bounds array access warning from Khem Raj
- pseries: Fix PCI config address for DDW from Gavin Shan
- pseries: Fix IBM_ARCH_VEC_NRCORES_OFFSET since POWER8NVL was added
from Michael Ellerman
- of: fix autoloading due to broken modalias with no 'compatible' from
Wolfram Sang
- radix: Fix always false comparison against MMU_NO_CONTEXT from Aneesh
Kumar K.V
- hash: Compute the segment size correctly for ISA 3.0 from Aneesh
Kumar K.V
- nohash: Fix build break with 64K pages from Michael Ellerman
* tag 'powerpc-4.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/nohash: Fix build break with 64K pages
powerpc/mm/hash: Compute the segment size correctly for ISA 3.0
powerpc/mm/radix: Fix always false comparison against MMU_NO_CONTEXT
of: fix autoloading due to broken modalias with no 'compatible'
powerpc/pseries: Fix IBM_ARCH_VEC_NRCORES_OFFSET since POWER8NVL was added
powerpc/pseries: Fix PCI config address for DDW
powerpc/ptrace: Fix out of bounds array access warning
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fix from Will Deacon:
"A fix for an issue that Alex saw whilst swapping with hardware
access/dirty bit support enabled in the kernel: Fix a failure to fault
in old pages on a write when CONFIG_ARM64_HW_AFDBM is enabled"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: mm: always take dirty state from new pte in ptep_set_access_flags
Pull x86 fixes from Ingo Molnar:
"Misc fixes from all around the map, plus a commit that introduces a
new header of Intel model name symbols (unused) that will make the
next merge window easier"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/ioapic: Fix incorrect pointers in ioapic_setup_resources()
x86/entry/traps: Don't force in_interrupt() to return true in IST handlers
x86/cpu/AMD: Extend X86_FEATURE_TOPOEXT workaround to newer models
x86/cpu/intel: Introduce macros for Intel family numbers
x86, build: copy ldlinux.c32 to image.iso
x86/msr: Use the proper trace point conditional for writes
Pull perf fixes from Ingo Molnar:
"A handful of tooling fixes, two PMU driver fixes and a cleanup of
redundant code that addresses a security analyzer false positive"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Remove a redundant check
perf/x86/intel/uncore: Remove SBOX support for Broadwell server
perf ctf: Convert invalid chars in a string before set value
perf record: Fix crash when kptr is restricted
perf symbols: Check kptr_restrict for root
perf/x86/intel/rapl: Fix pmus free during cleanup
On a 4-socket Brickland system, hot-removing one ioapic is fine.
Hot-removing the second one causes a panic in mp_unregister_ioapic()
while calling release_resource().
This is because the iomem_res pointer has already been released
when removing the first ioapic.
To explain the use of &res[num] here: res is assigned to ioapic_resources,
and later in ioapic_insert_resources() we do:
	struct resource *r = ioapic_resources;

	for_each_ioapic(i) {
		insert_resource(&iomem_resource, r);
		r++;
	}
Here 'r' is treated as an array of 'struct resource', and the r++ ensures
that each element of the array is inserted separately. Thus we should call
release_resource() on each element at &res[num].
Fix it by assigning the correct pointers to ioapics[i].iomem_res in
ioapic_setup_resources().
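In other words, the setup loop now records a distinct array element per
ioapic, roughly as follows (a sketch of the idea, not the literal patch):
	for_each_ioapic(i) {
		/* ... fill in res[num] for this ioapic ... */
		ioapics[i].iomem_res = &res[num];	/* was: res, i.e. always &res[0] */
		num++;
	}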
Signed-off-by: Rui Wang <rui.y.wang@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: tony.luck@intel.com
Cc: linux-pci@vger.kernel.org
Cc: rjw@rjwysocki.net
Cc: linux-acpi@vger.kernel.org
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1465369193-4816-3-git-send-email-rui.y.wang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Forcing in_interrupt() to return true if we're not in a bona fide
interrupt confuses the softirq code. This fixes warnings like:
NOHZ: local_softirq_pending 282
... which can happen when running things like selftests/x86.
This will change perf's static percpu buffer usage in IST context.
I think this is okay, and it's changing the behavior to match
historical (pre-4.0) behavior.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: 9592747538 ("x86, traps: Track entry into and exit from IST context")
Link: http://lkml.kernel.org/r/cdc215f94d118d691d73df35275022331156fb45.1464130360.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 74701d5947 "powerpc/mm: Rename function to indicate we are
allocating fragments" renamed page_table_free() to pte_fragment_free().
One occurrence was mistyped as pte_fragment_fre().
This only breaks the nohash 64K page build, which is not the default or
enabled in any defconfig.
Fixes: 74701d5947 ("powerpc/mm: Rename function to indicate we are allocating fragments")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Merge tag 'arc-4.7-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fixes from Vineet Gupta:
- Revert of ll-sc backoff retry workaround in atomics/spinlocks as
hardware is now proven to work just fine
- Typo fixes (Thanks Andrea Gelmini)
- Removal of obsolete DT property (Alexey)
- Other minor fixes
* tag 'arc-4.7-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
Revert "ARCv2: spinlock/rwlock/atomics: Delayed retry of failed SCOND with exponential backoff"
Revert "ARCv2: spinlock/rwlock: Reset retry delay when starting a new spin-wait cycle"
Revert "ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff"
ARC: don't enable DISCONTIGMEM unconditionally
ARC: [intc-compact] simplify code for 2 priority levels
arc: Get rid of root core-frequency property
Fix typos
We need to re-enable the topology extensions CPUID leaves on newer models
too, if the BIOS has disabled them, as we rely on them to get proper compute
unit topology.
Make the printk a once thing, while at it.
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rui Huang <ray.huang@amd.com>
Cc: Sherry Hurwitz <sherry.hurwitz@amd.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-hwmon@vger.kernel.org
Link: http://lkml.kernel.org/r/1464775468-23355-1-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Problem:
We have a boatload of open-coded family-6 model numbers. Half of
them have these model numbers in hex and the other half in
decimal. This makes grepping for them tons of fun, if you were
to try.
Solution:
Consolidate all the magic numbers. Put all the definitions in
one header.
The names here are closely derived from the comments describing
the models from arch/x86/events/intel/core.c. We could easily
make them shorter by doing things like s/SANDYBRIDGE/SNB/, but
they seemed fine even with the longer versions to me.
Do not take any of these names too literally, like "DESKTOP"
or "MOBILE". These are all colloquial names and not precise
descriptions of everywhere a given model will show up.
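For illustration, an excerpt of the kind of definitions such a header
collects; the macro names and hex values below follow the well-known
family-6 model numbers, but read them as an illustrative sample rather than
a verbatim copy of the new file:
#define INTEL_FAM6_SANDYBRIDGE		0x2A
#define INTEL_FAM6_SANDYBRIDGE_X	0x2D
#define INTEL_FAM6_IVYBRIDGE		0x3A
#define INTEL_FAM6_HASWELL_CORE		0x3C
#define INTEL_FAM6_BROADWELL_CORE	0x3D
#define INTEL_FAM6_SKYLAKE_MOBILE	0x4E
#define INTEL_FAM6_SKYLAKE_DESKTOP	0x5E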
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Darren Hart <dvhart@infradead.org>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Doug Thompson <dougthompson@xmission.com>
Cc: Eduardo Valentin <edubezval@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Rajneesh Bhardwaj <rajneesh.bhardwaj@intel.com>
Cc: Souvik Kumar Chakravarty <souvik.k.chakravarty@intel.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Vishwanath Somayaji <vishwanath.somayaji@intel.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: jacob.jun.pan@intel.com
Cc: linux-acpi@vger.kernel.org
Cc: linux-edac@vger.kernel.org
Cc: linux-mmc@vger.kernel.org
Cc: linux-pm@vger.kernel.org
Cc: platform-driver-x86@vger.kernel.org
Link: http://lkml.kernel.org/r/20160603001927.F2A7D828@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Commit 66dbd6e61a ("arm64: Implement ptep_set_access_flags() for
hardware AF/DBM") ensured that pte flags are updated atomically in the
face of potential concurrent, hardware-assisted updates. However, Alex
reports that:
| This patch breaks swapping for me.
| In the broken case, you'll see either systemd cpu time spike (because
| it's stuck in a page fault loop) or the system hang (because the
| application owning the screen is stuck in a page fault loop).
It turns out that this is because the 'dirty' argument to
ptep_set_access_flags is always 0 for read faults, and so we can't use
it to set PTE_RDONLY. The failing sequence is:
1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte
2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF
3. A read faults due to the missing access flag
4. ptep_set_access_flags is called with dirty = 0, due to the read fault
5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!)
6. A write faults, but pte_write is true so we get stuck
The solution is to check the new page table entry (as would be done by
the generic, non-atomic definition of ptep_set_access_flags that just
calls set_pte_at) to establish the dirty state.
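For comparison, the generic version looks roughly like this, which is why it
naturally picks up the dirty state from the new entry:
int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
			  pte_t *ptep, pte_t entry, int dirty)
{
	int changed = !pte_same(*ptep, entry);

	if (changed) {
		/* the new entry is written as-is, dirty bit included */
		set_pte_at(vma->vm_mm, address, ptep, entry);
		flush_tlb_fix_spurious_fault(vma, address);
	}
	return changed;
}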
Cc: <stable@vger.kernel.org> # 4.3+
Fixes: 66dbd6e61a ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Alexander Graf <agraf@suse.de>
Tested-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
PowerISA 3.0 encodes the segment size in the second half of hash page
table entry. Update hpte_decode() accordingly.
Fixes: 50de596de8 ("powerpc/mm/hash: Add support for Power9 Hash")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In some of the radix TLB flush routines, we use a local to store the
mm->context.id, AKA the PID.
Currently we use an int, but the PID is unsigned long, so large values
of PID will be truncated. In particular MMU_NO_CONTEXT is -1, which
means all our comparisons against that value can never be true.
This means we'll issue TLB flushes when we shouldn't on radix enabled
machines.
Fix it by using an unsigned long for the local. Discovered by Coverity.
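A tiny standalone illustration of this class of bug (plain C, not the kernel
code; it uses an unsigned 32-bit local to show how a comparison against an
all-ones unsigned long sentinel silently becomes always false on an LP64
system):
#include <stdio.h>

#define NO_CONTEXT (~0UL)			/* all-ones sentinel, like MMU_NO_CONTEXT */

int main(void)
{
	unsigned long id = NO_CONTEXT;		/* what the wide context id holds */

	unsigned int narrow = id;		/* truncated local: 0xffffffff */
	printf("%d\n", narrow == NO_CONTEXT);	/* prints 0 -- the check never fires */

	unsigned long wide = id;		/* the fix: keep the full width */
	printf("%d\n", wide == NO_CONTEXT);	/* prints 1 */
	return 0;
}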
Fixes: 1a472c9dba ("powerpc/mm/radix: Add tlbflush routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Write change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pull vfs fixes from Al Viro:
"Fixes for crap of assorted ages: EOPENSTALE one is 4.2+, autofs one is
4.6, d_walk - 3.2+.
The atomic_open() and coredump ones are regressions from this window"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
coredump: fix dumping through pipes
fix a regression in atomic_open()
fix d_walk()/non-delayed __d_free() race
autofs braino fix for do_last()
fix EOPENSTALE bug in do_last()
The offset in the core file used to be tracked with ->written field of
the coredump_params structure. The field was retired in favour of
file->f_pos.
However, ->f_pos is not maintained for pipes which leads to breakage.
Restore explicit tracking of the offset in coredump_params. Introduce
->pos field for this purpose since ->written was already reused.
Fixes: a008393951 ("get rid of coredump_params->written").
Reported-by: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
Signed-off-by: Mateusz Guzik <mguzik@redhat.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
The recent commit 7cc851039d ("powerpc/pseries: Add POWER8NVL support
to ibm,client-architecture-support call") added a new PVR mask & value
to the start of the ibm_architecture_vec[] array.
However it missed the fact that further down in the array, we hard code
the offset of one of the fields, and then at boot use that value to
patch the value in the array. This means every update to the array must
also update the #define, ugh.
This means that on pseries machines we will misreport to firmware the
number of cores we support, by a factor of threads_per_core.
Fix it for now by updating the #define.
Fixes: 7cc851039d ("powerpc/pseries: Add POWER8NVL support to ibm,client-architecture-support call")
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk fixes from Stephen Boyd:
"This finally removes the CLK_IS_ROOT flag by picking up the last few
stragglers that didn't get merged by anyone this time around.
Better to do it now than wait for another one to pop up. There's also
a minor maintainers update and a Kconfig fix"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: nxp: Select MFD_SYSCON for creg driver
MAINTAINERS: Add file patterns for clock device tree bindings
clk: Remove CLK_IS_ROOT flag
clk: microchip: Remove CLK_IS_ROOT
powerpc/512x: clk: Remove CLK_IS_ROOT
vexpress/spc: Remove CLK_IS_ROOT
For newer versions of Syslinux, we need ldlinux.c32 in addition to
isolinux.bin to reside on the boot disk, so if the former is found,
copy it, too, to the isoimage tree.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Linux Stable Tree <stable@vger.kernel.org>
In commit 8445a87f70 "powerpc/iommu: Remove the dependency on EEH
struct in DDW mechanism", the PE address was replaced with the PCI
config address in order to remove dependency on EEH. According to PAPR
spec, firmware (pHyp or QEMU) should accept "xxBBSSxx" format PCI config
address, not "xxxxBBSS" provided by the patch. Note that "BB" is PCI bus
number and "SS" is the combination of slot and function number.
This fixes the PCI address passed to DDW RTAS calls.
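A minimal sketch of the layout described above, assuming the usual pci_dn
fields busno and devfn:
	/* "xxBBSSxx": bus number in bits 23-16, devfn (slot/function) in bits 15-8 */
	u32 cfg_addr = (pdn->busno << 16) | (pdn->devfn << 8);

	/* the broken form packed them into the low 16 bits ("xxxxBBSS") instead */
	u32 bad_addr = (pdn->busno << 8) | pdn->devfn;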
Fixes: 8445a87f70 ("powerpc/iommu: Remove the dependency on EEH struct in DDW mechanism")
Cc: stable@vger.kernel.org # v3.4+
Reported-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
gcc-6 correctly warns about an out-of-bounds access:
arch/powerpc/kernel/ptrace.c:407:24: warning: index 32 denotes an offset greater than size of 'u64[32][1] {aka long long unsigned int[32][1]}' [-Warray-bounds]
offsetof(struct thread_fp_state, fpr[32][0]));
^
Check the end of the array instead of the beginning of the next element to fix this.
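In code form the change amounts to something like the following sketch (the
surrounding BUILD_BUG_ON context in ptrace.c is assumed):
	/* before: names the element one past the end of fpr[], which gcc-6 flags */
	BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) !=
		     offsetof(struct thread_fp_state, fpr[32][0]));

	/* after: compare against the end of the array itself */
	BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) !=
		     offsetof(struct thread_fp_state, fpr[32]));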
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Segher Boessenkool <segher@kernel.crashing.org>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
One of the debian buildd servers had this crash in the syslog without
any other information:
Unaligned handler failed, ret = -2
clock_adjtime (pid 22578): Unaligned data reference (code 28)
CPU: 1 PID: 22578 Comm: clock_adjtime Tainted: G E 4.5.0-2-parisc64-smp #1 Debian 4.5.4-1
task: 000000007d9960f8 ti: 00000001bde7c000 task.ti: 00000001bde7c000
YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI
PSW: 00001000000001001111100000001111 Tainted: G E
r00-03 000000ff0804f80f 00000001bde7c2b0 00000000402d2be8 00000001bde7c2b0
r04-07 00000000409e1fd0 00000000fa6f7fff 00000001bde7c148 00000000fa6f7fff
r08-11 0000000000000000 00000000ffffffff 00000000fac9bb7b 000000000002b4d4
r12-15 000000000015241c 000000000015242c 000000000000002d 00000000fac9bb7b
r16-19 0000000000028800 0000000000000001 0000000000000070 00000001bde7c218
r20-23 0000000000000000 00000001bde7c210 0000000000000002 0000000000000000
r24-27 0000000000000000 0000000000000000 00000001bde7c148 00000000409e1fd0
r28-31 0000000000000001 00000001bde7c320 00000001bde7c350 00000001bde7c218
sr00-03 0000000001200000 0000000001200000 0000000000000000 0000000001200000
sr04-07 0000000000000000 0000000000000000 0000000000000000 0000000000000000
IASQ: 0000000000000000 0000000000000000 IAOQ: 00000000402d2e84 00000000402d2e88
IIR: 0ca0d089 ISR: 0000000001200000 IOR: 00000000fa6f7fff
CPU: 1 CR30: 00000001bde7c000 CR31: ffffffffffffffff
ORIG_R28: 00000002369fe628
IAOQ[0]: compat_get_timex+0x2dc/0x3c0
IAOQ[1]: compat_get_timex+0x2e0/0x3c0
RP(r2): compat_get_timex+0x40/0x3c0
Backtrace:
[<00000000402d4608>] compat_SyS_clock_adjtime+0x40/0xc0
[<0000000040205024>] syscall_exit+0x0/0x14
This means the userspace program clock_adjtime called the clock_adjtime()
syscall and then crashed inside the compat_get_timex() function.
Syscalls should never crash programs, but instead return EFAULT.
The IIR register contains the executed instruction, which disassembles
into "ldw 0(sr3,r5),r9".
This load-word instruction is part of __get_user(), which tried to read the word
at %r5/IOR (0xfa6f7fff). This means the unaligned handler jumped in. The
unaligned handler is able to emulate all ldw instructions, but it fails if it
cannot read the source, e.g. because of a page fault.
The following program reproduces the problem:
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>

int main(void) {
	/* allocate 8k */
	char *ptr = mmap(NULL, 2*4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
	/* free second half (upper 4k) and make it invalid. */
	munmap(ptr+4096, 4096);
	/* syscall where first int is unaligned and clobbers into invalid memory region */
	/* syscall should return EFAULT */
	return syscall(__NR_clock_adjtime, 0, ptr+4095);
}
To fix this issue we simply need to check if the faulting instruction address
is in the exception fixup table when the unaligned handler failed. If it
is, call the fixup routine instead of crashing.
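A hedged sketch of that check; fixup_exception() is the generic helper, but
its exact use in the parisc unaligned handler is an assumption here:
	if (ret && !user_mode(regs) && fixup_exception(regs))
		/* the __get_user()/__put_user() fixup ran, so the syscall
		 * sees -EFAULT instead of the process being killed */
		return;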
While looking at the unaligned handler I found another issue as well: The
target register should not be modified if the handler was unsuccessful.
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
This patch fixes backtrace on PA-RISC
There were several problems:
1) The code that decodes instructions handles instructions that subtract
from the stack pointer incorrectly. If the instruction subtracts the
number X from the stack pointer the code increases the frame size by
(0x100000000-X). This results in invalid accesses to memory and
recursive page faults.
2) Because gcc reorders blocks, handling instructions that subtract from
the frame pointer is incorrect. For example, this function
int f(int a)
{
	if (__builtin_expect(a, 1))
		return a;
	g();
	return a;
}
is compiled in such a way that the code that decreases the stack
pointer for the first "return a" is placed before the code for the "g" call.
If we recognize this decrement, we mistakenly believe that the frame
size for the "g" call is zero.
To fix problems 1) and 2), the patch doesn't recognize instructions that
decrease the stack pointer at all. To further safeguard the unwind code
against nonsense values, we don't allow frame size larger than
Total_frame_size.
3) The backtrace is not locked. If a stack dump races with a module unload,
an invalid table can be accessed.
This patch adds a spinlock around the processing of module tables.
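A sketch of the locking pattern (unwind_lock, unwind_tables and
find_unwind_entry_in_table are hypothetical stand-ins for the parisc
unwinder's own names):
	unsigned long flags;
	const struct unwind_table_entry *e = NULL;
	struct unwind_table *table;

	spin_lock_irqsave(&unwind_lock, flags);
	list_for_each_entry(table, &unwind_tables, list) {
		e = find_unwind_entry_in_table(table, addr);	/* hypothetical helper */
		if (e)
			break;
	}
	spin_unlock_irqrestore(&unwind_lock, flags);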
Note that for a correct backtrace you need recent binutils.
Binutils 2.18 from Debian 5 produces garbage unwind tables.
Binutils 2.21 works better (it sometimes forgets function frames, but at
least it doesn't generate garbage).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Pull irq fixes from Thomas Gleixner:
- a few simple fixes for fallout from the recent gic-v3 changes
- a workaround for a Cavium thunderX erratum
- a bugfix for the pic32 irqchip to make external interrupts work properly
- a missing return value in the generic IPI management code
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/irq-pic32-evic: Fix bug with external interrupts.
irqchip/gicv3-its: numa: Enable workaround for Cavium thunderx erratum 23144
irqchip/gic-v3: Fix quiescence check in gic_enable_redist
irqchip/gic-v3: Fix copy+paste mistakes in defines
irqchip/gic-v3: Fix ICC_SGI1R_EL1.INTID decoding mask
genirq: Fix missing return value in irq_destroy_ipi()
Pull ARM fix from Russell King:
"Just one fix to the ptrace code, spotted by Simon Marchi, where if a
thread migrates to a different CPU and the VFP registers are changed
through ptrace, the application doesn't see the updated VFP registers"
* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
ARM: fix PTRACE_SETVFPREGS on SMP systems
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"The main thing here is reviving hugetlb support using contiguous ptes,
which we ended up reverting at the last minute in 4.5 pending a fix
which went into the core mm/ code during the recent merge window.
- Revert a previous revert and get hugetlb going with contiguous hints
- Wire up missing compat syscalls
- Enable CONFIG_SET_MODULE_RONX by default
- Add missing line to our compat /proc/cpuinfo output
- Clarify levels in our page table dumps
- Fix booting with RANDOMIZE_TEXT_OFFSET enabled
- Misc fixes to the ARM CPU PMU driver (refcounting, probe failure)
- Remove some dead code and update a comment"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: fix alignment when RANDOMIZE_TEXT_OFFSET is enabled
arm64: move {PAGE,CONT}_SHIFT into Kconfig
arm64: mm: dump: log span level
arm64: update stale PAGE_OFFSET comment
drivers/perf: arm_pmu: Avoid leaking pmu->irq_affinity on error
drivers/perf: arm_pmu: Defer the setting of __oprofile_cpu_pmu
drivers/perf: arm_pmu: Fix reference count of a device_node in of_pmu_irq_cfg
arm64: report CPU number in bad_mode
arm64: unistd32.h: wire up missing syscalls for compat tasks
arm64: Provide "model name" in /proc/cpuinfo for PER_LINUX32 tasks
arm64: enable CONFIG_SET_MODULE_RONX by default
arm64: Remove orphaned __addr_ok() definition
Revert "arm64: hugetlb: partial revert of 66b3923a1a0f"
Merge tag 'powerpc-4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- Handle RTAS delay requests in configure_bridge from Russell Currey
- Refactor the configure_bridge RTAS tokens from Russell Currey
- Fix definition of SIAR and SDAR registers from Thomas Huth
- Use privileged SPR number for MMCR2 from Thomas Huth
- Update LPCR only if it is powernv from Aneesh Kumar K.V
- Fix the reference bit update when handling hash fault from Aneesh
Kumar K.V
- Add missing tlb flush from Aneesh Kumar K.V
- Add POWER8NVL support to ibm,client-architecture-support call from
Thomas Huth
* tag 'powerpc-4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/pseries: Add POWER8NVL support to ibm,client-architecture-support call
powerpc/mm/radix: Add missing tlb flush
powerpc/mm/hash: Fix the reference bit update when handling hash fault
powerpc/mm/radix: Update LPCR only if it is powernv
powerpc: Use privileged SPR number for MMCR2
powerpc: Fix definition of SIAR and SDAR registers
powerpc/pseries/eeh: Refactor the configure_bridge RTAS tokens
powerpc/pseries/eeh: Handle RTAS delay requests in configure_bridge
With ARM64_64K_PAGES and RANDOMIZE_TEXT_OFFSET enabled, we hit the
following issue during boot:
kernel BUG at arch/arm64/mm/mmu.c:480!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.6.0 #310
Hardware name: ARM Juno development board (r2) (DT)
task: ffff000008d58a80 ti: ffff000008d30000 task.ti: ffff000008d30000
PC is at map_kernel_segment+0x44/0xb0
LR is at paging_init+0x84/0x5b0
pc : [<ffff000008c450b4>] lr : [<ffff000008c451a4>] pstate: 600002c5
Call trace:
[<ffff000008c450b4>] map_kernel_segment+0x44/0xb0
[<ffff000008c451a4>] paging_init+0x84/0x5b0
[<ffff000008c42728>] setup_arch+0x198/0x534
[<ffff000008c40848>] start_kernel+0x70/0x388
[<ffff000008c401bc>] __primary_switched+0x30/0x74
Commit 7eb90f2ff7 ("arm64: cover the .head.text section in the .text
segment mapping") removed the alignment between the .head.text and .text
sections, and used the _text rather than the _stext interval for mapping
the .text segment.
Prior to this commit _stext was always section aligned and didn't cause
any issue even when RANDOMIZE_TEXT_OFFSET was enabled. Since that
alignment has been removed and _text is used to map the .text segment,
we need to ensure that _text is always page aligned when RANDOMIZE_TEXT_OFFSET
is enabled.
This patch adds logic to TEXT_OFFSET fuzzing to ensure that the offset
is always aligned to the kernel page size. To ensure this, we rely on
the PAGE_SHIFT being available via Kconfig.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 7eb90f2ff7 ("arm64: cover the .head.text section in the .text segment mapping")
Signed-off-by: Will Deacon <will.deacon@arm.com>
In some cases (e.g. the awk for CONFIG_RANDOMIZE_TEXT_OFFSET) we would
like to make use of PAGE_SHIFT outside of code that can include the
usual header files.
Add a new CONFIG_ARM64_PAGE_SHIFT for this, likewise with
ARM64_CONT_SHIFT for consistency.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The page table dump code logs spans of entries at the same level
(pgd/pud/pmd/pte) which have the same attributes. While we log the
(decoded) attributes, we don't log the level, which leaves the output
ambiguous and/or confusing in some cases.
For example:
0xffff800800000000-0xffff800980000000 6G RW NX SHD AF BLK UXN MEM/NORMAL
If using 4K pages, this may describe a span of 6 1G block entries at the
PGD/PUD level, or 3072 2M block entries at the PMD level.
This patch adds the page table level to each output line, removing this
ambiguity. For the example above, this will produce:
0xffffffc800000000-0xffffffc980000000 6G PUD RW NX SHD AF BLK UXN MEM/NORMAL
When 3 level tables are in use, and we use the asm-generic/nopud.h
definitions, the dump code treats each entry in the PGD as a 1 element
table at the PUD level, and logs spans as being PUDs, which can be
confusing. To counteract this, the "PUD" mnemonic is replaced with "PGD"
when CONFIG_PGTABLE_LEVELS <= 3. Likewise for "PMD" when
CONFIG_PGTABLE_LEVELS <= 2.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Commit ab893fb9f1 ("arm64: introduce KIMAGE_VADDR as the virtual
base of the kernel region") logically split KIMAGE_VADDR from
PAGE_OFFSET, and since commit f9040773b7 ("arm64: move kernel
image to base of vmalloc area") the two have been distinct values.
Unfortunately, neither commit updated the comment above these
definitions, which now erroneously states that PAGE_OFFSET is the start
of the kernel image rather than the start of the linear mapping.
This patch fixes said comment, and introduces an explanation of
KIMAGE_VADDR.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If we take an exception we don't expect (e.g. SError), we report this in
the bad_mode handler with pr_crit. Depending on the configured log
level, we may or may not log additional information in functions called
subsequently. Notably, the messages in dump_stack (including the CPU
number) are printed with KERN_DEFAULT and may not appear.
Some exceptions have an IMPLEMENTATION DEFINED ESR_ELx.ISS encoding, and
knowing the CPU number is crucial to correctly decode them. To ensure
that this is always possible, we should log the CPU number along with
the ESR_ELx value, so we are not reliant on subsequent logs or
additional printk configuration options.
This patch logs the CPU number in bad_mode such that it is possible for
a developer to decode these exceptions, provided access to sufficient
documentation.
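A sketch of the resulting message, assuming the existing pr_crit() format in
bad_mode():
	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
		handler[reason], smp_processor_id(), esr,
		esr_get_class_string(esr));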
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Al Grant <Al.Grant@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
There was a report that on certain Broadwell-EP systems writing any bit of
the SBOX PMU initialization MSR would #GP at boot. This did not happen
on all systems. My test systems booted fine.
Considering both DE and EP may have such issues, this patch removes SBOX
support for all Broadwell platforms for now.
Reported-and-tested-by: Mark van Dijk <mark@voidzero.net>
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/1464347540-5763-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fixes from Radim Krčmář:
"ARM:
- two fixes for 4.6 vgic [Christoffer] (cc stable)
- six fixes for 4.7 vgic [Marc]
x86:
- six fixes from syzkaller reports [Paolo] (two of them cc stable)
- allow OS X to boot [Dmitry]
- don't trust compilers [Nadav]"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: fix OOPS after invalid KVM_SET_DEBUGREGS
KVM: x86: avoid vmalloc(0) in the KVM_SET_CPUID
KVM: irqfd: fix NULL pointer dereference in kvm_irq_map_gsi
KVM: fail KVM_SET_VCPU_EVENTS with invalid exception number
KVM: x86: avoid vmalloc(0) in the KVM_SET_CPUID
kvm: x86: avoid warning on repeated KVM_SET_TSS_ADDR
KVM: Handle MSR_IA32_PERF_CTL
KVM: x86: avoid write-tearing of TDP
KVM: arm/arm64: vgic-new: Removel harmful BUG_ON
arm64: KVM: vgic-v3: Relax synchronization when SRE==1
arm64: KVM: vgic-v3: Prevent the guest from messing with ICC_SRE_EL1
arm64: KVM: Make ICC_SRE_EL1 access return the configured SRE value
KVM: arm/arm64: vgic-v3: Always resample level interrupts
KVM: arm/arm64: vgic-v2: Always resample level interrupts
KVM: arm/arm64: vgic-v3: Clear all dirty LRs
KVM: arm/arm64: vgic-v2: Clear all dirty LRs
The erratum workaround fixes the hang of the ITS SYNC command by avoiding
inter-node I/O and collections/CPU mapping on the ThunderX dual-socket
platform.
This fix is only applicable to Cavium's ThunderX dual-socket platform.
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Intel CPUs having Turbo Boost feature implement an MSR to provide a
control interface via rdmsr/wrmsr instructions. One could detect the
presence of this feature by issuing one of these instructions and
handling the #GP exception which is generated in case the referenced MSR
is not implemented by the CPU.
KVM's vCPU model behaves exactly like a real CPU in this case by injecting
a fault when MSR_IA32_PERF_CTL is accessed (which KVM does not support).
However, some operating systems use this register during an early boot
stage in which their kernel is not capable of handling #GP correctly,
causing #DF and finally a triple fault, effectively resetting the vCPU.
This patch implements a dummy handler for MSR_IA32_PERF_CTL to avoid the
crashes.
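A sketch of what such a dummy handler amounts to; its placement inside KVM's
MSR read path is assumed here:
	case MSR_IA32_PERF_CTL:
		/* report the MSR as present but inert rather than injecting #GP */
		msr_info->data = 0;
		break;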
Signed-off-by: Dmitry Bilunov <kmeaw@yandex-team.ru>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
In theory, nothing prevents the compiler from write-tearing PTEs, i.e.
splitting PTE writes. These partially-modified PTEs can be fetched by other
cores and cause mayhem. I have not really encountered such a case in
real life, but it does seem possible.
For example, the compiler may try to do something creative for
kvm_set_pte_rmapp() and perform multiple writes to the PTE.
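A minimal sketch of the kind of change that rules this out, assuming the
SPTE setter helpers in the MMU code:
static void __set_spte(u64 *sptep, u64 spte)
{
	WRITE_ONCE(*sptep, spte);	/* a single store the compiler may not split */
}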
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
PTRACE_SETVFPREGS fails to properly mark the VFP register set to be
reloaded, because it undoes one of the effects of vfp_flush_hwstate().
Specifically vfp_flush_hwstate() sets thread->vfpstate.hard.cpu to
an invalid CPU number, but vfp_set() overwrites this with the original
CPU number, thereby rendering the hardware state as apparently "valid",
even though the software state is more recent.
Fix this by reverting the previous change.
Cc: <stable@vger.kernel.org>
Fixes: 8130b9d7b9 ("ARM: 7308/1: vfp: flush thread hwstate before copying ptrace registers")
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Simon Marchi <simon.marchi@ericsson.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
This reverts commit e78fdfef84.
The issue was fixed in hardware in HS2.1C release and there are no known
external users of affected RTL so revert the whole delayed retry series!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This reverts commit b89aa12c17.
The issue was fixed in hardware in HS2.1C release and there are no known
external users of affected RTL so revert the whole delayed retry series!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This reverts commit 1097163870.
The issue was fixed in hardware in HS2.1C release and there are no known
external users of affected RTL, so revert the whole delayed retry
series!
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This flag is a no-op now (see commit 47b0eeb3dc "clk: Deprecate
CLK_IS_ROOT", 2016-02-02) so remove it.
Cc: Gerhard Sittig <gsi@denx.de>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
This flag is a no-op now (see commit 47b0eeb3dc "clk: Deprecate
CLK_IS_ROOT", 2016-02-02) so remove it.
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
We're missing entries for mlock2, copy_file_range, preadv2 and pwritev2
in our compat syscall table, so hook them up. Only the last two need
compat wrappers.
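In unistd32.h terms the wiring looks roughly like this; the syscall numbers
are believed to match the 32-bit ARM ABI but should be treated as
illustrative:
#define __NR_mlock2 390
__SYSCALL(__NR_mlock2, sys_mlock2)
#define __NR_copy_file_range 391
__SYSCALL(__NR_copy_file_range, sys_copy_file_range)
#define __NR_preadv2 392
__SYSCALL(__NR_preadv2, compat_sys_preadv2)		/* needs a compat wrapper */
#define __NR_pwritev2 393
__SYSCALL(__NR_pwritev2, compat_sys_pwritev2)		/* needs a compat wrapper */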
Signed-off-by: Will Deacon <will.deacon@arm.com>
Pull sparc fixes from David Miller:
"sparc64 mmu context allocation and trap return bug fixes"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
sparc64: Fix return from trap window fill crashes.
sparc: Harden signal return frame checks.
sparc64: Take ctx_alloc_lock properly in hugetlb_setup().
If we do not provide the PVR for POWER8NVL, a guest on this system
currently ends up in PowerISA 2.06 compatibility mode on KVM, since QEMU
does not provide a generic PowerISA 2.07 mode yet. So some new
instructions from POWER8 (like "mtvsrd") get disabled for the guest,
resulting in crashes when using code compiled explicitly for
POWER8 (e.g. with the "-mcpu=power8" option of GCC).
Fixes: ddee09c099 ("powerpc: Add PVR for POWER8NVL processor")
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This should not have any impact on hash, because hash does tlb
invalidate with every pte update and we don't implement
flush_tlb_* functions for hash. With radix we should make an explicit
call to flush tlb outside pte update.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When we converted the asm routines to C functions, we missed updating
HPTE_R_R based on _PAGE_ACCESSED. ASM code used to copy over the lower
bits from the pte via:
andi. r3,r30,0x1fe /* Get basic set of flags */
We also update the code such that we won't update the Change bit ('C'
bit) always. This was added by commit c5cf0e30bf ("powerpc: Fix
buglet with MMU hash management").
With hash64, we need to make sure that hardware doesn't do a pte update
directly. This is because we do end up with entries in TLB with no hash
page table entry. This happens because when we find a hash bucket full,
we "evict" a more/less random entry from it. When we do that we don't
invalidate the TLB (hpte_remove) because we assume the old translation
is still technically "valid". For more info look at commit
0608d692463 ("powerpc/mm: Always invalidate tlb on hpte invalidate and
update").
Thus it's critical that valid hash PTEs always have reference bit set
and writeable ones have change bit set. We do this by hashing a
non-dirty linux PTE as read-only and always setting _PAGE_ACCESSED (and
thus R) when hashing anything else in. Any attempt by Linux at clearing
those bits also removes the corresponding hash entry.
Commit 5cf0e30bf3d8 did that for the 'C' bit by enabling the 'C' bit always.
We don't really need to do that because we never map a RW pte entry
without setting the 'C' bit. On a READ fault on a RW pte entry, we still map
it READ only, hence a store update in the page will still cause a hash
pte fault.
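In code terms the rule above boils down to roughly the following when
converting Linux pte flags to hash PTE flags (variable names and placement
are assumptions):
	/* we only hash referenced PTEs, so the hash 'R' bit is always set */
	rflags |= HPTE_R_R;

	/* only set 'C' when the Linux PTE is already dirty; a clean RW mapping
	 * is hashed read-only, so the first store faults and we re-hash */
	if (pteflags & _PAGE_DIRTY)
		rflags |= HPTE_R_C;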
This patch reverts the part of commit c5cf0e30bf ("[PATCH] powerpc:
Fix buglet with MMU hash management") and retains the updatepp part.
- If we hit the updatepp path on native, the old code without that
commit would fail to set C because native_hpte_updatepp()
was implemented to filter the same bits as H_PROTECT and not let C
through; thus we would "upgrade" a RO HPTE to RW without setting C,
causing the bug. So the real fix in that commit was the change
to native_hpte_updatepp().
Fixes: 89ff725051 ("powerpc/mm: Convert __hash_page_64K to C")
Cc: stable@vger.kernel.org # v4.5+
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
LPCR cannot be updated when running in guest mode.
Fixes: 2bfd65e45e ("powerpc/mm/radix: Add radix callbacks for early init routines")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch brings the PER_LINUX32 /proc/cpuinfo format more in line with
the 32-bit ARM one by providing an additional line:
model name : ARMv8 Processor rev X (v8l)
Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Pull s390 fixes from Martin Schwidefsky:
"Three bugs fixes and an update for the default configuration"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390: fix info leak in do_sigsegv
s390/config: update default configuration
s390/bpf: fix recache skb->data/hlen for skb_vlan_push/pop
s390/bpf: reduce maximum program size to 64 KB
The GICv3 backend of the vgic is quite barrier heavy, in order
to ensure synchronization of the system registers and the
memory mapped view for a potential GICv2 guest.
But when the guest is using a GICv3 model, there is absolutely
no need to execute all these heavy barriers, and it is actually
beneficial to avoid them altogether.
This patch makes the synchronization conditional, and ensures
that we do not change the EL1 SRE settings if we do not need to.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Both our GIC emulations are "strict", in the sense that we either
emulate a GICv2 or a GICv3, and not a GICv3 with GICv2 legacy
support.
But when running on a GICv3 host, we still allow the guest to
tinker with the ICC_SRE_EL1 register during its time slice:
it can switch SRE off, observe that it is off, and yet on the
next world switch, find the SRE bit to be set again. Not very
nice.
An obvious solution is to always trap accesses to ICC_SRE_EL1
(by clearing ICC_SRE_EL2.Enable), and to let the handler return
the programmed value on a read, or ignore the write.
That way, the guest can always observe that our GICv3 is SRE==1
only.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
When we trap ICC_SRE_EL1, we handle it as RAZ/WI. It would be
more correct to actually make it RO, and return the configured
value when read.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
When saving the state of the list registers, it is critical to
reset them to zero, as we could otherwise leave unexpected EOI
interrupts pending for virtual level interrupts.
Cc: stable@vger.kernel.org # v4.6+
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
The SET_MODULE_RONX protections are effectively the same as the
DEBUG_RODATA protections we enabled by default back in commit
57efac2f71 ("arm64: enable CONFIG_DEBUG_RODATA by default"). It
seems unusual to have one but not the other.
As evidenced by the help text, the rationale appears to be that
SET_MODULE_RONX interacts poorly with tracing and patching, but both of
these make use of the insn framework, which takes SET_MODULE_RONX into
account. Any remaining issues are bugs which should be fixed regardless
of the default state of the option.
This patch enables DEBUG_SET_MODULE_RONX by default, and replaces the
help text with a new wording derived from the DEBUG_RODATA help text,
which better describes the functionality. Previously, the DEBUG_RODATA
entry was inconsistently indented with spaces, which are replaced with
tabs as with the other Kconfig entries.
Additionally, the wording of recommended defaults is made consistent for
all options. These are placed in a new paragraph, unquoted, as a full
sentence (with a period/full stop) as this appears to be the most common
form per $(git grep 'in doubt').
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Since commit 12a0ef7b0a ("arm64: use generic strnlen_user and
strncpy_from_user functions"), the definition of __addr_ok() has been
languishing unused; eradicate the sucker.
CC: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
We are already using the privileged versions of MMCR0, MMCR1
and MMCRA in the kernel, so for MMCR2, we should better use
the privileged versions, too, to be consistent.
Fixes: 240686c136 ("powerpc: Initialise PMU related regs on Power8")
Cc: stable@vger.kernel.org # v3.10+
Suggested-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The SIAR and SDAR registers are available twice, one time as SPRs
780 / 781 (unprivileged, but read-only), and one time as the SPRs
796 / 797 (privileged, but read and write). The Linux kernel code
currently uses the unprivileged SPRs - while this is OK for reading,
writing to that register of course does not work.
Since the KVM code tries to write to this register, too (see the mtspr
in book3s_hv_rmhandlers.S), the contents of this register sometimes get
lost for the guests, e.g. during migration of a VM.
To fix this issue, simply switch to the privileged SPR numbers instead.
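In reg.h terms the change is just the SPR numbers; the values below are the
ones quoted above:
#define SPRN_SIAR	796	/* privileged, read/write; was 780 (read-only) */
#define SPRN_SDAR	797	/* privileged, read/write; was 781 (read-only) */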
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This reverts commit ff7925848b.
Now that the contiguous-hint hugetlb regression has been debugged and
fixed upstream by 66ee95d16a ("mm: exclude HugeTLB pages from THP
page_mapped() logic"), we can revert the previous partial revert of this
feature.
Signed-off-by: Will Deacon <will.deacon@arm.com>
ARC700 support for 2 interrupt priorities historically allowed even slow
peripherals such as emac and uart to set up high priority interrupts,
which was wrong from the beginning as they could possibly delay the more
critical timer interrupt.
The hardware support for 2 level interrupts in ARCompact is less than
ideal anyway (judging from the "hacks" in the low level entry code) and thus
is not used in production systems I know of.
So reduce the scope of this to timer only, thereby reducing a bunch of
complexity.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Now that we have switched to using real clk devices for the CPU core
frequency, those root properties no longer make sense.
So we're just getting rid of them here to not confuse readers of
our .dts files.
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Christian Ruppert <christian.ruppert@alitech.com>
Cc: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The RTAS calls "ibm,configure-pe" and "ibm,configure-bridge" perform the
same actions, however the former can skip configuration if unnecessary.
The existing code treats them as different tokens even though only one
will ever be called. Refactor this by making a single token that is
assigned during init.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
In the "ibm,configure-pe" and "ibm,configure-bridge" RTAS calls, the
spec states that values of 9900-9905 can be returned, indicating that
software should delay for 10^x (where x is the last digit, i.e. 990x)
milliseconds and attempt the call again. Currently, the kernel doesn't
know about this, and respecting it fixes some PCI failures when the
hypervisor is busy.
The delay is capped at 0.2 seconds.
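An illustrative helper (not the kernel's own) showing how a 990x status maps
to the capped delay described above:
static unsigned int eeh_cfg_delay_ms(int status)
{
	unsigned int ms = 1;
	int i;

	for (i = 0; i < status - 9900; i++)
		ms *= 10;			/* 10^x milliseconds, x = the last digit */

	return ms > 200 ? 200 : ms;		/* capped at 0.2 seconds */
}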
Cc: <stable@vger.kernel.org> # 3.10+
Signed-off-by: Russell Currey <ruscur@russell.cc>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We must handle data access exception as well as memory address unaligned
exceptions from return from trap window fill faults, not just normal
TLB misses.
Otherwise we can get an OOPS that looks like this:
ld-linux.so.2(36808): Kernel bad sw trap 5 [#1]
CPU: 1 PID: 36808 Comm: ld-linux.so.2 Not tainted 4.6.0 #34
task: fff8000303be5c60 ti: fff8000301344000 task.ti: fff8000301344000
TSTATE: 0000004410001601 TPC: 0000000000a1a784 TNPC: 0000000000a1a788 Y: 00000002 Not tainted
TPC: <do_sparc64_fault+0x5c4/0x700>
g0: fff8000024fc8248 g1: 0000000000db04dc g2: 0000000000000000 g3: 0000000000000001
g4: fff8000303be5c60 g5: fff800030e672000 g6: fff8000301344000 g7: 0000000000000001
o0: 0000000000b95ee8 o1: 000000000000012b o2: 0000000000000000 o3: 0000000200b9b358
o4: 0000000000000000 o5: fff8000301344040 sp: fff80003013475c1 ret_pc: 0000000000a1a77c
RPC: <do_sparc64_fault+0x5bc/0x700>
l0: 00000000000007ff l1: 0000000000000000 l2: 000000000000005f l3: 0000000000000000
l4: fff8000301347e98 l5: fff8000024ff3060 l6: 0000000000000000 l7: 0000000000000000
i0: fff8000301347f60 i1: 0000000000102400 i2: 0000000000000000 i3: 0000000000000000
i4: 0000000000000000 i5: 0000000000000000 i6: fff80003013476a1 i7: 0000000000404d4c
I7: <user_rtt_fill_fixup+0x6c/0x7c>
Call Trace:
[0000000000404d4c] user_rtt_fill_fixup+0x6c/0x7c
The window trap handlers are slightly clever: the trap table entries for them are
composed of two pieces of code. First comes the code that actually performs
the window fill or spill trap handling, and then there are three instructions at
the end which are for exception processing.
The userland register window fill handler is:
add %sp, STACK_BIAS + 0x00, %g1; \
ldxa [%g1 + %g0] ASI, %l0; \
mov 0x08, %g2; \
mov 0x10, %g3; \
ldxa [%g1 + %g2] ASI, %l1; \
mov 0x18, %g5; \
ldxa [%g1 + %g3] ASI, %l2; \
ldxa [%g1 + %g5] ASI, %l3; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %l4; \
ldxa [%g1 + %g2] ASI, %l5; \
ldxa [%g1 + %g3] ASI, %l6; \
ldxa [%g1 + %g5] ASI, %l7; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %i0; \
ldxa [%g1 + %g2] ASI, %i1; \
ldxa [%g1 + %g3] ASI, %i2; \
ldxa [%g1 + %g5] ASI, %i3; \
add %g1, 0x20, %g1; \
ldxa [%g1 + %g0] ASI, %i4; \
ldxa [%g1 + %g2] ASI, %i5; \
ldxa [%g1 + %g3] ASI, %i6; \
ldxa [%g1 + %g5] ASI, %i7; \
restored; \
retry; nop; nop; nop; nop; \
b,a,pt %xcc, fill_fixup_dax; \
b,a,pt %xcc, fill_fixup_mna; \
b,a,pt %xcc, fill_fixup;
And the way this works is that if any of those memory accesses
generate an exception, the exception handler can revector to one of
those final three branch instructions depending upon which kind of
exception the memory access took. In this way, the fault handler
doesn't have to know if it was a spill or a fill that it's handling
the fault for. It just always branches to the last instruction in
the parent trap's handler.
For example, for a regular fault, the code goes:
winfix_trampoline:
rdpr %tpc, %g3
or %g3, 0x7c, %g3
wrpr %g3, %tnpc
done
All window trap handlers are 0x80 aligned, so if we "or" 0x7c into the
trap time program counter, we'll get that final instruction in the
trap handler.
On return from trap, we have to pull the register window in but we do
this by hand instead of just executing a "restore" instruction for
several reasons. The largest being that from Niagara and onward we
simply don't have enough levels in the trap stack to fully resolve all
possible exception cases of a window fault when we are already at
trap level 1 (which we enter to get ready to return from the original
trap).
This is executed inline via the FILL_*_RTRAP handlers. rtrap_64.S's
code branches directly to these to do the window fill by hand if
necessary. Now if you look at them, you'll see at the end:
ba,a,pt %xcc, user_rtt_fill_fixup;
ba,a,pt %xcc, user_rtt_fill_fixup;
ba,a,pt %xcc, user_rtt_fill_fixup;
And oops, all three cases are handled like a fault.
This doesn't work because each of these trap types (data access
exception, memory address unaligned, and faults) store their auxiliary
info in different registers to pass on to the C handler which does the
real work.
So in the case where the stack was unaligned, the unaligned trap
handler sets up the arg registers one way, and then we branch to
the fault handler, which expects them set up another way.
So the FAULT_TYPE_* value ends up basically being garbage, and
would randomly generate the backtrace seen above.
Reported-by: Nick Alcock <nix@esperi.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
All signal frames must be at least 16-byte aligned, because that is
the alignment we explicitly create when we build signal return stack
frames.
All stack pointers must be at least 8-byte aligned.
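For illustration only (names and the exact set of checks are assumptions,
not the sparc code itself), the invariant amounts to:

/* Reject a signal frame / stack pointer pair that violates the
 * alignment rules stated above. */
static int frame_aligned_ok(unsigned long sf, unsigned long sp)
{
	if (sf & 15UL)		/* signal frames: at least 16-byte aligned */
		return 0;
	if (sp & 7UL)		/* stack pointers: at least 8-byte aligned */
		return 0;
	return 1;
}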
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull string hash improvements from George Spelvin:
"This series does several related things:
- Makes the dcache hash (fs/namei.c) useful for general kernel use.
(Thanks to Bruce for noticing the zero-length corner case)
- Converts the string hashes in <linux/sunrpc/svcauth.h> to use the
above.
- Avoids 64-bit multiplies in hash_64() on 32-bit platforms. Two
32-bit multiplies will do well enough.
- Rids the world of the bad hash multipliers in hash_32.
This finishes the job started in commit 689de1d6ca ("Minimal
fix-up of bad hashing behavior of hash_64()")
The vast majority of Linux architectures have hardware support for
32x32-bit multiply and so derive no benefit from "simplified"
multipliers.
The few processors that do not (68000, h8/300 and some models of
Microblaze) have arch-specific implementations added. Those
patches are last in the series.
- Overhauls the dcache hash mixing.
The patch in commit 0fed3ac866 ("namei: Improve hash mixing if
CONFIG_DCACHE_WORD_ACCESS") was an off-the-cuff suggestion.
Replaced with a much more careful design that's simultaneously
faster and better. (My own invention, as there was nothing suitable
in the literature I could find. Comments welcome!)
- Modify the hash_name() loop to skip the initial HASH_MIX(). This
would let us salt the hash if we ever wanted to.
- Sort out partial_name_hash().
The hash function is declared as using a long state, even though
it's truncated to 32 bits at the end and the extra internal state
contributes nothing to the result. And some callers do odd things:
- fs/hfs/string.c only allocates 32 bits of state
- fs/hfsplus/unicode.c uses it to hash 16-bit unicode symbols not bytes
- Modify bytemask_from_count to handle inputs of 1..sizeof(long)
rather than 0..sizeof(long)-1. This would simplify users other
than full_name_hash.
Special thanks to Bruce Fields for testing and finding bugs in v1. (I
learned some humbling lessons about "obviously correct" code.)
On the arch-specific front, the m68k assembly has been tested in a
standalone test harness, I've been in contact with the Microblaze
maintainers who mostly don't care, as the hardware multiplier is never
omitted in real-world applications, and I haven't heard anything from
the H8/300 world"
* 'hash' of git://ftp.sciencehorizons.net/linux:
h8300: Add <asm/hash.h>
microblaze: Add <asm/hash.h>
m68k: Add <asm/hash.h>
<linux/hash.h>: Add support for architecture-specific functions
fs/namei.c: Improve dcache hash function
Eliminate bad hash multipliers from hash_32() and hash_64()
Change hash_64() return value to 32 bits
<linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string()
fs/namei.c: Add hashlen_string() function
Pull out string hash to <linux/stringhash.h>
This will improve the performance of hash_32() and hash_64(), but due
to the complete lack of multi-bit shift instructions on H8, performance will
still be bad in surrounding code.
Designing H8-specific hash algorithms to work around that is a separate
project. (But if the maintainers would like to get in touch...)
Signed-off-by: George Spelvin <linux@sciencehorizons.net>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: uclinux-h8-devel@lists.sourceforge.jp
Microblaze is an FPGA soft core that can be configured in various ways.
If it is configured without a multiplier, the standard __hash_32()
will require a call to __mulsi3, which is a slow software loop.
Instead, use a shift-and-add sequence for the constant multiply.
GCC knows how to do this, but it's not as clever as some.
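To illustrate the idea with a deliberately naive sketch (one shifted term
per set bit of the constant; the patch's actual sequence is a much
shorter, factored chain):

/* Multiply by GOLDEN_RATIO_32 = 0x61C88647 using only shifts and adds.
 * The constant has bits 0,1,2,6,9,10,15,19,22,23,24,29,30 set. */
static inline unsigned int mul_golden_ratio_32(unsigned int x)
{
	return  x        + (x << 1)  + (x << 2)  + (x << 6)  +
	       (x << 9)  + (x << 10) + (x << 15) + (x << 19) +
	       (x << 22) + (x << 23) + (x << 24) + (x << 29) +
	       (x << 30);
}

A good chain shares intermediate results between terms, which is where
the real win over this naive form comes from.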
Signed-off-by: George Spelvin <linux@sciencehorizons.net>
Cc: Alistair Francis <alistair.francis@xilinx.com>
Cc: Michal Simek <michal.simek@xilinx.com>
This provides a multiply by constant GOLDEN_RATIO_32 = 0x61C88647
for the original mc68000, which lacks a 32x32-bit multiply instruction.
Yes, the amount of optimization effort put in is excessive. :-)
Shift-add chain found by Yevgen Voronenko's Hcub algorithm at
http://spiral.ece.cmu.edu/mcm/gen.html
Signed-off-by: George Spelvin <linux@sciencehorizons.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Philippe De Muyter <phdm@macq.eu>
Cc: linux-m68k@lists.linux-m68k.org
This is just the infrastructure; there are no users yet.
This is modelled on CONFIG_ARCH_RANDOM; a CONFIG_ symbol declares
the existence of <asm/hash.h>.
That file may define its own versions of various functions, and define
HAVE_* symbols (no CONFIG_ prefix!) to suppress the generic ones.
Included is a self-test (in lib/test_hash.c) that verifies the basics.
It is NOT in general required that the arch-specific functions compute
the same thing as the generic, but if a HAVE_* symbol is defined with
the value 1, then equality is tested.
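A rough sketch of what such an arch header can look like (the symbol and
function names here are assumptions for illustration, not necessarily the
exact ones introduced by this series):

/* hypothetical arch/<arch>/include/asm/hash.h */
#define HAVE_ARCH__HASH_32 1	/* value 1: the self-test checks equality
				 * against the generic implementation */

static inline unsigned int __hash_32(unsigned int x)
{
	/* An arch-specific shift-and-add sequence would go here; as a
	 * placeholder this is just the generic golden-ratio multiply. */
	return x * 0x61C88647u;
}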
Signed-off-by: George Spelvin <linux@sciencehorizons.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Philippe De Muyter <phdm@macq.eu>
Cc: linux-m68k@lists.linux-m68k.org
Cc: Alistair Francis <alistai@xilinx.com>
Cc: Michal Simek <michal.simek@xilinx.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: uclinux-h8-devel@lists.sourceforge.jp
MicroMIPS kernels may be expected to run on microMIPS-only cores which
don't support the normal MIPS instruction set, so be sure to pass the
-mmicromips flag through to the VDSO cflags.
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.4.x-
Patchwork: https://patchwork.linux-mips.org/patch/13349/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In microMIPS kernels, handle_signal() sets the isa16 mode bit in the
vdso address so that the sigreturn trampolines (which are offset from
the VDSO) get executed as microMIPS.
However commit ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
changed the offsets to come from the VDSO image, which already have the
isa16 mode bit set correctly since they're extracted from the VDSO
shared library symbol table.
Drop the isa16 mode bit handling from handle_signal() to fix sigreturn
for cores which support both microMIPS and normal MIPS. This doesn't fix
microMIPS-only cores, since the VDSO is still built for normal MIPS, but
that's a separate problem.
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.4.x-
Patchwork: https://patchwork.linux-mips.org/patch/13348/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Here is the quote from [1]:
The unit-address must match the first address specified
in the reg property of the node. If the node has no reg property,
the @ and unit-address must be omitted and the node-name alone
differentiates the node from other nodes at the same level
This patch adjusts MIPS dts-files and devicetree binding
documentation in accordance with [1].
[1] Power.org(tm) Standard for Embedded Power Architecture(tm)
Platform Requirements (ePAPR). Version 1.1 – 08 April 2011.
Chapter 2.2.1.1 Node Name Requirements
Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: linux-mips@linux-mips.org
Cc: devicetree@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13345/
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Avoid an aliasing issue causing a build error in VDSO:
In file included from include/linux/srcu.h:34:0,
from include/linux/notifier.h:15,
from ./arch/mips/include/asm/uprobes.h:9,
from include/linux/uprobes.h:61,
from include/linux/mm_types.h:13,
from ./arch/mips/include/asm/vdso.h:14,
from arch/mips/vdso/vdso.h:27,
from arch/mips/vdso/gettimeofday.c:11:
include/linux/workqueue.h: In function 'work_static':
include/linux/workqueue.h:186:2: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
return *work_data_bits(work) & WORK_STRUCT_STATIC;
^
cc1: all warnings being treated as errors
make[2]: *** [arch/mips/vdso/gettimeofday.o] Error 1
with a CONFIG_DEBUG_OBJECTS_WORK configuration and GCC 5.2.0. Include
`-fno-strict-aliasing' among the compiler options used, as required for
kernel code, fixing a problem present since the introduction of VDSO
with commit ebb5e78cc6 ("MIPS: Initial implementation of a VDSO").
Thanks to Tejun for diagnosing this properly!
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Fixes: ebb5e78cc6 ("MIPS: Initial implementation of a VDSO")
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.3+
Patchwork: https://patchwork.linux-mips.org/patch/13357/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Allow KASLR to be selected on Pistachio based systems. Tested on a
Creator Ci40.
Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13356/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On certain MIPS32 devices, the ftrace tracer "function_graph" uses
__lshrdi3() during the capturing of trace data. ftrace then attempts to
trace __lshrdi3() which leads to infinite recursion and a stack overflow.
Fix this by marking __lshrdi3() as notrace. Mark the other compiler
intrinsics as notrace in case the compiler decides to use them in the
ftrace path.
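A hedged sketch of the shape of such a helper (types and endian handling
follow the usual libgcc pattern; the kernel's own file differs in detail,
and notrace normally comes from <linux/compiler.h>):

#define notrace __attribute__((no_instrument_function))

union dwords {
	long long ll;
	struct {
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
		unsigned int high, low;
#else
		unsigned int low, high;
#endif
	} s;
};

/* 64-bit logical right shift for a 32-bit target, built from 32-bit
 * operations only.  notrace keeps ftrace from instrumenting a helper
 * that the tracer itself may end up calling. */
long long notrace __lshrdi3(long long u, int b)
{
	union dwords uu, w;
	int bm;

	if (b == 0)
		return u;

	uu.ll = u;
	bm = 32 - b;

	if (bm <= 0) {
		w.s.high = 0;
		w.s.low = uu.s.high >> -bm;
	} else {
		w.s.high = uu.s.high >> b;
		w.s.low = (uu.s.low >> b) | (uu.s.high << bm);
	}

	return w.ll;
}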
Signed-off-by: Harvey Hunt <harvey.hunt@imgtec.com>
Cc: <linux-mips@linux-mips.org>
Cc: <linux-kernel@vger.kernel.org>
Cc: <stable@vger.kernel.org> # 4.2.x-
Patchwork: https://patchwork.linux-mips.org/patch/13354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The Hardware page Table Walker (HTW) is being misconfigured on 64-bit
kernels. The PWSize.PS (pointer size) bit determines whether pointers
within directories are loaded as 32-bit or 64-bit addresses, but was
never being set to 1 for 64-bit kernels where the unsigned long in pgd_t
is 64-bits wide.
This actually reduces rather than improves performance when the HTW is
enabled on P6600, since the HTW is initiated frequently but the walks are
all aborted, I think due to bad intermediate pointers.
Since we were already taking the width of the PTEs into account by
setting PWSize.PTEW, which is the left shift applied to the page table
index *in addition to* the native pointer size, we also need to reduce
PTEW by 1 when PS=1. This is done by calculating PTEW based on the
relative size of pte_t compared to pgd_t.
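In code form, the arithmetic described above is roughly (a sketch only;
pte_t/pgd_t come from <asm/pgtable.h>, ilog2() from <linux/log2.h>, and
the real configuration places the result into the PWSize.PTEW field):

/* PS picks the native pointer width inside the directories; PTEW only
 * has to cover how much wider a pte_t is than a directory pointer, so
 * it naturally drops by one when PS goes from 0 (4-byte) to 1 (8-byte). */
static unsigned int htw_ptew_sketch(void)
{
	return ilog2(sizeof(pte_t)) - ilog2(sizeof(pgd_t));
}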
Finally in order for the HTW to be used when PS=1, the appropriate
XK/XS/XU bits corresponding to the different 64-bit segments need to be
set in PWCtl. We enable only XU for now to enable walking for XUSeg.
Supporting walking for XKSeg would be a bit more involved so is left for
a future patch. It would either require the use of a per-CPU top level
base directory if supported by the HTW (a bit like pgd_current but with
a second entry pointing at swapper_pg_dir), or the HTW would prepend bit
63 of the address to the global directory index which doesn't really
match how we split user and kernel page directories.
Fixes: cab25bc753 ("MIPS: Extend hardware table walking support to MIPS64")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13364/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add field definitions for some of the 64-bit specific Hardware page
Table Walker (HTW) register fields in PWSize and PWCtl, in preparation
for fixing the 64-bit HTW configuration.
Also print these fields out along with the others in print_htw_config().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Simplify the DSP instruction wrapper macros which use explicit encodings
for microMIPS and normal MIPS by using the new encoding macros and
removing duplication.
To me this makes it easier to read since it is much shorter, but it also
ensures .insn is used, preventing objdump from disassembling the microMIPS
code as normal MIPS.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13314/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Hardcoded MIPS instruction encodings are provided for tlbinvf, mfhc0 &
mthc0 instructions, but microMIPS encodings are missing. I doubt any
microMIPS cores exist at present which support these instructions, but
the microMIPS encodings exist, and microMIPS cores may support them in
the future. Add the missing microMIPS encodings using the new macros.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13313/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When the toolchain doesn't support MSA we encode MSA instructions
explicitly in assembly. Unfortunately we use .word for both MIPS and
microMIPS encodings which is wrong, since 32-bit microMIPS instructions
are made up from a pair of halfwords.
- The most significant halfword always comes first, so for little endian
builds the halves will be emitted in the wrong order.
- 32-bit alignment isn't guaranteed, so the assembler may insert a
16-bit nop instruction to pad the instruction stream to a 32-bit
boundary.
Use the new instruction encoding macros to encode microMIPS MSA
instructions correctly.
Fixes: d96cc3d1ec ("MIPS: Add microMIPS MSA support.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <Paul.Burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13312/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Toolchains may be used which support microMIPS but not VZ instructions
(i.e. binutils 2.22 & 2.23), so extend the explicitly encoded versions of
the guest COP0 register & guest TLB access macros to support microMIPS
encodings too, using the new macros.
This prevents non-microMIPS instructions from being executed in microMIPS
mode during CPU probe on cores supporting VZ (e.g. M5150), which would
cause reserved instruction exceptions early during boot.
Fixes: bad50d7925 ("MIPS: Fix VZ probe gas errors with binutils <2.24")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13311/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
To allow simplification of macros which use inline assembly to
explicitly encode instructions, add a few simple abstractions to
mipsregs.h which expand to specific microMIPS or normal MIPS encodings
depending on what type of kernel is being built:
_ASM_INSN_IF_MIPS(_enc) : Emit a 32bit MIPS instruction if microMIPS is
not enabled.
_ASM_INSN32_IF_MM(_enc) : Emit a 32bit microMIPS instruction if enabled.
_ASM_INSN16_IF_MM(_enc) : Emit a 16bit microMIPS instruction if enabled.
The macros can be used one after another since the MIPS / microMIPS
macros are mutually exclusive, for example:
__asm__ __volatile__(
".set push\n\t"
".set noat\n\t"
"# mfgc0 $1, $%1, %2\n\t"
_ASM_INSN_IF_MIPS(0x40610000 | %1 << 11 | %2)
_ASM_INSN32_IF_MM(0x002004fc | %1 << 16 | %2 << 11)
"move %0, $1\n\t"
".set pop"
: "=r" (__res)
: "i" (source), "i" (sel));
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13310/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
As noticed by Sergei in the discussion of Andrea Gelmini's patch series.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reported-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>