Merge tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull VLA removal from Kees Cook:
"Globally warn on VLA use.
This turns on "-Wvla" globally now that the last few trees with their
VLA removals have landed (crypto, block, net, and powerpc).
Arnd mentioned that there may be a couple more VLAs hiding in
hard-to-find randconfigs, but nothing big has shaken out in the last
month or so in linux-next.
We should be basically VLA-free now! Wheee. :)
Summary:
- Remove unused fallback for BUILD_BUG_ON (which technically contains
a VLA)
- Lift -Wvla to the top-level Makefile"
* tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
Makefile: Globally enable VLA warning
compiler.h: give up __compiletime_assert_fallback()
Merge tag 'kbuild-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild updates from Masahiro Yamada:
- optimize kallsyms slightly
- remove check for old CFLAGS usage
- add some compiler flags unconditionally instead of evaluating
$(call cc-option,...)
- fix variable shadowing in host tools
- refactor scripts/mkmakefile
- refactor various makefiles
* tag 'kbuild-v4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
modpost: Create macro to avoid variable shadowing
ASN.1: Remove unnecessary shadowed local variable
kbuild: use 'else ifeq' for checksrc to improve readability
kbuild: remove unneeded link_multi_deps
kbuild: add -Wno-unused-but-set-variable flag unconditionally
kbuild: add -Wdeclaration-after-statement flag unconditionally
kbuild: add -Wno-pointer-sign flag unconditionally
modpost: remove leftover symbol prefix handling for module device table
kbuild: simplify command line creation in scripts/mkmakefile
kbuild: do not pass $(objtree) to scripts/mkmakefile
kbuild: remove user ID check in scripts/mkmakefile
kbuild: remove VERSION and PATCHLEVEL from $(objtree)/Makefile
kbuild: add --include-dir flag only for out-of-tree build
kbuild: remove dead code in cmd_files calculation in top Makefile
kbuild: hide most of targets when running config or mixed targets
kbuild: remove old check for CFLAGS use
kbuild: prefix Makefile.dtbinst path with $(srctree) unconditionally
kallsyms: remove left-over Blackfin code
kallsyms: reduce size a little on 64-bit
Merge tag 'linux-kselftest-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull kselftest updates from Shuah Khan:
"This Kselftest update for Linux 4.20-rc1 consists of:
- Improvements to ftrace test suite from Masami Hiramatsu.
- Color coded ftrace PASS / FAIL results from Steven Rostedt (VMware)
to improve readability of reports.
- watchdog Fixes and enhancement to add gettimeout and get|set
pretimeout options from Jerry Hoemann.
- Several fixes to warnings and spelling etc"
* tag 'linux-kselftest-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (40 commits)
selftests/ftrace: Strip escape sequences for log file
selftests/ftrace: Use colored output when available
selftests: fix warning: "_GNU_SOURCE" redefined
selftests: kvm: Fix -Wformat warnings
selftests/ftrace: Add color to the PASS / FAIL results
kvm: selftests: fix spelling mistake "Insufficent" -> "Insufficient"
selftests: gpio: Fix OUTPUT directory in Makefile
selftests: gpio: restructure Makefile
selftests: watchdog: Fix ioctl SET* error paths to take oneshot exit path
selftests: watchdog: Add gettimeout and get|set pretimeout
selftests: watchdog: Fix error message.
selftests: watchdog: fix message when /dev/watchdog open fails
selftests/ftrace: Add ftrace cpumask testcase
selftests/ftrace: Add wakeup_rt tracer testcase
selftests/ftrace: Add wakeup tracer testcase
selftests/ftrace: Add stacktrace ftrace filter command testcase
selftests/ftrace: Add trace_pipe testcase
selftests/ftrace: Add function filter on module testcase
selftests/ftrace: Add max stack tracer testcase
selftests/ftrace: Add function profiling stat testcase
...
Fix the synthetic event test case so that it removes events correctly.
Redirecting a command to the synthetic_events file without append
mode first cleans up all existing events and then executes (parses)
the command. This means a "delete event" command always fails to find
its target event.
This test used to pass only because of a previous synthetic-event bug
that failed to return -ENOENT when the event to delete was not found.
With that bug fixed, this test fails, because the test itself is buggy.
Fix the test by trying to delete each event right after adding it, and
by using append-mode redirection ('>>') instead of normal
redirection ('>').
Link: http://lkml.kernel.org/r/154013452832.25576.2305459545429386517.stgit@devbox
Acked-by: Shuah Khan <shuah@kernel.org>
Acked-by: Tom Zanussi <zanussi@linux.intel.com>
Tested-by: Tom Zanussi <zanussi@linux.intel.com>
Cc: Tom Zanussi <zanussi@kernel.org>
Cc: Tom Zanussi <tom.zanussi@linux.intel.com>
Cc: Rajvi Jingar <rajvi.jingar@intel.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: stable@vger.kernel.org
Fixes: f06eec4d0f ('selftests: ftrace: Add inter-event hist triggers testcases')
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Pull XArray conversion from Matthew Wilcox:
"The XArray provides an improved interface to the radix tree data
structure, providing locking as part of the API, specifying GFP flags
at allocation time, eliminating preloading, less re-walking the tree,
more efficient iterations and not exposing RCU-protected pointers to
its users.
This patch set
1. Introduces the XArray implementation
2. Converts the pagecache to use it
3. Converts memremap to use it
The page cache is the most complex and important user of the radix
tree, so converting it was most important. Converting the memremap
code removes the only other user of the multiorder code, which allows
us to remove the radix tree code that supported it.
I have 40+ followup patches to convert many other users of the radix
tree over to the XArray, but I'd like to get this part in first. The
other conversions haven't been in linux-next and aren't suitable for
applying yet, but you can see them in the xarray-conv branch if you're
interested"
* 'xarray' of git://git.infradead.org/users/willy/linux-dax: (90 commits)
radix tree: Remove multiorder support
radix tree test: Convert multiorder tests to XArray
radix tree tests: Convert item_delete_rcu to XArray
radix tree tests: Convert item_kill_tree to XArray
radix tree tests: Move item_insert_order
radix tree test suite: Remove multiorder benchmarking
radix tree test suite: Remove __item_insert
memremap: Convert to XArray
xarray: Add range store functionality
xarray: Move multiorder_check to in-kernel tests
xarray: Move multiorder_shrink to kernel tests
xarray: Move multiorder account test in-kernel
radix tree test suite: Convert iteration test to XArray
radix tree test suite: Convert tag_tagged_items to XArray
radix tree: Remove radix_tree_clear_tags
radix tree: Remove radix_tree_maybe_preload_order
radix tree: Remove split/join code
radix tree: Remove radix_tree_update_node_t
page cache: Finish XArray conversion
dax: Convert page fault handlers to XArray
...
Just like with normal GRO processing, we have to initialize
skb->next to NULL when we unlink overflow packets from the
GRO hash lists.
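A minimal sketch of that pattern (assumed shape of the fix, not the
verbatim net/core/dev.c change): when an skb is spliced off the
list_head-based GRO hash chain, its legacy skb->next linkage is reset
before the skb is handed up the stack.

  /* Unlink an overflow skb from the GRO hash list and sever the
   * legacy NULL-terminated linkage before completing GRO. */
  list_del(&skb->list);
  skb->next = NULL;
  napi_gro_complete(skb);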
Fixes: d4546c2509 ("net: Convert GRO SKB handling to list_head.")
Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Signed-off-by: David S. Miller <davem@davemloft.net>
Create DEF_FIELD_ADDR_VAR as a more generic version of the DEF_FIELD_ADDR
macro, allowing use of a variable name other than the struct element name.
Also, define DEF_FIELD_ADDR as a specific usage of DEF_FIELD_ADDR_VAR in
which the variable name is the same as the struct element name.
Then, make use of DEF_FIELD_ADDR_VAR to create a variable of another name,
in order to avoid variable shadowing.
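A self-contained illustration of the macro pattern (simplified with
offsetof(); the real modpost macros compute offsets from build-time
tables, and the struct here is a stand-in): the _VAR form takes an
explicit variable name, and the plain form defaults the name to the
field name, so a caller in a nested scope can pick a non-shadowing name.

  #include <stddef.h>
  #include <stdio.h>

  struct dev_id { unsigned int vendor, device; };

  /* Declare a pointer named 'var' to 'field' of the struct at 'm'. */
  #define DEF_FIELD_ADDR_VAR(m, type, field, var) \
          typeof(((struct type *)0)->field) *var = \
                  (void *)(m) + offsetof(struct type, field)
  /* Common case: the variable is named after the field itself. */
  #define DEF_FIELD_ADDR(m, type, field) \
          DEF_FIELD_ADDR_VAR(m, type, field, field)

  int main(void)
  {
          struct dev_id id = { 0x8086, 0x1234 };
          DEF_FIELD_ADDR(&id, dev_id, vendor);
          /* A distinct name avoids shadowing an outer 'device'. */
          DEF_FIELD_ADDR_VAR(&id, dev_id, device, inner_device);
          printf("%x %x\n", *vendor, *inner_device);
          return 0;
  }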
Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Remove an unnecessary shadowed local variable (start).
It was used only once, and with the same value it had before
the if block.
Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
I believe the following change will fix the cache/TLB inconsistency
observed by Meelis. After changing the page table entries, we need to
flush the cache and TLB to ensure that we don't have any stale PTE
values in the cache or TLB.
The alternative patching is done after all CPUs are running. Thus, we
need to flush the whole cache and TLB.
I included the init section in the range modified by map_pages as
suggested by Helge. Some routines in the init section may require
patching.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Return an -ENOENT error if the target synthetic event does not exist.
This makes the operation failure visible to the user, as below:
# echo 'wakeup_latency u64 lat; pid_t pid;' > synthetic_events
# echo '!wakeup' >> synthetic_events
sh: write error: No such file or directory
Link: http://lkml.kernel.org/r/154013449986.25576.9487131386597290172.stgit@devbox
Acked-by: Tom Zanussi <zanussi@linux.intel.com>
Tested-by: Tom Zanussi <zanussi@linux.intel.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Rajvi Jingar <rajvi.jingar@intel.com>
Cc: stable@vger.kernel.org
Fixes: 4b147936fa ('tracing: Add support for 'synthetic' events')
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
.. even when that "default y" is hidden syntactically as a
default !EXPERT
it's wrong.
The only reason something should be 'default y' is if it used to be
built-in, and it was made configurable, and the 'default y' is just
retaining the status quo.
Alternatively, the hardware for the driver has become _so_ common that
it really makes sense for everybody to build it. Finally, one possible
reason for 'default y' is because the option is not enabling any new
code at all, but is just enabling other options (the networking people
do this for vendor options, for example, so that you can disable whole
vendors at a time).
Clearly, none of these cases hold for the BigBen Interactive Kids'
gamepad, and HID_BIGBEN_FF should thus most definitely not default
to on for everybody.
Cc: Hanno Zulla <kontakt@hanno.de>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'linux-watchdog-4.20-rc1' of git://www.linux-watchdog.org/linux-watchdog
Pull watchdog updates from Wim Van Sebroeck:
- Add Armada 37xx CPU watchdog
- w83627hf_wdt: Add Support for NCT6796D, NCT6797D, NCT6798D
- hpwdt: several improvements
- renesas_wdt: SPDX identifiers, stop when unregistering, support for
R7S9210
- rza_wdt: SPDX identifiers, support longer timeouts
- core: fix null pointer dereference when releasing cdev
- iTCO_wdt: Drop option vendorsupport=2
- sama5d4: fix timeout-sec usage
- lantiq_wdt: convert to watchdog framework
- several small fixes
* tag 'linux-watchdog-4.20-rc1' of git://www.linux-watchdog.org/linux-watchdog: (30 commits)
watchdog: ts4800: release syscon device node in ts4800_wdt_probe()
watchdog: armada_37xx_wdt: use do_div for u64 division
documentation: watchdog: add documentation for armada-37xx-wdt
dt-bindings: watchdog: Document armada-37xx-wdt binding
watchdog: Add support for Armada 37xx CPU watchdog
dt-bindings: watchdog: add mpc8xxx-wdt support
watchdog: mpc8xxx: provide boot status
MAINTAINERS: Fix file pattern for MEN Z069 watchdog driver
dt-bindings: watchdog: renesas-wdt: Add support for R7S9210
watchdog: rza_wdt: Support longer timeouts
watchdog: hpwdt: Disable PreTimeout when Timeout is smaller
watchdog: w83627hf_wdt: Support NCT6796D, NCT6797D, NCT6798D
watchdog: mpc8xxx: use dev_xxxx() instead of pr_xxxx()
watchdog: lantiq: add get_timeleft callback
watchdog: lantiq: Convert to watchdog_device
watchdog: lantiq: update register names to better match spec
watchdog: sama5d4: fix timeout-sec usage
watchdog: fix a small number of "watchog" typos in comments
watchdog: rza_wdt: convert to SPDX identifiers
watchdog: iTCO_wdt: Remove unused hooks
...
Merge tag 'rtc-4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux
Pull RTC updates from Alexandre Belloni:
"This cycle, there were mostly non urgent fixes in drivers. I also
finally unexported the non managed registration.
Subsystem:
- non devm managed registration is now removed from the driver API
- all the unnecessary rtc_valid_tm() calls have been removed
Drivers:
- abx80X: watchdog support
- cmos: fix non ACPI support
- sc27xx: fix alarm support
- Remove a possible sysfs race condition for ab8500, ds1307, ds1685,
isl1208
- Fix a possible race condition where an irq handler may be called
before the rtc_device struct is allocated for mt6397, pl030,
menelaus, armada38x"
* tag 'rtc-4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux: (54 commits)
rtc: sc27xx: Always read normal alarm when registering RTC device
rtc: sc27xx: Add check to see if need to enable the alarm interrupt
rtc: sc27xx: Remove interrupts disable and clear in probe()
rtc: sc27xx: Clear SPG value update interrupt status
rtc: sc27xx: Set wakeup capability before registering rtc device
rtc: s35390a: Change buf's type to u8 in s35390a_init
rtc: ds1307: fix ds1339 wakealarm support
rtc: ds1685: simplify getting .driver_data
rtc: m41t80: mark expected switch fall-through
rtc: tegra: Propagate errors from platform_get_irq()
rtc: cmos: Remove the `use_acpi_alarm' module parameter for !ACPI
rtc: cmos: Fix non-ACPI undefined reference to `hpet_rtc_interrupt'
rtc: mv: let the core handle invalid alarms
rtc: vr41xx: switch to rtc_time64_to_tm/rtc_tm_to_time64
rtc: ab8500: remove useless check
rtc: ab8500: let the core handle range
rtc: ab8500: use rtc_add_group
rtc: rs5c348: report error when time is invalid
rtc: rs5c348: remove forward declaration
rtc: rs5c348: remove useless label
...
Merge tag 'led-fix-for-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds
Pull LED fix from Jacek Anaszewski.
* tag 'led-fix-for-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds:
leds: gpio: set led_dat->gpiod pointer for OF defined GPIO leds
Commit 9ee3e06610 ("HID: i2c-hid: override HID descriptors for certain
devices") added a new dmi_system_id quirk table to override certain HID
report descriptors for some systems that lack them.
But the table wasn't properly terminated, causing the dmi matching to
walk off into la-la-land, and starting to treat random data as dmi
descriptor pointers, causing boot-time oopses if you were at all
unlucky.
Terminate the array.
We really should have some way to just statically check that arrays that
should be terminated by an empty entry actually are so. But the HID
people really should have caught this themselves, rather than have me
deal with an oops during the merge window. Tssk, tssk.
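A sketch of the rule being enforced (the entry contents here are
hypothetical; the terminator is the point): dmi_system_id tables are
walked until an all-zero sentinel entry, so every table must end with
one.

  static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
          {
                  .ident = "Example machine needing an override",
                  .matches = {
                          DMI_EXACT_MATCH(DMI_SYS_VENDOR, "VENDOR"),
                          DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MODEL"),
                  },
                  .driver_data = (void *)&example_override,
          },
          { }     /* terminator: dmi matching stops here */
  };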
Cc: Julian Sax <jsbc@gmx.de>
Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The stack tracer traces every function call checking the current stack (in
non interrupt context), looking for the deepest stack, and saving it when it
finds a new max depth. The problem is that it calls save_stack_trace(), and
with the new ORC unwinder, it can skip too much. As it looks at the ip of
the function call in the backtrace to find where it should start, it doesn't
need to skip anything.
The stack trace selftest would fail when the kernel was compiled with the
ORC unwinder enabled. Without skipping functions when doing the stack
trace, it now passes again.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Daniel Borkmann says:
====================
pull-request: bpf 2018-10-27
The following pull-request contains BPF updates for your *net* tree.
The main changes are:
1) Fix toctou race in BTF header validation, from Martin and Wenwen.
2) Fix devmap interface comparison in notifier call which was
neglecting netns, from Taehee.
3) Several fixes in various places, for example, correcting direct
packet access and helper function availability, from Daniel.
4) Fix BPF kselftest config fragment to include af_xdp and sockmap,
from Naresh.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge updates from Andrew Morton:
- a few misc things
- ocfs2 updates
- most of MM
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (132 commits)
hugetlbfs: dirty pages as they are added to pagecache
mm: export add_swap_extent()
mm: split SWP_FILE into SWP_ACTIVATED and SWP_FS
tools/testing/selftests/vm/map_fixed_noreplace.c: add test for MAP_FIXED_NOREPLACE
mm: thp: relocate flush_cache_range() in migrate_misplaced_transhuge_page()
mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()
mm: thp: fix MADV_DONTNEED vs migrate_misplaced_transhuge_page race condition
mm/kasan/quarantine.c: make quarantine_lock a raw_spinlock_t
mm/gup: cache dev_pagemap while pinning pages
Revert "x86/e820: put !E820_TYPE_RAM regions into memblock.reserved"
mm: return zero_resv_unavail optimization
mm: zero remaining unavailable struct pages
tools/testing/selftests/vm/gup_benchmark.c: add MAP_HUGETLB option
tools/testing/selftests/vm/gup_benchmark.c: add MAP_SHARED option
tools/testing/selftests/vm/gup_benchmark.c: allow user specified file
tools/testing/selftests/vm/gup_benchmark.c: fix 'write' flag usage
mm/gup_benchmark.c: add additional pinning methods
mm/gup_benchmark.c: time put_page()
mm: don't raise MEMCG_OOM event due to failed high-order allocation
mm/page-writeback.c: fix range_cyclic writeback vs writepages deadlock
...
Pull networking fixes from David Miller:
"What better way to start off a weekend than with some networking bug
fixes:
1) net namespace leak in dump filtering code of ipv4 and ipv6, fixed
by David Ahern and Bjørn Mork.
2) Handle bad checksums from hardware when using CHECKSUM_COMPLETE
properly in UDP, from Sean Tranchetti.
3) Remove TCA_OPTIONS from policy validation, it turns out we don't
consistently use nested attributes for this across all packet
schedulers. From David Ahern.
4) Fix SKB corruption in cadence driver, from Tristram Ha.
5) Fix broken WoL handling in r8169 driver, from Heiner Kallweit.
6) Fix OOPS in pneigh_dump_table(), from Eric Dumazet"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (28 commits)
net/neigh: fix NULL deref in pneigh_dump_table()
net: allow traceroute with a specified interface in a vrf
bridge: do not add port to router list when receives query with source 0.0.0.0
net/smc: fix smc_buf_unuse to use the lgr pointer
ipv6/ndisc: Preserve IPv6 control buffer if protocol error handlers are called
net/{ipv4,ipv6}: Do not put target net if input nsid is invalid
lan743x: Remove SPI dependency from Microchip group.
drivers: net: remove <net/busy_poll.h> inclusion when not needed
net: phy: genphy_10g_driver: Avoid NULL pointer dereference
r8169: fix broken Wake-on-LAN from S5 (poweroff)
octeontx2-af: Use GFP_ATOMIC under spin lock
net: ethernet: cadence: fix socket buffer corruption problem
net/ipv6: Allow onlink routes to have a device mismatch if it is the default route
net: sched: Remove TCA_OPTIONS from policy
ice: Poll for link status change
ice: Allocate VF interrupts and set queue map
ice: Introduce ice_dev_onetime_setup
net: hns3: Fix for warning uninitialized symbol hw_err_lst3
octeontx2-af: Copy the right amount of memory
net: udp: fix handling of CHECKSUM_COMPLETE packets
...
Pull sparc fixes from David Miller:
"Some more sparc fixups, mostly aimed at getting the allmodconfig build
up and clean again"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
sparc64: Rework xchg() definition to avoid warnings.
sparc64: Export __node_distance.
sparc64: Make corrupted user stacks more debuggable.
Some test systems were experiencing negative huge page reserve counts and
incorrect file block counts. This was traced to /proc/sys/vm/drop_caches
removing clean pages from hugetlbfs file pagecaches. When code outside
hugetlbfs explicitly removes the pages, the appropriate accounting is not
performed.
This can be recreated as follows:
fallocate -l 2M /dev/hugepages/foo
echo 1 > /proc/sys/vm/drop_caches
fallocate -l 2M /dev/hugepages/foo
grep -i huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 2048
HugePages_Free: 2047
HugePages_Rsvd: 18446744073709551615
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 4194304 kB
ls -lsh /dev/hugepages/foo
4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
To address this issue, dirty pages as they are added to pagecache. This
can easily be reproduced with fallocate as shown above. Read faulted
pages will eventually end up being marked dirty. But there is a window
where they are clean and could be impacted by code such as drop_caches.
So, just dirty them all as they are added to the pagecache.
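A sketch of that approach (assumed shape, not the exact diff): the page
is marked dirty at the moment it enters the page cache, so clean-page
removal paths such as drop_caches never see it as removable.

  int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
                             pgoff_t idx)
  {
          int err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);

          if (err)
                  return err;
          /* Dirty immediately: hugetlbfs reserve/block accounting is
           * only torn down by hugetlbfs-aware removal paths. */
          set_page_dirty(page);
          return 0;
  }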
Link: http://lkml.kernel.org/r/b5be45b8-5afe-56cd-9482-28384699a049@oracle.com
Fixes: 6bda666a03 ("hugepages: fold find_or_alloc_pages into huge_no_page()")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Btrfs currently does not support swap files because swap's use of bmap
does not work with copy-on-write and multiple devices. See 35054394c4
("Btrfs: stop providing a bmap operation to avoid swapfile corruptions").
However, the swap code has a mechanism for the filesystem to manually add
swap extents using add_swap_extent() from the ->swap_activate() aop.
iomap has done this since 67482129cd ("iomap: add a swapfile activation
function"). Btrfs will do the same in a later patch, so export
add_swap_extent().
Link: http://lkml.kernel.org/r/bb1208575e02829aae51b538709476964f97b1ea.1536704650.git.osandov@fb.com
Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David Sterba <dsterba@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The SWP_FILE flag serves two purposes: to make swap_{read,write}page() go
through the filesystem, and to make swapoff() call ->swap_deactivate().
For Btrfs, we want the latter but not the former, so split this flag into
two. This makes us always call ->swap_deactivate() if ->swap_activate()
succeeded, not just if it didn't add any swap extents itself.
This also resolves the issue of the very misleading name of SWP_FILE,
which is only used for swap files over NFS.
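A sketch of the resulting logic under the split (assumed shape, not the
verbatim mm/ diffs): swapoff keys deactivation off SWP_ACTIVATED, while
only SWP_FS routes page IO through the filesystem.

  /* swapoff side: deactivate whenever activation succeeded, even if
   * ->swap_activate() added all the extents itself (the Btrfs case). */
  if (sis->flags & SWP_ACTIVATED) {
          const struct address_space_operations *aops =
                  sis->swap_file->f_mapping->a_ops;

          if (aops->swap_deactivate)
                  aops->swap_deactivate(sis->swap_file);
  }

  /* IO side: only swap over NFS goes through the filesystem. */
  if (sis->flags & SWP_FS)
          return mapping->a_ops->readpage(sis->swap_file, page);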
Link: http://lkml.kernel.org/r/6d63d8668c4287a4f6d203d65696e96f80abdfc7.1536704650.git.osandov@fb.com
Signed-off-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: David Sterba <dsterba@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a test for MAP_FIXED_NOREPLACE, based on some code originally by Jann
Horn. This would have caught the overlap bug reported by Daniel Micay.
I originally suggested to Michal that we create MAP_FIXED_NOREPLACE, but
instead of writing a selftest I spent my time bike-shedding whether it
should be called MAP_FIXED_SAFE/NOCLOBBER/WEAK/NEW .. mea culpa.
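A minimal standalone version of what such a test checks (the selftest
itself is more thorough): a MAP_FIXED_NOREPLACE mapping over an existing
range must fail with EEXIST instead of silently clobbering it.

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #ifndef MAP_FIXED_NOREPLACE
  #define MAP_FIXED_NOREPLACE 0x100000    /* for pre-4.17 headers */
  #endif

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);
          void *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (p == MAP_FAILED)
                  return 1;

          /* Must refuse to overlap the mapping we just created. */
          void *q = mmap(p, page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
                         -1, 0);
          if (q == MAP_FAILED && errno == EEXIST)
                  printf("ok: overlap refused (%s)\n", strerror(errno));
          else
                  printf("FAIL: q=%p errno=%d\n", q, errno);
          return 0;
  }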
Link: http://lkml.kernel.org/r/20181013133929.28653-1-mpe@ellerman.id.au
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Jann Horn <jannh@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Jason Evans <jasone@google.com>
Cc: David Goldblatt <davidtgoldblatt@gmail.com>
Cc: Daniel Micay <danielmicay@gmail.com>
Cc: Shuah Khan <shuah@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
There should be no cache left by the time we overwrite the old transhuge
pmd with the new one. It's already too late to flush through the virtual
address because we already copied the page data to the new physical
address.
So flush the cache before the data copy.
Also delete the "end" variable to shutoff a "unused variable" warning on
x86 where flush_cache_range() is a noop.
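A sketch of the reordering (simplified from the description above):
flush while the old virtual mapping still points at the data, then copy.

  /* Flush through the virtual address first, while the cached data
   * can still be reached via the old mapping... */
  flush_cache_range(vma, mmun_start, mmun_end);
  /* ...and only then copy the data to the new physical page. */
  migrate_page_copy(new_page, page);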
Link: http://lkml.kernel.org/r/20181015202311.7209-1-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
change_huge_pmd() after arming the numa/protnone pmd doesn't flush the TLB
right away. do_huge_pmd_numa_page() flushes the TLB before calling
migrate_misplaced_transhuge_page(). By the time do_huge_pmd_numa_page()
runs some CPU could still access the page through the TLB.
change_huge_pmd() before arming the numa/protnone transhuge pmd calls
mmu_notifier_invalidate_range_start(). So there's no need of
mmu_notifier_invalidate_range_start()/mmu_notifier_invalidate_range_only_end()
sequence in migrate_misplaced_transhuge_page() too, because by the time
migrate_misplaced_transhuge_page() runs, the pmd mapping has already been
invalidated in the secondary MMUs. It has to be, because if a secondary
MMU could still write to the page, migrate_page_copy() would lose data.
However an explicit mmu_notifier_invalidate_range() is needed before
migrate_misplaced_transhuge_page() starts copying the data of the
transhuge page or the below can happen for MMU notifier users sharing the
primary MMU pagetables and only implementing ->invalidate_range:
CPU0                        CPU1                         GPU sharing linux pagetables
                                                         using only ->invalidate_range
-----------                 ------------                 ---------
                                                         GPU secondary MMU writes to the page
                                                         mapped by the transhuge pmd
change_pmd_range()
mmu..._range_start()
->invalidate_range_start() noop
change_huge_pmd()
set_pmd_at(numa/protnone)
pmd_unlock()
                            do_huge_pmd_numa_page()
                            CPU TLB flush globally (1)
                            CPU cannot write to page
                            migrate_misplaced_transhuge_page()
                                                         GPU writes to the page...
                            migrate_page_copy()
                                                         ...GPU stops writing to the page
                            CPU TLB flush (2)
mmu..._range_end() (3)
->invalidate_range_stop() noop
->invalidate_range()
                                                         GPU secondary MMU is invalidated
                                                         and cannot write to the page anymore
                                                         (too late)
Just like we need a CPU TLB flush (1) because the TLB flush (2) arrives
too late, we also need a mmu_notifier_invalidate_range() before calling
migrate_misplaced_transhuge_page(), because the ->invalidate_range() in
(3) also arrives too late.
This requirement is the result of the lazy optimization in
change_huge_pmd() that releases the pmd_lock without first flushing the
TLB and without first calling mmu_notifier_invalidate_range().
Even converting the removed mmu_notifier_invalidate_range_only_end() into
a mmu_notifier_invalidate_range_end() would not have been enough to fix
this, because it runs after migrate_page_copy().
After the hugepage data copy is done migrate_misplaced_transhuge_page()
can proceed and call set_pmd_at without having to flush the TLB nor any
secondary MMUs because the secondary MMU invalidate, just like the CPU TLB
flush, has to happen before the migrate_page_copy() is called or it would
be a bug in the first place (and it was for drivers using
->invalidate_range()).
KVM is unaffected because it doesn't implement ->invalidate_range().
The standard PAGE_SIZEd migrate_misplaced_page is less accelerated and
uses the generic migrate_pages which transitions the pte from
numa/protnone to a migration entry in try_to_unmap_one() and flushes TLBs
and all mmu notifiers there before copying the page.
Link: http://lkml.kernel.org/r/20181013002430.698-3-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "migrate_misplaced_transhuge_page race conditions".
Aaron found a new instance of the THP MADV_DONTNEED race against
pmdp_clear_flush* variants, that was apparently left unfixed.
While looking into the race found by Aaron, I may have found two more
issues in migrate_misplaced_transhuge_page.
These race conditions would not cause kernel instability, but they'd
corrupt userland data or leave data non zero after MADV_DONTNEED.
I did only minor testing, and I don't expect to be able to reproduce this
(especially the lack of ->invalidate_range before migrate_page_copy,
requires the latest iommu hardware or infiniband to reproduce). The last
patch is noop for x86 and it needs further review from maintainers of
archs that implement flush_cache_range() (not in CC yet).
To avoid confusion: it is not the first patch that introduces the bug
fixed in the second patch. Even before the removal of
pmdp_huge_clear_flush_notify, that _notify suffix was invoked only after
migrate_page_copy had already run.
This patch (of 3):
This is a corollary of ced108037c ("thp: fix MADV_DONTNEED vs. numa
balancing race"), 58ceeb6bec ("thp: fix MADV_DONTNEED vs. MADV_FREE
race") and 5b7abeae3a ("thp: fix MADV_DONTNEED vs clear soft dirty
race").
When the above three fixes were posted Dave asked
https://lkml.kernel.org/r/929b3844-aec2-0111-fef7-8002f9d4e2b9@intel.com
but apparently this was missed.
The pmdp_clear_flush* in migrate_misplaced_transhuge_page() was introduced
in a54a407fbf ("mm: Close races between THP migration and PMD numa
clearing").
The important part of such commit is only the part where the page lock is
not released until the first do_huge_pmd_numa_page() finished disarming
the pagenuma/protnone.
The addition of pmdp_clear_flush() wasn't beneficial to such commit and
there's no commentary about such an addition either.
I guess the pmdp_clear_flush() in such commit was added just in case for
safety, but it ended up introducing the MADV_DONTNEED race condition found
by Aaron.
At that point in time nobody thought of such kind of MADV_DONTNEED race
conditions yet (they were fixed later) so the code may have looked more
robust by adding the pmdp_clear_flush().
This specific race condition won't destabilize the kernel, but it can
confuse userland because after MADV_DONTNEED the memory won't be zeroed
out.
This also optimizes the code and removes a superfluous TLB flush.
[akpm@linux-foundation.org: reflow comment to 80 cols, fix grammar and typo (beacuse)]
Link: http://lkml.kernel.org/r/20181013002430.698-2-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Aaron Tomlin <atomlin@redhat.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The static lock quarantine_lock is used in quarantine.c to protect the
quarantine queue datastructures. It is taken inside quarantine queue
manipulation routines (quarantine_put(), quarantine_reduce() and
quarantine_remove_cache()), with IRQs disabled. This is not a problem on
a stock kernel but is problematic on an RT kernel where spin locks are
sleeping spinlocks, which can sleep and can not be acquired with disabled
interrupts.
Convert quarantine_lock to a raw_spinlock_t. The usage of
quarantine_lock is confined to quarantine.c, and the work performed
while the lock is held is for debugging purposes only.
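A sketch of the conversion (not the verbatim mm/kasan/quarantine.c
diff): a raw_spinlock_t remains a true spinning lock on PREEMPT_RT, so
it may legitimately be taken with interrupts disabled.

  static DEFINE_RAW_SPINLOCK(quarantine_lock);

  static void quarantine_queue_op(void)
  {
          unsigned long flags;

          /* Safe on RT: raw spinlocks never sleep. */
          raw_spin_lock_irqsave(&quarantine_lock, flags);
          /* ... manipulate the quarantine queues ... */
          raw_spin_unlock_irqrestore(&quarantine_lock, flags);
  }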
[bigeasy@linutronix.de: slightly altered the commit message]
Link: http://lkml.kernel.org/r/20181010214945.5owshc3mlrh74z4b@linutronix.de
Signed-off-by: Clark Williams <williams@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Getting pages from ZONE_DEVICE memory needs to check the backing device's
live-ness, which is tracked in the device's dev_pagemap metadata. This
metadata is stored in a radix tree and looking it up adds measurable
software overhead.
This patch avoids repeating this relatively costly operation when
dev_pagemap is used by caching the last dev_pagemap while getting user
pages. The gup_benchmark kernel self test reports this reduces time to
get user pages to as low as 1/3 of the previous time.
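A sketch of the caching pattern (simplified pin loop): get_dev_pagemap()
takes the previous result and returns it directly when the pfn still
falls inside the same pagemap, skipping the costly lookup.

  struct dev_pagemap *pgmap = NULL;
  unsigned long pfn;

  for (pfn = start_pfn; pfn < end_pfn; pfn++) {
          /* Reuses 'pgmap' when pfn is still within its range. */
          pgmap = get_dev_pagemap(pfn, pgmap);
          if (!pgmap)
                  break;  /* backing device is not live */
          /* ... pin pfn_to_page(pfn) ... */
  }
  if (pgmap)
          put_dev_pagemap(pgmap);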
Link: http://lkml.kernel.org/r/20181012173040.15669-1-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 124049decb ("x86/e820: put !E820_TYPE_RAM regions into
memblock.reserved") breaks movable_node kernel option because it changed
the memory gap range to reserved memblock. So, the node is marked as
Normal zone even if the SRAT has Hot pluggable affinity.
=====================================================================
kernel: BIOS-e820: [mem 0x0000180000000000-0x0000180fffffffff] usable
kernel: BIOS-e820: [mem 0x00001c0000000000-0x00001c0fffffffff] usable
...
kernel: reserved[0x12]#011[0x0000181000000000-0x00001bffffffffff], 0x000003f000000000 bytes flags: 0x0
...
kernel: ACPI: SRAT: Node 2 PXM 6 [mem 0x180000000000-0x1bffffffffff] hotplug
kernel: ACPI: SRAT: Node 3 PXM 7 [mem 0x1c0000000000-0x1fffffffffff] hotplug
...
kernel: Movable zone start for each node
kernel: Node 3: 0x00001c0000000000
kernel: Early memory node ranges
...
=====================================================================
The original issue is fixed by the former patches, so let's revert commit
124049decb ("x86/e820: put !E820_TYPE_RAM regions into
memblock.reserved").
Link: http://lkml.kernel.org/r/20181002143821.5112-4-msys.mizuma@gmail.com
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When checking for valid pfns in zero_resv_unavail(), it is not necessary
to verify that pfns within pageblock_nr_pages ranges are valid, only the
first one needs to be checked. This is because memory for struct pages is
allocated in contiguous chunks that contain pageblock_nr_pages struct
pages.
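A sketch matching that description (assumed shape of the
zero_resv_unavail() loop): only the pageblock-aligned pfn is validated,
since the rest of the block shares the same backing allocation.

  for (pfn = start; pfn < end; pfn++) {
          /* One pfn_valid() check covers the whole pageblock. */
          if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages)))
                  continue;
          mm_zero_struct_page(pfn_to_page(pfn));
          pgcnt++;
  }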
Link: http://lkml.kernel.org/r/20181002143821.5112-3-msys.mizuma@gmail.com
Signed-off-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch series "mm: Fix for movable_node boot option", v3.
This patch series contains a fix for the movable_node boot option issue
which was introduced by commit 124049decb ("x86/e820: put !E820_TYPE_RAM
regions into memblock.reserved").
The commit breaks the option because it changed the memory gap range to
reserved memblock. So, the node is marked as Normal zone even if the SRAT
has Hot pluggable affinity.
First and second patch fix the original issue which the commit tried to
fix, then revert the commit.
This patch (of 3):
There is a kernel panic that is triggered when reading /proc/kpageflags on
the kernel booted with kernel parameter 'memmap=nn[KMG]!ss[KMG]':
BUG: unable to handle kernel paging request at fffffffffffffffe
PGD 9b20e067 P4D 9b20e067 PUD 9b210067 PMD 0
Oops: 0000 [#1] SMP PTI
CPU: 2 PID: 1728 Comm: page-types Not tainted 4.17.0-rc6-mm1-v4.17-rc6-180605-0816-00236-g2dfb086ef02c+ #160
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-2.fc28 04/01/2014
RIP: 0010:stable_page_flags+0x27/0x3c0
Code: 00 00 00 0f 1f 44 00 00 48 85 ff 0f 84 a0 03 00 00 41 54 55 49 89 fc 53 48 8b 57 08 48 8b 2f 48 8d 42 ff 83 e2 01 48 0f 44 c7 <48> 8b 00 f6 c4 01 0f 84 10 03 00 00 31 db 49 8b 54 24 08 4c 89 e7
RSP: 0018:ffffbbd44111fde0 EFLAGS: 00010202
RAX: fffffffffffffffe RBX: 00007fffffffeff9 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000000000202 RDI: ffffed1182fff5c0
RBP: ffffffffffffffff R08: 0000000000000001 R09: 0000000000000001
R10: ffffbbd44111fed8 R11: 0000000000000000 R12: ffffed1182fff5c0
R13: 00000000000bffd7 R14: 0000000002fff5c0 R15: ffffbbd44111ff10
FS: 00007efc4335a500(0000) GS:ffff93a5bfc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: fffffffffffffffe CR3: 00000000b2a58000 CR4: 00000000001406e0
Call Trace:
kpageflags_read+0xc7/0x120
proc_reg_read+0x3c/0x60
__vfs_read+0x36/0x170
vfs_read+0x89/0x130
ksys_pread64+0x71/0x90
do_syscall_64+0x5b/0x160
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7efc42e75e23
Code: 09 00 ba 9f 01 00 00 e8 ab 81 f4 ff 66 2e 0f 1f 84 00 00 00 00 00 90 83 3d 29 0a 2d 00 00 75 13 49 89 ca b8 11 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 34 c3 48 83 ec 08 e8 db d3 01 00 48 89 04 24
According to kernel bisection, this problem became visible due to commit
f7f99100d8 which changes how struct pages are initialized.
Memblock layout affects the pfn ranges covered by node/zone. Consider
that we have a VM with 2 NUMA nodes and each node has 4GB memory, and the
default (no memmap= given) memblock layout is like below:
MEMBLOCK configuration:
memory size = 0x00000001fff75c00 reserved size = 0x000000000300c000
memory.cnt = 0x4
memory[0x0] [0x0000000000001000-0x000000000009efff], 0x000000000009e000 bytes on node 0 flags: 0x0
memory[0x1] [0x0000000000100000-0x00000000bffd6fff], 0x00000000bfed7000 bytes on node 0 flags: 0x0
memory[0x2] [0x0000000100000000-0x000000013fffffff], 0x0000000040000000 bytes on node 0 flags: 0x0
memory[0x3] [0x0000000140000000-0x000000023fffffff], 0x0000000100000000 bytes on node 1 flags: 0x0
...
If you give memmap=1G!4G (so it just covers memory[0x2]),
the range [0x100000000-0x13fffffff] is gone:
MEMBLOCK configuration:
memory size = 0x00000001bff75c00 reserved size = 0x000000000300c000
memory.cnt = 0x3
memory[0x0] [0x0000000000001000-0x000000000009efff], 0x000000000009e000 bytes on node 0 flags: 0x0
memory[0x1] [0x0000000000100000-0x00000000bffd6fff], 0x00000000bfed7000 bytes on node 0 flags: 0x0
memory[0x2] [0x0000000140000000-0x000000023fffffff], 0x0000000100000000 bytes on node 1 flags: 0x0
...
This causes shrinking node 0's pfn range because it is calculated by the
address range of memblock.memory. So some of struct pages in the gap
range are left uninitialized.
We have a function zero_resv_unavail() which does zeroing the struct pages
outside memblock.memory, but currently it covers only the reserved
unavailable range (i.e. memblock.memory && !memblock.reserved). This
patch extends it to cover all unavailable range, which fixes the reported
issue.
Link: http://lkml.kernel.org/r/20181002143821.5112-2-msys.mizuma@gmail.com
Fixes: f7f99100d8 ("mm: stop zeroing memory during allocation in vmemmap")
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Tested-by: Oscar Salvador <osalvador@suse.de>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new option '-H' to the gup benchmark to help understand how hugetlb
mapping pages compare with the default.
Link: http://lkml.kernel.org/r/20181010195605.10689-6-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a new benchmark option, -S, to request MAP_SHARED. This can be used
to compare with MAP_PRIVATE, or for files that require this option, like
dax.
Link: http://lkml.kernel.org/r/20181010195605.10689-5-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Allow a user to specify a file to map by adding a new option, '-f',
providing a means to test various file backings.
If not specified, the benchmark will use a private mapping of /dev/zero,
which produces an anonymous mapping as before.
[akpm@linux-foundation.org: avoid using comma operator]
Link: http://lkml.kernel.org/r/20181010195605.10689-4-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the '-w' parameter was provided, the benchmark would exit due to a
missing 'break'.
Link: http://lkml.kernel.org/r/20181010195605.10689-3-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Provide new gup benchmark ioctl commands to run different user page
pinning methods, get_user_pages_longterm() and get_user_pages(), in
addition to the existing get_user_pages_fast().
Link: http://lkml.kernel.org/r/20181010195605.10689-2-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We'd like to measure time to unpin user pages, so this adds a second
benchmark timer on put_page, separate from get_page.
Adding the field breaks this ioctl ABI, but that should be okay since
this is an in-tree kernel selftest.
[akpm@linux-foundation.org: add expansion to struct gup_benchmark for future use]
Link: http://lkml.kernel.org/r/20181010195605.10689-1-keith.busch@intel.com
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It was reported that on some of our machines containers were restarted
with OOM symptoms without an obvious reason. Although there was almost
no memory pressure and plenty of page cache, the MEMCG_OOM event was
raised occasionally, causing the container management software to think
that an OOM had happened. However, no tasks had been killed.
The following investigation showed that the problem is caused by a failing
attempt to charge a high-order page. In such case, the OOM killer is
never invoked. As shown below, it can happen under conditions, which are
very far from a real OOM: e.g. there is plenty of clean page cache and no
memory pressure.
There is no sense in raising an OOM event in this case, as it might
confuse a user and lead to wrong and excessive actions (e.g. restart the
workload, as in my case).
Let's look at the charging path in try_charge(). If the memory usage is
about memory.max, which is absolutely natural for most memory cgroups, we
try to reclaim some pages. Even if we were able to reclaim enough memory
for the allocation, the following check can fail due to a race with
another concurrent allocation:

    if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
            goto retry;

For regular pages the following condition will save us from triggering
the OOM:

    if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
            goto retry;

But for a high-order allocation this condition will intentionally fail.
The reason behind it is that we'll likely fall back to regular pages
anyway, so it's ok and even preferred to return ENOMEM.
In this case the idea of raising MEMCG_OOM looks dubious.
Fix this by moving MEMCG_OOM raising to mem_cgroup_oom() after allocation
order check, so that the event won't be raised for high order allocations.
This change doesn't affect regular pages allocation and charging.
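A sketch of the resulting placement (assumed shape of mem_cgroup_oom()
after the change, eliding its other checks): the event is raised only
once the allocation has passed the order check that also gates the OOM
killer.

  static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
  {
          /* Costly orders fall back to 0-order allocations, so neither
           * the OOM killer nor the MEMCG_OOM event applies to them. */
          if (order > PAGE_ALLOC_COSTLY_ORDER)
                  return;

          memcg_memory_event(memcg, MEMCG_OOM);
          /* ... existing OOM handling ... */
  }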
Link: http://lkml.kernel.org/r/20181004214050.7417-1-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Michal Hocko <mhocko@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We've recently seen a workload on XFS filesystems with a repeatable
deadlock between background writeback and a multi-process application
doing concurrent writes and fsyncs to a small range of a file.
range_cyclic
writeback               Process 1               Process 2

xfs_vm_writepages
  write_cache_pages
    writeback_index = 2
    cycled = 0
    ....
    find page 2 dirty
    lock Page 2
    ->writepage
      page 2 writeback
      page 2 clean
      page 2 added to bio
    no more pages
                        write()
                        locks page 1
                        dirties page 1
                        locks page 2
                        dirties page 1
                        fsync()
                        ....
                        xfs_vm_writepages
                        write_cache_pages
                          start index 0
                          find page 1 towrite
                          lock Page 1
                          ->writepage
                            page 1 writeback
                            page 1 clean
                            page 1 added to bio
                          find page 2 towrite
                          lock Page 2
                          page 2 is writeback
                          <blocks>
                                                write()
                                                locks page 1
                                                dirties page 1
                                                fsync()
                                                ....
xfs_vm_writepages
write_cache_pages
  start index 0
  !done && !cycled
  sets index to 0, restarts lookup
  find page 1 dirty
                                                find page 1 towrite
                                                lock Page 1
                                                page 1 is writeback
                                                <blocks>

  lock Page 1
  <blocks>
DEADLOCK because:
- process 1 needs page 2 writeback to complete to make
enough progress to issue IO pending for page 1
- writeback needs page 1 writeback to complete so process 2
can progress and unlock the page it is blocked on, then it
can issue the IO pending for page 2
- process 2 can't make progress until process 1 issues IO
for page 1
The underlying cause of the problem here is that range_cyclic writeback is
processing pages in descending index order as we hold higher index pages
in a structure controlled from above write_cache_pages(). The
write_cache_pages() caller needs to be able to submit these pages for IO
before write_cache_pages restarts writeback at mapping index 0 to avoid
wcp inverting the page lock/writeback wait order.
generic_writepages() is not susceptible to this bug as it has no private
context held across write_cache_pages() - filesystems using this
infrastructure always submit pages in ->writepage immediately and so there
is no problem with range_cyclic going back to mapping index 0.
However:
mpage_writepages() has a private bio context,
exofs_writepages() has page_collect
fuse_writepages() has fuse_fill_wb_data
nfs_writepages() has nfs_pageio_descriptor
xfs_vm_writepages() has xfs_writepage_ctx
All of these ->writepages implementations can hold pages under writeback
in their private structures until write_cache_pages() returns, and hence
they are all susceptible to this deadlock.
Also worth noting is that ext4 has its own bastardised version of
write_cache_pages() and so it /may/ have an equivalent deadlock. I looked
at the code long enough to understand that it has a similar retry loop for
range_cyclic writeback reaching the end of the file and then promptly ran
away before my eyes bled too much. I'll leave it to the ext4 developers
to determine whether their code actually has this deadlock and how to fix
it if it does.
There are a few ways I can see to avoid this deadlock. There are probably
more, but these are the first I've thought of:
1. get rid of range_cyclic altogether
2. range_cyclic always stops at EOF, and we start again from
writeback index 0 on the next call into write_cache_pages()
2a. wcp also returns EAGAIN to ->writepages implementations to
indicate range cyclic has hit EOF. writepages implementations can
then flush the current context and call wpc again to continue. i.e.
lift the retry into the ->writepages implementation
3. range_cyclic uses trylock_page() rather than lock_page(), and it
skips pages it can't lock without blocking. It will already do this
for pages under writeback, so this seems like a no-brainer
3a. all non-WB_SYNC_ALL writeback uses trylock_page() to avoid
blocking as per pages under writeback.
I don't think #1 is an option - range_cyclic prevents frequently
dirtied lower file offset from starving background writeback of
rarely touched higher file offsets.
#2 is simple, and I don't think it will have any impact on
performance as going back to the start of the file implies an
immediate seek. We'll have exactly the same number of seeks if we
switch writeback to another inode, and then come back to this one
later and restart from index 0.
#2a is pretty much "status quo without the deadlock". Moving the
retry loop up into the wcp caller means we can issue IO on the
pending pages before calling wcp again, and so avoid locking or
waiting on pages in the wrong order. I'm not convinced we need to do
this given that we get the same thing from #2 on the next writeback
call from the writeback infrastructure.
#3 is really just a band-aid - it doesn't fix the access/wait
inversion problem, just prevents it from becoming a deadlock
situation. I'd prefer we fix the inversion, not sweep it under the
carpet like this.
#3a is really an optimisation that just so happens to include the
band-aid fix of #3.
So it seems that the simplest way to fix this issue is to implement
solution #2.
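As a sketch of what #2 means for the shape of write_cache_pages()
(illustrative only, reusing the index/end/done_index bookkeeping the
function already carries):

    /* Sketch of solution #2: a single pass to EOF, no in-call wrap. */
    if (wbc->range_cyclic) {
            index = mapping->writeback_index;       /* where the last pass stopped */
            end = -1;                               /* always write through to EOF */
    }
    /* ... one scan from index to end; the old "goto retry" is gone ... */
    if (wbc->range_cyclic)
            /* 0 once we hit EOF, so the next wcp call restarts the cycle */
            mapping->writeback_index = done_index;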
Link: http://lkml.kernel.org/r/20181005054526.21507-1-david@fromorbit.com
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Jan Kara <jack@suse.de>
Cc: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
memmap_init_zone() is getting complex, because it is called from different
contexts (hotplug and during boot), and also because it must handle some
architecture quirks. One of them is mirrored memory.
Move the code that decides whether to skip mirrored memory outside of
memmap_init_zone, into a separate function.
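The extracted helper ends up shaped roughly like this (a sketch of the
idea, not necessarily the exact committed code):

    /* Skip to the end of a mirrored memblock region if *pfn is inside one. */
    static bool __meminit overlap_memmap_init(unsigned long zone,
                                              unsigned long *pfn)
    {
            static struct memblock_region *r;

            if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
                    if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
                            for_each_memblock(memory, r) {
                                    if (*pfn < memblock_region_memory_end_pfn(r))
                                            break;
                            }
                    }
                    if (*pfn >= memblock_region_memory_base_pfn(r) &&
                        memblock_is_mirror(r)) {
                            *pfn = memblock_region_memory_end_pfn(r);
                            return true;    /* caller skips these pfns */
                    }
            }
            return false;
    }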
[pasha.tatashin@oracle.com: uninline overlap_memmap_init()]
Link: http://lkml.kernel.org/r/20180726193509.3326-4-pasha.tatashin@oracle.com
Link: http://lkml.kernel.org/r/20180724235520.10200-4-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Steven Sistare <steven.sistare@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
update_defer_init() should be called only when a struct page is about to
be initialized, because it counts the number of initialized struct pages;
but we may skip struct pages if there is some mirrored memory. So move
update_defer_init() after the check for mirrored memory.
Also, rename update_defer_init() to defer_init() and invert the returned
boolean to emphasize that this is a boolean function which tells whether
the rest of memmap initialization should be deferred.
Make this function self-contained: instead of passing in the number of
already initialized pages in this zone, track it with static counters.
I found this bug by reading the code. The effect is that fewer than
expected struct pages are initialized early in boot, and it is possible
that in some corner cases we may fail to boot when mirrored pages are
used. The deferred on-demand code should somewhat mitigate this. But
this still brings some inconsistencies compared to booting without
mirrored pages, so it is better to fix it.
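After the change, defer_init() is shaped roughly like this (a sketch; the
static counters replace the page count the old code passed in from the
caller):

    /* Returns true once enough pages are initialised to defer the rest. */
    static bool __meminit defer_init(int nid, unsigned long pfn,
                                     unsigned long end_pfn)
    {
            /* static counters: safe without locking this early in boot */
            static unsigned long prev_end_pfn, nr_initialised;

            if (prev_end_pfn != end_pfn) {  /* new zone: reset the count */
                    prev_end_pfn = end_pfn;
                    nr_initialised = 0;
            }

            /* always initialise low zones for address-constrained allocations */
            if (end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
                    return false;

            nr_initialised++;
            if (nr_initialised > NODE_DATA(nid)->static_init_pgcnt &&
                (pfn & (PAGES_PER_SECTION - 1)) == 0) {
                    NODE_DATA(nid)->first_deferred_pfn = pfn;
                    return true;    /* defer the rest of this zone */
            }
            return false;
    }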
[pasha.tatashin@oracle.com: add comment about defer_init's lack of locking]
Link: http://lkml.kernel.org/r/20180726193509.3326-3-pasha.tatashin@oracle.com
[akpm@linux-foundation.org: make defer_init non-inline, __meminit]
Link: http://lkml.kernel.org/r/20180724235520.10200-3-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Steven Sistare <steven.sistare@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
memmap_init is sometimes a macro and sometimes a function, depending on
__HAVE_ARCH_MEMMAP_INIT. It is a function only on ia64. Make memmap_init
a weak function instead, and let ia64 redefine it.
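A sketch of the weak-default pattern (generic definition below; ia64 then
supplies its own strong memmap_init() that overrides it at link time):

    /* Generic weak default; an arch-specific strong definition wins. */
    void __meminit __weak memmap_init(unsigned long size, int nid,
                                      unsigned long zone,
                                      unsigned long start_pfn)
    {
            memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY, NULL);
    }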
Link: http://lkml.kernel.org/r/20180724235520.10200-2-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Steven Sistare <steven.sistare@oracle.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This will allow using the generic refcount_t interfaces to check for
counter overflow instead of the currently existing VM_BUG_ON(). The only
difference after the patch is that VM_BUG_ON() may cause BUG(), while
refcount_t fires with WARN(). But this does not seem significant here,
since such problems are usually caught by syzbot with panic-on-warn
enabled.
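The flavor of the conversion (illustrative, not the exact hunks):

    /* Before: open-coded sanity check; a counter bug ends in BUG() */
    VM_BUG_ON(atomic_read(&memcg->id.ref) <= 0);
    atomic_add(n, &memcg->id.ref);

    /* After: refcount_t saturates on overflow and fires a WARN() */
    refcount_add(n, &memcg->id.ref);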
Link: http://lkml.kernel.org/r/153910718919.7006.13400779039257185427.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If move_freepages_block() returns 0 because !zone_spans_pfn(),
*num_movable can hold the value from the stack because it does not get
initialized in move_freepages().
Move the initialization to move_freepages_block() to guarantee the value
actually makes sense.
This currently doesn't affect its only caller where num_movable != NULL,
so this is not a bug fix, just added robustness.
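A sketch of where the initialization moves (simplified from
mm/page_alloc.c):

    int move_freepages_block(struct zone *zone, struct page *page,
                             int migratetype, int *num_movable)
    {
            unsigned long start_pfn, end_pfn;
            struct page *start_page, *end_page;

            if (num_movable)
                    *num_movable = 0;       /* initialise up front so an early
                                             * !zone_spans_pfn() exit cannot
                                             * leak a stack value */

            /* ... pfn range setup and zone_spans_pfn() checks ... */

            return move_freepages(zone, start_page, end_page, migratetype,
                                  num_movable);
    }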
Link: http://lkml.kernel.org/r/alpine.DEB.2.21.1810051355490.212229@chino.kir.corp.google.com
Signed-off-by: David Rientjes <rientjes@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Replace "fallthru" with a proper "fall through" annotation.
This fix is part of the ongoing effort to enable
-Wimplicit-fallthrough.
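For context, a generic example (hypothetical code) of the comment form
the kernel standardizes on for this warning:

    switch (op) {
    case OP_RESET:
            reset_state();
            /* fall through */      /* the annotation style the kernel
                                     * settles on for the warning */
    case OP_RUN:
            run_state();
            break;
    }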
Link: http://lkml.kernel.org/r/20181003105114.GA24423@embeddedor.com
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Currently we recycle the uffd servicing threads before the lock threads.
It can happen that a lock thread is still blocked on a pthread mutex
while the servicing thread has already quit, so the lock thread will be
blocked forever and hang the test program. To fix this possible race,
recycle the lock threads first.
This never happens with the current missing-only tests, but when I start
to run the write-protection tests (the feature is not yet posted
upstream) it happens on every run, possibly because in that new test we
need to service two page faults for each lock operation.
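A sketch of the reordering (a hypothetical simplification of the test's
teardown; the point is only the join order):

    /* Join the lock threads first: they may still need uffd servicing. */
    for (cpu = 0; cpu < nr_cpus; cpu++)
            if (pthread_join(locking_threads[cpu], NULL))
                    return 1;

    /* Only then tell the servicing threads to quit, and reap them. */
    for (cpu = 0; cpu < nr_cpus; cpu++) {
            char c = 0;
            if (write(pipefd[cpu * 2 + 1], &c, 1) != 1)     /* quit signal */
                    return 1;
            if (pthread_join(uffd_threads[cpu], NULL))
                    return 1;
    }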
Link: http://lkml.kernel.org/r/20180930074259.18229-4-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Shaohua Li <shli@fb.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We do very similar things in read and poll modes, but the code is
duplicated. Properly share the code for reading the message and handling
the page fault to make it cleaner. Meanwhile, this resolves a previous
mismatch in behavior between the two modes, in that the old code:
- did not check the EAGAIN case in read() mode
- ignored the BOUNCE_VERIFY check in read() mode
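A sketch of the shared path (hypothetical helper names):

    /* One reader used by both modes; EAGAIN is now handled uniformly. */
    static int uffd_read_msg(int ufd, struct uffd_msg *msg)
    {
            ssize_t ret = read(ufd, msg, sizeof(*msg));

            if (ret != sizeof(*msg)) {
                    if (ret < 0 && errno == EAGAIN)
                            return 1;       /* spurious wakeup: caller retries */
                    perror("reading uffd msg");
                    exit(1);
            }
            return 0;
    }

    /* poll mode: poll(ufd) -> uffd_read_msg() -> handle the fault  */
    /* read mode: blocking uffd_read_msg() -> the same fault handler */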
Link: http://lkml.kernel.org/r/20180930074259.18229-3-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Acked-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: Shaohua Li <shli@fb.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>