/*
 * sysctl.c: General linux system control interface
 *
 * Begun 24 March 1995, Stephen Tweedie
 * Added /proc support, Dec 1995
 * Added bdflush entry and intvec min/max checking, 2/23/96, Tom Dyas.
 * Added hooks for /proc/sys/net (minor, minor patch), 96/4/1, Mike Shaver.
 * Added kernel/java-{interpreter,appletviewer}, 96/5/10, Mike Shaver.
 * Dynamic registration fixes, Stephen Tweedie.
 * Added kswapd-interval, ctrl-alt-del, printk stuff, 1/8/97, Chris Horn.
 * Made sysctl support optional via CONFIG_SYSCTL, 1/10/97, Chris
 * Horn.
 * Added proc_doulongvec_ms_jiffies_minmax, 09/08/99, Carlos H. Bauer.
 * Added proc_doulongvec_minmax, 09/08/99, Carlos H. Bauer.
 * Changed linked lists to use list.h instead of lists.h, 02/24/00, Bill
 * Wendling.
 * The list_for_each() macro wasn't appropriate for the sysctl loop.
 * Removed it and replaced it with older style, 03/23/00, Bill Wendling
 */

#include <linux/module.h>
#include <linux/aio.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/slab.h>
#include <linux/sysctl.h>
#include <linux/bitmap.h>
#include <linux/signal.h>
#include <linux/printk.h>
#include <linux/proc_fs.h>
#include <linux/security.h>
#include <linux/ctype.h>
#include <linux/kmemleak.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/net.h>
#include <linux/sysrq.h>
#include <linux/highuid.h>
#include <linux/writeback.h>
#include <linux/ratelimit.h>
#include <linux/compaction.h>
#include <linux/hugetlb.h>
#include <linux/initrd.h>
#include <linux/key.h>
#include <linux/times.h>
#include <linux/limits.h>
#include <linux/dcache.h>
#include <linux/dnotify.h>
#include <linux/syscalls.h>
#include <linux/vmstat.h>
#include <linux/nfs_fs.h>
#include <linux/acpi.h>
#include <linux/reboot.h>
#include <linux/ftrace.h>
#include <linux/perf_event.h>
#include <linux/kprobes.h>
#include <linux/pipe_fs_i.h>
#include <linux/oom.h>
#include <linux/kmod.h>
#include <linux/capability.h>
#include <linux/binfmts.h>
#include <linux/sched/sysctl.h>
#include <linux/sched/coredump.h>
#include <linux/kexec.h>
#include <linux/bpf.h>
#include <linux/mount.h>
#include <linux/userfaultfd_k.h>

#include "../lib/kstrtox.h"

#include <linux/uaccess.h>
#include <asm/processor.h>

#ifdef CONFIG_X86
#include <asm/nmi.h>
#include <asm/stacktrace.h>
#include <asm/io.h>
#endif
#ifdef CONFIG_SPARC
#include <asm/setup.h>
#endif
#ifdef CONFIG_BSD_PROCESS_ACCT
#include <linux/acct.h>
#endif
#ifdef CONFIG_RT_MUTEXES
#include <linux/rtmutex.h>
#endif
#if defined(CONFIG_PROVE_LOCKING) || defined(CONFIG_LOCK_STAT)
#include <linux/lockdep.h>
#endif
#ifdef CONFIG_CHR_DEV_SG
#include <scsi/sg.h>
#endif
#ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
#include <linux/stackleak.h>
#endif
#ifdef CONFIG_LOCKUP_DETECTOR
#include <linux/nmi.h>
#endif

#if defined(CONFIG_SYSCTL)

/* External variables not in a header file. */
extern int suid_dumpable;
#ifdef CONFIG_COREDUMP
extern int core_uses_pid;
extern char core_pattern[];
extern unsigned int core_pipe_limit;
#endif
extern int pid_max;
extern int pid_max_min, pid_max_max;
extern int percpu_pagelist_fraction;
extern int latencytop_enabled;
extern unsigned int sysctl_nr_open_min, sysctl_nr_open_max;
#ifndef CONFIG_MMU
extern int sysctl_nr_trim_pages;
#endif

/* Constants used for minimum and maximum */
#ifdef CONFIG_LOCKUP_DETECTOR
static int sixty = 60;
#endif

static int __maybe_unused neg_one = -1;

static int zero;
static int __maybe_unused one = 1;
static int __maybe_unused two = 2;
static int __maybe_unused four = 4;
static unsigned long zero_ul;
static unsigned long one_ul = 1;
static unsigned long long_max = LONG_MAX;
static int one_hundred = 100;
static int one_thousand = 1000;
#ifdef CONFIG_PRINTK
static int ten_thousand = 10000;
#endif
#ifdef CONFIG_PERF_EVENTS
static int six_hundred_forty_kb = 640 * 1024;
#endif
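
/*
 * These constants exist only so that table entries can point .extra1/.extra2
 * at a stable int or long; proc_dointvec_minmax() rejects writes that fall
 * outside [*extra1, *extra2]. A typical (hypothetical) entry using them:
 *
 *	{
 *		.procname	= "example_knob",
 *		.data		= &example_knob,
 *		.maxlen		= sizeof(int),
 *		.mode		= 0644,
 *		.proc_handler	= proc_dointvec_minmax,
 *		.extra1		= &zero,
 *		.extra2		= &one_hundred,
 *	},
 */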

/* this is needed for the proc_doulongvec_minmax of vm_dirty_bytes */
static unsigned long dirty_bytes_min = 2 * PAGE_SIZE;

/* this is needed for the proc_dointvec_minmax for [fs_]overflow UID and GID */
static int maxolduid = 65535;
static int minolduid;

static int ngroups_max = NGROUPS_MAX;
static const int cap_last_cap = CAP_LAST_CAP;

/*
 * This is needed for proc_doulongvec_minmax of sysctl_hung_task_timeout_secs
 * and hung_task_check_interval_secs
 */
#ifdef CONFIG_DETECT_HUNG_TASK
static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
#endif

#ifdef CONFIG_INOTIFY_USER
#include <linux/inotify.h>
#endif

#ifdef CONFIG_SPARC
#endif

#ifdef __hppa__
extern int pwrsw_enabled;
#endif

#ifdef CONFIG_SYSCTL_ARCH_UNALIGN_ALLOW
extern int unaligned_enabled;
#endif

#ifdef CONFIG_IA64
extern int unaligned_dump_stack;
#endif

#ifdef CONFIG_SYSCTL_ARCH_UNALIGN_NO_WARN
extern int no_unaligned_warning;
#endif

#ifdef CONFIG_PROC_SYSCTL

/**
 * enum sysctl_writes_mode - supported sysctl write modes
 *
 * @SYSCTL_WRITES_LEGACY: each write syscall must fully contain the sysctl
 *	value to be written, and multiple writes on the same sysctl file
 *	descriptor will rewrite the sysctl value, regardless of file position.
 *	No warning is issued when the initial position is not 0.
 * @SYSCTL_WRITES_WARN: same as above but warn when the initial file position
 *	is not 0.
 * @SYSCTL_WRITES_STRICT: writes to numeric sysctl entries must always be at
 *	file position 0 and the value must be fully contained in the buffer
 *	sent to the write syscall. When dealing with strings, respect the file
 *	position, but restrict it to the max length of the buffer; anything
 *	past the max length is ignored. Multiple writes will append to the
 *	buffer.
 *
 * These write modes control how current file position affects the behavior of
 * updating sysctl values through the proc interface on each write.
 */
enum sysctl_writes_mode {
	SYSCTL_WRITES_LEGACY		= -1,
	SYSCTL_WRITES_WARN		= 0,
	SYSCTL_WRITES_STRICT		= 1,
};
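
/*
 * Illustrative userspace sketch (not kernel code) of the modes above: under
 * SYSCTL_WRITES_STRICT a numeric sysctl must arrive in a single write() at
 * file position 0, one complete value per write:
 *
 *	int fd = open("/proc/sys/kernel/panic", O_WRONLY);
 *	write(fd, "30\n", 3);
 *	close(fd);
 *
 * Under SYSCTL_WRITES_LEGACY every write() restarts at the beginning of the
 * value, so the last write wins rather than appending.
 */
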
static enum sysctl_writes_mode sysctl_writes_strict = SYSCTL_WRITES_STRICT;

static int proc_do_cad_pid(struct ctl_table *table, int write,
			   void __user *buffer, size_t *lenp, loff_t *ppos);
static int proc_taint(struct ctl_table *table, int write,
		      void __user *buffer, size_t *lenp, loff_t *ppos);
#endif

#ifdef CONFIG_PRINTK
static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
				void __user *buffer, size_t *lenp, loff_t *ppos);
#endif

static int proc_dointvec_minmax_coredump(struct ctl_table *table, int write,
		void __user *buffer, size_t *lenp, loff_t *ppos);
#ifdef CONFIG_COREDUMP
static int proc_dostring_coredump(struct ctl_table *table, int write,
		void __user *buffer, size_t *lenp, loff_t *ppos);
#endif
static int proc_dopipe_max_size(struct ctl_table *table, int write,
		void __user *buffer, size_t *lenp, loff_t *ppos);
#ifdef CONFIG_BPF_SYSCALL
static int proc_dointvec_minmax_bpf_stats(struct ctl_table *table, int write,
					  void __user *buffer, size_t *lenp,
					  loff_t *ppos);
#endif

#ifdef CONFIG_MAGIC_SYSRQ
/* Note: sysrq code uses its own private copy */
static int __sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE;
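
/*
 * proc handler for /proc/sys/kernel/sysrq: let the generic integer handler
 * parse the user buffer into __sysrq_enabled first, then, on a successful
 * write, hand the new mask to the sysrq core (which keeps its own copy).
 */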
static int sysrq_sysctl_handler(struct ctl_table *table, int write,
				void __user *buffer, size_t *lenp,
				loff_t *ppos)
{
	int error;

	error = proc_dointvec(table, write, buffer, lenp, ppos);
	if (error)
		return error;

	if (write)
		sysrq_toggle_support(__sysrq_enabled);

	return 0;
}
#endif

static struct ctl_table kern_table[];
static struct ctl_table vm_table[];
static struct ctl_table fs_table[];
static struct ctl_table debug_table[];
static struct ctl_table dev_table[];
extern struct ctl_table random_table[];
#ifdef CONFIG_EPOLL
extern struct ctl_table epoll_table[];
#endif

#ifdef CONFIG_FW_LOADER_USER_HELPER
extern struct ctl_table firmware_config_table[];
#endif

#ifdef HAVE_ARCH_PICK_MMAP_LAYOUT
int sysctl_legacy_va_layout;
#endif

/* The default sysctl tables: */

static struct ctl_table sysctl_base_table[] = {
	{
		.procname	= "kernel",
		.mode		= 0555,
		.child		= kern_table,
	},
	{
		.procname	= "vm",
		.mode		= 0555,
		.child		= vm_table,
	},
	{
		.procname	= "fs",
		.mode		= 0555,
		.child		= fs_table,
	},
	{
		.procname	= "debug",
		.mode		= 0555,
		.child		= debug_table,
	},
	{
		.procname	= "dev",
		.mode		= 0555,
		.child		= dev_table,
	},
	{ }
};
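
/*
 * Minimal sketch of how a table like the one above is exposed (assuming the
 * usual register_sysctl_table() flow; shown for illustration only):
 *
 *	static struct ctl_table_header *hdr;
 *
 *	hdr = register_sysctl_table(sysctl_base_table);
 *	...
 *	unregister_sysctl_table(hdr);
 *
 * Entries with a .child pointer become directories under /proc/sys (hence
 * mode 0555), and the empty terminator { } ends the array.
 */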

#ifdef CONFIG_SCHED_DEBUG
static int min_sched_granularity_ns = 100000;		/* 100 usecs */
static int max_sched_granularity_ns = NSEC_PER_SEC;	/* 1 second */
static int min_wakeup_granularity_ns;			/* 0 usecs */
static int max_wakeup_granularity_ns = NSEC_PER_SEC;	/* 1 second */
#ifdef CONFIG_SMP
static int min_sched_tunable_scaling = SCHED_TUNABLESCALING_NONE;
static int max_sched_tunable_scaling = SCHED_TUNABLESCALING_END-1;
#endif /* CONFIG_SMP */
#endif /* CONFIG_SCHED_DEBUG */

#ifdef CONFIG_COMPACTION
static int min_extfrag_threshold;
static int max_extfrag_threshold = 1000;
#endif

static struct ctl_table kern_table[] = {
	{
		.procname	= "sched_child_runs_first",
		.data		= &sysctl_sched_child_runs_first,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#ifdef CONFIG_SCHED_DEBUG
	{
		.procname	= "sched_min_granularity_ns",
		.data		= &sysctl_sched_min_granularity,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_proc_update_handler,
		.extra1		= &min_sched_granularity_ns,
		.extra2		= &max_sched_granularity_ns,
	},
	{
		.procname	= "sched_latency_ns",
		.data		= &sysctl_sched_latency,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_proc_update_handler,
		.extra1		= &min_sched_granularity_ns,
		.extra2		= &max_sched_granularity_ns,
	},
	{
		.procname	= "sched_wakeup_granularity_ns",
		.data		= &sysctl_sched_wakeup_granularity,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_proc_update_handler,
		.extra1		= &min_wakeup_granularity_ns,
		.extra2		= &max_wakeup_granularity_ns,
	},
#ifdef CONFIG_SMP
	{
		.procname	= "sched_tunable_scaling",
		.data		= &sysctl_sched_tunable_scaling,
		.maxlen		= sizeof(enum sched_tunable_scaling),
		.mode		= 0644,
		.proc_handler	= sched_proc_update_handler,
		.extra1		= &min_sched_tunable_scaling,
		.extra2		= &max_sched_tunable_scaling,
	},
	{
		.procname	= "sched_migration_cost_ns",
		.data		= &sysctl_sched_migration_cost,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "sched_nr_migrate",
		.data		= &sysctl_sched_nr_migrate,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#ifdef CONFIG_SCHEDSTATS
	{
		.procname	= "sched_schedstats",
		.data		= NULL,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sysctl_schedstats,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif /* CONFIG_SCHEDSTATS */
#endif /* CONFIG_SMP */
#ifdef CONFIG_NUMA_BALANCING
	{
		.procname	= "numa_balancing_scan_delay_ms",
		.data		= &sysctl_numa_balancing_scan_delay,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "numa_balancing_scan_period_min_ms",
		.data		= &sysctl_numa_balancing_scan_period_min,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "numa_balancing_scan_period_max_ms",
		.data		= &sysctl_numa_balancing_scan_period_max,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "numa_balancing_scan_size_mb",
		.data		= &sysctl_numa_balancing_scan_size,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,
	},
	{
		.procname	= "numa_balancing",
		.data		= NULL, /* filled in by handler */
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sysctl_numa_balancing,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif /* CONFIG_NUMA_BALANCING */
#endif /* CONFIG_SCHED_DEBUG */
	{
		.procname	= "sched_rt_period_us",
		.data		= &sysctl_sched_rt_period,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_rt_handler,
	},
	{
		.procname	= "sched_rt_runtime_us",
		.data		= &sysctl_sched_rt_runtime,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sched_rt_handler,
	},
	{
		.procname	= "sched_rr_timeslice_ms",
		.data		= &sysctl_sched_rr_timeslice,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sched_rr_handler,
	},
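	/*
	 * Reference note for the RT bandwidth entries above: realtime tasks
	 * may consume at most sched_rt_runtime_us of every sched_rt_period_us
	 * (by default 950000 out of every 1000000 us), e.g.:
	 *
	 *	sysctl -w kernel.sched_rt_runtime_us=950000
	 */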
#ifdef CONFIG_SCHED_AUTOGROUP
	{
		.procname	= "sched_autogroup_enabled",
		.data		= &sysctl_sched_autogroup_enabled,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
#ifdef CONFIG_CFS_BANDWIDTH
	{
		.procname	= "sched_cfs_bandwidth_slice_us",
		.data		= &sysctl_sched_cfs_bandwidth_slice,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,
	},
#endif
#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
	{
		.procname	= "sched_energy_aware",
		.data		= &sysctl_sched_energy_aware,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_energy_aware_handler,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
#ifdef CONFIG_PROVE_LOCKING
	{
		.procname	= "prove_locking",
		.data		= &prove_locking,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_LOCK_STAT
	{
		.procname	= "lock_stat",
		.data		= &lock_stat,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "panic",
		.data		= &panic_timeout,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
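	/*
	 * Usage note for "panic" above: the value is the number of seconds
	 * the kernel waits after a panic before rebooting; 0 means wait
	 * forever. For example:
	 *
	 *	echo 30 > /proc/sys/kernel/panic
	 */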
#ifdef CONFIG_COREDUMP
	{
		.procname	= "core_uses_pid",
		.data		= &core_uses_pid,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "core_pattern",
		.data		= core_pattern,
		.maxlen		= CORENAME_MAX_SIZE,
		.mode		= 0644,
		.proc_handler	= proc_dostring_coredump,
	},
	{
		.procname	= "core_pipe_limit",
		.data		= &core_pipe_limit,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_PROC_SYSCTL
	{
		.procname	= "tainted",
		.maxlen		= sizeof(long),
		.mode		= 0644,
		.proc_handler	= proc_taint,
	},
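	/*
	 * Note on "tainted" above: the value is a TAINT_* bitmask, and writes
	 * go through proc_taint(), which can only add taint flags, never
	 * clear them (e.g. writing 1 sets TAINT_PROPRIETARY_MODULE).
	 */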
|
sysctl: allow for strict write position handling
When writing to a sysctl string, each write, regardless of VFS position,
begins writing the string from the start. This means the contents of
the last write to the sysctl controls the string contents instead of the
first:
open("/proc/sys/kernel/modprobe", O_WRONLY) = 1
write(1, "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"..., 4096) = 4096
write(1, "/bin/true", 9) = 9
close(1) = 0
$ cat /proc/sys/kernel/modprobe
/bin/true
Expected behaviour would be to have the sysctl be "AAAA..." capped at
maxlen (in this case KMOD_PATH_LEN: 256), instead of truncating to the
contents of the second write. Similarly, multiple short writes would
not append to the sysctl.
The old behavior is unlike regular POSIX files enough that doing audits
of software that interact with sysctls can end up in unexpected or
dangerous situations. For example, "as long as the input starts with a
trusted path" turns out to be an insufficient filter, as what must also
happen is for the input to be entirely contained in a single write
syscall -- not a common consideration, especially for high level tools.
This provides kernel.sysctl_writes_strict as a way to make this behavior
act in a less surprising manner for strings, and disallows non-zero file
position when writing numeric sysctls (similar to what is already done
when reading from non-zero file positions). For now, the default (0) is
to warn about non-zero file position use, but retain the legacy
behavior. Setting this to -1 disables the warning, and setting this to
1 enables the file position respecting behavior.
[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: move misplaced hunk, per Randy]
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	{
		.procname	= "sysctl_writes_strict",
		.data		= &sysctl_writes_strict,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &neg_one,
		.extra2		= &one,
	},
#endif
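The strict-write semantics described in the changelog above are easy to reproduce from userspace. A minimal sketch of the two-write sequence (illustrative program, not part of the kernel source; writing this sysctl requires root):

/* Illustrative reproduction of the changelog's strace sequence.
 * Under the legacy behavior (sysctl_writes_strict == 0), the second
 * write restarts the string at offset 0, so "/bin/true" wins; under
 * strict mode (1), the non-zero file position is respected instead.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/proc/sys/kernel/modprobe", O_WRONLY);

	if (fd < 0)
		return 1;
	memset(buf, 'A', sizeof(buf));
	write(fd, buf, sizeof(buf));	/* file position is now 4096 */
	write(fd, "/bin/true", 9);	/* legacy: replaces from offset 0 */
	close(fd);
	return 0;
}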
#ifdef CONFIG_LATENCYTOP
	{
		.procname	= "latencytop",
		.data		= &latencytop_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sysctl_latencytop,
	},
#endif
#ifdef CONFIG_BLK_DEV_INITRD
	{
		.procname	= "real-root-dev",
		.data		= &real_root_dev,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "print-fatal-signals",
		.data		= &print_fatal_signals,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#ifdef CONFIG_SPARC
	{
		.procname	= "reboot-cmd",
		.data		= reboot_command,
		.maxlen		= 256,
		.mode		= 0644,
		.proc_handler	= proc_dostring,
	},
	{
		.procname	= "stop-a",
		.data		= &stop_a_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "scons-poweroff",
		.data		= &scons_pwroff,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_SPARC64
	{
		.procname	= "tsb-ratio",
		.data		= &sysctl_tsb_ratio,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef __hppa__
	{
		.procname	= "soft-power",
		.data		= &pwrsw_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_SYSCTL_ARCH_UNALIGN_ALLOW
	{
		.procname	= "unaligned-trap",
		.data		= &unaligned_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "ctrl-alt-del",
		.data		= &C_A_D,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#ifdef CONFIG_FUNCTION_TRACER
	{
		.procname	= "ftrace_enabled",
		.data		= &ftrace_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= ftrace_enable_sysctl,
	},
#endif
#ifdef CONFIG_STACK_TRACER
	{
		.procname	= "stack_tracer_enabled",
		.data		= &stack_tracer_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= stack_trace_sysctl,
	},
#endif
#ifdef CONFIG_TRACING
	{
		.procname	= "ftrace_dump_on_oops",
		.data		= &ftrace_dump_on_oops,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "traceoff_on_warning",
		.data		= &__disable_trace_on_warning,
		.maxlen		= sizeof(__disable_trace_on_warning),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "tracepoint_printk",
		.data		= &tracepoint_printk,
		.maxlen		= sizeof(tracepoint_printk),
		.mode		= 0644,
		.proc_handler	= tracepoint_printk_sysctl,
	},
#endif
#ifdef CONFIG_KEXEC_CORE
kexec: add sysctl to disable kexec_load
For general-purpose (i.e. distro) kernel builds it makes sense to build
with CONFIG_KEXEC to allow end users to choose what kind of things they
want to do with kexec. However, in the face of trying to lock down a
system with such a kernel, there needs to be a way to disable kexec_load
(much like module loading can be disabled). Without this, it is too easy
for the root user to modify kernel memory even when CONFIG_STRICT_DEVMEM
and modules_disabled are set. With this change, it is still possible to
load an image for use later, then disable kexec_load so the image (or lack
of image) can't be altered.
The intention is for using this in environments where "perfect"
enforcement is hard. Without a verified boot, along with verified
modules, and along with verified kexec, this is trying to give a system a
better chance to defend itself (or at least grow the window of
discoverability) against attack in the face of a privilege escalation.
In my mind, I consider several boot scenarios:
1) Verified boot of read-only verified root fs loading fd-based
verification of kexec images.
2) Secure boot of writable root fs loading signed kexec images.
3) Regular boot loading kexec (e.g. kcrash) image early and locking it.
4) Regular boot with no control of kexec image at all.
1 and 2 don't exist yet, but will soon once the verified kexec series has
landed. 4 is the state of things now. The gap between 2 and 4 is too
large, so this change creates scenario 3, a middle-ground above 4 when 2
and 1 are not possible for a system.
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	{
		.procname	= "kexec_load_disabled",
		.data		= &kexec_load_disabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		/* only handle a transition from default "0" to "1" */
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,
		.extra2		= &one,
	},
#endif
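Note the trick in the entry above: pointing both .extra1 and .extra2 at the same constant one turns the sysctl into a one-way latch, because proc_dointvec_minmax() rejects any written value outside [1, 1]. The same pattern appears below for modules_disabled and unprivileged_bpf_disabled. A simplified model of that range check (illustrative sketch, not the kernel's exact implementation):

#include <errno.h>

/* Simplified model of the proc_dointvec_minmax() clamp: with
 * min == max == 1, the only storable value is 1, so the 0 -> 1
 * transition succeeds and every later attempt to write 0 fails.
 */
static int minmax_store(int *val, int newval, const int *min, const int *max)
{
	if ((min && newval < *min) || (max && newval > *max))
		return -EINVAL;		/* write rejected, *val unchanged */
	*val = newval;
	return 0;
}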
#ifdef CONFIG_MODULES
	{
		.procname	= "modprobe",
		.data		= &modprobe_path,
		.maxlen		= KMOD_PATH_LEN,
		.mode		= 0644,
		.proc_handler	= proc_dostring,
	},
	{
		.procname	= "modules_disabled",
		.data		= &modules_disabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		/* only handle a transition from default "0" to "1" */
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,
		.extra2		= &one,
	},
#endif
#ifdef CONFIG_UEVENT_HELPER
	{
		.procname	= "hotplug",
		.data		= &uevent_helper,
		.maxlen		= UEVENT_HELPER_PATH_LEN,
		.mode		= 0644,
		.proc_handler	= proc_dostring,
	},
#endif
#ifdef CONFIG_CHR_DEV_SG
	{
		.procname	= "sg-big-buff",
		.data		= &sg_big_buff,
		.maxlen		= sizeof(int),
		.mode		= 0444,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_BSD_PROCESS_ACCT
	{
		.procname	= "acct",
		.data		= &acct_parm,
		.maxlen		= 3*sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_MAGIC_SYSRQ
	{
		.procname	= "sysrq",
		.data		= &__sysrq_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sysrq_sysctl_handler,
	},
#endif
#ifdef CONFIG_PROC_SYSCTL
	{
		.procname	= "cad_pid",
		.data		= NULL,
		.maxlen		= sizeof(int),
		.mode		= 0600,
		.proc_handler	= proc_do_cad_pid,
	},
#endif
	{
		.procname	= "threads-max",
		.data		= NULL,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sysctl_max_threads,
	},
	{
		.procname	= "random",
		.mode		= 0555,
		.child		= random_table,
	},
	{
		.procname	= "usermodehelper",
		.mode		= 0555,
		.child		= usermodehelper_table,
	},
#ifdef CONFIG_FW_LOADER_USER_HELPER
	{
		.procname	= "firmware_config",
		.mode		= 0555,
		.child		= firmware_config_table,
	},
#endif
	{
		.procname	= "overflowuid",
		.data		= &overflowuid,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &minolduid,
		.extra2		= &maxolduid,
	},
	{
		.procname	= "overflowgid",
		.data		= &overflowgid,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &minolduid,
		.extra2		= &maxolduid,
	},
#ifdef CONFIG_S390
#ifdef CONFIG_MATHEMU
	{
		.procname	= "ieee_emulation_warnings",
		.data		= &sysctl_ieee_emulation_warnings,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "userprocess_debug",
		.data		= &show_unhandled_signals,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "pid_max",
		.data		= &pid_max,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &pid_max_min,
		.extra2		= &pid_max_max,
	},
	{
		.procname	= "panic_on_oops",
		.data		= &panic_on_oops,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "panic_print",
		.data		= &panic_print,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= proc_doulongvec_minmax,
	},
#if defined CONFIG_PRINTK
	{
		.procname	= "printk",
		.data		= &console_loglevel,
		.maxlen		= 4*sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "printk_ratelimit",
		.data		= &printk_ratelimit_state.interval,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "printk_ratelimit_burst",
		.data		= &printk_ratelimit_state.burst,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "printk_delay",
		.data		= &printk_delay_msec,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &ten_thousand,
	},
	{
		.procname	= "printk_devkmsg",
		.data		= devkmsg_log_str,
		.maxlen		= DEVKMSG_STR_MAX_SIZE,
		.mode		= 0644,
		.proc_handler	= devkmsg_sysctl_set_loglvl,
	},
	{
		.procname	= "dmesg_restrict",
		.data		= &dmesg_restrict,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax_sysadmin,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "kptr_restrict",
		.data		= &kptr_restrict,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax_sysadmin,
		.extra1		= &zero,
		.extra2		= &two,
	},
#endif
	{
		.procname	= "ngroups_max",
		.data		= &ngroups_max,
		.maxlen		= sizeof(int),
		.mode		= 0444,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "cap_last_cap",
		.data		= (void *)&cap_last_cap,
		.maxlen		= sizeof(int),
		.mode		= 0444,
		.proc_handler	= proc_dointvec,
	},
#if defined(CONFIG_LOCKUP_DETECTOR)
	{
		.procname	= "watchdog",
		.data		= &watchdog_user_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_watchdog,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "watchdog_thresh",
		.data		= &watchdog_thresh,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_watchdog_thresh,
		.extra1		= &zero,
		.extra2		= &sixty,
	},
	{
		.procname	= "nmi_watchdog",
		.data		= &nmi_watchdog_user_enabled,
		.maxlen		= sizeof(int),
		.mode		= NMI_WATCHDOG_SYSCTL_PERM,
		.proc_handler	= proc_nmi_watchdog,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "watchdog_cpumask",
		.data		= &watchdog_cpumask_bits,
		.maxlen		= NR_CPUS,
		.mode		= 0644,
		.proc_handler	= proc_watchdog_cpumask,
	},
#ifdef CONFIG_SOFTLOCKUP_DETECTOR
	{
		.procname	= "soft_watchdog",
		.data		= &soft_watchdog_user_enabled,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_soft_watchdog,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "softlockup_panic",
		.data		= &softlockup_panic,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#ifdef CONFIG_SMP
	{
		.procname	= "softlockup_all_cpu_backtrace",
		.data		= &sysctl_softlockup_all_cpu_backtrace,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif /* CONFIG_SMP */
#endif
#ifdef CONFIG_HARDLOCKUP_DETECTOR
	{
		.procname	= "hardlockup_panic",
		.data		= &hardlockup_panic,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#ifdef CONFIG_SMP
	{
		.procname	= "hardlockup_all_cpu_backtrace",
		.data		= &sysctl_hardlockup_all_cpu_backtrace,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif /* CONFIG_SMP */
#endif
#endif
|
|
|
#if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_X86)
|
|
|
|
{
|
|
|
|
.procname = "unknown_nmi_panic",
|
|
|
|
.data = &unknown_nmi_panic,
|
|
|
|
.maxlen = sizeof (int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec,
|
|
|
|
},
|
2010-02-13 05:19:19 +07:00
|
|
|
#endif
|
#if defined(CONFIG_X86)
	{
		.procname	= "panic_on_unrecovered_nmi",
		.data		= &panic_on_unrecovered_nmi,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "panic_on_io_nmi",
		.data		= &panic_on_io_nmi,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#ifdef CONFIG_DEBUG_STACKOVERFLOW
	{
		.procname	= "panic_on_stackoverflow",
		.data		= &sysctl_panic_on_stackoverflow,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "bootloader_type",
		.data		= &bootloader_type,
		.maxlen		= sizeof(int),
		.mode		= 0444,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "bootloader_version",
		.data		= &bootloader_version,
		.maxlen		= sizeof(int),
		.mode		= 0444,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "io_delay_type",
		.data		= &io_delay_type,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#if defined(CONFIG_MMU)
	{
		.procname	= "randomize_va_space",
		.data		= &randomize_va_space,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#if defined(CONFIG_S390) && defined(CONFIG_SMP)
	{
		.procname	= "spin_retry",
		.data		= &spin_retry,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#if defined(CONFIG_ACPI_SLEEP) && defined(CONFIG_X86)
	{
		.procname	= "acpi_video_flags",
		.data		= &acpi_realmode_flags,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= proc_doulongvec_minmax,
	},
#endif
#ifdef CONFIG_SYSCTL_ARCH_UNALIGN_NO_WARN
	{
		.procname	= "ignore-unaligned-usertrap",
		.data		= &no_unaligned_warning,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_IA64
	{
		.procname	= "unaligned-dump-stack",
		.data		= &unaligned_dump_stack,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
#ifdef CONFIG_DETECT_HUNG_TASK
	{
		.procname	= "hung_task_panic",
		.data		= &sysctl_hung_task_panic,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "hung_task_check_count",
		.data		= &sysctl_hung_task_check_count,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
	},
	{
		.procname	= "hung_task_timeout_secs",
		.data		= &sysctl_hung_task_timeout_secs,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= proc_dohung_task_timeout_secs,
		.extra2		= &hung_task_timeout_max,
	},
	{
		.procname	= "hung_task_check_interval_secs",
		.data		= &sysctl_hung_task_check_interval_secs,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= proc_dohung_task_timeout_secs,
		.extra2		= &hung_task_timeout_max,
	},
	{
		.procname	= "hung_task_warnings",
		.data		= &sysctl_hung_task_warnings,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &neg_one,
	},
#endif
#ifdef CONFIG_RT_MUTEXES
	{
		.procname	= "max_lock_depth",
		.data		= &max_lock_depth,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
#endif
	{
		.procname	= "poweroff_cmd",
		.data		= &poweroff_cmd,
		.maxlen		= POWEROFF_CMD_PATH_LEN,
		.mode		= 0644,
		.proc_handler	= proc_dostring,
	},
#ifdef CONFIG_KEYS
	{
		.procname	= "keys",
		.mode		= 0555,
		.child		= key_sysctls,
	},
#endif
perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!
In the past few months the perfcounters subsystem has grown out its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, analysis facility.
Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.
All in one, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names. (in an ABI compatible fashion)
The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.
Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.
User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)
This patch has been generated via the following script:
FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
sed -i \
-e 's/PERF_EVENT_/PERF_RECORD_/g' \
-e 's/PERF_COUNTER/PERF_EVENT/g' \
-e 's/perf_counter/perf_event/g' \
-e 's/nb_counters/nb_events/g' \
-e 's/swcounter/swevent/g' \
-e 's/tpcounter_event/tp_event/g' \
$FILES
for N in $(find . -name perf_counter.[ch]); do
M=$(echo $N | sed 's/perf_counter/perf_event/g')
mv $N $M
done
FILES=$(find . -name perf_event.*)
sed -i \
-e 's/COUNTER_MASK/REG_MASK/g' \
-e 's/COUNTER/EVENT/g' \
-e 's/\<event\>/event_id/g' \
-e 's/counter/event/g' \
-e 's/Counter/Event/g' \
$FILES
... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.
Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.
( NOTE: 'counters' are still the proper terminology when we deal
with hardware registers - and these sed scripts are a bit
over-eager in renaming them. I've undone some of that, but
in case there's something left where 'counter' would be
better than 'event' we can undo that on an individual basis
instead of touching an otherwise nicely automated patch. )
Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
#ifdef CONFIG_PERF_EVENTS
	/*
	 * User-space scripts rely on the existence of this file
	 * as a feature check for perf_events being enabled.
	 *
	 * So it's an ABI, do not remove!
	 */
	{
		.procname	= "perf_event_paranoid",
		.data		= &sysctl_perf_event_paranoid,
		.maxlen		= sizeof(sysctl_perf_event_paranoid),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
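As the comment says, the mere existence of this procfs file is the documented feature test. A minimal sketch of such a userspace probe (illustrative):

/* Illustrative feature probe: perf_events support is detected by the
 * presence of the sysctl file, independent of the value stored in it.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/proc/sys/kernel/perf_event_paranoid";

	printf("perf_events %s\n",
	       access(path, F_OK) == 0 ? "available" : "unavailable");
	return 0;
}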
	{
		.procname	= "perf_event_mlock_kb",
		.data		= &sysctl_perf_event_mlock,
		.maxlen		= sizeof(sysctl_perf_event_mlock),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "perf_event_max_sample_rate",
		.data		= &sysctl_perf_event_sample_rate,
		.maxlen		= sizeof(sysctl_perf_event_sample_rate),
		.mode		= 0644,
		.proc_handler	= perf_proc_update_handler,
		.extra1		= &one,
	},
	{
		.procname	= "perf_cpu_time_max_percent",
		.data		= &sysctl_perf_cpu_time_max_percent,
		.maxlen		= sizeof(sysctl_perf_cpu_time_max_percent),
		.mode		= 0644,
		.proc_handler	= perf_cpu_time_max_percent_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
	{
		.procname	= "perf_event_max_stack",
		.data		= &sysctl_perf_event_max_stack,
		.maxlen		= sizeof(sysctl_perf_event_max_stack),
		.mode		= 0644,
		.proc_handler	= perf_event_max_stack_handler,
		.extra1		= &zero,
		.extra2		= &six_hundred_forty_kb,
	},
	{
		.procname	= "perf_event_max_contexts_per_stack",
		.data		= &sysctl_perf_event_max_contexts_per_stack,
		.maxlen		= sizeof(sysctl_perf_event_max_contexts_per_stack),
		.mode		= 0644,
		.proc_handler	= perf_event_max_stack_handler,
		.extra1		= &zero,
		.extra2		= &one_thousand,
	},
#endif
	{
		.procname	= "panic_on_warn",
		.data		= &panic_on_warn,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
	{
		.procname	= "timer_migration",
		.data		= &sysctl_timer_migration,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= timer_migration_handler,
		.extra1		= &zero,
		.extra2		= &one,
	},
bpf: enable non-root eBPF programs
In order to let unprivileged users load and execute eBPF programs
teach verifier to prevent pointer leaks.
Verifier will prevent
- any arithmetic on pointers
(except R10+Imm which is used to compute stack addresses)
- comparison of pointers
(except if (map_value_ptr == 0) ... )
- passing pointers to helper functions
- indirectly passing pointers in stack to helper functions
- returning pointer from bpf program
- storing pointers into ctx or maps
Spill/fill of pointers into stack is allowed, but mangling
of pointers stored in the stack or reading them byte by byte is not.
Within bpf programs the pointers do exist, since programs need to
be able to access maps, pass skb pointer to LD_ABS insns, etc
but programs cannot pass such pointer values to the outside
or obfuscate them.
Only allow BPF_PROG_TYPE_SOCKET_FILTER unprivileged programs,
so that socket filters (tcpdump), af_packet (quic acceleration)
and future kcm can use it.
tracing and tc cls/act program types still require root permissions,
since tracing actually needs to be able to see all kernel pointers
and tc is for root only.
For example, the following unprivileged socket filter program is allowed:
int bpf_prog1(struct __sk_buff *skb)
{
u32 index = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
u64 *value = bpf_map_lookup_elem(&my_map, &index);
if (value)
*value += skb->len;
return 0;
}
but the following program is not:
int bpf_prog1(struct __sk_buff *skb)
{
u32 index = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
u64 *value = bpf_map_lookup_elem(&my_map, &index);
if (value)
*value += (u64) skb;
return 0;
}
since it would leak the kernel address into the map.
Unprivileged socket filter bpf programs have access to the
following helper functions:
- map lookup/update/delete (but they cannot store kernel pointers into them)
- get_random (it's already exposed to unprivileged user space)
- get_smp_processor_id
- tail_call into another socket filter program
- ktime_get_ns
The feature is controlled by sysctl kernel.unprivileged_bpf_disabled.
This toggle defaults to off (0), but can be set true (1). Once true,
bpf programs and maps cannot be accessed from unprivileged process,
and the toggle cannot be set back to false.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
#endif
#ifdef CONFIG_BPF_SYSCALL
	{
		.procname	= "unprivileged_bpf_disabled",
		.data		= &sysctl_unprivileged_bpf_disabled,
		.maxlen		= sizeof(sysctl_unprivileged_bpf_disabled),
		.mode		= 0644,
		/* only handle a transition from default "0" to "1" */
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &one,
		.extra2		= &one,
	},
	{
		.procname	= "bpf_stats_enabled",
		.data		= &sysctl_bpf_stats_enabled,
		.maxlen		= sizeof(sysctl_bpf_stats_enabled),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax_bpf_stats,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
	{
		.procname	= "panic_on_rcu_stall",
		.data		= &sysctl_panic_on_rcu_stall,
		.maxlen		= sizeof(sysctl_panic_on_rcu_stall),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
#ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
	{
		.procname	= "stack_erasing",
		.data		= NULL,
		.maxlen		= sizeof(int),
		.mode		= 0600,
		.proc_handler	= stack_erasing_sysctl,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
	{ }
};

static struct ctl_table vm_table[] = {
	{
		.procname	= "overcommit_memory",
		.data		= &sysctl_overcommit_memory,
		.maxlen		= sizeof(sysctl_overcommit_memory),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &two,
	},
	{
		.procname	= "panic_on_oom",
		.data		= &sysctl_panic_on_oom,
		.maxlen		= sizeof(sysctl_panic_on_oom),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &two,
	},
	{
		.procname	= "oom_kill_allocating_task",
		.data		= &sysctl_oom_kill_allocating_task,
		.maxlen		= sizeof(sysctl_oom_kill_allocating_task),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
oom: add sysctl to enable task memory dump
Adds a new sysctl, 'oom_dump_tasks', that enables the kernel to produce a
dump of all system tasks (excluding kernel threads) when performing an
OOM-killing. Information includes pid, uid, tgid, vm size, rss, cpu,
oom_adj score, and name.
This is helpful for determining why there was an OOM condition and which
rogue task caused it.
It is configurable so that large systems, such as those with several
thousand tasks, do not incur a performance penalty associated with dumping
data they may not desire.
If an OOM was triggered as a result of a memory controller, the tasklist
shall be filtered to exclude tasks that are not a member of the same
cgroup.
Cc: Andrea Arcangeli <andrea@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
	{
		.procname	= "oom_dump_tasks",
		.data		= &sysctl_oom_dump_tasks,
		.maxlen		= sizeof(sysctl_oom_dump_tasks),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{
		.procname	= "overcommit_ratio",
		.data		= &sysctl_overcommit_ratio,
		.maxlen		= sizeof(sysctl_overcommit_ratio),
		.mode		= 0644,
		.proc_handler	= overcommit_ratio_handler,
	},
	{
		.procname	= "overcommit_kbytes",
		.data		= &sysctl_overcommit_kbytes,
		.maxlen		= sizeof(sysctl_overcommit_kbytes),
		.mode		= 0644,
		.proc_handler	= overcommit_kbytes_handler,
	},
	{
		.procname	= "page-cluster",
		.data		= &page_cluster,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
	},
	{
		.procname	= "dirty_background_ratio",
		.data		= &dirty_background_ratio,
		.maxlen		= sizeof(dirty_background_ratio),
		.mode		= 0644,
		.proc_handler	= dirty_background_ratio_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
mm: add dirty_background_bytes and dirty_bytes sysctls
This change introduces two new sysctls to /proc/sys/vm:
dirty_background_bytes and dirty_bytes.
dirty_background_bytes is the counterpart to dirty_background_ratio and
dirty_bytes is the counterpart to dirty_ratio.
With growing memory capacities of individual machines, it's no longer
sufficient to specify dirty thresholds as a percentage of the amount of
dirtyable memory over the entire system.
dirty_background_bytes and dirty_bytes specify quantities of memory, in
bytes, that represent the dirty limits for the entire system. If either
of these values is set, its value represents the amount of dirty memory
that is needed to commence either background or direct writeback.
When a `bytes' or `ratio' file is written, its counterpart becomes a
function of the written value. For example, if dirty_bytes is written to
be 8096, 8K of memory is required to commence direct writeback.
dirty_ratio is then functionally equivalent to 8K / the amount of
dirtyable memory:
dirtyable_memory = free pages + mapped pages + file cache
dirty_background_bytes = dirty_background_ratio * dirtyable_memory
-or-
dirty_background_ratio = dirty_background_bytes / dirtyable_memory
AND
dirty_bytes = dirty_ratio * dirtyable_memory
-or-
dirty_ratio = dirty_bytes / dirtyable_memory
Only one of dirty_background_bytes and dirty_background_ratio may be
specified at a time, and only one of dirty_bytes and dirty_ratio may be
specified. When one sysctl is written, the other appears as 0 when read.
The `bytes' files operate on a page size granularity since dirty limits
are compared with ZVC values, which are in page units.
Prior to this change, the minimum dirty_ratio was 5 as implemented by
get_dirty_limits() although /proc/sys/vm/dirty_ratio would show any user
written value between 0 and 100. This restriction is maintained, but
dirty_bytes has a lower limit of only one page.
Also prior to this change, the dirty_background_ratio could not equal or
exceed dirty_ratio. This restriction is maintained in addition to
restricting dirty_background_bytes. If either background threshold equals
or exceeds that of the dirty threshold, it is implicitly set to half the
dirty threshold.
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
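The ratio/bytes equivalence is easy to work through numerically. A small
userspace sketch; note it approximates dirtyable memory with total RAM
from sysinfo(2), whereas the kernel counts free, mapped, and file-cache
pages:

	#include <stdio.h>
	#include <sys/sysinfo.h>

	int main(void)
	{
		struct sysinfo si;
		unsigned long long dirty_bytes = 8192;	/* example vm.dirty_bytes */

		if (sysinfo(&si) != 0)
			return 1;

		/* Rough stand-in for the kernel's dirtyable_memory. */
		unsigned long long dirtyable =
			(unsigned long long)si.totalram * si.mem_unit;
		double equiv_ratio =
			100.0 * (double)dirty_bytes / (double)dirtyable;

		printf("dirty_bytes=%llu behaves like dirty_ratio=%.6f%%\n",
		       dirty_bytes, equiv_ratio);
		return 0;
	}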
	{
		.procname	= "dirty_background_bytes",
		.data		= &dirty_background_bytes,
		.maxlen		= sizeof(dirty_background_bytes),
		.mode		= 0644,
		.proc_handler	= dirty_background_bytes_handler,
		.extra1		= &one_ul,
	},
	{
		.procname	= "dirty_ratio",
		.data		= &vm_dirty_ratio,
		.maxlen		= sizeof(vm_dirty_ratio),
		.mode		= 0644,
		.proc_handler	= dirty_ratio_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
	{
		.procname	= "dirty_bytes",
		.data		= &vm_dirty_bytes,
		.maxlen		= sizeof(vm_dirty_bytes),
		.mode		= 0644,
		.proc_handler	= dirty_bytes_handler,
		.extra1		= &dirty_bytes_min,
	},
	{
		.procname	= "dirty_writeback_centisecs",
		.data		= &dirty_writeback_interval,
		.maxlen		= sizeof(dirty_writeback_interval),
		.mode		= 0644,
		.proc_handler	= dirty_writeback_centisecs_handler,
	},
	{
		.procname	= "dirty_expire_centisecs",
		.data		= &dirty_expire_interval,
		.maxlen		= sizeof(dirty_expire_interval),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
	},
	{
		.procname	= "dirtytime_expire_seconds",
		.data		= &dirtytime_expire_interval,
		.maxlen		= sizeof(dirtytime_expire_interval),
		.mode		= 0644,
		.proc_handler	= dirtytime_interval_handler,
		.extra1		= &zero,
	},
	{
		.procname	= "swappiness",
		.data		= &vm_swappiness,
		.maxlen		= sizeof(vm_swappiness),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
#ifdef CONFIG_HUGETLB_PAGE
hugetlb: derive huge pages nodes allowed from task mempolicy
This patch derives a "nodes_allowed" node mask from the numa mempolicy of
the task modifying the number of persistent huge pages to control the
allocation, freeing and adjusting of surplus huge pages when the pool page
count is modified via the new sysctl or sysfs attribute
"nr_hugepages_mempolicy". The nodes_allowed mask is derived as follows:
* For "default" [NULL] task mempolicy, a NULL nodemask_t pointer
is produced. This will cause the hugetlb subsystem to use
node_online_map as the "nodes_allowed". This preserves the
behavior before this patch.
* For "preferred" mempolicy, including explicit local allocation,
a nodemask with the single preferred node will be produced.
"local" policy will NOT track any internode migrations of the
task adjusting nr_hugepages.
* For "bind" and "interleave" policy, the mempolicy's nodemask
will be used.
* Other than to inform the construction of the nodes_allowed node
mask, the actual mempolicy mode is ignored. That is, all modes
behave like interleave over the resulting nodes_allowed mask
with no "fallback".
See the updated documentation [next patch] for more information
about the implications of this patch.
Examples:
Starting with:
Node 0 HugePages_Total: 0
Node 1 HugePages_Total: 0
Node 2 HugePages_Total: 0
Node 3 HugePages_Total: 0
Default behavior [with or without this patch] balances persistent
hugepage allocation across nodes [with sufficient contiguous memory]:
sysctl vm.nr_hugepages[_mempolicy]=32
yields:
Node 0 HugePages_Total: 8
Node 1 HugePages_Total: 8
Node 2 HugePages_Total: 8
Node 3 HugePages_Total: 8
Of course, we only have nr_hugepages_mempolicy with the patch,
but with default mempolicy, nr_hugepages_mempolicy behaves the
same as nr_hugepages.
Applying mempolicy--e.g., with numactl [using '-m' a.k.a.
'--membind' because it allows multiple nodes to be specified
and it's easy to type]--we can allocate huge pages on
individual nodes or sets of nodes. So, starting from the
condition above, with 8 huge pages per node, add 8 more to
node 2 using:
numactl -m 2 sysctl vm.nr_hugepages_mempolicy=40
This yields:
Node 0 HugePages_Total: 8
Node 1 HugePages_Total: 8
Node 2 HugePages_Total: 16
Node 3 HugePages_Total: 8
The incremental 8 huge pages were restricted to node 2 by the
specified mempolicy.
Similarly, we can use mempolicy to free persistent huge pages
from specified nodes:
numactl -m 0,1 sysctl vm.nr_hugepages_mempolicy=32
yields:
Node 0 HugePages_Total: 4
Node 1 HugePages_Total: 4
Node 2 HugePages_Total: 16
Node 3 HugePages_Total: 8
The 8 huge pages freed were balanced over nodes 0 and 1.
[rientjes@google.com: accommodate reworked NODEMASK_ALLOC]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Andi Kleen <andi@firstfloor.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Nishanth Aravamudan <nacc@us.ibm.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
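The per-node totals shown in the examples can also be read
programmatically. A hedged sketch, assuming the usual sysfs layout (the
hugepages-2048kB directory name depends on the architecture's default
huge page size):

	#include <stdio.h>

	int main(void)
	{
		char path[128], buf[32];
		int node;

		for (node = 0; node < 4; node++) {	/* example: first four nodes */
			FILE *f;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/node/node%d/hugepages/"
				 "hugepages-2048kB/nr_hugepages", node);
			f = fopen(path, "r");
			if (!f)
				break;	/* node absent or hugepages unsupported */
			if (fgets(buf, sizeof(buf), f))
				printf("Node %d HugePages_Total: %s", node, buf);
			fclose(f);
		}
		return 0;
	}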
	{
		.procname	= "nr_hugepages",
		.data		= NULL,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= hugetlb_sysctl_handler,
	},
#ifdef CONFIG_NUMA
	{
		.procname	= "nr_hugepages_mempolicy",
		.data		= NULL,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= &hugetlb_mempolicy_sysctl_handler,
	},
	{
		.procname	= "numa_stat",
		.data		= &sysctl_vm_numa_stat,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= sysctl_vm_numa_stat_handler,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
	{
		.procname	= "hugetlb_shm_group",
		.data		= &sysctl_hugetlb_shm_group,
		.maxlen		= sizeof(gid_t),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
hugetlb: introduce nr_overcommit_hugepages sysctl
While examining the code to support /proc/sys/vm/hugetlb_dynamic_pool, I
became convinced that having a boolean sysctl was insufficient:
1) To support per-node control of hugepages, I have previously submitted
patches to add a sysfs attribute related to nr_hugepages. However, with
a boolean global value and per-mount quota enforcement constraining the
dynamic pool, adding corresponding control of the dynamic pool on a
per-node basis seems inconsistent to me.
2) Administration of the hugetlb dynamic pool with multiple hugetlbfs
mount points is, arguably, more arduous than it needs to be. Each quota
would need to be set separately, and the sum would need to be monitored.
To ease the administration, and to help pave the way for per-node
control of the static & dynamic hugepage pool, I added a separate
sysctl, nr_overcommit_hugepages. This value serves as a high watermark
for the overall hugepage pool, while nr_hugepages serves as a low
watermark. The boolean sysctl can then be removed, as the condition
nr_overcommit_hugepages > 0
indicates the same administrative setting as
hugetlb_dynamic_pool == 1
Quotas still serve as local enforcement of the size of the pool on a
per-mount basis.
A few caveats:
1) There is a race whereby the global surplus huge page counter is
incremented before a hugepage has been allocated. Another process could
then try to grow the pool, fail to convert a surplus huge page to a
normal huge page, and instead allocate a fresh huge page. I believe this is
benign, as no memory is leaked (the actual pages are still tracked
correctly) and the counters won't go out of sync.
2) Shrinking the static pool while a surplus is in effect will allow the
number of surplus huge pages to exceed the overcommit value. As long as
this condition holds, however, no more surplus huge pages will be
allowed on the system until one of the two sysctls is increased
sufficiently, or the surplus huge pages go out of use and are freed.
Successfully tested on x86_64 with the current libhugetlbfs snapshot,
modified to use the new sysctl.
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Acked-by: Adam Litke <agl@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
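Driving the low/high watermark pair described above takes nothing more
than two procfs writes. A minimal sketch (the values 16 and 32 are
examples only; needs root):

	#include <stdio.h>

	static int write_sysctl(const char *path, long val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fprintf(f, "%ld\n", val);
		return fclose(f);
	}

	int main(void)
	{
		/* Static pool (low watermark), then the overcommit
		 * ceiling (high watermark) on top of it. */
		if (write_sysctl("/proc/sys/vm/nr_hugepages", 16))
			return 1;
		if (write_sysctl("/proc/sys/vm/nr_overcommit_hugepages", 32))
			return 1;
		return 0;
	}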
	{
		.procname	= "nr_overcommit_hugepages",
		.data		= NULL,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= hugetlb_overcommit_handler,
	},
#endif
	{
		.procname	= "lowmem_reserve_ratio",
		.data		= &sysctl_lowmem_reserve_ratio,
		.maxlen		= sizeof(sysctl_lowmem_reserve_ratio),
		.mode		= 0644,
		.proc_handler	= lowmem_reserve_ratio_sysctl_handler,
	},
	{
		.procname	= "drop_caches",
		.data		= &sysctl_drop_caches,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= drop_caches_sysctl_handler,
		.extra1		= &one,
		.extra2		= &four,
	},
#ifdef CONFIG_COMPACTION
	{
		.procname	= "compact_memory",
		.data		= &sysctl_compact_memory,
		.maxlen		= sizeof(int),
		.mode		= 0200,
		.proc_handler	= sysctl_compaction_handler,
	},
	{
		.procname	= "extfrag_threshold",
		.data		= &sysctl_extfrag_threshold,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &min_extfrag_threshold,
		.extra2		= &max_extfrag_threshold,
	},
	{
		.procname	= "compact_unevictable_allowed",
		.data		= &sysctl_compact_unevictable_allowed,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif /* CONFIG_COMPACTION */
	{
		.procname	= "min_free_kbytes",
		.data		= &min_free_kbytes,
		.maxlen		= sizeof(min_free_kbytes),
		.mode		= 0644,
		.proc_handler	= min_free_kbytes_sysctl_handler,
		.extra1		= &zero,
	},
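Of these entries, drop_caches is the one most often driven by hand:
writing 1 drops clean page cache, 2 drops reclaimable slab objects, and 3
drops both, so dirty data should be synced first. A minimal sketch (needs
root):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int fd;

		sync();	/* only clean objects are dropped, so flush first */
		fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
		if (fd < 0)
			return 1;
		if (write(fd, "3", 1) != 1) {
			close(fd);
			return 1;
		}
		close(fd);
		return 0;
	}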
mm: reclaim small amounts of memory when an external fragmentation event occurs
An external fragmentation event was previously described as
When the page allocator fragments memory, it records the event using
the mm_page_alloc_extfrag event. If the fallback_order is smaller
than a pageblock order (order-9 on 64-bit x86) then it's considered
an event that will cause external fragmentation issues in the future.
The kernel reduces the probability of such events by increasing the
watermark sizes by calling set_recommended_min_free_kbytes early in the
lifetime of the system. This works reasonably well in general but if
there are enough sparsely populated pageblocks then the problem can still
occur as enough memory is free overall and kswapd stays asleep.
This patch introduces a watermark_boost_factor sysctl that allows a zone
watermark to be temporarily boosted when an external fragmentation-causing
event occurs. The boosting will stall allocations that would decrease
free memory below the boosted low watermark, and kswapd is woken, if the
calling context allows it, to reclaim an amount of memory relative to the
size of the high watermark and the watermark_boost_factor until the boost is
cleared. When kswapd finishes, it wakes kcompactd at the pageblock order
to clean some of the pageblocks that may have been affected by the
fragmentation event. kswapd avoids any writeback, slab shrinkage and swap
from reclaim context during this operation to avoid excessive system
disruption in the name of fragmentation avoidance. Care is taken so that
kswapd will do normal reclaim work if the system is really low on memory.
This was evaluated using the same workloads as "mm, page_alloc: Spread
allocations across zones before introducing fragmentation".
1-socket Skylake machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 1 THP allocating thread
--------------------------------------
4.20-rc3 extfrag events < order 9: 804694
4.20-rc3+patch: 408912 (49% reduction)
4.20-rc3+patch1-4: 18421 (98% reduction)
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Amean fault-base-1 653.58 ( 0.00%) 652.71 ( 0.13%)
Amean fault-huge-1 0.00 ( 0.00%) 178.93 * -99.00%*
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Percentage huge-1 0.00 ( 0.00%) 5.12 ( 100.00%)
Note that external fragmentation-causing events are massively reduced by
this patch, whether compared to the previous kernel or the vanilla
kernel. The fault latency for huge pages appears to be increased but that
is only because THP allocations were successful with the patch applied.
1-socket Skylake machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------
4.20-rc3 extfrag events < order 9: 291392
4.20-rc3+patch: 191187 (34% reduction)
4.20-rc3+patch1-4: 13464 (95% reduction)
thpfioscale Fault Latencies
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Min fault-base-1 912.00 ( 0.00%) 905.00 ( 0.77%)
Min fault-huge-1 127.00 ( 0.00%) 135.00 ( -6.30%)
Amean fault-base-1 1467.55 ( 0.00%) 1481.67 ( -0.96%)
Amean fault-huge-1 1127.11 ( 0.00%) 1063.88 * 5.61%*
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Percentage huge-1 77.64 ( 0.00%) 83.46 ( 7.49%)
As before, massive reduction in external fragmentation events, some jitter
on latencies and an increase in THP allocation success rates.
2-socket Haswell machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 5 THP allocating threads
----------------------------------------------------------------
4.20-rc3 extfrag events < order 9: 215698
4.20-rc3+patch: 200210 (7% reduction)
4.20-rc3+patch1-4: 14263 (93% reduction)
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Amean fault-base-5 1346.45 ( 0.00%) 1306.87 ( 2.94%)
Amean fault-huge-5 3418.60 ( 0.00%) 1348.94 ( 60.54%)
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Percentage huge-5 0.78 ( 0.00%) 7.91 ( 910.64%)
There is a 93% reduction in fragmentation causing events, there is a big
reduction in the huge page fault latency and allocation success rate is
higher.
2-socket Haswell machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------
4.20-rc3 extfrag events < order 9: 166352
4.20-rc3+patch: 147463 (11% reduction)
4.20-rc3+patch1-4: 11095 (93% reduction)
thpfioscale Fault Latencies
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Amean fault-base-5 6217.43 ( 0.00%) 7419.67 * -19.34%*
Amean fault-huge-5 3163.33 ( 0.00%) 3263.80 ( -3.18%)
4.20.0-rc3 4.20.0-rc3
lowzone-v5r8 boost-v5r8
Percentage huge-5 95.14 ( 0.00%) 87.98 ( -7.53%)
There is a large reduction in fragmentation events with some jitter around
the latencies and success rates. As before, the high THP allocation
success rate does mean the system is under a lot of pressure. However, as
the fragmentation events are reduced, it would be expected that the
long-term allocation success rate would be higher.
Link: http://lkml.kernel.org/r/20181123114528.28802-5-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
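The boost itself is simple arithmetic: the factor is expressed in
fractions of 10,000, so the documented default of 15,000 allows a zone's
watermark to be boosted by up to 150% of its high watermark. A worked
sketch (example numbers; pageblock rounding and the zone's real watermark
values are omitted):

	#include <stdio.h>

	int main(void)
	{
		unsigned long high_wmark_pages = 12288;		/* example zone value */
		unsigned long watermark_boost_factor = 15000;	/* documented default */
		unsigned long max_boost;

		max_boost = high_wmark_pages * watermark_boost_factor / 10000;
		printf("watermark may be boosted by up to %lu pages\n",
		       max_boost);
		return 0;
	}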
	{
		.procname	= "watermark_boost_factor",
		.data		= &watermark_boost_factor,
		.maxlen		= sizeof(watermark_boost_factor),
		.mode		= 0644,
		.proc_handler	= watermark_boost_factor_sysctl_handler,
		.extra1		= &zero,
	},
	{
		.procname	= "watermark_scale_factor",
		.data		= &watermark_scale_factor,
		.maxlen		= sizeof(watermark_scale_factor),
		.mode		= 0644,
		.proc_handler	= watermark_scale_factor_sysctl_handler,
		.extra1		= &one,
		.extra2		= &one_thousand,
	},
	{
		.procname	= "percpu_pagelist_fraction",
		.data		= &percpu_pagelist_fraction,
		.maxlen		= sizeof(percpu_pagelist_fraction),
		.mode		= 0644,
		.proc_handler	= percpu_pagelist_fraction_sysctl_handler,
		.extra1		= &zero,
	},
#ifdef CONFIG_MMU
	{
		.procname	= "max_map_count",
		.data		= &sysctl_max_map_count,
		.maxlen		= sizeof(sysctl_max_map_count),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
	},
#else
	{
		.procname	= "nr_trim_pages",
		.data		= &sysctl_nr_trim_pages,
		.maxlen		= sizeof(sysctl_nr_trim_pages),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
	},
#endif
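max_map_count bounds how many VMAs a single process may own; current
usage is one line per VMA in /proc/<pid>/maps. A small sketch comparing a
process's own count against the limit:

	#include <stdio.h>

	static long count_lines(const char *path)
	{
		FILE *f = fopen(path, "r");
		long n = 0;
		int c;

		if (!f)
			return -1;
		while ((c = fgetc(f)) != EOF)
			if (c == '\n')
				n++;
		fclose(f);
		return n;
	}

	int main(void)
	{
		long limit = 0;
		long used = count_lines("/proc/self/maps");
		FILE *f = fopen("/proc/sys/vm/max_map_count", "r");

		if (f) {
			if (fscanf(f, "%ld", &limit) != 1)
				limit = 0;
			fclose(f);
		}
		printf("mappings in use: %ld of max_map_count %ld\n",
		       used, limit);
		return 0;
	}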
	{
		.procname	= "laptop_mode",
		.data		= &laptop_mode,
		.maxlen		= sizeof(laptop_mode),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "block_dump",
		.data		= &block_dump,
		.maxlen		= sizeof(block_dump),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
	{
		.procname	= "vfs_cache_pressure",
		.data		= &sysctl_vfs_cache_pressure,
		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
#ifdef HAVE_ARCH_PICK_MMAP_LAYOUT
	{
		.procname	= "legacy_va_layout",
		.data		= &sysctl_legacy_va_layout,
		.maxlen		= sizeof(sysctl_legacy_va_layout),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
#endif
#ifdef CONFIG_NUMA
	{
		.procname	= "zone_reclaim_mode",
		.data		= &node_reclaim_mode,
		.maxlen		= sizeof(node_reclaim_mode),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
	{
		.procname	= "min_unmapped_ratio",
		.data		= &sysctl_min_unmapped_ratio,
		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
		.mode		= 0644,
		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
	{
		.procname	= "min_slab_ratio",
		.data		= &sysctl_min_slab_ratio,
		.maxlen		= sizeof(sysctl_min_slab_ratio),
		.mode		= 0644,
		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
		.extra1		= &zero,
		.extra2		= &one_hundred,
	},
#endif
#ifdef CONFIG_SMP
	{
		.procname	= "stat_interval",
		.data		= &sysctl_stat_interval,
		.maxlen		= sizeof(sysctl_stat_interval),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{
		.procname	= "stat_refresh",
		.data		= NULL,
		.maxlen		= 0,
		.mode		= 0600,
		.proc_handler	= vmstat_refresh,
	},
#endif
#ifdef CONFIG_MMU
	{
		.procname	= "mmap_min_addr",
		.data		= &dac_mmap_min_addr,
		.maxlen		= sizeof(unsigned long),
		.mode		= 0644,
		.proc_handler	= mmap_min_addr_handler,
	},
#endif
#ifdef CONFIG_NUMA
	{
		.procname	= "numa_zonelist_order",
		.data		= &numa_zonelist_order,
		.maxlen		= NUMA_ZONELIST_ORDER_LEN,
		.mode		= 0644,
		.proc_handler	= numa_zonelist_order_handler,
	},
#endif
#if (defined(CONFIG_X86_32) && !defined(CONFIG_UML)) || \
	(defined(CONFIG_SUPERH) && defined(CONFIG_VSYSCALL))
[PATCH] vdso: randomize the i386 vDSO by moving it into a vma
Move the i386 VDSO down into a vma and thus randomize it.
Besides the security implications, this feature also helps debuggers, which
can COW a vma-backed VDSO just like a normal DSO and can thus do
single-stepping and other debugging features.
It's good for hypervisors (Xen, VMWare) too, which typically live in the same
high-mapped address space as the VDSO, hence whenever the VDSO is used, they
get lots of guest pagefaults and have to fix such guest accesses up - which
slows things down instead of speeding things up (the primary purpose of the
VDSO).
There's a new CONFIG_COMPAT_VDSO (default=y) option, which provides support
for older glibcs that still rely on a prelinked high-mapped VDSO. Newer
distributions (using glibc 2.3.3 or later) can turn this option off. Turning
it off is also recommended for security reasons: attackers cannot use the
predictable high-mapped VDSO page as syscall trampoline anymore.
There is a new vdso=[0|1] boot option as well, and a runtime
/proc/sys/vm/vdso_enabled sysctl switch, that allows the VDSO to be turned
on/off.
(This version of the VDSO-randomization patch also has working ELF
coredumping; the previous patch crashed in the coredumping code.)
This code is a combined work of the exec-shield VDSO randomization
code and Gerd Hoffmann's hypervisor-centric VDSO patch. Rusty Russell
started this patch and I completed it.
[akpm@osdl.org: cleanups]
[akpm@osdl.org: compile fix]
[akpm@osdl.org: compile fix 2]
[akpm@osdl.org: compile fix 3]
[akpm@osdl.org: revert MAXMEM change]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Cc: Gerd Hoffmann <kraxel@suse.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
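The randomization described above is easy to observe: with vdso_enabled
nonzero, the [vdso] line in /proc/<pid>/maps moves between runs. A
minimal sketch:

	#include <stdio.h>
	#include <string.h>

	/* Print this process's [vdso] mapping; run it twice and compare
	 * the addresses to see the per-exec randomization. */
	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/self/maps", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (strstr(line, "[vdso]"))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}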
	{
		.procname	= "vdso_enabled",
#ifdef CONFIG_X86_32
		.data		= &vdso32_enabled,
		.maxlen		= sizeof(vdso32_enabled),
#else
		.data		= &vdso_enabled,
		.maxlen		= sizeof(vdso_enabled),
#endif
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
		.extra1		= &zero,
	},
#endif
#ifdef CONFIG_HIGHMEM
	{
		.procname	= "highmem_is_dirtyable",
		.data		= &vm_highmem_is_dirtyable,
		.maxlen		= sizeof(vm_highmem_is_dirtyable),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
#ifdef CONFIG_MEMORY_FAILURE
	{
		.procname	= "memory_failure_early_kill",
		.data		= &sysctl_memory_failure_early_kill,
		.maxlen		= sizeof(sysctl_memory_failure_early_kill),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{
		.procname	= "memory_failure_recovery",
		.data		= &sysctl_memory_failure_recovery,
		.maxlen		= sizeof(sysctl_memory_failure_recovery),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &one,
	},
#endif
mm: limit growth of 3% hardcoded other user reserve
Add user_reserve_kbytes knob.
Limit the growth of the memory reserved for other user processes to
min(3% current process size, user_reserve_pages). Only about 8MB is
necessary to enable recovery in the default mode, and only a few hundred
MB are required even when overcommit is disabled.
user_reserve_pages defaults to min(3% free pages, 128MB)
I arrived at 128MB by taking the max VSZ of sshd, login, bash, and top ...
then adding the RSS of each.
This only affects OVERCOMMIT_NEVER mode.
Background
1. user reserve
__vm_enough_memory reserves a hardcoded 3% of the current process size for
other applications when overcommit is disabled. This was done so that a
user could recover if they launched a memory hogging process. Without the
reserve, a user would easily run into a message such as:
bash: fork: Cannot allocate memory
2. admin reserve
Additionally, a hardcoded 3% of free memory is reserved for root in both
overcommit 'guess' and 'never' modes. This was intended to prevent a
scenario where root can't log in and perform recovery operations.
Note that this reserve shrinks, and doesn't guarantee a useful reserve.
Motivation
The two hardcoded memory reserves should be updated to account for current
memory sizes.
Also, the admin reserve would be more useful if it didn't shrink too much.
When the current code was originally written, 1GB was considered
"enterprise". Now the 3% reserve can grow to multiple GB on large memory
systems, and it only needs to be a few hundred MB at most to enable a user
or admin to recover a system with an unwanted memory hogging process.
I've found that reducing these reserves is especially beneficial for a
specific type of application load:
* single application system
* one or few processes (e.g. one per core)
* allocating all available memory
* not initializing every page immediately
* long running
I've run scientific clusters with this sort of load. A long running job
sometimes failed many hours (weeks of CPU time) into a calculation. They
weren't initializing all of their memory immediately, and they weren't
using calloc, so I put systems into overcommit 'never' mode. These
clusters run diskless and have no swap.
However, with the current reserves, a user wishing to allocate as much
memory as possible to one process may be prevented from using, for
example, almost 2GB out of 32GB.
The effect is less, but still significant when a user starts a job with
one process per core. I have repeatedly seen a set of processes
requesting the same amount of memory fail because one of them could not
allocate the amount of memory a user would expect to be able to allocate.
For example, Message Passing Interface (MPI) processes, one per core. And
it is similar for other parallel programming frameworks.
Changing this reserve code will make the overcommit never mode more useful
by allowing applications to allocate nearly all of the available memory.
Also, the new admin_reserve_kbytes will be safer than the current behavior
since the hardcoded 3% of available memory reserve can shrink to something
useless in the case where applications have grabbed all available memory.
Risks
* "bash: fork: Cannot allocate memory"
The downside of the first patch-- which creates a tunable user reserve
that is only used in overcommit 'never' mode--is that an admin can set
it so low that a user may not be able to kill their process, even if
they already have a shell prompt.
Of course, a user can get in the same predicament with the current 3%
reserve--they just have to launch processes until 3% becomes negligible.
* root-cant-log-in problem
The second patch, adding the tunable rootuser_reserve_pages, allows
the admin to shoot themselves in the foot by setting it too small. They
can easily get the system into a state where root-can't-log-in.
However, the new admin_reserve_kbytes will be safer than the current
behavior since the hardcoded 3% of available memory reserve can shrink
to something useless in the case where applications have grabbed all
available memory.
Alternatives
* Memory cgroups provide a more flexible way to limit application memory.
Not everyone wants to set up cgroups or deal with their overhead.
* We could create a fourth overcommit mode which provides smaller reserves.
The size of useful reserves may be drastically different depending
on whether the system is embedded or enterprise.
* Force users to initialize all of their memory or use calloc.
Some users don't want/expect the system to overcommit when they malloc.
Overcommit 'never' mode is for this scenario, and it should work well.
The new user and admin reserve tunables are simple to use, with low
overhead compared to cgroups. The patches preserve current behavior where
3% of memory is less than 128MB, except that the admin reserve doesn't
shrink to an unusable size under pressure. The code allows admins to tune
for embedded and enterprise usage.
FAQ
* How is the root-cant-login problem addressed?
What happens if admin_reserve_pages is set to 0?
Root is free to shoot themselves in the foot by setting
admin_reserve_kbytes too low.
On x86_64, the minimum useful reserve is:
8MB for overcommit 'guess'
128MB for overcommit 'never'
admin_reserve_pages defaults to min(3% free memory, 8MB)
So, anyone switching to 'never' mode needs to adjust
admin_reserve_pages.
* How do you calculate a minimum useful reserve?
A user or the admin needs enough memory to login and perform
recovery operations, which includes, at a minimum:
sshd or login + bash (or some other shell) + top (or ps, kill, etc.)
For overcommit 'guess', we can sum resident set sizes (RSS)
because we only need enough memory to handle what the recovery
programs will typically use. On x86_64 this is about 8MB.
For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS. We use VSZ instead of RSS because this mode
forces us to ensure we can fulfill all of the requested memory allocations--
even if the programs only use a fraction of what they ask for.
On x86_64 this is about 128MB.
When swap is enabled, reserves are useful even when they are as
small as 10MB, regardless of overcommit mode.
When both swap and overcommit are disabled, then the admin should
tune the reserves higher to be absolutely safe. Over 230MB each
was safest in my testing.
* What happens if user_reserve_pages is set to 0?
Note, this only affects overcommit 'never' mode.
Then a user will be able to allocate all available memory minus
admin_reserve_kbytes.
However, they will easily see a message such as:
"bash: fork: Cannot allocate memory"
And they won't be able to recover/kill their application.
The admin should be able to recover the system if
admin_reserve_kbytes is set appropriately.
* What's the difference between overcommit 'guess' and 'never'?
"Guess" allows an allocation if there are enough free + reclaimable
pages. It has a hardcoded 3% of free pages reserved for root.
"Never" allows an allocation if there is enough swap + a configurable
percentage (default is 50) of physical RAM. It has a hardcoded 3% of
free pages reserved for root, like "Guess" mode. It also has a
hardcoded 3% of the current process size reserved for additional
applications (see the code sketch just after this FAQ).
* Why is overcommit 'guess' not suitable even when an app eventually
writes to every page? It takes free pages, file pages, available
swap pages, and reclaimable slab pages into consideration. In other words,
if these are all the pages available, why isn't overcommit 'guess' suitable?
Because it only looks at the present state of the system. It
does not take into account the memory that other applications have
malloced, but haven't initialized yet. It overcommits the system.
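To make the 'never' mode check described in this FAQ concrete, here is a
rough sketch (not the literal patch; the inline commit-limit computation
and the local variable names are illustrative) of __vm_enough_memory()
with both new reserves applied:

    /* sketch: __vm_enough_memory() in OVERCOMMIT_NEVER mode */
    allowed = (totalram_pages - hugetlb_total_pages()) *
               sysctl_overcommit_ratio / 100;
    allowed += total_swap_pages;

    /* leave room for root to log in and run recovery tools */
    if (!cap_sys_admin)
            allowed -= sysctl_admin_reserve_kbytes >> (PAGE_SHIFT - 10);

    /* don't let one process grow so big the user cannot recover;
     * total_vm / 32 is roughly 3% of the process size */
    if (mm) {
            reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
            allowed -= min_t(long, mm->total_vm / 32, reserve);
    }

    if (percpu_counter_read_positive(&vm_committed_as) < allowed)
            return 0;       /* allocation permitted */
    return -ENOMEM;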
Test Summary
There was little change in behavior in the default overcommit 'guess'
mode with swap enabled before and after the patch. This was expected.
Systems run most predictably (i.e. no oom kills) in overcommit 'never'
mode with swap enabled. This also allowed the most memory to be allocated
to a user application.
Overcommit 'guess' mode without swap is a bad idea. It is easy to
crash the system. None of the other tested combinations crashed.
This matches my experience on the Roadrunner supercomputer.
Without the tunable user reserve, a system in overcommit 'never' mode
and without swap does not allow the user to recover, although the
admin can.
With the new tunable reserves, a system in overcommit 'never' mode
and without swap can be configured to:
1. maximize user-allocatable memory, running close to the edge of
recoverability
2. maximize recoverability, sacrificing allocatable memory to
ensure that a user cannot take down a system
Test Description
Fedora 18 VM - 4 x86_64 cores, 5725MB RAM, 4GB Swap
System is booted into multiuser console mode, with unnecessary services
turned off. Caches were dropped before each test.
Hogs are user memtester processes that attempt to allocate all free memory
as reported by /proc/meminfo
In overcommit 'never' mode, memory_ratio=100
Test Results
3.9.0-rc1-mm1
Overcommit | Swap | Hogs | MB Got/Wanted | OOMs  | User Recovery | Admin Recovery
---------- | ---- | ---- | ------------- | ----- | ------------- | --------------
guess      | yes  |  1   | 5432/5432     | no    | yes           | yes
guess      | yes  |  4   | 5444/5444     | 1     | yes           | yes
guess      | no   |  1   | 5302/5449     | no    | yes           | yes
guess      | no   |  4   | -             | crash | no            | no
never      | yes  |  1   | 5460/5460     | 1     | yes           | yes
never      | yes  |  4   | 5460/5460     | 1     | yes           | yes
never      | no   |  1   | 5218/5432     | no    | no            | yes
never      | no   |  4   | 5203/5448     | no    | no            | yes
3.9.0-rc1-mm1-tunablereserves
User and Admin Recovery show their respective reserves, if applicable.
Overcommit | Swap | Hogs | MB Got/Wanted | OOMs  | User Recovery | Admin Recovery
---------- | ---- | ---- | ------------- | ----- | ------------- | --------------
guess      | yes  |  1   | 5419/5419     | no    | -     yes     | 8MB   yes
guess      | yes  |  4   | 5436/5436     | 1     | -     yes     | 8MB   yes
guess      | no   |  1   | 5440/5440     | *     | -     yes     | 8MB   yes
guess      | no   |  4   | -             | crash | -     no      | 8MB   no
* process would successfully mlock, then the oom killer would pick it
never      | yes  |  1   | 5446/5446     | no    | 10MB  yes     | 20MB  yes
never      | yes  |  4   | 5456/5456     | no    | 10MB  yes     | 20MB  yes
never      | no   |  1   | 5387/5429     | no    | 128MB no      | 8MB   barely
never      | no   |  1   | 5323/5428     | no    | 226MB barely  | 8MB   barely
never      | no   |  1   | 5323/5428     | no    | 226MB barely  | 8MB   barely
never      | no   |  1   | 5359/5448     | no    | 10MB  no      | 10MB  barely
never      | no   |  1   | 5323/5428     | no    | 0MB   no      | 10MB  barely
never      | no   |  1   | 5332/5428     | no    | 0MB   no      | 50MB  yes
never      | no   |  1   | 5293/5429     | no    | 0MB   no      | 90MB  yes
never      | no   |  1   | 5001/5427     | no    | 230MB yes     | 338MB yes
never      | no   |  4*  | 4998/5424     | no    | 230MB yes     | 338MB yes
* more memtesters were launched, able to allocate approximately another 100MB
Future Work
- Test larger memory systems.
- Test an embedded image.
- Test other architectures.
- Time malloc microbenchmarks.
- Would it be useful to be able to set overcommit policy for
each memory cgroup?
- Some lines are slightly above 80 chars.
Perhaps define a macro to convert between pages and kb?
Other places in the kernel do this.
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: make init_user_reserve() static]
Signed-off-by: Andrew Shewmaker <agshew@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-30 05:08:10 +07:00
|
|
|
{
|
|
|
|
.procname = "user_reserve_kbytes",
|
|
|
|
.data = &sysctl_user_reserve_kbytes,
|
|
|
|
.maxlen = sizeof(sysctl_user_reserve_kbytes),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
|
|
|
},
|
2013-04-30 05:08:11 +07:00
|
|
|
{
|
|
|
|
.procname = "admin_reserve_kbytes",
|
|
|
|
.data = &sysctl_admin_reserve_kbytes,
|
|
|
|
.maxlen = sizeof(sysctl_admin_reserve_kbytes),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
|
|
|
},
|
mm: mmap: add new /proc tunable for mmap_base ASLR
Address Space Layout Randomization (ASLR) provides a barrier to
exploitation of user-space processes in the presence of security
vulnerabilities by making it more difficult to find desired code/data
which could help an attack. This is done by adding a random offset to
the location of regions in the process address space, with a greater
range of potential offset values corresponding to better protection/a
larger search-space for brute force, but also to greater potential for
fragmentation.
The offset added to the mmap_base address, which provides the basis for
the majority of the mappings for a process, is set once on process exec
in arch_pick_mmap_layout() and is done via hard-coded per-arch values,
which reflect, hopefully, the best compromise for all systems. The
trade-off between increased entropy in the offset value generation and
the corresponding increased variability in address space fragmentation
is not absolute, however, and some platforms may tolerate higher amounts
of entropy. This patch introduces both new Kconfig values and a sysctl
interface which may be used to change the amount of entropy used for
offset generation on a system.
The direct motivation for this change was in response to the
libstagefright vulnerabilities that affected Android, specifically to
information provided by Google's project zero at:
http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html
The attack presented therein, by Google's project zero, specifically
targeted the limited randomness used to generate the offset added to the
mmap_base address in order to craft a brute-force-based attack.
Concretely, the attack was against the mediaserver process, which was
limited to respawning every 5 seconds, on an arm device. The hard-coded
8 bits used resulted in an average expected success rate of defeating
the mmap ASLR after just over 10 minutes (128 tries at 5 seconds
apiece). With this patch, and an accompanying increase in the entropy
value to 16 bits, the same attack would take an average expected time of
over 45 hours (32768 tries), which makes it both less feasible and more
likely to be noticed.
The introduced Kconfig and sysctl options are limited by per-arch
minimum and maximum values, the minimum of which was chosen to match the
current hard-coded value and the maximum of which was chosen so as to
give the greatest flexibility without generating an invalid mmap_base
address, generally 3-4 bits less than the number of bits in the
user-space accessible virtual address space.
When deciding whether or not to change the default value, a system
developer should consider that the mmap_base address could be placed
anywhere up to 2^(value) bits away from the non-randomized location,
which would introduce variable-sized areas above and below the mmap_base
address such that the maximum vm_area_struct size may be reduced,
preventing very large allocations.
This patch (of 4):
ASLR only uses as few as 8 bits to generate the random offset for the
mmap base address on 32 bit architectures. This value was chosen to
prevent a poorly chosen value from dividing the address space in such a
way as to prevent large allocations. This may not be an issue on all
platforms. Allow the specification of a minimum number of bits so that
platforms desiring greater ASLR protection may determine where to place
the trade-off.
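For illustration only (per-arch code differs; get_random_long() and the
exact masking here are assumptions of this sketch), the knob feeds into
the per-exec offset roughly like so:

    /* sketch: derive a page-aligned random offset from mmap_rnd_bits */
    unsigned long arch_mmap_rnd(void)
    {
            unsigned long rnd;

            /* keep mmap_rnd_bits low-order random bits */
            rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

            /* shift to a page boundary; mmap_base is offset by this much */
            return rnd << PAGE_SHIFT;
    }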
Signed-off-by: Daniel Cashman <dcashman@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Jeff Vander Stoep <jeffv@google.com>
Cc: Nick Kralevich <nnk@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Borislav Petkov <bp@suse.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15 06:19:53 +07:00
|
|
|
#ifdef CONFIG_HAVE_ARCH_MMAP_RND_BITS
|
|
|
|
{
|
|
|
|
.procname = "mmap_rnd_bits",
|
|
|
|
.data = &mmap_rnd_bits,
|
|
|
|
.maxlen = sizeof(mmap_rnd_bits),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = (void *)&mmap_rnd_bits_min,
|
|
|
|
.extra2 = (void *)&mmap_rnd_bits_max,
|
|
|
|
},
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS
|
|
|
|
{
|
|
|
|
.procname = "mmap_rnd_compat_bits",
|
|
|
|
.data = &mmap_rnd_compat_bits,
|
|
|
|
.maxlen = sizeof(mmap_rnd_compat_bits),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = (void *)&mmap_rnd_compat_bits_min,
|
|
|
|
.extra2 = (void *)&mmap_rnd_compat_bits_max,
|
|
|
|
},
|
userfaultfd/sysctl: add vm.unprivileged_userfaultfd
Userfaultfd can be misused to make it easier to exploit existing
use-after-free (and similar) bugs that might otherwise only make a
short window or race condition available. By using userfaultfd to
stall a kernel thread, a malicious program can keep some state that it
wrote, stable for an extended period, which it can then access using an
existing exploit. While it doesn't cause the exploit itself, and while
it's not the only thing that can stall a kernel thread when accessing a
memory location, it's one of the few that never needs privilege.
We can add a flag, allowing userfaultfd to be restricted, so that in
general it won't be usable by arbitrary user programs, but in
environments that require userfaultfd it can be turned back on.
Add a global sysctl knob "vm.unprivileged_userfaultfd" to control
whether userfaultfd is allowed by unprivileged users. When this is
set to zero, only privileged users (root user, or users with the
CAP_SYS_PTRACE capability) will be able to use the userfaultfd
syscalls.
Andrea said:
: The only difference between the bpf sysctl and the userfaultfd sysctl
: this way is that the bpf sysctl adds the CAP_SYS_ADMIN capability
: requirement, while userfaultfd adds the CAP_SYS_PTRACE requirement,
: because the userfaultfd monitor is more likely to need CAP_SYS_PTRACE
: already if it's doing other kind of tracking on processes runtime, in
: addition of userfaultfd. In other words both syscalls work only for
: root, when the two sysctls are opt-in set to 1.
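Concretely, the new knob gates the syscall entry point with a capability
check; a minimal sketch of the shape of it:

    SYSCALL_DEFINE1(userfaultfd, int, flags)
    {
            /* unprivileged use must be explicitly enabled via the sysctl */
            if (!sysctl_unprivileged_userfaultfd && !capable(CAP_SYS_PTRACE))
                    return -EPERM;
            /* ... normal userfaultfd setup continues here ... */
    }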
[dgilbert@redhat.com: changelog additions]
[akpm@linux-foundation.org: documentation tweak, per Mike]
Link: http://lkml.kernel.org/r/20190319030722.12441-2-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Maya Gokhale <gokhale2@llnl.gov>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Martin Cracauer <cracauer@cons.org>
Cc: Denis Plotnikov <dplotnikov@virtuozzo.com>
Cc: Marty McFadden <mcfadden8@llnl.gov>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 07:16:41 +07:00
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_USERFAULTFD
|
|
|
|
{
|
|
|
|
.procname = "unprivileged_userfaultfd",
|
|
|
|
.data = &sysctl_unprivileged_userfaultfd,
|
|
|
|
.maxlen = sizeof(sysctl_unprivileged_userfaultfd),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &one,
|
|
|
|
},
|
2016-01-15 06:19:53 +07:00
|
|
|
#endif
|
2009-04-03 16:30:53 +07:00
|
|
|
{ }
|
2005-04-17 05:20:36 +07:00
|
|
|
};
|
|
|
|
|
2007-10-18 17:05:22 +07:00
|
|
|
static struct ctl_table fs_table[] = {
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
.procname = "inode-nr",
|
|
|
|
.data = &inodes_stat,
|
fs: bump inode and dentry counters to long
This series reworks our current object cache shrinking infrastructure in
two main ways:
* Noticing that a lot of users copy and paste their own version of LRU
lists for objects, we put some effort in providing a generic version.
It is modeled after the filesystem users: dentries, inodes, and xfs
(for various tasks), but we expect that other users could benefit in
the near future with little or no modification. Let us know if you
have any issues.
* The underlying list_lru being proposed automatically and
transparently keeps the elements in per-node lists, and is able to
manipulate the node lists individually. Given this infrastructure, we
are able to modify the up-to-now hammer called shrink_slab to proceed
with node-reclaim instead of always searching memory from all over like
it has been doing.
Per-node lru lists are also expected to lead to less contention in the lru
locks on multi-node scans, since we are now no longer fighting for a
global lock. The locks usually disappear from the profilers with this
change.
Although we have no official benchmarks for this version - be our guest to
independently evaluate this - earlier versions of this series were
performance tested (details at
http://permalink.gmane.org/gmane.linux.kernel.mm/100537) yielding no
visible performance regressions while yielding a better qualitative
behavior in NUMA machines.
With this infrastructure in place, we can use the list_lru entry point to
provide memcg isolation and per-memcg targeted reclaim. Historically,
those two pieces of work have been posted together. This version presents
only the infrastructure work, deferring the memcg work for a later time,
so we can focus on getting this part tested. You can see more about the
history of such work at http://lwn.net/Articles/552769/
Dave Chinner (18):
dcache: convert dentry_stat.nr_unused to per-cpu counters
dentry: move to per-sb LRU locks
dcache: remove dentries from LRU before putting on dispose list
mm: new shrinker API
shrinker: convert superblock shrinkers to new API
list: add a new LRU list type
inode: convert inode lru list to generic lru list code.
dcache: convert to use new lru list infrastructure
list_lru: per-node list infrastructure
shrinker: add node awareness
fs: convert inode and dentry shrinking to be node aware
xfs: convert buftarg LRU to generic code
xfs: rework buffer dispose list tracking
xfs: convert dquot cache lru to list_lru
fs: convert fs shrinkers to new scan/count API
drivers: convert shrinkers to new count/scan API
shrinker: convert remaining shrinkers to count/scan API
shrinker: Kill old ->shrink API.
Glauber Costa (7):
fs: bump inode and dentry counters to long
super: fix calculation of shrinkable objects for small numbers
list_lru: per-node API
vmscan: per-node deferred work
i915: bail out earlier when shrinker cannot acquire mutex
hugepage: convert huge zero page shrinker to new shrinker API
list_lru: dynamically adjust node arrays
This patch:
There are situations in very large machines in which we can have a large
quantity of dirty inodes, unused dentries, etc. This is particularly true
when umounting a filesystem, where every live object will eventually be
discarded.
Dave Chinner reported a problem with this while experimenting with the
shrinker revamp patchset. So we believe it is time for a change. This
patch just moves ints to longs. Machines where it matters should have a
big long anyway.
Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2013-08-28 07:17:53 +07:00
|
|
|
.maxlen = 2*sizeof(long),
|
2005-04-17 05:20:36 +07:00
|
|
|
.mode = 0444,
|
2010-10-23 16:03:02 +07:00
|
|
|
.proc_handler = proc_nr_inodes,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "inode-state",
|
|
|
|
.data = &inodes_stat,
|
2013-08-28 07:17:53 +07:00
|
|
|
.maxlen = 7*sizeof(long),
|
2005-04-17 05:20:36 +07:00
|
|
|
.mode = 0444,
|
2010-10-23 16:03:02 +07:00
|
|
|
.proc_handler = proc_nr_inodes,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "file-nr",
|
|
|
|
.data = &files_stat,
|
2010-10-27 04:22:44 +07:00
|
|
|
.maxlen = sizeof(files_stat),
|
2005-04-17 05:20:36 +07:00
|
|
|
.mode = 0444,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_nr_files,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "file-max",
|
|
|
|
.data = &files_stat.max_files,
|
2010-10-27 04:22:44 +07:00
|
|
|
.maxlen = sizeof(files_stat.max_files),
|
2005-04-17 05:20:36 +07:00
|
|
|
.mode = 0644,
|
2010-10-27 04:22:44 +07:00
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
2019-04-06 08:39:38 +07:00
|
|
|
.extra1 = &zero_ul,
|
2019-03-08 07:29:43 +07:00
|
|
|
.extra2 = &long_max,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
2008-02-06 16:37:16 +07:00
|
|
|
{
|
|
|
|
.procname = "nr_open",
|
|
|
|
.data = &sysctl_nr_open,
|
2016-09-02 04:38:52 +07:00
|
|
|
.maxlen = sizeof(unsigned int),
|
2008-02-06 16:37:16 +07:00
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec_minmax,
|
2008-05-10 21:08:32 +07:00
|
|
|
.extra1 = &sysctl_nr_open_min,
|
|
|
|
.extra2 = &sysctl_nr_open_max,
|
2008-02-06 16:37:16 +07:00
|
|
|
},
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
.procname = "dentry-state",
|
|
|
|
.data = &dentry_stat,
|
2013-08-28 07:17:53 +07:00
|
|
|
.maxlen = 6*sizeof(long),
|
2005-04-17 05:20:36 +07:00
|
|
|
.mode = 0444,
|
2010-10-10 16:36:23 +07:00
|
|
|
.proc_handler = proc_nr_dentry,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "overflowuid",
|
|
|
|
.data = &fs_overflowuid,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec_minmax,
|
2005-04-17 05:20:36 +07:00
|
|
|
.extra1 = &minolduid,
|
|
|
|
.extra2 = &maxolduid,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "overflowgid",
|
|
|
|
.data = &fs_overflowgid,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec_minmax,
|
2005-04-17 05:20:36 +07:00
|
|
|
.extra1 = &minolduid,
|
|
|
|
.extra2 = &maxolduid,
|
|
|
|
},
|
2008-08-06 20:12:22 +07:00
|
|
|
#ifdef CONFIG_FILE_LOCKING
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
.procname = "leases-enable",
|
|
|
|
.data = &leases_enable,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
2008-08-06 20:12:22 +07:00
|
|
|
#endif
|
2005-04-17 05:20:36 +07:00
|
|
|
#ifdef CONFIG_DNOTIFY
|
|
|
|
{
|
|
|
|
.procname = "dir-notify-enable",
|
|
|
|
.data = &dir_notify_enable,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
#endif
|
|
|
|
#ifdef CONFIG_MMU
|
2008-08-06 20:12:22 +07:00
|
|
|
#ifdef CONFIG_FILE_LOCKING
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
.procname = "lease-break-time",
|
|
|
|
.data = &lease_break_time,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
2008-08-06 20:12:22 +07:00
|
|
|
#endif
|
2008-10-16 12:05:12 +07:00
|
|
|
#ifdef CONFIG_AIO
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
.procname = "aio-nr",
|
|
|
|
.data = &aio_nr,
|
|
|
|
.maxlen = sizeof(aio_nr),
|
|
|
|
.mode = 0444,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "aio-max-nr",
|
|
|
|
.data = &aio_max_nr,
|
|
|
|
.maxlen = sizeof(aio_max_nr),
|
|
|
|
.mode = 0644,
|
2009-11-16 18:11:48 +07:00
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
2005-04-17 05:20:36 +07:00
|
|
|
},
|
2008-10-16 12:05:12 +07:00
|
|
|
#endif /* CONFIG_AIO */
|
2006-06-02 03:10:59 +07:00
|
|
|
#ifdef CONFIG_INOTIFY_USER
|
2005-07-13 23:38:18 +07:00
|
|
|
{
|
|
|
|
.procname = "inotify",
|
|
|
|
.mode = 0555,
|
|
|
|
.child = inotify_table,
|
|
|
|
},
|
|
|
|
#endif
|
epoll: introduce resource usage limits
It has been thought that the per-user file descriptors limit would also
limit the resources that a normal user can request via the epoll
interface. Vegard Nossum reported a very simple program (a modified
version attached) that can make a normal user request a pretty large
amount of kernel memory, well within its maximum number of fds. To
solve such problem, default limits are now imposed, and /proc based
configuration has been introduced. A new directory has been created,
named /proc/sys/fs/epoll/ and inside there, there are two configuration
points:
max_user_instances = Maximum number of devices - per user
max_user_watches = Maximum number of "watched" fds - per user
The current default for "max_user_watches" limits the memory used by epoll
to store "watches", to 1/32 of the amount of the low RAM. As example, a
256MB 32bit machine, will have "max_user_watches" set to roughly 90000.
That should be enough to not break existing heavy epoll users. The
default value for "max_user_instances" is set to 128, that should be
enough too.
This also changes the userspace, because a new error code can now come out
from EPOLL_CTL_ADD (-ENOSPC). The EMFILE from epoll_create() was already
listed, so that should be ok.
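In rough terms (a sketch; locking and the cleanup paths are elided), each
EPOLL_CTL_ADD now charges the watch to the owning user before inserting:

    /* sketch: per-user watch accounting in ep_insert() */
    struct user_struct *user = ep->user;

    if (atomic_read(&user->epoll_watches) >= max_user_watches)
            return -ENOSPC;        /* the new error userspace can see */
    atomic_inc(&user->epoll_watches);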
[akpm@linux-foundation.org: use get_current_user()]
Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: <stable@kernel.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Reported-by: Vegard Nossum <vegardno@ifi.uio.no>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-12-02 04:13:55 +07:00
|
|
|
#ifdef CONFIG_EPOLL
|
|
|
|
{
|
|
|
|
.procname = "epoll",
|
|
|
|
.mode = 0555,
|
|
|
|
.child = epoll_table,
|
|
|
|
},
|
|
|
|
#endif
|
2005-04-17 05:20:36 +07:00
|
|
|
#endif
|
fs: add link restrictions
This adds symlink and hardlink restrictions to the Linux VFS.
Symlinks:
A long-standing class of security issues is the symlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given symlink (i.e. a
root process follows a symlink belonging to another user). For a likely
incomplete list of hundreds of examples across the years, please see:
http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
The solution is to permit symlinks to only be followed when outside
a sticky world-writable directory, or when the uid of the symlink and
follower match, or when the directory owner matches the symlink's owner.
Some pointers to the history of earlier discussion that I could find:
1996 Aug, Zygo Blaxell
http://marc.info/?l=bugtraq&m=87602167419830&w=2
1996 Oct, Andrew Tridgell
http://lkml.indiana.edu/hypermail/linux/kernel/9610.2/0086.html
1997 Dec, Albert D Cahalan
http://lkml.org/lkml/1997/12/16/4
2005 Feb, Lorenzo Hernández García-Hierro
http://lkml.indiana.edu/hypermail/linux/kernel/0502.0/1896.html
2010 May, Kees Cook
https://lkml.org/lkml/2010/5/30/144
Past objections and rebuttals could be summarized as:
- Violates POSIX.
- POSIX didn't consider this situation and it's not useful to follow
a broken specification at the cost of security.
- Might break unknown applications that use this feature.
- Applications that break because of the change are easy to spot and
fix. Applications that are vulnerable to symlink ToCToU by not having
the change aren't. Additionally, no applications have yet been found
that rely on this behavior.
- Applications should just use mkstemp() or O_CREATE|O_EXCL.
- True, but applications are not perfect, and new software is written
all the time that makes these mistakes; blocking this flaw at the
kernel is a single solution to the entire class of vulnerability.
- This should live in the core VFS.
- This should live in an LSM. (https://lkml.org/lkml/2010/5/31/135)
- This should live in an LSM.
- This should live in the core VFS. (https://lkml.org/lkml/2010/8/2/188)
Hardlinks:
On systems that have user-writable directories on the same partition
as system files, a long-standing class of security issues is the
hardlink-based time-of-check-time-of-use race, most commonly seen in
world-writable directories like /tmp. The common method of exploitation
of this flaw is to cross privilege boundaries when following a given
hardlink (i.e. a root process follows a hardlink created by another
user). Additionally, an issue exists where users can "pin" a potentially
vulnerable setuid/setgid file so that an administrator will not actually
upgrade a system fully.
The solution is to permit hardlinks to only be created when the user is
already the existing file's owner, or if they already have read/write
access to the existing file.
Many Linux users are surprised when they learn they can link to files
they have no access to, so this change appears to follow the doctrine
of "least surprise". Additionally, this change does not violate POSIX,
which states "the implementation may require that the calling process
has permission to access the existing file"[1].
This change is known to break some implementations of the "at" daemon,
though the version used by Fedora and Ubuntu has been fixed[2] for
a while. Otherwise, the change has been undisruptive while in use in
Ubuntu for the last 1.5 years.
[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/linkat.html
[2] http://anonscm.debian.org/gitweb/?p=collab-maint/at.git;a=commitdiff;h=f4114656c3a6c6f6070e315ffdf940a49eda3279
This patch is based on the patches in Openwall and grsecurity, along with
suggestions from Al Viro. I have added a sysctl to enable the protected
behavior, and documentation.
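Schematically, the symlink policy reduces to a short series of checks
(a sketch of the rules stated above, not the exact helper):

    /* sketch: is following this symlink permitted? 0 = yes */
    if (!sysctl_protected_symlinks)
            return 0;
    /* link owner matches the follower */
    if (uid_eq(current_fsuid(), inode->i_uid))
            return 0;
    /* not inside a sticky, world-writable directory */
    if ((dir->i_mode & (S_ISVTX | S_IWOTH)) != (S_ISVTX | S_IWOTH))
            return 0;
    /* directory owner matches the symlink owner */
    if (uid_eq(dir->i_uid, inode->i_uid))
            return 0;
    return -EACCES;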
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-07-26 07:29:07 +07:00
|
|
|
{
|
|
|
|
.procname = "protected_symlinks",
|
|
|
|
.data = &sysctl_protected_symlinks,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &one,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "protected_hardlinks",
|
|
|
|
.data = &sysctl_protected_hardlinks,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &one,
|
|
|
|
},
|
2018-08-24 07:00:35 +07:00
|
|
|
{
|
|
|
|
.procname = "protected_fifos",
|
|
|
|
.data = &sysctl_protected_fifos,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &two,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "protected_regular",
|
|
|
|
.data = &sysctl_protected_regular,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0600,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &two,
|
|
|
|
},
|
2005-06-23 14:09:43 +07:00
|
|
|
{
|
|
|
|
.procname = "suid_dumpable",
|
|
|
|
.data = &suid_dumpable,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2012-07-31 04:39:18 +07:00
|
|
|
.proc_handler = proc_dointvec_minmax_coredump,
|
2009-04-03 06:58:33 +07:00
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &two,
|
2005-06-23 14:09:43 +07:00
|
|
|
},
|
2007-02-14 15:34:07 +07:00
|
|
|
#if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
|
|
|
|
{
|
|
|
|
.procname = "binfmt_misc",
|
|
|
|
.mode = 0555,
|
2015-05-10 10:09:14 +07:00
|
|
|
.child = sysctl_mount_point,
|
2007-02-14 15:34:07 +07:00
|
|
|
},
|
|
|
|
#endif
|
2010-05-20 02:03:16 +07:00
|
|
|
{
|
2010-06-03 19:54:39 +07:00
|
|
|
.procname = "pipe-max-size",
|
|
|
|
.data = &pipe_max_size,
|
pipe: match pipe_max_size data type with procfs
Patch series "A few round_pipe_size() and pipe-max-size fixups", v3.
While backporting Michael's "pipe: fix limit handling" patchset to a
distro-kernel, Mikulas noticed that current upstream pipe limit handling
contains a few problems:
1 - procfs signed wrap: echo'ing a large number into
/proc/sys/fs/pipe-max-size and then cat'ing it back out shows a
negative value.
2 - round_pipe_size() nr_pages overflow on 32bit: this would
subsequently try roundup_pow_of_two(0), which is undefined.
3 - visible non-rounded pipe-max-size value: there is no mutual
exclusion or protection between the time pipe_max_size is assigned
a raw value from proc_dointvec_minmax() and when it is rounded.
4 - unsigned long -> unsigned int conversion makes for potential odd
return errors from do_proc_douintvec_minmax_conv() and
do_proc_dopipe_max_size_conv().
This version underwent the same testing as v1:
https://marc.info/?l=linux-kernel&m=150643571406022&w=2
This patch (of 4):
pipe_max_size is defined as an unsigned int:
unsigned int pipe_max_size = 1048576;
but its procfs/sysctl representation is an integer:
static struct ctl_table fs_table[] = {
...
{
.procname = "pipe-max-size",
.data = &pipe_max_size,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = &pipe_proc_fn,
.extra1 = &pipe_min_size,
},
...
that is signed:
int pipe_proc_fn(struct ctl_table *table, int write, void __user *buf,
size_t *lenp, loff_t *ppos)
{
...
ret = proc_dointvec_minmax(table, write, buf, lenp, ppos)
This leads to signed results via procfs for large values of pipe_max_size:
% echo 2147483647 >/proc/sys/fs/pipe-max-size
% cat /proc/sys/fs/pipe-max-size
-2147483648
Use unsigned operations on this variable to avoid such negative values.
Link: http://lkml.kernel.org/r/1507658689-11669-2-git-send-email-joe.lawrence@redhat.com
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-11-18 06:29:17 +07:00
|
|
|
.maxlen = sizeof(pipe_max_size),
|
2010-05-20 02:03:16 +07:00
|
|
|
.mode = 0644,
|
2018-02-07 06:41:49 +07:00
|
|
|
.proc_handler = proc_dopipe_max_size,
|
2010-05-20 02:03:16 +07:00
|
|
|
},
|
2016-01-18 22:36:09 +07:00
|
|
|
{
|
|
|
|
.procname = "pipe-user-pages-hard",
|
|
|
|
.data = &pipe_user_pages_hard,
|
|
|
|
.maxlen = sizeof(pipe_user_pages_hard),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "pipe-user-pages-soft",
|
|
|
|
.data = &pipe_user_pages_soft,
|
|
|
|
.maxlen = sizeof(pipe_user_pages_soft),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_doulongvec_minmax,
|
|
|
|
},
|
2016-09-28 12:27:17 +07:00
|
|
|
{
|
|
|
|
.procname = "mount-max",
|
|
|
|
.data = &sysctl_mount_max,
|
|
|
|
.maxlen = sizeof(unsigned int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &one,
|
|
|
|
},
|
2009-04-03 16:30:53 +07:00
|
|
|
{ }
|
2005-04-17 05:20:36 +07:00
|
|
|
};
|
|
|
|
|
2007-10-18 17:05:22 +07:00
|
|
|
static struct ctl_table debug_table[] = {
|
2012-10-09 06:28:16 +07:00
|
|
|
#ifdef CONFIG_SYSCTL_EXCEPTION_TRACE
|
2007-07-22 16:12:28 +07:00
|
|
|
{
|
|
|
|
.procname = "exception-trace",
|
|
|
|
.data = &show_unhandled_signals,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec
|
|
|
|
},
|
2010-02-25 20:34:15 +07:00
|
|
|
#endif
|
|
|
|
#if defined(CONFIG_OPTPROBES)
|
|
|
|
{
|
|
|
|
.procname = "kprobes-optimization",
|
|
|
|
.data = &sysctl_kprobes_optimization,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_kprobes_optimization_handler,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &one,
|
|
|
|
},
|
2007-07-22 16:12:28 +07:00
|
|
|
#endif
|
2009-04-03 16:30:53 +07:00
|
|
|
{ }
|
2005-04-17 05:20:36 +07:00
|
|
|
};
|
|
|
|
|
2007-10-18 17:05:22 +07:00
|
|
|
static struct ctl_table dev_table[] = {
|
2009-04-03 16:30:53 +07:00
|
|
|
{ }
|
[PATCH] inotify
inotify is intended to correct the deficiencies of dnotify, particularly
its inability to scale and its terrible user interface:
* dnotify requires the opening of one fd per each directory
that you intend to watch. This quickly results in too many
open files and pins removable media, preventing unmount.
* dnotify is directory-based. You only learn about changes to
directories. Sure, a change to a file in a directory affects
the directory, but you are then forced to keep a cache of
stat structures.
* dnotify's interface to user-space is awful. Signals?
inotify provides a more usable, simple, powerful solution to file change
notification:
* inotify's interface is a system call that returns a fd, not SIGIO.
You get a single fd, which is select()-able.
* inotify has an event that says "the filesystem that the item
you were watching is on was unmounted."
* inotify can watch directories or files.
Inotify is currently used by Beagle (a desktop search infrastructure),
Gamin (a FAM replacement), and other projects.
See Documentation/filesystems/inotify.txt.
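A minimal consumer looks like this (sketch; error handling omitted, and
the watched path is just an example):

    int fd = inotify_init();
    int wd = inotify_add_watch(fd, "/etc", IN_CREATE | IN_DELETE | IN_MODIFY);
    char buf[4096];
    /* read() returns one or more packed struct inotify_event records */
    ssize_t n = read(fd, buf, sizeof(buf));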
Signed-off-by: Robert Love <rml@novell.com>
Cc: John McCutchan <ttb@tentacle.dhs.org>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-13 04:06:03 +07:00
|
|
|
};
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2012-01-06 18:34:20 +07:00
|
|
|
int __init sysctl_init(void)
|
2007-02-14 15:34:13 +07:00
|
|
|
{
|
2012-07-31 04:42:48 +07:00
|
|
|
struct ctl_table_header *hdr;
|
|
|
|
|
|
|
|
hdr = register_sysctl_table(sysctl_base_table);
|
|
|
|
kmemleak_not_leak(hdr);
|
2007-02-14 15:34:13 +07:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2006-09-27 15:51:04 +07:00
|
|
|
#endif /* CONFIG_SYSCTL */
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* /proc/sys support
|
|
|
|
*/
|
|
|
|
|
2006-09-27 15:51:04 +07:00
|
|
|
#ifdef CONFIG_PROC_SYSCTL
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2014-06-07 04:37:17 +07:00
|
|
|
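/*
 * Helper for string sysctls: on read, copy the string out and append a
 * newline; on write, copy the user buffer in, stopping at NUL or '\n',
 * honouring the sysctl_writes_strict file-position semantics.
 */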
static int _proc_do_string(char *data, int maxlen, int write,
|
|
|
|
char __user *buffer,
|
2006-10-02 16:18:05 +07:00
|
|
|
size_t *lenp, loff_t *ppos)
|
2005-04-17 05:20:36 +07:00
|
|
|
{
|
|
|
|
size_t len;
|
|
|
|
char __user *p;
|
|
|
|
char c;
|
2007-02-10 16:46:38 +07:00
|
|
|
|
|
|
|
if (!data || !maxlen || !*lenp) {
|
2005-04-17 05:20:36 +07:00
|
|
|
*lenp = 0;
|
|
|
|
return 0;
|
|
|
|
}
|
2007-02-10 16:46:38 +07:00
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
if (write) {
|
		if (sysctl_writes_strict == SYSCTL_WRITES_STRICT) {
			/* Only continue writes not past the end of buffer. */
			len = strlen(data);
			if (len > maxlen - 1)
				len = maxlen - 1;

			if (*ppos > len)
				return 0;
			len = *ppos;
		} else {
			/* Start writing from beginning of buffer. */
			len = 0;
		}

		*ppos += *lenp;
		p = buffer;
		while ((p - buffer) < *lenp && len < maxlen - 1) {
			if (get_user(c, p++))
				return -EFAULT;
			if (c == 0 || c == '\n')
				break;
			data[len++] = c;
		}
		data[len] = 0;
	} else {
		len = strlen(data);
		if (len > maxlen)
			len = maxlen;

		if (*ppos > len) {
			*lenp = 0;
			return 0;
		}

		data += *ppos;
		len  -= *ppos;

		if (len > *lenp)
			len = *lenp;
		if (len)
			if (copy_to_user(buffer, data, len))
				return -EFAULT;
		if (len < *lenp) {
			if (put_user('\n', buffer + len))
				return -EFAULT;
			len++;
		}
		*lenp = len;
		*ppos += len;
	}
	return 0;
}
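
/*
 * Illustrative note (not part of the original source): the strict-write
 * branch above exists because, historically, every write() restarted the
 * string from offset 0, so the *last* write won regardless of file
 * position. With kernel.sysctl_writes_strict = 1 the file position is
 * honoured instead. Legacy behaviour, sketched from userspace:
 *
 *	open("/proc/sys/kernel/modprobe", O_WRONLY) = 1
 *	write(1, "AAAA...", 4096) = 4096
 *	write(1, "/bin/true", 9)  = 9
 *	close(1)                  = 0
 *	$ cat /proc/sys/kernel/modprobe
 *	/bin/true
 */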

static void warn_sysctl_write(struct ctl_table *table)
{
	pr_warn_once("%s wrote to %s when file position was not 0!\n"
		"This will not be supported in the future. To silence this\n"
		"warning, set kernel.sysctl_writes_strict = -1\n",
		current->comm, table->procname);
}

/**
 * proc_first_pos_non_zero_ignore - check if first position is allowed
 * @ppos: file position
 * @table: the sysctl table
 *
 * Returns true if the first position is non-zero and the sysctl_writes_strict
 * mode indicates this is not allowed for numeric input types. String proc
 * handlers can ignore the return value.
 */
static bool proc_first_pos_non_zero_ignore(loff_t *ppos,
					   struct ctl_table *table)
{
	if (!*ppos)
		return false;

	switch (sysctl_writes_strict) {
	case SYSCTL_WRITES_STRICT:
		return true;
	case SYSCTL_WRITES_WARN:
		warn_sysctl_write(table);
		return false;
	default:
		return false;
	}
}

/**
 * proc_dostring - read a string sysctl
 * @table: the sysctl table
 * @write: %TRUE if this is a write to the sysctl file
 * @buffer: the user buffer
 * @lenp: the size of the user buffer
 * @ppos: file position
 *
 * Reads/writes a string from/to the user buffer. If the kernel
 * buffer provided is not large enough to hold the string, the
 * string is truncated. The copied string is %NUL-terminated.
 * If the string is being read by the user process, it is copied
 * and a newline '\n' is added. It is truncated if the buffer is
 * not large enough.
 *
 * Returns 0 on success.
 */
int proc_dostring(struct ctl_table *table, int write,
		  void __user *buffer, size_t *lenp, loff_t *ppos)
{
	if (write)
		proc_first_pos_non_zero_ignore(ppos, table);

	return _proc_do_string((char *)(table->data), table->maxlen, write,
			       (char __user *)buffer, lenp, ppos);
}
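
/*
 * Usage sketch (illustrative only, not part of this file): a string
 * sysctl is typically declared with a fixed-size kernel buffer and
 * proc_dostring as its handler; the foo_* names below are made up.
 *
 *	static char foo_path[256];
 *	static struct ctl_table foo_table[] = {
 *		{
 *			.procname	= "foo_path",
 *			.data		= foo_path,
 *			.maxlen		= sizeof(foo_path),
 *			.mode		= 0644,
 *			.proc_handler	= proc_dostring,
 *		},
 *		{ }
 *	};
 */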

static size_t proc_skip_spaces(char **buf)
{
	size_t ret;
	char *tmp = skip_spaces(*buf);

	ret = tmp - *buf;
	*buf = tmp;
	return ret;
}

static void proc_skip_char(char **buf, size_t *size, const char v)
{
	while (*size) {
		if (**buf != v)
			break;
		(*size)--;
		(*buf)++;
	}
}

/**
 * strtoul_lenient - parse an ASCII formatted integer from a buffer and only
 *                   fail on overflow
 *
 * @cp: kernel buffer containing the string to parse
 * @endp: pointer to store the trailing characters
 * @base: the base to use
 * @res: where the parsed integer will be stored
 *
 * In case of success 0 is returned and @res will contain the parsed integer,
 * @endp will hold any trailing characters.
 * This function only fails the parse on overflow. If there was no overflow,
 * it defers the decision of which characters count as invalid to the caller.
 */
static int strtoul_lenient(const char *cp, char **endp, unsigned int base,
			   unsigned long *res)
{
	unsigned long long result;
	unsigned int rv;

	cp = _parse_integer_fixup_radix(cp, &base);
	rv = _parse_integer(cp, base, &result);
	if ((rv & KSTRTOX_OVERFLOW) || (result != (unsigned long)result))
		return -ERANGE;

	cp += rv;

	if (endp)
		*endp = (char *)cp;

	*res = (unsigned long)result;
	return 0;
}
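
/*
 * Example of the lenient semantics (illustrative comment only): parsing
 * "123abc" succeeds with *res == 123 and *endp left pointing at "abc";
 * whether that trailer is acceptable is the caller's decision (see the
 * perm_tr check in proc_get_long() below). Only an overflowing value
 * fails, with -ERANGE.
 */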

#define TMPBUFLEN 22
/**
 * proc_get_long - reads an ASCII formatted integer from a user buffer
 *
 * @buf: a kernel buffer
 * @size: size of the kernel buffer
 * @val: this is where the number will be stored
 * @neg: set to %TRUE if number is negative
 * @perm_tr: a vector which contains the allowed trailers
 * @perm_tr_len: size of the perm_tr vector
 * @tr: pointer to store the trailer character
 *
 * In case of success %0 is returned and @buf and @size are updated with
 * the amount of bytes read. If @tr is non-NULL and a trailing
 * character exists (size is non-zero after returning from this
 * function), @tr is updated with the trailing character.
 */
static int proc_get_long(char **buf, size_t *size,
			 unsigned long *val, bool *neg,
			 const char *perm_tr, unsigned perm_tr_len, char *tr)
{
	int len;
	char *p, tmp[TMPBUFLEN];

	if (!*size)
		return -EINVAL;

	len = *size;
	if (len > TMPBUFLEN - 1)
		len = TMPBUFLEN - 1;

	memcpy(tmp, *buf, len);

	tmp[len] = 0;
	p = tmp;
	if (*p == '-' && *size > 1) {
		*neg = true;
		p++;
	} else
		*neg = false;
	if (!isdigit(*p))
		return -EINVAL;

	if (strtoul_lenient(p, &p, 0, val))
		return -EINVAL;

	len = p - tmp;

	/*
	 * We don't know if the next char is whitespace, thus we may accept
	 * invalid integers (e.g. 1234...a) or two integers instead of one
	 * (e.g. 123...1). So let's not allow such large numbers.
	 */
	if (len == TMPBUFLEN - 1)
		return -EINVAL;

	if (len < *size && perm_tr_len && !memchr(perm_tr, *p, perm_tr_len))
		return -EINVAL;

	if (tr && (len < *size))
		*tr = *p;

	*buf += len;
	*size -= len;

	return 0;
}

/**
 * proc_put_long - converts an integer to a decimal ASCII formatted string
 *
 * @buf: the user buffer
 * @size: the size of the user buffer
 * @val: the integer to be converted
 * @neg: sign of the number, %TRUE for negative
 *
 * In case of success %0 is returned and @buf and @size are updated with
 * the amount of bytes written.
 */
static int proc_put_long(void __user **buf, size_t *size, unsigned long val,
			 bool neg)
{
	int len;
	char tmp[TMPBUFLEN], *p = tmp;

	sprintf(p, "%s%lu", neg ? "-" : "", val);
	len = strlen(tmp);
	if (len > *size)
		len = *size;
	if (copy_to_user(*buf, tmp, len))
		return -EFAULT;
	*size -= len;
	*buf += len;
	return 0;
}
#undef TMPBUFLEN

static int proc_put_char(void __user **buf, size_t *size, char c)
{
	if (*size) {
		char __user **buffer = (char __user **)buf;
		if (put_user(c, *buffer))
			return -EFAULT;
		(*size)--, (*buffer)++;
		*buf = *buffer;
	}
	return 0;
}

static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp,
				 int *valp,
				 int write, void *data)
{
	if (write) {
		if (*negp) {
			if (*lvalp > (unsigned long) INT_MAX + 1)
				return -EINVAL;
			*valp = -*lvalp;
		} else {
			if (*lvalp > (unsigned long) INT_MAX)
				return -EINVAL;
			*valp = *lvalp;
		}
	} else {
		int val = *valp;
		if (val < 0) {
			*negp = true;
			*lvalp = -(unsigned long)val;
		} else {
			*negp = false;
			*lvalp = (unsigned long)val;
		}
	}
	return 0;
}

static int do_proc_douintvec_conv(unsigned long *lvalp,
				  unsigned int *valp,
				  int write, void *data)
{
	if (write) {
		if (*lvalp > UINT_MAX)
			return -EINVAL;
		*valp = *lvalp;
	} else {
		unsigned int val = *valp;
		*lvalp = (unsigned long)val;
	}
	return 0;
}

static const char proc_wspace_sep[] = { ' ', '\t', '\n' };

static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table,
			      int write, void __user *buffer,
			      size_t *lenp, loff_t *ppos,
			      int (*conv)(bool *negp, unsigned long *lvalp, int *valp,
					  int write, void *data),
			      void *data)
{
	int *i, vleft, first = 1, err = 0;
	size_t left;
	char *kbuf = NULL, *p;

	if (!tbl_data || !table->maxlen || !*lenp || (*ppos && !write)) {
		*lenp = 0;
		return 0;
	}

	i = (int *) tbl_data;
	vleft = table->maxlen / sizeof(*i);
	left = *lenp;

	if (!conv)
		conv = do_proc_dointvec_conv;

	if (write) {
		if (proc_first_pos_non_zero_ignore(ppos, table))
			goto out;
		if (left > PAGE_SIZE - 1)
			left = PAGE_SIZE - 1;
		p = kbuf = memdup_user_nul(buffer, left);
		if (IS_ERR(kbuf))
			return PTR_ERR(kbuf);
	}

	for (; left && vleft--; i++, first=0) {
		unsigned long lval;
		bool neg;

		if (write) {
			left -= proc_skip_spaces(&p);

			if (!left)
				break;
			err = proc_get_long(&p, &left, &lval, &neg,
					    proc_wspace_sep,
					    sizeof(proc_wspace_sep), NULL);
			if (err)
				break;
			if (conv(&neg, &lval, i, 1, data)) {
				err = -EINVAL;
				break;
			}
		} else {
			if (conv(&neg, &lval, i, 0, data)) {
				err = -EINVAL;
				break;
			}
			if (!first)
				err = proc_put_char(&buffer, &left, '\t');
			if (err)
				break;
			err = proc_put_long(&buffer, &left, lval, neg);
			if (err)
				break;
		}
	}

	if (!write && !first && left && !err)
		err = proc_put_char(&buffer, &left, '\n');
	if (write && !err && left)
		left -= proc_skip_spaces(&p);
	if (write) {
		kfree(kbuf);
		if (first)
			return err ? : -EINVAL;
	}
	*lenp -= left;
out:
	*ppos += *lenp;
	return err;
}

static int do_proc_dointvec(struct ctl_table *table, int write,
			    void __user *buffer, size_t *lenp, loff_t *ppos,
			    int (*conv)(bool *negp, unsigned long *lvalp, int *valp,
					int write, void *data),
			    void *data)
{
	return __do_proc_dointvec(table->data, table, write,
				  buffer, lenp, ppos, conv, data);
}

static int do_proc_douintvec_w(unsigned int *tbl_data,
			       struct ctl_table *table,
			       void __user *buffer,
			       size_t *lenp, loff_t *ppos,
			       int (*conv)(unsigned long *lvalp,
					   unsigned int *valp,
					   int write, void *data),
			       void *data)
{
	unsigned long lval;
	int err = 0;
	size_t left;
	bool neg;
	char *kbuf = NULL, *p;

	left = *lenp;

	if (proc_first_pos_non_zero_ignore(ppos, table))
		goto bail_early;

	if (left > PAGE_SIZE - 1)
		left = PAGE_SIZE - 1;

	p = kbuf = memdup_user_nul(buffer, left);
	if (IS_ERR(kbuf))
		return -EINVAL;

	left -= proc_skip_spaces(&p);
	if (!left) {
		err = -EINVAL;
		goto out_free;
	}

	err = proc_get_long(&p, &left, &lval, &neg,
			    proc_wspace_sep,
			    sizeof(proc_wspace_sep), NULL);
	if (err || neg) {
		err = -EINVAL;
		goto out_free;
	}

	if (conv(&lval, tbl_data, 1, data)) {
		err = -EINVAL;
		goto out_free;
	}

	if (!err && left)
		left -= proc_skip_spaces(&p);

out_free:
	kfree(kbuf);
	if (err)
		return -EINVAL;

	return 0;

	/* This is in keeping with old __do_proc_dointvec() */
bail_early:
	*ppos += *lenp;
	return err;
}

static int do_proc_douintvec_r(unsigned int *tbl_data, void __user *buffer,
			       size_t *lenp, loff_t *ppos,
			       int (*conv)(unsigned long *lvalp,
					   unsigned int *valp,
					   int write, void *data),
			       void *data)
{
	unsigned long lval;
	int err = 0;
	size_t left;

	left = *lenp;

	if (conv(&lval, tbl_data, 0, data)) {
		err = -EINVAL;
		goto out;
	}

	err = proc_put_long(&buffer, &left, lval, false);
	if (err || !left)
		goto out;

	err = proc_put_char(&buffer, &left, '\n');

out:
	*lenp -= left;
	*ppos += *lenp;

	return err;
}

static int __do_proc_douintvec(void *tbl_data, struct ctl_table *table,
			       int write, void __user *buffer,
			       size_t *lenp, loff_t *ppos,
			       int (*conv)(unsigned long *lvalp,
					   unsigned int *valp,
					   int write, void *data),
			       void *data)
{
	unsigned int *i, vleft;

	if (!tbl_data || !table->maxlen || !*lenp || (*ppos && !write)) {
		*lenp = 0;
		return 0;
	}

	i = (unsigned int *) tbl_data;
	vleft = table->maxlen / sizeof(*i);

	/*
	 * Arrays are not supported, keep this simple. *Do not* add
	 * support for them.
	 */
	if (vleft != 1) {
		*lenp = 0;
		return -EINVAL;
	}

	if (!conv)
		conv = do_proc_douintvec_conv;

	if (write)
		return do_proc_douintvec_w(i, table, buffer, lenp, ppos,
					   conv, data);
	return do_proc_douintvec_r(i, buffer, lenp, ppos, conv, data);
}

static int do_proc_douintvec(struct ctl_table *table, int write,
			     void __user *buffer, size_t *lenp, loff_t *ppos,
			     int (*conv)(unsigned long *lvalp,
					 unsigned int *valp,
					 int write, void *data),
			     void *data)
{
	return __do_proc_douintvec(table->data, table, write,
				   buffer, lenp, ppos, conv, data);
}

/**
 * proc_dointvec - read a vector of integers
 * @table: the sysctl table
 * @write: %TRUE if this is a write to the sysctl file
 * @buffer: the user buffer
 * @lenp: the size of the user buffer
 * @ppos: file position
 *
 * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
 * values from/to the user buffer, treated as an ASCII string.
 *
 * Returns 0 on success.
 */
int proc_dointvec(struct ctl_table *table, int write,
		  void __user *buffer, size_t *lenp, loff_t *ppos)
{
	return do_proc_dointvec(table, write, buffer, lenp, ppos, NULL, NULL);
}
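
/*
 * Usage sketch (illustrative only, not part of this file): because the
 * handler walks table->maxlen / sizeof(int) elements, an int array can
 * be exposed as a single space-separated file. The foo_* names are
 * made up.
 *
 *	static int foo_vec[3];
 *	static struct ctl_table foo_table[] = {
 *		{
 *			.procname	= "foo_vec",
 *			.data		= foo_vec,
 *			.maxlen		= sizeof(foo_vec),
 *			.mode		= 0644,
 *			.proc_handler	= proc_dointvec,
 *		},
 *		{ }
 *	};
 *
 * Writing "10 20 30" from userspace then fills all three slots.
 */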

/**
 * proc_douintvec - read a vector of unsigned integers
 * @table: the sysctl table
 * @write: %TRUE if this is a write to the sysctl file
 * @buffer: the user buffer
 * @lenp: the size of the user buffer
 * @ppos: file position
 *
 * Reads/writes up to table->maxlen/sizeof(unsigned int) unsigned integer
 * values from/to the user buffer, treated as an ASCII string.
 *
 * Returns 0 on success.
 */
int proc_douintvec(struct ctl_table *table, int write,
		   void __user *buffer, size_t *lenp, loff_t *ppos)
{
	return do_proc_douintvec(table, write, buffer, lenp, ppos,
				 do_proc_douintvec_conv, NULL);
}

/*
 * Taint values can only be increased.
 * This means we can safely use a temporary.
 */
static int proc_taint(struct ctl_table *table, int write,
		      void __user *buffer, size_t *lenp, loff_t *ppos)
{
	struct ctl_table t;
	unsigned long tmptaint = get_taint();
	int err;

	if (write && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	t = *table;
	t.data = &tmptaint;
	err = proc_doulongvec_minmax(&t, write, buffer, lenp, ppos);
	if (err < 0)
		return err;

	if (write) {
		/*
		 * Poor man's atomic or. Not worth adding a primitive
		 * to everyone's atomic.h for this
		 */
		int i;
		for (i = 0; i < BITS_PER_LONG && tmptaint >> i; i++) {
			if ((tmptaint >> i) & 1)
				add_taint(i, LOCKDEP_STILL_OK);
		}
	}

	return err;
}

#ifdef CONFIG_PRINTK
static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,
				void __user *buffer, size_t *lenp, loff_t *ppos)
{
	if (write && !capable(CAP_SYS_ADMIN))
		return -EPERM;

	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
}
#endif

/**
 * struct do_proc_dointvec_minmax_conv_param - proc_dointvec_minmax() range checking structure
 * @min: pointer to minimum allowable value
 * @max: pointer to maximum allowable value
 *
 * The do_proc_dointvec_minmax_conv_param structure provides the
 * minimum and maximum values for doing range checking for those sysctl
 * parameters that use the proc_dointvec_minmax() handler.
 */
struct do_proc_dointvec_minmax_conv_param {
	int *min;
	int *max;
};

static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
					int *valp,
					int write, void *data)
{
	int tmp, ret;
	struct do_proc_dointvec_minmax_conv_param *param = data;
	/*
	 * If writing, first do so via a temporary local int so we can
	 * bounds-check it before touching *valp.
	 */
	int *ip = write ? &tmp : valp;

	ret = do_proc_dointvec_conv(negp, lvalp, ip, write, data);
	if (ret)
		return ret;

	if (write) {
		if ((param->min && *param->min > tmp) ||
		    (param->max && *param->max < tmp))
			return -EINVAL;
		*valp = tmp;
	}

	return 0;
}

/**
 * proc_dointvec_minmax - read a vector of integers with min/max values
 * @table: the sysctl table
 * @write: %TRUE if this is a write to the sysctl file
 * @buffer: the user buffer
 * @lenp: the size of the user buffer
 * @ppos: file position
 *
 * Reads/writes up to table->maxlen/sizeof(unsigned int) integer
 * values from/to the user buffer, treated as an ASCII string.
 *
 * This routine will ensure the values are within the range specified by
 * table->extra1 (min) and table->extra2 (max).
 *
 * Returns 0 on success or -EINVAL on write when the range check fails.
 */
int proc_dointvec_minmax(struct ctl_table *table, int write,
			 void __user *buffer, size_t *lenp, loff_t *ppos)
{
	struct do_proc_dointvec_minmax_conv_param param = {
		.min = (int *) table->extra1,
		.max = (int *) table->extra2,
	};
	return do_proc_dointvec(table, write, buffer, lenp, ppos,
				do_proc_dointvec_minmax_conv, &param);
}
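
/*
 * Usage sketch (illustrative only, not part of this file): extra1 and
 * extra2 carry the clamp bounds; a write outside [*extra1, *extra2]
 * fails with -EINVAL and leaves the stored value untouched. The foo_*
 * names are made up.
 *
 *	static int foo_level;
 *	static int foo_min = 0, foo_max = 100;
 *	static struct ctl_table foo_table[] = {
 *		{
 *			.procname	= "foo_level",
 *			.data		= &foo_level,
 *			.maxlen		= sizeof(int),
 *			.mode		= 0644,
 *			.proc_handler	= proc_dointvec_minmax,
 *			.extra1		= &foo_min,
 *			.extra2		= &foo_max,
 *		},
 *		{ }
 *	};
 */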

/**
 * struct do_proc_douintvec_minmax_conv_param - proc_douintvec_minmax() range checking structure
 * @min: pointer to minimum allowable value
 * @max: pointer to maximum allowable value
 *
 * The do_proc_douintvec_minmax_conv_param structure provides the
 * minimum and maximum values for doing range checking for those sysctl
 * parameters that use the proc_douintvec_minmax() handler.
 */
struct do_proc_douintvec_minmax_conv_param {
	unsigned int *min;
	unsigned int *max;
};

static int do_proc_douintvec_minmax_conv(unsigned long *lvalp,
					 unsigned int *valp,
					 int write, void *data)
{
	int ret;
	unsigned int tmp;
	struct do_proc_douintvec_minmax_conv_param *param = data;
	/* write via temporary local uint for bounds-checking */
	unsigned int *up = write ? &tmp : valp;

	ret = do_proc_douintvec_conv(lvalp, up, write, data);
	if (ret)
		return ret;

	if (write) {
		if ((param->min && *param->min > tmp) ||
		    (param->max && *param->max < tmp))
			return -ERANGE;

		*valp = tmp;
	}

	return 0;
}

/**
 * proc_douintvec_minmax - read a vector of unsigned ints with min/max values
 * @table: the sysctl table
 * @write: %TRUE if this is a write to the sysctl file
 * @buffer: the user buffer
 * @lenp: the size of the user buffer
 * @ppos: file position
 *
 * Reads/writes up to table->maxlen/sizeof(unsigned int) unsigned integer
 * values from/to the user buffer, treated as an ASCII string. Negative
 * strings are not allowed.
 *
 * This routine will ensure the values are within the range specified by
 * table->extra1 (min) and table->extra2 (max). There is a final sanity
 * check for UINT_MAX to avoid having to support wrap around uses from
 * userspace.
 *
 * Returns 0 on success or -ERANGE on write when the range check fails.
 */
int proc_douintvec_minmax(struct ctl_table *table, int write,
			  void __user *buffer, size_t *lenp, loff_t *ppos)
{
	struct do_proc_douintvec_minmax_conv_param param = {
		.min = (unsigned int *) table->extra1,
		.max = (unsigned int *) table->extra2,
	};
	return do_proc_douintvec(table, write, buffer, lenp, ppos,
				 do_proc_douintvec_minmax_conv, &param);
}
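
/*
 * Note (illustrative only): the unsigned variant differs from
 * proc_dointvec_minmax in two user-visible ways: input with a leading
 * '-' fails with -EINVAL in the write path above, and an out-of-range
 * value fails with -ERANGE rather than -EINVAL. The table wiring is
 * otherwise the same, with extra1/extra2 pointing at unsigned int
 * bounds.
 */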

static int do_proc_dopipe_max_size_conv(unsigned long *lvalp,
					unsigned int *valp,
					int write, void *data)
{
	if (write) {
		unsigned int val;

		val = round_pipe_size(*lvalp);
		if (val == 0)
			return -EINVAL;

		*valp = val;
	} else {
		unsigned int val = *valp;
		*lvalp = (unsigned long) val;
	}

	return 0;
}

static int proc_dopipe_max_size(struct ctl_table *table, int write,
				void __user *buffer, size_t *lenp, loff_t *ppos)
{
	return do_proc_douintvec(table, write, buffer, lenp, ppos,
				 do_proc_dopipe_max_size_conv, NULL);
}

static void validate_coredump_safety(void)
{
#ifdef CONFIG_COREDUMP
	if (suid_dumpable == SUID_DUMP_ROOT &&
	    core_pattern[0] != '/' && core_pattern[0] != '|') {
		printk(KERN_WARNING
"Unsafe core_pattern used with fs.suid_dumpable=2.\n"
"Pipe handler or fully qualified core dump path required.\n"
"Set kernel.core_pattern before fs.suid_dumpable.\n"
		);
	}
#endif
}

static int proc_dointvec_minmax_coredump(struct ctl_table *table, int write,
		void __user *buffer, size_t *lenp, loff_t *ppos)
{
	int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	if (!error)
		validate_coredump_safety();
	return error;
}

#ifdef CONFIG_COREDUMP
static int proc_dostring_coredump(struct ctl_table *table, int write,
		void __user *buffer, size_t *lenp, loff_t *ppos)
{
	int error = proc_dostring(table, write, buffer, lenp, ppos);
	if (!error)
		validate_coredump_safety();
	return error;
}
#endif

static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int write,
				       void __user *buffer,
				       size_t *lenp, loff_t *ppos,
				       unsigned long convmul,
				       unsigned long convdiv)
{
	unsigned long *i, *min, *max;
	int vleft, first = 1, err = 0;
	size_t left;
	char *kbuf = NULL, *p;

	if (!data || !table->maxlen || !*lenp || (*ppos && !write)) {
		*lenp = 0;
		return 0;
	}

	i = (unsigned long *) data;
	min = (unsigned long *) table->extra1;
	max = (unsigned long *) table->extra2;
	vleft = table->maxlen / sizeof(unsigned long);
	left = *lenp;

	if (write) {
		if (proc_first_pos_non_zero_ignore(ppos, table))
			goto out;
		if (left > PAGE_SIZE - 1)
			left = PAGE_SIZE - 1;
		p = kbuf = memdup_user_nul(buffer, left);
		if (IS_ERR(kbuf))
			return PTR_ERR(kbuf);
	}

	for (; left && vleft--; i++, first = 0) {
		unsigned long val;

		if (write) {
			bool neg;

			left -= proc_skip_spaces(&p);
			if (!left)
				break;

			err = proc_get_long(&p, &left, &val, &neg,
					    proc_wspace_sep,
					    sizeof(proc_wspace_sep), NULL);
			if (err)
				break;
			if (neg)
				continue;
			val = convmul * val / convdiv;
			if ((min && val < *min) || (max && val > *max)) {
				err = -EINVAL;
				break;
			}
			*i = val;
		} else {
			val = convdiv * (*i) / convmul;
			if (!first) {
				err = proc_put_char(&buffer, &left, '\t');
				if (err)
					break;
			}
			err = proc_put_long(&buffer, &left, val, false);
			if (err)
				break;
		}
	}

	if (!write && !first && left && !err)
		err = proc_put_char(&buffer, &left, '\n');
	if (write && !err)
		left -= proc_skip_spaces(&p);
	if (write) {
		kfree(kbuf);
		if (first)
			return err ? : -EINVAL;
	}
	*lenp -= left;
out:
|
2005-04-17 05:20:36 +07:00
|
|
|
*ppos += *lenp;
|
2010-05-05 07:26:45 +07:00
|
|
|
return err;
|
2005-04-17 05:20:36 +07:00
|
|
|
}
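To make the write-position semantics described in the commit message above concrete, here is a minimal userspace sketch (hypothetical and for illustration only; the sysctl path is just the one used in the commit message, and the program must run as root):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char big[4096];
	int fd = open("/proc/sys/kernel/modprobe", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(big, 'A', sizeof(big));
	/*
	 * Two writes on the same open fd. With sysctl_writes_strict == 0
	 * each write restarts at offset 0, so the second write wins; with
	 * sysctl_writes_strict == 1 the file position is honoured, so the
	 * first write's content (capped at the sysctl's maxlen) is kept.
	 */
	write(fd, big, sizeof(big));
	write(fd, "/bin/true", 9);
	close(fd);
	return 0;
}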
|
|
|
|
|
2007-10-18 17:05:22 +07:00
|
|
|
static int do_proc_doulongvec_minmax(struct ctl_table *table, int write,
|
2006-10-02 16:18:23 +07:00
|
|
|
void __user *buffer,
|
|
|
|
size_t *lenp, loff_t *ppos,
|
|
|
|
unsigned long convmul,
|
|
|
|
unsigned long convdiv)
|
|
|
|
{
|
|
|
|
return __do_proc_doulongvec_minmax(table->data, table, write,
|
2009-09-24 05:57:19 +07:00
|
|
|
buffer, lenp, ppos, convmul, convdiv);
|
2006-10-02 16:18:23 +07:00
|
|
|
}
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
/**
|
|
|
|
* proc_doulongvec_minmax - read a vector of long integers with min/max values
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
|
|
|
* @ppos: file position
|
|
|
|
*
|
|
|
|
* Reads/writes up to table->maxlen/sizeof(unsigned long) unsigned long
|
|
|
|
* values from/to the user buffer, treated as an ASCII string.
|
|
|
|
*
|
|
|
|
* This routine will ensure the values are within the range specified by
|
|
|
|
* table->extra1 (min) and table->extra2 (max).
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_doulongvec_minmax(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
2009-09-24 05:57:19 +07:00
|
|
|
return do_proc_doulongvec_minmax(table, write, buffer, lenp, ppos, 1L, 1L);
|
2005-04-17 05:20:36 +07:00
|
|
|
}
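A minimal sketch of wiring this handler into a table (all names below are hypothetical): .data points at an array of unsigned longs, .maxlen gives its size in bytes, and the single min/max pair in .extra1/.extra2 bounds every element. Such a table would then be registered with register_sysctl().

static unsigned long example_vals[2] = { 10, 20 };
static unsigned long example_min = 1;
static unsigned long example_max = 100;

static struct ctl_table example_table[] = {
	{
		.procname	= "example_ulongs",
		.data		= example_vals,
		.maxlen		= sizeof(example_vals),	/* two elements */
		.mode		= 0644,
		.proc_handler	= proc_doulongvec_minmax,
		.extra1		= &example_min,		/* applies to each element */
		.extra2		= &example_max,
	},
	{ }
};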
|
|
|
|
|
|
|
|
/**
|
|
|
|
* proc_doulongvec_ms_jiffies_minmax - read a vector of millisecond values with min/max values
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
|
|
|
* @ppos: file position
|
|
|
|
*
|
|
|
|
* Reads/writes up to table->maxlen/sizeof(unsigned long) unsigned long
|
|
|
|
* values from/to the user buffer, treated as an ASCII string. The values
|
|
|
|
* are treated as milliseconds, and converted to jiffies when they are stored.
|
|
|
|
*
|
|
|
|
* This routine will ensure the values are within the range specified by
|
|
|
|
* table->extra1 (min) and table->extra2 (max).
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
2007-10-18 17:05:22 +07:00
|
|
|
int proc_doulongvec_ms_jiffies_minmax(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer,
|
|
|
|
size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
2009-09-24 05:57:19 +07:00
|
|
|
return do_proc_doulongvec_minmax(table, write, buffer,
|
2005-04-17 05:20:36 +07:00
|
|
|
lenp, ppos, HZ, 1000L);
|
|
|
|
}
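As a worked example of the conversion factors passed here: on write the common helper computes val = convmul * val / convdiv, so with HZ == 250 an input of 500 (milliseconds) is stored as 250 * 500 / 1000 = 125 jiffies; on read it computes convdiv * (*i) / convmul, turning those 125 jiffies back into 500 ms. Both directions use truncating integer division, so sub-jiffy precision is lost.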
|
|
|
|
|
|
|
|
|
2010-05-05 07:26:45 +07:00
|
|
|
static int do_proc_dointvec_jiffies_conv(bool *negp, unsigned long *lvalp,
|
2005-04-17 05:20:36 +07:00
|
|
|
int *valp,
|
|
|
|
int write, void *data)
|
|
|
|
{
|
|
|
|
if (write) {
|
2017-05-09 05:54:58 +07:00
|
|
|
if (*lvalp > INT_MAX / HZ)
|
2006-03-24 18:15:50 +07:00
|
|
|
return 1;
|
2005-04-17 05:20:36 +07:00
|
|
|
*valp = *negp ? -(*lvalp*HZ) : (*lvalp*HZ);
|
|
|
|
} else {
|
|
|
|
int val = *valp;
|
|
|
|
unsigned long lval;
|
|
|
|
if (val < 0) {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = true;
|
2015-09-10 05:39:06 +07:00
|
|
|
lval = -(unsigned long)val;
|
2005-04-17 05:20:36 +07:00
|
|
|
} else {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = false;
|
2005-04-17 05:20:36 +07:00
|
|
|
lval = (unsigned long)val;
|
|
|
|
}
|
|
|
|
*lvalp = lval / HZ;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
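Concretely: with HZ == 1000, writing 5 stores *valp = 5 * 1000 = 5000 jiffies, and the INT_MAX / HZ guard rejects any input above 2147483, since multiplying anything larger by HZ would overflow the int result; on read, a stored 5000 jiffies is reported back as 5000 / 1000 = 5 seconds.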
|
|
|
|
|
2010-05-05 07:26:45 +07:00
|
|
|
static int do_proc_dointvec_userhz_jiffies_conv(bool *negp, unsigned long *lvalp,
|
2005-04-17 05:20:36 +07:00
|
|
|
int *valp,
|
|
|
|
int write, void *data)
|
|
|
|
{
|
|
|
|
if (write) {
|
2006-03-24 18:15:50 +07:00
|
|
|
if (USER_HZ < HZ && *lvalp > (LONG_MAX / HZ) * USER_HZ)
|
|
|
|
return 1;
|
2005-04-17 05:20:36 +07:00
|
|
|
*valp = clock_t_to_jiffies(*negp ? -*lvalp : *lvalp);
|
|
|
|
} else {
|
|
|
|
int val = *valp;
|
|
|
|
unsigned long lval;
|
|
|
|
if (val < 0) {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = true;
|
2015-09-10 05:39:06 +07:00
|
|
|
lval = -(unsigned long)val;
|
2005-04-17 05:20:36 +07:00
|
|
|
} else {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = false;
|
2005-04-17 05:20:36 +07:00
|
|
|
lval = (unsigned long)val;
|
|
|
|
}
|
|
|
|
*lvalp = jiffies_to_clock_t(lval);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
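Note that the overflow check here only matters when USER_HZ < HZ, because that is the case in which clock_t_to_jiffies() scales the input up by HZ / USER_HZ; (LONG_MAX / HZ) * USER_HZ is, to within rounding, the largest input that can be scaled without overflowing.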
|
|
|
|
|
2010-05-05 07:26:45 +07:00
|
|
|
static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp,
|
2005-04-17 05:20:36 +07:00
|
|
|
int *valp,
|
|
|
|
int write, void *data)
|
|
|
|
{
|
|
|
|
if (write) {
|
2013-07-24 15:39:07 +07:00
|
|
|
unsigned long jif = msecs_to_jiffies(*negp ? -*lvalp : *lvalp);
|
|
|
|
|
|
|
|
if (jif > INT_MAX)
|
|
|
|
return 1;
|
|
|
|
*valp = (int)jif;
|
2005-04-17 05:20:36 +07:00
|
|
|
} else {
|
|
|
|
int val = *valp;
|
|
|
|
unsigned long lval;
|
|
|
|
if (val < 0) {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = true;
|
2015-09-10 05:39:06 +07:00
|
|
|
lval = -(unsigned long)val;
|
2005-04-17 05:20:36 +07:00
|
|
|
} else {
|
2010-05-05 07:26:45 +07:00
|
|
|
*negp = false;
|
2005-04-17 05:20:36 +07:00
|
|
|
lval = (unsigned long)val;
|
|
|
|
}
|
|
|
|
*lvalp = jiffies_to_msecs(lval);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* proc_dointvec_jiffies - read a vector of integers as seconds
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
|
|
|
* @ppos: file position
|
|
|
|
*
|
|
|
|
* Reads/writes up to table->maxlen/sizeof(unsigned int) integer
|
|
|
|
* values from/to the user buffer, treated as an ASCII string.
|
|
|
|
* The values read are assumed to be in seconds, and are converted into
|
|
|
|
* jiffies.
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
2009-09-24 05:57:19 +07:00
|
|
|
return do_proc_dointvec(table, write, buffer, lenp, ppos,
|
2005-04-17 05:20:36 +07:00
|
|
|
do_proc_dointvec_jiffies_conv, NULL);
|
|
|
|
}
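A hypothetical table entry using this handler; the kernel-side int holds jiffies, while userspace reads and writes whole seconds through /proc/sys:

static int example_timeout = 30 * HZ;	/* jiffies inside the kernel */

static struct ctl_table example_jiffies_table[] = {
	{
		.procname	= "example_timeout_secs",
		.data		= &example_timeout,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{ }
};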
|
|
|
|
|
|
|
|
/**
|
|
|
|
* proc_dointvec_userhz_jiffies - read a vector of integers as 1/USER_HZ seconds
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
2005-11-07 16:01:06 +07:00
|
|
|
* @ppos: pointer to the file position
|
2005-04-17 05:20:36 +07:00
|
|
|
*
|
|
|
|
* Reads/writes up to table->maxlen/sizeof(unsigned int) integer
|
|
|
|
* values from/to the user buffer, treated as an ASCII string.
|
|
|
|
* The values read are assumed to be in 1/USER_HZ seconds, and
|
|
|
|
* are converted into jiffies.
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
2009-09-24 05:57:19 +07:00
|
|
|
return do_proc_dointvec(table, write, buffer, lenp, ppos,
|
2005-04-17 05:20:36 +07:00
|
|
|
do_proc_dointvec_userhz_jiffies_conv, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* proc_dointvec_ms_jiffies - read a vector of integers as milliseconds
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
2005-05-01 22:59:26 +07:00
|
|
|
* @ppos: file position
|
|
|
|
|
2005-04-17 05:20:36 +07:00
|
|
|
*
|
|
|
|
* Reads/writes up to table->maxlen/sizeof(unsigned int) integer
|
|
|
|
* values from/to the user buffer, treated as an ASCII string.
|
|
|
|
* The values read are assumed to be in 1/1000 seconds, and
|
|
|
|
* are converted into jiffies.
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_ms_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
2009-09-24 05:57:19 +07:00
|
|
|
return do_proc_dointvec(table, write, buffer, lenp, ppos,
|
2005-04-17 05:20:36 +07:00
|
|
|
do_proc_dointvec_ms_jiffies_conv, NULL);
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
static int proc_do_cad_pid(struct ctl_table *table, int write,
|
2006-10-02 16:19:00 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
struct pid *new_pid;
|
|
|
|
pid_t tmp;
|
|
|
|
int r;
|
|
|
|
|
2008-02-08 19:19:20 +07:00
|
|
|
tmp = pid_vnr(cad_pid);
|
2006-10-02 16:19:00 +07:00
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
r = __do_proc_dointvec(&tmp, table, write, buffer,
|
2006-10-02 16:19:00 +07:00
|
|
|
lenp, ppos, NULL, NULL);
|
|
|
|
if (r || !write)
|
|
|
|
return r;
|
|
|
|
|
|
|
|
new_pid = find_get_pid(tmp);
|
|
|
|
if (!new_pid)
|
|
|
|
return -ESRCH;
|
|
|
|
|
|
|
|
put_pid(xchg(&cad_pid, new_pid));
|
|
|
|
return 0;
|
|
|
|
}
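Note the pattern here: the handler parses into a local pid_t via __do_proc_dointvec(), resolves it with find_get_pid() (which takes a reference), and publishes it with xchg(), so that exactly one reference to the old cad_pid is dropped even if several writers race.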
|
|
|
|
|
2010-05-05 07:26:55 +07:00
|
|
|
/**
|
|
|
|
* proc_do_large_bitmap - read/write from/to a large bitmap
|
|
|
|
* @table: the sysctl table
|
|
|
|
* @write: %TRUE if this is a write to the sysctl file
|
|
|
|
* @buffer: the user buffer
|
|
|
|
* @lenp: the size of the user buffer
|
|
|
|
* @ppos: file position
|
|
|
|
*
|
|
|
|
* The bitmap is stored at table->data and the bitmap length (in bits)
|
|
|
|
* in table->maxlen.
|
|
|
|
*
|
|
|
|
* We use a comma-separated range format (e.g. 1,3-4,10-10) so that
|
|
|
|
* large bitmaps may be represented in a compact manner. A write at
|
|
|
|
* file position 0 replaces the whole bitmap with the given input,
* while a continuation write at a non-zero position ORs additional
* ranges into it.
|
|
|
|
*
|
|
|
|
* Returns 0 on success.
|
|
|
|
*/
|
|
|
|
int proc_do_large_bitmap(struct ctl_table *table, int write,
|
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
int err = 0;
|
|
|
|
bool first = true;
|
|
|
|
size_t left = *lenp;
|
|
|
|
unsigned long bitmap_len = table->maxlen;
|
2014-05-13 06:04:53 +07:00
|
|
|
unsigned long *bitmap = *(unsigned long **) table->data;
|
2010-05-05 07:26:55 +07:00
|
|
|
unsigned long *tmp_bitmap = NULL;
|
|
|
|
char tr_a[] = { '-', ',', '\n' }, tr_b[] = { ',', '\n', 0 }, c;
|
|
|
|
|
2014-05-13 06:04:53 +07:00
|
|
|
if (!bitmap || !bitmap_len || !left || (*ppos && !write)) {
|
2010-05-05 07:26:55 +07:00
|
|
|
*lenp = 0;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (write) {
|
2015-12-24 12:13:10 +07:00
|
|
|
char *kbuf, *p;
|
2019-05-15 05:45:13 +07:00
|
|
|
size_t skipped = 0;
|
2010-05-05 07:26:55 +07:00
|
|
|
|
2019-05-15 05:45:13 +07:00
|
|
|
if (left > PAGE_SIZE - 1) {
|
2010-05-05 07:26:55 +07:00
|
|
|
left = PAGE_SIZE - 1;
|
2019-05-15 05:45:13 +07:00
|
|
|
/* How much of the buffer we'll skip this pass */
|
|
|
|
skipped = *lenp - left;
|
|
|
|
}
|
2010-05-05 07:26:55 +07:00
|
|
|
|
2015-12-24 12:13:10 +07:00
|
|
|
p = kbuf = memdup_user_nul(buffer, left);
|
|
|
|
if (IS_ERR(kbuf))
|
|
|
|
return PTR_ERR(kbuf);
|
2010-05-05 07:26:55 +07:00
|
|
|
|
2019-05-15 05:44:52 +07:00
|
|
|
tmp_bitmap = bitmap_zalloc(bitmap_len, GFP_KERNEL);
|
2010-05-05 07:26:55 +07:00
|
|
|
if (!tmp_bitmap) {
|
2015-12-24 12:13:10 +07:00
|
|
|
kfree(kbuf);
|
2010-05-05 07:26:55 +07:00
|
|
|
return -ENOMEM;
|
|
|
|
}
|
2015-12-24 12:13:10 +07:00
|
|
|
proc_skip_char(&p, &left, '\n');
|
2010-05-05 07:26:55 +07:00
|
|
|
while (!err && left) {
|
|
|
|
unsigned long val_a, val_b;
|
|
|
|
bool neg;
|
2019-05-15 05:45:13 +07:00
|
|
|
size_t saved_left;
|
2010-05-05 07:26:55 +07:00
|
|
|
|
2019-05-15 05:45:13 +07:00
|
|
|
/* In case we stop parsing mid-number, we can reset */
|
|
|
|
saved_left = left;
|
2015-12-24 12:13:10 +07:00
|
|
|
err = proc_get_long(&p, &left, &val_a, &neg, tr_a,
|
2010-05-05 07:26:55 +07:00
|
|
|
sizeof(tr_a), &c);
|
2019-05-15 05:45:13 +07:00
|
|
|
/*
|
|
|
|
* If we consumed the entirety of a truncated buffer or
|
|
|
|
* only one char is left (may be a "-"), then stop here,
|
|
|
|
* reset, & come back for more.
|
|
|
|
*/
|
|
|
|
if ((left <= 1) && skipped) {
|
|
|
|
left = saved_left;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2010-05-05 07:26:55 +07:00
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
if (val_a >= bitmap_len || neg) {
|
|
|
|
err = -EINVAL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
val_b = val_a;
|
|
|
|
if (left) {
|
2015-12-24 12:13:10 +07:00
|
|
|
p++;
|
2010-05-05 07:26:55 +07:00
|
|
|
left--;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (c == '-') {
|
2015-12-24 12:13:10 +07:00
|
|
|
err = proc_get_long(&p, &left, &val_b,
|
2010-05-05 07:26:55 +07:00
|
|
|
&neg, tr_b, sizeof(tr_b),
|
|
|
|
&c);
|
2019-05-15 05:45:13 +07:00
|
|
|
/*
|
|
|
|
* If we consumed all of a truncated buffer,
|
|
|
|
* then stop here, reset, & come back for more.
|
|
|
|
*/
|
|
|
|
if (!left && skipped) {
|
|
|
|
left = saved_left;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2010-05-05 07:26:55 +07:00
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
if (val_b >= bitmap_len || neg ||
|
|
|
|
val_a > val_b) {
|
|
|
|
err = -EINVAL;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
if (left) {
|
2015-12-24 12:13:10 +07:00
|
|
|
p++;
|
2010-05-05 07:26:55 +07:00
|
|
|
left--;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-03-29 04:42:50 +07:00
|
|
|
bitmap_set(tmp_bitmap, val_a, val_b - val_a + 1);
|
2010-05-05 07:26:55 +07:00
|
|
|
first = false;
|
2015-12-24 12:13:10 +07:00
|
|
|
proc_skip_char(&p, &left, '\n');
|
2010-05-05 07:26:55 +07:00
|
|
|
}
|
2015-12-24 12:13:10 +07:00
|
|
|
kfree(kbuf);
|
2019-05-15 05:45:13 +07:00
|
|
|
left += skipped;
|
2010-05-05 07:26:55 +07:00
|
|
|
} else {
|
|
|
|
unsigned long bit_a, bit_b = 0;
|
|
|
|
|
|
|
|
while (left) {
|
|
|
|
bit_a = find_next_bit(bitmap, bitmap_len, bit_b);
|
|
|
|
if (bit_a >= bitmap_len)
|
|
|
|
break;
|
|
|
|
bit_b = find_next_zero_bit(bitmap, bitmap_len,
|
|
|
|
bit_a + 1) - 1;
|
|
|
|
|
|
|
|
if (!first) {
|
|
|
|
err = proc_put_char(&buffer, &left, ',');
|
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
err = proc_put_long(&buffer, &left, bit_a, false);
|
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
if (bit_a != bit_b) {
|
|
|
|
err = proc_put_char(&buffer, &left, '-');
|
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
err = proc_put_long(&buffer, &left, bit_b, false);
|
|
|
|
if (err)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
first = false;
bit_b++;
|
|
|
|
}
|
|
|
|
if (!err)
|
|
|
|
err = proc_put_char(&buffer, &left, '\n');
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!err) {
|
|
|
|
if (write) {
|
|
|
|
if (*ppos)
|
|
|
|
bitmap_or(bitmap, bitmap, tmp_bitmap, bitmap_len);
|
|
|
|
else
|
2012-03-29 04:42:50 +07:00
|
|
|
bitmap_copy(bitmap, tmp_bitmap, bitmap_len);
|
2010-05-05 07:26:55 +07:00
|
|
|
}
|
|
|
|
*lenp -= left;
|
|
|
|
*ppos += *lenp;
|
|
|
|
}
|
2017-11-18 06:30:26 +07:00
|
|
|
|
2019-05-15 05:44:52 +07:00
|
|
|
bitmap_free(tmp_bitmap);
|
2017-11-18 06:30:26 +07:00
|
|
|
return err;
|
2010-05-05 07:26:55 +07:00
|
|
|
}
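A hypothetical hook-up for this handler. Note the extra indirection: .data must hold the address of the bitmap pointer (the code above dereferences *(unsigned long **) table->data), and .maxlen is the length in bits, not bytes.

static DECLARE_BITMAP(example_bits, 1024);
static unsigned long *example_bits_ptr = example_bits;

static struct ctl_table example_bitmap_table[] = {
	{
		.procname	= "example_bitmap",
		.data		= &example_bits_ptr,	/* pointer to the bitmap pointer */
		.maxlen		= 1024,			/* bits */
		.mode		= 0644,
		.proc_handler	= proc_do_large_bitmap,
	},
	{ }
};

Writing "0-3,8" through the resulting /proc file sets bits 0 through 3 and bit 8; reading it back prints the same compact range form.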
|
|
|
|
|
2011-01-13 08:00:45 +07:00
|
|
|
#else /* CONFIG_PROC_SYSCTL */
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dostring(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2016-08-26 05:16:51 +07:00
|
|
|
int proc_douintvec(struct ctl_table *table, int write,
|
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_minmax(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2017-07-13 04:33:40 +07:00
|
|
|
int proc_douintvec_minmax(struct ctl_table *table, int write,
|
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_dointvec_ms_jiffies(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2009-09-24 05:57:19 +07:00
|
|
|
int proc_doulongvec_minmax(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2007-10-18 17:05:22 +07:00
|
|
|
int proc_doulongvec_ms_jiffies_minmax(struct ctl_table *table, int write,
|
2005-04-17 05:20:36 +07:00
|
|
|
void __user *buffer,
|
|
|
|
size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
|
|
|
|
2019-04-18 03:35:49 +07:00
|
|
|
int proc_do_large_bitmap(struct ctl_table *table, int write,
|
|
|
|
void __user *buffer, size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
return -ENOSYS;
|
|
|
|
}
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2011-01-13 08:00:45 +07:00
|
|
|
#endif /* CONFIG_PROC_SYSCTL */
|
2005-04-17 05:20:36 +07:00
|
|
|
|
2019-03-05 03:34:12 +07:00
|
|
|
#if defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_SYSCTL)
|
2019-02-26 05:28:39 +07:00
|
|
|
static int proc_dointvec_minmax_bpf_stats(struct ctl_table *table, int write,
|
|
|
|
void __user *buffer, size_t *lenp,
|
|
|
|
loff_t *ppos)
|
|
|
|
{
|
|
|
|
int ret, bpf_stats = *(int *)table->data;
|
|
|
|
struct ctl_table tmp = *table;
|
|
|
|
|
|
|
|
if (write && !capable(CAP_SYS_ADMIN))
|
|
|
|
return -EPERM;
|
|
|
|
|
|
|
|
tmp.data = &bpf_stats;
|
|
|
|
ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
|
|
|
|
if (write && !ret) {
|
|
|
|
*(int *)table->data = bpf_stats;
|
|
|
|
if (bpf_stats)
|
|
|
|
static_branch_enable(&bpf_stats_enabled_key);
|
|
|
|
else
|
|
|
|
static_branch_disable(&bpf_stats_enabled_key);
|
|
|
|
}
|
|
|
|
return ret;
|
|
|
|
}
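The shadow-table pattern above (copying *table, pointing tmp.data at an on-stack copy, and writing back to table->data only after proc_dointvec_minmax() succeeds) means concurrent readers never observe a half-validated value, and it gives the handler a single commit point at which to flip the static branch.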
|
2019-02-28 09:30:44 +07:00
|
|
|
#endif
|
2005-04-17 05:20:36 +07:00
|
|
|
/*
|
|
|
|
* No sense putting this after each symbol definition, twice,
|
|
|
|
* exception granted :-)
|
|
|
|
*/
|
|
|
|
EXPORT_SYMBOL(proc_dointvec);
|
2016-08-26 05:16:51 +07:00
|
|
|
EXPORT_SYMBOL(proc_douintvec);
|
2005-04-17 05:20:36 +07:00
|
|
|
EXPORT_SYMBOL(proc_dointvec_jiffies);
|
|
|
|
EXPORT_SYMBOL(proc_dointvec_minmax);
|
2017-07-13 04:33:40 +07:00
|
|
|
EXPORT_SYMBOL_GPL(proc_douintvec_minmax);
|
2005-04-17 05:20:36 +07:00
|
|
|
EXPORT_SYMBOL(proc_dointvec_userhz_jiffies);
|
|
|
|
EXPORT_SYMBOL(proc_dointvec_ms_jiffies);
|
|
|
|
EXPORT_SYMBOL(proc_dostring);
|
|
|
|
EXPORT_SYMBOL(proc_doulongvec_minmax);
|
|
|
|
EXPORT_SYMBOL(proc_doulongvec_ms_jiffies_minmax);
|
2019-04-18 03:35:49 +07:00
|
|
|
EXPORT_SYMBOL(proc_do_large_bitmap);
|