#ifndef _LINUX_SHM_H_
#define _LINUX_SHM_H_

shm: make exit_shm work proportional to task activity

This is a small set of patches our team has had kicking around for a few
versions internally that fixes tasks getting hung in exit_shm when there
are many threads hammering it at once.

Anton wrote a simple test to cause the issue:
http://ozlabs.org/~anton/junkcode/bust_shm_exit.c

Before applying this patchset, this test code will cause either hanging
tracebacks or pthread out-of-memory errors.

After this patchset, it will still produce output like:

root@somehost:~# ./bust_shm_exit 1024 160
...
INFO: rcu_sched detected stalls on CPUs/tasks: {} (detected by 116, t=2111 jiffies, g=241, c=240, q=7113)
INFO: Stall ended before state dump start
...

But the task will continue to run along happily, so we consider this an
improvement over hanging, even if it's a bit noisy.

This patch (of 3):

exit_shm obtains the ipc_ns shm rwsem for write and holds it while it
walks every shared memory segment in the namespace. Thus the amount of
work is related to the number of shm segments in the namespace, not the
number of segments that might need to be cleaned.

In addition, this occurs after the task has been notified the thread has
exited, so the number of tasks waiting for the ns shm rwsem can grow
without bound until memory is exhausted.

Add a list to the task struct of all shmids allocated by this task. Init
the list head in copy_process. Use the ns->rwsem for locking. Add
segments to the list after the id is added, and remove them before the id
is removed.

On unshare of NEW_IPCNS, orphan any ids as if the task had exited, similar
to the handling of semaphore undo.

I chose a define for the init sequence since it's a simple list init;
otherwise it would require a function call to avoid include loops between
the semaphore code and the task struct. Converting the list_del to
list_del_init for the unshare cases would remove the exit followed by
init, but I left it to blow up if not inited.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Jack Miller <millerjo@us.ibm.com>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
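
The core idea is easier to see in isolation: keep a per-creator list so that
exit-time cleanup touches only the segments a task actually created. Below is
a small, self-contained user-space sketch of that data-structure change; the
names (struct seg, struct task, ns_segs, exit_walk_*) are invented for
illustration and this is not the kernel implementation.

/*
 * Illustrative user-space sketch, not kernel code: with a per-creator
 * list, exit-time cleanup is proportional to the number of segments a
 * task created, not to the number of segments in the whole namespace.
 */
#include <stdio.h>
#include <stdlib.h>

struct seg {
	int id;
	struct seg *ns_next;	/* namespace-wide list (old exit walk) */
	struct seg *clist_next;	/* per-creator list (new exit walk)    */
};

struct task {
	struct seg *clist;	/* segments this task created */
};

static struct seg *ns_segs;	/* every segment in the namespace */

static void create_seg(struct task *t, int id)
{
	struct seg *s = malloc(sizeof(*s));

	if (!s)
		exit(1);
	s->id = id;
	s->ns_next = ns_segs;		/* visible namespace-wide */
	ns_segs = s;
	s->clist_next = t->clist;	/* and remembered by its creator */
	t->clist = s;
}

/* Old behaviour: scan every segment in the namespace on task exit. */
static int exit_walk_all(const struct task *t)
{
	int visited = 0;

	(void)t;
	for (struct seg *s = ns_segs; s; s = s->ns_next)
		visited++;
	return visited;
}

/* New behaviour: walk only the segments this task created. */
static int exit_walk_clist(const struct task *t)
{
	int visited = 0;

	for (struct seg *s = t->clist; s; s = s->clist_next)
		visited++;
	return visited;
}

int main(void)
{
	struct task busy = { 0 }, quiet = { 0 };

	for (int i = 0; i < 100000; i++)
		create_seg(&busy, i);
	create_seg(&quiet, 100000);

	printf("exiting quiet task, old walk: %d segments\n", exit_walk_all(&quiet));
	printf("exiting quiet task, new walk: %d segments\n", exit_walk_clist(&quiet));
	return 0;
}
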
#include <linux/list.h>
#include <asm/page.h>
#include <uapi/linux/shm.h>
#include <asm/shmparam.h>

ipc/shm.c: increase the defaults for SHMALL, SHMMAX

System V shared memory

a) can be abused to trigger out-of-memory conditions, and the standard
   measures against out-of-memory do not work:
   - it is not possible to use setrlimit to limit the size of shm segments.
   - segments can exist without association with any process, thus
     the oom-killer is unable to free that memory.

b) is typically used for shared information - today often multiple GB
   (e.g. database shared buffers).

The current default is a maximum segment size of 32 MB and a maximum
total size of 8 GB. This is often too much for a) and not enough for
b), which means that lots of users must change the defaults.

This patch increases the default limits (nearly) to the maximum, which
is perfect for case b). The defaults are used after boot and as the
initial value for each new namespace.

Admins/distros that need protection against a) should reduce the
limits and/or enable shm_rmid_forced.

Unix has historically required setting these limits for shared memory,
and Linux inherited that behavior. The consequence is added complexity
for users and administrators. One very common example is database
setup/installation documents and scripts, where users must manually
calculate the values for these limits. This also requires (some)
knowledge of how the underlying memory management works, thus causing,
on many occasions, the limits to be just flat out wrong. Disabling
these limits sooner could have saved companies a lot of time, headaches
and money for support. But it's never too late, so simplify users'
lives now.

Further notes:
- The patch only changes the defaults; overrides behave as before:
    # sysctl kernel.shmmax=33554432
  would recreate the previous limit for SHMMAX (for the current namespace).
- Disabling sysv shm allocation is possible with:
    # sysctl kernel.shmall=0
  (not a new feature, also per-namespace)
- The limits are intentionally set to a value slightly less than ULONG_MAX,
  to avoid triggering overflows in user space apps.
  [not unreasonable, see http://marc.info/?l=linux-mm&m=139638334330127]

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Reported-by: Davidlohr Bueso <davidlohr@hp.com>
Acked-by: Michael Kerrisk <mtk.manpages@gmail.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
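
For reference, the limits these defaults feed can be read back from user
space via the documented shmctl(2) IPC_INFO operation. A minimal sketch
(error handling kept short; _GNU_SOURCE is needed for struct shminfo on
glibc):

/* Print the current SysV shm limits for the calling IPC namespace. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	struct shminfo info;

	if (shmctl(0, IPC_INFO, (struct shmid_ds *) &info) == -1) {
		perror("shmctl(IPC_INFO)");
		return 1;
	}

	printf("shmmax: %lu bytes\n", (unsigned long) info.shmmax);
	printf("shmall: %lu pages\n", (unsigned long) info.shmall);
	printf("shmmni: %lu ids\n",   (unsigned long) info.shmmni);
	return 0;
}
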
struct shmid_kernel /* private to the kernel */
{
	struct kern_ipc_perm	shm_perm;
	struct file		*shm_file;
	unsigned long		shm_nattch;
	unsigned long		shm_segsz;
	time_t			shm_atim;
	time_t			shm_dtim;
	time_t			shm_ctim;
	pid_t			shm_cprid;
	pid_t			shm_lprid;
	struct user_struct	*mlock_user;

	/* The task that created the shm object; NULL if the task is dead. */
	struct task_struct	*shm_creator;
	struct list_head	shm_clist;	/* list by creator */
};

/* shm_mode upper byte flags */
#define SHM_DEST	01000	/* segment will be destroyed on last detach */
#define SHM_LOCKED	02000	/* segment will not be swapped */
#define SHM_HUGETLB	04000	/* segment will use huge TLB pages */
#define SHM_NORESERVE	010000	/* don't check for reservations */

/* Bits [26:31] are reserved */

/*
 * When SHM_HUGETLB is set, bits [26:31] encode the log2 of the huge page size.
 * This gives us 6 bits, which is enough until someone invents 128-bit address
 * spaces.
 *
 * Assume these are all powers of two.
 * When 0, use the default page size.
 */
#define SHM_HUGE_SHIFT	26
#define SHM_HUGE_MASK	0x3f
#define SHM_HUGE_2MB	(21 << SHM_HUGE_SHIFT)
#define SHM_HUGE_1GB	(30 << SHM_HUGE_SHIFT)

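/*
 * Worked example (illustrative user-space sketch, not part of this header):
 * 2 MB = 2^21, so SHM_HUGE_2MB is 21 << SHM_HUGE_SHIFT.  The constants are
 * re-defined below in case the libc headers do not expose them; whether the
 * shmget() call succeeds depends on kernel config, privileges and the
 * reserved huge-page pool.
 */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB	04000			/* same value as above */
#endif
#ifndef SHM_HUGE_SHIFT
#define SHM_HUGE_SHIFT	26
#endif
#ifndef SHM_HUGE_2MB
#define SHM_HUGE_2MB	(21 << SHM_HUGE_SHIFT)	/* log2(2 MB) = 21 */
#endif

int main(void)
{
	int shmid = shmget(IPC_PRIVATE, 2 * 1024 * 1024,
			   IPC_CREAT | 0600 | SHM_HUGETLB | SHM_HUGE_2MB);

	if (shmid == -1) {
		perror("shmget(SHM_HUGETLB | SHM_HUGE_2MB)");
		return 1;
	}
	printf("got huge-page backed segment id %d\n", shmid);

	/* mark for destruction so the example does not leak a segment */
	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}
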
#ifdef CONFIG_SYSVIPC
struct sysv_shm {
	struct list_head shm_clist;
};

long do_shmat(int shmid, char __user *shmaddr, int shmflg, unsigned long *addr,
	      unsigned long shmlba);
bool is_file_shm_hugepages(struct file *file);
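
/*
 * do_shmat() is the kernel-side backend of the shmat(2) syscall (shmaddr, if
 * non-NULL, must be SHMLBA-aligned unless SHM_RND is used).  Its user-space
 * counterpart looks like the following illustrative sketch, which is not part
 * of this header:
 */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
	void *p;

	if (shmid == -1) {
		perror("shmget");
		return 1;
	}

	/* NULL address: the kernel picks a suitably aligned mapping */
	p = shmat(shmid, NULL, 0);
	if (p == (void *) -1) {
		perror("shmat");
		return 1;
	}

	strcpy(p, "hello");
	printf("%s\n", (char *) p);

	shmdt(p);
	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}
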
void exit_shm(struct task_struct *task);
#define shm_init_task(task) INIT_LIST_HEAD(&(task)->sysvshm.shm_clist)
#else
struct sysv_shm {
	/* empty */
};

static inline long do_shmat(int shmid, char __user *shmaddr,
			    int shmflg, unsigned long *addr,
			    unsigned long shmlba)
{
	return -ENOSYS;
}
static inline bool is_file_shm_hugepages(struct file *file)
{
	return false;
}
static inline void exit_shm(struct task_struct *task)
{
}
static inline void shm_init_task(struct task_struct *task)
{
}
#endif

#endif /* _LINUX_SHM_H_ */