License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier to apply to a
file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to apply to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, the Linux-syscall-note was added if any
GPL-family license was found in the file, or if it had no licensing in
it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review of the spreadsheet went
into determining the SPDX license identifiers to apply to the source
files, performed by Kate, Philippe, and Thomas, with confirmation in
some cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is partly based on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot
checks in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier that week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
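The tagging step described above can be sketched as a small helper. This is a hypothetical reconstruction, not the actual script Thomas and Greg used: the function names and the file-type dispatch are assumptions, while the comment styles (block comments for headers, `//` for .c files, `#` for Makefiles/Kconfig) follow the kernel's SPDX conventions.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of an SPDX tagging helper: given a file path and an
SPDX expression, build the identifier line in the comment style that file
type expects, and prepend it unless the file is already tagged."""

def spdx_line(path: str, expression: str) -> str:
    # Headers use C block comments; .c files use C++-style comments;
    # Makefiles, Kconfig and scripts use '#' comments.
    if path.endswith(".h"):
        return f"/* SPDX-License-Identifier: {expression} */"
    if path.endswith(".c"):
        return f"// SPDX-License-Identifier: {expression}"
    return f"# SPDX-License-Identifier: {expression}"

def tag_source(text: str, path: str, expression: str) -> str:
    """Prepend the SPDX line unless the file already carries one."""
    if "SPDX-License-Identifier:" in text:
        return text
    return spdx_line(path, expression) + "\n" + text
```

A real version would also have to skip shebang lines and handle assembly and DTS comment styles, which this sketch ignores.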
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_SCHED_SIGNAL_H
#define _LINUX_SCHED_SIGNAL_H

#include <linux/rculist.h>
#include <linux/signal.h>
#include <linux/sched.h>
#include <linux/sched/jobctl.h>
#include <linux/sched/task.h>
#include <linux/cred.h>
#include <linux/refcount.h>

/*
 * Types defining task->signal and task->sighand and APIs using them:
 */

struct sighand_struct {
	refcount_t		count;
	struct k_sigaction	action[_NSIG];
	spinlock_t		siglock;
	wait_queue_head_t	signalfd_wqh;
};
/*
 * Per-process accounting stats:
 */
struct pacct_struct {
	int			ac_flag;
	long			ac_exitcode;
	unsigned long		ac_mem;
	u64			ac_utime, ac_stime;
	unsigned long		ac_minflt, ac_majflt;
};

struct cpu_itimer {
	u64 expires;
	u64 incr;
};
/*
 * This is the atomic variant of task_cputime, which can be used for
 * storing and updating task_cputime statistics without locking.
 */
struct task_cputime_atomic {
	atomic64_t	utime;
	atomic64_t	stime;
	atomic64_t	sum_exec_runtime;
};

#define INIT_CPUTIME_ATOMIC \
	(struct task_cputime_atomic) {				\
		.utime = ATOMIC64_INIT(0),			\
		.stime = ATOMIC64_INIT(0),			\
		.sum_exec_runtime = ATOMIC64_INIT(0),		\
	}

/**
 * struct thread_group_cputimer - thread group interval timer counts
 * @cputime_atomic:	atomic thread group interval timers.
 * @running:		true when there are timers running and
 *			@cputime_atomic receives updates.
 * @checking_timer:	true when a thread in the group is in the
 *			process of checking for thread group timers.
 *
 * This structure contains the version of task_cputime, above, that is
 * used for thread group CPU timer calculations.
 */
struct thread_group_cputimer {
	struct task_cputime_atomic cputime_atomic;
	bool			running;
	bool			checking_timer;
};

struct multiprocess_signals {
	sigset_t signal;
	struct hlist_node node;
};
/*
 * NOTE! "signal_struct" does not have its own
 * locking, because a shared signal_struct always
 * implies a shared sighand_struct, so locking
 * sighand_struct is always a proper superset of
 * the locking of signal_struct.
 */
struct signal_struct {
	refcount_t		sigcnt;
	atomic_t		live;
	int			nr_threads;
	struct list_head	thread_head;

	wait_queue_head_t	wait_chldexit;	/* for wait4() */

	/* current thread group signal load-balancing target: */
	struct task_struct	*curr_target;

	/* shared signal handling: */
	struct sigpending	shared_pending;

	/* For collecting multiprocess signals during fork */
	struct hlist_head	multiprocess;

	/* thread group exit support */
	int			group_exit_code;
	/* overloaded:
	 * - notify group_exit_task when ->count is equal to notify_count
	 * - everyone except group_exit_task is stopped during signal delivery
	 *   of fatal signals, group_exit_task processes the signal.
	 */
	int			notify_count;
	struct task_struct	*group_exit_task;

	/* thread group stop support, overloads group_exit_code too */
	int			group_stop_count;
	unsigned int		flags; /* see SIGNAL_* flags below */

	/*
	 * PR_SET_CHILD_SUBREAPER marks a process, like a service
	 * manager, to re-parent orphan (double-forking) child processes
	 * to this process instead of 'init'. The service manager is
	 * able to receive SIGCHLD signals and is able to investigate
	 * the process until it calls wait(). All children of this
	 * process will inherit a flag if they should look for a
	 * child_subreaper process at exit.
	 */
	unsigned int		is_child_subreaper:1;
	unsigned int		has_child_subreaper:1;

#ifdef CONFIG_POSIX_TIMERS

	/* POSIX.1b Interval Timers */
	int			posix_timer_id;
	struct list_head	posix_timers;

	/* ITIMER_REAL timer for the process */
	struct hrtimer		real_timer;
	ktime_t			it_real_incr;

	/*
	 * ITIMER_PROF and ITIMER_VIRTUAL timers for the process, we use
	 * CPUCLOCK_PROF and CPUCLOCK_VIRT for indexing array as these
	 * values are defined to 0 and 1 respectively
	 */
	struct cpu_itimer	it[2];

	/*
	 * Thread group totals for process CPU timers.
	 * See thread_group_cputimer(), et al, for details.
	 */
	struct thread_group_cputimer cputimer;

	/* Earliest-expiration cache. */
	struct task_cputime	cputime_expires;

	struct list_head	cpu_timers[3];

#endif

	/* PID/PID hash table linkage. */
	struct pid		*pids[PIDTYPE_MAX];

#ifdef CONFIG_NO_HZ_FULL
	atomic_t		tick_dep_mask;
#endif

	struct pid		*tty_old_pgrp;

	/* boolean value for session group leader */
	int			leader;

	struct tty_struct	*tty; /* NULL if no tty */

#ifdef CONFIG_SCHED_AUTOGROUP
	struct autogroup	*autogroup;
#endif
	/*
	 * Cumulative resource counters for dead threads in the group,
	 * and for reaped dead child processes forked by this group.
	 * Live threads maintain their own counters and add to these
	 * in __exit_signal, except for the group leader.
	 */
	seqlock_t		stats_lock;
	u64			utime, stime, cutime, cstime;
	u64			gtime;
	u64			cgtime;
	struct prev_cputime	prev_cputime;
	unsigned long		nvcsw, nivcsw, cnvcsw, cnivcsw;
	unsigned long		min_flt, maj_flt, cmin_flt, cmaj_flt;
	unsigned long		inblock, oublock, cinblock, coublock;
	unsigned long		maxrss, cmaxrss;
	struct task_io_accounting ioac;

	/*
	 * Cumulative ns of schedule CPU time of dead threads in the
	 * group, not including a zombie group leader. (This only differs
	 * from jiffies_to_ns(utime + stime) if sched_clock uses something
	 * other than jiffies.)
	 */
	unsigned long long	sum_sched_runtime;

	/*
	 * We don't bother to synchronize most readers of this at all,
	 * because there is no reader checking a limit that actually needs
	 * to get both rlim_cur and rlim_max atomically, and either one
	 * alone is a single word that can safely be read normally.
	 * getrlimit/setrlimit use task_lock(current->group_leader) to
	 * protect this instead of the siglock, because they really
	 * have no need to disable irqs.
	 */
	struct rlimit		rlim[RLIM_NLIMITS];

#ifdef CONFIG_BSD_PROCESS_ACCT
	struct pacct_struct	pacct;	/* per-process accounting information */
#endif
#ifdef CONFIG_TASKSTATS
	struct taskstats	*stats;
#endif
#ifdef CONFIG_AUDIT
	unsigned		audit_tty;
	struct tty_audit_buf	*tty_audit_buf;
#endif

	/*
	 * Thread is the potential origin of an oom condition; kill first on
	 * oom
	 */
	bool			oom_flag_origin;
	short			oom_score_adj;		/* OOM kill score adjustment */
	short			oom_score_adj_min;	/* OOM kill score adjustment min value.
							 * Only settable by CAP_SYS_RESOURCE. */
	struct mm_struct	*oom_mm;	/* recorded mm when the thread group got
						 * killed by the oom killer */

	struct mutex		cred_guard_mutex; /* guard against foreign influences on
						   * credential calculations
						   * (notably ptrace) */
} __randomize_layout;
/*
 * Bits in flags field of signal_struct.
 */
#define SIGNAL_STOP_STOPPED	0x00000001 /* job control stop in effect */
#define SIGNAL_STOP_CONTINUED	0x00000002 /* SIGCONT since WCONTINUED reap */
#define SIGNAL_GROUP_EXIT	0x00000004 /* group exit in progress */
#define SIGNAL_GROUP_COREDUMP	0x00000008 /* coredump in progress */
/*
 * Pending notifications to parent.
 */
#define SIGNAL_CLD_STOPPED	0x00000010
#define SIGNAL_CLD_CONTINUED	0x00000020
#define SIGNAL_CLD_MASK		(SIGNAL_CLD_STOPPED|SIGNAL_CLD_CONTINUED)

#define SIGNAL_UNKILLABLE	0x00000040 /* for init: ignore fatal signals */

#define SIGNAL_STOP_MASK (SIGNAL_CLD_MASK | SIGNAL_STOP_STOPPED | \
			  SIGNAL_STOP_CONTINUED)

static inline void signal_set_stop_flags(struct signal_struct *sig,
					 unsigned int flags)
{
	WARN_ON(sig->flags & (SIGNAL_GROUP_EXIT|SIGNAL_GROUP_COREDUMP));
	sig->flags = (sig->flags & ~SIGNAL_STOP_MASK) | flags;
}

/* If true, all threads except ->group_exit_task have pending SIGKILL */
static inline int signal_group_exit(const struct signal_struct *sig)
{
	return	(sig->flags & SIGNAL_GROUP_EXIT) ||
		(sig->group_exit_task != NULL);
}
extern void flush_signals(struct task_struct *);
extern void ignore_signals(struct task_struct *);
extern void flush_signal_handlers(struct task_struct *, int force_default);
extern int dequeue_signal(struct task_struct *task,
			  sigset_t *mask, kernel_siginfo_t *info);

static inline int kernel_dequeue_signal(void)
{
	struct task_struct *task = current;
	kernel_siginfo_t __info;
	int ret;

	spin_lock_irq(&task->sighand->siglock);
	ret = dequeue_signal(task, &task->blocked, &__info);
	spin_unlock_irq(&task->sighand->siglock);

	return ret;
}

static inline void kernel_signal_stop(void)
{
	spin_lock_irq(&current->sighand->siglock);
	if (current->jobctl & JOBCTL_STOP_DEQUEUED)
		set_special_state(TASK_STOPPED);
	spin_unlock_irq(&current->sighand->siglock);

	schedule();
}
#ifdef __ARCH_SI_TRAPNO
# define ___ARCH_SI_TRAPNO(_a1) , _a1
#else
# define ___ARCH_SI_TRAPNO(_a1)
#endif
#ifdef __ia64__
# define ___ARCH_SI_IA64(_a1, _a2, _a3) , _a1, _a2, _a3
#else
# define ___ARCH_SI_IA64(_a1, _a2, _a3)
#endif

int force_sig_fault(int sig, int code, void __user *addr
	___ARCH_SI_TRAPNO(int trapno)
	___ARCH_SI_IA64(int imm, unsigned int flags, unsigned long isr)
	, struct task_struct *t);
int send_sig_fault(int sig, int code, void __user *addr
	___ARCH_SI_TRAPNO(int trapno)
	___ARCH_SI_IA64(int imm, unsigned int flags, unsigned long isr)
	, struct task_struct *t);

int force_sig_mceerr(int code, void __user *, short, struct task_struct *);
int send_sig_mceerr(int code, void __user *, short, struct task_struct *);

int force_sig_bnderr(void __user *addr, void __user *lower, void __user *upper);
int force_sig_pkuerr(void __user *addr, u32 pkey);

int force_sig_ptrace_errno_trap(int errno, void __user *addr);
extern int send_sig_info(int, struct kernel_siginfo *, struct task_struct *);
extern void force_sigsegv(int sig, struct task_struct *p);
extern int force_sig_info(int, struct kernel_siginfo *, struct task_struct *);
extern int __kill_pgrp_info(int sig, struct kernel_siginfo *info, struct pid *pgrp);
extern int kill_pid_info(int sig, struct kernel_siginfo *info, struct pid *pid);
extern int kill_pid_info_as_cred(int, struct kernel_siginfo *, struct pid *,
const struct cred *);
extern int kill_pgrp(struct pid *pid, int sig, int priv);
extern int kill_pid(struct pid *pid, int sig, int priv);
extern __must_check bool do_notify_parent(struct task_struct *, int);
extern void __wake_up_parent(struct task_struct *p, struct task_struct *parent);
extern void force_sig(int, struct task_struct *);
extern int send_sig(int, struct task_struct *, int);
extern int zap_other_threads(struct task_struct *p);
extern struct sigqueue *sigqueue_alloc(void);
extern void sigqueue_free(struct sigqueue *);
extern int send_sigqueue(struct sigqueue *, struct pid *, enum pid_type);
extern int do_sigaction(int, struct k_sigaction *, struct k_sigaction *);

static inline int restart_syscall(void)
{
	set_tsk_thread_flag(current, TIF_SIGPENDING);
	return -ERESTARTNOINTR;
}
static inline int signal_pending(struct task_struct *p)
{
	return unlikely(test_tsk_thread_flag(p, TIF_SIGPENDING));
}

static inline int __fatal_signal_pending(struct task_struct *p)
{
	return unlikely(sigismember(&p->pending.signal, SIGKILL));
}

static inline int fatal_signal_pending(struct task_struct *p)
{
	return signal_pending(p) && __fatal_signal_pending(p);
}

static inline int signal_pending_state(long state, struct task_struct *p)
{
	if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
		return 0;
	if (!signal_pending(p))
		return 0;

	return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
}

/*
 * Reevaluate whether the task has signals pending delivery.
 * Wake the task if so.
 * This is required every time the blocked sigset_t changes.
 * callers must hold sighand->siglock.
 */
extern void recalc_sigpending_and_wake(struct task_struct *t);
extern void recalc_sigpending(void);
extern void calculate_sigpending(void);

extern void signal_wake_up_state(struct task_struct *t, unsigned int state);

static inline void signal_wake_up(struct task_struct *t, bool resume)
{
	signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
}
static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
{
	signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
}
void task_join_group_stop(struct task_struct *task);

#ifdef TIF_RESTORE_SIGMASK
/*
 * Legacy restore_sigmask accessors. These are inefficient on
 * SMP architectures because they require atomic operations.
 */

/**
 * set_restore_sigmask() - make sure saved_sigmask processing gets done
 *
 * This sets TIF_RESTORE_SIGMASK and ensures that the arch signal code
 * will run before returning to user mode, to process the flag. For
 * all callers, TIF_SIGPENDING is already set or it's no harm to set
 * it. TIF_RESTORE_SIGMASK need not be in the set of bits that the
 * arch code will notice on return to user mode, in case those bits
 * are scarce. We set TIF_SIGPENDING here to ensure that the arch
 * signal code always gets run when TIF_RESTORE_SIGMASK is set.
 */
static inline void set_restore_sigmask(void)
{
	set_thread_flag(TIF_RESTORE_SIGMASK);
	WARN_ON(!test_thread_flag(TIF_SIGPENDING));
}

static inline void clear_tsk_restore_sigmask(struct task_struct *task)
{
	clear_tsk_thread_flag(task, TIF_RESTORE_SIGMASK);
}

static inline void clear_restore_sigmask(void)
{
	clear_thread_flag(TIF_RESTORE_SIGMASK);
}
static inline bool test_tsk_restore_sigmask(struct task_struct *task)
{
	return test_tsk_thread_flag(task, TIF_RESTORE_SIGMASK);
}
static inline bool test_restore_sigmask(void)
{
	return test_thread_flag(TIF_RESTORE_SIGMASK);
}
static inline bool test_and_clear_restore_sigmask(void)
{
	return test_and_clear_thread_flag(TIF_RESTORE_SIGMASK);
}
#else	/* TIF_RESTORE_SIGMASK */

/* Higher-quality implementation, used if TIF_RESTORE_SIGMASK doesn't exist. */
static inline void set_restore_sigmask(void)
{
	current->restore_sigmask = true;
	WARN_ON(!test_thread_flag(TIF_SIGPENDING));
}
static inline void clear_tsk_restore_sigmask(struct task_struct *task)
{
	task->restore_sigmask = false;
}
static inline void clear_restore_sigmask(void)
{
	current->restore_sigmask = false;
}
static inline bool test_restore_sigmask(void)
{
	return current->restore_sigmask;
}
static inline bool test_tsk_restore_sigmask(struct task_struct *task)
{
	return task->restore_sigmask;
}
static inline bool test_and_clear_restore_sigmask(void)
{
	if (!current->restore_sigmask)
		return false;
	current->restore_sigmask = false;
	return true;
}
#endif
static inline void restore_saved_sigmask(void)
{
	if (test_and_clear_restore_sigmask())
		__set_current_blocked(&current->saved_sigmask);
}

static inline sigset_t *sigmask_to_save(void)
{
	sigset_t *res = &current->blocked;
	if (unlikely(test_restore_sigmask()))
		res = &current->saved_sigmask;
	return res;
}

static inline int kill_cad_pid(int sig, int priv)
{
	return kill_pid(cad_pid, sig, priv);
}
/* These can be the second arg to send_sig_info/send_group_sig_info. */
#define SEND_SIG_NOINFO ((struct kernel_siginfo *) 0)
#define SEND_SIG_PRIV	((struct kernel_siginfo *) 1)

/*
 * True if we are on the alternate signal stack.
 */
static inline int on_sig_stack(unsigned long sp)
{
	/*
	 * If the signal stack is SS_AUTODISARM then, by construction, we
	 * can't be on the signal stack unless user code deliberately set
	 * SS_AUTODISARM when we were already on it.
	 *
	 * This improves reliability: if user state gets corrupted such that
	 * the stack pointer points very close to the end of the signal stack,
	 * then this check will enable the signal to be handled anyway.
	 */
	if (current->sas_ss_flags & SS_AUTODISARM)
		return 0;

#ifdef CONFIG_STACK_GROWSUP
	return sp >= current->sas_ss_sp &&
		sp - current->sas_ss_sp < current->sas_ss_size;
#else
	return sp > current->sas_ss_sp &&
		sp - current->sas_ss_sp <= current->sas_ss_size;
#endif
}

static inline int sas_ss_flags(unsigned long sp)
{
	if (!current->sas_ss_size)
		return SS_DISABLE;

	return on_sig_stack(sp) ? SS_ONSTACK : 0;
}

static inline void sas_ss_reset(struct task_struct *p)
{
	p->sas_ss_sp = 0;
	p->sas_ss_size = 0;
	p->sas_ss_flags = SS_DISABLE;
}

static inline unsigned long sigsp(unsigned long sp, struct ksignal *ksig)
{
	if (unlikely((ksig->ka.sa.sa_flags & SA_ONSTACK)) && !sas_ss_flags(sp))
#ifdef CONFIG_STACK_GROWSUP
		return current->sas_ss_sp;
#else
		return current->sas_ss_sp + current->sas_ss_size;
#endif
	return sp;
}
extern void __cleanup_sighand(struct sighand_struct *);
extern void flush_itimer_signals(void);

#define tasklist_empty() \
	list_empty(&init_task.tasks)

#define next_task(p) \
	list_entry_rcu((p)->tasks.next, struct task_struct, tasks)

#define for_each_process(p) \
	for (p = &init_task ; (p = next_task(p)) != &init_task ; )

extern bool current_is_single_threaded(void);

/*
 * Careful: do_each_thread/while_each_thread is a double loop so
 *          'break' will not work as expected - use goto instead.
 */
#define do_each_thread(g, t) \
	for (g = t = &init_task ; (g = t = next_task(g)) != &init_task ; ) do

#define while_each_thread(g, t) \
	while ((t = next_thread(t)) != g)

#define __for_each_thread(signal, t) \
	list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)

#define for_each_thread(p, t) \
	__for_each_thread((p)->signal, t)

/* Careful: this is a double loop, 'break' won't work as expected. */
#define for_each_process_thread(p, t)	\
	for_each_process(p) for_each_thread(p, t)

typedef int (*proc_visitor)(struct task_struct *p, void *data);
void walk_process_tree(struct task_struct *top, proc_visitor, void *);
2017-05-06 01:45:14 +07:00
|
|
|
static inline
struct pid *task_pid_type(struct task_struct *task, enum pid_type type)
{
	struct pid *pid;

	if (type == PIDTYPE_PID)
		pid = task_pid(task);
	else
		pid = task->signal->pids[type];
	return pid;
}

static inline struct pid *task_tgid(struct task_struct *task)
{
	return task->signal->pids[PIDTYPE_TGID];
}

/*
 * Without tasklist or RCU lock it is not safe to dereference
 * the result of task_pgrp/task_session even if task == current,
 * we can race with another thread doing sys_setsid/sys_setpgid.
 */
static inline struct pid *task_pgrp(struct task_struct *task)
{
	return task->signal->pids[PIDTYPE_PGID];
}

static inline struct pid *task_session(struct task_struct *task)
{
	return task->signal->pids[PIDTYPE_SID];
}

static inline int get_nr_threads(struct task_struct *task)
{
	return task->signal->nr_threads;
}

static inline bool thread_group_leader(struct task_struct *p)
{
	return p->exit_signal >= 0;
}

/* Due to the insanities of de_thread it is possible for a process
 * to have the pid of the thread group leader without actually being
 * the thread group leader.  For iteration through the pids in proc
 * all we care about is that we have a task with the appropriate
 * pid, we don't actually care if we have the right task.
 */
static inline bool has_group_leader_pid(struct task_struct *p)
{
	return task_pid(p) == task_tgid(p);
}

static inline
bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
{
	return p1->signal == p2->signal;
}

static inline struct task_struct *next_thread(const struct task_struct *p)
{
	return list_entry_rcu(p->thread_group.next,
			      struct task_struct, thread_group);
}

static inline int thread_group_empty(struct task_struct *p)
{
	return list_empty(&p->thread_group);
}

#define delay_group_leader(p) \
		(thread_group_leader(p) && !thread_group_empty(p))

extern struct sighand_struct *__lock_task_sighand(struct task_struct *task,
							unsigned long *flags);

static inline struct sighand_struct *lock_task_sighand(struct task_struct *task,
						       unsigned long *flags)
{
	struct sighand_struct *ret;

	ret = __lock_task_sighand(task, flags);
	(void)__cond_lock(&task->sighand->siglock, ret);
	return ret;
}

static inline void unlock_task_sighand(struct task_struct *task,
						unsigned long *flags)
{
	spin_unlock_irqrestore(&task->sighand->siglock, *flags);
}

static inline unsigned long task_rlimit(const struct task_struct *task,
		unsigned int limit)
{
	return READ_ONCE(task->signal->rlim[limit].rlim_cur);
}

static inline unsigned long task_rlimit_max(const struct task_struct *task,
		unsigned int limit)
{
	return READ_ONCE(task->signal->rlim[limit].rlim_max);
}

static inline unsigned long rlimit(unsigned int limit)
{
	return task_rlimit(current, limit);
}

static inline unsigned long rlimit_max(unsigned int limit)
{
	return task_rlimit_max(current, limit);
}

#endif /* _LINUX_SCHED_SIGNAL_H */