License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
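For illustration, this is the kind of tag the script emits (a sketch only;
the exact placement follows the kernel's SPDX conventions, and example.c /
example.h are made-up file names):

  In a C source file such as example.c, a C++-style comment on the first line:

    // SPDX-License-Identifier: GPL-2.0

  In a header such as example.h, a C-style comment on the first line:

    /* SPDX-License-Identifier: GPL-2.0 */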
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_WAIT_H
#define _LINUX_WAIT_H
/*
 * Linux wait queue related types and methods
 */
#include <linux/list.h>
#include <linux/stddef.h>
#include <linux/spinlock.h>

#include <asm/current.h>
#include <uapi/linux/wait.h>

typedef struct wait_queue_entry wait_queue_entry_t;

typedef int (*wait_queue_func_t)(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);
int default_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int flags, void *key);

/* wait_queue_entry::flags */
#define WQ_FLAG_EXCLUSIVE	0x01
#define WQ_FLAG_WOKEN		0x02
#define WQ_FLAG_BOOKMARK	0x04

/*
 * A single wait-queue entry structure:
 */
struct wait_queue_entry {
	unsigned int		flags;
	void			*private;
	wait_queue_func_t	func;
	struct list_head	entry;
};

struct wait_queue_head {
	spinlock_t		lock;
	struct list_head	head;
};
typedef struct wait_queue_head wait_queue_head_t;

struct task_struct;

/*
 * Macros for declaration and initialisation of the datatypes
 */

#define __WAITQUEUE_INITIALIZER(name, tsk) {					\
	.private	= tsk,							\
	.func		= default_wake_function,				\
	.entry		= { NULL, NULL } }

#define DECLARE_WAITQUEUE(name, tsk)						\
	struct wait_queue_entry name = __WAITQUEUE_INITIALIZER(name, tsk)

#define __WAIT_QUEUE_HEAD_INITIALIZER(name) {					\
	.lock		= __SPIN_LOCK_UNLOCKED(name.lock),			\
	.head		= { &(name).head, &(name).head } }

#define DECLARE_WAIT_QUEUE_HEAD(name) \
	struct wait_queue_head name = __WAIT_QUEUE_HEAD_INITIALIZER(name)

extern void __init_waitqueue_head(struct wait_queue_head *wq_head, const char *name, struct lock_class_key *);

#define init_waitqueue_head(wq_head)						\
	do {									\
		static struct lock_class_key __key;				\
										\
		__init_waitqueue_head((wq_head), #wq_head, &__key);		\
	} while (0)

#ifdef CONFIG_LOCKDEP
# define __WAIT_QUEUE_HEAD_INIT_ONSTACK(name) \
	({ init_waitqueue_head(&name); name; })
# define DECLARE_WAIT_QUEUE_HEAD_ONSTACK(name) \
	struct wait_queue_head name = __WAIT_QUEUE_HEAD_INIT_ONSTACK(name)
#else
# define DECLARE_WAIT_QUEUE_HEAD_ONSTACK(name) DECLARE_WAIT_QUEUE_HEAD(name)
#endif

static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
{
	wq_entry->flags		= 0;
	wq_entry->private	= p;
	wq_entry->func		= default_wake_function;
}

static inline void
init_waitqueue_func_entry(struct wait_queue_entry *wq_entry, wait_queue_func_t func)
{
	wq_entry->flags		= 0;
	wq_entry->private	= NULL;
	wq_entry->func		= func;
}

/**
 * waitqueue_active -- locklessly test for waiters on the queue
 * @wq_head: the waitqueue to test for waiters
 *
 * returns true if the wait list is not empty
 *
 * NOTE: this function is lockless and requires care, incorrect usage _will_
 * lead to sporadic and non-obvious failure.
 *
 * Use either while holding wait_queue_head::lock or when used for wakeups
 * with an extra smp_mb() like:
 *
 *      CPU0 - waker                    CPU1 - waiter
 *
 *                                      for (;;) {
 *      @cond = true;                     prepare_to_wait(&wq_head, &wait, state);
 *      smp_mb();                         // smp_mb() from set_current_state()
 *      if (waitqueue_active(wq_head))      if (@cond)
 *        wake_up(wq_head);                   break;
 *                                        schedule();
 *                                      }
 *                                      finish_wait(&wq_head, &wait);
 *
 * Because without the explicit smp_mb() it's possible for the
 * waitqueue_active() load to get hoisted over the @cond store such that we'll
 * observe an empty wait list while the waiter might not observe @cond.
 *
 * Also note that this 'optimization' trades a spin_lock() for an smp_mb(),
 * which (when the lock is uncontended) are of roughly equal cost.
 */
static inline int waitqueue_active(struct wait_queue_head *wq_head)
{
	return !list_empty(&wq_head->head);
}

/**
 * wq_has_sleeper - check if there are any waiting processes
 * @wq_head: wait queue head
 *
 * Returns true if wq_head has waiting processes
 *
 * Please refer to the comment for waitqueue_active.
 */
static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
{
	/*
	 * We need to be sure we are in sync with the
	 * add_wait_queue modifications to the wait queue.
	 *
	 * This memory barrier should be paired with one on the
	 * waiting side.
	 */
	smp_mb();
	return waitqueue_active(wq_head);
}
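
/*
 * Illustrative waker-side sketch (not part of this header; my_wq and
 * my_cond are made-up names). The store to the condition must be ordered
 * before the waiter check; the smp_mb() inside wq_has_sleeper() provides
 * exactly that ordering:
 *
 *	my_cond = true;			// publish the condition
 *	if (wq_has_sleeper(&my_wq))	// smp_mb() orders the store above
 *		wake_up(&my_wq);	// before the lockless list check
 */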

extern void add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
extern void remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);

static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	list_add(&wq_entry->entry, &wq_head->head);
}

/*
 * Used for wake-one threads:
 */
static inline void
__add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
	__add_wait_queue(wq_head, wq_entry);
}

static inline void __add_wait_queue_entry_tail(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	list_add_tail(&wq_entry->entry, &wq_head->head);
}

static inline void
__add_wait_queue_entry_tail_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	wq_entry->flags |= WQ_FLAG_EXCLUSIVE;
	__add_wait_queue_entry_tail(wq_head, wq_entry);
}

static inline void
__remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
{
	list_del(&wq_entry->entry);
}

void __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
void __wake_up_locked_key_bookmark(struct wait_queue_head *wq_head,
		unsigned int mode, void *key, wait_queue_entry_t *bookmark);
void __wake_up_sync_key(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
void __wake_up_locked(struct wait_queue_head *wq_head, unsigned int mode, int nr);
void __wake_up_sync(struct wait_queue_head *wq_head, unsigned int mode, int nr);

#define wake_up(x)			__wake_up(x, TASK_NORMAL, 1, NULL)
#define wake_up_nr(x, nr)		__wake_up(x, TASK_NORMAL, nr, NULL)
#define wake_up_all(x)			__wake_up(x, TASK_NORMAL, 0, NULL)
#define wake_up_locked(x)		__wake_up_locked((x), TASK_NORMAL, 1)
#define wake_up_all_locked(x)		__wake_up_locked((x), TASK_NORMAL, 0)

#define wake_up_interruptible(x)	__wake_up(x, TASK_INTERRUPTIBLE, 1, NULL)
#define wake_up_interruptible_nr(x, nr)	__wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
#define wake_up_interruptible_all(x)	__wake_up(x, TASK_INTERRUPTIBLE, 0, NULL)
#define wake_up_interruptible_sync(x)	__wake_up_sync((x), TASK_INTERRUPTIBLE, 1)

/*
 * Wakeup macros to be used to report events to the targets.
 */
#define poll_to_key(m) ((void *)(__force uintptr_t)(__poll_t)(m))
#define key_to_poll(m) ((__force __poll_t)(uintptr_t)(void *)(m))
#define wake_up_poll(x, m)							\
	__wake_up(x, TASK_NORMAL, 1, poll_to_key(m))
#define wake_up_locked_poll(x, m)						\
	__wake_up_locked_key((x), TASK_NORMAL, poll_to_key(m))
#define wake_up_interruptible_poll(x, m)					\
	__wake_up(x, TASK_INTERRUPTIBLE, 1, poll_to_key(m))
#define wake_up_interruptible_sync_poll(x, m)					\
	__wake_up_sync_key((x), TASK_INTERRUPTIBLE, 1, poll_to_key(m))

#define ___wait_cond_timeout(condition)						\
({										\
	bool __cond = (condition);						\
	if (__cond && !__ret)							\
		__ret = 1;							\
	__cond || !__ret;							\
})

#define ___wait_is_interruptible(state)						\
	(!__builtin_constant_p(state) ||					\
	 state == TASK_INTERRUPTIBLE || state == TASK_KILLABLE)		\

extern void init_wait_entry(struct wait_queue_entry *wq_entry, int flags);

/*
 * The below macro ___wait_event() has an explicit shadow of the __ret
 * variable when used from the wait_event_*() macros.
 *
 * This is so that both can use the ___wait_cond_timeout() construct
 * to wrap the condition.
 *
 * The type inconsistency of the wait_event_*() __ret variable is also
 * on purpose; we use long where we can return timeout values and int
 * otherwise.
 */

#define ___wait_event(wq_head, condition, state, exclusive, ret, cmd)		\
({										\
	__label__ __out;							\
	struct wait_queue_entry __wq_entry;					\
	long __ret = ret;	/* explicit shadow */				\
										\
	init_wait_entry(&__wq_entry, exclusive ? WQ_FLAG_EXCLUSIVE : 0);	\
	for (;;) {								\
		long __int = prepare_to_wait_event(&wq_head, &__wq_entry, state);\
										\
		if (condition)							\
			break;							\
										\
		if (___wait_is_interruptible(state) && __int) {			\
			__ret = __int;						\
			goto __out;						\
		}								\
										\
		cmd;								\
	}									\
	finish_wait(&wq_head, &__wq_entry);					\
__out:	__ret;									\
})

#define __wait_event(wq_head, condition)					\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    schedule())

/**
 * wait_event - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event(wq_head, condition)						\
do {										\
	might_sleep();								\
	if (condition)								\
		break;								\
	__wait_event(wq_head, condition);					\
} while (0)
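
/*
 * Illustrative usage sketch (not part of this header; my_wq, my_flag and
 * the surrounding driver context are made-up names):
 *
 *	static DECLARE_WAIT_QUEUE_HEAD(my_wq);
 *	static bool my_flag;
 *
 *	// sleeper: blocks in TASK_UNINTERRUPTIBLE until my_flag is set
 *	wait_event(my_wq, my_flag);
 *
 *	// waker (e.g. another thread or an interrupt handler):
 *	my_flag = true;
 *	wake_up(&my_wq);
 */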

#define __io_wait_event(wq_head, condition)					\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    io_schedule())

/*
 * io_wait_event() -- like wait_event() but with io_schedule()
 */
#define io_wait_event(wq_head, condition)					\
do {										\
	might_sleep();								\
	if (condition)								\
		break;								\
	__io_wait_event(wq_head, condition);					\
} while (0)

#define __wait_event_freezable(wq_head, condition)				\
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,		\
			    schedule(); try_to_freeze())

/**
 * wait_event_freezable - sleep (or freeze) until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE -- so as not to contribute
 * to system load) until the @condition evaluates to true. The
 * @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_freezable(wq_head, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_freezable(wq_head, condition);		\
	__ret;									\
})

#define __wait_event_timeout(wq_head, condition, timeout)			\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_UNINTERRUPTIBLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_timeout(wq_head, condition, timeout)				\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_timeout(wq_head, condition, timeout);	\
	__ret;									\
})
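
/*
 * Illustrative usage sketch (not part of this header; my_wq, my_flag and
 * the helpers are made-up names):
 *
 *	long ret = wait_event_timeout(my_wq, my_flag, HZ);
 *	if (!ret)
 *		handle_timeout();	// my_flag still false after ~1 second
 *	else
 *		proceed();		// my_flag set; ret is 1 or the jiffies left
 */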

#define __wait_event_freezable_timeout(wq_head, condition, timeout)		\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_INTERRUPTIBLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret); try_to_freeze())

/*
 * like wait_event_timeout() -- except it uses TASK_INTERRUPTIBLE to avoid
 * increasing load and is freezable.
 */
#define wait_event_freezable_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_freezable_timeout(wq_head, condition, timeout); \
	__ret;									\
})

#define __wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2)		\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 1, 0,	\
			    cmd1; schedule(); cmd2)
/*
 * Just like wait_event_cmd(), except it sets exclusive flag
 */
#define wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2)		\
do {										\
	if (condition)								\
		break;								\
	__wait_event_exclusive_cmd(wq_head, condition, cmd1, cmd2);		\
} while (0)

#define __wait_event_cmd(wq_head, condition, cmd1, cmd2)			\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0,	\
			    cmd1; schedule(); cmd2)

/**
 * wait_event_cmd - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @cmd1: the command will be executed before sleep
 * @cmd2: the command will be executed after sleep
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 */
#define wait_event_cmd(wq_head, condition, cmd1, cmd2)				\
do {										\
	if (condition)								\
		break;								\
	__wait_event_cmd(wq_head, condition, cmd1, cmd2);			\
} while (0)

#define __wait_event_interruptible(wq_head, condition)				\
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,		\
		      schedule())

/**
 * wait_event_interruptible - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible(wq_head, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_interruptible(wq_head, condition);	\
	__ret;									\
})
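
/*
 * Illustrative usage sketch (not part of this header; names are made up).
 * Interruptible waits should propagate -ERESTARTSYS so the syscall can be
 * restarted after the signal is handled:
 *
 *	if (wait_event_interruptible(my_wq, my_flag))
 *		return -ERESTARTSYS;	// interrupted by a signal
 *	// my_flag is true here
 */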

#define __wait_event_interruptible_timeout(wq_head, condition, timeout)	\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_INTERRUPTIBLE, 0, timeout,				\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_interruptible_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was
 * interrupted by a signal.
 */
#define wait_event_interruptible_timeout(wq_head, condition, timeout)		\
({										\
	long __ret = timeout;							\
	might_sleep();								\
	if (!___wait_cond_timeout(condition))					\
		__ret = __wait_event_interruptible_timeout(wq_head,		\
						condition, timeout);		\
	__ret;									\
})
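
/*
 * Illustrative usage sketch covering all three outcomes (not part of this
 * header; names are made up):
 *
 *	long ret = wait_event_interruptible_timeout(my_wq, my_flag, HZ);
 *	if (ret == -ERESTARTSYS)
 *		;	// interrupted by a signal
 *	else if (ret == 0)
 *		;	// timed out, my_flag still false
 *	else
 *		;	// my_flag true; ret is the jiffies remaining (>= 1)
 */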

#define __wait_event_hrtimeout(wq_head, condition, timeout, state)		\
({										\
	int __ret = 0;								\
	struct hrtimer_sleeper __t;						\
										\
	hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);	\
	hrtimer_init_sleeper(&__t, current);					\
	if ((timeout) != KTIME_MAX)						\
		hrtimer_start_range_ns(&__t.timer, timeout,			\
				       current->timer_slack_ns,			\
				       HRTIMER_MODE_REL);			\
										\
	__ret = ___wait_event(wq_head, condition, state, 0, 0,			\
		if (!__t.task) {						\
			__ret = -ETIME;						\
			break;							\
		}								\
		schedule());							\
										\
	hrtimer_cancel(&__t.timer);						\
	destroy_hrtimer_on_stack(&__t.timer);					\
	__ret;									\
})

/**
 * wait_event_hrtimeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, as a ktime_t
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function returns 0 if @condition became true, or -ETIME if the timeout
 * elapsed.
 */
#define wait_event_hrtimeout(wq_head, condition, timeout)			\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_hrtimeout(wq_head, condition, timeout,	\
					       TASK_UNINTERRUPTIBLE);		\
	__ret;									\
})

/**
 * wait_event_interruptible_hrtimeout - sleep until a condition gets true or a timeout elapses
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, as a ktime_t
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function returns 0 if @condition became true, -ERESTARTSYS if it was
 * interrupted by a signal, or -ETIME if the timeout elapsed.
 */
#define wait_event_interruptible_hrtimeout(wq, condition, timeout)		\
({										\
	long __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_hrtimeout(wq, condition, timeout,		\
					       TASK_INTERRUPTIBLE);		\
	__ret;									\
})

#define __wait_event_interruptible_exclusive(wq, condition)			\
	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,			\
		      schedule())

#define wait_event_interruptible_exclusive(wq, condition)			\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_interruptible_exclusive(wq, condition);	\
	__ret;									\
})

#define __wait_event_killable_exclusive(wq, condition)				\
	___wait_event(wq, condition, TASK_KILLABLE, 1, 0,			\
		      schedule())

#define wait_event_killable_exclusive(wq, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_killable_exclusive(wq, condition);	\
	__ret;									\
})

#define __wait_event_freezable_exclusive(wq, condition)			\
	___wait_event(wq, condition, TASK_INTERRUPTIBLE, 1, 0,			\
		      schedule(); try_to_freeze())

#define wait_event_freezable_exclusive(wq, condition)				\
({										\
	int __ret = 0;								\
	might_sleep();								\
	if (!(condition))							\
		__ret = __wait_event_freezable_exclusive(wq, condition);	\
	__ret;									\
})
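
/*
 * Illustrative sketch of exclusive waits (not part of this header; names
 * are made up). Waiters queued by the _exclusive variants carry
 * WQ_FLAG_EXCLUSIVE, so wake_up(), which passes nr == 1 to __wake_up(),
 * wakes at most one of them instead of the whole queue:
 *
 *	// each worker thread:
 *	err = wait_event_interruptible_exclusive(my_wq, !my_queue_empty());
 *
 *	// producer side: wakes a single exclusive waiter
 *	wake_up(&my_wq);
 */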
|
|
|
|
|
2018-02-13 04:22:36 +07:00
|
|
|
/**
|
|
|
|
* wait_event_idle - wait for a condition without contributing to system load
|
|
|
|
* @wq_head: the waitqueue to wait on
|
|
|
|
* @condition: a C expression for the event to wait for
|
|
|
|
*
|
|
|
|
* The process is put to sleep (TASK_IDLE) until the
|
|
|
|
* @condition evaluates to true.
|
|
|
|
* The @condition is checked each time the waitqueue @wq_head is woken up.
|
|
|
|
*
|
|
|
|
* wake_up() has to be called after changing any variable that could
|
|
|
|
* change the result of the wait condition.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
#define wait_event_idle(wq_head, condition) \
|
|
|
|
do { \
|
|
|
|
might_sleep(); \
|
|
|
|
if (!(condition)) \
|
|
|
|
___wait_event(wq_head, condition, TASK_IDLE, 0, 0, schedule()); \
|
|
|
|
} while (0)
|
|
|
|
|
|
|
|
/**
|
|
|
|
* wait_event_idle_exclusive - wait for a condition with contributing to system load
|
|
|
|
* @wq_head: the waitqueue to wait on
|
|
|
|
* @condition: a C expression for the event to wait for
|
|
|
|
*
|
|
|
|
* The process is put to sleep (TASK_IDLE) until the
|
|
|
|
* @condition evaluates to true.
|
|
|
|
* The @condition is checked each time the waitqueue @wq_head is woken up.
|
|
|
|
*
|
|
|
|
* The process is put on the wait queue with an WQ_FLAG_EXCLUSIVE flag
|
|
|
|
* set thus if other processes wait on the same list, when this
|
|
|
|
* process is woken further processes are not considered.
|
|
|
|
*
|
|
|
|
* wake_up() has to be called after changing any variable that could
|
|
|
|
* change the result of the wait condition.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
#define wait_event_idle_exclusive(wq_head, condition) \
|
|
|
|
do { \
|
|
|
|
might_sleep(); \
|
|
|
|
if (!(condition)) \
|
|
|
|
___wait_event(wq_head, condition, TASK_IDLE, 1, 0, schedule()); \
|
|
|
|
} while (0)
|
|
|
|
|
|
|
|
#define __wait_event_idle_timeout(wq_head, condition, timeout) \
|
|
|
|
___wait_event(wq_head, ___wait_cond_timeout(condition), \
|
|
|
|
TASK_IDLE, 0, timeout, \
|
|
|
|
__ret = schedule_timeout(__ret))
|
|
|
|
|
|
|
|
/**
 * wait_event_idle_timeout - sleep without load until a condition becomes true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_idle_timeout(wq_head, condition, timeout)		\
({									\
	long __ret = timeout;						\
	might_sleep();							\
	if (!___wait_cond_timeout(condition))				\
		__ret = __wait_event_idle_timeout(wq_head, condition, timeout); \
	__ret;								\
})

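/*
 * Illustrative sketch (not part of the original header): the return value
 * distinguishes timeout from success. Assuming a hypothetical 'dev_wq'
 * queue and 'dev_ready' flag:
 *
 *	long left = wait_event_idle_timeout(dev_wq, dev_ready, HZ);
 *	if (!left)
 *		return -ETIMEDOUT;	// condition still false after 1s
 *	// dev_ready became true; 'left' is the remaining jiffies (>= 1)
 */
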
#define __wait_event_idle_exclusive_timeout(wq_head, condition, timeout) \
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_IDLE, 1, timeout,				\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_idle_exclusive_timeout - sleep without load until a condition becomes true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_IDLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * The process is put on the wait queue with a WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes wait on the same list, a wakeup that
 * reaches this process does not consider the processes queued after it.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * or the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed.
 */
#define wait_event_idle_exclusive_timeout(wq_head, condition, timeout)	\
({									\
	long __ret = timeout;						\
	might_sleep();							\
	if (!___wait_cond_timeout(condition))				\
		__ret = __wait_event_idle_exclusive_timeout(wq_head, condition, timeout);\
	__ret;								\
})

extern int do_wait_intr(wait_queue_head_t *, wait_queue_entry_t *);
extern int do_wait_intr_irq(wait_queue_head_t *, wait_queue_entry_t *);

#define __wait_event_interruptible_locked(wq, condition, exclusive, fn) \
({									\
	int __ret;							\
	DEFINE_WAIT(__wait);						\
	if (exclusive)							\
		__wait.flags |= WQ_FLAG_EXCLUSIVE;			\
	do {								\
		__ret = fn(&(wq), &__wait);				\
		if (__ret)						\
			break;						\
	} while (!(condition));						\
	__remove_wait_queue(&(wq), &__wait);				\
	__set_current_state(TASK_RUNNING);				\
	__ret;								\
})

/**
 * wait_event_interruptible_locked - sleep until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock held. This spinlock is
 * unlocked while sleeping, but @condition testing is done while the lock
 * is held, and the lock is held again when this macro exits.
 *
 * The lock is locked/unlocked using spin_lock()/spin_unlock()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_locked(wq, condition)			\
	((condition)							\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 0, do_wait_intr))

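/*
 * Illustrative sketch (not part of the original header): the caller holds
 * wq.lock around both the wait and the state it protects. Assuming a
 * hypothetical queue 'ev_wq' guarding a 'pending' count:
 *
 *	spin_lock(&ev_wq.lock);
 *	err = wait_event_interruptible_locked(ev_wq, pending > 0);
 *	if (!err)
 *		pending--;		// still under ev_wq.lock
 *	spin_unlock(&ev_wq.lock);
 *
 * The producer updates 'pending' and calls wake_up_locked(&ev_wq) while
 * holding the same lock.
 */
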
/**
 * wait_event_interruptible_locked_irq - sleep until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock held. This spinlock is
 * unlocked while sleeping, but @condition testing is done while the lock
 * is held, and the lock is held again when this macro exits.
 *
 * The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_locked_irq(wq, condition)		\
	((condition)							\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 0, do_wait_intr_irq))

/**
 * wait_event_interruptible_exclusive_locked - sleep exclusively until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock held. This spinlock is
 * unlocked while sleeping, but @condition testing is done while the lock
 * is held, and the lock is held again when this macro exits.
 *
 * The lock is locked/unlocked using spin_lock()/spin_unlock()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * The process is put on the wait queue with a WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes wait on the same list, a wakeup that
 * reaches this process does not consider the processes queued after it.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_exclusive_locked(wq, condition)	\
	((condition)							\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 1, do_wait_intr))

/**
 * wait_event_interruptible_exclusive_locked_irq - sleep exclusively until a condition gets true
 * @wq: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received.
 * The @condition is checked each time the waitqueue @wq is woken up.
 *
 * It must be called with wq.lock held. This spinlock is
 * unlocked while sleeping, but @condition testing is done while the lock
 * is held, and the lock is held again when this macro exits.
 *
 * The lock is locked/unlocked using spin_lock_irq()/spin_unlock_irq()
 * functions which must match the way they are locked/unlocked outside
 * of this macro.
 *
 * The process is put on the wait queue with a WQ_FLAG_EXCLUSIVE flag
 * set, so if other processes wait on the same list, a wakeup that
 * reaches this process does not consider the processes queued after it.
 *
 * wake_up_locked() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_exclusive_locked_irq(wq, condition)	\
	((condition)							\
	 ? 0 : __wait_event_interruptible_locked(wq, condition, 1, do_wait_intr_irq))

#define __wait_event_killable(wq, condition)				\
	___wait_event(wq, condition, TASK_KILLABLE, 0, 0, schedule())

/**
 * wait_event_killable - sleep until a condition gets true
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 *
 * The process is put to sleep (TASK_KILLABLE) until the
 * @condition evaluates to true or a kill signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * The function will return -ERESTARTSYS if it was interrupted by a
 * signal and 0 if @condition evaluated to true.
 */
#define wait_event_killable(wq_head, condition)				\
({									\
	int __ret = 0;							\
	might_sleep();							\
	if (!(condition))						\
		__ret = __wait_event_killable(wq_head, condition);	\
	__ret;								\
})

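/*
 * Illustrative sketch (not part of the original header): TASK_KILLABLE
 * waits block all signals except fatal ones, so a task stuck here can
 * still be killed. Assuming a hypothetical 'io_wq' queue and 'io_done'
 * flag:
 *
 *	if (wait_event_killable(io_wq, io_done))
 *		return -ERESTARTSYS;	// interrupted by a fatal signal
 *	// io_done is true here
 */
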
#define __wait_event_killable_timeout(wq_head, condition, timeout)	\
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      TASK_KILLABLE, 0, timeout,			\
		      __ret = schedule_timeout(__ret))

/**
 * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_KILLABLE) until the
 * @condition evaluates to true or a kill signal is received.
 * The @condition is checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * Returns:
 * 0 if the @condition evaluated to %false after the @timeout elapsed,
 * 1 if the @condition evaluated to %true after the @timeout elapsed,
 * the remaining jiffies (at least 1) if the @condition evaluated
 * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was
 * interrupted by a kill signal.
 *
 * Only kill signals interrupt this process.
 */
#define wait_event_killable_timeout(wq_head, condition, timeout)	\
({									\
	long __ret = timeout;						\
	might_sleep();							\
	if (!___wait_cond_timeout(condition))				\
		__ret = __wait_event_killable_timeout(wq_head,		\
						condition, timeout);	\
	__ret;								\
})

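/*
 * Illustrative sketch (not part of the original header): callers must
 * handle three outcomes - timeout, fatal signal, and success. Assuming a
 * hypothetical 'fw_wq' queue and 'fw_loaded' flag:
 *
 *	long ret = wait_event_killable_timeout(fw_wq, fw_loaded, 5 * HZ);
 *	if (ret < 0)
 *		return ret;		// -ERESTARTSYS: killed
 *	if (!ret)
 *		return -ETIMEDOUT;	// condition still false after 5s
 *	// fw_loaded became true
 */
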
#define __wait_event_lock_irq(wq_head, condition, lock, cmd)		\
	(void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
			    spin_unlock_irq(&lock);			\
			    cmd;					\
			    schedule();					\
			    spin_lock_irq(&lock))

/**
 * wait_event_lock_irq_cmd - sleep until a condition gets true. The
 *			     condition is checked under the lock. This
 *			     is expected to be called with the lock
 *			     taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before cmd
 *	  and schedule() and reacquired afterwards.
 * @cmd: a command which is invoked outside the critical section before
 *	 sleep
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before invoking the cmd and going to sleep and is reacquired
 * afterwards.
 */
#define wait_event_lock_irq_cmd(wq_head, condition, lock, cmd)		\
do {									\
	if (condition)							\
		break;							\
	__wait_event_lock_irq(wq_head, condition, lock, cmd);		\
} while (0)

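/*
 * Illustrative sketch (not part of the original header): @cmd lets the
 * caller kick the other side after dropping the lock but before sleeping.
 * Assuming a hypothetical queue 'cmd_wq', lock 'cmd_lock', a 'slots'
 * count, and a hypothetical flush helper:
 *
 *	spin_lock_irq(&cmd_lock);
 *	wait_event_lock_irq_cmd(cmd_wq, slots > 0, cmd_lock,
 *				flush_pending_cmds());
 *	slots--;			// still under cmd_lock
 *	spin_unlock_irq(&cmd_lock);
 */
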
/**
 * wait_event_lock_irq - sleep until a condition gets true. The
 *			 condition is checked under the lock. This
 *			 is expected to be called with the lock
 *			 taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 *
 * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
 * @condition evaluates to true. The @condition is checked each time
 * the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 */
#define wait_event_lock_irq(wq_head, condition, lock)			\
do {									\
	if (condition)							\
		break;							\
	__wait_event_lock_irq(wq_head, condition, lock, );		\
} while (0)

#define __wait_event_interruptible_lock_irq(wq_head, condition, lock, cmd) \
	___wait_event(wq_head, condition, TASK_INTERRUPTIBLE, 0, 0,	\
		      spin_unlock_irq(&lock);				\
		      cmd;						\
		      schedule();					\
		      spin_lock_irq(&lock))

/**
 * wait_event_interruptible_lock_irq_cmd - sleep until a condition gets true.
 *		The condition is checked under the lock. This is expected to
 *		be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before cmd and
 *	  schedule() and reacquired afterwards.
 * @cmd: a command which is invoked outside the critical section before
 *	 sleep
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before invoking the cmd and going to sleep and is reacquired
 * afterwards.
 *
 * The macro will return -ERESTARTSYS if it was interrupted by a signal
 * and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_lock_irq_cmd(wq_head, condition, lock, cmd) \
({									\
	int __ret = 0;							\
	if (!(condition))						\
		__ret = __wait_event_interruptible_lock_irq(wq_head,	\
						condition, lock, cmd);	\
	__ret;								\
})

/**
 * wait_event_interruptible_lock_irq - sleep until a condition gets true.
 *		The condition is checked under the lock. This is expected
 *		to be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 *
 * The macro will return -ERESTARTSYS if it was interrupted by a signal
 * and 0 if @condition evaluated to true.
 */
#define wait_event_interruptible_lock_irq(wq_head, condition, lock)	\
({									\
	int __ret = 0;							\
	if (!(condition))						\
		__ret = __wait_event_interruptible_lock_irq(wq_head,	\
						condition, lock,);	\
	__ret;								\
})

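/*
 * Illustrative sketch (not part of the original header): like
 * wait_event_interruptible(), but the condition and the data it reads are
 * protected by a caller-held spinlock. Assuming a hypothetical 'rx_lock',
 * 'rx_wq' and 'rx_count':
 *
 *	spin_lock_irq(&rx_lock);
 *	err = wait_event_interruptible_lock_irq(rx_wq, rx_count > 0, rx_lock);
 *	if (!err)
 *		rx_count--;		// still under rx_lock
 *	spin_unlock_irq(&rx_lock);
 */
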
#define __wait_event_lock_irq_timeout(wq_head, condition, lock, timeout, state) \
	___wait_event(wq_head, ___wait_cond_timeout(condition),		\
		      state, 0, timeout,				\
		      spin_unlock_irq(&lock);				\
		      __ret = schedule_timeout(__ret);			\
		      spin_lock_irq(&lock))

/**
 * wait_event_interruptible_lock_irq_timeout - sleep until a condition gets
 *		true or a timeout elapses. The condition is checked under
 *		the lock. This is expected to be called with the lock taken.
 * @wq_head: the waitqueue to wait on
 * @condition: a C expression for the event to wait for
 * @lock: a locked spinlock_t, which will be released before schedule()
 *	  and reacquired afterwards.
 * @timeout: timeout, in jiffies
 *
 * The process is put to sleep (TASK_INTERRUPTIBLE) until the
 * @condition evaluates to true or a signal is received. The @condition is
 * checked each time the waitqueue @wq_head is woken up.
 *
 * wake_up() has to be called after changing any variable that could
 * change the result of the wait condition.
 *
 * This is supposed to be called while holding the lock. The lock is
 * dropped before going to sleep and is reacquired afterwards.
 *
 * The function returns 0 if the @timeout elapsed, -ERESTARTSYS if it
 * was interrupted by a signal, and the remaining jiffies (at least 1)
 * if the @condition evaluated to true before the @timeout elapsed.
 */
#define wait_event_interruptible_lock_irq_timeout(wq_head, condition, lock, \
						  timeout)		\
({									\
	long __ret = timeout;						\
	if (!___wait_cond_timeout(condition))				\
		__ret = __wait_event_lock_irq_timeout(			\
					wq_head, condition, lock, timeout, \
					TASK_INTERRUPTIBLE);		\
	__ret;								\
})

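/*
 * Illustrative sketch (not part of the original header): assuming a
 * hypothetical 'tx_lock', 'tx_wq' and 'tx_space' count:
 *
 *	spin_lock_irq(&tx_lock);
 *	ret = wait_event_interruptible_lock_irq_timeout(tx_wq,
 *					tx_space > 0, tx_lock, HZ / 2);
 *	spin_unlock_irq(&tx_lock);
 *	if (ret < 0)
 *		return ret;		// -ERESTARTSYS
 *	if (!ret)
 *		return -ETIMEDOUT;	// timed out with tx_space == 0
 */
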
#define wait_event_lock_irq_timeout(wq_head, condition, lock, timeout)	\
({									\
	long __ret = timeout;						\
	if (!___wait_cond_timeout(condition))				\
		__ret = __wait_event_lock_irq_timeout(			\
					wq_head, condition, lock, timeout, \
					TASK_UNINTERRUPTIBLE);		\
	__ret;								\
})

/*
 * Waitqueues which are removed from the waitqueue_head at wakeup time
 */

void prepare_to_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
void prepare_to_wait_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
long prepare_to_wait_event(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry, int state);
void finish_wait(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
long wait_woken(struct wait_queue_entry *wq_entry, unsigned mode, long timeout);
int woken_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);
int autoremove_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int sync, void *key);

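/*
 * Illustrative sketch (not part of the original header): wait_woken()
 * pairs with woken_wake_function() to close the race between testing the
 * condition and going to sleep, without re-queueing on every iteration.
 * Assuming a hypothetical 'link_wq' queue and 'link_up' flag:
 *
 *	DEFINE_WAIT_FUNC(wait, woken_wake_function);
 *	long timeout = HZ;
 *
 *	add_wait_queue(&link_wq, &wait);
 *	while (!link_up && timeout)
 *		timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
 *	remove_wait_queue(&link_wq, &wait);
 */
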
#define DEFINE_WAIT_FUNC(name, function)				\
	struct wait_queue_entry name = {				\
		.private	= current,				\
		.func		= function,				\
		.entry		= LIST_HEAD_INIT((name).entry),		\
	}

#define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function)
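
/*
 * Illustrative sketch (not part of the original header): the open-coded
 * wait loop that the wait_event*() family expands to, using the helpers
 * declared above. Assuming a hypothetical 'my_wq' queue and 'done' flag:
 *
 *	DEFINE_WAIT(wait);
 *
 *	for (;;) {
 *		prepare_to_wait(&my_wq, &wait, TASK_UNINTERRUPTIBLE);
 *		if (done)
 *			break;
 *		schedule();
 *	}
 *	finish_wait(&my_wq, &wait);
 *
 * autoremove_wake_function() removes the entry on wakeup, and
 * finish_wait() handles the case where no wakeup ever arrived.
 */
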
#define init_wait(wait)							\
do {									\
	(wait)->private = current;					\
	(wait)->func = autoremove_wake_function;			\
	INIT_LIST_HEAD(&(wait)->entry);					\
	(wait)->flags = 0;						\
} while (0)

#endif /* _LINUX_WAIT_H */