License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it.
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them (even if
<5 lines) were also included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
  SPDX license identifier                            # files
  ---------------------------------------------------|-------
  GPL-2.0                                              11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
  SPDX license identifier                            # files
  ---------------------------------------------------|-------
  GPL-2.0 WITH Linux-syscall-note                        930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, the Linux-syscall-note was added if any
GPL-family license was found in the file, or if it had no licensing in
it (per the prior point). Results summary:
  SPDX license identifier                              # files
  ---------------------------------------------------|------
  GPL-2.0 WITH Linux-syscall-note                        270
  GPL-2.0+ WITH Linux-syscall-note                       169
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)     21
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)     17
  LGPL-2.1+ WITH Linux-syscall-note                       15
  GPL-1.0+ WITH Linux-syscall-note                        14
  ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)     5
  LGPL-2.0+ WITH Linux-syscall-note                        4
  LGPL-2.1 WITH Linux-syscall-note                         3
  ((GPL-2.0 WITH Linux-syscall-note) OR MIT)               3
  ((GPL-2.0 WITH Linux-syscall-note) AND MIT)              1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version earlier this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
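A minimal sketch of the tag forms discussed above (block comments in headers,
line comments in .c sources, '#' comments in Makefiles/Kconfig, and the
syscall-note exception in uapi headers):

/* SPDX-License-Identifier: GPL-2.0 */                          /* header file */
// SPDX-License-Identifier: GPL-2.0                             // .c source file
# SPDX-License-Identifier: GPL-2.0                              # Makefile/Kconfig
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */  /* uapi header */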
/* SPDX-License-Identifier: GPL-2.0 */

#include <linux/pm_qos.h>

static inline void device_pm_init_common(struct device *dev)
{
        if (!dev->power.early_init) {
                spin_lock_init(&dev->power.lock);
                dev->power.qos = NULL;
                dev->power.early_init = true;
        }
}

#ifdef CONFIG_PM

static inline void pm_runtime_early_init(struct device *dev)
{
        dev->power.disable_depth = 1;
        device_pm_init_common(dev);
}

extern void pm_runtime_init(struct device *dev);
PM / runtime: Re-init runtime PM states at probe error and driver unbind
There are two common expectations among several subsystems/drivers that
deploy runtime PM support, but which aren't met by the driver core.
Expectation 1)
At ->probe() the subsystem/driver expects the runtime PM status of the
device to be RPM_SUSPENDED, which is the initial status being assigned at
device registration.
This expectation is especially common among those subsystems/drivers that
manage devices with an attached PM domain, as those require the
->runtime_resume() callback at the PM domain level to be invoked during
->probe().
Moreover, these subsystems/drivers rely entirely on runtime PM resources
being managed at the PM domain level and thus don't implement their own
set of runtime PM callbacks.
These are two scenarios that suffer from this unmet expectation.
i) A failed ->probe() sequence requests probe deferral:
->probe()
...
pm_runtime_enable()
pm_runtime_get_sync()
...
err:
pm_runtime_put()
pm_runtime_disable()
...
As there is no guarantee that such a sequence turns the runtime PM status
of the device into RPM_SUSPENDED, the retried ->probe() may start with the
status in RPM_ACTIVE.
In such a case the runtime PM core won't invoke the ->runtime_resume()
callback in response to pm_runtime_get_sync(), as it considers the device
to be already runtime resumed.
ii) A driver re-bind sequence:
At driver unbind, the subsystem/driver's ->remove() callback invokes a
sequence of runtime PM APIs to undo actions done during ->probe() and to
put the device into a low power state.
->remove()
...
pm_runtime_put()
pm_runtime_disable()
...
As in the failing ->probe() case, this sequence doesn't guarantee that the
runtime PM status of the device turns into RPM_SUSPENDED.
Trying to re-bind the driver thus causes the same issue as re-trying
->probe() in the probe deferral scenario.
Expectation 2)
Drivers that invoke the pm_runtime_irq_safe() API during ->probe() trigger
the runtime PM core to increase the usage count of the device's parent and
permanently make it runtime resumed.
The usage count is only dropped at device removal, which also allows the
parent to be runtime suspended again.
A retried ->probe() repeats the call to pm_runtime_irq_safe() and thus once
more triggers the usage count of the device's parent to be increased.
This not only leads to an imbalanced usage count for the device's parent,
but also keeps it runtime resumed permanently even if ->probe() fails.
To address these issues, let's change the policy of the driver core to
meet these expectations. More precisely, at ->probe() failures and driver
unbind, restore the initial states of runtime PM.
However, to still allow subsystems to control PM for devices that don't
->probe() successfully, the initial states are not restored unless runtime
PM is disabled.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
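A minimal sketch of scenario i) as it typically appears in a driver (driver
and function names are hypothetical, not taken from the text above); the
error path mirrors the quoted sequence and does not guarantee the device
ends up RPM_SUSPENDED:

#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
        int ret;

        pm_runtime_enable(&pdev->dev);
        ret = pm_runtime_get_sync(&pdev->dev); /* may runtime-resume the device */
        if (ret < 0)
                goto err;

        ret = -EPROBE_DEFER;    /* pretend a required resource isn't ready yet */

err:
        pm_runtime_put(&pdev->dev);
        pm_runtime_disable(&pdev->dev);
        return ret;             /* a deferred re-probe may now start in RPM_ACTIVE */
}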
extern void pm_runtime_reinit(struct device *dev);
extern void pm_runtime_remove(struct device *dev);

#define WAKE_IRQ_DEDICATED_ALLOCATED    BIT(0)
#define WAKE_IRQ_DEDICATED_MANAGED      BIT(1)
#define WAKE_IRQ_DEDICATED_MASK         (WAKE_IRQ_DEDICATED_ALLOCATED | \
                                         WAKE_IRQ_DEDICATED_MANAGED)
PM / Wakeirq: Add automated device wake IRQ handling
It turns out we can automate the handling of device_may_wakeup()
quite a bit by using the kernel wakeup source list, as suggested
by Rafael J. Wysocki <rjw@rjwysocki.net>.
And as some hardware has a separate dedicated wake-up interrupt
in addition to the IO interrupt, we can automate the handling by
adding a generic threaded interrupt handler that just calls the
device PM runtime to wake up the device.
This allows dropping code from device drivers, as we currently
do it in multiple ways, and often wrongly.
For most drivers, we should be able to drop the following
boilerplate code from runtime_suspend and runtime_resume
functions:
...
device_init_wakeup(dev, true);
...
if (device_may_wakeup(dev))
enable_irq_wake(irq);
...
if (device_may_wakeup(dev))
disable_irq_wake(irq);
...
device_init_wakeup(dev, false);
...
We can replace it with just the following init and exit
time code:
...
device_init_wakeup(dev, true);
dev_pm_set_wake_irq(dev, irq);
...
dev_pm_clear_wake_irq(dev);
device_init_wakeup(dev, false);
...
And for hardware with dedicated wake-up interrupts:
...
device_init_wakeup(dev, true);
dev_pm_set_dedicated_wake_irq(dev, irq);
...
dev_pm_clear_wake_irq(dev);
device_init_wakeup(dev, false);
...
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
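A minimal sketch of the "after" pattern above in a platform driver (driver
name hypothetical); the wake IRQ bookkeeping is handed over to the PM core
via dev_pm_set_wake_irq()/dev_pm_clear_wake_irq():

#include <linux/platform_device.h>
#include <linux/pm_wakeirq.h>
#include <linux/pm_wakeup.h>

static int foo_probe(struct platform_device *pdev)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;

        device_init_wakeup(&pdev->dev, true);
        return dev_pm_set_wake_irq(&pdev->dev, irq);    /* managed from now on */
}

static int foo_remove(struct platform_device *pdev)
{
        dev_pm_clear_wake_irq(&pdev->dev);
        device_init_wakeup(&pdev->dev, false);
        return 0;
}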
struct wake_irq {
        struct device *dev;
        unsigned int status;
        int irq;
        const char *name;
};

extern void dev_pm_arm_wake_irq(struct wake_irq *wirq);
extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq);
extern void dev_pm_enable_wake_irq_check(struct device *dev,
                                         bool can_change_status);
extern void dev_pm_disable_wake_irq_check(struct device *dev);
#ifdef CONFIG_PM_SLEEP

extern void device_wakeup_attach_irq(struct device *dev, struct wake_irq *wakeirq);
extern void device_wakeup_detach_irq(struct device *dev);
extern void device_wakeup_arm_wake_irqs(void);
extern void device_wakeup_disarm_wake_irqs(void);

#else

static inline void device_wakeup_attach_irq(struct device *dev,
                                            struct wake_irq *wakeirq) {}
static inline void device_wakeup_detach_irq(struct device *dev)
{
}

#endif /* CONFIG_PM_SLEEP */

/*
 * sysfs.c
 */

extern int dpm_sysfs_add(struct device *dev);
extern void dpm_sysfs_remove(struct device *dev);
extern void rpm_sysfs_remove(struct device *dev);
extern int wakeup_sysfs_add(struct device *dev);
extern void wakeup_sysfs_remove(struct device *dev);
extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
extern int pm_qos_sysfs_add_flags(struct device *dev);
extern void pm_qos_sysfs_remove_flags(struct device *dev);
extern int pm_qos_sysfs_add_latency_tolerance(struct device *dev);
extern void pm_qos_sysfs_remove_latency_tolerance(struct device *dev);

#else /* CONFIG_PM */

static inline void pm_runtime_early_init(struct device *dev)
{
        device_pm_init_common(dev);
}

static inline void pm_runtime_init(struct device *dev) {}
static inline void pm_runtime_reinit(struct device *dev) {}
static inline void pm_runtime_remove(struct device *dev) {}

static inline int dpm_sysfs_add(struct device *dev) { return 0; }
static inline void dpm_sysfs_remove(struct device *dev) {}

#endif

#ifdef CONFIG_PM_SLEEP

/* kernel/power/main.c */
extern int pm_async_enabled;

/* drivers/base/power/main.c */
Introduce new top level suspend and hibernation callbacks
Introduce 'struct pm_ops' and 'struct pm_ext_ops' ('ext' meaning
'extended') representing suspend and hibernation operations for bus
types, device classes, device types and device drivers.
Modify the PM core to use 'struct pm_ops' and 'struct pm_ext_ops'
objects, if defined, instead of the ->suspend(), ->resume(),
->suspend_late(), and ->resume_early() callbacks (the old callbacks
will be considered as legacy and gradually phased out).
The main purpose of doing this is to separate suspend (aka S2RAM and
standby) callbacks from hibernation callbacks in such a way that the
new callbacks won't take arguments and the semantics of each of them
will be clearly specified. This has been requested multiple times by
many people, including Linus himself, and the reason is that within the
current scheme, if ->resume() is called, for example, it's difficult to
say why it's been called (i.e. is it a resume from RAM or from
hibernation or a suspend/hibernation failure, etc.?).
The second purpose is to make the suspend/hibernation callbacks more
flexible so that device drivers can handle more than they can within
the current scheme. For example, some drivers may need to prevent
new children of the device from being registered before their
->suspend() callbacks are executed or they may want to carry out some
operations requiring the availability of some other devices, not
directly bound via the parent-child relationship, in order to prepare
for the execution of ->suspend(), etc.
Ultimately, we'd like to stop using the freezing of tasks for suspend
and therefore the drivers' suspend/hibernation code will have to take
care of the handling of the user space during suspend/hibernation.
That, in turn, would be difficult within the current scheme, without
the new ->prepare() and ->complete() callbacks.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
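The callback table this work evolved into is today's struct dev_pm_ops; a
minimal sketch of a driver providing separate suspend (S2RAM/standby) and
hibernation callbacks, with hypothetical function names:

#include <linux/pm.h>

static int foo_suspend(struct device *dev) { return 0; }  /* S2RAM/standby entry */
static int foo_resume(struct device *dev)  { return 0; }  /* S2RAM/standby exit */
static int foo_freeze(struct device *dev)  { return 0; }  /* hibernation image creation */
static int foo_restore(struct device *dev) { return 0; }  /* return from hibernation */

/* Referenced from the driver's .pm field. */
static const struct dev_pm_ops foo_pm_ops = {
        .suspend = foo_suspend,
        .resume  = foo_resume,
        .freeze  = foo_freeze,
        .restore = foo_restore,
};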
extern struct list_head dpm_list;       /* The active device list */

static inline struct device *to_device(struct list_head *entry)
{
        return container_of(entry, struct device, power.entry);
}

extern void device_pm_sleep_init(struct device *dev);
extern void device_pm_add(struct device *);
extern void device_pm_remove(struct device *);
extern void device_pm_move_before(struct device *, struct device *);
extern void device_pm_move_after(struct device *, struct device *);
extern void device_pm_move_last(struct device *);
extern void device_pm_check_callbacks(struct device *dev);
driver core: Functional dependencies tracking support
Currently, there is a problem with taking functional dependencies
between devices into account.
What I mean by a "functional dependency" is when the driver of device
B needs device A to be functional and (generally) its driver to be
present in order to work properly. This has certain consequences
for power management (suspend/resume and runtime PM ordering) and
shutdown ordering of these devices. In general, it also implies that
the driver of A needs to be working for B to be probed successfully
and it cannot be unbound from the device before the B's driver.
Support for representing those functional dependencies between
devices is added here to allow the driver core to track them and act
on them in certain cases where applicable.
The argument for doing that in the driver core is that there are
quite a few distinct use cases involving device dependencies, they
are relatively hard to get right in a driver (if one wants to
address all of them properly) and it only gets worse if multiplied
by the number of drivers potentially needing to do it. Moreover, at
least one case (asynchronous system suspend/resume) cannot be handled
in a single driver at all, because it requires the driver of A to
wait for B to suspend (during system suspend) and the driver of B to
wait for A to resume (during system resume).
For this reason, represent dependencies between devices as "links",
with the help of struct device_link objects each containing pointers
to the "linked" devices, a list node for each of them, status
information, flags, and an RCU head for synchronization.
Also add two new list heads, representing the lists of links to the
devices that depend on the given one (consumers) and to the devices
depended on by it (suppliers), and a "driver presence status" field
(needed for figuring out initial states of device links) to struct
device.
The entire data structure consisting of all of the lists of link
objects for all devices is protected by a mutex (for link object
addition/removal and for list walks during device driver probing
and removal) and by SRCU (for list walking in other cases that will
be introduced by subsequent change sets). If CONFIG_SRCU is not
selected, however, an rwsem is used for protecting the entire data
structure.
In addition, each link object has an internal status field whose
value reflects whether or not drivers are bound to the devices
pointed to by the link or probing/removal of their drivers is in
progress etc. That field is only modified under the device links
mutex, but it may be read outside of it in some cases (introduced by
subsequent change sets), so modifications of it are annotated with
WRITE_ONCE().
New links are added by calling device_link_add() which takes three
arguments: pointers to the devices in question and flags. In
particular, if DL_FLAG_STATELESS is set in the flags, the link status
is not to be taken into account for this link and the driver core
will not manage it. In turn, if DL_FLAG_AUTOREMOVE is set in the
flags, the driver core will remove the link automatically when the
consumer device driver unbinds from it.
One of the actions carried out by device_link_add() is to reorder
the lists used for device shutdown and system suspend/resume to
put the consumer device along with all of its children and all of
its consumers (and so on, recursively) to the ends of those lists
in order to ensure the right ordering between all of the supplier
and consumer devices.
For this reason, it is not possible to create a link between two
devices if the would-be supplier device already depends on the
would-be consumer device as either a direct descendant of it or a
consumer of one of its direct descendants or one of its consumers
and so on.
There are two types of link objects, persistent and non-persistent.
The persistent ones stay around until one of the target devices is
deleted, while the non-persistent ones are removed automatically when
the consumer driver unbinds from its device (i.e. they are assumed to
be valid only as long as the consumer device has a driver bound to
it). Persistent links are created by default and non-persistent
links are created when the DL_FLAG_AUTOREMOVE flag is passed
to device_link_add().
Both persistent and non-persistent device links can be deleted
with an explicit call to device_link_del().
Links created without the DL_FLAG_STATELESS flag set are managed
by the driver core using a simple state machine. There are 5 states
each link can be in: DORMANT (unused), AVAILABLE (the supplier driver
is present and functional), CONSUMER_PROBE (the consumer driver is
probing), ACTIVE (both supplier and consumer drivers are present and
functional), and SUPPLIER_UNBIND (the supplier driver is unbinding).
The driver core updates the link state automatically depending on
what happens to the linked devices and for each link state specific
actions are taken in addition to that.
For example, if the supplier driver unbinds from its device, the
driver core will also unbind the drivers of all of its consumers
automatically under the assumption that they cannot function
properly without the supplier. Analogously, the driver core will
only allow the consumer driver to bind to its device if the
supplier driver is present and functional (i.e. the link is in
the AVAILABLE state). If that's not the case, it will rely on
the existing deferred probing mechanism to wait for the supplier
driver to become available.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
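A minimal sketch of creating such a link from a consumer driver, assuming
the supplier's struct device has already been obtained (function name
hypothetical); with DL_FLAG_AUTOREMOVE the core drops the link when the
consumer driver unbinds:

#include <linux/device.h>

static int foo_link_to_supplier(struct device *consumer, struct device *supplier)
{
        struct device_link *link;

        /* Order PM, shutdown and probing of the consumer after the supplier. */
        link = device_link_add(consumer, supplier, DL_FLAG_AUTOREMOVE);
        if (!link)
                return -EINVAL;

        return 0;       /* device_link_del(link) would remove it explicitly */
}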
static inline bool device_pm_initialized(struct device *dev)
{
        return dev->power.in_dpm_list;
}

#else /* !CONFIG_PM_SLEEP */

static inline void device_pm_sleep_init(struct device *dev) {}

static inline void device_pm_add(struct device *dev) {}

static inline void device_pm_remove(struct device *dev)
{
        pm_runtime_remove(dev);
}

static inline void device_pm_move_before(struct device *deva,
                                         struct device *devb) {}
static inline void device_pm_move_after(struct device *deva,
                                        struct device *devb) {}
static inline void device_pm_move_last(struct device *dev) {}

static inline void device_pm_check_callbacks(struct device *dev) {}
static inline bool device_pm_initialized(struct device *dev)
{
        return device_is_registered(dev);
}

#endif /* !CONFIG_PM_SLEEP */

static inline void device_pm_init(struct device *dev)
{
        device_pm_init_common(dev);
        device_pm_sleep_init(dev);
        pm_runtime_init(dev);
}