License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them (even
  if <5 lines) were also included.
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
  considered to have no license information in it, and the top level
  COPYING file license applied.
For non-*/uapi/* files, that summary was:
   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0                                              11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
   SPDX license identifier                            # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                        930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file or if it had no licensing
  in it (per the prior point). Results summary:
   SPDX license identifier                             # files
   ---------------------------------------------------|-------
   GPL-2.0 WITH Linux-syscall-note                         270
   GPL-2.0+ WITH Linux-syscall-note                        169
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
   ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
   LGPL-2.1+ WITH Linux-syscall-note                        15
   GPL-1.0+ WITH Linux-syscall-note                         14
   ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
   LGPL-2.0+ WITH Linux-syscall-note                         4
   LGPL-2.1 WITH Linux-syscall-note                          3
   ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
   ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and to be revisited later.
In total, Kate, Philippe, and Thomas logged over 70 hours of manual
review on the spreadsheet to determine the SPDX license identifiers to
apply to the source files, with confirmation in some cases by lawyers
working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial version
of this patch, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
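For reference, the two comment styles involved look like this (following
the kernel convention of C comments in headers and C99/C++ comments in
.c files; shown here purely as an illustration):

    /* SPDX-License-Identifier: GPL-2.0 */   (in .h files)
    // SPDX-License-Identifier: GPL-2.0      (in .c files)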
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
/*
 * You SHOULD NOT be including this unless you're vsyscall
 * handling code or timekeeping internal code!
 */

#ifndef _LINUX_TIMEKEEPER_INTERNAL_H
#define _LINUX_TIMEKEEPER_INTERNAL_H

#include <linux/clocksource.h>
#include <linux/jiffies.h>
#include <linux/time.h>

/**
 * struct tk_read_base - base structure for timekeeping readout
 * @clock:	Current clocksource used for timekeeping.
 * @mask:	Bitmask for two's complement subtraction of non 64bit clocks
 * @cycle_last:	@clock cycle value at last update
 * @mult:	(NTP adjusted) multiplier for scaled math conversion
 * @shift:	Shift value for scaled math conversion
 * @xtime_nsec:	Shifted (fractional) nano seconds offset for readout
 * @base:	ktime_t (nanoseconds) base time for readout
 * @base_real:	Nanoseconds base value for clock REALTIME readout
 *
 * This struct has size 56 byte on 64 bit. Together with a seqcount it
 * occupies a single 64byte cache line.
 *
 * The struct is separate from struct timekeeper as it is also used
 * for the fast NMI safe accessors.
 *
 * @base_real is for the fast NMI safe accessor to allow reading clock
 * realtime from any context.
 */
struct tk_read_base {
	struct clocksource	*clock;
	u64			mask;
	u64			cycle_last;
	u32			mult;
	u32			shift;
	u64			xtime_nsec;
	ktime_t			base;
	u64			base_real;
};
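
/*
 * Illustrative sketch (not part of this header; the read helper name is
 * hypothetical): the timekeeping core turns a raw clocksource value into
 * nanoseconds using the fields above, roughly like this:
 *
 *	u64 cycles = read_clocksource_now(tkr);	// e.g. via tkr->clock
 *	u64 delta  = (cycles - tkr->cycle_last) & tkr->mask;
 *	u64 nsec   = (tkr->xtime_nsec + delta * tkr->mult) >> tkr->shift;
 *	ktime_t t  = ktime_add_ns(tkr->base, nsec);
 *
 * @mask makes the subtraction wrap correctly for clocksources narrower
 * than 64 bits; @mult and @shift implement scaled cycles-to-ns math.
 */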

/**
 * struct timekeeper - Structure holding internal timekeeping values.
 * @tkr_mono:		The readout base structure for CLOCK_MONOTONIC
 * @tkr_raw:		The readout base structure for CLOCK_MONOTONIC_RAW
 * @xtime_sec:		Current CLOCK_REALTIME time in seconds
 * @ktime_sec:		Current CLOCK_MONOTONIC time in seconds
 * @wall_to_monotonic:	CLOCK_REALTIME to CLOCK_MONOTONIC offset
 * @offs_real:		Offset clock monotonic -> clock realtime
 * @offs_boot:		Offset clock monotonic -> clock boottime
 * @offs_tai:		Offset clock monotonic -> clock tai
 * @tai_offset:		The current UTC to TAI offset in seconds
 * @clock_was_set_seq:	The sequence number of clock was set events
 * @cs_was_changed_seq:	The sequence number of clocksource change events
 * @next_leap_ktime:	CLOCK_MONOTONIC time value of a pending leap-second
 * @raw_sec:		CLOCK_MONOTONIC_RAW time in seconds
 * @monotonic_to_boot:	CLOCK_MONOTONIC to CLOCK_BOOTTIME offset
 * @cycle_interval:	Number of clock cycles in one NTP interval
 * @xtime_interval:	Number of clock shifted nano seconds in one NTP
 *			interval.
 * @xtime_remainder:	Shifted nano seconds left over when rounding
 *			@cycle_interval
 * @raw_interval:	Shifted raw nano seconds accumulated per NTP interval.
 * @ntp_error:		Difference between accumulated time and NTP time in ntp
 *			shifted nano seconds.
 * @ntp_error_shift:	Shift conversion between clock shifted nano seconds and
 *			ntp shifted nano seconds.
 * @last_warning:	Warning ratelimiter (DEBUG_TIMEKEEPING)
 * @underflow_seen:	Underflow warning flag (DEBUG_TIMEKEEPING)
 * @overflow_seen:	Overflow warning flag (DEBUG_TIMEKEEPING)
 *
 * Note: For timespec(64) based interfaces wall_to_monotonic is what
 * we need to add to xtime (or xtime corrected for sub jiffie times)
 * to get to monotonic time. Monotonic is pegged at zero at system
 * boot time, so wall_to_monotonic will be negative, however, we will
 * ALWAYS keep the tv_nsec part positive so we can use the usual
 * normalization.
 *
 * wall_to_monotonic is moved after resume from suspend for the
 * monotonic time not to jump. We need to add total_sleep_time to
 * wall_to_monotonic to get the real boot based time offset.
 *
 * wall_to_monotonic is no longer the boot time, getboottime must be
 * used instead.
 *
 * @monotonic_to_boot is a timespec64 representation of @offs_boot to
 * accelerate the VDSO update for CLOCK_BOOTTIME.
 */
struct timekeeper {
	struct tk_read_base	tkr_mono;
	struct tk_read_base	tkr_raw;
	u64			xtime_sec;
	unsigned long		ktime_sec;
	struct timespec64	wall_to_monotonic;
	ktime_t			offs_real;
	ktime_t			offs_boot;
	ktime_t			offs_tai;
	s32			tai_offset;
	unsigned int		clock_was_set_seq;
	u8			cs_was_changed_seq;
	ktime_t			next_leap_ktime;
	u64			raw_sec;
	struct timespec64	monotonic_to_boot;

	/* The following members are for timekeeping internal use */
	u64			cycle_interval;
	u64			xtime_interval;
	s64			xtime_remainder;
	u64			raw_interval;
	/* The ntp_tick_length() value currently being used.
	 * This cached copy ensures we consistently apply the tick
	 * length for an entire tick, as ntp_tick_length may change
	 * mid-tick, and we don't want to apply that new value to
	 * the tick in progress.
	 */
	u64			ntp_tick;
	/* Difference between accumulated time and NTP time in ntp
	 * shifted nano seconds. */
	s64			ntp_error;
	u32			ntp_error_shift;
	u32			ntp_err_mult;
	/* Flag used to avoid updating NTP twice with same second */
	u32			skip_second_overflow;
#ifdef CONFIG_DEBUG_TIMEKEEPING
	long			last_warning;
	/*
	 * These simple flag variables are managed
	 * without locks, which is racy, but they are
	 * ok since we don't really care about being
	 * super precise about how many events were
	 * seen, just that a problem was observed.
	 */
	int			underflow_seen;
	int			overflow_seen;
#endif
};
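
/*
 * Illustrative sketch (not part of this header): the offs_* fields let
 * the timekeeping core derive the other clock ids from a single
 * CLOCK_MONOTONIC readout, roughly:
 *
 *	ktime_t mono = ...;				// CLOCK_MONOTONIC
 *	ktime_t real = ktime_add(mono, tk->offs_real);	// CLOCK_REALTIME
 *	ktime_t boot = ktime_add(mono, tk->offs_boot);	// CLOCK_BOOTTIME
 *	ktime_t tai  = ktime_add(mono, tk->offs_tai);	// CLOCK_TAI
 */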

#ifdef CONFIG_GENERIC_TIME_VSYSCALL

extern void update_vsyscall(struct timekeeper *tk);
extern void update_vsyscall_tz(void);

#else

static inline void update_vsyscall(struct timekeeper *tk)
{
}
static inline void update_vsyscall_tz(void)
{
}
#endif

#endif /* _LINUX_TIMEKEEPER_INTERNAL_H */