/*
 *  Copyright (c) 2005, 2006 Andrea Bittau <a.bittau@cs.ucl.ac.uk>
 *
 *  Changes to meet Linux coding standards, and DCCP infrastructure fixes.
 *
 *  Copyright (c) 2006 Arnaldo Carvalho de Melo <acme@conectiva.com.br>
 *
 *  This program is free software; you can redistribute it and/or modify
 *  it under the terms of the GNU General Public License as published by
 *  the Free Software Foundation; either version 2 of the License, or
 *  (at your option) any later version.
 *
 *  This program is distributed in the hope that it will be useful,
 *  but WITHOUT ANY WARRANTY; without even the implied warranty of
 *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *  GNU General Public License for more details.
 *
 *  You should have received a copy of the GNU General Public License
 *  along with this program; if not, write to the Free Software
 *  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

/*
 * This implementation should follow RFC 4341
 */
#include <linux/slab.h>
#include "../feat.h"
#include "ccid2.h"


#ifdef CONFIG_IP_DCCP_CCID2_DEBUG
static bool ccid2_debug;
#define ccid2_pr_debug(format, a...)	DCCP_PR_DEBUG(ccid2_debug, format, ##a)
#else
#define ccid2_pr_debug(format, a...)
#endif

static int ccid2_hc_tx_alloc_seq(struct ccid2_hc_tx_sock *hc)
{
	struct ccid2_seq *seqp;
	int i;

	/* check if we have space to preserve the pointer to the buffer */
	if (hc->tx_seqbufc >= (sizeof(hc->tx_seqbuf) /
			       sizeof(struct ccid2_seq *)))
		return -ENOMEM;

	/* allocate buffer and initialize linked list */
	seqp = kmalloc(CCID2_SEQBUF_LEN * sizeof(struct ccid2_seq), gfp_any());
	if (seqp == NULL)
		return -ENOMEM;

	for (i = 0; i < (CCID2_SEQBUF_LEN - 1); i++) {
		seqp[i].ccid2s_next = &seqp[i + 1];
		seqp[i + 1].ccid2s_prev = &seqp[i];
	}
	seqp[CCID2_SEQBUF_LEN - 1].ccid2s_next = seqp;
	seqp->ccid2s_prev = &seqp[CCID2_SEQBUF_LEN - 1];

	/* This is the first allocation.  Initiate the head and tail. */
	if (hc->tx_seqbufc == 0)
		hc->tx_seqh = hc->tx_seqt = seqp;
	else {
		/* link the existing list with the one we just created */
		hc->tx_seqh->ccid2s_next = seqp;
		seqp->ccid2s_prev	 = hc->tx_seqh;

		hc->tx_seqt->ccid2s_prev = &seqp[CCID2_SEQBUF_LEN - 1];
		seqp[CCID2_SEQBUF_LEN - 1].ccid2s_next = hc->tx_seqt;
	}

	/* store the original pointer to the buffer so we can free it */
	hc->tx_seqbuf[hc->tx_seqbufc] = seqp;
	hc->tx_seqbufc++;

	return 0;
}

static int ccid2_hc_tx_send_packet(struct sock *sk, struct sk_buff *skb)
{
	if (ccid2_cwnd_network_limited(ccid2_hc_tx_sk(sk)))
		return CCID_PACKET_WILL_DEQUEUE_LATER;
	return CCID_PACKET_SEND_AT_ONCE;
}

static void ccid2_change_l_ack_ratio(struct sock *sk, u32 val)
{
	u32 max_ratio = DIV_ROUND_UP(ccid2_hc_tx_sk(sk)->tx_cwnd, 2);

	/*
	 * Ensure that Ack Ratio does not exceed ceil(cwnd/2), which is (2) from
	 * RFC 4341, 6.1.2. We ignore the statement that Ack Ratio 2 is always
	 * acceptable since this causes starvation/deadlock whenever cwnd < 2.
	 * The same problem arises when Ack Ratio is 0 (ie. Ack Ratio disabled).
	 */
	if (val == 0 || val > max_ratio) {
		DCCP_WARN("Limiting Ack Ratio (%u) to %u\n", val, max_ratio);
		val = max_ratio;
	}
	dccp_feat_signal_nn_change(sk, DCCPF_ACK_RATIO,
				   min_t(u32, val, DCCPF_ACK_RATIO_MAX));
}

static void ccid2_check_l_ack_ratio(struct sock *sk)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);

	/*
	 * After a loss, idle period, application limited period, or RTO we
	 * need to check that the ack ratio is still less than the congestion
	 * window. Otherwise, we will send an entire congestion window of
	 * packets and get no response, because we haven't sent ack ratio
	 * packets yet.
	 * If the ack ratio does need to be reduced, we reduce it to half of
	 * the congestion window (or 1 if that's zero) instead of to the
	 * congestion window. This prevents problems if one ack is lost.
	 */
	if (dccp_feat_nn_get(sk, DCCPF_ACK_RATIO) > hc->tx_cwnd)
		ccid2_change_l_ack_ratio(sk, hc->tx_cwnd / 2 ? : 1U);
}

static void ccid2_change_l_seq_window(struct sock *sk, u64 val)
{
	dccp_feat_signal_nn_change(sk, DCCPF_SEQUENCE_WINDOW,
				   clamp_val(val, DCCPF_SEQ_WMIN,
						  DCCPF_SEQ_WMAX));
}

static void ccid2_hc_tx_rto_expire(struct timer_list *t)
{
	struct ccid2_hc_tx_sock *hc = from_timer(hc, t, tx_rtotimer);
	struct sock *sk = hc->sk;
	const bool sender_was_blocked = ccid2_cwnd_network_limited(hc);

	bh_lock_sock(sk);
	if (sock_owned_by_user(sk)) {
		sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + HZ / 5);
		goto out;
	}

	ccid2_pr_debug("RTO_EXPIRE\n");

	/* back-off timer */
	hc->tx_rto <<= 1;
	if (hc->tx_rto > DCCP_RTO_MAX)
		hc->tx_rto = DCCP_RTO_MAX;

	/* adjust pipe, cwnd etc */
	hc->tx_ssthresh = hc->tx_cwnd / 2;
	if (hc->tx_ssthresh < 2)
		hc->tx_ssthresh = 2;
	hc->tx_cwnd	= 1;
	hc->tx_pipe	= 0;

	/* clear state about stuff we sent */
	hc->tx_seqt = hc->tx_seqh;
	hc->tx_packets_acked = 0;

	/* clear ack ratio state. */
	hc->tx_rpseq    = 0;
	hc->tx_rpdupack = -1;
	ccid2_change_l_ack_ratio(sk, 1);

	/* if we were blocked before, we may now send cwnd=1 packet */
	if (sender_was_blocked)
		tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
	/* restart backed-off timer */
	sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto);
out:
	bh_unlock_sock(sk);
	sock_put(sk);
}

/*
 *	Congestion window validation (RFC 2861).
 */
static bool ccid2_do_cwv = true;
module_param(ccid2_do_cwv, bool, 0644);
MODULE_PARM_DESC(ccid2_do_cwv, "Perform RFC2861 Congestion Window Validation");

/**
 * ccid2_update_used_window  -  Track how much of cwnd is actually used
 * This is done in addition to CWV. The sender needs to have an idea of how many
 * packets may be in flight, to set the local Sequence Window value accordingly
 * (RFC 4340, 7.5.2). The CWV mechanism is exploited to keep track of the
 * maximum-used window. We use an EWMA low-pass filter to filter out noise.
 */
static void ccid2_update_used_window(struct ccid2_hc_tx_sock *hc, u32 new_wnd)
{
	hc->tx_expected_wnd = (3 * hc->tx_expected_wnd + new_wnd) / 4;
}

/* This borrows the code of tcp_cwnd_application_limited() */
static void ccid2_cwnd_application_limited(struct sock *sk, const u32 now)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	/* don't reduce cwnd below the initial window (IW) */
	u32 init_win = rfc3390_bytes_to_packets(dccp_sk(sk)->dccps_mss_cache),
	    win_used = max(hc->tx_cwnd_used, init_win);

	if (win_used < hc->tx_cwnd) {
		hc->tx_ssthresh = max(hc->tx_ssthresh,
				      (hc->tx_cwnd >> 1) + (hc->tx_cwnd >> 2));
		hc->tx_cwnd = (hc->tx_cwnd + win_used) >> 1;
	}
	hc->tx_cwnd_used  = 0;
	hc->tx_cwnd_stamp = now;

	ccid2_check_l_ack_ratio(sk);
}

/* This borrows the code of tcp_cwnd_restart() */
static void ccid2_cwnd_restart(struct sock *sk, const u32 now)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	u32 cwnd = hc->tx_cwnd, restart_cwnd,
	    iwnd = rfc3390_bytes_to_packets(dccp_sk(sk)->dccps_mss_cache);

	hc->tx_ssthresh = max(hc->tx_ssthresh, (cwnd >> 1) + (cwnd >> 2));

	/* don't reduce cwnd below the initial window (IW) */
	restart_cwnd = min(cwnd, iwnd);
	cwnd >>= (now - hc->tx_lsndtime) / hc->tx_rto;
	hc->tx_cwnd = max(cwnd, restart_cwnd);

	hc->tx_cwnd_stamp = now;
	hc->tx_cwnd_used  = 0;

	ccid2_check_l_ack_ratio(sk);
}

static void ccid2_hc_tx_packet_sent(struct sock *sk, unsigned int len)
{
	struct dccp_sock *dp = dccp_sk(sk);
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	const u32 now = ccid2_jiffies32;
	struct ccid2_seq *next;

	/* slow-start after idle periods (RFC 2581, RFC 2861) */
	if (ccid2_do_cwv && !hc->tx_pipe &&
	    (s32)(now - hc->tx_lsndtime) >= hc->tx_rto)
		ccid2_cwnd_restart(sk, now);

	hc->tx_lsndtime = now;
	hc->tx_pipe    += 1;

	/* see whether cwnd was fully used (RFC 2861), update expected window */
	if (ccid2_cwnd_network_limited(hc)) {
		ccid2_update_used_window(hc, hc->tx_cwnd);
		hc->tx_cwnd_used  = 0;
		hc->tx_cwnd_stamp = now;
	} else {
		if (hc->tx_pipe > hc->tx_cwnd_used)
			hc->tx_cwnd_used = hc->tx_pipe;

		ccid2_update_used_window(hc, hc->tx_cwnd_used);

		if (ccid2_do_cwv && (s32)(now - hc->tx_cwnd_stamp) >= hc->tx_rto)
			ccid2_cwnd_application_limited(sk, now);
	}

	hc->tx_seqh->ccid2s_seq   = dp->dccps_gss;
	hc->tx_seqh->ccid2s_acked = 0;
	hc->tx_seqh->ccid2s_sent  = now;

	next = hc->tx_seqh->ccid2s_next;
	/* check if we need to alloc more space */
	if (next == hc->tx_seqt) {
		if (ccid2_hc_tx_alloc_seq(hc)) {
			DCCP_CRIT("packet history - out of memory!");
			/* FIXME: find a more graceful way to bail out */
			return;
		}
		next = hc->tx_seqh->ccid2s_next;
		BUG_ON(next == hc->tx_seqt);
	}
	hc->tx_seqh = next;

	ccid2_pr_debug("cwnd=%d pipe=%d\n", hc->tx_cwnd, hc->tx_pipe);

	/*
	 * FIXME: The code below is broken and the variables have been removed
	 * from the socket struct. The `ackloss' variable was always set to 0,
	 * and with arsent there are several problems:
	 *  (i) it doesn't just count the number of Acks, but all sent packets;
	 *  (ii) it is expressed in # of packets, not # of windows, so the
	 *  comparison below uses the wrong formula: Appendix A of RFC 4341
	 *  comes up with the number K = cwnd / (R^2 - R) of consecutive windows
	 *  of data with no lost or marked Ack packets. If arsent were the # of
	 *  consecutive Acks received without loss, then Ack Ratio needs to be
	 *  decreased by 1 when
	 *	arsent >=  K * cwnd / R  =  cwnd^2 / (R^3 - R^2)
	 *  where cwnd / R is the number of Acks received per window of data
	 *  (cf. RFC 4341, App. A). The problems are that
	 *  - arsent counts other packets as well;
	 *  - the comparison uses a formula different from RFC 4341;
	 *  - computing a cubic/quadratic equation each time is too complicated.
	 *  Hence a different algorithm is needed.
	 */
#if 0
	/* Ack Ratio.  Need to maintain a concept of how many windows we sent */
	hc->tx_arsent++;
	/* We had an ack loss in this window... */
	if (hc->tx_ackloss) {
		if (hc->tx_arsent >= hc->tx_cwnd) {
			hc->tx_arsent  = 0;
			hc->tx_ackloss = 0;
		}
	} else {
		/* No acks lost up to now... */
		/* decrease ack ratio if enough packets were sent */
		if (dp->dccps_l_ack_ratio > 1) {
			/* XXX don't calculate denominator each time */
			int denom = dp->dccps_l_ack_ratio * dp->dccps_l_ack_ratio -
				    dp->dccps_l_ack_ratio;

			denom = hc->tx_cwnd * hc->tx_cwnd / denom;

			if (hc->tx_arsent >= denom) {
				ccid2_change_l_ack_ratio(sk, dp->dccps_l_ack_ratio - 1);
				hc->tx_arsent = 0;
			}
		} else {
			/* we can't increase ack ratio further [1] */
			hc->tx_arsent = 0; /* or maybe set it to cwnd*/
		}
	}
#endif

	sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto);

#ifdef CONFIG_IP_DCCP_CCID2_DEBUG
	do {
		struct ccid2_seq *seqp = hc->tx_seqt;

		while (seqp != hc->tx_seqh) {
			ccid2_pr_debug("out seq=%llu acked=%d time=%u\n",
				       (unsigned long long)seqp->ccid2s_seq,
				       seqp->ccid2s_acked, seqp->ccid2s_sent);
			seqp = seqp->ccid2s_next;
		}
	} while (0);
	ccid2_pr_debug("=========\n");
#endif
}
|
|
|
|
|
dccp ccid-2: Replace broken RTT estimator with better algorithm
The current CCID-2 RTT estimator code is in parts broken and lags behind the
suggestions in RFC2988 of using scaled variants for SRTT/RTTVAR.
That code is replaced by the present patch, which reuses the Linux TCP RTT
estimator code.
Further details:
----------------
1. The minimum RTO of previously one second has been replaced with TCP's, since
RFC4341, sec. 5 says that the minimum of 1 sec. (suggested in RFC2988, 2.4)
is not necessary. Instead, the TCP_RTO_MIN is used, which agrees with DCCP's
concept of a default RTT (RFC 4340, 3.4).
2. The maximum RTO has been set to DCCP_RTO_MAX (64 sec), which agrees with
RFC2988, (2.5).
3. De-inlined the function ccid2_new_ack().
4. Added a FIXME: the RTT is sampled several times per Ack Vector, which will
give the wrong estimate. It should be replaced with one sample per Ack.
However, at the moment this can not be resolved easily, since
- it depends on TX history code (which also needs some work),
- the cleanest solution is not to use the `sent' time at all (saves 4 bytes
per entry) and use DCCP timestamps / elapsed time to estimated the RTT,
which however is non-trivial to get right (but needs to be done).
Reasons for reusing the Linux TCP estimator algorithm:
------------------------------------------------------
Some time was spent to find a better alternative, using basic RFC2988 as a first
step. Further analysis and experimentation showed that the Linux TCP RTO
estimator is superior to a basic RFC2988 implementation. A summary is on
http://www.erg.abdn.ac.uk/users/gerrit/dccp/notes/ccid2/rto_estimator/
In addition, this estimator fared well in a recent empirical evaluation:
Rewaskar, Sushant, Jasleen Kaur and F. Donelson Smith.
A Performance Study of Loss Detection/Recovery in Real-world TCP
Implementations. Proceedings of 15th IEEE International
Conference on Network Protocols (ICNP-07), 2007.
Thus there is significant benefit in reusing the existing TCP code.
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
/**
 * ccid2_rtt_estimator - Sample RTT and compute RTO using RFC2988 algorithm
 * This code is almost identical with TCP's tcp_rtt_estimator(), since
 * - it has a higher sampling frequency (recommended by RFC 1323),
 * - the RTO does not collapse into RTT due to RTTVAR going towards zero,
 * - it is simple (cf. more complex proposals such as Eifel timer or research
 *   which suggests that the gain should be set according to window size),
 * - in tests it was found to work well with CCID2 [gerrit].
 */
static void ccid2_rtt_estimator(struct sock *sk, const long mrtt)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	long m = mrtt ? : 1;

	if (hc->tx_srtt == 0) {
		/* First measurement m */
		hc->tx_srtt = m << 3;
		hc->tx_mdev = m << 1;

		hc->tx_mdev_max = max(hc->tx_mdev, tcp_rto_min(sk));
		hc->tx_rttvar   = hc->tx_mdev_max;

		hc->tx_rtt_seq  = dccp_sk(sk)->dccps_gss;
	} else {
		/* Update scaled SRTT as SRTT += 1/8 * (m - SRTT) */
		m -= (hc->tx_srtt >> 3);
		hc->tx_srtt += m;

		/* Similarly, update scaled mdev with regard to |m| */
		if (m < 0) {
			m = -m;
			m -= (hc->tx_mdev >> 2);
			/*
			 * This neutralises RTO increase when RTT < SRTT - mdev
			 * (see P. Sarolahti, A. Kuznetsov,"Congestion Control
			 * in Linux TCP", USENIX 2002, pp. 49-62).
			 */
			if (m > 0)
				m >>= 3;
		} else {
			m -= (hc->tx_mdev >> 2);
		}
		hc->tx_mdev += m;

		if (hc->tx_mdev > hc->tx_mdev_max) {
			hc->tx_mdev_max = hc->tx_mdev;
			if (hc->tx_mdev_max > hc->tx_rttvar)
				hc->tx_rttvar = hc->tx_mdev_max;
		}

		/*
		 * Decay RTTVAR at most once per flight, exploiting that
		 *  1) pipe <= cwnd <= Sequence_Window = W  (RFC 4340, 7.5.2)
		 *  2) AWL = GSS-W+1 <= GAR <= GSS          (RFC 4340, 7.5.1)
		 * GAR is a useful bound for FlightSize = pipe.
		 * AWL is probably too low here, as it over-estimates pipe.
		 */
		if (after48(dccp_sk(sk)->dccps_gar, hc->tx_rtt_seq)) {
			if (hc->tx_mdev_max < hc->tx_rttvar)
				hc->tx_rttvar -= (hc->tx_rttvar -
						  hc->tx_mdev_max) >> 2;
			hc->tx_rtt_seq  = dccp_sk(sk)->dccps_gss;
			hc->tx_mdev_max = tcp_rto_min(sk);
		}
	}

	/*
	 * Set RTO from SRTT and RTTVAR
	 * As in TCP, 4 * RTTVAR >= TCP_RTO_MIN, giving a minimum RTO of 200 ms.
	 * This agrees with RFC 4341, 5:
	 *	"Because DCCP does not retransmit data, DCCP does not require
	 *	 TCP's recommended minimum timeout of one second".
	 */
	hc->tx_rto = (hc->tx_srtt >> 3) + hc->tx_rttvar;

	if (hc->tx_rto > DCCP_RTO_MAX)
		hc->tx_rto = DCCP_RTO_MAX;
}
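The estimator above works entirely in scaled fixed point: tx_srtt holds 8*SRTT and tx_mdev holds 4*mdev, so the EWMA gains of 1/8 and 1/4 reduce to shifts. A minimal user-space sketch of the same arithmetic, for illustration only: the struct, the RTO cap constant, and the omission of the per-flight RTTVAR decay and the Sarolahti/Kuznetsov damping are simplifications and not part of the kernel code.

```c
#include <assert.h>

/* Hypothetical user-space mirror of the scaled estimator state:
 * srtt8 holds 8*SRTT, mdev4 holds 4*mdev, rttvar4 holds 4*RTTVAR. */
struct rtt_est {
	long srtt8, mdev4, rttvar4, rto;
};

#define RTO_MAX 64000 /* illustrative cap, same unit as the samples */

static void rtt_sample(struct rtt_est *e, long m)
{
	if (m <= 0)
		m = 1;
	if (e->srtt8 == 0) {		/* first measurement */
		e->srtt8   = m << 3;	/* SRTT = m */
		e->mdev4   = m << 1;	/* mdev = m/2 */
		e->rttvar4 = e->mdev4;
	} else {
		m -= (e->srtt8 >> 3);	/* m is now the error term */
		e->srtt8 += m;		/* SRTT += 1/8 * error */
		if (m < 0)
			m = -m;
		m -= (e->mdev4 >> 2);
		e->mdev4 += m;		/* mdev += 1/4 * (|error| - mdev) */
		if (e->mdev4 > e->rttvar4)
			e->rttvar4 = e->mdev4;
	}
	/* RTO = SRTT + 4*RTTVAR, capped as in the kernel code above */
	e->rto = (e->srtt8 >> 3) + e->rttvar4;
	if (e->rto > RTO_MAX)
		e->rto = RTO_MAX;
}
```

With a steady 100-unit sample the sketch converges as expected: SRTT stays at 100 while mdev slowly decays.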

static void ccid2_new_ack(struct sock *sk, struct ccid2_seq *seqp,
			  unsigned int *maxincr)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	struct dccp_sock *dp = dccp_sk(sk);
	int r_seq_used = hc->tx_cwnd / dp->dccps_l_ack_ratio;

	if (hc->tx_cwnd < dp->dccps_l_seq_win &&
	    r_seq_used < dp->dccps_r_seq_win) {
		if (hc->tx_cwnd < hc->tx_ssthresh) {
dccp ccid-2: increment cwnd correctly
This patch fixes an issue where CCID-2 will not increase the congestion
window for numerous RTTs after an idle period, application-limited period,
or a loss once the algorithm is in Congestion Avoidance.
What happens is that, when CCID-2 is in Congestion Avoidance mode, it will
increase hc->tx_packets_acked by one for every packet and will increment cwnd
every cwnd packets. However, if there is now an idle period in the connection,
cwnd will be reduced, possibly below the slow start threshold. This will
cause the connection to go into Slow Start. However, in Slow Start CCID-2
performs this test to increment cwnd every second ack:
++hc->tx_packets_acked == 2
Unfortunately, this will be incorrect, if cwnd previous to the idle period
was larger than 2 and if tx_packets_acked was close to cwnd. For example:
cwnd=50 and tx_packets_acked=45.
In this case, the current code, will increment tx_packets_acked until it
equals two, which will only be once tx_packets_acked (an unsigned 32-bit
integer) overflows.
My fix is simply to change that test for tx_packets_acked greater than or
equal to two in slow start.
Signed-off-by: Samuel Jero <sj323707@ohio.edu>
Acked-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
2011-07-25 10:05:16 +07:00
			if (*maxincr > 0 && ++hc->tx_packets_acked >= 2) {
				hc->tx_cwnd += 1;
				*maxincr    -= 1;
				hc->tx_packets_acked = 0;
			}
		} else if (++hc->tx_packets_acked >= hc->tx_cwnd) {
			hc->tx_cwnd += 1;
			hc->tx_packets_acked = 0;
		}
	}

	/*
	 * Adjust the local sequence window and the ack ratio to allow about
	 * 5 times the number of packets in the network (RFC 4340 7.5.2)
	 */
	if (r_seq_used * CCID2_WIN_CHANGE_FACTOR >= dp->dccps_r_seq_win)
		ccid2_change_l_ack_ratio(sk, dp->dccps_l_ack_ratio * 2);
	else if (r_seq_used * CCID2_WIN_CHANGE_FACTOR < dp->dccps_r_seq_win/2)
		ccid2_change_l_ack_ratio(sk, dp->dccps_l_ack_ratio / 2 ? : 1U);

	if (hc->tx_cwnd * CCID2_WIN_CHANGE_FACTOR >= dp->dccps_l_seq_win)
		ccid2_change_l_seq_window(sk, dp->dccps_l_seq_win * 2);
	else if (hc->tx_cwnd * CCID2_WIN_CHANGE_FACTOR < dp->dccps_l_seq_win/2)
		ccid2_change_l_seq_window(sk, dp->dccps_l_seq_win / 2);

	/*
	 * FIXME: RTT is sampled several times per acknowledgment (for each
	 * entry in the Ack Vector), instead of once per Ack (as in TCP SACK).
	 * This causes the RTT to be over-estimated, since the older entries
	 * in the Ack Vector have earlier sending times.
	 * The cleanest solution is to not use the ccid2s_sent field at all
	 * and instead use DCCP timestamps: requires changes in other places.
	 */
	ccid2_rtt_estimator(sk, ccid2_jiffies32 - seqp->ccid2s_sent);
}
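The window-growth accounting above can be exercised in isolation: slow start adds one packet to cwnd per two newly-acked packets (bounded by maxincr per Ack Vector), while congestion avoidance adds one packet per full window of acks. A user-space sketch with the kernel socket state reduced to a small struct; the struct and names are local to this illustration:

```c
#include <assert.h>

/* Illustrative cwnd accounting mirroring the two branches above. */
struct cc {
	unsigned cwnd, ssthresh, packets_acked;
};

static void on_new_ack(struct cc *c, unsigned *maxincr)
{
	if (c->cwnd < c->ssthresh) {			/* slow start */
		if (*maxincr > 0 && ++c->packets_acked >= 2) {
			c->cwnd += 1;
			*maxincr -= 1;
			c->packets_acked = 0;
		}
	} else if (++c->packets_acked >= c->cwnd) {	/* cong. avoidance */
		c->cwnd += 1;
		c->packets_acked = 0;
	}
}
```

Note the `>= 2` test rather than `== 2`: as the commit message above explains, an exact-equality test can be skipped entirely when packets_acked is already large after an idle period, stalling growth until the counter wraps.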

static void ccid2_congestion_event(struct sock *sk, struct ccid2_seq *seqp)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);

	if ((s32)(seqp->ccid2s_sent - hc->tx_last_cong) < 0) {
		ccid2_pr_debug("Multiple losses in an RTT---treating as one\n");
		return;
	}

	hc->tx_last_cong = ccid2_jiffies32;

	hc->tx_cwnd     = hc->tx_cwnd / 2 ? : 1U;
	hc->tx_ssthresh = max(hc->tx_cwnd, 2U);

	ccid2_check_l_ack_ratio(sk);
}
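The multiplicative decrease uses the GNU `?:` extension: `hc->tx_cwnd / 2 ? : 1U` halves the window but never lets it drop below one packet, and ssthresh is floored at two. The same clamping written in portable C, as a standalone sketch:

```c
#include <assert.h>

/* Multiplicative decrease as in ccid2_congestion_event(): halve cwnd,
 * flooring cwnd at 1 and ssthresh at 2 (the kernel uses GNU "?:"). */
static void halve(unsigned *cwnd, unsigned *ssthresh)
{
	*cwnd     = *cwnd / 2 ? *cwnd / 2 : 1U;
	*ssthresh = *cwnd > 2U ? *cwnd : 2U;
}
```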

static int ccid2_hc_tx_parse_options(struct sock *sk, u8 packet_type,
				     u8 option, u8 *optval, u8 optlen)
{
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);

	switch (option) {
	case DCCPO_ACK_VECTOR_0:
	case DCCPO_ACK_VECTOR_1:
		return dccp_ackvec_parsed_add(&hc->tx_av_chunks, optval, optlen,
					      option - DCCPO_ACK_VECTOR_0);
	}
	return 0;
}
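The receive path below unpacks Ack Vector cells with dccp_ackvec_state() and dccp_ackvec_runlen(): each cell carries a 2-bit state in the top bits and a 6-bit run length in the low bits, covering runlen+1 consecutive sequence numbers (RFC 4340, 11.4). A standalone sketch of that layout; the macro and function names here are local to the sketch, not the kernel's:

```c
#include <assert.h>

/* Ack Vector cell layout per RFC 4340, 11.4: 2-bit state, 6-bit run length */
#define AV_RECEIVED	0x00	/* packet received */
#define AV_ECN_MARKED	0x40	/* received ECN-marked */
#define AV_NOT_RECEIVED	0xC0	/* packet not (yet) received */
#define AV_MAX_RUNLEN	0x3F

static unsigned char av_state(unsigned char cell)
{
	return cell & ~AV_MAX_RUNLEN;	/* top two bits */
}

static unsigned char av_runlen(unsigned char cell)
{
	return cell & AV_MAX_RUNLEN;	/* cell covers runlen + 1 seqnos */
}
```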

static void ccid2_hc_tx_packet_recv(struct sock *sk, struct sk_buff *skb)
{
	struct dccp_sock *dp = dccp_sk(sk);
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
	const bool sender_was_blocked = ccid2_cwnd_network_limited(hc);
	struct dccp_ackvec_parsed *avp;
	u64 ackno, seqno;
	struct ccid2_seq *seqp;
	int done = 0;
	unsigned int maxincr = 0;

	/* check reverse path congestion */
	seqno = DCCP_SKB_CB(skb)->dccpd_seq;

	/* XXX this whole "algorithm" is broken. Need to fix it to keep track
	 * of the seqnos of the dupacks so that rpseq and rpdupack are correct
	 * -sorbo.
	 */
	/* need to bootstrap */
	if (hc->tx_rpdupack == -1) {
		hc->tx_rpdupack = 0;
		hc->tx_rpseq    = seqno;
	} else {
		/* check if packet is consecutive */
		if (dccp_delta_seqno(hc->tx_rpseq, seqno) == 1)
			hc->tx_rpseq = seqno;
		/* it's a later packet */
		else if (after48(seqno, hc->tx_rpseq)) {
			hc->tx_rpdupack++;

			/* check if we got enough dupacks */
			if (hc->tx_rpdupack >= NUMDUPACK) {
				hc->tx_rpdupack = -1; /* XXX lame */
				hc->tx_rpseq    = 0;
#ifdef __CCID2_COPES_GRACEFULLY_WITH_ACK_CONGESTION_CONTROL__
				/*
				 * FIXME: Ack Congestion Control is broken; in
				 * the current state instabilities occurred with
				 * Ack Ratios greater than 1; causing hang-ups
				 * and long RTO timeouts. This needs to be fixed
				 * before opening up dynamic changes. -- gerrit
				 */
				ccid2_change_l_ack_ratio(sk, 2 * dp->dccps_l_ack_ratio);
#endif
			}
		}
	}

	/* check forward path congestion */
	if (dccp_packet_without_ack(skb))
		return;

	/* still didn't send out new data packets */
	if (hc->tx_seqh == hc->tx_seqt)
		goto done;

	ackno = DCCP_SKB_CB(skb)->dccpd_ack_seq;
	if (after48(ackno, hc->tx_high_ack))
		hc->tx_high_ack = ackno;

	seqp = hc->tx_seqt;
	while (before48(seqp->ccid2s_seq, ackno)) {
		seqp = seqp->ccid2s_next;
		if (seqp == hc->tx_seqh) {
			seqp = hc->tx_seqh->ccid2s_prev;
			break;
		}
	}

	/*
	 * In slow-start, cwnd can increase up to a maximum of Ack Ratio/2
	 * packets per acknowledgement. Rounding up avoids that cwnd is not
	 * advanced when Ack Ratio is 1 and gives a slight edge otherwise.
	 */
	if (hc->tx_cwnd < hc->tx_ssthresh)
		maxincr = DIV_ROUND_UP(dp->dccps_l_ack_ratio, 2);

	/* go through all ack vectors */
	list_for_each_entry(avp, &hc->tx_av_chunks, node) {
		/* go through this ack vector */
		for (; avp->len--; avp->vec++) {
			u64 ackno_end_rl = SUB48(ackno,
						 dccp_ackvec_runlen(avp->vec));

			ccid2_pr_debug("ackvec %llu |%u,%u|\n",
				       (unsigned long long)ackno,
				       dccp_ackvec_state(avp->vec) >> 6,
				       dccp_ackvec_runlen(avp->vec));
			/* if the seqno we are analyzing is larger than the
			 * current ackno, then move towards the tail of our
			 * seqnos.
			 */
			while (after48(seqp->ccid2s_seq, ackno)) {
				if (seqp == hc->tx_seqt) {
					done = 1;
					break;
				}
				seqp = seqp->ccid2s_prev;
			}
			if (done)
				break;

			/* check all seqnos in the range of the vector
			 * run length
			 */
			while (between48(seqp->ccid2s_seq,ackno_end_rl,ackno)) {
				const u8 state = dccp_ackvec_state(avp->vec);

				/* new packet received or marked */
				if (state != DCCPAV_NOT_RECEIVED &&
				    !seqp->ccid2s_acked) {
					if (state == DCCPAV_ECN_MARKED)
						ccid2_congestion_event(sk,
								       seqp);
					else
						ccid2_new_ack(sk, seqp,
							      &maxincr);

					seqp->ccid2s_acked = 1;
					ccid2_pr_debug("Got ack for %llu\n",
						       (unsigned long long)seqp->ccid2s_seq);
					hc->tx_pipe--;
				}
				if (seqp == hc->tx_seqt) {
					done = 1;
					break;
				}
				seqp = seqp->ccid2s_prev;
			}
			if (done)
				break;

			ackno = SUB48(ackno_end_rl, 1);
		}
		if (done)
			break;
	}

	/* The state about what is acked should be correct now
	 * Check for NUMDUPACK
	 */
	seqp = hc->tx_seqt;
	while (before48(seqp->ccid2s_seq, hc->tx_high_ack)) {
		seqp = seqp->ccid2s_next;
		if (seqp == hc->tx_seqh) {
			seqp = hc->tx_seqh->ccid2s_prev;
			break;
		}
	}
	done = 0;
	while (1) {
		if (seqp->ccid2s_acked) {
			done++;
			if (done == NUMDUPACK)
				break;
		}
		if (seqp == hc->tx_seqt)
			break;
		seqp = seqp->ccid2s_prev;
	}

	/* If there are at least 3 acknowledgements, anything unacknowledged
	 * below the last sequence number is considered lost
	 */
	if (done == NUMDUPACK) {
		struct ccid2_seq *last_acked = seqp;

		/* check for lost packets */
		while (1) {
			if (!seqp->ccid2s_acked) {
				ccid2_pr_debug("Packet lost: %llu\n",
					       (unsigned long long)seqp->ccid2s_seq);
				/* XXX need to traverse from tail -> head in
				 * order to detect multiple congestion events in
				 * one ack vector.
				 */
				ccid2_congestion_event(sk, seqp);
				hc->tx_pipe--;
			}
			if (seqp == hc->tx_seqt)
				break;
			seqp = seqp->ccid2s_prev;
		}

		hc->tx_seqt = last_acked;
	}

	/* trim acked packets in tail */
	while (hc->tx_seqt != hc->tx_seqh) {
		if (!hc->tx_seqt->ccid2s_acked)
			break;

		hc->tx_seqt = hc->tx_seqt->ccid2s_next;
	}

	/* restart RTO timer if not all outstanding data has been acked */
	if (hc->tx_pipe == 0)
		sk_stop_timer(sk, &hc->tx_rtotimer);
	else
		sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto);
done:
	/* check if incoming Acks allow pending packets to be sent */
	if (sender_was_blocked && !ccid2_cwnd_network_limited(hc))
		tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
	dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks);
}

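The traversal logic above leans on 48-bit modular sequence arithmetic (after48, before48, between48, SUB48): DCCP sequence numbers are 48 bits wide and all comparisons are taken modulo 2^48 (RFC 4340, 7.1), so "b after a" means the modular distance from a to b is in the lower half of the space. A user-space sketch of the core helpers; the logic is a simplified illustration, not the kernel's exact shift-based implementation:

```c
#include <assert.h>
#include <stdint.h>

/* 48-bit modular sequence arithmetic, simplified from the DCCP helpers */
#define SEQ48_MASK	((UINT64_C(1) << 48) - 1)

static uint64_t add48(uint64_t a, uint64_t b) { return (a + b) & SEQ48_MASK; }
static uint64_t sub48(uint64_t a, uint64_t b) { return (a - b) & SEQ48_MASK; }

/* "b is after a": the modular distance from a to b is non-zero and
 * falls in the lower half of the 48-bit space */
static int after48(uint64_t b, uint64_t a)
{
	uint64_t d = sub48(b, a);

	return d != 0 && d < (UINT64_C(1) << 47);
}
```

The wrap-around case is the point of the exercise: sequence number 1 is "after" 2^48 - 1, because their modular distance is only 2.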
static int ccid2_hc_tx_init(struct ccid *ccid, struct sock *sk)
{
	struct ccid2_hc_tx_sock *hc = ccid_priv(ccid);
	struct dccp_sock *dp = dccp_sk(sk);
	u32 max_ratio;

	/* RFC 4341, 5: initialise ssthresh to arbitrarily high (max) value */
	hc->tx_ssthresh = ~0U;

	/* Use larger initial windows (RFC 4341, section 5). */
	hc->tx_cwnd = rfc3390_bytes_to_packets(dp->dccps_mss_cache);
dccp ccid-2: Perform congestion-window validation
CCID-2's cwnd increases like TCP during slow-start, which has implications for
* the local Sequence Window value (should be > cwnd),
* the Ack Ratio value.
Hence an exponential growth, if it does not reflect the actual network
conditions, can quickly lead to instability.
This patch adds congestion-window validation (RFC2861) to CCID-2:
* cwnd is constrained if the sender is application limited;
* cwnd is reduced after a long idle period, as suggested in the '90 paper
by Van Jacobson, in RFC 2581 (sec. 4.1);
* cwnd is never reduced below the RFC 3390 initial window.
As marked in the comments, the code is actually almost a direct copy of the
TCP congestion-window-validation algorithms. By continuing this work, it may
in future be possible to use the TCP code (not possible at the moment).
The mechanism can be turned off using a module parameter. Sampling of the
currently-used window (moving-maximum) is however done constantly; this is
used to determine the expected window, which can be exploited to regulate
DCCP's Sequence Window value.
This patch also sets slow-start-after-idle (RFC 4341, 5.1), i.e. it behaves like
TCP when net.ipv4.tcp_slow_start_after_idle = 1.
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
2011-07-03 22:55:03 +07:00
|
|
|
hc->tx_expected_wnd = hc->tx_cwnd;
|
2007-11-25 06:44:30 +07:00
|
|
|
|
|
|
|
/* Make sure that Ack Ratio is enabled and within bounds. */
|
2009-10-05 07:53:12 +07:00
|
|
|
max_ratio = DIV_ROUND_UP(hc->tx_cwnd, 2);
|
2007-11-25 06:44:30 +07:00
|
|
|
if (dp->dccps_l_ack_ratio == 0 || dp->dccps_l_ack_ratio > max_ratio)
|
|
|
|
dp->dccps_l_ack_ratio = max_ratio;
|
|
|
|
|
2006-03-21 08:41:47 +07:00
|
|
|
/* XXX init ~ to window size... */
|
2009-10-05 07:53:12 +07:00
|
|
|
if (ccid2_hc_tx_alloc_seq(hc))
|
2006-03-21 08:41:47 +07:00
|
|
|
return -ENOMEM;
|
2006-03-21 10:21:44 +07:00
|
|
|
|
dccp ccid-2: Replace broken RTT estimator with better algorithm
The current CCID-2 RTT estimator code is in parts broken and lags behind the
suggestions in RFC2988 of using scaled variants for SRTT/RTTVAR.
That code is replaced by the present patch, which reuses the Linux TCP RTT
estimator code.
Further details:
----------------
1. The minimum RTO of previously one second has been replaced with TCP's, since
RFC4341, sec. 5 says that the minimum of 1 sec. (suggested in RFC2988, 2.4)
is not necessary. Instead, the TCP_RTO_MIN is used, which agrees with DCCP's
concept of a default RTT (RFC 4340, 3.4).
2. The maximum RTO has been set to DCCP_RTO_MAX (64 sec), which agrees with
RFC2988, (2.5).
3. De-inlined the function ccid2_new_ack().
4. Added a FIXME: the RTT is sampled several times per Ack Vector, which will
give the wrong estimate. It should be replaced with one sample per Ack.
However, at the moment this can not be resolved easily, since
- it depends on TX history code (which also needs some work),
- the cleanest solution is not to use the `sent' time at all (saves 4 bytes
per entry) and use DCCP timestamps / elapsed time to estimated the RTT,
which however is non-trivial to get right (but needs to be done).
Reasons for reusing the Linux TCP estimator algorithm:
------------------------------------------------------
Some time was spent to find a better alternative, using basic RFC2988 as a first
step. Further analysis and experimentation showed that the Linux TCP RTO
estimator is superior to a basic RFC2988 implementation. A summary is on
http://www.erg.abdn.ac.uk/users/gerrit/dccp/notes/ccid2/rto_estimator/
In addition, this estimator fared well in a recent empirical evaluation:
Rewaskar, Sushant, Jasleen Kaur and F. Donelson Smith.
A Performance Study of Loss Detection/Recovery in Real-world TCP
Implementations. Proceedings of 15th IEEE International
Conference on Network Protocols (ICNP-07), 2007.
Thus there is significant benefit in reusing the existing TCP code.
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2010-08-23 02:41:40 +07:00
|
|
|
hc->tx_rto = DCCP_TIMEOUT_INIT;
|
2009-10-05 07:53:12 +07:00
|
|
|
hc->tx_rpdupack = -1;
|
2017-05-17 04:00:02 +07:00
|
|
|
hc->tx_last_cong = hc->tx_lsndtime = hc->tx_cwnd_stamp = ccid2_jiffies32;
|
dccp ccid-2: Perform congestion-window validation
CCID-2's cwnd increases like TCP's during slow start, which has implications for
 * the local Sequence Window value (should be > cwnd),
 * the Ack Ratio value.
Hence exponential growth, if it does not reflect the actual network
conditions, can quickly lead to instability.
This patch adds congestion-window validation (RFC 2861) to CCID-2:
 * cwnd is constrained if the sender is application-limited;
 * cwnd is reduced after a long idle period, as suggested in the '90 paper
   by Van Jacobson and in RFC 2581 (sec. 4.1);
 * cwnd is never reduced below the RFC 3390 initial window.
As marked in the comments, the code is almost a direct copy of the TCP
congestion-window-validation algorithms. By continuing this work, it may
in future become possible to reuse the TCP code (not possible at the moment).
The mechanism can be turned off using a module parameter. Sampling of the
currently-used window (a moving maximum) is, however, done constantly; it is
used to determine the expected window, which can be exploited to regulate
DCCP's Sequence Window value.
This patch also enables slow start after idle (RFC 4341, 5.1), i.e. it behaves
like TCP with net.ipv4.tcp_slow_start_after_idle = 1.
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
2011-07-03 22:55:03 +07:00
	hc->tx_cwnd_used = 0;
2017-10-24 15:46:09 +07:00
	hc->sk = sk;
	timer_setup(&hc->tx_rtotimer, ccid2_hc_tx_rto_expire, 0);
2010-11-14 23:26:13 +07:00
	INIT_LIST_HEAD(&hc->tx_av_chunks);
2006-03-21 08:41:47 +07:00
	return 0;
}

static void ccid2_hc_tx_exit(struct sock *sk)
{
2009-10-05 07:53:12 +07:00
	struct ccid2_hc_tx_sock *hc = ccid2_hc_tx_sk(sk);
2006-09-20 03:13:37 +07:00
	int i;

2010-08-30 02:23:11 +07:00
	sk_stop_timer(sk, &hc->tx_rtotimer);

2009-10-05 07:53:12 +07:00
	for (i = 0; i < hc->tx_seqbufc; i++)
		kfree(hc->tx_seqbuf[i]);
	hc->tx_seqbufc = 0;
2017-03-13 06:01:30 +07:00
	dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks);
2006-03-21 08:41:47 +07:00
}

static void ccid2_hc_rx_packet_recv(struct sock *sk, struct sk_buff *skb)
{
2009-10-05 07:53:12 +07:00
	struct ccid2_hc_rx_sock *hc = ccid2_hc_rx_sk(sk);

2011-07-03 22:53:12 +07:00
	if (!dccp_data_packet(skb))
		return;

	if (++hc->rx_num_data_pkts >= dccp_sk(sk)->dccps_r_ack_ratio) {
		dccp_send_ack(sk);
		hc->rx_num_data_pkts = 0;
	}
2006-03-21 08:41:47 +07:00
}

2009-01-05 12:42:53 +07:00
struct ccid_operations ccid2_ops = {
2010-11-14 23:26:13 +07:00
	.ccid_id		  = DCCPC_CCID2,
	.ccid_name		  = "TCP-like",
	.ccid_hc_tx_obj_size	  = sizeof(struct ccid2_hc_tx_sock),
	.ccid_hc_tx_init	  = ccid2_hc_tx_init,
	.ccid_hc_tx_exit	  = ccid2_hc_tx_exit,
	.ccid_hc_tx_send_packet	  = ccid2_hc_tx_send_packet,
	.ccid_hc_tx_packet_sent	  = ccid2_hc_tx_packet_sent,
	.ccid_hc_tx_parse_options = ccid2_hc_tx_parse_options,
	.ccid_hc_tx_packet_recv	  = ccid2_hc_tx_packet_recv,
	.ccid_hc_rx_obj_size	  = sizeof(struct ccid2_hc_rx_sock),
	.ccid_hc_rx_packet_recv	  = ccid2_hc_rx_packet_recv,
2006-03-21 08:41:47 +07:00
};

2006-11-21 03:26:03 +07:00
#ifdef CONFIG_IP_DCCP_CCID2_DEBUG
2008-08-23 18:28:27 +07:00
module_param(ccid2_debug, bool, 0644);
2009-01-05 12:42:53 +07:00
MODULE_PARM_DESC(ccid2_debug, "Enable CCID-2 debug messages");
2006-11-21 03:26:03 +07:00
#endif
|