Dual-Queue Coupled Active Queue Management (AQM) for Low Latency, Low Loss, and Scalable Throughput (L4S)

Koen De Schepper, Nokia Bell Labs, Antwerp, Belgium
   koen.de_schepper@nokia.com
   https://www.bell-labs.com/about/researcher-profiles/koende_schepper/
Bob Briscoe, Independent, United Kingdom
   ietf@bobbriscoe.net
   https://bobbriscoe.net/
Greg White, CableLabs, Louisville, CO, United States of America
   G.White@CableLabs.com
Keywords: Performance, Queuing Delay, One Way Delay, Round-Trip Time, RTT, Jitter, Congestion Control, Congestion Avoidance, Quality of Service, QoS, Quality of Experience, QoE, Active Queue Management, AQM, Explicit Congestion Notification, ECN, Pacing, Burstiness

Abstract

This specification defines a framework for coupling the Active Queue
Management (AQM) algorithms in two queues intended for flows with
different responses to congestion. This provides a way for the Internet
to transition from the scaling problems of standard TCP-Reno-friendly
('Classic') congestion controls to the family of 'Scalable' congestion
controls. These are designed for consistently very low queuing latency,
very low congestion loss, and scaling of per-flow throughput by
using Explicit Congestion Notification (ECN) in a modified way. Until
the Coupled Dual Queue (DualQ), these Scalable L4S congestion controls could only be
deployed where a clean-slate environment could be arranged, such as in
private data centres.

This specification first explains how a Coupled DualQ works. It then
gives the normative requirements that are necessary for it to work well.
All this is independent of which two AQMs are used, but pseudocode
examples of specific AQMs are given in appendices.

Status of This Memo
This document is not an Internet Standards Track specification; it is
published for examination, experimental implementation, and
evaluation.
This document defines an Experimental Protocol for the Internet
community. This document is a product of the Internet Engineering
Task Force (IETF). It represents the consensus of the IETF community.
It has received public review and has been approved for publication
by the Internet Engineering Steering Group (IESG). Not all documents
approved by the IESG are candidates for any level of Internet
Standard; see Section 2 of RFC 7841.
Information about the current status of this document, any
errata, and how to provide feedback on it may be obtained at
.
Copyright Notice
Copyright (c) 2023 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
() in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Revised BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Revised BSD License.
Table of Contents
Introduction
Outline of the Problem
Context, Scope, and Applicability
Terminology
Features
DualQ Coupled AQM
Coupled AQM
Dual Queue
Traffic Classification
Overall DualQ Coupled AQM Structure
Normative Requirements for a DualQ Coupled AQM
Functional Requirements
Requirements in Unexpected Cases
Management Requirements
Configuration
Monitoring
Anomaly Detection
Deployment, Coexistence, and Scaling
IANA Considerations
Security Considerations
Low Delay without Requiring Per-flow Processing
Handling Unresponsive Flows and Overload
Unresponsive Traffic without Overload
Avoiding Short-Term Classic Starvation: Sacrifice L4S Throughput or Delay?
L4S ECN Saturation: Introduce Drop or Delay?
Protecting against Overload by Unresponsive ECN-Capable Traffic
References
Normative References
Informative References
Example DualQ Coupled PI2 Algorithm
Pass #1: Core Concepts
Pass #2: Edge-Case Details
Example DualQ Coupled Curvy RED Algorithm
Curvy RED in Pseudocode
Efficient Implementation of Curvy RED
Choice of Coupling Factor, k
RTT-Dependence
Guidance on Controlling Throughput Equivalence
Acknowledgements
Contributors
Authors' Addresses
Introduction

This document specifies a framework for DualQ Coupled AQMs, which can
serve as the network part of the L4S architecture. A DualQ Coupled AQM consists of two
queues: L4S and Classic. The L4S queue is intended for Scalable
congestion controls that can maintain very low queuing latency
(sub-millisecond on average) and high throughput at the same time. The
Coupled DualQ acts like a semi-permeable membrane: the L4S queue
isolates the sub-millisecond average queuing delay of L4S from Classic
latency, while the coupling between the queues pools the capacity
between both queues so that ad hoc numbers of capacity-seeking
applications all sharing the same capacity can have roughly equivalent
throughput per flow, whichever queue they use. The DualQ achieves this
indirectly, without having to inspect transport-layer flow identifiers
and without compromising the performance of the Classic traffic,
relative to a single queue. The DualQ design has low complexity and
requires no configuration for the public Internet.

Outline of the Problem

Latency is becoming the critical performance factor for many
(perhaps most) applications on the public Internet, e.g., interactive
web, web services, voice, conversational video, interactive video,
interactive remote presence, instant messaging, online gaming, remote
desktop, cloud-based applications, and video-assisted remote control
of machinery and industrial processes. Once access network bitrates
reach levels now common in the developed world, further increases
offer diminishing returns unless latency is also addressed. In the last decade or so, much has been done
to reduce propagation time by placing caches or servers closer to
users. However, queuing remains a major intermittent component of
latency.

Previously, very low latency has only been available for a few
selected low-rate applications that confine their sending rate within
a specially carved-off portion of capacity, which is prioritized over
other traffic, e.g., Diffserv Expedited Forwarding (EF). Up
to now, it has not been possible to allow any number of low-latency,
high throughput applications to seek to fully utilize available
capacity, because the capacity-seeking process itself causes too much
queuing delay.

To reduce this queuing delay caused by the capacity-seeking
process, changes either to the network alone or to end systems alone
are in progress. L4S involves a recognition that both approaches are
yielding diminishing returns:
Recent state-of-the-art AQM in the
network (e.g., Flow Queue CoDel,
Proportional Integral controller Enhanced (PIE), and Adaptive Random Early Detection (ARED)) has reduced queuing delay for all traffic, not
just a select few applications. However, no matter how good the
AQM, the capacity-seeking (sawtoothing) rate of TCP-like
congestion controls represents a lower limit that will cause either
the queuing delay to vary or the link to be
underutilized.
These AQMs are tuned to allow a typical
capacity-seeking TCP-Reno-friendly flow to induce an average queue
that roughly doubles the base round-trip time (RTT), adding 5-15 ms of queuing on
average for a mix of long-running flows and web traffic (cf. 500 microseconds with L4S for the same traffic mix). However, for many applications, low
delay is not useful unless it is consistently low. With these
AQMs, 99th percentile queuing delay is 20-30 ms (cf. 2 ms with the
same traffic over L4S).
Similarly, recent research into using end-to-end congestion control
without needing an AQM in the network (e.g., Bottleneck Bandwidth and Round-trip propagation time (BBR)) seems to
have hit a similar queuing delay floor of about 20 ms on
average, but there are also regular 25 ms delay spikes due to
bandwidth probes and 60 ms spikes due to flow-starts.
L4S learns from the experience of Data Center TCP (DCTCP), which shows the power of complementary changes
both in the network and on end systems. DCTCP teaches us that two
small but radical changes to congestion control are needed to cut the
two major outstanding causes of queuing delay variability:
Far smaller rate variations (sawteeth) than Reno-friendly
congestion controls.
A shift of smoothing and hence smoothing delay from network to
sender.
Without the former, a 'Classic' (e.g., Reno-friendly)
flow's RTT varies between roughly 1 and 2 times the
base RTT between the machines in question. Without the latter, a
'Classic' flow's response to changing events is delayed by a
worst-case (transcontinental) RTT, which could be hundreds of times
the actual smoothing delay needed for the RTT of typical traffic from
localized Content Delivery Networks (CDNs).

These changes are the two main features of the family of so-called
'Scalable' congestion controls (which include DCTCP, Prague, and
Self-Clocked Rate Adaptation for Multimedia (SCReAM)). Both of these changes only reduce delay in combination with a
complementary change in the network, and they are both only feasible
with ECN, not drop, for the signalling:
The smaller sawteeth allow an extremely shallow ECN
packet-marking threshold in the queue.
No smoothing in the network means that every fluctuation of
the queue is signalled immediately.
Without ECN, either of these would lead to very high loss
levels. In contrast, with ECN, the resulting high marking levels are just
signals, not impairments.
(Note that BBRv2
combines the best of both worlds -- it works as a Scalable congestion
control when ECN is available, but it also aims to minimize delay when ECN
is absent.)

However, until now, Scalable congestion controls (like DCTCP) did
not coexist well in a shared ECN-capable queue with existing Classic
(e.g., Reno or CUBIC) congestion controls -- Scalable controls are
so aggressive that these 'Classic' algorithms would drive themselves
to a small capacity share. Therefore, until now, L4S controls could
only be deployed where a clean-slate environment could be arranged,
such as in private data centres (hence the name DCTCP).

One way to solve the problem of coexistence between Scalable and
Classic flows is to use a per-flow-queuing (FQ) approach such as
FQ-CoDel. It classifies packets by flow
identifier into separate queues in order to isolate sparse flows from
the higher latency in the queues assigned to heavier flows. However,
if a Classic flow needs both low delay and high throughput, having a
queue to itself does not isolate it from the harm it causes to itself.
Also, FQ approaches need to inspect flow identifiers, which is not
always practical.

In summary, Scalable congestion controls address the root cause of
the latency, loss and scaling problems with Classic congestion
controls. Both FQ and DualQ AQMs can be enablers for this smooth low-latency
scalable behaviour. The DualQ approach is particularly useful
because identifying flows is sometimes not practical or desirable.

Context, Scope, and Applicability

L4S involves complementary changes in the network and on
end systems:
Network:
A DualQ Coupled AQM (defined in the present
document) or a modification to flow queue AQMs (described in paragraph "b" in
Section of the L4S architecture).
End system:
A Scalable congestion control (defined in Section of the L4S ECN protocol spec).
Packet identifier:
The network and end-system parts
of L4S can be deployed incrementally, because they both identify
L4S packets using the experimentally assigned ECN codepoints in the IP header: ECT(1) and
CE.

DCTCP is an example
of a Scalable congestion control for controlled environments that has
been deployed for some time in Linux, Windows, and FreeBSD operating
systems. During the progress of this document through the IETF, a
number of other Scalable congestion controls were implemented,
e.g., Prague over TCP and QUIC, BBRv2, and
the L4S variant of SCReAM for real-time media.

The focus of this specification is to enable deployment of the
network part of the L4S service. Then, without any management
intervention, applications can exploit this new network capability as
the applications or their operating systems migrate to Scalable congestion controls, which
can then evolve while their benefits are
being enjoyed by everyone on the Internet.

The DualQ Coupled AQM framework can incorporate any AQM designed
for a single queue that generates a statistical or deterministic
mark/drop probability driven by the queue dynamics. Pseudocode
examples of two different DualQ Coupled AQMs are given in the
appendices.
In many cases the framework simplifies the basic control
algorithm and requires little extra processing.
Therefore, it is
believed the Coupled AQM would be applicable and easy to deploy in all
types of buffers such as buffers in cost-reduced mass-market residential
equipment; buffers in end-system stacks; buffers in carrier-scale
equipment including remote access servers, routers, firewalls, and
Ethernet switches; buffers in network interface cards; buffers in
virtualized network appliances and hypervisors; and so on.

For the public Internet, nearly all the benefit will typically be
achieved by deploying the Coupled AQM into either end of the access
link between a 'site' and the Internet, which is invariably the
bottleneck (see
about deployment, which also defines the term 'site' to mean a home,
an office, a campus, or mobile user equipment).

Latency is not the only concern of L4S:
The 'Low Loss' part of the name denotes that L4S generally
achieves zero congestion loss (which would otherwise cause
retransmission delays), due to its use of ECN.
The 'Scalable throughput' part of the name denotes that the
per-flow throughput of Scalable congestion controls should scale
indefinitely, avoiding the imminent scaling problems with
'TCP-Friendly' congestion control algorithms.
The former is clearly in scope of this AQM document. However,
the latter is an outcome of the end-system behaviour and is therefore
outside the scope of this AQM document, even though the AQM is an
enabler.

The overall L4S architecture gives more detail, including on
wider deployment aspects such as backwards compatibility of Scalable
congestion controls in bottlenecks where a DualQ Coupled AQM has not
been deployed. The supporting papers give the full rationale for the AQM design, both
discursively and in more precise mathematical form, as well as the
results of performance evaluations. The main results have been
validated independently when using the Prague congestion control (experiments are run using Prague and DCTCP, but
only the former is relevant for validation, because Prague fixes a
number of problems with the Linux DCTCP code that make it unsuitable
for the public Internet).

Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14
when, and only when, they appear in all capitals, as shown here.
The DualQ Coupled AQM uses two queues for two services:
Classic Service/Queue:
The Classic service is
intended for all the congestion control behaviours that coexist
with Reno (e.g., Reno itself,
CUBIC, and TFRC). The term 'Classic queue' means a queue providing the Classic service.
Low Latency, Low Loss, and Scalable throughput (L4S) Service/Queue:
The
'L4S' service is intended for traffic from Scalable congestion
control algorithms, such as the Prague congestion control, which was
derived from Data Center TCP. The
L4S service is for more general traffic than just Prague
-- it allows the set of congestion controls with similar
scaling properties to Prague to evolve, such as the examples listed below (Relentless, SCReAM, etc.). The term 'L4S queue' means a queue providing the L4S service.
Classic Congestion Control:
A congestion control
behaviour that can coexist with standard Reno without causing significant negative impact
on its flow rate. With Classic
congestion controls, such as Reno or CUBIC, because flow rate has
scaled since TCP congestion control was first designed in 1988, it
now takes hundreds of round trips (and growing) to recover after a
congestion signal (whether a loss or an ECN mark) as shown in the
examples in Section of the L4S architecture. Therefore, control of queuing and utilization
becomes very slack, and the slightest disturbances (e.g., from
new flows starting) prevent a high rate from being attained.
Scalable Congestion Control:
A congestion control
where the average time from one congestion signal to the next (the
recovery time) remains invariant as flow rate scales, all
other factors being equal. This maintains the same degree of
control over queuing and utilization whatever the flow rate, as
well as ensuring that high throughput is robust to disturbances.
For instance, DCTCP averages 2 congestion signals per round trip,
whatever the flow rate, as do other recently developed Scalable
congestion controls, e.g., Relentless TCP, Prague, BBRv2, and the L4S
variant of SCReAM for real-time media. For the public
Internet, a Scalable transport has to comply with the requirements
in (a.k.a. the 'Prague L4S requirements').
C:
Abbreviation for Classic, e.g., when used as
a subscript.
L:
Abbreviation for L4S, e.g., when used as a
subscript.

The terms Classic or L4S can
also qualify other nouns, such as 'codepoint', 'identifier',
'classification', 'packet', and 'flow'. For example, an L4S packet
means a packet with an L4S identifier sent from an L4S congestion
control.

Both Classic and L4S services can
cope with a proportion of unresponsive or less-responsive traffic
as well but, in the L4S case, its rate has to be smooth enough or
low enough to not build a queue (e.g., DNS, Voice over IP (VoIP), game sync
datagrams, etc.). The DualQ Coupled AQM behaviour is defined to be
similar to a single First-In, First-Out (FIFO) queue with respect to unresponsive and
overload traffic.
Reno-friendly:
The subset of Classic traffic that is
friendly to the standard Reno congestion control defined for TCP.
The TFRC spec indirectly implies that 'friendly' is
defined as "generally within a factor of two of the sending rate
of a TCP flow under the same conditions". 'Reno-friendly' is used here in place of
'TCP-friendly', given the latter has become imprecise, because the
TCP protocol is now used with so many different congestion control
behaviours, and Reno is used in non-TCP transports, such as
QUIC.
DualQ or DualQ AQM:
Used loosely as shorthand for a Dual-Queue Coupled AQM, where the context
makes 'Coupled AQM' obvious.
Classic ECN:
The original Explicit Congestion
Notification (ECN) protocol that
requires ECN signals to be treated as equivalent to drops, both when
generated in the network and when responded to by the
sender.

For L4S, the names used for the four codepoints of the 2-bit IP-ECN field are unchanged from those
defined in the ECN spec, i.e., Not-ECT, ECT(0), ECT(1), and
CE, where ECT stands for ECN-Capable Transport and CE stands for
Congestion Experienced. A packet marked with the CE codepoint is
termed 'ECN-marked' or sometimes just 'marked' where the context
makes ECN obvious.
Features

The AQM couples marking and/or dropping from the Classic queue to
the L4S queue in such a way that a flow will get roughly the same
throughput whichever queue it uses. Therefore, both queues can feed into the
full capacity of a link, and no rates need to be configured for the
queues.
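The rough throughput equivalence can be checked against the steady-state rate laws discussed later in this document (Reno's rate varies as 1/sqrt(p_C), a DCTCP-style rate as 1/p_L) combined with the coupling p_C = (p_CL/k)^2 of equation (1). In this sketch the rate-law constants are omitted, so the ratio comes out as k rather than 1; in practice k is chosen to absorb those constants, as discussed in the appendix on the coupling factor:

```python
import math

# Sketch: the coupling makes the Classic/Scalable rate ratio a
# constant, independent of the level of congestion.  Rate-law
# constants are omitted here, so the ratio equals k; k is chosen so
# that the real constants bring the ratio close to 1.

k = 2.0
for p_CL in (0.01, 0.05, 0.2):
    p_C = (p_CL / k) ** 2                 # equation (1)
    rate_classic = 1 / math.sqrt(p_C)     # Reno: rate ~ 1/sqrt(p_C)
    rate_scalable = 1 / p_CL              # DCTCP-style: rate ~ 1/p_L
    print(f"p_CL={p_CL:4}: rate ratio = {rate_classic / rate_scalable:.2f}")
```

The ratio is the same at every congestion level, which is why no rates need to be configured: the squaring counterbalances Reno's square-root response, whatever the load.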
The L4S queue enables Scalable congestion controls like DCTCP
or Prague to give very low and consistently low latency, without
compromising the performance of competing 'Classic' Internet
traffic.

Thousands of tests have been conducted in a typical fixed
residential broadband setting. Experiments used a range of base round-trip
delays up to 100 ms and link rates up to 200 Mb/s between the data
centre and home network, with varying amounts of background traffic in
both queues. For every L4S packet, the AQM kept the average queuing
delay below 1 ms (or 2 packets where serialization delay exceeded 1 ms
on slower links), with the 99th percentile being no worse than 2 ms. No losses at
all were introduced by the L4S AQM. Details of the extensive
experiments are available in the supporting papers.
Subjective testing using
very demanding high-bandwidth low-latency applications over a single
shared access link is also described and summarized in Section of the L4S architecture.

In all these experiments, the host was connected to the home
network by fixed Ethernet, in order to quantify the queuing delay that
can be achieved by a user who cares about delay. It should be
emphasized that L4S support at the bottleneck link cannot 'undelay'
bursts introduced by another link on the path, for instance by legacy
Wi-Fi equipment. However, if L4S support is added to the queue feeding
the outgoing WAN link of a home gateway,
it would be counterproductive not to also reduce the burstiness of the
incoming Wi-Fi. Also, trials of Wi-Fi
equipment with an L4S DualQ Coupled AQM on the outgoing
Wi-Fi interface are in progress, and early results of an L4S DualQ
Coupled AQM in a 5G radio access network testbed with emulated outdoor
cell edge radio fading are given in .

Unlike Diffserv EF, the L4S queue does not have
to be limited to a small proportion of the link capacity in order to
achieve low delay. The L4S queue can be filled with a heavy load of
capacity-seeking flows (Prague, BBRv2, etc.) and still achieve low delay.
The L4S queue does not rely on the presence of other traffic in the
Classic queue that can be 'overtaken'.
It gives low latency to L4S
traffic whether or not there is Classic traffic. The tail latency of
traffic served by the Classic AQM is sometimes a little better,
sometimes a little worse, when a proportion of the traffic is L4S.

The two queues are only necessary because:
The large variations (sawteeth) of Classic flows need roughly a
base RTT of queuing delay to ensure full utilization.
Scalable flows do not need a queue to keep utilization high,
but they cannot keep latency consistently low if they are mixed
with Classic traffic.
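The first bullet's arithmetic can be sketched as follows, with an illustrative link rate and base RTT, assuming a single Reno flow that halves its window on each congestion signal:

```python
# Sketch (illustrative assumptions): why a Classic sawtooth needs
# roughly one base RTT of queuing delay to keep the link fully
# utilized, while a Scalable flow needs (almost) none.

def classic_buffer_for_full_utilization(rate_bps, base_rtt_s):
    """A Reno flow halves its window on a signal.  If the standing
    queue holds one bandwidth-delay product (BDP), halving the window
    just drains the queue without ever starving the link."""
    bdp_bits = rate_bps * base_rtt_s
    return bdp_bits / 8   # required buffer in bytes: one BDP

rate = 100e6       # assume a 100 Mb/s link
base_rtt = 0.02    # assume a 20 ms base RTT
buf = classic_buffer_for_full_utilization(rate, base_rtt)
print(f"Buffer needed: {buf / 1e3:.0f} kB; queuing delay swings between "
      f"0 and {base_rtt * 1e3:.0f} ms, so the flow's RTT varies "
      f"between 1x and 2x the base RTT")
```

This is the sawtooth-induced RTT variation between roughly 1 and 2 times the base RTT described earlier, and the reason Classic traffic cannot share a consistently shallow queue with L4S traffic.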
The L4S queue has latency priority within sub-round-trip
timescales, but over longer periods the coupling from the Classic to
the L4S AQM (explained below) ensures that it does not have bandwidth
priority over the Classic queue.

DualQ Coupled AQM

There are two main aspects to the DualQ Coupled AQM approach:
The Coupled AQM that addresses throughput equivalence between
Classic (e.g., Reno or CUBIC) flows and L4S flows (that satisfy
the Prague L4S requirements).
The Dual-Queue structure that provides latency separation for L4S
flows to isolate them from the typically large Classic queue.
Coupled AQM

In the 1990s, the 'TCP formula' was derived for the relationship
between the steady-state congestion window, cwnd, and the drop
probability, p, of standard Reno congestion control. To a first-order approximation, the steady-state
cwnd of Reno is inversely proportional to the square root of p.

The design focuses on Reno as the worst case, because if it does no
harm to Reno, it will not harm CUBIC or any traffic designed to be
friendly to Reno. TCP CUBIC implements a Reno-friendly mode,
which is relevant for typical RTTs under 20 ms as long as the
throughput of a single flow is less than about 350 Mb/s. In such cases,
it can be assumed that CUBIC traffic behaves similarly to Reno. The
term 'Classic' will be used for the collection of Reno-friendly
traffic including CUBIC and potentially other experimental congestion
controls intended not to significantly impact the flow rate of
Reno.

A supporting paper includes the
derivation of the equivalent rate equation for DCTCP, for which cwnd
is inversely proportional to p (not the square root), where in this
case p is the ECN-marking probability. DCTCP is not the only
congestion control that behaves like this, so the term 'Scalable' will
be used for all similar congestion control behaviours (see examples in
). The term 'L4S' is used for traffic
driven by a Scalable congestion control that also complies with the
additional 'Prague L4S requirements'.

For safe coexistence, under stationary conditions, a Scalable flow
has to run at roughly the same rate as a Reno TCP flow (all other
factors being equal). So the drop or marking probability for Classic
traffic, p_C, has to be distinct from the marking probability for L4S
traffic, p_L. The original ECN spec required these probabilities to be the same, but
a later update enables experiments in
which these probabilities are different.

Also, to remain stable, Classic sources need the network to smooth
p_C so it changes relatively slowly. It is hard for a network node to
know the RTTs of all the flows, so a Classic AQM adds a worst-case RTT of smoothing delay (about 100-200
ms). In contrast, L4S shifts responsibility for smoothing ECN feedback
to the sender, which only delays its response by its own RTT, as well as allowing a more immediate
response if necessary.

The Coupled AQM achieves safe coexistence by making the Classic
drop probability p_C proportional to the square of the coupled L4S
probability p_CL. p_CL is an input to the instantaneous L4S marking
probability p_L, but it changes as slowly as p_C. This makes the Reno
flow rate roughly equal the DCTCP flow rate, because the squaring of
p_CL counterbalances the square root of p_C in the 'TCP formula' of
Classic Reno congestion control.

Stating this as a formula, the relation between Classic drop
probability, p_C, and the coupled L4S probability p_CL needs to take
the following form:
p_C = ( p_CL / k )^2,    (1)

where k is the constant of proportionality, which is termed the
'coupling factor'.

Dual Queue

Classic traffic needs to build a large queue to prevent
underutilization. Therefore, a separate queue is provided for L4S
traffic, and it is scheduled with priority over the Classic queue.
Priority is conditional to prevent starvation of Classic traffic in
certain conditions (see ).

Nonetheless, coupled marking ensures that giving priority to L4S
traffic still leaves the right amount of spare scheduling time for
Classic flows to each get equivalent throughput to DCTCP flows (all
other factors, such as RTT, being equal).

Traffic Classification

Both the Coupled AQM and DualQ mechanisms need an identifier to
distinguish L4S (L) and Classic (C) packets.
Then the coupling
algorithm can achieve coexistence without having to inspect flow
identifiers, because it can apply the appropriate marking or dropping
probability to all flows of each type. A separate
specification requires
the network to treat the ECT(1) and CE codepoints of the ECN field as
this identifier. An additional process document has proved necessary
to make the ECT(1) codepoint available for experimentation.

For policy reasons, an operator might choose to steer certain
packets (e.g., from certain flows or with certain addresses) out
of the L queue, even though they identify themselves as L4S by their
ECN codepoints. In such cases, the L4S ECN protocol states that the device "MUST NOT
alter the end-to-end L4S ECN identifier" so that it is preserved
end to end. The aim is that each operator can choose how it treats L4S
traffic locally, but an individual operator does not alter the
identification of L4S packets, which would prevent other operators
downstream from making their own choices on how to treat L4S
traffic.

In addition, an operator could use other identifiers to classify
certain additional packet types into the L queue that it deems will
not risk harm to the L4S service, for instance, addresses of specific
applications or hosts; specific Diffserv codepoints such as EF, Voice-Admit, or the Non-Queue-Building (NQB)
per-hop behaviour; or certain protocols (e.g., ARP and DNS) (see ). Note
that
states that "a network node MUST NOT
change Not-ECT or ECT(0) in the IP-ECN field into an L4S identifier."
Thus, the L queue is not solely an L4S queue; it
can be considered more generally as a low-latency queue.

Overall DualQ Coupled AQM Structure

A schematic shows the overall structure
that any DualQ Coupled AQM is likely to have. This schematic is
intended to aid understanding of the current designs of DualQ Coupled
AQMs. However, it is not intended to preclude other innovative ways of
satisfying the normative requirements in that minimally define a DualQ Coupled AQM.
Also, the schematic only illustrates operation under normally expected
circumstances; behaviour under overload or with operator-specific
classifiers is deferred to .

The classifier on the left separates incoming traffic between the
two queues (L and C). Each queue has its own AQM that determines the
likelihood of marking or dropping (p_L and p_C).

It has been
proved that it is preferable to control load
with a linear controller, then square the output before applying it as
a drop probability to Reno-friendly traffic (because Reno congestion
control decreases its load proportional to the square root of the
increase in drop). So, the AQM for Classic traffic needs to be
implemented in two stages: i) a base stage that outputs an internal
probability p' (pronounced 'p-prime') and ii) a squaring stage that
outputs p_C, where
p_C = (p')^2.    (2)

Substituting for p_C in equation (1) gives
p' = p_CL / k.

So the slow-moving input to ECN marking in the L queue (the
coupled L4S probability) is
p_CL = k*p'.    (3)

The actual ECN-marking probability p_L that is applied to the L
queue needs to track the immediate L queue delay under L-only
congestion conditions, as well as track p_CL under coupled congestion
conditions. So the L queue uses a 'Native AQM' that calculates a
probability p'_L as a function of the instantaneous L queue delay.
And given the L queue has conditional priority over the C queue,
whenever the L queue grows, the AQM ought to apply marking probability
p'_L, but p_L ought not to fall below p_CL. This suggests

p_L = max(p'_L, p_CL),    (4)

which has also been found to work very well in
practice.

The two transformations of p' in equations (2) and (3) implement
the required coupling given in equation (1) earlier.

The constant of proportionality or coupling factor, k, in equation
(1) determines the ratio between the congestion probabilities (loss or
marking) experienced by L4S and Classic traffic. Thus, k indirectly
determines the ratio between L4S and Classic flow rates, because flows
(assuming they are responsive) adjust their rate in response to
congestion probability. The appendix on the choice of coupling factor gives
guidance on the choice of k and its effect on relative flow rates.

After the AQMs have applied their dropping or marking, the
scheduler forwards their packets to the link. Even though the
scheduler gives priority to the L queue, it is not as strong as the
coupling from the C queue. This is because, as the C queue grows, the
'Base AQM' applies more congestion signals to L traffic (as well as to C).
As L flows reduce their rate in response, they use less than the
scheduling share for L traffic. So, because the scheduler is work
preserving, it schedules any C traffic in the gaps.

Giving priority to the L queue has the benefit of very low L queue
delay, because the L queue is kept empty whenever L traffic is
controlled by the coupling. Also, there only has to be a coupling in
one direction -- from Classic to L4S. Priority has to be conditional in
some way to prevent the C queue from being starved in the short term (see
) to give C traffic a means
to push in, as explained next. With normal responsive L traffic, the
coupled ECN marking gives C traffic the ability to push back against
even strict priority, by congestion marking the L traffic to make it
yield some space. However, if there is just a small finite set of C
packets (e.g., a DNS request or an initial window of data), some
Classic AQMs will not induce enough ECN marking in the L queue, no
matter how long the small set of C packets waits. Then, if the L queue
happens to remain busy, the C traffic would never get a scheduling
opportunity from a strict priority scheduler. Ideally, the Classic AQM
would be designed to increase the coupled marking the longer that C
packets have been waiting, but this is not always practical -- hence
the need for L priority to be conditional. Giving a small weight or
limited waiting time for C traffic improves response times for short
Classic messages, such as DNS requests, and improves Classic flow
startup because immediate capacity is available.

Example DualQ Coupled AQM algorithms called 'DualPI2' and 'Curvy RED'
are given in Appendices and . Either example AQM can be used to couple
packet marking and dropping across a DualQ:
DualPI2 uses a Proportional Integral (PI) controller as the Base
AQM. Indeed, this Base AQM with just the squared output and no L4S
queue can be used as a drop-in replacement for PIE , in which case it is just called PI2 .
PI2 is a principled simplification of PIE that is both
more responsive and more stable in the face of dynamically varying
load.
Curvy RED is derived from RED , except
its configuration parameters are delay-based to make them insensitive
to link rate, and it requires fewer operations per packet than RED.
However, DualPI2 is more responsive and stable over a wider range of
RTTs than Curvy RED. As a consequence, at the time of writing, DualPI2
has attracted more development and evaluation attention than Curvy
RED, leaving the Curvy RED design not so fully evaluated.
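Either AQM drives the coupling through equations (2), (3), and (4) above. As a minimal illustrative sketch (in Python, not the normative pseudocode of the appendices; the function and variable names are invented here), the three transformations of the Base AQM output p' can be written as:

```python
def coupled_probabilities(p_prime, p_prime_L, k=2.0):
    """Illustrates equations (2)-(4): derive the Classic drop
    probability p_C and the L4S marking probability p_L from the
    Base AQM output p' and the native L AQM output p'_L."""
    p_CL = min(k * p_prime, 1.0)   # equation (2): p_CL = k * p' (clamp is an illustrative detail)
    p_C = p_prime ** 2             # equation (3): Classic drop probability p_C = (p')^2
    p_L = max(p_prime_L, p_CL)     # equation (4): L4S marking probability
    return p_C, p_L
```

Note how the squaring in equation (3), rather than a square root applied on the L4S side, keeps the per-packet L4S path cheap: the coupled marking probability is just a linear scaling of p'.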
Both AQMs regulate their queue against targets configured in units
of time rather than bytes. As already explained, this ensures
configuration can be invariant for different drain rates. With AQMs in
a DualQ structure this is particularly important because the drain
rate of each queue can vary rapidly as flows for the two queues arrive
and depart, even if the combined link rate is constant.

It would be possible to control the queues with other alternative
AQMs, as long as the normative requirements (those expressed in
capitals) in are observed.

The two queues could optionally be part of a larger queuing hierarchy, such as the initial example ideas in .

Normative Requirements for a DualQ Coupled AQM

The following requirements are intended to capture only the essential aspects of a DualQ Coupled AQM. They are intended to be independent of the particular AQMs implemented for each queue but to still define the DualQ framework built around those AQMs.

Functional Requirements

A DualQ Coupled AQM implementation MUST comply with the
prerequisite L4S behaviours for any L4S network node (not just a
DualQ) as specified in . These primarily concern
classification and re-marking as briefly summarized earlier in . But
also gives guidance on reducing the burstiness of the
link technology underlying any L4S AQM.

A DualQ Coupled AQM implementation MUST utilize two queues, each with an AQM algorithm.

The AQM algorithm for the low-latency (L) queue MUST be able to apply ECN marking to ECN-capable packets.

The scheduler draining the two queues MUST give L4S packets
priority over Classic, although priority MUST be bounded in order
not to starve Classic traffic (see ). The scheduler SHOULD be
work-conserving, or otherwise close to work-conserving. This is
because Classic traffic needs to be able to efficiently fill any
space left by L4S traffic even though the scheduler would otherwise
allocate it to L4S. defines the meaning of
an ECN marking on L4S traffic, relative to drop of Classic traffic.
In order to ensure coexistence of Classic and Scalable L4S traffic,
it says,
"the likelihood that the AQM drops a Not-ECT Classic packet
(p_C) MUST be roughly proportional to the square of the likelihood
that it would have marked it if it had been an L4S packet (p_L)."
The term 'likelihood' is used to allow for marking and dropping to
be either probabilistic or deterministic.

For the current specification, this translates into the following
requirement. A DualQ Coupled AQM MUST apply ECN marking to traffic
in the L queue that is no lower than that derived from the
likelihood of drop (or ECN marking) in the Classic queue using equation
(1).

The constant of proportionality, k, in equation
relative flow rates of Classic and L4S flows when the AQM concerned
is the bottleneck (all other factors being equal). The L4S ECN
protocol says,
"The
constant of proportionality (k) does not have to be standardised for
interoperability, but a value of 2 is RECOMMENDED."
Assuming Scalable congestion controls for the Internet will be as
aggressive as DCTCP, this will ensure their congestion window will
be roughly the same as that of a Standards Track TCP Reno congestion
control (Reno) and other Reno-friendly
controls, such as TCP CUBIC in its Reno-friendly mode.

The choice of k is a matter of operator policy, and operators MAY choose a different value using the guidelines in .

If multiple customers or users share capacity at a bottleneck
(e.g., in the Internet access link of a campus network), the
operator's choice of k will determine capacity sharing between the
flows of different customers. However, on the public Internet,
access network operators typically isolate customers from each other
with some form of Layer 2 multiplexing
(OFDM(A) in DOCSIS 3.1,
CDMA in 3G, and SC-FDMA in LTE) or Layer 3 scheduling (Weighted Round Robin (WRR) for DSL) rather than
relying on host congestion controls to share capacity between
customers . In such cases, the choice
of k will solely affect relative flow rates within each customer's
access capacity, not between customers. Also, k will not affect
relative flow rates at any times when all flows are Classic or all
flows are L4S, and it will not affect the relative throughput of
small flows.

Requirements in Unexpected Cases

The flexibility to allow operator-specific classifiers () leads to the need to specify what
the AQM in each queue ought to do with packets that do not carry
the ECN field expected for that queue. It is expected that the AQM
in each queue will inspect the ECN field to determine what sort of
congestion notification to signal, then it will decide whether to
apply congestion notification to this particular packet, as
follows:
If a packet that does not carry an ECT(1) or a CE codepoint
is classified into the L queue, then:
if the packet is ECT(0), the L AQM SHOULD apply
CE marking using a probability appropriate to Classic
congestion control and appropriate to the target delay in
the L queue
if the packet is Not-ECT, the appropriate action
depends on whether some other function is protecting the L
queue from misbehaving flows (e.g., per-flow queue
protection or latency
policing):
if separate queue protection is provided, the L AQM
SHOULD ignore the packet and forward it unchanged,
meaning it should not calculate whether to apply
congestion notification, and it should neither drop nor
CE mark the packet (for instance, the operator might
classify EF traffic that is unresponsive to drop into
the L queue, alongside responsive L4S-ECN traffic)
if separate queue protection is not provided, the L
AQM SHOULD apply drop using a drop probability
appropriate to Classic congestion control and
to the target delay in the L queue
If a packet that carries an ECT(1) codepoint is classified
into the C queue:
the C AQM SHOULD apply CE marking using the Coupled AQM
probability p_CL (= k*p').
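The per-codepoint handling above amounts to a small decision table. It can be sketched as follows (illustrative Python only; the function name and string return values are invented here, and the default 'SHOULD' actions are shown without the operator-specific alternatives):

```python
def unexpected_codepoint_action(queue, ecn, queue_protection=False):
    """Default ('SHOULD') actions for packets whose ECN field does
    not match the queue they were classified into."""
    if queue == 'L':
        if ecn == 'ECT(0)':
            # CE-mark with a probability appropriate to Classic
            # congestion control and to the L queue's delay target
            return 'classic-style CE marking'
        if ecn == 'Not-ECT':
            if queue_protection:
                # another function protects the L queue, so the L AQM
                # ignores the packet and forwards it unchanged
                return 'forward unchanged'
            # otherwise, drop with a Classic-style probability
            return 'classic-style drop'
    if queue == 'C' and ecn == 'ECT(1)':
        # CE-mark using the coupled probability p_CL (= k*p')
        return 'CE marking at p_CL'
    return 'normal AQM treatment'
```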
The above requirements are worded as "SHOULD"s, because
operator-specific classifiers are for flexibility, by definition.
Therefore, alternative actions might be appropriate in the
operator's specific circumstances.
An example would be where the
operator knows that certain legacy traffic set to one
codepoint actually has a congestion response associated with
another codepoint.

If the DualQ Coupled AQM has detected overload, it MUST
introduce Classic drop to both types of ECN-capable traffic until
the overload episode has subsided. Introducing drop if ECN marking
is persistently high is recommended in
Section of the ECN spec
and in Section of
the AQM Recommendations .

Management Requirements

Configuration

By default, a DualQ Coupled AQM SHOULD NOT need any
configuration for use at a bottleneck on the public
Internet . The following parameters
MAY be operator-configurable, e.g., to tune for non-Internet
settings:
Optional packet classifier(s) to use in addition to the ECN
field (see ).
Expected typical RTT, which can be used to determine the
queuing delay of the Classic AQM at its operating point, in
order to prevent typical lone flows from underutilizing
capacity. For example:
for the PI2 algorithm (), the queuing delay target is
dependent on the typical RTT.
for the Curvy RED algorithm (), the queuing delay at the desired
operating point of the curvy ramp is configured to
encompass a typical RTT.
if another Classic AQM was used, it would be likely to
need an operating point for the queue based on the typical
RTT, and if so, it SHOULD be expressed in units of
time.
An operating point that is manually calculated might
be directly configurable instead, e.g., for links with
large numbers of flows where underutilization by a single
flow would be unlikely.
Expected maximum RTT, which can be used to set the
stability parameter(s) of the Classic AQM. For example:
for the PI2 algorithm (), the gain parameters of the
PI algorithm depend on the maximum RTT.
for the Curvy RED algorithm (), the smoothing parameter is
chosen to filter out transients in the queue within a
maximum RTT.
Any stability parameter that is manually calculated
assuming a maximum RTT might be directly configurable
instead.
Coupling factor, k (see ).
A limit to the conditional priority of L4S. This is
scheduler-dependent, but it SHOULD be expressed as a relation
between the max delay of a C packet and an L packet. For
example:
for a WRR scheduler, a weight ratio between L and C of
w:1 means that the maximum delay of a C packet is w times
that of an L packet.
for a time-shifted FIFO (TS-FIFO) scheduler (see ), a time-shift of
tshift means that the maximum delay to a C packet is
tshift greater than that of an L packet. tshift could be
expressed as a multiple of the typical RTT rather than as
an absolute delay.
The maximum Classic ECN-marking probability, p_Cmax, before
introducing drop.
Monitoring

An experimental DualQ Coupled AQM SHOULD allow the operator to
monitor each of the following operational statistics on demand,
per queue and per configurable sample interval, for performance
monitoring and perhaps also for accounting in some cases:
bits forwarded, from which utilization can be
calculated;
total packets in the three categories: arrived, presented
to the AQM, and forwarded. The difference between the first
two will measure any non-AQM tail discard. The difference
between the last two will measure proactive AQM discard;
ECN packets marked, non-ECN packets dropped, and ECN packets
dropped, which can be combined with the three total packet
counts above to calculate marking and dropping
probabilities; and
queue delay (not including serialization delay of the head
packet or medium acquisition delay) -- see further notes
below.

Unlike the other statistics,
queue delay cannot be captured in a simple accumulating
counter. Therefore, the type of queue delay statistics
produced (mean, percentiles, etc.) will depend on
implementation constraints. To facilitate comparative
evaluation of different implementations and approaches, an
implementation SHOULD allow mean and 99th percentile queue
delay to be derived (per queue per sample interval). A
relatively simple way to do this would be to store a
coarse-grained histogram of queue delay. This could be done
with a small number of bins with configurable edges that
represent contiguous ranges of queue delay. Then, over a
sample interval, each bin would accumulate a count of the
number of packets that had fallen within each range. The
maximum queue delay per queue per interval MAY also be
recorded, to aid diagnosis of faults and anomalous events.
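The coarse-grained histogram approach suggested above can be sketched in a few lines. This Python sketch is illustrative only: the class name, the bin edges, and the approximation of the mean and percentile by bin upper edges are all choices invented for the example, not requirements:

```python
import bisect

class DelayHistogram:
    """Coarse-grained queue-delay histogram with configurable bin
    edges; counts accumulate per packet over a sample interval."""
    def __init__(self, edges_ms):
        self.edges = edges_ms                    # e.g., [1, 5, 20, 100]
        self.counts = [0] * (len(edges_ms) + 1)  # one overflow bin at the end
        self.max_delay = 0.0                     # optional per-interval maximum

    def record(self, delay_ms):
        self.counts[bisect.bisect_right(self.edges, delay_ms)] += 1
        self.max_delay = max(self.max_delay, delay_ms)

    def percentile(self, q):
        """Approximate q-th percentile, reported as a bin upper edge."""
        target = q / 100.0 * sum(self.counts)
        cum = 0
        for i, c in enumerate(self.counts):
            cum += c
            if cum >= target:
                return self.edges[i] if i < len(self.edges) else self.max_delay
        return self.max_delay

    def mean(self):
        """Approximate mean, representing each bin by its upper edge
        (the overflow bin by the recorded maximum)."""
        reps = self.edges + [self.max_delay]
        total = sum(self.counts)
        return sum(c * r for c, r in zip(self.counts, reps)) / total if total else 0.0
```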
Anomaly Detection

An experimental DualQ Coupled AQM SHOULD asynchronously report
the following data about anomalous conditions:
Start time and duration of overload state.

A hysteresis mechanism SHOULD be used to
prevent flapping in and out of overload causing an event
storm. For instance, exiting from overload state could trigger
one report but also latch a timer. Then, during that time, if
the AQM enters and exits overload state any number of times,
the duration in overload state is accumulated, but no new
report is generated until the first time the AQM is out of
overload once the timer has expired.
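The hysteresis just described can be sketched as a small state machine. This Python sketch is purely illustrative; the class, the hold-down timer name, and the use of a report list are invented for the example:

```python
class OverloadReporter:
    """Reports overload episodes, but latches a hold-down timer on the
    reporting exit so that rapid re-entries accumulate duration
    instead of generating an event storm."""
    def __init__(self, holddown):
        self.holddown = holddown            # latch period after a report
        self.in_overload = False
        self.entered_at = None
        self.latch_until = -float('inf')    # expired initially
        self.accumulated = 0.0              # overload time since last report
        self.reports = []                   # durations reported so far

    def enter(self, now):
        if not self.in_overload:
            self.in_overload = True
            self.entered_at = now

    def exit(self, now):
        if self.in_overload:
            self.in_overload = False
            self.accumulated += now - self.entered_at
            if now >= self.latch_until:
                # first exit after the timer expired: emit one report
                # covering all accumulated overload time, and re-latch
                self.reports.append(self.accumulated)
                self.accumulated = 0.0
                self.latch_until = now + self.holddown
```

Any exits while the timer is still running add to the accumulated duration but generate no report, matching the behaviour described above.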
Deployment, Coexistence, and Scaling

 suggests that deployment, coexistence,
and scaling should also be covered as management requirements. The
raison d'etre of the DualQ Coupled AQM is to enable
deployment and coexistence of Scalable congestion controls (as
incremental replacements for today's Reno-friendly controls that
do not scale with bandwidth-delay product). Therefore, there is no
need to repeat these motivating issues here given they are already
explained in the Introduction and detailed in the L4S
architecture .

The descriptions of specific DualQ Coupled AQM algorithms in the appendices cover scaling of their configuration parameters, e.g., with respect to RTT and sampling frequency.

IANA Considerations

This document has no IANA actions.

Security Considerations

Low Delay without Requiring Per-flow Processing

The L4S architecture
compares the DualQ and FQ approaches to L4S. The
privacy considerations section in that document motivates the DualQ on
the grounds that users who want to encrypt application flow
identifiers, e.g., in IPsec or other encrypted VPN tunnels, don't
have to sacrifice low delay ( encourages
avoidance of such privacy compromises).

The security considerations section of the L4S architecture also
includes subsections on policing of relative flow rates (Section ) and on
policing of flows that cause excessive queuing delay (Section ). It explains
that the interests of users do not collide in the same way for delay
as they do for bandwidth. For someone to get more of the bandwidth of
a shared link, someone else necessarily gets less (a 'zero-sum game'),
whereas queuing delay can be reduced for everyone, without any need
for someone else to lose out. It also explains that, on the current
Internet, scheduling usually enforces separation of bandwidth between
'sites' (e.g., households, businesses, or mobile users), but it is not
common to need to schedule or police the bandwidth used by individual
application flows.

By the above arguments, per-flow rate policing might not be
necessary, and in trusted environments (e.g., private data centres),
it is certainly unlikely to be needed. Therefore, because it is hard
to avoid complexity and unintended side effects with per-flow rate
policing, it needs to be separable from a basic AQM, as an option,
under policy control. On this basis, the DualQ Coupled AQM provides
low delay without prejudging the question of per-flow rate
policing.

Nonetheless, the interests of users or flows might conflict,
e.g., in case of accident or malice. Then per-flow rate control
could be necessary. If per-flow rate control is needed, it can be provided
as a modular addition to a DualQ. And similarly, if protection against
excessive queue delay is needed, a per-flow queue protection option
can be added to a DualQ (e.g., ).

Handling Unresponsive Flows and Overload

In the absence of any per-flow control, it is important that the
basic DualQ Coupled AQM gives unresponsive flows no more throughput
advantage than a single-queue AQM would, and that it at least handles
overload situations. Overload means that incoming load significantly
or persistently exceeds output capacity, but it is not intended to be
a precise term -- significant and persistent are matters of
degree.

A trade-off needs to be made between complexity and the risk of
either traffic class harming the other. In overloaded conditions, the
higher priority L4S service will have to sacrifice some aspect of its
performance. Depending on the degree of overload, alternative
solutions may relax a different factor: for example, throughput, delay,
or drop. These choices need to be made either by the developer or by
operator policy, rather than by the IETF.
Subsequent subsections
discuss handling different degrees of overload:
Unresponsive flows (L and/or C) but not overloaded,
i.e., the sum of unresponsive load before adding any
responsive traffic is below capacity.
This case is handled by the regular Coupled DualQ () but not discussed there. So below,
explains the
design goal and how it is achieved in practice.
Unresponsive flows (L and/or C) causing persistent overload,
i.e., the sum of unresponsive load even before adding any
responsive traffic persistently exceeds capacity.
This case is not covered by the regular Coupled DualQ
mechanism (), but the last paragraph
in sets out a requirement to
handle the case where ECN-capable traffic could starve
non-ECN-capable traffic. below discusses the
general options and gives specific examples.
Short-term overload that lies between the 'not overloaded' and
'persistently overloaded' cases.
For the period before overload is deemed persistent, discusses options for
more immediate mechanisms at the scheduler timescale. These
prevent short-term starvation of the C queue by making the
priority of the L queue conditional, as required in .
Unresponsive Traffic without Overload

When one or more L flows and/or C flows are unresponsive, but
their total load is within the link capacity so that they do not
saturate the coupled marking (below 100%), the goal of a DualQ AQM
is to behave no worse than a single-queue AQM.

Tests have shown that this is indeed the case with no additional
mechanism beyond the regular Coupled DualQ of (see the results of 'overload experiments'
in ). Perhaps counterintuitively, whether
the unresponsive flow classifies itself into the L or the C queue,
the DualQ system behaves as if it has subtracted from the overall
link capacity. Then, the coupling shares out the remaining capacity
between any competing responsive flows (in either queue). See also
, which discusses
scheduler-specific details.

Avoiding Short-Term Classic Starvation: Sacrifice L4S Throughput or Delay?

Priority of L4S is required to be conditional (see Sections and ) to avoid short-term starvation of
Classic. Otherwise, as explained in , even a lone responsive L4S flow
could temporarily block a small finite set of C packets
(e.g., an initial window or DNS request). The blockage would
only be brief, but it could be longer for certain AQM
implementations that can only increase the congestion signal coupled
from the C queue when C packets are actually being dequeued. There
is then the question of whether to sacrifice L4S throughput or L4S
delay (or some other policy) to make the priority conditional:
Sacrifice L4S throughput:
By using WRR as the conditional priority scheduler, the L4S
service can sacrifice some throughput during overload. This can
be thought of as guaranteeing either a minimum throughput
service for Classic traffic or a maximum delay
for a packet at the head of the Classic queue.

The scheduling weight of the Classic queue
should be small (e.g., 1/16). In most traffic scenarios, the
scheduler will not interfere and it will not need to, because
the coupling mechanism and the end systems will determine the
share of capacity across both queues as if it were a single
pool. However, if L4S traffic is over-aggressive or
unresponsive, the scheduler weight for Classic traffic will at
least be large enough to ensure it does not starve in the
short term. Although WRR scheduling is
only expected to address short-term overload, there are
(somewhat rare) cases when WRR has an effect on capacity shares
over longer timescales. But its effect is minor, and it
certainly does no harm. Specifically, in cases where the ratio
of L4S to Classic flows (e.g., 19:1) is greater than the
ratio of their scheduler weights (e.g., 15:1), the L4S flows
will get less than an equal share of the capacity, but only
slightly. For instance, with the example numbers given, each L4S
flow will get (15/16)/19 = 4.9% when ideally each would get
1/20 = 5%. In the rather specific case of an unresponsive flow
taking up just less than the capacity set aside for L4S
(e.g., 14/16 in the above example), using WRR could
significantly reduce the capacity left for any responsive L4S
flows.

The scheduling weight of the
Classic queue should not be too small, otherwise a C packet at
the head of the queue could be excessively delayed by a
continually busy L queue. For instance, if the Classic weight is
1/16, the maximum that a Classic packet at the head of the queue
can be delayed by L traffic is the serialization delay of 15
MTU-sized packets.
Sacrifice L4S delay:
The operator could choose to
control overload of the Classic queue by allowing some delay to
'leak' across to the L4S queue. The scheduler can be made to
behave like a single FIFO queue with
different service times by implementing a very simple
conditional priority scheduler that could be called a
"time-shifted FIFO" (TS-FIFO) (see the Modifier Earliest Deadline First
(MEDF) scheduler ). This scheduler
adds tshift to the queue delay of the next L4S packet, before
comparing it with the queue delay of the next Classic packet,
then it selects the packet with the greater adjusted queue
delay.

Under regular conditions, the
TS-FIFO scheduler behaves just like a strict priority
scheduler. But under moderate or high overload, it prevents
starvation of the Classic queue, because the time-shift (tshift)
defines the maximum extra queuing delay of Classic packets
relative to L4S.
This would control milder overload of
responsive traffic by introducing delay to defer invoking the
overload mechanisms in , particularly when close to
the maximum congestion signal.
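The TS-FIFO selection rule described above is a single comparison per dequeue. A hypothetical Python sketch (the function name and the convention of returning the queue label are invented here; delays are head-of-queue sojourn times in any consistent time unit):

```python
def tsfifo_select(l_head_delay, c_head_delay, tshift):
    """Time-shifted FIFO: add tshift to the queue delay of the next
    L4S packet, compare with the queue delay of the next Classic
    packet, and serve the packet with the greater adjusted delay."""
    if l_head_delay is None:       # L queue empty
        return 'C'
    if c_head_delay is None:       # C queue empty
        return 'L'
    return 'L' if l_head_delay + tshift >= c_head_delay else 'C'
```

Because tshift bounds how much longer a C packet can wait than an L packet, the scheduler degenerates to strict priority when both queues are short, but cannot starve the C queue under overload.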
The example implementations in Appendices
and could both be implemented with
either policy.

L4S ECN Saturation: Introduce Drop or Delay?

This section concerns persistent overload caused by unresponsive
L and/or C flows. To keep the throughput of both L4S and Classic
flows roughly equal over the full load range, a different control
strategy needs to be defined above the point where the L4S AQM
persistently saturates to an ECN marking probability of 100%, leaving
no room to push back the load any harder. L4S ECN marking will
saturate first (assuming the coupling factor k>1), even though
saturation could be caused by the sum of unresponsive traffic in
either or both queues exceeding the link capacity.

The term 'unresponsive' includes cases where a flow becomes
temporarily unresponsive, for instance, a real-time flow that takes
a while to adapt its rate in response to congestion, or a standard
Reno flow that is normally responsive, but above a certain
congestion level it will not be able to reduce its congestion window
below the allowed minimum of 2 segments , effectively becoming unresponsive. (Note that
L4S traffic ought to remain responsive below a window of 2 segments.
See the L4S requirements .)

Saturation raises the question of whether to relieve congestion
by introducing some drop into the L4S queue or by allowing delay to
grow in both queues (which could eventually lead to drop due to
buffer exhaustion anyway):
Drop on Saturation:
Persistent saturation can be
defined by a maximum threshold for coupled L4S ECN marking
(assuming k>1) before saturation starts to make the flow
rates of the different traffic types diverge. Above that, the
drop probability of Classic traffic is applied to all packets of
all traffic types. Then experiments have shown that queuing
delay can be kept at the target in any overload situation,
including with unresponsive traffic, and no further measures are
required ().
Delay on Saturation:
When L4S marking saturates,
instead of introducing L4S drop, the drop and marking
probabilities of both queues could be capped. Beyond that, delay
will grow either solely in the queue with unresponsive traffic
(if WRR is used) or in both queues (if TS-FIFO is
used). In either case, the higher delay ought to control
temporary high congestion. If the overload is more persistent,
eventually the combined DualQ will overflow and tail drop will
control congestion.
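The 'drop on saturation' alternative above can be sketched as follows. This illustrative Python treats p_Lmax as the threshold for persistent saturation of the coupled L4S marking; the function name, the returned dictionary, and the clamping details are invented for the example and vary between implementations:

```python
def apply_signals(p_prime, k=2.0, p_Lmax=1.0):
    """Illustrates 'drop on saturation': once coupled L4S marking
    (k * p') reaches the saturation threshold, the Classic (squared)
    drop probability is applied to packets of all traffic types,
    ECN-capable or not."""
    p_CL = k * p_prime       # coupled L4S marking probability
    p_C = p_prime ** 2       # Classic drop probability
    if p_CL >= p_Lmax:
        # overload: Classic-style drop for everything, keeping the
        # queue at its target in any overload situation
        return {'L_mark': min(p_CL, 1.0), 'L_drop': p_C, 'C_drop': p_C}
    return {'L_mark': p_CL, 'L_drop': 0.0, 'C_drop': p_C}
```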
The example implementation in
solely applies the "drop on saturation" policy. The DOCSIS
specification of a DualQ Coupled AQM
also implements the 'drop on saturation' policy with a very shallow
L buffer. However, the addition of DOCSIS per-flow Queue Protection
turns this into
'delay on saturation' by redirecting some packets of the flow or flows
that are most responsible for L queue overload into the C queue, which has a
higher delay target. If overload continues, this again becomes 'drop
on saturation' as the level of drop in the C queue rises to maintain
the target delay of the C queue.

Protecting against Overload by Unresponsive ECN-Capable Traffic

Without a specific overload mechanism, unresponsive traffic
would have a greater advantage if it were also ECN-capable. The
advantage is undetectable at normal low levels of marking.
However, it would become significant with the higher levels of
marking typical during overload, when it could evade a significant
degree of drop. This is an issue whether the ECN-capable traffic
is L4S or Classic.

This raises the question of whether and when to introduce drop
of ECN-capable traffic, as required by both Section of the ECN spec and Section of the AQM
recommendations .

As an example, experiments with the DualPI2 AQM () have shown that introducing 'drop on
saturation' at 100% coupled L4S marking addresses this problem
with unresponsive ECN, and it also addresses the saturation
problem. At saturation, DualPI2 switches into overload mode, where
the Base AQM is driven by the max delay of both queues, and it
introduces probabilistic drop to both queues equally.
It leaves
only a small range of congestion levels just below saturation
where unresponsive traffic gains any advantage from using the ECN
capability (relative to being unresponsive without ECN), and the
advantage is hardly detectable (see
and section IV-G of ). Also, overload with
an unresponsive ECT(1) flow gets no more bandwidth advantage than
with ECT(0).

References

Normative References

Key words for use in RFCs to Indicate Requirement Levels.

The Addition of Explicit Congestion Notification (ECN) to IP.

Relaxing Restrictions on Explicit Congestion Notification (ECN) Experimentation.

The Explicit Congestion Notification (ECN) Protocol for Low Latency, Low Loss, and Scalable Throughput (L4S).

Informative References

Analysis of DCTCP: Stability, Convergence, and Fairness. SIGMETRICS '11: Proceedings of the ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, pp. 73-84.

A Comparison of Load-based and Queue-based Active Queue Management Algorithms. Purdue University. Proc. Int'l Soc. for Optical Engineering (SPIE), Vol. 4866, pp. 35-46.

Adaptive RED: An Algorithm for Increasing the Robustness of RED's Active Queue Management. ACIRI Technical Report 301.

BBR Congestion Control. Work in Progress.

TCP BBR v2 Alpha/Preview Release. commit 17700ca.

Validating the Sharing Behavior and Latency Characteristics of the L4S Architecture. Karlstad University. ACM SIGCOMM Computer Communication Review, Vol. 50, Issue 2, pp. 37-44.

The Great Internet TCP Congestion Control Census. Proceedings of the ACM on Measurement and Analysis of Computing Systems, Vol. 3, Issue 3, Article No. 45, pp. 1-24.

Controlling Queue Delay. PARC and Pollere Inc. ACM Queue, Vol. 10, Issue 5.

Insights from Curvy RED (Random Early Detection). BT Technical Report, TR-TUB8-2015-003.

The DOCSIS® Queue Protection to Preserve Low Latency. Work in Progress.

DOCSIS 3.1 MAC and Upper Layer Protocols Interface Specification. CableLabs. CM-SP-MULPIv3.1, Data-Over-Cable Service Interface Specifications DOCSIS 3.1, Version I17 or later.

DUALPI2 - Low Latency, Low Loss and Scalable (L4S) AQM.

Destruction Testing: Ultra-Low Delay using Dual Queue Coupled Active Queue Management. Master's Thesis, Department of Informatics, University of Oslo.

Why Flow-Completion Time is the Right Metric for Congestion Control. Stanford University. ACM SIGCOMM Computer Communication Review, Vol. 36, Issue 1, pp. 59-62.

L4S Tests. commit e21cd91.

Interactions between Low Latency, Low Loss, Scalable Throughput (L4S) and Differentiated Services. CableLabs. Work in Progress.

Ultra-Low Delay for All: Live Experience, Live Analysis. Proceedings of the 7th International Conference on Multimedia Systems, Article No. 33, pp. 1-4.

Dual Queue Coupled AQM: Deployable Very Low Queuing Delay for All. Preprint submitted to IEEE/ACM Transactions on Networking.

Enabling time-critical applications over 5G with rate adaptation. Ericsson - Deutsche Telekom White Paper, BNEW-21:025455.

Internet Inter-Domain Traffic. Arbor Networks and University of Michigan. ACM SIGCOMM Computer Communication Review, Vol. 40, Issue 4, pp. 75-86.

Low Latency DOCSIS: Technology Overview. CableLabs White Paper.

MEDF - A Simple Scheduling Algorithm for Two Real-Time Transport Service Classes with Application in the UTRAN. Proc. IEEE Conference on Computer Communications (INFOCOM'03), Vol. 2, pp. 1116-1122.

PI2: A Linearized AQM for both Classic and Scalable TCP. ACM CoNEXT'16.

PI2 Parameters. Technical Report, TR-BB-2021-001, arXiv:2107.01003 [cs.NI].

Prague Congestion Control. Work in Progress.

Implementing the 'TCP Prague' Requirements for L4S. Proceedings of Linux Netdev 0x13.

Random Early Detection Gateways for Congestion Avoidance. UC Berkeley. IEEE/ACM Transactions on Networking, Vol. 1, Issue 4, pp. 397-413.

Relentless Congestion Control. Pittsburgh Supercomputing Center.
are based on the idea that the Internet can attain sufficient
fairness by having relatively simple network devices send uniform
congestion signals to all flows, and mandating that all protocols
have equivalent responses to these congestion signals.
To function appropriately in a shared environment, Relentless
Congestion Control requires that the network allocates capacity
through some technique such as Fair Queuing, Approximate Fair
Dropping, etc. The salient features of these algorithms are that
they segregate the traffic into distinct flows, and send different
congestion signals to each flow. This alternative congestion control
paradigm is described in a separate document, also under
consideration by the ICCRG.
The goal of the document is to illustrate some new protocol features
and properties that might be possible if we relax the "TCP-friendly"
mandate. A secondary goal of Relentless TCP is to make a distinction
between the bottlenecks that belong to the protocol itself, vs. standard
congestion control and the "TCP-friendly" paradigm.
Work in ProgressOn Packet Switches With Infinite StorageThe purpose of this RFC is to focus discussion on a particular problem in the ARPA-Internet and possible methods of solution. Most prior work on congestion in datagram systems focuses on buffer management. In this memo the case of a packet switch with infinite storage is considered. Such a packet switch can never run out of buffers. It can, however, still become congested. The meaning of congestion in an infinite-storage system is explored. An unexpected result is found that shows a datagram network with infinite storage, first-in-first-out queuing, at least two packet switches, and a finite packet lifetime will, under overload, drop all packets. By attacking the problem of congestion for the infinite-storage case, new solutions applicable to switches with finite storage may be found. No proposed solutions in this document are intended as standards for the ARPA-Internet at this time.Congestion Control PrinciplesThe goal of this document is to explain the need for congestion control in the Internet, and to discuss what constitutes correct congestion control. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.An Expedited Forwarding PHB (Per-Hop Behavior)This document defines a PHB (per-hop behavior) called Expedited Forwarding (EF). The PHB is a basic building block in the Differentiated Services architecture. EF is intended to provide a building block for low delay, low jitter and low loss services by ensuring that the EF aggregate is served at a certain configured rate. This document obsoletes RFC 2598. [STANDARDS-TRACK]HighSpeed TCP for Large Congestion WindowsThe proposals in this document are experimental. While they may be deployed in the current Internet, they do not represent a consensus that this is the best method for high-speed congestion control. 
In particular, we note that alternative experimental proposals are likely to be forthcoming, and it is not well understood how the proposals in this document will interact with such alternative proposals. This document proposes HighSpeed TCP, a modification to TCP's congestion control mechanism for use with TCP connections with large congestion windows. The congestion control mechanisms of the current Standard TCP constrains the congestion windows that can be achieved by TCP in realistic environments. For example, for a Standard TCP connection with 1500-byte packets and a 100 ms round-trip time, achieving a steady-state throughput of 10 Gbps would require an average congestion window of 83,333 segments, and a packet drop rate of at most one congestion event every 5,000,000,000 packets (or equivalently, at most one congestion event every 1 2/3 hours). This is widely acknowledged as an unrealistic constraint. To address this limitation of TCP, this document proposes HighSpeed TCP, and solicits experimentation and feedback from the wider community.Specifying New Congestion Control AlgorithmsThe IETF's standard congestion control schemes have been widely shown to be inadequate for various environments (e.g., high-speed networks). Recent research has yielded many alternate congestion control schemes that significantly differ from the IETF's congestion control principles. Using these new congestion control schemes in the global Internet has possible ramifications to both the traffic using the new congestion control and to traffic using the currently standardized congestion control. Therefore, the IETF must proceed with caution when dealing with alternate congestion control proposals. The goal of this document is to provide guidance for considering alternate congestion control algorithms within the IETF. 
This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.TCP Friendly Rate Control (TFRC): Protocol SpecificationThis document specifies TCP Friendly Rate Control (TFRC). TFRC is a congestion control mechanism for unicast flows operating in a best-effort Internet environment. It is reasonably fair when competing for bandwidth with TCP flows, but has a much lower variation of throughput over time compared with TCP, making it more suitable for applications such as streaming media where a relatively smooth sending rate is of importance.This document obsoletes RFC 3448 and updates RFC 4342. [STANDARDS-TRACK]TCP Congestion ControlThis document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods. This document obsoletes RFC 2581. [STANDARDS-TRACK]Guidelines for Considering Operations and Management of New Protocols and Protocol ExtensionsNew protocols or protocol extensions are best designed with due consideration of the functionality needed to operate and manage the protocols. Retrofitting operations and management is sub-optimal. The purpose of this document is to provide guidance to authors and reviewers of documents that define new protocols or protocol extensions regarding aspects of operations and management that should be considered. This memo provides information for the Internet community.IETF Recommendations Regarding Active Queue ManagementThis memo presents recommendations to the Internet community concerning measures to improve and preserve Internet performance. 
It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management (AQM) in network devices to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of AQM mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.Based on 15 years of experience and new research, this document replaces the recommendations of RFC 2309.Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat ProblemBufferbloat is a phenomenon in which excess buffers in the network cause high latency and latency variation. As more and more interactive applications (e.g., voice over IP, real-time video streaming, and financial transactions) run in the Internet, high latency and latency variation degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and latency variation, and hence provide desirable quality of service to users.This document presents a lightweight active queue management design called "PIE" (Proportional Integral controller Enhanced) that can effectively control the average queuing latency to a target value. Simulation results, theoretical analysis, and Linux testbed results have shown that PIE can ensure low latency and achieve high link utilization under various congestion situations. The design does not require per-packet timestamps, so it incurs very little overhead and is simple enough to implement in both hardware and software.Active Queue Management (AQM) Based on Proportional Integral Controller Enhanced (PIE) for Data-Over-Cable Service Interface Specifications (DOCSIS) Cable ModemsCable modems based on Data-Over-Cable Service Interface Specifications (DOCSIS) provide broadband Internet access to over one hundred million users worldwide. 
In some cases, the cable modem connection is the bottleneck (lowest speed) link between the customer and the Internet. As a result, the impact of buffering and bufferbloat in the cable modem can have a significant effect on user experience. The CableLabs DOCSIS 3.1 specification introduces requirements for cable modems to support an Active Queue Management (AQM) algorithm that is intended to alleviate the impact that buffering has on latency-sensitive traffic, while preserving bulk throughput performance. In addition, the CableLabs DOCSIS 3.0 specifications have also been amended to contain similar requirements. This document describes the requirements on AQM that apply to DOCSIS equipment, including a description of the "DOCSIS-PIE" algorithm that is required on DOCSIS 3.1 cable modems.Ambiguity of Uppercase vs Lowercase in RFC 2119 Key WordsRFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.Data Center TCP (DCTCP): TCP Congestion Control for Data CentersThis Informational RFC describes Data Center TCP (DCTCP): a TCP congestion control scheme for data-center traffic. DCTCP extends the Explicit Congestion Notification (ECN) processing to estimate the fraction of bytes that encounter congestion rather than simply detecting that some congestion has occurred. DCTCP then scales the TCP congestion window based on this estimate. This method achieves high-burst tolerance, low latency, and high throughput with shallow-buffered switches. This memo also discusses deployment issues related to the coexistence of DCTCP and conventional TCP, discusses the lack of a negotiating mechanism between sender and receiver, and presents some possible mitigations. This memo documents DCTCP as currently implemented by several major operating systems. 
DCTCP, as described in this specification, is applicable to deployments in controlled environments like data centers, but it must not be deployed over the public Internet without additional measures.The Flow Queue CoDel Packet Scheduler and Active Queue Management AlgorithmThis memo presents the FQ-CoDel hybrid packet scheduler and Active Queue Management (AQM) algorithm, a powerful tool for fighting bufferbloat and reducing latency.FQ-CoDel mixes packets from multiple flows and reduces the impact of head-of-line blocking from bursty traffic. It provides isolation for low-rate traffic such as DNS, web, and videoconferencing traffic. It improves utilisation across the networking fabric, especially for bidirectional traffic, by keeping queue lengths short, and it can be implemented in a memory- and CPU-efficient fashion across a wide range of hardware.Self-Clocked Rate Adaptation for MultimediaThis memo describes a rate adaptation algorithm for conversational media services such as interactive video. The solution conforms to the packet conservation principle and uses a hybrid loss-and-delay- based congestion control algorithm. The algorithm is evaluated over both simulated Internet bottleneck scenarios as well as in a Long Term Evolution (LTE) system simulator and is shown to achieve both low latency and high video throughput in these scenarios.CUBIC for Fast Long-Distance NetworksCUBIC is an extension to the current TCP standards. It differs from the current TCP standards only in the congestion control algorithm on the sender side. In particular, it uses a cubic function instead of a linear window increase function of the current TCP standards to improve scalability and stability under fast and long-distance networks. CUBIC and its predecessor algorithm have been adopted as defaults by Linux and have been used for many years. 
This document provides a specification of CUBIC to enable third-party implementations and to solicit community feedback through experimentation on the performance of CUBIC.Effects of Pervasive Encryption on OperatorsPervasive monitoring attacks on the privacy of Internet users are of serious concern to both user and operator communities. RFC 7258 discusses the critical need to protect users' privacy when developing IETF specifications and also recognizes that making networks unmanageable to mitigate pervasive monitoring is not an acceptable outcome: an appropriate balance is needed. This document discusses current security and network operations as well as management practices that may be impacted by the shift to increased use of encryption to help guide protocol development in support of manageable and secure networks.QUIC: A UDP-Based Multiplexed and Secure TransportThis document defines the core of the QUIC transport protocol. QUIC provides applications with flow-controlled streams for structured communication, low-latency connection establishment, and network path migration. QUIC includes security measures that ensure confidentiality, integrity, and availability in a range of deployment circumstances. Accompanying documents describe the integration of TLS for key negotiation, loss detection, and an exemplary congestion control algorithm.Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: ArchitectureSCReAMcommit fda6c53Rapid Signalling of Queue DynamicsTechnical Report, TR-BB-2017-001

Example DualQ Coupled PI2 Algorithm

As a first concrete example, the pseudocode below gives the DualPI2
algorithm. DualPI2 follows the structure of the DualQ Coupled AQM
framework in . A simple ramp
function (configured in units of queuing time) with unsmoothed ECN
marking is used for the Native L4S AQM. The ramp can also be configured
as a step function. The PI2 algorithm is used
for the Classic AQM. PI2 is an improved variant of the PIE
AQM .The pseudocode will be introduced in two passes. The first pass
explains the core concepts, deferring handling of edge-cases like
overload to the second pass. To aid comparison, line numbers are kept in
step between the two passes by using letter suffixes where the longer
code needs extra lines.All variables are assumed to be floating point in their basic units
(size in bytes, time in seconds, rates in bytes/second, alpha and beta
in Hz, and probabilities from 0 to 1). Constants expressed in k (kilo), M
(mega), G (giga), u (micro), m (milli), %, and so forth, are assumed to be
converted to their appropriate multiple or fraction to represent the
basic units. A real implementation that wants to use integer values
needs to handle appropriate scaling factors and allow
appropriate resolution of its integer types (including temporary
internal values during calculations).A full open source implementation for Linux is available at
and explained in . The specification of the DualQ Coupled AQM for
DOCSIS cable modems and cable modem termination systems (CMTSs) is available in
and explained in .

Pass #1: Core Concepts

The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq), and the Classic queue (cq). The
pseudocode consists of the following six functions:

- The initialization function dualpi2_params_init(...) () that sets
  parameter defaults (the API for setting non-default values is
  omitted for brevity).
- The enqueue function dualpi2_enqueue(lq, cq, pkt) ().
- The dequeue function dualpi2_dequeue(lq, cq, pkt) ().
- The recurrence function recur(q, likelihood) for de-randomized
  ECN marking (shown at the end of ).
- The L4S AQM function laqm(qdelay) () used to calculate the
  ECN-marking probability for the L4S queue.
- The Base AQM function dualpi2_update(lq, cq) () that implements
  the PI algorithm, used to regularly update the base probability
  (p'), which is squared for the Classic AQM as well as being
  coupled across to the L4S queue.
It also uses the following functions that are not shown in full here:

- scheduler(), which selects between the head packets of the two
  queues. The choice of scheduler technology is discussed later.
- cq.byt() or lq.byt(), which returns the current length (a.k.a.
  backlog) of the relevant queue in bytes.
- cq.len() or lq.len(), which returns the current length of the
  relevant queue in packets.
- cq.time() or lq.time(), which returns the current queuing delay of
  the relevant queue in units of time (see Note a below).
- mark(pkt) and drop(pkt) for ECN marking and dropping a packet.
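Before the detailed walk-through, the update-and-couple step at the heart of dualpi2_update() can be sketched as follows. This is an illustrative simplification, not the normative pseudocode: the function signature, the placement of the [0,1] clamp, and the min() applied to the coupled probability are choices made here to keep the example self-contained, and the constants follow the defaults discussed later in the text (alpha = 0.16 Hz, beta = 3.2 Hz, k = 2, target = 15 ms).

```python
# Illustrative sketch of the core of dualpi2_update(): a PI controller
# on Classic queuing delay, whose output p' is squared for Classic
# traffic and scaled by k to produce the coupled L4S probability.
ALPHA, BETA, K, TARGET = 0.16, 3.2, 2, 15e-3  # defaults from the text

def dualpi2_update(p_prime, curq, prevq,
                   alpha=ALPHA, beta=BETA, k=K, target=TARGET):
    """One Tupdate step: returns (p_prime, p_CL, p_C)."""
    # Integral term: error vs. target queuing delay.
    # Proportional term: growth of the queue since the last sample.
    p_prime += alpha * (curq - target) + beta * (curq - prevq)
    p_prime = min(max(p_prime, 0.0), 1.0)   # clamp to [0,1]
    p_CL = min(k * p_prime, 1.0)            # coupled L4S probability
    p_C = p_prime ** 2                      # Classic probability
    return p_prime, p_CL, p_C
```

Each call represents one Tupdate interval; the squaring for Classic traffic and the k-scaling for the coupled signal are what allow a single base probability p' to drive both queues.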
In experiments so far (building on experiments with PIE) on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, DualPI2 achieves good results with the default
parameters in . The
parameters are categorised by whether they relate to the PI2 AQM,
the L4S AQM, or the framework coupling them together. Constants and
variables derived from these parameters are also included at the end
of each category. Each parameter is explained as it is encountered in
the walk-through of the pseudocode below, and the rationale for the
chosen defaults is given so that sensible values can be used in
scenarios other than the regular public Internet.The overall goal of the code is to apply the marking and dropping
probabilities for L4S and Classic traffic (p_L and p_C). These are
derived from the underlying base probabilities p'_L and p' driven,
respectively, by the traffic in the L and C queues. The marking
probability for the L queue (p_L) depends on both the base probability
in its own queue (p'_L) and a probability called p_CL, which is
coupled across from p' in the C queue (see for the derivation of the specific
equations and dependencies).The probabilities p_CL and p_C are derived in lines 4 and 5 of the
dualpi2_update() function ()
then used in the dualpi2_dequeue() function where p_L is also derived
from p_CL at line 6 (). The
code walk-through below builds up to explaining that part of the code
eventually, but it starts from packet arrival.When packets arrive, a common queue limit is checked first as shown
in line 2 of the enqueuing pseudocode in . This assumes a shared buffer
for the two queues (Note b discusses the merits of separate buffers).
In order to avoid any bias against larger packets, 1 MTU of space is
always allowed, and the limit is deliberately tested before
enqueue.If limit is not exceeded, the packet is timestamped in line 4 (only
if the sojourn time technique is being used to measure queue delay;
see Note a below for alternatives).At lines 5-9, the packet is classified and enqueued to the Classic
or L4S queue dependent on the least significant bit (LSB) of the ECN field
in the IP header (line 6). Packets with a codepoint having an LSB of 0
(Not-ECT and ECT(0)) will be enqueued in the Classic queue. Otherwise,
ECT(1) and CE packets will be enqueued in the L4S queue. Optional
additional packet classification flexibility is omitted for brevity
(see the L4S ECN protocol ).The dequeue pseudocode () is repeatedly called whenever
the lower layer is ready to forward a packet. It schedules one packet
for dequeuing (or zero if the queue is empty) then returns control to
the caller so that it does not block while that packet is being
forwarded. While making this dequeue decision, it also makes the
necessary AQM decisions on dropping or marking. The alternative of
applying the AQMs at enqueue would shift some processing from the
critical time when each packet is dequeued. However, it would also add
a whole queue of delay to the control signals, making the control loop
sloppier (for a typical RTT, it would double the Classic queue's
feedback delay).All the dequeue code is contained within a large while loop so that
if it decides to drop a packet, it will continue until it selects a
packet to schedule. Line 3 of the dequeue pseudocode is where the
scheduler chooses between the L4S queue (lq) and the Classic queue
(cq). Detailed implementation of the scheduler is not shown (see
discussion later).
- If an L4S packet is scheduled, in lines 7 and 8 the packet is
  ECN-marked with likelihood p_L. The recur() function at the end of
  is used, which is preferred over random marking because it avoids
  delay due to randomization when interpreting congestion signals,
  but it still desynchronizes the sawteeth of the flows. Line 6
  calculates p_L as the maximum of the coupled L4S probability p_CL
  and the probability from the Native L4S AQM p'_L. This implements
  the max() function shown in to couple the outputs of the two AQMs
  together. Of the two probabilities input to p_L in line 6:
  - p'_L is calculated per packet in line 5 by the laqm() function
    (see ), whereas
  - p_CL is maintained by the dualpi2_update() function, which runs
    every Tupdate (Tupdate is set in line 12 of ).
- If a Classic packet is scheduled, lines 10 to 17 drop or mark the
  packet with probability p_C.
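The de-randomized marking step just described can be sketched as follows. This is an illustrative rendering of the recur() accumulator logic and the max() coupling, not the normative pseudocode; QState and l4s_mark_decision are names invented for this example.

```python
# Illustrative sketch: de-randomized ECN marking via an accumulator,
# plus the max() coupling that forms p_L.

class QState:
    def __init__(self):
        self.count = 0.0  # accumulated marking 'credit'

def recur(q, likelihood):
    """Return True roughly once every 1/likelihood packets,
    deterministically rather than at random."""
    q.count += likelihood
    if q.count > 1.0:
        q.count -= 1.0
        return True
    return False

def l4s_mark_decision(q, p_prime_L, p_CL):
    # p_L is the max of the Native L4S probability and the
    # probability coupled across from the Classic queue.
    p_L = max(p_prime_L, p_CL)
    return recur(q, p_L)
```

Because the accumulator carries over between packets, marks are spaced evenly at the requested likelihood instead of clustering at random, which is what avoids adding randomization delay to the congestion signal.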
The Native L4S AQM algorithm () is a ramp function, similar to
the RED algorithm, but simplified as follows:

- The extent of the ramp is defined in units of queuing delay, not
  bytes, so that configuration remains invariant as the queue
  departure rate varies.
- It uses instantaneous queuing delay, which avoids the complexity
  of smoothing, but also avoids embedding a worst-case RTT of
  smoothing delay in the network (see ).
- The ramp rises linearly directly from 0 to 1, not to an
  intermediate value of p'_L as RED would, because there is no need
  to keep ECN-marking probability low.
- Marking does not have to be randomized. Determinism is used
  instead of randomness to reduce the delay necessary to smooth out
  the noise of randomness from the signal.
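Such a ramp can be sketched as below. The minTh and range values here are hypothetical placeholders chosen only to make the example concrete; the real defaults are set in the initialization function, which is not reproduced here.

```python
# Illustrative sketch of the Native L4S ramp AQM. MIN_TH and RANGE are
# assumed example values (in seconds), NOT the specification defaults.
MIN_TH = 800e-6   # minTh: assumed example threshold
RANGE  = 400e-6   # range: assumed example ramp width

def laqm(qdelay, min_th=MIN_TH, rng=RANGE):
    """ECN-marking probability rises linearly from 0 at min_th to 1 at
    (min_th + rng), with the ramp defined in units of queuing time."""
    if qdelay >= min_th + rng:
        return 1.0
    if qdelay > min_th:
        return (qdelay - min_th) / rng
    return 0.0
```

Setting the range to zero (or near zero) degenerates the ramp into the step function mentioned in the text.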
The ramp function requires two configuration parameters, the
minimum threshold (minTh) and the width of the ramp (range), both in
units of queuing time, as shown in lines 17 and 18 of the
initialization function in . The ramp function can be
configured as a step (see Note c).Although the DCTCP paper
recommends an ECN-marking threshold of 0.17*RTT_typ, it also shows
that the threshold can be much shallower with hardly any worse
underutilization of the link (because the amplitude of DCTCP's
sawteeth is so small). Based on extensive experiments, for the public
Internet the default minimum ECN-marking threshold (target) in is considered a good
compromise, even though it is a significantly smaller fraction of
RTT_typ.(Note: Clamping p' within the range [0,1] omitted for clarity -- see below.)The coupled marking probability p_CL depends on the base
probability (p'), which is kept up to date by executing the core PI algorithm in
every Tupdate.Note that p' solely depends on the queuing time in the Classic
queue. In line 2, the current queuing delay (curq) is evaluated from
how long the head packet was in the Classic queue (cq). The function
cq.time() (not shown) subtracts the time stamped at enqueue from the
current time (see Note a below) and implicitly takes the current queuing
delay as 0 if the queue is empty.The algorithm centres on line 3, which is a classical
PI controller that alters p' dependent on: a)
the error between the current queuing delay (curq) and the target
queuing delay (target) and b) the change in queuing delay since the
last sample. The name 'PI' represents the fact that the second factor
(how fast the queue is growing) is Proportional
to load while the first is the Integral of
the load (so it removes any standing queue in excess of the
target).The target parameter can be set based on local knowledge, but the
aim is for the default to be a good compromise for anywhere in the
intended deployment environment -- the public Internet. According
to , the target queuing delay on line 8 of
is related to the
typical base RTT worldwide, RTT_typ, by two factors: target = RTT_typ
* g * f. Below, we summarize the rationale behind these factors and
introduce a further adjustment. The two factors ensure that, in a
large proportion of cases (say 90%), the sawtooth variations in RTT of
a single flow will fit within the buffer without underutilizing the
link. Frankly, these factors are educated guesses, but with the
emphasis closer to 'educated' than to 'guess' (see for the full background):
- RTT_typ is taken as 25 ms. This is based on an average CDN latency
  measured in each country weighted by the number of Internet users
  in that country to produce an overall weighted average for the
  Internet . Countries were ranked by number of Internet users, and
  once 90% of Internet users were covered, smaller countries were
  excluded to avoid small sample sizes that would be less
  representative. Also, importantly, the data for the average CDN
  latency in China (with the largest number of Internet users) has
  been removed, because the CDN latency was a significant outlier
  and, on reflection, the experimental technique seemed
  inappropriate to the CDN market in China.
- g is taken as 0.38. The factor g is a geometry factor that
  characterizes the shape of the sawteeth of prevalent Classic
  congestion controllers. The geometry factor is the fraction of the
  amplitude of the sawtooth variability in queue delay that lies
  below the AQM's target. For instance, at low bitrates, the
  geometry factor of standard Reno is 0.5, but at higher rates, it
  tends towards just under 1. According to the census of congestion
  controllers conducted by Mishra et al. in Jul-Oct 2019 , most
  Classic TCP traffic uses CUBIC. And, according to the analysis in
  , if running over a PI2 AQM, a large proportion of this CUBIC
  traffic would be in its Reno-friendly mode, which has a geometry
  factor of ~0.39 (for all known implementations). The rest of the
  CUBIC traffic would be in true CUBIC mode, which has a geometry
  factor of ~0.36. Without modelling the sawtooth profiles from all
  the other less prevalent congestion controllers, we estimate a 7:3
  weighted average of these two, resulting in an average geometry
  factor of 0.38.
- f is taken as 2. The factor f is a safety factor that increases
  the target queue to allow for the distribution of RTT_typ around
  its mean. Otherwise, the target queue would only avoid
  underutilization for those users below the mean. It also provides
  a safety margin for the proportion of paths in use that span
  beyond the distance between a user and their local CDN. Currently,
  no data is available on the variance of queue delay around the
  mean in each region, so there is plenty of room for this guess to
  become more educated.
recommends target = RTT_typ * g * f =
25 ms * 0.38 * 2 = 19 ms. However, a further adjustment is
warranted, because target is moving year-on-year.
The paper is
based on data collected in 2019, and it mentions evidence from the Speedtest Global Index
that suggests RTT_typ reduced by 17% (fixed) or 12%
(mobile) between 2020 and 2021. Therefore, we recommend a default
of target = 15 ms at the time of writing (2021).
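The arithmetic behind these recommendations can be checked directly (all values are taken from the text above):

```python
# Worked example of the target derivation, in seconds.
RTT_TYP = 25e-3   # typical base RTT: 25 ms
G = 0.38          # sawtooth geometry factor
F = 2             # safety factor

target = RTT_TYP * G * F   # RTT_typ * g * f = 19 ms
# A further year-on-year reduction in RTT_typ then motivates the
# recommended default at the time of writing (2021):
recommended_default = 15e-3
```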
Operators can always use the data and discussion in to configure a more appropriate target for their
environment. For instance, an operator might wish to question the
assumptions called out in that paper, such as the goal of no
underutilization for a large majority of single flow transfers (given
many large transfers use multiple flows to avoid the scaling
limitations of Classic flows).The two 'gain factors' in line 3 of , alpha and beta, respectively
weight how strongly each of the two elements (Integral and
Proportional) alters p'. They are in units of 'per second of delay' or
Hz, because they transform differences in queuing delay into changes
in probability (assuming probability has a value from 0 to 1).Alpha and beta determine how much p' ought to change after each
update interval (Tupdate). For a smaller Tupdate, p' should change by
the same amount per second but in finer more frequent steps. So alpha
depends on Tupdate (see line 13 of the initialization function in
). It is best to update
p' as frequently as possible, but Tupdate will probably be constrained
by hardware performance. As shown in line 12, the update interval
should be frequent enough to update at least once in the time taken
for the target queue to drain ('target') as long as it updates at
least three times per maximum RTT. Tupdate defaults to 16 ms in the
reference Linux implementation because it has to be rounded to a
multiple of 4 ms. For link rates from 4 to 200 Mb/s and a maximum RTT
of 100 ms, it has been verified through extensive testing that
Tupdate = 16 ms (as also recommended in the PIE spec ) is sufficient.The choice of alpha and beta also determines the AQM's stable
operating range. The AQM ought to change p' as fast as possible in
response to changes in load without overcompensating and therefore
causing oscillations in the queue. Therefore, the values of alpha and
beta also depend on the RTT of the expected worst-case flow
(RTT_max).The maximum RTT of a PI controller (RTT_max in line 9 of ) is not an absolute maximum,
but more instability (more queue variability) sets in for long-running
flows with an RTT above this value. The propagation delay halfway
round the planet and back in glass fibre is 200 ms. However, hardly
any traffic traverses such extreme paths and, since the significant
consolidation of Internet traffic between 2007 and 2009 , a high and growing proportion of all Internet
traffic (roughly two-thirds at the time of writing) has been served
from CDNs or 'cloud' services
distributed close to end users. The Internet might change again, but
for now, designing for a maximum RTT of 100 ms is a good compromise
between faster queue control at low RTT and some instability on the
occasions when a longer path is necessary.Recommended derivations of the gain constants alpha and beta can be
approximated for Reno over a PI2 AQM as:
alpha = 0.1 * Tupdate / RTT_max^2;
beta = 0.3 / RTT_max,
as shown in lines 13 and 14 of
. These are derived
from the stability analysis in . For the default
values of Tupdate = 16 ms and RTT_max = 100 ms, they result in alpha =
0.16; beta = 3.2 (discrepancies are due to rounding). These defaults
have been verified with a wide range of link rates, target delays, and
traffic models with mixed and similar RTTs, short and long
flows, etc.

In corner cases, p' can overflow the range [0,1] so the resulting
value of p' has to be bounded (omitted from the pseudocode). Then, as
already explained, the coupled and Classic probabilities are derived
from the new p' in lines 4 and 5 of as p_CL = k*p' and p_C = p'^2.

Because the coupled L4S marking probability (p_CL) is factored up
by k, the dynamic gain parameters alpha and beta are also inherently
factored up by k for the L4S queue. So, the effective gain factor for
the L4S queue is k*alpha (with defaults alpha = 0.16 Hz and k = 2,
effective L4S alpha = 0.32 Hz).

Unlike in PIE , alpha and beta do not
need to be tuned every Tupdate dependent on p'. Instead, in PI2, alpha
and beta are independent of p' because the squaring applied to Classic
traffic tunes them inherently. This is explained in , which also explains why this more principled approach
removes the need for most of the heuristics that had to be added to
PIE.

Nonetheless, an implementer might wish to add selected details to
either AQM. For instance, the Linux reference DualPI2 implementation
includes the following (not shown in the pseudocode above):
Classic and coupled marking or dropping (i.e., based on p_C
and p_CL from the PI controller) is not applied to a packet if the
aggregate queue length in bytes is < 2 MTU (prior to enqueuing
the packet or dequeuing it, depending on whether the AQM is
configured to be applied at enqueue or dequeue); and
in the WRR scheduler, the 'credit' indicating which queue
should transmit is only changed if there are packets in both
queues (i.e., if there is actual resource contention). This
means that a properly paced L flow might never be delayed by the
WRR. The WRR credit is reset in favour of the L queue when the
link is idle.
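In outline, these two heuristics might look as follows. This is a hedged sketch only: the function names, the quantum accounting, and the 1500-byte MTU are assumptions, not the Linux reference code.

```python
# Illustrative sketch of the two Linux DualPI2 heuristics described above.
# Names and accounting details are assumptions, not the reference code.
MTU = 1500  # bytes; assumed link MTU

def apply_coupled_aqm(aggregate_queue_bytes):
    """Suppress Classic/coupled marking or dropping while the aggregate
    queue holds less than 2 MTU of data."""
    return aggregate_queue_bytes >= 2 * MTU

def wrr_credit_update(credit, l_empty, c_empty, l_quantum, c_quantum):
    """Return the next WRR credit: it only changes when both queues hold
    packets (actual resource contention), and it is reset in favour of
    the L queue when the link is idle."""
    if l_empty and c_empty:
        return l_quantum                 # idle link: reset favouring L
    if l_empty or c_empty:
        return credit                    # no contention: leave unchanged
    # Contention: charge whichever queue the credit currently favours
    # (illustrative accounting only).
    return credit - c_quantum if credit > 0 else credit + l_quantum
```

Under this scheme, a properly paced L flow never accumulates an unfavourable credit while the Classic queue is empty.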
An implementer might also wish to add other heuristics,
e.g., burst protection or enhanced
burst protection .

Notes:
The drain rate of the queue can vary
if it is scheduled relative to other queues or if it accommodates
fluctuations in a wireless medium. To auto-adjust to changes in
drain rate, the queue needs to be measured in time, not bytes or
packets .
Queuing delay could be measured directly as the sojourn time (a.k.a.
service time) of the queue by storing a per-packet timestamp as
each packet is enqueued and subtracting it from the system time
when the packet is dequeued. If timestamping is not easy to
introduce with certain hardware, queuing delay could be predicted
indirectly by dividing the size of the queue by the predicted
departure rate, which might be known precisely for some link
technologies (see, for example, DOCSIS PIE ). However, sojourn time is slow to detect bursts.
For instance, if a burst arrives at an empty queue, the sojourn
time only fully measures the burst's delay when its last packet is
dequeued, even though the queue has known the size of the burst
since its last packet was enqueued -- so it could have signalled
congestion earlier. To remedy this, each head packet can be marked
when it is dequeued based on the expected delay of the tail packet
behind it, as explained below, rather than based on the head
packet's own delay due to the packets in front of it. "Underutilization with Bursty Traffic" in identifies a specific scenario where bursty
traffic significantly hits utilization of the L queue. If this
effect proves to be more widely applicable, using the delay behind
the head could improve performance.

The
delay behind the head can be implemented by dividing the backlog
at dequeue by the link rate or equivalently multiplying the
backlog by the delay per unit of backlog. The implementation
details will depend on whether the link rate is known; if it is
not, a moving average of the delay per unit backlog can be
maintained. This delay consists of serialization as well as media
acquisition for shared media. So the details will depend strongly
on the specific link technology. This approach should be less
sensitive to timing errors and cost less in operations and memory
than the otherwise equivalent 'scaled sojourn time' metric, which
is the sojourn time of a packet scaled by the ratio of the queue
sizes when the packet departed and arrived .
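A minimal sketch of this 'delay behind the head' estimate, under the assumptions just described (the function name and arguments are illustrative, not part of any reference implementation):

```python
def delay_behind_head(backlog_bytes, link_rate_bps=None,
                      avg_delay_per_byte=None):
    """Predict the queuing delay (seconds) of the tail packet from the
    backlog at dequeue, rather than from the head packet's sojourn time.

    If the link rate is known, divide the backlog by it; otherwise fall
    back to a moving average of delay per unit of backlog, which would
    be maintained elsewhere."""
    if link_rate_bps is not None:
        return backlog_bytes * 8 / link_rate_bps
    if avg_delay_per_byte is None:
        raise ValueError("need a link rate or a delay-per-byte average")
    return backlog_bytes * avg_delay_per_byte
```

For example, a 125000-byte backlog on a 100 Mb/s link predicts 10 ms of delay.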
Line 2 of the dualpi2_enqueue() function () assumes an implementation
where lq and cq share common buffer memory. An alternative
implementation could use separate buffers for each queue, in which
case the arriving packet would have to be classified first to
determine which buffer to check for available space. The choice is
a trade-off; a shared buffer can use less memory whereas separate
buffers isolate the L4S queue from tail drop due to large bursts
of Classic traffic (e.g., a Classic Reno TCP during slow-start
over a long RTT).
There has been some concern that using the step function of
DCTCP for the Native L4S AQM requires end systems to smooth the
signal for an unnecessarily large number of round trips to ensure
sufficient fidelity. A ramp is no worse than a step in initial
experiments with existing DCTCP. Therefore, it is recommended that
a ramp is configured in place of a step, which will allow
congestion control algorithms to investigate faster smoothing
algorithms.

A ramp is more general than a
step, because an operator can effectively turn the ramp into a
step function, as used by DCTCP, by setting the range to zero.
There will not be a divide by zero problem at line 5 of because, if minTh is equal to
maxTh, the condition for this ramp calculation cannot arise.
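The ramp just described can be sketched as follows (the names are assumptions; checking the upper threshold first mirrors the divide-by-zero note above, so setting the range to zero, i.e. minTh equal to maxTh, yields a step):

```python
def ramp_mark_prob(qdelay, minTh, maxTh):
    """Native L4S ramp: marking probability p'_L rises linearly from 0
    at minTh to 1 at maxTh of instantaneous L-queue delay (seconds)."""
    if qdelay >= maxTh:       # at or above the top of the ramp: mark all
        return 1.0            # (also covers the step case minTh == maxTh)
    if qdelay <= minTh:
        return 0.0
    return (qdelay - minTh) / (maxTh - minTh)
```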
Pass #2: Edge-Case Details

This section takes a second pass through the pseudocode to add
details of two edge-cases: low link rate and overload. repeats the dequeue
function of , but with
details of both edge-cases added. Similarly, repeats the core PI algorithm
of , but with overload details
added. The initialization, enqueue, L4S AQM, and recur functions are
unchanged.

The link rate can be so low that it takes a single packet queue
longer to serialize than the threshold delay at which ECN marking
starts to be applied in the L queue. Therefore, a minimum marking
threshold parameter in units of packets rather than time is necessary
(Th_len, default 1 packet in line 19 of ) to ensure that the ramp
does not trigger excessive marking on slow links. Where an
implementation knows the link rate, it can set up this minimum at the
time it is configured.
For instance, it would divide 1 MTU by the link
rate to convert it into a serialization time, then if the lower
threshold of the Native L AQM ramp was lower than this serialization
time, it could increase the thresholds to shift the bottom of the ramp
to 2 MTU. This is the approach used in DOCSIS , because the configured link rate is dedicated to
the DualQ.

The pseudocode given here applies where the link rate is unknown,
which is more common for software implementations that might be
deployed in scenarios where the link is shared with other queues. In
lines 5a to 5d in , the
native L4S marking probability, p'_L, is zeroed if the queue is only 1
packet (in the default configuration).

Persistent overload is deemed to have occurred when Classic
drop/marking probability reaches p_Cmax. Above this point, the Classic
drop probability is applied to both the L and C queues, irrespective of
whether any packet is ECN-capable. ECT packets that are not dropped
can still be ECN-marked.

In line 11 of the initialization function (), the maximum Classic drop
probability p_Cmax = min(1/k^2, 1) or 1/4 for the default coupling
factor k = 2. In practice, 25% has been found to be a good threshold to
preserve fairness between ECN-capable and non-ECN-capable traffic.
This protects the queues against both temporary overload from
responsive flows and more persistent overload from any unresponsive
traffic that falsely claims to be responsive to ECN.

When the Classic ECN-marking probability reaches the p_Cmax
threshold (1/k^2), the marking probability that is coupled to the L4S queue,
p_CL, will always be 100% for any k (by equation (1) in ). So, for readability, the constant p_Lmax is
defined as 1 in line 21 of the initialization function (). This is intended to ensure
that the L4S queue starts to introduce dropping once ECN marking
saturates at 100% and can rise no further. The 'Prague L4S
requirements' state
that when an L4S congestion control detects a drop, it falls back to
a response that coexists with 'Classic' Reno congestion control. So, it
is correct that when the L4S queue drops packets, it drops them
proportional to p'^2, as if they are Classic packets.

The two queues each test for overload in lines 4b and 12b of the
dequeue function ().
Lines 8c to 8g drop L4S packets with probability p'^2. Lines 8h to 8i
mark the remaining packets with probability p_CL. Given p_Lmax = 1,
all remaining packets will be marked because, to have reached the else
block at line 8b, p_CL >= 1.

Line 2a in the core PI algorithm () deals with overload of the
L4S queue when there is little or no Classic traffic. This is
necessary, because the core PI algorithm maintains the appropriate
drop probability to regulate overload, but it depends on the length of
the Classic queue. If there is little or no Classic queue, the naive PI-update function
() would drop
nothing, even if the L4S queue were overloaded -- so tail drop would
have to take over (lines 2 and 3 of ).

Instead, line 2a of the full PI-update function () ensures that the Base PI AQM
in line 3 is driven by whichever of the two queue delays is greater,
but line 3 still always uses the same Classic target (default 15 ms).
If L queue delay is greater just because there is little or no Classic
traffic, normally it will still be well below the Base AQM target.
This is because L4S traffic is also governed by the shallow threshold
of its own Native AQM (lines 5a to 6 of the dequeue algorithm in ). So the Base AQM will be
driven to zero and not contribute.
However, if the L queue is
overloaded by traffic that is unresponsive to its marking, the max()
in line 2a of enables the L queue to smoothly take over driving the Base
AQM into overload mode even if there is little or no Classic traffic.
Then the Base AQM will keep the L queue to the Classic target (default
15 ms) by shedding L packets.

The choice of scheduler technology is critical to overload
protection (see ).
A well-understood weighted scheduler such as WRR is recommended. As long as the scheduler weight
for Classic is small (e.g., 1/16), its exact value is
unimportant, because it does not normally determine capacity
shares. The weight is only important to prevent unresponsive L4S
traffic starving Classic traffic in the short term (see ). This is because capacity
sharing between the queues is normally determined by the coupled
congestion signal, which overrides the scheduler, by making L4S
sources leave roughly equal per-flow capacity available for
Classic flows.
Alternatively, a time-shifted FIFO (TS-FIFO) could be used. It
works by selecting the head packet that has waited the longest,
biased against the Classic traffic by a time-shift of tshift. To
implement TS-FIFO, the scheduler() function in line 3 of
the dequeue code would simply be implemented as the scheduler()
function at the bottom of in
. For the public Internet, a good
value for tshift is 50 ms. For private networks with smaller
diameter, about 4*target would be reasonable. TS-FIFO is a very
simple scheduler, but complexity might need to be added to address
some deficiencies (which is why it is not recommended over
WRR):
TS-FIFO does not fully isolate latency in the L4S queue
from uncontrolled bursts in the Classic queue;
using sojourn time for TS-FIFO is only appropriate if
timestamping of packets is feasible; and
even if timestamping is supported, the sojourn time of the
head packet is always stale, so a more instantaneous measure
of queue delay could be used (see Note a in ).
A strict priority scheduler would be inappropriate as discussed
in .
Example DualQ Coupled Curvy RED Algorithm

As another example of a DualQ Coupled AQM algorithm, the pseudocode
below gives the Curvy-RED-based algorithm. Although the AQM was designed
to be efficient in integer arithmetic, to aid understanding it is first
given using floating point arithmetic (). Then, one possible optimization for
integer arithmetic is given, also in pseudocode (). To aid comparison, the line numbers are
kept in step between the two by using letter suffixes where the longer
code needs extra lines.

Curvy RED in Pseudocode

The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq), and the Classic queue (cq). It is defined
and described below in the following three functions:
the initialization function cred_params_init(...) () that sets parameter
defaults (the API for setting non-default values is omitted for
brevity);
the dequeue function cred_dequeue(lq, cq, pkt) (); and
the scheduling function scheduler(), which selects between the
head packets of the two queues.
It also uses the following functions that are either shown
elsewhere or not shown in full here:
the enqueue function, which is identical to that used for
DualPI2, dualpi2_enqueue(lq, cq, pkt) in ;
mark(pkt) and drop(pkt) for ECN marking and dropping a
packet;
cq.byt() or lq.byt() returns the current length
(a.k.a. backlog) of the relevant queue in bytes; and
cq.time() or lq.time() returns the current queuing delay of the
relevant queue in units of time (see Note a in ).
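In outline, the queue API relied on above might look like the following minimal sketch. The class, its enqueue signature, and the explicit current-time argument are illustrative assumptions, not part of the document's pseudocode.

```python
import collections

class PktQueue:
    """Minimal stand-in for lq/cq offering byt() and time()."""
    def __init__(self):
        self._q = collections.deque()    # entries: (size_bytes, enq_time)
        self._bytes = 0

    def enqueue(self, size_bytes, now):
        self._q.append((size_bytes, now))
        self._bytes += size_bytes

    def byt(self):
        """Current backlog of the queue in bytes."""
        return self._bytes

    def time(self, now):
        """Queuing delay of the head packet in seconds (zero if empty)."""
        if not self._q:
            return 0.0
        return now - self._q[0][1]
```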
Because Curvy RED was evaluated before DualPI2, certain
improvements introduced for DualPI2 were not evaluated for Curvy RED.
In the pseudocode below, the straightforward improvements have been
added on the assumption they will provide similar benefits, but that
has not been proven experimentally. They are: i) a conditional
priority scheduler instead of strict priority; ii) a time-based
threshold for the Native L4S AQM; and iii) ECN support for the Classic
AQM. A recent evaluation has proved that a minimum ECN-marking
threshold (minTh) greatly improves performance, so this is also
included in the pseudocode.

Overload protection has not been added to the Curvy RED pseudocode
below so as not to detract from the main features. It would be added
in exactly the same way as in for
the DualPI2 pseudocode. The Native L4S AQM uses a step threshold, but
a ramp like that described for DualPI2 could be used instead. The
scheduler uses the simple TS-FIFO algorithm, but it could be replaced
with WRR.

The Curvy RED algorithm has not been maintained or evaluated to the
same degree as the DualPI2 algorithm. In initial experiments on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, Curvy RED achieved good results with the default
parameters in .

The parameters are categorized by whether they relate to the
Classic AQM, the L4S AQM, or the framework coupling them together.
Constants and variables derived from these parameters are also
included at the end of each category. These are the raw input
parameters for the algorithm. A configuration front-end could accept
more meaningful parameters (e.g., RTT_max and RTT_typ) and convert
them into these raw parameters, as has been done for DualPI2 in . Where necessary, parameters are
explained further in the walk-through of the pseudocode below.

The dequeue pseudocode () is
repeatedly called whenever the lower layer is ready to forward a
packet. It schedules one packet for dequeuing (or zero if the queue is
empty) then returns control to the caller so that it does not block
while that packet is being forwarded. While making this dequeue
decision, it also makes the necessary AQM decisions on dropping or
marking. The alternative of applying the AQMs at enqueue would shift
some processing from the critical time when each packet is dequeued.
However, it would also add a whole queue of delay to the control
signals, making the control loop very sloppy.

The code is written assuming the AQMs are applied on dequeue
(Note 1). All the dequeue
code is contained within a large while loop so that if it decides to
drop a packet, it will continue until it selects a packet to schedule.
If both queues are empty, the routine returns NULL at line 20. Line 3
of the dequeue pseudocode is where the conditional priority scheduler
chooses between the L4S queue (lq) and the Classic queue (cq). The
TS-FIFO scheduler is shown at lines 28-33, which would be
suitable if simplicity is paramount (see Note 2).

Within each queue, the decision whether to forward, drop, or mark is
taken as follows (to simplify the explanation, it is assumed that
U = 1):
L4S:
If the test at line 3 determines there is an
L4S packet to dequeue, the tests at lines 5b and 5c determine
whether to mark it. The first is a simple test of whether the L4S
queue delay (lq.time()) is greater than a step threshold T
(Note 3). The second
test is similar to the random ECN marking in RED but with the
following differences: i) marking depends on queuing time, not
bytes, in order to scale for any link rate without being
reconfigured; ii) marking of the L4S queue depends on a logical OR
of two tests: one against its own queuing time and one against the
queuing time of the other (Classic)
queue; iii) the tests are against the instantaneous queuing time
of the L4S queue but against a smoothed average of the other (Classic)
queue; and iv) the queue is compared with the maximum of U random
numbers (but if U = 1, this is the same as the single random number
used in RED).

Specifically, in line 5a, the
coupled marking probability p_CL is set to the amount by which the
averaged Classic queuing delay Q_C exceeds the minimum queuing
delay threshold (minTh), all divided by the L4S scaling parameter
range_L. range_L represents the queuing delay (in seconds) added
to minTh at which marking probability would hit 100%. Then, in line
5c (if U = 1), the result is compared with a uniformly distributed
random number between 0 and 1, which ensures that, over range_L,
marking probability will linearly increase with queuing time.
Classic:
If the scheduler at line 3 chooses to
dequeue a Classic packet and jumps to line 7, the test at line 10b
determines whether to drop or mark it. But before that, line 9a
updates Q_C, which is an exponentially weighted moving average
(Note ) of
the queuing time of the Classic queue, where cq.time() is the
current instantaneous queuing time of the packet at the head of
the Classic queue (zero if empty), and gamma is the exponentially weighted moving average (EWMA) constant
(default 1/32; see line 12 of the initialization function).
Lines 10a and 10b implement the Classic
AQM. In line 10a, the averaged queuing time Q_C is divided by the
Classic scaling parameter range_C, in the same way that queuing
time was scaled for L4S marking. This scaled queuing time will be
squared to compute Classic drop probability. So, before it is
squared, it is effectively the square root of the drop
probability; hence, it is given the variable name sqrt_p_C. The
squaring is done by comparing it with the maximum out of two
random numbers (assuming U = 1). Comparing it with the maximum out
of two is the same as the logical 'AND' of two tests, which
ensures drop probability rises with the square of queuing
time.
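The two decision rules in the walk-through above might be sketched in floating point roughly as follows. The function names are illustrative, and much of the pseudocode's surrounding structure is omitted.

```python
import random

def maxrand(u):
    """Maximum of u uniform random numbers in [0, 1)."""
    return max(random.random() for _ in range(u))

def l4s_mark(lq_time, Q_C, T, minTh, range_L, U=1):
    """Mark if the instantaneous L queue delay exceeds the step T, OR
    with coupled probability p_CL, which rises linearly as the smoothed
    Classic delay Q_C exceeds minTh over the scaling range range_L."""
    p_CL = (Q_C - minTh) / range_L
    return lq_time > T or p_CL > maxrand(U)

def classic_drop(Q_C, range_C, U=1):
    """Comparing sqrt_p_C against the max of 2*U uniform variates makes
    the effective drop probability rise with the square of queuing time
    (for U = 1)."""
    sqrt_p_C = Q_C / range_C
    return sqrt_p_C > maxrand(2 * U)
```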
The AQM functions in each queue (lines 5c and 10b) are two cases
of a new generalization of RED called 'Curvy RED', motivated as follows.
When the performance of this AQM was compared with FQ-CoDel and PIE,
their goal of holding queuing delay to a fixed target seemed
misguided . As the number of flows
increases, if the AQM does not allow host congestion controllers to
increase queuing delay, it has to introduce abnormally high levels of
loss. Then loss rather than queuing becomes the dominant cause of
delay for short flows, due to timeouts and tail losses.

Curvy RED constrains delay with a softened target that allows some
increase in delay as load increases. This is achieved by increasing
drop probability on a convex curve relative to queue growth (the
square curve in the Classic queue, if U = 1). Like RED, the curve hugs
the zero axis while the queue is shallow. Then, as load increases, it
introduces a growing barrier to higher delay. But, unlike RED, it
requires only two parameters, not three. The disadvantage of Curvy RED
(compared to a PI controller, for example) is that it is not adapted to
a wide range of RTTs. Curvy RED can be used as is when the RTT range
to be supported is limited; otherwise, an adaptation mechanism is
needed.

From our limited experiments with Curvy RED so far, recommended
values of these parameters are: S_C = -1; g_C = 5; T = 5 * MTU at the
link rate (about 1 ms at 60 Mb/s) for the range of base RTTs typical on
the public Internet. explains why these
parameters are applicable whatever rate link this AQM implementation
is deployed on and how the parameters would need to be adjusted for a
scenario with a different range of RTTs (e.g., a data centre). The
setting of k depends on policy (see
and , respectively, for its recommended
setting and guidance on alternatives).

There is also a cUrviness parameter, U, which is a small positive
integer. It is likely to take the same hard-coded value for all
implementations, once experiments have determined a good value. Only
U = 1 has been used in experiments so far, but results might be even
better with U = 2 or higher.

Notes:
The alternative of applying the
AQMs at enqueue would shift some processing from the critical time
when each packet is dequeued. However, it would also add a whole
queue of delay to the control signals, making the control loop
sloppier (for a typical RTT, it would double the Classic queue's
feedback delay). On a platform where packet timestamping is
feasible, e.g., Linux, it is also easiest to apply the AQMs at
dequeue, because that is where queuing time is also measured.
WRR better isolates
the L4S queue from large delay bursts in the Classic queue, but it
is slightly less simple than TS-FIFO. If WRR were used, a low
default Classic weight (e.g., 1/16) would need to be
configured in place of the time-shift in line 5 of the
initialization function ().
A step function is shown for
simplicity. A ramp function (see and the discussion around it
in ) is recommended, because
it is more general than a step and has the potential to enable L4S
congestion controls to converge more rapidly.
An EWMA is only one possible way
to filter bursts; other more adaptive smoothing methods could be
valid, and it might be appropriate to decrease the EWMA faster than
it increases, e.g., by using the minimum of the smoothed and
instantaneous queue delays, min(Q_C, cq.time()).
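In outline, the EWMA of line 9a and the faster-decrease variant suggested in this note might look as follows (gamma = 1/32 as in the initialization defaults; the function names are assumptions):

```python
def update_Q_C(Q_C, cq_time, gamma=1.0 / 32):
    """Standard EWMA of the Classic queuing time (line 9a)."""
    return (1 - gamma) * Q_C + gamma * cq_time

def effective_Q_C(Q_C, cq_time):
    """Decrease faster than the EWMA by taking the minimum of the
    smoothed and instantaneous queue delays."""
    return min(Q_C, cq_time)
```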
Efficient Implementation of Curvy RED

Although code optimization depends on the platform, the following
notes explain where the design of Curvy RED was particularly motivated
by efficient implementation.

The Classic AQM at line 10b in calls maxrand(2*U), which gives twice
as much curviness as the call to maxrand(U) in the marking function at
line 5c. This is the trick that implements the square rule in equation
(1) (). This is based on the fact that,
given a number X from 1 to 6, the probability that two dice throws
will both be less than X is the square of the probability that one
throw will be less than X.
So, when U = 1, the L4S marking function is
linear and the Classic dropping function is squared. If U = 2, L4S would
be a square function and Classic would be quartic. And so on.

The maxrand(u) function in lines 22-27 simply generates u random
numbers and returns the maximum. Typically, maxrand(u) could be run in
parallel out of band. For instance, if U = 1, the Classic queue would
require the maximum of two random numbers. So, instead of calling
maxrand(2*U) in-band, the maximum of every pair of values from a
pseudorandom number generator could be generated out of band and held
in a buffer ready for the Classic queue to consume.

The two ranges, range_L and range_C, are expressed as powers of 2 so
that division can be implemented as a right bit-shift (>>) in
lines 5 and 10 of the integer variant of the pseudocode ().

For the integer variant of the pseudocode, an integer version of
the rand() function used at line 25 of the maxrand() function in would be arranged to return an integer
in the range 0 <= maxrand() < 2^32 (not shown). This would scale
up all the floating point probabilities in the range [0,1] by
2^32.

Queuing delays are also scaled up by 2^32, but in two stages: i) in
line 9, queuing time qc.ns() is returned in integer nanoseconds, making
the value about 2^30 times larger than when the units were seconds, and then
ii) in lines 5 and 10, an adjustment of -2 to the right bit-shift
multiplies the result by 2^2, to complete the scaling by 2^32.

In line 8 of the initialization function, the EWMA constant gamma
is represented as an integer power of 2, g_C, so that in line 9 of the
integer code (), the division needed to weight the moving average can be
implemented by a right bit-shift (>> g_C).

Choice of Coupling Factor, k

RTT-Dependence

Where Classic flows compete for the same capacity, their relative
flow rates depend not only on the congestion probability but also on
their end-to-end RTT (= base RTT + queue delay). The rates of
Reno flows competing over an AQM are
roughly inversely proportional to their RTTs. CUBIC exhibits similar
RTT-dependence when in Reno-friendly mode, but it is less
RTT-dependent otherwise.

Until the early experiments with the DualQ Coupled AQM, the
importance of the reasonably large Classic queue in mitigating
RTT-dependence when the base RTT is low had not been appreciated.
Appendix
of the L4S ECN Protocol uses numerical examples to
explain why bloated buffers had concealed the RTT-dependence of
Classic congestion controls before that time.
Then, it explains why,
the more that queuing delays have reduced, the more that
RTT-dependence has surfaced as a potential starvation problem for long
RTT flows, when competing against very short RTT flows.

Given that congestion control on end systems is voluntary, there is
no reason why it has to be voluntarily RTT-dependent. The
RTT-dependence of existing Classic traffic cannot be 'undeployed'.
Therefore, requires L4S
congestion controls to be significantly less RTT-dependent than the
standard Reno congestion control , at
least at low RTT. Then RTT-dependence ought to be no worse than it is
with appropriately sized Classic buffers. Following this approach
means there is no need for network devices to address RTT-dependence,
although there would be no harm if they did, which per-flow queuing
inherently does.

Guidance on Controlling Throughput Equivalence

The coupling factor, k, determines the balance between L4S and
Classic flow rates (see and equation
(1) in ).

For the public Internet, a coupling factor of k = 2 is recommended
and justified below. For scenarios other than the public Internet, a
good coupling factor can be derived by plugging the appropriate
numbers into the same working.

To summarize the maths below, from equation (7) it can be seen that
choosing k = 1.64 would theoretically make L4S throughput roughly the
same as Classic, if their actual end-to-end RTTs were the same.
However, even if the base RTTs are the same, the actual RTTs are
unlikely to be the same, because Classic traffic needs a fairly large
queue to avoid underutilization and excess drop, whereas L4S does
not.

Therefore, to determine the appropriate coupling factor policy, the
operator needs to decide at what base RTT it wants L4S and Classic
flows to have roughly equal throughput, once the effect of the
additional Classic queue on Classic throughput has been taken into
account. With this approach, a network operator can determine a good
coupling factor without knowing the precise L4S algorithm for reducing
RTT-dependence -- or even in the absence of any algorithm.

The following additional terminology will be used, with appropriate
subscripts:
r:
Packet rate [pkt/s]
R:
RTT [s/round]
p:
ECN-marking probability []
On the Classic side, we consider Reno as the most sensitive and
therefore worst-case Classic congestion control. We will also consider
CUBIC in its Reno-friendly mode ('CReno') as the most prevalent
congestion control, according to the references and analysis in . In either case, the Classic packet rate in steady
state is given by the well-known square root formula for Reno
congestion control:
r_C = 1.22 / (R_C * p_C^0.5)     (5)

On the L4S side, we consider the Prague congestion
control as the
reference for steady-state dependence on congestion. Prague conforms
to the same equation as DCTCP, but we do not use the equation derived
in the DCTCP paper, which is only appropriate for step marking. The
coupled marking, p_CL, is the appropriate one when considering
throughput equivalence with Classic flows. Unlike step marking,
coupled markings are inherently spaced out, so we use the formula for
DCTCP packet rate with probabilistic marking derived in Appendix A of
. We use the equation without RTT-independence
enabled, which will be explained later.
r_L = 2 / (R_L * p_CL)     (6)

For packet rate equivalence, we equate the two packet rates and
rearrange the equation into the same form as equation (1) (copied from ) so the two can be
equated and simplified to produce a formula for a theoretical coupling
factor, which we shall call k*:
r_C = r_L
=> p_C = (p_CL/1.64 * R_L/R_C)^2.
p_C = ( p_CL / k )^2. (1)
k* = 1.64 * (R_C / R_L). (7)
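As a numeric spot-check (not from the document), plugging equations (5), (6), (1), and (7) into Python with arbitrary example RTTs and an arbitrary marking level shows the two rates agreeing to within the rounding of the 1.22 and 1.64 constants:

```python
def r_C(R_C, p_C):
    """Reno packet rate [pkt/s], equation (5)."""
    return 1.22 / (R_C * p_C ** 0.5)

def r_L(R_L, p_CL):
    """Prague/DCTCP packet rate with probabilistic marking, equation (6)."""
    return 2.0 / (R_L * p_CL)

R_C_val, R_L_val, p_CL_val = 0.040, 0.025, 0.1   # example values (assumed)
k_star = 1.64 * (R_C_val / R_L_val)              # equation (7)
p_C_val = (p_CL_val / k_star) ** 2               # equation (1) with k = k*
# r_C(R_C_val, p_C_val) and r_L(R_L_val, p_CL_val) now agree to ~0.04%
```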
We say that this coupling factor is theoretical, because it is in
terms of two RTTs, which raises two practical questions: i) for
multiple flows with different RTTs, the RTT for each traffic class
would have to be derived from the RTTs of all the flows in that class
(actually the harmonic mean would be needed) and ii) a network node
cannot easily know the RTT of the flows anyway.

RTT-dependence is caused by window-based congestion control, so it
ought to be reversed there, not in the network. Therefore, we use a
fixed coupling factor in the network and reduce RTT-dependence in L4S
senders. We cannot expect Classic senders to all be updated to reduce
their RTT-dependence. But solely addressing the problem in L4S senders
at least makes RTT-dependence no worse -- not just between L4S senders,
but also between L4S and Classic senders.

Throughput equivalence is defined for flows
under comparable conditions, including with the same base
RTT . So if we assume the same base RTT,
R_b, for comparable flows, we can put both R_C and R_L in terms of
R_b.

We can approximate the L4S RTT to be hardly greater than the base
RTT, i.e., R_L ~= R_b. And we can replace R_C with (R_b + q_C),
where the Classic queue, q_C, depends on the target queue delay that
the operator has configured for the Classic AQM.

Taking PI2 as an example Classic AQM, it seems that we could just
take R_C = R_b + target (recommended 15 ms by default in ). However, target is roughly the queue
depth reached by the tips of the sawteeth of a congestion control, not
the average . That is R_max = R_b +
target.

The position of the average in relation to the max depends on the
amplitude and geometry of the sawteeth. We consider two examples:
Reno, as the most sensitive worst case, and CUBIC in its
Reno-friendly mode ('CReno'), as the most prevalent congestion
control algorithm on the Internet. Both are Additive Increase
Multiplicative Decrease (AIMD), so we will generalize using b as the
multiplicative decrease factor (b_r = 0.5 for Reno, b_c = 0.7 for
CReno). Then

    R_C     = (R_max + b*R_max) / 2
            = R_max * (1 + b) / 2, so

    R_reno  = 0.75 * (R_b + target);
    R_creno = 0.85 * (R_b + target).                        (8)
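As a quick sanity check (a sketch, not part of the specification), the time-average of a linear sawtooth that ramps between b*R_max and R_max can be computed numerically; it matches the (1 + b)/2 factor used in equation (8):

```python
def sawtooth_mean(R_max, b, steps=100_000):
    """Time-average RTT over one linear AIMD sawtooth cycle.

    The RTT ramps linearly from b*R_max (just after a multiplicative
    decrease) back up to R_max (just before the next decrease).
    """
    return sum(b * R_max + (R_max - b * R_max) * i / (steps - 1)
               for i in range(steps)) / steps

R_max = 1.0
print(sawtooth_mean(R_max, 0.5))  # ~0.75, the Reno factor
print(sawtooth_mean(R_max, 0.7))  # ~0.85, the CReno factor
```

The numeric averages reproduce the 0.75 (Reno, b = 0.5) and 0.85 (CReno, b = 0.7) multipliers of (R_b + target) in equation (8).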
Plugging all this into equation (7), at any particular base RTT,
R_b, we get a fixed coupling factor for each:

    k_reno  = 1.64 * 0.75 * (R_b + target) / R_b
            = 1.23 * (1 + target/R_b);

    k_creno = 1.39 * (1 + target/R_b).
An operator can then choose the base RTT at which it wants
throughput to be equivalent. For instance, if we recommend that the
operator chooses R_b = 25 ms, as a typical base RTT between Internet
users and CDNs, then these coupling factors become:
    k_reno  = 1.23 * (1 + 15/25)      k_creno = 1.39 * (1 + 15/25)
            = 1.97                            = 2.22
           ~= 2.                             ~= 2.              (9)
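The arithmetic of equation (9) can be reproduced with a few lines of Python (a sketch; the 15 ms target and 25 ms base RTT are the example values above, not mandated parameters):

```python
def k_reno(R_b, target=0.015):
    # k = 1.64 * 0.75 * (R_b + target) / R_b = 1.23 * (1 + target/R_b)
    return 1.23 * (1 + target / R_b)

def k_creno(R_b, target=0.015):
    # CReno analogue: k = 1.39 * (1 + target/R_b)
    return 1.39 * (1 + target / R_b)

R_b = 0.025  # 25 ms: typical base RTT between Internet users and CDNs
print(round(k_reno(R_b), 2))   # 1.97 -> ~2
print(round(k_creno(R_b), 2))  # 2.22 -> ~2
```

Both results round to the single integer-power-of-2 coupling factor k = 2 used in the example DualQ Coupled algorithms.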
The approximation is relevant to any of the above example DualQ
Coupled algorithms, which use a coupling factor that is an integer
power of 2 to aid efficient implementation. It also fits best for the
worst case (Reno).

To check the outcome of this coupling factor, we can express the
ratio of L4S to Classic throughput by substituting from their rate
equations (5) and (6), then also substituting for p_C in terms of
p_CL using equation (1) with k = 2 as just determined for the
Internet:
    r_L / r_C = 2 * (R_C * p_C^0.5) / (1.22 * R_L * p_CL)
              = (R_C * p_CL) / (1.22 * R_L * p_CL)
              = R_C / (1.22 * R_L).                         (10)
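The substitution behind equation (10) can be verified numerically (a sketch; the rate equations (5) and (6) and coupling equation (1) are from this document, while the RTT and marking-probability values are arbitrary example inputs):

```python
import math

def rate_classic(R_C, p_C):
    # Reno-style rate equation (5): r_C = 1.22 / (R_C * sqrt(p_C))
    return 1.22 / (R_C * math.sqrt(p_C))

def rate_l4s(R_L, p_L):
    # Scalable rate equation (6): r_L = 2 / (R_L * p_L)
    return 2.0 / (R_L * p_L)

k = 2                   # coupling factor for the Internet, as derived above
p_CL = 0.01             # arbitrary example coupled L4S marking probability
p_C = (p_CL / k) ** 2   # coupling, equation (1)

R_C, R_L = 0.040, 0.025  # arbitrary example RTTs in seconds
ratio = rate_l4s(R_L, p_CL) / rate_classic(R_C, p_C)
print(ratio, R_C / (1.22 * R_L))  # the two values should match
```

With k = 2, the marking probability cancels out and the throughput ratio depends only on the two RTTs, as equation (10) states.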
As an example, we can then consider single competing CReno and
Prague flows, by expressing both their RTTs in (10) in terms of their
base RTTs, R_bC and R_bL. So R_C is replaced by equation (8) for
CReno. And R_L is replaced by the max() function below, which
represents the effective RTT of the current Prague congestion
control in its
(default) RTT-independent mode, because it sets a floor to the
effective RTT that it uses for additive increase:
    r_L / r_C ~= 0.85 * (R_bC + target) / (1.22 * max(R_bL, R_typ))
              ~= (R_bC + target) / (1.4 * max(R_bL, R_typ)).
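A brief numerical sketch of this ratio illustrates its behaviour (assuming, purely for illustration, R_typ = 25 ms for Prague's RTT-independence floor and the 15 ms Classic target):

```python
def throughput_ratio(R_bC, R_bL, target=0.015, R_typ=0.025):
    """Approximate CReno:Prague throughput ratio from the formula above.

    target and R_typ are example values (15 ms and an assumed 25 ms),
    not normative parameters.
    """
    return (R_bC + target) / (1.4 * max(R_bL, R_typ))

# Single CReno vs. single Prague flow sharing the same base RTT:
for R_b in (0.005, 0.010, 0.015, 0.025, 0.050, 0.100):
    print(f"{R_b*1000:5.0f} ms -> {throughput_ratio(R_b, R_b):.2f}")
```

For base RTTs below target, the numerator approaches the constant target and the denominator sits at its R_typ floor, so the ratio flattens rather than diverging as it would for two RTT-dependent flows.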
It can be seen that, for base RTTs below target (15 ms), both the
numerator and the denominator plateau, which has the desired effect of
limiting RTT-dependence.

At the start of the above derivations, an explanation was promised
for why the L4S throughput equation in equation (6) did not need to
model RTT-independence. This is because we only use one point -- at the
typical base RTT where the operator chooses to calculate the coupling
factor. Then throughput equivalence will at least hold at that chosen
point. Nonetheless, assuming Prague senders implement RTT-independence
over a range of RTTs below this, the throughput equivalence will then
extend over that range as well.

Congestion control designers can choose different ways to reduce
RTT-dependence. And each operator can make a policy choice to decide
on a different base RTT, and therefore a different k, at which it
wants throughput equivalence. Nonetheless, for the Internet, it makes
sense to choose what is believed to be the typical RTT most users
experience, because a Classic AQM's target queuing delay is also
derived from a typical RTT for the Internet.

As a non-Internet example, for localized traffic from a particular
ISP's data centre, using the measured RTTs, it was calculated that a
value of k = 8 would achieve throughput equivalence, and experiments
verified the formula very closely.But, for a typical mix of RTTs across the general Internet, a value
of k = 2 is recommended as a good workable compromise.

Acknowledgements

Thanks to the many reviewers for their detailed review comments,
particularly of the appendices, and for suggestions on how to make
the explanations clearer. Thanks also for insight on the choice of
schedulers and queue delay measurement techniques, and thanks to the
area reviewers.

The early contributions of several authors and contributors were
partly funded by the European Community
under its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700). Some contributions
were also partly funded by the 5Growth and DAEMON EU H2020 projects,
and also by the Comcast Innovation Fund and the Research Council of
Norway through the TimeIn project. The views expressed here are
solely those of the authors.

Contributors

The following contributed implementations and evaluations that
validated and helped to improve this specification:

Olga Albisser <olga@albisser.org> of Simula Research Lab, Norway
(Olga Bondarenko during early draft versions) implemented the
prototype DualPI2 AQM for Linux with Koen De Schepper and conducted
extensive evaluations, as well as implementing the live performance
visualization GUI.

Olivier Tilmans <olivier.tilmans@nokia-bell-labs.com> of Nokia Bell
Labs, Belgium prepared and maintains the Linux implementation of
DualPI2 for upstreaming.

An earlier contributor wrote a model for the ns-3 simulator based on
draft-ietf-tsvwg-aqm-dualq-coupled-01 (a draft version of this
document). Based on this initial work, Tom Henderson <tomh@tomh.org>
updated that earlier model and created a model for the DualQ variant
specified as part of the Low Latency DOCSIS specification, as well as
conducting extensive evaluations.

A contributor from Nokia, Belgium built the End-to-End Data Centre to
the Home broadband testbed on which DualQ Coupled AQM implementations
were tested.

Authors' Addresses

Koen De Schepper
Nokia Bell Labs
Antwerp
Belgium
Email: koen.de_schepper@nokia.com
URI:   https://www.bell-labs.com/about/researcher-profiles/koende_schepper/

Bob Briscoe
Independent
United Kingdom
Email: ietf@bobbriscoe.net
URI:   https://bobbriscoe.net/

Greg White
CableLabs
Louisville, CO
United States of America
Email: G.White@CableLabs.com