



3.7 TCP Congestion Control

In this section we return to our study of TCP. As we learned in Section 3.5, TCP provides a reliable transport service between two processes running on different hosts. Another extremely important component of TCP is its congestion control mechanism. As we indicated in the previous section, TCP must use end-to-end congestion control rather than network-assisted congestion control, since the IP layer provides no feedback to the end systems regarding network congestion. Before diving into the details of TCP congestion control, let's first get a high-level view of TCP's congestion control mechanism, as well as the overall goal that TCP strives for when multiple TCP connections must share the bandwidth of a congested link.

A TCP connection controls its transmission rate by limiting its number of transmitted-but-yet-to-be-acknowledged segments. Let us denote this number of permissible unacknowledged segments as w, often referred to as the TCP window size. Ideally, TCP connections should be allowed to transmit as fast as possible (i.e., to have as large a number of outstanding unacknowledged packets as possible) as long as segments are not lost (dropped at routers) due to congestion. In very broad terms, a TCP connection starts with a small value of w and then "probes" for the existence of additional unused link bandwidth at the links on its end-to-end path by increasing w. A TCP connection continues to increase w until a segment loss occurs (as detected by a timeout or duplicate acknowledgments). When such a loss occurs, the TCP connection reduces w to a "safe level" and then begins probing again for unused bandwidth by slowly increasing w.

An important measure of the performance of a TCP connection is its throughput -- the rate at which it transmits data from the sender to the receiver. Clearly, throughput will depend on the value of w. If a TCP sender transmits all w segments back-to-back, it must then wait for one round-trip time (RTT) until it receives acknowledgments for these segments, at which point it can send w additional segments. If a connection transmits w segments of size MSS bytes every RTT seconds, then the connection's throughput, or transmission rate, is (w*MSS)/RTT bytes per second.
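This throughput expression is easy to check numerically. The sketch below uses illustrative parameter values (10 outstanding segments, a 1460-byte MSS, a 100 ms RTT) that are assumptions, not values from the text:

```python
def tcp_throughput_bps(w, mss_bytes, rtt_s):
    # w segments of MSS bytes are delivered each RTT; convert bytes to bits.
    return w * mss_bytes * 8 / rtt_s

# Illustrative numbers: w = 10 segments, MSS = 1460 bytes, RTT = 100 ms
print(tcp_throughput_bps(10, 1460, 0.1))  # roughly 1.17 Mbps
```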

Suppose now that K TCP connections are traversing a link of capacity R. Suppose also that there are no UDP packets flowing over this link, that each TCP connection is transferring a very large amount of data, and that none of these TCP connections traverse any other congested link. Ideally, the window sizes of the TCP connections traversing this link should be such that each connection achieves a throughput of R/K. More generally, if a connection passes through N links, with link n having transmission rate Rn and supporting a total of Kn TCP connections, then ideally this connection should achieve a rate of Rn/Kn on the nth link. However, this connection's end-to-end average rate cannot exceed the minimum rate achieved at any of the links along the end-to-end path. That is, the end-to-end transmission rate for this connection is r = min{R1/K1,...,RN/KN}. The goal of TCP is to provide this connection with this end-to-end rate, r. (In actuality, the formula for r is more complicated, as we should take into account the fact that one or more of the intervening connections may be bottlenecked at some other link that is not on this end-to-end path and hence cannot use their bandwidth share, Rn/Kn. In this case, the value of r would be higher than min{R1/K1,...,RN/KN}.)
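The computation of r = min{R1/K1,...,RN/KN} can be sketched in a few lines. The three-hop path below is hypothetical, chosen only to illustrate the formula:

```python
def end_to_end_rate(links):
    # links: list of (capacity R in bps, number of TCP connections K), one per hop.
    # The connection's ideal rate is its fair share at the tightest hop.
    return min(R / K for R, K in links)

# Hypothetical path: (10 Mbps, 10 conns), (45 Mbps, 100 conns), (1.5 Mbps, 1 conn)
print(end_to_end_rate([(10e6, 10), (45e6, 100), (1.5e6, 1)]))  # 450000.0
```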

3.7.1 Overview of TCP Congestion Control

In Section 3.5 we saw that each side of a TCP connection consists of a receive buffer, a send buffer, and several variables (LastByteRead, RcvWin, etc.). The TCP congestion control mechanism has each side of the connection keep track of two additional variables: the congestion window and the threshold. The congestion window, denoted CongWin, imposes an additional constraint on how much traffic a host can send into a connection. Specifically, the amount of unacknowledged data that a host can have within a TCP connection may not exceed the minimum of CongWin and RcvWin, i.e.,
LastByteSent - LastByteAcked <= min{CongWin, RcvWin}.
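A sender-side check based on this inequality can be sketched as follows. The variable names mirror the text; this is an illustration of the constraint, not code from any real TCP implementation:

```python
def may_send_more(last_byte_sent, last_byte_acked, cong_win, rcv_win):
    # The window actually in force is the tighter of the two constraints.
    effective_window = min(cong_win, rcv_win)
    return last_byte_sent - last_byte_acked < effective_window

# 3000 unacknowledged bytes; CongWin = 4096 but RcvWin = 2048 limits the sender
print(may_send_more(13000, 10000, 4096, 2048))  # False
```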

The threshold, which we discuss in detail below, is a variable that affects how CongWin grows.

Let us now look at how the congestion window evolves throughout the lifetime of a TCP connection. In order to focus on congestion control (as opposed to flow control), let us assume that the TCP receive buffer is so large that the receive-window constraint can be ignored. In this case, the amount of unacknowledged data that a host can have within a TCP connection is solely limited by CongWin. Further, let's assume that the sender has a very large amount of data to send to the receiver.

Once a TCP connection is established between the two end systems, the application process at the sender writes bytes to the sender's TCP send buffer. TCP grabs chunks of size MSS, encapsulates each chunk within a TCP segment, and passes the segments to the network layer for transmission across the network. The TCP congestion window regulates the times at which the segments are sent into the network (i.e., passed to the network layer). Initially, the congestion window is equal to one MSS. TCP sends the first segment into the network and waits for an acknowledgment. If this segment is acknowledged before its timer times out, the sender increases the congestion window by one MSS and sends out two maximum-sized segments. If these segments are acknowledged before their timeouts, the sender increases the congestion window by one MSS for each of the acknowledged segments, giving a congestion window of four MSS, and sends out four maximum-sized segments. This process continues as long as (1) the congestion window is below the threshold and (2) the acknowledgments arrive before their corresponding timeouts.

During this phase of the congestion control procedure, the congestion window increases exponentially fast: the congestion window is initialized to one MSS, after one RTT the window is increased to two segments, after two round-trip times the window is increased to four segments, after three round-trip times the window is increased to eight segments, and so on. This phase of the algorithm is called slow start because it begins with a small congestion window equal to one MSS. (The transmission rate of the connection starts slowly but accelerates rapidly.)

The slow start phase ends when the window size exceeds the value of the threshold. Once the congestion window is larger than the current value of the threshold, the congestion window grows linearly rather than exponentially. Specifically, if w is the current value of the congestion window, and w is larger than the threshold, then after w acknowledgments have arrived, TCP replaces w with w + 1. This has the effect of increasing the congestion window by one in each RTT for which an entire window's worth of acknowledgments arrives. This phase of the algorithm is called congestion avoidance.

The congestion avoidance phase continues as long as the acknowledgments arrive before their corresponding timeouts. But the window size, and hence the rate at which the TCP sender can send, cannot increase forever. Eventually, the TCP rate will be such that one of the links along the path becomes saturated, at which point loss (and a resulting timeout at the sender) will occur. When a timeout occurs, the value of the threshold is set to half the value of the current congestion window, and the congestion window is reset to one MSS. The sender then again grows the congestion window exponentially fast using the slow start procedure until the congestion window hits the threshold.

In summary:

  • When the congestion window is below the threshold, the congestion window grows exponentially.
  • When the congestion window is above the threshold, the congestion window grows linearly.
  • Whenever there is a timeout, the threshold is set to one half of the current congestion window, and the congestion window is then set to one.
If we ignore the slow start phase, we see that TCP essentially increases its window size by one each RTT (and thus increases its transmission rate by an additive factor) when its network path is not congested, and decreases its window size by a factor of two when the path is congested. For this reason, TCP is often referred to as an additive-increase, multiplicative-decrease (AIMD) algorithm.
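The three summary rules can be turned into a small round-by-round simulation. This is a sketch of the idealized algorithm as described above (one window update per RTT, timeouts supplied externally, slow start capped at the threshold for simplicity), not of a real TCP stack:

```python
def tahoe_evolution(threshold, loss_at, rounds):
    # Round-by-round congestion window (in MSS units):
    # exponential growth below the threshold, linear growth above it,
    # and on a timeout: threshold <- cwnd/2, cwnd <- 1.
    # loss_at: set of round indices at which a timeout occurs.
    cwnd, history = 1, []
    for t in range(rounds):
        history.append(cwnd)
        if t in loss_at:
            threshold = max(cwnd // 2, 1)
            cwnd = 1
        elif cwnd < threshold:
            cwnd = min(2 * cwnd, threshold)  # slow start
        else:
            cwnd += 1                        # congestion avoidance
    return history

# Mirrors Figure 3.7-1: threshold 8*MSS, loss at round 7 when cwnd = 12*MSS
print(tahoe_evolution(threshold=8, loss_at={7}, rounds=10))
# [1, 2, 4, 8, 9, 10, 11, 12, 1, 2]
```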


Figure 3.7-1: Evolution of TCP's congestion window

The evolution of TCP's congestion window is illustrated in Figure 3.7-1. In this figure, the threshold is initially equal to 8*MSS. The congestion window climbs exponentially fast during slow start and hits the threshold at the third transmission. The congestion window then climbs linearly until loss occurs, just after transmission 7. Note that the congestion window is 12*MSS when loss occurs. The threshold is then set to .5*CongWin = 6*MSS and the congestion window is set to 1. And the process continues. This congestion control algorithm is due to V. Jacobson [Jac88]; a number of modifications to Jacobson's initial algorithm are described in [Stevens 1994, RFC 2581].

A Trip to Nevada: Tahoe, Reno and Vegas

The TCP congestion control algorithm just described is often referred to as Tahoe. One problem with the Tahoe algorithm is that when a segment is lost, the sender side of the application may have to wait a long period of time for the timeout. For this reason, a variant of Tahoe, called Reno, is implemented by most operating systems. Like Tahoe, Reno sets its congestion window to one segment upon the expiration of a timer. However, Reno also includes the fast retransmit mechanism that we examined in Section 3.5. Recall that fast retransmit triggers the transmission of a dropped segment if three duplicate ACKs for a segment are received before the occurrence of the segment's timeout. Reno also employs a fast recovery mechanism, which essentially cancels the slow start phase after a fast retransmission. The interested reader is encouraged to see [Stevens 1994, RFC 2581] for details.

Most TCP implementations currently use the Reno algorithm. There is, however, another algorithm in the literature, the Vegas algorithm, that can improve Reno's performance. Whereas Tahoe and Reno react to congestion (i.e., to overflowing router buffers), Vegas attempts to avoid congestion while maintaining good throughput. The basic idea of Vegas is to (1) detect congestion in the routers between source and destination before packet loss occurs, and (2) lower the rate linearly when this imminent packet loss is detected. Imminent packet loss is predicted by observing the round-trip times -- the longer the round-trip times of the packets, the greater the congestion in the routers. The Vegas algorithm is discussed in detail in [Brakmo 1995]; a study of its performance is given in [Ahn 1995]. As of 1999, Vegas is not a part of the most popular TCP implementations.

We emphasize that TCP congestion control has evolved over the years, and is still evolving. What was good for the Internet when the majority of TCP connections carried SMTP, FTP and Telnet traffic is not necessarily good for today's Web-dominated Internet or for the Internet of the future, which will support who-knows-what kinds of services.

Does TCP Ensure Fairness?

In the above discussion, we noted that the goal of TCP's congestion control mechanism is to share a bottleneck link's bandwidth evenly among the TCP connections traversing that link. But why should TCP's additive-increase, multiplicative-decrease algorithm achieve that goal, particularly given that different TCP connections may start at different times and thus may have different window sizes at a given point in time? [Chiu 1989] provides an elegant and intuitive explanation of why TCP congestion control converges to provide an equal share of a bottleneck link's bandwidth among competing TCP connections.

Let's consider the simple case of two TCP connections sharing a single link with transmission rate R, as shown in Figure 3.7-2. We'll assume that the two connections have the same MSS and RTT (so that if they have the same congestion window size, then they have the same throughput), that they have a large amount of data to send, and that no other TCP connections or UDP datagrams traverse this shared link. Also, we'll ignore the slow start phase of TCP, and assume the TCP connections are operating in congestion avoidance mode (additive increase, multiplicative decrease) at all times.

Figure 3.7-2: Two TCP connections sharing a single bottleneck link

Figure 3.7-3 plots the throughput realized by the two TCP connections. If TCP is to equally share the link bandwidth between the two connections, then the realized throughput should fall along the 45-degree arrow ("equal bandwidth share") emanating from the origin. Ideally, the sum of the two throughputs should equal R (certainly, each connection receiving an equal, but zero, share of the link capacity is not a desirable situation!), so the goal should be to have the achieved throughputs fall somewhere near the intersection of the "equal bandwidth share" line and the "full bandwidth utilization" line in Figure 3.7-3.

Suppose that the TCP window sizes are such that at a given point in time, connections 1 and 2 realize throughputs indicated by point A in Figure 3.7-3. Because the amount of link bandwidth jointly consumed by the two connections is less than R, no loss will occur, and both connections will increase their window by 1 per RTT as a result of TCP's congestion avoidance algorithm. Thus, the joint throughput of the two connections proceeds along a 45-degree line (equal increase for both connections) starting from point A. Eventually, the link bandwidth jointly consumed by the two connections will be greater than R, and eventually packet loss will occur. Suppose that connections 1 and 2 experience packet loss when they realize throughputs indicated by point B. Connections 1 and 2 then decrease their windows by a factor of two. The resulting throughputs are thus at point C, halfway along a vector starting at B and ending at the origin. Because the joint bandwidth use is less than R at point C, the two connections again increase their throughputs along a 45-degree line starting from C. Eventually, loss will again occur, e.g., at point D, and the two connections again decrease their window sizes by a factor of two. And so on. You should convince yourself that the bandwidth realized by the two connections eventually fluctuates along the equal bandwidth share line. You should also convince yourself that the two connections will converge to this behavior regardless of where they begin in the two-dimensional space! Although a number of idealized assumptions lie behind this scenario, it still provides an intuitive feel for why TCP results in an equal sharing of bandwidth among connections.

Figure 3.7-3: Throughput realized by TCP connections 1 and 2

In our idealized scenario, we assumed that only TCP connections traverse the bottleneck link, and that only a single TCP connection is associated with a host-destination pair. In practice, these two conditions are typically not met, and client-server applications can thus obtain very unequal portions of link bandwidth.

Many network applications run over TCP rather than UDP because they want to make use of TCP's reliable transport service. But an application developer choosing TCP gets not only reliable data transfer but also TCP congestion control. We have just seen how TCP congestion control regulates an application's transmission rate via the congestion window mechanism. Many multimedia applications do not run over TCP for this very reason -- they do not want their transmission rate throttled, even if the network is very congested. In particular, many Internet telephone and Internet video conferencing applications typically run over UDP. These applications prefer to pump their audio and video into the network at a constant rate and occasionally lose packets, rather than reduce their rates to "fair" levels at times of congestion and not lose any packets. From the perspective of TCP, the multimedia applications running over UDP are not being fair -- they do not cooperate with the other connections nor adjust their transmission rates appropriately. A major challenge in the upcoming years will be to develop congestion control mechanisms for the Internet that prevent UDP traffic from bringing the Internet's throughput to a grinding halt.

But even if we could force UDP traffic to behave fairly, the fairness problem would still not be completely solved. This is because there is nothing to stop an application running over TCP from using multiple parallel connections. For example, Web browsers often use multiple parallel TCP connections to transfer a Web page. (The exact number of parallel connections is configurable in most browsers.) When an application uses multiple parallel connections, it gets a larger fraction of the bandwidth in a congested link. As an example, consider a link of rate R supporting 9 ongoing client-server applications, with each application using one TCP connection. If a new application comes along and also uses one TCP connection, then each application gets approximately the same transmission rate of R/10. But if this new application instead uses 11 parallel TCP connections, then the new application gets an unfair allocation of R/2. Because Web traffic is so pervasive in the Internet, multiple parallel connections are not uncommon.
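The arithmetic behind this example can be sketched as follows, assuming the idealized model in which each TCP connection gets an equal share of the bottleneck:

```python
def per_app_share(R, conns_per_app):
    # Each connection gets R / (total connections); an application's share
    # is proportional to the number of connections it opens.
    total = sum(conns_per_app)
    return [R * c / total for c in conns_per_app]

# 9 existing apps with one connection each, plus a newcomer with 11 connections:
shares = per_app_share(1.0, [1] * 9 + [11])
print(shares[-1])  # newcomer's share: 11/20 of the link, roughly R/2
```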

Macroscopic Description of TCP Dynamics

Consider sending a very large file over a TCP connection. If we take a macroscopic view of the traffic sent by the source, we can ignore the slow start phase. Indeed, the connection is in the slow-start phase for a relatively short period of time because the connection grows out of the phase exponentially fast. When we ignore the slow-start phase, the congestion window grows linearly, gets chopped in half when loss occurs, grows linearly, gets chopped in half when loss occurs, and so on. This gives rise to the saw-tooth behavior of TCP [Stevens 1994] shown in Figure 3.7-1.

Given this sawtooth behavior, what is the average throughput of a TCP connection? During a particular round-trip interval, the rate at which TCP sends data is a function of the congestion window and the current RTT: when the window size is w*MSS and the current round-trip time is RTT, then TCP's transmission rate is (w*MSS)/RTT. During the congestion avoidance phase, TCP probes for additional bandwidth by increasing w by one each RTT until loss occurs; denote by W the value of w at which loss occurs. Assuming that the RTT and W are approximately constant over the duration of the connection, the TCP transmission rate ranges from (W*MSS)/(2RTT) to (W*MSS)/RTT.

These assumptions lead to a highly simplified macroscopic model for the steady-state behavior of TCP: the network drops a packet from the connection when the connection's window size increases to W*MSS; the congestion window is then cut in half and then increases by one MSS per round-trip time until it again reaches W. This process repeats itself over and over again. Because the TCP throughput increases linearly between the two extreme values, we have:

average throughput of a connection = (.75*W*MSS)/RTT.
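In code, the sawtooth average looks like this. The example numbers (W = 12 segments, MSS = 1460 bytes, RTT = 100 ms) are hypothetical:

```python
def avg_tcp_throughput_bps(W, mss_bytes, rtt_s):
    # The rate oscillates linearly between W*MSS/(2*RTT) and W*MSS/RTT,
    # so the time average is 0.75 * W * MSS / RTT (converted to bits/sec).
    return 0.75 * W * mss_bytes * 8 / rtt_s

# Hypothetical: loss at W = 12 segments, MSS = 1460 bytes, RTT = 100 ms
print(avg_tcp_throughput_bps(12, 1460, 0.1))  # roughly 1.05 Mbps
```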

Using this highly idealized model for the steady-state dynamics of TCP, we can also derive an interesting expression that relates a connection's loss rate to its available bandwidth [Mahdavi 1997]. This derivation is outlined in the homework problems.

3.7.2 Modeling Latency: Static Congestion Window

Many TCP connections send relatively small files from one host to another. For example, with HTTP/1.0 each object in a Web page is transported over a separate TCP connection, and many of these objects are small text files or tiny icons. When transporting a small file, TCP connection establishment and slow start may have a significant impact on the latency. In this section we present an analytical model that quantifies the impact of connection establishment and slow start on latency. For a given object, we define the latency as the time from when the client initiates a TCP connection until the client receives the requested object in its entirety.

The analysis presented here assumes that the network is uncongested, i.e., the TCP connection transporting the object does not have to share link bandwidth with other TCP or UDP traffic. (We comment on this assumption below.) Also, in order not to obscure the central issues, we carry out the analysis in the context of the simple one-link network shown in Figure 3.7-4. (This link might model a single bottleneck on an end-to-end path. See also the homework problems for an explicit extension to the case of multiple links.)


Figure 3.7-4: A simple one-link network connecting a client and a server

We also make the following simplifying assumptions:

  1. The amount of data that the sender can transmit is solely limited by the sender's congestion window. (Thus, the TCP receive buffers are large.)
  2. Packets are neither lost nor corrupted, so that there are no retransmissions.
  3. All protocol header overheads -- including TCP, IP and link-layer headers -- are negligible and ignored.
  4. The object (that is, file) to be transferred consists of an integer number of segments of size MSS (maximum segment size).
  5. The only packets that have non-negligible transmission times are packets that carry maximum-size TCP segments. Request packets, acknowledgments and TCP connection establishment packets are small and have negligible transmission times.
  6. The initial threshold in the TCP congestion control mechanism is a large value which is never attained by the congestion window.
We also introduce the following notation:
  1. The size of the object to be transferred is O bits.
  2. The MSS (maximum segment size) is S bits (e.g., 536 bytes).
  3. The transmission rate of the link from the server to the client is R bps.
  4. The round-trip time is denoted by RTT.
In this section we define the RTT to be the time elapsed for a small packet to travel from client to server and then back to the client, excluding the transmission time of the packet. It includes the two end-to-end propagation delays between the two end systems and the processing times at the two end systems. We shall assume that the RTT is also equal to the round-trip time of a packet beginning at the server.

Although the analysis presented in this section assumes an uncongested network with a single TCP connection, it nevertheless sheds insight on the more realistic case of a multi-link congested network. For a congested network, R roughly represents the amount of bandwidth received in steady state in the end-to-end network connection; and RTT represents a round-trip delay that includes queueing delays at the routers preceding the congested links. In the congested network case, we model each TCP connection as a constant-bit-rate connection of rate R bps preceded by a single slow-start phase. (This is roughly how TCP Tahoe behaves when losses are detected with triplicate acknowledgments.) In our numerical examples we use values of R and RTT that reflect typical values for a congested network.

Before beginning the formal analysis, let us try to gain some intuition. Let us consider what the latency would be if there were no congestion window constraint, that is, if the server were permitted to send segments back-to-back until the entire object is sent. To answer this question, first note that one RTT is required to initiate the TCP connection. After one RTT the client sends a request for the object (which is piggybacked onto the third segment in the three-way TCP handshake). After a total of two RTTs the client begins to receive data from the server. The client receives data from the server for a period of time O/R, the time for the server to transmit the entire object. Thus, in the case of no congestion window constraint, the total latency is 2 RTT + O/R. This represents a lower bound; the slow start procedure, with its dynamic congestion window, will of course elongate this latency.
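This lower bound is simple enough to express directly; the numbers in the example are hypothetical:

```python
def minimum_latency(O_bits, R_bps, rtt_s):
    # One RTT for the handshake, one more until data starts arriving,
    # then O/R to push the whole object through the link.
    return 2 * rtt_s + O_bits / R_bps

# Hypothetical: 100 kbit object, 1 Mbps link, 100 ms RTT -> about 0.3 s
print(minimum_latency(100_000, 1e6, 0.1))
```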

Static Congestion Window

Although TCP uses a dynamic congestion window, it is instructive to first analyze the case of a static congestion window. Let W, a positive integer, denote a fixed-size static congestion window. For the static congestion window, the server is not permitted to have more than W unacknowledged outstanding segments. When the server receives the request from the client, the server immediately sends W segments back-to-back to the client. The server then sends one segment into the network for each acknowledgment it receives from the client. The server continues to send one segment for each acknowledgment until all of the segments of the object have been sent. There are two cases to consider:
  1. WS/R > RTT + S/R. In this case, the server receives an acknowledgment for the first segment in the first window before the server completes the transmission of the first window.
  2. WS/R < RTT + S/R. In this case, the server transmits the first window's worth of segments before the server receives an acknowledgment for the first segment in the window.
Let us first consider Case 1, which is illustrated in Figure 3.7-5. In this figure the window size is W = 4 segments.

Figure 3.7-5: the case that WS/R > RTT + S/R
One RTT is required to initiate the TCP connection. After one RTT the client sends a request for the object (which is piggybacked onto the third segment in the three-way TCP handshake). After a total of two RTTs the client begins to receive data from the server. Segments arrive periodically from the server every S/R seconds, and the client acknowledges every segment it receives from the server. Because the server receives the first acknowledgment before it completes sending a window's worth of segments, the server continues to transmit segments after having transmitted the first window's worth of segments. And because the acknowledgments arrive periodically at the server every S/R seconds from the time when the first acknowledgment arrives, the server transmits segments continuously until it has transmitted the entire object. Thus, once the server starts to transmit the object at rate R, it continues to transmit the object at rate R until the entire object is transmitted. The latency therefore is 2 RTT + O/R.

Now let us consider Case 2, which is illustrated in Figure 3.7-6. In this figure, the window size is W = 2 segments.


Figure 3.7-6: the case that WS/R < RTT + S/R

Once again, after a total of two RTTs the client begins to receive segments from the server. These segments arrive periodically every S/R seconds, and the client acknowledges every segment it receives from the server. But now the server completes the transmission of the first window before the first acknowledgment arrives from the client. Therefore, after sending a window, the server must stall and wait for an acknowledgment before resuming transmission. When an acknowledgment finally arrives, the server sends a new segment to the client. Once the first acknowledgment arrives, a window's worth of acknowledgments arrive, with each successive acknowledgment spaced by S/R seconds. For each of these acknowledgments, the server sends exactly one segment. Thus, the server alternates between two states: a transmitting state, during which it transmits W segments; and a stalled state, during which it transmits nothing and waits for an acknowledgment. The latency is equal to 2 RTT plus the time required for the server to transmit the object, O/R, plus the amount of time that the server is in the stalled state. To determine the amount of time the server is in the stalled state, let K = O/WS; if O/WS is not an integer, then round K up to the nearest integer. Note that K is the number of windows of data in the object of size O. The server is in the stalled state between the transmission of each of the windows, that is, for K-1 periods of time, with each period lasting RTT - (W-1)S/R (see the diagram above). Thus, for Case 2,

Latency = 2 RTT + O/R + (K-1)[S/R + RTT - W S/R] .

Combining the two cases, we obtain

Latency = 2 RTT + O/R + (K-1) [S/R + RTT - W S/R]+

where [x]+ = max(x,0).
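The combined formula for the static-window latency can be sketched as one short function (O and S in bits, R in bps, times in seconds):

```python
import math

def static_window_latency(O, S, R, W, RTT):
    # K = number of W-segment windows needed to cover the object.
    K = math.ceil(O / (W * S))
    # Stall per window: [S/R + RTT - W*S/R]+ ; zero in Case 1 (WS/R > RTT + S/R).
    stall = max(S / R + RTT - W * S / R, 0.0)
    return 2 * RTT + O / R + (K - 1) * stall

# Hypothetical numbers: S = 8000 bits, R = 1 Mbps, RTT = 100 ms
print(static_window_latency(256_000, 8000, 1e6, 4, 0.1))   # Case 2: stalls occur
print(static_window_latency(256_000, 8000, 1e6, 20, 0.1))  # Case 1: 2*RTT + O/R
```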

This completes our analysis of static windows. The analysis below for dynamic windows is more complicated, but parallels the analysis for static windows.

3.7.3 Modeling Latency: Dynamic Congestion Window

We now investigate the latency for a file transfer when TCP's dynamic congestion window is in force. Recall that the server first starts with a congestion window of one segment and sends one segment to the client. When it receives an acknowledgment for the segment, it increases its congestion window to two segments and sends two segments to the client (spaced apart by S/R seconds). As it receives the acknowledgments for the two segments, it increases the congestion window to four segments and sends four segments to the client (again spaced apart by S/R seconds). The process continues, with the congestion window doubling every RTT. A timing diagram for TCP is illustrated in Figure 3.7-7.

Figure 3.7-7: TCP timing during slow start

Note that O/S is the number of segments in the object; in the above diagram, O/S = 15. Consider the number of segments that are in each of the windows. The first window contains 1 segment; the second window contains 2 segments; the third window contains 4 segments. More generally, the kth window contains 2^(k-1) segments. Let K be the number of windows that cover the object; in the preceding diagram K = 4. In general we can express K in terms of O/S as follows:

K = min{k : 2^0 + 2^1 + ... + 2^(k-1) >= O/S} = min{k : 2^k - 1 >= O/S} = ceil( log2(O/S + 1) ) .

After transmitting a window's worth of data, the server may stall (i.due east., stop transmitting) while information technology waits for an acknowledgement. In the preceding diagram, the server stalls afterwards transmitting the first and 2nd windows, but not afterward transmitting the third. Let us at present calculate the amount of stall time afterwards transmitting the kth window.  The time from when the server begins to transmit the kth window until when the server receives an acknowledgement for the commencement segment in the window is S/R + RTT. The transmission time of the kth window is (South/R) twok-i. The stall time is the divergence of these two quantities, that is,

[S/R + RTT - 2^(k-1)·(S/R)]^+.
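This per-window stall time is easy to evaluate numerically; a small Python sketch (the function name is ours; S is taken in bytes and R in bits per second, so the per-segment transmission time in seconds is 8S/R):

```python
def stall_time(k, S, R, RTT):
    """Stall time after the kth window: the acknowledgment wait (S/R + RTT)
    minus the time spent transmitting the window's 2^(k-1) segments,
    floored at zero. S in bytes, R in bits per second, RTT in seconds."""
    s_over_r = 8 * S / R  # transmission time of one segment, in seconds
    return max(0.0, s_over_r + RTT - 2 ** (k - 1) * s_over_r)

# With S = 536 bytes, R = 1 Mbps, RTT = 100 msec the stalls shrink each
# round: k = 1 gives 0.1 s, k = 5 gives about 0.036 s, and from k = 6 on
# the window takes longer than an RTT to transmit, so the stall is zero.
```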

The server can potentially stall after the transmission of each of the first K-1 windows. (The server is done after the transmission of the Kth window.) We can now calculate the latency for transferring the file. The latency has three components: 2RTT for setting up the TCP connection and requesting the file; O/R, the transmission time of the object; and the sum of all the stall times. Thus,

latency = 2·RTT + O/R + sum from k=1 to K-1 of [S/R + RTT - 2^(k-1)·(S/R)]^+
The reader should compare the above equation for the latency with the latency equation for static congestion windows; all the terms are exactly the same except that the term WS/R for static windows has been replaced by 2^(k-1)·S/R for dynamic windows. To obtain a more compact expression for the latency, let Q be the number of times the server would stall if the object contained an infinite number of segments:

Q = max{k : S/R + RTT - 2^(k-1)·(S/R) >= 0}
  = max{k : 2^(k-1) <= 1 + RTT/(S/R)}
  = floor[ log2(1 + RTT/(S/R)) ] + 1
The actual number of times the server stalls is P = min{Q, K-1}. In the preceding diagram P = Q = 2. Combining the above two equations gives

latency = 2·RTT + O/R + sum from k=1 to P of [S/R + RTT - 2^(k-1)·(S/R)]
We can further simplify the above formula for latency by noting that

2^0 + 2^1 + ... + 2^(P-1) = 2^P - 1.

Combining the above two equations gives the following closed-form expression for the latency:

latency = 2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·(S/R)

Thus to calculate the latency, we simply must calculate K and Q, set P = min{Q, K-1}, and plug P into the above formula.
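This recipe translates directly into code. A minimal Python sketch (the function name is ours), taking O and S in bytes and R in bits per second:

```python
from math import ceil, floor, log2

def slow_start_latency(O, S, R, RTT):
    """Closed-form slow-start latency from the analysis above.
    O and S are in bytes, R in bits per second, RTT in seconds."""
    s_over_r = 8 * S / R                     # per-segment transmission time
    K = ceil(log2(O / S + 1))                # windows covering the object
    Q = floor(log2(1 + RTT / s_over_r)) + 1  # stalls for an infinite object
    P = min(Q, K - 1)                        # actual number of stalls
    return 2 * RTT + 8 * O / R + P * (RTT + s_over_r) - (2 ** P - 1) * s_over_r
```

For example, slow_start_latency(5000, 536, 100e3, 0.1) evaluates to about 0.757 seconds, matching the corresponding entry in the small-object chart later in this section.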

It is interesting to compare the TCP latency to the latency that would occur if there were no congestion control (that is, no congestion window constraint). Without congestion control, the latency is 2RTT + O/R, which we define to be the Minimum Latency. It is a simple exercise to show that

latency <= 2·RTT + O/R + P·[RTT + S/R],

that is, slow start adds at most P·[RTT + S/R] to the Minimum Latency.
We see from the above formula that TCP slow start will not significantly increase latency if RTT << O/R, that is, if the round-trip time is much less than the transmission time of the object. Thus, if we are sending a relatively large object over an uncongested, high-speed link, then slow start has an insignificant impact on latency. However, with the Web we are often transmitting many small objects over congested links, in which case slow start can significantly increase latency (as we shall see in the following subsection).

Let us now take a look at some example scenarios. In all the scenarios we set S = 536 bytes, a common default value for TCP. We shall use an RTT of 100 msec, which is not an atypical value for a continental or intercontinental delay over moderately congested links. First consider sending a rather large object of size O = 100 Kbytes. The number of windows that cover this object is K = 8. For a number of transmission rates, the following chart examines the impact of the slow-start mechanism on the latency.

R          O/R        P    Minimum Latency    Latency with
                           (O/R + 2·RTT)      Slow Start
-----------------------------------------------------------
28 Kbps    28.6 sec   1    28.8 sec           28.9 sec
100 Kbps   8 sec      2    8.2 sec            8.4 sec
1 Mbps     800 msec   5    1 sec              1.5 sec
10 Mbps    80 msec    7    .28 sec            .98 sec

We see from the above chart that for a large object, slow start adds appreciable delay only when the transmission rate is high. If the transmission rate is low, then acknowledgments come back relatively quickly, and TCP quickly ramps up to its maximum rate. For example, when R = 100 Kbps, the number of stall periods is P = 2 whereas the number of windows to transmit is K = 8; thus the server stalls only after the first 2 of 8 windows. On the other hand, when R = 10 Mbps, the server stalls between each window, which causes a significant increase in the delay.

Now consider sending a small object of size O = 5 Kbytes. The number of windows that cover this object is K = 4. For a number of transmission rates, the following chart examines the impact of the slow-start mechanism.

R          O/R        P    Minimum Latency    Latency with
                           (O/R + 2·RTT)      Slow Start
-----------------------------------------------------------
28 Kbps    1.43 sec   1    1.63 sec           1.73 sec
100 Kbps   .4 sec     2    .6 sec             .757 sec
1 Mbps     40 msec    3    .24 sec            .52 sec
10 Mbps    4 msec     3    .20 sec            .50 sec

Once again slow start adds an appreciable delay when the transmission rate is high. For example, when R = 1 Mbps the server stalls between each window, which causes the latency to be more than twice that of the minimum latency.

For a larger RTT, the impact of slow start becomes significant for small objects even at smaller transmission rates. The following chart examines the impact of slow start for RTT = 1 second and O = 5 Kbytes (K = 4).

R          O/R        P    Minimum Latency    Latency with
                           (O/R + 2·RTT)      Slow Start
-----------------------------------------------------------
28 Kbps    1.43 sec   3    3.4 sec            5.8 sec
100 Kbps   .4 sec     3    2.4 sec            5.2 sec
1 Mbps     40 msec    3    2.0 sec            5.0 sec
10 Mbps    4 msec     3    2.0 sec            5.0 sec

In summary, slow start can significantly increase latency when the object size is relatively small and the RTT is relatively large. Unfortunately, this is often the scenario when sending objects over the World Wide Web.
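The charts above can be recomputed from the closed-form expression. A minimal Python sketch (the function name is ours; O and S in bytes, R in bits per second); the loop reproduces the latency columns of the RTT = 1 second chart to one decimal place:

```python
from math import ceil, floor, log2

def slow_start_latency(O, S, R, RTT):
    # Closed-form latency from the analysis above (O, S in bytes; R in bps).
    s_over_r = 8 * S / R
    K = ceil(log2(O / S + 1))
    Q = floor(log2(1 + RTT / s_over_r)) + 1
    P = min(Q, K - 1)
    return 2 * RTT + 8 * O / R + P * (RTT + s_over_r) - (2 ** P - 1) * s_over_r

# RTT = 1 sec, O = 5 Kbytes, S = 536 bytes: the large-RTT scenario above.
for R in (28e3, 100e3, 1e6, 10e6):
    minimum = 8 * 5000 / R + 2 * 1.0
    with_ss = slow_start_latency(5000, 536, R, 1.0)
    print(f"R = {R:>8.0f} bps   minimum {minimum:.1f} s   slow start {with_ss:.1f} s")
```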

An Example: HTTP

As an application of the latency analysis, let's now calculate the response time for a Web page sent over non-persistent HTTP. Suppose that the page consists of one base HTML page and M referenced images. To keep things simple, let us assume that each of the M+1 objects contains exactly O bits.

With non-persistent HTTP, each object is transferred independently, one after the other. The response time of the Web page is therefore the sum of the latencies for the individual objects. Thus

response time = (M+1)·{2·RTT + O/R + P·[RTT + S/R] - (2^P - 1)·(S/R)}
Note that the response time for non-persistent HTTP takes the form:
response time = (M+1)O/R + 2(M+1)RTT + latency due to TCP slow start for each of the M+1 objects.
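Putting the pieces together, this response time can be sketched in Python (function names are ours; the per-object latency uses the closed form derived earlier, with O and S in bytes and R in bits per second):

```python
from math import ceil, floor, log2

def slow_start_latency(O, S, R, RTT):
    # Per-object latency from the closed-form slow-start analysis above.
    s_over_r = 8 * S / R
    K = ceil(log2(O / S + 1))
    Q = floor(log2(1 + RTT / s_over_r)) + 1
    P = min(Q, K - 1)
    return 2 * RTT + 8 * O / R + P * (RTT + s_over_r) - (2 ** P - 1) * s_over_r

def nonpersistent_response_time(M, O, S, R, RTT):
    """M+1 equal-size objects (base page plus M images) fetched serially,
    each over its own TCP connection, so the per-object latencies add."""
    return (M + 1) * slow_start_latency(O, S, R, RTT)
```

With example values of ours (M = 10 images, O = 5 Kbytes, S = 536 bytes, R = 1 Mbps, RTT = 100 msec) this gives roughly 5.75 seconds, dominated by the 22 round trips of connection setup and slow-start stalling.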

Clearly if there are many objects in the Web page and if RTT is large, then non-persistent HTTP will have poor response-time performance. In the homework problems we will investigate the response time for other HTTP transport schemes, including persistent connections and non-persistent connections with parallel connections. The reader is also encouraged to see [Heidemann] for a related analysis.

References

TCP congestion control has enjoyed a tremendous amount of study and refinement since its original adoption in 1988. This is not surprising, as it is both an important and interesting topic. There is currently a large and growing literature on the subject. Below we provide references for the citations in this section as well as references to some other important works.

[Ahn 1995] J.S. Ahn, P.B. Danzig, Z. Liu and Y. Yan, "Experience with TCP Vegas: Emulation and Experiment," Proceedings of ACM SIGCOMM '95, Boston, August 1995.
[Brakmo 1995] L. Brakmo and L. Peterson, "TCP Vegas: End to End Congestion Avoidance on a Global Internet," IEEE Journal on Selected Areas in Communications, 13(8):1465-1480, October 1995.
[Chiu 1989] D. Chiu and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks," Computer Networks and ISDN Systems, Vol. 17, pp. 1-14, 1989.
[Floyd 1991] S. Floyd, "Connections with Multiple Congested Gateways in Packet-Switched Networks, Part 1: One-Way Traffic," ACM Computer Communications Review, Vol. 21, No. 5, October 1991, pp. 30-47.
[Heidemann 1997] J. Heidemann, K. Obraczka and J. Touch, "Modeling the Performance of HTTP Over Several Transport Protocols," IEEE/ACM Transactions on Networking, Vol. 5, No. 5, October 1997, pp. 616-630.
[Hoe 1996] J.C. Hoe, "Improving the Start-up Behavior of a Congestion Control Scheme for TCP," Proceedings of ACM SIGCOMM '96, Stanford, August 1996.
[Jacobson 1988] V. Jacobson, "Congestion Avoidance and Control," Proceedings of ACM SIGCOMM '88, August 1988, pp. 314-329.
[Lakshman 1995] T.V. Lakshman and U. Madhow, "Performance Analysis of Window-Based Flow Control Using TCP/IP: the Effect of High Bandwidth-Delay Products and Random Loss," IFIP Transactions C-26, High Performance Networking V, North Holland, 1994, pp. 135-150.
[Mahdavi 1997] J. Mahdavi and S. Floyd, "TCP-Friendly Unicast Rate-Based Flow Control," unpublished note, January 1997.
[Nielsen 1997] H.F. Nielsen, J. Gettys, A. Baird-Smith, E. Prud'hommeaux, H.W. Lie and C. Lilley, "Network Performance Effects of HTTP/1.1, CSS1, and PNG," W3C Document, 1997 (also appeared in SIGCOMM '97).
[RFC 793] "Transmission Control Protocol," RFC 793, September 1981.
[RFC 854] J. Postel and J. Reynolds, "Telnet Protocol Specification," RFC 854, May 1983.
[RFC 1122] R. Braden, "Requirements for Internet Hosts -- Communication Layers," RFC 1122, October 1989.
[RFC 1323] V. Jacobson, R. Braden and D. Borman, "TCP Extensions for High Performance," RFC 1323, May 1992.
[RFC 2581] M. Allman, V. Paxson and W. Stevens, "TCP Congestion Control," RFC 2581, April 1999.
[Shenker 1990] S. Shenker, L. Zhang and D.D. Clark, "Some Observations on the Dynamics of a Congestion Control Algorithm," ACM Computer Communications Review, 20(4), October 1990, pp. 30-39.
[Stevens 1994] W.R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, Reading, MA, 1994.
[Zhang 1991] L. Zhang, S. Shenker and D.D. Clark, "Observations on the Dynamics of a Congestion Control Algorithm: The Effects of Two-Way Traffic," Proceedings of ACM SIGCOMM '91, Zurich, 1991.



Copyright Keith W. Ross and James F. Kurose, 1996-2000. All rights reserved.

Source: http://www2.ic.uff.br/~michael/kr1999/3-transport/3_07-congestion.html
