Transport Layer: TCP Retransmission, Fast Retransmit, Flow Control, Connection Management, and Congestion Control
Author: Sharyl Day
TCP: retransmission scenarios (more)

[Figure: two sender/receiver timelines between Host A and Host B. One shows a premature timeout: the timer expires and a segment is retransmitted even though its ACK was merely delayed. The other shows the cumulative ACK scenario: an ACK is lost (X), but a later cumulative ACK arrives before the timeout, so SendBase advances to 120 and no retransmission occurs.]
Fast Retransmit idea: duplicate ACKs

• Time-out period is often relatively long: there is a long delay before resending a lost packet.
• Better: detect lost segments via duplicate ACKs.
  • Sender often sends many segments back-to-back.
  • If a segment is lost, there will likely be many duplicate ACKs.
• If the sender receives 3 ACKs for the same data, it supposes that the segment after the ACKed data was lost.

[Figure: Host A sends "Seq 10, 8 bytes" and then "Seq 19, 5 bytes"; the second segment is lost (X), so Host B keeps answering Ack (18), producing duplicate ACKs.]

Fast Retransmit

fast retransmit: resend a segment before the timer expires.

[Figure: after the third duplicate Ack (18), Host A resends the lost segment without waiting for the timeout; Host B then answers Ack (24). Retransmission is done before the timeout occurs: resending a segment after a triple duplicate ACK.]

Fast retransmit algorithm:

event: ACK received, with ACK field value of y
if (y > SendBase) {
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
        start timer
}
else {  /* a duplicate ACK for already-ACKed segment */
    increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
        resend segment with sequence number y  /* fast retransmit */
}
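As a concrete rendering of the algorithm above, here is a minimal Python sketch; the class and method names are invented for the example, and "resend" is modeled as simply recording the sequence number:

```python
# Minimal sketch of the fast-retransmit rule above (names are illustrative).
class FastRetransmitSender:
    def __init__(self, send_base):
        self.send_base = send_base   # lowest unACKed sequence number
        self.dup_acks = 0            # duplicate-ACK count for send_base
        self.resent = []             # segments resent before their timer expired

    def on_ack(self, y):
        if y > self.send_base:       # ACK covers new data
            self.send_base = y
            self.dup_acks = 0        # (a real sender would also restart the timer)
        else:                        # duplicate ACK for already-ACKed data
            self.dup_acks += 1
            if self.dup_acks == 3:   # triple duplicate ACK
                self.resent.append(y)  # fast retransmit: resend segment y

sender = FastRetransmitSender(send_base=19)
for y in [19, 19, 19]:               # three duplicate ACKs arrive
    sender.on_ack(y)
print(sender.resent)                 # [19]: segment 19 resent before timeout
```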

Outline

• Transport-layer services
• Multiplexing and demultiplexing
• Connectionless transport: UDP
• Principles of reliable data transfer
• Connection-oriented transport: TCP
  • segment structure
  • reliable data transfer
  • flow control
  • connection management
• Principles of congestion control
• TCP congestion control

TCP Flow Control

• receive side of TCP connection has a receive buffer
• the app process may be slow at reading from the buffer
• flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
• speed-matching service: matching the send rate to the receiving app's drain rate

TCP Flow control: how it works

(Potential problem aside: suppose the TCP receiver discards out-of-order segments.)
• spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
• Rcvr advertises spare room by including the value of RcvWindow in segments
• Sender limits unACKed data to RcvWindow
  • guarantees receive buffer doesn't overflow!

animation: http://media.pearsoncmg.com/aw/aw_kurose_network_4/applets/flow/FlowControl.htm
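The spare-room arithmetic above can be checked with a toy receiver; the buffer size and byte counts below are invented for the example:

```python
# Receive-window bookkeeping from the slide:
#   RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)
# A toy receiver; the attribute names mirror the slide's variables.
class Receiver:
    def __init__(self, rcv_buffer):
        self.rcv_buffer = rcv_buffer
        self.last_byte_rcvd = 0
        self.last_byte_read = 0

    def rcv_window(self):
        # spare room the receiver advertises to the sender
        return self.rcv_buffer - (self.last_byte_rcvd - self.last_byte_read)

r = Receiver(rcv_buffer=4096)
r.last_byte_rcvd = 3000   # 3000 bytes received so far
r.last_byte_read = 1000   # app has read only 1000 of them
print(r.rcv_window())     # 4096 - (3000 - 1000) = 2096
```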

TCP Connection Management

Recall: TCP sender and receiver establish a "connection" before exchanging data segments.
• initialize TCP variables:
  • seq. #s
  • buffers, flow control info (e.g. RcvWindow)
• client: connection initiator
    Socket clientSocket = new Socket("hostname","port number");
• server: contacted by client
    Socket connectionSocket = welcomeSocket.accept();

Establishing a TCP Connection

[Figure: a client connects to the server at 202.120.1.100:80.]

Three way handshake:

Step 1: client host sends TCP SYN segment to server
  • specifies initial seq #
  • no data
Step 2: server host receives SYN, replies with SYNACK segment
  • server allocates buffers
  • specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
  • client allocates buffers

[Figure: server listening on 202.120.1.100:80 accepts the connection.]
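The handshake itself is performed by the OS inside the socket calls. A small Python equivalent of the slide's Java snippet, using the loopback address and an ephemeral port (both purely illustrative):

```python
# The three-way handshake happens inside connect(): the kernel sends SYN,
# receives SYNACK, and sends ACK before connect() returns.
import socket

srv = socket.socket()                 # like welcomeSocket in the slide
srv.bind(("127.0.0.1", 0))            # ephemeral port on loopback
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket()                 # like clientSocket in the slide
cli.connect(("127.0.0.1", port))      # SYN -> SYNACK -> ACK

conn, addr = srv.accept()             # like connectionSocket = welcomeSocket.accept()
print("connected from", addr[0])      # connected from 127.0.0.1
for s in (conn, cli, srv):
    s.close()
```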

More Details in Segments

[Figure: the handshake segments (SYN, SYNACK) shown in more detail.]

SYN Flooding Attack

A SYN flood is a form of denial-of-service attack in which an attacker sends a succession of SYN requests to a target's system in an attempt to consume enough server resources to make the system unresponsive to legitimate traffic.

The attacker sends several SYN packets but never sends the "ACK" back to the server. The connections are hence half-open and consume server resources. Alice, a legitimate user, then tries to connect, but the server refuses to open a connection, resulting in a denial of service.

Closing a TCP Connection

[Figure: client at 202.120.10.2:1150 communicating with server at 202.120.1.100:80.]

Asymmetric release is problematic

• One party just closes down the connection. This may result in loss of data.

[Figure: Host 1 and Host 2 timelines. One host releases its resources and disconnects abruptly while the other host's data is still in flight, so that data is lost.]

The two-army problem

Challenge: the two-army problem, i.e. we can't devise a solution to release a connection such that the two parties will always agree.

TCP Connection Management (cont.)

TCP four-way handshake. Closing a connection: the client closes its socket: clientSocket.close();

Step 1: client host sends TCP FIN control segment to server.
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
Step 3: client receives FIN, replies with ACK.
  • Enters "timed wait": will respond with ACK to received FINs.
Step 4: server receives ACK. Connection closed.

[Figure: client and server timelines. Client: close, send FIN, timed wait after the final ACK, then closed. Server: closing after its own FIN, then closed.]

TCP Connection Management (cont)

[Figure: TCP server lifecycle and TCP client lifecycle state diagrams.]

States Transition (Establish)

[Figure: TCP state transitions during connection establishment.]
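As an illustrative companion to the state diagrams (which are figures), here is the client lifecycle from the slides modeled as a transition table; the event names are invented labels for the diagram's arcs:

```python
# Hedged sketch: the TCP client lifecycle as a (state, event) -> next-state
# table. Event names are illustrative, not protocol field names.
CLIENT_TRANSITIONS = {
    ("CLOSED", "active open / send SYN"): "SYN_SENT",
    ("SYN_SENT", "receive SYNACK / send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close / send FIN"): "FIN_WAIT_1",
    ("FIN_WAIT_1", "receive ACK"): "FIN_WAIT_2",
    ("FIN_WAIT_2", "receive FIN / send ACK"): "TIME_WAIT",
    ("TIME_WAIT", "timed wait expires"): "CLOSED",
}

def run(events, state="CLOSED"):
    for ev in events:
        state = CLIENT_TRANSITIONS[(state, ev)]
    return state

path = ["active open / send SYN", "receive SYNACK / send ACK",
        "close / send FIN", "receive ACK",
        "receive FIN / send ACK", "timed wait expires"]
print(run(path))   # CLOSED: a full open-then-close lifecycle
```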

States Transition (Release)

[Figure: TCP state transitions during connection release.]

Principles of Congestion Control

Congestion:
• informally: "too many sources sending too much data too fast for the network to handle"
• manifestations:
  • lost packets (buffer overflow at routers)
  • long delays (queueing in router buffers)
• different from flow control!

[Figure: the London Congestion Area, a road-traffic analogy for network congestion.]

Causes/costs of congestion: scenario 1

• two senders, two receivers
• one router, infinite buffers
• no retransmission

[Figure: Host A and Host B each send λin (original data) into a router with unlimited shared output link buffers; the delivered rate is λout.]

• large delays when congested
• maximum achievable throughput

Causes/costs of congestion: scenario 2

• one router, finite buffers
• sender retransmission of lost packets

[Figure: Host A and Host B send λin (original data); λ'in is original data plus retransmitted data, entering finite shared output link buffers; the delivered rate is λout.]

Causes/costs of congestion: scenario 2 (cont.)

• always: λin = λout (goodput)
• "perfect" retransmission, only when loss: λ'in > λout
• retransmission of a delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout

"costs" of congestion:
• more work (retransmissions) for a given "goodput"
• unneeded retransmissions: the link carries multiple copies of a packet

[Figure: three plots of λout versus λin, each with λin up to R/2: (a) perfect, no loss: λout reaches R/2; (b) unrealistic, long timeout: λout approaches R/3; (c) realistic: λout approaches R/4.]

Causes/costs of congestion: scenario 3

• four senders
• multihop paths
• timeout/retransmit

Q: what happens as λin and λ'in increase?

[Figure: Host A and Host B among four senders sharing multihop paths through routers with finite buffers.]
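The "more work for the same goodput" cost can be quantified with a toy calculation; the capacity and loss-rate numbers below are invented for illustration:

```python
# Illustrative numbers (not from the slides): how retransmissions inflate
# the offered load for the same goodput, per scenario 2's "costs" discussion.
R = 1000.0          # link capacity, packets/sec (assumed)
goodput = 500.0     # application goodput we want, = R/2 here
loss_rate = 0.25    # fraction of transmissions dropped at the router (assumed)

# With loss, each delivered packet needs on average 1/(1 - loss_rate)
# transmissions, so the sender's total rate (original + retransmitted) is:
offered = goodput / (1 - loss_rate)
print(offered)      # ~666.7 packets/sec of work for 500 packets/sec of goodput
```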

Causes/costs of congestion: scenario 3 (cont.)

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

[Figure: Host A and Host B traffic (λout) competing over shared multihop paths.]

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

Network-assisted congestion control:
• routers provide feedback to end systems
  • single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  • explicit rate sender should send at

Case study: ATM ABR congestion control

ABR: available bit rate:
• "elastic service"
• if sender's path "underloaded": sender should use available bandwidth
• if sender's path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
• sent by sender, interspersed with data cells
• bits in RM cell set by switches ("network-assisted")
  • NI bit: no increase in rate (mild congestion)
  • CI bit: congestion indication
• RM cells returned to sender by receiver, with bits intact

Case study: ATM ABR congestion control (cont.)

• two-byte ER (explicit rate) field in RM cell
  • congested switch may lower ER value in cell
  • sender's send rate thus is the maximum supportable rate on the path
• EFCI bit in data cells: set to 1 in congested switch
  • if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell

How to Control Transmission Rate?

Transmission rate can be controlled? Yes: through controlling the send window.

Note: not by the applications themselves, but by the transport layer.

[Figure: two hosts exchanging data across the Internet.]

TCP congestion control: Basic Idea

• Basic idea: dynamically adjust the transmission rate.
  • When not congested -> the host increases the transmission rate for higher transmission performance.
  • When congested -> the host decreases the transmission rate to avoid congestion.
• CongWin: the send window for controlling the rate.
• Key idea: probing for usable bandwidth by gradually increasing the transmission rate (window size) until loss occurs (inferring congestion).

TCP AIMD

Additive Increase Multiplicative Decrease (MSS: maximum segment size)
• additive increase: increase CongWin by 1 MSS every RTT until loss detected
• multiplicative decrease: cut CongWin in half after loss

[Figure: congestion window size over time, oscillating between roughly 8, 16, and 24 Kbytes: saw-tooth behavior, probing for bandwidth.]
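The AIMD saw-tooth can be sketched with a toy per-RTT simulation; the 4-MSS capacity and the simple loss rule below are invented for illustration:

```python
# Toy AIMD trace (window in MSS units), assuming one window update per RTT
# and a loss inferred whenever CongWin exceeds an assumed capacity.
def aimd(rounds, capacity=4, congwin=1.0):
    trace = []
    for _ in range(rounds):
        trace.append(congwin)
        if congwin > capacity:        # loss inferred
            congwin = congwin / 2     # multiplicative decrease: halve
        else:
            congwin += 1              # additive increase: +1 MSS per RTT
    return trace

print(aimd(10, capacity=4))  # [1.0, 2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 2.25, 3.25]
```

The printed trace climbs linearly, halves after crossing capacity, and climbs again: the saw-tooth from the figure.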

TCP Congestion Control: details

• Sender limits transmission: LastByteSent - LastByteAcked <= CongWin
• Roughly, rate = CongWin / RTT bytes/sec
• CongWin is dynamic, a function of perceived network congestion

How does the sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
• TCP sender reduces rate (CongWin) after a loss event

Three mechanisms:
• AIMD
• slow start
• conservative after timeout events

TCP Slow Start

• When connection begins, CongWin = 1 MSS
  • Example: MSS = 500 bytes & RTT = 200 msec -> initial rate = 20 kbps
• Available bandwidth may be >> MSS/RTT
  • desirable to quickly ramp up to a respectable rate

TCP Slow Start (more)

• When connection begins, increase rate exponentially until first loss event:
  • double CongWin every RTT
  • done by incrementing CongWin for every ACK received (why is it exponential?)
• Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A and Host B timelines: one segment in the first RTT, two in the next, then four, doubling each RTT.]
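The slides note that the doubling comes from incrementing CongWin for every ACK received. A toy sketch (window in MSS units, one batch of ACKs per RTT; both are simplifying assumptions):

```python
# Why per-ACK increments double CongWin each RTT (slow start):
# a window of W segments produces W ACKs, and each ACK adds 1 MSS.
def slow_start(rtts, congwin=1):
    trace = [congwin]
    for _ in range(rtts):
        acks = congwin          # one ACK per segment sent this RTT
        congwin += acks         # +1 MSS per ACK => the window doubles
        trace.append(congwin)
    return trace

print(slow_start(4))   # [1, 2, 4, 8, 16]
```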

Refinement: inferring loss

• After 3 dup ACKs:
  • CongWin is cut in half
  • window then grows linearly
• But after a timeout event:
  • CongWin instead set to 1 MSS
  • window then grows exponentially
  • to a threshold, then grows linearly

Rationale:
• 3 dup ACKs indicates the network is capable of delivering some segments
• timeout indicates a "more alarming" congestion scenario

Refinement

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
• Variable Threshold
• At a loss event, Threshold is set to 1/2 of CongWin just before the loss event

animation: http://media.pearsoncmg.com/aw/aw_kurose_network_4/applets/fairness/index.html

TCP sender congestion control

State: Slow Start (SS)
  Event: ACK receipt for previously unacked data
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: resulting in a doubling of CongWin every RTT

State: Congestion Avoidance (CA)
  Event: ACK receipt for previously unacked data
  Action: CongWin = CongWin + MSS * (MSS/CongWin)
  Commentary: additive increase, resulting in increase of CongWin by 1 MSS every RTT

State: SS or CA
  Event: loss event detected by triple duplicate ACK
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

State: SS or CA
  Event: timeout
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

State: SS or CA
  Event: duplicate ACK
  Action: increment duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed

Summary: TCP Congestion Control

• When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
• When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
• When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
• When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
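A compact, illustrative sketch of the sender actions in the table above; CongWin and Threshold are kept in MSS units for readability (an assumption: the slides use bytes), and the method names are invented:

```python
# Sketch of the sender-action table above (MSS units, illustrative names).
class TcpSender:
    def __init__(self, threshold=8):
        self.congwin = 1.0
        self.threshold = threshold
        self.state = "SS"

    def on_new_ack(self):
        if self.state == "SS":
            self.congwin += 1                  # exponential growth (per ACK)
            if self.congwin > self.threshold:
                self.state = "CA"
        else:                                  # CA: +1/CongWin per ACK => +1 MSS/RTT
            self.congwin += 1 / self.congwin

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2
        self.congwin = max(self.threshold, 1)  # multiplicative decrease, >= 1 MSS
        self.state = "CA"

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = 1.0                     # back to slow start
        self.state = "SS"

s = TcpSender(threshold=4)
for _ in range(6):
    s.on_new_ack()
print(s.state)   # CA: the window crossed the threshold during slow start
```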

TCP throughput

• What's the average throughput of TCP as a function of window size and RTT?
  • Ignore slow start.
• Let W be the window size when loss occurs.
• When the window is W, throughput is W/RTT.
• Just after a loss, the window drops to W/2, throughput to W/(2 RTT).
• Average throughput: 0.75 W/RTT

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R.]
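The 0.75 W/RTT figure is just the mean of a linear ramp from W/2 to W; checking with invented example numbers:

```python
# Checking the 0.75*W/RTT claim: with AIMD's saw-tooth, the window grows
# linearly from W/2 back to W, so the average window is the mean of that ramp.
W, rtt = 100.0, 0.1              # example window (segments) and RTT (s), assumed

avg_window = (W / 2 + W) / 2     # mean of a linear ramp = 0.75 * W
avg_throughput = avg_window / rtt
print(avg_throughput)            # 750.0 segments/sec = 0.75 * W / RTT
```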

Why is TCP fair?

Two competing sessions:
• additive increase gives a slope of 1 as throughput increases
• multiplicative decrease decreases throughput proportionally

[Figure: phase plot of Connection 1 throughput versus Connection 2 throughput, axes up to R; repeated cycles of "congestion avoidance: additive increase" and "loss: decrease window by factor of 2" converge to the equal-bandwidth-share line.]

Fairness (more)

Fairness and UDP:
• Multimedia apps often do not use TCP
  • do not want their rate throttled by congestion control
• Instead use UDP: pump audio/video at a constant rate, tolerate packet loss
• Research area: TCP friendly

Fairness and parallel TCP connections:
• nothing prevents an app from opening parallel connections between 2 hosts
• Web browsers do this
• Example: link of rate R supporting 9 connections:
  • new app asks for 1 TCP, gets rate R/10
  • new app asks for 11 TCPs, gets R/2!
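The parallel-connections example's arithmetic, worked out under the slide's assumption that the bottleneck is split equally per connection:

```python
# Each TCP connection gets an equal share of the bottleneck, so an app
# holding k of the n total connections gets k/n of the link rate R.
def app_share(existing, app_conns, R=1.0):
    total = existing + app_conns
    return app_conns * R / total

print(app_share(9, 1))    # 0.1  -> R/10, as on the slide
print(app_share(9, 11))   # 0.55 -> 11R/20, roughly the R/2 of the slide
```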

Chapter 3: Summary

• principles behind transport layer services:
  • multiplexing, demultiplexing
  • reliable data transfer
  • flow control
  • congestion control
• instantiation and implementation in the Internet: UDP and TCP

Next:
• leaving the network "edge" (application, transport layers)
• into the network "core"

Programming Assignment #2

• Implementing a Reliable Transport Protocol
• Deadline: 11 April 2016