Making things go fast on a network, part 2

Sorry about the delay - I got nailed by a nasty sinus infection on Tuesday night last week that took me out until today.


In my last post I started discussing some of the aspects of networking that need to be understood before you can make things "go fast" on a network.

As a quick recap, here are the definitions and axioms from that post (note: I've changed the definitions of packet and message because they make the subsequent articles easier):

First, some definitions:

  1. When you're transferring data on a connection oriented protocol, there are two principals involved: the sender and the receiver. (I'm not going to use the words "client" and "server" because they imply a set of semantics associated with a higher level protocol.)
  2. A "Frame" is a unit of data sent on the wire. 
  3. A "Packet" comprises one or more frames of data, depending on the size of the data being sent; it typically corresponds to a single send() call on the sender.
  4. A "Message" comprises one or more packets of data and typically corresponds to a higher level protocol verb or response.

And some axioms:

  1. Networks are unreliable.  The connection oriented protocols attempt to represent this unreliable network as a reliable communication channel, but there are some "interesting" semantics that arise from this.
  2. LANs (Token Ring, Ethernet, etc.) all transmit messages in "frames"; on Ethernet the maximum frame payload (the MTU, or Maximum Transmission Unit) is 1500 bytes.  Some of that is used to hold protocol overhead (around 50 bytes or so), so in general you've got about 1400 bytes of user payload available in each frame (the numbers vary depending on the protocol and the networking options used, but 1400 is a reasonable number).
  3. A connection oriented protocol provides certain guarantees:
    1. Reliable delivery - A sender can guarantee that one of two things occurred when sending data: either the receiver received the data being sent, or an error has occurred.
    2. Data ordering - If the sender sends three packets in the order A-B-C, the receiver receives the packets in the order A-B-C.
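As a rough sketch of axiom 2, here's how a single packet (one send() call's worth of data) might be carved into frames. The 1400-byte payload figure is just the approximation above; the real numbers depend on the protocol and options in use.

```python
FRAME_PAYLOAD = 1400  # approximate user payload per Ethernet frame (see axiom 2)

def frames_for(packet: bytes) -> list[bytes]:
    """Split one packet into frame-sized chunks of at most FRAME_PAYLOAD bytes."""
    return [packet[i:i + FRAME_PAYLOAD]
            for i in range(0, len(packet), FRAME_PAYLOAD)]

# A 4000-byte packet needs three frames: 1400 + 1400 + 1200 bytes.
print([len(f) for f in frames_for(b"x" * 4000)])  # → [1400, 1400, 1200]
```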

At the end of the last post, I introduced one consequence of these axioms:  When sending packets A, B, and C, the sender can't transmit packet B until the receiver has acknowledged receipt of packet A.  This was because of axiom 3.2 (data ordering).
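The stop-and-wait behavior this implies can be sketched as a toy simulation. Everything here (the function, the acknowledgement callback) is illustrative, not any real stack's implementation:

```python
def stop_and_wait(packets, receiver_ack):
    """Send each packet in order; don't send the next until this one is ACKed."""
    log = []
    for p in packets:
        log.append(f"send {p}")
        while not receiver_ack(p):   # retransmit until the receiver acknowledges
            log.append(f"resend {p}")
        log.append(f"ack {p}")
    return log

# A receiver that ACKs everything on the first try:
print(stop_and_wait(["A", "B", "C"], lambda p: True))
```

Note that packet B is never even offered to the network until A's acknowledgement comes back; that round trip per packet is exactly the latency cost discussed in the last post.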

There's one thing I forgot in my last post:

What happens when the receiver isn't ready to receive data from the sender?

Well, it's not very pretty, and the answer depends on the semantics of the protocol.  In general, if the receiver doesn't have room for the packet, it sends a "NACK" to the sender (NACK stands for Negative ACKnowledgement).  A NACK tells the sender that there's no storage for the request, and the sender now needs to decide what to do.  Sometimes the NACK contains a hint as to the reason for the failure; for instance, the NetBEUI protocol's NACK response includes reasons like "no remote resources", "unexpected request", and "out of sequence".  The sender can use this information to determine if it should hang up the connection or retry (for "no remote resources", for instance, it should retry).
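The decision the sender makes on a NACK might be sketched like this. The reason strings are illustrative (loosely modeled on the NetBEUI reasons above), not any actual protocol's constants:

```python
# Transient conditions - the receiver is just out of buffers right now.
RETRYABLE = {"no remote resources"}
# Protocol errors - the conversation itself has gone wrong.
FATAL = {"unexpected request", "out of sequence"}

def on_nack(reason: str) -> str:
    """Decide whether to retransmit or hang up after receiving a NACK."""
    if reason in RETRYABLE:
        return "retry"
    if reason in FATAL:
        return "disconnect"
    return "disconnect"   # unknown reason: safest to drop the connection

print(on_nack("no remote resources"))  # → retry
print(on_nack("out of sequence"))      # → disconnect
```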

Sender                   Receiver
Send Packet A.1
                         Send ACK A.1
Send Packet A.2
                         Send NACK A.2 (No Memory)
Send Packet A.2
                         Send NACK A.2 (No Memory)
Send Packet A.2
                         Send ACK A.2
Send Packet A.3
                         Send ACK A.3

All of this retransmission goes on under the covers; applications don't typically need to know about it.  But there's a potential perf pitfall here.


If you're analyzing network traces, you often see this pattern:

Sender                   Receiver
Send Packet A.1
                         Send NACK A.1 (No Memory)
Send Packet A.1
                         Send NACK A.1 (No Memory)
Send Packet A.1
                         Send NACK A.1 (No Memory)
Send Packet A.1
                         Send ACK A.1
Send Packet A.2
                         Send ACK A.2
Send Packet A.3
                         Send ACK A.3

What happened here?  Most likely, the receiver didn't have a receive buffer posted and waiting when the sender sent its data, so the sender had to retransmit over and over until the receiver got around to being able to receive it.

So here's "Making things go fast on a network" perf rule number 1:

        Always make sure that you have a receive request down BEFORE someone tries to send you data.

In traditional client/server networking, this rule applies to clients as well as servers - if a client doesn't have a receive outstanding when the server sends the response to a request, the exchange will stall in exactly the same way, waiting for the client to get its receive down.
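Here's a minimal loopback sketch of rule number 1 in Python's socket API: the receiver thread posts its recv() as soon as the connection is accepted, so a buffer is already waiting when the sender's data arrives. (A real server would of course loop and handle errors; this just shows the ordering.)

```python
import socket
import threading

def receiver(listener, results):
    conn, _ = listener.accept()
    with conn:
        # The receive is posted here, before doing any other work,
        # so the incoming data has somewhere to land immediately.
        results.append(conn.recv(1024))

listener = socket.socket()              # AF_INET / SOCK_STREAM by default
listener.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
listener.listen(1)

results = []
t = threading.Thread(target=receiver, args=(listener, results))
t.start()

sender = socket.socket()
sender.connect(listener.getsockname())
sender.sendall(b"request")
sender.close()

t.join()
listener.close()
print(results)
```

On a loopback connection the stall is invisible, but on a real network a receiver that only posts its buffer after the data arrives is exactly the retransmission pattern in the trace above.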


Btw, a piece of silly trivia: J Allard, of Xbox fame, used to have two iguanas in his office named ACK and NACK, back when he was the PM for NT networking.