Notes on the Cedar TCP implementation.
Usage:
TCP.mesa provides the interface to this package. Use CreateTCPStream to get an ordinary IO.STREAM containing the TCP connection. Reading will wait for data and return it; writing will send data. Flush forces any partially filled packets to be sent with the TCP Push option set. When you are through sending data, do a Close. You may still receive incoming data, but may not send any more after the Close. AbortTCPStream will send a TCP reset and destroy the connection entirely.
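These stream operations map loosely onto BSD-style sockets. A sketch in Python, as an analogue rather than the Mesa interface itself: connecting stands in for CreateTCPStream, a half-close (which still permits receiving) for Close, and a close with SO_LINGER set to zero (which emits a TCP reset) for AbortTCPStream.

```python
import socket
import struct
import threading

def echo_server(listener):
    # Accept one connection and echo until the peer half-closes.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:          # peer has closed its sending side
                break
            conn.sendall(data)

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

s = socket.create_connection(("127.0.0.1", port))   # ~ CreateTCPStream
s.sendall(b"hello")               # writing sends data
s.shutdown(socket.SHUT_WR)        # ~ Close: may still receive, not send
reply = b""
while True:
    chunk = s.recv(4096)          # reading waits for data
    if not chunk:
        break
    reply += chunk
# ~ AbortTCPStream: SO_LINGER with a zero timeout makes close() send a reset.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
s.close()
```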
Urgent data is sent by calling SetUrgent. It sets the urgent pointer in outgoing packets to point to the current point in the stream. WaitForUrgentData is used to asynchronously wait for urgent data from the other end. It will only return when urgent data is received or the connection closes. Warning: This code has never been tested.
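The SetUrgent/WaitForUrgentData pair resembles out-of-band data in BSD sockets. A sketch on loopback with Python sockets, where MSG_OOB stands in for SetUrgent and a select-then-receive stands in for WaitForUrgentData; this exercises the socket API only, not the (untested) Cedar code.

```python
import select
import socket
import threading

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]

def sender():
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"normal")
    s.send(b"!", socket.MSG_OOB)  # ~ SetUrgent: urgent pointer marks this byte
    s.close()

threading.Thread(target=sender, daemon=True).start()
conn, _ = listener.accept()
# ~ WaitForUrgentData: block until the socket reports an exceptional
# condition (the urgent mark), then read the out-of-band byte.
select.select([], [], [conn], 5)
urgent = conn.recv(1, socket.MSG_OOB)
normal = conn.recv(16)            # the ordinary data is unaffected
```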
CreateTCPStream will raise the ERROR TCP.Error if it cannot open the connection for some reason. The ERROR IO.Error[$Failed, self] will be raised during an IO operation on the stream if the stream state changes to some state in which that operation cannot ever succeed (e.g. the remote end closes the connection while a write is pending, or the remote TCP stops responding to connection-level probes). A client who wishes to find out the reason for the error may call TCP.ErrorFromStream[self]. End of stream (resulting from a normal remotely-initiated close) is signalled by the ERROR IO.EndOfStream. It is not sufficient to use IO.EndOf to detect end of stream, since the TCP package may not know about the end of stream until after the read call begins.
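The point about IO.EndOf holds for stream transports generally: there is no reliable way to ask "will the next read hit end of stream?" before reading, because the close may not have been observed yet. A Python sketch, with a socketpair standing in for the TCP stream:

```python
import socket
import threading

a, b = socket.socketpair()

def closer():
    b.sendall(b"last bytes")
    b.close()                 # the remote end initiates the close

threading.Thread(target=closer, daemon=True).start()
data = a.recv(4096)           # the pending data is returned first
eof = a.recv(4096)            # only now does a read observe end of stream
```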
When you open a connection, you may optionally specify a timeout value (in milliseconds) for data on the connection. If a read or write operation has to wait that long for data (or for buffer space at the remote end), then it will raise the INFORMATIONAL SIGNAL TCP.Timeout. It can be resumed to try again, and if not caught it is automatically resumed. Note that this timeout is distinct from the connection-level timeout that occurs if the remote TCP stops responding altogether; the latter gives rise to IO.Error as described above.
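This per-operation, resumable timeout resembles a socket read timeout where the caller simply retries. A hypothetical sketch in Python, with socket.settimeout standing in for the connection's timeout value and the retry loop playing the role of resuming the signal:

```python
import socket

a, b = socket.socketpair()
a.settimeout(0.05)                # ~ the per-connection timeout value
tries = 0
while True:
    try:
        data = a.recv(4096)       # the read that may time out
        break
    except socket.timeout:        # ~ SIGNAL TCP.Timeout
        tries += 1                # resume: just try again
        if tries == 3:
            b.sendall(b"late")    # data finally arrives
```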
To create a server, open the connection with active set to FALSE. This will create a single connection in the listen state. Call WaitForListenerOpen to wait for someone else to open a connection there. Once the connection is open, you must open another connection in listening mode to handle new connections. I would like to see the interface change here.
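This one-connection-per-listen shape differs from BSD accept(), where the listening socket survives each accepted connection. A sketch of a server written to the Cedar shape, emulated over Python sockets with hypothetical names: a fresh listener is opened for each connection and discarded once it opens.

```python
import queue
import socket
import threading

ports = queue.Queue()
served = []

def server():
    for _ in range(2):                    # one listener per connection
        lst = socket.create_server(("127.0.0.1", 0))
        ports.put(lst.getsockname()[1])   # advertise the fresh listener
        conn, _ = lst.accept()            # ~ WaitForListenerOpen
        lst.close()                       # this listener is used up
        served.append(conn.recv(16))
        conn.close()

t = threading.Thread(target=server)
t.start()
for msg in (b"first", b"second"):
    with socket.create_connection(("127.0.0.1", ports.get())) as c:
        c.sendall(msg)
t.join()
```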
Implementation:
The main parts of the implementation are TCPMain (which exports TCP), TCPOps, TCPLogging, TCPTransmit, TCPStates, and TCPReceiving, which are each discussed below.
TCPMain exports the TCP interface and implements the stream operations. The only interface it uses in the rest of the TCP code is TCPOps. It knows a little about the structure of a handle, particularly the input and output buffers. This module is modeled fairly heavily on the corresponding module in the BSP code.
TCPOps is the main interface that provides TCPness. It has most of the type and variable declarations for the TCP implementation, and it is OPENed by most of the other modules. It provides packet streams (more or less) to TCPMain, and utility routines to the rest of the implementation.
The main data type provided is the TCPHandle. It is a monitored record which contains all the state about a particular TCP connection. TCPOpsImpl and TCPReceivingImpl are both monitors that lock the handle passed to their routines. TCPTransmitImpl is not a monitor, but its routines are only called by procedures that already have the handle lock.
TCPOps has two routines called StartupTCP and ShutdownTCP, which do the obvious things. They are not exported, but can be called from the debugger. When debugging, you should call ShutdownTCP before running the new version. StartupTCP forks two processes, called the receiver process and the retransmit process. The receiver process sits in a loop getting datagrams from the IP package and processing them. The retransmit process wakes up every so often and examines all active connections for packets that should be retransmitted and connections that should be shut down due to timeouts.
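The two forked processes can be pictured as two loops over shared state. A sketch with hypothetical names and data structures: a queue of incoming datagrams for the receiver, and a periodic scan of per-connection retransmission state for the retransmitter.

```python
import queue
import threading
import time

incoming = queue.Queue()
connections = [{"unacked": [b"seg1"], "retransmits": 0}]
processed = []
running = True

def receiver():
    # Sits in a loop getting datagrams and processing them.
    while running:
        try:
            dg = incoming.get(timeout=0.01)
        except queue.Empty:
            continue
        processed.append(dg)              # ~ ProcessRcvdSegment

def retransmitter():
    # Wakes up every so often and examines all active connections.
    while running:
        time.sleep(0.02)
        for c in connections:
            if c["unacked"]:
                c["retransmits"] += 1     # would resend the segment here

threads = [threading.Thread(target=receiver, daemon=True),
           threading.Thread(target=retransmitter, daemon=True)]
for t in threads:
    t.start()
incoming.put(b"datagram")
time.sleep(0.1)
running = False
for t in threads:
    t.join()
```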
TCPStates provides the interface to create and destroy new handles. TCPStatesImpl is a monitor protecting the list of all handles. The rule for obtaining locks is that you always get the handle lock before you get the TCPStates lock if you are going to get both.
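The ordering rule can be made concrete. A sketch in Python with hypothetical names: every path that needs both locks takes the handle's monitor first and the TCPStates lock second, so no two threads can acquire the pair in opposite orders (which is what prevents deadlock).

```python
import threading

states_lock = threading.Lock()        # ~ the TCPStates monitor: guards the list
all_handles = []

class Handle:
    def __init__(self):
        self.lock = threading.Lock()  # ~ the per-connection monitor lock

def destroy(handle):
    with handle.lock:                 # handle lock first...
        with states_lock:             # ...then the TCPStates lock
            all_handles.remove(handle)

h = Handle()
with states_lock:
    all_handles.append(h)
destroy(h)
```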
TCPReceiving contains the routines that handle data coming in from the net. It exports ProcessRcvdSegment which is called by the receiver process. It calls TCPTransmit to send acks and things like that. It queues the incoming data on the readyToReadQueue in the handle. That queue is guaranteed to be in order, and to have non-empty segments on it. There is also a fromNetQueue, which contains packets received out of order.
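The invariant on the two queues (readyToReadQueue in order and non-empty, fromNetQueue holding out-of-order arrivals) suggests reassembly logic along these lines. The queue names come from the text; the code itself is a hypothetical sketch keyed on byte sequence numbers.

```python
ready_to_read = []   # ~ readyToReadQueue: in order, non-empty segments only
from_net = {}        # ~ fromNetQueue: out-of-order segments by sequence number
next_seq = 0         # next in-order byte we expect

def segment_arrived(seq, data):
    global next_seq
    if not data:
        return                        # empty segments are never queued
    if seq != next_seq:
        from_net[seq] = data          # hold it until the gap before it fills
        return
    ready_to_read.append(data)
    next_seq += len(data)
    while next_seq in from_net:       # drain any segments this unblocked
        nxt = from_net.pop(next_seq)
        ready_to_read.append(nxt)
        next_seq += len(nxt)

segment_arrived(5, b"world")          # out of order: parked on from_net
segment_arrived(0, b"hello")          # fills the gap; both become readable
```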
TCPTransmit provides routines to send data to the net. There are simple routines like SendFIN to send a single packet, and more complex ones like TryToSend, which may send data and/or acks. When sending data, one queues the outgoing datagrams on the toNetQueue and calls TryToSend.
TCPLogging provides routines to print debugging information on an output stream. Setting either of the streams logFile or pktFile to non-NIL will cause debugging information to appear on them. These files are holdovers from the original implementation. logFile differs from pktFile in that timestamps are printed as well.
To do:
Urgent send and receive have never been tested.
The listener interface should be reworked to look more like the BSP version.
We neither send nor process the TCP options. We should really send the maxSegment option, since the IP package can't handle full-length TCP segments yet.
I once saw a connection where the syns were not getting retransmitted. I don't know why.
The stream procedures in TCPMain are not monitored, so concurrent calls could get the buffer state thoroughly fouled up. Either move the TCPHandle object monitor out to TCPMain (from TCPOpsImpl) or add a new object monitor protecting just the buffers associated with the TCPHandle.
Each packet causes a new TCPRcvBuffer or TCPSendBuffer to be allocated, and also a new IP.DatagramRec (for received packets, the DatagramRec is allocated in the IP implementation). This is surely a serious inefficiency. Change to maintain a pool of free buffers that is periodically swept up by some background process. But note that determining safely that a buffer is no longer being referenced requires more careful treatment of buffer handles in the main stream procedures.
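The suggested fix might look like the following sketch (hypothetical types; the sweep would be run by the background process proposed above, and, per the caveat about buffer handles, put may only be called once no other reference to the buffer survives).

```python
import threading

class BufferPool:
    # A pool of free packet buffers, trimmed periodically by a sweeper.
    def __init__(self, bufsize=1500):
        self.bufsize = bufsize
        self.free = []
        self.lock = threading.Lock()

    def get(self):
        with self.lock:
            if self.free:
                return self.free.pop()    # reuse instead of allocating
        return bytearray(self.bufsize)    # pool empty: allocate fresh

    def put(self, buf):
        # Caller must guarantee no other reference to buf remains.
        with self.lock:
            self.free.append(buf)

    def sweep(self, keep=4):
        # Background trim: drop excess free buffers back to the allocator.
        with self.lock:
            del self.free[keep:]

pool = BufferPool()
b1 = pool.get()
pool.put(b1)
b2 = pool.get()                           # same buffer comes back
```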