Optimizing the protocol opening operation when establishing a TCP connection
I am developing a new protocol called DITP. It is a connection-oriented protocol that will use TCP as its transport layer. With conventional internet protocols, once the TCP connection is established, the server starts by sending a welcome message, to which the client responds, eventually sending its first request.
I realized that I could save a round trip by inverting the initial protocol transaction: the client starts by sending a greeting, immediately followed by the first request.
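A minimal sketch of this inverted open in Python (the greeting string, newline framing, and function names here are my own assumptions for illustration, not DITP's actual wire format):

```python
import socket

def build_pipelined_open(greeting: bytes, first_request: bytes) -> bytes:
    # Hypothetical framing: newline-terminated messages, sent back-to-back.
    return greeting + b"\n" + first_request + b"\n"

def open_ditp(host: str, port: int, first_request: bytes) -> socket.socket:
    sock = socket.create_connection((host, port))
    # One sendall: the greeting and the first request share a single
    # round trip instead of waiting for a server welcome message first.
    sock.sendall(build_pipelined_open(b"DITP/1.0 HELLO", first_request))
    return sock
```

The point is simply that both messages leave the client before any server reply is awaited, so the server can answer the greeting and the request together.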
The following figure compares the two protocol transaction timelines and shows how the inverted order saves a round trip.
(figure source: disnetwork.info)
See the following blog post for a more detailed explanation: http://www.disnetwork.info/1/post/2008/08/optimizing-ditp-connection-open.html
I have two questions for the network programmers of StackOverflow:

- Is this assumption correct?
- Why don't conventional protocols use this?
This method could provide a significant performance optimization for long-distance connections, where latency is high and connections must be established frequently. HTTP would be a good candidate.
EDIT: Oops, big mistake. HTTP already uses this optimized form: the client sends its request directly, and there is no greeting transaction as there is in SMTP. See the Hypertext Transfer Protocol page on Wikipedia.
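For illustration, this is what HTTP's client-speaks-first open looks like over a raw socket: the request goes out immediately after the TCP handshake, with no server greeting to wait for (host and path here are placeholders):

```python
import socket

# HTTP/1.1: the client speaks first; there is no server greeting.
request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

def fetch(host: str, port: int = 80) -> bytes:
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request)  # sent right after SYN / SYN+ACK / ACK
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```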
---
This is not done because:
a.) The client may need to know which version of the protocol the server is using
b.) You don't even know whether you are actually talking to a server that supports the protocol.
In short, it often makes sense to know what you are talking to before spewing data at it.
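To illustrate this concern, a cautious client would read and validate the server's banner before sending anything else. This sketch assumes a hypothetical `DITP/<major>.<minor> READY` banner format (not anything specified by DITP):

```python
def parse_banner(banner: bytes) -> tuple:
    """Validate a hypothetical server greeting of the form
    b"DITP/<major>.<minor> READY" and return the version pair."""
    parts = banner.strip().split(b" ", 1)
    if len(parts) != 2 or parts[1] != b"READY" or not parts[0].startswith(b"DITP/"):
        raise ValueError("not a DITP server: %r" % banner)
    major, minor = parts[0][len(b"DITP/"):].split(b".")
    return int(major), int(minor)
```

A client that pipelines its first request instead skips this check, which is exactly the trade-off the answer points out: one saved round trip versus knowing who you are talking to.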
---
I wonder if this design might not be a violation of Postel's law, since it assumes things about the receiver, and thus about what is legal to send, before knowing anything about it.
I would at least expect this principle to be the reason most protocols are designed with an initial round trip to learn more about the other end before sending data that might not be understood at all.
---
If you're worried about latency, you might want to look at LPT, a protocol specially designed for connections with extremely long round-trip times.
When designing a new transport protocol, you should also pay attention to congestion control, and to what firewalls will do when they encounter packets of an unknown protocol.
---
The design goal of protocols such as HTTP and SMTP was not speed but reliability in the face of unreliable physical networks and scarce bandwidth. In many ways those conditions have now changed with improved equipment.
Your design should be weighed against the network conditions you must face and the reliability, latency, and bandwidth requirements of your intended application.
---
- In theory, this is correct.
- Common protocols don't use this because the pipelined messages share one stream, so they need to be distinguishable from each other. The server has to take care of this, for example by expecting each piece of data in some container format (XML, JSON, BitTorrent-like, you name it). And the container is extra overhead that slows down the transfer.
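The framing this answer describes does not have to be a heavyweight container, though. A minimal length-prefix scheme (my own illustration, not DITP's format) is enough to split a pipelined stream back into messages, at a cost of four bytes each:

```python
import struct

def frame(message: bytes) -> bytes:
    # 4-byte big-endian length prefix, then the message body.
    return struct.pack(">I", len(message)) + message

def unframe(stream: bytes) -> list:
    # Split a concatenation of framed messages back into a list.
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages
```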
Why not just open multiple TCP sockets and send the separate requests over those connections instead? No framing overhead there! Oh, this is already being done, e.g. by some modern web browsers. Use wireshark or tcpdump to inspect the packets and see for yourself.
There's more to it than that. A TCP socket takes a while to set up (SYN, some time, SYN+ACK, some time, ACK...). Many consider it a waste to tear down the connection after each request, which is why modern HTTP servers and clients use Connection: keep-alive to indicate that they want to reuse the connection.
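As a sketch, several requests sharing one connection might look like this on the wire (hypothetical helper of my own; note keep-alive on every request but the last):

```python
def requests_on_one_connection(host: str, paths) -> bytes:
    # Build the byte stream a client would send over a single reused
    # TCP connection: keep-alive until the final request, then close.
    out = []
    for i, path in enumerate(paths):
        last = i == len(paths) - 1
        out.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: %s\r\n\r\n"
            % (path, host, "close" if last else "keep-alive")
        )
    return "".join(out).encode()
```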
Sorry, but I think your ideas are great ones that you can, however, already find in the RFCs. Keep thinking though; I'm sure one day you'll come up with something brilliant. See, for example, here for an optimized BitTorrent client.