After months of dealing with choppy video and blaming it on the Flash player, I am sorry to say that your lack of buffer documentation, and especially the 16k SendBufferSize recommendation for chat applications, is *bullshit*.
Apparently SendBufferSize and ReceiveBufferSize are the SOCKET BUFFERS. For anyone developing a network application it should be common knowledge that the combination of outgoing socket buffer size and network latency puts a cap on TCP throughput. (Packets remain in the send buffer until their receipt is acknowledged; while the buffer is full, no more data can be sent.)
I run a global chat site offering video streams of up to 512 kbit/s. With 16k send buffers, receiving one video stream properly requires a round-trip latency of less than 250 ms (16 KB of in-flight data per 250 ms works out to roughly 512 kbit/s). Since viewing several streams still goes over a single NetConnection, two streams only work below 125 ms, and to view four streams the user practically needs to live next door to the datacenter.
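The cap described above is just the bandwidth-delay relation: sustained TCP throughput can't exceed the send buffer size divided by the round-trip time, because unacknowledged data has to fit in the buffer. A minimal sketch of that arithmetic (class and method names are my own, chosen for illustration):

```java
// Rough bandwidth-delay check, using the numbers from the post.
public class BufferCap {
    // Max sustained TCP throughput is approximately sendBufferBytes / RTT,
    // since all unacknowledged in-flight data must fit in the send buffer.
    static double maxThroughputKbit(int sendBufferBytes, double rttSeconds) {
        return sendBufferBytes * 8 / 1000.0 / rttSeconds;
    }

    public static void main(String[] args) {
        // 16 KB buffer at 250 ms RTT -> roughly 524 kbit/s,
        // i.e. barely enough for one 512 kbit/s stream.
        System.out.println(maxThroughputKbit(16 * 1024, 0.25));
        // Same buffer at 500 ms RTT -> roughly 262 kbit/s: the stream stalls.
        System.out.println(maxThroughputKbit(16 * 1024, 0.5));
    }
}
```

Halve the RTT and the cap doubles, which is why two streams over one connection need under 125 ms.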
Analogously, ReceiveBufferSize limits the advertised TCP window size, which caps throughput in the same way on the receiving side.
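In plain Java these two knobs map onto the standard socket options, so a fix is to request larger buffers before the connection is established. A sketch (assuming you can reach the underlying `java.net.Socket`; in a server product these presumably correspond to the SendBufferSize/ReceiveBufferSize config parameters, and the kernel may round or clamp the requested values):

```java
import java.io.IOException;
import java.net.Socket;

public class BufferTuning {
    // Returns an unconnected socket with enlarged buffers requested.
    // Setting the receive buffer BEFORE connecting matters, because the
    // TCP window scale is negotiated during the handshake.
    public static Socket tunedSocket() throws IOException {
        Socket s = new Socket();
        s.setSendBufferSize(256 * 1024);    // SO_SNDBUF: caps unacked in-flight data
        s.setReceiveBufferSize(256 * 1024); // SO_RCVBUF: caps the advertised window
        return s;
    }

    public static void main(String[] args) throws IOException {
        Socket s = tunedSocket();
        // The OS reports what it actually granted, not necessarily 256 KB.
        System.out.println(s.getSendBufferSize() + " / " + s.getReceiveBufferSize());
        s.close();
    }
}
```

A 256 KB buffer at 250 ms RTT lifts the cap to roughly 8 Mbit/s, comfortably above one 512 kbit/s stream.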
So, unless my conclusions about my observations are wrong, you might want to document this. Not everyone has a CDN with a box every 500 miles.
I understand your frustration, but I am not sure I completely understand your conclusions. Where did you get this information? My understanding is that the maximum MTU for broadband Internet delivery is 1500 bytes. I would think 16K send and receive buffers are much greater than a 1500-byte MTU, so they should not affect this value. The default tuning for Java sockets is 8K buffers; we are suggesting a value twice that. What did you end up changing these values to, and did it improve throughput? I would love to understand this at a deeper level, so any pointers to socket tuning suggestions we can leverage to help others would be great.