
[Rant] SendBufferSize recommendations

After months of dealing with choppy video and blaming it on the Flash player, I am sorry to say that your lack of buffer documentation, and especially the 16k SendBufferSize recommendation for chat applications, is bullshit.

Apparently SendBufferSize and ReceiveBufferSize are the SOCKET BUFFERS. For anyone developing a network application it should be common knowledge that the combination of outgoing socket buffer size and network latency puts a cap on the bandwidth of TCP traffic. (Packets remain in the buffer until their reception is acknowledged; if the buffer is full, no more data is sent.)

I run a global chat site offering video streams of up to 512 kbit/s. With 16k send buffers, properly receiving one video stream requires a latency of less than 250 ms. Since viewing several streams still uses just one NetConnection, two streams only work below 125 ms, and to view four streams the user practically needs to live next to the datacenter.
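To spell out that arithmetic (a quick sketch; the class and helper names are my own invention, only the numbers come from my setup):

    // Sketch: all streams on one NetConnection share one socket, so the
    // 16 kB send buffer divided by the combined byte rate gives the
    // maximum round-trip time the connection can tolerate.
    public class LatencyCap {
        public static void main(String[] args) {
            final int bufferBytes = 16_000;               // the recommended 16k send buffer
            final double bytesPerSecond = 512_000 / 8.0;  // one 512 kbit/s stream = 64 kB/s
            for (int streams = 1; streams <= 4; streams++) {
                double maxRttMs = bufferBytes / (streams * bytesPerSecond) * 1000;
                System.out.printf("%d stream(s): max RTT %.1f ms%n", streams, maxRttMs);
            }
            // prints 250.0, 125.0, 83.3, 62.5
        }
    }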

Analogously, the ReceiveBufferSize limits the TCP window size, which also caps throughput.

So, unless my conclusions about my observations are wrong, you might want to document this. Not everyone has a CDN with a box every 500 miles.

I understand your frustration but I am not sure I completely understand your conclusions. Where did you get this information? My understanding is that the maximum MTU for broadband Internet delivery is 1500 bytes. I would think 16K send and receive buffers are much greater than the 1500-byte MTU, so they should not affect this value. The default tuning for Java sockets is 8K buffers; we are suggesting a value of twice that. What did you end up changing these values to, and did it improve throughput? I would love to understand this at a deeper level, so any pointers to socket-tuning suggestions we can leverage to help others would be great.

Charlie

OK, thanks for the info.

Charlie

Hi,

Thanks for alerting me to this. I was having video stutter every 40-55 seconds, even though there was more than enough bandwidth between me and the server. I confirmed via an SFTP transfer test that I could max out my connection, yet video at under half of my connection's maximum was “freezing”.

I have changed all VHosts to 120000 (120k) buffers, and have never had a more stable stream in my life.

And to avoid any ambiguity, for people wanting to try this:

In VHost.xml, under HostPort/SocketConfiguration:

    <SendBufferSize>120000</SendBufferSize>
    <ReceiveBufferSize>120000</ReceiveBufferSize>

So a big thanks!

It’s just how TCP works: error correction requires the receiver to acknowledge the reception of each packet. Until a packet is acknowledged, it stays in the socket buffer in case it needs to be resent; if the buffer is full, no more data is sent until acknowledged packets free up some space. How fast the acknowledgement makes it back to the sender depends on the latency between the two ends.

Because of this, the bandwidth of a TCP connection is limited to buffer size / latency. E.g. a 16 kbyte buffer on a 200 ms connection has a maximum throughput of 16 kB / 0.2 s = 80 kB/s, or 640 kbit/s.
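Or, turned into code (a minimal sketch; the helper names are mine):

    // Sketch: the bandwidth-delay product, in both directions.
    public class BufferMath {
        // Maximum throughput (bytes/s) for a given send buffer and round-trip time.
        static double maxThroughput(int bufferBytes, double rttSeconds) {
            return bufferBytes / rttSeconds;
        }
        // Minimum buffer (bytes) needed to sustain a bitrate (bits/s) at a given RTT.
        static int requiredBuffer(double bitsPerSecond, double rttSeconds) {
            return (int) Math.ceil(bitsPerSecond / 8 * rttSeconds);
        }
        public static void main(String[] args) {
            System.out.println(maxThroughput(16_000, 0.2));    // 80000.0 bytes/s = 640 kbit/s
            System.out.println(requiredBuffer(512_000, 0.25)); // 16000 bytes for 512 kbit/s at 250 ms
        }
    }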

TCP/IP is very old, and a lot of its default values have become insufficient for today's bandwidth requirements. Most operating systems now have default buffers in the 64 kbyte ballpark.

I’m no networking expert, so I had to research why larger receive buffers are also necessary: it seems that when negotiating the window size, TCP effectively uses the smaller of the two buffers (send/receive). Since TCP requires packets to be delivered in order, and fast recovery requires the receiver to hold on to out-of-order packets, the receive buffer may have to hold as much data as the send buffer … I think. :stuck_out_tongue: So both buffers should be the same size, and they should be larger than 16 kbytes for many if not most video chats.
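For anyone who wants to poke at this outside of Wowza: the same two knobs exist on a plain java.net.Socket (standard JDK methods, nothing Wowza-specific). A minimal sketch:

    import java.net.Socket;

    public class SocketTuning {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket();
            // Set both buffers before connecting; the receive buffer in
            // particular must be set early if it is to influence the TCP
            // window advertised during the handshake.
            socket.setSendBufferSize(120_000);    // SO_SNDBUF hint
            socket.setReceiveBufferSize(120_000); // SO_RCVBUF hint
            // The OS may round or cap these values, so read them back.
            System.out.println("send: " + socket.getSendBufferSize());
            System.out.println("recv: " + socket.getReceiveBufferSize());
            socket.close();
        }
    }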