I’m working on a project for a new client. This guy has a Wowza server running on an EC2 instance. His streams are audio-only, and he’s quite concerned about getting his latency (from microphone input on the encoder to headphone output on the player) as low as possible - ideally under 1/2 second.
I’ve followed the advice in https://www.wowza.com/docs/how-to-set-up-low-latency-applications-in-wowza-streaming-engine-for-rtmp-streaming, and when I get my hands on an Apple device to test on I’ll also implement https://www.wowza.com/docs/how-to-configure-adobe-hds-packetization-sanjosestreaming. So far, encoding the stream with FMLE and playing it back in the OSMF player, I can get latency down to around a second. I don’t notice any difference whether I include a video stream or send audio only.
Besides the two articles mentioned above and the discussions that go with them, is there anything more I should be looking at to get my stream latency down? I am using a stream type of live-record-lowlatency.
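For reference, here is roughly what the relevant part of my application configuration looks like — a sketch only, with everything except the StreamType element abbreviated (the application name "live" and surrounding element names are just placeholders for my actual setup):

```xml
<!-- Excerpt from conf/live/Application.xml (sketch; only StreamType is
     shown verbatim, other elements are elided) -->
<Root version="1">
    <Application>
        <Name>live</Name>
        <!-- Low-latency stream type, per the Wowza RTMP article above -->
        <StreamType>live-record-lowlatency</StreamType>
        <!-- ... remaining Streams/RTP/Properties configuration unchanged ... -->
    </Application>
</Root>
```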