I am looking to integrate audio streams from Wowza - both live and on-demand - with a customized Kaldi-based speech recognition engine.
I am working with this codebase - https://github.com/alumae/kaldi-gstreamer-server - which sends audio from a microphone or a file to the Kaldi-based backend over a WebSocket. The engine itself uses GStreamer, and the repository includes examples of sending audio to it from the command line and through an HTTP API.
An example HTTP request for an audio file is:

curl -v -T test/data/english_test.raw -H "Content-Type: audio/x-raw-int; rate=16000" -H "Transfer-Encoding: chunked" --limit-rate 32000 "http://localhost:8888/client/dynamic/recognize"
How can I do something similar from my Wowza server, for both live streams and on-demand files?
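For context, here is one approach I have been considering (a sketch, not a tested integration): since the HTTP API accepts chunked raw PCM, an external process could use ffmpeg to pull the stream from Wowza, decode it to 16 kHz mono 16-bit raw audio, and pipe that into curl exactly as in the file-based example above. The Wowza host, application, and stream names below are placeholders for illustration:

```shell
# Live stream: pull the RTMP stream from Wowza (placeholder URL), decode to
# raw 16 kHz mono signed 16-bit PCM on stdout, and pipe it to the
# kaldi-gstreamer-server HTTP API with chunked transfer encoding.
ffmpeg -i rtmp://wowza.example.com/live/myStream \
       -f s16le -acodec pcm_s16le -ac 1 -ar 16000 - \
  | curl -v -T - \
      -H "Content-Type: audio/x-raw-int; rate=16000" \
      -H "Transfer-Encoding: chunked" \
      "http://localhost:8888/client/dynamic/recognize"

# On-demand file: same idea, but the input is a Wowza VOD URL (placeholder).
# --limit-rate 32000 throttles the upload to roughly real time (16000 Hz
# * 2 bytes per sample), as in the file example above.
ffmpeg -i "http://wowza.example.com:1935/vod/mp4:sample.mp4/playlist.m3u8" \
       -f s16le -acodec pcm_s16le -ac 1 -ar 16000 - \
  | curl -v -T - \
      -H "Content-Type: audio/x-raw-int; rate=16000" \
      -H "Transfer-Encoding: chunked" \
      --limit-rate 32000 \
      "http://localhost:8888/client/dynamic/recognize"
```

Is this piping approach reasonable, or is there a more direct way to hook this up on the Wowza side (e.g. a Wowza module)?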