Under The Hood: A Closer Look At Wowza Streaming Engine 4.9.7
In our previous Developer Deep Dive session, we explored the power of building custom modules to extend your streaming workflows. In our most recent installment, we shifted our focus to the core of the engine itself. We’ve already outlined the technical features of this release, but this deep dive was about why we made these proactive optimizations and how they set the stage for the next generation of video innovation.
We are moving toward a future where Wowza isn’t just a processing hub, but an intelligent, event-driven ecosystem. By laying the groundwork for video intelligence, automated metadata, and AI-driven alerting, Wowza Streaming Engine will transform how you manage and monitor your streams.
Miss the live webinar? Don’t worry! You can still watch the VOD recording here.
Observability Webhooks: Moving Beyond Manual Monitoring
One of the most frequent requests we hear from developers is for better observability into the video processing engine. Traditionally, monitoring Wowza Streaming Engine required your application to constantly poll our REST API, asking, “Is the stream connected yet? Are there any errors?”
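To make the contrast concrete, here is a minimal sketch of the old polling pattern. The status URL and the "isConnected" field are hypothetical stand-ins; consult the Wowza Streaming Engine REST API reference for the exact paths and response schema on your server.

```python
# Polling sketch: repeatedly ask the server for stream status.
# STATUS_URL and the "isConnected" field are hypothetical examples.
import json
import time
import urllib.request

STATUS_URL = "http://localhost:8087/v2/status"  # hypothetical endpoint

def is_stream_live(payload: dict) -> bool:
    """Interpret one polled status response (field name is an assumption)."""
    return bool(payload.get("isConnected", False))

def poll_forever(url: str, interval_seconds: float = 5.0) -> None:
    """Ask the server over and over -- the pattern webhooks replace."""
    while True:
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                print("live" if is_stream_live(json.load(resp)) else "not live")
        except OSError as err:
            print("poll failed:", err)
        time.sleep(interval_seconds)
```

Every iteration costs a round trip whether or not anything changed, and the polling interval caps how quickly you can react to an event.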
With the release of 4.9.7, we’ve introduced a Webhooks framework that allows Wowza to proactively notify you about key stream events. When a stream connects, a recording starts, or a segment ends, Wowza pushes a machine-readable JSON payload directly to your application logic in real time.
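On the receiving side, all you need is an HTTP endpoint that accepts those POSTed payloads. Here is a minimal sketch using only the Python standard library; the payload fields ("eventType", "streamName") are assumptions, so check the 4.9.7 webhook documentation for the schema Wowza actually sends.

```python
# Minimal webhook receiver sketch. Payload field names are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_event(event: dict) -> str:
    """Reduce one event payload to a log-friendly line."""
    return f"{event.get('eventType', 'unknown')}: {event.get('streamName', '?')}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(summarize_event(event))  # hand off to your application logic
        self.send_response(204)        # acknowledge fast; do heavy work async
        self.end_headers()

# To listen for Wowza's pushes:
#   HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```

Responding immediately with an empty 2xx and deferring any heavy processing keeps the notification path fast for the sender.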
During our live demo, I showed how instantaneous this can be. As soon as I started an RTMP stream in FFmpeg, we saw the “Connection Success” and “Stream Started” alerts. For mission-critical events, this means you can catch an error and troubleshoot it via the exact timestamp in your log files before a single viewer experiences a disruption.
Built for Complexity & AI-Driven Workflows
A common concern is whether these new Webhooks might conflict with existing custom modules. To be clear: the webhooks are strictly additive. They run as a server listener that you enable manually, so they won’t interfere with any modules you’ve already deployed.
This event-driven model is a critical prerequisite for AI-powered object detection, logging, and alerting capabilities in Wowza Streaming Engine. A Webhook workflow can trigger an AI process to begin analyzing a stream for specific objects or motion. By moving to a proactive observability model now, you are preparing your infrastructure for the automated, intelligent monitoring of the future.
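As a sketch of that trigger pattern, an incoming event can be routed through a dispatch table so that only the events you care about kick off an analysis job. The event names and the job-submission step below are hypothetical placeholders, not Wowza-defined values.

```python
# Event-driven dispatch sketch: map selected webhook events to AI jobs.
# Event names and the job-submission step are hypothetical.
from typing import Callable, Dict, Optional

def start_object_detection(stream: str) -> str:
    # Placeholder: in practice, enqueue a job or call your inference service.
    return f"object-detection queued for {stream}"

HANDLERS: Dict[str, Callable[[str], str]] = {
    "streamStarted": start_object_detection,
}

def dispatch(event: dict) -> Optional[str]:
    """Trigger work only for events with a registered handler."""
    handler = HANDLERS.get(event.get("eventType", ""))
    if handler is None:
        return None  # event carries no AI action; ignore it
    return handler(event.get("streamName", "unknown"))
```

Adding a new automated behavior then becomes a one-line registration in the handler table rather than a change to the receiving endpoint.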
For a full technical walkthrough of the configuration, check out my expanded Webhook demo video below.
Modernizing Captions and Meeting Compliance Standards
As the industry coalesces around CMAF (Common Media Application Format) as the standard data format, it’s vital that our workflows are equally modernized. A major focus of 4.9.7 was ensuring that our CMAF packager handles a wide variety of caption formatting with reliability and consistency.
Historically, caption support could be fragmented across different protocols. Now, whether you are delivering via HLS or DASH, Wowza provides equal support for embedded 608/708 and WebVTT formatting. This consistency creates a much more reliable offering for HTTP playout, particularly for broadcasters and large-scale event producers.
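For reference, WebVTT is a plain-text format of timed cues, which is part of what makes it so portable across HLS and DASH. A minimal file looks like this (illustrative content only):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the live broadcast.

00:00:04.500 --> 00:00:07.000
Captions render the same over HLS and DASH.
```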
Evolving Beyond Traditional Live Playout
We also addressed whether these captioning improvements extend to time-shifted content. The short answer: yes. Captions are available and fully supported within a DVR workflow, so compliance standards are maintained whether your audience is watching live or scrubbing back through a recorded session.
What’s more, consistent, standardized captioning is fuel for AI-driven accessibility. By unifying these workflows, you can leverage any AI transcription service (such as OpenAI’s Whisper, Azure Speech-to-Text, or LibreTranslate) to generate WebVTT files that work across every format. This reduces processing overhead and simplifies the path to global, multilingual compliance.
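The glue code for that path is small. Here is a sketch that converts timed transcript segments, the shape many speech-to-text tools return, into a WebVTT caption file; the segment keys ("start", "end", "text") are assumptions about your transcription output, not a fixed API.

```python
# Sketch: timed transcript segments -> WebVTT caption text.
# Segment keys ("start", "end", "text") are assumptions.

def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT HH:MM:SS.mmm timestamp."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def segments_to_webvtt(segments: list) -> str:
    """Emit one cue per transcript segment."""
    lines = ["WEBVTT", ""]
    for seg in segments:
        lines.append(f"{to_timestamp(seg['start'])} --> {to_timestamp(seg['end'])}")
        lines.append(seg["text"])
        lines.append("")  # blank line terminates each cue
    return "\n".join(lines)
```

Because the output is plain WebVTT, the same file can be attached to HLS and DASH renditions alike.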
EVA: The Engine for High-Performance Video Encoding
The most significant architectural shift in version 4.9.7 is the integration of the Easy Video API (EVA), which revamps the encoding pipeline. Currently in beta, EVA represents a move toward more modern, efficient video workflows and gives you clearer insight into how your video processing and delivery workloads are resourced. By modernizing this core architecture, we are setting the stage for adding features and optimizations with far greater reliability than before.
Optimizing Resource Efficiency For High-Powered Video Workloads
During the webinar, we discussed how this foundational work is already paying off for cloud-based workflows. EVA is designed to help Wowza Streaming Engine take better advantage of the latest NVIDIA GPU instances on AWS. This ensures that, as you scale, your hardware is operating at peak efficiency. This is crucial for the heavy processing demands of real-time video analysis.
Ready For What’s Next?
As we move forward, the Wowza roadmap is a two-way conversation. The architectural changes we’ve implemented in 4.9.7 were built to support the high-performance, AI-driven features that our developer community is asking for most. We are evolving Wowza Streaming Engine to meet the future of video intelligence head-on.
Whether you’re looking to implement proactive alerting via Webhooks, migrate to 4K or HEVC workflows with CMAF caption support, or prepare your infrastructure for AI-powered capabilities, the 4.9.7 release is your starting point.
Watch the full webinar VOD or read the release notes to get a closer look at Wowza Streaming Engine 4.9.7, and don’t hesitate to contact us if you have questions.