Takeaways from Mile High Video 2026

The annual pilgrimage to Denver for the Mile High Video (MHV) conference has long been a barometer for the technical state of the video streaming industry. In years past, the topics tended to be more academic, focused on the latest research. Subjects such as film grain synthesis and the microscopic efficiencies of new codecs and enhancement algorithms ruled the day. But this year felt different.

The conversation has moved to an intriguing combination of research and practicality. More talks are focusing on how the individual parts fit into the greater workflow. While codecs like AV1 (and now even AV2) and VVC still have impact in many areas, the slate this year covered a broad range of topics: optimizing ad delivery and interaction, the latest standards for measuring and improving visual quality and QoE, the forward-looking promise of Media-over-QUIC (MoQ), and of course the implications and uses of AI in streaming.

What We Saw At Mile High Video 2026

In 2026, a successful streaming architecture isn’t defined solely by codecs. Success is defined by the rich, actionable data that accompanies every frame across the workflow. Here are some things that caught my eye at Mile High Video 2026 that I’m sure we’ll be hearing more about this year:

  1. Media-over-QUIC (MoQ) has the potential to become a new standard for real-time, scalable video delivery.
  2. Standards like CMCDv2, CMSD, and MQA are providing a mechanism for real-time feedback and observability across the workflow for improved QoE.
  3. AI is moving beyond enhancing video for human viewing to powering machine-to-machine applications.
  4. Modern advertising techniques such as linear formats and SIMID help optimize monetization while preserving the user experience.
  5. The impacts of eRTMP and next-gen codecs like AV1/AV2 and VVC are moving the needle on infrastructure costs and modernization at scale.
  6. Security and authenticity standards like CAT and C2PA are becoming priorities in the age of AI-generated media.

1. Media-over-QUIC (MoQ) Video Gains Steam

Based on the number of presentations alone, Media over QUIC (MoQ) is on everyone’s mind as it moves toward becoming a production-ready protocol in 2026. While technically still in the definition phase of its development, we are already seeing working production deployments from companies like Cloudflare and NanoCosmos.

One interesting talk and demonstration involved using MoQ to power live sports watch parties. Taking advantage of dual pipes and multiple tracks with intelligent prioritization, the system can dynamically keep streams in sync and drop lower-priority traffic. For example, it could drop participant video during network congestion to preserve the main live stream’s integrity. Another talk demonstrated cascading caching MoQ relays as a method to deliver streams to large audiences, much like a traditional CDN. All of this while still maintaining ultra-low latency and using standard packaging pipelines like CMAF via CMSF.
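The prioritization idea described above can be sketched in a few lines. This is purely illustrative: real MoQ implementations express priorities per track and let the QUIC transport drop or delay low-priority data, and all names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Track:
    name: str
    bitrate_kbps: int
    priority: int  # lower number = higher priority

def select_tracks(tracks, available_kbps):
    """Keep the highest-priority tracks that fit the available bandwidth.

    Illustrative only: a real MoQ relay would express this through
    per-track priorities handed to the QUIC stack, not an app-level loop.
    """
    chosen, used = [], 0
    for t in sorted(tracks, key=lambda t: t.priority):
        if used + t.bitrate_kbps <= available_kbps:
            chosen.append(t)
            used += t.bitrate_kbps
    return chosen

tracks = [
    Track("main-video", 4000, 0),   # the live stream itself
    Track("main-audio", 128, 0),
    Track("guest-audio", 64, 1),    # watch-party participant audio
    Track("guest-video", 1500, 2),  # watch-party participant camera
]

# Plenty of bandwidth: everything flows.
print([t.name for t in select_tracks(tracks, 8000)])
# Congestion: participant video is dropped first, main stream survives.
print([t.name for t in select_tracks(tracks, 4500)])
```

Under congestion, the guest camera is the first thing to go, mirroring the watch-party behavior described in the talk.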

2. The Streaming Telemetry Loop: CMCDv2 & MQA

Telemetry no longer needs to involve after-the-fact log file analysis. Real-time observability is a critical necessity in any modern streaming workflow. Modern standards are making that possible.

The Common Media Client Data Specification (CMCD) v2

The move to Event Mode and batch reporting is a game-changer for operational monitoring. Players can send CMCD data to any destination via an HTTP POST when an event is triggered, and sending that data in batches minimizes HTTP traffic. The ability to send data in real time to external data sinks can save countless hours mining log files for actionable data. A CMCDv2-compliant open-source implementation is included in the latest version of the Common Media Library (CML), and support is also included in players such as dash.js, hls.js, and Shaka Player.
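As a rough illustration of Event Mode batching (not the actual CML or spec API; the key names and batching trigger here are simplified assumptions), a player-side reporter might look like this:

```python
import json
import time

class CmcdBatcher:
    """Collects player events and flushes them as one JSON HTTP POST body.

    A hedged sketch of CMCDv2 Event Mode batch reporting: real key names
    and transport details are defined by the CMCD v2 specification, and
    reference code lives in the Common Media Library (CML).
    """
    def __init__(self, session_id, batch_size=3):
        self.session_id = session_id
        self.batch_size = batch_size
        self.pending = []

    def record(self, event_type, **data):
        self.pending.append({"e": event_type, "ts": int(time.time() * 1000), **data})
        if len(self.pending) >= self.batch_size:
            return self.flush()  # batch full: hand the body to the HTTP layer
        return None

    def flush(self):
        """Return the POST body a player would send to its data sink."""
        body = json.dumps({"sid": self.session_id, "events": self.pending})
        self.pending = []
        return body

batcher = CmcdBatcher("abc-123")
batcher.record("ps", br=4000)          # playback start
batcher.record("e", error="http-503")  # error event
body = batcher.record("pe")            # third event triggers the batch flush
print(body)
```

One POST carries three events, which is the traffic-minimizing behavior the spec’s batch mode is after.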

Many worked to help define the CMCDv2 spec, including Wowza’s Ian Zenoni.

End-to-End Media Quality Analysis (MQA) Enables Intelligent Decision-Making

The SVTA QoE group is defining a mechanism to move quality metrics (VMAF, SSIM, PSNR) from an out-of-band metric into the video stream. By propagating quality data from the encoder all the way to the viewer, smarter decisions can be made across the workflow, and performance degradations can be surfaced in real time. This could include a packager switching encoders, a CDN switching origins, or even a player switching CDNs based on perceptual quality, rather than just throughput or bitrate.
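To make the idea concrete, here is a hypothetical player-side decision that prefers propagated perceptual quality over raw throughput. The data shape and threshold are invented for illustration; the MQA mechanism itself is still being defined by the SVTA QoE group.

```python
def pick_cdn(candidates, min_vmaf=80.0):
    """Choose a CDN using in-stream quality data rather than throughput alone.

    Hypothetical sketch of the MQA idea: each candidate carries the
    perceptual quality (e.g. VMAF) propagated from the encoder, so the
    player can prefer a slower path that is delivering better video.
    """
    healthy = [c for c in candidates if c["vmaf"] >= min_vmaf]
    pool = healthy or candidates  # fall back if nothing meets the quality bar
    return max(pool, key=lambda c: (c["vmaf"], c["throughput_mbps"]))["name"]

candidates = [
    {"name": "cdn-a", "throughput_mbps": 50, "vmaf": 72.0},  # fast but degraded
    {"name": "cdn-b", "throughput_mbps": 35, "vmaf": 93.5},
]
print(pick_cdn(candidates))  # cdn-b wins despite lower throughput
```

A throughput-only ABR heuristic would have picked cdn-a here, which is exactly the blind spot that propagating quality metrics closes.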

3. AI As The Invisible Engineer In Video Streaming Workflows

AI is now becoming deeply intertwined with the core plumbing of the video stack.

For Just-In-Time (JIT) Per-Title/Per-Scene or even Per-Frame Encoding, companies are leveraging AI to predict the complexity of video frames without decoding, greatly decreasing resource usage. By monitoring quantization parameters as a proxy, and other techniques, these groups can run optimized, high-scale encoding workflows without the traditional compute overhead.
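A toy version of that idea, with entirely made-up thresholds and ladders (the talks described the technique, not these numbers), might map sampled QP values to a per-scene encoding ladder:

```python
def classify_complexity(qp_samples, low=22, high=32):
    """Estimate scene complexity from encoder quantization parameters (QP).

    Illustrative only: high average QP at a fixed bitrate suggests the
    encoder is fighting a complex scene, so a JIT workflow can allocate
    more bits without ever fully decoding the source.
    """
    avg = sum(qp_samples) / len(qp_samples)
    if avg < low:
        return "simple"    # e.g. static slates, talking heads
    if avg > high:
        return "complex"   # e.g. sports, confetti, rain
    return "medium"

def pick_ladder(complexity):
    # Hypothetical per-scene (resolution, kbps) ladders keyed by complexity.
    ladders = {
        "simple":  [(1080, 2500), (720, 1200)],
        "medium":  [(1080, 4500), (720, 2500), (480, 1000)],
        "complex": [(1080, 7000), (720, 3800), (480, 1600)],
    }
    return ladders[complexity]

scene_qps = [34, 36, 31, 38]  # QPs sampled from a fast-motion scene
print(pick_ladder(classify_complexity(scene_qps)))
```

The point of the proxy is the cost profile: reading QPs out of an existing bitstream is vastly cheaper than decoding frames and running a full complexity analysis.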

Several talks focused on optimizing video coding for Machine-to-Machine (M2M) uses. As more video is consumed by machine vision and AI inference engines (for object detection or remote driving) rather than humans, standards like MPEG-AI and VC-6 are emerging. These optimize what is sent specifically for what a machine needs to see, saving massive amounts of bandwidth by discarding information that doesn’t contribute to inference accuracy.

4. Monetization Beyond Pre-Rolls

As the industry matures, optimizing the user’s ad experience is becoming critical to driving engagement and reducing churn. Standards like SIMID are allowing for secure interaction methods across devices. Linear advertising methods, such as L-shaped and squeeze-back formats, allow actionable ads that don’t interrupt the primary content. These, combined with the emerging use of SGAI (Server-Guided Ad Insertion), can help provide relevant, targeted advertising to a broad range of users in near real time.
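As one concrete flavor of SGAI, HLS interstitials let the server point the client at an ad playlist via an EXT-X-DATERANGE tag, rather than stitching ad segments into the stream itself. A minimal sketch of building such a tag follows; the attribute set is trimmed for illustration, and the IDs and URLs are made up.

```python
def interstitial_tag(ad_id, start_date, duration_s, asset_uri):
    """Build an HLS interstitial EXT-X-DATERANGE tag for server-guided ads.

    The server inserts this pointer into the media playlist; the client
    fetches and plays the referenced ad asset itself. See the HLS
    interstitials specification for the full attribute list.
    """
    return (
        f'#EXT-X-DATERANGE:ID="{ad_id}",'
        f'CLASS="com.apple.hls.interstitial",'
        f'START-DATE="{start_date}",'
        f'DURATION={duration_s:.1f},'
        f'X-ASSET-URI="{asset_uri}"'
    )

tag = interstitial_tag("ad-break-1", "2026-03-01T20:15:00Z", 30.0,
                       "https://ads.example.com/break1.m3u8")
print(tag)
```

Because the client does the fetching, the ad decision can be personalized per viewer without re-packaging the primary stream, which is the core appeal of the server-guided approach.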

5. Enhanced Infrastructure & Codecs

For the infrastructure purists, there was plenty regarding contribution and resiliency:

  • Enhanced RTMP (eRTMP)
    Enhanced RTMP is now the undisputed standard for contribution. With support across YouTube, Twitch, Meta, and TikTok, eRTMP has brought the protocol into the modern era with support for HEVC and AV1.
  • Redundant Encoding and Packaging (REaP) Architecture
    Netflix shared insights into its Redundant Encoding and Packaging (REaP) reference architecture. By using a manifest-less approach where the client pulls segments from redundant regional pipelines, they created a blueprint for 100% uptime in large-scale live events.
  • AV2 & Custom Silicon
    With AV2 slated for an early 2026 release and promising a 30% gain over AV1, the focus has shifted to hardware. YouTube’s use of custom Argo VCUs highlights the necessity of ASIC-based encoding to handle massive ABR ladders that, at scale, could require 60+ outputs for a single 4K HDR stream.
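Netflix’s manifest-less redundancy idea from the REaP bullet above can be sketched as a simple client-side failover loop. The origin names and fetch function are hypothetical stand-ins, not Netflix’s actual design:

```python
def fetch_segment(segment, origins, fetch):
    """Try redundant regional pipelines in order until one returns the segment.

    A hypothetical reading of the manifest-less REaP idea: because every
    pipeline produces identically addressed segments, the client can fail
    over between regions without re-parsing a manifest. `fetch` stands in
    for the real HTTP call.
    """
    for origin in origins:
        try:
            return fetch(f"{origin}/{segment}")
        except IOError:
            continue  # pipeline down: fall through to the next region
    raise IOError(f"all pipelines failed for {segment}")

def flaky_fetch(url):
    """Simulated HTTP GET where one region is offline."""
    if "us-west" in url:
        raise IOError("region offline")
    return f"bytes-from:{url}"

origins = ["https://us-west.example.com", "https://us-east.example.com"]
print(fetch_segment("live/seg_001.m4s", origins, flaky_fetch))
```

The key property is that failover needs no new manifest fetch: identical segment addressing across pipelines turns regional redundancy into a simple retry.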

6. Authenticity and Content Provenance

In an era of deepfakes and AI-generated media, the streaming industry is stepping up as a guardian of truth. The C2PA (Coalition for Content Provenance and Authenticity) v2.3 spec was a major talking point. Live Provenance Anchoring allows for a secure chain of custody for video streams. This has massive implications for sectors like body-worn cameras or news organizations. Proving that a stream hasn’t been tampered with between the camera and the viewer is non-negotiable.
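The chain-of-custody intuition behind live provenance anchoring can be illustrated with a bare hash chain. C2PA’s real design uses signed manifests and certificates, not this simplified SHA-256 chain, but the property is the same: altering any segment invalidates every anchor after it.

```python
import hashlib

def anchor_chain(segments):
    """Hash-chain live segments so tampering anywhere breaks verification.

    Each anchor commits to the current segment *and* every prior anchor,
    giving a toy version of a tamper-evident chain of custody.
    """
    anchors, prev = [], b""
    for seg in segments:
        prev = hashlib.sha256(prev + seg).digest()
        anchors.append(prev.hex())
    return anchors

def verify(segments, anchors):
    """Recompute the chain and compare against the published anchors."""
    return anchor_chain(segments) == anchors

stream = [b"seg-0", b"seg-1", b"seg-2"]
anchors = anchor_chain(stream)
print(verify(stream, anchors))                             # untouched stream
print(verify([b"seg-0", b"TAMPERED", b"seg-2"], anchors))  # mid-stream edit
```

For a body-worn camera or newsroom workflow, the real system would additionally sign each anchor so the viewer can trace the chain back to a trusted device identity.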

Final Thoughts

The takeaway from Mile High Video 2026 is clear: the video architect of tomorrow is as much a data scientist as a compression expert. Whether the goal is sub-second interactivity, optimizing CDN performance, or securing video data, teams are building toward the same end: smarter, more resilient, and more transparent media ecosystems.

The streaming stack is becoming more modular, data-driven, and interactive than ever before. At Wowza, we are already integrating these insights into our roadmap and ensuring that customers have the tools to turn these trends into scalable production realities. Contact a Wowza Streaming Engine expert today to see how we can help you modernize and future-proof your media infrastructure.

About Barry Owen

Barry Owen is Wowza’s resident video streaming expert, industry ambassador, and Chief Solution Architect. In this role, he works with customers and partners to translate streaming requirements into scalable solutions. From architecting custom applications to solving complex integration challenges, Barry leverages more than 25 years of experience developing scalable, reliable on-premises and cloud-based streaming platforms to create innovative solutions that empower organizations across every use case.