Contribution and Delivery Protocols in Video Streaming Workflows
The Quick Answer
Contribution protocols transport video from a source (camera, encoder, or production system) to a media server for processing. Contribution protocols optimize for stream integrity and low source-to-server latency. Common examples include RTMP, RTSP, and SRT. Delivery protocols distribute the processed video from the server to end viewers at scale through browsers and mobile devices. Delivery protocols optimize for security, scalability, metadata, adaptive bitrate support, and broad device compatibility. Common examples include HLS and DASH. Most live streaming workflows use a contribution protocol for ingest and a delivery protocol for playback, with a media server handling the conversion between them.
Contribution vs Delivery Protocols: What’s the Difference?
Video streaming depends on protocols, but not all protocols do the same job. Protocols like RTMP, SRT, HLS, and DASH are not interchangeable options. In practice, these protocols fall into two distinct categories based on the role they play in the video pipeline: contribution protocols and delivery protocols. Understanding this distinction is foundational to building streaming workflows that are reliable, performant, and cost-effective.
Contribution protocols handle the first mile. They transport video from a source, such as a camera, encoder, or production system, to a media server or cloud platform for processing. Delivery protocols handle the last mile. They distribute the processed video from that server to end viewers through browsers, mobile applications, and connected devices.

What Is a Contribution Protocol (Ingest Protocol)?
A contribution protocol is a streaming protocol designed to transport video from a source to a platform or media server for processing. Contribution protocols are also called ingest protocols or transport protocols. They do not scale to large audiences natively. Instead, they enable point-to-point or point-to-few transport. Most are not viewable directly in a web browser.
These protocols are optimized for a specific set of priorities:
- Stream integrity: A dropped or corrupted frame at the ingest stage cascades through the entire downstream pipeline
- Low latency: Especially for live production environments where the encoding and processing stages need to stay tightly synchronized
- Error recovery: The network between a field encoder and a remote server is often less predictable than the CDN infrastructure that handles viewer delivery
What Are Examples of Contribution Protocols?
The major contribution protocols used in production streaming workflows today include RTMP, SRT, RTSP, RTP, and MPEG-TS. Wowza Streaming Engine supports all of these contribution protocols, enabling ingest from virtually any encoder, camera, or production system.
RTMP (Real-Time Messaging Protocol)
RTMP remains the most widely supported ingest protocol across hardware and software encoders. Originally developed by Macromedia for Flash-based streaming, it operates over TCP and provides a reliable, well-understood transport path from encoder to server. Since Flash was deprecated in 2020, RTMP has largely disappeared from viewer-facing playback, but it remains the default contribution protocol for most streaming platforms.
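RTMP ingest endpoints conventionally follow the pattern rtmp://host/application/streamKey, and encoders such as ffmpeg push to them directly. The sketch below shows one way a script might assemble such a push command; the host name, application name, and stream key are hypothetical placeholders, not a real platform's values.

```python
# Sketch: build an ffmpeg command that pushes a local file to an RTMP
# ingest endpoint. Host, application, and stream key are hypothetical.

def rtmp_ingest_url(host, app, stream_key):
    """Assemble a conventional rtmp://host/app/streamKey ingest URL."""
    return f"rtmp://{host}/{app}/{stream_key}"

def ffmpeg_push_command(source, ingest_url):
    # -re reads the input at its native frame rate (live-like pacing),
    # -c copy avoids re-encoding, and -f flv is the container RTMP expects.
    return ["ffmpeg", "-re", "-i", source, "-c", "copy", "-f", "flv", ingest_url]

url = rtmp_ingest_url("live.example.com", "live", "abc123")
cmd = ffmpeg_push_command("input.mp4", url)
print(" ".join(cmd))
```

In practice the stream key comes from the platform's dashboard or API, and the command would be run (or spawned via subprocess) on the encoder machine.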
SRT (Secure Reliable Transport)
A modern alternative to RTMP, SRT is designed for contribution over unpredictable networks. Originally developed by Haivision and now maintained as an open-source project, it operates over UDP and uses Automatic Repeat reQuest (ARQ) error correction to recover lost packets without introducing significant latency. It also supports AES-128 and AES-256 encryption, making it a strong choice for secure contribution over the public internet, satellite links, and other challenging network paths.
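The ARQ idea can be illustrated with a toy model: the receiver notices a gap in the packet sequence numbers, sends a negative acknowledgment (NAK), and the sender retransmits the missing packet from its send buffer. This is a simplified sketch of the concept, not the actual SRT wire protocol.

```python
# Toy illustration of ARQ-style recovery (not the real SRT protocol):
# the receiver spots a gap in sequence numbers, NAKs it, and the sender
# replays the lost packet from its send buffer.

def deliver_with_arq(sent_packets, lost_on_first_try):
    send_buffer = {seq: data for seq, data in sent_packets}
    received = {}
    naks = []
    for seq, data in sent_packets:
        if seq in lost_on_first_try:
            naks.append(seq)           # receiver detects the loss, sends a NAK
        else:
            received[seq] = data
    for seq in naks:                    # sender retransmits from its buffer
        received[seq] = send_buffer[seq]
    return [received[seq] for seq in sorted(received)], naks

packets = [(0, "I"), (1, "P"), (2, "P"), (3, "B")]
ordered, naks = deliver_with_arq(packets, lost_on_first_try={2})
print(ordered, naks)  # all four frames recovered; packet 2 needed a NAK
```

Real SRT bounds this recovery with a configurable latency window, so a packet that cannot be recovered in time is dropped rather than stalling playback.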
RTSP/RTP (Real-Time Streaming Protocol / Real-Time Transport Protocol)
RTP and RTSP are common in IP camera and surveillance workflows, where cameras stream continuously to a recording or processing server. RTSP handles session control (play, pause, teardown), while RTP handles the actual media transport.
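RTSP's control messages are plain text, with an HTTP-like request line and headers. The sketch below builds a DESCRIBE request of the kind a client sends to learn a camera's stream description (the camera URL is a hypothetical placeholder).

```python
# Sketch of the plain-text request format RTSP uses for session control.
# The camera URL below is a hypothetical placeholder.

def rtsp_request(method, url, cseq, extra=None):
    """Build an RTSP/1.0 request with the mandatory CSeq header."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

url = "rtsp://camera.example.com/stream1"
describe = rtsp_request("DESCRIBE", url, cseq=2, extra={"Accept": "application/sdp"})
print(describe)
```

A real session would continue with SETUP (negotiating RTP transport ports), PLAY, and eventually TEARDOWN, with the media itself flowing separately over RTP.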
MPEG-TS over Satellite and UDP
Broadcast-grade contribution workflows, particularly in professional broadcast, defense, and government environments, use MPEG-TS over satellite or UDP. Here, transport stream containers carry video, audio, and metadata (including SCTE-35 markers and KLV) as synchronized elementary streams.
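An MPEG transport stream is a sequence of fixed-size 188-byte packets, each starting with sync byte 0x47 and carrying a 13-bit packet identifier (PID) that says which elementary stream (video, audio, SCTE-35, KLV, and so on) the payload belongs to. A minimal header parser looks like this:

```python
# Sketch: parse the 4-byte header of a single 188-byte MPEG-TS packet.
# Every TS packet starts with sync byte 0x47; the 13-bit PID identifies
# which elementary stream the packet carries.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet):
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid MPEG-TS packet")
    return {
        "payload_unit_start": bool(packet[1] & 0x40),   # PUSI flag
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],   # 13-bit PID
        "continuity_counter": packet[3] & 0x0F,         # 4-bit counter
    }

# Synthetic packet: PID 0x0100, PUSI set, counter 5, stuffing payload.
pkt = bytes([SYNC_BYTE, 0x41, 0x00, 0x15]) + b"\xff" * 184
header = parse_ts_header(pkt)
print(header)
```

The continuity counter is what lets a receiver detect dropped packets within a PID, which matters on lossy satellite and UDP paths.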
What Is a Delivery Protocol (Playback Protocol)?
A delivery protocol is a streaming protocol designed to distribute video from a media server or CDN to end viewers, typically through web browsers or mobile applications at scale. Delivery protocols focus on one-to-many distribution. They are optimized for the viewer experience, and they work natively in browsers and on mobile devices without requiring dedicated playback software.
Where contribution protocols prioritize stream integrity on a single network path, delivery protocols prioritize scalability, quality, and reliability. Scalability is the primary concern, because a single live stream may need to reach thousands or millions of concurrent viewers. Adaptive bitrate (ABR) support allows the stream quality to adjust dynamically based on each viewer’s network conditions and device capabilities.
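The heart of ABR is a simple selection rule on the player side: from the available bitrate ladder, pick the highest rendition that fits within the measured bandwidth, with a safety margin. The ladder and margin below are illustrative assumptions, not values from any particular player.

```python
# Sketch of ABR rendition selection: pick the highest rung of an assumed
# bitrate ladder that fits the viewer's measured bandwidth (with a safety
# margin), falling back to the lowest rung when nothing fits.

LADDER_KBPS = [400, 1200, 2500, 5000]  # hypothetical bitrate ladder

def pick_rendition(measured_kbps, safety=0.8):
    budget = measured_kbps * safety
    candidates = [b for b in LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else min(LADDER_KBPS)

print(pick_rendition(4000))  # mid-tier connection gets a mid-tier rendition
print(pick_rendition(300))   # constrained network falls back to the lowest rung
```

Real players refine this with buffer occupancy, switch-down hysteresis, and per-segment throughput smoothing, but the core decision is the same.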
CDN compatibility is essential, because modern delivery protocols use HTTP-based transport that flows through standard web infrastructure without requiring specialized servers at the edge. Device and browser compatibility, while less of a concern today than it once was, has historically been a major requirement, and some specialized endpoints still require specific protocols such as SRT, RTSP, or multicast.
What Are Examples of Delivery Protocols?
The major delivery protocols in use today include HLS, Low-Latency HLS, DASH, Low-Latency DASH, and WebRTC. Wowza Streaming Engine transcodes and packages contribution streams into all of these delivery formats automatically, giving streaming operators the flexibility to serve any audience on any device.
HLS (HTTP Live Streaming)
HLS is the dominant delivery protocol for live and on-demand video. It segments video into short chunks (traditionally *.ts files, and increasingly fragmented MP4) and serves them through standard web servers and CDNs. When a device "streams" over HTTP, it is actually downloading these chunks in sequence and playing them back-to-back seamlessly. HLS supports adaptive bitrate streaming and content protection via DRM, and it works across virtually all modern browsers, iOS and Android devices, smart TVs, and set-top boxes.
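The index the player downloads is a plain-text playlist: a list of segment URIs with their durations, preceded by a few header tags. The sketch below generates a minimal media playlist; segment names and durations are illustrative.

```python
# Sketch: generate a minimal HLS media playlist. The player fetches this
# text file, then downloads and plays each listed segment in order.

def media_playlist(segments, target_duration):
    """segments: list of (uri, duration_seconds) tuples."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")  # per-segment duration tag
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = media_playlist([("seg0.ts", 6.0), ("seg1.ts", 6.0)], target_duration=6)
print(playlist)
```

For a live stream, the server keeps appending new segments and re-serving the playlist; for ABR, a separate master playlist points to one such media playlist per rendition.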
DASH (Dynamic Adaptive Streaming over HTTP)
DASH is the open-standard counterpart to HLS. Like HLS, DASH uses HTTP-based chunked delivery and supports ABR. Standardized by MPEG as a vendor-neutral alternative, DASH typically delivers fragmented *.mp4 files as chunked content. DASH is widely used in OTT platforms and broadcast environments, and it offers more flexibility in codec and container support than HLS.
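Instead of a text playlist, DASH describes the stream in an XML manifest (MPD) with a Period > AdaptationSet > Representation hierarchy, where each Representation is one bitrate rendition. The sketch below builds only that skeleton; a real MPD also carries segment templates, timescales, and profile identifiers.

```python
# Sketch: a skeletal DASH manifest (MPD) showing only the
# Period > AdaptationSet > Representation hierarchy. Real manifests
# carry much more (SegmentTemplate, timescales, profiles, ...).
import xml.etree.ElementTree as ET

def minimal_mpd(bitrates_bps):
    mpd = ET.Element("MPD", xmlns="urn:mpeg:dash:schema:mpd:2011", type="dynamic")
    period = ET.SubElement(mpd, "Period", id="1")
    aset = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")
    for i, bw in enumerate(bitrates_bps):
        # One Representation per rendition in the bitrate ladder.
        ET.SubElement(aset, "Representation", id=str(i), bandwidth=str(bw))
    return ET.tostring(mpd, encoding="unicode")

xml_text = minimal_mpd([1_200_000, 2_500_000])
print(xml_text)
```

The player parses this manifest once (or periodically, for live), then requests segments for whichever Representation its ABR logic selects.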
LL-HLS (Low-Latency HLS) and LL-DASH (Low-Latency DASH)
LL-HLS and LL-DASH are extensions of their respective protocols that reduce delivery latency from the typical 6 to 30 seconds down to 2 to 5 seconds, while retaining the cost and scale advantages of CDN-based distribution. These variants are gaining adoption as live streaming use cases demand faster interaction between broadcasters and audiences.
WebRTC (Web Real-Time Communication)
WebRTC originated as a peer-to-peer video calling and real-time communication protocol for the browser. It is increasingly used for ultra-low-latency video delivery, and achieves sub-second glass-to-glass latency. The tradeoff is complexity, as well as cost per gigabyte. WebRTC requires signaling infrastructure and is more difficult to scale to large audiences compared to HLS or DASH.
How Do Contribution and Delivery Protocols Work Together?
In a typical live streaming workflow, contribution and delivery protocols operate in sequence across three stages:
- Ingest: An encoder captures video from a camera or other source and sends it to a media server using a contribution protocol such as RTMP or SRT.
- Processing: The media server packages the incoming stream into the appropriate delivery format and protocol, a process sometimes called transmuxing. Optionally, it may also transcode the stream into multiple quality renditions to enable adaptive bitrate (ABR) streaming. Transcoding is a separate, often optional step and should not be confused with packaging.
- Distribution: The packaged stream is delivered to viewers through a delivery protocol such as HLS or DASH, often routed through a CDN for geographic reach and scale.
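The three stages above can be sketched as a simple data flow. All function and field names here are illustrative, not a real media-server API; the point is that contribution and delivery protocols are enforced at different stages, with packaging always happening and transcoding remaining optional.

```python
# Sketch of the ingest -> processing -> distribution pipeline.
# Names are illustrative, not a real media-server API.

def ingest(source, protocol):
    assert protocol in {"RTMP", "SRT", "RTSP"}, "use a contribution protocol here"
    return {"source": source, "ingested_via": protocol}

def process(stream, renditions=None):
    stream["packaged"] = True                        # transmuxing: always happens
    stream["renditions"] = renditions or ["source"]  # transcoding: optional
    return stream

def distribute(stream, protocol):
    assert protocol in {"HLS", "DASH"}, "use a delivery protocol here"
    stream["delivered_via"] = protocol
    return stream

out = distribute(process(ingest("camera-1", "SRT"), ["1080p", "720p"]), "HLS")
print(out)
```

Swapping the contribution protocol (say, RTMP for SRT) leaves the processing and distribution stages untouched, which is exactly why the two protocol families can be chosen independently.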

RTMP and SRT are optimized for the source-to-server link, where the priority is getting a clean, reliable signal to the processing layer. HLS and DASH are optimized for the server-to-viewer leg, where the priority is reaching a large and diverse audience with adaptive quality and broad device support. These protocols solve different problems at different stages.
Wowza Streaming Engine accepts contribution streams from any major ingest protocol and delivers them in any major delivery format. This protocol-agnostic architecture means that streaming operators can choose the best protocol for each stage independently. In effect, they are no longer locked into a single protocol for both contribution and delivery.
When Protocol Lines Blur
While the contribution and delivery distinction holds true for the vast majority of streaming architectures, there are edge cases. Even so, the primary design intent of each protocol should guide architectural decisions: building a workflow around the standard contribution-to-delivery pipeline produces the most predictable and maintainable results.
SRT is sometimes used for point-to-point delivery between media servers. It can also transport streams for decoding in remote production facilities. In these contexts, SRT functions as a distribution protocol, though the destination is an upstream media endpoint or decoder, rather than an end viewer.
Some CDNs and platforms accept HLS as a contribution format, allowing encoders to push HLS segments upstream. This approach works but introduces additional latency compared to RTMP or SRT ingest, and it is not common in latency-sensitive workflows. Production workflows should use an origin protection layer: publish content to intermediate blob storage and configure the CDN to pull from there. A media server such as Wowza Streaming Engine supports this architecture natively.
WebRTC can operate on both sides of the pipeline. As such, it can be used for browser-based contribution (such as a guest joining a live broadcast from a web browser) and for ultra-low-latency delivery to viewers. Scaling WebRTC delivery to large audiences requires additional infrastructure beyond what HLS or DASH demand at the CDN level.
As always, if you have protocol questions or want to better understand which contribution and delivery protocols best suit your needs, reach out to one of our experts today.
Frequently Asked Questions
What is the difference between a contribution protocol and a delivery protocol?
A contribution protocol transports video from a source, such as a camera or encoder, to a media server for processing. A delivery protocol distributes the processed video from the server to end viewers. Contribution protocols like RTMP and SRT are optimized for reliability and low source-to-server latency. Delivery protocols like HLS and DASH are optimized for scalability, adaptive bitrate streaming, and broad device compatibility.
Is RTMP a contribution or delivery protocol?
RTMP is primarily a contribution protocol. It originally handled both ingest and playback during the Flash era, but Flash's deprecation in 2020, driven in part by its security problems, effectively ended RTMP-based viewer playback. Today, RTMP is used almost exclusively to send streams from encoders to media servers and streaming platforms.
Is SRT better than HLS?
SRT and HLS serve different purposes in a streaming workflow and are not direct substitutes. SRT is a contribution protocol designed for secure, low-latency transport from source to server. HLS is a delivery protocol designed for scalable distribution to viewers. In a typical workflow, you would ingest via SRT and then deliver to viewers via HLS.
Can SRT be used for playback?
SRT is not designed for browser-based playback. Viewing an SRT stream directly requires a desktop application (such as VLC) or purpose-built decoder hardware. For delivering video to end viewers at scale, the standard approach is to ingest the stream via SRT, process it on a media server, and repackage it into a delivery protocol like HLS.
What is the best streaming protocol for low latency?
The answer depends on the part of the workflow. For contribution, where the goal is low-latency transport from source to server, SRT offers strong performance with built-in encryption and error correction. For delivery, where the goal is low-latency playback for viewers, WebRTC provides sub-second latency at smaller scale, while Low-Latency HLS and Low-Latency DASH offer reduced latency with the scalability advantages of CDN-based distribution.
