Encoding vs. Transcoding: What’s the Difference? (Update)

If you’re trying to stream video, you’ve undoubtedly heard the terms “encoding” and “transcoding” thrown around — but what’s the difference, and what does it matter?
Essentially, encoding compresses raw video into a digital format suitable for transmission, and decoding decompresses that data for playback on end-user devices. Transcoding, on the other hand, converts already-encoded video into a different format or bitrate to ensure compatibility across various devices and platforms. They’re distinct processes that serve unique purposes, so let’s dive into why they’re both important and how to know when each is appropriate.
Table of Contents
- First, A Few Definitions
- When You Need to Encode Video
- When Transcoding Video Is Necessary
- How Do Encoding and Transcoding Affect Video Quality?
- Your Options for an Encoder or Transcoder
- Encoding and Transcoding With Wowza
First, A Few Definitions
Before we go forward, some definitions you’ll need to know include:
Video encoding: To elaborate on the point above, this term describes the process of converting raw video, which is far too large to broadcast or stream in the state a camera captures it, into a compressed format that can be transmitted across the internet.
Without video encoding, there would be no streaming. It’s the first step in delivering content to an online audience because it’s what makes video small enough to send over the internet.
Container: You can imagine the video container as a nicely wrapped box that holds your video, audio, and relevant metadata (things like captions and subtitles, or the title and description). These containers are typically recognized by their file extensions, with file names ending in .MOV, .MP4, and .MKV.
Codec: Video encoding relies on two-part compression technology called codecs. Short for “coder-decoder” or “compressor-decompressor” (we love a good double meaning), codecs use algorithms to discard visual and audio data that viewers won’t notice is missing. Once video data is a more manageable size, it’s feasible to stream it to audiences and decompress it when it reaches different devices.
If you’re confused about the distinction between codecs and encoders, think of it this way: encoders are hardware (like Teradek) or software (like OBS Studio) that perform the action of encoding by using codecs. H.264/AVC, HEVC, and VP9 are common video codecs you’ll come across (H.264 most of all), and AAC, MP3, and Opus are examples of audio codecs.
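To make the encoder/codec/container relationship concrete, here’s a minimal sketch of software encoding in Python, assuming FFmpeg is installed on your system and a hypothetical capture file named raw_capture.mov exists. It compresses the video with H.264, the audio with AAC, and wraps both in an MP4 container:

```python
import subprocess

SOURCE = "raw_capture.mov"  # hypothetical, lightly compressed camera capture
OUTPUT = "encoded.mp4"      # MP4 container holding H.264 video + AAC audio

# FFmpeg (assumed to be on your PATH) is the encoder; H.264 and AAC are the codecs.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-c:v", "libx264",          # H.264 video codec
        "-preset", "medium",        # speed vs. compression-efficiency tradeoff
        "-b:v", "5M",               # target video bitrate
        "-c:a", "aac",              # AAC audio codec
        "-b:a", "128k",             # target audio bitrate
        "-movflags", "+faststart",  # put metadata up front so playback can start sooner
        OUTPUT,
    ],
    check=True,
)
```

The codecs (H.264 and AAC) do the compressing, FFmpeg is the encoder performing the work, and .mp4 is simply the box everything ships in.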
Decoding: Decoding is the act of decompressing encoded video data to display or alter it in some way. For example, a human viewer can’t watch an encoded video, so their device must decode the data into images to display on a screen.
Transcoding: Video transcoding is an umbrella term that refers to taking already-encoded content, decoding/decompressing it, implementing alterations, and recompressing it before delivery. Transcoding video might involve smaller changes, such as adding watermarks and graphics, or much larger ones, like converting content from one codec to another. Actions like transrating and transizing fall under the definition of transcoding.
Transcoding often falls right in the middle of the camera-to-end-user pipeline. A typical use case is contributing your stream over a protocol like RTMP, RTSP, SRT, or WebRTC to a transcoder (which can be either a media server or a cloud-based streaming platform) and then transcoding the content into a different codec, bitrate, or resolution to make it compatible with the protocols your audience’s devices support, such as Apple HLS, MPEG-DASH, and WebRTC (which is applicable for contribution or delivery).
Transmuxing: You’ve also likely heard the term “transmuxing,” which refers to repackaging, rewrapping, or packetizing content from one protocol to another. Transmuxing is not to be confused with transcoding because the former involves switching delivery protocols while the latter requires more computational power and makes changes to the video content itself.
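As a rough illustration of that difference, here’s a hedged sketch of transmuxing with FFmpeg (again assuming it’s installed and that a hypothetical encoded.mp4 exists). The -c copy flag repackages the existing H.264 and AAC streams into HLS segments without decoding or re-encoding anything, which is why transmuxing is so much cheaper than transcoding:

```python
import subprocess

# Repackage an already-encoded MP4 into HLS (an .m3u8 playlist plus segments)
# without touching the compressed video or audio data itself.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "encoded.mp4",        # hypothetical already-encoded source
        "-c", "copy",               # copy streams as-is: no decode, no re-encode
        "-f", "hls",                # package for HLS delivery
        "-hls_time", "6",           # roughly 6-second segments
        "-hls_playlist_type", "vod",
        "playlist.m3u8",
    ],
    check=True,
)
```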

When You Need to Encode Video
You encode video because you need it to be, well, smaller. And because raw video is always too large to stream as-is, encoding is inevitable.
Remember that scene from Willy Wonka and the Chocolate Factory when Mike Teavee jumps in front of the Wonkavision machine and appears in a TV across the room, now two inches tall? The machine took his larger self, broke him down into countless tiny pieces, transmitted them through the air, and reassembled him the right way (adjusting for the TV’s smaller size). Poor Mike was encoded and delivered.
Video encoding can compress gigabytes worth of data into mere megabytes. The process could occur in:
- Mobile apps
- IP cameras
- Video encoding software
- Specialized hardware
After raw footage is captured, the data is encoded for transmission; upon reaching the viewer, it’s decoded for playback. During a live stream, these tools encode footage as soon as the cameras they’re connected to capture it. The video data is then ready to undergo whatever additional processes are necessary to make it deliverable to various end users.
When Transcoding Video Is Necessary
Transcoding is a bigger question: encoding happens 100% of the time, but transcoding isn’t strictly necessary to stream to viewers. What transcoding does is ensure optimal playback across all user devices.
Why so? It depends on your audience. For instance, live streaming footage from surveillance cameras on a private network, like at a museum, won’t require transcoding because only security guards will see it. All of their tools and screens are on-site or connected to devices on the museum’s network, which is a known and stable configuration.
That’s not a guarantee you can make, though, when streaming to a broader audience. Imagine you want to live stream a high school graduation, an event notorious for offering only a few in-person tickets per student, which means some loved ones have to watch from home. You want grandparents in rural Kansas, using a slightly older computer and poor internet, to be able to watch their grandchild cross the stage just as seamlessly as someone else’s aunt and uncle in New York with a high-speed connection and the latest MacBook. To make this happen, you’ll need to transcode the video and deliver it using adaptive streaming with a protocol like Apple HLS or MPEG-DASH.
What Happens During Transcoding?
Encoding is the first step during this hypothetical live stream. You’ll likely use a widely supported codec and delivery protocol, like H.264 and Apple HLS, respectively. These options shouldn’t present any problems, but if you want to deliver the content at a 1080p resolution, then our Kansas grandparents are in for a great deal of frustration as they wait for the player to buffer over and over again.
So, you’ll need to change the resolution and bitrate to accommodate their limited bandwidth. You can send the stream to a live transcoder that:
- Decodes the incoming stream
- Creates multiple outputs with optimized frame sizes and bitrates
- If necessary, converts to an alternate codec
- Re-encodes them
- Sends the re-encoded outputs to be packaged into the final delivery protocol
This enables adaptive streaming that plays on screens of any kind, under any network conditions. By creating a set of resolution and bitrate options, you ensure viewers encounter fewer problems when trying to watch your stream.
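To make those steps concrete, here’s a minimal sketch of building a small rendition ladder, assuming FFmpeg is installed and using an illustrative source file and bitrate/resolution pairs rather than any particular vendor’s workflow. Each pass decodes the source and re-encodes it at a different frame size and bitrate, ready to be packaged into HLS or MPEG-DASH:

```python
import subprocess

SOURCE = "graduation_feed.mp4"  # hypothetical contribution feed

# Illustrative ladder: (name, frame height, video bitrate, audio bitrate)
LADDER = [
    ("1080p", 1080, "5M",   "128k"),
    ("720p",  720,  "2.8M", "128k"),
    ("480p",  480,  "1.2M", "96k"),
]

for name, height, v_bitrate, a_bitrate in LADDER:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", SOURCE,
            "-vf", f"scale=-2:{height}",  # resize; -2 keeps the aspect ratio with an even width
            "-c:v", "libx264",
            "-b:v", v_bitrate,
            "-maxrate", v_bitrate,
            "-bufsize", v_bitrate,        # simple capped-bitrate settings suited to streaming
            "-c:a", "aac",
            "-b:a", a_bitrate,
            f"rendition_{name}.mp4",
        ],
        check=True,
    )
```

A packager (or a media server like Wowza Streaming Engine) would then wrap these renditions into HLS or MPEG-DASH so the player can switch between them as bandwidth changes.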

How Do Encoding and Transcoding Affect Video Quality?
You’re understandably curious about how encoding and transcoding affect video quality. It sounds like you’re putting video data through the wringer, so what’s the likelihood it will come out the other end as you envisioned it?
Encoding and transcoding can affect video quality, but not necessarily to the point where the user experience is compromised. As we’ve mentioned, compressing video involves removing data without excessively damaging its perceived quality. Compress a video just the right amount, and the difference won’t be perceptible to the human eye; compress it too much, and you’ll have a pixelated image and a poor user experience.
How Encoding Affects Quality
There are two kinds of compression:
Lossy compression: Reducing file size by discarding unnecessary data.
Lossless compression: Compacting files without discarding any data, thereby maintaining data integrity. ZIP files are an example.
Encoding video, specifically, is an inherently lossy process. It’s important to be aware of your rate-distortion tradeoff: your video’s quality will usually correspond to the bitrate. This is not always the case — the kind of content you’re compressing matters, like live-action footage versus animation — but it’s a good rule of thumb. The best way to compensate for a smaller bitrate is to lower the resolution so images don’t appear overly grainy.
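One hedged way to reason about that rule of thumb is bits per pixel: divide the bitrate by the number of pixels delivered each second, and if the figure drops too low, step the resolution down. The threshold below is an illustrative assumption, not an industry standard:

```python
def bits_per_pixel(bitrate_bps: float, width: int, height: int, fps: float) -> float:
    """Rough measure of how many bits the encoder can spend per pixel per second."""
    return bitrate_bps / (width * height * fps)

# Illustrative example: a 2 Mbps budget at 30 frames per second.
budget = 2_000_000
for width, height in [(1920, 1080), (1280, 720), (854, 480)]:
    bpp = bits_per_pixel(budget, width, height, fps=30)
    # Assumed threshold: below ~0.05 bits/pixel, H.264 tends to look visibly blocky.
    verdict = "likely too grainy" if bpp < 0.05 else "probably fine"
    print(f"{width}x{height}: {bpp:.3f} bits/pixel -> {verdict}")
```

At this budget, 1080p falls below the threshold while 720p and 480p clear it, which matches the advice above to lower the resolution when the bitrate shrinks.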
How a Transcoder Affects Quality
There are also three kinds of transcoding (try saying these five times fast):
Lossless-to-lossless: Transcoding a file that hasn’t lost any information in such a way that nothing is removed the second time around either. While possible, it results in a bitrate too large to be practical for streaming.
Lossy-to-lossy: This kind of transcoding further degrades the already-compressed video, so care must be taken to minimize the additional loss.
Lossless-to-lossy: Compressing a lossless stream after altering it, this time removing unnecessary data so the content can be streamed.
Whichever method you choose will affect your video’s quality, so be aware of what kind of devices you want to stream to and what level of quality is important to you.
Your Options for an Encoder or Transcoder
Next, where do you turn for encoding and transcoding? Encoders can be hardware or software that convert video signals coming from camera sensors. You can learn all about what to look for in an encoder here.
Transcoding software can be either local or cloud-based. Local transcoders like Wowza Streaming Engine can take advantage of on-site resources, but when compute or bandwidth resources are constrained, transcoding in the cloud may be a better option. Cloud transcoders like Wowza’s can be very flexible and enable you to transcode and deliver adaptive bitrate streams without worrying about any on-site hardware. In either case, you can stream without worrying about how well it will play back for grandparents in rural Kansas or anyone conveniently living next to a video CDN edge server in New York.
Encoding and Transcoding With Wowza
There’s no room for error when it comes to optimizing your content’s playback. Fortunately, Wowza’s technology supports all the compression, alteration, and decompression you need to stream to as broad an audience as possible, along with a wealth of resources to help you understand the difference between encoding and transcoding, when each matters, and which codecs and protocols you’ll need. Whatever kind of encoder or transcoder you’re looking for, Wowza has the technology and tools to make it happen.