7 Streaming Predictions for 2018
January 3, 2018
We bid farewell to 2017 and look forward to what 2018 will bring in the world of streaming. Ready to dive in? Read our ideas on what’s coming next, or just skim the summary—then weigh in with your feedback on Facebook or Twitter!
2018 Predictions Summary
- Producer automation: More high-quality, live content will be available because of automation.
- Viewer automation: Finding content you want to watch will get faster and easier.
- Live interactivity: Audiences are getting more engaged with live, social and interactive content.
- VR/360: Compelling immersive content is becoming more accessible this year.
- Low latency: You’ll watch more streams that aren’t 30+ seconds behind the live action.
- Reliable UDP: Networking changes are needed to watch interactive and immersive streams.
- HEVC: Apple will push the industry to get more efficient at sending HD video over the Internet.
1. Producer Automation
Just because you can stream doesn't always mean you should. While simplifications in technology have made streaming more accessible, the results are often lower-quality. High-school sports and band concerts provide common examples, where we often see fast camera panning that makes us dizzy, or poor audio and shaky video from a mobile phone that makes us seasick.
Fortunately, more consumer-friendly, cost-effective products and services are coming on the market that require little or no human intervention during a live broadcast. Using fixed or pan-tilt-zoom (PTZ) cameras, you can create a good-quality production that is completely automated. There are a few options for automated solutions at varying levels of complexity:
Cloud-streaming services with scheduling functionality. This is the simplest level of automation. At a specified time, the cloud service starts receiving a stream from a single fixed network camera located at the venue; converts the stream for adaptive bitrate (ABR) streaming; and delivers it globally to your website or Facebook page. If you add one of the newer PTZ cameras that can track and follow one or more presenters, you also have an automated lecture-capture solution.
Automated multi-camera products. At a more complex level, products from companies such as Pixellot, Keemotion, AutomaticTV, VEO and TNO track players on a field or court, then digitally pan and zoom within a 4K (3840×2160 resolution) window to create the illusion of a trained camera operator following the action from the middle of the sideline. Some offer automatic switching to additional cameras mounted at the corners of the field, located where they can better show goals, baskets and volleyball serves.
They may also offer some basic graphic overlays, and a picture-in-picture capture of the scoreboard. (For additional production quality for some higher-visibility games, such as adding play-by-play announcers, multi-angle instant replay, and social media clipping, these products can be combined with a single producer running Production Truck software from BlueFrame.) Beyond sports, Livestream Mevo and Clone.tv Studio offer automatic switching between shots or cameras for almost any scenario.
The common theme here is simplicity. You’ll have some initial investment in gear, and possibly some recurring costs for a streaming service—but you won’t need to hire producers and camera operators for every event. The end result? More, higher-quality content, and relatively low costs to produce and deliver it. 2018 is the year these kinds of automation go from proofs-of-concept to mainstream.
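The scheduling-based option described above boils down to a simple control loop: wait for showtime, start pulling from the venue camera, and turn on ABR transcoding. Here's a minimal sketch in Python — note that `CloudStreamClient`, its methods and the rendition names are hypothetical stand-ins for illustration, not any real vendor's API:

```python
# Sketch of scheduled, hands-off cloud ingest. CloudStreamClient is a
# hypothetical wrapper around a cloud streaming service, not a real API.

import datetime as dt
import time

class CloudStreamClient:
    """Hypothetical cloud streaming service wrapper; records actions."""
    def __init__(self):
        self.actions = []

    def start_ingest(self, camera_url):
        # In a real service, this would begin pulling the camera stream.
        self.actions.append(("ingest", camera_url))

    def enable_abr(self, renditions):
        # In a real service, this would start adaptive-bitrate transcoding.
        self.actions.append(("abr", renditions))

def run_scheduled_broadcast(client, camera_url, start_at):
    """Poll until the scheduled time, then kick off ingest and ABR."""
    while dt.datetime.now() < start_at:
        time.sleep(1)
    client.start_ingest(camera_url)
    client.enable_abr(["1080p", "720p", "360p"])
```

The point isn't the code itself but the shape: once the schedule, camera source and rendition ladder are configured, no human needs to touch anything on game day.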
2. Viewer Automation
Just as we will see increased automation for content producers in 2018, we can also expect better and smarter automation for end viewers. With the right media architecture, viewers and content producers alike can save time on content discovery. Automating the discovery process frees up viewers to spend more time consuming, and gives producers more time to create content for those viewers. From content curation and discovery to cross-device connectivity, automation will make it easier to find, watch and share what we want. (The same applies for audio-only content, too.)
2018 will build on the adoption of machine learning and AI (Artificial Intelligence) to help us leverage metadata and identify relationships between content and individual viewers. This will make recommendation engines more accurate and effective.
IBM’s Watson Media Group offered a unique example of recommending content based on context with its recent TED experiment. The site uses Watson’s AI to contextually analyze the spoken words on an audio track, and builds a robust metadata index for easy cross-referencing and recommendations. The system even allows you to find the exact moment someone might be talking about a particular topic—for example, if you wanted to search for “Mars settlements,” it would pull up the exact moment author Andy Weir starts talking about his predictions for human life on Mars.
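The core of that kind of contextual index is simple to illustrate: map each spoken word to the timestamps of the transcript segments it appears in, so a search can jump straight to the relevant moment. A toy sketch — the transcript segments and timestamps below are invented placeholders, not real TED metadata:

```python
# Toy spoken-word index: word -> list of segment start times.
from collections import defaultdict

def build_index(segments):
    """segments: iterable of (start_seconds, text) transcript chunks."""
    index = defaultdict(list)
    for start, text in segments:
        for word in text.lower().split():
            index[word.strip(".,!?")].append(start)
    return index

def find_moments(index, query):
    """Return sorted timestamps where any query word is spoken."""
    hits = set()
    for word in query.lower().split():
        hits.update(index.get(word, []))
    return sorted(hits)

# Invented example transcript, not real talk data.
segments = [
    (12.0, "Welcome everyone to the talk."),
    (95.5, "Let us discuss Mars settlements and how humans might live there."),
    (203.0, "Water recycling will be critical on Mars."),
]

index = build_index(segments)
print(find_moments(index, "Mars settlements"))  # [95.5, 203.0]
```

Production systems like Watson layer speech-to-text, entity extraction and relevance ranking on top, but the payoff is the same: a query resolves to exact moments, not whole videos.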
This year, consumers will expect to get content faster, more seamlessly and without extensive searching. Look for new AI-based features and companies in 2018 that focus on making content discovery and delivery faster, simpler and more accurate.
3. Live Interactivity
Just 10 years ago, the primary way to interact with video content was to post comments below on-demand videos, as popularized by YouTube. Second-screen apps—which we used while watching live or linear TV content—became popular five years ago, as brands built out custom experiences for fans on platforms such as Zeebox.
Twitter arguably gained the most popularity in this area; millions of consumers already had it on their devices, and no custom app building was required. When Periscope and Meerkat launched in 2015, we suddenly had an easy way to interact directly with live videos on portable devices—no TV screen required.
Since 2013, we’ve seen a huge shift toward Facebook for audience engagement with video—first with on-demand, and since early 2016, through Facebook Live. Whether you use the consumer-friendly Facebook mobile app to create homemade-quality videos, or you employ professional-grade tools to reliably create high production-value content, it’s now easier for everyone to produce live broadcasts that virally engage viewers through the world’s largest social network. Audience interaction is immediate during the live event, and its reach and engagement keep going long after the broadcast ends.
In 2018, expect to see significantly more online-only live broadcasts embracing Facebook as a destination, tapping into the amplification effects of personal feeds and group pages to reach an ever-expanding audience. Likewise, expect LinkedIn—the current king of business networks—to start rolling out a variety of video features that allow for audience engagement and amplification.
4. VR/360
We’ve been watching the 360° video market for a few years now, and it continues to gain real-world momentum. Even when viewed with a flat display, such as a mobile device that you move in different directions, or a computer monitor where you use a mouse to navigate around the video, it’s easier to immerse yourself in the setting of a 360° video than a 2D video.
When 360° video is combined with Virtual Reality (VR) headsets, it creates “you are there” experiences that appeal to many. VR/360 is already allowing many users to visit tourist destinations, explore for-sale houses and experience improved training from hundreds or thousands of miles away. With live, 360° video, people who can’t attend professional-league games can virtually sit in prime seats and enjoy the event almost as if they were in the stands. Mix in a few interactivity elements, per prediction number three, and you’ve got a potent feature set.
However, there’s been a vicious circle where adoption won’t happen without content, and content won’t be created without a dedicated audience. So far, the biggest barriers to widespread adoption of VR/360° technology have been:
Content quality. More vendors are supporting ABR streaming of VR content, making it easier to deliver immersive content to mobile devices in the best possible quality. Using standard ABR streaming technology—just like Netflix, Hulu and YouTube use—viewers can receive up to 4K per-eye rendering. ABR adapts to each viewer’s available network and computing resources, reducing cost and technology barriers that can otherwise impair VR stream delivery.
Equipment and technology costs. Devices for content creation and delivery were once steeply priced, but are now becoming more affordable. Some examples by category include:
- Playback: Oculus, Sony, HTC and others have been bringing down the price of their headsets. Other technologies, such as Google Daydream, are making mobile devices powerful viewing alternatives.
- Capture: Lower-priced cameras, such as the ALLie Camera, Orah 4i and Insta 360 Pro, are now generating 4K, 6K and 8K 360° live streams—including onboard camera stitching that delivers spherical and equirectangular content straight from the device.
- Creation and delivery: Tools such as Unreal Engine and Unity, along with NVIDIA hardware and software, are making it easier to generate very high-quality VR and AR (Augmented Reality) content in real time with less computing power.
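The ABR adaptation mentioned under content quality comes down to one decision made over and over on the client: pick the highest rendition that fits the measured throughput. A rough sketch — the bitrate ladder and headroom factor here are illustrative, not taken from any real manifest:

```python
# Client-side heart of ABR: choose the best rendition the network supports.

LADDER = [  # (name, bitrate in kbps) -- illustrative values only
    ("360p", 800),
    ("720p", 2500),
    ("1080p", 5000),
    ("4K", 16000),
]

def pick_rendition(measured_kbps, ladder=LADDER, headroom=0.8):
    """Pick the highest rendition whose bitrate fits within ~80% of
    measured throughput; the headroom absorbs short throughput dips."""
    budget = measured_kbps * headroom
    best = ladder[0]  # always fall back to the lowest rendition
    for name, kbps in ladder:
        if kbps <= budget:
            best = (name, kbps)
    return best

print(pick_rendition(7000))  # ('1080p', 5000)
print(pick_rendition(1500))  # ('360p', 800)
```

For 360° content the same logic applies per viewport or per eye; the ladder just carries much larger top-end renditions, which is why ABR matters even more for VR delivery.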
VR/360° content and playback is going to create a huge draw for consumers to buy new hardware and software. As the ability to create and experience VR/360° video moves from science project to mainstream consumer attraction in 2018, expect big growth in this area.
5. Low Latency
The worlds of traditional television and internet-delivered content are continuing to intersect, and with this comes an ever-greater need to synchronize the timing of those broadcasts. In 2017, we saw many examples of high-profile events simultaneously delivered through broadcast TV and over-the-top (OTT) streaming—and the delays (or latency) built into our HTTP adaptive streaming formats over the last 10 years started becoming painfully apparent.
Beyond television—and building on our predictions about live interactivity—latency is too high for much of today’s live OTT content, in which the audience is encouraged to respond in near-real time with comments and questions. A few extra seconds of delay means viewer replies don’t get back to the hosts of a show until the conversation has already moved on.
In 2018, as more premium content is syndicated across both TV and internet delivery infrastructures, expect to see the latency in live OTT adaptive streaming from many content producers drop to about five seconds—close to that of digital cable TV channels. For near-real time interactive content, expect to see content providers switching to even lower-latency streaming options, such as WebRTC and SRT delivery, to reduce delays to one or two seconds.
6. Reliable UDP
Fifteen to 20 years ago—in the early days of internet streaming—most streams were delivered with custom formats over User Datagram Protocol, or UDP. Unlike Transmission Control Protocol, or TCP, UDP is not designed to ensure the delivery of every data packet. The logic at the time was that if some bits of video dropped, it was better to glitch for a moment and then move on, rather than go into a buffering state and wait for retransmission of the missing bits—especially for live streams.
As bandwidth improved and the internet became more reliable, streaming was increasingly delivered using TCP. When adaptive streaming caught on in 2008, video started getting delivered as HTTP files over TCP, taking advantage of the growing web infrastructure. Just like other web content, video could now pass easily through firewalls, be stored on HTTP edge servers of content delivery networks (CDNs) and get cached locally in browsers.
What’s changed? Are we heading backwards? No. Reliable UDP is becoming more popular because it can now deliver the following benefits:
Low latency. While ABR streaming over HTTP gave everyone the best quality their network connection and device could support, it also added high latency: typically, 30-45 seconds. If you’re watching a live stream of a game in one room, and your friends are watching that same game on cable in the next room, they’ll be cheering (or groaning) at least 30 seconds before your HTTP stream shows you why. Reliable UDP, on the other hand, provides dependable delivery while reducing latency to under two seconds.
High-quality streaming delivery. Reliable UDP can deliver content as reliably as TCP, but more efficiently—sometimes with much higher quality than HTTP over TCP offers on the same congested network connection. This addresses major challenges for both first-mile and last-mile parts of the streaming workflow: It helps live content producers get their content out of remote venues with poor connectivity, while also delivering higher quality over local and in-home network connections.
Scaling can be a bit more challenging when using reliable UDP, as now you must maintain one-to-one connections between each viewer and a media server. Fortunately, dynamic cloud technologies and streaming services have made scaling much simpler and less expensive than 10 years ago.
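The core mechanism behind reliable UDP is easy to sketch: number every packet, let the receiver detect gaps and request (NACK) the missing ones, and keep a sender-side buffer for retransmission. A toy, in-memory illustration — no real sockets, timing, encryption or congestion control, which protocols such as SRT layer on top of this idea:

```python
# Toy "reliable UDP" core: sequence numbers, gap detection, retransmission.

class Sender:
    def __init__(self, payloads):
        # Keep sent packets buffered so they can be retransmitted on a NACK.
        self.buffer = {seq: p for seq, p in enumerate(payloads)}

    def packets(self):
        return list(self.buffer.items())

    def retransmit(self, seq):
        return (seq, self.buffer[seq])

class Receiver:
    def __init__(self):
        self.received = {}

    def on_packet(self, seq, payload):
        self.received[seq] = payload

    def missing(self, highest_seq):
        # Gaps below the highest sequence seen become the NACK list.
        return [s for s in range(highest_seq + 1) if s not in self.received]

    def assembled(self):
        return [self.received[s] for s in sorted(self.received)]

sender = Sender(["frame0", "frame1", "frame2", "frame3"])
rx = Receiver()
for seq, payload in sender.packets():
    if seq != 2:  # simulate packet 2 getting lost in the network
        rx.on_packet(seq, payload)

for seq in rx.missing(3):                    # receiver NACKs the gap...
    rx.on_packet(*sender.retransmit(seq))    # ...sender resends it

print(rx.assembled())  # ['frame0', 'frame1', 'frame2', 'frame3']
```

Because only the missing packets are re-sent—rather than stalling the whole stream the way TCP's in-order delivery can—latency stays low even on lossy links.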
Google, Yahoo, Akamai, Wowza and many others actively deployed reliable UDP infrastructures in 2017. As business models based on reliable and low-latency video continue to grow in 2018—such as TV synchronization and near-real time viewer interactivity for classes, gaming and webcasts—you’ll see increased usage of reliable UDP for video streaming.
7. HEVC
High-Efficiency Video Coding, better known as the HEVC or H.265 codec, is the planned successor to the now-ubiquitous AVC/H.264 video compression standard. H.265 promises about 50 percent better compression than H.264 for on-demand content, and 35 percent better for live streams. Not only does this make it feasible to deliver 1080p and 4K content over typical home internet connections, it also means that folks with low-bandwidth DSL and satellite-dish data connections might get video at twice the quality they can today with H.264.
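Those percentages translate directly into bandwidth math. A back-of-envelope sketch, assuming an illustrative 8 Mbps H.264 1080p stream (our assumed figure, not a measurement):

```python
# Back-of-envelope bandwidth math for the compression gains quoted above.
h264_kbps = 8000        # assumed example: an 8 Mbps 1080p H.264 stream

vod_savings = 0.50      # ~50% better compression for on-demand content
live_savings = 0.35     # ~35% better compression for live streams

hevc_vod_kbps = h264_kbps * (1 - vod_savings)
hevc_live_kbps = h264_kbps * (1 - live_savings)

print(hevc_vod_kbps)    # 4000.0 -- same quality at half the bits
print(hevc_live_kbps)   # 5200.0
```

Flip the same math around and it explains the DSL point: within a fixed bandwidth budget, HEVC can spend roughly twice the effective bits per pixel that H.264 can for on-demand content.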
But wait, you say—HEVC has been around since about 2013. If it’s so great, why hasn’t it been widely adopted already? Well, there are three key reasons:
Licensing. There are several HEVC patent pools, making licensing costs unclear—and definitely higher than existing H.264 costs. Apple recently announced native support for H.265 as their primary video format in new iPhone and Mac devices, which may help instill confidence in HEVC for some device manufacturers—though others will likely wait for greater clarity in 2018.
At the same time, they’ll be watching developments with the royalty-free AV1 compression standard from the Alliance for Open Media. However, Apple’s commitment means most content producers would need to create H.264, HEVC and possibly AV1 content to reach all possible screens. This leads us to…
Ubiquity. Native support for H.264 playback is already found in almost all devices today—meaning content creators and distributors finally don’t need to compress content in multiple formats (e.g., Windows Media, RealMedia, QuickTime and Adobe Flash). HEVC doesn’t have anywhere near that ubiquity now, but with Apple’s commitment to support it, estimates are that over 1 billion devices will be HEVC-capable in 2018.
Since reaching Apple device screens has been a priority for content producers in the past, many may skip deploying the free AV1 codec in order to reduce the number of formats they need to support.
Processing. HEVC takes up to three times more processing power to decompress and play back on viewer devices—but new devices with HEVC support will also have native hardware for this to offload the CPU, making it less of an issue in 2018. On the compression (or encoder) side, content producers and distributors face an even larger challenge: 10 times the processing power is required to turn video into HEVC streams. As with scalability for reliable UDP, dynamic cloud infrastructures, including GPU acceleration on cloud servers, are available now to make HEVC compression much more cost-viable in 2018.
That’s our list of predictions for 2018. While not comprehensive, these are the ones that are top-of-mind for us based on the scenarios our customers are trying to achieve, and the technologies that are becoming mature enough to power them. What do you think we got right? What did we get wrong? What did we miss? Let us know by dropping us a line on Twitter @wowzamedia and @blueframetech.