Virtual Reality and 360° Streaming Trends 2017
June 19, 2017
Over the past several years, Wowza has seen growing interest in virtual reality (VR) and 360° viewing experiences. In the ever-evolving landscape that is live streaming, these technologies are emerging as two of the most innovative trends, powering use cases across industries—from concerts and events to security, education and even live medical surgeries.
Investment in VR and 360° hardware, software and delivery technology has been booming for the past few years, and the trend shows no signs of slowing. Facebook led the charge by acquiring Oculus for $2 billion in 2014, and in 2016 released its first consumer VR headset, the Oculus Rift. The Pokémon Go mobile app became a surprise global augmented-reality phenomenon in 2016, generating $600 million in revenue in its first three months.
Today, the world’s largest corporations are all investing heavily in VR and 360° development, including Apple, Amazon, Google, Microsoft, Sony and Samsung, in addition to Facebook. Even the U.S. Army is beginning to use VR technology to train soldiers.
VR vs. 360°
So, what's the difference between VR and 360° video? VR immerses you in a new, virtual environment that does not physically exist. This environment can be computer-generated through self-contained software that may be downloaded and saved to a machine, or it can be live-streamed; video games are the most common example.
On the other hand, 360° video transports viewers into a real, immersive, live-streaming experience that they otherwise wouldn’t have access to, and that can’t be installed on a machine—for example, a live broadcast of a concert or sporting event that gives viewers a seat from the stands. For the purposes of this post, we'll focus on 360° video, as this is the most applicable for streaming workflows.
Anyone who has used Google Street View or Bing Streetside in a browser has already experienced interactive 360° photography, in which you move through a scene by tapping and panning. You can now have similar experiences with on-demand and live video streams, too—for example, in mobile apps that change the scene based on the position and motion of your device. (Explore 360° videos on the Facebook 360 page.)
The Playback Reality of VR/360°
VR goggles take 360° visual experiences even further by adjusting what you see based on which direction your face is pointing. To this, 360° content creators can add further sensory cues. For example, platforms such as the Virtuix Omni let you seemingly walk in any direction. Some players are also integrating 360° audio: while VR goggles still require you to wear headphones, these players provide accurate, directional sound that matches what you hear to your field of vision (FOV) and the source audio.
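As a rough sketch of the directional-audio idea, a player can pan a sound source between the left and right channels based on the angle between the viewer's yaw and the source's position. This is a generic constant-power panning law, not any specific player's implementation; the function name and the assumed 90° frontal field are illustrative:

```python
import math

def stereo_gains(source_azimuth_deg, viewer_yaw_deg):
    """Constant-power pan: map the angle between a sound source and the
    viewer's facing direction to (left, right) channel gains."""
    # Source offset relative to where the viewer looks, wrapped to [-180, 180).
    offset = (source_azimuth_deg - viewer_yaw_deg + 180.0) % 360.0 - 180.0
    # Clamp to an assumed 90-degree frontal field and map to a pan in [-1, 1].
    pan = max(-1.0, min(1.0, offset / 90.0))
    # Constant-power law keeps perceived loudness steady while panning.
    angle = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# Looking straight at the source: equal energy in both ears.
print(stereo_gains(30.0, 30.0))  # ~ (0.707, 0.707)
# Source 90 degrees to the viewer's right: almost entirely right channel.
print(stereo_gains(120.0, 30.0))
```

Real spatial-audio pipelines (ambisonics, HRTFs) are far richer than a two-channel pan, but the head-tracking input is the same: the viewer's yaw feeds the mix.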
Social media giant Facebook is also leading innovation in the VR and 360° space. Its player integrates 360° audio, rendering sound with pinpoint spatial accuracy based on your FOV. It also uses Facebook's own cube map projection technology, which renders video onto the six faces of a cube, providing clearer, less distorted images and smoother scaling of your FOV as you move. By contrast, the traditional layout for VR and 360° video is an equirectangular (spherical) projection, which can stretch and distort the image, creating the appearance of a fish-eye lens.
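To make the cube-map idea concrete, here is a minimal sketch of the core mapping: a view direction is assigned to one of six cube faces plus (u, v) coordinates on that face. Face naming and axis conventions vary between implementations (this follows a common OpenGL-style layout), and it is not Facebook's actual code:

```python
def cube_face(x, y, z):
    """Map a non-zero view direction (x, y, z) to a cube-map face name
    and (u, v) texture coordinates in [0, 1] on that face."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:        # dominant x: right (+x) or left (-x) face
        face = "+x" if x > 0 else "-x"
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                   # dominant y: top (+y) or bottom (-y) face
        face = "+y" if y > 0 else "-y"
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                            # dominant z: front (+z) or back (-z) face
        face = "+z" if z > 0 else "-z"
        u, v = (x / az if z > 0 else -x / az), -y / az
    # Shift u, v from [-1, 1] into [0, 1] texture space.
    return face, (u + 1) / 2, (v + 1) / 2

print(cube_face(0.0, 0.0, 1.0))  # ('+z', 0.5, 0.5): dead center of one face
```

Because every pixel of the sphere lands on a flat face at a bounded viewing angle, the stretching that an equirectangular layout concentrates at the poles is spread much more evenly.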
The Evolution of 360° Hardware
While goggles were once the only option for viewing VR and 360° content, today, it’s accessible from the palm of your hand through your mobile device. VR and 360° cameras have also evolved a great deal over the years, with a variety of commercial- and consumer-grade models now available at a range of price points. There are even sleek, portable cameras that attach to mobile devices, so consumers can capture immersive virtual experiences from wherever they are.
Mobile 360° cameras, such as the Insta360 Nano and the Giroptic iO, offer built-in encoders, which pull in multiple feeds, encode them within the device and stitch them together into one high-quality stream—all on the fly. This stream can then be delivered to a variety of destinations around the web using a media server, such as Wowza Streaming Engine. Examples of desktop VR and 360° devices with this functionality include the Orah 4i, the Giroptic 360 and the Nokia OZO.
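The camera-to-server hop is usually just a protocol relay. As a sketch, the stock ffmpeg CLI can pull a camera's pre-stitched RTSP feed and republish it to a media server's RTMP ingest without re-encoding; the URLs below are placeholders, and your server's ingest path and authentication will differ:

```python
def restream_command(rtsp_source, rtmp_target):
    """Build an ffmpeg command that pulls an RTSP camera feed and
    republishes it, unmodified, to an RTMP ingest point."""
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",  # TCP interleaving survives NATs/firewalls better
        "-i", rtsp_source,         # pre-stitched 360 feed from the camera
        "-c", "copy",              # pass audio/video through without re-encoding
        "-f", "flv",               # RTMP carries FLV-packaged streams
        rtmp_target,
    ]

# Placeholder addresses; substitute your own camera and server URLs.
cmd = restream_command("rtsp://192.0.2.10/live", "rtmp://media.example.com/live/360stream")
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Stream copy keeps the camera's encode untouched, so the server receives exactly the stitched stream the device produced.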
There are also stand-alone VR and 360° hardware encoders that allow for live stitching of multiple camera feeds, along with monitoring and calibration of live streams. The Teradek Sphere is one such encoder, which allows 360° live-streams to be recorded on-site using an iOS device.
How can you ensure support for all this hardware across various end-user devices? Player clients such as JW Player, Bitmovin and THEOPlayer offer SDKs that developers can use to build VR- and 360°-friendly mobile apps and web-based players.
Finally, to scale your VR and 360° streams to a global audience, media server software, such as Wowza Streaming Engine or Wowza Streaming Cloud, must be used for processing and delivery. From there, you can use a content delivery network (CDN), such as Wowza CDN, to reach an even broader audience.
Wowza Powers Innovation in VR Workflows
So, how long until we can share a virtual seat with others as we tune into a live, 360° broadcast straight from the couch? A Golden State Warriors basketball game was an early test, allowing viewers to feel as if they were at the game, even when sitting at home.
In 2017, live 360° events were taken to the next level. In January, Radiant Images used Wowza technology to live-stream President Obama’s farewell address in immersive 360° video. Multiple cameras were used to stream the broadcast to every social media platform, giving viewers around the world a front-row seat to the President’s speech. In the same month, Periscope offered the first live, 360° underwater broadcast—powered by Wowza Streaming Cloud.
Wowza continues to be a core technology for powering live 360° event and VR workflows, from the White House to the Oscars; Lollapalooza and corporate events; and more than we even know. While a few VR providers use proprietary streaming technologies, most use the streaming protocols that are at the heart of Wowza Streaming Engine and Wowza Streaming Cloud.
There are nearly limitless applications for 360° and VR streaming. We’ll soon see it used in all kinds of use cases: racing, weddings, corporate meetings, education, medicine, military applications and more. It’s going to be a fun ride!
Watch the Video: Streaming Forum 2017—Evolution of Virtual Reality and 360° Live Streaming
To learn how new trends and technology in VR and 360° are being applied in a real use case, check out this presentation by Wowza’s own Streaming Video Technologist, Ryan Jespersen, and Gordon Charles, Analyst Developer at University College London, from Streaming Forum 2017:
- Full Video Transcription
My name's Ryan Jespersen and I work for Wowza Media Systems. We're a media server company. Joining me today is Gordon Charles, who's one of our customers over at University College London. We'll take turns talking up here and talk about the use case for VR and 360 streaming.
Show of hands, how much experience do we have with VR and 360? All right, we've got some people who've played around with it. What I'm going to try to talk about today is pretty much the entire workflow of VR streaming—everything from cameras to encoding to the processing and distribution of the media. Also, the playback on the clients to mobile devices to set-top boxes to HTML5 players—really the ecosystem that you need to create in this streaming infrastructure for both live and video-on-demand to be able to get this VR and 360 workflow to work.
What I wanted to show is a new camera vendor that we're working with called Insta360. There are a few different workflows; a lot of people have been building separate cameras, rigs, different ways to actually capture and encode content. With this particular one, the Insta360 Nano, if you see here, this is a camera with two lenses on it, and it plugs into my iPhone. They make one for Android as well. With my finger here, you can see I can actually scroll around and see the feed of this 360 camera. If you guys want to take a look at this later, I can show it.
This shows how VR and 360 is really becoming revolutionary and moving into a new space that's going to make it more accessible to all users, especially in an educational environment. It's going to allow every student to capture VR and 360 and create engaging content, both user-generated content and educational learning content, in a workflow that can be applied to many different use cases, whether it be training for enterprises, whether it be EDU, and so on.
As you can see in this app, or maybe you can't see it: down here you have the option to capture locally, on the SD card or the device. There's also an option to live-stream. In here, there's an integration to plug into YouTube Live, Periscope, Facebook Live, as well as to push it to a media server like Wowza or other media servers out there and do that live-streaming. What's amazing about this is that the evolution of camera, encoding and distribution to users is now becoming more compact. This really makes it accessible to all.
Some of the use cases we're going to touch on today [are] everything from broadcasting to live mobile broadcasting, Periscope-type use cases, tourism and entertainment. The biggest question we get is, “VR and 360's cool, but is it technology for technology’s sake, or is there really a way that you can monetize it? A way that you can create value that makes your content different, and different from your competitor, different from other educational institutions?” That's really the focus of what we're trying to do today in this session.
A good example here is the Red Bull Air Races: they created a live-stream from the 360 cockpit. This is content that people really want to see, and it creates a great second-screen experience. At an educational level, we'll talk to Gordon a little about what they're doing at UCL; we had spoken to one of his colleagues at UCL who is doing live medical surgeries and so on. Duke University in America is actually able to live-stream that kind of content and create very engaging content that they can monetize and turn into a new revenue source.
With that, we're going to talk to Gordon a little bit about the UCL case study. Maybe Gordon, you can join me up here. Gordon, at UCL, it's not just the creating of the content; it's how you're going to distribute it. It could be through a learning management platform. What are some of the requirements that you had at UCL to really get it to work and get it out to your students?
Hello everyone. Well, one of the requirements we had was a platform that's simple and easy to use. We wanted a platform where we could separate the content: that way, we can allow public access, as well as student access by department. We also have to control some content so it's available just to students. We wanted a facility that allows people to have both on-demand content and live-streaming, and to be able to integrate that into their LMS systems.
I think the key here is not just EDUs; a lot of enterprises are looking to create training platforms, like Moodle, like Blackboard, like Canvas, and they want an integration with video content, whether it be live or VOD. One of the references here is a company called Medial, based here in the UK, that creates that integration between video content and learning content management systems like Moodle. Medial has used JW Player, which has a 360 player, and they've now been able to use Wowza with JW Player to create an integration that makes this live and VOD streaming possible, easy and accessible to both students and staff.
We talked to a couple of your colleagues at UCL about the use cases within UCL, whether it be orientation for students, whether it be selling the university as a place to come, whether it be learning. Maybe talk a bit about the different UCL case studies.
Sure. Okay, first and foremost, we try to use it for marketing. We have a department called Communications and Marketing. What we try to do is have everyone understand a bit more about the courses that UCL runs, and try to get people to engage more and know about the services we provide. We try to use it for promotional events as well, and for social awareness.
For staff, we try to use it for training and for internal communication as well, and we also try to encourage user-generated content. We try to engage people to talk about subjects relevant and interesting to them. We want it to be a platform not only for work-related content, but also for activities: content related to hobbies people might have, or things they know about on a professional basis, which allows us to help each other.
One of the things you're doing at UCL: you just had Noam Chomsky presenting, I think. These are really speakers and events that people want to view at a global scale. From that perspective, what are some of the events you guys are putting on?
Okay, so twice a week we do lunch hour lectures. Lunch hour lectures are available to the rest of the world; anyone can subscribe. We might have intellectuals talking, or, I don't know, civil rights activists, for instance, or anyone that just wants to talk about anything of interest. We try to have it twice a week at lunch time, to get people to engage, and to promote UCL.
We also have a series of different debates that we try to do, and we try to make it departmental-based as well. For example, last week the Department of Economics had an open day for their MSc courses. We also use it, especially at the start of term, when we have overflow problems: our lecture theaters are heavily oversubscribed, so we use the live-streams to publish that content into other lecture theaters so people can actually see it. We also want to make sure that people can see the content as video-on-demand a bit later, so if anyone is at work, or can't make it into the office or onto campus, they can view it at a later time.
Fantastic. To talk a little bit more from the technology side, let's get our hands a little dirty with the technologies involved. If we look a little bit into the past, we've seen different cameras and rigs. These have been around for three, four, five years and longer, with people building different rigs and using very convenient GoPros.
This is a very, very high-end production. You can see these RED cameras. I hope Wowza gets me one of these for my birthday. Here's 16 different GoPros. The complexity this creates, though, is that with so many different HDMI or SDI cameras, how do you capture that content to IP, and how do you stitch it together? There's really a built-in pain point with this workflow. These HDMI, and you can imagine, SDI capture cards allow you to pull in multiple feeds, but you end up creating all these different dongles, and it ends up becoming just a mess of cables and a mess of hardware. It also creates more opportunities for breakage.
This is the Vahana software. VideoStitch's Vahana is software that has revolutionized stitching by making any desktop or laptop able to stitch content. You can see here they have six different feeds from different GoPros; I believe these are at 960p. They've been able to stitch them together into a 4K stream, and then they can encode to IP and push it out over RTMP to a media server like Wowza or others, which can then re-stream it to the world.
Like I mentioned, these are the hardware-setup issues that you have. You've got all this mess of cables; you have problems with hardware synchronization between different cameras. What you've seen in the evolution of different 360 cameras, rigs and encoders, is you've seen this movement towards bringing it all into one device. One device at a very, very high level.
VideoStitch's evolution of their Vahana software was to create their own hardware device, the Orah, that enables you to do that. The idea here is the camera itself has four cameras on it, and then there's the actual stitching box that runs the Vahana software and is pre-configured to synchronize these onboard cameras into one high-quality 4K stream that can be pushed out over RTMP. You notice there's only one cable here, and that's a PoE, power-over-ethernet, cable: it powers the device and also transfers the raw data from all the cameras to the stitching box that sits there. When you buy this product (I believe the Orah is around $2,000), that includes the stitching box and the camera itself.
If we go back a couple of years, you also have the Giroptic device. Instead of having a separate stitching box, it's a slightly lower resolution, which enables it to use the onboard CPU to actually encode the content. These three cameras will stitch; you've got a little PoE switch at the bottom, you plug it in, and you're able to pull that over RTSP into a media server that then streams it onto the Internet, onto a local network, or however you want to distribute it.
Of course, you guys have probably heard a lot of news about the OZO. If you've got an extra $70,000 lying around, you can purchase one of these. The OZO has gained a lot of notoriety because of events like the Champions League, where you've seen it used in finals and semi-finals and so on. That's another device. You can see, I don't even know how many cameras it has, but it's quite a fancy device. In principle, it works the same as the other, smaller off-the-shelf devices. The Giroptic, I believe, is $500, and the PoE switch at the bottom is $250, so $750 will get you up and running with the Giroptic. This one is slightly, slightly more.
Then you have the devices that are really, in essence, encoders, right? Teradek is famous for their Teradek Cube, their VidiU, the Teradek Bond: all devices that allow you to encode to IP and distribute. The cool thing about the Teradek Sphere is that it's agnostic to the camera you use. It has an IP switch at the back that enables you to pull in four HDMI feeds and stitch them together on the device. You actually run their software on an iPad (they have a lot of Apple developers at Teradek, I think), and they're able to do a lot of the stitching on the Apple device itself. This allows you to record on the iPad, stitch it together, and then stream it out as well. The cool thing here is you can bring your own HDMI cameras: very high-end, professional cameras, or even GoPros. So, another product; I believe the Teradek Sphere is $3,000.
The one that I just showed, I can show a demo of this for you guys after. You're seeing more and more devices now that are trying to take advantage of the mobile space, right, and trying to maximize on how prevalent mobile devices have become for encoding. With the Insta360 Nano, this allows you to use the two fish-eye lenses that are on the top of the camera and pull it in. More and more of these mobile devices that are in our hand have extreme CPUs, sometimes even GPU capabilities, to encode this directly on the device, and plug it into common social media platforms like Facebook Live, YouTube Live, Periscope Live—in addition to also pushing it into your own media server, like a Wowza or others.
Then the Giroptic that we showed earlier, they've made a move with this Giroptic iO camera that also lets you maximize the use of your mobile device, for both iOS and Android, and stream that out. That's another device that's become very common. The Giroptic iO, I actually don't remember the price point. The Insta360 Nano is $299, and to be honest, it puts out a pretty nice high-quality feed. The encoder on mobile devices these days, like an iPhone 6S or 7, can actually encode up to 4K. That CPU is quite robust. You can actually stream out 2K or 4K from the device at 6 or 8 Megabits per second, if you've got the bandwidth to support it.
I think you're going to see more and more as the evolution continues with VR and 360. I wouldn't be surprised if you see a mobile phone vendor come to market with a built-in 360 camera that will make these devices maybe somewhat obsolete. I can see that happening here in the very near future.
As we walk through the workflow, we've talked about cameras and encoders. Let's talk about how you then pull that into a media server and deliver it, and how you play it back on a device. This becomes a huge headache, not just in 360, but in streaming generally: how do you get support across so many different devices? We work with a company called NexPlayer, and they make a 360 SDK that allows you to stream to iOS and Android devices, a fantastic SDK that you can build into your own app.
You can put it in the app store and enable 360 playback. The great thing about NexPlayer (at Streaming Media East last year, I did a full demo of this and actually showed how it works) is that it allows you to use a Google Cardboard, or use your finger, and actually move around both vertically and horizontally to consume that 360 content.
The drawback to the 360 player experience is that you need separate solutions to get to separate devices. If you want to build a mobile app, you need a 360 SDK player like the NexPlayer. However, if you want to get to HTML5—you want to have native support for browsers, Microsoft Edge, Firefox, Chrome—you need to use some of the common players that are out there.
Bitmovin, which has a stand down here (we work very closely with them), has a very extensible 360 player that supports both DASH and HLS. You can embed it in your site and allow users to use a finger or mouse to actually scroll through the content in the player.
By the same token, JW Player (a lot of you are probably familiar with JW) has created a 360 player, an SDK, that's also part of their offering. It allows you to target both desktop and mobile devices and create immersive VR experiences as well. You can build that SDK into apps, websites, wherever you want, just like you can with the standard JW Player.
Then THEOPlayer, which also has a stand down here, has a 360 player as well that you can get and use with their SDK. This gives you the ability to use different vendors, but you need a multi-vendor approach to really get to both mobile devices and native browsers.
Let's talk a little bit about how you scale this to a global audience. One thing is to pull it up and just look at it on my device, like I just did. But how do you capture a live concert (we were talking earlier about the Red Hot Chili Peppers and their huge concert in Berlin), stream it to a global audience, and have the infrastructure to scale it? That's really where you have to involve a media server or some kind of CDN. I'll have Gordon talk a little here about how they've been able to use their platform to deliver content globally.
Sure. What we do is, we've actually built an application in-house, well, an application using a third-party vendor called Medial. We've called it Media Central.
One of the things we wanted to do was make the process of uploading on-demand content and streaming live a lot easier. We wanted an easy workflow. We also wanted to make sure we can get analytical information about who's viewing it, when they're viewing it and what device they're using. So we've used this platform, integrated it with Google Analytics, and then extrapolated the information into our UI.
That allows us to reach different territories. The difference between Media Central and other portals, such as YouTube, is that in certain territories YouTube is banned. With Media Central, we have full access all around the world, and our content is never taken offline.
For instance, I'll give you an example: on Saturday, when we had the Noam Chomsky lecture, one of the speakers was actually playing a reggae song, and the lecture was simultaneously being streamed through YouTube. That stream was immediately taken offline, which meant the content was just gone. The thing was, it was still good for us, because we had Twitter feeds as well, and we could then tell our colleagues and the audience watching the stream to go over to Media Central and use our solution.
From there we were able to track and see where our audiences were actually looking at content. We can also do a month-by-month analysis to see, okay, is this actually working in this territory? Especially with Brexit, what we were thinking about is: we have 40,000 students; around 8,000 students are from overseas, and another 30 percent are from the E.U. We reckon, if you take that analogy, it's probably about 2,700 students we could potentially lose. If you work it out at maybe 15,000 pounds per student per year, the University would be losing 40 million pounds per year. What we want to do is showcase some of our talent, because we're very well-known for our research. We want to use it as a marketplace to advertise and let people know about the quality of the work that University College London does.
By using Media Central and its repository, we can go in and find out where our audience are looking at content, how long they were watching the content for and if it's engaging. Obviously that is good for independent departments, who can then go and do marketing projects around various different territories.
I think the compelling thing about this slide is the variety of different playback devices, right? You have to have a multi-device approach when trying to deliver content; otherwise you have a very narrow use case, and a very narrow deployment of all the money and investment you're putting into a VR and 360 solution. That's one of the reasons I spent time talking about the different 360 playback options you have. This particular slide looks quite complex, but really we're just breaking it down, adding a little more detail on the different components involved in getting a 360 stream to work. These slides are available on the site as a PDF, if you guys want to grab them there.
The live event capture here, what we talked about, we have the mobile apps that are available. People are using the Wowza mobile encoder, which is our own SDK, or you use a camera and encoder like the ones that I've shown to capture the content live. At that point, of course this is a very Wowza-focused slide, but you can insert any other media server here that can accept 4K or 2K content to be able to re-stream it.
What's very common here on the processing side is the different protocols coming in: usually an RTMP push from a device, or, if it's a Giroptic or even the Orah, an RTSP pull that the media server has to do to actually grab the content. Then the media server has to translate those protocols into protocols that different devices can actually support. Apple HLS is very common, and we talked about DASH. These different protocols get sent to different devices so that you can make sure you get playback everywhere. That's the point where you need those different clients, like JW Player, like Bitmovin, like THEOPlayer, like NexPlayer, to make sure you get the coverage and actually support the different protocols on different devices.
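The end of that translation step is a manifest the client downloads. As a toy illustration of what the packager produces (the rendition names and bitrate ladder here are invented, and real servers emit more attributes, such as CODECS), a minimal HLS master playlist can be generated like this:

```python
def master_playlist(renditions):
    """Render a minimal HLS master playlist from (name, bits-per-second,
    'WIDTHxHEIGHT') tuples, highest quality first."""
    lines = ["#EXTM3U"]
    for name, bandwidth, resolution in renditions:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(f"{name}/playlist.m3u8")  # per-rendition media playlist
    return "\n".join(lines) + "\n"

# Hypothetical ladder for a stitched 360 stream.
ladder = [
    ("4k", 10_000_000, "3840x2160"),
    ("2k", 6_000_000, "2560x1440"),
    ("1080p", 3_000_000, "1920x1080"),
]
print(master_playlist(ladder))
```

Each entry points at a separate media playlist, which is what lets the player switch renditions mid-stream.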
Not only do you have to process this stream and translate the protocol, you also need a middle layer, a scaling layer, to deliver it to a global audience. This could even be an in-network environment. UCL, at 40,000 students, is an enormous university with different campuses throughout, so if they're trying to do multi-campus delivery, they have to put different media servers throughout the campuses and re-stream it: you're almost creating a little mini-CDN within your network. At public Internet scale, you're looking at using a large CDN or creating your own edge-delivery system: an Akamai, a Limelight, a Level 3. Most of these CDNs don't care; they just take in the video that's sent to them by the origin, the processing media server, and redistribute it with all the different protocols.
Which leads us to another question: the resolution of these VR and 360 devices. A lot of them have traditionally done very low-end resolutions, 2K or lower, and when you're stitching these camera feeds into a very high-end resolution, sometimes the quality of the cameras in some of these older devices is poor. You end up with something that's cool because it's there, but how much value does it really have? As these devices get better, you're going to start seeing higher quality, which means you need a workflow that can support 4K resolution. That introduces issues, because as you get to the playback environment, you have a mobile app you've developed, and there's a user trying to consume this on a 3G connection. They're not going to be able to download a 4K video stream that's 8 or 10 Megabits per second.
Their 3G network is just going to sit there and buffer. At that point, you can insert what's called adaptive bitrate (ABR), which I'm sure most of you are familiar with, over an HTTP protocol like HLS or DASH. That allows you to transcode the 4K stream to 2K, to 1K, and so on, and let the client decide and adapt to whatever its network conditions allow. That whole ABR workflow works even in a VR/360 environment; it's actually something you probably should be offering. Although, at the end of the day, you run into the same problem: if you're delivering a 500-Kbps, 480p stream, how much value do you really get from it being VR/360, right? It's not going to look very good.
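The client side of that ABR loop is conceptually simple: measure throughput, then switch to the best rendition that safely fits. A minimal sketch (the ladder and the 0.8 safety margin are illustrative assumptions, not any shipping player's heuristic):

```python
def pick_rendition(measured_bps, ladder, safety=0.8):
    """Choose the highest-bitrate rendition whose bitrate fits within a
    safety margin of measured throughput; fall back to the lowest one."""
    affordable = [r for r in ladder if r[1] <= measured_bps * safety]
    if affordable:
        return max(affordable, key=lambda r: r[1])
    return min(ladder, key=lambda r: r[1])  # buffer-friendly last resort

# Hypothetical ladder: (name, bits per second).
ladder = [("4k", 10_000_000), ("2k", 6_000_000), ("1080p", 3_000_000), ("480p", 500_000)]
print(pick_rendition(4_000_000, ladder))  # → ('1080p', 3000000)
print(pick_rendition(100_000, ladder))    # → ('480p', 500000)
```

Real players also weigh buffer occupancy and switch-frequency penalties, but throughput-versus-ladder is the core of the decision.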
All these issues become a part of the playback environment. Then, from a consumption perspective, you look at what Facebook is doing with Facebook Live and their Oculus devices. You start inserting different Google Cardboards—this starts becoming a very compelling offering, right? To be able to actually deliver engaging content that people can interact with. Not just in a mobile-device environment, but as an actual VR, immersive experience. From a streaming perspective, that's still very raw. People are still trying to figure out a way to really to do this in a streaming environment and insert data, insert information that makes that more compelling and more sellable, right, in different workflows.
That's pretty much it. As for demos, I think we're a little tight on time, but we have demos from the session I did at Streaming Media East last year that show the full workflow with the Orah actually delivering, as well as one with the Giroptic device. I'm also going to be recording one with the Insta360 and putting that up on the web. I'll be here for the next few minutes, so if you want to see a demo, that would be fantastic. With that, Gordon, I want to thank you for joining us, and thank you all. I'll be here for questions.
You talked a lot about the available video-capture devices. What about 360-degree audio to go with that?
From a media-server perspective, it comes down to how many channels you pass through your media server. With Wowza, you can pass in three or five different channels, and we can process those and send them back out with the stream. The biggest issue becomes what the capture device supports. This one here is not going to give you Dolby sound. The Nokia Ozo has multi-audio support inside it. Really, you're at the mercy of the capture devices that you have. On the encoder side, you need to make sure you have an encoder that can honor the different audio tracks coming in; most likely it's AAC.
To do something like Dolby Surround Sound, you need the audio codec from Dolby. Have they licensed the rights to that, and is it available on the device? That's a different question. From the Wowza perspective, we don't usually care: we can pass it through without having to re-transcode it, and just wrap it in a different protocol for different devices. Then you also need to make sure that, at the playback level, your client is enabled to honor those audio codecs.
Most of the player SDKs out there will support multi-audio tracks and surround-sound-type environments. Then it becomes a question of which VR headsets have the support to honor that end to end. To be honest, I don't know a lot about the metadata side of it, because a lot of that is passing metadata through the stream. Will the device know how to play back the audio in a certain way to match the content? That I couldn't answer for you, but it's a good question.
Which camera do you use for live-streaming?
Thank you. For live streaming, we try to make the process as easy as possible. For instance, Wowza makes a product called GoCoder, a free application you can download from the Google Play Store. For us, it just allows people to stream live: we publish the stream name and the authentication details, those pass through to the live-stream service, and that's it; they just click and then they go.
We also try to promote a self-service model. You want to have three different environments. The first is a very cheap environment, with a camera at maybe less than 1,000 pounds; then we try to go up to maybe 2,000 pounds, and then 5,000 pounds.
Especially for lunch-hour lectures, we have an in-house department called Digital Media Services, and they film the lectures. What we find at the moment is that it's a lot of paperwork. For instance, the Economics department or the School of Pharmacy have their own cameras, but sometimes people use iPads. If they have GoCoder on one, it just streams right away, and it's free.
We try not to make it too complicated. In essence, we don't want to tell people to use a specific camera; we want to say, use any camera you want, as long as it's good quality. We try to promote the service. We want people to come back, and we want to be able to give them advice on how to create the best streams, make them engaging, and improve the quality: making sure you're not too far away, that your headshot is in frame, and so forth. So, inevitably, we use a wide range of cameras.
From the 360° perspective, Duke University was one of our first customers to do this and monetize 360° content; they're a medical institution, they have a lot of money, and they're very well known in the United States. They've always gone with a camera-agnostic solution, because they wanted to make sure they could scale their cameras into the future. They use Vahana with their own capture devices. They pretty much created their own rig to capture 360°, so they're not tied to a device that may be cool today but behind the times in a couple of years.
If there's one lesson to take from the session, it's that budgets cover a huge range: everything from a $300 mobile device all the way up to a $70,000 Nokia, and there are a lot I haven't even mentioned. Samsung is making its own devices. So there's a huge range, depending on what you want to do and how compact a solution you want.
I should also mention that the Engineering department use the GoPro rigs, and they use spare cameras as well. That's more on the real high end, more for research.
Quite a simple question: what format does the media file use for this stitched video? And for editing, should the video be unstitched, or can you edit the 360° as-is?
To answer your first question: H.264 for the video codec, AAC for the audio codec—
No, no, no, no. From an MP4?
Oh, MP4, as a VOD file, yes. All of those I've seen are MP4 files. Vahana itself can record locally, and the Wowza media server can record on the server itself. You always want to record at the encoder level, because then you make sure no packets get lost across the IP connection. Vahana can record it, or whatever your source is. Even this device will record locally to an SD card as an MP4, encoded as H.264. From a protocol perspective, most of these cameras, if they're doing a push, are using RTMP; if they're doing a pull, like an IP camera or network camera, it's an RTSP pull. Regardless of resolution, they're using H.264 video and AAC audio.
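The push-versus-pull distinction above comes down to who initiates the connection and which URL scheme is used. Here is a minimal sketch; the host, application name, and stream name are hypothetical placeholders, and only the RTMP/RTSP split and their conventional default ports (1935 and 554) come from the discussion.

```python
# Sketch of the two ingest modes described above: a camera or
# encoder that pushes its stream uses RTMP; a camera the media
# server pulls from (IP/network camera) is reached over RTSP.
# Host, application, and stream names are hypothetical.

def ingest_url(host, stream, mode):
    """Build an illustrative ingest URL for a push or pull source."""
    if mode == "push":
        # Encoder pushes to the media server over RTMP (default port 1935).
        return f"rtmp://{host}:1935/live/{stream}"
    elif mode == "pull":
        # Media server pulls from the camera over RTSP (default port 554).
        return f"rtsp://{host}:554/{stream}"
    raise ValueError(f"unknown ingest mode: {mode}")

print(ingest_url("media.example.com", "cam360", "push"))
print(ingest_url("192.168.1.20", "cam360", "pull"))
```

Either way, the payload inside is the same H.264 video and AAC audio; the server just rewraps it into HLS, DASH, or another delivery protocol.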
Yeah, but what about editing—can it be edited in a stitched way?
Yeah, that's a good question. With the Vahana VR software, you can go in and modify the stitch on the fly, but as with any stitch, you'll sometimes get a line, right? Where the stitching is, sometimes it doesn't happen elegantly. You might want to go back and edit the raw footage to fix that stitch for VOD. I believe Vahana allows you to record the incoming source footage for each of your cameras, so that later you can do a re-stitch of the content. I'm almost positive they do, though I've never tried it. I wouldn't be surprised if they do.
No, it doesn't.
It doesn't? Okay. Well, there's a backlog request for them.
Of course, you could record at the camera level, on the SD card.
Yeah, so you could do that and then record it off. That's good feedback in general: you always want to record at the camera level, at the encoder level, and at the server level, right? At the camera level, you can make sure you've got the raw footage on those GoPros, and then pull it into Vahana and re-stitch it. That's good advice. Any other questions? Cool. Well, thanks, guys.