Author: Gregg Hall
This article has a twofold purpose. First, to explain the live streaming process: what actually happens under the hood, without (hopefully) getting overly technical. Second, once we understand the complexity involved, to identify what steps, if any, we should take to protect against a catastrophic failure. Here at Webcast & Beyond we practice what we call “Failsafe Protection” as a means to provide “peace-of-mind insurance” for our customers.
We live in an era where free apps allow us to instantly stream from our smartphones to destinations such as Facebook Live, Periscope, or Instagram. So what’s the big deal? Isn’t live streaming as simple as point and shoot? The fact is, live streaming is a highly complex process subject to a myriad of potential failure points. It’s one thing to use an app on social media for personal reasons. It’s quite another when broadcasting your company’s event to the world at large. Professional production standards must be met and the reliability of the live stream must be secured.
In its simplest form, the live streaming process can be divided into three phases: Acquisition, Transmission, and Distribution (refer to the diagram below).
Acquisition is the front-end production side, which includes the cameras, microphones, PowerPoint slides, pre-recorded videos, titles, graphics, and the video switcher. In short, all of the equipment and crew that make your event look and sound like a professional television broadcast.
The next phase, Transmission, involves taking the audio/video signal from the acquisition stage and encoding it for delivery over the internet. The encoded signal then needs to be transmitted to a streaming server. This transmission path, known as the “uplink,” typically runs over a high-speed internet connection from the venue to the data center where the streaming server is housed. This requires an Internet Service Provider (ISP) who can provide a consistent, reliable data path without interruption or congestion from other traffic.
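As a rough illustration of this phase, here is a minimal sketch of an encoder script that pushes a program feed to a streaming server over RTMP, one common ingest protocol. It assumes ffmpeg is installed; the capture device and ingest URL are placeholders, not our actual production setup.

```typescript
// Minimal sketch: encode the switcher's program feed and push it to a
// streaming server. Assumes ffmpeg is installed; the capture device and
// RTMP ingest URL below are hypothetical placeholders.
import { spawn } from "node:child_process";

const INGEST_URL = "rtmp://ingest.example-streaming-host.com/live/STREAM_KEY"; // hypothetical

const ffmpeg = spawn("ffmpeg", [
  "-f", "avfoundation",   // capture input (macOS example; dshow on Windows)
  "-i", "0:0",            // video device 0, audio device 0
  "-c:v", "libx264",      // H.264 video encoding
  "-preset", "veryfast",  // favor low latency over compression efficiency
  "-b:v", "4500k",        // video bitrate must fit within the uplink's capacity
  "-c:a", "aac",
  "-b:a", "128k",
  "-f", "flv",            // RTMP carries an FLV container
  INGEST_URL,
]);

ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk)); // ffmpeg logs to stderr
ffmpeg.on("exit", (code) => console.log(`encoder exited with code ${code}`));
```

Note the bitrate choice: if the encoder's output bitrate exceeds what the uplink can sustain, the stream will stutter or drop, which is why a consistent, reliable data path matters so much.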
Distribution is where it gets interesting. Let’s start with the people viewing, the end users. To get access to the internet they need an Internet Service Provider, or ISP, of their own. Once connected, they click a link that takes them to your landing page: the web page with the live streaming player and any other relevant information about the event. The landing page is hosted by a web server, which can belong to your company website, a social media platform, or a private video streaming service. A streaming video player is embedded on the landing page.

The live stream itself typically does not originate from the web server; it gets piped directly to each streaming video player from the streaming server. In effect, at least two servers are necessary to watch a webcast, although this process is transparent to the end user. The streaming server in the cloud handles the connection requests from each person watching and works in concert with a Content Delivery Network (CDN) to route the stream to each end user in the most efficient way available. It should be noted that the CDN allows the webcast to be scaled up as more people sign on to watch; it essentially “multiplies” the capacity of the streaming server by using a special network of additional, geographically positioned servers.
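To make the two-server arrangement concrete, here is a minimal sketch of what a landing page’s embedded player might look like, assuming the stream is delivered as HLS and played with the open-source hls.js library. The manifest URL and element ID are hypothetical placeholders.

```typescript
// Sketch of a landing-page player: the page itself comes from the web server,
// but the video is pulled straight from the streaming server / CDN.
// Assumes an HLS stream and the hls.js library; the URL is hypothetical.
import Hls from "hls.js";

const MANIFEST_URL = "https://cdn.example-streaming-host.com/live/event/playlist.m3u8";

const video = document.querySelector<HTMLVideoElement>("#live-player")!;

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(MANIFEST_URL);  // segments come from the CDN, not the web server
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, () => void video.play());
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = MANIFEST_URL;      // Safari plays HLS natively
}
```

Notice that the page never proxies the video: the browser fetches the manifest and segments directly from the CDN, which is exactly why the web server and the streaming server can fail independently.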
We have now traced the end-to-end connection from the event to each end user watching on their own device. The whole process can be summed up as two signal paths:
Media Path: Acquisition Equipment > Video Switcher > Streaming Encoder > Router > Uplink ISP > Streaming Server > CDN > ISP > End User
Landing Page Path: Web Server > ISP > End User
Now that we understand the live streaming process in more detail, in the next section we will take a look at what can go wrong.
We have learned that live streaming involves multiple processes, handled by multiple entities, in multiple locations. A malfunction at any step in this data chain can lead to catastrophic failure. Here then are the most likely failure points we must guard against: the acquisition equipment, the streaming encoder, the uplink connection, the cloud-based streaming server and CDN, the web server hosting the landing page, and the end user’s own device and internet connection.
The above list identifies the broad categories we need to be aware of in order to develop an effective failsafe plan. Some of these issues, such as the cloud-based functions, are beyond our direct control. If a cloud-based service goes down, all we can do is re-route around it using an alternate path. If the end user is having problems, we have a tech support page and online assistance.
We need to anticipate what could go wrong and be prepared in advance with a work-around. Generally speaking, we need redundancy in the form of spare equipment, alternate service providers, and established troubleshooting techniques so we can respond quickly when something unexpected happens.
From years of real-world experience and quantitative analysis, we have developed a four-point failsafe prevention strategy to address these issues, namely:
In the event the Ethernet uplink goes down, our wireless 4G network has you covered (see the watchdog sketch after this list).
If the network or website host goes down, we will switch over to an alternate CDN/landing page.
We supply battery backup, spare cameras, encoders, cables, and other critical accessories to keep your broadcast up and running.
Here is where my scientific credentials and 24 years of broadcast experience pay off. As Chief Technical Officer of the company, it is my job to develop and implement the contingency plans and redundant systems that keep everything running smoothly.
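As a simple illustration of how the first point might be automated, here is a hedged sketch of an uplink watchdog. The health-check endpoint is hypothetical, and in practice this kind of failover is often handled by a bonding router, but the logic is the same.

```typescript
// Sketch of an uplink watchdog: probe the primary Ethernet path and fall back
// to the 4G backup if it stops responding. The probe URL is a hypothetical
// placeholder; the switchover action would depend on the actual hardware.
const PROBE_URL = "https://ingest.example-streaming-host.com/health"; // hypothetical
const CHECK_INTERVAL_MS = 5_000;
const TIMEOUT_MS = 3_000;

let onBackup = false;

async function primaryIsHealthy(): Promise<boolean> {
  try {
    const res = await fetch(PROBE_URL, { signal: AbortSignal.timeout(TIMEOUT_MS) });
    return res.ok;
  } catch {
    return false; // timeout or network error: treat the uplink as down
  }
}

setInterval(async () => {
  const healthy = await primaryIsHealthy();
  if (!healthy && !onBackup) {
    onBackup = true;
    console.warn("Primary uplink down: switching encoder route to 4G backup");
    // e.g. repoint the encoder's network route or restart it against a backup ingest URL
  } else if (healthy && onBackup) {
    onBackup = false;
    console.log("Primary uplink restored: switching back from 4G");
  }
}, CHECK_INTERVAL_MS);
```

The key design choice is to probe the actual path to the ingest server rather than just the local network, so the watchdog catches failures anywhere along the uplink, not only at the venue.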
Even though I have identified all of these possible calamities, there is no reason to lack confidence in the live streaming process. Hiring professionals who know what they are doing, maintain their equipment well, and have a contingency plan reduces the probability of failure to near zero. We want our clients to have peace of mind, and that is what Failsafe Protection is all about. Failsafe Protection is included with all of our multi-camera webcasting packages.