Streaming Video — How It Works. A Vimond Media Solutions Focus
by Andreas Helland, Chief Commercial Officer, Vimond Media Solutions

Broadcast was once the exclusive domain of experts. Then came the Internet. Having worked in the broadcast and OTT (Over-The-Top) industries for the past few decades, Vimond has witnessed how IP technologies have democratized the ability to produce and distribute video.

This sounds simple. You shoot the video, you edit the video, and you upload the video to the cloud for publishing. Then come the questions. How can you make sure the video is available to everyone? What about the numerous devices the video needs to reach? Where is the content published? How is the content secured? The list goes on and on, and Vimond has fielded such inquiries for a long time.

If you need to know more about streaming video, whether to keep up with the OTT competition or to understand what is happening within your own company or organization, here is a quick tutorial on the process, starting with some basics.

Compression, Formats, Codecs, and Containers
Video is a sequence of images in which the color of every pixel is defined for each frame. However, if every image were stored in full, the video would be far too large to stream within acceptable time limits.

A full-length, uncompressed, progressive HD movie, with 10-bit color, 1920x1080 pixels, and at 25 frames per second (fps), translates into a 1.4 terabyte file. To view such a video without buffering, viewers would need to sustain a data rate of about 2 Gbps. (The arithmetic behind these figures is sketched after the list below.) The answer to such a dilemma is compression, which comes in two basic flavors:

• Lossless compression allows the images to be fully restored after decompression, but is CPU-intensive
• Lossy compression reduces the size of the original file dramatically by simplifying the image or removing detail
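
The back-of-the-envelope arithmetic behind the figures above, assuming three color samples per pixel and a two-hour runtime, runs as follows:

1920 × 1080 pixels × 3 color samples × 10 bits ≈ 62 Mbit per frame
62 Mbit per frame × 25 fps ≈ 1.6 Gbps uncompressed
1.6 Gbps × 7,200 seconds ÷ 8 bits per byte ≈ 1.4 terabytes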

Video formats are specifications for compressing video into a stream. Examples include the MPEG family (MPEG-1, MPEG-2 and MPEG-4), H.264 (MPEG-4 AVC) and H.265 (HEVC).

Codecs (coder-decoders) are implementations of a specific video format. They use algorithms to shrink the size of the video file and to decompress that file on playback. Examples include x264, x265, DivX, Xvid, WMV, VP3 through VP9, Sorenson, Blackbird, Dirac, libtheora, Huffyuv and 3ivx; tools such as ffmpeg bundle many of these codecs.
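
To make the distinction concrete, here is a minimal sketch of encoding a file to the H.264 format using the x264 codec (via its libx264 library) with the ffmpeg command-line tool; the file names are hypothetical:

# Encode a hypothetical source to H.264 video (x264 codec) and AAC audio,
# wrapped in an MP4 container. CRF 23 is a common quality-versus-size
# trade-off for x264.
ffmpeg -i mezzanine.mov -c:v libx264 -crf 23 -c:a aac encoded.mp4

The same H.264 format could be produced by a different codec; the format defines the stream, while the codec is the machinery that creates and decodes it.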

Video formats and codecs are constantly being improved and updated as better hardware is developed and new devices come to market, and most of all because the viewing public demands faster and higher resolution imagery.

Video container (or wrapper) formats define how elements coexist in a stored file, not how the video or audio itself is coded. Containers usually hold multiple, interrelated video and audio tracks. Individual tracks can carry metadata, such as aspect ratio or language, and the container itself can carry metadata, such as the video title, cover art, episode numbers, subtitles, and so on.

Because playback requires the same codec with which the material was coded, many video containers also identify the codec for each track. Examples include MP4, Microsoft (AVI, ASF, WMV), Google WebM, Apple (M4V, MOV), Adobe Flash (FLV), Matroska (MKV), Ogg, 3GP, DivX and RM.
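
One way to see a container's contents for yourself is to inspect a file with ffprobe, the analysis tool that ships with ffmpeg (the file name here is hypothetical):

# List the tracks (streams) inside a container, along with their codecs
# and metadata, plus container-level format information.
ffprobe -v error -show_format -show_streams movie.mp4

Typical output includes a video stream (for example, codec_name=h264, width=1920), one or more audio streams tagged by language, and container-level tags such as the title.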

Transcoding and Streaming
In a multi-screen world, the video must be scaled to fit different devices. With transcoding, the video is adapted to the size of the device and the bitrate (bits per second of video) is adjusted to cap the amount of data to be transferred. The different streams are then packaged into the same container format.
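
As a sketch of what that looks like in practice, a single ffmpeg invocation can produce several renditions of one source at different resolutions and bitrates (the file names and bitrate ladder here are illustrative, not a recommendation):

# Produce three H.264 renditions from one source, each scaled and
# capped at a different bitrate for different devices and connections.
ffmpeg -i master.mov \
  -c:v libx264 -c:a aac -s 1920x1080 -b:v 5000k out_1080p.mp4 \
  -c:v libx264 -c:a aac -s 1280x720 -b:v 3000k out_720p.mp4 \
  -c:v libx264 -c:a aac -s 640x360 -b:v 1000k out_360p.mp4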

Streaming today uses adaptive bitrate (ABR) techniques: the stream is broken down into a sequence of small HTTP-based file downloads, each containing a short segment of the overall transport stream. Alongside the segments sits a manifest, which contains timing and quality data and a list of the other available streams.

At the start of the session, the client downloads the manifest, an extended playlist containing the metadata for the various sub-streams that are available. As the stream is played, the client may switch among the different quality streams to adapt to the available connection speed. Examples include MPEG's Dynamic Adaptive Streaming over HTTP (MPEG-DASH), Apple's HTTP Live Streaming (HLS) and Microsoft's Smooth Streaming.
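
In HLS, for example, the manifest (master playlist) is a plain-text file that advertises the available quality levels; the client picks a variant and then moves up or down the list as its measured bandwidth changes. A minimal illustrative playlist (the paths and bandwidth figures are made up) looks like this:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1000000,RESOLUTION=640x360
360p/index.m3u8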

In addition to the standards, software (and hardware) is required that can take the original video file, fragment that file and then smoothly deliver the content. This is the job of a streaming (aka origin) server. Examples include offerings from Unified Streaming, Wowza, Adobe Media Server, Apache, Nginx Plus and others.
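
At its simplest, an origin can be an ordinary web server configured to serve pre-packaged manifests and segments with the right content types. A minimal nginx sketch (the paths are hypothetical) might look like:

# Serve pre-packaged HLS playlists and segments as static files.
location /vod/ {
    root /var/media;
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
}

Dedicated origin servers layer just-in-time packaging, content protection and failover on top of this basic file-serving role.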

Distribution, VOD vs. Live
In a standard Video on Demand (VOD) pipeline, the video is prepared beforehand and is not time critical. For example, content and metadata are ingested from various sources and stored in the cloud. The content is archived in the selected format. Content is then distributed to the CDN (Content Delivery Network), while the online data-storage or hosting service ensures availability.

In a VOD scenario, the video, the container, the format and everything else needed for delivery to customers is prepared ahead of time. For live video, the customer is attached to a potentially endless stream of data. In either case, customers' access lines vary enormously, and the goal of the service provider is to balance the amount of buffering and the availability of content with acceptably high video quality.

For live events, however, timing and synchronization are extremely important. Content must reach the end-user as soon as possible, and redundancy must be designed into the system so that any failover goes unnoticed by the end-user. The importance of synchronized time codes to redundancy cannot be over-emphasized: customers typically pay a premium to watch live video and are unhappy with interruptions to the stream.

Continue to Learn
In satellite content distribution, the video expert has typically been someone with deep knowledge of industry standards, such as DVB or MPEG. In the streaming video world, the technology domain is broader and more dynamic. The new expert must understand multiple and evolving formats, codecs, containers, transcoders, streamers and CDNs — and, moreover, must know how these and other technologies are deployed to deliver on-demand and time-sensitive live video.

Becoming a streaming video expert may not be necessary. However, given the proliferation of OTT video, even within the satellite industry, and the way OTT cuts across traditional video, networking, IT and business silos, satellite OTT providers do need to become more familiar with this category.

A little learning, in this case, is certainly a good thing.
www.vimond.com/