Mar 20, 2018

How Showmax got into live streaming

Jan Panáček

Part III - Trim the fat, not the meat

This is the third in our series of articles about how we made a radical departure from video-on-demand to live streaming - and how we engineered the move from scratch to production in less than three months.

This time, we’re focusing on video stream trimming - specifically, cutting a live stream that is still running to get rid of pre-roll.


Users don’t sign up for two hours of pre-roll. Ever.

Just to (re)set the scene - the use case for our move to live streaming wasn’t public access TV shot in grandma’s basement. We were bringing SNL Poland to a live audience, and it was as exciting as it was challenging. So, on with the show…

To ensure that both the broadcasting Teradek Cubes in the OB van and our encoders were working as expected, we agreed with ATM System to actually start the streaming pipeline a few hours in advance. The information about these new streams wasn’t published to the front-ends until about one hour before the start of the show, giving us enough time (theoretically, at least) to fix any issues.

However, that meant that anyone who would later tune in to the archived stream and wanted to watch it from the beginning would have a couple of hours of a static image (pre-roll). Not exactly a huge laugh-getter.

To avoid this, we needed to be able to trim this pre-roll while the video was still streaming, or very soon after that.

Luckily, Unified Streaming Platform (USP) had exactly what we needed: the Purge API. It seemed that all we had to do was let our content administrator watch the live stream, select the point in time she wanted to trim to, get the timestamp, and perform the trim as easily as:

curl -X POST https://usp-live01.showmax.com/l/test/test.isml/purge?t=02:00:00

We quickly enhanced our Content Management System, adding a video preview widget that could be used to seek to the trim point and initiate the trim operation on the USP. Happy ending, all good, problem solved…or not!

Time waits for no viewer

The mistake we made was simple (but not obvious). When the stream is still running and you trim by relative time only, your trim point (t=2:00:00) drifts away from the point where you really wanted to trim. The reason? Well, time keeps moving forward, so T minus 2 hours is a different moment now than it was when you started reading this sentence. The same goes for the time spent double-checking everything in the admin interface before clicking the trim button. It was very, very possible to trim actual feature content, not just the pre-roll.

The solution was simple: we stored the timestamp of the start of the stream and added the t=2:00:00 offset to that captured timestamp. Here, though, we hit a slight Purge API oddity. The trim point cannot be specified simply as t=YYYY-MM-DDTHH:MM:SS.ss - you need to specify an actual time range:

t=1970-01-01T00:00:00.000000-2017-12-08T13:21:57.124535

With this time range, we trim everything from the beginning of the UNIX epoch up to the point we need, down to the microsecond.
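For illustration, here is a minimal sketch (not our production code) of how that range can be built from the captured stream-start timestamp and the offset chosen in the admin interface. The stream-start value below is a placeholder consistent with the example above, and the requests library is assumed for the HTTP call:

from datetime import datetime, timedelta, timezone

import requests

# Placeholder: the timestamp captured when the stream pipeline started.
stream_start = datetime(2017, 12, 8, 11, 21, 57, 124535, tzinfo=timezone.utc)

# Offset selected by the content administrator in the preview widget.
offset = timedelta(hours=2)

trim_point = stream_start + offset
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# The Purge API wants a range, so we trim from the UNIX epoch up to the
# computed trim point, with microsecond precision.
t_range = '{}-{}'.format(
    epoch.strftime('%Y-%m-%dT%H:%M:%S.%f'),
    trim_point.strftime('%Y-%m-%dT%H:%M:%S.%f'),
)

requests.post('https://usp-live01.showmax.com/l/test/test.isml/purge?t=' + t_range)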

How can I count what I can’t see?

That brings us to the last issue with trimming pre-rolls. As mentioned in our first piece about live streaming, we did a couple of dry runs. We wanted to know what would happen if an encoder failed, if a broadcasting Teradek Cube failed, and so on. In the process, we created a lot of discontinuities (holes) in our HLS playlist.

Think of it like this: the full, archived video will display in the player as 20 minutes long, because the content in the live stream lasted 20 minutes. But during the stream there were two five-minute interruptions, so in real-world time the stream spanned 30 minutes. The difference between the start and end timestamps of the stream is not equal to the stream duration. So if you seek to position 00:10:00 in the player (10 minutes of content since the beginning) and say you want to trim this particular video to that point, the trim point is not <start time> + 10 minutes but <start time> + 15 minutes, because one of the five-minute holes falls before it and has to be accounted for. Here's a shot of our whiteboard at the moment we realized what was going on.

[Photo: our whiteboard at that moment]

In the end, we came up with a Django view serving as a JSON API endpoint that takes a starting timestamp (which we get from the hls.js player we use for the trimming preview) and the elapsed time. It fetches the /archive endpoint of the USP manifest and then iterates through each chunk of the archive, gathering its start and end timestamps:

import dateutil.parser

# trim_time: elapsed playback time selected in the preview player (timedelta)
# chunks: chunk elements from the USP /archive manifest, each carrying
#         'start' and 'end' attributes
for chunk in chunks:
    start = dateutil.parser.parse(chunk.attrib['start'])
    end = dateutil.parser.parse(chunk.attrib['end'])
    if (end - start) > trim_time:
        # The trim point falls inside this chunk.
        trim_timestamp = start + trim_time
        break
    else:
        # This chunk lies entirely before the trim point: consume its
        # duration and keep looking in the next one.
        trim_time -= (end - start)

With this, we take the time that someone from the content team selected as the start of the show and measure it only against the content the user was actually able to see, ignoring the holes in the playlist.
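To sanity-check the loop against the 20-vs-30-minute example above, here is a small self-contained run with synthetic chunks. The timestamps are made up; real chunk elements come from the USP /archive manifest:

from datetime import timedelta
from types import SimpleNamespace

import dateutil.parser

# Synthetic archive: 20 minutes of content split by two 5-minute holes,
# matching the whiteboard example above.
chunks = [
    SimpleNamespace(attrib={'start': '2017-12-08T12:00:00', 'end': '2017-12-08T12:05:00'}),
    SimpleNamespace(attrib={'start': '2017-12-08T12:10:00', 'end': '2017-12-08T12:20:00'}),
    SimpleNamespace(attrib={'start': '2017-12-08T12:25:00', 'end': '2017-12-08T12:30:00'}),
]

# The content team seeked to 00:10:00 in the preview player.
trim_time = timedelta(minutes=10)

trim_timestamp = None
for chunk in chunks:
    start = dateutil.parser.parse(chunk.attrib['start'])
    end = dateutil.parser.parse(chunk.attrib['end'])
    if (end - start) > trim_time:
        trim_timestamp = start + trim_time
        break
    trim_time -= (end - start)

print(trim_timestamp)  # 2017-12-08 12:15:00, i.e. stream start + 15 minutes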

In truth, all of the issues we encountered are quite logical - they just didn't look that way at first glance. Some traps need to be triggered to be seen.

Next up

In the last part of our series on live streaming, we’ll discuss lip-sync issues and what happens to ffmpeg after exactly 26.1 hours of streaming.
