Jiří Brunclík in SCRIPTease: ‘I like it when the job is a true challenge’
The SCRIPTease podcast from Lolo.team is a great look into the world of engineering, software development, and the technical side of…everything. Jiri Brunclik, Showmax Engineering’s Head of R&D, was the first-ever guest on the podcast back in March and we want to give non-Czech speakers a wrap-up of the conversation and some insights from our in-house philosopher.
Jiri has been at Showmax Engineering longer than anyone else, and he’s been working with software since he was a teenager. After stints at Sun Microsystems and Google, he came back to the Czech Republic to work on what would eventually become Showmax Engineering.
Initially, Jiri was invited by the Lolo.team and CTO Jiří Bachel for a development-focused discussion open only to employees. However, when he showed up, he was surprised to see a full recording setup, with the audio ready to roll. The good news is that he’s unflappable, and SCRIPTease got a great inaugural podcast out of the deal.
This is an adaptation of Jiri’s interview on SCRIPTease, translated and edited for clarity. You can listen to the full episode here (again, it’s in Czech), or on any major podcast platform:
SCRIPTease: What company had the biggest impact on you and your career?
Jiri B.: I think that would be Google. I worked in Dublin for two years as a Search Site Reliability Engineer - I held the pager for Google.com, and managed deployments, roll-outs, automation, monitoring, load testing, and more.
I got the job through a referral program. A friend of mine worked there and wanted the bonus - and he bothered me about it as much as he could. I eventually gave in, but I was still attending university, so I had to finish my degree before I could start.
ST: You must have celebrated.
JB: The joy was great, you’re right. However, the greatest professional joy I’ve ever felt - it was almost a state of euphoria, really - was when I got my first-ever job offer from Sun Microsystems while I was still in high school.
ST: From Google you moved to ICFLIX?
JB: Yes. I got together with Antonín Král, whom I had known for some time before I moved to Ireland, in fact. He had offered me a job with Nangu.TV - an offer I declined in order to take the Google job.
When I returned from Dublin, I was itching to work with big, distributed systems where you have opportunities to work with complex issues and do performance tuning. At the time, it was hard to find the right professional fit here in the Czech Republic. Then I saw a tweet from Antonín saying that he wanted to build a streaming video platform for the Middle East and Africa, and I was all-in. It was nuts. He didn’t have a single line of code, let alone an office or anything like that, and I had a mortgage and a new baby to provide for.
ST: So, you expected a big challenge. What was the reality?
JB: ICFLIX was launched with two investors from Dubai who were, let’s say…hard to get on with sometimes. I joined the project in April and soon learned that the platform was to be live by Ramadan in June.
One of the two investors literally took us aside and let us know that we had to make the deadline because his reputation was at stake - this was clearly more important than anything having to do with our lives. But, we got to work, hired two more colleagues, and worked with Mautilus to build an SVOD platform - and the attendant apps - on time.
Unfortunately, the project was not a commercial success. After two years, it had just 2,000 subscribers. We built it to handle 200,000.
ST: What happened when the business failed?
JB: Naspers came in and bought the platform, and took over the Czech team of engineers. Naspers, at that time still the Multichoice “mothership” and the biggest supplier of satellite TV in sub-Saharan Africa, was aware that satellite TV was simply not the future.
They wanted to start moving customers from satellite to the internet, and chose to acquire their way into the space rather than build in-house. So, that’s what they did. We re-branded to Showmax, the ICFLIX engineering team joined, and…well…here we are.
ST: You’ve held several positions at Showmax - DevOps, Head of Infrastructure, VP of Engineering. Now you’re Head of R&D. What does that mean, exactly?
JB: Our parent company, Multichoice Group (MCG), has two platforms - Showmax and DStv Now - with more or less the same features and strengths/weaknesses. In our context R&D means designing the overall architecture of our next-generation platform - basically keeping all the good things from both existing platforms and working out the rest.
ST: What programming languages did you choose for the backend?
JB: That’s an interesting story with real-world implications. Since we only had three months to build the platform, we used what we knew best - Python for me and Ruby for Antonín. Right now, our backends are written partially in Ruby and partially in Python, and we have two backend teams. For performance optimization, we’ve started to rewrite the code in Go.
It’s fast, and there is a large, active community around it, which makes it relatively easy to find high-quality libraries.
Last year Showmax launched live sports streaming, so within ten minutes we have to handle hundreds of thousands of users arriving to watch all at once. We’ve found some success with an approach that certainly gets close to being fake-it-til-you-make-it: before big events we disable personalization and serve everybody the same generic homepage from caches. But with subscription management this is not possible, as we need to check each and every user’s eligibility to start playback.
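To make the idea concrete, here is a minimal sketch in Go of what an event-mode homepage endpoint could look like. The flag, handler names, and response bodies are invented for illustration and are not Showmax’s actual code: when event mode is on, every user gets the same cacheable generic homepage; otherwise the response is personalized and marked as non-cacheable.

```go
package main

// Hypothetical sketch of "serve everybody the same generic homepage during
// big events". eventMode, genericHome, and buildPersonalizedHome are
// illustrative names, not real Showmax code.

import (
	"net/http"
	"sync/atomic"
)

var (
	// Flipped on by operators shortly before a big live event.
	eventMode atomic.Bool
	// Pre-rendered homepage shared by every user while eventMode is on.
	genericHome = []byte(`{"rows":["top10","sports","new-releases"]}`)
)

func homeHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")

	if eventMode.Load() {
		// Same body for everyone plus a long max-age, so the HTTP cache
		// layer can answer most requests without touching the backend.
		w.Header().Set("Cache-Control", "public, max-age=300")
		w.Write(genericHome)
		return
	}

	// Personalized responses must never be shared between users.
	w.Header().Set("Cache-Control", "private, no-store")
	w.Write(buildPersonalizedHome(r))
}

// buildPersonalizedHome stands in for the real recommendation pipeline.
func buildPersonalizedHome(r *http.Request) []byte {
	return []byte(`{"rows":["continue-watching","recommended-for-you"]}`)
}

func main() {
	eventMode.Store(true) // pretend a big match is about to start
	http.HandleFunc("/home", homeHandler)
	http.ListenAndServe(":8080", nil)
}
```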
ST: Well, that’s a lot of users. How do you tackle the sudden and exponential increase in incoming traffic?
JB: We built the system to maximise delivery from cache. We have Varnishes on the frontend servers that work as an HTTP cache, and requests are served from cache whenever possible. If a response isn’t cached, the request is resolved by the backend and the reply is stored in the cache. We optimise the process with techniques like routing requests for the same URL to the same Varnish, to maximise the chance that the response is already in its cache. We set long max-age values to keep objects available longer. And we built a framework to ease cache invalidation: if somebody changes any profile information, a single call goes out to all Varnishes and invalidates the relevant caches.
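As an illustration of that “one call fans out to all Varnishes” idea, here is a minimal sketch in Go. The node addresses, the BAN method, and the X-Ban-Url header are assumptions for the example - they only work if the Varnish VCL on the other side is written to accept them - and the sketch is not Showmax’s actual invalidation framework.

```go
package main

// Minimal sketch of a cache-invalidation fan-out: when a profile changes,
// one call sends an invalidation request to every Varnish node.

import (
	"fmt"
	"net/http"
	"time"
)

// varnishNodes would normally come from configuration or service discovery.
var varnishNodes = []string{
	"http://varnish-1.internal:6081",
	"http://varnish-2.internal:6081",
}

var client = &http.Client{Timeout: 2 * time.Second}

// InvalidateProfile asks every Varnish to drop cached objects whose URL
// matches the given user's profile endpoints.
func InvalidateProfile(userID string) error {
	pattern := fmt.Sprintf("^/users/%s/", userID)
	for _, node := range varnishNodes {
		req, err := http.NewRequest("BAN", node, nil)
		if err != nil {
			return err
		}
		// The VCL on the Varnish side turns this header into a ban expression.
		req.Header.Set("X-Ban-Url", pattern)
		resp, err := client.Do(req)
		if err != nil {
			return fmt.Errorf("ban on %s failed: %w", node, err)
		}
		resp.Body.Close()
	}
	return nil
}

func main() {
	if err := InvalidateProfile("42"); err != nil {
		fmt.Println(err)
	}
}
```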
ST: How do you test backends?
JB: We have a few backends covered with unit tests, and we use integration tests on payments to make sure we correctly set the subscription after a credit card or PayPal payment.
One time, one of the payment methods was down for three days. Our aim is to have the basic flow through the app, including registration and login, covered with tests to avoid situations like this. On the frontend we use end-to-end tests, for example, to test scenarios before releasing a new website or a new app on smart TVs.
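For a flavor of what such a payment integration test might look like, here is a hypothetical sketch in Go: after a simulated “payment succeeded” callback, the user’s subscription should be active. The endpoints, test instance URL, and JSON shapes are invented for illustration and are not Showmax’s real API.

```go
package payments_test

// Hypothetical integration test: a successful card-payment callback should
// result in an active subscription for the user. All names are illustrative.

import (
	"bytes"
	"encoding/json"
	"net/http"
	"testing"
)

const baseURL = "http://localhost:8080" // assumed test instance of the billing backend

func TestSubscriptionActivatedAfterCardPayment(t *testing.T) {
	// Simulate the payment provider's "payment succeeded" callback.
	callback := map[string]string{"user_id": "42", "status": "succeeded"}
	body, _ := json.Marshal(callback)
	resp, err := http.Post(baseURL+"/callbacks/card", "application/json", bytes.NewReader(body))
	if err != nil {
		t.Fatalf("callback failed: %v", err)
	}
	resp.Body.Close()

	// The subscription for that user should now be active.
	resp, err = http.Get(baseURL + "/users/42/subscription")
	if err != nil {
		t.Fatalf("subscription lookup failed: %v", err)
	}
	defer resp.Body.Close()

	var sub struct {
		Status string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&sub); err != nil {
		t.Fatalf("decoding response: %v", err)
	}
	if sub.Status != "active" {
		t.Errorf("expected active subscription, got %q", sub.Status)
	}
}
```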
ST: What about penetration tests?
JB: We use HackerOne, a paid service that enlists white-hat hackers to perform penetration tests and other security testing for companies. The bug bounties can run into the hundreds of thousands of dollars, but it’s worth it.
ST: Where do you host your hardware?
JB: At the beginning, when we were still ICFLIX, we had one credit card with $5,000/month on it for all of our infrastructure costs. So, to achieve maximum cost efficiency, we used bare-metal servers provided by Hetzner, and we’ve stuck with them so far. We have servers in two different cities, spread across several data centers.
ST: What about CDNs?
JB: CDNs are important because we need the content to be as close to the consumer as possible. Sometimes we use commercial CDNs, but, for bigger markets, we build our own CDN edge - we buy our own servers and build our own data center. This approach delivers a better-quality product to customers, with less buffering and higher streamed bitrates.
ST: What’s the one big thing you’re working on right now?
JB: My goal is to get us to scale. With major live sports, traffic peaks are not only much higher, but also more sudden. At the same time, the number of such events throughout the year is relatively low. In terms of costs, it doesn’t make sense to provision the capacity to handle them for extended periods of time. So, I’m working on our next-gen platform and on how to run it on top of public clouds so we can leverage their auto-scaling capabilities.