Next-Generation Apps Enabled by SPAN
GT Systems CTO Rhett Sampson reviews the well-known problems of distributing live sport via the Internet and next-generation networking solutions.
Fundamentally, the Internet was not designed for live broadcast. There is no inherent Internet architecture or protocol for it. Telco engineers describe it as “threading an elephant through the eye of a needle”. Actually, they say that about video in general, but live sports video is worse. Much worse. It’s live and there’s a lot of movement! Often with a relatively small ball.
As always, plenty of stuff has been ‘grafted on’: RTMP, RTSP, ABR (DASH, HLS, etc.), FASP, WebRTC and SCTP, QUIC, HTTP/2… the list goes on. There’s a good discussion of the most common ones here: https://www.dacast.com/blog/video-streaming-protocol/
Some have gotten good results, up to a point: WebRTC and SCTP, for example. But none of them is really satisfactory. Why? Because they are all built on variations of TCP/IP or UDP, neither of which has any inherent support for live broadcast.
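To see just how ‘grafted on’ these schemes are, consider ABR: DASH and HLS are simply repeated HTTP fetches over TCP, with the player itself guessing which bitrate rung it can sustain from moment to moment. A minimal sketch of throughput-based rung selection follows; the bitrate ladder and safety margin are illustrative assumptions, not taken from any real player.

```python
# Hedged sketch: throughput-based ABR (adaptive bitrate) rung selection,
# the kind of logic DASH/HLS players layer on top of plain HTTP-over-TCP.
# Ladder values and the safety margin are illustrative assumptions.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]  # example bitrate ladder

def pick_rung(measured_throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest ladder bitrate the measured throughput can sustain,
    discounted by a safety margin; fall back to the lowest rung."""
    budget = measured_throughput_kbps * safety
    viable = [b for b in LADDER_KBPS if b <= budget]
    return max(viable) if viable else LADDER_KBPS[0]

# A player measuring ~4 Mbps budgets 3.2 Mbps and picks the 2.5 Mbps rung.
print(pick_rung(4000))  # -> 2500
```

When the network dips, the player drops a rung and the picture softens: that is the ‘ABR fuzziness’ discussed below, baked into the design.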
But that’s just the start. We’ll come back to it, but there are much bigger problems. Basically, it’s a problem of capacity and architecture. In real time.
No network in the world is sized for peak load, and live video is exactly that. They can’t afford to be. The Tour de France, World Cup, Super Bowl, your local sports code grand final and even esports. All are global or local peaks. Even global CDNs like Akamai, Amazon, Google or Microsoft Azure have limits. And they don’t talk to each other. Which is why broadcasters have often turned to ‘multi-CDN’ solutions.
Every single one of these networks is centralised, hierarchical and fixed. The CDNs claim they put servers at the edge, but that depends on what you define as the edge.
What this means is that the servers are still a long way (in network terms) from the audience. And everyone is trying to hit a small group of servers at the same time across the same networks. Both servers and networks get hammered and we see lag and ABR fuzziness at best and buffering (the ‘spinning wheel of death’) at worst. And that’s with HD. Try it with 4K or 8K.
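The arithmetic is brutal. A back-of-envelope calculation makes the point; the per-stream bitrates below are rough, typical figures used only as assumptions, not measurements.

```python
# Back-of-envelope: aggregate demand when a large audience tunes in at once.
# Per-stream bitrates are rough, typical assumptions:
# ~5 Mbps HD, ~20 Mbps 4K, ~80 Mbps 8K.

BITRATE_MBPS = {"HD": 5, "4K": 20, "8K": 80}

def aggregate_tbps(viewers: int, quality: str) -> float:
    """Total egress in terabits per second if every viewer pulls a unicast stream."""
    return viewers * BITRATE_MBPS[quality] / 1_000_000  # Mbps -> Tbps

for quality in BITRATE_MBPS:
    print(quality, aggregate_tbps(10_000_000, quality), "Tbps")
```

Ten million concurrent HD viewers is already around 50 Tbps of unicast egress; 4K quadruples it and 8K pushes it toward 800 Tbps, all aimed at the same small group of servers.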
The solution
So what’s the answer? This is where we come back to the underlying architecture and protocol. What if we could start again? What would that look like? Suspend disbelief for a moment. Bearing in mind the problems above, and after giving it (a lot of) thought, it might look something like this:
- It would be fully distributed. Processing, storage (caching) and switching/routing would be distributed throughout the network, where they are most needed and best utilised.
- Content would be distributed and cached where it is most needed. Counter-intuitively, it turns out this is more efficient than big, centralised servers and switches. A lot more efficient. Both in terms of cost and performance.
- It would support live broadcast intrinsically.
- It would be virtual (elastic). This would enable reconfiguration and optimisation in real time and eliminate wasted infrastructure.
- It would enable delivery of virtual services anywhere in the network.
- It would be autonomous. The Internet has become too complex for humans to operate, even as it is. It needs a safe and secure machine learning system to run it. The “I” in network AI isn’t that complex, once you have the models and optimising algorithms.
- It would be self-optimising, both locally and globally.
- It would be 100% compatible and interoperable with existing TCP/IP networks.
- It would be open and intelligently inter-operable with any network using the architecture and protocol stack.
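Why is distributed caching more efficient than big, centralised servers? A toy model shows the shape of the argument. The hit ratio and bitrate below are assumed values, not SPAN internals, and the model ignores cache-fill traffic (which for a live stream is one copy per edge cache rather than one per viewer).

```python
# Illustrative model (assumed values, not SPAN internals): core-network load
# when content is served centrally vs cached at the edge.

def core_load_gbps(viewers: int, bitrate_mbps: float, edge_hit_ratio: float) -> float:
    """Traffic that must cross the core network. Cache misses
    (1 - hit ratio) travel back to the origin; hits are served locally."""
    misses = viewers * (1.0 - edge_hit_ratio)
    return misses * bitrate_mbps / 1000  # Mbps -> Gbps

central = core_load_gbps(1_000_000, 5, edge_hit_ratio=0.0)   # no edge caching
edge    = core_load_gbps(1_000_000, 5, edge_hit_ratio=0.95)  # 95% served at edge
print(central, edge)  # roughly 5000 vs 250 Gbps crossing the core
```

A million HD viewers served centrally push ~5 Tbps across the core; with 95% of requests answered at the edge, that drops twenty-fold, and the remaining load is spread across many small caches instead of one hot cluster.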
This is our architecture and protocol stack, SPAN-AI. It can be used to build inter-operable Universal Content Distribution Networks (UCDNs) for live sport. And for the coming next-gen apps: industrial automation, machine vision, AR, esports and gaming.
Sound too good to be true? Get in touch. I’d love to have a chat, even just to debate any of the above.