The Next-Generation Network
Limits of CDNs exposed
The growth of the Internet requires more than current networking approaches can support.
The Internet, and particularly its core protocol stack, TCP/IP, is reaching the limits of its growth. This is evident in the increasingly common “spinning wheel of death” (buffering) and in ever more frequent global catastrophic failures of major CDN providers.
Emerging applications require even more
New applications such as industrial automation (IoT), machine vision, AR, gaming and 5G have latency, caching and processing requirements beyond the capability of most legacy, centralised, hierarchical networks. Even if we tried to rebuild those networks to cope, the cost would be prohibitive.
Next-generation CDN solutions – now
GT Systems has been researching and solving these problems for over a decade. We work with leaders in the field at CSIRO, Protocol Labs, Bell Labs and elsewhere to design and build next-generation content distribution network systems (ngCDN) that are hyper-scalable and interoperable. We call them Universal Content Distribution Networks (UCDN). We build our distributed, elastic, content-based, autonomous and self-optimising networks alongside, and interoperable with, traditional networks. We are working with global network equipment suppliers to add our SPAN-AI protocol stacks and capabilities to their systems.
SPAN-AI agents are independent and self-governing. Machine Learning (AI) capability allows local optimisation of all network and operational functions. Agent swarms use algorithms inspired by nature. Agents feed capability and real time status up to the global intelligence (see below).
Our powerful network modelling capability helps us deliver highly optimised next-generation network designs customised to customer requirements.
The study of networks has emerged in diverse disciplines as a means of analysing complex relational data. In post-graduate work at the University of Maryland, our head of R&D, Jaime Llorca, invented novel physics-inspired models for the optimisation and control of heterogeneous and dynamic networks. These were based on the innovative application of graph (edge-based) models to distributed virtual cloud network flows. These models formed the basis of his work at Bell Labs, where he developed the concept of a Software Defined Virtual Content Distribution Network (SDvCDN) for programmable, elastic, cloud-native CDNs. This was further refined in collaboration with GT Systems into our SPAN-AI architecture.
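To make the graph-flow idea concrete, here is a toy sketch (not GT Systems' or Bell Labs' actual models) of the kind of question such models answer: given a network where links carry transport costs and candidate nodes carry processing costs, where should a content flow's processing be placed? The topology, costs and node names below are entirely hypothetical.

```python
import heapq

def dijkstra(graph, src):
    """Cheapest transport cost from src to every reachable node."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical network: edge weights are transport costs, and "a" and "b"
# are candidate compute sites with per-request processing costs.
edges = {
    "src": {"a": 1.0, "b": 4.0},
    "a":   {"b": 1.0, "dst": 5.0},
    "b":   {"dst": 1.0},
}
processing_cost = {"a": 2.0, "b": 0.5}

# Reverse the graph so we can also compute each node's cost to the sink.
reverse = {}
for u, nbrs in edges.items():
    for v, w in nbrs.items():
        reverse.setdefault(v, {})[u] = w

from_src = dijkstra(edges, "src")
to_dst = dijkstra(reverse, "dst")

# Total cost of the flow if processing is placed at each candidate node.
placement = {
    n: from_src[n] + c + to_dst[n]
    for n, c in processing_cost.items()
    if n in from_src and n in to_dst
}
best = min(placement, key=placement.get)
```

Even this miniature version shows why edge-based graph models are a natural fit: transport and processing costs compose along paths, so placement becomes a shortest-path-style search rather than a blind enumeration.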
The algorithms and models have been refined through rigorous, peer-reviewed research projects at Bell Labs, NYU and elsewhere, spanning cloud/network modelling and optimisation, machine learning and data analytics, all using extremely detailed, real-world network data. The result is a set of models and optimisation algorithms that are robust and comprehensive. They cover all aspects of networking, including power consumption, an increasingly critical metric as hyper-scale data centres (including so-called “edge” data centres) consume more and more power.
Modelling a SPAN-AI network
The first stage of any SPAN-AI network is to model your optimum base design using extremely detailed data about your network. The more granular that data, the better the result. This gives us an initial answer to the question: “Where do we optimally put our switches, processors and storage?” Because SPAN-AI is elastic (virtual), this is just the starting point: network functions are separated from hardware and delivered optimally in real time. However, this greatly increases the complexity of the models. In some cases, the number of possible configurations grows so quickly that exact solutions become computationally intractable; such problems are known as NP-hard (non-deterministic polynomial-time hard). Fortunately, Jaime has developed approximation algorithms that solve these problems to within an acceptable degree of accuracy.
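The flavour of the placement problem, and of a polynomial-time workaround, can be sketched as follows. This is a generic greedy heuristic for a facility-location-style problem, not Jaime's actual approximation algorithms; the sites, regions and latencies are invented for illustration.

```python
# Hypothetical latency (ms) from each candidate site to each demand region.
latency = {
    "syd": {"r1": 5, "r2": 40, "r3": 60},
    "mel": {"r1": 15, "r2": 8, "r3": 45},
    "per": {"r1": 55, "r2": 50, "r3": 6},
}
regions = ["r1", "r2", "r3"]

def total_latency(sites):
    """Each region is served by its nearest chosen site."""
    return sum(min(latency[s][r] for s in sites) for r in regions)

def greedy_placement(k):
    """Pick k sites one at a time, each time taking the site that most
    reduces total latency. Checking every possible subset of sites blows
    up combinatorially (the exact problem is NP-hard); this greedy pass
    runs in polynomial time and is often close to optimal."""
    chosen = []
    candidates = set(latency)
    for _ in range(k):
        best = min(candidates, key=lambda s: total_latency(chosen + [s]))
        chosen.append(best)
        candidates.remove(best)
    return chosen

sites = greedy_placement(2)
```

With three sites the exhaustive search is trivial, but with thousands of candidate locations and regions the subset count explodes, which is exactly why approximations with provable accuracy bounds matter.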
Implementing the network
Once we have your base design and optimisation algorithms, we can then implement an intelligent SPAN-AI network. SPAN-AI agents use Machine Learning to locally optimise network content placement and processing decisions, based on local network conditions, in real time. Global training and optimisation pipelines optimise agent and global performance. Optimum configuration will vary from network to network and according to real time traffic and performance.
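A heavily simplified sketch of the agent behaviour described above: an agent observes local demand, keeps the hottest content cached within its capacity, and produces a status summary for upstream training pipelines. The class name, fields and capacity numbers are illustrative assumptions, not GT Systems' actual API.

```python
from collections import Counter

class SpanAgent:
    """Toy stand-in for a SPAN-AI agent: it locally optimises content
    placement from observed demand and reports status upstream."""

    def __init__(self, node_id, capacity):
        self.node_id = node_id
        self.capacity = capacity   # how many content items fit locally
        self.requests = Counter()  # locally observed demand
        self.cache = set()

    def observe(self, content_id):
        """Record a local request and re-optimise the local cache."""
        self.requests[content_id] += 1
        # Keep the top-N most requested items cached at this node.
        self.cache = {c for c, _ in self.requests.most_common(self.capacity)}

    def status(self):
        """Summary this agent would feed up to the global intelligence."""
        return {
            "node": self.node_id,
            "cached": sorted(self.cache),
            "hot": self.requests.most_common(3),
        }

agent = SpanAgent("edge-7", capacity=2)
for cid in ["movie-a", "movie-b", "movie-a", "movie-c", "movie-a", "movie-b"]:
    agent.observe(cid)
```

A real agent would of course weigh far more signals (link state, processing load, power), but the shape is the same: decide locally in real time, summarise globally.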
The art and skill lie in knowing where the optimum is at any given moment and adjusting the network in real time to match it. That is what we do.