GT Systems is putting AI inside the network. We’ve been doing it since 2014.
Internet Protocol (IP) is as dumb as a rock. The problem with that is, you can’t do AI on a rock.
When the Internet was invented, it consisted of four “mainframes” (computers) connected to each other via Interface Message Processors (IMPs); users reached them through telex-machine-like “teletype” terminals. Computers were nose-bleed expensive and actually not that smart by today’s standards. A mainframe was less powerful than a $100 Raspberry Pi hobby computer today. Teletypes were arcane, nightmarish, electro-mechanical devices that were expensive and horrible to maintain, but still vastly cheaper than computers. All they could do was print numbers and letters on a page. The IMPs of the original Internet were minicomputers, roughly PC-class by later standards, and still very expensive. As a result, the smarts were left out of the network and put at the application level in the mainframe. This would have far-reaching consequences.
Intel did not invent the microprocessor until 1971. It, and its successors, would become the backbone of the electronic age, and in particular of the routers, switches and low-cost terminals on which the current generation of the Internet is built. The trouble is, those extremely powerful routers and switches were built on the original Internet architecture. That’s like paving over horse-and-cart tracks to make roads for Ferraris.
IP is just “dumb bits down a pipe” between two routers. It is literally about as smart as two tin cans and a piece of string. You can make the tin cans (routers) as smart as you like. They still have to talk over a piece of string. You can’t make intelligent decisions if you don’t know what is going down the string/pipe. It could be Batman the Movie or shopping lists. Never mind having to repeat it over multiple pieces of string to reach your destination.
The World Wide Web (WWW) wasn’t invented until 1989. A good idea at the time, it has brought fresh issues. The explosion of centralised websites, and particularly video streaming sites, is straining the limits of the web and IP. Massive congestion almost caused the Internet to fail in the late 1980s and early 90s. Van Jacobson’s congestion-control algorithms for the Transmission Control Protocol (TCP), coupled with route aggregation, enabled it to survive. Van also saw the writing on the wall and pioneered Information Centric Networking.
GT Systems is leading two fundamental transitions. The first is from “dumb bits down a pipe” (IP) to Information Centric Networking (ICN). ICN puts content at the centre of network communications, vastly improving efficiency and quality. Content is referred to by name, e.g. /batman/; it is moved around by “interest requests”, e.g. /rhett/batman/bondi/; and it can be cached anywhere in the network. Once you know what you are moving, where, and for whom, you can make intelligent decisions about it. The network becomes the computer and the distinction between network and server just goes away. That is the second transition.
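To make that concrete, here is a rough sketch in Python of how an ICN-style node might satisfy an interest request from its own cache, or pull the content from upstream and cache it on the way through. The names (Node, express_interest and so on) are illustrative only; this is not our implementation or any particular ICN stack.

```python
# Minimal sketch of an ICN-style node: content is requested by name,
# and any node along the path may answer from its cache.
# All names here (Node, express_interest, etc.) are illustrative, not a real API.

class Node:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream      # next node towards the publisher
        self.cache = {}               # content name -> bytes

    def publish(self, content_name, data):
        """Publishers seed content once; the network caches it as it diffuses."""
        self.cache[content_name] = data

    def express_interest(self, content_name):
        """Return the named content, caching it locally as it passes through."""
        if content_name in self.cache:             # cache hit: answer locally
            return self.cache[content_name]
        if self.upstream is None:                   # nowhere left to ask
            raise LookupError(f"no route to content {content_name!r}")
        data = self.upstream.express_interest(content_name)
        self.cache[content_name] = data             # cache on the way back down
        return data


# Example: /batman/ is published once; the edge node caches it after the
# first interest, so the second request never leaves the neighbourhood.
core = Node("core")
edge = Node("bondi-edge", upstream=core)

core.publish("/batman/", b"<video bytes>")
edge.express_interest("/batman/")   # fetched from core, cached at the edge
edge.express_interest("/batman/")   # served straight from the edge cache
```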
The most fundamental intelligent decision is caching (as opposed to buffering) in the network. If you optimise where you cache, you optimise the performance of the network, significantly reducing load and vastly improving quality and response time. Another is Quality of Service/Experience (QoS/QoE). Publishers can specify the level of QoS they want for their customers. Customers can also specify the level they want. The network sorts it all out, intelligently.
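As a toy illustration of that caching decision, here is a simple counting heuristic in Python that places copies where the demand is. A real scheduler would also weigh link cost, storage and the QoS targets just described; everything here, including the function names, is assumed for the sake of the example.

```python
# Toy cache-placement heuristic: put copies where the demand is.
# A real scheduler would also weigh link cost, storage and QoS targets.
from collections import Counter

def place_caches(request_log, copies=2):
    """request_log: list of (content_name, location); return cache sites per content."""
    demand = Counter(request_log)                  # (content, location) -> hits
    placements = {}
    for (content, location), hits in demand.items():
        placements.setdefault(content, []).append((hits, location))
    return {
        content: [loc for _, loc in sorted(sites, reverse=True)[:copies]]
        for content, sites in placements.items()
    }

log = [("/batman/", "bondi"), ("/batman/", "bondi"),
       ("/batman/", "manly"), ("/shopping-list/", "cbd")]
print(place_caches(log))
# {'/batman/': ['bondi', 'manly'], '/shopping-list/': ['cbd']}
```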
There are a number of far-reaching and not immediately obvious consequences of this. The first is that content producers no longer need to worry about distribution and storage. They publish once to the network. The network takes care of caching and distribution to meet QoS levels. Publishing requests and QoS specs take the form span://publish/disney/batman/99.999/90days/ and subscribers request content in the form span://subscribe/rhett/batman/bondi/99.9/, where 99.999 and 99.9 are the requested quality levels.
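Here is a minimal sketch of how such requests might be pulled apart in code. The field order follows the two examples above (verb, principal, content, then quality and retention or location), but the parser itself and its field names are purely illustrative, not a published spec.

```python
# Sketch of parsing the span:// publish/subscribe forms quoted above.
# The field names and ordering are inferred from the examples in the text;
# this parser is illustrative only, not a published specification.

def parse_span(uri):
    assert uri.startswith("span://"), "expected a span:// URI"
    parts = [p for p in uri[len("span://"):].split("/") if p]
    verb = parts[0]
    if verb == "publish":
        publisher, content, quality, retention = parts[1:5]
        return {"verb": verb, "publisher": publisher, "content": content,
                "quality": float(quality), "retention": retention}
    if verb == "subscribe":
        subscriber, content, location, quality = parts[1:5]
        return {"verb": verb, "subscriber": subscriber, "content": content,
                "location": location, "quality": float(quality)}
    raise ValueError(f"unknown span verb: {verb}")

print(parse_span("span://publish/disney/batman/99.999/90days/"))
print(parse_span("span://subscribe/rhett/batman/bondi/99.9/"))
```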
The astute observer will have grasped that, as well as being a new network paradigm, this is also a new programming paradigm. We are programming the network that is the computer. Instead of application developers having to keep track of application and client state and store and send data to the “client”, the network takes care of everything. Data follows interest. In the words of Van Jacobson: “operationally what happens is content diffuses from where it was created to where the consumers are. In that diffusion, given that there’s enough memory in the intermediaries, you guarantee pretty much maximum efficiency of the distribution, which is that any piece of the content will take, at most, one trip over any given link… bits in a wire, bits on a disk, and bits in a memory are indistinguishable; they’re all just storage. You get them all the same way: you give a name, you get back bits”.
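To the application developer, that reduces to a one-line idea: give a name, get back bits. A minimal sketch of what that might look like, assuming a hypothetical NetworkCloud handle (not an existing API):

```python
# The application's view: give a name, get back bits. Where those bits live
# (wire, disk, memory, nearby cache) is the network's problem, not the app's.
# `NetworkCloud` and `get` are hypothetical names used for illustration only.

class NetworkCloud:
    """Stand-in for the intelligent network: resolves names to bits."""
    def __init__(self, store):
        self.store = store

    def get(self, name):
        return self.store[name]

def play(network, viewer, title, location):
    # The app keeps no client state and addresses no servers; it just names data.
    return network.get(f"/{viewer}/{title}/{location}/")

net = NetworkCloud({"/rhett/batman/bondi/": b"<video bytes>"})
print(play(net, "rhett", "batman", "bondi"))
```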
This is going to require a different view of how and where applications run and how they are programmed. Applications will run in the “network cloud” and data will live “on the wire” or in cache/memory wherever it is most needed. Data follows interest. The intelligent network keeps track of network state globally and locally. It distributes storage and caching to meet QoS specifications against forecast load. It tells consumers whether it can meet their QoS specifications and, if not, proposes cheaper alternatives.
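For a flavour of that negotiation, here is a toy admission check: given the quality a subscriber asks for and the headroom the load forecast allows, the network either grants the request or offers the best cheaper tier it can. The tiers and thresholds are invented for illustration.

```python
# Toy admission check: can the network meet a requested quality level given
# forecast load, and if not, what cheaper tier can it offer?
# Tier values and the headroom model are invented for illustration.

TIERS = [99.999, 99.99, 99.9, 99.0]   # offered quality levels, best first

def admit(requested_quality, forecast_headroom):
    """forecast_headroom: highest quality the forecast load allows us to honour."""
    if requested_quality <= forecast_headroom:
        return {"granted": True, "quality": requested_quality}
    alternatives = [t for t in TIERS if t <= forecast_headroom]
    return {"granted": False,
            "offer": max(alternatives) if alternatives else None}

print(admit(99.999, forecast_headroom=99.9))   # {'granted': False, 'offer': 99.9}
print(admit(99.0,   forecast_headroom=99.9))   # {'granted': True, 'quality': 99.0}
```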
There is no limit to how smart the network can get. Reinforcement learning trains the network to deal with load and reconfigures it to meet demand. The network becomes autonomous, self-sustaining and self-optimising. It truly is the network of the future.
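For a flavour of what “the network trains itself” could mean, here is a minimal reinforcement-learning sketch: an epsilon-greedy agent that tries cache configurations, observes a simulated hit rate, and learns which configuration to prefer. Real systems would be far richer; everything here, from the action names to the rewards, is a toy.

```python
# Minimal reinforcement-learning flavour: an epsilon-greedy agent picks a cache
# configuration, observes a reward (here, a simulated cache hit rate under load),
# and learns which configuration to prefer. All values are made up for the toy.
import random

actions = ["cache-at-edge", "cache-at-core", "cache-both"]
value = {a: 0.0 for a in actions}    # running reward estimate per action
count = {a: 0 for a in actions}

def simulated_hit_rate(action):
    # Stand-in for measuring the live network under the chosen configuration.
    base = {"cache-at-edge": 0.80, "cache-at-core": 0.60, "cache-both": 0.90}
    return base[action] + random.uniform(-0.05, 0.05)

random.seed(0)
for _ in range(500):
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    r = simulated_hit_rate(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]   # incremental mean update

print(max(value, key=value.get))   # converges towards "cache-both" in this toy
```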