Network: From Hardware Past To Software Future

At this year’s GigaOm Structure conference, one session attracted my interest the most - the network virtualization panel (I didn’t attend the conference, I was only following along over the Internet). It wasn’t just because it involved OpenFlow. I think there is a bigger trend at play here - a lot of functionality that we are used to seeing in network gear is moving to the application level, from hardware to software. OpenFlow is just one manifestation of this bigger trend. Let me explain.

Networking was first about moving packets, in large quantities and with low latencies. This demand was met by specialized hardware, which I assume was able to perform the job better than a general-purpose machine (“better” in this context meaning faster, more reliably and more cheaply). From their early days, network vendors have also focused extensively on what developers of modern distributed or hyper-distributed applications focus on today - failure detection and fault tolerance. When application servers were still growing vertically (bigger machines with redundant power supplies, for example), the network was already using distributed, gossip-like protocols to exchange information.
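
To make the gossip-style exchange concrete, here is a minimal sketch (in Python, with made-up names and parameters - not any vendor’s actual protocol) of how nodes can detect a failed peer purely by merging heartbeat tables with randomly chosen neighbors:

```python
import random

# Minimal gossip-style failure detection sketch (illustrative only).
# Each node keeps a heartbeat counter per peer, merges its table with one
# randomly chosen peer every round, and suspects peers whose counters
# stop advancing.

FAIL_AFTER = 3  # rounds without a heartbeat increase before a peer is suspected

class Node:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers
        self.heartbeats = {p: 0 for p in [name] + peers}     # peer -> last seen beat
        self.last_updated = {p: 0 for p in self.heartbeats}  # peer -> round of last increase
        self.round = 0

    def tick(self):
        """Advance this node's own heartbeat and its local round counter."""
        self.round += 1
        self.heartbeats[self.name] += 1
        self.last_updated[self.name] = self.round

    def gossip_with(self, other):
        """Pull the other node's heartbeat table and keep the freshest entries."""
        for peer, beat in other.heartbeats.items():
            if beat > self.heartbeats.get(peer, -1):
                self.heartbeats[peer] = beat
                self.last_updated[peer] = self.round

    def suspected_failed(self):
        """Peers whose heartbeats have not advanced for FAIL_AFTER rounds."""
        return [p for p, last in self.last_updated.items()
                if p != self.name and self.round - last > FAIL_AFTER]

# Usage: three nodes gossip for fifteen rounds; node "c" stops ticking at
# round 5, and the surviving nodes notice without any central coordinator.
names = ("a", "b", "c")
nodes = {n: Node(n, [p for p in names if p != n]) for n in names}

for rnd in range(15):
    for node in nodes.values():
        if not (node.name == "c" and rnd >= 5):
            node.tick()
        node.gossip_with(nodes[random.choice(node.peers)])

print(nodes["a"].suspected_failed())  # most likely ['c']
```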

Over time, however, more and more services found their home within the network layer - load balancing, virtual addresses, traffic encryption and so on. The idea was to let the application remain unaware of all the complexity it was sitting on top of.

While this approach worked for a while, it eventually ran into a wall. Firstly, without direct control over the network from applications, these setups were extremely inflexible and high-maintenance (dedicated network engineering staff, a change management process on top of application code rollouts, and so on). Secondly, features baked into hardware take longer to tweak (unless the vendor had sufficient foresight to plan for new requirements). Thirdly, hardware is harder to replace from a financial perspective (pay up front plus ongoing maintenance).

The final hit was delivered relatively recently by infrastructure-as-code. Flexible IaaS models can’t effectively support customers’ hardware. While there are places where hardware is still very visible to customers (VPN connectivity from customers’ datacenters to their IaaS resources), this is a temporary phenomenon - there are numerous IaaS-compatible software solutions already (please see my disclosure in the upper right).

Furthermore, a lot of non-packet-moving functionality can be delivered efficiently in software these days. Look at Heroku - their frontend routing mesh is a massively scalable load balancer that can be tweaked in real time. Good luck trying to accomplish the same in hardware.
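
As a rough illustration of why this is easy in software, here is a minimal sketch of a load balancer whose backend pool can be changed while it keeps serving traffic (the class name and addresses are made up; this is not Heroku’s actual routing mesh):

```python
import itertools
import threading

# Minimal sketch of a software load balancer whose backend pool can be
# modified at runtime (illustrative only).

class RoutingMesh:
    def __init__(self, backends):
        self._lock = threading.Lock()
        self._backends = list(backends)
        self._counter = itertools.count()

    def add_backend(self, addr):
        """Add a backend while the balancer keeps serving traffic."""
        with self._lock:
            if addr not in self._backends:
                self._backends.append(addr)

    def remove_backend(self, addr):
        """Drain a backend without restarting or re-cabling anything."""
        with self._lock:
            if addr in self._backends:
                self._backends.remove(addr)

    def pick(self):
        """Round-robin choice of the next backend for an incoming request."""
        with self._lock:
            if not self._backends:
                raise RuntimeError("no healthy backends")
            return self._backends[next(self._counter) % len(self._backends)]

# Usage: backends come and go at runtime; routing decisions follow instantly.
mesh = RoutingMesh(["10.0.0.1:8080", "10.0.0.2:8080"])
print(mesh.pick())                    # 10.0.0.1:8080
mesh.add_backend("10.0.0.3:8080")     # e.g. a new instance spins up
mesh.remove_backend("10.0.0.1:8080")  # e.g. an instance is retired
print(mesh.pick())                    # one of the remaining backends
```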

We currently think of the Ciscos and Junipers of the world as hardware vendors. What they actually are is software companies - they just don’t let their software run anywhere except on their own hardware. I bet we are going to see this transformation play out within the next 3-5 years. In the not-so-distant future, network gear will go back to focusing on the one thing it does exceptionally well - moving packets. All other functionality will turn into software products running on application servers.

Categories: cloud-computing | infrastructure-development