Software-Defined Networking Is The Next Virtualization Bubble
January 7, 2013 Timothy Prickett Morgan
Server virtualization is one of the big culprits behind the revenue decline in proprietary and Unix servers since the late 1990s, and despite what many would have you believe, it has had a dramatic effect on X86-based server sales in recent years, too. Storage virtualization has been around nearly as long, and has driven up utilization and driven down sales of disk arrays over the past decade as well. And now it is time for the network to get the squeeze.
Specifically, a set of new capabilities called software-defined networking, or SDN for short, is being introduced into the physical network, causing a similar kind of disruption that was long overdue. Switches and routers are just as brittle and unmalleable as servers were prior to the introduction of hypervisors to virtualize the compute, memory, and I/O in servers. With network gear, the brittleness is due to the fact that each device has to be configured independently, with its traffic forwarding tables set up by hand. Obviously, as traffic shifts both up and down the network stack from users back to the data center and side to side between servers and storage arrays in converged infrastructure, you want to be able to automagically shift traffic around the network, routing around hot spots to boost the overall performance of the network. But no network administrator is fast enough to reconfigure the network on the fly.
With SDN, you decouple the forwarding plane inside a switch or router from the control plane, and then externalize the control plane and the forwarding tables that normally reside inside the switches to an external controller. This is not quite the same thing as a hypervisor, but it does allow for a sophisticated SDN controller, more often than not based on the OpenFlow network protocol, to reshape traffic on the fly based on policies set up ahead of time to ensure quality of service on the network for different applications and the server and storage resources they need. This is not the same thing as running a virtual network driver inside of a hypervisor, or a virtual switch, for that matter, but those and other capabilities are necessary in an increasingly virtualized data center.
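To make that split concrete, here is a minimal sketch of the idea in Python: a central controller owns the policy and programs flow rules into otherwise dumb switches, which only match packets against their installed tables. The class names, the `install_rule`/`set_policy` calls, and the iSCSI steering policy are all illustrative assumptions, not a real OpenFlow API.

```python
class Switch:
    """Forwarding plane only: matches packets against installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # list of (match_fn, out_port), in priority order

    def install_rule(self, match_fn, out_port):
        self.flow_table.append((match_fn, out_port))

    def forward(self, packet):
        for match_fn, out_port in self.flow_table:
            if match_fn(packet):
                return out_port
        return None  # table miss: a real switch would punt to the controller


class Controller:
    """Control plane: holds the policy and programs every switch centrally."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def set_policy(self, switch_name, match_fn, out_port):
        # Reprogramming a path is one API call against the controller,
        # not a box-by-box manual reconfiguration.
        self.switches[switch_name].install_rule(match_fn, out_port)


controller = Controller()
edge = Switch("edge-1")
controller.register(edge)

# Hypothetical policy: steer storage traffic (dst port 3260, iSCSI)
# out port 2, and everything else out port 1.
controller.set_policy("edge-1", lambda p: p.get("dst_port") == 3260, 2)
controller.set_policy("edge-1", lambda p: True, 1)

print(edge.forward({"dst_port": 3260}))  # 2
print(edge.forward({"dst_port": 80}))    # 1
```

The point of the sketch is where the logic lives: the switch never decides anything on its own, so rerouting around a hot spot means changing rules in one place rather than logging into every device.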
According to the prognosticators at IDC, SDN sales to enterprises and service providers building clouds and providing traditional hosting will account for $360 million in revenues in 2013, with sales exploding by a factor of 10 over the ensuing three years to $3.7 billion. This is truly astounding growth, and it explains why VMware was willing to spend $1.26 billion last July to buy Nicira, a startup with expertise in OpenFlow controllers and virtual switches, before Nicira had even come fully out of stealth mode. (By the way, those numbers include the value of the switches as well as the SDN software that runs inside them and on external controllers.)
“SDN’s ability to decouple network logic and policies from the underlying network equipment allows for a more programmable network,” explains Rohit Mehra, vice president of network infrastructure at IDC in a statement accompanying those projections. “Providing better alignment with the underlying applications, this programmability allows for greater levels of flexibility, innovation, and control in the network. Logic and policies that can be defined, changed, and modified result in a more dynamic network, providing the scale network administrators so desperately crave.”
IBM bought its way back into the switch market, after more than a decade of ceding it to Cisco Systems, when Big Blue acquired Blade Network Technologies for around $400 million back in October 2010, and the company has since created its own SDN controller as well as virtual networking for its BladeCenter, Flex System, and Power Systems platforms. I am not exactly sure how all of this meshes with the IBM i platform, but I am going to look into it and report back to you.