As Systems And Storage Go Virtual, Networks Must Follow
June 18, 2018 Timothy Prickett Morgan
We spend a lot of time getting down into the nitty-gritty of the Power Systems iron and its IBM i platform, with occasional forays into AIX or Linux where it is important. But sometimes we take a high-level view of a phenomenon going on in the IT sector or in business in general, and we give you some thoughts about how it might affect the IBM i ecosystem.
This is one of those essays, and we think there is something important going on, though quite frankly we are not sure how it will affect IBM i. And that lack of certainty, as you will see, sort of speaks volumes about the maintenance mode thinking or wait and see attitude that is often the hallmark of established platforms. (Notice how I did not say legacy there?)
As we all well know, virtualization has swept through the datacenter in the past two decades, and what we all know and the world often forgets is that OS/400 V4R4 was the first non-mainframe platform to get server virtualization, allowing a physical machine to be carved up into virtual machines, each running its own copy of the operating system. Everyone makes a lot of noise about Docker containers, but OS/400 had subsystems – a kind of sandbox running atop the operating system kernel – from the beginning, so this is not a new idea, either. The ubiquity of Linux, on which Docker depends, means that the Docker runtime can also be ubiquitous, although containers always share the specific Linux kernel of the host they run on.
Storage virtualization is not new to the IBM i base, either. You could argue that the single-level store architecture of the System/38 and then the AS/400 was the ultimate in storage virtualization, with main memory and disk storage capacity treated as a single address space for all applications to play in and house their data, with the operating system automating the placement of applications and data on the physical memory and physical disks. This is something that Microsoft wanted to add to Windows Server more than a decade and a half ago, and it gave up on the idea because it was too hard. There are other kinds of storage virtualization on the IBM i platform, including the embedding of the OS/2 parallel file system within the OS/400 kernel way back when to create the ASCII-based Integrated File System that looks and smells like the Windows Server SMB file system. Now you can even go so far as to have hyperconverged storage, which puts a virtual SAN either underneath or inside of virtual machines, spanning the same clusters that run virtual machines for compute. As we have pointed out, IBM is now supporting AIX as well as Linux on Power Systems running the Nutanix hyperconverged storage, but it has not yet seen fit to make IBM i a full peer – as we think it should.
This virtualization is all well and good on either servers or storage, in that it drives up utilization and therefore cuts costs. But that is not entirely the point. What virtualization really does is make infrastructure malleable, subject to programming. That means it can be automated, which is great, but more importantly, virtualization also means a device – be it a server, a storage array, or a networking device – can change its personality on the fly. For the past decade or so, this has been gradually taking place in networking, which is not only getting virtual but is also becoming programmable like servers and storage.
By network virtualization, I am not talking about setting up Virtual LANs on a machine with a hypervisor and multiple logical partitions running on it. This is normal stuff, even if it is neat. This is different. For the past decade, switch makers have either been adopting the Linux kernel for their core switch operating system or adding a Linux instance beside their own switch kernels, allowing for Linux workloads to be pulled onto the switches. This way, firewalls, load balancers, intrusion detection systems, and other expensive appliances can be replaced by a smart switch that has enough oomph to run such software right on the switch, where it arguably belongs. But starting with the P4 project coming out of Stanford University and Princeton University and championed by Google and an upstart switch chip maker called Barefoot Networks, people started talking about making the ASICs themselves programmable, just like a CPU running Linux is.
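The core abstraction behind P4 is the match-action table: the chip supplies generic parse, match, and action stages, and the program loaded onto it decides which header fields to match on and what to do with matching packets. Here is a toy sketch of that idea in Python – all names are illustrative, and this is a conceptual model, not the P4 language itself:

```python
# Toy sketch of the match-action abstraction that P4-style programmable
# switch chips expose. Illustrative Python, not the P4 language.

class MatchActionTable:
    """Ordered list of (match, action) rules, applied first-match-wins."""

    def __init__(self):
        self.rules = []

    def add_rule(self, match, action):
        # match: dict of header fields that must all be equal to fire
        # action: callable that transforms the packet (a dict of fields)
        self.rules.append((match, action))

    def apply(self, packet):
        for match, action in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action(packet)
        return packet  # default: pass through unchanged


# "Program" the table: steer VLAN 10 traffic out port 2, drop everything else.
table = MatchActionTable()
table.add_rule({"vlan": 10}, lambda pkt: {**pkt, "egress_port": 2})
table.add_rule({}, lambda pkt: {**pkt, "egress_port": None})  # catch-all drop
```

The point is that nothing about VLANs is baked into the table itself; reprogramming the rules gives the same hardware a different personality, which is what makes supporting a new protocol a software change rather than a new ASIC spin.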
This is radical thinking, but what it means is that switch chips with basic routing and switching engines can be programmed to look like any particular kind of switch with specific protocol stacks. And, importantly, new switch protocols can be added on the fly – perhaps to do virtual machine traffic encapsulation, as was done with the VXLAN and NVGRE protocols, which let VMs be live migrated across VLAN boundaries, just to give one example. It took anywhere from 18 months to 24 months for switch chip makers to support these new protocols in their chips after they were approved by the industry, and frankly, that is too long. Hence the desire to make switch chips more generic so they can be programmed with different personalities as needed.
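To make the encapsulation concrete: the VXLAN header defined in RFC 7348 is just eight bytes prepended to the original Ethernet frame – a flags byte plus a 24-bit VXLAN Network Identifier (VNI) that keeps tenants' traffic separate. A quick sketch of building one in Python:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348.

    Word 1: flags (0x08 = valid-VNI bit) in the top 8 bits, rest reserved.
    Word 2: 24-bit VNI in the top 24 bits, low 8 bits reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Encapsulation is just prepending the header to the inner frame (the
# outer UDP/IP headers that carry it between tunnel endpoints are
# omitted here for brevity).
inner_frame = b"\x00" * 14  # placeholder Ethernet frame
encapsulated = vxlan_header(5000) + inner_frame
```

This is exactly the kind of simple, fixed-format header manipulation that took switch ASIC vendors a year and a half or more to bake into silicon, and that a programmable chip can pick up as a software update.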
IBM sold off its Blade Network Technologies switch business to Lenovo along with its System x server line, so technically IBM does not have a particular dog in the programmable datacenter switch market. But obviously, IBM i shops do network their machines to each other, and they also have lots of Windows Server and increasingly Linux iron doing lots of different jobs, all networked together.
We are not suggesting that IBM i shops will be the ones who grab P4 and some of these new malleable switch chips from Barefoot Networks (Tofino), Cavium (XPliant), or Broadcom (Trident and Tomahawk) and start programming their own protocols. That would be as ridiculous as the biggest IBM i shops ganging up together to create their own server virtualization hypervisor or hyperconverged storage. But what we are suggesting is that malleable networks are the future, and there is no reason why IBM i shops cannot benefit from the innovation going on at the hyperscalers for these malleable, virtualized networks. The true propellerheads of networking can write the personalities of these switches, and IBM i shops can just buy what they need and run it, like an app on an iPhone.
The annoying bit for me is that IBM was at the front end of server and storage virtualization, but it has not really been in there for network virtualization of the kind I am describing here. This is not a surprise. Compute, storage, and networking are often run as separate silos in the datacenter. What you need to do at this point is keep an eye on developments in networking and not ignore what is happening there, and wherever possible, think about investing in switches that have these malleable personalities so you can extend your investments in the network. That's what all of the hyperscalers and cloud builders are doing.