Cisco: Data Center Traffic To Quadruple Thanks To Clouds
October 29, 2012 Timothy Prickett Morgan
It looks like you are probably going to join the 10 Gigabit Ethernet wave in your data center, and it looks like increasingly cloudy applications, which cut across servers in distributed architectures, are going to be the force driving you to upgrade your network. That’s the prognosis from networking giant and server wannabe Cisco Systems, which obviously has a vested interest in seeing network traffic and greenfield cloudy server installations take off.
The analysts at Cisco have taken a sample of data center traffic inside 10 midrange and large data centers, a mix of public and private glass houses, to cook up the company's second annual Global Cloud Index, which measures the network traffic inside of data centers and extrapolates it to the millions of data centers worldwide to see how traffic is exploding. (You can see the Global Cloud Index at this link.)
The first interesting thing about the traffic report coming out of Cisco is that three-quarters of the data that is being handled by networks inside the data center is for server-to-server and server-to-storage links running over Ethernet and InfiniBand backbones. (Fibre Channel SAN switches are not included in the survey.) This traffic is not just driven by chatter between databases, web applications, authentication, and caching servers, but by all of the parallel components in a modern system architecture. There is now also a significant amount of virtual machine and logical partition mobility going on, and this will rise over time. But interestingly, while this internal data center traffic will be exploding, driving up network traffic, traffic between separate data centers is rising at the same pace as customers replicate data and applications for high availability, and traffic out from the data center to end users will also rise in lockstep, so the ratios won't change all that much between 2011 and 2016.
Traffic on the data center network is expected to grow by a factor of four in the next five years, and that is a slightly faster pace than Cisco was anticipating a year ago. The switch and router juggernaut, which itself has around 60 to 65 percent revenue market share in those markets, depending on the quarter, expects global aggregate data center network traffic to grow from 1.8 zettabytes (a zettabyte is 1 million petabytes) in total for 2011 to 6.6 zettabytes in 2016.
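For those keeping score at home, here is the back-of-envelope math on those headline numbers, using only the figures Cisco cites. (A quick sketch; the decimal unit conventions are my assumption.)

```python
# Sanity check on Cisco's headline traffic numbers (figures from the article).
# Decimal units assumed: 1 zettabyte = 1 million petabytes.
PB_PER_ZB = 1_000_000

traffic_2011_zb = 1.8
traffic_2016_zb = 6.6

growth_factor = traffic_2016_zb / traffic_2011_zb
print(f"Growth factor, 2011 to 2016: {growth_factor:.2f}x")  # ~3.67x, roughly quadruple
print(f"2016 traffic in petabytes: {traffic_2016_zb * PB_PER_ZB:,.0f}")
```

Strictly speaking, 6.6 divided by 1.8 is a factor of about 3.7, which rounds up to the "quadruple" in the headline.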
I love the silly comparisons vendors do when they talk about data, so I will share the ones Cisco made. That 6.6 zettabytes is equivalent to streaming 92 trillion hours of music, which is enough to keep the feet of the world’s population tapping for a year and a half. It is also equivalent to 16 trillion hours of video conferencing (something Cisco is totally in love with, obviously), which is enough network capacity to give the world’s entire workforce 12 hours of web conferencing each day for a year. (God, please, no!) That is also equivalent to 7 trillion hours of HD video streaming, which is enough for all of us in the world to get 2.5 hours a day of video for an entire year.
I am not sure how many books in the Library of Congress constitutes 6.6 zettabytes (if you only count the data encoded in the text, rather than scanned images of pages), which was the old standby for humanizing data capacity. As of January 2012, the Library of Congress had 285 terabytes of web archive data and adds about 5 terabytes per month, but this is not the book archive. The printed collection of the Library of Congress is only 10 terabytes, and all printed material ever generated on earth, encoded in ASCII characters, would only be 200 petabytes. So 6.6 zettabytes would be somewhere on the order of 33,000 copies of all the printed material ever generated on earth, or roughly 660 million printed Libraries of Congress.
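The arithmetic behind those comparisons, using the capacity figures cited above (decimal units assumed), works out like so:

```python
# Back-of-envelope math on the Library of Congress comparisons.
# All capacity figures are from the article; decimal units assumed.
PB_PER_ZB = 1_000_000  # petabytes per zettabyte
TB_PER_PB = 1_000      # terabytes per petabyte

traffic_zb = 6.6       # projected 2016 data center traffic
all_print_pb = 200     # all printed material ever generated, as ASCII
loc_print_tb = 10      # the Library of Congress printed collection

traffic_pb = traffic_zb * PB_PER_ZB
print(round(traffic_pb / all_print_pb))               # ~33,000 copies of everything ever printed
print(round(traffic_pb * TB_PER_PB / loc_print_tb))   # ~660 million printed Libraries of Congress
```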
What these impressive data center traffic figures, as well as the silly comparisons, do not measure, however, is how much information is being transmitted. I happen to think that we are passing around more and more data thanks to richer and higher-definition data formats, but I have a feeling that we are actually receiving less and less information. Or certainly no more.
I’ll give you the same example I gave to Cisco’s analysts to make my point. Whenever Cisco does a product announcement, it now has video spec sheets and presentations and then you have to dig around deeper into its site to find an old-fashioned spec sheet with words on it. I can read a spec sheet in 30 seconds and get most of the data I need, and it might be a couple hundred kilobytes of data to make that spec sheet, and that is if it has glossy pictures and logos in it. The video is tens of megabytes if you run it on a smartphone-sized image, it takes many minutes to run, and it cannot have all the data that is in the spec sheet in it because no one can talk that fast. Cisco thinks this is progress. I think it is annoying. Webcasting might be fun for some people, but it is just about the slowest way in the world to get to the point as far as I can tell.
The interesting thing that Cisco does do in the study, which is shown in the chart above, is look at the server installed base projections from Gartner and IDC, then figure out how many servers will be virtualized and how many of those will be encapsulated in proper cloud management and automation software to provide true cloudy infrastructure, with utility-style, on-demand access and either IT chargeback for private clouds or utility pricing for public clouds. From there, Cisco figures out how much traffic cloudy infrastructure will generate compared to traditional workloads running on either physical servers or plain-old virtualized servers with minimal automation.
As you can see, traditional workloads dominate the network traffic now, but in four years, cloudy system traffic will dominate. Traditional workloads will see only a factor of 2.2 increase in traffic between 2011 and 2016, which is a 17 percent compound annual growth rate, while cloud workloads will grow by a factor of 6.2, which works out to a 44 percent compound annual growth rate. Clouds may be more efficient, but the applications are more sophisticated and they need more bandwidth, too.
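If you want to check how those five-year growth factors translate into compound annual growth rates, the math is a simple fifth root:

```python
# Compound annual growth rates implied by the five-year growth
# factors cited for 2011 to 2016 (factors from the article).
def cagr(growth_factor: float, years: int) -> float:
    """Annual growth rate implied by total growth over `years`."""
    return growth_factor ** (1 / years) - 1

print(f"Traditional workloads: {cagr(2.2, 5):.0%}")  # ~17 percent
print(f"Cloud workloads:       {cagr(6.2, 5):.0%}")  # ~44 percent
```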
I’ve said it before, and I will say it again. Over the long haul, the IT budget never goes down and stays down. Something new always drives it up again.