Q&A with HP's Paul Miller: The X64 Server Biz
Published: May 8, 2007
by Timothy Prickett Morgan
As vice president of marketing for Hewlett-Packard's Industry Standard Server division, which peddles the company's ProLiant rack and tower servers and its BladeSystem blade servers, Paul Miller has his hand on the steering wheel of the world's largest single server business in terms of the number of customers and units sold each year. Miller took some time out of his schedule recently to shoot the breeze about the X64 biz.
Timothy Prickett Morgan: I really just wanted to talk for a bit about what is happening in the X64 market. Can you give me a general view of where things are at from Hewlett-Packard's point of view and where things are headed? I just want to talk form factors, processors, and other peripheral technologies that are important as we look ahead into the rest of 2007 and beyond.
For instance, we have been watching the transition into blades--and I think blades are doing all right but not great, although HP is certainly seeing better traction now than it has in the past couple of years, and IBM's numbers were not so great in the past few quarters.
So, how's biz?
Paul Miller: Business continues to be strong in the X86 space, and in general, the nature of our conversations with customers continues to change. The hot topic--no pun intended--is power and cooling, and global warming and carbon emissions. For other people, the issue is that they are just running out of capacity in their data centers. This is a lot of what we are talking about. Part of that is an X86 thing because this architecture has the biggest footprint in the data center, and it is the fastest-growing footprint there, too. HP continues to be at the center of the conversation.
We're still doing very well with blades, and I think part of the reason there is that with the c-Class blades, we redefined the value of the blades, which has made them relevant. Our initial foray into blades, with the p-Class, was all about packaging, and that's where I think HP's competitors are right now. That packaging drove a certain value proposition, and that got us and IBM to about 10 percent of the X86 mix. But we think getting it up to where we think it can be--around 30 percent of the mix--requires advanced integration and a value proposition around power and cooling, virtualization, and management. These aspects are what we think will drive blades up to 30 percent of shipments.
TPM: Is there a point in time where servers will be in a blade form factor no matter what? In other words, will we build large SMP servers by taking a bunch of blades and using light pipes to lash them together with chipsets and all that stuff?
I keep thinking that, in the long run, what the industry wants to be able to do is make one blade and use it many times, whether it is in a standalone server, a cell board in an SMP system, or a vertical blade in a set of infrastructure servers. Silicon Graphics seems to be taking this approach with its Altix designs, and Fujitsu has a blade server that you can scale from two to eight sockets by plugging multiple blades together into an SMP; Appro has a similar design.
The blade servers themselves seem to be a natural building block both for what would have been a rack of individual servers in the past--stacked vertically like pizza boxes in a rack, but now slotted side by side in a chassis as blades--and for a real shared-memory NUMA/SMP system.
PM: I think you will see that sort of approach more and more. But will we get to the point where everything goes to blades? For the customers who are buying two or three servers per year, I agree that for them, in the future everything they need will be in a bladed form factor. And the granularity will go down further and further. Even with today's blades, the granularity is at a server form factor, a storage form factor, or a network switch form factor. As we look ahead, the blade form factor will be CPU, memory, and I/O.
TPM: That's how the SGI design is set up. The Altix machines have CPU blades, memory blades, I/O blades. Everything is a blade. If you want to make an eight-way box, you plug in four two-socket blades and they use a shared global memory. If you want to extend the I/O, you plug in more I/O blades. I don't know if it is technically possible to do it in all cases, but this strikes me as the way to do this. But this approach may not work for a database server, I realize.
PM: This approach works for niches today, and in some ways, this is how our Superdome servers are already built, too. But on an industry-standard cost basis, when you get to the point where everything is going out in high volumes, this might not make sense. Blades do not make sense for the company that is only buying a server once every three years. And in this case, you will have a mix of tower and rack systems that customers buy.
I think the interesting thing that we have seen across the different form factors is this: Blades continue to be very strong in traditional businesses--the financial sector, insurance, and manufacturing. We are very strong there. On the other end of the spectrum, in the emerging markets like China and India, companies continue to drive a very strong tower business.
TPM: Is that a function of the size of the typical company in these markets, or of their level of IT sophistication or relative computing needs, or budget?
PM: It is not really the size of the business; it is how they think about the investment. They are thinking about what they can get that is the least expensive from an upfront cost perspective. They are not thinking about longevity, and they are not thinking about technology cycles.
TPM: It's like Beowulf clustering in HPC ten years ago. People just took a bunch of PCs and made a cheap cluster, and then they realized that they had 400 PCs that were taking up a lot of space and that were difficult to manage. It's cheap, but it is hot and cranky, and it is not the right answer for the long run.
PM: They are thinking about servers in terms of the budget they have to spend today, and they are not thinking about it as a long-term infrastructure investment.
In some countries, even in China, rack server growth is very strong. In some cases, they are already moving from towers to racks. But the buying patterns are very different across sub-sectors and sub-geographies. Rack sales are starting to take off now, and we think that blades are going to be very big next year and beyond as companies go through an IT investment maturity cycle.
TPM: So these companies in the emerging markets are, for all intents and purposes, entering the late 1990s? Everyone seems to have to go through all of the stages. Ontogeny recapitulates phylogeny, as the principle goes in biology.
OK. New topic. How do you track the penetration of server virtualization among customers, and what kind of penetration are you seeing on ProLiant rack servers and BladeSystem blade servers? Obviously, on the entry tower servers, I do not expect virtualization usage to be very high.
PM: Overall, HP is shipping a server every 13 seconds or so, and the clock is going pretty fast. Virtualization is very hard to track. We think virtualization on X64 machines is somewhere between 10 percent and 15 percent of shipments. We know we have about a 4 percent to 5 percent virtualization footprint based on our sales of VMware's virtualization software through our channels, for which we collect revenues; there is a little bit of Xen out there and a little bit of Microsoft's Virtual Server, but the vast majority is VMware software. Some customers buy through Microsoft and apply licenses that way. The great unknown is this: There are many different ways of downloading hypervisors, and we cannot track this. That is why HP believes we are in the range of 10 percent to 15 percent of industry standard servers being equipped with some kind of virtualization.
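As an aside, that ship rate is easy to sanity-check. A trivial sketch (the calendar-year arithmetic is mine, not HP's) puts one server every 13 seconds at roughly 2.4 million servers a year:

```python
# Back-of-the-envelope check on "a server every 13 seconds or so".
SECONDS_PER_YEAR = 365 * 24 * 3600   # ignoring leap years
ship_interval_s = 13                 # Miller's quoted ship rate

annual_shipments = SECONDS_PER_YEAR / ship_interval_s
print(f"roughly {annual_shipments:,.0f} servers per year")  # ~2.4 million
```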
When I talk to customers, most of them are raising their hands and saying that virtualization is a very big topic for them, and it is for us, too. It relates back to our power and cooling focus as well as to some other things that we are doing around the virtualization of clients. Blades have a much higher attach rate for virtualization, too.
TPM: But it is like Linux was 7 or 8 years ago, where companies didn't really know where Linux was in their organizations because it was free or close to it.
PM: We know that virtualization is much higher in blades--it is not at 50 percent yet, but it is trending in that direction. Our Virtual Connect I/O switch, which is shipping now and which we announced last summer, breaks through a lot of barriers to virtualize I/O.
TPM: I put my neck out from time to time, and I think that in the long run--not the short run, but the long run--this kind of virtualization capability will eventually put a damper on X64 server shipments. Virtualization will do so for servers of all kinds, and in my view, it has already done so for mainframes, proprietary minicomputers, and Unix machines, which all got virtualization in various stages over the last two decades. As each one of these platforms virtualized, it drove down installed footprints. Some of that was competitive pressure--Unix replaced mainframes, Windows replaced Unix, and so on, for economic reasons. But some of the footprint shrink for these virtualized platforms was just because of the virtualization.
Here's my thesis: Once companies get through the technology upgrade cycle to machines that can support CPU, memory, I/O, and network virtualization--it takes a long time to get there with the X64 platform, and it might be two or three years, or more, from now--shipments take a hit. Once a company has consolidated and virtualized servers, it is just as apt to shift workloads around to get jobs to run as it is to buy lots of new capacity. I cannot imagine a world with enough application growth for footprints to keep growing when the 25 million servers in the world running at 5, 10, or 20 percent utilization get consolidated and virtualized onto machines running at 60, 70, or 80 percent. Maybe server footprints contract, maybe they just level off. But I can't imagine going from 8 million server shipments a year to 13 million server units, as some projections call for, with this virtualization crunch coming.
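As a back-of-the-envelope check on this thesis, here is a sketch of the consolidation arithmetic using the utilization figures quoted above. The assumption that work moves one-for-one between homogeneous boxes is mine, made purely for illustration:

```python
# Sketch of the consolidation math behind the thesis above: how many
# servers carry the same total work at a higher average utilization?
# Assumes homogeneous servers and perfectly transferable workloads,
# which real data centers only approximate.

def consolidated_count(servers: int, current_util: float, target_util: float) -> float:
    """Servers needed to carry the same work at target_util."""
    work = servers * current_util      # total work, in server-equivalents
    return work / target_util

installed = 25_000_000                 # "25 million servers in the world"
for cur in (0.05, 0.10, 0.20):
    for tgt in (0.60, 0.70, 0.80):
        after = consolidated_count(installed, cur, tgt)
        print(f"{cur:.0%} -> {tgt:.0%}: {after:,.0f} servers")
```

Even the gentlest case in this grid (20 percent to 60 percent utilization) shrinks the installed base by a factor of three, which is the crux of the argument that shipments level off or contract.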
I'd like to be proved wrong on this one.
PM: Let me answer that question in a number of different ways. First, HP is very bullish on virtualization. I can envision the day when we will ship servers that will be virtualized right out of the chute. We are already working with customers on standardization of virtualized environments--meaning, hypervisors that they want to see embedded in our systems right out of the factories. This could be two, three, or four years out--it is hard to say when it will happen--but servers will ship already virtualized, whether they are going to run a single application or multiple applications.
TPM: Are you thinking about embedding the hypervisor in the system itself? I keep thinking that the hypervisor belongs on the system, just like the BIOS.
PM: I don't want to disclose anything right now on what we are thinking about that. Now, back to virtualization and server shipments. So, will the number of server units go down? Yes. Will the total revenue go down? No.
There are going to be winners and losers in this game. When you start to talk to customers about server virtualization, what you see is that for the companies who are winning in virtualization--and we think we have the largest footprint out there--average selling prices are either flat or rising, which bucks the industry trend. Customers are buying fewer units, but each unit has more CPUs, more memory, more I/O, and more software to control it. When you move from 50 1U rack-mounted servers running at 20 percent utilization to 10 servers running at 80 percent, the load balancing, management, and resiliency requirements on those fewer servers go up.
People who are doing this consolidation and virtualization grasp that they need to make a more robust system, too. So I do not believe that server revenues will go down because of virtualization, but the nature of the revenue will change.
The last comment I want to make is that virtualization is an interesting beast. This is where you have to take virtualization to the next level. People talk about servers running at 20 percent of capacity, and you need to get it up to 80 percent. It's not that simple. I was at a customer site where they have an application running at an average of 4 percent utilization, but it is a trading application that this financial services company needs to run once a day. They need to get an answer back from this application in less than two minutes, and that is only possible on a machine that can deliver a very large amount of performance in a very short period of time. Getting compute power to shift so they can get it where they need it, when they need it, is the issue, and it will drive different types of revenue.
TPM: But don't we eventually get to the point where we have done that shift? That's the point that I am trying to make. We get everyone through that upgrade cycle and we all have Integrity-style, VSE-like virtualized ProLiants. And then, you live by the incremental growth in the applications as a set, and you can't just grow revenues because companies have isolated workloads and they have to plan for peaks on isolated systems. To be fair, virtualization will drive disaster recovery, since a lot of eggs will be in fewer baskets.
PM: When companies start putting all their eggs in one basket, they actually buy more memory, more I/O, and more software. It is changing the balance of revenue around.