Microsoft Azure: An AS/400 for Private and Public Clouds
July 19, 2010 Timothy Prickett Morgan
I may be a whippersnapper compared to many of you dyed-in-the-wool System/38, System/36, and AS/400 old-timers, but I have been around long enough to see the irony of things with a certain amount of good humor and healthy detachment. And so I got a good chuckle last week when I saw that Microsoft was taking its Azure public cloud computing platform private, perhaps bringing it to many more masses than it would have reached through a public-only cloud.
People are making a big deal about cloud computing–what I still like to call utility computing–not just because we need something new to talk about, but because information technology continues to evolve and we need a way to describe this change so we can understand it and then use it to our advantage. As you know from setting up and using servers, or even just from working on your PC on a daily basis, a computer feels like hardware in terms of its brittleness and rigidity, even though so much of what makes a computer these days is not just a bunch of shiny stuff on a piece of glass or metal; it is actually software and, in theory at least, malleable rather than rigid.

It is this brittleness and rigidity that cloud computing is trying to deal with, and as you might expect, we are dealing with it by admitting it is there and building redundancies of all kinds into virtual, rather than physical, server infrastructure. And if we are lucky, all of this gets done in such a way that we don’t realize our machines are not physical, or better still, the level of abstraction is so good we don’t know where the applications run–and on how many servers–so long as the response time is good and we can afford to pay for it.
There are so many different kinds of genius embodied in the System/3X and AS/400 line of midrange computers that it is hard to enumerate them all, but if there is one key genius, it was IBM Rochester’s keen understanding that it was building sophisticated machines and systems software to run applications for companies that were experts in their own businesses, but not necessarily in IT. So a machine with layers and layers of virtualization, like the AS/400, could be expensive but worth the money. A few of those ideas are relevant to Microsoft Azure and what it is trying to do with cloud computing, which I will get into in a moment.
Basically, you coded on the boxes and they just worked, spitting out the right answers. Which is why, in a world where far fewer than a million servers a year were sold, IBM moved hundreds of thousands of System/3X and AS/400 machines over two decades.
Fast forward two decades, and now everyone wants a cloud, by which we mean virtualized server, storage, and networking capacity that can be shuffled around a data center or between data centers as conditions dictate. Cloudy infrastructure also means being able to turn on capacity instantly when you need it, turn it off when you don’t, and scale up capacity for a single workload on the fly when it needs more oomph. You also need to meter usage in some way that makes sense for the customer and for systems and application software providers, and to provide some kind of security.
As best as I can figure, there are fewer than 100 true public clouds out there in the world. Some implement raw server infrastructure–a hypervisor yearning for an operating system and apps to be laid upon it, such as Amazon’s Elastic Compute Cloud–while others take the level of abstraction up one layer further and present users with services, masking the underlying infrastructure. The latter is what Google’s Python-based App Engine and Microsoft’s Azure clouds do.
In the case of Azure, the cloud has three components: a compute service, storage services, and a fabric controller that pools hardware resources–in this instance, X64-based servers, their memory and disks, external disk storage, and the networking to lash it all together. That controller does load balancing, fault tolerance clustering, disaster recovery, and data replication for workloads–the kinds of things people do today with a hodge-podge of different software. At the moment, Microsoft has implemented SQL database services, SharePoint collaboration services, AppFabric Internet services, and Dynamics CRM services atop the Azure public cloud, which runs in a data center in Quincy, Washington, as well as in one in a suburb of Chicago. But you can also create your own .NET applications and throw them out onto Azure, too. Which is the whole point. The genius of this platform-as-a-service cloud is that it masks all the underlying complexity of having a modern, distributed, virtualized, metered, and flexible infrastructure stack. It is truly Windows for Dummies, to the point that you don’t know–and don’t care–that it is Windows at all. Microsoft claims to have 10,000 companies running on Azure already.
The only problem with the Azure cloud, of course, is that you have to run your code outside of your firewall and on Microsoft’s own infrastructure. Not very many midrange and enterprise customers are going to go for that for security as well as for sanity and employment reasons. Companies want to build a private cloud first, and then maybe they will think about cloudbursting some of their capacity needs out to a compatible public cloud if it can demonstrate security.
And so, Microsoft has figured out it needs to build private Azure clouds, and announced last week at its Worldwide Partner Conference in Washington that it was partnering with Hewlett-Packard, Dell, and Fujitsu to take the Azure code and move it onto selected and precise hardware configurations to create what Microsoft is calling the Windows Azure Platform Appliance.
Rather than partner with one server maker, and thereby alienate all the others, Microsoft picked the two hardware vendors that have helped it build the Azure public cloud (custom Dell boxes mostly, with some HP iron), which are also the top two server sellers in North America and Europe, and added in Fujitsu, which leads in Asia and is also a player in Europe. That covers the PR bases for now, but eventually you can expect IBM and other players like NEC, Hitachi, Unisys, and Bull to create Microsoft-certified appliances to run the Azure cloud stack. It will probably be a cold day somewhere before Oracle sells this, but server wannabe Cisco Systems might.
HP, Dell, and Fujitsu plan to have hosted versions of the Azure platform running in their data centers by the end of the year, with appliances they can install at customer sites coming after that. My guess is probably not until the middle of 2011, but Microsoft and its partners did not say. There’s time for IBM to squeeze in there, provided its IBM CloudBurst products running the ESX Server hypervisor from VMware (for production private clouds) and the KVM hypervisor from Red Hat (for test and development public clouds called the IBM Cloud) haven’t somehow got IBM on Microsoft’s bad side. (This seems unlikely, and Microsoft wants IBM’s help peddling its clouds. But maybe IBM wants to build clouds for its customers using lots of different products and selling lots of its own software and services.)
Whatever IBM’s cloudy infrastructure plan is for Power Systems shops, the company sure hasn’t been particularly clear about it. IBM gave the merest nod to Power and mainframe servers in the CloudBurst and IBM Cloud offerings that were announced a year ago and has said little else since. Clearly IBM can build public and private clouds on Power Systems machines carved up with PowerVM hypervisors, and running i, AIX, and Linux. Virtualization is necessary but not sufficient to be a cloud. The orchestration, load balancing, metering, and billing are the hard bits, and they can’t cost a fortune or end users won’t use them.
If IBM doesn’t get its act together soon, someone else might just have to start an i cloud. (We’ll help you sell it if you do.) Cheap Power7-based entry servers might just do the trick for basic infrastructure if IBM drops operating system prices low enough. It would be truly embarrassing if Microsoft Azure beat Big Blue at its own midrange game twice–first in the data center in the 1990s and 2000s, and then in the cloud in the 2010s.