Utility Service/400: Making the iSeries into a Different Market
January 3, 2006 Timothy Prickett Morgan
As 2005 was drawing to a close, there was a lot of activity surrounding a new twist on a very old idea: the compute utility. Well, to be more precise, utilities are an old idea that was perfected during the industrial revolution for water distribution (and other related public water works such as sewage disposal), transportation, energy distribution, and communications. The question that the IT community is pondering is whether it is possible or desirable for there to be a utility or a collection of utilities for data processing, data storage, networking, and collaboration.
For over a year, Sun Microsystems has been on a tear, opening up its technologies–literally taking its Solaris operating system and Java Enterprise System middleware stack to open source development–and trying to transition itself from a vendor that sells Unix servers and a little software on a quarterly basis to a company at the forefront of a large open source movement, one centered on technologies that Sun helps develop and supports for a fee. One of the centerpieces of its subscription strategy is a compute and storage grid, which will soon allow anyone in the world to log on through a Web portal, buy a unit of processing capacity or storage, and do with it what they will. (Sun has said it will charge $1 per CPU per hour and $1 per GB per month for the Sun Grid, but prices for early customers are already substantially below that, and the service has not even gone retail yet.) As the year wound down in December, Sun boosted its capacity and signed up some new customers for the Sun Grid, and rolled out new services for archiving data on the grid from remote locations. Sun had previously announced a service on that utility that allowed companies to convert documents from one format to another for free–something the company undoubtedly hopes it can eventually charge for.
In a utility, after all, the name of the game is sharing, because the sharing of resources is what drives up efficiencies and drives down costs. That is why Sun is going to try to pump as many applications as possible through its Sun Grid. It will be a few years before the company can handle real transaction processing and Web infrastructure workloads on the utility it is building, but as I said last year when Sun announced the utility, the company doesn’t want to be a utility, but rather a provider of technologies for the information utilities that Sun hopes to foster. Who will ultimately provide such utility computing services is an open question, but I am of the opinion that no one really wants this to happen any time soon. The reason is simple: it is extremely disruptive to the IT ecosystem as we know it.
This is why Hewlett-Packard, which has been trying to push very sophisticated utility computing services since 1999, has backed off in the past year and will in the new year simply allow customers to use its own data center–full of Unix, Windows, and Linux servers and HP software and technicians–to offload some of their peak processing. Flexible Computing Services (FCS) is an interesting offering (you can read more about it by following the link at the bottom of this story), but it is a far cry from the Utility Data Center concept that HP launched six years ago to much fanfare. UDC was way ahead of its time–it was in essence a giant appliance for virtualizing and provisioning servers, operating systems, storage, and networking capacity–and it was far too costly to sell as a solution. It was a great-sounding idea, but very few companies, except some very big services customers, bought into it. But HP has one of the largest data centers in the world, with excess capacity and the ability to securely provision slices of that data center on the fly to customers who need a little extra. With the FCS offering, you can get a 32-bit Xeon processor running Linux for about 55 cents per CPU per hour, with an Itanium CPU running HP-UX costing about $1.50 per hour; 64-bit CPUs based on Xeon or Opteron platforms running Windows and Linux cost somewhere in between.
Rather than take over the data center directly, as UDC was proposing, the FCS offering allows companies to dabble. Sun has learned that this is what customers want to do, too. And so has IBM, which offers similar Deep Computing on Demand services in a number of supercomputing centers around the world. (One of them is a giant Blue Gene Linux-based supercomputer in Rochester, Minnesota.)
While these three vendors have talked up the idea of utility computing, they are in no hurry to see it take over the world. In a sense, such utility computing is anathema to any vendor who is accustomed to selling computers and software to customers on an annual basis as they build out their own infrastructure. In most large companies, there are many facilities–some used to be wholly separate companies prior to a merger or acquisition–that each have their own data centers. And at large and small companies alike, people hug their servers and applications in departments and divisions as if their very livelihoods depended on it. And that server hugging and application hugging has meant that IT vendors could sell companies more capacity than they would otherwise need. There is a tremendous amount of waste, which turns out to be profitable for IT vendors (at least as they are currently configured to approach the market) and equally unprofitable for IT consumers.
In the long run, I think this will change. Exactly what “long” means in this context, I am not so sure. But I have talked to server makers, operating system vendors, application vendors, telecom providers, and systems integrators and outsourcers in the past few months to get their thoughts about utility computing, and the best minds in the business are not sure when this might happen either. But they all agree that it is probably–or certainly–inevitable. Even for OS/400 shops. To my way of thinking, there is a way that the iSeries can get on the front edge of the wave of utility computing and simultaneously make the OS/400 ecosystem stronger and OS/400 shops happier.
With the advent of on-the-fly provisioning and virtualization for servers, CFOs can start thinking about forcing their CIOs to consolidate their applications onto a virtualized server infrastructure. This is the next wave in server consolidation. First, in the early 1990s, we had downsizing, as large mainframe shops moved to machines like the AS/400 and Unix servers, which were far less expensive than mainframes. Then there was a wave of data center consolidations, which was driven by the desire to cut IT costs by consolidating data center facilities. And then the outsourcing boom started, where some companies cut the cord and let firms like IBM, EDS, HP, and others actually take over and run their facilities. Since the late 1990s, when there was an explosion of platforms and applications, most companies have been trying to consolidate and simplify their server infrastructure. Utility computing seeks to bring all of these forces together.
While I am convinced that the math can work out such that a provider of utility computing services can make money, I think it will be very difficult for all of the major IT players who might jump into this field to do so. The very fact that there are so many players who will be chasing fewer and fewer dollars leads me to believe that, unless some new killer app is lurking on the horizon, competition is going to drive the price of compute, storage, and network capacity into the basement. First, if you only use a utility to handle your peak workloads at the end of the week, month, or year, you will be able to buy a lot less capacity. This sets off a price war amongst sellers of servers. Then, as you get better at virtualizing across your organization, you buy less and less hardware and push out as much to utility computing facilities as makes sense. Then, ultimately, companies will become comfortable with running at least some of their workloads on a shared utility, side by side inside a virtualized server with other users and not just sitting side by side in a data center, and computing costs will fall further.
All of this future speak is predicated on the idea that software vendors will shift to subscription-based pricing rather than perpetual or CPU-based pricing, and I can tell you that systems and application software suppliers are in no hurry to do this. They will be dragged kicking and screaming into that world, because billions and billions of dollars of profits are going to vanish. Utility pricing means paying for software when you actually use it. It means turning servers on rapidly, running your jobs, and then shutting the hardware and software off. Just like you do with a water faucet. The way hardware is priced now, the way the IT industry works today, the hardware players make you buy a giant reservoir of water, large enough to meet your thirstiest days, and the software players make you keep the spigot open full blast all the time, with a lot of the water running out on the ground. This is why I think utility computing will be the sale of last resort–the last competitive weapon any player brings out in the battle for IT budget dollars.
About this time last year, I wrote a story about how IBM was doing “real” utility computing on the iSeries through its Global Services offering. By the new definition of utility computing–you turn it on and you turn it off, and you only pay for what you use–this service (which you can read about in the link below) is a very sophisticated lease with capacity on demand elements; it may be perfect for your business, but it is not a full-tilt-boogie compute utility.
So here is what I am thinking, and it solves many problems at once.
First, IBM is looking for a way to engage with many vintage AS/400 and iSeries shops that have very modest computing needs. Finding these customers is a very, very expensive proposition, and they do not spend a great deal of money on computing, so IBM is loath to spend much money at all trying to reach them. And the resellers and business partners who are probably in the closest contact with them are also not inclined to spend a lot of time, money, and effort to reach these customers. Both IBM and its partners are far too busy chasing big and presumably profitable deals to make their quarterly numbers.
Second, while no one likes to admit it, there is a vast inventory of homegrown RPG and COBOL code running on AS/400 and iSeries servers out there in the world. And while many independent software vendors look at these companies, who happily pay a programmer or two to keep their code current, as dim prospects for third-party applications, the way I see it, these customers are the perfect prospects for utility computing, AS/400-style. You see, they own their own code. There is no license issue, and that gives these loyal OS/400 shops with their homegrown code a big advantage compared to companies that have invested big bucks in someone else’s code, which they do not own. (Oh, how ironic and delicious to get the last laugh!)
Third, the i5 platform is a nearly perfect computing utility. The biggest server has 165,000 CPWs of processing power, and while IBM has limited a big i5 595 to 254 logical partitions, that limit has little to do with anything except the fact that WebSphere needs around 500 CPWs to be useful. IBM sells i5 boxes with 30 CPWs and 60 CPWs of green-screen processing capacity–clearly enough power for a lot of Mom and Pop OS/400 shops–which means IBM could put many thousands of partitions (as many as 5,500 with 30 CPWs in each partition) on a big iSeries box. Partitions could be dynamic, of course, since i5/OS has the best dynamic logical partitioning in the industry, which means that if a customer needed 100 CPWs for a short burst, they could get access to it, particularly if a few customers with 30 CPWs allotted to them were sitting idle for a few minutes. A heavily loaded i5 595 costs around $4.8 million with 128 GB of main memory, and you have to toss in a few million more to boost that to 1 TB of main memory. Call it $6 million for a 64-core machine with 1 TB of main memory after a big discount from Big Blue. If you spread the cost of that server over three years, that is $2 million a year, which makes a 30 CPW slice of that box cost about $363 per year. Even if IBM charged three times that–call it $1,000 a year–to cover services and extra middleware to support the partitions and their applications, this is peanuts: about 11 cents per hour for a 30 CPW slice. IBM wants you to finance an i5 520 Express with the same 30 CPWs for around $300 a month over five years, and if you need more capacity on the fly, you have to pay for an upgrade. When you own the server, it costs you more than three times as much, and that is without management costs.
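The back-of-the-envelope arithmetic above can be checked with a short script. All of the inputs–the $6 million discounted machine price, the 165,000 CPW rating, the $1,000-a-year charge–are the article’s estimates, not official IBM pricing:

```python
# Back-of-the-envelope model for a shared i5 595 "utility" box, using the
# article's assumed figures. These are illustrative estimates, not IBM
# list prices.

MACHINE_COST = 6_000_000      # 64-core i5 595 with 1 TB memory, after discount ($)
AMORTIZATION_YEARS = 3
TOTAL_CPW = 165_000           # rated capacity of the biggest i5 595
SLICE_CPW = 30                # a small green-screen partition
HOURS_PER_YEAR = 365 * 24     # 8,760

slices = TOTAL_CPW // SLICE_CPW                                     # 5,500 partitions
cost_per_slice_year = (MACHINE_COST / AMORTIZATION_YEARS) / slices  # ~$363.64 raw cost
charge_per_slice_year = 1_000   # roughly triple raw cost, covering services/middleware
charge_per_slice_hour = charge_per_slice_year / HOURS_PER_YEAR      # ~$0.11/hour

print(f"{slices:,} slices at ${cost_per_slice_year:.2f} per year raw cost")
print(f"${charge_per_slice_hour:.3f} per hour for a 30 CPW slice at $1,000 per year")
```

For comparison, the $300-a-month i5 520 Express financing works out to $3,600 a year for the same 30 CPWs, before anyone touches the box to manage it.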
So, if there are over 200,000 OS/400 shops out there in the world, the question is how many of them would gladly ditch their boxes and move their code to an i5 utility? There are probably tens of thousands of customers with homegrown software who might do this and who might never consider buying a modern i5. There is no point to it, or they would have done it by now. It is clearly an issue of economics and desire.
IBM won’t make a lot of money on this so-called Utility Service/400–and it will lose a lot of footprints. You could cram tens of thousands of customers around the world onto a few dozen i5 595s. But if IBM doesn’t do this, then some intrepid entrepreneur is going to figure it out. And you had better watch out, IBM. It might just be Guild Companies, Inc. I happen to think that a free six months of Utility Service/400 is just the kind of tactic that would get the attention of the neglected OS/400-RPG customer base that doesn’t care very much about WebSphere or Java . . . .
IBM Offers Real iSeries Utility Computing
HP Debuts Utility Computing Services