The X Factor: If Sun Builds a Grid, Will They Come?
Published: May 4, 2006
by Timothy Prickett Morgan
Sun Microsystems has a new chief executive officer now that Jonathan Schwartz has been tapped to take on that role. And while he is a big proponent of open source and community building, Schwartz has also been the wood behind the arrowhead, pushing the company's utility computing strategy. Now that Sun has carved up the Sun Grid into public and private portions, it will be up to the market to decide if the idea of utility computing will become a reality.
While product design and marketing strategy are always important when it comes to introducing a new technology, so is timing. And, somewhat ominously, the Sun Grid may get a little help from Mother Nature in 2006, as it did in 2005. On June 1, the hurricane season gets underway in the coastal areas of the United States, Mexico, Central America, and the Caribbean, and businesses in Hurricane Alley are going to be looking to Silicon Valley for emergency capacity. If you think this is a far-fetched idea, you're wrong. Last year, a reseller of capacity to the oil and gas industry bought several million hours of compute capacity on the Sun Grid in anticipation--not in the aftermath, but in anticipation--of data center outages and the extra processing needs of oil and gas companies. Those companies had to get their rigs back online after the storms, and some of them used rented computers to help map out how to do so. The high-performance computing workloads that this industry uses to find oil and gas, guide the drilling for it, and manage the deployment of rigs are a perfect match for the Sun Grid.
It is only a pity that the Sun Grid cannot yet support the transactional systems--databases, ERP applications, and Web front ends--that make up a modern computing stack in the back office. If it could, Sun would have an even longer line of potential customers. Any business in Hurricane Alley, particularly one that does financial services or similar electronic processing as the core of its business (as distinct from the manufacturing, distribution, or sale of goods and services), is probably looking at the Sun Grid and the alternatives from other utility computing and disaster recovery providers and wishing one of them could be part of its business continuity plan. Even though paying $1 per CPU per hour and $1 per GB per month is more expensive than owning server and disk capacity outright, when your data center is wiped out, paying that price until the data center is back online probably seems better than the alternative--going out of business.
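To put those published rates in perspective, here is a back-of-the-envelope sketch in Python. The $1 per CPU per hour and $1 per GB per month figures come from the article; the outage scenario itself (CPU count, duration, storage size) is a hypothetical example, not Sun data.

```python
# Illustrative math using the Sun Grid list prices cited above.
# The specific outage scenario below is hypothetical.

CPU_HOUR_RATE = 1.00   # dollars per CPU per hour (Sun Grid list price)
GB_MONTH_RATE = 1.00   # dollars per GB per month (Sun Grid list price)

def emergency_grid_cost(cpus, days, storage_gb):
    """Cost of renting utility capacity while a data center is down."""
    compute = cpus * 24 * days * CPU_HOUR_RATE
    storage = storage_gb * (days / 30) * GB_MONTH_RATE
    return compute + storage

# Hypothetical: a 100-CPU workload displaced for 30 days, with 500 GB of data.
cost = emergency_grid_cost(cpus=100, days=30, storage_gb=500)
print(f"${cost:,.2f}")  # → $72,500.00
```

A five-figure bill for a month of emergency capacity is real money, but it is trivial next to the revenue a financial services firm loses while its own data center is dark, which is the whole argument for the utility.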
Unfortunately for Sun, for the many potential customers of the Sun Grid, and for the alternative grids that Sun is desperately hoping partners like EDS and AT&T will build, remote transactional processing is not yet a possibility on grid utilities. The real benefit of the Sun Grid is that you get on, you get your jobs done, and you get off. No fuss, no muss, no commitment--what Sun calls a "zero barrier to exit."
The Sun Grid is not really supposed to be a production grid. Other service providers are supposed to build those, at least according to Sun's plan. The Sun Grid is the test bed where the software and techniques for building a grid are created (which is then sold to service providers, perhaps with revenue-sharing models as a key component of the deal so Sun has an annuity revenue stream). The Sun Grid is also where developers and software providers come to learn how to either create new grid-ready applications or take their existing applications and grid-enable them.
According to sources, Sun has a backlog of approximately 2,000 ISVs trying to get time on the Sun Grid to figure out how to get their applications to run on the utility. This is a very big backlog. Getting those ISVs working on the Sun Grid is key to fostering this market, because these ISVs need to grid-enable their applications and then act as resellers for the ultimate end user grid utilities that Sun is clearly hoping will materialize, with the help of EDS, AT&T, and others, to start the utility computing revolution it foresees.
As for when this actually happens, Sun is not sure. It could start in 2006, or in 2008. Over the long haul, Sun expects that companies will own about 20 percent of their processing capacity for key, mission-critical, business-defining applications, with somewhere between 40 percent and 50 percent of their capacity under long-term contract from various suppliers on various compute utilities, and the remaining 30 percent to 40 percent in short-term contracts that are more costly and that cover peak performance needs for various workloads.
Right now, bulk customers are already getting 50 percent discounts on utility capacity, and as the cost comes down and competition begins--after all, a Linux cluster will be a Linux cluster--it is hard to imagine that there will not be a price war for compute and storage capacity. Pricing isn't everything, of course. Reliability and history are important, just as they have been for Web and email access and Web and email hosting during the Internet era.
The real money for utility computing is not going to be in the number-crunching area where it starts. That is the important thing to remember. Sun has said in the past that customers pay around $100 per CPU per hour to buy and use their midrange Unix and OS/400 servers, and that on high-end mainframes, the real cost is on the order of $1,000 per CPU per hour. Those numbers represent the ceilings under which Sun must price a long-term contract on the transactional portion of the Sun Grid. But they are probably also the baselines for the much higher spot prices that Sun could charge for transactional capacity in the aftermath of disasters--be they man-made, such as a software crash, or acts of nature, like a hurricane.
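That pricing logic can be sketched in a few lines of Python. The $1 grid batch rate and the roughly $100 and $1,000 per-CPU-hour ownership costs come from the article; modeling post-disaster spot prices as a multiple of the owned cost is this sketch's assumption, not anything Sun has published.

```python
# All-in ownership costs per CPU-hour cited in the article, and the
# current Sun Grid batch rate. The spot multiplier is an assumption.

OWNED_COST = {"midrange_unix": 100.0, "mainframe": 1000.0}  # $/CPU-hour
GRID_BATCH_RATE = 1.0  # current Sun Grid compute rate, $/CPU-hour

def contract_ceiling(platform):
    """A long-term transactional-grid contract must undercut the
    all-in cost of owning the equivalent platform."""
    return OWNED_COST[platform]

def spot_floor(platform, multiplier=2.0):
    """Hypothetical: disaster-recovery spot pricing could start at the
    owned cost and climb from there; the multiplier is illustrative."""
    return OWNED_COST[platform] * multiplier
```

The gap between the $1 batch rate and a $100-to-$1,000 transactional ceiling is the headroom that makes the transactional grid, not the number-crunching grid, the real prize.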
This is why it is reasonable to expect that Sun is working very hard to create a transactional grid. Mainframe shops could provide a big lever. Sun already owns a Unix clone of the CICS mainframe transaction monitor (formerly known as UniKix) and has a mainframe re-hosting environment that it bought several years ago. Sun could save mainframe shops a lot of money by plunking baby Sun Grids on site running this software and plugging those grids into the existing mainframe applications. Sun would start by re-hosting the transaction monitor, then the customer would re-host the applications, and then move from DB2 on the mainframe to DB2 or Oracle running on the grid. And before you know it, it is five years later, the mainframe is out of the data center, and the company is running a big portion of its processing in house on the baby grid, with peak transaction loads out on the public utility.
If Sun added printing services to the Sun Grid, it could even run big batch jobs on the grid and print out the paper and electronic statements that consume so many resources at the financial services companies that are its key customer base. Sun hasn't said anything yet about this, but it is an obvious extension to the idea of the transactional grid.
RELATED STORIES
Hackers Take a Whack at the Sun Grid Utility
Sun Grid Compute Utility Opens for Public Business
Sun Plugs the Grid Some More, Adds Some Features
Sun Aspires to Be the General Electric of the Grid Era