Volume 2, Number 11 -- March 16, 2005

Open Source Servers


by Timothy Prickett Morgan


Corporate computing on machines that we can recognizably call electronic data processing systems--and what we now call servers--is coming up on its golden anniversary. The world has consumed a truly staggering amount of computing capacity and has devoured countless varieties of chip and server architectures, operating systems, programming languages, and middleware. But consumption is not a goal in and of itself--unless you are an IT vendor selling wares. Rather than consume what vendors feed us, it is time for the companies that buy servers to have more say in how servers get cooked and in what server technologies we consume.

Don't get me wrong. The systems embodied in the tens of millions of servers out there in the world, and the applications that run on them, are a truly stunning work of--for lack of a better word--art. It is amazing, for instance, that we ever lived without computers to do corporate accounting, much less to organize our thoughts with personal computing and let us share them with the world through the Internet.

I have tremendous respect for the gargantuan sums of money and the Herculean efforts that companies and their IT vendors have expended to build and shape systems that truly do embody the organizations they serve. (Some believe that what a company really is these days is the embodiment of its information, expressed in how people work with that information to create a product they sell for money.) But the fact remains that after five decades of corporate computing, many organizations are limited by the systems and servers they use, the applications they have created, the skill sets they can bring to bear on any business and IT problem (the two are always related), business and IT cultures that are often in diametric opposition, and the limits of their own particular financial situations within their very specific businesses.

Consider that in 2004, approximately $500 billion of the $1.2 trillion spent worldwide on information technology went just to managing the complex systems we have created. That maintenance bill, embodied in the tools and human labor required to create and manage these systems, would be absolutely unacceptable in any other business, for any other tool, at any other time in history. And yet, if the IT industry succeeds in better automating IT systems, making them easier to use and manage, who is going to find jobs for the millions of IT experts who will be displaced? Solving the complexity problem may create a bigger one.

While the Amish probably do not come up in conversations about the future of information technology, a wise man understands that the Amish do not fear technology so much as understand its effects on their culture. They shun the tractor not because it is evil, but because it changes their culture and makes redundant most of the men in their community. They make a conscious choice to do things the old-fashioned way to get something closer to full employment. This is not an unwise approach; it is merely an approach to life that is the antithesis of a business community composed of corporations that are always trying to sell you the next big labor-saving device. The Amish want to spend their labor and keep their limited money, not spend their money to get rid of their labor; they have a keen understanding of feedback loops and self-preservation.

As producers and consumers of servers and related information technology, and members of a worldwide economy, we would do well to ponder the consequences of automating ourselves out of work.

That note of caution aside, most of us in the IT community agree that something has to change in the way servers are designed, made, outfitted with software, managed, and sold. All of the best minds in the computer industry know this. And it is safe to say that none of them agree on precisely how to make computing easier.

Change does not come easy in the IT industry, and paradoxically, it never stops coming, either.

The old analogy that working in a company's data center is a bit like trying to change the tire on a car while you are going down the road at 55 miles per hour does not accurately describe the challenge that the creators and consumers of IT face. It is more like trying to build a car in an insane factory where none of the tools work together on an asteroid that is hurtling through space on a perilously close trajectory to the sun--and every company has its own asteroid, too. (Maybe it is not really like that at all. But you get the idea.)

By their very nature, computers--or rather, the software that runs atop them--are malleable in a way that no other tool used by humans has been. And it is the malleable nature of the computer that appeals to us, that allows us to make computers do things our way, that makes them truly useful and absolutely frustrating. While I am a firm believer in the power of competition and choice in the computer business, since this has fostered wave after wave of change in corporate computing, the fact remains that companies do not always make good choices (from a technical standpoint). Their choices can keep architectures alive that perhaps should have met their demise and, conversely, can starve architectures that should have been fostered because they were technically better. Suffice it to say, if computing had been free for the past two decades, computers would look a lot different from the ones we use today. (Perhaps not for the better, but perhaps so.) Evolution is driven by harsh, competitive conditions; easy living can only be had where there is an abundance of resources, and under those conditions evolution stops and, in the case of the computer business, monopolies begin. It is only through effective planting and pruning that any technology evolves to be useful. And a confluence of frustrations and needs out there in the IT world is putting pressure on IT vendors--and some of the vendors' technologies, and perhaps some of the vendors themselves, are not going to survive the cut this time.

IT vendors are in the business of selling what are still largely point solutions to customers who have holistic problems that include myriad legacy systems. The vendors and the consumers of servers, in particular, have been at odds with each other from the beginning. The vendors control the introduction, maturation, and death of every technology in their servers, often to the chagrin of their customers.

There is another way to make and support servers, one that is more flexible, yet allows companies to absorb change in the way they want to, not the way that vendors want them to. But before I get into how to do this, let's go through some of the major problems that the server market faces.

The Problem: Vendors Are In Control, Not Users

1. Vendors get to decide what components are included in their servers and, equally important, how those components plug into their boxes, what they cost, and when they are delivered.

Vendors that sell so-called "industry standard" servers sneer at those who sell so-called "proprietary" and "open system" RISC/Unix servers. Let's be honest: these are all proprietary systems at one level or another--and often on many levels.

Let me give you a few examples of how closed server systems really are--examples we are all familiar with. You will be able to think of hundreds more, I am sure.

On the ProLiant-based Windows and Linux cluster I run in my home office, I cannot plug in Intel Pentium III or Xeon processors acquired on the open market. I have to buy a special version of the chip from Hewlett-Packard, with a heat sink that comes only on HP-branded Xeon chips. So Intel makes the chips, and HP makes the margins. (Every major server vendor does this.)

Ditto for the hot-plug disks sold by server makers. While a standard more or less governs hot-plug SCSI disks, you cannot take a SCSI disk that plugs into an IBM xSeries server and jam it into an HP ProLiant, or vice versa. The SCSI disk is a raw component, but the pieces of plastic and metal that wrap around it (worth maybe a few bucks) and the interface that lets it plug into the server are absolutely proprietary and a control point (and therefore a profit point) for server makers. A hot-plug SCSI module costs two or three times as much as the raw SCSI drive inside it. Not bad for a few bucks' investment in some metal and plastic.

Here's another, more recent example: Serial ATA disks have been available for years, and companies have clearly indicated that, with RAID 1, 5, or 10 data protection schemes, they do not feel compelled to pay a big premium for hot-plug SCSI disk arrays. But vendors are only now offering SATA drives in their arrays, and there is still no standard for delivering hot-plug SATA drives in those arrays.
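
To see why RAID protection blunts the case for the hot-plug premium, here is a minimal back-of-envelope sketch of the usable capacity and failure tolerance of the schemes just mentioned. The drive counts and capacities are assumptions I have picked purely for illustration:

    # Back-of-envelope RAID arithmetic for the protection schemes named
    # above. Illustrative only; real arrays vary by controller and layout.

    def raid_summary(level, drives, drive_gb):
        """Usable capacity (GB) and guaranteed drive-failure tolerance."""
        if level == "RAID 1":    # mirrored pair: half the raw capacity
            usable, survives = drives * drive_gb // 2, 1
        elif level == "RAID 5":  # one drive's worth of capacity goes to parity
            usable, survives = (drives - 1) * drive_gb, 1
        elif level == "RAID 10": # striped mirrors: half raw capacity, and
            usable, survives = drives * drive_gb // 2, 1  # always survives one
        else:
            raise ValueError("unknown level: " + level)
        return level, usable, survives

    for level, drives in (("RAID 1", 2), ("RAID 5", 8), ("RAID 10", 8)):
        name, usable, survives = raid_summary(level, drives, drive_gb=250)
        print("%-8s %d drives: %4d GB usable, survives %d failure" %
              (name, drives, usable, survives))

Every one of these layouts survives at least one drive failure, so losing a cheap SATA drive costs a rebuild, not data--and buyers understandably balk at paying a further premium on top of that protection.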

2. Vendors get to decide what operating systems, middleware, and software stacks are certified on their machines.

Because of the necessity to turn a profit, hardware vendors and software vendors--if they are not literally one and the same--are often in cahoots when it comes to deciding when particular software will be supported officially. How many years did it take until the main server vendors supported commercial Linux distributions on their servers? How long will customers have to wait until Sun's Java Enterprise System and Solaris operating system are supported on IBM's eServer line? (Too many years.) And how long does a server vendor support a particular release of an operating system? (As long as it suits its profit and loss calculations, and not one minute longer.) How many times have you needed to upgrade one aspect of your machine, only to have a whole slew of other features that are useless to you rammed down your throat? (Every time you do an upgrade, right?)

The same holds true for system management tools, virtualization and provisioning software, and other key features of the modern server. If you like HP's Insight Manager software and you have some IBM xSeries or BladeCenter blade servers as well as some ProLiants and BladeSystems from HP, you have to use IBM Director on the IBM boxes, or figure out how to force Insight Manager to work on them. If you like EMC's VMware ESX Server partitioning software, and a vendor decides that it will only support the open Xen partitioning or its own brand of virtual partitioning, tough luck. Already bought your servers? Then good luck supporting them yourself. By the way, now that Microsoft is offering its own virtual machine partitioning software for Windows, it is apparently being dodgy about supporting Windows instances inside VMware partitions. Good luck to you, too.

3. Vendors get to decide the form factors of their servers, and they get to set the standards for those form factors or ignore customer pressure to create standards where none exist.

The fact that there are standard, rack-mounted form factors is in many ways an accidental miracle in the server business. But there have never been standard form factors for desktop machines or tower servers, and even the rack form factors have enough variation that it is difficult to mix various vendors' servers inside one vendor's so-called industry-standard racks. Try to mount an IBM xSeries server inside a Hewlett-Packard rack and you will see that the industry has indeed agreed that a 1U server is 1.75 inches high, but the chassis and rails that mount each server into a rack are not standard.
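
To make concrete just how thin that agreement is, here is a trivial sketch of the only arithmetic the standard actually covers--the 1.75-inch rack unit--using a 42U rack I have assumed for illustration:

    # The one dimension the industry agrees on: a rack unit (1U) is
    # 1.75 inches. Rails, chassis, and mounting hardware are not covered.

    RACK_UNIT_INCHES = 1.75

    def rack_interior_inches(rack_units):
        return rack_units * RACK_UNIT_INCHES

    def servers_per_rack(rack_units, server_height_u):
        return rack_units // server_height_u

    RACK_U = 42  # a common full-height rack, assumed for this example
    print("42U rack interior: %.2f inches" % rack_interior_inches(RACK_U))
    print("1U servers per rack: %d" % servers_per_rack(RACK_U, 1))
    print("2U servers per rack: %d" % servers_per_rack(RACK_U, 2))

Everything beyond that height calculation--the rails, the chassis depth, the cable management--is where the vendors diverge, and where the lock-in lives.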

The lack of form factor standards is absolutely inhibiting the adoption of blade servers, in fact. When you buy a blade server chassis, you are buying into a whole blade infrastructure package from a server vendor and its partners.

In the telecom industry, it can be argued that the adoption of form factor standards such as CompactPCI and AdvancedTCA has slowed the pace of innovation and the profit margins companies can wring out of specialized DC-powered servers sold to telecom companies, switch makers, and other service providers. But these companies have a long investment horizon for technologies, and this is good for them even if it is problematic for server makers. And because of these standards, there is a wealth of devices that can plug into these telecom blade servers. The commercial blade server market, by contrast, is evolving into a three-horse race, pitting IBM, HP, and Dell against a few smaller players and each other. Each vendor makes its own chassis and has its own blades and its own set of interconnects and management tools. And by the way, none of them are fully cooked, according to some of the biggest shops that have deployed thousands of blade servers.

IBM and Intel have offered up some of the specs in the BladeCenter design to partners on a royalty-free basis, but partners are not allowed to make chassis or compute blades--because this is where the profit is for IBM itself and the OEM customers who buy barebones blade boxes from Intel. This is not a standard, and both IBM and Intel know this even as they claim it is.

The lack of a set of chassis standards and blade server standards is absolutely holding back the adoption of blade servers, and all of the vendors know it. That is how they are making money in blades, and the control that blades engender, because of the issues outlined above, makes them very happy. A blade is like a hot-plug SCSI drive, only there is a lot less bent metal and plastic and a lot more profit.

4. Vendors get to decide who they partner with and who they do not, and they use their partnerships as competitive weapons as much as they do as a source of collaborative development.

If you have a good idea to add a feature to a server, you need to work out partnerships with all of the server players and then worry about how they will eventually co-opt that technology from you or find an alternative supplier if they do not like who you are partnering with. This web of partnerships is as vindictive and restrictive as it is helpful to companies with good ideas for new products or add-ons to servers.

The Solution: Open Source Servers

What the industry needs is a guilt-free, risk-free way to foster new ideas and to create standards that are driven by end user requirements, not by vendors' need to take market share and profits from their competitors as they sell their wares into end user accounts.

To put it simply, the design and manufacturing of servers has to be done using a process very similar to the open source concept behind the Linux operating system. Hence the name for the concept I am proposing: Open Source Servers.


While some hardware components have been licensed--some of the BladeCenter specs or Sun's Sparc chip instruction set, to give two examples--most of the components that go into a server carry patent or other intellectual property protections that govern how they are put together.

Nothing made this clearer to me than the work I did last year on an essay called "Lean, Mean Green Machines," which discussed how much electricity the world was wasting because server designs had far too much computing power in them--something that is obvious from the well-known low utilization rates for servers. As research for that essay, I actually built a set of low-power servers running Linux and FreeBSD based on Mini-ITX motherboards (which have clone X86 chips on them) from VIA Technologies. At about the same time, Transmeta announced a Mini-ITX board using its Efficeon processor, which had a lot more computing power and better power management features than the VIA boards. As I was discussing the possibility of launching a whole line of energy-efficient servers based on small form factor motherboards like the Nano-ITX and Mini-ITX boards, and of creating new blade, rack, and tower server form factors using these boards as basic elements, an executive at Transmeta gave me lots of encouragement and some free advice:

"Make sure you nail down all of your intellectual property, because that is the only way you are going to make any money."

I explained that I was not interested in making money--I was a journalist and an analyst in the IT field, after all--but rather in changing the server business. I said that in many parts of the world there was a need for an inexpensive, low-powered server that could be fueled by a solar array and wind power, because electricity was so undependable. You can't drop a two-way Xeon server into a jungle village and expect people to power it. But a Nano-ITX machine that burns as little DC power as a set-top box, and that can be shared by many people, could bring hundreds of millions of people into the modern world.

I quipped to that Transmeta executive that maybe I should create a consortium for developing a set of open source specs for making various kinds of servers and then let anyone get the specs, buy the raw components that make up a particular kind of server, and have a go at making and supporting them. He laughed at that, and wished me good luck--and he said he really meant that. (This stands to reason, given the harsh treatment Transmeta suffered at the hands of the server makers who should have been championing its power-saving chips.)
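
To put rough numbers on that jungle-village scenario, here is a hedged power-budget sketch. The wattages and the solar assumptions are mine, picked only to illustrate the gap between the two classes of machine:

    # Rough off-grid power budget. Assumed loads: a two-way Xeon server
    # of the era at ~300 W, a Nano-ITX box at a set-top-box-like ~20 W.

    def panel_watts_needed(load_watts, sun_hours_per_day=5.0, efficiency=0.7):
        """Solar panel capacity (W) to run a 24-hour load from panels
        plus a battery: daily energy demand divided by usable output
        per rated panel watt (sun hours times battery/inverter losses)."""
        daily_watt_hours = load_watts * 24
        return daily_watt_hours / (sun_hours_per_day * efficiency)

    for name, watts in (("two-way Xeon server", 300.0), ("Nano-ITX server", 20.0)):
        print("%s: ~%.0f W of panel capacity" % (name, panel_watts_needed(watts)))

Roughly two kilowatts of panels for the Xeon box versus under 150 watts for the Nano-ITX machine: the difference between an industrial installation and something a village can plausibly afford.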

The idea behind Open Source Servers is simple enough: make the specs for a server design openly available under something akin to the GNU General Public License and allow engineers from all over the world to contribute their expertise to foster more innovation in the server market. I think it is time that end users drive the roadmaps, create the form factors, introduce new technologies, and push the vendors to do what we want. I'm looking for a few good hardware and software engineers. The question now is, do any of you want to play?
