The X Factor: Small Is Beautiful
May 29, 2007 Timothy Prickett Morgan
For the past five decades of the computer industry, three forces have been at work that have determined the trajectory of all hardware and software technologies. The first, and perhaps the most important, is the desire of companies and individuals to automate processes and store data. Without this desire, Moore's Law (the ability to put more transistors on a chip to boost processing or memory capacity) would be meaningless. Data centers would be the size of a pack of cigarettes, and they would cost only $100.
This desire to computerize processes and information that were otherwise handled by people and stored in their memories or on paper is, in fact, the force that drives the entire IT industry. The ability to meet that desire is what drives the $1.3 trillion worldwide computer market, and for many decades it was necessary to build "big iron" computers to handle the very large and complex workloads of the largest national and multinational organizations, as well as the governments under which they operated. The sheer size of the problem dictated large-scale, complex, and expensive computing. Companies, governments, and their IT suppliers all benefited from this voracious appetite for capacity. The computing infrastructures enabled by this progress moved computing from mere tabulation and accounting, to being the backbone of the business, to essentially embodying the business, which is what computers are for most companies today. In a sense, employees work for the computers, not the other way around. And in a very real sense, we all work for the Internet now.
All this big iron thinking has come at a cost, of course. The advent of minicomputers in the late 1970s and then cheap X86-style servers in the 1990s pushed workloads off big iron boxes, such as mainframes and then Unix servers, but mainframes and Unix boxes persist, and they are surrounded by legions of hot X86 and X64 servers. The workload expanded, the servers got cheaper, and companies bought a lot more servers. Software features expanded to soak up the capacity, much of today's programming moved from compiled to interpreted languages, and we have used computers inefficiently without caring much about what the unused capacity costs our companies or ourselves.
Computers started out as a shared resource in the 1950s and 1960s because they were so expensive, and the desire to do more computing, coupled with Moore's Law delivering ever more capacity for hardware and software engineers around the world to use up, has left us with grossly inefficient but highly capable hardware in every aspect of the IT environment. Interestingly, the scale of the software running on even small devices has also exploded, yet you still can't load a full-blown Unix or Windows environment on a cell phone. IT has to go back to its roots and think about how to create software that runs efficiently. When bits and MIPS were scarce, there really was no choice, and perhaps we would all do well to start acting as if we do not have a choice now, given how much juice the data centers of the world consume. Jonathan Koomey, a staff scientist at the Lawrence Berkeley National Laboratory and a professor at Stanford University, estimated earlier this year that the 27.3 million servers in use worldwide at the end of 2005 consumed 123 billion kilowatt-hours of electricity. Those faster processors, memory, disks, and I/O come at a price.
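As a back-of-the-envelope check on Koomey's figures (a rough sketch only; it assumes the servers run around the clock and lumps any cooling overhead into the total), the implied average draw works out to roughly half a kilowatt per server:

```python
# Back-of-the-envelope check on the 2005 server electricity estimate.
TOTAL_KWH = 123e9        # kilowatt-hours consumed worldwide in 2005
SERVERS = 27.3e6         # installed server base at the end of 2005
HOURS_PER_YEAR = 8760    # assumes round-the-clock operation

kwh_per_server = TOTAL_KWH / SERVERS                 # ~4,500 kWh per server per year
avg_watts = kwh_per_server / HOURS_PER_YEAR * 1000   # ~514 W average draw

print(f"{kwh_per_server:,.0f} kWh/server/year, ~{avg_watts:.0f} W average")
```

Half a kilowatt, around the clock, for every one of 27 million boxes is exactly the kind of number that makes the minimalist argument for itself.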
There is, of course, another design approach that IT vendors can take, and it involves being a minimalist. A number of examples are emerging. For instance, the first half billion or so people on the Internet used a PC to get there, but the next couple of billion will probably use a cell phone. Such a machine can roam public networks, surf the Web, and give end users some of the same capabilities that full-blown PCs have. Granted, you wouldn't want to write a novel or put together a proposal on a cell phone. But for a lot of people, the kind of computing they want to do fits nicely in a small device.
The form factors for PCs are shrinking, too, because no one wants a big beige box dominating their desk anymore. You can get Mini-ITX and Nano-ITX form factors now, which are the size of a book or a video cassette, respectively; Advanced Micro Devices and Intel's motherboard partners have been building microATX boards for a while, which are behind the small form factor PCs on the market. AMD is working on two new form factors that fall between the microATX and Mini-ITX standards, one called DTX and the other Mini-DTX, each of which puts one Athlon or Opteron socket on a board with just enough peripheral expansion to make a usable machine.
Disk drives are shrinking, with 2.5-inch, enterprise-class SAS and SATA drives now appearing on the market with the kind of reliability that servers demand. And it probably won't be too many years before 1.8-inch devices are available. Even networking devices are shrinking. Mistletoe Technologies is going to be showing off an $800 network appliance called the SlimLine, which includes a built-in firewall plus software for data encryption, traffic shaping, and coping with denial-of-service attacks, with two Gigabit Ethernet ports and 2 Gb/sec of throughput. The device is about one-fifth the size of similar network devices (it looks to be about the size of a paperback book), and it consumes under 15 watts of juice.
As many IT managers know, the platforms that business managers deploy to support the applications they control are often sized on a whim. Managers worry about peak capacity needs and being caught short, so they order the fastest box they can afford rather than look for a box that fits correctly. Or, worse still, they do not properly size their workloads at all, leaving hardware vendors free to build machines that fit the needs of the quarterly profit reports of IT suppliers keen on keeping average selling prices high, rather than the needs of customers. The best way to do that is to ride Moore's Law, count on people's fear of undercapacity, and deliver machines that never seem to get smaller even if they do sometimes get cheaper.
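Right-sizing can be mechanical rather than whim-driven. Here is a minimal sketch of the idea; the utilization samples are hypothetical, and the 95th-percentile-plus-headroom rule is an illustrative policy, not anything a vendor prescribes:

```python
# Illustrative right-sizing: size to observed demand plus headroom,
# rather than ordering the fastest box on the price list.
def right_size(samples, headroom=0.25):
    """Return a capacity target: 95th-percentile load plus a safety margin."""
    ordered = sorted(samples)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # near-peak observed demand
    return p95 * (1 + headroom)

# Hypothetical hourly demand for one workload, in arbitrary capacity units.
demand = [12, 15, 11, 18, 22, 30, 35, 28, 20, 16, 14, 13]
target = right_size(demand)  # covers near-peak load with 25% to spare
```

The point is not the particular percentile or margin, but that a measured target replaces the fear-driven reflex to buy the biggest machine available.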
But if you run a less expensive and higher-capacity computer less efficiently, have you really gotten anywhere?
Software vendors, of course, want IT shops to plunk down more servers in the data center and PCs on the desktop, since they make tremendous sums of money on this. Software is not designed to be lean and mean, but to give the impression that more hardware capacity is necessary for a better end user experience. But people are starting to resist this tendency. Server virtualization is driving up utilization on servers, and Microsoft's recent packaging of its Vista platform shows just how little it believes some customers want to move ahead to the full-on Vista experience. There was only one Windows 3.1, but there are six Windows Vistas: Home Basic, Home Premium, Business, Ultimate, and Enterprise, plus a Starter edition for emerging markets where computing is still expensive and scarce, and so is electricity. For many end users, even in the developed world, the Starter edition is all that is really needed, and a more streamlined operating system would be more secure and more manageable to boot.
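The utilization argument behind server virtualization is simple arithmetic. A sketch, with the 10 percent average utilization and 80 percent consolidation target chosen purely for illustration (and assuming hosts of equal capacity):

```python
# How many busier virtualized hosts can replace a rack of underused servers?
import math

def hosts_needed(n_servers, avg_util, target_util):
    """Aggregate the real load and repack it onto fewer, busier hosts."""
    total_load = n_servers * avg_util       # sum of actual work being done
    return math.ceil(total_load / target_util)

# 100 servers idling at 10% utilization, consolidated onto hosts run at 80%.
print(hosts_needed(100, 0.10, 0.80))  # prints 13
```

Turning 100 boxes into 13 is the kind of ratio that makes virtualization attractive to IT shops, and considerably less attractive to anyone selling servers by the rack.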
The fact is, for many applications, a smaller computer is simply a better one. The idea could eventually catch on–if the members of the IT industry can figure out how to make money on it. The odds are, though, they can’t.