The X Factor: Small Is Beautiful

    May 29, 2007 | Timothy Prickett Morgan

    For the past five decades of the computer industry, three forces have been at work that have determined the trajectory of all hardware and software technologies. The first, and perhaps the most important, is the desire of companies and individuals to automate processes and store data. Without this desire, Moore’s Law, the ability to put more transistors on a chip to boost processing or memory capacity, would be meaningless. Data centers would be the size of a pack of cigarettes, and they would cost only $100.

    This desire to computerize processes and information that were otherwise handled by people and stored in their memories or on paper is, in fact, the force that drives the entire IT industry. The ability to meet that desire is what drives the $1.3 trillion worldwide computer market, and for many decades it was necessary to build “big iron” computers to handle the very large and complex workloads of the largest national and multinational organizations, as well as the governments of the countries in which they operated. The sheer size of the problem dictated large-scale, complex, and expensive computing, and companies, governments, and their IT suppliers all benefited from this voracious appetite for capacity. The computing infrastructures enabled by this progress moved computing from mere tabulation and accounting to being the backbone of the business to essentially embodying the business, which is what computers are for most companies today. In a sense, employees work for the computers, not the other way around. And in a very real sense, we all work for the Internet now.

    All this big iron thinking has come at a cost, of course. The advent of minicomputers in the late 1970s and then cheap X86-style servers in the 1990s pushed workloads off big iron boxes, such as mainframes and then Unix servers, but mainframes and Unix boxes persist, and they are now surrounded by legions of hot X86 and X64 servers. The workload expanded, the servers got cheaper, and companies bought a lot more servers. Software features expanded to use up the capacity, much of the programming done today moved from compiled to interpreted languages, and we have used computers inefficiently without caring much about what the unused capacity costs our companies or ourselves.

    Computers started out as a shared resource in the 1950s and 1960s because they were so expensive, and the desire to do more computing, coupled with Moore’s Law providing ever more hardware capacity and software engineers around the world finding ways to use it up, has left us with grossly inefficient but highly capable hardware in every aspect of the IT environment. Interestingly, the scale of the software on these machines has exploded as well: you can’t load a full-blown Unix or Windows environment on a cell phone. IT has to go back to its roots and think about how to create software that runs efficiently. When bits and MIPS were scarce, there really was no choice, and perhaps we would all do well to start acting as if we do not have a choice, given how much juice the data centers of the world consume. Jonathan Koomey, a staff scientist at the Lawrence Berkeley National Laboratory and a professor at Stanford University, estimated earlier this year that the 27.3 million servers in use worldwide at the end of 2005 consumed 123 billion kilowatt-hours of electricity. Those faster processors, memory, disks, and I/O come at a price.
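
    To put those numbers in perspective, here is a quick back-of-envelope calculation, sketched in Python, of what they imply per machine. The only inputs are the two figures Koomey cites and the number of hours in a year; his total may also cover cooling and other overhead, so the per-box result should be read as a rough upper bound, not a measured figure.

        # Back-of-envelope arithmetic on the Koomey figures cited above.
        # Assumption: the 123 billion kWh is spread evenly across the 27.3 million
        # servers; the total may also include cooling and auxiliary equipment.
        servers = 27.3e6          # servers in use worldwide at the end of 2005
        annual_kwh = 123e9        # total electricity consumed, kilowatt-hours per year
        hours_per_year = 8760

        kwh_per_server = annual_kwh / servers                 # roughly 4,500 kWh per server per year
        avg_watts = kwh_per_server / hours_per_year * 1000    # roughly 514 watts of continuous draw

        print(f"{kwh_per_server:,.0f} kWh per server per year")
        print(f"{avg_watts:,.0f} watts average draw per server")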

    There is, of course, another design approach that IT vendors can take, and it involves being a minimalist. A number of examples are emerging. For instance, the first half billion or so people on the Internet used a PC to get there, but the next couple of billion will probably use a cell phone. Such a machine can roam public networks, surf the Web, and give end users some of the same capabilities that full-blown PCs have. Granted, you wouldn’t want to write a novel or put together a proposal on a cell phone. But for a lot of people, the kind of computing they want to do fits nicely in a small device.

    The form factors for PCs are shrinking, too, because no one wants a big beige box dominating their desk anymore. You can get Mini-ITX and Nano-ITX motherboards now, which are the size of a book or a video cassette, respectively; Advanced Micro Devices and Intel’s motherboard partners have been building MicroATX boards for a while, which are behind the small form factor PCs on the market. AMD is working on two new form factors that sit between the MicroATX and Mini-ITX standards, one called DTX and the other called Mini-DTX, which put one Athlon or Opteron socket on a board with just enough peripheral expansion to make a usable machine.

    Disk drives are shrinking, with 2.5-inch, enterprise-class SAS and SATA drives now appearing on the market with the kind of reliability that servers demand, and it probably won’t be too many years before 1.8-inch devices are available. Even networking devices are shrinking. Mistletoe Technologies is going to be showing off an $800 network appliance called the SlimLine that includes a built-in firewall plus software for data encryption, traffic shaping, and coping with denial of service attacks, all delivered through two Gigabit Ethernet ports at 2 Gb/sec of throughput. The device is about one-fifth the size of similar network devices (it looks to be about the size of a paperback book), and it consumes under 15 watts of juice.
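
    To put that 15 watts in context, here is a hypothetical running-cost calculation; the electricity price used is an assumption for illustration, not a figure from Mistletoe.

        # Hypothetical annual running cost of a sub-15-watt appliance like the SlimLine.
        # The $0.10 per kilowatt-hour electricity price is an assumption for illustration.
        watts = 15
        hours_per_year = 8760
        price_per_kwh = 0.10

        kwh_per_year = watts * hours_per_year / 1000      # about 131 kWh per year
        cost_per_year = kwh_per_year * price_per_kwh      # about $13 per year

        print(f"{kwh_per_year:.0f} kWh per year, roughly ${cost_per_year:.2f} in electricity")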

    As many IT managers know, the platforms that business managers deploy to support the applications they control are often sized on a whim. Managers are worried about peak capacity needs and about being caught short, so they order the fastest box they can afford rather than look for a box that fits correctly. Or, worse still, they neither properly size their workloads nor demand that their hardware vendors build machines that fit their needs rather than the needs of the quarterly profit reports of IT suppliers keen on keeping average selling prices high. The best way to keep those prices high is to ride Moore’s Law, count on people’s fear of undercapacity, and deliver machines that never seem to get smaller even if they do sometimes get cheaper.

    But if you run a less expensive and higher capacity computer less efficiently, have you really gotten anywhere?

    Software vendors, of course, want IT shops to plunk down more servers in the data center and more PCs on the desktop, since they make tremendous sums of money on this. Software is not designed to be lean and mean, but to give the impression that more hardware capacity is necessary for a better end user experience. But people are starting to resist this tendency. Server virtualization is driving up utilization on servers, and Microsoft’s recent packaging of its Vista platform shows just how little it believes some customers want to move ahead to the full-on Vista experience. There was only one Windows 3.1, but there are six Windows Vistas: Home Basic, Home Premium, Business, Ultimate, and Enterprise, plus a Starter edition for emerging markets where computing is still expensive and scarce, and so is electricity. For many end users in the developed world, the Starter edition is all we really need, and a more streamlined operating system would be more secure and more manageable to boot.
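
    As a rough sketch of why rising utilization shrinks the server count, consider the following consolidation estimate; the utilization figures are illustrative assumptions, not numbers from this article.

        import math

        # Rough consolidation sketch. The utilization figures below are
        # illustrative assumptions, not data from this article.
        standalone_servers = 100     # dedicated boxes, one workload apiece
        standalone_util = 0.10       # assumed utilization of a dedicated server
        virtualized_util = 0.60      # assumed target utilization on a virtualized host

        total_demand = standalone_servers * standalone_util          # aggregate work to be done
        hosts_needed = math.ceil(total_demand / virtualized_util)    # hosts after consolidation

        print(f"{hosts_needed} virtualized hosts could do the work of {standalone_servers} dedicated servers")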

    The fact is, for many applications, a smaller computer is simply a better one. The idea could eventually catch on, if the members of the IT industry can figure out how to make money on it. The odds are, though, that they can’t.


