The Linux Beacon
Volume 2, Number 44 -- November 29, 2005

Liquid Computing Jumps into Servers with a Big Splash


by Timothy Prickett Morgan


Every so often in the server business, some technology or economic change comes along that allows upstarts to take a run at the old guard. When technology changes coincide more or less with arduous economic circumstances, that cocktail can be quite a powerful influence. We may be in the thick of such a transition now, which is why we are suddenly seeing new kinds of servers entering the market. Liquid Computing, which just put its servers into alpha testing, is one such vendor.

Timing is everything in the IT market: bring the right technology to market at the wrong time, and you fail. And timing your entry to a relatively bad economy, oddly enough, can be a big help if you have a technology that aims to unseat incumbents. Look at the history of systems and servers in the IT market; the past three decades or so make the point.

The oil crisis of 1973 was caused less by the Saudi Arabian oil embargo and more by the wars in the Middle East and the belated realization that the United States had hit peak oil production around 1971 or so. By the time the Western economies were in a full tailspin in late 1975, Gene Amdahl, one of the key designers of the System/360 mainframe, was getting close to launching the world's first clone mainframes. As the U.S. economy dragged on and hit another wall in 1979, the Iranian Revolution got underway, producing another massive oil shock and, eventually, another recession. By then, IBM was fighting off not only clone mainframes, but a proliferation of minicomputers--including its own System/3X machines--that offered mainframe-class computing for a lot less than what mainframes cost.

The roaring economies of the mid-1980s ran out of gas in the fall of 1987 and hit a wall in 1988, and it may be no coincidence that this was also when the next round of minicomputers was coming out from IBM, Digital, Hewlett-Packard, and others. These minicomputers--some of which were running Unix, not just proprietary operating systems--were now powerful enough to handle a lot of mainframe-sized workloads. The open systems war had started, and so had a second round of plug-compatible mainframe innovation. Even more workloads shifted off big iron to little iron. By 1991, IBM's mainframe business was on the rocks, and Unix servers were on their way to dominating the server market for the next decade. When the economy improved by 1993--in part because of ridiculously low oil prices--PCs were powerful enough to be useful to business, and the client/server revolution got under way. Companies started figuring out how to make PCs take work off their central machines, since PCs are a lot less expensive per unit of processing power than minicomputers, and ridiculously less expensive than mainframes.

As the economies of the world roared in the late 1990s, Unix grew, and Windows and Linux burst onto the scene to challenge it for hegemony in the server space. Unix was holding its own until the economy hit the wall in early 2000 and was hammered by the 9/11 terrorist attacks 18 months later. By then, the first 64-bit Opteron processors were on the horizon, 32-bit Xeon processors were very powerful and very inexpensive, and everyone was looking around, trying to figure out how to spend a lot, lot less on servers. Unix took a severe beating, and Wintel and Lintel iron ascended. I happen to believe that we are right now at global peak oil production, and that there is a hell of an economic shock coming. Incumbent server architectures are going to be under tremendous pressure from the economic situation.

The good news, if you like X64 architectures, is that there is a substantial amount of innovation going on in this space. That was a long setup to say that companies like Liquid Computing and Fabric7, which are building big, sophisticated machines out of Opteron processors but adding real innovation of their own, may have a much bigger shot at the server business than they had anticipated. If the oil shock doesn't come--and I would be very, very, very happy to be wrong on this one--then they still have interesting iron that will appeal to many customers, and that appeal will be amplified by arduous economic circumstances, should they arise. And, to be honest, the protracted pressure from business owners, CEOs, and CFOs to "do more with less" in the IT budget is essentially a recession focused on only one aspect of the business world: as far as IT is concerned, the economy is in recession even if other parts of the business are seeing good growth. This time around, the economy can grow on the whole (for a lot of reasons), but that may not translate into good times for IT. In fact, I can just about guarantee that the purse strings will never again be as loose as they were in the late 1990s--barring an incredible economic expansion, which seems unlikely. We could discover that the Moon is made of silver and that Mars has oil reserves, so you can't count it out. (I am kidding, obviously, about the Moon. But Halliburton is dead serious about securing Martian drilling rights. No joke.)

All of this brings us back to Liquid Computing, a brand spanking new server maker founded in Ottawa, Ontario, by a bunch of telecommunications systems experts who know a thing or two about lashing together servers and building low-latency networks. These telecom nerds have taken Sun Microsystems' old adage that "the network is the computer" to heart, and have put a high-speed network at the heart of a cluster of servers. Liquid Computing was founded in 2003, at the height of the latest IT recession, with the task of creating a new server architecture that would deliver lots of processing power and high-bandwidth, low-latency connections between processors, memory, and I/O subsystems at prices substantially lower than those of big SMP machines.

The company was founded by Brian Hurley, who worked for Canadian telecom giant Nortel for two decades, rolling out the infrastructure behind the new data, optical, and wireless services that Nortel delivered to the market. Hurley is the company's CEO, and his co-founder, Mike Kemp, is its chief technical officer. Kemp is in his third decade of building high-end computer systems, having worked for both Nortel and the U.S. Defense Advanced Research Projects Agency (DARPA), the birthplace of the Internet, and he holds numerous patents for multiprocessor systems, scalable communications, and high-availability switching. The tech and sales teams that Liquid Computing has put together have breadth and depth, and the company is backed by several venture capitalists, including VenGrowth Capital Partners, ATA Ventures, Business Development Bank of Canada, Export Development Canada, and Axis Investment Fund. Adam Chowaniec, the chairman of the board and an executive in residence at VenGrowth, has put his own money into the venture as well. Liquid Computing got a round of seed funding in May 2004, followed by $14 million in Series A funding in May 2005.

This may not be your normal server development team, but that is one of the things that makes the LiquidIQ system so interesting. And as you probably expected, the LiquidIQ machine that just went into alpha testing is based on Opteron processors from Advanced Micro Devices. The idea behind LiquidIQ is very simple, although creating it is probably not so easy--or someone would have done it by now. Simply put, the LiquidIQ server is a collection of cell boards based on Opteron processors with an interconnect so fast that the machine can be configured as a normal cluster of one- or two-socket servers to run HPC workloads, as one or many SMP servers (clustered together or running standalone), or as a hybrid of these two approaches. The interconnect that glues this all together, and that allows the personality of the server or servers under the skin of the LiquidIQ to be changed dynamically, is called IQInterconnect. This is obviously the secret sauce. It presents a global, non-coherent memory space that can be made coherent for SMP work, or left non-coherent to support MPI message passing, as real Linux and Unix clusters do today.
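
To make the cluster personality concrete, here is a minimal MPI sketch in C of the message-passing style that the non-coherent mode supports. To be clear, this is generic MPI code, not Liquid Computing's own programming interface, which has not been published; it simply shows the model that a LiquidIQ cluster personality would run.

    /* Minimal MPI message-passing sketch. Generic MPI code of the kind a
     * cluster personality would run; not Liquid Computing's own API.
     * Compile with mpicc, launch one process per node with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        if (rank == 0) {
            /* Rank 0 explicitly sends a token to each worker over the
             * interconnect; no memory is shared between processes. */
            for (int dest = 1; dest < size; dest++) {
                int token = dest * 100;
                MPI_Send(&token, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
            }
        } else {
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank %d received token %d\n", rank, token);
        }

        MPI_Finalize();
        return 0;
    }

In the SMP personality, by contrast, the same work could be done by ordinary threads sharing coherent memory, with no explicit sends and receives at all; that is the trade-off the configurable interconnect is meant to let customers make on the fly.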


"We built this to allow people to have a large number of architectures at the same time," explains Hurley. The initial release of the LiquidIQ server will have a fairly large chassis that supports 10 processor blades in the front, another 10 processor blades in the back, and 10 I/O and interconnect blades in both the front abd back underneath the processor blades. Each processor blade supports four dual-core Opteron 800 Series processors, with 16 GB of main memory per socket. Two chasses can fit in a standard rack, which will hold 320 Opteron cores; as many as a dozen chasses can be lashed together into a single system. Hurley says that the SMP scalability of the box is limited by the scalability inherent in the operating system, not in the box, and for Linux, the first operating system to run on the LiquidIQ platform, that essentially means 16-core SMP scalability. (On some architectures, you can push SMP scalability to 32 or 64 cores, and it is likely that Liquid Computing can do some tweaks to push the envelope there if customers need it.) The LiquidIQ chassis has 200 Gb/sec of aggregate I/O bandwidth to the outside world, which can be used to link to other devices through Gigabit Ethernet, 10G Ethernet, or Fibre Channel interfaces. The multipath connection between processors can deliver 100 GB/sec of bandwidth from one processor blade to another, and the proprietary interconnect that makes up the IQInterconnect is a 16 GB/sec link that has a latency of under 2 microseconds. "For us in the telecom space, this is business as usual. We have built large, scalable systems for years," brags Hurley. "Everybody says that they wish they had the LiquidIQ today." The whole thing is controlled by an out-of-band system management server, which can change the personality of the servers and partition the LiquidIQ machine on the fly from the outside.

Hurley says that Liquid Computing, which has moved its headquarters to Los Gatos in Silicon Valley but has kept its development labs in Ottawa, will begin beta testing in February on modest configurations of the LiquidIQ box. Because the server was designed around the future "Rev F" Opteron processors (which have the "Pacifica" virtualization features and perhaps faster HyperTransport links), the machine will not become generally available until August. (The Rev F Opterons are expected around mid-2006, and it will take some time to get everything certified and ready.)

Liquid Computing will support Red Hat Enterprise Linux at first, and will quickly add Microsoft Windows and Novell SUSE Linux Enterprise Server. The company has no plans to support either Solaris 10 or OpenSolaris, the Unix variants from Sun Microsystems, though doing so is theoretically possible. While Liquid Computing has not determined its pricing yet, Hurley says that any customer who is looking to buy four four-socket servers should take a look at the LiquidIQ box first.


RELATED STORY

Fabric7 Creates Flexible Opteron Server for Linux, Windows

Sponsored By
LINUX NETWORX

Clusterworx® Whitepaper

High performance Linux clusters can consist of hundreds or thousands of individual components. Knowing the status of every CPU, memory module, disk, fan, and other component is critical to ensuring the system is running safely and effectively.

Likewise, managing the software components of a cluster can be difficult and time consuming for even the most seasoned administrator. Making sure each host's software stack is up to date and operating efficiently can consume much of an administrator's time. Reducing this time frees up system administrators to perform other tasks.

Though Linux clusters are robust and designed to provide good uptime, occasionally conditions lead to critical, unplanned downtime. Unnecessary downtime of a production cluster can delay a product's time to market or hinder critical research.

Since most organizations can't afford these delays, it's important that a Linux cluster comes with a robust cluster monitoring tool that:

• Provides essential monitoring data to make sure the system is operational.
• Eliminates repetitive installation and configuration tasks to reduce periods of downtime.
• Provides powerful features, but doesn't compromise on usability.
• Automates problem discovery and recovery for would-be critical events.

This paper discusses the features and functions of Clusterworx® 3.2. It details how Clusterworx® provides the necessary power and flexibility to monitor over 120 system components from a single point of control. The paper also discusses how Clusterworx® reduces the time and resources spent administering the system by improving software maintenance procedures and automating repetitive tasks.

High Performance Monitoring

Each cluster node has its own processor, memory, disk, and network that need to be independently monitored. This means individual cluster systems can consist of hundreds or thousands of different components. The ability to monitor the status and performance of all system components in real time is critical to understanding the health of a system and to ensure it's running as efficiently as possible.

Because so many system components need to be monitored, one of the challenges of cluster management is to efficiently collect data and display system health status in an understandable format. For example, let's say a cluster system has 100 nodes and is running at 97 percent usage. It's very important to know whether 100 nodes are running at 97 percent usage or whether 97 nodes are running at 100 percent usage while three nodes are down.
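
A toy sketch makes the distinction concrete. The node counts and utilization figures below are the hypothetical ones from the example above, not output from Clusterworx itself:

    /* Toy illustration of why an aggregate utilization number is ambiguous.
     * Two hypothetical 100-node clusters both average 97 percent usage, but
     * only a per-node pass reveals that one of them has three dead nodes. */
    #include <stdio.h>

    #define NODES 100

    static void report(const char *name, const double usage[NODES])
    {
        double sum = 0.0;
        int down = 0;
        for (int i = 0; i < NODES; i++) {
            sum += usage[i];
            if (usage[i] == 0.0)
                down++;   /* a zero reading stands in for a dead node */
        }
        printf("%s: average %.1f%%, nodes down: %d\n",
               name, sum / NODES, down);
    }

    int main(void)
    {
        double healthy[NODES], degraded[NODES];
        for (int i = 0; i < NODES; i++) {
            healthy[i]  = 97.0;                  /* 100 nodes at 97 percent */
            degraded[i] = (i < 3) ? 0.0 : 100.0; /* 3 down, 97 at 100       */
        }
        report("healthy cluster ", healthy);
        report("degraded cluster", degraded);
        return 0;
    }

Both clusters report the same 97 percent average; only the per-node pass reveals the three dead nodes, which is exactly the drill-down a cluster monitoring tool has to provide.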

Clusterworx® provides real-time analysis of over 120 essential system metrics from each node. Data is displayed in easy-to-read graphs, thumbnails, and value tables. Clusterworx® collects data from groups of nodes to spot anomalies, and users can then drill down to a single-node view to investigate problems. This allows users to determine exactly what the problem is before taking corrective action.

Clusterworx® also tracks the power and health state of each node and displays its status using visual markers in a node tree view throughout the user interface. Power status shows whether the node is on, off, provisioning, or in an unknown state. The health state tracks informational or warning messages and critical errors. Health state messages are displayed in a message queue on the interface.

Clusterworx®'s comprehensive monitoring and easy-to-read charts and graphs allow users to quickly assess the state of each node and the overall system at a glance, while providing the necessary information to make informed decisions about the cluster system.

To read the rest of this whitepaper, please visit www.linuxnetworx.com


Editor: Timothy Prickett Morgan
Contributing Editors: Dan Burger, Joe Hertvik, Kevin Vandever,
Shannon O'Donnell, Victor Rozek, Hesh Wiener, Alex Woodie
Publisher and Advertising Director: Jenny Thomas
Advertising Sales Representative: Kim Reed



