Liquid Computing Jumps into the Servers with a Big Splash
by Timothy Prickett Morgan
Every so often in the server business, some technology or economic change comes along that allows upstarts to take a run at the old guard. When technology changes coincide more or less with arduous economic circumstances, that cocktail can be a quite powerful influence. We may be in the thick of such a transition now, and that is why we are suddenly seeing new kinds of servers entering the market. Liquid Computing, which just put its servers into alpha, is one such vendor.
Timing is everything in the IT market, because if you bring the right technology to the market at the wrong time, you fail. And timing your entry to a relatively bad economy, oddly enough, can be a big help if you have a technology that aims to unseat incumbents. Look at the history of systems and servers in the IT market. Let's just talk about the past three decades or so to make the point.
The oil crisis of 1973 was caused less by the Saudi Arabian oil embargo and more by the wars in the Middle East and the belated realization that the United States had hit peak oil production around 1971 or so. By late 1975, when the Western economies were in a full tailspin, Gene Amdahl, one of the key designers of the System/360 mainframe, was getting close to launching the world's first clone mainframes. As the U.S. economy dragged along and hit another wall in 1979, the Iranian Revolution got underway, producing a second massive oil shock and, eventually, another recession. IBM was not only fighting off clone mainframes, but a proliferation of minicomputers--including its own System/3X machines--that offered mainframe-class computing for a lot less than what mainframes cost. The roaring economies of the late 1980s ran out of gas in the fall of 1987 and hit a wall in 1988, and it may be no coincidence that this was also when the next round of minicomputers was coming out from IBM, Digital, Hewlett-Packard, and others. These minicomputers--some of which were running Unix, not just proprietary operating systems--were now powerful enough to handle a lot of mainframe-sized workloads. The open systems war had started, and so had a second round of plug-compatible mainframe innovation. Even more workloads shifted off big iron to little iron. By 1991, IBM's mainframe business was on the rocks, and Unix servers were on their way to utterly dominating the server market for the next decade. When the economy had improved by 1993--in part because of ridiculously low oil prices--PCs were powerful enough to be useful to business, and the client/server revolution got under way. Companies started figuring out how to make PCs take work off their central machines, since PCs are a lot less expensive per unit of processing power than a minicomputer, and ridiculously less expensive than a mainframe.
As the economies of the world just roared in the late 1990s, Unix grew, and Windows and Linux burst onto the scene to challenge it for hegemony in the server space. Unix was holding its own until the economy hit the wall in early 2000 and was hammered by the 9/11 terrorist attacks 18 months later. By then, the first 64-bit Opteron processors were on the horizon, 32-bit Xeon processors were very powerful and very inexpensive, and everyone was looking around, trying to figure out how to spend a lot, lot less for servers. Unix took a severe beating, and Wintel and Lintel iron ascended. I happen to believe that we are right now at global peak oil production, and that there is a hell of an economic shock coming. Incumbent server architectures are going to be under tremendous pressure from the economic situation.
The good news, if you like X64 architectures, is that there is a substantial amount of innovation going on in this space. That was a long setup to say that companies like Liquid Computing and Fabric7, which are building big, sophisticated machines out of Opteron processors but adding real innovation of their own, may have a much bigger shot at the server business than they had anticipated. If the oil shock doesn't come--and I would be very, very, very happy to be wrong on this one--then they still have interesting iron that will be appealing to many customers. The appeal will only be amplified by arduous economic circumstances, should they arise. And, to be honest, the protracted pressure from business owners, CEOs, and CFOs to "do more with less" in the IT budget is essentially a recession focused on one corner of the business world: as far as IT is concerned, the economy is in recession even when other parts of the business are seeing good growth. This time around, the economy can grow on the whole (for a lot of reasons), but that may not translate into good times for IT. In fact, I can just about guarantee that the purse strings will never again be as loose as they were during the late 1990s--unless we see an economic expansion of incredible proportions, which seems unlikely. We could discover the Moon is made of silver and Mars has oil reserves--so you can't count it out, though. (I am kidding, obviously, about the Moon. But Halliburton is dead serious on securing Martian drilling rights. No joke.)
All of this brings us back to Liquid Computing, a brand spanking new server maker that was founded in Ottawa, Ontario, by a bunch of telecommunications systems experts who know a thing or two about lashing together servers and building low-latency networks. These telecom nerds have taken Sun Microsystems' old adage that "the network is the computer" to heart, and put a high-speed network at the heart of a cluster of servers. Liquid Computing was founded in 2003, at the height of the latest IT recession, with the task of creating a new server architecture that would deliver lots of processing power and high-bandwidth, low-latency connections between processors, memory, and I/O subsystems at prices substantially lower than big SMP machines. Liquid Computing was founded by Brian Hurley, who worked for Canadian telecom giant Nortel for two decades, rolling out the infrastructure behind the new data, optical, and wireless services that Nortel delivered to the market. Hurley is the company's CEO, and his co-founder, Mike Kemp, is the company's chief technical officer. Kemp is in his third decade of building high-end computer systems, having worked for both Nortel and the U.S. Defense Advanced Research Projects Agency (DARPA), the birthplace of the Internet. Kemp has generated numerous patents for multiprocessor systems, scalable communications, and high-availability switching. The tech and sales teams that Liquid Computing has put together have breadth and depth, and the company is backed by several venture capitalists, including VenGrowth Capital Partners, ATA Ventures, Business Development Bank of Canada, Export Development Canada, and Axis Investment Fund, as well as Adam Chowaniec, the company's chairman of the board, who is an executive in residence at VenGrowth and has put his own money into the venture alongside VenGrowth's. Liquid Computing got a round of seed funding in May 2004, followed by $14 million in Series A funding in May 2005.
This may not be your normal server development team, but that is one of the things that makes the LiquidIQ system so interesting. And as you probably expected, the LiquidIQ machine that just went into alpha testing is based on Opteron processors from Advanced Micro Devices. The idea behind LiquidIQ is very simple, although creating it is probably not so easy--or someone would have done it by now. Simply put, the LiquidIQ server is a collection of cell boards based on Opteron processors that have an interconnect that is so fast that the server can be configured as a normal cluster of one- or two-socket servers to run HPC workloads, one or many SMP servers clustered together or running in standalone mode, or a cluster that is a hybrid of these two approaches. The interconnect that glues this all together and allows it to dynamically change the personality of the server or servers under the skin of the LiquidIQ server is called IQInterconnect. This is obviously the secret sauce. This interconnect presents a global, non-coherent memory space that can be made coherent for SMP or which can support MPI message passing like real Linux and Unix clusters do today.
"We built this to allow people to have a large number of architectures at the same time," explains Hurley. The initial release of the LiquidIQ server will have a fairly large chassis that supports 10 processor blades in the front, another 10 processor blades in the back, and 10 I/O and interconnect blades in both the front and back, underneath the processor blades. Each processor blade supports four dual-core Opteron 800 Series processors, with 16 GB of main memory per socket. Two chassis can fit in a standard rack, which will hold 320 Opteron cores; as many as a dozen chassis can be lashed together into a single system. Hurley says that the SMP scalability of the box is limited by the scalability inherent in the operating system, not in the box, and for Linux, the first operating system to run on the LiquidIQ platform, that essentially means 16-core SMP scalability. (On some architectures, you can push SMP scalability to 32 or 64 cores, and it is likely that Liquid Computing can do some tweaks to push the envelope there if customers need it.) The LiquidIQ chassis has 200 Gb/sec of aggregate I/O bandwidth to the outside world, which can be used to link to other devices through Gigabit Ethernet, 10 Gigabit Ethernet, or Fibre Channel interfaces. The multipath connection between processors can deliver 100 GB/sec of aggregate bandwidth from one processor blade to another, and each proprietary link that makes up the IQInterconnect is a 16 GB/sec pipe with a latency of under 2 microseconds. "For us in the telecom space, this is business as usual. We have built large, scalable systems for years," brags Hurley. "Everybody says that they wish they had the LiquidIQ today." The whole thing is controlled by an out-of-band system management server, which can change the personality of the servers and partition the LiquidIQ machine on the fly from the outside.
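For those keeping score at home, the capacity figures above follow from a little multiplication. The sketch below is just back-of-the-envelope arithmetic using the numbers cited in this article; the function and constant names are illustrative, not anything from Liquid Computing:

```python
# Capacity arithmetic for a LiquidIQ configuration, per the figures
# cited above. All names here are illustrative assumptions.

CORES_PER_SOCKET = 2      # dual-core Opteron 800 Series
SOCKETS_PER_BLADE = 4     # four sockets per processor blade
BLADES_PER_CHASSIS = 20   # 10 processor blades in front, 10 in back
MEM_GB_PER_SOCKET = 16    # 16 GB of main memory per socket

def capacity(chassis_count):
    """Return (cores, memory_gb) for a system of chassis_count chassis."""
    sockets = chassis_count * BLADES_PER_CHASSIS * SOCKETS_PER_BLADE
    return sockets * CORES_PER_SOCKET, sockets * MEM_GB_PER_SOCKET

rack_cores, rack_mem = capacity(2)    # two chassis per standard rack
max_cores, max_mem = capacity(12)     # a dozen chassis lashed together

print(rack_cores, rack_mem)           # 320 cores, 2,560 GB per rack
print(max_cores, max_mem)             # 1,920 cores, 15,360 GB maxed out
```

Which is to say, a full rack yields the 320 cores Hurley cites, and a maxed-out, twelve-chassis system would span nearly 2,000 Opteron cores and some 15 TB of main memory.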
Hurley says that Liquid Computing, which has moved its headquarters to Los Gatos in Silicon Valley but which has kept its development labs in Ottawa, will begin beta testing in February on modest configurations of the LiquidIQ box. Because the server was designed around the future "Rev F" Opteron processors (which have the "Pacifica" virtualization features and perhaps faster HyperTransport links), the machine will not become generally available until August. (The Rev F Opterons are expected around mid-2006, and it will take some time to get everything certified and ready.)
Liquid Computing will support Red Hat Enterprise Linux at first, and will quickly add Microsoft Windows and Novell SUSE Linux Enterprise Server. The company has no plans to support either Solaris 10 or OpenSolaris, the Unix variants from Sun Microsystems, but this is theoretically possible. While Liquid Computing has not yet set pricing for the LiquidIQ, Hurley says that any customer who is looking to buy four four-socket servers should take a look at the LiquidIQ box first.