IBM Brings SAN Performance to Parity with Internal Arrays
Published: February 4, 2008
by Timothy Prickett Morgan
The AS/400, iSeries, and System i platform has been in the vanguard of technologies lots of times in its long history. These machines were rack-mounted long before that was cool, they always used the most advanced CMOS processor designs, and they employed asymmetric multiprocessing, small form factor disks, the densest main memory, and so on. One area where the i5/OS and OS/400 platform has not been at the front end of technology is in storage area networks, or SANs. With the announcements from IBM last week, that is about to change.
The concept of storage area networks is a bit "back to the future" for the AS/400, which used external disk arrays on enterprise-class boxes from day one back in June 1988; deskside AS/400s used internal storage. When third parties such as EMC and a number of others were encroaching on IBM's AS/400 disk business in the mid-1990s, the company switched to internal RAID controllers and internal disk arrays for all of its OS/400 servers, large and small alike. That move made it harder for these vendors to compete for a number of reasons, not least the performance edge internal arrays had over external ones. Of course, a SAN is more than an external disk array. It is a shared disk subsystem that can be carved up for many servers on the fly and managed as a single entity. The idea is to get all the storage onto one box and utilize it more heavily, just as server consolidation projects do with compute.
IBM has supported Fibre Channel adapter cards that link into SANs since OS/400 V5R1 came out in April 2001, and it substantially enhanced that support in 2002 with V5R2 and new adapters. In theory, that should have allowed giant shared storage infrastructure to be used by i5/OS and OS/400 servers as well as by mainframe, Unix, Windows, and Linux platforms. But the performance issues with Fibre Channel adapter cards and their attached SANs have compelled IBM's largest i5/OS and OS/400 shops to stick with internal arrays, and that performance differential between internal arrays and external SANs has remained an issue until now.
Last week, IBM announced that it has a new Fibre Channel adapter card, the feature 5749 PCI-X 4 Gb/sec dual-port Fibre Channel adapter, that is a companion to the existing 4 Gb/sec dual-port PCI-Express adapter card (feature 5774) that the company announced for its Power6-based 570 servers last year running AIX and Linux, but not i5/OS V5R4M5. For System i workloads, these two cards really require the new i5/OS V6R1 operating system, and they attach a System i server to IBM's high-end DS8000 disk arrays. The new cards have drivers that have been specifically tweaked to boost the performance of SANs compared to internal disks. Here's the benchmarking data that IBM is offering to prove this point:
This CPW workload benchmark was run on a Power6-based server equipped with V6R1, and shows the response time and throughput of a box using the earlier Fibre Channel adapters compared to the new IOP-less adapters. As you can see, at a 1 second average response time for transactions, the new card does approximately 30 percent more work than the old one when attached to DS8300 disk arrays. IBM does not show how an equivalently configured internal disk array would do on a similar CPW benchmark, but presumably it would do about the same as the DS8300s with the new Fibre Channel adapters.
By the way, these Fibre Channel adapter cards can link to either disk arrays or tape drives and arrays. They are not just for linking to DS8000 family products from IBM. And presumably, EMC Symmetrix and DMX arrays can also be linked to the adapter cards--and if not today, then at some point in the near future. The new adapters also take up fewer slots in a machine. Prior Fibre Channel adapters required an IOP and the adapter itself to get one Fibre Channel port, while the new cards plug right into a single peripheral slot and yield two Fibre Channel ports. That's a four-to-one improvement in slot efficiency. The feature 5749 Fibre Channel card costs $3,808, the same price as the feature 5774 card announced last fall with only AIX and Linux support.
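That four-to-one figure follows directly from the slot and port counts in the announcement; a minimal sketch of the arithmetic, using only those counts:

```python
# Slot efficiency, old IOP-based setup versus new IOP-less adapters.
# Old: an IOP card plus the adapter card (2 slots) yielded 1 FC port.
# New: a single feature 5749 card (1 slot) yields 2 FC ports.

old_ports_per_slot = 1 / 2   # 0.5 ports per peripheral slot
new_ports_per_slot = 2 / 1   # 2.0 ports per peripheral slot

improvement = new_ports_per_slot / old_ports_per_slot
print(improvement)  # → 4.0, the four-to-one improvement
```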
Here's another thing IBM did last week on the SAN front. By supporting the Virtual I/O Server with i5/OS V6R1, the DS4700 and DS4800 midrange disk arrays can be linked to System i servers now, too. Virtual I/O Server has been a feature of AIX and Linux machines since AIX 5.3; it creates a virtual server in a logical partition that links to disks and tapes and is used to feed I/O to other partitions on the box. The point is that all I/O is virtualized in the machine so each partition does not need its own dedicated disk and tape adapters.
Further on the storage front, IBM is also now allowing the feature 4329 disk drives--282 GB capacity for i5/OS and spinning at 15K RPM--to be used in System i drive enclosures as well as in the EXP24 boxes used in the System x and System p product line and recently added for System i boxes. Feature 4329 costs $2,799, and is a 3.5-inch SCSI disk. IBM is also selling a 300 GB disk--essentially the same unit but only supporting AIX or Linux partitions--for $1,999. Grrrrrrr.
i5/OS V6R1 Announced Today, Ships in March
IBM Tweaks Prices on BladeCenter H and Power Blade Networking Gear
IT Shops Expect iSCSI and Fibre Channel to Co-Exist
Vendors Propose Fibre Channel Over Ethernet Standard
IBM Delivers iSCSI Connection, Pushes Blades to OS/400 Shops