Reconsidering SAN in Wake of SCSI Disk’s End
August 17, 2009 Alex Woodie
In 11 days, IBM will officially stop selling SCSI disks for System i and Power Systems servers, leaving serial attached SCSI (SAS) and solid state drives (SSD) as the only options for internal storage on the platform. While support for SCSI disks will continue through 2010, storage experts recommend taking a hard look at external storage and storage area networks (SANs), not only for enterprise customers with big workloads, but for mid-sized shops, too.
If you’ve been reading this newsletter, IBM’s transition from SCSI to SAS drives for internal storage should come as no surprise. Since the launch of the consolidated Power Systems boxes and IBM i 6.1 in April 2008, IBM has talked about the approaching end of SCSI and the new direction toward SAS disks and SSDs for internal storage. (External storage, through Fibre Channel-attached drives running in a SAN, is not affected.)
But the transition from battle-proven internal SCSI technology to new SAS and SSD storage–in fact, any transition of storage technology on the platform–is not as straightforward as if we were talking about Unix, Windows, or Linux. That’s because on the Power Systems platform running the i 6.1 operating system we have something called single-level storage that blends all physical disk and memory storage capacity into one big storage pot that’s managed by the operating system. And while single-level storage brings many advantages to the platform, such as ease of programming and administration, it complicates the storage sizing equation.
The problem has to do with the relationship between disk arms and application performance. A decade ago, when disk drives were in the 8 GB to 36 GB range, users had to load lots of individual disk drives into their servers to meet their storage needs, and as a result they also got a lot of disk arms to support their I/O requirements. However, when users started buying IBM disk drives with 141 GB, 282 GB, and larger capacities, they started to run into I/O problems. As customers installed fewer drives with bigger capacities to meet storage needs, the ratio of disk arms to total storage decreased. To compensate, some customers overbought on big disk, just to crank up the number of disk arms.
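The arms-versus-capacity tradeoff described above is easy to see with some back-of-the-envelope arithmetic. The sketch below is illustrative only: the per-arm IOPS figure is an assumption (a rough number for a single 15K RPM spindle), not a figure from the article.

```python
# Illustrative arithmetic: how moving to larger drives shrinks the number
# of disk arms, and with it the aggregate random I/O capability.
DRIVE_IOPS = 150  # assumed random-I/O rate for one disk arm (rough figure)

def config(total_gb, drive_gb):
    """Arms needed to hold total_gb on drive_gb disks, and aggregate IOPS."""
    arms = -(-total_gb // drive_gb)  # ceiling division
    return arms, arms * DRIVE_IOPS

# The same 5 TB of storage on three drive generations:
for size_gb in (36, 141, 282):
    arms, iops = config(5000, size_gb)
    print(f"{size_gb:>4} GB drives: {arms:>3} arms, ~{iops} IOPS")
```

Holding capacity constant, each doubling of drive size roughly halves the arm count and the aggregate IOPS, which is why shops that consolidated onto big drives saw applications slow down even though total storage was unchanged.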
IBM’s replacement SAS drives are likely just as susceptible to the System i’s performance peccadilloes as SCSI drives. IBM has cranked up the sizes of the read and write caches on the SAS controllers to compensate, but since the larger issue is with single-level storage access methods and the number of disk arms on hand to fetch and write data, it doesn’t appear there’s a lot they can do. SSD drives, with their witheringly fast I/O and relatively small sizes, are not as susceptible, but they are prohibitively expensive.
Limits of Internal Storage
With the end of any storage technology and the transition to something new, it is worth re-evaluating the playing field and considering all the available options. In some cases, a step up to SAN technologies may be appropriate.
For System i storage expert Rick Aguiar, the internal storage transition presents a perfect opportunity to explore the benefits of external SAN storage in System i and Power Systems environments. Aguiar recently retired from the AS/400 business at storage giant EMC after a 21-year career, and took a new job as vice president of sales at Entrepid, a consultancy in the Boston area that sells EMC disk arrays and smaller IBM Power Systems servers.
Aguiar recently wrote a white paper, titled IBM i Host Environments–Understanding Disk Performance, that discusses some of the performance pitfalls that are lurking behind System i internal storage. That white paper will soon be posted to the company’s Web site at www.entrepid.com.
Aguiar delicately lays out his case for SANs–particularly the kind that says “EMC” on the side, but also for IBM’s DS-series SANs. He uses performance metrics to make his point in favor of SANs. The metric Aguiar encourages System i shops to focus on is I/O operations per second, or IOPS, per device. It may be tempting to look at disk response time, or DRT, to get a feel for how a System i disk drive is performing. But it can be misleading, Aguiar says.
“We see customers with 50 and 60 internal disk drives upgrade to a new Power6 or Power5+, and all of a sudden they have 10 to 30 drives,” Aguiar says. “And IBM says, ‘Look at your DRT, you’re performing well,’ and all the tools say you’re performing well. But the application is running slowly. They scratch their head. They don’t understand why.”
The problem, of course, is that while the total storage capacity has stayed about the same, that capacity is spread across fewer disk arms, leading to lower I/O throughput. One way to solve the problem is to add more arms–that is, more disk drives. “But now you’re talking about more towers, more busses, more RAID cards. It’s a costly proposition,” he says. “We see many customers traditionally at 5 TB or 6 TB running 25 TB just because of the disk arm issue. In a SAN you don’t have that challenge.”
Not Being SANctimonious, But . . .
The SAN architecture brings certain storage advantages to customers running System i and other servers. The most obvious benefit is that customers can consolidate storage for all of their servers onto a single SAN, eliminating the need to buy and maintain separate silos of storage for System i, Windows, Unix, and Linux servers. SANs also have the capability to virtualize storage by carving capacity into logical volumes, or LUNs. There are additional HA/DR features that SANs open up, including IBM’s GlobalMirror and MetroMirror technologies and EMC’s TimeFinder and SRDF facilities.
But it’s really the SAN’s capability to virtualize access to storage that gets the most attention here. When combined with the fast throughput of Fibre Channel connections–made faster by the elimination of I/O processors in i 6.1-generation hardware and by new FC adapters–SANs are able to emulate smaller, older-generation disk drives to jack up the number of disk arms visible to the System i, thus solving that particular performance problem.
Aguiar gives this example: “I can go in there and put in one hundred 141 GB physical drives, and have 100 disk arms. Or I can go in with a SAN with 50 disk arms–50 physical disk drives with 146 GB each–and carve them down into 36 GB volumes, then tell the host ‘I’ve got 100 36 GB volumes,’ but I’m only using 50 physical drives in the SAN.” Yes, you would be replacing physical disk arms with logical disk arms with a SAN. But, Aguiar maintains, the System i doesn’t know the difference.
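The carving arithmetic in Aguiar’s example can be sketched as below. This is a minimal illustration of the numbers he quotes, not any vendor’s actual LUN-provisioning API; the helper name is made up.

```python
# Sketch of the LUN-carving arithmetic in Aguiar's example: a pool of
# physical drives is presented to the host as many smaller logical volumes.
def lun_stats(physical_drives, drive_gb, lun_gb, luns_presented):
    """Pool size, max LUNs the pool could hold, and GB actually allocated."""
    pool_gb = physical_drives * drive_gb
    max_luns = pool_gb // lun_gb
    used_gb = luns_presented * lun_gb
    return pool_gb, max_luns, used_gb

# 50 physical 146 GB drives, presented as 100 logical 36 GB volumes:
pool, max_luns, used = lun_stats(50, 146, 36, 100)
print(f"{pool} GB pool, room for {max_luns} LUNs, {used} GB presented")
```

The host sees 100 “arms” backed by only 50 physical spindles, which is the point: the logical arm count, not the physical drive count, is what the System i’s storage management works with.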
It used to be that SANs were too expensive and complex for all but the biggest enterprises. But that is changing as mid-sized companies today begin to experience the same types of problems that previously only affected larger outfits, making SANs a practical solution. “We’re seeing a trend where people are looking at the SAN, and trying to move in that direction,” Aguiar says. “Many times they need a little guidance, a little consulting, and that’s where we step in.”
And it’s not just EMC and the EMC partner network that are looking to push SANs, but IBM itself. Nearly two years ago it advised its Large User Group (LUG) members to consider SANs for storage for Power Systems 570 and 595 servers, according to Aguiar.
That brought some bittersweet redemption for him. “Here’s a situation where, as an EMC employee, I was championing the SAN for many, many years, and IBM was throwing us under the bus. And we still went out there and provided SAN solutions to those high-end customers. So now IBM is toeing the line and telling the same story. . . .[But I] don’t think [IBM’s SAN] can stand toe to toe to some of the EMC SAN solutions with DMX and VMAX.”
This article has been corrected. The name of the Boston-area company selling EMC solutions is Entrepid, not Envision, and its Web site is www.entrepid.com. IT Jungle regrets the error.