SSD Performance: Be Careful Before You Buy

    November 30, 2009 Doug Mewmaw

    The other day I was at an office supply store picking up a flash drive for my wife. As a teacher, the inexpensive technology is just perfect for her storage needs. Did you chuckle when I said the technology was inexpensive? I remember when a flash drive cost over $50, and now they are practically giving them away. I purchased a 4 GB flash drive for less than $10!

    Does anyone remember what we paid for our first VCR? As a passionate golfer, I see this phenomenon in the golf club industry, too. At the beginning of the year, the top new drivers are announced with a price tag of around $500. A year later, they are all leaving the store for under $200. There is nothing better than to have a technology around long enough to see the price drop dramatically, and to benefit from waiting it out.

    I bring this up because I was thinking about a recent customer I’ve been working with. You see, this customer implemented solid state disks (SSDs). I think we can all agree that SSD is like the new iPhone craze. It’s really cool technology, but do I want to fork out that kind of money when my existing phone works just fine?

    With five kids in my family, every dollar counts, especially during a recession. I think it’s safe to say corporate America has the same mentality, especially when it comes to major expenditures. In my family, I don’t mind spending the money as long as I’m getting one thing back: bang for the buck.

    It made me wonder if my customer was getting SSD bang for the buck.

    Let’s look at what you need to do during the SSD process to ensure you’re getting the most for your dollar, and then measure the impact of a new SSD environment.

    Identify Your SSD Jobs

    For SSD implementation, the key is to see if any jobs in your system qualify as good candidates for SSD. IBM has an analyzer tool, but you can do it yourself simply by looking at your current performance data. The key is to inventory your jobs into three categories:

    1. Jobs that are good candidates for SSD.
    2. Jobs that might be good candidates for SSD.
    3. Jobs that probably would not have a performance gain with SSD.

    Here is a great real-life example:

    What Are the SSD Guidelines?

    First, look at how much a job is waiting on disk I/O, using the disk read wait average. Here are some best practice guidelines you can refer to:

    Disk Read Wait Average > 3.5 milliseconds — Jobs that are good candidates for SSD.

    Disk Read Wait Average 1.5 to 3.5 milliseconds — Jobs that may be good candidates for SSD.

    The key is to look for jobs that not only have a lot of disk read waits, but to make sure you are selecting jobs that run for a long time. In other words, who cares if a job has a high disk read wait average when it runs for only one minute? The only time I would be concerned with quick-running jobs is in an environment where the job ran thousands of times per day. In our real-life example, my customer simply wanted to cut down his nightly batch window, so his environment was one with typical long-running batch jobs.
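The triage described above can be sketched in a few lines. This is a minimal illustration, not IBM's analyzer tool: the 3.5 ms and 1.5 ms thresholds come from the guidelines above, the "unlikely to benefit" tier below 1.5 ms is inferred from the article's third category, and the 10-minute run-time floor is a hypothetical cutoff of my own.

```python
# Triage jobs for SSD using the disk read wait guidelines above:
#   > 3.5 ms    -> good candidate
#   1.5-3.5 ms  -> maybe
#   < 1.5 ms    -> unlikely to benefit (the third category, inferred)
def ssd_category(read_wait_ms):
    """Return the SSD candidacy tier for a job's disk read wait average."""
    if read_wait_ms > 3.5:
        return "good candidate"
    if read_wait_ms >= 1.5:
        return "maybe"
    return "unlikely to benefit"

def worth_a_look(read_wait_ms, run_seconds, runs_per_day=1):
    """Only flag jobs that also consume meaningful wall time overall."""
    # Hypothetical floor: at least 10 minutes of total daily run time.
    long_running = run_seconds * runs_per_day >= 600
    return long_running and ssd_category(read_wait_ms) != "unlikely to benefit"

print(ssd_category(4.2))          # good candidate
print(ssd_category(1.5))          # maybe
print(worth_a_look(4.2, 45))      # False: high wait, but runs under a minute
print(worth_a_look(1.5, 4378))    # True: a long-running "maybe" job
```

The second function captures the point about run time: a high disk read wait only matters when the job runs long enough (or often enough) for the saved waits to add up.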

    The next step for my customer was to implement the SSD environment. My customer went from 72 SAS drives to 60 SAS drives plus four SSDs. Now, let’s measure the impact of the new disk environment.

    Measuring the SSD Impact

    Before measuring the SSD impact, let’s first understand the job stream that was chosen for SSD. Below we see a batch job summary report:

    Some observations:

    1. Job ran 21 times in the month.
    2. Average run time was 4,378 seconds (about 73 minutes).
    3. Average CPU was about 10 percent.
    4. The job has a lot of I/O.
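As a quick unit check (a trivial sketch; the 4,378-second figure comes from the summary report above), the average run time converts to roughly 73 minutes:

```python
# Convert the reported average run time from seconds to minutes.
avg_run_seconds = 4378
minutes, seconds = divmod(avg_run_seconds, 60)
print(f"{avg_run_seconds} s = {minutes} min {seconds} s")
# 4378 s = 72 min 58 s
```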

    Next, the customer moved his files onto the SSDs, so we can do a before/after analysis to see if he truly got bang for the buck. Let’s look at this analysis now:

    The Before Picture (Baseline Data)

    Next, let’s look at a job disk read wait average before the SSD environment was implemented:

    Disk Read Wait Average Before SSD

    Measuring the job’s seven intervals, we see that the disk read wait average was 1.5 milliseconds (1.494). It’s interesting that the customer chose a job that was categorized as a “maybe” in regard to the potential SSD performance gain. That is, the above graph shows that all intervals were under the 3.5-millisecond disk read wait best practice guideline. The maximum disk read wait was under the guideline as well (2.3 milliseconds).

    This is a great real-life example where we can see if the customer made the right SSD decision.

    Next, we look at the potential performance gain for this job:

    I like this graph because it illustrates what I would call a best-case scenario performance improvement. In other words, it shows exactly what can be gained with SSD. We see from start to finish, the job waited over 3,000 seconds due to disk waits. What does this mean? In theory, if we moved this job and its related files into an SSD environment, we have the potential to save over 50 minutes in the job run time.
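The arithmetic behind that best-case estimate is worth spelling out (figures taken from the discussion above): the accumulated disk wait is the ceiling on what eliminating disk reads could ever give back.

```python
# Best-case SSD gain: if all disk read wait were eliminated, the job
# could shorten by at most its accumulated disk wait time.
total_disk_wait_seconds = 3000   # disk wait reported across the job's run
avg_run_seconds = 4378           # average run time from the summary report

best_case_savings_min = total_disk_wait_seconds / 60
ceiling_pct = 100 * total_disk_wait_seconds / avg_run_seconds

print(f"best case: ~{best_case_savings_min:.0f} minutes saved "
      f"(up to {ceiling_pct:.0f}% of the run)")
# best case: ~50 minutes saved (up to 69% of the run)
```

Of course, this is a theoretical ceiling: SSD reduces disk read waits, it does not eliminate them, so the realized savings will always be smaller.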

    Since the customer’s number one goal was to shorten the end of day processing, it’s important to understand our baseline data. Below we see the job run time before the SSD project was implemented.

    The job run time statistics are as follows:

    Job Started: 12:17

    Job Ended: 2:01

    Total Run Time: 1 hour, 44 minutes

    The After Picture (SSD Data)

    Following is a job disk read wait average after the SSD environment was implemented:

    Disk Read Wait Average after SSD

    Measuring the job’s seven intervals again, we see that the disk read wait average decreased to only 1.2 milliseconds, which represents a 20 percent improvement. The maximum disk read wait decreased from 2.3 milliseconds to 2.0 milliseconds (a 13 percent improvement).

    But did we get bang for the buck? Was the customer’s goal of decreasing the job run time, thus shortening the end of day processing, met? Let’s look at the job run time stats after SSD was implemented.

    The job run time statistics are as follows:

    Job Started: 12:01

    Job Ended: 1:43

    Total Run Time: 1 hour, 42 minutes
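Recomputing the elapsed times from the clock stamps makes the before/after comparison concrete. A small sketch (it treats the 12:xx stamps as just after midnight, since this is overnight batch processing):

```python
# Elapsed run time from the before/after clock stamps, in minutes.
def minutes_since_midnight(clock):
    hh, mm = map(int, clock.split(":"))
    if hh == 12:          # 12:xx a.m. is hour 0 in this overnight window
        hh = 0
    return hh * 60 + mm

def elapsed_minutes(start, end):
    return minutes_since_midnight(end) - minutes_since_midnight(start)

before = elapsed_minutes("12:17", "2:01")   # 104 minutes (1 hour, 44 minutes)
after = elapsed_minutes("12:01", "1:43")    # 102 minutes (1 hour, 42 minutes)
improvement_pct = 100 * (before - after) / before

print(before, after, round(improvement_pct, 1))
# 104 102 1.9
```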

    Did the Customer Get SSD Bang For the Buck?

    This falls into that good news/bad news scenario. The good news is that with SSD, the disk read wait average definitely improved by 20 percent.

    The bad news is that the job run time improved by only about 2 percent. It ran only two minutes faster.

    So the answer to the question is: In this situation the customer did not receive bang for the buck they were expecting. However, it’s not because SSD is a bad idea. This real-life example indicates how important the SSD data analysis process is. Remember, the job selected was only a “maybe” in regard to the possible performance gain, and sure enough, the performance gain was minimal. Also, remember there are a lot of factors that affect performance. Did memory change in the environment? Is there a CPU bottleneck? Are more jobs processing on the system now? To measure the impact of SSD accurately, it’s important not to have an apples to oranges environment. The golden rule in measuring the impact of change is simple: Change one thing and measure the impact. Hopefully, the customer did that.

    And for the record, I am one of those people who thought the iPhone craze was just silly. I didn’t need a fancy phone when my existing phone worked just fine. Of course, I didn’t anticipate getting an iPhone for Father’s Day. Truth be told, after having an iPhone for months, I can’t imagine life without it.

    I wonder if someday we’ll all be saying the same thing about SSD?

    Doug Mewmaw is a “jack of all trades” IT veteran who currently is director of education and analysis at Midrange Performance Group, an i business partner that specializes in performance management and capacity planning.




    Tags: mtfh_rc, Volume 18, Number 42 -- November 30, 2009




