The Linux Beacon
Volume 4, Number 10 -- March 20, 2007

Transaction Processing Performance Council Launches TPC-E Benchmark

Published: March 20, 2007

by Timothy Prickett Morgan

The Transaction Processing Performance Council (TPC), the well-known industry consortium that for nearly two decades has created and audited online transaction processing benchmarks on behalf of server, operating system, and database software makers, has finally brought an updated transaction processing test to market. The TPC-E test, which becomes available this week, has been ratified by the member companies and is the first TPC test with standardized code that is freely available to anyone in the IT community for gauging the performance of systems.

That latter bit, it may turn out, could be the most important thing about TPC-E, aside from the fact that the TPC-E test will make it a lot harder for vendors to get away with shenanigans than TPC-C did. Up until now, all TPC tests were administered by members of the TPC consortium, and official tests could not be run by third parties, such as other vendors or IT shops themselves. Not so with TPC-E, which is a kicker to the TPC-C test, or with the future TPC-DS, which will be a kicker to the TPC-H and TPC-R decision support benchmarks (themselves descendants of the earlier TPC-D test). The vendors have controlled the benchmarks up until now, and they could implement the code behind the various TPC tests as they saw fit.

With TPC-E, according to Mike Molloy, senior manager for Dell's Enterprise Performance Lab and chairman of the TPC, the TPC-E code is the same for everyone. And instead of getting into the whole religious war between Java and C#, the code is implemented in C++, a compiled (rather than interpreted) language that offers more performance and therefore should be acceptable to all players. The code has APIs to link into databases and middleware, and runs to 32,718 lines (including some blanks).

The TPC-C test simulated the warehouse (as in forklifts, not business intelligence) operations of a distributor, including order processing, inventory, and other operations, and counted the number of new orders the system could process (in transactions per minute) while supporting a set mix of transactions. Top-end Unix boxes can deliver 4 million TPM these days on the TPC-C test, which is a lot more transactions than such a box in the field can usually do. This just goes to show how relatively skinny and simple the TPC-C code is.

The TPC-E test simulates the transaction processing of a Web-based brokerage house whose customers come in to trade stocks, which the brokerage then trades as directed on stock exchanges. Customers are simulated using a snapshot of 2000 census data (name, address, phone number, and so forth) from the United States and Canada, and the data for stock transactions is based on a snapshot of data from the NYSE and NASDAQ markets. With TPC-C, the data was gibberish created by random generators, but this data is real. The use of gibberish for data meant that there was no performance penalty for carving data up into partitions and steering it toward cell boards in a particular server; in the real world, where data has a shape, that is not going to work.

The TPC-E test comprises 33 tables with a total of 133 columns, compared to the nine tables and 92 columns in the TPC-C test. There are many different data types in the test (you name it, it is in there), and the databases have 33 primary keys and 50 foreign keys, compared to eight primary keys and nine foreign keys in the TPC-C test.

"People are doing more complicated things in memory and CPUs than they did in the past, and the TPC-E test reflects this," says Molloy. "Customers are keeping track of more data and they are making more correlations between data sets."

Moreover, the TPC-E test checks constraints and enforces referential integrity, which many companies do within their OLTP workloads and which the TPC-C test did not. It also requires RAID 1 or RAID 5 data protection for disk files, and has nowhere near the disk-to-memory-to-CPU ratios that the TPC-C test had. With TPC-C, big boxes had to have tens or hundreds of terabytes of disk capacity, which is not the amount that real servers out in the data center have.

The TPC-E test has 10 different transactions--looking up historical stock data, taking a stock feed, getting a customer's position in a stock, and so on--and counts the transaction called a trade result, which is the confirmation received from the stock market that a trade was executed. To get smaller numbers--and because this is a denser benchmark--the TPC-E test will be measured in transactions per second, or TPS.

Some people get indignant about the way vendors learn over time how to tune their applications and systems to do better on benchmark tests. They add lots of CPU power, memory, and I/O bandwidth, for instance, and suddenly boxes go a lot faster. But, by another way of thinking, the benchmark drove innovation, and all server customers benefited from it. The issue comes at the tail end of any benchmark's life, when results get far out of whack relative to real-world workloads because the code in a test--such as the TPC-C test, which debuted in 1992--is too simple.

While the TPC-E test itself may be more complex, the standardization of code, the reduction in disk capacity requirements, and other factors were designed to make the TPC-E test a lot easier to administer, and Molloy is therefore hoping it will be more widely used than past TPC tests have been. The test auditors will be approved in a month or so, and Molloy expects that vendors will roll out TPC-E results by summer.

The TPC-E test was in preliminary stages in late 2004, and was expected to be ratified in 2005. Better late than never, as the saying goes. And it will be interesting to see who sweats it out running TPC-E. It is reasonable to expect some changes in the server rankings.


RELATED STORY

New TPC Benchmarks Are on the Horizon







Editor: Timothy Prickett Morgan
Contributing Editors: Dan Burger, Joe Hertvik, Kevin Vandever,
Shannon O'Donnell, Victor Rozek, Hesh Wiener, Alex Woodie
Publisher and Advertising Director: Jenny Thomas
Advertising Sales Representative: Kim Reed
Contact the Editors: To contact anyone on the IT Jungle Team
Go to our contacts page and send us a message.



Subscription Information:
You can unsubscribe, change your email address, or sign up for any of IT Jungle's free e-newsletters through our Web site at http://www.itjungle.com/sub/subscribe.html.

Copyright © 1996-2008 Guild Companies, Inc. All Rights Reserved.
Guild Companies, Inc., 50 Park Terrace East, Suite 8F, New York, NY 10034

Privacy Statement