Transaction Processing Performance Council Launches TPC-E Benchmark
Published: March 20, 2007
by Timothy Prickett Morgan
The Transaction Processing Performance Council, the well-known industry consortium that is nearly two decades old and that creates and audits a series of online transaction processing benchmarks on behalf of server, operating system, and database software makers, has finally brought an updated transaction processing test to market. The TPC-E test, which becomes available this week, has been ratified by the member companies, and it will be the first test with standardized code that is freely available to anyone in the IT community to gauge the performance of systems.
That latter bit, it may turn out, could be the most important part of TPC-E, aside from the fact that the TPC-E test will be a lot harder for vendors to pull shenanigans with than TPC-C was. Up until now, all TPC tests were administered by members of the TPC consortium, and official tests could not be done by third parties, such as other vendors or IT shops themselves. Not so with TPC-E, which is a kicker to the TPC-C test, and the future TPC-DS, which will be a kicker to the current TPC-D, TPC-H, and TPC-R decision support benchmarks. The vendors have controlled the benchmarks up until now, and they could implement the code behind the various TPC tests as they saw fit.
With TPC-E, according to Mike Molloy, senior manager for Dell's Enterprise Performance Lab and chairman of the TPC, the TPC-E code is the same for everyone, and instead of getting into the whole religious war between Java and C#, the code is implemented in C++, a compiled language (rather than interpreted) that offers more performance and therefore will be preferable to all players. The code has APIs to link into databases and middleware, and has 32,718 lines (including some blanks).
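The actual TPC-E kit code is not reproduced here, but the idea of a single fixed C++ driver calling into vendor-supplied database hooks can be sketched roughly like this (the interface and function names below are invented for illustration, not the kit's real APIs):

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: the benchmark logic stays identical for everyone,
// and each vendor plugs its database in behind an abstract boundary.
struct DbConnector {
    virtual ~DbConnector() = default;
    virtual bool execute(const std::string& txn) = 0;  // vendor implements this
};

// Stand-in connector for demonstration; a real one would talk to a database.
struct StubConnector : DbConnector {
    int calls = 0;
    bool execute(const std::string&) override { ++calls; return true; }
};

// The driver side is the standardized code: it never changes per vendor.
int run_transactions(DbConnector& db, int n) {
    int ok = 0;
    for (int i = 0; i < n; ++i)
        if (db.execute("TRADE-ORDER")) ++ok;
    return ok;
}
```

The design point is the separation: because the driver above is compiled, common code, a vendor can only influence results through its connector and database, not by rewriting the benchmark itself.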
The TPC-C test simulated the warehouse (as in forklifts, not business intelligence) operations of a distributor, including order processing, inventory, and other operations, and counted the number of new orders the system could process (in transactions per minute) while supporting a set mix of transactions. Top-end Unix boxes can deliver 4 million TPM these days on the TPC-C test, which is a lot more transactions than such a box in the field can usually do. This just goes to show how relatively skinny and simple the TPC-C code is.
The TPC-E test simulates the transaction processing associated with a Web-based brokerage house that has customers coming in to trade stocks and then trades those stocks as directed with stock exchanges. Customers are simulated using a snapshot of the 2000 census data (name, address, phone number, and so forth) from the United States and Canada, and the data for stock transactions is based on a snapshot of data from the NYSE and NASDAQ markets. With TPC-C, the data was gibberish, created by random generators, but this data is real. The use of gibberish for data meant that there was no performance penalty for carving up data into partitions and steering it towards cell boards in a particular server, but in the real world, where data has a shape, this approach is not going to work.
The TPC-E test comprises 33 tables with a total of 133 columns, compared to the nine tables and 92 columns in the TPC-C test. There are many different data types in the test (you name it, it is in there), and the databases have 33 primary keys and 50 foreign keys, compared to eight primaries and nine foreigns in the TPC-C test.
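To see what a primary/foreign key relationship means in practice, here is a minimal sketch in C++ (the table and column names are invented for illustration; they are not the actual TPC-E schema):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Two toy "tables": a foreign key in TradeOrder must point at an
// existing primary key in Customer for the data to be consistent.
struct Customer  { long c_id; std::string c_name; };  // primary key: c_id
struct TradeOrder { long t_id; long t_customer_id; }; // foreign key -> Customer

// Referential integrity check: every trade must reference a real customer.
bool fk_valid(const std::vector<TradeOrder>& trades,
              const std::unordered_map<long, Customer>& customers) {
    for (const auto& t : trades)
        if (customers.count(t.t_customer_id) == 0) return false;
    return true;
}
```

With 50 such foreign keys across 33 tables, the database has far more cross-table relationships to maintain under load than TPC-C's nine foreign keys ever demanded.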
"People are doing more complicated things in memory and CPUs than they did in the past, and the TPC-E test reflects this," says Molloy. "Customers are keeping track of more data and they are making more correlations between data sets."
Moreover, the TPC-E test checks constraints and enforces referential integrity, which many companies do within their OLTP workloads and which the TPC-C test did not. It also requires RAID 1 or RAID 5 data protection for disk files, and has nowhere near the same disk to memory to CPU ratios that the TPC-C test had. With TPC-C, big boxes had to have tens or hundreds of terabytes of disk capacity, which are not the amounts real servers have out there in the data center.
The TPC-E test has 10 different transactions--looking up historical stock data, a stock feed, getting a customer position in a stock, etc.--and counts the transaction called a trade result; this is the confirmation transaction that is received from the stock market that a trade was executed. To get smaller numbers--and because this is a denser benchmark--the TPC-E test will be measured in transactions per second, or TPS.
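In other words, the system runs the full mix, but only one transaction type is scored. A minimal sketch of that metric (transaction names other than Trade-Result are paraphrased from the article; the real kit defines its own mix and naming):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Count only Trade-Result transactions and divide by wall-clock time.
// All other transactions in the mix run, but do not add to the score.
double trade_result_tps(const std::vector<std::string>& completed,
                        double elapsed_seconds) {
    int trade_results = 0;
    for (const auto& t : completed)
        if (t == "Trade-Result") ++trade_results;
    return trade_results / elapsed_seconds;
}
```

So a run that completes three Trade-Result transactions in two seconds scores 1.5 TPS, no matter how many market feeds or customer-position lookups it also processed.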
Some people get indignant about the way vendors learn over time how to tune their applications and systems to do better on benchmark tests. They add lots of CPU power, memory, and I/O bandwidth, for instance, and suddenly boxes go a lot faster. But, by another way of thinking, the benchmark drove innovation, and all server customers benefited from that. The issue arises at the tail end of any benchmark's life, when the results on tests get far out of whack relative to real-world workloads because the code on a test, such as the TPC-C test, which debuted in 1992, is too simple.
While the TPC-E test itself may be more complex, the standardization of code, the reduction in disk capacity requirements, and other factors were designed to make the TPC-E test a lot easier to administer, and therefore Molloy is hoping that it will be more widely used than other TPC tests have been in the past. The test auditors will be approved in a month or so, and Molloy expects that vendors will roll out results for the TPC-E test by summer.
The TPC-E test was in preliminary stages in late 2004, and was expected to be ratified in 2005. Better late than never, as the saying goes. And it will be interesting to see who sweats it out running TPC-E. It is reasonable to expect some changes in the server rankings.