Reader Feedback On One Power 750 Matches Two Xeon Servers On SAP BW Test
September 23, 2013 Timothy Prickett Morgan
“The database IBM tested had 500 million records, which could fit into a machine with 32 cores; I am guessing here, but it would probably take a 64-core machine to do 1 billion records as HP tested with its pair of ProLiant DL580 G7 servers.”
What does the number of DB records have to do with the number of cores required? I’m not aware of any correlation between the two. Perhaps you were thinking of a correlation between DB records and RAM (only applies to in-memory databases; not IBM i).
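The reader's point is that for an in-memory database, record count drives memory, not cores. A back-of-envelope Python sketch of that sizing logic (all numbers here are hypothetical illustrations, not SAP or IBM figures):

```python
# Back-of-envelope RAM sizing for an in-memory database.
# The record size and compression ratio below are made-up illustrations.

def estimate_ram_gb(records, bytes_per_record, compression_ratio):
    """Raw footprint divided by compression ratio, in GiB."""
    return records * bytes_per_record / compression_ratio / 2**30

# 500 million records at a hypothetical 1 KB each, 4:1 compression:
half_billion = estimate_ram_gb(500_000_000, 1024, 4.0)

# Doubling the records doubles the RAM estimate -- the record count
# correlates with memory capacity, not with core count:
one_billion = estimate_ram_gb(1_000_000_000, 1024, 4.0)

print(round(half_billion), round(one_billion))  # 119 238
```

Under these assumed numbers, going from 500 million to 1 billion records doubles the memory footprint while saying nothing about how many cores are needed to scan it.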
“The HANA database has data compression and columnar data store, which is why it can get a fat database into the 512 GB of main memory.”
Something about that sounds fishy. Wouldn't the data have to be decompressed in order for queries to perform relational operations against column values?
Data may be compressed while stored persistently, but wouldn’t it be decompressed when loaded into RAM?
“With the ProLiant DL580 G7 database server running at 88 percent of CPU and the app server linked to it running at 28 percent of CPU (which suggests some bad sizing if you are worried about wasting money).”
Maybe HP was more concerned with increasing throughput than with wasting money. There's generally a strong correlation between the number of app server instances running and the number of cores, because most operating systems (other than IBM i) require significant resources just to do task switching. The cores may sit mostly idle, but you still need them to avoid excessive task switching.
That also explains why you see vendors dividing database workloads and app server workloads across two physical or virtual machines. Running complex mixed workloads wreaks havoc on benchmark results under most operating systems.
Do you see how remarkable it is that both database and app server workloads ran so well on a single IBM i partition?
Well, I did see that, and I pointed it out! HA!