In my last blog, titled “Varying Database Benchmark Results,” I tried to explain why people often get varied performance results when running a specific database benchmark multiple times. I approached the possible reasons from a purely mechanical viewpoint: quite often the user’s background, experience and approach play a very significant role. I also stressed that running benchmarks that don’t push the hardware limits can result in varied or non-repeatable results. But one of the most common questions people have historically asked in response to this explanation is why expensive commercial database benchmarking tool X yields performance numbers ten times slower than freeware tool Y when running the very same benchmark test on exactly the same sized data.

Now I don’t want to name the referenced database benchmarking tools, because I routinely use both the commercial one and the freeware one. Both tools have their pros and cons, and I don’t want to appear to either endorse or denounce any database benchmarking tool. But I am going to explain why it is possible to see ten times worse performance from one of those tools than from the other. Unsurprisingly, the primary reasons point back to my first blog, specifically the database benchmark specs. I’ll reference the TPC-C benchmark in my example.

The TPC-C benchmark spec dictates certain mandatory test characteristics and also identifies areas open to some user customization. The tables, columns, their data types, primary keys, foreign keys and minimum indexes are generally mandated. Moreover, certain advanced database features or constructs are not permitted. Examples include using index-organized tables or creating indexes that contain all the columns accessed by a specific SQL command (thereby eliminating the need to access the table). There are more than a few such limitations. But there are also a few permissible customizations, such as horizontal table partitioning, vertical table partitioning, data clustering and index clustering.
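To make that concrete, here is a minimal sketch of what the mandated structures look like. The table and column names follow TPC-C naming conventions, but the DDL is simplified and illustrative only; it is not the spec’s full schema definition, nor the output of any particular tool:

```sql
-- Simplified sketch of spec-mandated structures (not complete TPC-C DDL).
-- The spec requires the tables, columns, primary keys and foreign keys,
-- while prohibiting constructs such as index-organized tables or extra
-- covering indexes.
CREATE TABLE warehouse (
  w_id   INTEGER       NOT NULL,
  w_name VARCHAR(10),
  w_tax  DECIMAL(4,4),
  w_ytd  DECIMAL(12,2),
  CONSTRAINT pk_warehouse PRIMARY KEY (w_id)
);

CREATE TABLE district (
  d_id   INTEGER       NOT NULL,
  d_w_id INTEGER       NOT NULL,
  d_name VARCHAR(10),
  d_tax  DECIMAL(4,4),
  CONSTRAINT pk_district PRIMARY KEY (d_w_id, d_id),
  CONSTRAINT fk_district_warehouse
    FOREIGN KEY (d_w_id) REFERENCES warehouse (w_id)
);
```

Note that the keys are declared as constraints, not merely built as indexes; that distinction matters in the comparisons below.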

So let’s now examine how database benchmarking tools can vary in their implementation of the spec. This requires looking at the DDL each tool generates to validate its legitimacy. That review provides the context for an apples-to-apples comparison, thus helping to explain the performance variances.

One database benchmarking tool creates all of its tables with none of the required primary or foreign keys; it creates only unique and non-unique indexes. Thus it does not support the spec’s integrity rules. This is important because the database never has to perform referential integrity checks, thereby gaining a speed advantage.
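A hedged sketch of the difference (again using TPC-C table names; neither snippet is taken from an actual tool’s generated DDL):

```sql
-- Spec-compliant: declared keys force the database to verify that every
-- child row references an existing parent on insert, update and delete.
ALTER TABLE district
  ADD CONSTRAINT pk_district PRIMARY KEY (d_w_id, d_id);
ALTER TABLE district
  ADD CONSTRAINT fk_district_warehouse
  FOREIGN KEY (d_w_id) REFERENCES warehouse (w_id);

-- Shortcut: a unique index provides the same fast key lookups but
-- declares no relationship, so no referential integrity is checked.
CREATE UNIQUE INDEX district_uk ON district (d_w_id, d_id);
```

Both approaches yield identical query plans for key lookups; only the first pays the integrity-checking cost the spec assumes.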

One database benchmarking tool creates two of its tables as index-organized tables, including the largest and most heavily accessed table. This violates the spec’s general guidelines regarding what are known as “benchmark special” implementations, which result in highly specialized performance gains.
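For readers unfamiliar with the construct, here is a hedged Oracle-style sketch (simplified columns, not any tool’s actual DDL) using ORDER_LINE, which is typically the largest and busiest TPC-C table:

```sql
-- Oracle index-organized table: the entire row is stored inside the
-- primary key's B-tree, so key-based reads never touch a separate
-- table segment. Fast, but a "benchmark special" under the TPC-C spec.
CREATE TABLE order_line (
  ol_w_id     INTEGER      NOT NULL,
  ol_d_id     INTEGER      NOT NULL,
  ol_o_id     INTEGER      NOT NULL,
  ol_number   INTEGER      NOT NULL,
  ol_i_id     INTEGER,
  ol_quantity INTEGER,
  ol_amount   DECIMAL(6,2),
  CONSTRAINT pk_order_line
    PRIMARY KEY (ol_w_id, ol_d_id, ol_o_id, ol_number)
) ORGANIZATION INDEX;
```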

One database benchmarking tool creates an index with extra columns to enhance one of the most frequently recurring join conditions. This too violates the spec’s general guidelines regarding “benchmark special” implementations, which again result in highly specialized performance gains.
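A hypothetical example of such a covering index (the column choice here is mine, for illustration, not what any particular tool actually builds):

```sql
-- Appending the non-key columns a frequent join actually reads lets
-- the optimizer satisfy the query from the index alone, never touching
-- the table. That is exactly the kind of "benchmark special" the spec
-- disallows.
CREATE INDEX customer_covering
  ON customer (c_w_id, c_d_id, c_id, c_last, c_credit, c_discount);
```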

Finally, one database benchmarking tool runs all of its transactions with a default of zero wait or think time. This violates the spec’s wait time and response time requirements. Thus this tool submits all of its transactions as fast as possible, while the other forcibly spaces out, or delays, all of its transactions. Clearly this difference alone will produce a highly skewed result.
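The arithmetic behind that skew is worth spelling out. This sketch is hypothetical (it is not either tool’s source code), and the pacing numbers are merely illustrative of spec-style keying and think times:

```python
# Hypothetical sketch of how think time caps throughput. Spec-style
# pacing wraps each transaction in keying time and think time, so
# per-terminal throughput is bounded no matter how fast the database
# responds. The numbers below are illustrative, not taken from the spec.
def max_tpm_per_terminal(response_s, keying_s, think_s):
    """Upper bound on transactions per minute for one simulated terminal."""
    cycle_s = response_s + keying_s + think_s  # one full transaction cycle
    return 60.0 / cycle_s

# Paced: 18 s keying + 12 s think time around a 0.1 s response.
paced = max_tpm_per_terminal(response_s=0.1, keying_s=18.0, think_s=12.0)

# Unpaced: zero wait/think time; only the response time limits the rate.
unpaced = max_tpm_per_terminal(response_s=0.1, keying_s=0.0, think_s=0.0)

print(round(paced, 2))   # roughly 2 transactions per minute per terminal
print(round(unpaced))    # 600 transactions per minute per terminal
```

With pacing, adding load means adding simulated terminals; without it, a handful of connections can saturate the database, so the two tools are measuring fundamentally different things.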

With such variations from the benchmark spec, is it any wonder that one tool might appear to yield results ten times faster? So we really need to know both the benchmark spec and any specific tool’s implementation of that spec. Otherwise, running the very same benchmark on exactly the same sized data may well not yield reliable or even comparable results. As the song goes, “You better look before you leap, still waters run deep.”

About the Author

Bert Scalzo

Bert Scalzo is a guest-blogger for Quest and a renowned database expert, Oracle® ACE, author, database technology consultant, and formerly a member of Dell Software’s TOAD dev team. With three decades of Oracle® database experience to draw on, Bert’s webcasts garner high attendance and participation rates. His work history includes time at both Oracle Education and Oracle Consulting. Bert holds several Oracle Masters certifications and has an extensive academic background that includes a BS, MS and Ph.D. in computer science, as well as an MBA, and insurance industry designations. Bert is a highly sought-after speaker who has presented at numerous Oracle conferences and user groups, including OOW, ODTUG, IOUG, OAUG, RMOUG and many others. Bert enjoys sharing his vast knowledge on data modeling, database benchmarking, database tuning and optimization, "star schema" data warehouses, Linux® and VMware®. As a prolific writer, Bert has produced educational articles, papers and blogs for such well-respected publications as the Oracle Technology Network (OTN), Oracle Magazine, Oracle Informant, PC Week (eWeek), Dell Power Solutions Magazine, The Linux Journal, Oracle FAQ, Ask Toad and Toad World.
