HammerDB Source Code moved to the TPC Council

The TPC Council recently announced that it is now hosting the HammerDB open source project's GitHub repository.

HammerDB has increased dramatically in popularity and use and has been identified as the industry default for database benchmarking, illustrating the popularity of both open source and TPC based benchmarks. Consequently, wishing to engage with the open source database benchmarking community, the TPC approached HammerDB to collaborate on development and bring together the communities around the TPC benchmarks and HammerDB. The first step towards this goal is the recently announced move of the HammerDB source code to the TPC-Council GitHub repository. The previous developer-owned sm-shaw GitHub repository has been deleted, and the TPC-Council repository will be the source code repository for ongoing and future HammerDB development. Anyone wishing to contribute to the development of HammerDB should do so from the HammerDB TPC-Council GitHub repository.

From the first release up to version 3.1, HammerDB has been released on the HammerDB SourceForge site, which currently hosts the runnable binary downloads and the support site. Over time, downloads and support will also transition to the TPC-Council GitHub site. The HammerDB website hosts the latest documentation in DocBook format along with links to published benchmarks, and will be maintained as the primary website. Downloads from the download page on that site will show the current release. To understand the difference between the downloadable binaries and the source code, see the post on HammerDB Concepts and Architecture.

The license of HammerDB remains GPLv3, with copyright to Steve Shaw, the developer of HammerDB. For additional clarity: Steve Shaw is an employee of Intel; however, HammerDB is not Intel software and was developed as an approved personal open source project, where a stipulation of this approval was that there would be no association of the software with Intel or with the developer's status as an employee of the company.

Testing MySQL 8.0.16 on Skylake with innodb_spin_wait_pause_multiplier

In the recent MySQL 8.0.16 release there is a new variable for the InnoDB storage engine called innodb_spin_wait_pause_multiplier, described as providing “greater control over the duration of spin-lock polling delays that occur when a thread waits to acquire a mutex or rw-lock” so that “delays can be tuned more finely to account for differences in PAUSE instruction duration on different processor architectures”.

This post aims to address what this is about and whether you really need to be concerned about the difference in instructions on different processor architectures for MySQL, by testing the performance with HammerDB. Note that the main developer of HammerDB is an Intel employee (#IAMINTEL); however, HammerDB is a personal open source project and has no optimization whatsoever for a database running on any particular architecture. Fortunately the HammerDB TPC-C/OLTP workload intentionally has a great deal of contention between threads and is therefore ideal for testing spin-locks.

The PAUSE instruction is an optimization over NOP when a thread is waiting to acquire a lock, and is particularly important in spin-locks on x86 CPUs for both power and performance. However, in the Skylake microarchitecture (you can see a list of CPUs here) the PAUSE instruction changed: the documentation says “the latency of the PAUSE instruction in prior generation microarchitectures is about 10 cycles, whereas in Skylake microarchitecture it has been extended to as many as 140 cycles” and “as the PAUSE latency has been increased significantly, workloads that are sensitive to PAUSE latency will suffer some performance loss.” The impact on a database workload is not necessarily straightforward, however, as it depends on how much time that workload actually spends in spin-locks; fortunately, as noted, HammerDB deliberately induces a great deal of contention and is therefore at the top end of workloads that do.

It is also interesting to note that the impact of this change was not observed in other databases or in other MySQL storage engines such as MyRocks. The only noticeable impact with HammerDB workloads occurred in MySQL with InnoDB, where in the source file ut0ut.cc in the directory storage/innobase/ut, UT_RELAX_CPU is defined to call the PAUSE instruction on x86 and is invoked in the following loop:

for (i = 0; i < delay * 50; i++) {
    j += i;
    UT_RELAX_CPU();
}
Note that this fixed value of 50 is multiplied by the parameter innodb_spin_wait_delay, which has a default of 6 (a value is selected at random up to this setting), so PAUSE could be called up to 300 times. innodb_spin_wait_delay has always been configurable in recent versions, but from MySQL 8.0.16 the multiplier innodb_spin_wait_pause_multiplier is now configurable as well, rather than requiring modification of the source code. However, as noted, many factors affect spin-locks, including how much time is actually spent in locks in the first place, and therefore the best way to determine how much time is worth spending on fine-tuning spin-locks in the real world is through testing.
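As a back-of-the-envelope illustration of why this multiplier matters on Skylake, the sketch below (plain Python, using the defaults and the latency figures quoted above; the 10 and 140 cycle values come from the documentation, not from measurement here) computes the maximum number of PAUSE calls per spin round and the resulting cycle budget:

```python
# Maximum PAUSE calls per spin round in InnoDB:
# innodb_spin_wait_delay (default 6, chosen at random up to this value)
# multiplied by the pause multiplier (hard-coded 50 before MySQL 8.0.16).
spin_wait_delay = 6
pause_multiplier = 50
max_pause_calls = spin_wait_delay * pause_multiplier
print(max_pause_calls)  # up to 300 PAUSE instructions per round

# Approximate cycles spent pausing, per the quoted documented latencies:
pre_skylake_cycles = max_pause_calls * 10   # ~10 cycles per PAUSE pre-Skylake
skylake_cycles = max_pause_calls * 140      # up to ~140 cycles per PAUSE on Skylake
print(pre_skylake_cycles, skylake_cycles)
```

A full spin round can therefore burn roughly 14x the cycles on Skylake that it did on prior generations, which is exactly why a configurable multiplier is attractive.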

So to test I took a system with Skylake CPUs and all storage on a P4800X SSD.

model name : Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

The OS was:

 Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-23-generic x86_64)

I downloaded and installed the pre-built Linux binary from the MySQL download site
and set the following my.cnf:

# general
# files
# buffers
# tune
# perf special
# monitoring

Initially I set doublewrite and flush_log_at_trx_commit off, and then deleted and rebuilt the schema to re-test with these parameters turned on.
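For reference, the two durability settings toggled between runs look like this in my.cnf — a minimal fragment showing only the “off” values used for the initial run (illustrative; the rest of the configuration is unchanged):

```ini
# durability settings relaxed for the initial run, re-enabled for the re-test
innodb_doublewrite=0
innodb_flush_log_at_trx_commit=0
```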

I installed HammerDB 3.1 and used the client library from a MySQL 5.7 installation in the home directory, adding its lib directory to the library path (LD_LIBRARY_PATH).
I created a HammerDB build file and called it using the source command with the hammerdbcli tool.

HammerDB-3.1$ cat innodbbuild400.tcl
dbset db mysql
diset connection mysql_host
diset connection mysql_port 3307
diset tpcc mysql_count_ware 400
diset tpcc mysql_partition true
diset tpcc mysql_num_vu 64
diset tpcc mysql_storage_engine innodb
print dict
buildschema

Once the schema was built I shut down and restarted the database and configured a test script as follows:

HammerDB-3.1$ cat innodbtest1.tcl
proc runtimer { seconds } {
    set x 0
    set timerstop 0
    while {!$timerstop} {
        incr x
        after 1000
        if { ![ expr {$x % 60} ] } {
            set y [ expr $x / 60 ]
            puts "Timer: $y minutes elapsed"
        }
        update
        if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 }
    }
    return
}
dbset db mysql
diset connection mysql_host
diset connection mysql_port 3307
diset tpcc mysql_driver timed
diset tpcc mysql_rampup 1
diset tpcc mysql_duration 2
vuset logtotemp 1
loadscript
foreach z { 1 2 4 8 16 24 32 40 48 56 64 72 80 88 } {
    puts "$z VU TEST"
    vuset vu $z
    vucreate
    vurun
    runtimer 240
    vudestroy
    after 20000
}
puts "TEST SEQUENCE COMPLETE"
and then ran the test leaving it unattended until it reported the message “TEST SEQUENCE COMPLETE”. In particular note that the parameter innodb_spin_wait_pause_multiplier remained at the default throughout.

HammerDB-3.1$ ./hammerdbcli 
HammerDB CLI v3.1 
Copyright (C) 2003-2018 Steve Shaw 
Type "help" for a list of commands 
The xml is well-formed, applying configuration 
hammerdb>source innodbtest1.tcl

The results in graph form showed a peak performance of the following:

Vuser 1:64 Active Virtual Users configured 
Vuser 1:TEST RESULT : System achieved 2159378 MySQL TPM at 713488 NOPM

So we are generating more than 2 million MySQL transactions per minute and more than 700,000 new orders per minute without having modified any of the spin-loop parameters.

Now that the parameter is configurable I could also run some experiments:

mysql> show global variables like '%pause%';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| innodb_spin_wait_pause_multiplier | 50    |
+-----------------------------------+-------+
1 row in set (0.00 sec)

mysql> set global innodb_spin_wait_pause_multiplier=25;
Query OK, 0 rows affected (0.00 sec)

mysql> show global variables like '%pause%';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| innodb_spin_wait_pause_multiplier | 25    |
+-----------------------------------+-------+
1 row in set (0.00 sec)

What I found was that when reducing innodb_spin_wait_pause_multiplier significantly, to a value such as 5, I could achieve marginally improved performance up to 40 virtual users (643423 NOPM at a value of 5 compared to 626630 NOPM at 50); beyond this point, however, performance was significantly lower, and at 64 virtual users was only 278909 NOPM. The optimal peak performance remained at the default.

Further testing involved doubling the number of warehouses, with the same results; however, when the schema was not partitioned, reducing innodb_spin_wait_pause_multiplier to a lower value did bring some benefit.

The key finding was remarkably consistent with spin-lock configuration on other databases: in the vast majority of use cases you simply do not have to worry about modifying these parameters from the default, regardless of the processor architecture. Note, however, that many factors influence how much time is spent in spin-locks in the first place before these parameters come into play, such as the speed of the memory and storage, the configuration of the schema with partitioning, and the operating system. In particular, the number of concurrent users is going to be a significant factor.

Nevertheless, if you have an application generating millions of transactions per minute and have the opportunity to test and optimize, the availability of more parameters to fine-tune a database for a particular application is surely welcome, because with each release MySQL continues to push the boundaries of performance over previous releases.

Why both TPM and NOPM Performance Metrics?

The reason for reporting both TPM and NOPM performance metrics is both historical and practical. On the historical side, HammerDB started out as Hammerora and only supported Oracle, so it only needed to report Oracle TPM. However, as other databases were added it became clear that they all record TPM in slightly different ways, so it was not possible to compare TPM between different databases, for example Oracle and SQL Server. Therefore NOPM was added.

NOPM is a close relation of the official tpmC statistic, recording only new orders per minute; however, it is strictly forbidden to use the term tpmC unless you are running an official, fully audited and approved TPC-C benchmark, and the term NOPM signifies this difference. Nevertheless, because NOPM comes from the schema itself, it can be used to compare different databases, whereas TPM cannot.

There should always be a close ratio between TPM and NOPM; for example, with Oracle, NOPM is close to 1/3rd of TPM. This brings the additional benefit that if the values diverge it may indicate errors in the testing, in particular because TPM records all transactions on the database during the measuring period, both commits and rollbacks, whether they are HammerDB-induced transactions or not.
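As a quick sanity check of this ratio, here is a small Python sketch using the peak figures reported in the test above (2159378 TPM and 713488 NOPM from the 64 virtual user run); the bounds used in the check are illustrative, not an official threshold:

```python
# Peak figures reported in the MySQL test run above
tpm = 2159378   # MySQL transactions per minute
nopm = 713488   # new orders per minute

ratio = tpm / nopm
print(round(ratio, 2))  # -> 3.03, close to the expected ~3:1 TPM:NOPM relationship

# A large divergence from the expected ratio can indicate errors during the test
assert 2.5 < ratio < 3.5, "TPM:NOPM ratio diverges - check the run for errors"
```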

Therefore one question is: why not just use NOPM? There are two reasons. Firstly, for database performance engineering studies TPM relates directly to the metrics captured from the database; for example, the TPM value is the transactions-per-second value in the load profile of an Oracle AWR report multiplied by 60 (the SQL statement actually gets the number from the AWR repository). Secondly, and most importantly, the TPM values for all databases come from in-memory tables, again using the Oracle example v$sysstat (all v$ tables are memory based), and therefore selecting from these does not impact the test. This means we can run the graphical transaction counter and select from this table without affecting the test results. However, as NOPM is selected from the district table, if we ran this query during the test at the same frequency it would impact the results by introducing locking, and therefore we take this value at the start and end of the test only.
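To make the AWR relationship concrete, a one-line conversion (using the peak TPM figure from the run above) shows the equivalent load-profile transactions-per-second value:

```python
tpm = 2159378      # MySQL transactions per minute from the peak run above
tps = tpm / 60     # an AWR load profile reports transactions per second
print(round(tps))  # -> 35990 transactions per second
```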

Therefore this is why we have the two values of TPM and NOPM, and both are as valid as each other (graphs for TPM and NOPM should look identical). It can be summed up in a simple explanation: if you are comparing one database only, then TPM is better, as you can relate it directly to all of the database performance metrics you have captured; however, if you are comparing different databases, then you should use NOPM when reporting results.