HammerDB: Using MySQL 5.7 vs 8.0 to understand performance profiles

One of the most important concepts in analysing database performance is understanding scalability. When a system ‘scales’ it is able to deliver higher levels of performance proportional to the system resources available to it. In particular, this means that as we add CPU cores and increase the system load, we see higher performance. The way we measure this is through a ‘performance profile’: we run a series of tests, incrementally increasing the system load until we find the peak performance. Plotting these data points enables us to understand the scalability of the database software being tested on that system.

In this example, we will compare MySQL 5.7.33 and MySQL 8.0.25 with the TPROC-C workload on a 2-socket Intel Xeon 8280L system, giving 28 cores per socket for a total of 56 physical cores and 112 logical CPUs with Hyper-Threading.

So as a first step we will install MySQL 5.7.33 and MySQL 8.0.25 and build a schema in each. In this example, we use the HammerDB CLI and build the schema in both databases with the following script.

dbset db mysql
dbset bm TPC-C
vuset logtotemp 1
diset connection mysql_socket /tmp/mysql.sock
diset tpcc mysql_count_ware 800
diset tpcc mysql_num_vu 64
diset tpcc mysql_partition true
buildschema
waittocomplete
quit
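
This build script can be saved to a file and run non-interactively by passing it to hammerdbcli with the auto argument; the filename buildmysql.tcl below is just an example name.

./hammerdbcli auto buildmysql.tcl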

Then we can run an interactive workload for a single Virtual User as follows for MySQL 8.0.25

./hammerdbcli
HammerDB CLI v4.2
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>dbset db mysql
Database set to MySQL
hammerdb>diset tpcc mysql_driver timed
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:mysql_driver from test to timed for MySQL
hammerdb>vuset logtotemp 1
hammerdb>vuset unique 1
hammerdb>vuset vu 1
hammerdb>loadscript
Script loaded, Type "print script" to view
hammerdb>vucreate
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Logging activated
to /tmp/hammerdb_60F169935C7303E293236333.log
2 Virtual Users Created with Monitor VU
hammerdb>vurun
Vuser 1:RUNNING
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
...
hammerdb>Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 25469 NOPM from 76873 MySQL TPM
Vuser 1:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE

and MySQL 5.7.33

hammerdb>dbset db mysql
Database set to MySQL
hammerdb>diset tpcc mysql_driver timed
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:mysql_driver from test to timed for MySQL
hammerdb>vuset logtotemp 1
hammerdb>vuset unique 1
hammerdb>loadscript
Script loaded, Type "print script" to view
hammerdb>vuset vu 1
hammerdb>vucreate
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Logging activated
to /tmp/hammerdb_60F183575C7303E273030333.log
2 Virtual Users Created with Monitor VU
hammerdb>vurun
Vuser 1:RUNNING
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
hammerdb>Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 27154 NOPM from 82383 MySQL TPM
Vuser 1:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE

Extracting the NOPM value from the CLI output or from the log file, we find that MySQL 8.0.25 returned 25469 NOPM against MySQL 5.7.33’s 27154 NOPM, meaning the older version of MySQL gave a slightly better result.
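
As a minimal sketch of extracting the result from the log file rather than the console, you can simply search the HammerDB logs in /tmp for the TEST RESULT line (the unique log file names will differ per run, as shown above):

grep "TEST RESULT" /tmp/hammerdb_*.log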

So does this mean we can conclude that MySQL 5.7 is 1.06X faster than MySQL 8.0? Of course not: we have only run a single Virtual User and have therefore not tested MySQL’s ability to manage multiple sessions running the same workload concurrently. Therefore, again using the interactive shell, let us now run more tests, increasing the Virtual User count to 20.
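
As a sketch, repeating the test at a higher Virtual User count from the same interactive session follows the pattern used in the automated script further below: destroy the existing Virtual Users, set the new count, then recreate and run them.

hammerdb>vudestroy
hammerdb>vuset vu 20
hammerdb>vucreate
hammerdb>vurun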

Now, at 20 Virtual Users, MySQL 8.0 is at 390014 TPM with MySQL 5.7 at 379972, putting MySQL 8.0 at 1.02X over MySQL 5.7. However, this still might not be giving us the full picture, so let’s run a fully automated test up to 120 Virtual Users using the script as follows:

puts "MySQL Test Started"
dbset db mysql
dbset bm TPC-C
diset connection mysql_socket /tmp/mysql.sock
diset tpcc mysql_driver timed
diset tpcc mysql_rampup 2
diset tpcc mysql_duration 5
vuset logtotemp 1
vuset unique 1
loadscript
foreach z {1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 108 112 116 120} {
puts "$z VU test"
vuset vu $z
vucreate
vurun
runtimer 480
vudestroy
}
puts "MySQL Test Complete"

When we graph the results, we get a much better picture of the full capabilities of the MySQL 5.7 and MySQL 8.0 software running on the same server.

Now we can see that MySQL 8.0 reaches peak performance at 1006469 NOPM at 112 Virtual Users, with MySQL 5.7 peaking at 666407 at 68 Virtual Users, putting MySQL 8.0.25 at 1.51X over MySQL 5.7.33.

It should be clear from the performance profile that the difference in performance can be attributed to the scalability of the database software, in this case MySQL. This is the same server, the same OS and the same HammerDB client; the only difference is the database server software.

Crucially, HammerDB can also run against Oracle, SQL Server, Db2, MariaDB and PostgreSQL on Windows and Linux, so we know that the capabilities of this server with commercial database software are actually a lot higher, and that the capabilities of the HammerDB client are also a lot higher. It is the database software, and the database software alone, that limits scalability in this case.

The user experience is also important, so we have gathered the transaction response times, summarised below for both MySQL versions at 68 Virtual Users, the peak performance point of MySQL 5.7.33.

MySQL 8.0.25

Vuser 1:68 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 841337 NOPM from 2550191 MySQL TPM
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> SUMMARY OF 68 ACTIVE VIRTUAL USERS : MEDIAN ELAPSED TIME : 403363ms
>>>>> PROC: NEWORD
CALLS: 5775293 MIN: 0.810ms AVG: 2.401ms MAX: 57.323ms TOTAL: 13867195.952ms
P99: 4.005ms P95: 3.698ms P50: 2.316ms SD: 792.316 RATIO: 50.557%
>>>>> PROC: PAYMENT
CALLS: 5775133 MIN: 0.435ms AVG: 1.042ms MAX: 56.802ms TOTAL: 6022416.890ms
P99: 2.052ms P95: 1.576ms P50: 0.978ms SD: 401.213 RATIO: 21.957%
>>>>> PROC: DELIVERY
CALLS: 578532 MIN: 4.001ms AVG: 8.929ms MAX: 67.479ms TOTAL: 5165817.194ms
P99: 12.438ms P95: 11.282ms P50: 9.063ms SD: 1831.663 RATIO: 18.834%
>>>>> PROC: SLEV
CALLS: 578436 MIN: 0.752ms AVG: 2.468ms MAX: 56.045ms TOTAL: 1427926.455ms
P99: 3.585ms P95: 3.230ms P50: 2.459ms SD: 560.309 RATIO: 5.206%
>>>>> PROC: OSTAT
CALLS: 576834 MIN: 0.286ms AVG: 0.955ms MAX: 39.063ms TOTAL: 551043.284ms
P99: 1.796ms P95: 1.432ms P50: 0.912ms SD: 334.873 RATIO: 2.009%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

MySQL 5.7.33

Vuser 1:68 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 655295 NOPM from 1986131 MySQL TPM
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> SUMMARY OF 68 ACTIVE VIRTUAL USERS : MEDIAN ELAPSED TIME : 403998ms
>>>>> PROC: NEWORD
CALLS: 4542033 MIN: 0.714ms AVG: 3.082ms MAX: 42.174ms TOTAL: 14000329.326ms
P99: 5.124ms P95: 4.720ms P50: 3.001ms SD: 994.571 RATIO: 50.962%
>>>>> PROC: DELIVERY
CALLS: 453821 MIN: 3.020ms AVG: 13.931ms MAX: 58.857ms TOTAL: 6322207.859ms
P99: 19.838ms P95: 17.926ms P50: 13.975ms SD: 2826.392 RATIO: 23.013%
>>>>> PROC: PAYMENT
CALLS: 4544693 MIN: 0.352ms AVG: 1.157ms MAX: 37.108ms TOTAL: 5261866.428ms
P99: 2.231ms P95: 1.734ms P50: 1.096ms SD: 429.929 RATIO: 19.154%
>>>>> PROC: SLEV
CALLS: 455338 MIN: 0.686ms AVG: 2.407ms MAX: 54.447ms TOTAL: 1096000.372ms
P99: 3.253ms P95: 3.048ms P50: 2.381ms SD: 501.128 RATIO: 3.990%
>>>>> PROC: OSTAT
CALLS: 454731 MIN: 0.252ms AVG: 0.946ms MAX: 28.175ms TOTAL: 430240.916ms
P99: 1.856ms P95: 1.458ms P50: 0.905ms SD: 321.024 RATIO: 1.566%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

What we can see is that with MySQL 8.0.25 we have both higher throughput and lower response times. MySQL 8.0 therefore offers a higher capacity for load than MySQL 5.7 on this server.

It should also be clear that testing in a single-socket environment may not give you an accurate comparison of database software designed to take advantage of multiple CPU sockets. In this case we tested MySQL against MySQL; however, you should exercise caution in drawing conclusions about database software capabilities without first having tested in a multi-socket server environment.

HammerDB v4.2 New Features Pt2: Increase of TPROC-C Schema Size Limits

Prior to version 4.2, the limit on the size of the schema that could be generated was set to 5000 warehouses in the GUI and 30,000 when using the datagen feature to generate flat files for external loading. From version 4.2 these limits have been increased to 100,000 for both features. This post gives the background to the schema size limits and guidance on warehouse counts and expected schema sizes. It also shows how to manually exceed the 100,000 limit if you wish.

Up to version 4.1, when using the GUI the number of warehouses is adjusted by a spinbox with an upper limit of 5000. You could manually enter a larger figure, but would then receive the error shown that 5000 was the limit.

This was often interpreted as a limit on the amount of data that HammerDB could generate. It was, however, an imposed limit intended to encourage right-sizing of the test database rather than over-sizing.

The number of warehouses is passed as one of the parameters in the last line of the build script, and this parameter itself has no limit. In the example below, the value 5000 can be seen as the fourth argument to the function do_tpcc. If you stop the build, manually modify this value and re-run the build, it will generate the number of warehouses you have defined. This applies equally to the previous 5000 limit and the current 100,000 limit.

However, the precision of some numeric data types may not have been sufficient for larger values in some databases. For example, for Oracle, as shown below, “W_ID” of the WAREHOUSE table was set to the NUMBER data type with a precision and scale of (4,0), meaning that the maximum number of warehouses permitted was 9999 before Oracle would generate an error.

"CREATE TABLE WAREHOUSE (W_ID NUMBER(4, 0), W_YTD NUMBER(12, 2), W_TAX NUMBER(4, 4), W_NAME VARCHAR2(10), W_STREET_1 VARCHAR2(20), W_STREET_2 VARCHAR2(20), W_CITY VARCHAR2(20), W_STATE CHAR(2), W_ZIP CHAR(9)) INITRANS 4 PCTFREE 99 PCTUSED 1"

All databases have been checked and updated for v4.2 to ensure that schema builds of up to 100,000 warehouses will complete without error.

Regarding the amount of disk space that should be reserved for a schema build, the general guidance is to allow up to 100MB per warehouse. Typically not all of this space will be needed, and the actual usage varies per database, but it should not exceed this value. Therefore, for 100,000 warehouses, 10TB of space should be allowed.
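
As a worked example under this guidance, an 800 warehouse schema would need at most around 80GB (800 x 100MB), with the space actually used typically being lower.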

However, be aware that just because you can build schemas of up to 100,000 warehouses, it does not necessarily mean that this will be right for you. With the default workload, each Virtual User processes 90% of its transactions against its home warehouse, selected at random at the start of the test. Consequently, as shown, if you are running 2 Virtual Users it makes little difference whether you have created 10, 1000, 10,000 or 100,000 warehouses: aside from the 2 selected at random, most of that data will remain idle and will not affect the result of the test. You should create a schema large enough to allow an even, uncontended, random distribution of Virtual Users for the maximum Virtual User count you expect to create.

For further information on right sizing, see the post:

How Many Warehouses for the HammerDB TPC-C Test?

Where larger warehouse counts are of most benefit is when using advanced driver script options such as “use all warehouses” and event-driven scaling. These are areas where creating larger schema sizes can add additional benefit and insight to your testing above and beyond the default schema installations.

HammerDB v4.2 New Features Pt1: MariaDB Build and Test Example with the CLI

Prior to HammerDB v4.2, support for MariaDB was provided through the MySQL interface. However, as the two databases diverged from a shared common codebase, this presented a challenge in that it was not possible to modify the workload for MySQL or MariaDB without also changing the other. The additional installation of the MySQL client libraries was also needed when testing MariaDB. HammerDB v4.2 adds full support for MariaDB in both the GUI as shown:

and in the CLI as a separate database from MySQL. This post describes an example setup and test of MariaDB using the CLI on Linux, running the TPROC-C workload.

First, download your favourite release and format of MariaDB from https://downloads.mariadb.org/. We will use the Linux generic file mariadb-10.2.34-linux-x86_64.tar.gz of release 10.2.34 and extract it into a directory such as /opt. We have also added a my.cnf file to this directory, as can be seen in the file listing.

/opt/mariadb-10.2.34-linux-x86_64$ ls
bin EXCEPTIONS-CLIENT man README-wsrep support-files
COPYING include my.cnf scripts THIRDPARTY
CREDITS INSTALL-BINARY mysql-test share
data lib README.md sql-bench

Also install HammerDB v4.2 if you have not done so already, and run the librarycheck command from the CLI. As we have not yet told HammerDB where to find the MariaDB libraries, it will generate an error telling you to add the location of these libraries to your LD_LIBRARY_PATH.

hammerdb>librarycheck
....
Checking database library for MariaDB
Error: failed to load mariatcl - couldn't load file "/home/HammerDB-4.2/lib/mariatcl0.1/libmariatcl0.1.so": libmariadb.so.3: cannot open shared object file: No such file or directory
Ensure that MariaDB client libraries are installed and the location in the LD_LIBRARY_PATH environment variable

Note that the error tells us we are missing the file libmariadb.so.3. Advanced users can check this using the ldd command on the HammerDB library file itself.

~/HammerDB-4.2/lib/mariatcl0.1$ ldd libmariatcl0.1.so
linux-vdso.so.1 (0x00007ffd3edff000)
libmariadb.so.3 => not found
...

If we look in the lib directory of the MariaDB installation, we can find this file

/opt/mariadb-10.2.34-linux-x86_64/lib$ ls
galera libmariadb.so.3 libmysqlclient.so libmysqlservices.a
libgalera_smm.so libmysqlclient.a libmysqld.a pkgconfig
libmariadbclient.a libmysqlclient_r.a libmysqld.so plugin
libmariadb.so libmysqlclient_r.so libmysqld.so.19

So let’s add this directory to the LD_LIBRARY_PATH and retrace our steps.

$ export LD_LIBRARY_PATH=/opt/mariadb-10.2.34-linux-x86_64/lib:$LD_LIBRARY_PATH

Now, using ldd, we can see that the library is found in the directory we added to the LD_LIBRARY_PATH.

~/HammerDB-4.2/lib/mariatcl0.1$ ldd libmariatcl0.1.so
linux-vdso.so.1 (0x00007fff613f3000)
libmariadb.so.3 => /opt/mariadb-10.2.34-linux-x86_64/lib/libmariadb.so.3 (0x00007fa177941000)
...

Now librarycheck succeeds because HammerDB knows where to find this file, meaning it is ready to start testing MariaDB.

~/HammerDB-4.2$ ./hammerdbcli
HammerDB CLI v4.2
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
...
Checking database library for MariaDB
Success ... loaded library mariatcl for MariaDB

As noted previously, we have added a my.cnf file to the MariaDB directory and tuned it for performance rather than resilience. (A future blog post on performance profiles will explain why this is a good first test on which to baseline other changes.)

$ more my.cnf
[mysqld]
skip-log-bin
lc_messages_dir=/opt/mariadb-10.2.34-linux-x86_64/share/english
datadir=/opt/mariadb-10.2.34-linux-x86_64/data
default_authentication_plugin=mysql_native_password
socket=/tmp/mariadb10234.sock
port=3306
log-error=/tmp/mariadb10234.log
bind_address=localhost
# general
max_connections=1000
table_open_cache=2000
table_open_cache_instances=16
back_log=1500
#default_password_lifetime=0
ssl=0
performance_schema=OFF
max_prepared_stmt_count=12800
skip_log_bin=1
character_set_server=latin1
collation_server=latin1_swedish_ci
transaction_isolation=REPEATABLE-READ
# files
#innodb_file_per_table
innodb_log_file_size=16384M
innodb_open_files=1000
# buffers
innodb_buffer_pool_size=64000M
innodb_buffer_pool_instances=16
innodb_log_buffer_size=64M
# tune
innodb_doublewrite=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=0
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
join_buffer_size=32K
sort_buffer_size=32K
innodb_use_native_aio=1
innodb_stats_persistent=1
innodb_spin_wait_delay=6
innodb_max_purge_lag_delay=300000
innodb_max_purge_lag=0
innodb_flush_method=O_DIRECT_NO_FSYNC
#innodb_checksum_algorithm=none
innodb_io_capacity=4000
innodb_io_capacity_max=20000
innodb_lru_scan_depth=9000
innodb_change_buffering=none
innodb_read_only=0
#innodb_page_cleaners=2
#innodb_undo_log_truncate=off
# perf special
innodb_adaptive_flushing=1
innodb_flush_neighbors=0
innodb_read_io_threads=16
innodb_write_io_threads=16
innodb_purge_threads=4
innodb_adaptive_hash_index=0
# monitoring
innodb_monitor_enable='%'

Next, we will install the database and start MariaDB:

./scripts/mysql_install_db --defaults-file=/opt/mariadb-10.2.34-linux-x86_64/my.cnf --user=mysql
./bin/mysqld --defaults-file=./my.cnf
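
Note that run this way mysqld stays attached to the terminal in the foreground; to keep the shell free you can start it from a separate terminal, or simply background it, for example:

./bin/mysqld --defaults-file=./my.cnf &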

Finally, we will set the root password:

./bin/mysql -uroot -S/tmp/mariadb10234.sock
...
MySQL [(none)]> SET PASSWORD FOR 'root'@localhost = PASSWORD("mysql");

At this point, we are ready to start testing MariaDB with HammerDB, so let’s create two scripts: a build script and a run script. The build script example sets the socket name we have used and opts to create 800 warehouses with 64 Virtual Users in a partitioned schema.

dbset db maria
dbset bm TPC-C
diset connection maria_socket /tmp/mariadb10234.sock
diset tpcc maria_count_ware 800
diset tpcc maria_num_vu 64
diset tpcc maria_partition true
buildschema
waittocomplete
quit

Save the script as mysqlbuild.tcl and run as follows:

./hammerdbcli auto mysqlbuild.tcl

Wait for the schema build to complete. In the example below it took just over 14 minutes to create 800 warehouses.

Hammerdb Log @ Wed Jul 07 07:33:23 PDT 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
Vuser 1:Monitor Thread
Vuser 1:CREATING TPCC SCHEMA
Vuser 1:CREATING DATABASE tpcc
Vuser 1:CREATING TPCC TABLES
Vuser 1:Loading Item
Vuser 2:Worker Thread
Vuser 2:Waiting for Monitor Thread...
Vuser 2:Loading 13 Warehouses start:1 end:13
Vuser 2:Start:Wed Jul 07 07:33:24 PDT 2021
Vuser 2:Loading Warehouse
Vuser 2:Loading Stock Wid=1
...
Vuser 30:Orders Done
Vuser 30:Loading Orders for D=10 W=377
Vuser 30:...1000
Vuser 30:...2000
Vuser 30:...3000
Vuser 30:Orders Done
Vuser 30:End:Wed Jul 14 07:17:36 PDT 2021
Vuser 1:Workers: 0 Active 64 Done
Vuser 1:CREATING TPCC STORED PROCEDURES
Vuser 1:GATHERING SCHEMA STATISTICS
Vuser 1:TPCC SCHEMA COMPLETE

Next, we are ready to run the test. In the example below we want to collect a performance profile, so we will run tests without interruption from 1 to 100 Virtual Users, creating a unique log file for each test.

puts "MariaDB 10.2.34 Test Started"
dbset db maria
dbset bm TPC-C
diset connection maria_socket /tmp/mariadb10234.sock
diset tpcc maria_driver timed
diset tpcc maria_rampup 2
diset tpcc maria_duration 5
vuset logtotemp 1
vuset unique 1
loadscript
foreach z {1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100} {
puts "$z VU test"
vuset vu $z
vucreate
vurun
runtimer 480
vudestroy
}
puts "MariaDB 10.2.34 Test Complete"

The script is run as follows:

./hammerdbcli auto mysqlrun.tcl

and can then be left without intervention to complete the test. On completion, the /tmp directory contains a number of output files with the NOPM and TPM results of each test.

Hammerdb Log @ Mon Jul 05 04:36:59 PDT 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 4:Processing 10000000 transactions with output suppressed...
Vuser 65:Processing 10000000 transactions with output suppressed...
Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:64 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 669086 NOPM from 2026420 MariaDB TPM

Add these results to your favourite spreadsheet and you have your first performance profile of MariaDB on your system.
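
To gather both the configured Virtual User count and the headline result from each of the uniquely named log files in one pass, a simple sketch (assuming the /tmp log location used above) is:

grep -e "Active Virtual Users" -e "TEST RESULT" /tmp/hammerdb_*.log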

Once you have explored the TPROC-C workload you can also use HammerDB to run TPROC-H for analytic workloads and test solutions such as the MariaDB ColumnStore https://mariadb.com/kb/en/mariadb-columnstore/.

Also don’t forget that HammerDB is fully open source, so if you wish to help make HammerDB even better for testing MariaDB you can contribute both ideas and code at https://github.com/TPC-Council/HammerDB.