How to build HammerDB from source

With pull requests #323 (Dependency build automation for Linux) and #362 (Dependency build automation for Windows), HammerDB enables users to build their own full distribution of HammerDB from source on Linux x64 and Windows x64 with a single command. This post explains how you can build your own distribution.

Do you need to build from source?

Before building from source, the first question should be: do you need to? HammerDB already provides pre-compiled packages on the download page, so if you want to run HammerDB without building it, these packages are all you need.

Why build from source?

If you are not already familiar with the programming languages that HammerDB uses, then this earlier post serves as an ideal introduction to what makes up the highest performing, GIL-free database benchmarking application.

What programming languages does HammerDB use and why does it matter?

You may want to become familiar with all the underlying source code in C, or wish to build a distribution where you can verify every single line of source code that goes into HammerDB. All of the source code and dependencies in HammerDB are open source, right down to the layer above any third-party commercial database drivers for Oracle, SQL Server and Db2. Building from source also enables you to build a distribution that is right up to date with any of the latest pull requests not yet included in the most recent HammerDB release.

BAWT by Paul Obermeier

HammerDB build automation uses an adapted version of the BAWT package by Paul Obermeier. BAWT is copyrighted by Paul Obermeier and distributed under the 3-clause BSD license.

Getting started on Linux

If you have read what programming languages HammerDB uses, then you will not be too surprised that you will first need to install a development environment using the gcc compiler for Linux. You will also need the p7zip tool as well as the Xft font library development packages. The build has been tested on x64 Red Hat 8.X and Ubuntu 20.04.X Linux, with the following example from Red Hat Linux.

sudo yum install p7zip-16.02-20.el8.x86_64.rpm
sudo yum install libXft-devel
sudo yum group install "Development Tools"
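
On Ubuntu, the equivalent prerequisites can be installed with apt; the package names below are the standard Ubuntu names for the gcc toolchain, 7-Zip and the Xft development files:

sudo apt-get install build-essential p7zip-full libxft-dev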

The HammerDB Oracle and SQL Server (ODBC) interfaces will build without the respective Oracle and SQL Server client libraries installed, but will need them at runtime. However, a client or server must be installed for all of Db2, MariaDB, PostgreSQL and MySQL for the HammerDB build to be successful. The database installation must include both the include and lib directories.

For Db2 either the server or client can be installed, for example v11.5.7_linuxx64_server_dec.tar.gz.

HammerDB build automation will look for the Db2 installation in the location given by the IBM_DB_DIR environment variable, set by the db2profile script. Verify that this has been set as follows:

$ echo $IBM_DB_DIR
/home/ibm/sqllib
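
If the variable is not set, source the db2profile script from the instance sqllib directory and check again (the path shown matches the example above):

$ . /home/ibm/sqllib/db2profile
$ echo $IBM_DB_DIR
/home/ibm/sqllib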

On Linux MariaDB, PostgreSQL and MySQL include a config command in the bin directory that returns details of the configuration. HammerDB uses these commands to find the headers and libraries needed for the build.

Before running the build, environment variables MARIADB_CONFIG, PG_CONFIG and MYSQL_CONFIG must be set to the location of the respective config commands for each database in the terminal running the build.

$ export MARIADB_CONFIG=/opt/mariadb/mariadb-10.6.7-linux-systemd-x86_64/bin
$ export PG_CONFIG=/opt/postgresql/bin
$ export MYSQL_CONFIG=/opt/mysql/mysql-8.0.28-linux-glibc2.12-x86_64/bin
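
As a quick sanity check that the build will find working config commands in these locations, you can run them directly; the flags shown are standard options of these tools:

$ $MARIADB_CONFIG/mariadb_config --include --libs
$ $MYSQL_CONFIG/mysql_config --include --libs
$ $PG_CONFIG/pg_config --includedir --libdir
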
Getting started on Windows

On Windows, download and install Visual Studio 2022, which is free for open source developers. An additional gcc compiler will be downloaded and installed locally during the build. The build has been tested on x64 Windows 10 and 11.

As with Linux, it is also mandatory to install a database server or client, including the development environment of headers and libraries, for MariaDB, Db2, MySQL and PostgreSQL.

For Db2 on Windows, there is no db2profile that sets the environment, therefore the IBM_DB_DIR environment variable must be set to the location of the Db2 install. Similarly, the MariaDB and MySQL config commands are not available on Windows, so MARIADB_CONFIG and MYSQL_CONFIG should be set to the database or client installation directory rather than the bin directory. PostgreSQL for Windows does include the config command, and therefore the environment configuration is the same as on Linux.

set MARIADB_CONFIG=C:\Program Files\MariaDB\MariaDB Connector C 64-bit
set MYSQL_CONFIG=C:\Program Files\MySQL\MySQL Server 8.0
set PG_CONFIG=C:\Program Files\PostgreSQL\pgsql\bin
set IBM_DB_DIR=C:\Program Files\IBM\SQLLIB

For all database installations on Windows, whether client or server, verify that the installation has the include, bin and lib directories. On Windows in particular, some installations may not include all the required files for development.
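
For example, a quick check from the command prompt using the MySQL path set above (adjust for your own installation locations):

dir "C:\Program Files\MySQL\MySQL Server 8.0\include"
dir "C:\Program Files\MySQL\MySQL Server 8.0\lib"
dir "C:\Program Files\MySQL\MySQL Server 8.0\bin"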

Download the HammerDB source

At this stage you will have installed the compiler you need and database client/server installations for MariaDB, Db2, MySQL and PostgreSQL. To reiterate, HammerDB will not build correctly unless you have installed ALL the required database environments.

Next, download HammerDB from GitHub by either cloning or downloading. From the main HammerDB GitHub page use the clone URL or the Download ZIP link from the master branch.

$ git clone https://github.com/TPC-Council/HammerDB.git
Cloning into 'HammerDB'...
remote: Enumerating objects: 8099, done.
remote: Total 8099 (delta 0), reused 0 (delta 0), pack-reused 8099
Receiving objects: 100% (8099/8099), 40.41 MiB | 8.39 MiB/s, done.
Resolving deltas: 100% (2564/2564), done.

If you cloned, you will have a directory called “HammerDB”; if you extracted the zipfile, a directory called “HammerDB-master”.

Running the build

Navigate to the Build\Bawt-2.1.0 directory on the command line for both Linux and Windows. Note that during the build an InputLibs directory will be created and all HammerDB libraries downloaded, and under the Build directory a BawtBuild directory will be created where the installation will take place. Therefore, sufficient disk space and permissions must be available for the build to take place.
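
For example, from the directory where you cloned or extracted the source:

$ cd HammerDB/Build/Bawt-2.1.0

or on Windows:

cd HammerDB-master\Build\Bawt-2.1.0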

In the command line session where you are running the build, make sure that you have correctly set the MARIADB_CONFIG, MYSQL_CONFIG, PG_CONFIG and IBM_DB_DIR environment variables used during the build. Then run the Build-Linux.sh command on Linux (./Build-Linux.sh x64 Setup/HammerDB-Linux.bawt update) or the Build-Windows.bat command on Windows (Build-Windows.bat x64 vs2022+gcc Setup\HammerDB-Windows.bawt update). The command sequence will look similar to the following on Linux:

$ export MYSQL_CONFIG=/opt/mysql-8.0.20-linux-glibc2.12-x86_64/bin
$ export MARIADB_CONFIG=/opt/mariadb-10.8.1-linux-x86_64/bin/
$ export PG_CONFIG=/opt/postgresql-14.1/bin
$ echo $IBM_DB_DIR
/opt/ibm/sqllib
$ ./Build-Linux.sh x64 Setup/HammerDB-Linux.bawt update

and the following on Windows:

set MARIADB_CONFIG=C:\Program Files\MariaDB\MariaDB Connector C 64-bit
set MYSQL_CONFIG=C:\Program Files\MySQL\MySQL Server 8.0
set PG_CONFIG=C:\Program Files\PostgreSQL\pgsql\bin
set IBM_DB_DIR=C:\Program Files\IBM\SQLLIB
Build-Windows.bat x64 vs2022+gcc Setup\HammerDB-Windows.bawt update

The first step the build will take is to download the required packages and build instructions from www.hammerdb.com; on Windows the MSYS/MinGW package will also be downloaded. These will be stored in the Bawt-2.1.0/InputLibs directory. Both checksums and modification times are verified against the remote packages: if a package is already present with the same checksum and modification time, it will not be downloaded again. Also, some packages such as Tcl have been modified from the original, and therefore only the packages from www.hammerdb.com should be used.

awthemes-9.3.1.7z  libressl-2.6.4.7z  pgtcl-2.1.1.7z    Tk-8.6.12.7z
awthemes.bawt      libressl.bawt      pgtcl.bawt        Tk.bawt
clearlooks-1.0.7z  mariatcl-0.1.7z    redis-0.1.7z      tkblt-3.2.23.7z
clearlooks.bawt    mariatcl.bawt      redis.bawt        tkblt.bawt
db2tcl-2.0.1.7z    mysqltcl-3.052.7z  Tcl-8.6.12.7z     tksvg-0.5.7z
db2tcl.bawt        mysqltcl.bawt      Tcl.bawt          tksvg.bawt
expect-5.45.4.7z   oratcl-4.6.7z      tcltls-1.7.22.7z
expect.bawt        oratcl.bawt        tcltls.bawt

Allow the build to complete. A summary will be given of the packages built and the location of the build, for example on Linux as follows:

14:27:08 > Creating Distribution tar.gz in /opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution
14:27:08 >   TarGzip
               Source directory: /opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution/HammerDB-4.4
               Tar file        : /opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution/HammerDB-4.4-Linux.tar.gz
14:27:09 > End FinalizeStage

14:27:09 > Summary
           Setup file     : /opt/HammerDB/Build/Bawt-2.1.0/Setup/HammerDB-Linux.bawt
           Build directory: /opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Build
           Architecture   : x64
           Compilers      : gcc
           Global stages  : Finalize
           #  : Library Name         Version    Build time Stages
----------------------------------------------------------------------
             1: Tcl                  8.6.12     1.52 minutes Clean Extract Configure Compile Distribute
             2: Tk                   8.6.12     0.36 minutes Clean Extract Configure Compile Distribute
             3: awthemes             9.3.1      0.00 minutes Clean Extract Configure Compile Distribute
             4: clearlooks           1.0        0.00 minutes Clean Extract Configure Compile Distribute
             5: db2tcl               2.0.1      0.10 minutes Clean Extract Configure Compile Distribute
             6: expect               5.45.4     0.12 minutes Clean Extract Configure Compile Distribute
             7: libressl             2.6.4      0.99 minutes Clean Extract Configure Compile Distribute
             8: mariatcl             0.1        0.04 minutes Clean Extract Configure Compile Distribute
             9: mysqltcl             3.052      0.04 minutes Clean Extract Configure Compile Distribute
            10: oratcl               4.6        0.05 minutes Clean Extract Configure Compile Distribute
            11: pgtcl                2.1.1      0.05 minutes Clean Extract Configure Compile Distribute
            12: redis                0.1        0.00 minutes Clean Extract Configure Compile Distribute
            13: tcltls               1.7.22     0.14 minutes Clean Extract Configure Compile Distribute
            14: tkblt                3.2.23     0.21 minutes Clean Extract Configure Compile Distribute
            15: tksvg                0.5        0.06 minutes Clean Extract Configure Compile Distribute
----------------------------------------------------------------------
           Total: 3.70 minutes

A similar summary is produced for Windows.

You now have your own distribution of HammerDB with the latest source code.  You can run the hammerdbcli librarycheck command to verify that the libraries built correctly.

/opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution$ ls
HammerDB-4.4  HammerDB-4.4-Linux.tar.gz
$ cd HammerDB-4.4
/opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution/HammerDB-4.4$ ./hammerdbcli
HammerDB CLI v4.4
Copyright (C) 2003-2022 Steve Shaw
Type "help" for a list of commands
hammerdb>librarycheck
Checking database library for Oracle
Success ... loaded library Oratcl for Oracle
Checking database library for MSSQLServer
Success ... loaded library tdbc::odbc for MSSQLServer
Checking database library for Db2
Success ... loaded library db2tcl for Db2
Checking database library for MySQL
Success ... loaded library mysqltcl for MySQL
Checking database library for PostgreSQL
Success ... loaded library Pgtcl for PostgreSQL
Checking database library for MariaDB
Success ... loaded library mariatcl for MariaDB

You can also browse all of the C source code for the libraries you have built in the Build directory.

/opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Build

Verifying dependent libraries

Having built the distribution yourself, it should be clear that because the database libraries are dynamically linked, when you install the distribution on another system you will also need to install the database clients on that system.

For example, on Linux we have a check that fails because the MariaDB libraries cannot be found, which we can verify with the ldd command.

hammerdb>librarycheck
...
Checking database library for MariaDB
Error: failed to load mariatcl - couldn't load file "/opt/HammerDB/Build/BawtBuild/Linux/x64/Release/Distribution/HammerDB-4.4/lib/mariatcl0.1/libmariatcl0.1.so": libmariadb.so.3: cannot open shared object file: No such file or directory
Ensure that MariaDB client libraries are installed and the location in the LD_LIBRARY_PATH environment variable

$ ldd libmariatcl0.1.so
    linux-vdso.so.1 (0x00007ffd534db000)
    libmariadb.so.3 => not found
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f259d2f9000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f259d513000)

When we install this dependent library and tell HammerDB where to find it:

export LD_LIBRARY_PATH=/opt/mariadb-10.2.34-linux-x86_64/lib:$LD_LIBRARY_PATH

HammerDB will correctly load the MariaDB package.


$ ldd libmariatcl0.1.so
    linux-vdso.so.1 (0x00007ffc6214d000)
    libmariadb.so.3 => /opt/mariadb-10.2.34-linux-x86_64/lib/libmariadb.so.3 (0x00007f3e028c9000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3e026bd000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3e0269a000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3e02694000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3e02545000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f3e02cd2000)

hammerdb>librarycheck
...
Checking database library for MariaDB
Success ... loaded library mariatcl for MariaDB

For an example on Windows, this time we cannot load the MySQL library.

HammerDB CLI v4.4
Copyright (C) 2003-2022 Steve Shaw
Type "help" for a list of commands
hammerdb>librarycheck
...
Checking database library for MySQL
Error: failed to load mysqltcl - couldn't load library "C:/HammerDB-master/Build/BawtBuild/vs2022/x64/Release/Distribution/HammerDB-4.4/lib/mysqltcl3.052/libmysqltcl.dll": No error
Ensure that MySQL client libraries are installed and the location in the PATH environment variable
hammerdb>

We run DependenciesGui from the Dependencies tool, load libmysqltcl.dll, and it shows that we cannot find libmysql.dll.

After installing MySQL we set the environment variable to find the library.
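
For example, assuming a default installation with libmysql.dll under the lib directory (adjust the path for your own installation):

set PATH=C:\Program Files\MySQL\MySQL Server 8.0\lib;%PATH%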

and verify with the Dependencies GUI that now all dependencies can be found.

We can now confirm that the MySQL package can be correctly loaded.

HammerDB CLI v4.4
Copyright (C) 2003-2022 Steve Shaw
Type "help" for a list of commands
hammerdb>librarycheck
Checking database library for Oracle
Success ... loaded library Oratcl for Oracle
Checking database library for MSSQLServer
Success ... loaded library tdbc::odbc for MSSQLServer
Checking database library for Db2
Success ... loaded library db2tcl for Db2
Checking database library for MySQL
Success ... loaded library mysqltcl for MySQL
Checking database library for PostgreSQL
Success ... loaded library Pgtcl for PostgreSQL
Checking database library for MariaDB
Success ... loaded library mariatcl for MariaDB
hammerdb>

Summary

You have now learnt how to build your own HammerDB from source. This gives you access to all of the source code and shows you how HammerDB is built for a released distribution, enabling you to build your own.


HammerDB v4.3 New Features Pt2: Enhanced Webservice with SQLite repository

The most popular interface to HammerDB is the GUI; however, demand for running HammerDB workloads in environments without GUI interfaces led to the development of the CLI. From v3.2, cloud based usage led to the development of a HammerDB HTTP based web service interface, with output data stored in a SQLite database repository. From v4.3 this web service interface has been enhanced to add command line functionality to the service, as well as extending the ability to query the database of configuration, results and timing data, returned in JSON format. This enables a long-term repository of HammerDB benchmark data for integrating HammerDB into service based cloud environments.

Configuring and starting the Webservice

The webservice configuration is set in the generic.xml file in the config directory. ws_port defines the port on which to start the HTTP service, whilst sqlite_db sets the location of the SQLite repository database. The default value of TMP allows HammerDB to find a suitable temp directory in which to store the database. If :memory: is given, an in-memory database will be used; however, some functionality such as the storing of timing data is not available (because multiple threads cannot open the same in-memory database).

<webservice>
<ws_port>8080</ws_port> 
<sqlite_db>TMP</sqlite_db> 
</webservice>
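
For example, to use an in-memory repository instead (subject to the timing data limitation noted above), the same stanza would be:

<webservice>
<ws_port>8080</ws_port>
<sqlite_db>:memory:</sqlite_db>
</webservice>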

To start the web service, run the ./hammerdbws command on Linux or hammerdbws.bat on Windows.

./hammerdbws

Navigating to the defined port will show the Web Service interface, and clicking on the API link will return the available commands.


HammerDB Web Service Interface

In addition to the HTTP interface, the web service also provides a CLI interface to the service. This CLI interface translates the existing CLI commands to call the HTTP API, whilst translating the output to JSON format. At v4.3 the web service CLI supports all commands supported by the standard CLI except for the primary/replica features, meaning switchmode and steprun are not available.

The web service CLI means that direct HTTP calls and CLI commands can be mixed whilst storing all output in the SQLite repository for querying at a later time.

Running a schema build

The following PostgreSQL build script will run in both the standard and web service CLI.

dbset db pg
dbset bm TPC-C
diset tpcc pg_count_ware 20
diset tpcc pg_num_vu 4
diset tpcc pg_superuser postgres
diset tpcc pg_superuserpass postgres
buildschema

Running it in the web service CLI, we can see that the command output is returned to the user; however, the virtual user output is no longer returned and instead a jobid is given.

./hammerdbws 
HammerDB Web Service v4.3
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
Initialized new SQLite on-disk database /tmp/hammer.DB
Starting HammerDB Web Service on port 8080
Listening for HTTP requests on TCP port 8080
hammerws>source pgbuild.tcl
{"success": {"message": "Database set to PostgreSQL"}}
{"success": {"message": "Benchmark set to TPC-C for PostgreSQL"}}
{"success": {"message": "Changed tpcc:pg_count_ware from 1 to 20 for PostgreSQL"}}
{"success": {"message": "Changed tpcc:pg_num_vu from 1 to 4 for PostgreSQL"}}
{"success": {"message": "Value postgres for tpcc:pg_superuserpass is the same as existing value postgres, no change made"}}
{"success": {"message": "Building 20 Warehouses with 5 Virtual Users, 4 active + 1 Monitor VU(dict value pg_num_vu is set to 4): JOBID=619CF6B05D1703E263931333"}}

The jobids can be queried directly over HTTP:

List Jobs

and also for example the status of the current job.

Job status
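
For example, with a command line HTTP client such as curl; the endpoint form shown here is an assumption mirroring the jobs CLI commands below, and the definitive routes are those returned by the API link:

$ curl http://localhost:8080/jobs
$ curl "http://localhost:8080/jobs?jobid=619CF6B05D1703E263931333&status"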

This same output is available at the CLI prompt using the jobs command.

hammerws>jobs
["619CF6B05D1703E263931333"]

hammerws>jobs 619CF6B05D1703E263931333 status
[
"0",
"Ready to create a 20 Warehouse PostgreSQL TPROC-C schema\nin host LOCALHOST:5432 sslmode PREFER under user TPCC in database TPCC?",
"0",
"Vuser 1:RUNNING",
"0",
"Vuser 2:RUNNING",
"0",
"Vuser 3:RUNNING",
"0",
"Vuser 4:RUNNING",
"0",
"Vuser 5:RUNNING"
]

hammerws>jobs 619CF6B05D1703E263931333 1
[
"1",
"Monitor Thread",
"1",
"CREATING TPCC SCHEMA",
"1",
"CREATING DATABASE tpcc under OWNER tpcc",
"1",
"CREATING TPCC TABLES",
"1",
"Loading Item",
"1",
"Loading Items - 10000",
"1",
"Loading Items - 20000",
"1",
"Loading Items - 30000",
"1",
"Loading Items - 40000",
"1",
"Loading Items - 50000",
"1",
"Loading Items - 60000",
"1",
"Loading Items - 70000",
"1",
"Loading Items - 80000",
"1",
"Loading Items - 90000",
"1",
"Loading Items - 100000",
"1",
"Item done",
"1",
"Monitoring Workers...",
"1",
"Workers: 4 Active 0 Done"
]

In this example, we use the HTTP interface to verify from the output that the build completed successfully.

Build Job Complete

Running a Test

The following script is also recognisable as a valid CLI script with the addition of jobs commands to retrieve the test output.

dbset db pg
print dict
diset tpcc pg_superuser postgres
diset tpcc pg_defaultdbase postgres
diset tpcc pg_driver timed
diset tpcc pg_rampup 1
diset tpcc pg_duration 2
diset tpcc pg_timeprofile true
tcset refreshrate 10
loadscript
foreach z {1 2 4 8} {
puts "$z VU TEST"
vuset vu $z
vucreate
tcstart
set jobid [ vurun ]
runtimer 200
tcstop
jobs $jobid result
jobs $jobid timing
vudestroy
}
puts "JOB SEQUENCE ENDED"

Note that in this example, using runtimer to automate the tests means that the interactive prompt will not return whilst the test is running; however, it is still possible to query the jobs using the HTTP interface whilst the test runs.

Job status

Running this test script returns the following output.  Note that the job result and timing data is now queried from the SQLite repository during the script run.

hammerws>source pgrun.tcl
{"success": {"message": "Database set to PostgreSQL"}}
{
"connection": {
"pg_host": "localhost",
"pg_port": "5432",
"pg_sslmode": "prefer"
},
"tpcc": {
"pg_count_ware": "1",
"pg_num_vu": "1",
"pg_superuser": "postgres",
"pg_superuserpass": "postgres",
"pg_defaultdbase": "postgres",
"pg_user": "tpcc",
"pg_pass": "tpcc",
"pg_dbase": "tpcc",
"pg_tspace": "pg_default",
"pg_vacuum": "false",
"pg_dritasnap": "false",
"pg_oracompat": "false",
"pg_storedprocs": "false",
"pg_partition": "false",
"pg_total_iterations": "10000000",
"pg_raiseerror": "false",
"pg_keyandthink": "false",
"pg_driver": "test",
"pg_rampup": "2",
"pg_duration": "5",
"pg_allwarehouse": "false",
"pg_timeprofile": "false",
"pg_async_scale": "false",
"pg_async_client": "10",
"pg_async_verbose": "false",
"pg_async_delay": "1000",
"pg_connect_pool": "false"
}
}
{"success": {"message": "Value postgres for tpcc:pg_defaultdbase is the same as existing value postgres, no change made"}}
{"success": {"message": "Set driver script to timed, clearing Script, reload script to activate new setting"}}
{"success": {"message": "Changed tpcc:pg_rampup from 2 to 1 for PostgreSQL"}}
{"success": {"message": "Changed tpcc:pg_duration from 5 to 2 for PostgreSQL"}}
{"success": {"message": "Changed tpcc:pg_timeprofile from false to true for PostgreSQL"}}
{"success": {"message": "Transaction Counter refresh rate set to 10"}}
{"success": {"message": "script loaded"}}
1 VU TEST
{"success": {"message": "Virtual users set to 1"}}
{"success": {"message": "2 Virtual Users Created with Monitor VU"}}
{"success": {"message": "Transaction Counter Thread Started"}}
{"success": {"message": "Running Virtual Users: JOBID=619D05045D1703E213238373"}}
{"success": {"message": "Timer: 1 minutes elapsed"}}
{"success": {"message": "Timer: 2 minutes elapsed"}}
{"success": {"message": "Timer: 3 minutes elapsed"}}
{"success": {"message": "runtimer returned after 183 seconds"}}
{"success": {"message": "Transaction Counter thread running with threadid:tid0x7effcb1c5700"}}{"success": {"message": "Stopping Transaction Counter"}}
[
"619D05045D1703E213238373",
"2021-11-23 15:13:08",
"1 Active Virtual Users configured",
"TEST RESULT : System achieved 8089 NOPM from 18636 PostgreSQL TPM"
]
{
"NEWORD": {
"elapsed_ms": "181149.0",
"calls": "23974",
"min_ms": "1.856",
"avg_ms": "3.664",
"max_ms": "116.224",
"total_ms": "87851.402",
"p99_ms": "7.321",
"p95_ms": "5.624",
"p50_ms": "3.203",
"sd": "2550.005",
"ratio_pct": "48.497"
},
"PAYMENT": {
"elapsed_ms": "181149.0",
"calls": "23990",
"min_ms": "1.412",
"avg_ms": "2.799",
"max_ms": "119.255",
"total_ms": "67138.15",
"p99_ms": "4.552",
"p95_ms": "3.868",
"p50_ms": "2.52",
"sd": "2448.404",
"ratio_pct": "37.062"
},
"DELIVERY": {
"elapsed_ms": "181149.0",
"calls": "2473",
"min_ms": "2.609",
"avg_ms": "5.227",
"max_ms": "33.868",
"total_ms": "12927.033",
"p99_ms": "10.588",
"p95_ms": "8.638",
"p50_ms": "4.569",
"sd": "2507.252",
"ratio_pct": "7.136"
},
"SLEV": {
"elapsed_ms": "181149.0",
"calls": "2452",
"min_ms": "0.7",
"avg_ms": "2.679",
"max_ms": "13.246",
"total_ms": "6569.494",
"p99_ms": "6.72",
"p95_ms": "5.279",
"p50_ms": "2.258",
"sd": "1407.091",
"ratio_pct": "3.627"
},
"OSTAT": {
"elapsed_ms": "181149.0",
"calls": "2407",
"min_ms": "0.182",
"avg_ms": "0.74",
"max_ms": "2.494",
"total_ms": "1781.993",
"p99_ms": "1.75",
"p95_ms": "1.436",
"p50_ms": "0.614",
"sd": "392.984",
"ratio_pct": "0.984"
}
}
{"success": {"message": "vudestroy success"}}
2 VU TEST
{"success": {"message": "Virtual users set to 2"}}
{"success": {"message": "3 Virtual Users Created with Monitor VU"}}
{"success": {"message": "Transaction Counter Thread Started"}}
{"success": {"message": "Running Virtual Users: JOBID=619D05BC5D1703E263930313"}}
{"success": {"message": "Timer: 1 minutes elapsed"}}
{"success": {"message": "Timer: 2 minutes elapsed"}}
{"success": {"message": "Timer: 3 minutes elapsed"}}
{"success": {"message": "runtimer returned after 181 seconds"}}
{"success": {"message": "Transaction Counter thread running with threadid:tid0x7effca74c700"}}{"success": {"message": "Stopping Transaction Counter"}}
[
"619D05BC5D1703E263930313",
"2021-11-23 15:16:12",
"2 Active Virtual Users configured",
"TEST RESULT : System achieved 14102 NOPM from 32446 PostgreSQL TPM"
]
{
"NEWORD": {
"elapsed_ms": "179711.5",
"calls": "20902",
"min_ms": "2.13",
"avg_ms": "3.97",
"max_ms": "150.767",
"total_ms": "82979.222",
"p99_ms": "8.438",
"p95_ms": "5.836",
"p50_ms": "3.293",
"sd": "4482.874",
"ratio_pct": "46.323"
},
"PAYMENT": {
"elapsed_ms": "179711.5",
"calls": "20749",
"min_ms": "1.601",
"avg_ms": "3.56",
"max_ms": "147.367",
"total_ms": "73867.606",
"p99_ms": "6.637",
"p95_ms": "4.614",
"p50_ms": "3.201",
"sd": "4299.084",
"ratio_pct": "41.237"
},
"DELIVERY": {
"elapsed_ms": "179711.5",
"calls": "2114",
"min_ms": "2.789",
"avg_ms": "5.394",
"max_ms": "85.028",
"total_ms": "11401.954",
"p99_ms": "11.38",
"p95_ms": "8.898",
"p50_ms": "4.95",
"sd": "3780.131",
"ratio_pct": "6.365"
},
"SLEV": {
"elapsed_ms": "179711.5",
"calls": "2152",
"min_ms": "0.803",
"avg_ms": "2.459",
"max_ms": "12.131",
"total_ms": "5292.509",
"p99_ms": "6.567",
"p95_ms": "5.241",
"p50_ms": "2.039",
"sd": "1300.169",
"ratio_pct": "2.955"
},
"OSTAT": {
"elapsed_ms": "179711.5",
"calls": "2131",
"min_ms": "0.181",
"avg_ms": "0.631",
"max_ms": "2.649",
"total_ms": "1345.042",
"p99_ms": "1.656",
"p95_ms": "1.331",
"p50_ms": "0.526",
"sd": "329.533",
"ratio_pct": "0.751"
}
}
{"success": {"message": "vudestroy success"}}
4 VU TEST
{"success": {"message": "Virtual users set to 4"}}
{"success": {"message": "5 Virtual Users Created with Monitor VU"}}
{"success": {"message": "Transaction Counter Thread Started"}}
{"success": {"message": "Running Virtual Users: JOBID=619D06735D1703E263039373"}}
{"success": {"message": "Timer: 1 minutes elapsed"}}
{"success": {"message": "Timer: 2 minutes elapsed"}}
{"success": {"message": "Timer: 3 minutes elapsed"}}
{"success": {"message": "runtimer returned after 183 seconds"}}
{"success": {"message": "Transaction Counter thread running with threadid:tid0x7effc974a700"}}{"success": {"message": "Stopping Transaction Counter"}}
[
"619D06735D1703E263039373",
"2021-11-23 15:19:15",
"4 Active Virtual Users configured",
"TEST RESULT : System achieved 22392 NOPM from 51340 PostgreSQL TPM"
]
{
"NEWORD": {
"elapsed_ms": "180723.5",
"calls": "17438",
"min_ms": "2.082",
"avg_ms": "5.08",
"max_ms": "160.272",
"total_ms": "88591.161",
"p99_ms": "10.536",
"p95_ms": "7.33",
"p50_ms": "4.737",
"sd": "5223.001",
"ratio_pct": "48.826"
},
"PAYMENT": {
"elapsed_ms": "180723.5",
"calls": "17340",
"min_ms": "1.815",
"avg_ms": "4.011",
"max_ms": "133.818",
"total_ms": "69556.936",
"p99_ms": "8.386",
"p95_ms": "5.476",
"p50_ms": "3.329",
"sd": "5131.308",
"ratio_pct": "38.335"
},
"DELIVERY": {
"elapsed_ms": "180723.5",
"calls": "1723",
"min_ms": "2.601",
"avg_ms": "7.267",
"max_ms": "130.229",
"total_ms": "12520.634",
"p99_ms": "14.639",
"p95_ms": "11.303",
"p50_ms": "6.758",
"sd": "5597.635",
"ratio_pct": "6.901"
},
"SLEV": {
"elapsed_ms": "180723.5",
"calls": "1738",
"min_ms": "0.75",
"avg_ms": "2.816",
"max_ms": "7.497",
"total_ms": "4893.517",
"p99_ms": "6.261",
"p95_ms": "5.099",
"p50_ms": "2.784",
"sd": "1344.462",
"ratio_pct": "2.697"
},
"OSTAT": {
"elapsed_ms": "180723.5",
"calls": "1697",
"min_ms": "0.164",
"avg_ms": "0.781",
"max_ms": "3.79",
"total_ms": "1324.922",
"p99_ms": "2.155",
"p95_ms": "1.436",
"p50_ms": "0.755",
"sd": "414.512",
"ratio_pct": "0.73"
}
}
{"success": {"message": "vudestroy success"}}
8 VU TEST
{"success": {"message": "Virtual users set to 8"}}
{"success": {"message": "9 Virtual Users Created with Monitor VU"}}
{"success": {"message": "Transaction Counter Thread Started"}}
{"success": {"message": "Running Virtual Users: JOBID=619D072D5D1703E273631383"}}
{"success": {"message": "Timer: 1 minutes elapsed"}}
{"success": {"message": "Timer: 2 minutes elapsed"}}
{"success": {"message": "Timer: 3 minutes elapsed"}}
{"success": {"message": "runtimer returned after 184 seconds"}}
{"success": {"message": "Transaction Counter thread running with threadid:tid0x7effaaffd700"}}{"success": {"message": "Stopping Transaction Counter"}}
[
"619D072D5D1703E273631383",
"2021-11-23 15:22:21",
"8 Active Virtual Users configured",
"TEST RESULT : System achieved 36483 NOPM from 84065 PostgreSQL TPM"
]
{
"PAYMENT": {
"elapsed_ms": "181853.0",
"calls": "12384",
"min_ms": "1.85",
"avg_ms": "6.93",
"max_ms": "378.076",
"total_ms": "85816.379",
"p99_ms": "68.816",
"p95_ms": "11.981",
"p50_ms": "4.779",
"sd": "13572.668",
"ratio_pct": "48.716"
},
"NEWORD": {
"elapsed_ms": "181853.0",
"calls": "12454",
"min_ms": "2.182",
"avg_ms": "5.892",
"max_ms": "412.09",
"total_ms": "73383.456",
"p99_ms": "66.096",
"p95_ms": "10.838",
"p50_ms": "3.967",
"sd": "11271.032",
"ratio_pct": "41.658"
},
"DELIVERY": {
"elapsed_ms": "181853.0",
"calls": "1272",
"min_ms": "1.174",
"avg_ms": "7.539",
"max_ms": "103.49",
"total_ms": "9589.561",
"p99_ms": "65.102",
"p95_ms": "16.171",
"p50_ms": "5.538",
"sd": "8412.093",
"ratio_pct": "5.444"
},
"SLEV": {
"elapsed_ms": "181853.0",
"calls": "1260",
"min_ms": "1.165",
"avg_ms": "2.744",
"max_ms": "18.103",
"total_ms": "3456.896",
"p99_ms": "10.539",
"p95_ms": "7.59",
"p50_ms": "2.09",
"sd": "1989.072",
"ratio_pct": "1.962"
},
"OSTAT": {
"elapsed_ms": "181853.0",
"calls": "1270",
"min_ms": "0.205",
"avg_ms": "0.824",
"max_ms": "9.996",
"total_ms": "1045.855",
"p99_ms": "4.74",
"p95_ms": "2.312",
"p50_ms": "0.516",
"sd": "857.958",
"ratio_pct": "0.594"
}
}
{"success": {"message": "vudestroy success"}}
JOB SEQUENCE ENDED

With all jobs complete, the SQLite repository can be queried either over HTTP or from the jobs command. If the webservice is stopped and restarted, an on-disk repository is retained for querying at a later point in time.

hammerws>jobs
[
"619CF6B05D1703E263931333",
"619D05045D1703E213238373",
"619D05BC5D1703E263930313",
"619D06735D1703E263039373",
"619D072D5D1703E273631383"
]
hammerws>

The help jobs command shows the available commands to query the repository.

hammerws>help jobs
jobs - Usage: jobs
list all jobs.

jobs - Usage: jobs [jobid|result|timestamp]
jobid: list VU output for jobid.
result: list result for all jobs.
timestamp: list starting timestamp for all jobs.

jobs jobid - Usage: jobs jobid [bm|db|delete|dict|result|status|tcount|timestamp|timing|vuid]
bm: list benchmark for jobid.
db: list database for jobid.
delete: delete jobid.
dict: list dict for jobid.
result: list result for jobid.
status: list status for jobid.
tcount: list count for jobid.
timestamp: list starting timestamp for jobid.
timing: list xtprof summary timings for jobid.
vuid: list VU output for VU with vuid for jobid.

jobs jobid timing - Usage: jobs jobid timing vuid
timing vuid: list xtprof timings for vuid for jobid.

The CLI jobs result command will query all jobs and return the result if it finds one. In the example below, as expected, the build job does not have a result, but results are returned for all of the driver jobs.

hammerws>jobs result
[
"619CF6B05D1703E263931333",
"Jobid has no test result"
]
[
"619D05045D1703E213238373",
"2021-11-23 15:13:08",
"1 Active Virtual Users configured",
"TEST RESULT : System achieved 8089 NOPM from 18636 PostgreSQL TPM"
]
[
"619D05BC5D1703E263930313",
"2021-11-23 15:16:12",
"2 Active Virtual Users configured",
"TEST RESULT : System achieved 14102 NOPM from 32446 PostgreSQL TPM"
]
[
"619D06735D1703E263039373",
"2021-11-23 15:19:15",
"4 Active Virtual Users configured",
"TEST RESULT : System achieved 22392 NOPM from 51340 PostgreSQL TPM"
]
[
"619D072D5D1703E273631383",
"2021-11-23 15:22:21",
"8 Active Virtual Users configured",
"TEST RESULT : System achieved 36483 NOPM from 84065 PostgreSQL TPM"
]

An individual job can be queried for any related information, such as the timestamp when the job started or the configuration dict for the job. In the example below, because the transaction counter was started, the transaction count data for the run can be retrieved based on the jobid.

hammerws>jobs 619D072D5D1703E273631383 tcount
{"PostgreSQL tpm": {
"0": "2021-11-23 15:22:21",
"76662": "2021-11-23 15:22:31",
"102426": "2021-11-23 15:22:41",
"74574": "2021-11-23 15:22:51",
"71526": "2021-11-23 15:23:01",
"119334": "2021-11-23 15:23:11",
"96852": "2021-11-23 15:23:21",
"59640": "2021-11-23 15:23:31",
"79512": "2021-11-23 15:23:41",
"93750": "2021-11-23 15:23:51",
"92322": "2021-11-23 15:24:01",
"68988": "2021-11-23 15:24:11",
"95556": "2021-11-23 15:24:21",
"97512": "2021-11-23 15:24:31",
"76590": "2021-11-23 15:24:41",
"55236": "2021-11-23 15:24:51",
"97254": "2021-11-23 15:25:01",
"98364": "2021-11-23 15:25:11",
"95784": "2021-11-23 15:25:21"
}}

Summary

It should be clear that the v4.3 webservice functionality provides a building block to integrate HammerDB functionality into wider webservice environments. The addition of the CLI provides an easy-to-use command line interface to the functionality, whilst the SQLite repository and output in JSON format enable HammerDB workloads to be automated, with results, output, timing data and configuration all retrievable for a particular job at any later point in time.

HammerDB v4.3 New Features Pt1: Graphical Metrics for PostgreSQL

Introducing the PostgreSQL performance metrics viewer

Prior to version 4.3, HammerDB included a graphical performance metrics view for the Oracle database only. At v4.3 HammerDB includes the same functionality for PostgreSQL, enabling the user to drill down on database metrics in real time. Additionally, using the Active Session History functionality, it is possible to select a previous time period of statistics in the graph and view the PostgreSQL metrics for this earlier period of time. This enables the user to compare and contrast performance across different benchmark scenarios.

PostgreSQL Graphical Metrics

Install pg_stat_statements and pgsentinel extensions

To use the PostgreSQL graphical metrics, it is necessary to first install the pg_stat_statements and pgsentinel extensions in the database to be viewed.

This example uses a PostgreSQL source based build with the pg_stat_statements extension found in the contrib directory.  Running make and make install in this directory installs the extension.

/postgresql-14.1/contrib/pg_stat_statements$ make
...
make[1]: Entering directory '/opt/postgresql-14.1/src/backend'
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -shared -o pg_stat_statements.so pg_stat_statements.o -L../../src/port -L../../src/common -Wl,--as-needed -Wl,-rpath,'/opt/postgresql-14.1/lib',--enable-new-dtags -lm 
...
/postgresql-14.1/contrib/pg_stat_statements$ make install
...
/usr/bin/install -c -m 644 ./pg_stat_statements--1.4.sql ./pg_stat_statements--1.8--1.9.sql ./pg_stat_statements--1.7--1.8.sql ./pg_stat_statements--1.6--1.7.sql ./pg_stat_statements--1.5--1.6.sql ./pg_stat_statements--1.4--1.5.sql ./pg_stat_statements--1.3--1.4.sql ./pg_stat_statements--1.2--1.3.sql ./pg_stat_statements--1.1--1.2.sql ./pg_stat_statements--1.0--1.1.sql '/opt/postgresql-14.1/share/extension/'
...

To build pgsentinel, it is necessary to download it from GitHub and run make and make install as we did previously for pg_stat_statements.

git clone https://github.com/pgsentinel/pgsentinel.git
cd pgsentinel/src
/opt/postgresql-14.1/pgsentinel/src$ make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 -fPIC -I. -I./ -I/opt/postgresql-14.1/include/server -I/opt/postgresql-14.1/include/internal -D_GNU_SOURCE -c -o pgsentinel.o pgsentinel.c
/opt/postgresql-14.1/pgsentinel/src$ make install
/bin/mkdir -p '/opt/postgresql-14.1/lib'
...

Add the following example entries to postgresql.conf:

shared_preload_libraries = 'pg_stat_statements,pgsentinel'
track_activity_query_size=2048
pg_stat_statements.save=on
pg_stat_statements.track=all
pgsentinel_pgssh.enable = true
pgsentinel_ash.pull_frequency = 1
pgsentinel_ash.max_entries = 1000000

Start the database, then log in and create the extensions as follows:

postgres=# create extension pg_stat_statements;
CREATE EXTENSION
postgres=# create extension pgsentinel;
CREATE EXTENSION

If the build went according to plan, the pg_active_session_history table will be found.

postgres=# select * from pg_active_session_history;
 ash_time | datid | datname | pid | leader_pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | wait_event_type | wait_event | state | backend_xid | backend_xmin | top_level_query | query | cmdtype | queryid | backend_type | blockers | blockerpid | blocker_state
(0 rows)

Starting the PostgreSQL Metrics

With the pg_stat_statements and pgsentinel extensions installed, it is now possible to start the HammerDB graphical metrics to view the Active Session History.

Under the tree view there is a Metrics entry that enables editing the options and starting the display. Note that if a workload is running and the treeview is locked, it is also possible to access the options from the Options menu and to start the metrics from the Metrics button.

PostgreSQL Metrics treeview

Clicking on Options shows the login credentials for the PostgreSQL superuser. The Agent ID and Hostname are identical to the metrics options at earlier HammerDB versions, enabling the viewing of CPU performance by connecting to a running agent.

PostgreSQL Metrics Options

By clicking on the Metrics Display option, HammerDB will connect to the target database and start monitoring the performance. Note that if the pg_active_session_history table is empty, the viewer will report an error and refuse to start.

No rows found

If the pg_active_session_history table is populated (after running or having previously run a workload) the Metrics tab will become activated.

Started Metrics

Viewing PostgreSQL Metrics for a HammerDB workload

For full performance details, grab the Metrics tab and pull it out of the HammerDB window to expand it.

Drag out Metrics tab

The example shows a TPROC-C workload running with 4 Active Virtual Users. The example has a configuration set to illustrate a number of wait events and has therefore not been configured for performance.

Workload running

The PostgreSQL performance metrics will automatically start to be populated in the viewer.  The options enable the user to drill down on the SQL running in the database, the wait events and the user statistics.  The example shows we have 4 users called tpcc running the SQL from the TPROC-C workload with key WALWrite and CPU events.  The events are colour coded and indexed in the graph to the wait event groups.

Metrics view for benchmark

When a benchmark workload has completed, use the selection tool in the graph to select the metrics for a period of time of interest. The viewer will be populated with the metrics for that specific period of time. Clicking on the SQL, events or user will display the output relevant to that selection. The SQL entry gives further options of sql text, io and stats, the wait events show the SQL that caused that wait event, and the users show a summary of statistics for that user. The following example shows the SQL view.

Active Session History SQL view

The following example shows the LWLock WalWrite event, illustrating the top SQL that caused that event.

Active Session History event view

Closing the window will return it to the main GUI, showing the performance graph summary.

Summary display

Pressing stop on the Metrics button will close the PostgreSQL metrics viewer.

Summary

The HammerDB graphical metrics viewer in v4.3 adds the functionality to view the PostgreSQL active session history for benchmark workloads, enabling the user to find and diagnose bottlenecks in hardware and software configurations in a PostgreSQL environment.  Further information on CPU and database metrics can be found in the documentation.


HammerDB: Using MySQL 5.7 vs 8.0 to understand performance profiles

One of the most important concepts in analysing database performance is that of understanding scalability. When a system ‘scales’ it is able to deliver higher levels of performance proportional to the system resources available to it.  In particular, this means as we add CPU cores and increase the system load, we see higher performance. The way we measure this is through a ‘performance profile’. This simply means that we run a series of tests incrementally increasing the system load until we find the peak performance. Plotting these data points enables us to understand the scalability of the database software being tested on that system.

In this example, we will compare MySQL 5.7.33 and MySQL 8.0.25 with the TPROC-C workload on a system with 2 sockets of Intel Xeon 8280L, meaning we have 28 cores per socket, 56 physical CPUs and 112 logical CPUs with Hyper-Threading.

So as a first step we will install MySQL 5.7.33 and MySQL 8.0.25 and build a schema on both. In this example, we will use the HammerDB CLI and build the schema with the script as follows in both databases.

dbset db mysql
dbset bm TPC-C
vuset logtotemp 1
diset connection mysql_socket /tmp/mysql.sock
diset tpcc mysql_count_ware 800
diset tpcc mysql_num_vu 64
diset tpcc mysql_partition true
buildschema
waittocomplete
quit

Then we can run an interactive workload for a single Virtual User as follows for MySQL 8.0.25

./hammerdbcli
HammerDB CLI v4.2
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>dbset db mysql
Database set to MySQL
hammerdb>diset tpcc mysql_driver timed
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:mysql_driver from test to timed for MySQL
hammerdb>vuset logtotemp 1
hammerdb>vuset unique 1
hammerdb>vuset vu 1
hammerdb>loadscript
Script loaded, Type "print script" to view
hammerdb>vucreate
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Logging activated
to /tmp/hammerdb_60F169935C7303E293236333.log
2 Virtual Users Created with Monitor VU
hammerdb>vurun
Vuser 1:RUNNING
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
...
hammerdb>Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 25469 NOPM from 76873 MySQL TPM
Vuser 1:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE

and MySQL 5.7.33

hammerdb>dbset db mysql
Database set to MySQL
hammerdb>diset tpcc mysql_driver timed
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:mysql_driver from test to timed for MySQL
hammerdb>vuset logtotemp 1
hammerdb>vuset unique 1
hammerdb>loadscript
Script loaded, Type "print script" to view
hammerdb>vuset vu 1
hammerdb>vucreate
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Logging activated
to /tmp/hammerdb_60F183575C7303E273030333.log
2 Virtual Users Created with Monitor VU
hammerdb>vurun
Vuser 1:RUNNING
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
hammerdb>Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:1 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 27154 NOPM from 82383 MySQL TPM
Vuser 1:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE

Extracting the NOPM value from the CLI or from the log file, we found that MySQL 8.0.25 returned 25469 NOPM to MySQL 5.7.33’s 27154 NOPM, meaning the older version of MySQL gave a slightly better result.

So does this mean that we can conclude that MySQL 5.7 is 1.06X faster than MySQL 8.0? Of course not: we have only run a single Virtual User and therefore have not tested MySQL’s ability to manage multiple sessions running the same workload concurrently. Therefore, again using the interactive shell, let us now run more tests, increasing the Virtual User count to 20, as sketched below.
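
A minimal way to do this in the same interactive session, reusing the commands shown above (runtimer 480 is an illustrative timeout that comfortably covers the 2 minute rampup and 5 minute test):

hammerdb>vuset vu 20
hammerdb>vucreate
hammerdb>vurun
hammerdb>runtimer 480
hammerdb>vudestroy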

Now at 20 Virtual Users, MySQL 8.0 is at 390014 TPM with MySQL 5.7 at 379972, putting MySQL 8.0 at 1.02X better than MySQL 5.7. However, again this might not be giving us the full picture, so let’s run a fully automated test up to 120 Virtual Users using the following script:

puts "MySQL Test Started"
dbset db mysql
dbset bm TPC-C
diset connection mysql_socket /tmp/mysql.sock
diset tpcc mysql_driver timed
diset tpcc mysql_rampup 2
diset tpcc mysql_duration 5
vuset logtotemp 1
vuset unique 1
loadscript
foreach z {1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 108 112 116 120} {
puts "$z VU test"
vuset vu $z
vucreate
vurun
runtimer 480
vudestroy
}
puts "MySQL Test Complete"

When we graph the results, we now get a much better picture of the full capabilities of the MySQL 5.7 and 8.0 software running on the same server.

Now we can see that MySQL 8.0 reaches peak performance at 1006469 NOPM at 112 Virtual Users, with MySQL 5.7 peaking at 666407 at 68 Virtual Users, putting MySQL 8.0.25 at 1.51X over MySQL 5.7.33.

It should be clear from the performance profile that the difference in performance can be attributed to the scalability of the database software, in this case MySQL. This is the same server, same OS and same HammerDB client, the only difference is in the server software.

Crucially, HammerDB can also run against Oracle, SQL Server, Db2, MariaDB and PostgreSQL on Windows and Linux, so we know that the capabilities of the server with commercial database software are actually a lot higher, and also that the capabilities of the HammerDB client are a lot higher – it is the database software and the database software alone that limits the scalability in this case.

Also important is the user experience, so we have also gathered the transaction response times, with the summaries shown below for both MySQL versions at 68 Virtual Users, the peak performance of MySQL 5.7.33.

MySQL 8.0.25

Vuser 1:68 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 841337 NOPM from 2550191 MySQL TPM
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> SUMMARY OF 68 ACTIVE VIRTUAL USERS : MEDIAN ELAPSED TIME : 403363ms
>>>>> PROC: NEWORD
CALLS: 5775293 MIN: 0.810ms AVG: 2.401ms MAX: 57.323ms TOTAL: 13867195.952ms
P99: 4.005ms P95: 3.698ms P50: 2.316ms SD: 792.316 RATIO: 50.557%
>>>>> PROC: PAYMENT
CALLS: 5775133 MIN: 0.435ms AVG: 1.042ms MAX: 56.802ms TOTAL: 6022416.890ms
P99: 2.052ms P95: 1.576ms P50: 0.978ms SD: 401.213 RATIO: 21.957%
>>>>> PROC: DELIVERY
CALLS: 578532 MIN: 4.001ms AVG: 8.929ms MAX: 67.479ms TOTAL: 5165817.194ms
P99: 12.438ms P95: 11.282ms P50: 9.063ms SD: 1831.663 RATIO: 18.834%
>>>>> PROC: SLEV
CALLS: 578436 MIN: 0.752ms AVG: 2.468ms MAX: 56.045ms TOTAL: 1427926.455ms
P99: 3.585ms P95: 3.230ms P50: 2.459ms SD: 560.309 RATIO: 5.206%
>>>>> PROC: OSTAT
CALLS: 576834 MIN: 0.286ms AVG: 0.955ms MAX: 39.063ms TOTAL: 551043.284ms
P99: 1.796ms P95: 1.432ms P50: 0.912ms SD: 334.873 RATIO: 2.009%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

MySQL 5.7.33

Vuser 1:68 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 655295 NOPM from 1986131 MySQL TPM
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> SUMMARY OF 68 ACTIVE VIRTUAL USERS : MEDIAN ELAPSED TIME : 403998ms
>>>>> PROC: NEWORD
CALLS: 4542033 MIN: 0.714ms AVG: 3.082ms MAX: 42.174ms TOTAL: 14000329.326ms
P99: 5.124ms P95: 4.720ms P50: 3.001ms SD: 994.571 RATIO: 50.962%
>>>>> PROC: DELIVERY
CALLS: 453821 MIN: 3.020ms AVG: 13.931ms MAX: 58.857ms TOTAL: 6322207.859ms
P99: 19.838ms P95: 17.926ms P50: 13.975ms SD: 2826.392 RATIO: 23.013%
>>>>> PROC: PAYMENT
CALLS: 4544693 MIN: 0.352ms AVG: 1.157ms MAX: 37.108ms TOTAL: 5261866.428ms
P99: 2.231ms P95: 1.734ms P50: 1.096ms SD: 429.929 RATIO: 19.154%
>>>>> PROC: SLEV
CALLS: 455338 MIN: 0.686ms AVG: 2.407ms MAX: 54.447ms TOTAL: 1096000.372ms
P99: 3.253ms P95: 3.048ms P50: 2.381ms SD: 501.128 RATIO: 3.990%
>>>>> PROC: OSTAT
CALLS: 454731 MIN: 0.252ms AVG: 0.946ms MAX: 28.175ms TOTAL: 430240.916ms
P99: 1.856ms P95: 1.458ms P50: 0.905ms SD: 321.024 RATIO: 1.566%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

What we can see is that with MySQL 8.0.25 we have both higher throughput and lower response times. MySQL 8.0 therefore offers a higher capacity for load than MySQL 5.7 on this server.

It should also be clear that testing in a single socket environment may not give you an accurate comparison of database software designed to take advantage of multiple CPU sockets. In this case we tested MySQL vs MySQL; however, you should exercise caution in drawing conclusions about database software capabilities without first having tested in a server environment.

HammerDB v4.2 New Features Pt2: Increase of TPROC-C Schema Size Limits

Prior to version 4.2, the limit on the size of the schema that could be generated was set to 5000 warehouses in the GUI and 30,000 if using the datagen feature to generate flat files for external loading. From version 4.2 these limits have been increased to 100,000 for both features. This post gives a background to the schema size limits, with guidance on the warehouse count and expected size limits. It also shows how to manually exceed the 100,000 limit if you wish.

Up to version 4.1, when using the GUI the number of warehouses is adjusted by a spinbox with an upper limit set to 5000. You could manually enter a larger figure, but would then receive the error shown that 5000 was the limit.

This was often interpreted as a limit of the amount of data that HammerDB could generate. Instead, however, this was an imposed limit to encourage right-sizing of the test database instead of over-sizing.

The number of warehouses is passed as one of the parameters in the last line of the build script, and this parameter itself has no limit. In the example below, the value 5000 can be seen as the fourth argument to the function do_tpcc. If you stop the build, manually modify this value and re-run the build, it will generate the number of warehouses you have defined, as sketched below. This is the same for the previous 5000 limit or the current 100,000 limit.

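One possible way to apply such a manual change in the CLI is sketched below; customscript is the standard CLI command for loading an external script, while the filename, the editing step and the exact argument position for your database are illustrative:

(save a copy of the generated build script, then edit the warehouse count argument of do_tpcc in its last line)
hammerdb>customscript modified_build.tcl
hammerdb>vucreate
hammerdb>vurun
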
However, the precision of some numeric data types may not have been sufficient for larger values in some databases. For example, for Oracle as shown, “W_ID” of the WAREHOUSE table was set to the NUMBER data type with precision and scale of (4,0), meaning that the maximum number of warehouses permitted was 9999 before Oracle would generate an error.

"CREATE TABLE WAREHOUSE (W_ID NUMBER(4, 0), W_YTD NUMBER(12, 2), W_TAX NUMBER(4, 4), W_NAME VARCHAR2(10), W_STREET_1 VARCHAR2(20), W_STREET_2 VARCHAR2(20), W_CITY VARCHAR2(20), W_STATE CHAR(2), W_ZIP CHAR(9)) INITRANS 4 PCTFREE 99 PCTUSED 1"

All databases have been checked and updated for v4.2 to ensure that schema builds of up to 100,000 warehouses will complete without error.

Regarding the amount of disk space that should be reserved for a schema build, the general guidance is to allow up to 100MB per warehouse. Typically, not all of this space will be needed and varies per database, however it should not exceed this value. Therefore, for 100,000 warehouses, 10TB of space should be allowed.

However, be aware that just because you can build schemas of up to 100,000 warehouses, it does not necessarily mean that this will be right for you. With the default workload, each Virtual User will process 90% of its transactions against its home warehouse, selected at random at the start of the test. Consequently, as shown, if you are running 2 Virtual Users it makes little difference whether you have created 10, 1000, 10,000 or 100,000 warehouses: aside from the 2 selected at random, most of that data will remain idle and will not affect the result of the test. You should create a schema large enough to allow an even, uncontended, random distribution of Virtual Users for the maximum Virtual User count you expect to create.

For further information on right sizing, see the post:

How Many Warehouses for the HammerDB TPC-C Test?

Where larger warehouse counts are of most benefit is when using advanced driver script options such as use all warehouses and event driven scaling.  These are areas where creating larger schema sizes can add additional benefit and insight to your testing above and beyond the default schema installations.

HammerDB v4.2 New Features Pt1: MariaDB Build and Test Example with the CLI

Prior to HammerDB v4.2, support for MariaDB was provided through the MySQL interface. However, as the two databases diverged from a shared common codebase, this presented a challenge in that it was not possible to modify the workload for MySQL or MariaDB without also changing the other. Also, the additional installation of the MySQL client libraries was needed when testing MariaDB. HammerDB v4.2 adds full support for MariaDB in both the GUI as shown:

and in the CLI as a separate database from MySQL. This post describes a setup and test for MariaDB using the CLI on Linux as an example, running the TPROC-C workload.

First, download your favourite release and format of MariaDB from https://downloads.mariadb.org/. We will use the Linux generic file mariadb-10.2.34-linux-x86_64.tar.gz of release 10.2.34 and extract it into a directory such as /opt. We have added a my.cnf file to the file listing.

/opt/mariadb-10.2.34-linux-x86_64$ ls
bin EXCEPTIONS-CLIENT man README-wsrep support-files
COPYING include my.cnf scripts THIRDPARTY
CREDITS INSTALL-BINARY mysql-test share
data lib README.md sql-bench

Also install HammerDB v4.2 if you have not done so already, and run the librarycheck command from the CLI. As we have not told HammerDB where to find the MariaDB libraries, it will generate an error telling you to add the location of these libraries to your LD_LIBRARY_PATH.

hammerdb>librarycheck
....
Checking database library for MariaDB
Error: failed to load mariatcl - couldn't load file "/home/HammerDB-4.2/lib/mariatcl0.1/libmariatcl0.1.so": libmariadb.so.3: cannot open shared object file: No such file or directory
Ensure that MariaDB client libraries are installed and the location in the LD_LIBRARY_PATH environment variable

Note that the error tells us we are missing the file libmariadb.so.3. Advanced users can check this using the ldd command on the HammerDB library file itself.

~/HammerDB-4.2/lib/mariatcl0.1$ ldd libmariatcl0.1.so
linux-vdso.so.1 (0x00007ffd3edff000)
libmariadb.so.3 => not found
...

If we look in the lib directory of the MariaDB installation, we can find this file:

/opt/mariadb-10.2.34-linux-x86_64/lib$ ls
galera libmariadb.so.3 libmysqlclient.so libmysqlservices.a
libgalera_smm.so libmysqlclient.a libmysqld.a pkgconfig
libmariadbclient.a libmysqlclient_r.a libmysqld.so plugin
libmariadb.so libmysqlclient_r.so libmysqld.so.19

So let's add this directory to the LD_LIBRARY_PATH and retrace our steps.

$ export LD_LIBRARY_PATH=/opt/mariadb-10.2.34-linux-x86_64/lib:$LD_LIBRARY_PATH

Now using ldd we can see that the library is found in the directory we added to the LD_LIBRARY_PATH.

~/HammerDB-4.2/lib/mariatcl0.1$ ldd libmariatcl0.1.so
linux-vdso.so.1 (0x00007fff613f3000)
libmariadb.so.3 => /opt/mariadb-10.2.34-linux-x86_64/lib/libmariadb.so.3 (0x00007fa177941000)
...

Now librarycheck succeeds, as HammerDB knows where to find this file, meaning HammerDB is ready to start testing MariaDB.

~/HammerDB-4.2$ ./hammerdbcli
HammerDB CLI v4.2
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
...
Checking database library for MariaDB
Success ... loaded library mariatcl for MariaDB

As noted previously, we've added a my.cnf file to the MariaDB directory and tuned it for performance rather than resilience. (A future blog post on performance profiles will explain why this is a good first test on which to baseline other changes.)

$ more my.cnf
[mysqld]
skip-log-bin
lc_messages_dir=/opt/mariadb-10.2.34-linux-x86_64/share/english
datadir=/opt/mariadb-10.2.34-linux-x86_64/data
default_authentication_plugin=mysql_native_password
socket=/tmp/mariadb10234.sock
port=3306
log-error=/tmp/mariadb10234.log
bind_address=localhost
# general
max_connections=1000
table_open_cache=2000
table_open_cache_instances=16
back_log=1500
#default_password_lifetime=0
ssl=0
performance_schema=OFF
max_prepared_stmt_count=12800
skip_log_bin=1
character_set_server=latin1
collation_server=latin1_swedish_ci
transaction_isolation=REPEATABLE-READ
# files
#innodb_file_per_table
innodb_log_file_size=16384M
innodb_open_files=1000
# buffers
innodb_buffer_pool_size=64000M
innodb_buffer_pool_instances=16
innodb_log_buffer_size=64M
# tune
innodb_doublewrite=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=0
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
join_buffer_size=32K
sort_buffer_size=32K
innodb_use_native_aio=1
innodb_stats_persistent=1
innodb_spin_wait_delay=6
innodb_max_purge_lag_delay=300000
innodb_max_purge_lag=0
innodb_flush_method=O_DIRECT_NO_FSYNC
#innodb_checksum_algorithm=none
innodb_io_capacity=4000
innodb_io_capacity_max=20000
innodb_lru_scan_depth=9000
innodb_change_buffering=none
innodb_read_only=0
#innodb_page_cleaners=2
#innodb_undo_log_truncate=off
# perf special
innodb_adaptive_flushing=1
innodb_flush_neighbors=0
innodb_read_io_threads=16
innodb_write_io_threads=16
innodb_purge_threads=4
innodb_adaptive_hash_index=0
# monitoring
innodb_monitor_enable='%'

Next we will install the database and start MariaDB:

./scripts/mysql_install_db --defaults-file=/opt/mariadb-10.2.34-linux-x86_64/my.cnf --user=mysql
./bin/mysqld --defaults-file=./my.cnf

Finally, we will set the root password:

./bin/mysql -uroot -S/tmp/mariadb10234.sock
...
MySQL [(none)]> SET PASSWORD FOR 'root'@localhost = PASSWORD("mysql");

At this point, we are ready to start testing MariaDB with HammerDB, so let's create 2 scripts: a build script and a run script. The build script sets the socket name we have used and opts to create 800 warehouses with 64 Virtual Users in a partitioned schema.

dbset db maria
dbset bm TPC-C
diset connection maria_socket /tmp/mariadb10234.sock
diset tpcc maria_count_ware 800
diset tpcc maria_num_vu 64
diset tpcc maria_partition true
buildschema
waittocomplete
quit

Save the script as mysqlbuild.tcl and run as follows:

./hammerdbcli auto mysqlbuild.tcl

Wait for the schema build to complete. In the example below it took just over 14 minutes to create 800 warehouses.

Hammerdb Log @ Wed Jul 07 07:33:23 PDT 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
Vuser 1:Monitor Thread
Vuser 1:CREATING TPCC SCHEMA
Vuser 1:CREATING DATABASE tpcc
Vuser 1:CREATING TPCC TABLES
Vuser 1:Loading Item
Vuser 2:Worker Thread
Vuser 2:Waiting for Monitor Thread...
Vuser 2:Loading 13 Warehouses start:1 end:13
Vuser 2:Start:Wed Jul 07 07:33:24 PDT 2021
Vuser 2:Loading Warehouse
Vuser 2:Loading Stock Wid=1
...
Vuser 30:Orders Done
Vuser 30:Loading Orders for D=10 W=377
Vuser 30:...1000
Vuser 30:...2000
Vuser 30:...3000
Vuser 30:Orders Done
Vuser 30:End:Wed Jul 14 07:17:36 PDT 2021
Vuser 1:Workers: 0 Active 64 Done
Vuser 1:CREATING TPCC STORED PROCEDURES
Vuser 1:GATHERING SCHEMA STATISTICS
Vuser 1:TPCC SCHEMA COMPLETE

Next we are ready to run the test. In the example below, we want to collect a performance profile, so we will run tests without interruption from 1 to 100 Virtual Users, creating a unique log file for each test.

puts "MariaDB 10.2.34 Test Started"
dbset db maria
dbset bm TPC-C
diset connection maria_socket /tmp/mariadb10234.sock
diset tpcc maria_driver timed
diset tpcc maria_rampup 2
diset tpcc maria_duration 5
vuset logtotemp 1
vuset unique 1
loadscript
foreach z {1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100} {
    puts "$z VU test"
    vuset vu $z
    vucreate
    vurun
    runtimer 480
    vudestroy
}
puts "MariaDB 10.2.34 Test Complete"

The script is run as follows:

./hammerdbcli auto mysqlrun.tcl

and can then be left without intervention to complete the test. On completion, the /tmp directory contains a number of output files with the NOPM and TPM results of each test.

Hammerdb Log @ Mon Jul 05 04:36:59 PDT 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 4:Processing 10000000 transactions with output suppressed...
Vuser 65:Processing 10000000 transactions with output suppressed...
Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 5 in minutes
Vuser 1:1 ...,
Vuser 1:2 ...,
Vuser 1:3 ...,
Vuser 1:4 ...,
Vuser 1:5 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:64 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 669086 NOPM from 2026420 MariaDB TPM

Add these results to your favourite spreadsheet and you have your first performance profile of MariaDB on your system.
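
If you prefer to automate that step, the unique log files can be parsed for the result lines. The following is a minimal sketch in Tcl: it assumes the unique log files land in /tmp with a hammerdb prefix (the exact filenames will vary on your system) and extracts the NOPM and TPM values from the TEST RESULT lines shown above into CSV form.

# Sketch: scan HammerDB log files in /tmp and print logfile,NOPM,TPM
foreach logfile [ glob -nocomplain /tmp/hammerdb*.log ] {
    set fd [ open $logfile r ]
    set data [ read $fd ]
    close $fd
    # Matches lines such as "System achieved 669086 NOPM from 2026420 MariaDB TPM"
    foreach {match nopm tpm} [ regexp -all -inline {System achieved (\d+) NOPM from (\d+)} $data ] {
        puts "$logfile,$nopm,$tpm"
    }
}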

Once you have explored the TPROC-C workload you can also use HammerDB to run TPROC-H for analytic workloads and test solutions such as the MariaDB ColumnStore https://mariadb.com/kb/en/mariadb-columnstore/.

Also don’t forget that HammerDB is fully open source, so if you wish to help make HammerDB even better for testing MariaDB you can contribute both ideas and code at https://github.com/TPC-Council/HammerDB.


HammerDB at Percona Live ONLINE May 13th 2021

Percona Live ONLINE, THE Open Source Database Conference, is taking place May 12-13, 2021. This year Steve Shaw will be presenting HammerDB: A Better Way to Benchmark Your Open Source Database on Thursday, May 13th 2021 at 13.30-14.30 EDT.

Update: this presentation is now available to view on demand on YouTube, with slides also available for download from Percona.

Percona Live is a community-focused conference with an emphasis on promoting and increasing the adoption of open source database software. It provides a forum for open source database users and experts to ask questions, and share knowledge and experience.

Online and free to attend in 2021, register to hear industry experts discuss their experiences, deep-dive into software features, and offer insight and advice.

You can visit the event website for more information and to view the full conference schedule.

HammerDB v4.1 New Features Pt3: Step Workloads


HammerDB Variable or Step Workloads are an advanced testing feature that enables you to automatically vary the load on the database over a period of time. When taking this approach you would not focus on the test result but instead monitor the database's ability to cope with the variation in demand and transaction response times.

Step Workloads

To implement Step Workloads, HammerDB v4.1 introduces a CLI-only command called steprun combined with a new XML configuration file called steps.xml. The steprun command reads the XML configuration file and creates primary and replica instances of HammerDB per step, with the replica instances starting at pre-defined time intervals and automatically connecting back to the primary instance of HammerDB.

Primary and Replicas

When defining the workload it is best to think of the configuration as a pyramid with the primary at the base and the replicas sitting above it. Each replica must finish at the same time as or earlier than the primary. The primary running time continues to be defined by the standard settings. For example, when configuring an Oracle workload the following commands set the rampup and duration running times respectively. Therefore, in this case the workload would run for 12 minutes, with 2 minutes of rampup and 10 minutes of duration.

diset tpcc rampup 2 
diset tpcc duration 10

The replicas are defined in the steps.xml file in the config directory, which determines how many Virtual Users each replica will configure, how soon after the previous instance has started each replica should start, and how long it will run for.

<steps>
<replica1>
<start_after_prev>2</start_after_prev>
<duration>8</duration>
<virtual_users>2</virtual_users>
</replica1>
<replica2>
<start_after_prev>2</start_after_prev>
<duration>6</duration>
<virtual_users>2</virtual_users>
</replica2>
<replica3>
<start_after_prev>2</start_after_prev>
<duration>4</duration>
<virtual_users>2</virtual_users>
</replica3>
<replica4>
<start_after_prev>2</start_after_prev>
<duration>2</duration>
<virtual_users>2</virtual_users>
</replica4>
</steps>
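
The arithmetic behind the pyramid can be checked by hand: each replica starts at the primary rampup plus the accumulated start_after_prev delays, and must finish no later than the primary's rampup plus duration. A small illustrative Tcl sketch using the values above (not part of HammerDB itself):

set rampup 2
set duration 10
set primary_end [ expr {$rampup + $duration} ]
set start_offset 0
# start_after_prev and duration pairs for replica1 to replica4 from steps.xml
foreach {delay dur} {2 8 2 6 2 4 2 2} {
    incr start_offset $delay
    set finish [ expr {$rampup + $start_offset + $dur} ]
    # each replica here finishes at minute 12, the same time as the primary
    puts "replica finishes at minute $finish, primary ends at minute $primary_end"
}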

If the configuration is incorrect HammerDB will report the error and fail to start the Step Workload.  In the following example we have set the default 2 minutes of rampup and 5 minutes of test.

diset tpcc rampup 2 
diset tpcc duration 5

In this case the workload errors because the replica running times exceed that of the primary.

Value 2 for tpcc:rampup is the same as existing value 2, no change made
Value 5 for tpcc:duration is the same as existing value 5, no change made
primary starts immediately, runs rampup for 2 minutes then runs test for 5 minutes with 2 Active VU
replica1 starts 2 minutes after rampup completes and runs test for 8 minutes with 2 Active VU
Error: replica1 is set to complete after 12 minutes and is longer than the Primary running time of 7 minutes
replica2 starts 2 minutes after previous replica starts and runs test for 6 minutes with 2 Active VU
Error: replica2 is set to complete after 12 minutes and is longer than the Primary running time of 7 minutes
replica3 starts 2 minutes after previous replica starts and runs test for 4 minutes with 2 Active VU
Error: replica3 is set to complete after 12 minutes and is longer than the Primary running time of 7 minutes
replica4 starts 2 minutes after previous replica starts and runs test for 2 minutes with 2 Active VU
Error: replica4 is set to complete after 12 minutes and is longer than the Primary running time of 7 minutes
Error: Step workload primary running time must exceed the running time of all replicas

Instead, the following script correctly defines a rampup of 2 minutes and a duration of 10 minutes so that the primary provides an adequate base for the pyramid, starting before and ending at the same time as or after all of the replicas. The workload is started with the steprun command as the last command. No commands should follow steprun, as the command will intentionally exit all replicas and the primary when the workload is complete.

dbset db ora
dbset bm TPC-C
diset connection system_user system
diset connection system_password oracle
diset connection instance RAZPDB1
diset tpcc tpcc_user tpcc
diset tpcc tpcc_pass tpcc
diset tpcc total_iterations 10000000
diset tpcc ora_driver timed
diset tpcc rampup 2
diset tpcc duration 10
vuset logtotemp 1
vuset vu 2
steprun

Running this script, it can be seen that without further intervention the primary and replicas are created, with the replicas automatically connecting back to the primary; the replicas are then started at the time intervals given in the steps.xml file.

The primary sets the rampup in the replicas to zero (as the rampup has already completed in the primary) and then sends the individual duration times to the replicas. Time profiling is also disabled in the replicas. When complete, the replicas will call exit from the primary, and when the final replica has completed the primary will also exit.

Note that it is expected for Virtual User 1 of the replicas to end immediately with the following message:

Vuser 1:FINISHED SUCCESS

This is because Virtual User 1 is the monitor Virtual User, which does not run in the replicas, and it therefore ends immediately. When the replicas are started, the message sending “run_virtual” is recorded:

Sending "run_virtual" ....

The following output shows the previously defined step workload running against an Oracle database.

hammerdb>source runstepora.tcl
Database set to Oracle
Benchmark set to TPC-C for Oracle
Value system for connection:system_user is the same as existing value system, no change made
Changed connection:system_password from manager to oracle for Oracle
Changed connection:instance from oracle to RAZPDB1 for Oracle
Value tpcc for tpcc:tpcc_user is the same as existing value tpcc, no change made
Value tpcc for tpcc:tpcc_pass is the same as existing value tpcc, no change made
Changed tpcc:total_iterations from 1000000 to 10000000 for Oracle
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:ora_driver from test to timed for Oracle
Value 2 for tpcc:rampup is the same as existing value 2, no change made
Changed tpcc:duration from 5 to 10 for Oracle
primary starts immediately, runs rampup for 2 minutes then runs test for 10 minutes with 2 Active VU
replica1 starts 2 minutes after rampup completes and runs test for 8 minutes with 2 Active VU
replica2 starts 2 minutes after previous replica starts and runs test for 6 minutes with 2 Active VU
replica3 starts 2 minutes after previous replica starts and runs test for 4 minutes with 2 Active VU
replica4 starts 2 minutes after previous replica starts and runs test for 2 minutes with 2 Active VU
Switch from Local
to Primary mode?
Enter yes or no: replied yes
Setting Primary Mode at id : 20166, hostname : razorbill.home
Primary Mode active at id : 20166, hostname : razorbill.home
Starting 1 replica HammerDB instance
Starting 2 replica HammerDB instance
HammerDB CLI v4.1
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
HammerDB CLI v4.1
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
Starting 3 replica HammerDB instance
Starting 4 replica HammerDB instance
Doing wait to connnect ....
Primary waiting for all replicas to connect .... 0 out of 4 are connected
HammerDB CLI v4.1
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
HammerDB CLI v4.1
Copyright (C) 2003-2021 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
The xml is well-formed, applying configuration
The xml is well-formed, applying configuration
Switch from Local
to Replica mode?
Enter yes or no: replied yes
Switch from Local
to Replica mode?
Enter yes or no: replied yes
Setting Replica Mode at id : 20182, hostname : razorbill.home
Replica connecting to localhost 20166 : Connection succeeded
Received a new replica connection from host ::1
Setting Replica Mode at id : 20181, hostname : razorbill.home
Replica connecting to localhost 20166 : Connection succeeded
New replica joined : {20182 razorbill.home}
The xml is well-formed, applying configuration
Received a new replica connection from host ::1
New replica joined : {20182 razorbill.home} {20181 razorbill.home}
Switch from Local
to Replica mode?
Primary call back successful
Switched to Replica mode via callback
Enter yes or no: replied yes
Primary call back successful
Switched to Replica mode via callback
Setting Replica Mode at id : 20183, hostname : razorbill.home
Received a new replica connection from host ::1
Replica connecting to localhost 20166 : Connection succeeded
New replica joined : {20182 razorbill.home} {20181 razorbill.home} {20183 razorbill.home}
Primary call back successful
Switched to Replica mode via callback
Switch from Local
to Replica mode?
Enter yes or no: replied yes
Setting Replica Mode at id : 20184, hostname : razorbill.home
Received a new replica connection from host ::1
Replica connecting to localhost 20166 : Connection succeeded
New replica joined : {20182 razorbill.home} {20181 razorbill.home} {20183 razorbill.home} {20184 razorbill.home}
Primary call back successful
Switched to Replica mode via callback
Primary waiting for all replicas to connect .... {20182 razorbill.home} {20181 razorbill.home} {20183 razorbill.home} {20184 razorbill.home} out of 4 are connected
Primary Received all replica connections {20182 razorbill.home} {20181 razorbill.home} {20183 razorbill.home} {20184 razorbill.home}
Database set to Oracle
Database set to Oracle
Database set to Oracle
Setting primary to run 2 virtual users for 10 duration
Database set to Oracle
Database set to Oracle
Value 10 for tpcc:duration is the same as existing value 10, no change made
Sending dbset all to 20182 razorbill.home
Setting replica1 to start after 2 duration 8 VU count 2, Replica instance is 20182 razorbill.home
Sending "diset tpcc ora_timeprofile false" to 20182 razorbill.home
Value false for tpcc:ora_timeprofile is the same as existing value false, no change made
Sending "diset tpcc rampup 0" to 20182 razorbill.home
Changed tpcc:rampup from 2 to 0 for Oracle
Sending "diset tpcc duration 8" to 20182 razorbill.home
Changed tpcc:duration from 10 to 8 for Oracle
Sending "vuset vu 2" to 20182 razorbill.home
Sending dbset all to 20181 razorbill.home
Setting replica2 to start after 2 duration 6 VU count 2, Replica instance is 20181 razorbill.home
Sending "diset tpcc ora_timeprofile false" to 20181 razorbill.home
Value false for tpcc:ora_timeprofile is the same as existing value false, no change made
Sending "diset tpcc rampup 0" to 20181 razorbill.home
Changed tpcc:rampup from 2 to 0 for Oracle
Sending "diset tpcc duration 6" to 20181 razorbill.home
Changed tpcc:duration from 10 to 6 for Oracle
Sending "vuset vu 2" to 20181 razorbill.home
Sending dbset all to 20183 razorbill.home
Setting replica3 to start after 2 duration 4 VU count 2, Replica instance is 20183 razorbill.home
Sending "diset tpcc ora_timeprofile false" to 20183 razorbill.home
Value false for tpcc:ora_timeprofile is the same as existing value false, no change made
Sending "diset tpcc rampup 0" to 20183 razorbill.home
Changed tpcc:rampup from 2 to 0 for Oracle
Sending "diset tpcc duration 4" to 20183 razorbill.home
Changed tpcc:duration from 10 to 4 for Oracle
Sending "vuset vu 2" to 20183 razorbill.home
Sending dbset all to 20184 razorbill.home
Setting replica4 to start after 2 duration 2 VU count 2, Replica instance is 20184 razorbill.home
Sending "diset tpcc ora_timeprofile false" to 20184 razorbill.home
Value false for tpcc:ora_timeprofile is the same as existing value false, no change made
Sending "diset tpcc rampup 0" to 20184 razorbill.home
Changed tpcc:rampup from 2 to 0 for Oracle
Sending "diset tpcc duration 2" to 20184 razorbill.home
Changed tpcc:duration from 10 to 2 for Oracle
Sending "vuset vu 2" to 20184 razorbill.home
Script loaded, Type "print script" to view
Script loaded, Type "print script" to view
Script loaded, Type "print script" to view
Script loaded, Type "print script" to view
Script loaded, Type "print script" to view
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Vuser 3 created - WAIT IDLE
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Vuser 1 created MONITOR - WAIT IDLE
Vuser 3 created - WAIT IDLE
Vuser 2 created - WAIT IDLE
3 Virtual Users Created with Monitor VU
Vuser 3 created - WAIT IDLE
3 Virtual Users Created with Monitor VU
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Vuser 3 created - WAIT IDLE
3 Virtual Users Created with Monitor VU
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Vuser 3 created - WAIT IDLE
3 Virtual Users Created with Monitor VU
Logging activated
to /tmp/hammerdb.log
3 Virtual Users Created with Monitor VU
Starting Primary VUs
Vuser 1:RUNNING
Vuser 1:Beginning rampup time of 2 minutes
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 10000000 transactions with output suppressed...
Delaying Start of Replicas to rampup 2 replica1 2 replica2 2 replica3 2 replica4 2
Delaying replica1 for 4 minutes.
Delaying replica2 for 6 minutes.
Delaying replica3 for 8 minutes.
Delaying replica4 for 10 minutes.
Primary entering loop waiting for vucomplete
Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup 2 minutes complete ...
Vuser 1:Rampup complete, Taking start AWR snapshot.
Vuser 1:Start Snapshot 18 taken at 10 MAY 2021 09:07 of instance RAZCDB1 (1) of database RAZCDB1 (171153594)
Vuser 1:Timing test period of 10 in minutes
Vuser 1:1 ...,
Sending "run_virtual" to 20182 razorbill.home
Vuser 1:RUNNING
Vuser 1:Operating in Replica Mode, No Snapshots taken...
Vuser 1:FINISHED SUCCESS
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 1:2 ...,
Vuser 1:3 ...,
Sending "run_virtual" to 20181 razorbill.home
Vuser 1:RUNNING
Vuser 1:Operating in Replica Mode, No Snapshots taken...
Vuser 1:FINISHED SUCCESS
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 1:4 ...,
Vuser 1:5 ...,
Sending "run_virtual" to 20183 razorbill.home
Vuser 1:RUNNING
Vuser 1:Operating in Replica Mode, No Snapshots taken...
Vuser 1:FINISHED SUCCESS
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 1:6 ...,
Vuser 1:7 ...,
Sending "run_virtual" to 20184 razorbill.home
Vuser 1:RUNNING
Vuser 1:Operating in Replica Mode, No Snapshots taken...
Vuser 1:FINISHED SUCCESS
Vuser 2:RUNNING
Vuser 2:Processing 10000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 10000000 transactions with output suppressed...
Vuser 1:8 ...,
Vuser 1:9 ...,
Vuser 1:10 ...,
Vuser 1:Test complete, Taking end AWR snapshot.
Vuser 1:End Snapshot 19 taken at 10 MAY 2021 09:17 of instance RAZCDB1 (1) of database RAZCDB1 (171153594)
Vuser 1:Test complete: view report from SNAPID 18 to 19
Vuser 1:2 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 13607 NOPM from 28559 Oracle TPM
Vuser 1:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
Vuser 3:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
Vuser 3:FINISHED SUCCESS
Vuser 3:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
Vuser 2:FINISHED SUCCESS
Replica workload complete and calling exit from primary
Lost connection to : 20182 razorbill.home because target application died or connection lost
Vuser 3:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
Vuser 3:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
Vuser 2:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
Replica workload complete and calling exit from primary
Lost connection to : 20181 razorbill.home because target application died or connection lost
Replica workload complete and calling exit from primary
Lost connection to : 20183 razorbill.home because target application died or connection lost
Replica workload complete and calling exit from primary
Lost connection to : 20184 razorbill.home because target application died or connection lost
Primary complete
deleting port_file /tmp/hdbcallback.tcl
Step workload complete

Monitoring the workload enables you to see the variation and the impact of starting additional instances against the same database over time.

Oracle Transaction Counter

Step workloads enable you to configure complex Virtual User configurations to see how your database responds to changes in load over time.

HammerDB v4.1 New Features Pt2: CLI Transaction Counter and Count Logging

Prior to v4.1, HammerDB has featured a graphical transaction counter that enables you to see the transaction rate taking place on the database during the test. The transaction counter is designed not to be intrusive on the schema being tested. It also enables you to verify that the transaction rate reported at the end of the test is consistent throughout, without peaks and troughs in the graph that would indicate bottlenecks due to configuration issues.

GUI Transaction Counter

From v4.1 HammerDB also features a transaction counter in the CLI. Like the GUI transaction counter, the CLI transaction counter runs in a separate thread so as not to be intrusive to the main workload. To configure the CLI transaction counter use the tcset command with the same options available in the GUI. The transaction counter can then be started with tcstart, stopped with tcstop and queried with tcstatus.

hammerdb>help tcset
tcset - Usage: tcset [refreshrate|logtotemp|unique|timestamps]
Configure the transaction counter options. Equivalent to the Transaction Counter Options window in the graphical interface.

hammerdb>help tcstart
tcstart - Usage: tcstart
Starts the Transaction Counter.

hammerdb>help tcstatus
status - Usage: tcstatus
Checks the status of the Transaction Counter.

hammerdb>help tcstop
tcstop - Usage: tcstop
Stops the Transaction Counter.

An example test script is shown including the transaction counter commands.

dbset db mssqls
diset connection mssqls_server {(local)\SQLDEVELOP}
diset tpcc mssqls_driver timed
diset tpcc mssqls_rampup 1
diset tpcc mssqls_duration 2
vuset logtotemp 1
tcset logtotemp 1
tcset timestamps 1
loadscript
vuset vu 2
vucreate
tcstart
tcstatus
vurun
runtimer 200
vudestroy
tcstop

When we run the script, the transaction counter is now activated to run throughout the test.

hammerdb>source sqlrun.tcl
Database set to MSSQLServer
Changed connection:mssqls_server from (local) to (local)\SQLDEVELOP for MSSQLServer
Clearing Script, reload script to activate new setting
Script cleared
Changed tpcc:mssqls_driver from test to timed for MSSQLServer
Changed tpcc:mssqls_rampup from 2 to 1 for MSSQLServer
Changed tpcc:mssqls_duration from 5 to 2 for MSSQLServer
Transaction Counter log to temp set to 1
Transaction Counter timestamps set to 1
Script loaded, Type "print script" to view
Vuser 1 created MONITOR - WAIT IDLE
Vuser 2 created - WAIT IDLE
Vuser 3 created - WAIT IDLE
Vuser 4 created - WAIT IDLE
Vuser 5 created - WAIT IDLE
Logging activated
to C:/Users/Steve/AppData/Local/Temp/hammerdb.log
5 Virtual Users Created with Monitor VU
Transaction Counter logging activated to C:/Users/Steve/AppData/Local/Temp/hdbtcount.log
Transaction Counter Started
Transaction Counter thread running with threadid:tid0000000000002F88
Vuser 1:RUNNING
0 MSSQLServer tpm
Vuser 1:Beginning rampup time of 1 minutes
Vuser 2:RUNNING
Vuser 2:Processing 1000000 transactions with output suppressed...
Vuser 3:RUNNING
Vuser 3:Processing 1000000 transactions with output suppressed...
Vuser 4:RUNNING
Vuser 4:Processing 1000000 transactions with output suppressed...
Vuser 5:RUNNING
Vuser 5:Processing 1000000 transactions with output suppressed...
238182 MSSQLServer tpm
261366 MSSQLServer tpm
251310 MSSQLServer tpm
255102 MSSQLServer tpm
252180 MSSQLServer tpm
Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 2 in minutes
228408 MSSQLServer tpm
Timer: 1 minutes elapsed
267666 MSSQLServer tpm
248466 MSSQLServer tpm
283332 MSSQLServer tpm
270900 MSSQLServer tpm
242988 MSSQLServer tpm
Vuser 1:1 ...,
246384 MSSQLServer tpm
Timer: 2 minutes elapsed
255042 MSSQLServer tpm
275712 MSSQLServer tpm
260112 MSSQLServer tpm
258246 MSSQLServer tpm
243798 MSSQLServer tpm
Vuser 1:2 ...,
Vuser 1:Test complete, Taking end Transaction Count.
Vuser 1:4 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 112009 NOPM from 257938 SQL Server TPM
Vuser 1:FINISHED SUCCESS
Vuser 3:FINISHED SUCCESS
Vuser 2:FINISHED SUCCESS
Vuser 5:FINISHED SUCCESS
Vuser 4:FINISHED SUCCESS
ALL VIRTUAL USERS COMPLETE
runtimer returned after 178 seconds
235392 MSSQLServer tpm
vudestroy success
Transaction Counter thread running with threadid:tid0000000000002F88
Stopping Transaction Counter
Closed Transaction Counter Log

It should also be noted that we now have the option to record the output of the transaction counter to a new log file called hdbtcount.log, with the further options of giving this file a unique log name and adding timestamps to each line of output. The following shows the contents of the logfile from the previous test.

Hammerdb Transaction Counter Log @ Fri May 07 15:31:33 BST 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
0 MSSQLServer tpm @ Fri May 07 15:31:33 BST 2021
238182 MSSQLServer tpm @ Fri May 07 15:31:43 BST 2021
261366 MSSQLServer tpm @ Fri May 07 15:31:53 BST 2021
251310 MSSQLServer tpm @ Fri May 07 15:32:03 BST 2021
255102 MSSQLServer tpm @ Fri May 07 15:32:13 BST 2021
252180 MSSQLServer tpm @ Fri May 07 15:32:23 BST 2021
228408 MSSQLServer tpm @ Fri May 07 15:32:33 BST 2021
267666 MSSQLServer tpm @ Fri May 07 15:32:43 BST 2021
248466 MSSQLServer tpm @ Fri May 07 15:32:53 BST 2021
283332 MSSQLServer tpm @ Fri May 07 15:33:04 BST 2021
270900 MSSQLServer tpm @ Fri May 07 15:33:14 BST 2021
242988 MSSQLServer tpm @ Fri May 07 15:33:24 BST 2021
246384 MSSQLServer tpm @ Fri May 07 15:33:34 BST 2021
255042 MSSQLServer tpm @ Fri May 07 15:33:44 BST 2021
275712 MSSQLServer tpm @ Fri May 07 15:33:54 BST 2021
260112 MSSQLServer tpm @ Fri May 07 15:34:04 BST 2021
258246 MSSQLServer tpm @ Fri May 07 15:34:14 BST 2021
243798 MSSQLServer tpm @ Fri May 07 15:34:24 BST 2021
235392 MSSQLServer tpm @ Fri May 07 15:34:34 BST 2021

This option to record the transaction counter output is not only available in the CLI; it has also been added to the GUI with a checkbox option of Log Output to Temp, with the same additional options as the CLI to use a unique log name and add timestamps.

GUI Log Transaction Counter

Adding the Transaction Counter to the CLI and Transaction Counter Logging to both GUI and CLI enables you to record the transaction rate during the test to ensure that the rates are consistent throughout the test without significant variance around your test result.

HammerDB v4.1 New Features Pt1: Extended Time Profiling

Up to HammerDB v4.0 you had the ability to do time profiling for the first Active Virtual User only. This earlier post showed you how to graph the transaction response times using the etprof package. v4.1 enhances time profiling by introducing a new package called xtprof that enables you to capture timing data for all Active Virtual Users simultaneously. This post will get you started with time profiling in v4.1.

Time profiling of a workload is the process of capturing transaction response times. Response times give us multiple insights beyond the transaction rates reported as NOPM and TPM. NOPM shows us the new orders per minute, so it counts new order transactions only. TPM shows us the user commits and user rollbacks across the whole database. With both values we are recording the average transaction rate across a minute, and the transaction counter shows us how even this rate is across the measured time; time profiling, however, enables a much finer-grained view of the workload of each Virtual User.

Within the TPROC-C workload there is a transaction mix of neword 45%, payment 43%, delivery 4%, order status 4% and stock level 4%. The transaction to run is selected according to this mix, and NOPM records only the 45% of transactions that are new orders. Note, for example, that although New Order is called 45% of the time, the actual proportion of time attributed to the transaction could be longer or shorter.
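
The principle of selecting a transaction against a percentage mix can be illustrated with a short Tcl sketch. This is a simplification for illustration only, not the actual HammerDB driver code:

# Illustrative only: pick a transaction according to the TPROC-C mix
proc choose_transaction {} {
    # random integer between 0 and 99
    set r [ expr {int(rand() * 100)} ]
    if { $r < 45 } {
        return neword
    } elseif { $r < 88 } {
        return payment
    } elseif { $r < 92 } {
        return delivery
    } elseif { $r < 96 } {
        return ostat
    } else {
        return slev
    }
}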

So, using the test workload as an example, a single Virtual User will run a sequence of transactions such as the following:

Vuser 1:order status
Vuser 1:payment
Vuser 1:payment
Vuser 1:new order
Vuser 1:new order
Vuser 1:new order
Vuser 1:stock level
Vuser 1:new order
Vuser 1:payment
Vuser 1:payment
Vuser 1:payment
Vuser 1:new order
Vuser 1:payment
Vuser 1:payment
Vuser 1:order status
Vuser 1:new order
Vuser 1:new order
Vuser 1:payment
Vuser 1:new order
Vuser 1:delivery

Looking at this single Virtual User, if for example the Stock Level or Order Status transactions take longer, then it should be clear that it is going to run fewer New Orders in a minute, because while it is running other transactions it is not recording any New Order transactions.

However, in most cases it is not just one Virtual User; instead it is tens, hundreds or thousands running at the same time, with the database managing the concurrency between them. Stock Level, for example, queries the district, stock and order_line tables, so while other Virtual Users are inserting and updating these tables with New Order and Payment or deleting with Delivery, Stock Level is running a longer query that relies on locks and multiversioning to ensure that the data is consistent. A very basic database approach for Stock Level would be to lock the tables to ensure consistency; however, doing this would block any other transactions from running until Stock Level was complete, resulting in low performance overall. This is the very design of the TPC-C specification: the transactions are intended to do inserts, updates, deletes and queries on the data at the same time to test how well the database engine can manage the concurrency. Time profiling can give you a deeper insight into how well this is managed.

To begin using extended profiling, view the settings in the generic.xml file in the config directory. From v4.1 the default profiler is the newer xtprof; however, this can be changed back to etprof to use the earlier single Virtual User profiler. As xtprof profiles all Virtual Users, output is now written to a dedicated log file, and there is therefore the option of whether a unique log name is required: 0 for a regular filename and 1 for a unique id.

<timeprofile>
<profiler>xtprof</profiler>
<xt_unique_log_name>0</xt_unique_log_name>
</timeprofile>

With the profiler setting set to xtprof HammerDB will use the new profiler automatically enabling it for all Virtual Users when selected.

To enable time profiling when configuring the driver script select the time profiling option.

Time Profile Option

In the CLI set the timeprofile option to true.

hammerdb>diset tpcc mssqls_timeprofile true
Changed tpcc:mssqls_timeprofile from false to true for MSSQLServer

When running a workload you will now see the message “Initializing xtprof time profiler” in all Virtual Users. Note that the package also initializes in the Monitor Virtual User, as all data is gathered, processed and reported by the Monitor.

Time Profiling Started

When the workload is complete after reporting the NOPM/TPM the Monitor Virtual User will report messages on Gathering timing data/Calculating timings and Writing timing data to an external logfile to mark the separate stages of gathering and processing the data.

In the logfile there is a report of the timings for all of the Active Virtual Users followed by a summary of the cumulative data.

In the report the values have the following meaning with all timings in milliseconds.

Value Description
CALLS Number of times that the stored procedure was called.
MIN Minimum response time in milliseconds.
AVG Average response time in milliseconds.
MAX Maximum response time in milliseconds.
TOTAL Total time spent in that stored procedure during the measuring interval. The total time will include both rampup and timed test times.
P99 99th percentile in milliseconds.
P95 95th percentile in milliseconds.
P50 50th percentile in milliseconds.
SD Standard Deviation showing variance in captured values.
RATIO Ratio showing percentage time taken for that stored procedure. The total of these values may be less than 100% as only timings for the stored procedures are shown.
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
MSSQLServer Hammerdb Time Profile Report @ Thu May 06 13:37:27 BST 2021
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> VIRTUAL USER 2 : ELAPSED TIME : 420128ms
>>>>> PROC: NEWORD
CALLS: 173439 MIN: 0.481ms AVG: 0.938ms MAX: 1247.509ms TOTAL: 162638.013ms
P99: 2.147ms P95: 1.358ms P50: 0.842ms SD: 40379.430 RATIO: 38.712%
>>>>> PROC: PAYMENT
CALLS: 174701 MIN: 0.370ms AVG: 0.774ms MAX: 1191.065ms TOTAL: 135295.676ms
P99: 2.108ms P95: 1.261ms P50: 0.648ms SD: 39317.110 RATIO: 32.203%
>>>>> PROC: DELIVERY
CALLS: 17433 MIN: 1.483ms AVG: 3.351ms MAX: 552.191ms TOTAL: 58422.210ms
P99: 7.287ms P95: 5.305ms P50: 3.121ms SD: 47931.042 RATIO: 13.906%
>>>>> PROC: SLEV
CALLS: 17686 MIN: 0.637ms AVG: 2.044ms MAX: 640.690ms TOTAL: 36157.299ms
P99: 2.260ms P95: 1.740ms P50: 1.214ms SD: 194033.832 RATIO: 8.606%
>>>>> PROC: OSTAT
CALLS: 17189 MIN: 0.236ms AVG: 0.907ms MAX: 425.793ms TOTAL: 15597.346ms
P99: 1.776ms P95: 1.346ms P50: 0.716ms SD: 62930.031 RATIO: 3.713%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> VIRTUAL USER 3 : ELAPSED TIME : 419621ms
>>>>> PROC: NEWORD
CALLS: 175953 MIN: 0.492ms AVG: 0.940ms MAX: 1031.020ms TOTAL: 165474.149ms
P99: 2.172ms P95: 1.373ms P50: 0.842ms SD: 37665.637 RATIO: 39.434%
>>>>> PROC: PAYMENT
CALLS: 176330 MIN: 0.360ms AVG: 0.770ms MAX: 942.985ms TOTAL: 135763.464ms
P99: 2.131ms P95: 1.269ms P50: 0.648ms SD: 28595.491 RATIO: 32.354%
>>>>> PROC: DELIVERY
CALLS: 17424 MIN: 1.449ms AVG: 3.351ms MAX: 1051.259ms TOTAL: 58383.716ms
P99: 7.215ms P95: 5.295ms P50: 3.110ms SD: 81873.726 RATIO: 13.913%
>>>>> PROC: SLEV
CALLS: 17662 MIN: 0.670ms AVG: 1.759ms MAX: 1719.874ms TOTAL: 31063.587ms
P99: 2.240ms P95: 1.755ms P50: 1.226ms SD: 181722.634 RATIO: 7.403%
>>>>> PROC: OSTAT
CALLS: 17431 MIN: 0.240ms AVG: 0.984ms MAX: 366.508ms TOTAL: 17150.844ms
P99: 1.771ms P95: 1.351ms P50: 0.716ms SD: 78557.793 RATIO: 4.087%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> VIRTUAL USER 4 : ELAPSED TIME : 419456ms
>>>>> PROC: NEWORD
CALLS: 187196 MIN: 0.481ms AVG: 0.923ms MAX: 871.146ms TOTAL: 172755.826ms
P99: 2.133ms P95: 1.274ms P50: 0.821ms SD: 34184.264 RATIO: 41.186%
>>>>> PROC: PAYMENT
CALLS: 187555 MIN: 0.362ms AVG: 0.729ms MAX: 1011.880ms TOTAL: 136729.814ms
P99: 1.894ms P95: 1.177ms P50: 0.629ms SD: 26574.336 RATIO: 32.597%
>>>>> PROC: DELIVERY
CALLS: 18767 MIN: 1.430ms AVG: 2.560ms MAX: 141.821ms TOTAL: 48042.982ms
P99: 4.802ms P95: 3.548ms P50: 2.415ms SD: 19616.865 RATIO: 11.454%
>>>>> PROC: SLEV
CALLS: 18839 MIN: 0.648ms AVG: 1.700ms MAX: 630.934ms TOTAL: 32031.203ms
P99: 2.085ms P95: 1.651ms P50: 1.190ms SD: 144279.102 RATIO: 7.636%
>>>>> PROC: OSTAT
CALLS: 18747 MIN: 0.225ms AVG: 0.951ms MAX: 359.600ms TOTAL: 17822.935ms
P99: 1.781ms P95: 1.331ms P50: 0.705ms SD: 72273.598 RATIO: 4.249%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> VIRTUAL USER 5 : ELAPSED TIME : 419300ms
>>>>> PROC: NEWORD
CALLS: 186330 MIN: 0.478ms AVG: 0.917ms MAX: 1004.861ms TOTAL: 170810.498ms
P99: 2.079ms P95: 1.281ms P50: 0.826ms SD: 33795.633 RATIO: 40.737%
>>>>> PROC: PAYMENT
CALLS: 185915 MIN: 0.359ms AVG: 0.737ms MAX: 1051.469ms TOTAL: 136976.940ms
P99: 1.893ms P95: 1.186ms P50: 0.633ms SD: 38761.422 RATIO: 32.668%
>>>>> PROC: DELIVERY
CALLS: 18506 MIN: 1.436ms AVG: 2.584ms MAX: 184.669ms TOTAL: 47813.602ms
P99: 4.927ms P95: 3.574ms P50: 2.444ms SD: 22552.186 RATIO: 11.403%
>>>>> PROC: SLEV
CALLS: 18723 MIN: 0.642ms AVG: 1.759ms MAX: 709.370ms TOTAL: 32932.960ms
P99: 2.057ms P95: 1.654ms P50: 1.189ms SD: 150924.608 RATIO: 7.854%
>>>>> PROC: OSTAT
CALLS: 18558 MIN: 0.230ms AVG: 1.009ms MAX: 554.386ms TOTAL: 18718.672ms
P99: 1.763ms P95: 1.349ms P50: 0.711ms SD: 89937.214 RATIO: 4.464%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
>>>>> SUMMARY OF 4 ACTIVE VIRTUAL USERS : MEDIAN ELAPSED TIME : 419539ms
>>>>> PROC: NEWORD
CALLS: 722918 MIN: 0.478ms AVG: 0.929ms MAX: 1247.509ms TOTAL: 671678.272ms
P99: 2.134ms P95: 1.323ms P50: 0.832ms SD: 36516.643 RATIO: 40.025%
>>>>> PROC: PAYMENT
CALLS: 724501 MIN: 0.359ms AVG: 0.752ms MAX: 1191.065ms TOTAL: 544765.715ms
P99: 2.016ms P95: 1.222ms P50: 0.639ms SD: 33766.268 RATIO: 32.462%
>>>>> PROC: DELIVERY
CALLS: 72130 MIN: 1.430ms AVG: 2.948ms MAX: 1051.257ms TOTAL: 212662.428ms
P99: 6.728ms P95: 4.638ms P50: 2.661ms SD: 49195.620 RATIO: 12.672%
>>>>> PROC: SLEV
CALLS: 72910 MIN: 0.637ms AVG: 1.813ms MAX: 1719.871ms TOTAL: 132185.007ms
P99: 2.165ms P95: 1.700ms P50: 1.204ms SD: 168407.157 RATIO: 7.877%
>>>>> PROC: OSTAT
CALLS: 71925 MIN: 0.225ms AVG: 0.963ms MAX: 554.387ms TOTAL: 69289.775ms
P99: 1.775ms P95: 1.344ms P50: 0.712ms SD: 76749.333 RATIO: 4.129%
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Some key observations: note that time profiling begins immediately for each Virtual User and includes both rampup and timed durations. For this reason, by default the captured time for the first Virtual User will be longer than the second and so on, because there is a pause between each Virtual User starting. There is a proposal, Exclude ramp up duration for time profiling #233, and this may be an enhancement for a future version. Also note that in the summary the total elapsed times will be proportionally longer than the duration of the test, as timings are accumulated across all Virtual Users.

Typically the key metrics you will want to observe are P95 and P99, as these show the times within which 95% and 99% of transactions respectively completed. You would normally expect the total elapsed time to be in the order NEWORD, PAYMENT, DELIVERY, SLEV, OSTAT; however, this may vary for some databases when higher levels of locking are experienced. Finally, the SD value or standard deviation gives an indication of the variance between the recorded values: a higher variance indicates less consistent transaction processing times.
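
For readers who want to reproduce figures of this kind from raw response times, percentile and standard deviation values can be derived as in the following minimal sketch, which uses the nearest-rank percentile method and population standard deviation; this is illustrative only and not the xtprof source code.

# Sketch: derive percentile and standard deviation from a list of times in ms
proc percentile { times p } {
    set sorted [ lsort -real $times ]
    # nearest-rank method: index of the value at or above the p-th percentile
    set idx [ expr {int(ceil($p / 100.0 * [llength $sorted])) - 1} ]
    return [ lindex $sorted $idx ]
}
proc stddev { times } {
    set n [ llength $times ]
    set mean [ expr {double([ tcl::mathop::+ {*}$times ]) / $n} ]
    set ss 0.0
    foreach t $times { set ss [ expr {$ss + ($t - $mean) ** 2} ] }
    return [ expr {sqrt($ss / $n)} ]
}
# Example usage: puts [ percentile {0.8 1.2 0.9 2.1 0.7} 95 ]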

Finally, for advanced users comfortable with examining the HammerDB source code, the xtprof module contains the following comment as a reference point.

#At this point [dict get $monitortimings $vutr $sproc clickslist] will return all unsorted data points for vuser $vutr for stored proc $sproc
#To record all individual data points for a virtual user write the output of this command to a file

As shown in the comment, at this section of the code you can, if desired, use the commands given to print out all individual data points for all Virtual Users for advanced plotting and analysis of timing data. This functionality is also being considered for future releases beyond v4.1 and is a potential area for future code contribution.
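
As an illustration of what such an extension could look like, the following sketch writes the raw data points to a file. It assumes it is placed at the point in xtprof indicated by the comment above, where $monitortimings, $vutr and $sproc are already in scope; the output path is an arbitrary choice.

# Sketch only: dump raw per-VU, per-procedure data points for external analysis
set clickslist [ dict get $monitortimings $vutr $sproc clickslist ]
set fd [ open "/tmp/xtprof_vu${vutr}_${sproc}.raw" w ]
foreach click $clickslist {
    puts $fd $click
}
close $fd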