This is a very common question and, although covered in the documentation, some extra detail may help with sizing and configuration. For a quick answer, create a schema with 250-500 warehouses per server CPU socket; for more detail, size as follows.
HammerDB MySQL and MariaDB Best Practice for Performance and Scalability
As with PostgreSQL, a 2 socket system is optimal for MySQL OLTP performance. Also as with PostgreSQL, this limitation is at the database level (especially the storage engine) rather than the hardware level. InnoDB is the storage engine that will deliver the best OLTP throughput and should be chosen for this test.
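Combining the quick sizing rule of 250-500 warehouses per CPU socket with the 2 socket systems used throughout this guide gives a simple starting range. This is a minimal sketch; the helper name is illustrative only:

```python
# Rule-of-thumb schema sizing: 250-500 warehouses per CPU socket.
def warehouse_range(sockets):
    return (250 * sockets, 500 * sockets)

low, high = warehouse_range(2)   # a typical 2 socket test server
print(low, high)                 # 500 to 1000 warehouses
```

The 800 warehouse schema built later in this post falls inside this range.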
HammerDB difference from Sysbench
For anyone benchmarking MySQL with HammerDB it is important to understand the differences from sysbench workloads, as HammerDB targets a different usage model from sysbench. Historically MySQL has been positioned for supporting web-based applications, in contrast to the enterprise database workloads served by commercial databases such as Oracle, Db2 and SQL Server. For this reason sysbench presents a vastly more simplified workload than HammerDB.
MariaDB [sbtest]> select * from sbtest limit 10
-> ;
+----+---+---+----------------------------------------------------+
| id | k | c | pad |
+----+---+---+----------------------------------------------------+
| 1 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 2 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 3 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 4 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 5 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 6 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 7 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 8 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 9 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
| 10 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
+----+---+---+----------------------------------------------------+
10 rows in set (0.02 sec)
The workload then interacts with the database with simple SQL statements, for example with the read-only workload statements such as follows:
SELECT SUM(K) from sbtest where id between ? and ?
SELECT DISTINCT c from sbtest where id between ? and ? order by c
SELECT c from sbtest where id=?
HammerDB on the other hand is based on the TPC-C specification (but not identical to a full TPC-C workload) and more closely represents an enterprise type workload. There are 9 tables:
MariaDB [tpcc]> show tables;
+----------------+
| Tables_in_tpcc |
+----------------+
| customer |
| district |
| history |
| item |
| new_order |
| order_line |
| orders |
| stock |
| warehouse |
+----------------+
9 rows in set (0.00 sec)
MariaDB [tpcc]> select * from customer limit 1\G
*************************** 1. row ***************************
c_id: 1
c_d_id: 1
c_w_id: 1
c_first: 4fTYMkzYdy8EbABc
c_middle: OE
c_last: BARBARBAR
c_street_1: G30TzdyBM1xx
c_street_2: fRveiqUZDzz54pt9FK8
c_city: jVRLzrSBopMpcmN4WYO2
c_state: JK
c_zip: 603111111
c_phone: 4610643910936129
c_since: 2018-10-11 08:48:51
c_credit: BC
c_credit_lim: 50000.00
c_discount: 0.2000
c_balance: -10.00
c_ytd_payment: 10.00
c_payment_cnt: 1
c_delivery_cnt: 0
c_data: hjtWypvebZZgVhNCdKOhHd50Wn7HQG8XAm9cSHkXTf73KqBHot7IvHq8PtaHdaJ9oMXqFx6aUXrBRxQ44gqLf0k04gkPVWc6Lx3q71gFCu1vZlLhmmIaWQf5zyDD4AAqejVcwYKi50P9rHFegjzURpTf6c9SPEfpupc7378uekwBYTj4Xfm0Od3ukiQIKto8Nlx1Is51pC4qynxLEWWGULhXdBipYckk5EjpbpdKzUjEcMGJ6nCmFpNgXfKDIUpaYsw1dWwgCEhPfXiKjXMO0v0iF56wzD6AOF4w7m8CXSw0x5zKB7URuTqlGedSYK8EvmylYudXLF
1 row in set (0.00 sec)
MariaDB [tpcc]> show procedure status
-> ;
+-------+--------------------+-----------+----------------+---------------------+---------------------+---------------+---------+----------------------+----------------------+--------------------+
| Db | Name | Type | Definer | Modified | Created | Security_type | Comment | character_set_client | collation_connection | Database Collation |
+-------+--------------------+-----------+----------------+---------------------+---------------------+---------------+---------+----------------------+----------------------+--------------------+
| tpcc | DELIVERY | PROCEDURE | @ | 2018-10-11 08:57:34 | 2018-10-11 08:57:34 | DEFINER | | latin1 | latin1_swedish_ci | latin1_swedish_ci |
| tpcc | NEWORD | PROCEDURE | @ | 2018-10-11 08:57:34 | 2018-10-11 08:57:34 | DEFINER | | latin1 | latin1_swedish_ci | latin1_swedish_ci |
| tpcc | OSTAT | PROCEDURE | @ | 2018-10-11 08:57:34 | 2018-10-11 08:57:34 | DEFINER | | latin1 | latin1_swedish_ci | latin1_swedish_ci |
| tpcc | PAYMENT | PROCEDURE | @ | 2018-10-11 08:57:34 | 2018-10-11 08:57:34 | DEFINER | | latin1 | latin1_swedish_ci | latin1_swedish_ci |
| tpcc | SLEV | PROCEDURE | @ | 2018-10-11 08:57:34 | 2018-10-11 08:57:34 | DEFINER | | latin1 | latin1_swedish_ci | latin1_swedish_ci |
+-------+--------------------+-----------+----------------+---------------------+---------------------+---------------+---------+----------------------+----------------------+--------------------+
5 rows in set (0.00 sec)
CREATE DEFINER=`` PROCEDURE `slev`(
st_w_id INTEGER,
st_d_id INTEGER,
threshold INTEGER
)
BEGIN
DECLARE st_o_id INTEGER;
DECLARE stock_count INTEGER;
DECLARE `Constraint Violation` CONDITION FOR SQLSTATE '23000';
DECLARE EXIT HANDLER FOR `Constraint Violation` ROLLBACK;
DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
SELECT d_next_o_id INTO st_o_id
FROM district
WHERE d_w_id=st_w_id AND d_id=st_d_id;
SELECT COUNT(DISTINCT (s_i_id)) INTO stock_count
FROM order_line, stock
WHERE ol_w_id = st_w_id AND
ol_d_id = st_d_id AND (ol_o_id < st_o_id) AND
ol_o_id >= (st_o_id - 20) AND s_w_id = st_w_id AND
s_i_id = ol_i_id AND s_quantity < threshold;
END;
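The stock-level logic above can be sketched outside SQL to show what the join is doing: count the distinct items, across the district's last 20 orders, whose stock quantity is below a threshold. The following Python illustration uses hypothetical in-memory rows (all names and data are invented for the example):

```python
# Hypothetical in-memory sketch of the SLEV stock-level check.
next_o_id = 3001                        # the district's d_next_o_id
order_lines = {                         # ol_o_id -> item ids on that order
    3000: [5, 7], 2999: [7, 9], 2981: [2],
}
stock_qty = {2: 50, 5: 3, 7: 8, 9: 40}  # s_i_id -> s_quantity

def stock_level(threshold):
    items = set()
    for o_id, line_items in order_lines.items():
        if next_o_id - 20 <= o_id < next_o_id:   # last 20 orders only
            items.update(line_items)
    return sum(1 for i in items if stock_qty[i] < threshold)

print(stock_level(10))  # items 5 and 7 are below threshold -> 2
```

Because the real query reads order lines that NEWORD is concurrently inserting, it is one of the sources of deliberate contention described below.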
As a result there is deliberate contention in the HammerDB workload that is not seen in a sysbench one. Also, as HammerDB makes a single call to a stored procedure that then executes multiple SQL statements, workload throughput is significantly higher than with sysbench.
More recently MySQL and MariaDB have been positioned in the enterprise space, competing with commercial databases such as Oracle with features such as PL/SQL compatibility. Therefore, if you are testing MySQL or MariaDB's ability to support web-based applications that use SQL statements with minimal contention, then sysbench is likely to present the best assessment of the system for this type of workload. On the other hand, if testing MySQL or MariaDB for the ability to handle a more complex workload, such as the use of stored procedures, and in particular if looking to compare scalability with a traditional database, then HammerDB is focused more towards testing those enterprise features. Finally, it is also important to note that this comparison is focused on OLTP workloads; HammerDB also supports a TPC-H based workload for analytics with complex ad-hoc queries. Traditionally MySQL has not supported such workloads, however features such as ColumnStore in MariaDB now make this possible.
System Setup: CPU, Memory and I/O Configuration
./cpupower frequency-info
analyzing CPU 0:
driver: intel_pstate
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: Cannot determine or is not supported.
hardware limits: 1000 MHz - 3.80 GHz
available cpufreq governors: performance powersave
current policy: frequency should be within 1000 MHz and 3.80 GHz.
The governor "performance" may decide which speed to use
within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 1.99 GHz (asserted by call to kernel)
boost state support:
Supported: yes
Active: yes
Finally, in testing, Huge Pages have less of an impact than with PostgreSQL, although performance should not be lower than with Huge Pages disabled. Note that with some versions of MariaDB a bug means that Huge Pages cannot be enabled, although this does not greatly impact performance.
Install MySQL or MariaDB from Source
(Note that the following section applies in particular to pre-2019 versions of MySQL and MariaDB; more recent versions of MySQL 8 have already been updated for optimal performance on multiple platforms and therefore the change in this section is not required.)
for (i = 0; i <delay * 5; i++) {
j += i;
UT_RELAX_CPU();
}
If unsure whether to do this, run the workload and use the perf command to observe the top functions. If “ut_delay” is consuming most of the CPU (in most cases it will still be one of the top functions) then modifying this code can potentially increase performance.
Configure MySQL
An example MySQL my.cnf file is shown below. Note that innodb_file_per_table is set so that when we use partitioning in HammerDB it can insert into multiple files, one per partition, which gives a performance gain. Once configured, start the MySQL or MariaDB database. For more details on MySQL optimization see the website by Dimitri Kravtchuk.
[mysqld]
large-pages
skip-log-bin
datadir=/home/mysql/data
language=/home/mysql/bld/share/english
default_authentication_plugin=mysql_native_password
socket=/tmp/mysql.sock
port=3306
bind_address=192.168.1.1
# general
max_connections=4000
table_open_cache=8000
table_open_cache_instances=16
back_log=1500
default_password_lifetime=0
ssl=0
performance_schema=OFF
max_prepared_stmt_count=128000
skip_log_bin=1
character_set_server=latin1
collation_server=latin1_swedish_ci
transaction_isolation=REPEATABLE-READ
# files
innodb_file_per_table
innodb_log_file_size=1024M
innodb_log_files_in_group=32
innodb_open_files=4000
# buffers
innodb_buffer_pool_size=64000M
innodb_buffer_pool_instances=16
innodb_log_buffer_size=64M
# tune
innodb_doublewrite=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=0
innodb_max_dirty_pages_pct=90
innodb_max_dirty_pages_pct_lwm=10
join_buffer_size=32K
sort_buffer_size=32K
innodb_use_native_aio=1
innodb_stats_persistent=1
innodb_spin_wait_delay=6
innodb_max_purge_lag_delay=300000
innodb_max_purge_lag=0
innodb_flush_method=O_DIRECT_NO_FSYNC
innodb_checksum_algorithm=none
innodb_io_capacity=4000
innodb_io_capacity_max=20000
innodb_lru_scan_depth=9000
innodb_change_buffering=none
innodb_read_only=0
innodb_page_cleaners=4
innodb_undo_log_truncate=off
# perf special
innodb_adaptive_flushing=1
innodb_flush_neighbors=0
innodb_read_io_threads=16
innodb_write_io_threads=16
innodb_purge_threads=4
innodb_adaptive_hash_index=0
# monitoring
innodb_monitor_enable='%'
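To illustrate how some of these settings relate, a quick check of the buffer pool and redo sizing implied by the file above (all figures come straight from the settings shown):

```python
# Sizing implied by the example my.cnf above.
buffer_pool_mb = 64000           # innodb_buffer_pool_size=64000M
instances = 16                   # innodb_buffer_pool_instances=16
log_file_mb = 1024               # innodb_log_file_size=1024M
log_files = 32                   # innodb_log_files_in_group=32

per_instance_mb = buffer_pool_mb // instances
redo_capacity_gb = log_file_mb * log_files // 1024

print(per_instance_mb)   # 4000M per buffer pool instance
print(redo_capacity_gb)  # 32G of redo log capacity
```

A large redo capacity like this keeps checkpoint-driven flushing out of the measurement interval.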
Configure the HammerDB Client
Download and install HammerDB on a test client system; as with PostgreSQL, another 2 socket server is ideal. You need the client libraries, so ensure the MySQL 5.7 library file “libmysqlclient.so.20” can be found – you only need this one file. Then run hammerdbcli to check for it:
export LD_LIBRARY_PATH=/home/mysql:$LD_LIBRARY_PATH
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
..
Checking database library for MySQL
Success ... loaded library mysqltcl for MySQL
Create the Schema and Run the Test
If building MariaDB from source you may see the following error when creating the schema:
Error in Virtual User 1: mysqlexec/db server: Table 'mysql.proc' doesn't exist
This is resolved by running mysql_install_db to create the system tables:
$ ./scripts/mysql_install_db --srcdir=/home/mariadb/mariadb-10.2 --defaults-file=./../my.cnf
Installing MariaDB/MySQL system tables in '/home/mariadb/data' ...
OK
#!/bin/tclsh
puts "SETTING CONFIGURATION"
dbset db mysql
diset connection mysql_host 192.168.1.1
diset connection mysql_port 3306
diset tpcc mysql_count_ware 800
diset tpcc mysql_partition true
diset tpcc mysql_num_vu 64
diset tpcc mysql_storage_engine innodb
print dict
buildschema
$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>source innodbbuild.tcl
It will return to the prompt when built. Then restart the MySQL instance and run a single test. On an up to date system around mid-2018 expect to see up to 2M MySQL TPM and around 650K NOPM.
Vuser 1:56 Active Virtual Users configured
Vuser 1:TEST RESULT : System achieved 1994844 MySQL TPM at 658082 NOPM
#!/bin/tclsh
proc runtimer { seconds } {
set x 0
set timerstop 0
while {!$timerstop} {
incr x
after 1000
if { ![ expr {$x % 60} ] } {
set y [ expr $x / 60 ]
puts "Timer: $y minutes elapsed"
}
update
if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 }
}
return
}
puts "SETTING CONFIGURATION"
dbset db mysql
diset connection mysql_host 192.168.142.1
diset connection mysql_port 3307
diset tpcc mysql_driver timed
diset tpcc my_rampup 2
diset tpcc my_duration 5
vuset logtotemp 1
loadscript
puts "SEQUENCE STARTED"
foreach z { 1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 } {
puts "$z VU TEST"
vuset vu $z
vucreate
vurun
runtimer 600
vudestroy
after 5000
}
puts "TEST SEQUENCE COMPLETE"
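As a rough guide to how long this sequence takes: runtimer caps each test at 600 seconds and there is a 5 second pause between tests, so (ignoring virtual user creation time) the whole run needs roughly three and a half hours:

```python
# Approximate wall time for the timed test sequence above.
vu_points = [1, 2, 4, 8, 12, 16, 20, 24, 28, 32,
             36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76]
seconds_per_test = 600           # runtimer 600 caps each test
pause = 5                        # "after 5000" between tests

total = len(vu_points) * (seconds_per_test + pause)
print(total / 3600)              # roughly 3.5 hours
```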
Test Results
Grab the TPM or NOPM figures from the logfile. If everything went well you should see a sequence as follows, peaking at around 650,000 NOPM on an up to date system in mid-2018. To be clear, this chart shows the data from 20 performance tests, each with a 2 minute rampup time and a 5 minute test. After each test completed, HammerDB increased the virtual user count and repeated the test; fortunately HammerDB allows all of this process to be automated. Each data point shows the average transaction rate captured over the 5 minute test, not the peak performance, which may be higher.
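A minimal sketch of pulling the result figures out of the logfile; the regular expression and helper name are illustrative, based only on the TEST RESULT line format shown earlier:

```python
import re

# Hypothetical parser for HammerDB "TEST RESULT" lines as shown above.
def parse_result(line):
    m = re.search(r"System achieved (\d+) MySQL TPM at (\d+) NOPM", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

line = "Vuser 1:TEST RESULT : System achieved 1994844 MySQL TPM at 658082 NOPM"
print(parse_result(line))  # (1994844, 658082)
```

Running this over each per-test logfile gives the data points for the chart.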
Note that after a point, in this case after 56 virtual users, performance will decrease as you increase the number of virtual users. This is to be expected and is due to limits in the scalability of the storage engine. If you report the stored procedure response times (see the HammerDB manual for how to do this) you will see the times increase, especially for the DELIVERY stored procedure, which updates the ORDER_LINE table while the NEWORD stored procedure is trying to insert into it. As a result NOPM (New Orders Per Minute) falls as more contention is experienced. This is not a limitation of HammerDB, which has been observed to drive significantly higher transaction rates. Nevertheless, around 2M MySQL TPM is very respectable throughput for an enterprise workload, and as there is a lot more CPU available the potential is there to go a lot higher.
HammerDB Best Practice for PostgreSQL Performance and Scalability
I/O
On the 2 socket system you will need I/O that is able to keep up with writing to the WAL – An example is an NVME PCIe SSD formatted as an XFS file system and mounted as below:
/dev/nvme0n1p1 /mnt/ssd xfs noatime
The OS username you are using can be anything, but for people familiar with Oracle this becomes the equivalent of the system username. Many people use postgres.
CPU Configuration
Make sure that your CPU is configured for optimal performance. To do this, make sure that the Linux tools package is installed; if so, there are a number of tools under the /usr/lib/linux-tools directory under the specific kernel name.
# ls
acpidbg cpupower perf turbostat usbip usbipd x86_energy_perf_policy
# ./cpupower frequency-set --governor=performance
Setting cpu: 0
Setting cpu: 1
Setting cpu: 2
# ./cpupower frequency-info
analyzing CPU 0:
driver: intel_pstate
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: Cannot determine or is not supported.
hardware limits: 1000 MHz - 3.80 GHz
available cpufreq governors: performance powersave
current policy: frequency should be within 1000 MHz and 3.80 GHz.
The governor "performance" may decide which speed to use
within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 1.99 GHz (asserted by call to kernel)
boost state support:
Supported: yes
Active: yes
Also check that the idle settings are enabled and (for Intel CPUs) that the intel_idle driver is used. Note that the pstate and cstate drivers work together to provide the most efficient use of turbo boost. Initially it is best to leave all of these states enabled; disabling all C-states or idle states is likely to reduce overall performance by reducing the available turbo frequency. You can then experiment with the C-states between ‘all or nothing’ to find the optimal settings.
# ./cpupower idle-set --enable-all
# ./cpupower idle-info
CPUidle driver: intel_idle
CPUidle governor: menu
analyzing CPU 0:
Number of idle states: 4
Available idle states: POLL C1 C1E C6
POLL:
Flags/Description: CPUIDLE CORE POLL IDLE
Latency: 0
Usage: 10736
Duration: 29182086
# ./x86_energy_perf_policy -r
cpu0: EPB 6
cpu0: HWP_REQ: min 38 max 38 des 0 epp 0 window 0x0 (0*10^0us) use_pkg 0
cpu0: HWP_CAP: low 7 eff 10 guar 25 high 38
cpu1: EPB 6
# ./x86_energy_perf_policy performance
# ./x86_energy_perf_policy -r
cpu0: EPB 0
cpu0: HWP_REQ: min 38 max 38 des 0 epp 0 window 0x0 (0*10^0us) use_pkg 0
cpu0: HWP_CAP: low 7 eff 10 guar 25 high 38
cpu1: EPB 0
Huge Page Memory Configuration
For the memory we need to set Huge Pages, so as root add the following lines to /etc/sysctl.conf. We are going to create 64GB of buffers by adding the vm.nr_hugepages line (Huge Pages are 2MB in size). Note that the postgres memlock lines shown at the end belong in /etc/security/limits.conf rather than sysctl.conf.
vm.swappiness = 0
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
vm.nr_hugepages = 35000
postgres soft memlock 100000000
postgres hard memlock 100000000
cat /proc/meminfo
HugePages_Total: 35000
HugePages_Free: 6588
HugePages_Rsvd: 4572
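A quick check of the huge page arithmetic above, using 2MB pages and the figures from the sysctl setting and meminfo output:

```python
page_mb = 2                      # huge page size in MB
shared_buffers_mb = 64000        # the PostgreSQL shared_buffers target below
nr_hugepages = 35000             # vm.nr_hugepages setting above

pages_needed = shared_buffers_mb // page_mb
print(pages_needed)                      # 32000 pages cover the 64000MB of buffers
print(nr_hugepages * page_mb // 1024)    # 35000 pages reserve ~68GB, leaving headroom
```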
To reiterate this will only be when you have reached the stage of starting the database, so firstly you will need to install the software.
Install PostgreSQL from Source
postgres:/mnt/ssd/postgresqlsrc$ ls
pgsql postgresql-9.6.5 postgresql-9.6.5.tar
find . -name pg_config.h -print
./src/include/pg_config.h
Edit pg_config.h so it looks like below; this will set the WAL file size to be 1GB instead of the default 16MB.
/* XLOG_SEG_SIZE is the size of a single WAL file. This must be a power of 2
and larger than XLOG_BLCKSZ (preferably, a great deal larger than
XLOG_BLCKSZ). Changing XLOG_SEG_SIZE requires an initdb. */
#define XLOG_SEG_SIZE (1024 * 1024 * 1024)
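As a sanity check on the define above (the default segment size is 16MB, as noted, and the value must be a power of 2):

```python
xlog_seg_size = 1024 * 1024 * 1024   # the 1GB value set above
default_seg = 16 * 1024 * 1024       # stock 16MB segments

print(xlog_seg_size // default_seg)  # 64x fewer, larger WAL segments
print(xlog_seg_size == 1 << 30)      # confirm it is exactly 1GiB, a power of 2
```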
postgres:/mnt/ssd/postgresqlsrc$ ls -ltr /usr/local
total 32
lrwxrwxrwx 1 root root 28 Oct 11 03:41 pgsql -> /mnt/ssd/postgresqlsrc/pgsql
postgres:/usr/local/pgsql$ ls
bin data include lib share
Configure PostgreSQL
./bin/initdb -D ./data
postgres:/mnt/ssd/postgresqlsrc/pgsql/data$ more postgresql.conf
listen_addresses ='192.168.1.1' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 256 # (change requires restart)
shared_buffers = 64000MB # min 128kB
huge_pages = on # on, off, or try
temp_buffers = 4000MB # min 800kB
work_mem = 4000MB # min 64kB
maintenance_work_mem = 512MB # min 1MB
autovacuum_work_mem = -1 # min 1MB, or -1 to use maintenance_work_mem
max_stack_depth = 7MB # min 100kB
dynamic_shared_memory_type = posix # the default is the first option
max_files_per_process = 4000 # min 25
effective_io_concurrency = 32 # 1-1000; 0 disables prefetching
wal_level = minimal # minimal, archive, hot_standby, or logical
synchronous_commit = off # synchronization level;
wal_buffers = 512MB # min 32kB, -1 sets based on shared_buffers
#checkpoint_segments = 256 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1h # range 30s-1h
checkpoint_completion_target = 1 # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 0 # 0 disables
log_min_messages = error # values in order of decreasing detail:
log_min_error_statement = error # values in order of decreasing detail:
log_timezone = 'GB'
autovacuum = off # Enable autovacuum subprocess? 'on'
datestyle = 'iso, dmy'
timezone = 'GB'
lc_messages = 'en_GB.UTF-8' # locale for system error message
lc_monetary = 'en_GB.UTF-8' # locale for monetary formatting
lc_numeric = 'en_GB.UTF-8' # locale for number formatting
lc_time = 'en_GB.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 64 # min 10
max_pred_locks_per_transaction = 64 # min 10
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
host all all 192.168.1.1/32 trust
host all all 192.168.1.2/32 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres trust
#host replication postgres 127.0.0.1/32 trust
#host replication postgres ::1/128 trust
Start the PostgreSQL Database
Start the database as follows:
./bin/pg_ctl start -D ./data
And check it is running (remember to check the huge page allocation as well).
postgres:/usr/local/pgsql$ ps -ef | grep post
postgres 201539 1 0 Sep19 ? 00:00:00 /mnt/ssd/postgresqlsrc/pgsql/bin/postgres -D ./data
postgres 201541 201539 0 Sep19 ? 00:00:57 postgres: checkpointer process
postgres 201542 201539 0 Sep19 ? 00:00:02 postgres: writer process
postgres 201543 201539 0 Sep19 ? 00:03:18 postgres: wal writer process
postgres 201544 201539 0 Sep19 ? 00:00:04 postgres: stats collector process
./bin/pg_ctl stop -D ./data
postgres:/usr/local/pgsql$ ./bin/psql -U postgres -d postgres
psql (9.6.5)
Type "help" for help.
postgres=# alter role postgres password 'postgres';
ALTER ROLE
postgres=#
postgres:/usr/local/pgsql$ ./bin/psql -U postgres -d postgres -W
Password for user postgres:
psql (9.6.5)
Type "help" for help.
postgres=#
Configure the HammerDB Client
Download and install HammerDB on a test client system; another 2 socket server is ideal. You need the client libraries, so you can either copy the postgres lib directory you have just compiled or compile from source again on this host – you don’t need to create a database or set the config files. Then add the library to the library path:
postgres:~$ export LD_LIBRARY_PATH=/usr/local/pgsql/lib:$LD_LIBRARY_PATH
and check that it loads with a librarycheck test.
$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
Checking database library for PostgreSQL
Success ... loaded library Pgtcl for PostgreSQL
Create the Schema and Run the Test
HammerDB command line build and test examples
With the HammerDB command line it's very easy to script the schema build and automated tests, so that if you don't have a GUI, or don't want to use one, you can still access all of the HammerDB functionality. The following example is with MySQL on Linux but can easily apply to any database you choose.
Firstly, if you don't have MySQL installed, copy the client library to your home directory:
$ ls -l libmysqlclient.so.20
-rw-r--r-- 1 mysql mysql 4237408 Apr 23 09:21 libmysqlclient.so.20
$ export LD_LIBRARY_PATH=/home/intel:$LD_LIBRARY_PATH
$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>librarycheck
...
Checking database library for MySQL
Success ... loaded library mysqltcl for MySQL
...
$ more schemabuild.tcl
#!/bin/tclsh
puts "SETTING CONFIGURATION"
dbset db mysql
diset connection mysql_host 127.0.0.1
diset connection mysql_port 3306
diset tpcc mysql_count_ware 800
diset tpcc mysql_partition true
diset tpcc mysql_num_vu 64
diset tpcc mysql_storage_engine innodb
print dict
buildschema
$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>source schemabuild.tcl
$ more mysqlrun.tcl
#!/bin/tclsh
proc runtimer { seconds } {
set x 0
set timerstop 0
while {!$timerstop} {
incr x
after 1000
if { ![ expr {$x % 60} ] } {
set y [ expr $x / 60 ]
puts "Timer: $y minutes elapsed"
}
update
if { [ vucomplete ] || $x eq $seconds } { set timerstop 1 }
}
return
}
puts "SETTING CONFIGURATION"
dbset db mysql
diset connection mysql_host 192.168.1.1
diset connection mysql_port 3306
diset tpcc mysql_driver timed
diset tpcc my_rampup 2
diset tpcc my_duration 5
vuset logtotemp 1
loadscript
puts "SEQUENCE STARTED"
foreach z { 1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100 104 } {
puts "$z VU TEST"
vuset vu $z
vucreate
vurun
runtimer 600
vudestroy
after 5000
}
puts "TEST SEQUENCE COMPLETE"
$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>source mysqlrun.tcl
HammerDB Best Practice for SQL Server Performance and Scalability
BIOS Settings
Power Options
Verify Single Threaded Performance
Network Bandwidth
SQL Server Properties
Database Creation
Schema Build and Configure
Partition the History Table
Resize the Transaction Log
Monitoring
Processor Group Affinity
HammerDB Best Practice for Oracle Performance and Scalability
CPU, Memory and I/O
BIOS Settings
Power Saving
Verify Single Threaded Performance
SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
n NUMBER := 0;
BEGIN
FOR f IN 1..10000000
LOOP
n := MOD (n,999999) + SQRT (f);
END LOOP;
DBMS_OUTPUT.PUT_LINE ('Res = '||TO_CHAR (n,'999999.99'));
END;
/
Res = 873729.72
PL/SQL procedure successfully completed.
Elapsed: 00:00:07.88
Results will vary according to CPU model; however, a modern CPU will typically complete this test in 10 seconds or less, provided both system and Linux power saving settings are set optimally for Oracle performance.
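For a rough cross-check without an Oracle instance, the same arithmetic loop can be sketched in Python. This is an illustrative equivalent, not part of the Oracle test, and interpreted Python will be slower than PL/SQL here; because the floating point arithmetic differs from Oracle's decimal NUMBER type, the final value may not match exactly:

```python
import math
import time

# Rough Python equivalent of the PL/SQL single-thread test above.
start = time.time()
n = 0.0
for f in range(1, 10_000_001):
    n = math.fmod(n, 999999) + math.sqrt(f)   # same recurrence as the PL/SQL loop
print(f"Res = {n:.2f}")
print(f"Elapsed: {time.time() - start:.2f}s")
```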
Hyper-Threading
Memory
I/O and SSDs
Network Bandwidth
Oracle Parameters
Database Creation
Schema Build and Configure
db_16k_cache_size=32749125632
Resize the Redo Log Files
log_checkpoints_to_alert=TRUE
fast_start_mttr_target=0
log_checkpoint_interval=0
log_checkpoint_timeout=0
Load Profile                     Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~             ---------------   ---------------  --------  --------
DB Time(s):                             0.0               0.0      0.00      0.00
DB CPU(s):                              0.0               0.0      0.00      0.00
Redo size (bytes):            405,860,130.7           5,345.2
Logical read (blocks):          7,892,009.8             103.9
Block changes:                  2,377,433.9              31.3
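The load profile above also implies the transaction rate: dividing redo bytes per second by redo bytes per transaction gives roughly 76,000 transactions a second:

```python
redo_per_sec = 405_860_130.7     # Redo size (bytes) per second, from the profile
redo_per_txn = 5_345.2           # Redo size (bytes) per transaction

tps = redo_per_sec / redo_per_txn
print(round(tps))                # roughly 75,930 transactions per second
```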
Monitoring
Event                          Waits        Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                                      30.1K                                96.9
library cache: mutex X         936,935      364.2                  0             1.2        Concurrency
cursor: pin S                  210,234      315.5                  2             1.0        Concurrency
enq: TX - row lock contention  212,383      228.2                  1             .7         Application
log file sync                  28,919       82.3                   3             .3         Commit
latch: In memory undo latch    693,091      82.3                   0             .3         Concurrency
db file sequential read        55,642       46.4                   1             .1         User I/O
buffer busy waits              222,183      23.5                   0             .1         Concurrency
latch: enqueue hash chains     11,122       14.1                   1             .0         Other
SQL*Net message to client      13,132,309   12.1                   0             .0         Network
CPU Time (s)  Executions  CPU per Exec (s)  %Total  Elapsed Time (s)  %CPU   %IO   SQL Id  SQL Module                              SQL Text
15,894.75     5,704,467   0.00              52.73   19,048.04         83.45  0.01          wish8.6@wesep1.example.com (TNS V1-V3)  begin neword(:no_w_id, :no_max…
3,663.11      571,496     0.01              12.15   3,946.36          92.82  0.00          wish8.6@wesep1.example.com (TNS V1-V3)  BEGIN delivery(:d_w_id, :d_o_c…
How to use Python with HammerDB
[root@vulture tclpython-master]# ls
build LICENSE Makefile msvc pkg README.md src test VERSION.md
[root@vulture ~]# sudo yum install python-devel tcl-devel
Loaded plugins: langpacks, ulninfo
Package python-devel-2.7.5-69.0.1.el7_5.x86_64 already installed and latest version
...
[root@vulture tclpython-master]# make
cc -o build/tclpython/tclpython/tclpython.so.5.0 build/tclpython/src/tclpython.o build/tclpython/src/py.o -shared -s -lpthread -ldl -lutil -lm -lpython2.7 -ltclstub8.5
/home/oracle/tclpython-master/build/tclpython/tclpython
[oracle@vulture tclpython]$ ls
pkgIndex.tcl tclpython.so.5.0
[oracle@vulture lib]$ ls tclpython/
pkgIndex.tcl tclpython.so.5.0
[oracle@vulture HammerDB-3.1]$ ./hammerdbcli
HammerDB CLI v3.1
Copyright (C) 2003-2018 Steve Shaw
Type "help" for a list of commands
The xml is well-formed, applying configuration
hammerdb>package require tclpython
5.0
hammerdb>set interpreter [python::interp new]
python0
hammerdb>$interpreter exec {print("Hello World")}
Hello World
hammerdb>puts [$interpreter eval 3/2.0]
1.5
hammerdb>python::interp delete $interpreter
hammerdb>exit