1. SUT Database Server Configuration

For query-based workloads there is no requirement for a load testing client, although you may use one if you wish. It is entirely acceptable to run HammerDB directly on the SUT (System Under Test) database system: in the analytic workload the client sends long-running queries to the SUT and awaits a response, so the client workload is minimal compared to an OLTP workload.

As with an OLTP configuration, however, the database server architecture to be tested must meet the standard requirements for a database server system. Similarly, the database can be installed on any supported operating system; there is no restriction on the operating system version required.

Before running a HammerDB analytic test, depending on your configuration you should focus on memory and I/O (disk performance). In turn, the number and type of multi-core and multi-threaded processors installed will have a significant impact on the parallel performance available to drive the workload. When using in-memory column store features, processors that support SIMD/AVX instruction sets are also required for the vectorisation of column scans.

HammerDB by default provides TPROC-H schemas at Scale Factors 1, 10, 30, 100, 300 and 1000 (larger can be configured if required). The Scale Factors correspond to the schema size in gigabytes. As with the official TPC-H tests, the results at one schema size should not be compared with the results derived at another schema size.

As the analytic workload utilizes parallel query where available, it is possible for a single virtual user to use all of the CPU resources on the SUT at any schema size. Nevertheless, there is still a relationship with all of the hardware resources available, including memory and I/O, and a larger system will benefit from tests run at a larger schema size. The actual sizing of hardware resources is beyond the scope of this document; however, at a basic level, with traditional parallel execution and modern CPU capabilities, I/O read performance is typically the constraining factor.

Note also that, in contrast to an OLTP workload, high-throughput transaction log write performance is not a requirement. In common with the OLTP workload, however, storage based on SSDs will usually offer significant performance improvements over standard hard disks, although in this case the benefit comes from the read bandwidth of SSDs, as opposed to the IOPS benefits of SSDs for OLTP.

When using the in-memory column store feature, memory capacity and bandwidth become the key resources, and if the data is fully cached in memory, storage performance is not directly a factor for query performance. Nevertheless, data loads are an important consideration for in-memory data, and therefore I/O and SSD read performance remain important for loading the data into memory to be available for scans.
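The short sketches below illustrate these points from the HammerDB CLI, whose scripting language is Tcl. First, to confirm whether the SUT processors expose the SIMD/AVX instruction sets used for vectorised column scans, the CPU flags can be inspected. This is a minimal sketch, assuming a Linux SUT where the kernel reports the flags in /proc/cpuinfo:

    # Report AVX-related CPU flags on Linux by reading /proc/cpuinfo.
    # Assumes a Linux SUT; the feature names checked are the common
    # kernel flag names for AVX, AVX2 and the AVX-512 foundation.
    set flags [exec grep -m 1 flags /proc/cpuinfo]
    foreach feature {avx avx2 avx512f} {
        set present [expr {[lsearch -exact [split $flags] $feature] >= 0}]
        puts [format "%-8s %s" $feature [expr {$present ? "yes" : "no"}]]
    }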
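To select a Scale Factor, the value is set in the schema build options before buildschema is run. The following HammerDB CLI sketch assumes the PostgreSQL dictionary, where the relevant parameters are pg_scale_fact and pg_num_tpch_threads; parameter names differ for other databases:

    dbset db pg
    dbset bm TPROC-H
    # Build a 100GB schema using 8 loader threads to exploit available cores.
    diset tpch pg_scale_fact 100
    diset tpch pg_num_tpch_threads 8
    buildschema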
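Because parallel query can enable a single virtual user to saturate the CPUs on the SUT, an initial test can be run with just one virtual user. A minimal sketch of driving the workload, again assuming PostgreSQL parameter names:

    dbset db pg
    dbset bm TPROC-H
    # Run one complete set of the 22 queries per virtual user.
    diset tpch pg_total_querysets 1
    loadscript
    vuset vu 1
    vucreate
    vurun
    # After the virtual user completes, vudestroy releases it.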