Test framework
The test framework used on the backend of https://db-benchmarks.com is fully open source (AGPLv3 license) and can be found at https://github.com/db-benchmarks/db-benchmarks . Here's what it does:
- Automates data loading into the databases/search engines included in the repository.
- Can run a database/search engine in Docker with a particular CPU/RAM constraint.
- While testing:
  - Purges the OS cache automatically
  - Purges database caches before each cold run
  - Restarts the database before each cold run
  - Watches your CPU temperature to avoid throttling
  - Watches the coefficient of variation while making queries and can stop as soon as:
    - the CV is low enough
    - and the number of queries made is sufficient
- After starting a database/search engine, lets it do its warm-up stage (pre-reading the needed data from disk) and stops waiting as soon as:
  - there's been no IO for a few seconds
  - and it can connect to the database/search engine
- After stopping a database/search engine, waits until it has fully stopped
- Accepts different timeouts: start, warm-up, initial connection, getting info about the database/search engine, query
- Can emulate one physical CPU core (`--limited`), which allows benchmarking the algorithmic capabilities of databases more objectively
- Accepts all the values as command line arguments as well as environment variables for easier integration with CI systems:
  - `--test` saves test results to a file
  - `--save` saves test results from files to a remote database (not one of those being tested)
- Tracks a lot of things while testing:
  - Server info: CPU, memory, running processes, filesystem, hostname
  - Current repository info, to make sure there are no local changes
  - Performance metrics: each query's response time in microseconds, plus aggregated stats:
    - coefficient of variation of all queries
    - coefficient of variation of the 80% fastest queries
    - the cold query's response time
    - avg(response times)
    - avg(80% fastest queries' response times)
    - the slowest query's response time
  - Database/search engine info:
    - `select count(*)` and `select * limit 1`, to make sure the data collections are similar across databases
    - internal database/search engine data structure status (chunks, shards, segments, partitions, parts, etc.)
- Makes it easy to limit CPU/RAM consumption inside or outside the test (using the environment variables `cpuset` and `mem`).
- Lets you easily start each database/search engine the same way the framework starts it, for manual testing and preparation of test queries.
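The CV-based stopping rule described above can be sketched as follows (a simplified illustration in shell/awk, not the framework's actual code; the sample numbers are made up for the example):

```shell
# Coefficient of variation (stddev / mean) of a list of query times.
# The framework can stop benchmarking once the CV is low enough and
# enough queries have been made.
cv() {
  printf '%s\n' "$@" | awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
      mean = sum / n
      sd = sqrt(sumsq / n - mean * mean)
      printf "%.4f\n", sd / mean
    }'
}

times="101 99 100 102 98"   # response times in microseconds
result=$(cv $times)
echo "CV=$result"           # a low CV means the results are stable enough to stop
```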
Installation
Before you deploy the test framework, make sure you have the following:
- A Linux server fully dedicated to testing
- Fresh CPU thermal paste to make sure your CPUs don't throttle
- PHP 8 with the `curl` and `mysqli` modules
- `docker`
- `docker-compose`
- `sensors`, to monitor CPU temperature and prevent throttling
- `dstat`
- cgroups v2
To install:
- Clone the repository:

```
git clone git@github.com:db-benchmarks/db-benchmarks.git
cd db-benchmarks
```
- Copy `.env.example` to `.env`
- Update `mem` and `cpuset` in `.env` with the default amount of memory (in megabytes) and the CPUs the test framework may use for secondary tasks (data loading, getting info about databases)
- Tune the JVM limits (`ES_JAVA_OPTS`) for your tests. Usually this is the amount of memory allocated to the Docker machine
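For illustration only, a filled-in `.env` might look like this (the values are made-up examples; see `.env.example` in the repository for the actual variable names and defaults):

```
mem=32768
cpuset=0,1
ES_JAVA_OPTS=-Xms16g -Xmx16g
```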
Get started
Prepare test
First you need to prepare a test:
- Go to a particular test's directory (all tests must be in the `./tests` directory), for example "hn_small"
- Run the init script

This will:
- download the data collection from the Internet
- build the tables/indices
- measure the time spent loading the dataset into the database and add an `{engine_name}_init` file to the corresponding results folder
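For example, preparing the "hn_small" test mentioned above might look like this (a sketch; `init` is the per-test initialization script from the repository, and these commands assume you are in the repository root with Docker running):

```shell
cd tests/hn_small   # all tests live under ./tests
./init              # downloads the data, builds tables/indices, records load time
```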
Run test
Then run `../../test` (it's in the project root) to see the options:
```
To run a particular test with specified engines, memory constraints and number of attempts and save the results locally:
/perf/test_engines/test
--test=test_name
--engines={engine1:type,...,engineN}
--memory=1024,2048,...,1048576 - memory constraints to test with, MB
[--times=N] - max number of times to test each query, 100 by default
[--dir=path] - if path is omitted - save to directory 'results' in the same dir where this file is located
[--probe_timeout=N] - how long to wait for an initial connection, 30 seconds by default
[--start_timeout=N] - how long to wait for a db/engine to start, 120 seconds by default
[--warmup_timeout=N] - how long to wait for a db/engine to warmup after start, 300 seconds by default
[--query_timeout=N] - max time a query can run, 900 seconds by default
[--info_timeout=N] - how long to wait for getting info from a db/engine
[--limited] - emulate one physical CPU core
[--queries=/path/to/queries] - queries to test, ./tests/<test name>/test_queries by default

To save to db all results it finds by path
/perf/test_engines/test
--save=path/to/file/or/dir, all files in the dir recursively will be saved
--host=HOSTNAME
--port=PORT
--username=USERNAME
--password=PASSWORD
--rm - remove after successful saving to database
--skip_calm - avoid waiting until discs become calm
----------------------
Environment variables:
All the options can be specified as environment variables, but you can't use the same option as an environment variable and as a command line argument at the same time.
```
And run the test:
```
../../test --test=hn_small --engines=elasticsearch,clickhouse --memory=16384
```
If you run your tests in local mode (for development) and don't care about test accuracy, you can skip the disk-calm and CPU checks by passing `--skip_inaccuracy`:
```
../../test --test=hn_small --engines=elasticsearch,clickhouse --memory=16384 --skip_inaccuracy
```
Now you have test results in `./results/` (in the root of the repository), for example:
```
# ls results/
220401_054753
```
Save to db to visualize
You can now upload the results to the database for further visualization. The visualization tool used on https://db-benchmarks.com/ is also open source and can be found at https://github.com/db-benchmarks/ui.
Here’s how you can save the results:
```
username=login password=pass host=db.db-benchmarks.com port=443 save=./results ./test
```
or
```
./test --username=login --password=pass --host=db.db-benchmarks.com --port=443 --save=./results
```
Make pull request
We are eager to see your test results. If you believe they should be added to https://db-benchmarks.com, please make a pull request of your results to this repository.
Please keep the following in mind:
- Your results should be located in the `./results` directory.
- If it's a new test/engine, any other changes should be included in the same pull request.
- It is important that we, and anyone else, can reproduce your test and hopefully get similar results.
We will then:
- Review your results to ensure they follow the testing principles.
- Reproduce your test on our hardware to ensure they are comparable with other tests.
- Discuss any arising questions with you.
- And, if everything checks out, we will merge your pull request.
Directory structure
```
.
|-core <- Core directory, contains base files required for tests.
| |-engine.php <- Abstract class Engine. Manages test execution, result saving, and parsing of test attributes.
| |-EsCompatible.php <- Helper for Elasticsearch-compatible engines.
| |-helpers.php <- Helper file with logging functions, attribute parsing, exit functions, etc.
|-misc <- Miscellaneous directory, intended for storing files useful during the initialization step.
| |-func.sh <- Meilisearch initialization helper script.
| |-ResultsUpdater.php <- Helper for updating results (they are unserialized and serialized again).
|-plugins <- Plugins directory: if you want to extend the framework by adding another database or search engine for testing, place it here.
| |-clickhouse.php <- ClickHouse plugin.
| |-elasticsearch.php <- Elasticsearch plugin.
| |-manticoresearch.php <- Manticore Search plugin.
| |-meilisearch.php <- Meilisearch plugin.
| |-mysql.php <- MySQL plugin.
| |-mysql_percona.php <- MySQL (Percona) plugin.
| |-postgres.php <- Postgres plugin.
| |-quickwit.php <- Quickwit plugin.
| |-typesense.php <- Typesense plugin.
|-results <- Test results directory. The results shown on https://db-benchmarks.com/ are found here. You can also use `./test --save` to visualize them locally.
|-tests <- Directory containing test suites.
| |-hn <- Hackernews test suite.
| | |-clickhouse <- Directory for "Hackernews test -> ClickHouse".
| | | |-inflate_hook <- Engine initialization script. Handles data ingestion into the database.
| | | |-post_hook <- Engine verification script. Ensures the correct number of documents have been ingested and verifies data consistency.
| | | |-pre_hook <- Engine pre-check script. Determines if tables need to be rebuilt, starts the engine, and ensures availability.
| | |-data <- Prepared data collection for the tests.
| | |-elasticsearch <- Directory for "Hackernews test -> Elasticsearch".
| | | |-config <- Elasticsearch configuration directory.
| | | | |-elasticsearch.yml
| | | | |-jvm.options
| | | | |-log4j2.properties
| | | |-config_tuned <- Elasticsearch configuration directory for the "tuned" type.
| | | |-logstash <- Logstash configuration directory.
| | | | |-logstash.conf
| | | | |-template.json
| | | |-logstash_tuned <- Logstash configuration directory for the "tuned" type.
| | | | |-logstash.conf
| | | | |-template.json
| | | |-inflate_hook <- Engine initialization script for data ingestion.
| | | |-post_hook <- Verifies document count and data consistency.
| | | |-pre_hook <- Pre-check script for table rebuilding and engine initialization.
| | |-manticoresearch <- Directory for testing Manticore Search in the Hackernews test suite.
| | | |-generate_manticore_config.php <- Script for dynamically generating Manticore Search configuration.
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Verifies document count and consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine availability.
| | |-meilisearch <- Directory for "Hackernews test -> Meilisearch".
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Ensures correct document count and data consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine start.
| | |-mysql <- Directory for "Hackernews test -> MySQL".
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Ensures document count and consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine start.
| | |-postgres <- Directory for "Hackernews test -> Postgres".
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Verifies document count and data consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine availability.
| | |-prepare_csv <- Prepares the data collection, handled in `./tests/hn/init`.
| | |-quickwit <- Directory for "Hackernews test -> Quickwit".
| | | |-csv_jsonl.php <- CSV to JSONL conversion script.
| | | |-index-config.yaml <- Quickwit index config.
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Verifies document count and data consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine availability.
| | | |-quickwit.yaml <- Quickwit config.
| | |-typesense <- Directory for "Hackernews test -> Typesense".
| | | |-csv_jsonl.php <- CSV to JSONL conversion script.
| | | |-inflate_hook <- Data ingestion script.
| | | |-post_hook <- Verifies document count and data consistency.
| | | |-pre_hook <- Pre-check for table rebuilds and engine availability.
| | |-description <- Test description, included in test results and used during result visualization.
| | |-init <- Main initialization script for the test.
| | |-test_info_queries <- Contains queries to retrieve information about the data collection.
| | |-test_queries <- Contains all test queries for the current test.
| |-taxi <- Taxi rides test suite, with a similar structure.
| |-hn_small <- Test for a smaller, non-multiplied Hackernews dataset, similar structure.
| |-logs10m <- Test for Nginx logs, similar structure.
|-.env.example <- Example environment file. Update the "mem" and "cpuset" values as needed.
|-LICENSE <- License file.
|-NOTICE <- Notice file.
|-README.md <- You're reading this file.
|-docker-compose.yml <- Docker Compose configuration for starting and stopping databases and search engines.
|-important_tests.sh
|-init <- Initialization script. Handles data ingestion and tracks the time taken.
|-logo.svg <- Logo file.
|-test <- The executable file to run and save test results.
```
How to start a particular database / search engine with a particular dataset
```
test=logs10m cpuset="0,1" mem=32768 suffix=_tuned docker-compose up elasticsearch
```
will:
- start Elasticsearch to test "logs10m" with the following settings:
  - `suffix=_tuned`: maps ./tests/logs10m/es/data/idx_tuned as the data directory
  - `mem=32768`: limits RAM to 32GB; if not specified, the default from the `.env` file is used
  - `cpuset="0,1"`: Elasticsearch's container will run only on CPU cores 0 and 1 (which may be the first whole physical CPU)
To stop it, just press CTRL-C.
Notes