110 million comments from Hacker News: medium data full-text / analytics test
In this test we use the collection of 1.1M curated Hacker News comments with numeric fields from https://zenodo.org/record/45901, multiplied by 100. 110 million documents can be considered a medium-sized dataset in the modern world. You can find datasets of a similar size on big blogs and news sites, large online stores, classifieds and so on. It’s typical for such applications to have:
- not very long textual data in one or multiple fields
- and a number of attributes
The source of the data collection is https://zenodo.org/record/45901.
The record structure is:
So far we have made this test available for 3 databases:
- Clickhouse - a powerful OLAP database,
- Elasticsearch - general purpose “search and analytics engine”,
- Manticore Search - “database for search”, Elasticsearch alternative.
We’ve tried to make as few changes to the default settings of each database as possible, so as not to give any of them an unfair advantage (an illustrative setup sketch for all three databases follows this list):
- Clickhouse: no tuning, just `CREATE TABLE ... ENGINE = MergeTree() ORDER BY id` and the standard clickhouse-server docker image.
- Elasticsearch: as we saw in another test, sharding can help Elasticsearch significantly, so given that 100+ million documents is not the smallest dataset, we decided it would be more fair to:
  - let Elasticsearch make 32 shards (`"number_of_shards": 32`), otherwise it couldn’t utilize the CPU, which has 32 cores on the server, since as said in the official Elasticsearch guide, “Each shard runs the search on a single CPU thread”
  - we also tuned it by setting `bootstrap.memory_lock=true`, since, as said on https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_disable_swapping, it needs to be done for performance
  - the docker image is standard
- Manticore Search is also used in the form of its own docker image + the columnar library it provides. The following updates have been made to its defaults:
  - `min_infix_len = 2`, since in Elasticsearch you can do infix full-text search by default, and it would not be fair to let Manticore run in a lighter mode (without infixes). Unfortunately this is not possible in Clickhouse at all, so it’s given that handicap.
  - we tested Manticore in two modes:
    - row-wise storage, which is the default and therefore worth testing
    - columnar storage: the data collection is of medium size, so given that Elasticsearch and Clickhouse internally use column-oriented structures, it seems fair to compare them with Manticore’s columnar storage too.
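For illustration, here is a minimal sketch of how each database could be prepared along these lines. The table/index name hn and the column names (id, comment_text, author, ranking) are assumptions made for the example; the actual schema and commands are in the test framework.

```bash
# Sketch only: illustrative names and types, not the exact commands from the test framework.

# Clickhouse: a plain MergeTree table, no tuning (column names are hypothetical)
clickhouse-client --query "
  CREATE TABLE hn (id UInt64, comment_text String, author String, ranking UInt32)
  ENGINE = MergeTree() ORDER BY id"

# Elasticsearch: 32 shards so a single query can utilize all 32 CPU cores;
# swapping is disabled by passing bootstrap.memory_lock=true to the container
curl -s -X PUT 'localhost:9200/hn' -H 'Content-Type: application/json' -d '
{
  "settings": { "number_of_shards": 32 }
}'

# Manticore Search: infix full-text search enabled in the configuration file
# min_infix_len = 2
```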
We’ve also configured the databases to not use any internal caches (a combined sketch of these cache-purging steps is shown after the list):
- For Clickhouse: we run `SYSTEM DROP MARK CACHE`, `SYSTEM DROP UNCOMPRESSED CACHE` and `SYSTEM DROP COMPILED EXPRESSION CACHE` after each query
- For Elasticsearch:
  - `"index.queries.cache.enabled": false` in its configuration
  - `/_cache/clear?request=true&query=true&fielddata=true` after each query
- For Manticore Search in its configuration file:
  - `qcache_max_bytes = 0`
  - `docstore_cache_size = 0`
- Operating system:
  - we do `echo 3 > /proc/sys/vm/drop_caches; sync` before each new query
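A combined cache-purging step along these lines might look like the following sketch (hosts and ports are the defaults and are assumptions here; the real implementation is in the test framework):

```bash
#!/bin/bash
# Sketch of the cache purging performed around each query; writing to drop_caches requires root.

# Clickhouse: drop its internal caches after each query
clickhouse-client --query "SYSTEM DROP MARK CACHE"
clickhouse-client --query "SYSTEM DROP UNCOMPRESSED CACHE"
clickhouse-client --query "SYSTEM DROP COMPILED EXPRESSION CACHE"

# Elasticsearch: clear request, query and fielddata caches after each query
curl -s -X POST 'localhost:9200/_cache/clear?request=true&query=true&fielddata=true'

# Manticore Search: its caches are disabled up front in the configuration file
# (qcache_max_bytes = 0, docstore_cache_size = 0), so there is nothing to purge per query

# Operating system: drop the page cache before each new (cold) query
echo 3 > /proc/sys/vm/drop_caches; sync
```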
The query set consists of both full-text and analytical (filtering, sorting, grouping, aggregating) queries; the exact queries can be found in the test framework.
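Just to give an idea of what the two classes look like, here is a rough sketch; the table and column names are made up for the example, and these are not the actual benchmark queries:

```bash
# Hypothetical full-text query (Manticore Search speaks the MySQL protocol, port 9306 by default)
mysql -h127.0.0.1 -P9306 -e "SELECT count(*) FROM hn WHERE MATCH('google');"

# Hypothetical analytical query (Clickhouse): filter, group, aggregate and sort
clickhouse-client --query "
  SELECT author, count(*) AS cnt
  FROM hn
  WHERE ranking > 0
  GROUP BY author
  ORDER BY cnt DESC
  LIMIT 10"
```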
You can find all the results on the results page by selecting “Test: hn”.
Remember that the only high-quality metric is “Fast avg”, since it guarantees a low coefficient of variation and a high number of attempts for each query. The other two (“Fastest” and “Slowest”) are provided with no guarantee, since:
- Slowest is a single-attempt result, in most cases the very first, coldest query. Even though we purge the OS cache before each cold query, it can’t be considered stable, so it can only be used for informational purposes and is greyed out in the summary below.
- Fastest is just the very fastest result; in most cases it should be similar to the “Fast avg” metric, but it can be more volatile from run to run.
Remember that the tests, including the results, are 100% transparent, as is everything else in this project, so:
- you can use the test framework to learn how they were made
- and find raw test results in the results directory.
Unlike other less transparent and less objective benchmarks, we are not making any conclusions; we are just leaving screenshots of the results here:
4 competitors at once
Clickhouse vs Elasticsearch
Manticore Search (columnar storage) vs Elasticsearch
Manticore Search (columnar storage) vs Clickhouse
Manticore Search row-wise storage vs columnar storage
What about MySQL?
As you can see in the screenshots, MySQL was also tested, but we don’t compare it with the others here since it was heavily tuned: keys were added based on the queries.
The author of this test and the test framework is a member of the Manticore Search core team, and the test was initially made to compare Manticore Search with Elasticsearch. But as shown above, and as can be verified in the open source code and by running the same test yourself, Manticore Search wasn’t given any unfair advantage, so the test can be considered unprejudiced. However, if something is missing or wrong (i.e. non-objective) in the test, feel free to make a pull request or open an issue on GitHub. Your take is appreciated! Thank you for spending your time reading this!