1.7B NYC Taxi rides test: ClickHouse vs Elasticsearch vs Manticore Search

Intro
NYC taxi rides is probably the most commonly used benchmark in the field of data analytics.
It started in 2015, when Todd W. Schneider first prepared the collection to analyze 1.1 billion NYC taxi and Uber trips. Mark Litwintschik then continued by using the data collection to test a multitude of databases and search engines.
Now we at https://db-benchmarks.com/:
- have dockerized the preparation of the data collection to make it easier to use
- have made it available as part of the most transparent and open-source database benchmark suite.
Data collection
The data collection comprises 1.7B taxi and for-hire vehicle (Uber, Lyft, etc.) trips originating in New York City since 2009. Most of the raw data comes from the NYC Taxi & Limousine Commission.
Each record in the data collection includes a lot of different attributes of a taxi ride:
- pickup date and time
- coordinates of pickup and dropoff
- pickup and dropoff location names
- fee and tip amount
- wind speed, snow depth
- and many other fields
It is useful mostly for testing analytical queries, but it also includes a couple of full-text fields that can be used to test the free-text capabilities of databases.
The whole list of fields and their data types is:
[table: all fields of the data collection and their data types]
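To make the structure more tangible, here is a minimal sketch of what a ClickHouse table for this collection might look like; the subset of fields and their types below are assumptions based on the attributes listed above, not the actual schema used in the test:

CREATE TABLE trips
(
    id UInt64,
    pickup_datetime DateTime,     -- pickup date and time
    pickup_longitude Float64,     -- coordinates of pickup ...
    pickup_latitude Float64,
    dropoff_longitude Float64,    -- ... and dropoff
    dropoff_latitude Float64,
    pickup_ntaname String,        -- pickup location name (a full-text field)
    dropoff_ntaname String,       -- dropoff location name
    fare_amount Float32,          -- fee ...
    tip_amount Float32,           -- ... and tip amount
    wind_speed Float32,           -- weather attributes
    snow_depth Float32
)
ENGINE = MergeTree() ORDER BY id;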
Databases
So far we have made this test available for 3 databases:
- ClickHouse - a powerful OLAP database
- Elasticsearch - a general-purpose “search and analytics engine”
- Manticore Search - a “database for search”, an Elasticsearch alternative.
We’ve tried to change as few of the databases’ default settings as possible, so as not to give any of them an unfair advantage:
- ClickHouse: no tuning, just CREATE TABLE ... ENGINE = MergeTree() ORDER BY id and the standard clickhouse-server docker image.
- Elasticsearch: here, to make the comparison with the other databases fair, we had to help Elasticsearch by the following (see the sketch after this list):
  - letting it make 32 shards ("number_of_shards": 32), since otherwise it couldn’t utilize the server’s 32 CPU cores: as the official Elasticsearch guide puts it, “Each shard runs the search on a single CPU thread”
  - setting bootstrap.memory_lock=true, since, as explained at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_disable_swapping, this needs to be done for performance
  - the docker image is otherwise standard
- Manticore Search is used in the form of its own docker image plus the columnar library it provides:
  - as with Elasticsearch, we use 32 shards, in the form of 32 plain indexes
  - and we use Manticore’s columnar storage, since comparing Manticore’s default row-wise storage against ClickHouse’s and Elasticsearch’s columnar storages would not be fair on such a large data collection.
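For illustration, the Elasticsearch-specific settings described above might be applied roughly like this; the container name, image tag, index name and the extra single-node/security settings are assumptions for the sketch, not the exact commands used in the test:

# run the standard image with swapping disabled, as the official docker guide recommends
docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -e "bootstrap.memory_lock=true" \
  --ulimit memlock=-1:-1 \
  docker.elastic.co/elasticsearch/elasticsearch:8.4.1

# create the index with 32 shards so all 32 CPU cores can be utilized
curl -X PUT "localhost:9200/taxi" -H 'Content-Type: application/json' -d '
{
  "settings": { "number_of_shards": 32, "number_of_replicas": 0 }
}'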
We’ve also configured the databases not to use any internal caches (the combined per-query routine is sketched below):
- ClickHouse: SYSTEM DROP MARK CACHE, SYSTEM DROP UNCOMPRESSED CACHE and SYSTEM DROP COMPILED EXPRESSION CACHE after each query.
- Elasticsearch: "index.queries.cache.enabled": false in its configuration and /_cache/clear?request=true&query=true&fielddata=true after each query.
- Manticore Search, in its configuration file: qcache_max_bytes = 0 and docstore_cache_size = 0.
- Operating system: we do echo 3 > /proc/sys/vm/drop_caches; sync before each new query.
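Put together, the per-query cache-purging routine might look like the following shell sketch (the host and client invocations are assumptions; the actual test framework automates these steps):

# purge the OS page cache before each cold query
echo 3 > /proc/sys/vm/drop_caches; sync

# ClickHouse: drop its internal caches after each query
clickhouse-client --query "SYSTEM DROP MARK CACHE"
clickhouse-client --query "SYSTEM DROP UNCOMPRESSED CACHE"
clickhouse-client --query "SYSTEM DROP COMPILED EXPRESSION CACHE"

# Elasticsearch: clear the request, query and fielddata caches after each query
curl -X POST "localhost:9200/_cache/clear?request=true&query=true&fielddata=true"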
Queries
The queries are mostly analytical queries that do filtering, sorting and grouping. We’ve also included one full-text query:
[table: the full list of test queries]
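To give an idea of the query classes involved, here are two hypothetical examples in SQL form; they illustrate the kinds of queries described above rather than reproducing the exact test queries:

-- analytical: filter, group and sort
SELECT pickup_ntaname, count(*) AS c, avg(tip_amount)
FROM trips
WHERE fare_amount > 10
GROUP BY pickup_ntaname
ORDER BY c DESC;

-- full-text: match a keyword in a text field (the exact syntax varies by database)
SELECT count(*) FROM trips WHERE match(dropoff_ntaname, 'Harlem');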
Results
You can find all the results on the results page by selecting “Test: taxi”.
Remember that the only high-quality metric is “Fast avg”, since it guarantees a low coefficient of variation (the ratio of the standard deviation of the query timings to their mean) and a high number of runs for each query. The other two (“Fastest” and “Slowest”) are provided with no guarantee, since:
- Slowest is a single-attempt result, in most cases the very first, coldest query. Even though we purge the OS cache before each cold query, it can’t be considered stable, so it can be used for informational purposes only and is greyed out in the summary below.
- Fastest is just the very fastest result; in most cases it should be similar to the “Fast avg” metric, but it can be more volatile from run to run.
Remember that the tests, including the results, are 100% transparent, as is everything in this project, so:
- you can use the test framework to learn how they were made
- and you can find the raw test results in the results directory.
Unlike other, less transparent and less objective benchmarks, we are not drawing any conclusions; we are just leaving the screenshots of the results here:
3 competitors at once
ClickHouse vs Elasticsearch
Manticore Search vs Elasticsearch
Manticore Search vs ClickHouse
Disclaimer
The author of this test and of the test framework is a member of the Manticore Search core team, and the test was initially made to compare Manticore Search with Elasticsearch. However, as shown above and as can be verified in the open-source code and by running the test yourself, Manticore Search wasn’t given any unfair advantage, so the test can be considered unprejudiced. If something is missing or wrong (i.e. non-objective) in the test, feel free to open a pull request or an issue on GitHub. Your take is appreciated! Thank you for spending your time reading this!