A library that provides an embeddable, persistent key-value store for fast storage.
Latest commit 08be7f5266 by Radheshyam Balasundaram: Implement Prepare method in CuckooTableReader
Summary:
- Implement the Prepare method (the sketch after this summary illustrates the idea).
- Rewrite the performance tests in cuckoo_table_reader_test to write a new file only if one doesn't already exist.
- Add performance tests for batch lookup combined with prefetching.
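To make the first item concrete, below is a minimal, self-contained sketch of the prefetch-then-probe idea behind a Prepare() call: compute the candidate bucket locations for a key and touch them with __builtin_prefetch (a GCC/Clang builtin) so that a later Get() finds them already in cache. The ToyCuckooTable type, its Bucket layout, and its hash function are hypothetical stand-ins for illustration only, not the actual CuckooTableReader internals.

```cpp
// Standalone illustration of the prefetch-then-probe pattern behind Prepare();
// all names and the bucket layout here are hypothetical.
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Bucket {
  std::string key;
  std::string value;
};

struct ToyCuckooTable {
  std::vector<Bucket> buckets;      // flat, open-addressed bucket array
  uint32_t num_hash_functions = 2;  // candidate locations per key

  uint64_t BucketIndex(const std::string& key, uint32_t hash_id) const {
    // Toy hash; a real reader would use the hash functions recorded in the file.
    return (std::hash<std::string>{}(key) + hash_id * 0x9e3779b97f4a7c15ull) %
           buckets.size();
  }

  // Prepare(): issue prefetches for every candidate bucket of `key`, so the
  // cache misses overlap with work done on the other keys in a batch.
  void Prepare(const std::string& key) const {
    for (uint32_t i = 0; i < num_hash_functions; ++i) {
      __builtin_prefetch(&buckets[BucketIndex(key, i)]);
    }
  }

  // Get(): probe the same candidate buckets; after Prepare() they are likely
  // already resident in cache, so the probe itself is cheap.
  bool Get(const std::string& key, std::string* value) const {
    for (uint32_t i = 0; i < num_hash_functions; ++i) {
      const Bucket& b = buckets[BucketIndex(key, i)];
      if (b.key == key) {
        *value = b.value;
        return true;
      }
    }
    return false;
  }
};
```

The benefit comes from issuing several independent prefetches before any of their results is needed, which is why the batched numbers below improve over the unbatched baseline.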

Test Plan:
./cuckoo_table_reader_test --enable_perf
Results (we get better results if we use an int64 comparator instead of a string comparator; TBD in future diffs):
All runs use 100,000,000 items.

Hash table ratio | Hash functions | Batch size | Time per op (us) | Throughput (Mqps)
0.50             | 2              | 0          | 0.208            | 4.8
0.50             | 2              | 10         | 0.182            | 5.5
0.50             | 2              | 25         | 0.161            | 6.2
0.50             | 2              | 50         | 0.161            | 6.2
0.50             | 2              | 100        | 0.163            | 6.1
0.60             | 3              | 0          | 0.252            | 4.0
0.60             | 3              | 10         | 0.192            | 5.2
0.60             | 3              | 25         | 0.195            | 5.1
0.60             | 3              | 50         | 0.191            | 5.2
0.60             | 3              | 100        | 0.194            | 5.1
0.75             | 3              | 0          | 0.228            | 4.4
0.75             | 3              | 10         | 0.185            | 5.4
0.75             | 3              | 25         | 0.186            | 5.4
0.75             | 3              | 50         | 0.189            | 5.3
0.75             | 3              | 100        | 0.188            | 5.3
0.90             | 3              | 0          | 0.325            | 3.1
0.90             | 3              | 10         | 0.196            | 5.1
0.90             | 3              | 25         | 0.199            | 5.0
0.90             | 3              | 50         | 0.196            | 5.1
0.90             | 3              | 100        | 0.209            | 4.8
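The pattern in these numbers, where throughput jumps once the batch size is non-zero and then flattens out, is what a prepare-then-probe loop is designed to produce: prefetches for the whole batch are issued up front and overlap with the probes of earlier keys. Below is a hypothetical driver loop for such a benchmark; Table, Prepare, and Get refer to the toy interface sketched above, not to the real harness in cuckoo_table_reader_test.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical batched-lookup driver: Prepare() every key in a batch first,
// then Get() them, so prefetched cache lines arrive while other keys are
// being hashed. A batch size of 0 falls back to plain, unbatched Get() calls.
template <typename Table>
void BatchedLookup(const Table& table, const std::vector<std::string>& keys,
                   size_t batch_size) {
  std::string value;
  if (batch_size == 0) {
    for (const auto& key : keys) {
      table.Get(key, &value);  // unbatched baseline ("batch size of 0" rows)
    }
    return;
  }
  for (size_t start = 0; start < keys.size(); start += batch_size) {
    const size_t end = std::min(keys.size(), start + batch_size);
    for (size_t i = start; i < end; ++i) {
      table.Prepare(keys[i]);  // issue prefetches for the whole batch
    }
    for (size_t i = start; i < end; ++i) {
      table.Get(keys[i], &value);  // probes now mostly hit cache
    }
  }
}
```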

Reviewers: sdong, yhchiang, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22167
2014-08-20 18:35:35 -07:00
Name                  | Last commit message | Last commit date
build_tools           | Changes to support unity build: | 2014-08-11 13:22:47 -04:00
coverage              | Disable the html-based coverage report by default | 2014-02-06 12:58:13 -08:00
db                    | Fix the error of c_test.c | 2014-08-20 17:05:29 -07:00
doc                   | Remove seek compaction | 2014-06-20 10:23:02 +02:00
examples              | Make it easier to start using RocksDB | 2014-05-10 10:49:33 -07:00
hdfs                  | hdfs cleanup and compile test against CDH 4.4. | 2014-05-20 17:22:12 -04:00
helpers/memenv        | Expose in memory Env to the world | 2014-04-14 12:28:15 -07:00
include               | Improve Options sanitization and add MmapReadRequired() to TableFactory | 2014-08-20 15:53:39 -07:00
java                  | [Java] Add purgeOldBackups API | 2014-08-14 10:58:09 -07:00
linters               | allow lambda function syntax in cpplint | 2014-02-20 12:47:05 -08:00
port                  | generic rate limiter | 2014-07-08 11:41:57 -07:00
table                 | Implement Prepare method in CuckooTableReader | 2014-08-20 18:35:35 -07:00
third-party/rapidjson | Fix a rapidjson compile error in mac. | 2014-06-23 17:09:24 -06:00
tools                 | fix_sst_dump_for_old_sst_format | 2014-08-14 09:59:41 -07:00
util                  | Eliminate VersionSet memory leak | 2014-08-20 13:52:03 -07:00
utilities             | Optimize storage parameters for spatialDB | 2014-08-20 11:14:01 -07:00
.arcconfig            | Improve/fix bugs for the cpp linter | 2014-02-13 17:48:11 -08:00
.clang-format         | A script that automatically reformat affected lines | 2014-01-14 12:21:24 -08:00
.gitignore            | Changes to support unity build: | 2014-08-11 13:22:47 -04:00
.travis.yml           | Turn off travis notifications | 2014-05-14 16:08:02 -07:00
CONTRIBUTING.md       | facebook accounts are not required for CLA signers | 2014-07-08 05:57:54 -04:00
HISTORY.md            | WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index | 2014-08-18 16:37:38 -07:00
INSTALL.md            | specify the command to install build_tools/mac-install-gflags.sh file in doc | 2014-06-17 17:03:21 -05:00
LICENSE               | Fix copyright year | 2014-03-12 12:06:58 -07:00
Makefile              | WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index | 2014-08-18 16:37:38 -07:00
PATENTS               | Fix the patent format | 2013-10-16 15:37:32 -07:00
README.md             | Update README.md | 2014-06-23 15:58:54 -07:00
ROCKSDB_LITE.md       | RocksDBLite | 2014-04-15 13:39:26 -07:00

RocksDB: A Persistent Key-Value Store for Flash and RAM Storage


RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).

This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF), and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.

Start with example usage here: https://github.com/facebook/rocksdb/tree/master/examples
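For quick orientation, here is a minimal sketch of the basic usage pattern (open a database, write a key, read it back) against the public API in include/rocksdb/db.h. The database path and the key/value strings are placeholders; the examples directory linked above contains complete, buildable programs.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;  // create the DB if it does not exist yet

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/rocksdb_example", &db);  // placeholder path
  assert(s.ok());

  // Write a key-value pair, then read it back.
  s = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(s.ok() && value == "value1");

  delete db;  // closes the database
  return 0;
}
```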

See the GitHub wiki for more explanation.

The public interface is in include/. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.

Design discussions are conducted at https://www.facebook.com/groups/rocksdb.dev/