Commit Graph

2560 Commits

Chris Riccomini
9db13987b1 Update RocksDB's Java bindings to support multiple native RocksDB builds in the same Jar file. Cross build RocksDB for linux32 and linux64 using Vagrant. Build a cross-platform fat jar that contains osx, linux32, and linux64 RocksDB static builds. 2014-09-26 13:57:12 -07:00
fyrz
5340484266 Built-in comparators in RocksJava
Extended the built-in comparators with ReverseBytewiseComparator.

Reverse key ordering is essential under certain conditions, e.g. when
working with timestamp-versioned data.

Since native comparators were previously not available through the Java API,
both built-in comparators are now exposed via JNI so they can be set at
database creation time.
2014-09-26 10:35:12 +02:00
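For reference, the equivalent setting through the C++ API. This is a minimal sketch, assuming the ReverseBytewiseComparator() accessor from rocksdb/comparator.h is available in this build; the database path is a placeholder.

```cpp
#include <cassert>

#include "rocksdb/comparator.h"
#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Built-in reverse bytewise ordering, e.g. so newer timestamp keys sort first.
  options.comparator = rocksdb::ReverseBytewiseComparator();

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/reverse_cmp_db", &db);
  assert(s.ok());
  delete db;
  return 0;
}
```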
Lei Jin
d439451fab delay initialization of cuckoo table iterator
Summary:
Cuckoo table iterator creation is quite expensive since it needs to load
all data and sort it. After compaction, RocksDB creates a new iterator
on the new file to make sure it is in good state, which makes the DB
creation quite slow. Delay the load and sort until the first seek to
speed it up.

Test Plan: db_bench

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23775
2014-09-25 16:45:37 -07:00
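A minimal sketch of the deferred-initialization pattern described above; this is illustrative only, not the actual CuckooTableIterator code, and all names are hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Defer the expensive load-and-sort until the first Seek(), so that merely
// constructing the iterator (e.g. right after a compaction) stays cheap.
class LazySortedIterator {
 public:
  explicit LazySortedIterator(std::vector<std::string> keys)
      : keys_(std::move(keys)) {}

  void Seek(const std::string& target) {
    InitIfNeeded();  // the sort happens here, on first use
    pos_ = std::lower_bound(keys_.begin(), keys_.end(), target) - keys_.begin();
  }

  bool Valid() const { return initialized_ && pos_ < keys_.size(); }
  const std::string& key() const { return keys_[pos_]; }

 private:
  void InitIfNeeded() {
    if (initialized_) return;
    std::sort(keys_.begin(), keys_.end());
    initialized_ = true;
  }

  std::vector<std::string> keys_;
  size_t pos_ = 0;
  bool initialized_ = false;
};
```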
Lei Jin
94997eab5e reduce memory usage of cuckoo table builder
Summary:
The builder currently buffers all key/value pairs as a vector of
pair<string, string>, which is too much due to std::string overhead: it
wasn't able to fit 1B key/values (12 bytes total each) in 100GB of RAM.
Switch to a plain string to store the key/value sequence, which uses only
12GB of RAM as a result.

Test Plan: db_bench

Reviewers: igor, sdong, yhchiang

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23763
2014-09-25 16:34:24 -07:00
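An illustrative sketch of the space saving, not the real builder code; the class and its fixed-length record layout are hypothetical simplifications.

```cpp
#include <cstddef>
#include <string>

// Instead of a vector<pair<string, string>> (two heap-allocated strings plus
// bookkeeping per entry), append fixed-size key/value records back to back
// into one flat string. For 1B entries of 12 bytes each, the buffer itself
// is roughly 12GB.
class FlatKVBuffer {
 public:
  FlatKVBuffer(size_t key_len, size_t value_len)
      : key_len_(key_len), value_len_(value_len) {}

  void Add(const std::string& key, const std::string& value) {
    buffer_.append(key.data(), key_len_);
    buffer_.append(value.data(), value_len_);
  }

  // Entry i starts at a computed offset; no per-entry pointers are stored.
  const char* KeyAt(size_t i) const {
    return buffer_.data() + i * (key_len_ + value_len_);
  }
  size_t NumEntries() const { return buffer_.size() / (key_len_ + value_len_); }

 private:
  const size_t key_len_;
  const size_t value_len_;
  std::string buffer_;
};
```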
Lei Jin
c6275956e2 improve memory efficiency of cuckoo reader
Summary:
When creating a new iterator, instead of storing a mapping from key to
bucket id for sorting, store only the bucket id and read the key from the
mmap'ed file based on that id. This reduces memory usage from 20 bytes per
entry to only 4 bytes.

Test Plan: db_bench

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23757
2014-09-25 16:15:23 -07:00
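A sketch of the idea above; names such as file_data, bucket_len, and key_len are illustrative stand-ins for the reader's members, not the actual code.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Sort 4-byte bucket ids instead of (key, id) pairs, reading each key back
// from the mmap'ed file region only while comparing.
std::vector<uint32_t> SortedBucketIds(const char* file_data, size_t bucket_len,
                                      size_t key_len, uint32_t num_buckets) {
  std::vector<uint32_t> ids;
  ids.reserve(num_buckets);
  for (uint32_t i = 0; i < num_buckets; ++i) {
    ids.push_back(i);
  }
  std::sort(ids.begin(), ids.end(), [&](uint32_t a, uint32_t b) {
    const char* ka = file_data + a * bucket_len;  // key sits at the bucket start
    const char* kb = file_data + b * bucket_len;
    return std::memcmp(ka, kb, key_len) < 0;
  });
  return ids;  // 4 bytes per entry instead of ~20
}
```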
Lei Jin
581442d446 option to choose modulo when calculating the CuckooTable hash
Summary:
Using modulo to calculate the hash makes lookups ~8% slower, but it has its
benefits: the file size is more predictable and more space efficient.

Test Plan: db_bench

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23691
2014-09-25 13:53:27 -07:00
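The trade-off in miniature; an illustrative sketch rather than the CuckooTable code itself.

```cpp
#include <cstdint>

// A bit-mask requires the bucket count to be a power of two (fast, but the
// file size is forced up to the next power of two), while a modulo works for
// any bucket count (tighter files, ~8% slower lookups).
inline uint64_t BucketByMask(uint64_t hash, uint64_t num_buckets_pow2) {
  return hash & (num_buckets_pow2 - 1);  // num_buckets_pow2 must be 2^k
}

inline uint64_t BucketByModulo(uint64_t hash, uint64_t num_buckets) {
  return hash % num_buckets;  // any bucket count, so file size tracks the data
}
```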
Lei Jin
fbd2dafc9f CompactedDBImpl::MultiGet() for better CuckooTable performance
Summary:
Add the MultiGet API to allow prefetching.
With a file size of 1.5G, I configured a 0.9 hash ratio, which fits 115M
keys and results in 2 hash functions; the lookup QPS is ~4.9M/s vs. 3M/s
for Get().
It is tricky to set the parameters right. Since file size is determined by
a power-of-two factor, the # of keys is fixed in each file. With a big file
size (and thus fewer files), we are more likely to waste a lot of space in
the last file, lowering space utilization as a result. Using a smaller file
size improves the situation, but that harms lookup speed.

Test Plan: db_bench

Reviewers: yhchiang, sdong, igor

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23673
2014-09-25 13:34:51 -07:00
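For reference, a batched lookup through the public C++ MultiGet API, where the batch gives the table reader a chance to prefetch before probing; the keys and handling below are placeholders.

```cpp
#include <string>
#include <vector>

#include "rocksdb/db.h"

void LookupBatch(rocksdb::DB* db) {
  std::vector<rocksdb::Slice> keys = {"key1", "key2", "key3"};
  std::vector<std::string> values;
  // One call for the whole batch instead of one Get() per key.
  std::vector<rocksdb::Status> statuses =
      db->MultiGet(rocksdb::ReadOptions(), keys, &values);
  for (size_t i = 0; i < keys.size(); ++i) {
    if (statuses[i].ok()) {
      // values[i] holds the result for keys[i]
    }
  }
}
```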
Lei Jin
3c68006109 CompactedDBImpl
Summary:
Add a CompactedDBImpl that is enabled when calling OpenForReadOnly() and
the DB has only one level (>0) of files. As a performance comparison,
CuckooTable performs at 2.1M/s with CompactedDBImpl vs. 1.78M/s with
ReadOnlyDBImpl.

Test Plan: db_bench

Reviewers: yhchiang, igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23553
2014-09-25 11:14:01 -07:00
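Opening a fully compacted database read-only from C++, a minimal sketch with a placeholder path and key; per the commit above, when all live files sit in a single level greater than 0 the faster CompactedDBImpl path is selected internally.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  // Read-only open; no compactions or writes will ever run on this handle.
  rocksdb::Status s =
      rocksdb::DB::OpenForReadOnly(options, "/tmp/compacted_db", &db);
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "some_key", &value);
  delete db;
  return 0;
}
```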
Igor Canadi
f7375f39fd Fix double deletes
Summary: While debugging a client's compaction issues, I noticed a bunch of delete bugs: P16329995. MakeTableName returns the sst file name with a "/" prefix. We also need the "/" prefix on the files we get through GetChildren(), so that we can properly dedup the files.

Test Plan: none

Reviewers: sdong, yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23457
2014-09-25 11:08:16 -07:00
Igor Canadi
21ddcf6e4f Remove allow_thread_local
Summary: See https://reviews.facebook.net/D19365

Test Plan: compiles

Reviewers: sdong, yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23907
2014-09-24 13:12:16 -07:00
ankgup87
fb4a492cca Merge pull request #311 from ankgup87/master
[Java] Add remaining options to BlockBasedTableConfig and add getProperty() API
2014-09-24 11:47:21 -07:00
Ankit Gupta
611e286b9d Merge branch 'master' of https://github.com/facebook/rocksdb 2014-09-24 11:44:23 -07:00
Ankit Gupta
0103b4498c Merge branch 'master' of ssh://github.com/ankgup87/rocksdb 2014-09-24 11:43:57 -07:00
Ankit Gupta
1dfb7bb980 Add block based table config options 2014-09-24 11:43:35 -07:00
sdong
cdaf44f9ae Enlarge log size cap when printing file summary
Summary:
The log size cap is currently too small to print the whole file summary.
Enlarge it. To enable this, allow a size to be passed to the log buffer.

Test Plan:
Add a unit test.
make all check

Reviewers: ljin, yhchiang

Reviewed By: yhchiang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21723
2014-09-23 16:56:34 -07:00
ankgup87
7cc1ed7f08 Merge pull request #309 from naveenatceg/staticbuild
RocksDB Staticbuild
2014-09-23 16:09:17 -07:00
Naveen
ba6d660f6d Resolving merge conflict 2014-09-23 16:00:54 -07:00
Naveen
51eeaf65e2 Addressing review comments 2014-09-23 15:24:31 -07:00
Naveen
fd7d3fe604 Addressing review comments (adding an env variable to override the temp directory) 2014-09-23 15:24:19 -07:00
Naveen
cf7ace8865 Addressing review comments 2014-09-23 15:23:57 -07:00
Lei Jin
0a29ce5393 re-enable BlockBasedTable::SetupForCompaction()
Summary:
It was commented out in D22545 by accident. Keep the option in
ImmutableOptions for now. I can make it dynamic in
https://reviews.facebook.net/D23349

Test Plan: make release

Reviewers: sdong, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23865
2014-09-23 14:18:57 -07:00
Igor Canadi
55af370756 Remove TODO for checking index checksums 2014-09-23 13:02:23 -07:00
Igor Canadi
3d74f09979 Fix compile 2014-09-22 15:19:20 -07:00
Igor Canadi
53b0039954 Fix release compile 2014-09-22 15:00:03 -07:00
sdong
d0de413f4d WriteBatchWithIndex to allow different Comparators for different column families
Summary:
Previously, a single comparator was given to WriteBatchWithIndex to index keys for all column families. An extra map from column family ID to comparator is now maintained, which can override the default comparator given in the constructor. A WriteBatchWithIndex::SetComparatorForCF() is added so users can set a comparator per column family.

Also move more code into the anonymous namespace.

Test Plan: Add a unit test

Reviewers: ljin, igor

Reviewed By: igor

Subscribers: dhruba, leveldb, yhchiang

Differential Revision: https://reviews.facebook.net/D23355
2014-09-22 13:47:39 -07:00
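A conceptual sketch of the lookup described above, not the WriteBatchWithIndex implementation; the class and method names here are hypothetical.

```cpp
#include <cstdint>
#include <unordered_map>

#include "rocksdb/comparator.h"

// Index entries are compared with a per-column-family comparator when one has
// been registered, falling back to the default comparator from the constructor.
class PerCfComparatorMap {
 public:
  explicit PerCfComparatorMap(const rocksdb::Comparator* default_cmp)
      : default_cmp_(default_cmp) {}

  void SetComparatorForCF(uint32_t cf_id, const rocksdb::Comparator* cmp) {
    overrides_[cf_id] = cmp;
  }

  const rocksdb::Comparator* GetForCF(uint32_t cf_id) const {
    auto it = overrides_.find(cf_id);
    return it != overrides_.end() ? it->second : default_cmp_;
  }

 private:
  const rocksdb::Comparator* default_cmp_;
  std::unordered_map<uint32_t, const rocksdb::Comparator*> overrides_;
};
```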
Lei Jin
57a32f147f change target_file_size_base to uint64_t
Summary: With int, the file size was constrained to a 4G max.

Test Plan:
grepped for instances and made sure other related variables are also
uint64

Reviewers: sdong, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23697
2014-09-22 11:15:03 -07:00
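A worked illustration of that constraint; a standalone example, not project code.

```cpp
#include <cstdint>
#include <cstdio>

// A 32-bit integer cannot represent a 4GB size target: the value wraps
// (to 0 here, on a typical two's-complement platform), while uint64_t holds it.
int main() {
  int64_t four_gb = 4LL * 1024 * 1024 * 1024;           // 4294967296
  int32_t as_int32 = static_cast<int32_t>(four_gb);     // wraps to 0
  uint64_t as_uint64 = static_cast<uint64_t>(four_gb);  // 4294967296

  std::printf("int32: %d, uint64: %llu\n", as_int32,
              static_cast<unsigned long long>(as_uint64));
  return 0;
}
```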
Lei Jin
5e6aee4325 don't create backup_input if compaction filter v2 is not used
Summary:
Compaction creates the backup_input iterator even though it is only needed
when compaction filter v2 is enabled

Test Plan: make all check

Reviewers: sdong, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23769
2014-09-22 10:36:53 -07:00
Igor Canadi
49b5f94c54 Merge pull request #306 from Liuchang0812/fix_cast
Update logging.cc
2014-09-22 10:17:03 -07:00
liuchang0812
787cb4db29 remove cast, replace %llu with % PRIu64 2014-09-23 01:10:46 +08:00
whu_liuchang
a7574d4fa1 Update logging.cc 2014-09-22 23:37:00 +08:00
whu_liuchang
7e0dcb953f Update logging.cc
fix cast style to use C++ static_cast
2014-09-22 23:24:53 +08:00
Igor Canadi
57fa3cc5b3 Merge pull request #304 from Liuchang0812/fix-check
fixed #303: replace %ld with % PRId64
2014-09-21 11:55:45 -07:00
Igor Canadi
cd44522a94 Merge pull request #305 from Liuchang0812/fix-logging
remove unused variable
2014-09-21 11:54:55 -07:00
liuchang0812
6a031b6a81 remove unused variable 2014-09-21 22:20:00 +08:00
liuchang0812
4436f17bd8 fixed #303: replace %ld with % PRId64 2014-09-21 22:09:48 +08:00
ankgup87
7a1bd057fa Merge pull request #302 from ankgup87/master
[Java] Add rate limiter
2014-09-19 16:14:25 -07:00
Ankit Gupta
423e52cd47 Merge branch 'master' of https://github.com/facebook/rocksdb 2014-09-19 16:12:26 -07:00
Ankit Gupta
bfeef94d31 Add rate limiter 2014-09-19 16:11:59 -07:00
Torrie Fischer
ed9a2df8cc fix unity build 2014-09-19 16:10:33 -07:00
Mark Callaghan
32f2532a0b Print compression_size_percent as a signed int
Summary:
compression_size_percent is an int but was printed as
an unsigned int, so the default of -1 was displayed as a huge number.

Test Plan: make check

Reviewers: sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23679
2014-09-19 13:09:25 -07:00
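The bug in miniature; a standalone example rather than the actual option-dumping code.

```cpp
#include <cstdio>

// Printing a signed -1 with an unsigned format specifier shows a huge number
// instead of -1.
int main() {
  int compression_size_percent = -1;  // the default
  std::printf("as unsigned: %u\n",
              static_cast<unsigned int>(compression_size_percent));  // 4294967295
  std::printf("as signed:   %d\n", compression_size_percent);        // -1
  return 0;
}
```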
sdong
976caca09b Skip AllocateTest if fallocate() is not supported in the file system
Summary: To avoid false positive test failures when the file system doesn't support fallocate. In EnvTest.AllocateTest, we first make a simple fallocate call and check the error codes to rule out the possibility that it is not supported. Skip the test if the error code indicates it is not supported.

Test Plan: Run the test and make sure it passes on file systems supporting and not supporting fallocate

Reviewers: yhchiang, ljin, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23667
2014-09-19 10:55:59 -07:00
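A Linux-specific sketch of the probe described above, not the actual EnvTest code; the function name and probe size are illustrative.

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for fallocate() on glibc
#endif
#include <cerrno>
#include <fcntl.h>

// Issue one small fallocate() and report whether the filesystem supports it,
// so the allocate test can be skipped instead of failing spuriously.
bool FallocateSupported(int fd) {
  int ret = fallocate(fd, /*mode=*/0, /*offset=*/0, /*len=*/4096);
  if (ret != 0 && (errno == EOPNOTSUPP || errno == ENOSYS)) {
    return false;  // unsupported here; skip the test
  }
  return ret == 0;
}
```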
Igor Canadi
3b897cddd7 Enable no-fbcode RocksDB build
Summary: I want to use the open-source build rather than the fbcode one. This enables me to run `ROCKSDB_NO_FBCODE=1 make` and build with my system g++.

Test Plan:
ROCKSDB_NO_FBCODE=1 make
make

Reviewers: sdong, ljin, yhchiang

Reviewed By: yhchiang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23613
2014-09-19 09:27:16 -07:00
Venkatesh Radhakrishnan
f44594743f RocksDB: Format uint64 using PRIu64 in db_impl.cc
Summary: Use PRIu64 to format uint64 in a portable manner

Test Plan: Run "make all check"

Reviewers: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23595
2014-09-18 22:19:41 -07:00
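The portable formatting idiom in a standalone example.

```cpp
#include <cinttypes>
#include <cstdio>

// "%lu" vs "%llu" for uint64_t differs between platforms; the PRIu64 macro
// from <cinttypes> always expands to the matching specifier.
int main() {
  uint64_t file_number = 123456789012345ULL;
  std::printf("file number: %" PRIu64 "\n", file_number);
  return 0;
}
```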
ankgup87
e17bc65c75 Merge pull request #299 from ankgup87/master
[Java] Fix build
2014-09-18 22:15:31 -07:00
Ankit Gupta
b93797abc4 Fix build 2014-09-18 22:13:52 -07:00
Yueh-Hsuan Chiang
adae3ca1fe [Java] Fix JNI link error caused by the removal of options.db_stats_log_interval
Summary: Fix JNI link error caused by the removal of options.db_stats_log_interval in https://reviews.facebook.net/D21915.

Test Plan:
make rocksdbjava
make jtest

Reviewers: ljin, ankgup87

Reviewed By: ankgup87

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23505
2014-09-18 21:09:44 -07:00
Igor Canadi
90b8c07b48 Fix unit tests errors
Summary: Those were introduced with 2fb1fea30f because the flushing behavior changed when max_background_flushes is > 0.

Test Plan: make check

Reviewers: ljin, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23577
2014-09-18 13:32:44 -07:00
Lei Jin
51af7c326c CuckooTable: add one option to allow identity function for the first hash function
Summary:
MurmurHash becomes expensive when we do millions of Get() calls per second
in one thread. Add this option to allow the first hash function to be the
identity function. It results in a QPS increase from 3.7M/s to ~4.3M/s. I
did not observe an improvement in end-to-end RocksDB performance; this may
be caused by other bottlenecks that I will address in a separate diff.

Test Plan:
```
[ljin@dev1964 rocksdb] ./cuckoo_table_reader_test --enable_perf --file_dir=/dev/shm --write --identity_as_first_hash=0
==== Test CuckooReaderTest.WhenKeyExists
==== Test CuckooReaderTest.WhenKeyExistsWithUint64Comparator
==== Test CuckooReaderTest.CheckIterator
==== Test CuckooReaderTest.CheckIteratorUint64
==== Test CuckooReaderTest.WhenKeyNotFound
==== Test CuckooReaderTest.TestReadPerformance
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.272us (3.7 Mqps) with batch size of 0, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.138us (7.2 Mqps) with batch size of 10, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.142us (7.1 Mqps) with batch size of 25, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.142us (7.0 Mqps) with batch size of 50, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.144us (6.9 Mqps) with batch size of 100, # of found keys 125829120

With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.201us (5.0 Mqps) with batch size of 0, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.121us (8.3 Mqps) with batch size of 10, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.123us (8.1 Mqps) with batch size of 25, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.121us (8.3 Mqps) with batch size of 50, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.112us (8.9 Mqps) with batch size of 100, # of found keys 104857600

With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.251us (4.0 Mqps) with batch size of 0, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.107us (9.4 Mqps) with batch size of 10, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.099us (10.1 Mqps) with batch size of 25, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.100us (10.0 Mqps) with batch size of 50, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.116us (8.6 Mqps) with batch size of 100, # of found keys 83886080

With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.189us (5.3 Mqps) with batch size of 0, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.095us (10.5 Mqps) with batch size of 10, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.096us (10.4 Mqps) with batch size of 25, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.098us (10.2 Mqps) with batch size of 50, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.105us (9.5 Mqps) with batch size of 100, # of found keys 73400320

[ljin@dev1964 rocksdb] ./cuckoo_table_reader_test --enable_perf --file_dir=/dev/shm --write --identity_as_first_hash=1
==== Test CuckooReaderTest.WhenKeyExists
==== Test CuckooReaderTest.WhenKeyExistsWithUint64Comparator
==== Test CuckooReaderTest.CheckIterator
==== Test CuckooReaderTest.CheckIteratorUint64
==== Test CuckooReaderTest.WhenKeyNotFound
==== Test CuckooReaderTest.TestReadPerformance
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.230us (4.3 Mqps) with batch size of 0, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.086us (11.7 Mqps) with batch size of 10, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.088us (11.3 Mqps) with batch size of 25, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.083us (12.1 Mqps) with batch size of 50, # of found keys 125829120
With 125829120 items, utilization is 93.75%, number of hash functions: 2.
Time taken per op is 0.083us (12.1 Mqps) with batch size of 100, # of found keys 125829120

With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.159us (6.3 Mqps) with batch size of 0, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.078us (12.8 Mqps) with batch size of 10, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.080us (12.6 Mqps) with batch size of 25, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.080us (12.5 Mqps) with batch size of 50, # of found keys 104857600
With 104857600 items, utilization is 78.12%, number of hash functions: 2.
Time taken per op is 0.082us (12.2 Mqps) with batch size of 100, # of found keys 104857600

With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.154us (6.5 Mqps) with batch size of 0, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.077us (13.0 Mqps) with batch size of 10, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.077us (12.9 Mqps) with batch size of 25, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.078us (12.8 Mqps) with batch size of 50, # of found keys 83886080
With 83886080 items, utilization is 62.50%, number of hash functions: 2.
Time taken per op is 0.079us (12.6 Mqps) with batch size of 100, # of found keys 83886080

With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.218us (4.6 Mqps) with batch size of 0, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.083us (12.0 Mqps) with batch size of 10, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.085us (11.7 Mqps) with batch size of 25, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.086us (11.6 Mqps) with batch size of 50, # of found keys 73400320
With 73400320 items, utilization is 54.69%, number of hash functions: 2.
Time taken per op is 0.078us (12.8 Mqps) with batch size of 100, # of found keys 73400320
```

Reviewers: sdong, igor, yhchiang

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23451
2014-09-18 11:00:48 -07:00
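A conceptual sketch of the option, not the real CuckooTable hashing code; the function name is hypothetical and std::hash stands in for MurmurHash.

```cpp
#include <cstdint>
#include <cstring>
#include <functional>
#include <string>

#include "rocksdb/slice.h"

// When keys are already fixed-width integers, the first probe can reuse the
// key bytes directly and skip one expensive hash computation.
uint64_t FirstHash(const rocksdb::Slice& key, bool identity_as_first_hash) {
  if (identity_as_first_hash && key.size() >= sizeof(uint64_t)) {
    uint64_t h;
    std::memcpy(&h, key.data(), sizeof(h));  // identity: key bytes are the hash
    return h;
  }
  // Stand-in for MurmurHash in this sketch.
  return std::hash<std::string>{}(key.ToString());
}
```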
Yueh-Hsuan Chiang
035043559d Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
Summary:
Fixed a signed-unsigned comparison in spatial_db.cc

utilities/spatialdb/spatial_db.cc:542:38: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
cc1plus: all warnings being treated as errors
make: *** [utilities/spatialdb/spatial_db.o] Error 1

Test Plan:
make spatial_db_test
./spatial_db_test

Reviewers: ljin, sdong, reddragon, igor

Reviewed By: reddragon

Subscribers: reddragon, leveldb

Differential Revision: https://reviews.facebook.net/D23565
2014-09-18 10:57:20 -07:00
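The warning in miniature; a standalone example rather than the spatial_db.cc code.

```cpp
#include <cstddef>
#include <vector>

// Comparing a signed int with an unsigned size() trips -Werror=sign-compare;
// checking the sign and casting one side fixes it.
bool InRange(int index, const std::vector<double>& v) {
  // return index < v.size();  // signed/unsigned comparison warning
  return index >= 0 && static_cast<size_t>(index) < v.size();
}
```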
Igor Canadi
2fb1fea30f Fix synchronization issues 2014-09-18 10:42:54 -07:00