Summary:
Add JNI support for zLinux on the s390x architecture.
Closes https://github.com/facebook/rocksdb/pull/4009
Differential Revision: D8483750
Pulled By: siying
fbshipit-source-id: e681657c27e7a28f1731e08e8570382de5deff44
Summary:
1. Add a new ticker stat rocksdb.number.multiget.keys.found to track the
number of keys successfully read
2. Update rocksdb.memtable.hit/miss in DBImpl::MultiGet(). It was being done in
DBImpl::GetImpl(), but not in MultiGet().
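A hedged sketch of reading the new ticker from RocksJava, assuming the stat is mirrored into the Java `TickerType` enum (the constant name and `multiGetAsList` are taken from the current RocksJava API; fragment inside a method that may throw `RocksDBException`):
```
try (final Statistics statistics = new Statistics();
     final Options options = new Options()
         .setCreateIfMissing(true)
         .setStatistics(statistics);
     final RocksDB db = RocksDB.open(options, "/tmp/multiget-db")) {
  db.put("k1".getBytes(), "v1".getBytes());
  db.multiGetAsList(Arrays.asList("k1".getBytes(), "k2".getBytes()));
  // only "k1" exists, so the found-keys ticker should read 1
  final long found =
      statistics.getTickerCount(TickerType.NUMBER_MULTIGET_KEYS_FOUND);
}
```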
Closes https://github.com/facebook/rocksdb/pull/3730
Differential Revision: D7677364
Pulled By: anand1976
fbshipit-source-id: af22bd0ef8ddc5cf2b4244b0a024e539fe48bca5
Summary:
Changes to support sharing block cache using the Java API.
Previously, DB instances could share the block cache only when the same Options instance was passed to all of them. With this change, it is possible to explicitly create a cache and pass it to multiple Options instances, to share the block cache.
Implemented for [Rocksandra](https://github.com/instagram/cassandra/tree/rocks_3.0), though this feature has been requested by many Java API users over the years.
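A minimal sketch of the new usage (paths and the cache size are arbitrary; fragment inside a method that may throw `RocksDBException`):
```
try (final Cache cache = new LRUCache(64 * 1024 * 1024L)) {
  try (final Options options1 = new Options()
           .setCreateIfMissing(true)
           .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(cache));
       final Options options2 = new Options()
           .setCreateIfMissing(true)
           .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(cache));
       final RocksDB db1 = RocksDB.open(options1, "/tmp/db1");
       final RocksDB db2 = RocksDB.open(options2, "/tmp/db2")) {
    // both instances now compete for the same 64 MB block cache
  }
}
```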
Closes https://github.com/facebook/rocksdb/pull/3623
Differential Revision: D7305794
Pulled By: sagar0
fbshipit-source-id: 03e4e8ed7aeee6f88bada4a8365d4279ede2ad71
Summary:
I modified the Makefile so that we can compile RocksDB on OpenBSD.
The instructions for building have been added to INSTALL.md.
The whole compilation process works fine like this on OpenBSD-current.
Closes https://github.com/facebook/rocksdb/pull/3617
Differential Revision: D7323754
Pulled By: siying
fbshipit-source-id: 990037d1cc69138d22f85bd77ef4dc8c1ba9edea
Summary:
This changes the console output when the RocksJava tests are run. It makes spotting the errors and failures much easier; previously the output was malformed, with results like "ERun" where the "E" represented an error in the preceding test.
Closes https://github.com/facebook/rocksdb/pull/3621
Differential Revision: D7306172
Pulled By: sagar0
fbshipit-source-id: 3fa6f6e1ca6c6ea7ceef55a23ca81903716132b7
Summary:
This is an abstraction for working with custom Comparators implemented in native C++ code from Java. The native comparator must directly extend `rocksdb::Comparator`. When the native comparator is compiled into the RocksDB codebase, you can then create a Java class and a JNI stub to wrap it.
This is useful if the C++/JNI barrier overhead is too much for your application's comparator performance.
An example is provided in `java/rocksjni/native_comparator_wrapper_test.cc` and `java/src/main/java/org/rocksdb/NativeComparatorWrapperTest.java`.
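A rough sketch of the wrapping pattern, following those test files; `newStringComparator()` is the hypothetical JNI stub whose C++ side constructs the native `rocksdb::Comparator` and returns a pointer to it:
```
public class NativeStringComparatorWrapper extends NativeComparatorWrapper {
  @Override
  protected long initializeNative(final long... nativeParameterHandles) {
    // hypothetical JNI stub; implemented in the C++ file alongside the comparator
    return newStringComparator();
  }
  private native long newStringComparator();
}
```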
Closes https://github.com/facebook/rocksdb/pull/3334
Differential Revision: D7172605
Pulled By: miasantreble
fbshipit-source-id: e24b7eb267a3bcb6afa214e0379a1d5e8a2ceabe
Summary:
Add Java-side copy constructors for:
- Options
- DBOptions
- ColumnFamilyOptions
- WriteOptions
along with unit tests to assert the copy worked.
NOTE: Unit tests are failing in Travis, but it looks like a global timeout issue; these tests pass.
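For example, with the new `Options` copy constructor (a sketch; the tweaked option is arbitrary):
```
try (final Options original = new Options().setCreateIfMissing(true);
     final Options copy = new Options(original)) {
  // the copy can now diverge without affecting the original
  copy.setMaxBackgroundCompactions(4);
}
```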
Closes https://github.com/facebook/rocksdb/pull/3450
Differential Revision: D6874425
Pulled By: sagar0
fbshipit-source-id: 5bde68ea5b5225e071faea2628bf8bbf10bd65ab
Summary:
This option was introduced in the C++ API in #1953.
Closes https://github.com/facebook/rocksdb/pull/3064
Differential Revision: D6139010
Pulled By: sagar0
fbshipit-source-id: 164de11d539d174cf3afe7cd40e667049f44b0bc
Summary:
Java's `Status.SubCode` was out of sync with `include/rocksdb/status.h:SubCode`.
When running out of disk space, this led to an `IllegalArgumentException` because of an invalid status code, rather than just returning the corresponding status code without an exception.
I added the missing status codes.
This keeps the behaviour of throwing an `IllegalArgumentException` in case of newly added status codes that are defined in C++ but not yet in Java.
We could consider an alternative strategy: add another code in Java, "UnknownCode", which acts as a catch-all for all those status codes that are not yet mirrored from C++ to Java. This approach would never throw an exception but simply return a non-OK status code.
I think the current approach of throwing an Exception in case of a C++/Java inconsistency is fine, but if you have some opinion on the alternative strategy, then feel free to comment here.
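For instance, a caller can now match on the out-of-disk-space condition instead of hitting an `IllegalArgumentException` (a sketch, assuming an open `db`):
```
try {
  db.put("key".getBytes(), "value".getBytes());
} catch (final RocksDBException e) {
  final Status status = e.getStatus();
  if (status.getCode() == Status.Code.IOError
      && status.getSubCode() == Status.SubCode.NoSpace) {
    // out of disk space - back off and retry later
  }
}
```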
Closes https://github.com/facebook/rocksdb/pull/3050
Differential Revision: D6129682
Pulled By: sagar0
fbshipit-source-id: f2bf44caad650837cffdcb1f93eb793b43580c66
Summary:
This PR also includes some cleanup, bug fixes, and refactoring of the Java API. However, these are really precursors on the road to CompactionFilterFactory support.
Closes https://github.com/facebook/rocksdb/pull/1241
Differential Revision: D6012778
Pulled By: sagar0
fbshipit-source-id: 0774465940ee99001a78906e4fed4ef57068ad5c
Summary:
Now that RocksDB supports conditional merging during point lookups (introduced in #2923), the Cassandra value merge operator can be updated to pass in a limit. The limit needs to be passed in from the Cassandra code.
Closes https://github.com/facebook/rocksdb/pull/2947
Differential Revision: D5938454
Pulled By: sagar0
fbshipit-source-id: d64a72d53170d8cf202b53bd648475c3952f7d7f
Summary:
This option was introduced in the C++ API in RocksDB 5.6 in bb01c1880c. It is now exposed through the RocksJava API.
Closes https://github.com/facebook/rocksdb/pull/2908
Differential Revision: D5864224
Pulled By: sagar0
fbshipit-source-id: 140aa55dcf74b14e4d11219d996735c7fdddf513
Summary:
In our testing cluster, we found that a large number of tombstones had been promoted from kMerge to kValue type after reaching the top level of compaction. Since we used to collect tombstones only in the merge operator, those tombstones could never be collected.
This PR addresses the issue by adding a GC step in the compaction filter, which applies only to kValue-type records. Since those records have already reached the top level of compaction (no earlier data exists), we can safely remove them in the compaction filter without worrying about older data reappearing.
This PR also removes an old optimization in the Cassandra merge operator for single merge operands. We need to do GC even on a single operand, so the optimization does not make sense anymore.
Closes https://github.com/facebook/rocksdb/pull/2855
Reviewed By: sagar0
Differential Revision: D5806445
Pulled By: wpc
fbshipit-source-id: 6eb25629d4ce917eb5e8b489f64a6aa78c7d270b
Summary:
Plumbed ReadOptions::iterate_upper_bound through JNI.
Made the following design choices:
* Used Slice instead of AbstractSlice due to the anticipated use case (key / key prefix). Can change this if anyone disagrees.
* Used Slice instead of a raw byte[]; this seemed cleaner, but necessitated the package-private handle-based Slice constructor. Followed WriteBatch as an example.
* We need a copy constructor for ReadOptions, as we create one base ReadOptions for a particular use case, then clone it and change the iterate_upper_bound for each slice operation. A shallow copy seemed cleanest.
* Hold a reference to the upper bound slice on ReadOptions, in contrast to Snapshot.
Signed a Facebook CLA this morning.
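A sketch of the intended usage, combining the copy constructor and the upper bound (assuming an open `db`; note the bounded `ReadOptions` must not outlive the `Slice`):
```
try (final Slice upperBound = new Slice("prefix9");
     final ReadOptions base = new ReadOptions();
     final ReadOptions bounded = new ReadOptions(base)  // shallow copy
         .setIterateUpperBound(upperBound);
     final RocksIterator it = db.newIterator(bounded)) {
  for (it.seek("prefix0".getBytes()); it.isValid(); it.next()) {
    // visits keys in ["prefix0", "prefix9") only
  }
}
```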
Closes https://github.com/facebook/rocksdb/pull/2872
Differential Revision: D5824446
Pulled By: sagar0
fbshipit-source-id: 74fc51313a10a81ecd348625e2a50ca5b7766888
Summary:
As discussed in #2742, this pull request brings the iterator's [SeekForPrev()](https://github.com/facebook/rocksdb/wiki/SeekForPrev) functionality to the Java API. It affects all locations in the code where previously only Seek() was supported.
All code changes are essentially a copy & paste of the already existing implementations for Seek().
**Please Note**: the changes to the C++ code were applied without fully understanding their effect, so please take a closer look. However, since Seek() and SeekForPrev() have exactly the same signature, I do not expect any mistakes here.
The Java tests are extended with new tests for the additional functionality.
Compilation (`make rocksdbjavastatic`) and the tests (`java/make test`) run without errors.
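A small sketch of the difference (assuming an open `db`): `seek()` positions the iterator at the first key at or after the target, while `seekForPrev()` positions it at the last key at or before the target.
```
try (final RocksIterator it = db.newIterator()) {
  it.seek("b".getBytes());         // first key >= "b"
  it.seekForPrev("b".getBytes());  // last key <= "b"
  if (it.isValid()) {
    final byte[] key = it.key();   // e.g. "a" if "b" itself is absent
  }
}
```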
Closes https://github.com/facebook/rocksdb/pull/2747
Differential Revision: D5721011
Pulled By: sagar0
fbshipit-source-id: c1f951cddc321592c70dd2d32bc04892f3f119f8
Summary:
Remove Cassandra tombstones when reaching the max compaction level (full merge). If all columns are collected, the key will be removed in the next compaction via the compaction filter.
Closes https://github.com/facebook/rocksdb/pull/2791
Reviewed By: sagar0
Differential Revision: D5722465
Pulled By: wpc
fbshipit-source-id: 61e9898a5686551653a16383255aeaab3197e65e
Summary:
I might have missed these while doing some recent Cassandra code reviews.
Closes https://github.com/facebook/rocksdb/pull/2663
Differential Revision: D5520138
Pulled By: sagar0
fbshipit-source-id: 340930afe9efe03c75f535a1da1f89bd3e53c1f9
Summary:
Major changes in this PR:
* Implement CassandraCompactionFilter to remove expired columns and rows (if all columns have expired)
* Move Cassandra-related code from utilities/merge_operators/cassandra to utilities/cassandra/*
* Switch from unique_ptr to shared_ptr<> for Column membership management in RowValue. Since columns have multiple owners in the Merge and GC processes, using shared_ptr helps make RowValue immutable.
* Rename cassandra_merge_test to cassandra_functional_test and add two TTL compaction related tests there.
Closes https://github.com/facebook/rocksdb/pull/2588
Differential Revision: D5430010
Pulled By: wpc
fbshipit-source-id: 9566c21e06de17491d486a68c70f52d501f27687
Summary:
WriteBatchThreadedTest is failing, at least on Mac. The problem seems to be that `wb` is getting GC'd before we finish the write. Explicitly closing it seems to fix it.
Closes https://github.com/facebook/rocksdb/pull/2482
Differential Revision: D5307379
Pulled By: yiwu-arbug
fbshipit-source-id: 8ff7f8170451078c941951f5aafae83afffb7933
Summary:
This diff syncs the Histogram Types in the Java API with the ones in the C++ API (`statistics.h`) and brings them up to date.
I also found that the enum ordering between Java and C++ had gotten out of sync a few years back, with the addition of `SUBCOMPACTION_SETUP_TIME`, so the order has been updated as well.
`READ_NUM_MERGE_OPERANDS` added in #2373 is needed for Cassandra-on-RocksDB work.
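With the types in sync, histograms can be read from RocksJava as usual, for example (a sketch, assuming a `Statistics` object wired into the DB's options and the constant mirrored in `HistogramType`):
```
final HistogramData h =
    statistics.getHistogramData(HistogramType.SUBCOMPACTION_SETUP_TIME);
System.out.println("avg=" + h.getAverage() + " p95=" + h.getPercentile95());
```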
Closes https://github.com/facebook/rocksdb/pull/2429
Differential Revision: D5215623
Pulled By: sagar0
fbshipit-source-id: bd136698c48197e53693275eb52acc9198ee5a4e
Summary:
Adding SstFileWriter's newly introduced put, merge, and delete APIs to the Java API. The C++ APIs were first introduced in #2361.
Add is deprecated in favor of Put.
Merge is especially needed to support streaming for Cassandra-on-RocksDB work in https://issues.apache.org/jira/browse/CASSANDRA-13476.
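A minimal sketch of the new Java surface (keys must be added in sorted order; fragment inside a method that may throw `RocksDBException`):
```
try (final EnvOptions envOptions = new EnvOptions();
     final Options options = new Options();
     final SstFileWriter writer = new SstFileWriter(envOptions, options)) {
  writer.open("/tmp/ingest.sst");
  writer.put("k1".getBytes(), "v1".getBytes());    // preferred over the deprecated add()
  writer.merge("k2".getBytes(), "v2".getBytes());  // needs a merge operator at read time
  writer.delete("k3".getBytes());
  writer.finish();
}
```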
Closes https://github.com/facebook/rocksdb/pull/2392
Differential Revision: D5165091
Pulled By: sagar0
fbshipit-source-id: 6f0ad396a7cbd2e27ca63e702584784dd72acaab
Summary:
There are a couple of warnings while building RocksJava, coming from Javadoc generation:
```
Generating target/apidocs/org/rocksdb/RocksDB.html...
src/main/java/org/rocksdb/RocksDB.java:2139: warning: no throws for org.rocksdb.RocksDBException
public void ingestExternalFile(final List<String> filePathList,
^
src/main/java/org/rocksdb/RocksDB.java:2162: warning: no throws for org.rocksdb.RocksDBException
public void ingestExternalFile(final ColumnFamilyHandle columnFamilyHandle,
^
```
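The fix is presumably just to document the declared exception, along these lines (a sketch; the doc wording follows RocksJava's usual phrasing):
```
/**
 * ...
 * @throws RocksDBException thrown if error happens in underlying native library.
 */
public void ingestExternalFile(final List<String> filePathList,
    final IngestExternalFileOptions ingestExternalFileOptions)
    throws RocksDBException
```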
Closes https://github.com/facebook/rocksdb/pull/2396
Differential Revision: D5178388
Pulled By: sagar0
fbshipit-source-id: a0ab6696d6de78d089a9a860a559f64cc320019e
Summary:
Previously the Java implementation of `RocksDB#addFile` was both incomplete and not in line with the C++ API.
Rather than fix it, since `rocksdb::DB::AddFile` is now deprecated in favour of `rocksdb::DB::IngestExternalFile`, I have removed the old broken implementation and implemented `RocksDB#ingestExternalFile`.
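Usage pairs naturally with `SstFileWriter` (a sketch, assuming an open `db` and an SST file written beforehand):
```
try (final IngestExternalFileOptions ingestOptions =
         new IngestExternalFileOptions()) {
  db.ingestExternalFile(Arrays.asList("/tmp/ingest.sst"), ingestOptions);
}
```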
Closes https://github.com/facebook/rocksdb/issues/2261
Closes https://github.com/facebook/rocksdb/pull/2291
Differential Revision: D5061264
Pulled By: sagar0
fbshipit-source-id: 85df0899fa1b1fc3535175cac4f52353511d4104
Summary:
Replacement of #2147
The change was squashed due to a lot of conflicts.
Closes https://github.com/facebook/rocksdb/pull/2194
Differential Revision: D4929799
Pulled By: siying
fbshipit-source-id: 5cd49c254737a1d5ac13f3c035f128e86524c581
Summary:
Replace Options::use_direct_writes with Options::use_direct_io_for_flush_and_compaction.
Now, if Options::use_direct_io_for_flush_and_compaction = true, we will enable direct I/O for both reads and writes in flush and compaction jobs, whereas Options::use_direct_reads controls user reads like iterators and Get().
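In RocksJava terms, the split looks like this (a sketch using the current setter names):
```
try (final Options options = new Options()
         .setCreateIfMissing(true)
         .setUseDirectIoForFlushAndCompaction(true)  // background flush/compaction I/O
         .setUseDirectReads(true)) {                 // user reads: Get() and iterators
  // open the DB with these options as usual
}
```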
Closes https://github.com/facebook/rocksdb/pull/2117
Differential Revision: D4860912
Pulled By: lightmark
fbshipit-source-id: d93575a8a5e780cf7e40797287edc425ee648c19
Summary:
This adds almost all missing options to RocksJava
Closes https://github.com/facebook/rocksdb/pull/2039
Differential Revision: D4779991
Pulled By: siying
fbshipit-source-id: 4a1bf28
Summary:
Right now, building rocksdbjava on PowerPC is broken due to the JNI library name. I figured out that "uname -m" and Java's os.arch match on the PowerPC architecture, and made use of this to fix the issue. More info can be found in this issue: https://github.com/facebook/rocksdb/issues/1317
Closes https://github.com/facebook/rocksdb/pull/2040
Differential Revision: D4779967
Pulled By: siying
fbshipit-source-id: 259f939
Summary:
I have manually audited the entire RocksJava code base.
Sorry for the large pull request; I have broken it down into many small atomic commits though.
My initial intention was to fix the warnings that appear when running RocksJava on Java 8 with `-Xcheck:jni`, for example when running `make jtest` you would see many errors similar to:
```
WARNING in native method: JNI call made without checking exceptions when required to from CallObjectMethod
WARNING in native method: JNI call made without checking exceptions when required to from CallVoidMethod
WARNING in native method: JNI call made without checking exceptions when required to from CallStaticVoidMethod
...
```
A few of those warnings still remain, however they seem to come directly from the JVM and are not directly related to RocksJava; I am in contact with the OpenJDK hotspot-dev mailing list about these - http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-February/025981.html.
As a result of fixing these, I realised we were not r
Closes https://github.com/facebook/rocksdb/pull/1890
Differential Revision: D4591758
Pulled By: siying
fbshipit-source-id: 7f7fdf4
Summary:
The two options, min_partial_merge_operands and verify_checksums_in_compaction, are seldom used. Remove them to reduce the total number of options. Also remove them from the Java and C interfaces.
Closes https://github.com/facebook/rocksdb/pull/1902
Differential Revision: D4601219
Pulled By: siying
fbshipit-source-id: aad4cb2
Summary:
Remove disableDataSync and the similarly named disable_data_sync options.
This is being done to simplify options, and also because the performance gains of this feature can be achieved by other methods.
Closes https://github.com/facebook/rocksdb/pull/1859
Differential Revision: D4541292
Pulled By: sagar0
fbshipit-source-id: 5b3a6ca
Summary:
Partition index blocks and use the partition index as a 2nd-level index.
The two-level index can be used by setting
BlockBasedTableOptions::kTwoLevelIndexSearch as the index type and
configuring BlockBasedTableOptions::index_per_partition.
t15539501
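A hedged RocksJava sketch, assuming the index type is mirrored into the Java `IndexType` enum and an existing `options` object (the per-partition knob may not be exposed under the same name on the Java side):
```
final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
    .setIndexType(IndexType.kTwoLevelIndexSearch);
options.setTableFormatConfig(tableConfig);
```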
Closes https://github.com/facebook/rocksdb/pull/1814
Differential Revision: D4473535
Pulled By: maysamyabandeh
fbshipit-source-id: bffb87e
Summary:
Remove the ticker count because:
* Having to reset the ticker count in WriteImpl is inefficient;
* It doesn't make sense to have it as a ticker count if multiple DB
instances share a statistics object.
Closes https://github.com/facebook/rocksdb/pull/1531
Differential Revision: D4194442
Pulled By: yiwu-arbug
fbshipit-source-id: e2110a9
Summary:
Needed for working with `get` after `merge` on a WBWI.
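A sketch of the resulting capability, assuming the WBWI read methods from the current Java API; `getFromBatchAndDB` resolves merges pending in the batch against the DB (fragment inside a method that may throw `RocksDBException`):
```
try (final StringAppendOperator append = new StringAppendOperator();
     final Options options = new Options()
         .setCreateIfMissing(true)
         .setMergeOperator(append);
     final RocksDB db = RocksDB.open(options, "/tmp/wbwi-db");
     final WriteBatchWithIndex wbwi = new WriteBatchWithIndex(true);
     final ReadOptions readOptions = new ReadOptions()) {
  db.put("key".getBytes(), "v1".getBytes());
  wbwi.merge("key".getBytes(), "v2".getBytes());
  // resolves to "v1,v2" without writing the batch to the DB
  final byte[] value = wbwi.getFromBatchAndDB(db, readOptions, "key".getBytes());
}
```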
Closes https://github.com/facebook/rocksdb/pull/1093
Differential Revision: D4137978
Pulled By: yhchiang
fbshipit-source-id: e18d50d
Summary:
- Deprecated RateLimiterConfig and GenericRateLimiterConfig
- Introduced RateLimiter
It is now possible to use all of the related C++ methods from RocksJava as well.
A notable method is setBytesPerSecond, which can change the allowed
number of bytes per second at runtime.
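Example of the new API (a sketch; the rates are arbitrary):
```
try (final RateLimiter rateLimiter = new RateLimiter(10 * 1024 * 1024L);  // 10 MB/s
     final Options options = new Options().setRateLimiter(rateLimiter)) {
  // ... later, at runtime, without reopening the DB:
  rateLimiter.setBytesPerSecond(50 * 1024 * 1024L);
}
```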
Test Plan:
make rocksdbjava
make jtest
Reviewers: adamretter, yhchiang, ankgup87
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35715
Summary: To reduce the number of options, merge source_compaction_factor, max_grandparent_overlap_bytes and expanded_compaction_factor into max_compaction_bytes.
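The merged knob as it appears in RocksJava (a sketch, assuming the current setter name and an existing `options` object):
```
// replaces source_compaction_factor, max_grandparent_overlap_bytes
// and expanded_compaction_factor
options.setMaxCompactionBytes(64 * 1024 * 1024L);
```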
Test Plan: Add two new unit tests. Run all existing tests, including jtest.
Reviewers: yhchiang, igor, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D59829
* Rename RocksDB#remove -> RocksDB#delete to match the C++ API; added deprecated versions of RocksDB#remove for backwards compatibility.
* Add missing experimental feature RocksDB#singleDelete
* [refactor] Split Java ColumnFamilyOptions into mutable and immutable and implement any missing immutable options
* [feature] Implement RocksDB#setOptions (see the sketch below)
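A sketch covering the first and last items above, assuming an open `db` and the MutableColumnFamilyOptions builder API (fragment inside a method that may throw `RocksDBException`):
```
db.delete("key".getBytes());  // formerly db.remove(...)
db.setOptions(db.getDefaultColumnFamily(),
    MutableColumnFamilyOptions.builder()
        .setWriteBufferSize(32 * 1024 * 1024L)
        .build());
```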
Summary: filter_deletes is not a frequently used feature. Remove it.
Test Plan: Run all test suites.
Reviewers: igor, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D59427
Summary: Need to change several more places to fix the Java tests after the earlier change.
Test Plan:
make jtest
under java, run "make db_bench"
Reviewers: yhchiang, andrewkr, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D59541
Summary:
memtable_prefix_bloom_probes is not a critical option. Remove it to reduce the number of options.
It's easy for users to make mistakes with memtable_prefix_bloom_bits, so turn it into memtable_prefix_bloom_bits_ratio.
Test Plan: Run all existing tests
Reviewers: yhchiang, igor, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: gunnarku, yoshinorim, MarkCallaghan, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D59199
Summary:
When a block-based table file is opened, if prefetch_index_and_filter is true, it will prefetch the index and filter blocks, putting them into the block cache.
What this feature adds: when an L0 block-based table file is opened, if pin_l0_filter_and_index_blocks_in_cache is true in the options (and prefetch_index_and_filter is true), then the filter and index blocks aren't released back to the block cache at the end of BlockBasedTableReader::Open(). Instead, the table reader takes ownership of them, hence pinning them, i.e. the LRU cache will never push them out. Meanwhile, in the table reader, further accesses will not hit the block cache, thus avoiding lock contention.
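In RocksJava the pair of options looks like this (a sketch, assuming the current setter names and an existing `options` object):
```
final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
    .setCacheIndexAndFilterBlocks(true)          // prefetch into the block cache on open
    .setPinL0FilterAndIndexBlocksInCache(true);  // L0 readers keep their blocks pinned
options.setTableFormatConfig(tableConfig);
```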
Test Plan:
'export TEST_TMPDIR=/dev/shm/ && DISABLE_JEMALLOC=1 OPT=-g make all valgrind_check -j32' is OK.
I didn't run the Java tests, as I don't have Java set up on my devserver.
Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56133