Commit Graph

8218 Commits

Author SHA1 Message Date
Yanqin Jin
30edf1874c Change buckifier to support parameterized dependencies (#5648)
Summary:
Users may want to specify extra dependencies via buck. This PR allows users to pass additional dependencies as a JSON object so that the buckifier script can generate a TARGETS file with the desired extra dependencies.

Test plan (on dev server)
```
$python buckifier/buckify_rocksdb.py '{"fake": {"extra_deps": [":test_dep", "//fakes/module:mock1"], "extra_compiler_flags": ["-DROCKSDB_LITE", "-Os"]}}'
Generating TARGETS
Extra dependencies:
{'': {'extra_compiler_flags': [], 'extra_deps': []}, 'test_dep1': {'extra_compiler_flags': ['-O2', '-DROCKSDB_LITE'], 'extra_deps': [':fake', '//dep1/mock']}}
Generated TARGETS Summary:
- 5 libs
- 0 binarys
- 296 tests
```
Verify the TARGETS file.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5648

Differential Revision: D16565043

Pulled By: riversand963

fbshipit-source-id: a6ef02274174fcf159692d7b846e828454d01e89
2019-08-02 10:55:17 -07:00
Zhongyi Xie
d1c9ede195 Fix duplicated file names in PurgeObsoleteFiles (#5603)
Summary:
Currently in `DBImpl::PurgeObsoleteFiles`, the list of candidate files is created by combining the result of calling LogFileName on `log_delete_files` with `full_scan_candidate_files`.

In full_scan_candidate_files, the filenames look like this:
{file_name = "074715.log", file_path = "/txlogs/3306"},
but LogFileName produces filenames that prepend a slash:
{file_name = "/074715.log", file_path = "/txlogs/3306"},

This confuses the dedup step here: bb4178066d/db/db_impl/db_impl_files.cc (L339-L345)

Because duplicates still exist, DeleteFile is called on the same file twice, and hits an error on the second try. Error message: Failed to mark /txlogs/3302/764418.log as trash.

The root cause is the use of `kDumbDbName` when generating file names; it creates file names like /074715.log. This PR removes the use of `kDumbDbName` and creates paths without a leading '/' when dbname can be ignored.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5603

Test Plan: make check

Differential Revision: D16413203

Pulled By: miasantreble

fbshipit-source-id: 6ba8288382c55f7d5e3892d722fc94b57d2e4491
2019-08-01 15:50:05 -07:00
Levi Tamasi
1dfc5eaab0 Test the various configurations in parallel in MergeOperatorPinningTest (#5659)
Summary:
MergeOperatorPinningTest.Randomized frequently times out under TSAN
because it tests ~40 option configurations sequentially in a loop. The
patch parallelizes the tests of the various configurations to make the
test complete faster.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5659

Test Plan: Tested using buck test mode/dev-tsan ...

Differential Revision: D16587518

Pulled By: ltamasi

fbshipit-source-id: 65bd25c0ad9a23587fed5592e69c1a0097fa27f6
2019-07-31 15:20:26 -07:00
Manuel Ung
f622ca2c7c WriteUnPrepared: savepoint support (#5627)
Summary:
Add savepoint support when the current transaction has flushed unprepared batches.

Rolling back to a savepoint is similar to rolling back a transaction. It requires the set of keys that have changed since the savepoint, re-reading those keys at the snapshot taken at that savepoint, and then restoring the old keys by writing out another unprepared batch.

For this strategy to work though, we must be capable of reading keys at a savepoint. This does not work if keys were written out using the same sequence number before and after a savepoint. Therefore, when we flush out unprepared batches, we must split the batch by savepoint if any savepoints exist.

eg. If we have the following:
```
Put(A)
Put(B)
Put(C)
SetSavePoint()
Put(D)
Put(E)
SetSavePoint()
Put(F)
```

Then we will write out 3 separate unprepared batches:
```
Put(A) 1
Put(B) 1
Put(C) 1
Put(D) 2
Put(E) 2
Put(F) 3
```

This is so that when we roll back to, e.g., the first savepoint, we can just read keys at snapshot_seq = 1.
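
A minimal sketch of the batch-splitting idea, using hypothetical types and helper names rather than the actual WriteUnprepared implementation: buffered operations are cut at each savepoint boundary so that every savepoint maps to its own sequence number.
```
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical representation of a buffered write and a flushed batch.
struct Op { std::string key; std::string value; };
struct UnpreparedBatch { uint64_t seq; std::vector<Op> ops; };

// Split buffered ops at savepoint boundaries. 'savepoints' holds the indices
// into 'ops' at which SetSavePoint() was called.
std::vector<UnpreparedBatch> SplitBySavepoint(const std::vector<Op>& ops,
                                              const std::vector<size_t>& savepoints,
                                              uint64_t first_seq) {
  std::vector<UnpreparedBatch> batches;
  UnpreparedBatch current{first_seq, {}};
  size_t next_sp = 0;
  for (size_t i = 0; i < ops.size(); ++i) {
    if (next_sp < savepoints.size() && i == savepoints[next_sp]) {
      // Savepoint boundary: flush the current batch and start a new one
      // with the next sequence number.
      batches.push_back(current);
      current = UnpreparedBatch{current.seq + 1, {}};
      ++next_sp;
    }
    current.ops.push_back(ops[i]);
  }
  batches.push_back(current);
  return batches;
}
```
Rolling back to the first savepoint then amounts to reading keys at the sequence number recorded for that batch, as in the example above.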
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5627

Differential Revision: D16584130

Pulled By: lth

fbshipit-source-id: 6d100dd548fb20c4b76661bd0f8a2647e64477fa
2019-07-31 13:39:39 -07:00
Manuel Ung
d599135a03 WriteUnPrepared: use WriteUnpreparedTxnReadCallback for ValidateSnapshot (#5657)
Summary:
In DeferSnapshotSavePointTest, writes were failing with a snapshot validation error because the key with the latest sequence number was an unprepared key from the current transaction.

Fix this by passing down the correct read callback.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5657

Differential Revision: D16582466

Pulled By: lth

fbshipit-source-id: 11645dac0e7c1374d917ef5fdf757d13c1d1108d
2019-07-31 10:44:56 -07:00
Eli Pozniansky
4834dab578 Improve CPU Efficiency of ApproximateSize (part 2) (#5609)
Summary:
In some cases, we don't have to get a really accurate number. Something like 10% off is fine, so we can create a new option for that use case. In this case, we can calculate the size of full files first, and avoid estimation inside SST files if the full files already give us a huge number. For example, if we have already covered 100GB of data, we should be able to skip partial dives into 10 SST files of 30MB each.
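
A rough sketch of the idea, with a hypothetical FileMeta type rather than the actual ApproximateSizes code: add up the files fully contained in the queried range first, and only do the expensive per-file estimation when those full files don't already dominate within the allowed error margin.
```
#include <cstdint>
#include <vector>

struct FileMeta {
  uint64_t file_size;
  bool fully_contained;  // file lies entirely inside the queried range
  uint64_t EstimateOverlap() const { return file_size / 2; }  // stand-in for the costly dive
};

// error_margin of e.g. 0.1 means "10% off is fine".
uint64_t ApproximateSizeSketch(const std::vector<FileMeta>& files, double error_margin) {
  uint64_t full_total = 0, partial_total = 0;
  for (const auto& f : files) {
    if (f.fully_contained) full_total += f.file_size;
    else partial_total += f.file_size;
  }
  // If the partially overlapping files cannot change the answer by more than
  // the allowed margin, skip the expensive per-file estimation entirely.
  if (partial_total <= error_margin * full_total) {
    return full_total;
  }
  uint64_t total = full_total;
  for (const auto& f : files) {
    if (!f.fully_contained) total += f.EstimateOverlap();
  }
  return total;
}
```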
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5609

Differential Revision: D16433481

Pulled By: elipoz

fbshipit-source-id: 5830b31e1c656d0fd3a00d7fd2678ddc8f6e601b
2019-07-31 08:50:00 -07:00
Levi Tamasi
b538e756c2 Split the recent block based table changes between 6.3 and 6.4 in HISTORY.md
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5653

Differential Revision: D16573445

Pulled By: ltamasi

fbshipit-source-id: 19c639044fcfd43b5d5c627c8def33ff2dbb2af8
2019-07-30 17:46:02 -07:00
Fosco Marotto
265db3ebb5 Update history and version for 6.4.0 (#5652)
Summary:
The master branch had been left at 6.2 while the history of 6.3 and beyond was merged. Updated this to be correct.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5652

Differential Revision: D16570498

Pulled By: gfosco

fbshipit-source-id: 79f62ec570539a3e3d7d7c84a6cf7b722395fafe
2019-07-30 16:10:06 -07:00
Yanqin Jin
55f4f5486d Update buckifier templates (#5647)
Summary:
Update buckifier templates in the scripts.

Test plan (on devserver)
```
$python buckifier/buckify_rocksdb.py
```
Then
```
$git diff
```
Verify that generated TARGETS file is the same (except for indentation).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5647

Differential Revision: D16555647

Pulled By: riversand963

fbshipit-source-id: 32574a4d0e820858eab2391304dd731141719bcd
2019-07-30 16:00:35 -07:00
Yi Wu
849a8c0ae0 fix sign compare warnings (#5651)
Summary:
Fix -Wsign-compare warnings for gcc9.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5651

Test Plan: Tested with ubuntu19.10+gcc9

Differential Revision: D16567428

fbshipit-source-id: 730b2704d42ba0c4e4ea946a3199bbb34be4c25c
2019-07-30 14:12:54 -07:00
Manuel Ung
399f477818 WriteUnPrepared: Use WriteUnpreparedTxnReadCallback for MultiGet (#5634)
Summary:
The `TransactionTest.MultiGetBatchedTest` tests were failing with unprepared batches because we were not using the correct callbacks. Override MultiGet to pass down the correct ReadCallback. A similar problem is also fixed in WritePrepared.

This PR also fixes an issue similar to (https://github.com/facebook/rocksdb/pull/5147), but for MultiGet instead of Get.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5634

Differential Revision: D16552674

Pulled By: lth

fbshipit-source-id: 736eaf8e919c6b13d5f5655b1c0d36b57ad04804
2019-07-29 17:56:13 -07:00
haoyuhuang
e648c1d9eb Cache simulator: Optimize hybrid row-block cache. (#5616)
Summary:
This PR optimizes the hybrid row-block cache simulator. If a Get request hits the cache, we treat all its future accesses as hits.

Consider a Get request (no snapshot) that accesses multiple files, e.g., file1, file2, file3. We construct the row key as "fdnumber_key_0". Before this PR, if it hits the cache when searching for the key in file1, we continue to process its accesses in file2 and file3, which is unnecessary.

With this PR, if "file1_key_0" is in the cache, we treat all future accesses of this Get request as hits.
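
A small sketch of this short-circuit, using hypothetical simulator types rather than the actual trace-analyzer code: once the row key built from the first file is found in the cache, the remaining per-file accesses of the same Get are counted as hits without touching the block cache.
```
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical row cache keyed by "<sst file number>_<user key>_0".
std::string RowKey(uint64_t fd_number, const std::string& user_key) {
  return std::to_string(fd_number) + "_" + user_key + "_0";
}

struct GetStats { uint64_t hits = 0; uint64_t misses = 0; };

// files: SST file numbers the Get request touches, in access order.
void SimulateGet(const std::vector<uint64_t>& files, const std::string& user_key,
                 std::unordered_set<std::string>& row_cache, GetStats& stats) {
  for (size_t i = 0; i < files.size(); ++i) {
    const std::string key = RowKey(files[i], user_key);
    if (row_cache.count(key)) {
      // Row cache hit: all remaining accesses of this Get are hits too.
      stats.hits += files.size() - i;
      return;
    }
    stats.misses++;         // the real simulator falls through to the block cache here
    row_cache.insert(key);  // populate the row cache for future Gets
  }
}
```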
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5616

Differential Revision: D16453187

Pulled By: HaoyuHuang

fbshipit-source-id: 56f3169cc322322305baaf5543226a0824fae19f
2019-07-29 10:58:15 -07:00
Manuel Ung
80d7067cb2 Use int64_t instead of ssize_t (#5638)
Summary:
The ssize_t type was introduced in https://github.com/facebook/rocksdb/pull/5633, but it seems like it's a POSIX-specific type.

I just need a signed type to represent a number of bytes, so use int64_t instead. It seems like we have a typedef from SSIZE_T for Windows, but it doesn't seem like we ever include "port/port.h" in our public header files.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5638

Differential Revision: D16526269

Pulled By: lth

fbshipit-source-id: 8d3a5c41003951b74b29bc5f1d949b2b22da0cee
2019-07-26 16:36:49 -07:00
Levi Tamasi
3f89af1c39 Reduce the number of random iterations in compact_on_deletion_collector_test (#5635)
Summary:
This test frequently times out under TSAN; reduce the number of random
iterations to make it complete faster.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5635

Test Plan: buck test mode/dev-tsan internal_repo_rocksdb/repo:compact_on_deletion_collector_test

Differential Revision: D16523505

Pulled By: ltamasi

fbshipit-source-id: 6a69909bce9d204c891150fcb3d536547b3253d0
2019-07-26 15:53:34 -07:00
haoyuhuang
70c7302fb5 Block cache simulator: Add pysim to simulate caches using reinforcement learning. (#5610)
Summary:
This PR implements cache eviction using reinforcement learning. It includes two implementations:
1. An implementation of Thompson Sampling for the Bernoulli Bandit [1].
2. An implementation of LinUCB with disjoint linear models [2].

The idea is that a cache uses multiple eviction policies, e.g., MRU, LRU, and LFU. The cache learns which eviction policy is the best and uses it upon a cache miss.
Thompson Sampling is contextless and does not include any features.
LinUCB includes features such as level, block type, caller, column family id to decide which eviction policy to use.

[1] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. 2018. A Tutorial on Thompson Sampling. Found. Trends Mach. Learn. 11, 1 (July 2018), 1-96. DOI: https://doi.org/10.1561/2200000070
[2] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web (WWW '10). ACM, New York, NY, USA, 661-670. DOI=http://dx.doi.org/10.1145/1772690.1772758
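
As an illustration of the first approach, here is a minimal Thompson Sampling sketch in C++ (the actual pysim implementation is in Python; the arm and reward definitions here are illustrative assumptions): each eviction policy is an arm with a Beta posterior, the policy with the largest sampled value handles the next eviction, and the observed outcome is the reward.
```
#include <random>
#include <vector>

// One arm per eviction policy (e.g. LRU, MRU, LFU); Beta(alpha, beta) posterior
// over the probability that the policy makes a "good" eviction decision.
struct Arm { double alpha = 1.0, beta = 1.0; };

class ThompsonSampler {
 public:
  explicit ThompsonSampler(size_t num_policies) : arms_(num_policies) {}

  // Pick the policy for the next eviction: sample from each posterior and
  // take the argmax.
  size_t SelectPolicy() {
    size_t best = 0;
    double best_sample = -1.0;
    for (size_t i = 0; i < arms_.size(); ++i) {
      double s = SampleBeta(arms_[i].alpha, arms_[i].beta);
      if (s > best_sample) { best_sample = s; best = i; }
    }
    return best;
  }

  // reward = 1 if the decision turned out well (e.g. the evicted block was
  // not re-requested soon), 0 otherwise.
  void Update(size_t policy, int reward) {
    arms_[policy].alpha += reward;
    arms_[policy].beta += 1 - reward;
  }

 private:
  double SampleBeta(double a, double b) {
    std::gamma_distribution<double> ga(a, 1.0), gb(b, 1.0);
    double x = ga(rng_), y = gb(rng_);
    return x / (x + y);
  }
  std::vector<Arm> arms_;
  std::mt19937 rng_{42};
};
```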
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5610

Differential Revision: D16435067

Pulled By: HaoyuHuang

fbshipit-source-id: 6549239ae14115c01cb1e70548af9e46d8dc21bb
2019-07-26 14:41:13 -07:00
Manuel Ung
41df734830 WriteUnPrepared: Add new variable write_batch_flush_threshold (#5633)
Summary:
Instead of reusing `TransactionOptions::max_write_batch_size` for determining when to flush a write batch for write unprepared, add a new variable called `write_batch_flush_threshold` for this use case instead.

Also add `TransactionDBOptions::default_write_batch_flush_threshold` which sets the default value if `TransactionOptions::write_batch_flush_threshold` is unspecified.
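
A hedged usage sketch based on the field names described in this PR; the threshold values are made up for illustration.
```
#include "rocksdb/utilities/transaction_db.h"

using namespace rocksdb;

void ConfigureFlushThreshold(TransactionDBOptions& txn_db_options,
                             TransactionOptions& txn_options) {
  // DB-wide default used when a transaction does not set its own threshold.
  txn_db_options.default_write_batch_flush_threshold = 1 << 20;  // illustrative value
  // Per-transaction override; unprepared batches are flushed once the write
  // batch grows past this many bytes.
  txn_options.write_batch_flush_threshold = 256 << 10;           // illustrative value
}
```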
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5633

Differential Revision: D16520364

Pulled By: lth

fbshipit-source-id: d75ae5a2141ce7708982d5069dc3f0b58d250e8c
2019-07-26 12:56:26 -07:00
Levi Tamasi
3617287e0e Parallelize db_bloom_filter_test (#5632)
Summary:
This test frequently times out under TSAN; parallelizing it should fix
this issue.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5632

Test Plan:
make check
buck test mode/dev-tsan internal_repo_rocksdb/repo:db_bloom_filter_test

Differential Revision: D16519399

Pulled By: ltamasi

fbshipit-source-id: 66e05a644d6f79c6d544255ffcf6de195d2d62fe
2019-07-26 11:48:17 -07:00
Manuel Ung
230b909da8 Fix PopSavePoint to merge info into the previous savepoint (#5628)
Summary:
Transaction::RollbackToSavePoint undoes the modifications made since the beginning of the SavePoint and also unlocks the corresponding keys, which are tracked in the last SavePoint. Currently ::PopSavePoint simply discards these tracked keys, leaving them locked in the lock manager. This breaks a subsequent ::RollbackToSavePoint, as it loses track of such keys and thus cannot unlock them. The patch fixes ::PopSavePoint by passing the tracked key information on to the previous SavePoint.
Fixes https://github.com/facebook/rocksdb/issues/5618
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5628

Differential Revision: D16505325

Pulled By: lth

fbshipit-source-id: 2bc3b30963ab4d36d996d1f66543c93abf358980
2019-07-26 11:39:30 -07:00
Yanqin Jin
74782cec32 Fix target 'clean' to include parallel test binaries (#5629)
Summary:
The current `clean` target in the Makefile does not remove parallel test
binaries. Fix this.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5629

Test Plan:
(on devserver)
Take file_reader_writer_test for instance.
```
$make -j32 file_reader_writer_test
$make clean
```
Verify that the binary file 'file_reader_writer_test' is deleted by `make clean`.

Differential Revision: D16513176

Pulled By: riversand963

fbshipit-source-id: 70acb9f56c928a494964121b86aacc0090f31ff6
2019-07-26 09:56:09 -07:00
Eli Pozniansky
9625a2bc2b Added SizeApproximationOptions to DB::GetApproximateSizes (#5626)
Summary:
Adds a new DB::GetApproximateSizes overload with a SizeApproximationOptions argument, which allows adding more options/knobs to the DB::GetApproximateSizes call (beyond only the include_flags).
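
A hedged usage sketch: the overload shape follows this PR's description; consult include/rocksdb/options.h for the exact SizeApproximationOptions fields.
```
#include "rocksdb/db.h"

using namespace rocksdb;

void ApproximateRangeSize(DB* db, ColumnFamilyHandle* cf,
                          const Slice& start, const Slice& limit) {
  SizeApproximationOptions options;  // defaults roughly match the old include_flags behavior
  Range range(start, limit);
  uint64_t size = 0;
  db->GetApproximateSizes(options, cf, &range, 1, &size);
  // 'size' now holds the estimated number of bytes in [start, limit).
}
```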
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5626

Differential Revision: D16496913

Pulled By: elipoz

fbshipit-source-id: ee8c6c182330a285fa056ecfc3905a592b451720
2019-07-25 22:42:30 -07:00
Yanqin Jin
ae152ee666 Avoid user key copying for Get/Put/Write with user-timestamp (#5502)
Summary:
In previous https://github.com/facebook/rocksdb/issues/5079, we added user-specified timestamps to `DB::Get()` and `DB::Put()`. The limitation is that these two functions may cause extra memory allocation and key copying. The reason is that `WriteBatch` does not allocate extra memory for timestamps because it is not aware of the timestamp size, and we did not provide an API to assign/update the timestamp of each key within a `WriteBatch`.
We address these issues in this PR by doing the following.
1. Add a `timestamp_size_` to `WriteBatch` so that `WriteBatch` can take timestamps into account when calling `WriteBatch::Put`, `WriteBatch::Delete`, etc.
2. Add APIs `WriteBatch::AssignTimestamp` and `WriteBatch::AssignTimestamps` so that application can assign/update timestamps for each key in a `WriteBatch`.
3. Avoid key copy in `GetImpl` by adding new constructor to `LookupKey`.
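
A hedged sketch of the new APIs described above; the constructor argument for the timestamp size and the AssignTimestamp signature are assumptions based on this PR's summary, not a definitive rendering of the header.
```
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

using namespace rocksdb;

void WriteWithTimestamps(DB* db, const WriteOptions& write_opts) {
  const size_t kTsSize = 8;  // fixed-size, e.g. 64-bit timestamp
  // Assumed third constructor argument carrying the timestamp size.
  WriteBatch batch(0 /* reserved_bytes */, 0 /* max_bytes */, kTsSize);
  batch.Put("key1", "value1");  // space for timestamps is reserved per key
  batch.Put("key2", "value2");
  std::string ts(kTsSize, '\0');  // encode the actual timestamp here
  batch.AssignTimestamp(ts);      // assign the same timestamp to every key
  db->Write(write_opts, &batch);
}
```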

Test plan (on devserver):
```
$make clean && COMPILE_WITH_ASAN=1 make -j32 all
$./db_basic_test --gtest_filter=Timestamp/DBBasicTestWithTimestampWithParam.PutAndGet/*
$make check
```
If the API extension looks good, I will add more unit tests.

Some simple benchmark using db_bench.
```
$rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillseq,readrandom -num=1000000
$rm -rf /dev/shm/dbbench/* && TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=1000000 -disable_wal=true
```
Master is at a78503bd6c.
```
|        | readrandom | fillrandom |
| master | 15.53 MB/s | 25.97 MB/s |
| PR5502 | 16.70 MB/s | 25.80 MB/s |
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5502

Differential Revision: D16340894

Pulled By: riversand963

fbshipit-source-id: 51132cf792be07d1efc3ac33f5768c4ee2608bb8
2019-07-25 15:27:39 -07:00
Chad Austin
0d16fad51b rocksdb: build on macosx
Summary:
Make rocksdb build on macos:
1) Reorganize OS-specific flags and deps in rocksdb/src/TARGETS
2) Sandbox fbcode apple platform builds from repo root include path (which conflicts
    with layout of rocksdb headers).
3) Fix dep-translation for bzip2.

Reviewed By: andrewjcg

Differential Revision: D15125826

fbshipit-source-id: 8e143c689b88b5727e54881a5e80500f879a320b
2019-07-25 11:45:54 -07:00
Maysam Yabandeh
d9dc6b4637 Declare snapshot refresh incompatible with delete range (#5625)
Summary:
The ::snap_refresh_nanos option is incompatible with the DeleteRange feature. Currently the code relies on range_del_agg.IsEmpty() to disable it if there are range delete tombstones. However, ::IsEmpty does not guarantee that there are no RangeDelete tombstones in the SST files. The patch declares the two features incompatible in inline comments until we later figure out how to properly detect the presence of RangeDelete tombstones in compaction inputs.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5625

Differential Revision: D16468218

Pulled By: maysamyabandeh

fbshipit-source-id: bd7beca278bc7e1db75e7ee4522d05a3a6ca86f4
2019-07-24 15:22:14 -07:00
sdong
7260347fd1 Auto Roll Logger to add some extra checking to avoid segfault. (#5623)
Summary:
AutoRollLogger sets GetStatus() to non-OK if the log file fails to be created, and logger_ is set to null. It is left to the caller to check the status before calling functions of this class. There is no harm in adding another null check on logger_ before using it, so that if users misuse the logger, they don't get a segfault.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5623

Test Plan: Run all existing tests.

Differential Revision: D16466251

fbshipit-source-id: 262b885eec28bf741d91e9191c3cb5ff964e1bce
2019-07-24 15:14:40 -07:00
sdong
5daa426a18 Fix regression bug of Auto rolling logger when handling failures (#5622)
Summary:
The auto roll logger fails to handle file creation errors correctly, which may expose users to a segfault condition. Fix it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5622

Test Plan: Add a unit test on creating file under a non-existing directory. The test fails without the fix.

Differential Revision: D16460853

fbshipit-source-id: e96da4bef4f16db171ea04a11b2ec5a9448ddbde
2019-07-24 12:08:40 -07:00
Manuel Ung
66b524a911 Simplify WriteUnpreparedTxnReadCallback and fix some comments (#5621)
Summary:
Simplify WriteUnpreparedTxnReadCallback so we just have one function, `CalcMaxVisibleSeq`. Also, there's no need for the read callback to hold onto the transaction any more, so just hold the set of unprep_seqs, reducing the amount of indirection in `IsVisibleFullCheck`.

Also, some comments about using transaction snapshot were out of date, so remove them.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5621

Differential Revision: D16459883

Pulled By: lth

fbshipit-source-id: cd581323fd18982e817d99af57b6eaba59e599bb
2019-07-24 10:25:26 -07:00
sdong
f5b951f7b6 Fix wrong info log printing for num_range_deletions (#5617)
Summary:
num_range_deletions printing is wrong in this log line:

2019/07/18-12:59:15.309271 7f869f9ff700 EVENT_LOG_v1 {"time_micros": 1563479955309228, "cf_name": "5", "job": 955, "event": "table_file_creation", "file_number": 34579, "file_size": 2239842, "table_properties": {"data_size": 1988792, "index_size": 3067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 1, "filter_size": 170821, "raw_key_size": 1951792, "raw_average_key_size": 16, "raw_value_size": 1731720, "raw_average_value_size": 14, "num_data_blocks": 199, "num_entries": 121987, "num_deletions": 15184, "num_merge_operands": 86512, "num_range_deletions": 86512, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "5", "column_family_id": 5, "comparator": "leveldb.BytewiseComparator", "merge_operator": "PutOperator", "prefix_extractor_name": "rocksdb.FixedPrefix.7", "property_collectors": "[]", "compression": "ZSTD", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1563479951, "oldest_key_time": 0, "file_creation_time": 1563479954}}

It actually prints the "num_merge_operands" value. Fix it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5617

Test Plan: Just build.

Differential Revision: D16453110

fbshipit-source-id: fc1024b3cd5650312ed47a1379f0d2cf8b2d8a8f
2019-07-23 19:38:16 -07:00
Mark Rambacher
cfcf045acc The ObjectRegistry class replaces the Registrar and NewCustomObjects.… (#5293)
Summary:
The ObjectRegistry class replaces the Registrar and NewCustomObjects.  Objects are registered with the registry by Type (the class must implement the static const char *Type() method).

This change is necessary for a few reasons:
- By having a class (rather than static template instances), the class can be passed between compilation units, meaning that objects could be registered and shared from a dynamic library with an executable.
- By having a class with instances, different units could have different objects registered.  This could be useful if, for example, one Option allowed for a dynamic library and one did not.

When combined with some other PRs (being able to load shared libraries, a Configurable interface to configure objects to/from string), this code will allow objects in external shared libraries to be added to a RocksDB image at run-time, rather than requiring every new extension to be built into the main library and called explicitly by every program.
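
The registry pattern in a nutshell, as a simplified, hypothetical sketch (not the actual ObjectRegistry API in include/rocksdb/utilities/object_registry.h): factories are keyed by the static Type() string of each class, and because the registry is an instance, it can be shared across compilation units or populated differently per unit.
```
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical, simplified registry; the real ObjectRegistry has a richer interface.
class SimpleRegistry {
 public:
  template <typename T>
  void Register(std::function<T*(const std::string& /*arg*/)> factory) {
    // Key the factory by the class's static Type() method.
    factories_[T::Type()] = [factory](const std::string& arg) -> void* {
      return factory(arg);
    };
  }

  template <typename T>
  T* NewObject(const std::string& arg) {
    auto it = factories_.find(T::Type());
    return it == factories_.end() ? nullptr : static_cast<T*>(it->second(arg));
  }

 private:
  std::map<std::string, std::function<void*(const std::string&)>> factories_;
};

// Example of a registrable class: it exposes a static Type() method.
class MyMergeOperator {
 public:
  static const char* Type() { return "MergeOperator"; }
};
```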

Test plan (on riversand963's  devserver)
```
$COMPILE_WITH_ASAN=1 make -j32 all && sleep 1 && make check
```
All tests pass.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5293

Differential Revision: D16363396

Pulled By: riversand963

fbshipit-source-id: fbe4acb615bfc11103eef40a0b288845791c0180
2019-07-23 17:13:05 -07:00
Levi Tamasi
092f417037 Move the uncompression dictionary object out of the block cache (#5584)
Summary:
RocksDB has historically stored uncompression dictionary objects in the block
cache as opposed to storing just the block contents. This necessitated
evicting the object upon table close. With the new code, only the raw blocks
are stored in the cache, eliminating the need for eviction.

In addition, the patch makes the following improvements:

1) Compression dictionary blocks are now prefetched/pinned similarly to
index/filter blocks.
2) A copy operation got eliminated when the uncompression dictionary is
retrieved.
3) Errors related to retrieving the uncompression dictionary are propagated as
opposed to silently ignored.

Note: the patch temporarily breaks the compression dictionary eviction stats.
They will be fixed in a separate phase.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5584

Test Plan: make asan_check

Differential Revision: D16344151

Pulled By: ltamasi

fbshipit-source-id: 2962b295f5b19628f9da88a3fcebbce5a5017a7b
2019-07-23 16:01:44 -07:00
Eli Pozniansky
6b7fcc0d5f Improve CPU Efficiency of ApproximateSize (part 1) (#5613)
Summary:
1. Avoid creating an iterator in order to call BlockBasedTable::ApproximateOffsetOf(); instead, directly call into it.
2. Optimize BlockBasedTable::ApproximateOffsetOf() to keep the index block iterator on the stack.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5613

Differential Revision: D16442660

Pulled By: elipoz

fbshipit-source-id: 9320be3e918c139b10e758cbbb684706d172e516
2019-07-23 15:34:33 -07:00
sdong
3782accf7d ldb sometimes specify a string-append merge operator (#5607)
Summary:
Right now, ldb cannot scan a DB with merge operands using the default ldb. There is no harm in giving it a general merge operator so that it can at least print out something.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5607

Test Plan: Run ldb against a DB with merge operands and see the outputs.

Differential Revision: D16442634

fbshipit-source-id: c66c414ec07f219cfc6e6ec2cc14c783ee95df54
2019-07-23 14:25:18 -07:00
anand76
112702ac6c Parallelize file_reader_writer_test in order to reduce timeouts
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5608

Test Plan:
make check
buck test mode/dev-tsan internal_repo_rocksdb/repo:file_reader_writer_test -- --run-disabled

Differential Revision: D16441796

Pulled By: anand1976

fbshipit-source-id: afbb88a9fcb1c0ba22215118767e8eab3d1d6a4a
2019-07-23 11:50:10 -07:00
Manuel Ung
eae832740b WriteUnPrepared: improve read your own write functionality (#5573)
Summary:
There are a number of fixes in this PR (with most bugs found via the added stress tests):
1. Re-enable reseek optimization. This was initially disabled to avoid infinite loops in https://github.com/facebook/rocksdb/pull/3955 but this can be resolved by remembering not to reseek after a reseek has already been done. This problem only affects forward iteration in `DBIter::FindNextUserEntryInternal`, as we already disable reseeking in `DBIter::FindValueForCurrentKeyUsingSeek`.
2. Verify that ReadOption.snapshot can be safely used for iterator creation. Some snapshots would not give correct results because snapshot validation would not be enforced, breaking some assumptions in Prev() iteration.
3. In the non-snapshot Get() case, reads done at `LastPublishedSequence` may not be enough, because unprepared sequence numbers are not published. Use `std::max(published_seq, max_visible_seq)` to do lookups instead.
4. Add stress test to test reading own writes.
5. Minor bug in the allow_concurrent_memtable_write case where we forgot to pass in batch_per_txn_.
6. Minor performance optimization in `CalcMaxUnpreparedSequenceNumber` by assigning by reference instead of value.
7. Add some more comments everywhere.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5573

Differential Revision: D16276089

Pulled By: lth

fbshipit-source-id: 18029c944eb427a90a87dee76ac1b23f37ec1ccb
2019-07-23 08:08:19 -07:00
Maysam Yabandeh
327c4807a7 Disable refresh snapshot feature by default (#5606)
Summary:
There are concerns about the correctness of this patch. Disabling by default until the concerns are resolved.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5606

Differential Revision: D16428064

Pulled By: maysamyabandeh

fbshipit-source-id: a89280f0ea85796c9c9dfbfd9a8e91dad9b000b3
2019-07-22 20:05:00 -07:00
sdong
66b5613d0c row_cache to share entry for recent snapshots (#5600)
Summary:
Right now, users cannot take advantage of the row cache unless no snapshot is used or Get() is repeated for the same snapshots. This limits the usage of the row cache.
This change eliminates this restriction in some cases: if the snapshot used is newer than the largest sequence number in the file, and no write callback function is registered, the same row cache key is used as when no snapshot is given. We still need the callback function restriction for now because the callback function may filter out different keys for different snapshots even if the snapshots are new.
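
A condensed sketch of that decision, with hypothetical helper names rather than the actual Get() code path:
```
#include <cstdint>

// Hypothetical inputs describing one table file lookup.
struct RowCacheKeyContext {
  uint64_t snapshot_seq;         // sequence number of the read snapshot, if any
  bool has_snapshot;
  bool has_read_callback;        // e.g. a write-conflict callback is registered
  uint64_t largest_seqno_in_file;
};

// Returns true when the lookup can share the "no snapshot" row cache entry.
bool CanShareRowCacheEntry(const RowCacheKeyContext& ctx) {
  if (!ctx.has_snapshot) return true;        // classic case: no snapshot given
  if (ctx.has_read_callback) return false;   // callback may filter differently per snapshot
  // The snapshot is newer than anything in this file, so it sees exactly the
  // same data as a read without a snapshot would.
  return ctx.snapshot_seq >= ctx.largest_seqno_in_file;
}
```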
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5600

Test Plan: Add a unit test.

Differential Revision: D16386616

fbshipit-source-id: 6b7d214bd215d191b03ccf55926ad4b703ec2e53
2019-07-22 18:56:19 -07:00
haoyuhuang
3778470061 Block cache analyzer: Compute correlation of features and human readable trace file. (#5596)
Summary:
- Compute correlation between a few features and predictions, e.g., number of accesses since the last access vs number of accesses till the next access on a block.
- Output human readable trace file so python can consume it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5596

Test Plan: make clean && USE_CLANG=1 make check -j32

Differential Revision: D16373200

Pulled By: HaoyuHuang

fbshipit-source-id: c848d26bc2e9210461f317d7dbee42d55be5a0cc
2019-07-22 17:51:34 -07:00
Yanqin Jin
a78503bd6c Temporarily disable snapshot list refresh for atomic flush stress test (#5581)
Summary:
The atomic flush test started to fail after https://github.com/facebook/rocksdb/issues/5099. Then https://github.com/facebook/rocksdb/issues/5278 provided a fix, after
which the same error occurred much less frequently. However, it still occurs
occasionally. Not sure what the root cause is. This PR disables the snapshot
list refresh feature, and we should keep an eye on the failure in the future.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5581

Differential Revision: D16295985

Pulled By: riversand963

fbshipit-source-id: c9e62e65133c52c21b07097de359632ca62571e4
2019-07-22 14:38:16 -07:00
Eli Pozniansky
0be1feec21 Added .watchmanconfig file to rocksdb repo (#5593)
Summary:
Added a .watchmanconfig file to the rocksdb repo. It is currently .gitignored.
This allows auto-syncing modified files with watchman when editing them remotely.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5593

Differential Revision: D16363860

Pulled By: elipoz

fbshipit-source-id: 5ae221e21c6c757ceb08877771550d508f773d55
2019-07-19 15:00:33 -07:00
anand76
4f7ba3aaed Fix tsan and valgrind failures in import_column_family_test
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5598

Test Plan:
tsan_check
valgrind_test

Differential Revision: D16380167

Pulled By: anand1976

fbshipit-source-id: 2d0caea7d2d02a9606457f62811175d762b89d5c
2019-07-19 13:25:36 -07:00
Eli Pozniansky
c129c75fb7 Added log_readahead_size option to control prefetching for Log::Reader (#5592)
Summary:
Added a log_readahead_size option to control prefetching for Log::Reader.
This is mostly useful for reading a remotely located log, as it can reduce the number of round-trips when reading it.
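
A hedged usage sketch: log_readahead_size is the DBOptions field added by this PR, while the 2MB value and create_if_missing setting are only illustrative.
```
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

using namespace rocksdb;

Status OpenWithLogReadahead(const std::string& dbname, DB** db) {
  Options options;
  options.create_if_missing = true;
  options.log_readahead_size = 2 * 1024 * 1024;  // prefetch 2MB per log read
  return DB::Open(options, dbname, db);
}
```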
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5592

Differential Revision: D16362989

Pulled By: elipoz

fbshipit-source-id: c5d4d5245a44008cd59879640efff70c091ad3e8
2019-07-19 12:00:19 -07:00
sdong
6bb3b4b567 ldb idump to support non-default column families. (#5594)
Summary:
ldb idump currently only works for the default column family. Extend it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5594

Test Plan: Compile and run the tool against a multiple CF DB.

Differential Revision: D16380684

fbshipit-source-id: bfb8af36fdad1806837c90aaaab492d71528aceb
2019-07-19 11:36:59 -07:00
anand76
abd1fdddef Fix asan_check failures
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5589

Test Plan: TEST_TMPDIR=/dev/shm/rocksdb COMPILE_WITH_ASAN=1 OPT=-g make J=64 -j64 asan_check

Differential Revision: D16361081

Pulled By: anand1976

fbshipit-source-id: 09474832b9cfb318a840d4b633e22dfad105d58c
2019-07-18 14:51:25 -07:00
Venki Pallipadi
3a6e83b56b HISTORY update for export and import column family APIs
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5587

Differential Revision: D16359919

fbshipit-source-id: cfd9c448d79a8b8e7ac1d2b661d10151df269dba
2019-07-18 10:16:38 -07:00
anand76
ec2b996b29 Fix LITE mode build failure
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/5588

Test Plan: make LITE=1 all check

Differential Revision: D16354543

Pulled By: anand1976

fbshipit-source-id: 327a171439e183ac3a5e5057c511d6bca445e97d
2019-07-17 22:06:12 -07:00
Eli Pozniansky
9f5cfb8e71 Fix for ReadaheadSequentialFile crash in ldb_cmd_test (#5586)
Summary:
Fixing a corner case crash when there was no data read from file, but status is still OK
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5586

Differential Revision: D16348117

Pulled By: elipoz

fbshipit-source-id: f97973308024f020d8be79ca3c56466b84d80656
2019-07-17 17:04:39 -07:00
haoyuhuang
8a008d4170 Block access tracing: Trace referenced key for Get on non-data blocks. (#5548)
Summary:
This PR traces the referenced key for Get for all types of blocks. This is useful when evaluating hybrid row-block caches.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5548

Test Plan: make clean && USE_CLANG=1 make check -j32

Differential Revision: D16157979

Pulled By: HaoyuHuang

fbshipit-source-id: f6327411c9deb74e35e22a35f66cdbae09ab9d87
2019-07-17 13:05:58 -07:00
Venki Pallipadi
22ce462450 Export Import sst files (#5495)
Summary:
Refresh of the earlier change here - https://github.com/facebook/rocksdb/issues/5135

This is a review request for code change needed for - https://github.com/facebook/rocksdb/issues/3469
"Add support for taking snapshot of a column family and creating column family from a given CF snapshot"

We have an implementation for this that we have been testing internally. We have two new APIs that together provide this functionality.

(1) ExportColumnFamily() - This API is modelled after CreateCheckpoint() as below.
// Exports all live SST files of a specified Column Family onto export_dir,
// returning SST files information in metadata.
// - SST files will be created as hard links when the directory specified
//   is in the same partition as the db directory, copied otherwise.
// - export_dir should not already exist and will be created by this API.
// - Always triggers a flush.
virtual Status ExportColumnFamily(ColumnFamilyHandle* handle,
                                  const std::string& export_dir,
                                  ExportImportFilesMetaData** metadata);

Internally, the API will DisableFileDeletions(), call GetColumnFamilyMetaData(), parse through the
metadata, create links/copies of all the sst files, EnableFileDeletions(), and complete the call by
returning the list of file metadata.

(2) CreateColumnFamilyWithImport() - This API is modeled after IngestExternalFile(), but invoked only during a CF creation as below.
// CreateColumnFamilyWithImport() will create a new column family with
// column_family_name and import external SST files specified in metadata into
// this column family.
// (1) External SST files can be created using SstFileWriter.
// (2) External SST files can be exported from a particular column family in
//     an existing DB.
// Option in import_options specifies whether the external files are copied or
// moved (default is copy). When option specifies copy, managing files at
// external_file_path is caller's responsibility. When option specifies a
// move, the call ensures that the specified files at external_file_path are
// deleted on successful return and files are not modified on any error
// return.
// On error return, column family handle returned will be nullptr.
// ColumnFamily will be present on successful return and will not be present
// on error return. ColumnFamily may be present on any crash during this call.
virtual Status CreateColumnFamilyWithImport(
    const ColumnFamilyOptions& options, const std::string& column_family_name,
    const ImportColumnFamilyOptions& import_options,
    const ExportImportFilesMetaData& metadata,
    ColumnFamilyHandle** handle);

Internally, this API creates a new CF, parses all the sst files, and adds them to the specified column family, at the same level and with the same sequence number as in the metadata. It also performs safety checks with respect to overlaps between the sst files being imported.

If incoming sequence number is higher than current local sequence number, local sequence
number is updated to reflect this.

Note: as the sst files are being moved across Column Families, the Column Family name in the sst file
will no longer match the actual column family on the destination DB. The API does not modify the Column
Family name or id in the sst files being imported.
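
Putting the two APIs together, a hedged end-to-end sketch with error handling trimmed; the assumption that ExportColumnFamily lives on the Checkpoint class (alongside CreateCheckpoint, which it is modelled after) should be verified against checkpoint.h.
```
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/utilities/checkpoint.h"

using namespace rocksdb;

Status ExportAndImport(DB* src_db, ColumnFamilyHandle* src_cf,
                       DB* dst_db, const std::string& export_dir,
                       ColumnFamilyHandle** dst_cf) {
  Checkpoint* checkpoint = nullptr;
  Status s = Checkpoint::Create(src_db, &checkpoint);
  if (!s.ok()) return s;

  // Export all live SST files of the source column family to export_dir.
  ExportImportFilesMetaData* metadata = nullptr;
  s = checkpoint->ExportColumnFamily(src_cf, export_dir, &metadata);
  delete checkpoint;
  if (!s.ok()) return s;

  // Create a new column family on the destination DB from the exported files.
  ImportColumnFamilyOptions import_options;
  import_options.move_files = false;  // copy the exported SSTs into the new CF
  s = dst_db->CreateColumnFamilyWithImport(ColumnFamilyOptions(), "imported_cf",
                                           import_options, *metadata, dst_cf);
  delete metadata;
  return s;
}
```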
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5495

Differential Revision: D16018881

fbshipit-source-id: 9ae2251025d5916d35a9fc4ea4d6707f6be16ff9
2019-07-17 12:27:14 -07:00
Yuqi Gu
a3c1832e86 Arm64 CRC32 parallel computation optimization for RocksDB (#5494)
Summary:
Crc32c Parallel computation optimization:
Algorithm comes from Intel whitepaper: [crc-iscsi-polynomial-crc32-instruction-paper](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/crc-iscsi-polynomial-crc32-instruction-paper.pdf)
Input data is divided into three equal-sized blocks:
- Three parallel blocks (crc0, crc1, crc2) for 1024 bytes
- One block: 42 (BLK_LENGTH) * 8 (step length: crc32c_u64) bytes

1. crc32c_test:
```
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CRC
[ RUN      ] CRC.StandardResults
[       OK ] CRC.StandardResults (1 ms)
[ RUN      ] CRC.Values
[       OK ] CRC.Values (0 ms)
[ RUN      ] CRC.Extend
[       OK ] CRC.Extend (0 ms)
[ RUN      ] CRC.Mask
[       OK ] CRC.Mask (0 ms)
[----------] 4 tests from CRC (1 ms total)

[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (1 ms total)
[  PASSED  ] 4 tests.
```

2. RocksDB benchmark: db_bench --benchmarks="crc32c"

```
Linear Arm crc32c:
  crc32c: 1.005 micros/op 995133 ops/sec; 3887.2 MB/s (4096 per op)
```

```
Parallel optimization with Armv8 crypto extension:
  crc32c: 0.419 micros/op 2385078 ops/sec; 9316.7 MB/s (4096 per op)
```

It gets ~2.4x speedup compared to linear Arm crc32c instructions.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5494

Differential Revision: D16340806

fbshipit-source-id: 95dae9a5b646fd20a8303671d82f17b2e162e945
2019-07-17 11:22:38 -07:00
Eli Pozniansky
74fb7f0ba5 Cleaned up and simplified LRU cache implementation (#5579)
Summary:
The 'refs' field in LRUHandle now counts only external references, since we already have the IN_CACHE flag anyway. This simplifies the reference accounting logic a bit. Also cleaned up a few asserts and comments to make them more readable.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5579

Differential Revision: D16286747

Pulled By: elipoz

fbshipit-source-id: 7186d88f80f512ce584d0a303437494b5cbefd7f
2019-07-16 19:17:45 -07:00
Eli Pozniansky
0f4d90e6e4 Added support for sequential read-ahead file (#5580)
Summary:
Added support for a sequential read-ahead file that can prefetch the read data and later serve it from an internal cache buffer.
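
A simplified sketch of the buffering idea, using a hypothetical wrapper rather than the actual ReadaheadSequentialFile: each physical read fetches readahead_size bytes, and subsequent Read() calls are served from the buffer until it is exhausted.
```
#include <algorithm>
#include <cstring>
#include <vector>

// Hypothetical minimal interface for an underlying sequential file.
class SeqFile {
 public:
  virtual ~SeqFile() = default;
  // Reads up to n bytes into scratch; returns the number of bytes read.
  virtual size_t Read(size_t n, char* scratch) = 0;
};

class ReadaheadSeqFile {
 public:
  ReadaheadSeqFile(SeqFile* file, size_t readahead_size)
      : file_(file), buffer_(readahead_size), buffer_len_(0), buffer_pos_(0) {}

  size_t Read(size_t n, char* scratch) {
    size_t total = 0;
    while (total < n) {
      if (buffer_pos_ == buffer_len_) {
        // Buffer exhausted: prefetch another readahead_size-sized chunk.
        buffer_len_ = file_->Read(buffer_.size(), buffer_.data());
        buffer_pos_ = 0;
        if (buffer_len_ == 0) break;  // EOF
      }
      size_t to_copy = std::min(n - total, buffer_len_ - buffer_pos_);
      std::memcpy(scratch + total, buffer_.data() + buffer_pos_, to_copy);
      buffer_pos_ += to_copy;
      total += to_copy;
    }
    return total;
  }

 private:
  SeqFile* file_;
  std::vector<char> buffer_;
  size_t buffer_len_;
  size_t buffer_pos_;
};
```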
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5580

Differential Revision: D16287082

Pulled By: elipoz

fbshipit-source-id: a3e7ad9643d377d39352ff63058ce050ec31dcf3
2019-07-16 18:21:18 -07:00