Summary:
ExternalSSTFileTest.IngestNonExistingFile occasionally fails because the number of SST files after manual compaction doesn't go down as expected. Although I can't find a reason for how this can happen, adding an extra wait to make sure obsolete file purging has finished before we check the files doesn't hurt.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4625
Differential Revision: D12910586
Pulled By: siying
fbshipit-source-id: 2a5ddec6908c99cf3bcc78431c6f93151c2cab59
Summary:
fix current failing lite test:
> In file included from ./util/testharness.h:15:0,
from ./table/mock_table.h:23,
from ./db/db_test_util.h:44,
from db/db_flush_test.cc:10:
db/db_flush_test.cc: In member function ‘virtual void rocksdb::DBFlushTest_ManualFlushFailsInReadOnlyMode_Test::TestBody()’:
db/db_flush_test.cc:250:35: error: ‘Properties’ is not a member of ‘rocksdb::DB’
ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBackgroundErrors,
^
make: *** [db/db_flush_test.o] Error 1
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4619
Differential Revision: D12898319
Pulled By: miasantreble
fbshipit-source-id: 72de603b1f2e972fc8caa88611798c4e98e348c6
Summary:
The new case is directIO = true, write_global_seqno = false, in which we no longer write the global_seqno to the external SST file.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4614
Differential Revision: D12885001
Pulled By: riversand963
fbshipit-source-id: 7541bdc608b3a0c93d3c3c435da1b162b36673d4
Summary:
The logic to wait for stall conditions to clear before beginning a manual flush didn't take into account whether the DB was in read-only mode. In read-only mode the stall conditions would never clear since no background work is happening, so the wait would be never-ending. It's probably better to return an error to the user.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4615
Differential Revision: D12888008
Pulled By: ajkr
fbshipit-source-id: 1c474b42a7ac38d9fd0d0e2340ff1d53e684d83c
Summary:
A background compaction with pre-picked files (i.e., either a manual compaction or a bottom-pri compaction) fails when the DB is in read-only mode. In the failure handling, we forgot to unregister the compaction and the files it covered. Then subsequent manual compactions could conflict with this zombie compaction (possibly Halloween related) and wait forever for it to finish.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4611
Differential Revision: D12871217
Pulled By: ajkr
fbshipit-source-id: 9d24e921d5bbd2ee8c2c9536a30abfa42a220c6e
Summary:
Add unit tests to demonstrate that `VersionSet::Recover` is able to detect and handle cases in which the MANIFEST has a valid atomic group, an incomplete trailing atomic group, an atomic group mixed with normal version edits, and an atomic group with incorrect size.
With this capability, RocksDB identifies invalid groups of version edits and does not apply them, thus guaranteeing that the DB is restored to a state consistent with the most recent successful atomic flush before applying the WAL.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4433
Differential Revision: D10079202
Pulled By: riversand963
fbshipit-source-id: a0e0b8bf4da1cf68e044d397588c121b66c68876
Summary:
Since the number of range deletions is reported in
TableProperties, it is confusing to not report the number of merge
operands and point deletions as top-level properties; they are
accessible through the public API, but since they are not the "main"
properties, they do not appear in aggregated table properties, or the
string representation of table properties.
This change promotes those two property keys to
`rocksdb/table_properties.h`, adds corresponding uint64 members for
them, deprecates the old access methods `GetDeletedKeys()` and
`GetMergeOperands()` (though they are still usable for now), and removes
`InternalKeyPropertiesCollector`. The property key strings are the same
as before this change, so DBs written by older versions should still be
readable (though I haven't tested this yet).
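A minimal sketch of reading the promoted properties through the public API, assuming a DB handle `db` that is already open; it relies on `GetPropertiesOfAllTables` and the new top-level members described above:
```
#include <iostream>
#include "rocksdb/db.h"
#include "rocksdb/table_properties.h"

// Sketch: print per-file deletion and merge-operand counts using the
// promoted top-level TableProperties members.
void DumpDeletionCounts(rocksdb::DB* db) {
  rocksdb::TablePropertiesCollection props;
  rocksdb::Status s = db->GetPropertiesOfAllTables(&props);
  if (!s.ok()) {
    std::cerr << s.ToString() << std::endl;
    return;
  }
  for (const auto& entry : props) {
    const auto& tp = *entry.second;  // shared_ptr<const TableProperties>
    std::cout << entry.first << ": deletions=" << tp.num_deletions
              << " merge_operands=" << tp.num_merge_operands << std::endl;
  }
}
```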
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4594
Differential Revision: D12826893
Pulled By: abhimadan
fbshipit-source-id: 9e4e4fbdc5b0da161c89582566d184101ba8eb68
Summary:
EnableFileDeletions() does info logging inside the DB mutex. This is not recommended in the code base, since there could be I/O involved. Move this logging outside the DB mutex.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4604
Differential Revision: D12834432
Pulled By: siying
fbshipit-source-id: ffe5c2626fcfdb4c54a661a3c3b0bc95054816cf
Summary:
When there's a gap between files, we do not need to output tombstones starting at the next output file's begin key to the current output file.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4592
Differential Revision: D12808627
Pulled By: ajkr
fbshipit-source-id: 77c8b2e7523a95b1cd6611194144092c06acb505
Summary:
Since ErrorHandler::RecoverFromNoSpace is a no-op in LITE mode, we should
not have this test in LITE mode. If we do keep it, it will cause the test
thread to wait on bg_cv_, which will never be signalled.
How to reproduce
```
$make clean && git checkout a27fce408e
$OPT="-DROCKSDB_LITE -g" make -j20
$./db_io_failure_test --gtest_filter=DBIOFailureTest.NoSpaceCompactRange
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4596
Differential Revision: D12818516
Pulled By: riversand963
fbshipit-source-id: bc83524f40fff1e29506979017f7f4c2b70322f3
Summary:
For a flush triggered by RocksDB due to memory usage approaching a certain
threshold (WriteBufferManager or memtable full), we should cut the memtable
only when the current active memtable is not empty, i.e. contains data. This is
what we do for non-atomic flush. If we always cut the memtable even when the active
memtable is empty, we will generate extra, empty immutable memtables.
This is not ideal since it may cause write stalls. It also causes some
DBAtomicFlushTest cases to fail because cfd->imm()->NumNotFlushed() differs from
the expectation.
Test plan
```
$make clean && make J=1 -j32 all check
$make clean && OPT="-DROCKSDB_LITE -g" make J=1 -j32 all check
$make clean && TEST_TMPDIR=/dev/shm/rocksdb OPT=-g make J=1 -j32 valgrind_test
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4595
Differential Revision: D12818520
Pulled By: riversand963
fbshipit-source-id: d867bdbeacf4199fdd642debb085f94703c41a18
Summary:
Rename the interface, as it is meant to be a generic interface for memory allocation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4590
Differential Revision: D10866340
Pulled By: yiwu-arbug
fbshipit-source-id: 85cb753351a40cb856c046aeaa3f3b369eef3d16
Summary:
This allows tombstone fragmenting to only be performed when the table is opened, and cached for subsequent accesses.
On the same DB used in #4449, running `readrandom` results in the following:
```
readrandom : 0.983 micros/op 1017076 ops/sec; 78.3 MB/s (63103 of 100000 found)
```
Now that Get performance in the presence of range tombstones is reasonable, I also compared the performance between a DB with range tombstones, "expanded" range tombstones (several point tombstones that cover the same keys the equivalent range tombstone would cover, a common workaround for DeleteRange), and no range tombstones. The created DBs had 5 million keys each, and DeleteRange was called at regular intervals (depending on the total number of range tombstones being written) after 4.5 million Puts. The table below summarizes the results of a `readwhilewriting` benchmark (in order to provide somewhat more realistic results):
```
Tombstones? | avg micros/op | stddev micros/op | avg ops/s | stddev ops/s
----------------- | ------------- | ---------------- | ------------ | ------------
None | 0.6186 | 0.04637 | 1,625,252.90 | 124,679.41
500 Expanded | 0.6019 | 0.03628 | 1,666,670.40 | 101,142.65
500 Unexpanded | 0.6435 | 0.03994 | 1,559,979.40 | 104,090.52
1k Expanded | 0.6034 | 0.04349 | 1,665,128.10 | 125,144.57
1k Unexpanded | 0.6261 | 0.03093 | 1,600,457.50 | 79,024.94
5k Expanded | 0.6163 | 0.05926 | 1,636,668.80 | 154,888.85
5k Unexpanded | 0.6402 | 0.04002 | 1,567,804.70 | 100,965.55
10k Expanded | 0.6036 | 0.05105 | 1,667,237.70 | 142,830.36
10k Unexpanded | 0.6128 | 0.02598 | 1,634,633.40 | 72,161.82
25k Expanded | 0.6198 | 0.04542 | 1,620,980.50 | 116,662.93
25k Unexpanded | 0.5478 | 0.0362 | 1,833,059.10 | 121,233.81
50k Expanded | 0.5104 | 0.04347 | 1,973,107.90 | 184,073.49
50k Unexpanded | 0.4528 | 0.03387 | 2,219,034.50 | 170,984.32
```
After a large enough quantity of range tombstones are written, range tombstone Gets can become faster than reading from an equivalent DB with several point tombstones.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4493
Differential Revision: D10842844
Pulled By: abhimadan
fbshipit-source-id: a7d44534f8120e6aabb65779d26c6b9df954c509
Summary:
Currently there are two contrun test failures:
* rocksdb-contrun-lite:
> tools/db_bench_tool.cc: In function ‘int rocksdb::db_bench_tool(int, char**)’:
tools/db_bench_tool.cc:5814:5: error: ‘DumpMallocStats’ is not a member of ‘rocksdb’
rocksdb::DumpMallocStats(&stats_string);
^
make: *** [tools/db_bench_tool.o] Error 1
* rocksdb-contrun-unity:
> In file included from unity.cc:44:0:
db/range_tombstone_fragmenter.cc: In member function ‘void rocksdb::FragmentedRangeTombstoneIterator::FragmentTombstones(std::unique_ptr<rocksdb::InternalIteratorBase<rocksdb::Slice> >, rocksdb::SequenceNumber)’:
db/range_tombstone_fragmenter.cc:90:14: error: reference to ‘ParsedInternalKeyComparator’ is ambiguous
auto cmp = ParsedInternalKeyComparator(icmp_);
This PR fixes them.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4587
Differential Revision: D10846554
Pulled By: miasantreble
fbshipit-source-id: 8d3358879e105060197b1379c84aecf51b352b93
Summary:
Previously, range tombstones were accumulated from every level, which
was necessary if a range tombstone in a higher level covered a key in a lower
level. However, RangeDelAggregator::AddTombstones's complexity is based on
the number of tombstones that are currently stored in it, which is wasteful in
the Get case, where we only need to know the highest sequence number of range
tombstones that cover the key from higher levels, and compute the highest covering
sequence number at the current level. This change introduces this optimization, and
removes the use of RangeDelAggregator from the Get path.
In the benchmark results, the following command was used to initialize the database:
```
./db_bench -db=/dev/shm/5k-rts -use_existing_db=false -benchmarks=filluniquerandom -write_buffer_size=1048576 -compression_type=lz4 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304 -value_size=112 -key_size=16 -block_size=4096 -level_compaction_dynamic_level_bytes=true -num=5000000 -max_background_jobs=12 -benchmark_write_rate_limit=20971520 -range_tombstone_width=100 -writes_per_range_tombstone=100 -max_num_range_tombstones=50000 -bloom_bits=8
```
...and the following command was used to measure read throughput:
```
./db_bench -db=/dev/shm/5k-rts/ -use_existing_db=true -benchmarks=readrandom -disable_auto_compactions=true -num=5000000 -reads=100000 -threads=32
```
The filluniquerandom command was only run once, and the resulting database was used
to measure read performance before and after the PR. Both binaries were compiled with
`DEBUG_LEVEL=0`.
Readrandom results before PR:
```
readrandom : 4.544 micros/op 220090 ops/sec; 16.9 MB/s (63103 of 100000 found)
```
Readrandom results after PR:
```
readrandom : 11.147 micros/op 89707 ops/sec; 6.9 MB/s (63103 of 100000 found)
```
So it's actually slower right now, but this PR paves the way for future optimizations (see #4493).
----
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4449
Differential Revision: D10370575
Pulled By: abhimadan
fbshipit-source-id: 9a2e152be1ef36969055c0e9eb4beb0d96c11f4d
Summary:
PR https://github.com/facebook/rocksdb/pull/4226 introduced per-level perf context which allows breaking down perf context by levels.
This PR takes advantage of that feature to populate a few counters related to bloom filters.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4581
Differential Revision: D10518010
Pulled By: miasantreble
fbshipit-source-id: 011244561783ec860d32d5b0fa6bce6e78d70ef8
Summary:
This fixes three tests that fail with relatively recent tools and libraries:
The tests are:
* `spatial_db_test`
* `table_test`
* `db_universal_compaction_test`
I'm using:
* `gcc` 7.3.0
* `glibc` 2.27
* `snappy` 1.1.7
* `gflags` 2.2.1
* `zlib` 1.2.11
* `bzip2` 1.0.6.0.1
* `lz4` 1.8.2
* `jemalloc` 5.0.1
The versions used in the Travis environment (which is two Ubuntu LTS versions behind the current one and doesn't use `lz4` or `jemalloc`) don't seem to have a problem. However, to be safe, I verified that these tests pass with and without my changes in a trusty Docker container without `lz4` and `jemalloc`.
However, I do get an unrelated set of other failures when using a trusty Docker container that uses `lz4` and `jemalloc`:
```
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/0, where GetParam() = (1, false) (1189 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/1
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/1, where GetParam() = (1, true) (1246 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/2
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/2, where GetParam() = (3, false) (1237 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/3
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/3, where GetParam() = (3, true) (1195 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/4
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/4, where GetParam() = (5, false) (1161 ms)
[ RUN ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/5
db/db_universal_compaction_test.cc:506: Failure
Value of: num + 1
Actual: 3
Expected: NumSortedRuns(1)
Which is: 4
[ FAILED ] UniversalCompactionNumLevels/DBTestUniversalCompaction.DynamicUniversalCompactionReadAmplification/5, where GetParam() = (5, true) (1229 ms)
```
I haven't attempted to fix these since I'm not using trusty and Travis doesn't use `lz4` and `jemalloc`. However, the final commit in this PR does at least fix the compilation errors that occur when using trusty's version of `lz4`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4562
Differential Revision: D10510917
Pulled By: maysamyabandeh
fbshipit-source-id: 59534042015ec339270e5fc2f6ac4d859370d189
Summary:
clang analyzer currently fails with the following warnings:
> db/log_reader.cc:323:9: warning: Undefined or garbage value returned to caller
return r;
^~~~~~~~
db/log_reader.cc:344:11: warning: Undefined or garbage value returned to caller
return r;
^~~~~~~~
db/log_reader.cc:369:11: warning: Undefined or garbage value returned to caller
return r;
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4583
Differential Revision: D10523517
Pulled By: miasantreble
fbshipit-source-id: 0cc8b8f27657b202bead148bbe7c4aa84fed095b
Summary:
There was a bug that the user comparator would receive the internal key instead of the user key. The bug was due to RangeMightExistAfterSortedRun expecting a user key but receiving an internal key when called in GenerateBottommostFiles. The patch augments an existing unit test to reproduce the bug and fixes it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4575
Differential Revision: D10500434
Pulled By: maysamyabandeh
fbshipit-source-id: 858346d2fd102cce9e20516d77338c112bdfe366
Summary:
Level compaction usually performs poorly when writes are so heavy that the level targets can't be guaranteed. With this change, the level_compaction_dynamic_level_bytes = true mode is improved so that in write-heavy cases the level multiplier can be slightly adjusted based on the size of L0.
We keep the behavior the same if the number of L0 files is under 2X the compaction trigger and the total size is less than options.max_bytes_for_level_base, so that unless writes are so heavy that compaction cannot keep up, the behavior doesn't change.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4338
Differential Revision: D9636782
Pulled By: siying
fbshipit-source-id: e27fc17a7c29c84b00064cc17536a01dacef7595
Summary:
When `MockTimeEnv` is used in a test to mock time methods, we cannot use `CondVar::TimedWait` because it uses real time, not the mocked time, for the wait timeout. On Mac the method can return immediately without waking other waiting threads if the real time is larger than `wait_until` (which is a mocked time). When that happens, the `wait()` method will fall into an infinite loop.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4560
Differential Revision: D10472851
Pulled By: yiwu-arbug
fbshipit-source-id: 898902546ace7db7ac509337dd8677a527209d19
Summary:
The current `log::Reader` does not retry after encountering `EOF`. In the future, we need the log reader to be able to retry tailing the log even after `EOF`.
The current implementation is simple. It does not provide more advanced retry policies. Will address this in the future.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4394
Differential Revision: D9926508
Pulled By: riversand963
fbshipit-source-id: d86d145792a41bd64a72f642a2a08c7b7b5201e1
Summary:
We have already disabled it on Travis since it has been too flaky. The same problem arises in Appveyor as well.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4536
Differential Revision: D10452240
Pulled By: maysamyabandeh
fbshipit-source-id: 728f4ecddf780097159dc0a0737d460eb5ce4f09
Summary:
When there are no range deletions, flush and compaction perform a binary search
on an effectively empty map every time they call ShouldDelete. This PR lazily
initializes each stripe map entry so that the binary search can be elided in
these cases.
After this PR, the total amount of time spent in compactions is 52.541331s, and the total amount of time spent in flush is 5.532608s, the former of which is a significant improvement from the results after #4495.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4497
Differential Revision: D10428610
Pulled By: abhimadan
fbshipit-source-id: 6f7e1ce3698fac3ef86d1197955e6b72e0931a0f
Summary:
The current implementation of perf context is level-agnostic, making it hard to do performance evaluation for the LSM tree. This PR adds `PerfContextByLevel` to decompose the counters by level.
This will be helpful when analyzing point and range query performance as well as tuning bloom filters.
Also replaced __thread with the thread_local keyword for perf_context.
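A rough sketch of how a caller might enable and read the per-level counters; the member and function names used here (`EnablePerLevelPerfContext`, `level_to_perf_context`, `bloom_filter_useful`) are assumptions based on the description above and should be checked against perf_context.h:
```
#include <iostream>
#include "rocksdb/perf_context.h"
#include "rocksdb/perf_level.h"

// Sketch: enable per-level perf context, run some reads, then inspect the
// per-level counters (assumed member names).
void DumpPerLevelCounters() {
  rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableTimeExceptForMutex);
  rocksdb::get_perf_context()->EnablePerLevelPerfContext();
  // ... issue Get()/iterator reads against the DB here ...
  auto* by_level = rocksdb::get_perf_context()->level_to_perf_context;
  if (by_level != nullptr) {
    for (const auto& kv : *by_level) {
      std::cout << "level " << kv.first
                << " bloom_filter_useful=" << kv.second.bloom_filter_useful
                << std::endl;
    }
  }
  rocksdb::get_perf_context()->DisablePerLevelPerfContext();
}
```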
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4226
Differential Revision: D10369509
Pulled By: miasantreble
fbshipit-source-id: f1ced4e0de5fcebdb7f9cff36164516bc6382d82
Summary:
When a CompactRange() call for a level is truncated before the end key
is reached, because it exceeds max_compaction_bytes, we need to properly
set the compaction_end parameter to indicate the stop key. The next
CompactRange will use that as the begin key. We set it to the smallest
key of the next file in the level after expanding inputs to get a clean
cut.
Previously, we were setting it before expanding inputs. So we could end
up recompacting some files. In a pathological case, where a single key
has many entries spanning all the files in the level (possibly due to
merge operands without a partial merge operator, thus resulting in
compaction output identical to the input), this would result in
an endless loop over the same set of files.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4496
Differential Revision: D10395026
Pulled By: anand1976
fbshipit-source-id: f0c2f89fee29b4b3be53b6467b53abba8e9146a9
Summary:
Leverage existing `FlushJob` to implement atomic flush of multiple column families.
This PR depends on other PRs and is a subset of #3752 . This PR itself is not sufficient in fulfilling atomic flush.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4262
Differential Revision: D9283109
Pulled By: riversand963
fbshipit-source-id: 65401f913e4160b0a61c0be6cd02adc15dad28ed
Summary:
`CompactionIterator::snapshots_` is ordered by ascending seqnum, just like `DBImpl`'s linked list of snapshots from which it was copied. This PR exploits this ordering to make `findEarliestVisibleSnapshot` do binary search rather than linear scan. This can make flush/compaction significantly faster when many snapshots exist since that function is called on every single key.
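A simplified sketch of the idea (not the actual RocksDB signature): with the snapshot list sorted by ascending seqnum, the earliest snapshot at or above a key's sequence number can be found with `std::lower_bound` instead of a linear scan.
```
#include <algorithm>
#include <cstdint>
#include <vector>

using SequenceNumber = uint64_t;
constexpr SequenceNumber kMaxSequenceNumber = UINT64_MAX;

// Sketch: given snapshots sorted in ascending order, return the earliest
// snapshot sequence number that is >= `seq` (i.e. the earliest snapshot that
// can see an entry written at `seq`), or kMaxSequenceNumber if none exists.
SequenceNumber FindEarliestVisibleSnapshot(
    const std::vector<SequenceNumber>& snapshots, SequenceNumber seq) {
  auto it = std::lower_bound(snapshots.begin(), snapshots.end(), seq);
  return it == snapshots.end() ? kMaxSequenceNumber : *it;
}
```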
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4495
Differential Revision: D10386470
Pulled By: ajkr
fbshipit-source-id: 29734991631227b6b7b677e156ac567690118a8b
Summary:
We would like to collect file-system-level statistics including file name, offset, length, return code, latency, etc., which requires adding callbacks to intercept file IO function calls while RocksDB is running.
To collect file-system-level statistics, users can inherit the class `EventListener`, as in `TestFileOperationListener`. Note that `TestFileOperationListener::ShouldBeNotifiedOnFileIO()` returns true.
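A hedged sketch of such a listener; the callback names and the `FileOperationInfo` fields used here (path, offset, length, status) are assumptions based on the description above and may differ slightly from the actual header. The listener would be registered through `Options::listeners` before opening the DB.
```
#include <atomic>
#include <iostream>
#include "rocksdb/listener.h"

// Sketch of a listener similar to TestFileOperationListener: count reads and
// log failures. Field names on FileOperationInfo are assumed.
class FileStatsListener : public rocksdb::EventListener {
 public:
  void OnFileReadFinish(const rocksdb::FileOperationInfo& info) override {
    reads_.fetch_add(1, std::memory_order_relaxed);
    if (!info.status.ok()) {
      std::cerr << "read failed: " << info.path << " offset=" << info.offset
                << " len=" << info.length << std::endl;
    }
  }
  // Opt in to file I/O notifications (they are off by default).
  bool ShouldBeNotifiedOnFileIO() override { return true; }

 private:
  std::atomic<uint64_t> reads_{0};
};
```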
Pull Request resolved: https://github.com/facebook/rocksdb/pull/3933
Differential Revision: D10219571
Pulled By: riversand963
fbshipit-source-id: 7acc577a2d31097766a27adb6f78eaf8b1e8ff15
Summary:
The "je_" prefix of jemalloc APIs presents only when the macro `JEMALLOC_NO_RENAME` from jemalloc.h presents.
With the patch I'm also adding -DROCKSDB_JEMALLOC flag in buck TARGETS.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4488
Differential Revision: D10355971
Pulled By: yiwu-arbug
fbshipit-source-id: 03a2d69790a44ac89219c7525763fa937a63d95a
Summary:
This commit adds code to acquire lock on the DB LOCK file
before starting the repair process. This will prevent
multiple processes from performing repair on the same DB
simultaneously. Fixes repair_test to work with this change.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4435
Differential Revision: D10361499
Pulled By: riversand963
fbshipit-source-id: 3c512c48b7193d383b2279ccecabdb660ac1cf22
Summary:
Using `./range_del_aggregator_bench --use_collapsed=false
--num_range_tombstones=5000 --num_runs=1000`, here are the results before and
after this change:
Before:
```
=========================
Results:
=========================
AddTombstones: 1822.61 us
ShouldDelete (first): 94.5286 us
```
After:
```
=========================
Results:
=========================
AddTombstones: 199.26 us
ShouldDelete (first): 38.9344 us
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4487
Differential Revision: D10347288
Pulled By: abhimadan
fbshipit-source-id: d44efe3a166d583acfdc3ec1199e0892f34dbfb7
Summary:
When I override `WriteBatch::Handler::Continue` to return _false_ at some point, I always get the `Status::Corruption` error.
I don't think this check is used correctly here: The counter in `found` cannot reflect all entries in the WriteBatch when we exit the loop early.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4478
Differential Revision: D10317416
Pulled By: yiwu-arbug
fbshipit-source-id: cccae3382805035f9b3239b66682b5fcbba6bb61
Summary:
Even during `DBIter::Prev()`, there is a case where we need to use `RangeDelPositioningMode::kForwardTraversal`. In particular, when we hit too many internal keys for a single user key, we use seek to find the newest internal key. If it's a merge operand, we then scan forwards, collecting the merge operands. This forward scan should be using `RangeDelPositioningMode::kForwardTraversal`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4481
Differential Revision: D10319507
Pulled By: ajkr
fbshipit-source-id: b5ce7352461f3a7696b28a5136ae0076f2bde51f
Summary:
Fixes #4288.
Add `OnCompactionBegin` support to `rocksdb::EventListener`.
Currently, we only have these three callbacks:
- OnFlushBegin
- OnFlushCompleted
- OnCompactionCompleted
As paolococchi requested in #4288, and ajkr agreed, we should also support `OnCompactionBegin`.
This PR is an attempt to implement support for `OnCompactionBegin`.
Hope it is useful to you.
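A small sketch of what hooking the new callback could look like; treat the exact `CompactionJobInfo` fields used here as assumptions:
```
#include <iostream>
#include "rocksdb/listener.h"

// Sketch: log when a compaction starts and when it completes.
class CompactionLoggingListener : public rocksdb::EventListener {
 public:
  void OnCompactionBegin(rocksdb::DB* /*db*/,
                         const rocksdb::CompactionJobInfo& ci) override {
    std::cout << "compaction begin: cf=" << ci.cf_name
              << " output_level=" << ci.output_level << std::endl;
  }
  void OnCompactionCompleted(rocksdb::DB* /*db*/,
                             const rocksdb::CompactionJobInfo& ci) override {
    std::cout << "compaction done: cf=" << ci.cf_name
              << " status=" << ci.status.ToString() << std::endl;
  }
};
```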
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4431
Differential Revision: D10055515
Pulled By: yiwu-arbug
fbshipit-source-id: 39c0f95f8e9ff1c7ca3a10787502a17f258d2334
Summary:
I wrote a couple tests using the public API to expose/prevent the bugs we talked. In particular,
- When files have overlapping endpoints and a range tombstone spans them, ensure the largest key does not reappear to readers. This was happening due to a bug that skipped writing range tombstones to an output file when their begin key exactly matched the file's largest key.
- When a tombstone spans multiple atomic compaction units, ensure newer keys do not disappear by being compacted beneath it. This happened due to a range tombstone appearing untruncated to readers when it spanned files with overlapping endpoints, even if it extended into files without overlapping endpoints (i.e., different atomic compaction units).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4476
Differential Revision: D10286001
Pulled By: ajkr
fbshipit-source-id: bb5ca51d0f90812fb37bfe1d01aec93f7eda55aa
Summary:
There is a bug when the write queue leader is blocked on a write
delay/stop, and the queue has writers with WriteOptions::no_slowdown set
to true. They are not woken up until the write stall is cleared.
The fix introduces a dummy writer inserted at the tail to indicate a
write stall and prevent further inserts into the queue, and a condition
variable that writers who can tolerate slowdown wait on before adding
themselves to the queue. The leader calls WriteThread::BeginWriteStall()
to add the dummy writer and then walk the queue to fail any writers with
no_slowdown set. Once the stall clears, the leader calls
WriteThread::EndWriteStall() to remove the dummy writer and signal the
condition variable.
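For context, a sketch of the caller-side behavior being fixed: a writer that opts out of stalls via `WriteOptions::no_slowdown` should fail fast with an incomplete status instead of blocking behind the stalled leader.
```
#include <iostream>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Sketch: latency-sensitive writer that refuses to wait out a write stall.
rocksdb::Status PutNoStall(rocksdb::DB* db, const rocksdb::Slice& key,
                           const rocksdb::Slice& value) {
  rocksdb::WriteOptions wo;
  wo.no_slowdown = true;  // fail immediately rather than being delayed/stopped
  rocksdb::Status s = db->Put(wo, key, value);
  if (s.IsIncomplete()) {
    // The DB is currently stalling writes; caller can retry later or shed load.
    std::cerr << "write stalled, dropping request" << std::endl;
  }
  return s;
}
```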
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4475
Differential Revision: D10285827
Pulled By: anand1976
fbshipit-source-id: 747465e5e7f07a829b1fb0bc1afcd7b93f4ab1a9
Summary:
This avoids a few copies of std::string and other structs
in the context of range-based for loops. Instead of copying
the values for each iteration, use a const reference to avoid
copying.
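The pattern in question, as a tiny before/after sketch:
```
#include <string>
#include <vector>

void Example(const std::vector<std::string>& names) {
  // Before: copies each std::string on every iteration.
  for (auto name : names) {
    (void)name.size();
  }
  // After: binds a const reference, so no per-iteration copy is made.
  for (const auto& name : names) {
    (void)name.size();
  }
}
```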
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4459
Differential Revision: D10282045
Pulled By: sagar0
fbshipit-source-id: 5012e910dca279abd2be847e1fb432d96274edfb
Summary:
To more accurately truncate range tombstones at SST boundaries,
we now represent them in RangeDelAggregator using InternalKeys, which
are end-key-exclusive as they were before this change.
During compaction, "atomic compaction unit boundaries" (the range of
keys contained in neighbouring and overlapping SSTs) are propagated down
to RangeDelAggregator to truncate range tombstones at those boundaries
instead. See https://github.com/facebook/rocksdb/pull/4432#discussion_r221072219 and https://github.com/facebook/rocksdb/pull/4432#discussion_r221138683
for motivating examples.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4432
Differential Revision: D10263952
Pulled By: abhimadan
fbshipit-source-id: 2fe85ff8a02b3a6a2de2edfe708012797a7bd579
Summary:
Currently statistics are supposed to be dumped to the info log at intervals of `options.stats_dump_period_sec`. However, the implementation choice was to bind it to the compaction thread, meaning that if the database has been serving very light traffic, the stats may not get dumped at all.
We decided to separate stats dumping into a new timed thread using `TimerQueue`, which is already used in blob_db. This will allow us to schedule new timed tasks with more deterministic behavior.
Tested with db_bench using `--stats_dump_period_sec=20` in command line:
> LOG:2018/09/17-14:07:45.575025 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
LOG:2018/09/17-14:08:05.643286 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
LOG:2018/09/17-14:08:25.691325 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
LOG:2018/09/17-14:08:45.740989 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
LOG content:
> 2018/09/17-14:07:45.575025 7fe99fbfe700 [WARN] [db/db_impl.cc:605] ------- DUMPING STATS -------
2018/09/17-14:07:45.575080 7fe99fbfe700 [WARN] [db/db_impl.cc:606]
** DB Stats **
Uptime(secs): 20.0 total, 20.0 interval
Cumulative writes: 4447K writes, 4447K keys, 4447K commit groups, 1.0 writes per commit group, ingest: 5.57 GB, 285.01 MB/s
Cumulative WAL: 4447K writes, 0 syncs, 4447638.00 writes per sync, written: 5.57 GB, 285.01 MB/s
Cumulative stall: 00:00:0.012 H:M:S, 0.1 percent
Interval writes: 4447K writes, 4447K keys, 4447K commit groups, 1.0 writes per commit group, ingest: 5700.71 MB, 285.01 MB/s
Interval WAL: 4447K writes, 0 syncs, 4447638.00 writes per sync, written: 5.57 MB, 285.01 MB/s
Interval stall: 00:00:0.012 H:M:S, 0.1 percent
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4382
Differential Revision: D9933051
Pulled By: miasantreble
fbshipit-source-id: 6d12bb1e4977674eea4bf2d2ac6d486b814bb2fa
Summary:
- Fix DBImpl API race condition
The timeline of execution flow is as follow:
```
timeline user_thread1 user_thread2
t1 | cfh = GetColumnFamilyHandleUnlocked(0)
t2 | id1 = cfh->GetID()
t3 | GetColumnFamilyHandleUnlocked(1)
t4 | id2 = cfh->GetID()
V
```
The original implementation returns a pointer to a stateful member variable, so the returned `ColumnFamilyHandle` will be changed when another thread calls `GetColumnFamilyHandleUnlocked` with a different `column family id`.
- Expose ColumnFamily ID to compaction event listener
- Fix the return status of `DBImpl::GetLatestSequenceForKey`
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4391
Differential Revision: D10221243
Pulled By: yiwu-arbug
fbshipit-source-id: dec60ee9ff0c8261a2f2413a8506ec1063991993
Summary:
The controller you requested could not be found. PTAL
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4466
Differential Revision: D10241358
Pulled By: yiwu-arbug
fbshipit-source-id: 99664eb286860a6c8844d50efeb0ef6f0e10dd1e
Summary:
It also renames InstallMemtableFlushResults to MaybeInstallMemtableFlushResults to clarify its contract.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4464
Differential Revision: D10224918
Pulled By: maysamyabandeh
fbshipit-source-id: 04e3f2d8542002cb9f8010cb436f5152751b3cbe
Summary:
The contract of snprintf says that it returns "The number of characters that would have been written if n had been sufficiently large" http://www.cplusplus.com/reference/cstdio/snprintf/
The existing code, however, assumed that the return value is the actual number of written bytes and used that to reposition the starting point for the next call to snprintf. This leads to a buffer overflow when the last call to snprintf has filled up the buffer.
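A minimal sketch of the pattern and the fix: clamp the snprintf return value to the remaining space before advancing the write position.
```
#include <algorithm>
#include <cstddef>
#include <cstdio>

// Sketch: append several fields into a fixed-size buffer without overflowing.
// std::snprintf returns the length that *would* have been written, so the
// cursor must be clamped to the remaining space before advancing.
void FormatFields(char* buf, std::size_t len, int a, int b, int c) {
  std::size_t pos = 0;
  const int vals[] = {a, b, c};
  for (int v : vals) {
    if (pos >= len) break;  // buffer already full
    int n = std::snprintf(buf + pos, len - pos, "%d,", v);
    if (n < 0) break;  // encoding error
    pos += std::min(static_cast<std::size_t>(n), len - pos);  // clamp
  }
}
```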
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4465
Differential Revision: D10224080
Pulled By: maysamyabandeh
fbshipit-source-id: 40f44e122d15b0db439812a0a361167cf012de3e
Summary:
This fix is for `level == 0` in `GetOverlappingInputs()`:
- In `GetOverlappingInputs()`, if `level == 0`, it has potential
risk of overflow if `i == 0`.
- Optimize the process when `expand = true`; the expected complexity
can be reduced to O(n).
Signed-off-by: JiYou <jiyou09@gmail.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4385
Differential Revision: D10181001
Pulled By: riversand963
fbshipit-source-id: 46eef8a1d1605c9329c164e6471cd5c5b6de16b5
Summary:
Before running CompactFilesTest.SentinelCompressionType, we should check
whether zlib and snappy are supported.
CompactFilesTest.SentinelCompressionType is a newly added test. Compilation and
linking with different options, e.g. COMPILE_WITH_TSAN, COMPILE_WITH_ASAN, etc.
lead to generation of different binaries. On the one hand, it's not clear why
zlib or snappy is present under ASAN, but not under TSAN. On the other hand,
changing the compilation flags for TSAN or ASAN seems a bigger change worth much
more attention. To unblock the cont-runs, I suggest that we simply add these
two checks at the beginning of the test, as we did for
GeneralTableTest.ApproximateOffsetOfCompressed in table/table_test.cc.
Future actions include investigating the absence of zlib and snappy when
compiling with TSAN, i.e. COMPILE_WITH_TSAN=1, if necessary.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4443
Differential Revision: D10140935
Pulled By: riversand963
fbshipit-source-id: 62f96d1e685386accd2ef0b98f6f754d3fd67b3e
Summary:
In DBCompactionTestWithParam::ManualLevelCompactionOutputPathId, there is
a race condition between `DBTestBase::GetSstFileCount` and
`DBImpl::PurgeObsoleteFiles`. The following graph explains why.
```
Timeline db_compact_test_t bg_flush_t bg_compact_t
| [initiate bg flush and
| start waiting]
| flush
| DeleteObsoleteFiles
| [waken up by bg_flush_t which
| signaled in DeleteObsoleteFiles]
|
| [initiate compaction and
| start waiting]
|
| [compact,
| set manual.done to true]
| [signal at the end of
| BackgroundCallFlush]
|
| [waken up by bg_flush_t
| which signaled before
| returning from
| BackgroundCallFlush]
|
| Check manual.done is true
|
| GetSstFileCount <-- race condition --> PurgeObsoleteFiles
V
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4440
Differential Revision: D10122628
Pulled By: riversand963
fbshipit-source-id: 3ede73c39fee6ad804dc6ac1ed84759c7e63977f
Summary:
Previously `CompactFiles` with `CompressionType::kDisableCompressionOption` caused program to crash on assertion failure. This PR fixes the crash by adding support for that setting. Now, that setting will cause RocksDB to choose compression according to the column family's options.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4438
Differential Revision: D10115761
Pulled By: ajkr
fbshipit-source-id: a553c6fa76fa5b6f73b0d165d95640da6f454122
Summary:
`FindFile()` and `FindFileInRange()` actually work the same as
`std::lower_bound()`. Use `std::lower_bound()` to reduce the
repeated code, as sketched below.
- change `FindFile()` and `FindFileInRange()` to use `std::lower_bound()`
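An illustrative (non-RocksDB) sketch of the equivalence, using a hypothetical file-metadata stand-in: a vector sorted by largest key can be searched for the first file whose largest key is >= the target with `std::lower_bound` and a custom comparator.
```
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for file metadata sorted by `largest` key.
struct FileMeta {
  std::string smallest;
  std::string largest;
};

// Return the index of the first file whose largest key is >= `key`,
// or files.size() if no such file exists -- the same contract FindFile()
// provides via its hand-written binary search.
size_t FindFileIndex(const std::vector<FileMeta>& files,
                     const std::string& key) {
  auto it = std::lower_bound(
      files.begin(), files.end(), key,
      [](const FileMeta& f, const std::string& k) { return f.largest < k; });
  return static_cast<size_t>(it - files.begin());
}
```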
Signed-off-by: JiYou <jiyou09@gmail.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4372
Differential Revision: D9919677
Pulled By: ajkr
fbshipit-source-id: f74aaa30e2f80e410e299c5a5bca4eaf2a7a26de
Summary:
Improve log handling when avoid_flush_during_recovery=true.
1. Restore total_log_size_ after recovery by summing up existing log sizes. Fixes #4253.
2. Truncate the last existing log, since this log can contain preallocated space and it would be a waste to keep that space. This avoids a crash-looping user application causing a lot of logs with non-trivial size to be created and ultimately take up all disk space.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4405
Differential Revision: D9953933
Pulled By: yiwu-arbug
fbshipit-source-id: 967780fee8acec7f358b6eb65190fb4684f82e56
Summary:
The CollapsedRangeDelMap was entirely mishandling tombstones at the same
sequence number when the tombstones did not have identical start and end
keys. Such tombstones are common since 90fc40690, which causes
tombstones to be split during compactions.
For example, if the tombstone [a, c) @ 1 lies across a compaction
boundary at b, it will be split into [a, b) @ 1 and [b, c) @ 1. Without
this patch, the collapsed range deletion map would look like this:
a -> 1
b -> 1
c -> 0
Notice how the b -> 1 entry is redundant. When the tombstones overlap,
the problem is even worse. Consider tombstones [a, c) @ 1 and [b, d) @
1, which produces this map without this patch:
a -> 1
b -> 1
c -> 0
d -> 0
This map is corrupt, as a map can never contain adjacent sentinel (zero)
entries. When the iterator advances from b to c, it will notice that c
is a sentinel entry and skip to d--but d is also a sentinel entry! Asking
what tombstone this iterator points to will trigger an assertion, as it
is not pointing to a valid tombstone.
/cc ajkr
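To make the invariant concrete, here is a toy sketch of the collapsed representation (a `std::map` from start key to sequence number, where 0 is a sentinel meaning "uncovered from here on") and the point lookup on it; this illustrates the description above and is not the actual CollapsedRangeDelMap code.
```
#include <cstdint>
#include <iterator>
#include <map>
#include <string>

// Toy collapsed map: start key -> seqnum of the tombstone covering
// [key, next key), with 0 acting as a sentinel ("no tombstone from here on").
// The invariant described above: no two adjacent entries may both be 0.
using CollapsedMap = std::map<std::string, uint64_t>;

// Return the seqnum of the tombstone covering `key`, or 0 if uncovered.
uint64_t CoveringSeqnum(const CollapsedMap& m, const std::string& key) {
  auto it = m.upper_bound(key);   // first entry with start key > `key`
  if (it == m.begin()) return 0;  // `key` precedes all tombstones
  return std::prev(it)->second;   // entry whose range contains `key`
}

// E.g. for [a, c) @ 1 and [b, d) @ 1 the correct collapsed form is
// {{"a", 1}, {"d", 0}}: CoveringSeqnum(m, "b") == 1, CoveringSeqnum(m, "d") == 0.
```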
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4424
Differential Revision: D10039248
Pulled By: abhimadan
fbshipit-source-id: 6d737c1e88d60e80cf27286726627ba44463e7f4
Summary:
Improve time measurements for AddTombstones to only include the
call and not the VectorIterator setup. Also add a new
add_tombstones_per_run flag to call AddTombstones multiple times per
aggregator, which will help simulate more realistic workloads.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4395
Differential Revision: D9996811
Pulled By: abhimadan
fbshipit-source-id: 5865a95c323fbd9b3606493013664b4890fe5a02
Summary:
Make the CompactOnDeletionCollectorFactory class public, and provide
methods to update the window size and deletion trigger params. These
will take effect on subsequent created SST files.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4403
Differential Revision: D9976857
Pulled By: anand1976
fbshipit-source-id: 31dbf0511c12fa2bb9b2a7ba620079e0ee09cf48
Summary:
Add a unit test for range collapsing when non-default comparator is used. This exposes the bug fixed in #4386.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4388
Differential Revision: D9918252
Pulled By: ajkr
fbshipit-source-id: 99501b96b251eab41791a7e33b27055ee36c5c39
Summary:
The Comparator passed to CollapsedRangeDelMap was not used for
operator less of the std::map `rep_` object contained in
CollapsedRangeDelMap. So the map was always sorted using the
default ByteWiseComparator, which seems wrong.
Passing the specified Comparator through for usage in that map
object fixes actual problems we were seeing with RangeDelete operations
that do not delete keys as expected when using a custom Comparator.
I found that the tests in current master crash when I run them locally,
both with and without my patch, at the very same location. I therefore
don't know if the patch breaks something else, but it seems to fix
RangeDeletion issues in our product that uses RocksDB.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4386
Differential Revision: D9916506
Pulled By: ajkr
fbshipit-source-id: 27bff8c775831f089dde8c5289df7343d88b2d66
Summary:
Value delta encoding in format_version 4 requires the differences between the size of two consecutive handles to be sent to BlockBuilder::Add. This applies not only to indexes on blocks but also the indexes on indexes and filters in partitioned indexes and filters respectively. The patch fixes a bug where the partitioned filters would encode the entire size of the handle rather than the difference of the size with the last size.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4381
Differential Revision: D9879505
Pulled By: maysamyabandeh
fbshipit-source-id: 27a22e49b482b927fbd5629dc310c46d63d4b6d1
Summary:
To measure the results of upcoming DeleteRange v2 work, this commit adds
simple benchmarks for RangeDelAggregator. It measures the average time
for AddTombstones and ShouldDelete calls.
Using this to compare the results before #4014 and on the latest master (using the default arguments) produces the following results:
Before #4014:
```
=======================
Results:
=======================
AddTombstones: 1356.28 us
ShouldDelete: 0.401732 us
```
Latest master:
```
=======================
Results:
=======================
AddTombstones: 740.82 us
ShouldDelete: 0.383271 us
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4363
Differential Revision: D9881676
Pulled By: abhimadan
fbshipit-source-id: 793e7d61aa4b9d47eb917bbcc03f08695b5e5442
Summary:
1. Add override keyword to overridden virtual functions in EventListener
2. Fix a memory corruption that can happen during DB shutdown when in
read-only mode due to a background write error
3. Fix uninitialized buffers in error_handler_test.cc that cause
valgrind to complain
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4375
Differential Revision: D9875779
Pulled By: anand1976
fbshipit-source-id: 022ede1edc01a9f7e21ecf4c61ef7d46545d0640
Summary:
This commit implements automatic recovery from a Status::NoSpace() error
during background operations such as write callback, flush and
compaction. The broad design is as follows -
1. Compaction errors are treated as soft errors and don't put the
database in read-only mode. A compaction is delayed until enough free
disk space is available to accomodate the compaction outputs, which is
estimated based on the input size. This means that users can continue to
write, and we rely on the WriteController to delay or stop writes if the
compaction debt becomes too high due to persistent low disk space
condition
2. Errors during write callback and flush are treated as hard errors,
i.e. the database is put in read-only mode and goes back to read-write
only after certain recovery actions are taken.
3. Both types of recovery rely on the SstFileManagerImpl to poll for
sufficient disk space. We assume that there is a 1-1 mapping between an
SFM and the underlying OS storage container. For cases where multiple
DBs are hosted on a single storage container, the user is expected to
allocate a single SFM instance and use the same one for all the DBs. If
no SFM is specified by the user, DBImpl::Open() will allocate one, but
this will be one per DB and each DB will recover independently. The
recovery implemented by SFM is as follows -
a) On the first occurrence of an out of space error during compaction,
subsequent
compactions will be delayed until the disk free space check indicates
enough available space. The required space is computed as the sum of
input sizes.
b) The free space check requirement will be removed once the amount of
free space is greater than the size reserved by in progress
compactions when the first error occurred
c) If the out of space error is a hard error, a background thread in
SFM will poll for sufficient headroom before triggering the recovery
of the database and putting it back in read-write mode. The headroom is
calculated as the sum of the write_buffer_size of all the DB instances
associated with the SFM
4. EventListener callbacks will be called at the start and completion of
automatic recovery. Users can disable the auto recovery in the start
callback, and later initiate it manually by calling DB::Resume()
Todo:
1. More extensive testing
2. Add disk full condition to db_stress (follow-on PR)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4164
Differential Revision: D9846378
Pulled By: anand1976
fbshipit-source-id: 80ea875dbd7f00205e19c82215ff6e37da10da4a
Summary:
Because `base_files` and `added_files` are both sorted, using a merge
operation on these two sorted arrays is more efficient. The complexity
is reduced to linear time (see the sketch after this list).
- Optimize the merge complexity.
- Move the `NDEBUG` check of sorted `added_files` out of the merge process.
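A generic sketch of the linear-time merge under the assumptions above (two already-sorted inputs and a shared comparator); it stands in for the base/added file merge rather than reproducing the actual VersionBuilder code.
```
#include <algorithm>
#include <iterator>
#include <vector>

// Sketch: merge two sorted vectors in O(n + m) instead of inserting the
// added elements one by one, which costs O(n * m) or O((n + m) log(n + m)).
template <typename T, typename Cmp>
std::vector<T> MergeSorted(const std::vector<T>& base,
                           const std::vector<T>& added, Cmp cmp) {
  std::vector<T> out;
  out.reserve(base.size() + added.size());
  std::merge(base.begin(), base.end(), added.begin(), added.end(),
             std::back_inserter(out), cmp);
  return out;
}
```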
Signed-off-by: JiYou <jiyou09@gmail.com>
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4366
Differential Revision: D9833592
Pulled By: ajkr
fbshipit-source-id: dd32b67ebdca4c20e5e9546ab8082cecefe99fd0
Summary:
The code is dead in RocksDB as `log::Reader::initial_offset_` is always zero. We should delete it so we don't have to maintain it like in #4359.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4362
Differential Revision: D9817829
Pulled By: ajkr
fbshipit-source-id: 474a2c679e5bd273b40608f3a5332931d9eefe6d
Summary:
Please consider this small PR providing access to the `MemoryUsage::GetApproximateMemoryUsageByType` function in the plain C API. I'm currently working on a Go application and trying to investigate the reasons for high memory consumption (#4313). Go [wrappers](https://github.com/tecbot/gorocksdb) are built on top of the RocksDB C API. According to #706, `MemoryUsage::GetApproximateMemoryUsageByType` is considered the best option to get database internal memory usage stats, but it wasn't supported in the C API yet.
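For reference, the underlying C++ call looks roughly like the sketch below (from `rocksdb/utilities/memory_util.h`); the exact C wrapper names added by this PR are not shown here, and the class and signature details should be double-checked against the header.
```
#include <iostream>
#include <map>
#include <unordered_set>
#include <vector>
#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/utilities/memory_util.h"

// Sketch: report approximate memory usage by type for one DB instance.
void ReportMemoryUsage(rocksdb::DB* db) {
  std::vector<rocksdb::DB*> dbs{db};
  std::unordered_set<const rocksdb::Cache*> caches;  // optionally add block caches
  std::map<rocksdb::MemoryUtil::UsageType, uint64_t> usage;
  rocksdb::Status s =
      rocksdb::MemoryUtil::GetApproximateMemoryUsageByType(dbs, caches, &usage);
  if (s.ok()) {
    std::cout << "memtable total: "
              << usage[rocksdb::MemoryUtil::kMemTableTotal] << std::endl;
  }
}
```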
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4340
Differential Revision: D9655135
Pulled By: ajkr
fbshipit-source-id: a3d2f3f47c143ae75862fbcca2f571ea1b49e14a
Summary:
`RangeDelAggregator::AddTombstones` contained an assertion which stated that, if a range tombstone extended past the largest key in the sstable, then `FileMetaData::largest` must have a sentinel sequence number of `kMaxSequenceNumber`, which implies that the tombstone's end key is safe to truncate. However, `largest` will not be a sentinel key when the next sstable in the level's smallest key is equal to the current sstable's largest key, which caused the assertion to fail.
The assertion must hold for the truncation to be safe, so it has been moved to an additional check on end-key truncation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4356
Differential Revision: D9760891
Pulled By: abhimadan
fbshipit-source-id: 7c20c3885cd919dcd14f291f88fd27aa33defebc
Summary:
TransactionOptions::skip_concurrency_control allows pessimistic transactions to skip the overhead of concurrency control. This could be as an optimization if the application knows that the transaction would not have any conflict with concurrent transactions. It is currently used during recovery assuming (i) application guarantees no conflict between prepared transactions in the WAL (ii) application guarantees that recovered transactions will be rolled back/commit before new transactions start.
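A minimal sketch of how an application might opt in, assuming a pessimistic TransactionDB is already open:
```
#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

// Sketch: start a transaction that skips lock acquisition, as described above.
// Only safe if the application guarantees there are no conflicting
// concurrent transactions (e.g. during recovery).
rocksdb::Transaction* BeginNoCCTxn(rocksdb::TransactionDB* txn_db) {
  rocksdb::WriteOptions write_opts;
  rocksdb::TransactionOptions txn_opts;
  txn_opts.skip_concurrency_control = true;
  return txn_db->BeginTransaction(write_opts, txn_opts);
}
```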
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4346
Differential Revision: D9759149
Pulled By: maysamyabandeh
fbshipit-source-id: f896e84fa58b0b584be904c7fd3883a41ea3215b
Summary:
In C++ 11, the order of argument and move evaluation in a statement such
as below is unspecified -
foo(a.b).bar(std::move(a))
The compiler is free to evaluate std::move(a) and construct bar's argument from it first, in which case the subsequent read of a.b sees an unspecified (moved-from) value.
In C++ 17, this will be safe if a draft proposal around function
chaining rules is accepted.
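A self-contained illustration of the hazard (hypothetical types, not RocksDB code): under C++11/14 the two argument evaluations below are unsequenced, so `a.b` may be read after `a` has already been moved from.
```
#include <string>
#include <utility>

struct Holder {
  std::string b;
};

struct Builder {
  explicit Builder(const std::string& /*name*/) {}
  Builder& bar(Holder /*h*/) { return *this; }
};

Builder foo(const std::string& name) { return Builder(name); }

void Hazard() {
  Holder a{"payload"};
  // Unspecified order in C++11/14: the Holder argument may be move-constructed
  // from `a` before `a.b` is evaluated, so foo() may see a moved-from string.
  foo(a.b).bar(std::move(a));

  // Safe rewrite: force the read of a.b to happen first.
  Holder a2{"payload"};
  auto tmp = foo(a2.b);
  tmp.bar(std::move(a2));
}
```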
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4348
Differential Revision: D9688810
Pulled By: anand1976
fbshipit-source-id: e4651d0ca03dcf007e50371a0fc72c0d1e710fb4
Summary:
As you know, almost all compilers support the "pragma once" directive instead of using include guards. To keep consistency between header files, all header files are edited accordingly.
Besides this, some warnings about loss of data are also fixed.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4339
Differential Revision: D9654990
Pulled By: ajkr
fbshipit-source-id: c2cf3d2d03a599847684bed81378c401920ca848
Summary:
This is a followup to #4311. Checking `!RangeDelAggregator::IsEmpty()` before opening a dedicated range tombstone SST did not properly prevent empty SSTs from being generated. That's because it relies on `CollapsedRangeDelMap::Size`, which had an underflow bug when the map was empty. This PR fixes that underflow bug.
Also fixed an uninitialized variable in db_stress.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4336
Differential Revision: D9600080
Pulled By: ajkr
fbshipit-source-id: bc6980ca79d2cd01b825ebc9dbccd51c1a70cfc7
Summary:
Basically, at the moment it seems possible to cause a write stall by calling flush (either manually via DB::Flush(), or from the Backup Engine directly calling FlushMemTable()) while a background flush may already be happening.
One of the ways to fix it: in DBImpl::CompactRange() we already check for a possible stall and delay the flush if needed before we actually proceed to call FlushMemTable(). We can simply move this delay logic to a separate method and call it from FlushMemTable.
This is a draft patch for a first look; we still need to check tests/update SyncPoints and most certainly would need to add an allow_write_stall option to FlushOptions().
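Assuming the proposed option lands in `FlushOptions()`, caller-side usage might look like this sketch; the `allow_write_stall` name comes from the description above and is not guaranteed final:
```
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Sketch: request a manual flush but let RocksDB wait out a potential
// write stall first instead of piling another immutable memtable on top.
rocksdb::Status SafeManualFlush(rocksdb::DB* db) {
  rocksdb::FlushOptions fo;
  fo.wait = true;
  fo.allow_write_stall = false;  // proposed option: delay flush if it would stall writes
  return db->Flush(fo);
}
```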
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4297
Differential Revision: D9420705
Pulled By: mikhail-antonov
fbshipit-source-id: f81d206b55e1d7b39e4dc64242fdfbceeea03fcc
Summary: For the CURRENT file forged during checkpoint, we were forgetting to `fsync` or `fdatasync` it after its creation. This PR fixes it.
Differential Revision: D9525939
Pulled By: ajkr
fbshipit-source-id: a505483644026ee3f501cfc0dcbe74832165b2e3
Summary:
According to 4848bd0c4e/db/log_reader.cc (L355), the original text is misleading when describing the layout of RecyclableLogHeader.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4315
Differential Revision: D9505284
Pulled By: riversand963
fbshipit-source-id: 79994c37a69e7003f03453e7efc0186feeafa609
Summary:
This PR fixes issue 3842. We drop deletion markers iff
1. We are the bottom most level AND
2. All other occurrences of the key are in the same snapshot range as the delete
I've also enhanced db_stress_test to add an option that does a full compare of the keys. This is done by a single thread (thread # 0). For tests I've run (so far)
make check -j64
db_stress
db_stress --acquire_snapshot_one_in=1000 --ops_per_thread=100000 /* to verify that new code doesn't break existing tests */
./db_stress --compare_full_db_state_snapshot=true --acquire_snapshot_one_in=1000 --ops_per_thread=100000 /* to verify new test code */
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4289
Differential Revision: D9491165
Pulled By: shrikanthshankar
fbshipit-source-id: ce144834f31736c189aaca81bed356ba990331e2
Summary:
RocksDB currently queues individual column family for flushing. This is not sufficient to support the needs of some applications that want to enforce order/dependency between column families, given that multiple foreground and background activities can trigger flushing in RocksDB.
This PR aims to address this limitation. Each flush request is described as a `FlushRequest` that can contain multiple column families. A background flushing thread pops one flush request from the queue at a time and processes it.
This PR does not enable atomic_flush yet, but is a subset of [PR 3752](https://github.com/facebook/rocksdb/pull/3752).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/3952
Differential Revision: D8529933
Pulled By: riversand963
fbshipit-source-id: 78908a21e389a3a3f7de2a79bae0cd13af5f3539
Summary:
I have a PR to start calling `OnTableFileCreated` for empty SSTs: #4307. However, it is a behavior change so should not go into a patch release.
This PR adds back a check to make sure range deletions at least exist before starting file creation. This PR should be safe to backport to earlier versions.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4311
Differential Revision: D9493734
Pulled By: ajkr
fbshipit-source-id: f0d43cda4cfd904f133cfe3a6eb622f52a9ccbe8
Summary:
The API comment on `OnTableFileCreationStarted` (b6280d01f9/include/rocksdb/listener.h (L331-L333)) led users to believe a call to `OnTableFileCreationStarted` will always be matched with a call to `OnTableFileCreated`. However, we were skipping the `OnTableFileCreated` call in one case: no error happens but also no file is generated since there's no data.
This PR adds the call to `OnTableFileCreated` for that case. The filename will be "(nil)" and the size will be zero.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4307
Differential Revision: D9485201
Pulled By: ajkr
fbshipit-source-id: 2f077ec7913f128487aae2624c69a50762394df6
Summary:
Memtables are selected for flushing by the flush job. Currently we
have a listener which is invoked when memtables for a column family are
flushed. That listener does not indicate which memtable was flushed in
the notification. If clients want to know whether particular data in the
memtable was retired, there is no straightforward way to know this.
This method will help users who implement a memtablerep factory and extend
the interface for memtablerep, to know if the data in the memtable was
retired.
Another option that was tried was to depend on the memtable destructor
being called after flush to mark that data was persisted. This works all
the time, but sometimes there can be huge delays between the actual flush
happening and the memtable getting destroyed. Hence, anyone who is
waiting for data to persist would have to wait that much longer.
It is expected that anyone implementing this method returns
quickly, as it blocks RocksDB.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4304
Reviewed By: riversand963
Differential Revision: D9472312
Pulled By: gdrane
fbshipit-source-id: 8e693308dee749586af3a4c5d4fcf1fa5276ea4d
Summary:
We want to sample the file I/O issued by RocksDB and report the function calls. This requires us to include the file paths otherwise it's hard to tell what has been going on.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4039
Differential Revision: D8670178
Pulled By: riversand963
fbshipit-source-id: 97ee806d1c583a2983e28e213ee764dc6ac28f7a
Summary:
User reported (https://github.com/facebook/rocksdb/issues/4168) that when opening RocksDB in read-only mode, some statistics are not correctly reported. After some investigation, we believe the following counters are indeed not reported during a Get() call in a read-only DB:
rocksdb.memtable.hit
rocksdb.memtable.miss
rocksdb.number.keys.read
rocksdb.bytes.read
As well as histogram rocksdb.bytes.per.read
and perf context get_read_bytes
This PR adds the necessary counter reporting logic in the Get() call path.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4260
Differential Revision: D9476431
Pulled By: miasantreble
fbshipit-source-id: 7ab409d4e59df05d09ae8b69fe75554e5aa240d6
Summary:
Clang analyze is not happy with two pieces of code, reporting "Potential memory leak". No idea what the problem is, but slightly changing the code makes clang happy.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4292
Differential Revision: D9413555
Pulled By: siying
fbshipit-source-id: 9428c9d3664530c72129feefd135ee63d8386137
Summary:
During recovery, RocksDB is able to handle version edits that belong to group commits.
This PR is a subset of [PR 3752](https://github.com/facebook/rocksdb/pull/3752)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/3945
Differential Revision: D8529122
Pulled By: riversand963
fbshipit-source-id: 57cb0f9cc55ecca684a837742d6626dc9c07f37e
Summary:
This PR addresses issue #3865 and implements the following approach to fix it:
- adds `MergeContext::GetOperandsDirectionForward` and `MergeContext::GetOperandsDirectionBackward` to query merge operands in a specific order
- `MergeContext::GetOperands` becomes a shortcut for `MergeContext::GetOperandsDirectionForward`
- pass `MergeContext::GetOperandsDirectionBackward` to `MergeOperator::ShouldMerge` and document the order
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4266
Differential Revision: D9360750
Pulled By: sagar0
fbshipit-source-id: 20cb73ff017760b062ecdcf4382560767086e092
Summary:
Add a unit test to check that iterators release data blocks after it has moved away from it. Verify the same for compaction input iterators.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4170
Differential Revision: D8962513
Pulled By: siying
fbshipit-source-id: 05a5b604d7d29887fb488f2cda7286f554a14407
Summary:
Revert this change. Not generating the OnTableFileCreated() notification for a 0 byte SST on flush breaks the assumption that every OnTableFileCreationStarted() notification is followed by a corresponding OnTableFileCreated().
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4263
Differential Revision: D9285623
Pulled By: anand1976
fbshipit-source-id: 808c3dcd498b4b4f4ed4be947a29a24b2296aa8d