Commit Graph

6995 Commits

Author SHA1 Message Date
Maysam Yabandeh
011fc8be2d WritePrepared Txn: add stats
Summary:
Adding some stats that would be helpful for monitoring whether the DB has gone into unlikely states that would hurt performance. These mostly occur when we end up needing to acquire a mutex.
Closes https://github.com/facebook/rocksdb/pull/3683

Differential Revision: D7529393

Pulled By: maysamyabandeh

fbshipit-source-id: f7d36279a8f39bd84d8ddbf64b5c97f670c5d6d9
2018-04-12 13:56:07 -07:00
Maysam Yabandeh
59b4f25d2d WritePrepared Txn: add write_committed option to dump_wal
Summary:
Currently dump_wal cannot print the prepared records from a WAL generated by the WRITE_PREPARED write policy, since the handler's default reaction is to return NotSupported when WRITE_PREPARED markers are encountered. This patch lets the admin pass the --write_committed=false option, which is forwarded to the handler. Note that DBFileDumperCommand and DBDumperCommand are still not updated by this patch: firstly they are not urgent, and secondly we need to revise this approach later when we also add WRITE_UNPREPARED markers, so I leave that for future work.

Tested by running it on a WAL generated by WRITE_PREPARED:
$ ./ldb dump_wal --walfile=/dev/shm/dbbench/000003.log  | grep BEGIN_PREARE | head -1
1,2,70,0,BEGIN_PREARE
$ ./ldb dump_wal --walfile=/dev/shm/dbbench/000003.log --write_committed=false | grep BEGIN_PREARE | head -1
1,2,70,0,BEGIN_PREARE PUT(0) : 0x30303031313330313938 PUT(0) : 0x30303032353732313935 END_PREPARE(0x74786E31313535383434323738303738363938313335312D30)
Closes https://github.com/facebook/rocksdb/pull/3682

Differential Revision: D7522090

Pulled By: maysamyabandeh

fbshipit-source-id: a0332207261c61e18b2f9dfbe9feecd9a1339aca
2018-04-12 13:55:55 -07:00
Maysam Yabandeh
5b66eee2cb Stats for false positive rate of full filters
Summary:
Adds two stats to allow us measuring the false positive rate of full filters:
- The total count of positives: rocksdb.bloom.filter.full.positive
- The total count of true positives: rocksdb.bloom.filter.full.true.positive
Note the term "full" in the stat names, indicating that they are only meaningful for full filters. Block-based filters are to be deprecated soon, and supporting them is not worth the additional cost of if-then-else branches.

Closes #3680

Tested by:
$ ./db_bench -benchmarks=fillrandom  -db /dev/shm/rocksdb-tmpdb --num=1000000 -bloom_bits=10
$ ./db_bench -benchmarks="readwhilewriting"  -db /dev/shm/rocksdb-tmpdb --statistics -bloom_bits=10 --duration=60 --num=2000000 --use_existing_db 2>&1 > /tmp/full.log
$ grep filter.full /tmp/full.log
rocksdb.bloom.filter.full.positive COUNT : 3628593
rocksdb.bloom.filter.full.true.positive COUNT : 3536026
which gives a false positive rate of about 2.5%.
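A minimal C++ sketch of reading these tickers programmatically and deriving the false positive rate from them; the ticker enum names (BLOOM_FILTER_FULL_POSITIVE, BLOOM_FILTER_FULL_TRUE_POSITIVE) are assumed to match the stat names above:
```
#include <cstdint>
#include <iostream>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/statistics.h"

// Sketch: compute the full-filter false positive rate from the two new tickers.
int main() {
  rocksdb::Options options;
  options.statistics = rocksdb::CreateDBStatistics();
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/dev/shm/rocksdb-tmpdb", &db);
  if (!s.ok()) return 1;
  // ... issue reads against the bloom-filtered DB here ...
  uint64_t positive =
      options.statistics->getTickerCount(rocksdb::BLOOM_FILTER_FULL_POSITIVE);
  uint64_t true_positive =
      options.statistics->getTickerCount(rocksdb::BLOOM_FILTER_FULL_TRUE_POSITIVE);
  if (positive > 0) {
    double fp_rate = 1.0 - static_cast<double>(true_positive) / positive;
    std::cout << "full filter false positive rate: " << fp_rate << std::endl;
  }
  delete db;
  return 0;
}
```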
Closes https://github.com/facebook/rocksdb/pull/3681

Differential Revision: D7517570

Pulled By: maysamyabandeh

fbshipit-source-id: 630ab1a473afdce404916d297035b6318de4c052
2018-04-12 13:55:45 -07:00
Maysam Yabandeh
f2827845f9 WritePrepared Txn: smallest_prepare optimization
Summary:
This is an optimization to reduce lookups in the CommitCache when querying IsInSnapshot. The optimization takes the smallest uncommitted sequence number at the time the snapshot was taken; if the sequence number of the read data is lower than that number, the data is assumed to be committed.
To implement this optimization two changes are required: i) the AddPrepared function must be called sequentially to avoid out-of-order insertion into the PrepareHeap (otherwise the top of the heap would not indicate the smallest prepare going forward), and ii) non-2PC transactions also call AddPrepared if they do not commit in one step.
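A hypothetical, self-contained sketch of the fast path described above (names are illustrative; this is not the actual WritePreparedTxnDB code):
```
#include <cstdint>

// Illustrative only: if the data's seq is below the smallest uncommitted seq
// captured when the snapshot was taken, the data must already be committed,
// so the CommitCache lookup can be skipped.
bool IsInSnapshotFastPath(uint64_t data_seq, uint64_t snapshot_seq,
                          uint64_t min_uncommitted_at_snapshot,
                          bool* need_commit_cache_lookup) {
  if (data_seq < min_uncommitted_at_snapshot) {
    *need_commit_cache_lookup = false;
    return data_seq <= snapshot_seq;  // committed, and visible if not newer
  }
  *need_commit_cache_lookup = true;  // fall back to the CommitCache
  return false;
}
```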
Closes https://github.com/facebook/rocksdb/pull/3649

Differential Revision: D7388630

Pulled By: maysamyabandeh

fbshipit-source-id: b79506238c17467d590763582960d4d90181c600
2018-04-12 13:55:36 -07:00
Maysam Yabandeh
140c308f6f Skip deleted WALs during recovery
Summary:
This patch records the deleted WAL numbers in the manifest so that they, and any WAL older than them, are ignored during recovery. This avoids scenarios where there is a gap in the WAL files fed to the recovery procedure; such a gap can arise from, for example, out-of-order WAL deletion. A gap can break 2PC recovery when the prepare and commit entries are placed in two separate WALs: the WAL with the commit entry may never be processed, breaking the 2PC recovery logic.
Closes https://github.com/facebook/rocksdb/pull/3488

Differential Revision: D6967893

Pulled By: maysamyabandeh

fbshipit-source-id: 13119feb155a08ab6d4909f437c7a750480dc8a1
2018-04-12 13:55:12 -07:00
Maysam Yabandeh
89d989ed75 WritePrepared Txn: fix a bug in publishing recoverable state seq
Summary:
When using two_write_queue, the published seq and the last allocated sequence could be ahead of the LastSequence, even if both write queues are stopped as in WriteRecoverableState. The patch fixes a bug in WriteRecoverableState in which LastSequence was used as a reference but the result was applied to the last fetched sequence and the last published seq.
Closes https://github.com/facebook/rocksdb/pull/3665

Differential Revision: D7446099

Pulled By: maysamyabandeh

fbshipit-source-id: 1449bed9aed8e9db6af85946efd347cb8efd3c0b
2018-03-29 14:46:41 -07:00
Adam Retter
3cb591954e Allow rocksdbjavastatic to also be built as debug build
Summary: Closes https://github.com/facebook/rocksdb/pull/3654

Differential Revision: D7417948

Pulled By: sagar0

fbshipit-source-id: 9514df9328181e54a6384764444c0c7ce66e7f5f
2018-03-28 16:30:36 -07:00
Maysam Yabandeh
0377ff9dea WritePrepared Txn: make recoverable state visible after flush
Summary:
Currently, if the CommitTimeWriteBatch is set to be used only as state required for recovery, the user cannot see it in the DB until the DB is restarted, even though the state is already inserted into the DB after the memtable flush. It would be useful for debugging to make this state visible to the user after the flush by committing it. The patch does that by invoking a callback that commits the recoverable state.
Closes https://github.com/facebook/rocksdb/pull/3661

Differential Revision: D7424577

Pulled By: maysamyabandeh

fbshipit-source-id: 137f9408662f0853938b33fa440f27f04c1bbf5c
2018-03-28 12:12:08 -07:00
Yanqin Jin
1f5def1653 Fix race condition causing double deletion of ssts
Summary:
Possible interleaved execution of a background compaction thread calling `FindObsoleteFiles (no full scan) / PurgeObsoleteFiles` and a user thread calling `FindObsoleteFiles (full scan) / PurgeObsoleteFiles` can lead to a race condition in which RocksDB attempts to delete a file twice. The second attempt fails and returns `IO error`. This may also happen for other file types, but this PR targets SSTs.
Also add a unit test to verify that this PR fixes the issue.

The newly added unit test `obsolete_files_test` has a test case for this scenario, implemented in `ObsoleteFilesTest#RaceForObsoleteFileDeletion`. `TestSyncPoint`s are used to coordinate the interleaving of the `user_thread` and the background compaction thread. They execute as follows:
```
timeline              user_thread                background_compaction thread
t1   |                                          FindObsoleteFiles(full_scan=false)
t2   |     FindObsoleteFiles(full_scan=true)
t3   |                                          PurgeObsoleteFiles
t4   |     PurgeObsoleteFiles
     V
```
When `user_thread` invokes `FindObsoleteFiles` with full scan, it collects ALL files in the RocksDB directory, including those that the background compaction thread has already collected in its job context. `user_thread` will then see an IO error when trying to delete these files in `PurgeObsoleteFiles`, because the background compaction thread has already deleted them in its own `PurgeObsoleteFiles`.
To fix this, we make RocksDB remember which (SST) files have been found by threads after calling `FindObsoleteFiles` (see `DBImpl#files_grabbed_for_purge_`), as sketched below. When another thread then calls `FindObsoleteFiles` with full scan, it will not collect such files.
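A hypothetical standalone sketch of the "remember what was already grabbed" idea (not the actual DBImpl members):
```
#include <cstdint>
#include <mutex>
#include <unordered_set>

// Illustrative only: a full scan skips file numbers another thread has already
// grabbed for purging, so the same SST is never scheduled for deletion twice.
class ObsoleteFileTracker {
 public:
  // Returns true if the caller may take ownership of deleting this file.
  bool TryGrabForPurge(uint64_t file_number) {
    std::lock_guard<std::mutex> lock(mu_);
    return grabbed_for_purge_.insert(file_number).second;
  }

  // Called after the file has actually been deleted in PurgeObsoleteFiles.
  void ReleaseAfterPurge(uint64_t file_number) {
    std::lock_guard<std::mutex> lock(mu_);
    grabbed_for_purge_.erase(file_number);
  }

 private:
  std::mutex mu_;
  std::unordered_set<uint64_t> grabbed_for_purge_;
};
```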

ajkr could you take a look and comment? Thanks!
Closes https://github.com/facebook/rocksdb/pull/3638

Differential Revision: D7384372

Pulled By: riversand963

fbshipit-source-id: 01489516d60012e722ee65a80e1449e589ce26d3
2018-03-28 10:29:59 -07:00
Sagar Vemuri
90c542347a Update comments about MergeOperator::AllowSingleOperand
Summary:
Updated comments around AllowSingleOperand.
Reason: A couple of users were confused and encountered issues due to not overriding PartialMerge when AllowSingleOperand=true.

I'll also look into modifying the default merge operator implementation so that overriding PartialMerge is not mandatory when AllowSingleOperand=true.
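As a hedged illustration of what the updated comments ask for, here is a sketch of a merge operator that returns true from AllowSingleOperand and therefore also overrides PartialMerge (the operator itself is a made-up example, not part of RocksDB):
```
#include <string>

#include "rocksdb/merge_operator.h"
#include "rocksdb/slice.h"

// Example: concatenates operands with ','. Because AllowSingleOperand()
// returns true, PartialMerge is overridden as well, per the updated comments.
class CsvAppendOperator : public rocksdb::MergeOperator {
 public:
  bool FullMergeV2(const MergeOperationInput& in,
                   MergeOperationOutput* out) const override {
    out->new_value.clear();
    if (in.existing_value != nullptr) {
      out->new_value.assign(in.existing_value->data(), in.existing_value->size());
    }
    for (const rocksdb::Slice& op : in.operand_list) {
      if (!out->new_value.empty()) out->new_value += ',';
      out->new_value.append(op.data(), op.size());
    }
    return true;
  }

  bool PartialMerge(const rocksdb::Slice& /*key*/, const rocksdb::Slice& left,
                    const rocksdb::Slice& right, std::string* new_value,
                    rocksdb::Logger* /*logger*/) const override {
    new_value->assign(left.data(), left.size());
    new_value->push_back(',');
    new_value->append(right.data(), right.size());
    return true;
  }

  bool AllowSingleOperand() const override { return true; }

  const char* Name() const override { return "CsvAppendOperator"; }
};
```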
Closes https://github.com/facebook/rocksdb/pull/3659

Differential Revision: D7422691

Pulled By: sagar0

fbshipit-source-id: 3d075a6ced0120f5d65cb7ae5412936f1862f342
2018-03-27 17:13:53 -07:00
Sagar Vemuri
d687670256 Fix a leak in FilterBlockBuilder when adding prefix
Summary:
Our valgrind continuous test found an interesting leak which got introduced in #3614. We were adding the prefix key before saving the previous prefix start offset, which made the previous prefix offset always incorrect. Fixed it by saving the previous state before adding the key.
Closes https://github.com/facebook/rocksdb/pull/3660

Differential Revision: D7418698

Pulled By: sagar0

fbshipit-source-id: 9933685f943cf2547ed5c553f490035a2fa785cf
2018-03-27 15:13:56 -07:00
Anand Ananthabhotla
f9f4d40f93 Align SST file data blocks to avoid spanning multiple pages
Summary:
Provide a block_align option in BlockBasedTableOptions to allow alignment of SST file data blocks. This will avoid the higher IOPS/throughput load caused by sub-4KB data blocks spanning two 4KB pages. When this option is set to true, the block alignment is set to the lower of the block size and 4KB.
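A minimal sketch of enabling the new option; the no-compression setting reflects an assumption that alignment is only meaningful for uncompressed blocks:
```
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: turn on block alignment so data blocks do not straddle 4KB pages.
rocksdb::Options MakeAlignedOptions() {
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_size = 4 * 1024;  // alignment is the lower of block_size and 4KB
  table_opts.block_align = true;
  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  options.compression = rocksdb::kNoCompression;  // assumption: keep blocks uncompressed
  return options;
}
```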
Closes https://github.com/facebook/rocksdb/pull/3502

Differential Revision: D7400897

Pulled By: anand1976

fbshipit-source-id: 04cc3bd144e88e3431a4f97604e63ad7a0f06d44
2018-03-26 20:26:10 -07:00
Maysam Yabandeh
0999e9b79a WritePrepared Txn: Increase commit cache size to 2^23
Summary:
The current commit cache size is 2^21. This was due to a typo. With 2^23 commit entries we can have transactions as long as 64s without incurring the cost of having them evicted from the commit cache before their commit. Here is the math:
2^23 / 2 (one out of two seq numbers are for commit) / 2^16 TPS = 2^6 = 64s
Closes https://github.com/facebook/rocksdb/pull/3657

Differential Revision: D7411211

Pulled By: maysamyabandeh

fbshipit-source-id: e7cacf40579f3acf940643d8a1cfe5dd201caa35
2018-03-26 19:45:17 -07:00
Maysam Yabandeh
35a4469bbf Fix race condition via concurrent FlushWAL
Summary:
Currently log_writer->AddRecord in WriteImpl is protected from concurrent calls via FlushWAL only if the two_write_queues_ option is set. The patch fixes the problem by i) skipping log_writer->AddRecord in FlushWAL if manual_wal_flush is not set, and ii) protecting log_writer->AddRecord in WriteImpl via log_write_mutex_ if manual_wal_flush_ is set but two_write_queues_ is not.

Fixes #3599
Closes https://github.com/facebook/rocksdb/pull/3656

Differential Revision: D7405608

Pulled By: maysamyabandeh

fbshipit-source-id: d6cc265051c77ae49c7c6df4f427350baaf46934
2018-03-26 16:29:56 -07:00
Sagar Vemuri
23f9d93f47 Exclude MySQLStyleTransactionTest.TransactionStressTest* from valgrind
Summary:
I found that each instance of MySQLStyleTransactionTest.TransactionStressTest/x is taking more than 10 hours to complete in our continuous testing environment, causing the whole valgrind run to time out after a day. So I am excluding these tests.
Closes https://github.com/facebook/rocksdb/pull/3652

Differential Revision: D7400332

Pulled By: sagar0

fbshipit-source-id: 987810574506d01487adf7c2de84d4817ec3d22d
2018-03-26 10:27:47 -07:00
Maysam Yabandeh
3e417a6607 WritePrepared Txn: AddPrepared for all sub-batches
Summary:
Currently AddPrepared is performed only on the first sub-batch if there are duplicate keys in the write batch. This could cause a problem if the transaction takes too long to commit and the seq number of the first sub-batch moves to old_prepared_ but not the seq of the later ones. The patch fixes this by calling AddPrepared for all sub-batches.
Closes https://github.com/facebook/rocksdb/pull/3651

Differential Revision: D7388635

Pulled By: maysamyabandeh

fbshipit-source-id: 0ccd80c150d9bc42fe955e49ddb9d7ca353067b4
2018-03-23 17:30:04 -07:00
Dmitri Smirnov
d382ae7de6 Improve perf of random read and insert compare by suggesting inlining to the compiler
Summary:
Results are from the 2015 compiler. This improves sequential insert. Random read results are inconclusive, but I hope the 2017 compiler will do a better job at inlining.

Before:
fillseq      :       **3.638 micros/op 274866 ops/sec;  213.9 MB/s**

After:
fillseq      :       **3.379 micros/op 295979 ops/sec;  230.3 MB/s**
Closes https://github.com/facebook/rocksdb/pull/3645

Differential Revision: D7382711

Pulled By: siying

fbshipit-source-id: 092a07ffe8a6e598d1226ceff0f11b35e6c5c8e4
2018-03-23 13:26:55 -07:00
Dmitri Smirnov
53d66df0c4 Refactor sync_point to make implementation either customizable or replaceable
Summary: Closes https://github.com/facebook/rocksdb/pull/3637

Differential Revision: D7354373

Pulled By: ajkr

fbshipit-source-id: 6816c7bbc192ed0fb944942b11c7074bf24eddf1
2018-03-23 12:56:52 -07:00
Sagar Vemuri
a993c0139d Add 5.11 and 5.12 to tools/check_format_compatible.sh
Summary: Closes https://github.com/facebook/rocksdb/pull/3646

Differential Revision: D7384727

Pulled By: sagar0

fbshipit-source-id: f713af7adb2ffea5303bbf0fac8a8a1630af7b38
2018-03-23 12:43:06 -07:00
LingBin
e80709a33a Avoid adding tombstones of the same file to RangeDelAggregator multiple times
Summary:
RangeDelAggregator will remember the files whose range tombstones have been added,
so the caller can check whether a file has already been added before calling AddTombstones.

Closes https://github.com/facebook/rocksdb/pull/3635

Differential Revision: D7354604

Pulled By: ajkr

fbshipit-source-id: 9b9f7ec130556028df417e650711554b46d8d107
2018-03-23 12:43:06 -07:00
Sagar Vemuri
7ffce2805b Add Java-API-Changes section to History
Summary:
We have not been updating our HISTORY.md change log with the RocksJava changes. Going forward, let's add Java changes to HISTORY.md as well.
There is an old java/HISTORY-JAVA.md, but it hasn't been updated in years. It is much easier to remember to update the change log in a single file, HISTORY.md.

I added information about shared block cache here, which was introduced in #3623.
Closes https://github.com/facebook/rocksdb/pull/3647

Differential Revision: D7384448

Pulled By: sagar0

fbshipit-source-id: 9b6e569f44e6df5cb7ba06413d9975df0b517d20
2018-03-23 12:28:51 -07:00
Radoslaw Zarzynski
09b6bf828a InlineSkiplist: don't decode keys unnecessarily during comparisons
Summary:
Summary
========
`InlineSkipList<>::Insert` takes the `key` parameter as a C-string. Then, it performs multiple comparisons with it, requiring `GetLengthPrefixedSlice()` to be invoked in `MemTable::KeyComparator::operator()(const char* prefix_len_key1, const char* prefix_len_key2)` on the same data over and over. The patch tries to optimize that.

Rough performance comparison
=====
Big keys, no compression.

```
$ ./db_bench --writes 20000000 --benchmarks="fillrandom" --compression_type none -key_size 256
(...)
fillrandom   :       4.222 micros/op 236836 ops/sec;   80.4 MB/s
```

```
$ ./db_bench --writes 20000000 --benchmarks="fillrandom" --compression_type none -key_size 256
(...)
fillrandom   :       4.064 micros/op 246059 ops/sec;   83.5 MB/s
```

TODO
======
In ~~a separated~~ this PR:
- [x] Go outside the write path. Maybe even eradicate the C-string-taking variant of `KeyIsAfterNode` entirely.
- [x] Try to cache the transformations applied by `KeyComparator` & friends in situations where we have many comparisons with the same key.
Closes https://github.com/facebook/rocksdb/pull/3516

Differential Revision: D7059300

Pulled By: ajkr

fbshipit-source-id: 6f027dbb619a488129f79f79b5f7dbe566fb2dbb
2018-03-23 12:14:30 -07:00
Zhongyi Xie
1cbc96d236 FlushReason improvement
Summary:
Right now the flush reason "SuperVersion Change" covers a few different scenarios, which is a bit vague. For example, the following db_bench job should trigger "Write Buffer Full":

> $ TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -write_buffer_size=1048576 -target_file_size_base=1048576 -max_bytes_for_level_base=4194304
$ grep 'flush_reason' /dev/shm/dbbench/LOG
...
2018/03/06-17:30:42.543638 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242543634, "job": 192, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018024, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.569541 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242569536, "job": 193, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.596396 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242596392, "job": 194, "event": "flush_started", "num_memtables": 1, "num_entries": 7008, "num_deletes": 0, "memory_usage": 1018048, "flush_reason": "SuperVersion Change"}
2018/03/06-17:30:42.622444 7f2773b99700 EVENT_LOG_v1 {"time_micros": 1520386242622440, "job": 195, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "SuperVersion Change"}

With the fix:
> 2018/03/19-14:40:02.341451 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602341444, "job": 98, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018008, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.379655 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602379642, "job": 100, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018016, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.418479 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602418474, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.455084 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602455079, "job": 102, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018048, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.492293 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602492288, "job": 104, "event": "flush_started", "num_memtables": 1, "num_entries": 7007, "num_deletes": 0, "memory_usage": 1018056, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.528720 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602528715, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 7006, "num_deletes": 0, "memory_usage": 1018104, "flush_reason": "Write Buffer Full"}
2018/03/19-14:40:02.566255 7f11dc257700 EVENT_LOG_v1 {"time_micros": 1521495602566238, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 7009, "num_deletes": 0, "memory_usage": 1018112, "flush_reason": "Write Buffer Full"}
Closes https://github.com/facebook/rocksdb/pull/3627

Differential Revision: D7328772

Pulled By: miasantreble

fbshipit-source-id: 67c94065fbdd36930f09930aad0aaa6d2c152bb8
2018-03-22 18:42:18 -07:00
Andrew Kryczka
82137f0ce8 Add unit test for WAL corruption
Summary: Closes https://github.com/facebook/rocksdb/pull/3618

Differential Revision: D7301053

Pulled By: ajkr

fbshipit-source-id: a9dde90caa548c294d03d6386f78428c8536ca14
2018-03-22 18:28:01 -07:00
Sagar Vemuri
2e3d407778 Fsync after writing global seq number in ExternalSstFileIngestionJob
Summary:
Fsync after writing global sequence number to the ingestion file in ExternalSstFileIngestionJob. Otherwise the file metadata could be incorrect.
Closes https://github.com/facebook/rocksdb/pull/3644

Differential Revision: D7373813

Pulled By: sagar0

fbshipit-source-id: 4da2c9e71a8beb5c08b4ac955f288ee1576358b8
2018-03-22 17:42:56 -07:00
Andrew Kryczka
4d51feab0b Rename function for handling WAL write error
Summary:
It was misnamed. It actually updates `bg_error_` if `PreprocessWrite()` or `WriteToWAL()` fail; it is not related to the user callback.
Closes https://github.com/facebook/rocksdb/pull/3485

Differential Revision: D6955787

Pulled By: ajkr

fbshipit-source-id: bd7afc3fdb7a52830c021cbfc25fcbc3ab7d5e10
2018-03-22 15:58:39 -07:00
Siying Dong
118058ba69 SstFileManager: add bytes_max_delete_chunk
Summary:
Add `bytes_max_delete_chunk` in SstFileManager so that we can drop a large file in multiple batches.
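A hedged sketch of wiring this up, assuming `bytes_max_delete_chunk` is exposed as the trailing parameter of the `NewSstFileManager` factory:
```
#include <memory>

#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/sst_file_manager.h"

// Sketch: delete large SST files in ~64MB chunks so a single huge unlink does
// not stall the filesystem. The parameter order here is an assumption.
rocksdb::Options MakeOptionsWithChunkedDeletes(rocksdb::Env* env) {
  rocksdb::Status s;
  std::shared_ptr<rocksdb::SstFileManager> sfm(rocksdb::NewSstFileManager(
      env, /*info_log=*/nullptr, /*trash_dir=*/"",
      /*rate_bytes_per_sec=*/0, /*delete_existing_trash=*/true, &s,
      /*max_trash_db_ratio=*/0.25,
      /*bytes_max_delete_chunk=*/64 * 1024 * 1024));
  rocksdb::Options options;
  if (s.ok()) {
    options.sst_file_manager = sfm;
  }
  return options;
}
```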
Closes https://github.com/facebook/rocksdb/pull/3640

Differential Revision: D7358679

Pulled By: siying

fbshipit-source-id: ef17f0da2f5723dbece2669485a9b91b3edc0bb7
2018-03-22 15:58:37 -07:00
Andrew Kryczka
88c3e26cc0 log value of CompressionOptions::zstd_max_train_bytes
Summary: Closes https://github.com/facebook/rocksdb/pull/3587

Differential Revision: D7206901

Pulled By: ajkr

fbshipit-source-id: 5d4b1a2653627b44aa3c22db7d98c9cd5dcdb67a
2018-03-22 15:13:28 -07:00
Andrew Kryczka
620823f88b parse CompressionOptions::zstd_max_train_bytes in options string
Summary: Closes https://github.com/facebook/rocksdb/pull/3588

Differential Revision: D7208087

Pulled By: ajkr

fbshipit-source-id: 688f7a7c447cb17bee1b410d1fd891c0bf966617
2018-03-22 15:13:27 -07:00
Fosco Marotto
de6cf95a53 Update history for future 5.13 release
Summary: Closes https://github.com/facebook/rocksdb/pull/3631

Differential Revision: D7367519

Pulled By: gfosco

fbshipit-source-id: 57826cc1c9ffc9f2b351075567b8ad929809cb74
2018-03-22 14:59:27 -07:00
Maysam Yabandeh
7429b20e39 WritePrepared Txn: fix race condition on publishing seq
Summary:
This commit fixes a race condition on calling SetLastPublishedSequence. The function must be called only from the 2nd write queue when two_write_queues is enabled. However there was a bug that would also call it from the main write queue if CommitTimeWriteBatch is provided to the commit request and yet use_only_the_last_commit_time_batch_for_recovery optimization is not enabled. To fix that we penalize the commit request in such cases by doing an additional write solely to publish the seq number from the 2nd queue.
Closes https://github.com/facebook/rocksdb/pull/3641

Differential Revision: D7361508

Pulled By: maysamyabandeh

fbshipit-source-id: bf8f7a27e5cccf5425dccbce25eb0032e8e5a4d7
2018-03-22 14:43:36 -07:00
Rohan Rathi
fa8c050e9f Fixed buffer overrun in BackupEngineImpl::BackupMeta::StoreToFile
Summary:
The 10MB buffer in BackupEngineImpl::BackupMeta::StoreToFile can overflow when there is a large number of files. Added a check to determine the current buffer length and append data to the file if the buffer becomes full.

Resolves https://github.com/facebook/rocksdb/issues/3228
Closes https://github.com/facebook/rocksdb/pull/3636

Differential Revision: D7354160

Pulled By: ajkr

fbshipit-source-id: eec12d38095a0d17551a4aaee52b99d30a555722
2018-03-22 14:08:10 -07:00
Huachao Huang
7a6353bd1c Ignore empty filter block when data block is empty
Summary:
Close https://github.com/facebook/rocksdb/issues/3592
Closes https://github.com/facebook/rocksdb/pull/3614

Differential Revision: D7291706

Pulled By: ajkr

fbshipit-source-id: 9dd8f40bd7716588e1e3fd6be0c2bc2766861f8c
2018-03-21 23:13:05 -07:00
QingpingWang
70282cf876 fix behavior does not match name for "IsFileDeletionsEnabled"
Summary:
for PR https://github.com/facebook/rocksdb/pull/3598
I deleted the original repo for some reason. Sorry for the inconvenience.
Closes https://github.com/facebook/rocksdb/pull/3612

Differential Revision: D7291671

Pulled By: ajkr

fbshipit-source-id: 918490ba86b13fe450d232af436cbe259d847c64
2018-03-21 22:13:34 -07:00
QingpingWang
2ce8f63f81 C API for PerfContext
Summary:
This pull request exposes the interface of PerfContext as C API
Closes https://github.com/facebook/rocksdb/pull/3607

Differential Revision: D7294225

Pulled By: ajkr

fbshipit-source-id: eddcfbc13538f379950b2c8b299486695ffb5e2c
2018-03-21 22:13:34 -07:00
Sagar Vemuri
d5585bb605 Shared block cache in RocksJava
Summary:
Changes to support sharing block cache using the Java API.

Previously DB instances could share the block cache only when the same Options instance was passed to all the DB instances. Now, with this change, it is possible to explicitly create a cache and pass it to multiple options instances, to share the block cache.

Implementing this for [Rocksandra](https://github.com/instagram/cassandra/tree/rocks_3.0), but this feature has been requested by many java api users over the years.
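For reference, a sketch of the equivalent sharing pattern in C++ terms, which the Java API now mirrors (cache size and function names here are just examples):
```
#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: one explicitly created cache object handed to several options
// instances, so multiple DBs share a single block cache.
void ShareOneBlockCache(rocksdb::Options* opts_a, rocksdb::Options* opts_b) {
  std::shared_ptr<rocksdb::Cache> shared_cache =
      rocksdb::NewLRUCache(512 * 1024 * 1024);  // 512MB shared across DBs
  rocksdb::BlockBasedTableOptions t_a;
  rocksdb::BlockBasedTableOptions t_b;
  t_a.block_cache = shared_cache;
  t_b.block_cache = shared_cache;
  opts_a->table_factory.reset(rocksdb::NewBlockBasedTableFactory(t_a));
  opts_b->table_factory.reset(rocksdb::NewBlockBasedTableFactory(t_b));
}
```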
Closes https://github.com/facebook/rocksdb/pull/3623

Differential Revision: D7305794

Pulled By: sagar0

fbshipit-source-id: 03e4e8ed7aeee6f88bada4a8365d4279ede2ad71
2018-03-21 18:43:05 -07:00
Yi Wu
f1b7b790c9 BlobDB: Fix BlobDBImpl::GCFileAndUpdateLSM issues
Summary:
* Fix BlobDBImpl::GCFileAndUpdateLSM not closing the new file, which would prevent the new file from being garbage collected later.
* Fix BlobDBImpl::GCFileAndUpdateLSM not copying metadata from the old file over to the new file.
Closes https://github.com/facebook/rocksdb/pull/3639

Differential Revision: D7355092

Pulled By: yiwu-arbug

fbshipit-source-id: 4fa3594ac5ce376bed1af04a545c532cfc0088c4
2018-03-21 16:30:09 -07:00
Jingguo Yao
8823487ff7 doc: fix a typo
Summary:
s/synchromization/synchronization/
Closes https://github.com/facebook/rocksdb/pull/3583

Differential Revision: D7276596

Pulled By: sagar0

fbshipit-source-id: 552ec6d6935f642e1a3a7c552de6c94441ac50e0
2018-03-21 15:58:58 -07:00
Siying Dong
6383e42362 benchmark.sh to use --max_background_job
Summary: Closes https://github.com/facebook/rocksdb/pull/3632

Differential Revision: D7347012

Pulled By: siying

fbshipit-source-id: 46230ec4a917ccf4c478825b07e92b4665a4820b
2018-03-20 18:57:55 -07:00
Dmitri Smirnov
61785c73ed CloseHandle docs says that the return is non-zero, does not say TRUE(1)
Summary:
say it is TRUE(1). Add assert.
Closes https://github.com/facebook/rocksdb/pull/3630

Differential Revision: D7346895

Pulled By: ajkr

fbshipit-source-id: a46075aa4dd89f32520230606adecccecc874cdf
2018-03-20 18:43:02 -07:00
Siying Dong
93d52696bf Memory Problem Of Destroying ColumnFamilyHandle after deleting the CF
Summary:
When destroying a column family handle after the column family has been deleted, the handle may hold shared pointers to some objects in ColumnFamilyOptions, but in the destructor, the destruction order may cause some of the objects to be destroyed before being used by the following steps. Fix it by making a copy of the options object and destroying it as the last step.
Closes https://github.com/facebook/rocksdb/pull/3610

Differential Revision: D7281025

Pulled By: siying

fbshipit-source-id: ac18f3b2841788cba4ccfa1abd8d59158c1113bc
2018-03-20 17:13:12 -07:00
Andrew Kryczka
d1b26507bd fix db_compaction_test when compression disabled
Summary:
Previously, the compaction in `DBCompactionTestWithParam.ForceBottommostLevelCompaction` generated multiple files in the no-compression use case, and one file in the compression use case. I increased `target_file_size_base` so it generates one file in both use cases.
Closes https://github.com/facebook/rocksdb/pull/3625

Differential Revision: D7311885

Pulled By: ajkr

fbshipit-source-id: 97f249fa83a9924ac34357a4bb3189c969ecb107
2018-03-19 12:30:05 -07:00
Tobias Tschinkowitz
ccb761364d Enable compilation on OpenBSD
Summary:
I modified the Makefile so that we can compile rocksdb on OpenBSD.
The instructions for building have been added to INSTALL.md.
The whole compilation process works fine like this on OpenBSD-current
Closes https://github.com/facebook/rocksdb/pull/3617

Differential Revision: D7323754

Pulled By: siying

fbshipit-source-id: 990037d1cc69138d22f85bd77ef4dc8c1ba9edea
2018-03-19 12:30:05 -07:00
Yanqin Jin
1139422dfb Fix the command used to generate ctags
Summary:
In original $ROCKSDB_HOME/Makefile, the command used to generate ctags is
```
ctags * -R
```
However, this failed to generate tags for me.
I did some search on the usage of ctags command and found that it should be
```
ctags -R .
```
or
```
ctags -R *
```
After the change, I can find the tags in vim using `:ts <identifier>`.
Closes https://github.com/facebook/rocksdb/pull/3626

Reviewed By: ajkr

Differential Revision: D7320217

Pulled By: riversand963

fbshipit-source-id: e4cd8f8a67842370a2343f0213df3cbd07754111
2018-03-18 22:43:18 -07:00
Adam Retter
bef95be5d8 Improve the output of the RocksJava JUnit runner
Summary:
This changes the console output when the RocksJava tests are run. It makes spotting the errors and failures much easier; previously the output was malformed, with results like "ERun" where the "E" represented an error in the preceding test.
Closes https://github.com/facebook/rocksdb/pull/3621

Differential Revision: D7306172

Pulled By: sagar0

fbshipit-source-id: 3fa6f6e1ca6c6ea7ceef55a23ca81903716132b7
2018-03-16 13:27:55 -07:00
zhsj
cc340268e9 fix wrong length in snprintf
Summary: Closes https://github.com/facebook/rocksdb/pull/3622

Differential Revision: D7307689

Pulled By: ajkr

fbshipit-source-id: b8f52effc63fea06c2058b39c60944c2c1f814b4
2018-03-16 13:27:55 -07:00
Huachao Huang
ecfca1ff59 Optimize overlap checking for external file ingestion
Summary:
If there are a lot of overlapping files in L0, creating a merging iterator over
all files in L0 to check overlap can be very slow because we need to read and
seek all files in L0. However, in that case, the ingested file is likely to
overlap with some files in L0, so if we check those files one by one, we can stop
as soon as we encounter an overlap.
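A hypothetical standalone sketch of the early-exit idea, simplified to file key ranges (the real check also has to look inside each candidate file):
```
#include <string>
#include <vector>

// Illustrative only: check L0 files one by one and stop at the first overlap,
// instead of building a merging iterator over every L0 file.
struct FileKeyRange {
  std::string smallest;
  std::string largest;
};

bool IngestedFileOverlapsL0(const std::vector<FileKeyRange>& l0_files,
                            const std::string& ingest_smallest,
                            const std::string& ingest_largest) {
  for (const FileKeyRange& f : l0_files) {
    // Two closed ranges overlap unless one ends before the other begins.
    if (!(f.largest < ingest_smallest || ingest_largest < f.smallest)) {
      return true;  // early exit: no need to look at the remaining L0 files
    }
  }
  return false;
}
```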

Ref: https://github.com/facebook/rocksdb/issues/3540
Closes https://github.com/facebook/rocksdb/pull/3564

Differential Revision: D7196784

Pulled By: anand1976

fbshipit-source-id: 8700c1e903bd515d0fa7005b6ce9b3a3d9db2d67
2018-03-16 10:43:17 -07:00
Niv Dayan
da82aab126 allowing CompactFiles to return new file names
Summary:
This is a small API extension to allow the CompactFiles method to return the names of files that were created during the compaction.
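A hedged sketch of using the extended call, assuming the new output parameter is a trailing `std::vector<std::string>*` on `DB::CompactFiles`:
```
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Sketch: request the names of the files a manual CompactFiles call produced.
rocksdb::Status CompactAndReport(rocksdb::DB* db,
                                 const std::vector<std::string>& input_files,
                                 int output_level,
                                 std::vector<std::string>* new_files) {
  rocksdb::CompactionOptions copt;
  // output_path_id = -1 keeps the default path; new_files receives the names
  // of the freshly written SST files.
  return db->CompactFiles(copt, input_files, output_level,
                          /*output_path_id=*/-1, new_files);
}
```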
Closes https://github.com/facebook/rocksdb/pull/3608

Differential Revision: D7275789

Pulled By: siying

fbshipit-source-id: 1ec0c3954a0f10cd877efb5f29f9be6c7b59e9ba
2018-03-15 11:58:12 -07:00
Sagar Vemuri
cc118b0e6f Update version
Summary:
We missed updating version.h on master when cutting 5.11.fb and 5.12.fb branches. It should be the same as the version in the latest release branch (or should it be one more?).

I noticed this when trying to run some upgrade/downgrade tests from 5.11 to some new code on master.
Closes https://github.com/facebook/rocksdb/pull/3611

Differential Revision: D7282917

Pulled By: sagar0

fbshipit-source-id: 205ee75b77c5b6bbcea95a272760b427025a4aba
2018-03-15 10:41:48 -07:00
Andrew Kryczka
0cdaa1a804 Fix WAL corruption from checkpoint/backup race condition
Summary:
`Writer::WriteBuffer` was always called at the beginning of checkpoint/backup. But that log writer has no internal synchronization, which meant the same buffer could be flushed twice in a race condition case, causing a WAL entry to be duplicated. Then subsequent WAL entries would be at unexpected offsets, causing the 32KB block boundaries to be overlapped and manifesting as a corruption.

This PR fixes the behavior to only use `WriteBuffer` (via `FlushWAL`) in checkpoint/backup when manual WAL flush is enabled. In that case, users are responsible for providing synchronization between WAL flushes. We can also consider removing the call entirely.
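A minimal sketch of the guarded flush described above (illustrative; not the exact checkpoint code):
```
#include "rocksdb/db.h"

// Sketch: only flush the in-memory WAL buffer when manual WAL flush is on,
// since in that mode the user is responsible for synchronizing FlushWAL calls.
rocksdb::Status MaybeFlushWalBeforeCheckpoint(rocksdb::DB* db,
                                              bool manual_wal_flush) {
  if (manual_wal_flush) {
    return db->FlushWAL(/*sync=*/false);
  }
  return rocksdb::Status::OK();
}
```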
Closes https://github.com/facebook/rocksdb/pull/3603

Differential Revision: D7277447

Pulled By: ajkr

fbshipit-source-id: 1b15bd7fd930511222b075418c10de0aaa70a35a
2018-03-14 16:12:50 -07:00