Commit Graph

11002 Commits

Author SHA1 Message Date
Bo Wang
01fdec23fe Add release note for #9747 (#9874)
Summary:
Add release note for CompressedSecondaryCache and the update of SecondaryCache::Lookup().

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9874

Reviewed By: jay-zhuang

Differential Revision: D35765973

Pulled By: gitbw95

fbshipit-source-id: 98232508c4f2047216def9c11a038cfb98709690
2022-04-19 18:24:58 -07:00
Peter Dillinger
682fc8ba6a Release note for #9546 (#9872)
Summary:
We don't really have a mechanism for internal-only release
notes, so adding this to the standard release notes. For picking into
7.2 release.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9872

Test Plan: release note only

Reviewed By: jay-zhuang

Differential Revision: D35761307

Pulled By: pdillinger

fbshipit-source-id: 5d1932767fff48456323df948604dbb956ac27b2
2022-04-19 16:44:05 -07:00
Federico Guerinoni
bbf5867353 Add C API for setting strict_capacity_limit (#9855)
Summary:
This allows setting the field `strict_capacity_limit` to true from the C
API, and from other languages that wrap it.
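
For orientation, a minimal sketch of the corresponding setting in the existing C++ API (this is not the new C binding added here, and the cache sizes are arbitrary):

```
#include <memory>
#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Minimal sketch (C++ side): a block cache with a hard capacity limit.
rocksdb::Options MakeStrictCacheOptions() {
  rocksdb::LRUCacheOptions cache_opts;
  cache_opts.capacity = 64 << 20;            // 64 MB block cache (arbitrary)
  cache_opts.strict_capacity_limit = true;   // fail inserts rather than exceed capacity

  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = rocksdb::NewLRUCache(cache_opts);

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```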

Signed-off-by: Federico Guerinoni <guerinoni.federico@gmail.com>

Closes: https://github.com/facebook/rocksdb/issues/9707

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9855

Reviewed By: ajkr

Differential Revision: D35724150

Pulled By: jay-zhuang

fbshipit-source-id: d8514797e9d90b1cd88329018f9ac4776722aa0f
2022-04-19 09:34:02 -07:00
Andrew Kryczka
690f1edf37 Avoid overwriting OPTIONS file settings in db_bench (#9862)
Summary:
`InitializeOptionsGeneral()` was overwriting many options that were already configured by the OPTIONS file, potentially with the flags' default values. This PR changes that function to only overwrite options in limited scenarios, as described at the top of its definition. Block cache is still an exception that gets overwritten.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9862

Test Plan: ran under various scenarios (multi-DB, single DB, OPTIONS file, flags) and verified options are set as expected

Reviewed By: jay-zhuang

Differential Revision: D35736960

Pulled By: ajkr

fbshipit-source-id: 75b77740af37e6f5741618f8a8f5685df2417d03
2022-04-18 23:46:16 -07:00
Peter Dillinger
1601433b3a Misc CI improvements / additions (#9859)
Summary:
* Add valgrind test to nightly CircleCI (in case it can catch something that
ASAN/UBSAN does not)
* Add clang13+asan+ubsan+folly test to nightly CircleCI, for broader testing
* Consolidate many copies of ASAN_OPTIONS= while also allowing it to be
inherited from parent environment rather than always overridden.
* Move UBSAN exclusion from Makefile into options_settable_test.cc

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9859

Test Plan: CI

Reviewed By: jay-zhuang

Differential Revision: D35730903

Pulled By: pdillinger

fbshipit-source-id: 6f5464034e8115f9a07f6f7aec1de9219ec2837c
2022-04-18 20:26:37 -07:00
Hui Xiao
e83c55439a Conditionally declare and define variable that is unused in LITE mode (#9854)
Summary:
Context:
As mentioned in https://github.com/facebook/rocksdb/issues/9701, `LITE=1 make static_lib` for v7.0.2 fails with the following:
```
  CC       file/sequence_file_reader.o
  CC       file/sst_file_manager_impl.o
  CC       file/writable_file_writer.o
In file included from file/writable_file_writer.cc:10:
./file/writable_file_writer.h:163:15: error: private field 'temperature_' is not used [-Werror,-Wunused-private-field]
  Temperature temperature_;
              ^
1 error generated.
make: *** [file/writable_file_writer.o] Error 1
```

The fix is as titled: the variable is declared and defined only when it is actually used (i.e., not in LITE mode).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9854

Test Plan:
- Local `LITE=1 make static_lib` reveals the same error and error is gone after this fix
- CI

Reviewed By: ajkr, jay-zhuang

Differential Revision: D35706585

Pulled By: hx235

fbshipit-source-id: 7743310298231ad6866304ffa2225c8abdc91d9a
2022-04-18 14:16:35 -07:00
Peter Dillinger
41237dd306 Add "no compression" job to CircleCI (#9850)
Summary:
Since they operate at distinct abstraction layers, I thought it prudent to
combine this with the EncryptedEnv CI test for each PR, for efficiency in
testing. Also added supported compressions to the sst_dump --help output
so that the CI job can verify there is no compiled-in compression support.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9850

Test Plan: CI, some manual stuff

Reviewed By: riversand963

Differential Revision: D35682346

Pulled By: pdillinger

fbshipit-source-id: be9879c1533fed304ee32c89fd9ba4b07c2b90cc
2022-04-18 12:47:16 -07:00
Jay Zhuang
3d473235d4 Update main version.h to NEXT release (7.3) (#9852)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/9852

Reviewed By: ajkr

Differential Revision: D35694753

Pulled By: jay-zhuang

fbshipit-source-id: 729d416afc588e5db2367e899589bbb5419820d6
2022-04-18 10:26:21 -07:00
Jay Zhuang
673ada8225 Update HISTORY.md for 7.2 release (#9848)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/9848

Reviewed By: riversand963

Differential Revision: D35677606

Pulled By: jay-zhuang

fbshipit-source-id: 8a597ea47f302a6f51fb6672a33c848d613bccfc
2022-04-16 17:15:47 -07:00
sdong
4f9c0fd083 Add Aggregation Merge Operator (#9780)
Summary:
Add a merge operator that allows users to register specific aggregation functions so that they can perform per-key aggregation using different aggregation types.
See comments of function CreateAggMergeOperator() for actual usage.
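
Pending those comments, a generic, hedged sketch of wiring a merge operator into a DB and issuing Merge() calls; the operator object here is a placeholder for whatever CreateAggMergeOperator() returns:

```
#include <cassert>
#include <memory>
#include <string>
#include <rocksdb/db.h>
#include <rocksdb/merge_operator.h>
#include <rocksdb/options.h>

// Hedged sketch: plug a merge operator into Options and issue Merge() calls.
// The `agg_merge_operator` argument stands in for the object described in the
// CreateAggMergeOperator() comments.
void AggMergeExample(std::shared_ptr<rocksdb::MergeOperator> agg_merge_operator) {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.merge_operator = agg_merge_operator;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/agg_merge_example", &db);
  assert(s.ok());

  // Each Merge() records an operand; the registered aggregation function
  // combines the operands per key during reads and compactions.
  s = db->Merge(rocksdb::WriteOptions(), "counter", "1");
  assert(s.ok());
  s = db->Merge(rocksdb::WriteOptions(), "counter", "2");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "counter", &value);
  assert(s.ok());
  delete db;
}
```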

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9780

Test Plan: Add a unit test to cover various cases.

Reviewed By: ltamasi

Differential Revision: D35267444

fbshipit-source-id: 5b02f31c4f3e17e96dd4025cdc49fca8c2868628
2022-04-15 23:24:05 -07:00
Levi Tamasi
db536ee045 Propagate errors from UpdateBoundaries (#9851)
Summary:
In `FileMetaData`, we keep track of the lowest-numbered blob file
referenced by the SST file in question for the purposes of BlobDB's
garbage collection in the `oldest_blob_file_number` field, which is
updated in `UpdateBoundaries`. However, with the current code,
`BlobIndex` decoding errors (or invalid blob file numbers) are swallowed
in this method. The patch changes this by propagating these errors
and failing the corresponding flush/compaction. (Note that since blob
references are generated by the BlobDB code and also parsed by
`CompactionIterator`, in reality this can only happen in the case of
memory corruption.)

This change necessitated updating some unit tests that involved
fake/corrupt `BlobIndex` objects. Some of these just used a dummy string like
`"blob_index"` as a placeholder; these were replaced with real `BlobIndex`es.
Some were relying on the earlier behavior to simulate corruption; these
were replaced with `SyncPoint`-based test code that corrupts a valid
blob reference at read time.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9851

Test Plan: `make check`

Reviewed By: riversand963

Differential Revision: D35683671

Pulled By: ltamasi

fbshipit-source-id: f7387af9945c48e4d5c4cd864f1ba425c7ad51f6
2022-04-15 20:25:48 -07:00
Yanqin Jin
be81609b43 Add a fail_if_not_bottommost_level to IngestExternalFileOptions (#9849)
Summary:
This new option allows the application to specify that files must be
ingested into the bottommost level; otherwise the ingestion fails instead
of silently ingesting into a non-bottommost level.
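
A minimal usage sketch, assuming the new flag is the IngestExternalFileOptions member named in the title:

```
#include <string>
#include <vector>
#include <rocksdb/db.h>
#include <rocksdb/options.h>

// Hedged sketch: request bottommost-level ingestion and surface the failure
// instead of silently accepting a non-bottommost placement.
rocksdb::Status IngestToBottommost(rocksdb::DB* db,
                                   const std::vector<std::string>& sst_files) {
  rocksdb::IngestExternalFileOptions ingest_opts;
  ingest_opts.fail_if_not_bottommost_level = true;  // new option from this PR
  rocksdb::Status s = db->IngestExternalFile(sst_files, ingest_opts);
  if (!s.ok()) {
    // Ingestion was rejected, e.g. because the key range overlaps data that
    // would force placement above the bottommost level.
  }
  return s;
}
```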

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9849

Test Plan: make check

Reviewed By: ajkr

Differential Revision: D35680307

Pulled By: riversand963

fbshipit-source-id: 01cf54ef6c76198f7654dc06b5544631dea1be1e
2022-04-15 18:12:06 -07:00
Akanksha Mahajan
0c7f455f85 Make initial auto readahead_size configurable (#9836)
Summary:
Make initial auto readahead_size configurable
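
A hedged configuration sketch; the exact option name (assumed here to be BlockBasedTableOptions::initial_auto_readahead_size) is an assumption based on this title:

```
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Hedged sketch: start implicit auto-readahead at 16 KB instead of the default.
// The option name is an assumption based on this commit's title.
rocksdb::Options MakeReadaheadOptions() {
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.initial_auto_readahead_size = 16 * 1024;

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```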

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9836

Test Plan:
Added new unit test
Ran regression:
Without change:

```
./db_bench -use_existing_db=true -db=/tmp/prefix_scan_prefetch_main -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB:    version 7.0
Date:       Thu Mar 17 13:11:34 2022
CPU:        24 * Intel Core Processor (Broadwell)
CPUCache:   16384 KB
Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
Values:     512 bytes each (256 bytes after compression)
Entries:    5000000
Prefix:    0 bytes
Keys per prefix:    0
RawSize:    2594.0 MB (estimated)
FileSize:   1373.3 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
------------------------------------------------
DB path: [/tmp/prefix_scan_prefetch_main]
seekrandom   :  483618.390 micros/op 2 ops/sec;  338.9 MB/s (249 of 249 found)
```

With this change:
```
 ./db_bench -use_existing_db=true -db=/tmp/prefix_scan_prefetch_main -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
Set seed to 1649895440554504 because --seed was 0
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB:    version 7.2
Date:       Wed Apr 13 17:17:20 2022
CPU:        24 * Intel Core Processor (Broadwell)
CPUCache:   16384 KB
Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
Values:     512 bytes each (256 bytes after compression)
Entries:    5000000
Prefix:    0 bytes
Keys per prefix:    0
RawSize:    2594.0 MB (estimated)
FileSize:   1373.3 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
------------------------------------------------
DB path: [/tmp/prefix_scan_prefetch_main]
... finished 100 ops
seekrandom   :  476892.488 micros/op 2 ops/sec;  344.6 MB/s (252 of 252 found)
```

Reviewed By: anand1976

Differential Revision: D35632815

Pulled By: akankshamahajan15

fbshipit-source-id: c8057a88f9294c9d03b1d434b03affe02f74d796
2022-04-15 17:28:09 -07:00
sdong
d5dfa8c6fe Upgrade development environment. (#9843)
Summary:
This is to support Meta's internal environment platform010. GCC still doesn't work, but USE_CLANG=1 should work.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9843

Test Plan: Try `make` and `ROCKSDB_FBCODE_BUILD_WITH_PLATFORM010=1 USE_CLANG=1 make`

Reviewed By: pdillinger

Differential Revision: D35652507

fbshipit-source-id: a4a14b2fa4a2d6ca6fbf1b65060e81c39f079363
2022-04-15 16:05:38 -07:00
Jay Zhuang
e91ec64cac Remove flaky servicelab metrics DBPut P95/P99 (#9844)
Summary:
The P95 and P99 metrics are flaky, similar to the DBGet ones which were removed
in https://github.com/facebook/rocksdb/issues/9742 .

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9844

Test Plan: `$ ./buckifier/buckify_rocksdb.py`

Reviewed By: ajkr

Differential Revision: D35655531

Pulled By: jay-zhuang

fbshipit-source-id: c1409f0fba4e23d461a65f988c27ac5e2ae85d13
2022-04-15 13:56:22 -07:00
yuzhangyu
082eb04200 Add option --decode_blob_index to dump_live_files command (#9842)
Summary:
This change only adds decode-blob-index support to the dump_live_files command; it is part of a task to add blob support to a few commands.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9842

Reviewed By: ltamasi

Differential Revision: D35650167

Pulled By: jowlyzhang

fbshipit-source-id: a78151b98bc38ac6f52c6e01ca6927a3429ddd14
2022-04-15 09:04:04 -07:00
Yanqin Jin
fe63899d1a Add checks to GetUpdatesSince (#9459)
Summary:
Make `DB::GetUpdatesSince` return early if told to scan WALs generated by transactions
with write-prepared or write-unprepared policies (`seq_per_batch` is true), as indicated by
API comment.

Also add checks to `TransactionLogIterator` to clarify some conditions.

No API change.
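
For reference, a minimal sketch of the API whose behavior is being tightened:

```
#include <memory>
#include <rocksdb/db.h>
#include <rocksdb/transaction_log.h>

// Minimal sketch: iterate WAL updates starting from a given sequence number.
// With seq_per_batch (write-prepared/write-unprepared policies), the call now
// returns an error early instead of scanning the WALs.
rocksdb::Status DumpUpdates(rocksdb::DB* db, rocksdb::SequenceNumber since) {
  std::unique_ptr<rocksdb::TransactionLogIterator> iter;
  rocksdb::Status s = db->GetUpdatesSince(since, &iter);
  if (!s.ok()) {
    return s;  // e.g. not supported for this DB configuration
  }
  for (; iter->Valid(); iter->Next()) {
    rocksdb::BatchResult batch = iter->GetBatch();
    // batch.sequence is the starting sequence number of batch.writeBatchPtr.
  }
  return iter->status();
}
```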

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9459

Test Plan:
make check

Closing https://github.com/facebook/rocksdb/issues/1565

Reviewed By: akankshamahajan15

Differential Revision: D33821243

Pulled By: riversand963

fbshipit-source-id: c8b155d020ce0980e2d3b3b1da40b96e65b48d79
2022-04-14 17:12:16 -07:00
Yanqin Jin
0bd4dcde6b CompactionIterator sees consistent view of which keys are committed (#9830)
Summary:
**This PR does not affect the functionality of `DB` and write-committed transactions.**

`CompactionIterator` uses `KeyCommitted(seq)` to determine if a key in the database is committed.
As the name 'write-committed' implies, if write-committed policy is used, a key exists in the database only if
it is committed. In fact, the implementation of `KeyCommitted()` is as follows:

```
inline bool KeyCommitted(SequenceNumber seq) {
  // For non-txn-db and write-committed, snapshot_checker_ is always nullptr.
  return snapshot_checker_ == nullptr ||
         snapshot_checker_->CheckInSnapshot(seq, kMaxSequence) == SnapshotCheckerResult::kInSnapshot;
}
```

With that being said, we focus on write-prepared/write-unprepared transactions.

A few notes:
- A key can exist in the db even if it's uncommitted. Therefore, we rely on `snapshot_checker_` to determine data visibility. We also require that all writes go through transaction API instead of the raw `WriteBatch` + `Write`, thus at most one uncommitted version of one user key can exist in the database.
- `CompactionIterator` outputs a key as long as the key is uncommitted.

Due to the above reasons, it is possible that `CompactionIterator` decides to output an uncommitted key without
doing further checks on the key (`NextFromInput()`). By the time the key is being prepared for output, the key becomes
committed because the `snapshot_checker_(seq, kMaxSequence)` becomes true in the implementation of `KeyCommitted()`.
Then `CompactionIterator` will try to zero its sequence number and hit an assertion error if the key is a tombstone.

To fix this issue, we should make the `CompactionIterator` see a consistent view of the input keys. Note that
for write-prepared/write-unprepared, the background flush/compaction jobs already take a "job snapshot" before starting
processing keys. The job snapshot is released only after the entire flush/compaction finishes. We can use this snapshot
to determine whether a key is committed or not with minor change to `KeyCommitted()`.

```
inline bool KeyCommitted(SequenceNumber sequence) {
  // For non-txn-db and write-committed, snapshot_checker_ is always nullptr.
  return snapshot_checker_ == nullptr ||
         snapshot_checker_->CheckInSnapshot(sequence, job_snapshot_) ==
             SnapshotCheckerResult::kInSnapshot;
}
```

As a result, whether a key is committed or not will remain constant throughout compaction, causing no trouble
for `CompactionIterator`'s assertions.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9830

Test Plan: make check

Reviewed By: ltamasi

Differential Revision: D35561162

Pulled By: riversand963

fbshipit-source-id: 0e00d200c195240341cfe6d34cbc86798b315b9f
2022-04-14 11:11:04 -07:00
Jonathan Albrecht
844a35108b Fix minimum libzstd version that supports ZSTD_STREAMING (#9841)
Summary:
The minimum libzstd version that has `ZSTD_compressStream2` is
1.4.0, so only define ZSTD_STREAMING in that case.

Fixes building on Ubuntu 18.04 which has libzstd 1.3.3 as its
repository version.

Fixes https://github.com/facebook/rocksdb/issues/9795

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9841

Test Plan:
Build and test on Ubuntu 18.04 with:
  apt-get install libsnappy-dev zlib1g-dev libbz2-dev liblz4-dev \
    libzstd-dev libgflags-dev g++ make curl

Reviewed By: ajkr

Differential Revision: D35648738

Pulled By: jay-zhuang

fbshipit-source-id: 2a9e969bcc17a7dc10172f3817283409de885811
2022-04-14 11:05:39 -07:00
Andrew Kryczka
d6e016be6d Expose CacheEntryRole and map keys for block cache stat collections (#9838)
Summary:
This gives users the ability to examine the map populated by `GetMapProperty()` with property `kBlockCacheEntryStats`. It also sets us up for a possible future where cache reservations are configured according to `CacheEntryRole`s rather than flags coupled to roles.
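
A minimal sketch of reading that map through the public API:

```
#include <iostream>
#include <map>
#include <string>
#include <rocksdb/db.h>

// Minimal sketch: read the block cache entry stats map via GetMapProperty().
void PrintBlockCacheEntryStats(rocksdb::DB* db) {
  std::map<std::string, std::string> stats;
  if (db->GetMapProperty(rocksdb::DB::Properties::kBlockCacheEntryStats, &stats)) {
    for (const auto& kv : stats) {
      std::cout << kv.first << " = " << kv.second << "\n";
    }
  }
}
```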

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9838

Test Plan:
- migrated test DBBlockCacheTest.CacheEntryRoleStats to use this API. That test verifies some of the contents are as expected
- added a DBPropertiesTest to verify the public map keys are present, and nothing else

Reviewed By: hx235

Differential Revision: D35629493

Pulled By: ajkr

fbshipit-source-id: 5c4356b8560e85d1f881fd32c44c15960b02fc68
2022-04-14 09:38:55 -07:00
Peter Dillinger
fefacd33e3 Add db_stress to buck build (#9840)
Summary:
For internal testing purposes (minimal deps)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9840

Test Plan: buck build :db_stress

Reviewed By: hx235

Differential Revision: D35635192

Pulled By: pdillinger

fbshipit-source-id: eefca3bcea174de6fdcdc1c763774f3134c7342c
2022-04-13 23:54:35 -07:00
Peter Dillinger
b3a6fb7e86 Serialize a space-hungry test (#9837)
Summary:
Tends to fill up /dev/shm

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9837

Test Plan: Some manual testing

Reviewed By: hx235

Differential Revision: D35627568

Pulled By: pdillinger

fbshipit-source-id: 22710f7b10bc287570475dae42318dd346f78db9
2022-04-13 17:10:43 -07:00
Levi Tamasi
5645207758 Expose the amount of garbage in live blob files as a dedicated DB property (#9835)
Summary:
This information has already been available as part of the `rocksdb.blob-stats`
string property. The patch adds a dedicated integer property to make it easier
to surface this information in monitoring systems.
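
A hedged sketch of how such an integer property would be read; the property string used here is an assumption based on this description:

```
#include <cstdint>
#include <rocksdb/db.h>

// Hedged sketch: surface the blob garbage amount as an integer property.
// The property string below is an assumption; see include/rocksdb/db.h for
// the authoritative name.
uint64_t LiveBlobFileGarbageSize(rocksdb::DB* db) {
  uint64_t garbage_bytes = 0;
  db->GetIntProperty("rocksdb.live-blob-file-garbage-size", &garbage_bytes);
  return garbage_bytes;
}
```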

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9835

Test Plan: `make check`

Reviewed By: riversand963

Differential Revision: D35619495

Pulled By: ltamasi

fbshipit-source-id: 03fb0b228aa27d3859a1e3783bcb7eca095607f8
2022-04-13 13:36:30 -07:00
Jay Zhuang
dc1c90c4e3 Support canceling running RemoteCompaction on remote side (#9725)
Summary:
Add the ability to cancel remote compaction on the remote side by
setting `OpenAndCompactOptions.canceled` to true.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9725

Test Plan: added unittest

Reviewed By: ajkr

Differential Revision: D35018800

Pulled By: jay-zhuang

fbshipit-source-id: be3652f9645e0347df429e42a5614d5a9b3a1ec4
2022-04-13 13:28:09 -07:00
Siying Dong
9454e744ed Update supported VS versions in INSTALL.md (#9823)
Summary:
We only run CI for VS2017 and VS2019 now, so the claim that users can build with "VS13" is stale.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9823

Reviewed By: riversand963

Differential Revision: D35511401

fbshipit-source-id: e3ae2643e26ab46753fea439599d2ed98abba439
2022-04-13 13:03:40 -07:00
Peter Dillinger
7c7df1850a Update main version.h to NEXT release (#9834)
Summary:
Henceforth, the version number in version.h shall reflect the
*next* version number to be tagged (to the best of our knowledge) rather
than the *previous* (unpatched) version.

The primary advantage is being able to distinguish (in source code `#if`s
or human running tools) the development version from the last released
version.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9834

Test Plan: CI

Reviewed By: ajkr

Differential Revision: D35617373

Pulled By: pdillinger

fbshipit-source-id: f3286089d17b82409e6af08e5aa9c1affefe2862
2022-04-13 12:16:07 -07:00
Peter Dillinger
efd035164b Meta-internal folly integration with F14FastMap (#9546)
Summary:
Especially after updating to C++17, I don't see a compelling case for
*requiring* any folly components in RocksDB. I was able to purge the existing
hard dependencies, and it can be quite difficult to strip out non-trivial components
from folly for use in RocksDB. (The prospect of doing that on F14 has changed
my mind on the best approach here.)

But this change creates an optional integration where we can plug in
components from folly at compile time, starting here with F14FastMap to replace
std::unordered_map when possible (probably no public APIs for example). I have
replaced the biggest CPU users of std::unordered_map with compile-time
pluggable UnorderedMap which will use F14FastMap when USE_FOLLY is set.
USE_FOLLY is always set in the Meta-internal buck build, and a simulation of
that is in the Makefile for public CI testing. A full folly build is not needed, but
checking out the full folly repo is much simpler for getting the dependency,
and anything else we might want to optionally integrate in the future.

Some picky details:
* I don't think the distributed mutex stuff is actually used, so it was easy to remove.
* I implemented an alternative to `folly::constexpr_log2` (which is much easier
in C++17 than C++11) so that I could pull out the hard dependencies on
`ConstexprMath.h`
* I had to add noexcept move constructors/operators to some types to make
F14's complainUnlessNothrowMoveAndDestroy check happy, and I added a
macro to make that easier in some common cases.
* Updated Meta-internal buck build to use folly F14Map (always)

No updates to HISTORY.md nor INSTALL.md as this is not (yet?) considered a
production integration for open source users.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9546

Test Plan:
CircleCI tests updated so that a couple of them use folly.

Most internal unit & stress/crash tests updated to use Meta-internal latest folly.
(Note: they should probably use buck but they currently use Makefile.)

Example performance improvement: when filter partitions are pinned in cache,
they are tracked by PartitionedFilterBlockReader::filter_map_ and we can build
a test that exercises that heavily. Build DB with

```
TEST_TMPDIR=/dev/shm/rocksdb ./db_bench -benchmarks=fillrandom -num=10000000 -disable_wal=1 -write_buffer_size=30000000 -bloom_bits=16 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -partition_index_and_filters
```

and test with (simultaneous runs with & without folly, ~20 times each to see
convergence)

```
TEST_TMPDIR=/dev/shm/rocksdb ./db_bench_folly -readonly -use_existing_db -benchmarks=readrandom -num=10000000 -bloom_bits=16 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -partition_index_and_filters -duration=40 -pin_l0_filter_and_index_blocks_in_cache
```

Average ops/s no folly: 26229.2
Average ops/s with folly: 26853.3 (+2.4%)

Reviewed By: ajkr

Differential Revision: D34181736

Pulled By: pdillinger

fbshipit-source-id: ffa6ad5104c2880321d8a1aa7187e00ab0d02e94
2022-04-13 07:34:01 -07:00
Jay Zhuang
f934a0af46 Add event listener support on remote compactor side (#9821)
Summary:
So the user is able to set an event listener on the compactor
side.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9821

Test Plan: unittest added

Reviewed By: ajkr

Differential Revision: D35485388

Pulled By: jay-zhuang

fbshipit-source-id: 669d8a3aaee012b75b940470306756c03ffa09b2
2022-04-12 17:25:36 -07:00
Kinan Dak Albab
1eee99fc8c Fix usage of USE_RTTI flag in CMakeLists. (#9760)
Summary:
By default, rocksdb release builds compile with `-fno-rtti`. This causes issues when linking with other code that requires RTTI. Documentation indicates that setting the environment variable `USE_RTTI=1` when compiling rocksdb can override this behavior so that `-fno-rtti` is not used (http://rocksdb.org/blog/2017/09/28/rocksdb-5-8-released.html). However, this environment flag had no effect due to a bug in how `CMakeLists.txt` refers to `USE_RTTI`. This PR fixes this issue.

Now, running `USE_RTTI=1 cmake <......>` is correctly recognized by cmake, and causes `ROCKSDB_USE_RTTI` to be defined and `-fno-rtti` not to be issued for release builds. Behavior when USE_RTTI=0 or USE_RTTI is not provided is unchanged.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9760

Reviewed By: jay-zhuang

Differential Revision: D35334552

Pulled By: mrambacher

fbshipit-source-id: e405fcac4e14b246642e52bc7e73b04bf143e5b6
2022-04-12 12:12:23 -07:00
dependabot[bot]
0b81efed1d Bump nokogiri from 1.13.3 to 1.13.4 in /docs (#9831)
Summary:
Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.13.3 to 1.13.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/sparklemotion/nokogiri/releases">nokogiri's releases</a>.</em></p>
<blockquote>
<h2>1.13.4 / 2022-04-11</h2>
<h3>Security</h3>
<ul>
<li>Address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-24836">CVE-2022-24836</a>, a regular expression denial-of-service vulnerability. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-crjr-9rc5-ghw8">GHSA-crjr-9rc5-ghw8</a> for more information.</li>
<li>[CRuby] Vendored zlib is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-25032">CVE-2018-25032</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-v6gp-9mmm-c6p5">GHSA-v6gp-9mmm-c6p5</a> for more information.</li>
<li>[JRuby] Vendored Xerces-J (<code>xerces:xercesImpl</code>) is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-23437">CVE-2022-23437</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-xxx9-3xcr-gjj3">GHSA-xxx9-3xcr-gjj3</a> for more information.</li>
<li>[JRuby] Vendored nekohtml (<code>org.cyberneko.html</code>) is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-24839">CVE-2022-24839</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-gx8x-g87m-h5q6">GHSA-gx8x-g87m-h5q6</a> for more information.</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li>[CRuby] Vendored zlib is updated from 1.2.11 to 1.2.12. (See <a href="https://github.com/sparklemotion/nokogiri/blob/v1.13.x/LICENSE-DEPENDENCIES.md#platform-releases">LICENSE-DEPENDENCIES.md</a> for details on which packages redistribute this library.)</li>
<li>[JRuby] Vendored Xerces-J (<code>xerces:xercesImpl</code>) is updated from 2.12.0 to 2.12.2.</li>
<li>[JRuby] Vendored nekohtml (<code>org.cyberneko.html</code>) is updated from a fork of 1.9.21 to 1.9.22.noko2. This fork is now publicly developed at <a href="https://github.com/sparklemotion/nekohtml">https://github.com/sparklemotion/nekohtml</a></li>
</ul>
<hr />
<p>sha256sum:</p>
<pre><code>095ff1995ed3dda3ea98a5f08bdc54bef02be1ce4e7c81034c4812e5e7c6e7e3  nokogiri-1.13.4-aarch64-linux.gem
7ebfc7415c819bcd4e849627e879cef2fb328bec90e802e50d74ccd13a60ec75  nokogiri-1.13.4-arm64-darwin.gem
41efd87c121991de26ef0393ac713d687e539813c3b79e454a2e3ffeecd107ea  nokogiri-1.13.4-java.gem
ab547504692ada0cec9d2e4e15afab659677c3f4c1ac3ea639bf5212b65246a1  nokogiri-1.13.4-x64-mingw-ucrt.gem
fa5c64cfdb71642ed647428e4d0d75ee0f4d189cfb63560c66fd8bdf99eb146b  nokogiri-1.13.4-x64-mingw32.gem
d6f07cbcbc28b75e8ac5d6e729ffba3602dffa0ad16ffac2322c9b4eb9b971fc  nokogiri-1.13.4-x86-linux.gem
0f7a4fd13e25abe3f98663fef0d115d58fdeff62cf23fef12d368e42adad2ce6  nokogiri-1.13.4-x86-mingw32.gem
3eef282f00ad360304fbcd5d72eb1710ff41138efda9513bb49eec832db5fa3e  nokogiri-1.13.4-x86_64-darwin.gem
3978610354ec67b59c128d23259c87b18374ee1f61cb9ed99de7143a88e70204  nokogiri-1.13.4-x86_64-linux.gem
0d46044eb39271e3360dae95ed6061ce17bc0028d475651dc48db393488c83bc  nokogiri-1.13.4.gem
</code></pre>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/sparklemotion/nokogiri/blob/v1.13.4/CHANGELOG.md">nokogiri's changelog</a>.</em></p>
<blockquote>
<h2>1.13.4 / 2022-04-11</h2>
<h3>Security</h3>
<ul>
<li>Address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-24836">CVE-2022-24836</a>, a regular expression denial-of-service vulnerability. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-crjr-9rc5-ghw8">GHSA-crjr-9rc5-ghw8</a> for more information.</li>
<li>[CRuby] Vendored zlib is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-25032">CVE-2018-25032</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-v6gp-9mmm-c6p5">GHSA-v6gp-9mmm-c6p5</a> for more information.</li>
<li>[JRuby] Vendored Xerces-J (<code>xerces:xercesImpl</code>) is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-23437">CVE-2022-23437</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-xxx9-3xcr-gjj3">GHSA-xxx9-3xcr-gjj3</a> for more information.</li>
<li>[JRuby] Vendored nekohtml (<code>org.cyberneko.html</code>) is updated to address <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-24839">CVE-2022-24839</a>. See <a href="https://github.com/sparklemotion/nokogiri/security/advisories/GHSA-gx8x-g87m-h5q6">GHSA-gx8x-g87m-h5q6</a> for more information.</li>
</ul>
<h3>Dependencies</h3>
<ul>
<li>[CRuby] Vendored zlib is updated from 1.2.11 to 1.2.12. (See <a href="https://github.com/sparklemotion/nokogiri/blob/v1.13.x/LICENSE-DEPENDENCIES.md#platform-releases">LICENSE-DEPENDENCIES.md</a> for details on which packages redistribute this library.)</li>
<li>[JRuby] Vendored Xerces-J (<code>xerces:xercesImpl</code>) is updated from 2.12.0 to 2.12.2.</li>
<li>[JRuby] Vendored nekohtml (<code>org.cyberneko.html</code>) is updated from a fork of 1.9.21 to 1.9.22.noko2. This fork is now publicly developed at <a href="https://github.com/sparklemotion/nekohtml">https://github.com/sparklemotion/nekohtml</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="4e2c4b2571"><code>4e2c4b2</code></a> version bump to v1.13.4</li>
<li><a href="6a20ee4d5d"><code>6a20ee4</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2510">https://github.com/facebook/rocksdb/issues/2510</a> from sparklemotion/flavorjones-encoding-reader-perfo...</li>
<li><a href="b848031a59"><code>b848031</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2509">https://github.com/facebook/rocksdb/issues/2509</a> from sparklemotion/flavorjones-parse-processing-inst...</li>
<li><a href="c0ecf3b6ef"><code>c0ecf3b</code></a> test: pend the LIBXML_LOADED_VERSION test on freebsd</li>
<li><a href="e444525ef1"><code>e444525</code></a> fix(perf): HTML4::EncodingReader detection</li>
<li><a href="1eb5580666"><code>1eb5580</code></a> style(rubocop): allow intentional use of empty initializer</li>
<li><a href="0feac5af68"><code>0feac5a</code></a> fix(dep): HTML parsing of processing instructions</li>
<li><a href="db72b906c5"><code>db72b90</code></a> test: recent nekohtml versions do not consider 'a' to be inline</li>
<li><a href="2af2a87985"><code>2af2a87</code></a> style(rubocop): allow intentional use of empty initializer</li>
<li><a href="ba7a28c9a2"><code>ba7a28c</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/sparklemotion/nokogiri/issues/2499">https://github.com/facebook/rocksdb/issues/2499</a> from sparklemotion/2441-xerces-2.12.2-backport-v1.13.x</li>
<li>Additional commits viewable in <a href="https://github.com/sparklemotion/nokogiri/compare/v1.13.3...v1.13.4">compare view</a></li>
</ul>
</details>
<br />

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nokogiri&package-manager=bundler&previous-version=1.13.3&new-version=1.13.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Pull Request resolved: https://github.com/facebook/rocksdb/pull/9831

Reviewed By: akankshamahajan15

Differential Revision: D35580365

Pulled By: jay-zhuang

fbshipit-source-id: f9d7d3096598418740e2c174d4dbc99a73e02dc6
2022-04-12 09:07:14 -07:00
Akanksha Mahajan
ae82d91492 Remove corrupted WAL files in kPointRecoveryMode with avoid_flush_during_recovery set true (#9634)
Summary:
1) In case of non-TransactionDB and avoid_flush_during_recovery = true, RocksDB won't
flush the data from WAL to L0 for all column families if possible. As a
result, not all column families can increase their log_numbers, and
min_log_number_to_keep won't change.
2) For transaction DB (.allow_2pc), even with the flush, there may be old WAL files that it must not delete because they can contain data of uncommitted transactions and min_log_number_to_keep won't change.

If we persist a new MANIFEST with
advanced log_numbers for some column families, then during a second
crash after persisting the MANIFEST, RocksDB will see some column
families' log_numbers larger than the corrupted wal, and the "column family inconsistency" error will be hit, causing recovery to fail.

As a solution,
1. The WALs whose numbers are larger than the corrupted WAL and smaller than the new WAL will be moved to the archive folder.
2. Currently, RocksDB DB::Open() may create and write to two new MANIFEST files even before recovery succeeds. This PR buffers the edits in a structure and writes to a new MANIFEST after recovery is successful.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9634

Test Plan:
1. Added new unit tests
2. make crash_test -j

Reviewed By: riversand963

Differential Revision: D34463666

Pulled By: akankshamahajan15

fbshipit-source-id: e233d3af0ed4e2028ca0cf051e5a334a0fdc9d19
2022-04-11 15:39:31 -07:00
Akanksha Mahajan
63e68a4e77 Enable async prefetching for ReadOptions.readahead_size (#9827)
Summary:
Currently async prefetching is enabled for implicit internal auto readahead in FilePrefetchBuffer if `ReadOptions.async_io` is set. This PR enables async prefetching for `ReadOptions.readahead_size` when `ReadOptions.async_io` is set to true.
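
A minimal sketch of the read pattern this affects, assuming only the documented ReadOptions fields (the sizes are arbitrary):

```
#include <memory>
#include <rocksdb/db.h>
#include <rocksdb/options.h>

// Minimal sketch: with async_io set, prefetching for the explicit
// readahead_size is now issued asynchronously as well.
void ScanWithAsyncReadahead(rocksdb::DB* db) {
  rocksdb::ReadOptions read_opts;
  read_opts.readahead_size = 2 * 1024 * 1024;  // 2 MB explicit readahead
  read_opts.async_io = true;                   // allow async prefetching
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(read_opts));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // consume it->key() / it->value()
  }
}
```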

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9827

Test Plan: Update unit test

Reviewed By: anand1976

Differential Revision: D35552129

Pulled By: akankshamahajan15

fbshipit-source-id: d9f9a96672852a591375a21eef15355cf3289f5c
2022-04-11 13:46:57 -07:00
mrambacher
b7db7eae26 Plugin Registry (#7949)
Summary:
Added a Plugin class to the ObjectRegistry.  Enabled compile-time and program-time addition of plugins to the Registry.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7949

Reviewed By: mrambacher

Differential Revision: D33517674

Pulled By: pdillinger

fbshipit-source-id: c3e3270aab76a489bfa9e85d78cdfca951912557
2022-04-11 13:44:09 -07:00
gitbw95
f241d082b6 Prevent double caching in the compressed secondary cache (#9747)
Summary:
###  **Summary:**
When both LRU Cache and CompressedSecondaryCache are configured together, some data blocks may end up cached twice (a configuration sketch of this setup follows the list below).

**Changes include:**
1. Update IS_PROMOTED to IS_IN_SECONDARY_CACHE to prevent confusion.
2. This PR updates SecondaryCacheResultHandle and uses IsErasedFromSecondaryCache to determine whether the handle is erased in the secondary cache. Then, the caller can determine whether to SetIsInSecondaryCache().
3. Rename LRUSecondaryCache to CompressedSecondaryCache.
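
A hedged sketch of the two-tier cache setup in question; CompressedSecondaryCacheOptions, NewCompressedSecondaryCache, and LRUCacheOptions::secondary_cache are assumptions based on the rename described above, and the capacities are arbitrary:

```
#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Hedged sketch: an LRU block cache backed by a compressed secondary cache.
// With this PR, blocks promoted back into the primary tier are tracked so
// that the same block is not cached in both tiers at once.
rocksdb::Options MakeTieredCacheOptions() {
  rocksdb::CompressedSecondaryCacheOptions secondary_opts;
  secondary_opts.capacity = 400 << 20;  // 400 MB compressed secondary cache

  rocksdb::LRUCacheOptions primary_opts;
  primary_opts.capacity = 320 << 20;    // 320 MB primary LRU cache
  primary_opts.secondary_cache = rocksdb::NewCompressedSecondaryCache(secondary_opts);

  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = rocksdb::NewLRUCache(primary_opts);

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```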

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9747

Test Plan:
**Test Scripts:**
1. Populate a DB. The on disk footprint is 482 MB. The data is set to be 50% compressible, so the total decompressed size is expected to be 964 MB.
./db_bench --benchmarks=fillrandom --num=10000000 -db=/db_bench_1

2. overwrite it to a stable state:
./db_bench --benchmarks=overwrite,stats --num=10000000 -use_existing_db -duration=10 --benchmark_write_rate_limit=2000000 -db=/db_bench_1

3. Run read tests with different cache settings:

T1:
./db_bench --benchmarks=seekrandom,stats --threads=16 --num=10000000 -use_existing_db -duration=120 --benchmark_write_rate_limit=52000000 -use_direct_reads --cache_size=520000000  --statistics -db=/db_bench_1

T2:
./db_bench --benchmarks=seekrandom,stats --threads=16 --num=10000000 -use_existing_db -duration=120 --benchmark_write_rate_limit=52000000 -use_direct_reads --cache_size=320000000 -compressed_secondary_cache_size=400000000 --statistics -use_compressed_secondary_cache -db=/db_bench_1

T3:
./db_bench --benchmarks=seekrandom,stats --threads=16 --num=10000000 -use_existing_db -duration=120 --benchmark_write_rate_limit=52000000 -use_direct_reads --cache_size=520000000 -compressed_secondary_cache_size=400000000 --statistics -use_compressed_secondary_cache -db=/db_bench_1

T4:
./db_bench --benchmarks=seekrandom,stats --threads=16 --num=10000000 -use_existing_db -duration=120 --benchmark_write_rate_limit=52000000 -use_direct_reads --cache_size=20000000 -compressed_secondary_cache_size=500000000 --statistics -use_compressed_secondary_cache -db=/db_bench_1

**Before this PR**
| Cache Size | Compressed Secondary Cache Size | Cache Hit Rate |
|------------|-------------------------------------|----------------|
|520 MB | 0 MB | 85.5% |
|320 MB | 400 MB | 96.2% |
|520 MB | 400 MB | 98.3% |
|20 MB | 500 MB | 98.8% |

**After this PR**
| Cache Size | Compressed Secondary Cache Size | Cache Hit Rate |
|------------|-------------------------------------|----------------|
|520 MB | 0 MB | 85.5% |
|320 MB | 400 MB | 99.9% |
|520 MB | 400 MB | 99.9% |
|20 MB | 500 MB | 99.2% |

Reviewed By: anand1976

Differential Revision: D35117499

Pulled By: gitbw95

fbshipit-source-id: ea2657749fc13efebe91a8a1b56bc61d6a224a12
2022-04-11 13:28:33 -07:00
Akanksha Mahajan
f3bcac39a6 Fix stress test failure in ReadAsync. (#9824)
Summary:
Fix stress test failure in ReadAsync by ignoring errors
injected during async read by FaultInjectionFS.
Failure:
```
 WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Didn't get expected error from MultiGet.
num_keys 14 Expected 1 errors, seen 0
Callstack that injected the fault
Injected error type = 32538
Message: error;
#0   ./db_stress() [0x6f7dd4] rocksdb::port::SaveStack(int*, int)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/port/stack_trace.cc:152
https://github.com/facebook/rocksdb/issues/1   ./db_stress() [0x7f2bda] rocksdb::FaultInjectionTestFS::InjectThreadSpecificReadError(rocksdb::FaultInjectionTestFS::ErrorOperation, rocksdb::Slice*, bool, char*, bool, bool*)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/utilities/fault_injection_fs.cc:891
https://github.com/facebook/rocksdb/issues/2   ./db_stress() [0x7f2e78] rocksdb::TestFSRandomAccessFile::Read(unsigned long, unsigned long, rocksdb::IOOptions const&, rocksdb::Slice*, char*, rocksdb::IODebugContext*) const	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/utilities/fault_injection_fs.cc:367
https://github.com/facebook/rocksdb/issues/3   ./db_stress() [0x6483d7] rocksdb::(anonymous namespace)::CompositeRandomAccessFileWrapper::Read(unsigned long, unsigned long, rocksdb::Slice*, char*) const	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/env/composite_env.cc:61
https://github.com/facebook/rocksdb/issues/4   ./db_stress() [0x654564] rocksdb::(anonymous namespace)::LegacyRandomAccessFileWrapper::Read(unsigned long, unsigned long, rocksdb::IOOptions const&, rocksdb::Slice*, char*, rocksdb::IODebugContext*) const	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/env/env.cc:152
https://github.com/facebook/rocksdb/issues/5   ./db_stress() [0x659b3b] rocksdb::FSRandomAccessFile::ReadAsync(rocksdb::FSReadRequest&, rocksdb::IOOptions const&, std::function<void (rocksdb::FSReadRequest const&, void*)>, void*, void**, std::function<void (void*)>*, rocksdb::IODebugContext*)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/./include/rocksdb/file_system.h:896
https://github.com/facebook/rocksdb/issues/6   ./db_stress() [0x8b8bab] rocksdb::RandomAccessFileReader::ReadAsync(rocksdb::FSReadRequest&, rocksdb::IOOptions const&, std::function<void (rocksdb::FSReadRequest const&, void*)>, void*, void**, std::function<void (void*)>*, rocksdb::Env::IOPriority)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/file/random_access_file_reader.cc:459
https://github.com/facebook/rocksdb/issues/7   ./db_stress() [0x8b501f] rocksdb::FilePrefetchBuffer::ReadAsync(rocksdb::IOOptions const&, rocksdb::RandomAccessFileReader*, rocksdb::Env::IOPriority, unsigned long, unsigned long, unsigned long, unsigned int)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/file/file_prefetch_buffer.cc:124
https://github.com/facebook/rocksdb/issues/8   ./db_stress() [0x8b55fc] rocksdb::FilePrefetchBuffer::PrefetchAsync(rocksdb::IOOptions const&, rocksdb::RandomAccessFileReader*, unsigned long, unsigned long, unsigned long, rocksdb::Env::IOPriority, bool&)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/file/file_prefetch_buffer.cc:363
https://github.com/facebook/rocksdb/issues/9   ./db_stress() [0x8b61f8] rocksdb::FilePrefetchBuffer::TryReadFromCacheAsync(rocksdb::IOOptions const&, rocksdb::RandomAccessFileReader*, unsigned long, unsigned long, rocksdb::Slice*, rocksdb::Status*, rocksdb::Env::IOPriority, bool)	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/file/file_prefetch_buffer.cc:482
https://github.com/facebook/rocksdb/issues/10  ./db_stress() [0x745e04] rocksdb::BlockFetcher::TryGetFromPrefetchBuffer()	/data/sandcastle/boxes/trunk-hg-fbcode-fbsource/fbcode/internal_repo_rocksdb/repo/table/block_fetcher.cc:76
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9824

Test Plan:
```
./db_stress --acquire_snapshot_one_in=10000 --adaptive_readahead=1 --allow_concurrent_memtable_write=0 --async_io=1 --atomic_flush=1 --avoid_flush_during_recovery=0 --avoid_unnecessary_blocking_io=0 -- backup_max_size=104857600 --backup_one_in=100000 --batch_protection_bytes_per_key=0 --block_size=16384 --bloom_bits=5.037629726741734 --bottommost_compression_type=lz4hc --cache_index_and_filter_blocks=0 --cache_size=8388608 --checkpoint_one_in=1000000 --checksum_type=kxxHash --clear_column_family_one_in=0 --column_families=1 --compact_files_one_in=1000000 --compact_range_one_in=1000000 --compaction_ttl=100 --compression_max_dict_buffer_bytes=1073741823 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=zstd --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --db=/home/akankshamahajan/dev/shm/rocksdb/rocksdb_crashtest_blackbox --db_write_buffer_size=8388608 --delpercent=0 --delrangepercent=0 --destroy_db_initially=0 - detect_filter_construct_corruption=1 --disable_wal=1 --enable_compaction_filter=0 --enable_pipelined_write=0 --expected_values_dir=/home/akankshamahajan/dev/shm/rocksdb/rocksdb_crashtest_expected --experimental_mempurge_threshold=8.772789063014715 --fail_if_options_file_error=0 --file_checksum_impl=crc32c --flush_one_in=1000000 --format_version=3 --get_current_wal_file_one_in=0 --get_live_files_one_in=1000000 --get_property_one_in=1000000 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=15 --index_type=3 --iterpercent=0 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=False --long_running_snapshots=0 --mark_for_compaction_one_file_in=0 --max_background_compactions=1 --max_bytes_for_level_base=67108864 --max_key=25000000 --max_key_len=3 --max_manifest_file_size=1073741824 --max_write_batch_group_size_bytes=16777216 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=2097152 --memtable_prefix_bloom_size_ratio=0.001 --memtable_whole_key_filtering=1 --memtablerep=skip_list --mmap_read=0 --mock_direct_io=True --nooverwritepercent=1 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=0 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=2 --pause_background_one_in=1000000 --periodic_compaction_seconds=1000 --prefix_size=-1 --prefixpercent=0 --prepopulate_block_cache=0 --progress_reports=0 --read_fault_one_in=32 --readpercent=100 --recycle_log_file_num=1 --reopen=0 --reserve_table_reader_memory=1 --ribbon_starting_level=999 --secondary_cache_fault_one_in=0 --set_options_one_in=0 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --subcompactions=2 --sync=0 --sync_fault_injection=False --target_file_size_base=16777216 --target_file_size_multiplier=1 --test_batches_snapshots=0 --top_level_index_pinning=3 --unpartitioned_pinning=2 --use_block_based_filter=0 --use_clock_cache=0 --use_direct_io_for_flush_and_compaction=1 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=1 --use_multiget=1 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=1000000 --verify_db_one_in=100000 --wal_compression=none --write_buffer_size=33554432 --write_dbid_to_manifest=1 --write_fault_one_in=0 --writepercent=0
```

Reviewed By: anand1976

Differential Revision: D35514566

Pulled By: akankshamahajan15

fbshipit-source-id: e2a868fdd7422604774c1419738f9926a21e92a4
2022-04-11 10:56:11 -07:00
Yanqin Jin
0ad9ee30ce Remove dead code (#9825)
Summary:
Options `preserve_deletes` and `iter_start_seqnum` have been removed since 7.0.

This PR removes dead code related to these two removed options.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9825

Test Plan: make check

Reviewed By: akankshamahajan15

Differential Revision: D35517950

Pulled By: riversand963

fbshipit-source-id: 86282ce5ec4087acb94a06a42a1b6d55b1715482
2022-04-11 10:26:55 -07:00
Duncan Bellamy
25e31d1a94 tools/db_bench_tool.cc use uint64_t instead of size_t (#9800)
Summary:
To fix compilation for 32-bit builds.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9800

Reviewed By: riversand963

Differential Revision: D35404447

fbshipit-source-id: 6a1185bb38f3a718357aa120e3b26a1ea77f023d
2022-04-08 13:29:19 -07:00
Hui Xiao
f337542948 Fix a bug of TEST_SetRandomTableProperties due to non-zero padding between fields in TableProperties struct (#9812)
Summary:
Context:
https://github.com/facebook/rocksdb/pull/9748#discussion_r843134214 reveals an issue with TEST_SetRandomTableProperties when non-zero padding is used between the last string field and first non-string field in TableProperties.
Fixed by https://github.com/facebook/rocksdb/pull/9748#discussion_r843244375

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9812

Test Plan: No production code changes and rely on existing CI

Reviewed By: ajkr

Differential Revision: D35423680

Pulled By: hx235

fbshipit-source-id: fd855eef3d32771bb79c65bd7012ab8bb3c400ab
2022-04-07 12:25:43 -07:00
Akanksha Mahajan
3fc2eaf561 Fix valgrind test failure for async read (#9819)
Summary:
Since not all platforms support io_uring, the unit
test was updated to take that into consideration when testing async reads.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9819

Test Plan:
valgrind --error-exitcode=2 --leak-check=full ./prefetch_test
--gtest_filter=PrefetchTest2.ReadAsyncWithPosixFS
CircleCI jobs

Reviewed By: pdillinger

Differential Revision: D35469959

Pulled By: akankshamahajan15

fbshipit-source-id: b170459ec816487fc0a13b1d55dbbe4f754b2eba
2022-04-07 10:31:50 -07:00
Akanksha Mahajan
7ea26abb8b Fix reseting of async_read_in_progress_ variable in FilePrefetchBuffer to call Poll API (#9815)
Summary:
Currently RocksDB resets async_read_in_progress_ in the callback,
due to which an underlying file system relying on the Poll API won't be called,
leading to stale memory access.
To fix this, async_read_in_progress_ is now reset after the Poll API
is called, to make sure the underlying file system waiting on Poll can clear
its state or take appropriate action.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9815

Test Plan: CircleCI tests

Reviewed By: anand1976

Differential Revision: D35451534

Pulled By: akankshamahajan15

fbshipit-source-id: b70ef6251a7aa9ed4876ba5e5100baa33d7d474c
2022-04-06 18:36:23 -07:00
sdong
e03f8a0c12 L0 Subcompaction to trim input files (#9802)
Summary:
When subcompaction is chosen for an L0->L1 compaction, in most cases all L0 files are involved in all subcompactions. However, that is not always the case. When files are generally (but not strictly) inserted in sequential order, only a subset of L0 files may be involved. Yet RocksDB always opens all those L0 files, builds an iterator, and reads many of the files' first or last blocks with expensive readahead. We trim some input files to reduce the overhead a little bit.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9802

Test Plan: Add a unit test to cover this case and manually validate the behavior while running the test.

Reviewed By: ajkr

Differential Revision: D35371031

fbshipit-source-id: 701ed7375b5cbe41672e93b38fe8a1503dad08b6
2022-04-06 18:19:19 -07:00
Peter Dillinger
8ce7cea93f Tests for filter compatibility (#9773)
Summary:
This change adds two unit tests that would each catch the
regression fixed in https://github.com/facebook/rocksdb/issues/9736

* TableMetaIndexKeys - detects any churn in metaindex block keys
generated by SST files using standard db_test_util configurations.
* BloomFilterCompatibility - this detects if any common built-in
FilterPolicy configurations fail to read filters generated by another.
(The regression bug caused NewRibbonFilterPolicy not to read filters
from NewBloomFilterPolicy and vice-versa.) This replaces some previous
tests that didn't really appear to be testing much of anything except
basic data correctness, which doesn't tell you a filter is being used.

Light refactoring in meta_blocks.cc/h to support inspecting metaindex
keys.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9773

Test Plan:
this is the test. Verified that 7.0.2 fails both tests and 7.0.3 passes.
With backporting for intentional API changes in 7.0, 6.29 also passes.

Reviewed By: ajkr

Differential Revision: D35236248

Pulled By: pdillinger

fbshipit-source-id: 493dfe9ad7e27524bf7c6c1af8a4b8c31bc6ef5a
2022-04-06 15:54:40 -07:00
anand76
c3d7e16252 Add WAL compression to stress tests (#9811)
Summary:
Add the WAL compression feature to the stress test.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9811

Reviewed By: riversand963

Differential Revision: D35414316

Pulled By: anand1976

fbshipit-source-id: 0c17b1ec55679a52f088ad368798b57139bd921a
2022-04-06 15:47:09 -07:00
Peter Dillinger
ad32646e18 Remove public rocksdb-lego-determinator (#9803)
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9803

Only the Meta-internal version is used now. precommit_checker.py is also now obsolete.

Bring back `make commit_prereq` in follow-up work

Reviewed By: jay-zhuang

Differential Revision: D35372283

fbshipit-source-id: 7428438ca51f878802c301d0d5591675e551a113
2022-04-06 14:27:01 -07:00
Akanksha Mahajan
0b8f885939 Update stats for Read and ReadAsync in random_access_file_reader for async prefetching (#9810)
Summary:
Update stats in random_access_file_reader for the Read and
ReadAsync APIs to take into account the read latency for async
prefetching.

It also fixes the ERROR_HANDLER_AUTORESUME_RETRY_COUNT stat, whose value was
incorrect in portal.h.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9810

Test Plan: Update unit test

Reviewed By: anand1976

Differential Revision: D35433081

Pulled By: akankshamahajan15

fbshipit-source-id: aeec3901270e58a003ce6b5214bd25ddcb3a12a9
2022-04-06 14:26:53 -07:00
Hui Xiao
49623f9c8e Account memory of big memory users in BlockBasedTable in global memory limit (#9748)
Summary:
**Context:**
Through heap profiling, we discovered that `BlockBasedTableReader` objects can accumulate and lead to high memory usage (e.g., `max_open_file = -1`). This memory is currently not saved, not tracked, not constrained, and not cache-evictable. As a first step to improve this, similar to https://github.com/facebook/rocksdb/pull/8428, this PR tracks an estimate of each `BlockBasedTableReader` object's memory in the block cache and fails future creation if the memory usage exceeds the available space in the cache at the time of creation (a configuration sketch follows the summary list below).

**Summary:**
- Approximate the memory usage of big memory users (`BlockBasedTable::Rep` and `TableProperties`) in addition to the existing estimated ones (filter block/index block/un-compression dictionary)
- Charge all of these memory usages to block cache on `BlockBasedTable::Open()` and release them on `~BlockBasedTable()` as there is no memory usage fluctuation of concern in between
- Refactor on CacheReservationManager (and its call-sites) to add concurrent support for BlockBasedTable  used in this PR.
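
A hedged configuration sketch, as mentioned above; the option name reserve_table_reader_memory is an assumption taken from the db_bench flag used in the test plan below:

```
#include <memory>
#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Hedged sketch: charge table reader memory against the block cache so that
// opening more table readers than the cache can accommodate fails instead of
// growing unbounded. The option name is assumed from the
// -reserve_table_reader_memory db_bench flag in the test plan.
rocksdb::Options MakeReaderChargingOptions(std::shared_ptr<rocksdb::Cache> block_cache) {
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = block_cache;
  table_opts.reserve_table_reader_memory = true;

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return options;
}
```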

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9748

Test Plan:
- New unit tests
- db bench: `OpenDb` : **-0.52% in ms**
  - Setup `./db_bench -benchmarks=fillseq -db=/dev/shm/testdb -disable_auto_compactions=1 -write_buffer_size=1048576`
  - Repeated run with pre-change w/o feature and post-change with feature, benchmark `OpenDb`:  `./db_bench -benchmarks=readrandom -use_existing_db=1 -db=/dev/shm/testdb -reserve_table_reader_memory=true (remove this when running w/o feature) -file_opening_threads=3 -open_files=-1 -report_open_timing=true| egrep 'OpenDb:'`

#-run | (feature-off) avg milliseconds | std milliseconds | (feature-on) avg milliseconds | std milliseconds | change (%)
-- | -- | -- | -- | -- | --
10 | 11.4018 | 5.95173 | 9.47788 | 1.57538 | -16.87382694
20 | 9.23746 | 0.841053 | 9.32377 | 1.14074 | 0.9343477536
40 | 9.0876 | 0.671129 | 9.35053 | 1.11713 | 2.893283155
80 | 9.72514 | 2.28459 | 9.52013 | 1.0894 | -2.108041632
160 | 9.74677 | 0.991234 | 9.84743 | 1.73396 | 1.032752389
320 | 10.7297 | 5.11555 | 10.547 | 1.97692 | **-1.70275031**
640 | 11.7092 | 2.36565 | 11.7869 | 2.69377 | **0.6635807741**

-  db bench on write with cost to cache in WriteBufferManager (just in case this PR's CRM refactoring accidentally slows down anything in WBM) : `fillseq` : **+0.54% in micros/op**
`./db_bench -benchmarks=fillseq -db=/dev/shm/testdb -disable_auto_compactions=1 -cost_write_buffer_to_cache=true -write_buffer_size=10000000000 | egrep 'fillseq'`

#-run | (pre-PR) avg micros/op | std micros/op | (post-PR)  avg micros/op | std micros/op | change (%)
-- | -- | -- | -- | -- | --
10 | 6.15 | 0.260187 | 6.289 | 0.371192 | 2.260162602
20 | 7.28025 | 0.465402 | 7.37255 | 0.451256 | 1.267813605
40 | 7.06312 | 0.490654 | 7.13803 | 0.478676 | **1.060579461**
80 | 7.14035 | 0.972831 | 7.14196 | 0.92971 | **0.02254791432**

-  filter bench: `bloom filter`: **-0.78% in ms/key**
    - ` ./filter_bench -impl=2 -quick -reserve_table_builder_memory=true | grep 'Build avg'`

#-run | (pre-PR) avg ns/key | std ns/key | (post-PR)  ns/key | std ns/key | change (%)
-- | -- | -- | -- | -- | --
10 | 26.4369 | 0.442182 | 26.3273 | 0.422919 | **-0.4145720565**
20 | 26.4451 | 0.592787 | 26.1419 | 0.62451 | **-1.1465262**

- Crash test `python3 tools/db_crashtest.py blackbox --reserve_table_reader_memory=1 --cache_size=1` killed as normal

Reviewed By: ajkr

Differential Revision: D35136549

Pulled By: hx235

fbshipit-source-id: 146978858d0f900f43f4eb09bfd3e83195e3be28
2022-04-06 10:33:00 -07:00
Ramkumar Vadivelu
633b7f15d5 Update/Fix API comments for OpenForReadOnly() and OpenAsSecondary() (#9807)
Summary:
Updates/fixes to API comments for OpenForReadOnly() and OpenAsSecondary()

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9807

Reviewed By: ajkr

Differential Revision: D35419206

Pulled By: ramvadiv

fbshipit-source-id: ac2514a14e4ec77b2ed34c5dca6251528c5b92f1
2022-04-05 20:22:47 -07:00
Andrew Kryczka
3ae9c5309b Remove explicit padding from CacheAlignedInstrumentedMutex (#9809)
Summary:
Fixes https://github.com/facebook/rocksdb/issues/9779.

The padding at the end of a struct is added implicitly according to the
sizeof spec: "When applied to a class, the result is the
number of bytes in an object of that class including any padding
required for placing objects of that type in an array"
(https://eel.is/c++draft/expr.sizeof#2.sentence-2). We should drop the
explicit padding since it assumed support for zero-length arrays, which
is non-standard.
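
A small standalone illustration (not RocksDB code) of why the explicit padding is unnecessary: sizeof already accounts for the tail padding implied by the alignment.

```
#include <iostream>

// Illustration only: sizeof includes the padding needed to place objects of
// the type in an array, so no explicit padding member is required.
struct alignas(64) CacheAlignedThing {
  int value;  // 4 bytes of payload
};

static_assert(sizeof(CacheAlignedThing) == 64,
              "tail padding is added implicitly to reach the alignment");

int main() {
  std::cout << sizeof(CacheAlignedThing) << "\n";  // prints 64
  return 0;
}
```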

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9809

Test Plan: rely on CI

Reviewed By: riversand963

Differential Revision: D35413496

Pulled By: ajkr

fbshipit-source-id: 25d52ca45e648ad0d5657149f26f6adecbed1cb4
2022-04-05 18:32:05 -07:00
gukaifeng
60ceb8d0e2 rename property "kIsFileDeletionsEnabled" to "kIsFileDeletionsDisabled" (#9791)
Summary:
The name of this property, "kIsFileDeletionsEnabled", is very easy to misunderstand.

One would expect 0 to represent false (i.e., disabled) and non-zero to mean true (enabled), but this property is just the opposite.

I renamed this property, changing as few other places as possible, so that the final meaning remains the same but the name matches common sense.
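
For reference, a hedged sketch of how the property is read; the string name "rocksdb.is-file-deletions-enabled" is the long-standing public property string and is assumed unchanged by this rename:

```
#include <cstdint>
#include <rocksdb/db.h>

// Hedged sketch: the string property keeps its historical name; only the C++
// constant is renamed so that its meaning (non-zero == deletions disabled)
// reads correctly at call sites.
bool FileDeletionsDisabled(rocksdb::DB* db) {
  uint64_t disabled = 0;
  db->GetIntProperty("rocksdb.is-file-deletions-enabled", &disabled);
  return disabled != 0;
}
```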

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9791

Reviewed By: ajkr

Differential Revision: D35362166

Pulled By: jay-zhuang

fbshipit-source-id: 85310d88bdd131893effb64e1adb7d0d7b202f88
2022-04-05 17:16:47 -07:00
Changyu Bi
a180c5cc3a Added GetMergeOperands() to stress test (#9804)
Summary:
db_stress does not yet cover GetMergeOperands(); this PR adds GetMergeOperands() to db_stress.
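
For context, a minimal sketch of the API now covered by db_stress:

```
#include <vector>
#include <rocksdb/db.h>

// Minimal sketch: fetch all merge operands for a key without merging them.
void ReadMergeOperands(rocksdb::DB* db, const rocksdb::Slice& key) {
  const int kMaxOperands = 16;
  std::vector<rocksdb::PinnableSlice> operands(kMaxOperands);
  rocksdb::GetMergeOperandsOptions opts;
  opts.expected_max_number_of_operands = kMaxOperands;
  int num_operands = 0;
  rocksdb::Status s = db->GetMergeOperands(rocksdb::ReadOptions(),
                                           db->DefaultColumnFamily(), key,
                                           operands.data(), &opts, &num_operands);
  if (s.ok()) {
    // operands[0..num_operands) hold the unmerged operands for `key`.
  }
}
```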

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9804

Test Plan:
```make -j32 db_stress```

```python3 tools/db_crashtest.py blackbox --simple --interval=30 --duration=2400 --max_key=100000 --write_buffer_size=524288 --target_file_size_base=524288 --max_bytes_for_level_base=2097152 --value_size_mult=33```

Reviewed By: ajkr

Differential Revision: D35387137

Pulled By: cbi42

fbshipit-source-id: 8f851ef68b5af4d824128ad55ebe564f7ad6f7e6
2022-04-05 14:56:28 -07:00