Compare commits

14 Commits

Author SHA1 Message Date
Levi Tamasi
fcf3d75f3f Update HISTORY.md for PR 9273 (#9282)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/9282

Reviewed By: akankshamahajan15

Differential Revision: D33027844

Pulled By: ltamasi

fbshipit-source-id: 7540d36010414311bc39610fff92a6498be1570c
2021-12-10 14:56:20 -08:00
akankshamahajan
417828ba84 Update HISTORY.md and version.h for 6.27.3 2021-12-10 10:26:22 -08:00
Akanksha Mahajan
1251252519 Fix segmentation fault in table_options.prepopulate_block_cache when used with partition_filters (#9263)
Summary:
When table_options.prepopulate_block_cache is set to
BlockBasedTableOptions::PrepopulateBlockCache::kFlushOnly and
table_options.partition_filters is also set to true, there is a
segmentation fault when the top-level filter is fetched, because it is
entered into the cache with the wrong type.
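For context, a minimal sketch of the option combination that triggered the crash (a hypothetical repro, not taken from the PR; field names are from BlockBasedTableOptions in table.h, and the DB path is illustrative):

```
#include <rocksdb/db.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/table.h>

int main() {
  rocksdb::BlockBasedTableOptions table_options;
  // Partitioned filters need a filter policy and the two-level index.
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  table_options.partition_filters = true;
  table_options.index_type =
      rocksdb::BlockBasedTableOptions::IndexType::kTwoLevelIndexSearch;
  table_options.cache_index_and_filter_blocks = true;
  table_options.prepopulate_block_cache =
      rocksdb::BlockBasedTableOptions::PrepopulateBlockCache::kFlushOnly;

  rocksdb::Options options;
  options.create_if_missing = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/repro_9263", &db);
  // Before the fix, a flush with these options inserted the top-level
  // partitioned filter block into the block cache under the wrong type,
  // and a later fetch of that block crashed.
  if (s.ok()) {
    delete db;
  }
  return 0;
}
```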

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9263

Test Plan:
Updated unit tests;
Ran db_stress: make crash_test -j32

Reviewed By: pdillinger

Differential Revision: D32936566

Pulled By: akankshamahajan15

fbshipit-source-id: 8bd79e53830d3e3c1bb79787e1ffbc3cb46d4426
2021-12-10 09:53:08 -08:00
mrambacher
f181158b95 Fix an issue with MemTableRepFactory::CreateFromString (#9273)
Summary:
If ignore_unsupported_options=true, it is possible for MemTableRepFactory::CreateFromString to succeed without setting a result (result=nullptr). This caused the original value to be overwritten with null, and an error to be raised later when PrepareOptions was invoked.
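A condensed sketch of the guard this PR applies (it mirrors the cf_options.cc hunk further down; the wrapper function here is illustrative):

```
#include <memory>
#include <string>

#include <rocksdb/convenience.h>
#include <rocksdb/memtablerep.h>

using ROCKSDB_NAMESPACE::ConfigOptions;
using ROCKSDB_NAMESPACE::MemTableRepFactory;
using ROCKSDB_NAMESPACE::Status;

// With ignore_unsupported_options=true, CreateFromString() can return OK
// while leaving `factory` null; only overwrite the target when a non-null
// result actually came back, so the original value survives.
Status SetMemTableFactory(const ConfigOptions& opts, const std::string& value,
                          std::shared_ptr<MemTableRepFactory>* shared) {
  std::unique_ptr<MemTableRepFactory> factory;
  Status s = MemTableRepFactory::CreateFromString(opts, value, &factory);
  if (factory && s.ok()) {
    shared->reset(factory.release());
  }
  return s;
}
```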

Added a unit test for this condition. Comparable tests for all of the other Customizable classes will follow (in another PR, unless reviewers require them here).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9273

Reviewed By: ltamasi

Differential Revision: D32990365

Pulled By: mrambacher

fbshipit-source-id: b150724c3f5ae7346357b3866244fd93466875c7
2021-12-09 13:40:03 -08:00
mrambacher
1217630bf3 Fix GetOptionsPtr for Wrapped Customizable; Allow null options map (#9213)
Summary:
1.  Fix GetOptionsPtr for Wrapped (Inner() != nullptr) Customizable objects.  This allows the inner options to be returned via this method.

2.  Allow the option type map to be nullptr.  This allows objects to be registered as options (for GetOptionsPtr) but not be used by the configuration methods.

Added tests as appropriate.
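A rough sketch of the two behaviors together (class and option names here are illustrative; the real coverage is in the customizable_test.cc hunk further down):

```
#include <rocksdb/customizable.h>

using ROCKSDB_NAMESPACE::Customizable;

struct MyOptions {
  static const char* kName() { return "MyOptions"; }
  int value = 42;
};

class MyCustomizable : public Customizable {
 public:
  MyCustomizable() {
    // A nullptr type map registers the struct for GetOptionsPtr/GetOptions
    // lookups without exposing it to the configuration methods.
    RegisterOptions(&opts_, /*type_map=*/nullptr);
  }
  static const char* kClassName() { return "My"; }
  const char* Name() const override { return kClassName(); }

 private:
  MyOptions opts_;
};

// Usage sketch:
//   MyCustomizable c;
//   auto* o = c.GetOptions<MyOptions>();  // non-null: found via GetOptionsPtr
//   // ConfigureFromString(..., "value=1") still fails: no type map.
// For a wrapper class with Inner() != nullptr, GetOptionsPtr now also
// searches the inner object, so inner options can be retrieved through
// the wrapper.
```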

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9213

Reviewed By: zhichao-cao

Differential Revision: D32718882

Pulled By: mrambacher

fbshipit-source-id: 563203d1f006a2629060feb31c5dff9a233e1e83
2021-12-09 13:39:49 -08:00
Adam Retter
8e588307fc Fix fstatfs call for compilation on 32 bit systems (#9251)
Summary:
On some 32-bit systems, BTRFS_SUPER_MAGIC is unsigned while __fsword_t is signed.
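The one-line fix (visible in the io_posix.cc hunk further down) casts the constant to the field's own type. A self-contained sketch of the pattern, assuming Linux headers:

```
#include <linux/magic.h>  // BTRFS_SUPER_MAGIC
#include <sys/vfs.h>      // fstatfs, struct statfs

// On the affected targets, comparing the signed statfs::f_type against the
// unsigned BTRFS_SUPER_MAGIC trips a sign-compare warning (presumably
// promoted to an error by -Werror); casting to the field's own type keeps
// the comparison well-defined on every platform.
bool IsOnBtrfs(int fd) {
  struct statfs buf;
  if (fstatfs(fd, &buf) != 0) {
    return false;
  }
  return buf.f_type ==
         static_cast<decltype(buf.f_type)>(BTRFS_SUPER_MAGIC);
}
```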

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9251

Reviewed By: ajkr

Differential Revision: D32961651

Pulled By: pdillinger

fbshipit-source-id: 78e85fc1336f304a21e4d5961e60957c90daed63
2021-12-08 23:16:24 -08:00
akankshamahajan
5ac7843a03 Update version to 6.27.2 2021-12-01 11:43:55 -08:00
akankshamahajan
0ddd74c708 History Update for #9234
2021-12-01 11:40:09 -08:00
Akanksha Mahajan
db0435772f Fix bug in rocksdb internal automatic prefetching (#9234)
Summary:
After introducing adaptive_readahead, the original flow got broken:
the readahead size was set to 0, because of which RocksDB wasn't able
to do the automatic prefetching it enables after seeing sequential
reads. This PR fixes it.
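For reference, a minimal sketch of the read path this affects, assuming an existing DB handle (error handling omitted):

```
#include <memory>

#include <rocksdb/db.h>
#include <rocksdb/options.h>

// With adaptive_readahead enabled, a sequential scan should ramp the
// internal readahead size back up instead of leaving it stuck at 0,
// which is what happened before this patch.
void ScanAll(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.adaptive_readahead = true;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // Sequential reads like this are what trigger RocksDB's automatic
    // (implicit) prefetching.
  }
}
```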

----------------------------------------------------------------------------------------------------

Before this patch:
./db_bench -use_existing_db=true -db=/tmp/prefix_scan -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB:    version 6.27
Date:       Tue Nov 30 11:56:50 2021
CPU:        24 * Intel Core Processor (Broadwell)
CPUCache:   16384 KB
Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
Values:     512 bytes each (256 bytes after compression)
Entries:    5000000
Prefix:    0 bytes
Keys per prefix:    0
RawSize:    2594.0 MB (estimated)
FileSize:   1373.3 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
DB path: [/tmp/prefix_scan]

seekrandom   : 5356367.174 micros/op 0 ops/sec;   29.4 MB/s (23 of 23 found)

----------------------------------------------------------------------------------------------------

After the patch:
./db_bench -use_existing_db=true -db=/tmp/prefix_scan -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
RocksDB:    version 6.27
Date:       Tue Nov 30 14:38:33 2021
CPU:        24 * Intel Core Processor (Broadwell)
CPUCache:   16384 KB
Keys:       32 bytes each (+ 0 bytes user-defined timestamp)
Values:     512 bytes each (256 bytes after compression)
Entries:    5000000
Prefix:    0 bytes
Keys per prefix:    0
RawSize:    2594.0 MB (estimated)
FileSize:   1373.3 MB (estimated)
Write rate: 0 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
WARNING: Assertions are enabled; benchmarks unnecessarily slow
------------------------------------------------
DB path: [/tmp/prefix_scan]
seekrandom   :  456504.277 micros/op 2 ops/sec;  359.8 MB/s (264 of 264 found)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9234

Test Plan:
Ran ./db_bench -db=/data/mysql/rocksdb/prefix_scan -benchmarks="fillseq" -key_size=32 -value_size=512 -num=5000000 -use_direct_io_for_flush_and_compaction=true -target_file_size_base=16777216
and then ./db_bench -use_existing_db=true -db=/data/mysql/rocksdb/prefix_scan -benchmarks="seekrandom" -key_size=32 -value_size=512 -num=5000000 -use_direct_reads=true -seek_nexts=327680 -duration=120 -ops_between_duration_checks=1
and compared the results.

Reviewed By: anand1976

Differential Revision: D32743965

Pulled By: akankshamahajan15

fbshipit-source-id: b950fba68c91963b7deb5c20acdf471bc60251f5
2021-12-01 09:19:55 -08:00
Peter Dillinger
e15c8e6819 Update version to 6.27.1 2021-11-29 09:37:41 -08:00
Peter Dillinger
0b31668e0c HISTORY for #9208
Summary: Update HISTORY for bug fix.

Test Plan: n/a
2021-11-29 09:36:44 -08:00
Peter Dillinger
57a817df76 Fix bug affecting GetSortedWalFiles, Backups, Checkpoint (#9208)
Summary:
Saw error like this:
`Backup failed -- IO error: No such file or directory: While opening a
file for sequentially reading:
/dev/shm/rocksdb/rocksdb_crashtest_blackbox/004426.log: No such file or
directory`

Unfortunately, GetSortedWalFiles (used by Backups, Checkpoint, etc.)
relies on no file deletions happening while it's operating, which
means not only disabling (more) deletions, but also ensuring any
pending deletions are completed. Two fixes related to this:

* There was a gap in several places between decrementing
pending_purge_obsolete_files_ and incrementing bg_purge_scheduled_ where
the db mutex would be released and GetSortedWalFiles (and others) could
get false information that no deletions are pending.

* The fix to https://github.com/facebook/rocksdb/issues/8591 (disabling deletions in GetSortedWalFiles) was
incomplete because it didn't prevent pending deletions from occurring
during the operation (if deletions were not already disabled, the case
that was to be fixed by the change).
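Condensed from the db_filesnapshot.cc hunk further down, the fixed ordering looks roughly like this (member names are DBImpl's; this is a paraphrase, not the verbatim patch):

```
// Inside DBImpl::GetSortedWalFiles, after the fix:
// 1. Disable deletions first, so no new purges get scheduled.
// 2. Wait under the DB mutex until in-flight purges have drained.
// 3. Only then scan for WAL files.
Status deletions_disabled = DisableFileDeletions();
{
  InstrumentedMutexLock l(&mutex_);
  while (pending_purge_obsolete_files_ > 0 || bg_purge_scheduled_ > 0) {
    bg_cv_.Wait();
  }
}
Status s = wal_manager_.GetSortedWalFiles(files);
if (deletions_disabled.ok()) {
  // DisableFileDeletions is NotSupported (and unneeded) for read-only DBs.
  EnableFileDeletions(/*force=*/false).PermitUncheckedError();
}
return s;
```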

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9208

Test Plan:
existing tests (it's hard to write a test for interleavings
that are now excluded - this is what stress test is for)

Reviewed By: ajkr

Differential Revision: D32630675

Pulled By: pdillinger

fbshipit-source-id: a121e3da648de130cd24d44c524232f4eb22f178
2021-11-29 09:21:19 -08:00
Peter Dillinger
8e43279f93 More improvements to output for CircleCI (#9201)
Summary:
More follow-up to https://github.com/facebook/rocksdb/issues/9193 + https://github.com/facebook/rocksdb/issues/9188
* Even though we need to print ETA updates to avoid hitting the
10-minute timeout, we need to avoid printing an update if there's no
actual progress, so that hung tests will time out after 10 minutes
rather than 5 hours.
* When there is a hung test, it's really annoying to track down which
test is hung, so if no progress is observed for 1 minute, we run ps once
to show what is running.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9201

Test Plan: manual and CircleCI

Reviewed By: jay-zhuang

Differential Revision: D32612028

Pulled By: pdillinger

fbshipit-source-id: 00f8ea70fc5fec9ede28ff74287d90fc73854aad
2021-11-29 09:19:49 -08:00
Yanqin Jin
c48bae0968 Fix internal build error (#9195)
Summary:
Internal build reported:
```
rocksdb/listener.h:470:3: error: extra ';' inside a struct [-Werror,-Wextra-semi]
```
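For reference, the warning class in two lines (a hypothetical struct; it compiles, but -Wextra-semi flags it and -Werror turns it into the error above):

```
struct Example {
  int field;
  ;  // stray ';' inside a struct; the fix simply deletes it
};
```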

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9195

Test Plan: import to fbcode and compile.

Reviewed By: akankshamahajan15

Differential Revision: D32590138

Pulled By: riversand963

fbshipit-source-id: ca4ed9cca210a1a9a12d3de17c789ef9151c57e8
2021-11-22 09:40:00 -08:00
28 changed files with 446 additions and 190 deletions


@ -1,4 +1,17 @@
# Rocksdb Change Log
## 6.27.3 (2021-12-10)
### Bug Fixes
* Fixed a bug in TableOptions.prepopulate_block_cache which caused a segmentation fault when used with TableOptions.partition_filters = true and TableOptions.cache_index_and_filter_blocks = true.
* Fixed a bug affecting custom memtable factories which are not registered with the `ObjectRegistry`. The bug could result in failure to save the OPTIONS file.
## 6.27.2 (2021-12-01)
### Bug Fixes
* Fixed a bug in RocksDB's automatic implicit prefetching, which was broken by the new adaptive_readahead feature: internal prefetching got disabled when an iterator moved from one file to the next.
## 6.27.1 (2021-11-29)
### Bug Fixes
* Fixed a bug that could, with WAL enabled, cause backups, checkpoints, and `GetSortedWalFiles()` to fail randomly with an error like `IO error: 001234.log: No such file or directory`
## 6.27.0 (2021-11-19)
### New Features
* Added new ChecksumType kXXH3 which is faster than kCRC32c on almost all x86\_64 hardware.


@ -1842,7 +1842,6 @@ sub start_another_job {
}
$opt::min_progress_interval = 0;
$opt::progress_sep = "\r";
sub init_progress {
# Uses:
@ -1852,7 +1851,6 @@ sub init_progress {
$|=1;
if (not $Global::is_terminal) {
$opt::min_progress_interval = 30;
$opt::progress_sep = "\n";
}
if($opt::bar) {
return("","");
@ -1878,7 +1876,9 @@ sub drain_job_queue {
}
my $last_header="";
my $sleep = 0.2;
my $last_left = 1000000000;
my $last_progress_time = 0;
my $ps_reported = 0;
do {
while($Global::total_running > 0) {
debug($Global::total_running, "==", scalar
@ -1889,14 +1889,37 @@ sub drain_job_queue {
close $job->fh(0,"w");
}
}
if($opt::progress and (time() - $last_progress_time) >= $opt::min_progress_interval) {
$last_progress_time = time();
# When not connected to terminal, assume CI (e.g. CircleCI). In
# that case we want occasional progress output to prevent abort
# due to timeout with no output, but we also need to stop sending
# progress output if there has been no actual progress, so that
# the job can time out appropriately (CircleCI: 10m) in case of
# a hung test. But without special output, it is extremely
# annoying to diagnose which test is hung, so we add that using
# `ps` below.
if($opt::progress and
($Global::is_terminal or (time() - $last_progress_time) >= 30)) {
my %progress = progress();
if($last_header ne $progress{'header'}) {
print $Global::original_stderr "\n", $progress{'header'}, "\n";
$last_header = $progress{'header'};
}
print $Global::original_stderr $opt::progress_sep,$progress{'status'};
if ($Global::is_terminal) {
print $Global::original_stderr "\r",$progress{'status'};
}
if ($last_left > $Global::left) {
if (not $Global::is_terminal) {
print $Global::original_stderr $progress{'status'},"\n";
}
$last_progress_time = time();
$ps_reported = 0;
} elsif (not $ps_reported and (time() - $last_progress_time) >= 60) {
# No progress in at least 60 seconds: run ps
print $Global::original_stderr "\n";
system("ps", "-wf");
$ps_reported = 1;
}
$last_left = $Global::left;
flush $Global::original_stderr;
}
if($Global::total_running < $Global::max_jobs_running
@ -1968,6 +1991,7 @@ sub progress {
compute_eta();
$eta = sprintf("ETA: %ds Left: %d AVG: %.2fs ",
$this_eta, $left, $avgtime);
$Global::left = $left;
}
my $termcols = terminal_columns();
my @workers = sort keys %Global::host;


@ -75,11 +75,6 @@ ColumnFamilyHandleImpl::~ColumnFamilyHandleImpl() {
bool defer_purge =
db_->immutable_db_options().avoid_unnecessary_blocking_io;
db_->PurgeObsoleteFiles(job_context, defer_purge);
if (defer_purge) {
mutex_->Lock();
db_->SchedulePurge();
mutex_->Unlock();
}
}
job_context.Clean();
}


@ -650,13 +650,32 @@ TEST_F(DBBlockCacheTest, WarmCacheWithDataBlocksDuringFlush) {
}
// This test caches data, index and filter blocks during flush.
TEST_F(DBBlockCacheTest, WarmCacheWithBlocksDuringFlush) {
class DBBlockCacheTest1 : public DBTestBase,
public ::testing::WithParamInterface<bool> {
public:
const size_t kNumBlocks = 10;
const size_t kValueSize = 100;
DBBlockCacheTest1() : DBTestBase("db_block_cache_test1", true) {}
};
INSTANTIATE_TEST_CASE_P(DBBlockCacheTest1, DBBlockCacheTest1,
::testing::Bool());
TEST_P(DBBlockCacheTest1, WarmCacheWithBlocksDuringFlush) {
Options options = CurrentOptions();
options.create_if_missing = true;
options.statistics = ROCKSDB_NAMESPACE::CreateDBStatistics();
BlockBasedTableOptions table_options;
table_options.block_cache = NewLRUCache(1 << 25, 0, false);
bool use_partition = GetParam();
if (use_partition) {
table_options.partition_filters = true;
table_options.index_type =
BlockBasedTableOptions::IndexType::kTwoLevelIndexSearch;
}
table_options.cache_index_and_filter_blocks = true;
table_options.prepopulate_block_cache =
BlockBasedTableOptions::PrepopulateBlockCache::kFlushOnly;
@ -669,9 +688,15 @@ TEST_F(DBBlockCacheTest, WarmCacheWithBlocksDuringFlush) {
ASSERT_OK(Put(ToString(i), value));
ASSERT_OK(Flush());
ASSERT_EQ(i, options.statistics->getTickerCount(BLOCK_CACHE_DATA_ADD));
if (use_partition) {
ASSERT_EQ(2 * i,
options.statistics->getTickerCount(BLOCK_CACHE_INDEX_ADD));
ASSERT_EQ(2 * i,
options.statistics->getTickerCount(BLOCK_CACHE_FILTER_ADD));
} else {
ASSERT_EQ(i, options.statistics->getTickerCount(BLOCK_CACHE_INDEX_ADD));
ASSERT_EQ(i, options.statistics->getTickerCount(BLOCK_CACHE_FILTER_ADD));
}
ASSERT_EQ(value, Get(ToString(i)));
ASSERT_EQ(0, options.statistics->getTickerCount(BLOCK_CACHE_DATA_MISS));
@ -679,11 +704,16 @@ TEST_F(DBBlockCacheTest, WarmCacheWithBlocksDuringFlush) {
ASSERT_EQ(0, options.statistics->getTickerCount(BLOCK_CACHE_INDEX_MISS));
ASSERT_EQ(i * 3, options.statistics->getTickerCount(BLOCK_CACHE_INDEX_HIT));
ASSERT_EQ(0, options.statistics->getTickerCount(BLOCK_CACHE_FILTER_MISS));
if (use_partition) {
ASSERT_EQ(i * 3,
options.statistics->getTickerCount(BLOCK_CACHE_FILTER_HIT));
} else {
ASSERT_EQ(i * 2,
options.statistics->getTickerCount(BLOCK_CACHE_FILTER_HIT));
}
ASSERT_EQ(0, options.statistics->getTickerCount(BLOCK_CACHE_FILTER_MISS));
}
// Verify compaction not counted
ASSERT_OK(db_->CompactRange(CompactRangeOptions(), /*begin=*/nullptr,
/*end=*/nullptr));
@ -692,11 +722,18 @@ TEST_F(DBBlockCacheTest, WarmCacheWithBlocksDuringFlush) {
// Index and filter blocks are automatically warmed when the new table file
// is automatically opened at the end of compaction. This is not easily
// disabled so results in the new index and filter blocks being warmed.
if (use_partition) {
EXPECT_EQ(2 * (1 + kNumBlocks),
options.statistics->getTickerCount(BLOCK_CACHE_INDEX_ADD));
EXPECT_EQ(2 * (1 + kNumBlocks),
options.statistics->getTickerCount(BLOCK_CACHE_FILTER_ADD));
} else {
EXPECT_EQ(1 + kNumBlocks,
options.statistics->getTickerCount(BLOCK_CACHE_INDEX_ADD));
EXPECT_EQ(1 + kNumBlocks,
options.statistics->getTickerCount(BLOCK_CACHE_FILTER_ADD));
}
}
TEST_F(DBBlockCacheTest, DynamicallyWarmCacheDuringFlush) {
Options options = CurrentOptions();


@ -127,31 +127,30 @@ Status DBImpl::GetLiveFiles(std::vector<std::string>& ret,
}
Status DBImpl::GetSortedWalFiles(VectorLogPtr& files) {
{
// If caller disabled deletions, this function should return files that are
// guaranteed not to be deleted until deletions are re-enabled. We need to
// wait for pending purges to finish since WalManager doesn't know which
// files are going to be purged. Additional purges won't be scheduled as
// long as deletions are disabled (so the below loop must terminate).
// Also note that we disable deletions anyway to avoid the case where a
// file is deleted in the middle of the scan, causing IO error.
Status deletions_disabled = DisableFileDeletions();
{
InstrumentedMutexLock l(&mutex_);
while (disable_delete_obsolete_files_ > 0 &&
(pending_purge_obsolete_files_ > 0 || bg_purge_scheduled_ > 0)) {
while (pending_purge_obsolete_files_ > 0 || bg_purge_scheduled_ > 0) {
bg_cv_.Wait();
}
}
// Disable deletion in order to avoid the case where a file is deleted in
// the middle of the process so IO error is returned.
Status s = DisableFileDeletions();
bool file_deletion_supported = !s.IsNotSupported();
if (s.ok() || !file_deletion_supported) {
s = wal_manager_.GetSortedWalFiles(files);
if (file_deletion_supported) {
Status s2 = EnableFileDeletions(false);
if (!s2.ok() && s.ok()) {
s = s2;
}
}
Status s = wal_manager_.GetSortedWalFiles(files);
// DisableFileDeletions / EnableFileDeletions not supported in read-only DB
if (deletions_disabled.ok()) {
Status s2 = EnableFileDeletions(/*force*/ false);
assert(s2.ok());
s2.PermitUncheckedError();
} else {
assert(deletions_disabled.IsNotSupported());
}
return s;


@ -1546,6 +1546,8 @@ void DBImpl::BackgroundCallPurge() {
mutex_.Lock();
}
assert(bg_purge_scheduled_ > 0);
// Can't use iterator to go over purge_files_ because inside the loop we're
// unlocking the mutex that protects purge_files_.
while (!purge_files_.empty()) {
@ -1613,17 +1615,7 @@ static void CleanupIteratorState(void* arg1, void* /*arg2*/) {
delete state->super_version;
}
if (job_context.HaveSomethingToDelete()) {
if (state->background_purge) {
// PurgeObsoleteFiles here does not delete files. Instead, it adds the
// files to be deleted to a job queue, and deletes it in a separate
// background thread.
state->db->PurgeObsoleteFiles(job_context, true /* schedule only */);
state->mu->Lock();
state->db->SchedulePurge();
state->mu->Unlock();
} else {
state->db->PurgeObsoleteFiles(job_context);
}
state->db->PurgeObsoleteFiles(job_context, state->background_purge);
}
job_context.Clean();
}


@ -56,6 +56,8 @@ Status DBImpl::DisableFileDeletions() {
return s;
}
// FIXME: can be inconsistent with DisableFileDeletions in cases like
// DBImplReadOnly
Status DBImpl::DisableFileDeletionsWithLock() {
mutex_.AssertHeld();
++disable_delete_obsolete_files_;
@ -642,6 +644,11 @@ void DBImpl::PurgeObsoleteFiles(JobContext& state, bool schedule_only) {
InstrumentedMutexLock l(&mutex_);
--pending_purge_obsolete_files_;
assert(pending_purge_obsolete_files_ >= 0);
if (schedule_only) {
// Must change from pending_purge_obsolete_files_ to bg_purge_scheduled_
// while holding mutex (for GetSortedWalFiles() etc.)
SchedulePurge();
}
if (pending_purge_obsolete_files_ == 0) {
bg_cv_.SignalAll();
}
@ -657,11 +664,6 @@ void DBImpl::DeleteObsoleteFiles() {
if (job_context.HaveSomethingToDelete()) {
bool defer_purge = immutable_db_options_.avoid_unnecessary_blocking_io;
PurgeObsoleteFiles(job_context, defer_purge);
if (defer_purge) {
mutex_.Lock();
SchedulePurge();
mutex_.Unlock();
}
}
job_context.Clean();
mutex_.Lock();


@ -266,6 +266,50 @@ TEST_F(DBOptionsTest, SetMutableTableOptions) {
ASSERT_EQ(c_bbto->block_restart_interval, 13);
}
TEST_F(DBOptionsTest, SetWithCustomMemTableFactory) {
class DummySkipListFactory : public SkipListFactory {
public:
static const char* kClassName() { return "DummySkipListFactory"; }
const char* Name() const override { return kClassName(); }
explicit DummySkipListFactory() : SkipListFactory(2) {}
};
{
// Verify the DummySkipList cannot be created
ConfigOptions config_options;
config_options.ignore_unsupported_options = false;
std::unique_ptr<MemTableRepFactory> factory;
ASSERT_NOK(MemTableRepFactory::CreateFromString(
config_options, DummySkipListFactory::kClassName(), &factory));
}
Options options;
options.create_if_missing = true;
// Try with fail_if_options_file_error=false/true to update the options
for (bool on_error : {false, true}) {
options.fail_if_options_file_error = on_error;
options.env = env_;
options.disable_auto_compactions = false;
options.memtable_factory.reset(new DummySkipListFactory());
Reopen(options);
ColumnFamilyHandle* cfh = dbfull()->DefaultColumnFamily();
ASSERT_OK(
dbfull()->SetOptions(cfh, {{"disable_auto_compactions", "true"}}));
ColumnFamilyDescriptor cfd;
ASSERT_OK(cfh->GetDescriptor(&cfd));
ASSERT_STREQ(cfd.options.memtable_factory->Name(),
DummySkipListFactory::kClassName());
ColumnFamilyHandle* test = nullptr;
ASSERT_OK(dbfull()->CreateColumnFamily(options, "test", &test));
ASSERT_OK(test->GetDescriptor(&cfd));
ASSERT_STREQ(cfd.options.memtable_factory->Name(),
DummySkipListFactory::kClassName());
ASSERT_OK(dbfull()->DropColumnFamily(test));
delete test;
}
}
TEST_F(DBOptionsTest, SetBytesPerSync) {
const size_t kValueSize = 1024 * 1024; // 1MB
Options options;


@ -269,6 +269,8 @@ DECLARE_uint64(user_timestamp_size);
DECLARE_string(secondary_cache_uri);
DECLARE_int32(secondary_cache_fault_one_in);
DECLARE_int32(prepopulate_block_cache);
constexpr long KB = 1024;
constexpr int kRandomValueMaxFactor = 3;
constexpr int kValueMaxLen = 100;


@ -860,5 +860,10 @@ DEFINE_int32(injest_error_severity, 1,
"The severity of the injested IO Error. 1 is soft error (e.g. "
"retryable error), 2 is fatal error, and the default is "
"retryable error.");
DEFINE_int32(prepopulate_block_cache,
static_cast<int32_t>(ROCKSDB_NAMESPACE::BlockBasedTableOptions::
PrepopulateBlockCache::kDisable),
"Options related to cache warming (see `enum "
"PrepopulateBlockCache` in table.h)");
#endif // GFLAGS


@ -2226,6 +2226,9 @@ void StressTest::Open() {
FLAGS_optimize_filters_for_memory;
block_based_options.index_type =
static_cast<BlockBasedTableOptions::IndexType>(FLAGS_index_type);
block_based_options.prepopulate_block_cache =
static_cast<BlockBasedTableOptions::PrepopulateBlockCache>(
FLAGS_prepopulate_block_cache);
options_.table_factory.reset(
NewBlockBasedTableFactory(block_based_options));
options_.db_write_buffer_size = FLAGS_db_write_buffer_size;

env/io_posix.cc (vendored)

@ -1543,7 +1543,8 @@ PosixDirectory::PosixDirectory(int fd) : fd_(fd) {
#ifdef OS_LINUX
struct statfs buf;
int ret = fstatfs(fd, &buf);
is_btrfs_ = (ret == 0 && buf.f_type == BTRFS_SUPER_MAGIC);
is_btrfs_ = (ret == 0 && buf.f_type == static_cast<decltype(buf.f_type)>(
BTRFS_SUPER_MAGIC));
#endif
}


@ -123,6 +123,8 @@ bool FilePrefetchBuffer::TryReadFromCache(const IOOptions& opts,
// If readahead is enabled: prefetch the remaining bytes + readahead bytes
// and satisfy the request.
// If readahead is not enabled: return false.
TEST_SYNC_POINT_CALLBACK("FilePrefetchBuffer::TryReadFromCache",
&readahead_size_);
if (offset + n > buffer_offset_ + buffer_.CurrentSize()) {
if (readahead_size_ > 0) {
assert(reader != nullptr);
@ -161,8 +163,6 @@ bool FilePrefetchBuffer::TryReadFromCache(const IOOptions& opts,
#endif
return false;
}
TEST_SYNC_POINT_CALLBACK("FilePrefetchBuffer::TryReadFromCache",
&readahead_size_);
readahead_size_ = std::min(max_readahead_size_, readahead_size_ * 2);
} else {
return false;


@ -30,6 +30,8 @@ class RandomAccessFileReader;
class FilePrefetchBuffer {
public:
static const int kMinNumFileReadsToStartAutoReadahead = 2;
static const size_t kInitAutoReadaheadSize = 8 * 1024;
// Constructor.
//
// All arguments are optional.
@ -57,7 +59,6 @@ class FilePrefetchBuffer {
: buffer_offset_(0),
readahead_size_(readahead_size),
max_readahead_size_(max_readahead_size),
initial_readahead_size_(readahead_size),
min_offset_read_(port::kMaxSizet),
enable_(enable),
track_min_offset_(track_min_offset),
@ -95,6 +96,7 @@ class FilePrefetchBuffer {
// tracked if track_min_offset = true.
size_t min_offset_read() const { return min_offset_read_; }
// Called in case of implicit auto prefetching.
void UpdateReadPattern(const uint64_t& offset, const size_t& len,
bool is_adaptive_readahead = false) {
if (is_adaptive_readahead) {
@ -111,9 +113,10 @@ class FilePrefetchBuffer {
return (prev_len_ == 0 || (prev_offset_ + prev_len_ == offset));
}
// Called in case of implicit auto prefetching.
void ResetValues() {
num_file_reads_ = 1;
readahead_size_ = initial_readahead_size_;
readahead_size_ = kInitAutoReadaheadSize;
}
void GetReadaheadState(ReadaheadFileInfo::ReadaheadInfo* readahead_info) {
@ -136,8 +139,9 @@ class FilePrefetchBuffer {
if ((offset + size > buffer_offset_ + buffer_.CurrentSize()) &&
IsBlockSequential(offset) &&
(num_file_reads_ + 1 > kMinNumFileReadsToStartAutoReadahead)) {
size_t initial_auto_readahead_size = kInitAutoReadaheadSize;
readahead_size_ =
std::max(initial_readahead_size_,
std::max(initial_auto_readahead_size,
(readahead_size_ >= value ? readahead_size_ - value : 0));
}
}
@ -150,7 +154,6 @@ class FilePrefetchBuffer {
// FilePrefetchBuffer object won't be created from Iterator flow if
// max_readahead_size_ = 0.
size_t max_readahead_size_;
size_t initial_readahead_size_;
// The minimum `offset` ever passed to TryReadFromCache().
size_t min_offset_read_;
// if false, TryReadFromCache() always return false, and we only take stats


@ -670,13 +670,16 @@ TEST_P(PrefetchTest, PrefetchWhenReseekwithCache) {
Close();
}
class PrefetchTest1 : public DBTestBase,
public ::testing::WithParamInterface<bool> {
class PrefetchTest1
: public DBTestBase,
public ::testing::WithParamInterface<std::tuple<bool, bool>> {
public:
PrefetchTest1() : DBTestBase("prefetch_test1", true) {}
};
INSTANTIATE_TEST_CASE_P(PrefetchTest1, PrefetchTest1, ::testing::Bool());
INSTANTIATE_TEST_CASE_P(PrefetchTest1, PrefetchTest1,
::testing::Combine(::testing::Bool(),
::testing::Bool()));
#ifndef ROCKSDB_LITE
TEST_P(PrefetchTest1, DBIterLevelReadAhead) {
@ -686,12 +689,13 @@ TEST_P(PrefetchTest1, DBIterLevelReadAhead) {
std::make_shared<MockFS>(env_->GetFileSystem(), false);
std::unique_ptr<Env> env(new CompositeEnvWrapper(env_, fs));
bool is_adaptive_readahead = std::get<1>(GetParam());
Options options = CurrentOptions();
options.write_buffer_size = 1024;
options.create_if_missing = true;
options.compression = kNoCompression;
options.env = env.get();
if (GetParam()) {
if (std::get<0>(GetParam())) {
options.use_direct_reads = true;
options.use_direct_io_for_flush_and_compaction = true;
}
@ -704,7 +708,8 @@ TEST_P(PrefetchTest1, DBIterLevelReadAhead) {
options.table_factory.reset(NewBlockBasedTableFactory(table_options));
Status s = TryReopen(options);
if (GetParam() && (s.IsNotSupported() || s.IsInvalidArgument())) {
if (std::get<0>(GetParam()) &&
(s.IsNotSupported() || s.IsInvalidArgument())) {
// If direct IO is not supported, skip the test
return;
} else {
@ -748,12 +753,15 @@ TEST_P(PrefetchTest1, DBIterLevelReadAhead) {
SyncPoint::GetInstance()->SetCallBack(
"FilePrefetchBuffer::TryReadFromCache", [&](void* arg) {
current_readahead_size = *reinterpret_cast<size_t*>(arg);
ASSERT_GT(current_readahead_size, 0);
});
SyncPoint::GetInstance()->EnableProcessing();
ReadOptions ro;
if (is_adaptive_readahead) {
ro.adaptive_readahead = true;
}
auto iter = std::unique_ptr<Iterator>(db_->NewIterator(ro));
int num_keys = 0;
for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
@ -763,14 +771,28 @@ TEST_P(PrefetchTest1, DBIterLevelReadAhead) {
ASSERT_GT(buff_prefetch_count, 0);
buff_prefetch_count = 0;
// For index and data blocks.
if (is_adaptive_readahead) {
ASSERT_EQ(readahead_carry_over_count, 2 * (num_sst_files - 1));
} else {
ASSERT_EQ(readahead_carry_over_count, 0);
}
SyncPoint::GetInstance()->DisableProcessing();
SyncPoint::GetInstance()->ClearAllCallBacks();
}
Close();
}
#endif //! ROCKSDB_LITE
TEST_P(PrefetchTest1, NonSequentialReads) {
class PrefetchTest2 : public DBTestBase,
public ::testing::WithParamInterface<bool> {
public:
PrefetchTest2() : DBTestBase("prefetch_test2", true) {}
};
INSTANTIATE_TEST_CASE_P(PrefetchTest2, PrefetchTest2, ::testing::Bool());
#ifndef ROCKSDB_LITE
TEST_P(PrefetchTest2, NonSequentialReads) {
const int kNumKeys = 1000;
// Set options
std::shared_ptr<MockFS> fs =
@ -856,7 +878,7 @@ TEST_P(PrefetchTest1, NonSequentialReads) {
}
#endif //! ROCKSDB_LITE
TEST_P(PrefetchTest1, DecreaseReadAheadIfInCache) {
TEST_P(PrefetchTest2, DecreaseReadAheadIfInCache) {
const int kNumKeys = 2000;
// Set options
std::shared_ptr<MockFS> fs =


@ -101,6 +101,20 @@ class Customizable : public Configurable {
}
}
const void* GetOptionsPtr(const std::string& name) const override {
const void* ptr = Configurable::GetOptionsPtr(name);
if (ptr != nullptr) {
return ptr;
} else {
const auto inner = Inner();
if (inner != nullptr) {
return inner->GetOptionsPtr(name);
} else {
return nullptr;
}
}
}
// Returns the named instance of the Customizable as a T*, or nullptr if not
// found. This method uses IsInstanceOf/Inner to find the appropriate class
// instance and then casts it to the expected return type.


@ -467,7 +467,6 @@ struct IOErrorInfo {
std::string file_path;
size_t length;
uint64_t offset;
;
};
// EventListener class contains a set of callback functions that will


@ -11,7 +11,7 @@
#define ROCKSDB_MAJOR 6
#define ROCKSDB_MINOR 27
#define ROCKSDB_PATCH 0
#define ROCKSDB_PATCH 3
// Do not use these. We made the mistake of declaring macros starting with
// double underscore. Now we have to live with our choice. We'll deprecate these


@ -597,7 +597,7 @@ static std::unordered_map<std::string, OptionTypeInfo>
static_cast<std::shared_ptr<MemTableRepFactory>*>(addr);
Status s =
MemTableRepFactory::CreateFromString(opts, value, &factory);
if (s.ok()) {
if (factory && s.ok()) {
shared->reset(factory.release());
}
return s;
@ -613,7 +613,7 @@ static std::unordered_map<std::string, OptionTypeInfo>
static_cast<std::shared_ptr<MemTableRepFactory>*>(addr);
Status s =
MemTableRepFactory::CreateFromString(opts, value, &factory);
if (s.ok()) {
if (factory && s.ok()) {
shared->reset(factory.release());
}
return s;


@ -43,6 +43,7 @@ Status Configurable::PrepareOptions(const ConfigOptions& opts) {
Status status = Status::OK();
#ifndef ROCKSDB_LITE
for (auto opt_iter : options_) {
if (opt_iter.type_map != nullptr) {
for (auto map_iter : *(opt_iter.type_map)) {
auto& opt_info = map_iter.second;
if (!opt_info.IsDeprecated() && !opt_info.IsAlias() &&
@ -52,12 +53,13 @@ Status Configurable::PrepareOptions(const ConfigOptions& opts) {
opt_info.AsRawPointer<Configurable>(opt_iter.opt_ptr);
if (config != nullptr) {
status = config->PrepareOptions(opts);
} else if (!opt_info.CanBeNull()) {
status = Status::NotFound("Missing configurable object",
map_iter.first);
}
if (!status.ok()) {
return status;
}
} else if (!opt_info.CanBeNull()) {
status =
Status::NotFound("Missing configurable object", map_iter.first);
}
}
}
@ -74,6 +76,7 @@ Status Configurable::ValidateOptions(const DBOptions& db_opts,
Status status;
#ifndef ROCKSDB_LITE
for (auto opt_iter : options_) {
if (opt_iter.type_map != nullptr) {
for (auto map_iter : *(opt_iter.type_map)) {
auto& opt_info = map_iter.second;
if (!opt_info.IsDeprecated() && !opt_info.IsAlias()) {
@ -83,8 +86,8 @@ Status Configurable::ValidateOptions(const DBOptions& db_opts,
if (config != nullptr) {
status = config->ValidateOptions(db_opts, cf_opts);
} else if (!opt_info.CanBeNull()) {
status =
Status::NotFound("Missing configurable object", map_iter.first);
status = Status::NotFound("Missing configurable object",
map_iter.first);
}
if (!status.ok()) {
return status;
@ -93,6 +96,7 @@ Status Configurable::ValidateOptions(const DBOptions& db_opts,
}
}
}
}
#else
(void)db_opts;
(void)cf_opts;
@ -124,6 +128,7 @@ const OptionTypeInfo* ConfigurableHelper::FindOption(
const std::vector<Configurable::RegisteredOptions>& options,
const std::string& short_name, std::string* opt_name, void** opt_ptr) {
for (auto iter : options) {
if (iter.type_map != nullptr) {
const auto opt_info =
OptionTypeInfo::Find(short_name, *(iter.type_map), opt_name);
if (opt_info != nullptr) {
@ -131,6 +136,7 @@ const OptionTypeInfo* ConfigurableHelper::FindOption(
return opt_info;
}
}
}
return nullptr;
}
#endif // ROCKSDB_LITE
@ -280,6 +286,7 @@ Status ConfigurableHelper::ConfigureOptions(
if (!opts_map.empty()) {
#ifndef ROCKSDB_LITE
for (const auto& iter : configurable.options_) {
if (iter.type_map != nullptr) {
s = ConfigureSomeOptions(config_options, configurable, *(iter.type_map),
&remaining, iter.opt_ptr);
if (remaining.empty()) { // Are there more options left?
@ -288,6 +295,7 @@ Status ConfigurableHelper::ConfigureOptions(
break;
}
}
}
#else
(void)configurable;
if (!config_options.ignore_unknown_options) {
@ -573,6 +581,7 @@ Status ConfigurableHelper::SerializeOptions(const ConfigOptions& config_options,
std::string* result) {
assert(result);
for (auto const& opt_iter : configurable.options_) {
if (opt_iter.type_map != nullptr) {
for (const auto& map_iter : *(opt_iter.type_map)) {
const auto& opt_name = map_iter.first;
const auto& opt_info = map_iter.second;
@ -607,6 +616,7 @@ Status ConfigurableHelper::SerializeOptions(const ConfigOptions& config_options,
}
}
}
}
return Status::OK();
}
#endif // ROCKSDB_LITE
@ -629,6 +639,7 @@ Status ConfigurableHelper::ListOptions(
const std::string& prefix, std::unordered_set<std::string>* result) {
Status status;
for (auto const& opt_iter : configurable.options_) {
if (opt_iter.type_map != nullptr) {
for (const auto& map_iter : *(opt_iter.type_map)) {
const auto& opt_name = map_iter.first;
const auto& opt_info = map_iter.second;
@ -643,6 +654,7 @@ Status ConfigurableHelper::ListOptions(
}
}
}
}
return status;
}
#endif // ROCKSDB_LITE
@ -702,7 +714,7 @@ bool ConfigurableHelper::AreEquivalent(const ConfigOptions& config_options,
if (this_offset != that_offset) {
if (this_offset == nullptr || that_offset == nullptr) {
return false;
} else {
} else if (o.type_map != nullptr) {
for (const auto& map_iter : *(o.type_map)) {
const auto& opt_info = map_iter.second;
if (config_options.IsCheckEnabled(opt_info.GetSanityLevel())) {


@ -656,6 +656,30 @@ TEST_F(ConfigurableTest, TestNoCompare) {
ASSERT_TRUE(base->AreEquivalent(config_options_, copy.get(), &mismatch));
ASSERT_FALSE(copy->AreEquivalent(config_options_, base.get(), &mismatch));
}
TEST_F(ConfigurableTest, NullOptionMapTest) {
std::unique_ptr<Configurable> base;
std::unordered_set<std::string> names;
std::string str;
base.reset(
SimpleConfigurable::Create("c", TestConfigMode::kDefaultMode, nullptr));
ASSERT_NOK(base->ConfigureFromString(config_options_, "int=10"));
ASSERT_NOK(base->ConfigureFromString(config_options_, "int=20"));
ASSERT_NOK(base->ConfigureOption(config_options_, "int", "20"));
ASSERT_NOK(base->GetOption(config_options_, "int", &str));
ASSERT_NE(base->GetOptions<TestOptions>("c"), nullptr);
ASSERT_OK(base->GetOptionNames(config_options_, &names));
ASSERT_EQ(names.size(), 0UL);
ASSERT_OK(base->PrepareOptions(config_options_));
ASSERT_OK(base->ValidateOptions(DBOptions(), ColumnFamilyOptions()));
std::unique_ptr<Configurable> copy;
copy.reset(
SimpleConfigurable::Create("c", TestConfigMode::kDefaultMode, nullptr));
ASSERT_OK(base->GetOptionString(config_options_, &str));
ASSERT_OK(copy->ConfigureFromString(config_options_, str));
ASSERT_TRUE(base->AreEquivalent(config_options_, copy.get(), &str));
}
#endif
static std::unordered_map<std::string, ConfigTestFactoryFunc> TestFactories = {


@ -612,11 +612,20 @@ static std::unordered_map<std::string, OptionTypeInfo> inner_option_info = {
#endif // ROCKSDB_LITE
};
struct InnerOptions {
static const char* kName() { return "InnerOptions"; }
std::shared_ptr<Customizable> inner;
};
class InnerCustomizable : public Customizable {
public:
explicit InnerCustomizable(const std::shared_ptr<Customizable>& w)
: inner_(w) {}
explicit InnerCustomizable(const std::shared_ptr<Customizable>& w) {
iopts_.inner = w;
RegisterOptions(&iopts_, &inner_option_info);
}
static const char* kClassName() { return "Inner"; }
const char* Name() const override { return kClassName(); }
bool IsInstanceOf(const std::string& name) const override {
if (name == kClassName()) {
return true;
@ -626,26 +635,51 @@ class InnerCustomizable : public Customizable {
}
protected:
const Customizable* Inner() const override { return inner_.get(); }
const Customizable* Inner() const override { return iopts_.inner.get(); }
private:
std::shared_ptr<Customizable> inner_;
InnerOptions iopts_;
};
struct WrappedOptions1 {
static const char* kName() { return "WrappedOptions1"; }
int i = 42;
};
class WrappedCustomizable1 : public InnerCustomizable {
public:
explicit WrappedCustomizable1(const std::shared_ptr<Customizable>& w)
: InnerCustomizable(w) {}
: InnerCustomizable(w) {
RegisterOptions(&wopts_, nullptr);
}
const char* Name() const override { return kClassName(); }
static const char* kClassName() { return "Wrapped1"; }
private:
WrappedOptions1 wopts_;
};
struct WrappedOptions2 {
static const char* kName() { return "WrappedOptions2"; }
std::string s = "42";
};
class WrappedCustomizable2 : public InnerCustomizable {
public:
explicit WrappedCustomizable2(const std::shared_ptr<Customizable>& w)
: InnerCustomizable(w) {}
const void* GetOptionsPtr(const std::string& name) const override {
if (name == WrappedOptions2::kName()) {
return &wopts_;
} else {
return InnerCustomizable::GetOptionsPtr(name);
}
}
const char* Name() const override { return kClassName(); }
static const char* kClassName() { return "Wrapped2"; }
private:
WrappedOptions2 wopts_;
};
} // namespace
@ -677,6 +711,29 @@ TEST_F(CustomizableTest, WrappedInnerTest) {
ASSERT_EQ(wc2->CheckedCast<TestCustomizable>(), ac.get());
}
TEST_F(CustomizableTest, CustomizableInnerTest) {
std::shared_ptr<Customizable> c =
std::make_shared<InnerCustomizable>(std::make_shared<ACustomizable>("a"));
std::shared_ptr<Customizable> wc1 = std::make_shared<WrappedCustomizable1>(c);
std::shared_ptr<Customizable> wc2 = std::make_shared<WrappedCustomizable2>(c);
auto inner = c->GetOptions<InnerOptions>();
ASSERT_NE(inner, nullptr);
auto aopts = c->GetOptions<AOptions>();
ASSERT_NE(aopts, nullptr);
ASSERT_EQ(aopts, wc1->GetOptions<AOptions>());
ASSERT_EQ(aopts, wc2->GetOptions<AOptions>());
auto w1opts = wc1->GetOptions<WrappedOptions1>();
ASSERT_NE(w1opts, nullptr);
ASSERT_EQ(c->GetOptions<WrappedOptions1>(), nullptr);
ASSERT_EQ(wc2->GetOptions<WrappedOptions1>(), nullptr);
auto w2opts = wc2->GetOptions<WrappedOptions2>();
ASSERT_NE(w2opts, nullptr);
ASSERT_EQ(c->GetOptions<WrappedOptions2>(), nullptr);
ASSERT_EQ(wc1->GetOptions<WrappedOptions2>(), nullptr);
}
TEST_F(CustomizableTest, CopyObjectTest) {
class CopyCustomizable : public Customizable {
public:
@ -714,20 +771,9 @@ TEST_F(CustomizableTest, CopyObjectTest) {
}
TEST_F(CustomizableTest, TestStringDepth) {
class ShallowCustomizable : public Customizable {
public:
ShallowCustomizable() {
inner_ = std::make_shared<ACustomizable>("a");
RegisterOptions("inner", &inner_, &inner_option_info);
}
static const char* kClassName() { return "shallow"; }
const char* Name() const override { return kClassName(); }
private:
std::shared_ptr<TestCustomizable> inner_;
};
ConfigOptions shallow = config_options_;
std::unique_ptr<Configurable> c(new ShallowCustomizable());
std::unique_ptr<Configurable> c(
new InnerCustomizable(std::make_shared<ACustomizable>("a")));
std::string opt_str;
shallow.depth = ConfigOptions::Depth::kDepthShallow;
ASSERT_OK(c->GetOptionString(shallow, &opt_str));


@ -1221,7 +1221,8 @@ void BlockBasedTableBuilder::WriteRawBlock(const Slice& block_contents,
CompressionType type,
BlockHandle* handle,
BlockType block_type,
const Slice* raw_block_contents) {
const Slice* raw_block_contents,
bool is_top_level_filter_block) {
Rep* r = rep_;
bool is_data_block = block_type == BlockType::kData;
Status s = Status::OK();
@ -1262,9 +1263,11 @@ void BlockBasedTableBuilder::WriteRawBlock(const Slice& block_contents,
}
if (warm_cache) {
if (type == kNoCompression) {
s = InsertBlockInCacheHelper(block_contents, handle, block_type);
s = InsertBlockInCacheHelper(block_contents, handle, block_type,
is_top_level_filter_block);
} else if (raw_block_contents != nullptr) {
s = InsertBlockInCacheHelper(*raw_block_contents, handle, block_type);
s = InsertBlockInCacheHelper(*raw_block_contents, handle, block_type,
is_top_level_filter_block);
}
if (!s.ok()) {
r->SetStatus(s);
@ -1472,12 +1475,12 @@ Status BlockBasedTableBuilder::InsertBlockInCompressedCache(
Status BlockBasedTableBuilder::InsertBlockInCacheHelper(
const Slice& block_contents, const BlockHandle* handle,
BlockType block_type) {
BlockType block_type, bool is_top_level_filter_block) {
Status s;
if (block_type == BlockType::kData || block_type == BlockType::kIndex) {
s = InsertBlockInCache<Block>(block_contents, handle, block_type);
} else if (block_type == BlockType::kFilter) {
if (rep_->filter_builder->IsBlockBased()) {
if (rep_->filter_builder->IsBlockBased() || is_top_level_filter_block) {
s = InsertBlockInCache<Block>(block_contents, handle, block_type);
} else {
s = InsertBlockInCache<ParsedFullFilterBlock>(block_contents, handle,
@ -1564,8 +1567,14 @@ void BlockBasedTableBuilder::WriteFilterBlock(
rep_->filter_builder->Finish(filter_block_handle, &s, &filter_data);
assert(s.ok() || s.IsIncomplete());
rep_->props.filter_size += filter_content.size();
bool top_level_filter_block = false;
if (s.ok() && rep_->table_options.partition_filters &&
!rep_->filter_builder->IsBlockBased()) {
top_level_filter_block = true;
}
WriteRawBlock(filter_content, kNoCompression, &filter_block_handle,
BlockType::kFilter);
BlockType::kFilter, nullptr /*raw_contents*/,
top_level_filter_block);
}
rep_->filter_builder->ResetFilterBitsBuilder();
}


@ -119,7 +119,8 @@ class BlockBasedTableBuilder : public TableBuilder {
BlockType block_type);
// Directly write data to the file.
void WriteRawBlock(const Slice& data, CompressionType, BlockHandle* handle,
BlockType block_type, const Slice* raw_data = nullptr);
BlockType block_type, const Slice* raw_data = nullptr,
bool is_top_level_filter_block = false);
void SetupCacheKeyPrefix(const TableBuilderOptions& tbo);
@ -129,7 +130,8 @@ class BlockBasedTableBuilder : public TableBuilder {
Status InsertBlockInCacheHelper(const Slice& block_contents,
const BlockHandle* handle,
BlockType block_type);
BlockType block_type,
bool is_top_level_filter_block);
Status InsertBlockInCompressedCache(const Slice& block_contents,
const CompressionType type,


@ -161,12 +161,14 @@ class BlockBasedTableIterator : public InternalIteratorBase<Slice> {
}
void SetReadaheadState(ReadaheadFileInfo* readahead_file_info) override {
if (read_options_.adaptive_readahead) {
block_prefetcher_.SetReadaheadState(
&(readahead_file_info->data_block_readahead_info));
if (index_iter_) {
index_iter_->SetReadaheadState(readahead_file_info);
}
}
}
std::unique_ptr<InternalIteratorBase<IndexValue>> index_iter_;


@ -30,8 +30,11 @@ class BlockPrefetcher {
void ResetValues() {
num_file_reads_ = 1;
readahead_size_ = BlockBasedTable::kInitAutoReadaheadSize;
initial_auto_readahead_size_ = readahead_size_;
// Since initial_auto_readahead_size_ can differ from
// kInitAutoReadaheadSize when adaptive_readahead is used, fall
// readahead_size_ back to kInitAutoReadaheadSize on reset.
initial_auto_readahead_size_ = BlockBasedTable::kInitAutoReadaheadSize;
readahead_size_ = initial_auto_readahead_size_;
readahead_limit_ = 0;
return;
}


@ -123,9 +123,11 @@ class PartitionedIndexIterator : public InternalIteratorBase<IndexValue> {
}
void SetReadaheadState(ReadaheadFileInfo* readahead_file_info) override {
if (read_options_.adaptive_readahead) {
block_prefetcher_.SetReadaheadState(
&(readahead_file_info->index_block_readahead_info));
}
}
std::unique_ptr<InternalIteratorBase<IndexValue>> index_iter_;


@ -154,6 +154,7 @@ default_params = {
[0, 1024 * 1024, 2 * 1024 * 1024, 4 * 1024 * 1024, 8 * 1024 * 1024]),
"user_timestamp_size": 0,
"secondary_cache_fault_one_in" : lambda: random.choice([0, 0, 32]),
"prepopulate_block_cache" : lambda: random.choice([0, 1]),
}
_TEST_DIR_ENV_VAR = 'TEST_TMPDIR'