Summary: D36669 introduces a bug: trivially moved data does not go to the specified target level but to the next level, which will incorrectly be level 1 for a level-0 compaction if the base level is not level 1. Fix it by respecting the output level.
Test Plan: Run all tests
Reviewers: MarkCallaghan, rven, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D37119
Summary:
A recent change to DBTest.DynamicLevelCompressionPerLevel2 has a bug: the second sync point is not enabled. Fix it, and add an assert for it.
Also, flush compression is not tracked in the test. Add it.
Test Plan: Build everything
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D37101
Summary: When committing the sync point interface change, I didn't resolve a new occurrence of the old interface during rebase. Fix it.
Test Plan: Build and see it pass
Reviewers: igor, yhchiang, rven, anthony, kradhakrishnan
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D37095
Summary:
Allow users to register a callback function that takes a parameter with a sync point, so more complicated verification can be done in tests.
Use it in DBTest.DynamicLevelCompressionPerLevel2 so that failures are easier to debug.
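A minimal sketch of the intended usage in a test (the include path and the sync point name here are illustrative assumptions, not taken from this diff):
  #include "util/sync_point.h"   // assumed location of the SyncPoint helpers

  // Inside a test body:
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "SomeComponent::SomeSyncPoint",  // hypothetical sync point name
      [&](void* arg) {
        // arg is whatever the code under test passes at the sync point;
        // cast it to the expected type and verify it.
        int* value = reinterpret_cast<int*>(arg);
        ASSERT_NE(value, nullptr);
      });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();
  // ... run the workload under test ...
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();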
Test Plan: Run all tests. Run DBTest.DynamicLevelCompressionPerLevel2 with valgrind check.
Reviewers: rven, yhchiang, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36999
Summary: https://reviews.facebook.net/D36963 made the debug build much faster, and that triggered failures of the CompactFilesOnLevelCompaction test. 3 out of the last 4 Jenkins runs failed. I'm disabling this test temporarily, since we likely know why it's failing and there's already work in progress to address it -- https://reviews.facebook.net/D36225
Test Plan: none
Reviewers: sdong, rven, yhchiang, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36993
On CentOS 6, you need to explicitly include linux/falloc.h, which is where
the FALLOC_FL_* flags are defined. Otherwise, the fallocate() support test
defined in build_detect_platform will fail.
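For reference, a minimal sketch of the kind of probe involved (the real check lives in build_detect_platform and may differ in detail):
  // fallocate_probe.cc -- compiles only when fallocate() and the
  // FALLOC_FL_* flags are available.
  #include <fcntl.h>
  #include <linux/falloc.h>   // required on CentOS 6 for FALLOC_FL_*

  int main() {
    int fd = open("/dev/null", O_RDONLY);
    fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1024);
    return 0;
  }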
Signed-off-by: Pooya Shareghi <shareghi@gmail.com>
Summary:
This changes loads to use a vector memtable and disable the WAL. This also
increases the chance we will see IO bottlenecks during loads, which is good for stress
testing HW. But I also think it is a good way to load data quickly, as this is a bulk
operation and the WAL isn't needed.
The two numbers per line below are the MB/sec rates for fillseq and bulkload, using a skiplist
or vector memtable with the WAL enabled or disabled. There is a big benefit from
using the vector memtable with the WAL disabled. Alas, there is also a perf bug in
the use of std::sort for ordered input when the vector is flushed. A task is open
for that.
112, 66 - skiplist with wal
250, 116 - skiplist without wal
110, 108 - vector with wal
232, 370 - vector without wal
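For reference, the equivalent setup through the C++ API looks roughly like this (a sketch; the actual change is in the benchmark tooling, and the DB path below is made up):
  #include <cassert>
  #include "rocksdb/db.h"
  #include "rocksdb/memtablerep.h"
  #include "rocksdb/options.h"

  void BulkLoadExample() {
    rocksdb::Options options;
    options.create_if_missing = true;
    // Vector memtable: cheap unsorted appends, sorted once when flushed.
    options.memtable_factory.reset(new rocksdb::VectorRepFactory());

    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/bulkload_db", &db);
    assert(s.ok());

    rocksdb::WriteOptions wo;
    wo.disableWAL = true;   // skip the WAL during the bulk load
    s = db->Put(wo, "key", "value");
    assert(s.ok());
    delete db;
  }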
Test Plan:
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36957
Summary: We should use a mocked-out env for these tests to make them more reliable. An added benefit is that instead of actually sleeping for 3 seconds, we can pretend to sleep and just increase time counters.
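The idea, as a sketch (hypothetical wrapper name; the test uses its own mocked env):
  #include <atomic>
  #include "rocksdb/env.h"

  // Env wrapper that fakes the clock: "sleeping" just advances a counter,
  // so TTL-based tests don't have to wait in real time.
  class FakeSleepEnv : public rocksdb::EnvWrapper {
   public:
    explicit FakeSleepEnv(rocksdb::Env* base) : rocksdb::EnvWrapper(base) {}
    void SleepForMicroseconds(int micros) override {
      fake_now_micros_ += micros;
    }
    uint64_t NowMicros() override { return fake_now_micros_; }

   private:
    std::atomic<uint64_t> fake_now_micros_{0};
  };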
Test Plan: for i in `seq 100`; do ./wal_manager_test --gtest_filter=WalManagerTest.WALArchivalTtl ;done
Reviewers: rven, meyering
Reviewed By: meyering
Subscribers: meyering, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36951
Summary: As title. For every operation we're asserting Valid(), which sorts the data. That's pretty terrible. We have to be careful to have decent performance even with DEBUG builds.
Test Plan: make check
Reviewers: sdong, rven, yhchiang, MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36969
Summary: Make target for running all xfunc tests
Test Plan: make xfunc
Reviewers: igor, sdong, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36873
Summary:
The problem is that sometimes two memtables will be compacted together into a single file. In that case, our assertion
ASSERT_EQ(NumTableFilesAtLevel(0), 5);
fails because the same amount of data ends up in 4 files instead of 5. We should wait for each flush so that two memtables are never merged into a single file.
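In DBTest terms the fix is along these lines (a sketch, assuming the usual test helpers; sizes are illustrative):
  std::string value(10 << 10, 'x');
  for (int j = 0; j < 110; ++j) {
    ASSERT_OK(Put(std::to_string(j), value));
  }
  // Wait for the automatic flush to finish before filling the next memtable,
  // so two memtables are never flushed together into a single L0 file.
  ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());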
Test Plan: `for i in `seq 20`; do mrtest FIFOCompactionTest; done` -- fails at least once before the fix, zero times after.
Reviewers: rven
Reviewed By: rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36939
Summary:
1. it doesn't work
2. we're not using it
In the future, if we need a general benchmark framework, we should probably use https://github.com/google/benchmark
Test Plan: make all
Reviewers: yhchiang, rven, anthony, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36777
Summary:
The goal of this diff is to make Compaction class easier to use. This should also make new compaction algorithms easier to write (like CompactFiles from @yhchiang and dynamic leveled and multi-leveled universal from @sdong).
Here are a couple of things demonstrating that the Compaction class is hard to use:
1. we have two constructors of Compaction class
2. there's this thing called grandparents_, but it appears to only be set up for leveled compaction and not for CompactFiles
3. it's easy to introduce a subtle and dangerous bug like this: D36225
4. SetupBottomMostLevel() is hard to understand and it shouldn't be. See this comment: afbafeaeae/db/compaction.cc (L236-L241). It also made it harder for @yhchiang to write CompactFiles, as evidenced by this: afbafeaeae/db/compaction_picker.cc (L204-L210)
The problem is that we create a Compaction object, which holds a lot of state, and then pass it around to some functions. After those functions are done mutating it, we call a couple of functions on the Compaction object, like SetupBottommostLevel() and MarkFilesBeingCompacted(). It is very hard to see what's happening to all of the Compaction's state while it's travelling across different functions. If you're writing a new PickCompaction() function, you need to try really hard to understand which functions you need to run on the Compaction object and what state you need to set up.
My proposed solution is to make the important parts of Compaction immutable after construction. PickCompaction() should calculate the compaction inputs and then pass them to the Compaction object once they are finalized. That makes it easy to create a new compaction -- just provide all the parameters to the constructor and you're done. No need to call confusing functions after you've created your object (see the sketch after the change list below).
This diff doesn't fully achieve that goal, but it comes pretty close. Here are some of the changes:
* have one Compaction constructor instead of two.
* inputs_ is constant after construction
* MarkFilesBeingCompacted() is now private to Compaction class and automatically called on construction/destruction.
* SetupBottommostLevel() is gone. Compaction figures it out on its own based on the input.
* CompactionPicker's functions are not passing around Compaction object anymore. They are only passing around the state that they need.
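The general pattern being aimed for, in a deliberately simplified form (hypothetical types below, not the real Compaction interface):
  #include <utility>
  #include <vector>

  struct FileMetaDataSketch { bool being_compacted = false; };

  // Simplified stand-in: all inputs are supplied to the constructor and are
  // immutable afterwards; bookkeeping that used to require explicit calls
  // (MarkFilesBeingCompacted) happens in the constructor/destructor.
  class CompactionSketch {
   public:
    CompactionSketch(std::vector<FileMetaDataSketch*> inputs, int output_level)
        : inputs_(std::move(inputs)), output_level_(output_level) {
      for (auto* f : inputs_) f->being_compacted = true;
    }
    ~CompactionSketch() {
      for (auto* f : inputs_) f->being_compacted = false;
    }
    int output_level() const { return output_level_; }

   private:
    const std::vector<FileMetaDataSketch*> inputs_;  // fixed at construction
    const int output_level_;
  };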
Test Plan:
make check
make asan_check
make valgrind_check
Reviewers: rven, anthony, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: sdong, yhchiang, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36687
Summary: Need to remember to unref MemTableList->current() before deleting.
Test Plan: ran test with valgrind
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36855
Summary:
The test was failing due to a missing directory, caused by a simple bug (I did not run into this on my dev box since the path already existed).
We should look into deleting test::TmpDir() before each test run.
Test Plan: ran test
Reviewers: igor, yhchiang, meyering, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36831
Summary:
Fixed xfunc-related compile errors in ROCKSDB_LITE
Now make OPT=-DROCKSDB_LITE shared_lib -j32 works
Test Plan:
make clean
make OPT=-DROCKSDB_LITE shared_lib -j32
make clean
make OPT=-DROCKSDB_LITE static_lib -j32
Reviewers: sdong, igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36825
Summary: Add tests for MemTableList
Test Plan: run test
Reviewers: yhchiang, kradhakrishnan, sdong, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36735
Summary:
Fix a compile error in ROCKSDB_LITE in db/db_impl.cc
related to internal_stats.
Test Plan: make OPT=-DROCKSDB_LITE shared_lib
Reviewers: sdong, igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36819
Summary: "make commit-prereq" uses "make release" which overrides OPT, so ROCKSDB_LITE is not covered. Fix it by using "make static_lib"
Test Plan: Run it and see it fail (which is expected)
Reviewers: yhchiang, meyering, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36813
Summary:
Fix a compilation error in ROCKSDB_LITE in db/internal_stats.h
Other compilation errors will be fixed in a separate diff.
Test Plan: make OPT=-DROCKSDB_LITE
Reviewers: sdong, igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36807
Summary:
Add a test case:
Write some keys without sync, flush, then write other keys and sync. Before the flush finishes, the host crashes and the unsynced data is dropped.
Tag the new test as disabled since it is not passing.
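The scenario, sketched against the public API (simplified; the real test also needs a way to simulate the crash and drop the unsynced writes, e.g. a fault-injection env):
  rocksdb::WriteOptions nosync;
  nosync.sync = false;
  ASSERT_OK(db->Put(nosync, "k1", "v1"));   // written but not synced

  rocksdb::FlushOptions fo;
  fo.wait = false;
  ASSERT_OK(db->Flush(fo));                 // flush starts in the background

  rocksdb::WriteOptions synced;
  synced.sync = true;
  ASSERT_OK(db->Put(synced, "k2", "v2"));   // synced write

  // Simulated crash here, before the flush finishes: unsynced data is dropped.
  // After reopening, "k2" must still be readable even though "k1" was lost.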
Test Plan: Run the test
Reviewers: MarkCallaghan, rven, anthony, igor, kradhakrishnan
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36741
Summary:
This fixes two problems:
1) the env should not be created twice when use_existing_db is false
2) the env dtor should run before cachedev_fd_ is closed.
Test Plan:
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36795
Summary: I don't think we need to use whole-archive to include jemalloc. This change only affects our development builds -- it does not affect our open source builds (which don't support jemalloc) or our fbcode third-party2 builds (which use open-source build codepaths).
Test Plan:
make
verify that jemalloc is running by running `MALLOC_CONF="prof:true" ./cache_test` and observing that a profile file was created
Reviewers: MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36783
Summary: Other than making some class members private, this is a documentation-only change
Test Plan: unit tests
Reviewers: sdong, yhchiang, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36567
Summary: Add a script which checks out changes from a list of tags, builds them, and loads the same data into each. Finally, it checks out the target build and makes sure it can successfully open the DB and read all the data. It is implemented through the ldb tool, because ldb is available in all previous builds, so we don't have to cross-build anything.
Test Plan: Run the script.
Reviewers: yhchiang, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36639
Summary: Now EnvOptions uses unsanitized DB options. bytes_per_sync is turned off when rate_limiter is used, but this change doesn't take effect.
Test Plan: See different I/O pattern in db_bench running fillseq.
Reviewers: yhchiang, kradhakrishnan, rven, anthony, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36723
Summary: These two files are test binaries and are not included in TESTS in Makefile.
Test Plan: `make clean` now deletes those files, too
Reviewers: sdong, kradhakrishnan, meyering
Reviewed By: kradhakrishnan, meyering
Subscribers: kradhakrishnan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36705
Summary:
When building rocksdbjava and rocksdbjavastatic, create -fPIC-enabled
binaries in a temporary subdirectory, jl/.
* Makefile (java_libobjects): New variable.
(java_libobjects): New rule.
(CLEAN_FILES): Arrange for "make clean" to remove that temporary dir.
(rocksdbjavastatic): Depend on the new variable.
Remove useless OPT=... line.
(rocksdbjava): Likewise.
Test Plan:
JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
make rocksdbjavastatic
Reviewers: yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36645
Summary: Now trivial move is only triggered when moving from level n to n+1. With dynamic level base, it is possible that a file is moved from level 0 to level n while levels 1 to n-1 are empty. Extend trivial move to this case.
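A sketch of the configuration where this now applies (illustrative values):
  rocksdb::Options options;
  options.compaction_style = rocksdb::kCompactionStyleLevel;
  options.level_compaction_dynamic_level_bytes = true;
  options.num_levels = 7;
  // During a sequential load, an L0 file overlaps nothing below and levels
  // 1..n-1 are empty, so it can now be trivially moved straight to level n
  // (the dynamic base level) instead of being recompacted level by level.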
Test Plan: Add one more unit test of sequential loading. A non-trivial compaction happened without the patch and no longer happens with it.
Reviewers: rven, yhchiang, MarkCallaghan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba, IslamAbdelRahman
Differential Revision: https://reviews.facebook.net/D36669
Summary:
Fix the following compilation error in flashcache.cc on Mac
Undefined symbols for architecture x86_64:
"rocksdb::NewFlashcacheAwareEnv(rocksdb::Env*, int)", referenced from:
rocksdb::Benchmark::Open(rocksdb::Options*) in db_bench.o
Test Plan: make db_bench
Reviewers: sdong, igor, rven
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36657
Summary:
* src.mk (JNI_NATIVE_SOURCES): New variable, so we don't have to use
a glob in Makefile
* Makefile (JNI_NATIVE_SOURCES): Remove glob-using definition, now
that the explicit list of sources is in src.mk.
Test Plan:
Run this:
JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
make rocksdbjava
Reviewers: yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36633
Summary:
As described in https://github.com/facebook/rocksdb/issues/563, we should add minor version to SONAME, since we break ABI with minor releases.
I also turned PLATFORM_SHARED_VERSIONED to true by default. This is true in LevelDB and it was switched to false by D15117 for no apparent reason. It should only be false for iOS.
Test Plan: `make shared_lib` produced librocksdb.dylib.3.10.0
Reviewers: sdong, yhchiang, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36573
Summary:
After this diff, when a user submits a diff from Facebook's VPN
network, we'll automatically trigger a Jenkins test. Once the Jenkins test
is done, we'll update the diff with the test results.
Test Plan:
Made sure that jenkins build is triggered on `arc diff` and
that result is reflected back on the diff
Reviewers: sdong, rven, kradhakrishnan, anthony, yhchiang
Reviewed By: anthony
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36555
Summary:
There are some cases where the flashcache file descriptor was
already allocated (e.g. fb-MySQL). Then NewFlashcacheAwareEnv returns an
error at open() because the fd was already assigned. This diff adds another
function to instantiate FlashcacheAwareEnv with a pre-allocated fd, cachedev_fd.
Test Plan: Tested with MyRocks using this function; it worked.
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: dhruba, MarkCallaghan, rven
Differential Revision: https://reviews.facebook.net/D36447
Summary: Now we add warnings when a user configures a compression type that is not supported.
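For reference, the kind of configuration that now triggers these warnings when the library is built without the corresponding compression library (a sketch):
  rocksdb::Options options;
  // Whole-DB compression type:
  options.compression = rocksdb::kLZ4Compression;
  // Or per-level compression; any unsupported entry now produces a [WARN]
  // line and that level's data is left uncompressed:
  options.compression_per_level = {rocksdb::kNoCompression,
                                   rocksdb::kNoCompression,
                                   rocksdb::kLZ4Compression};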
Test Plan:
Configured compression to non-supported values. Observed messages in my log:
2015/03/26-12:17:57.586341 7ffb8a496840 [WARN] Compression type chosen for level 2 is not supported: LZ4. RocksDB will not compress data on level 2.
2015/03/26-12:19:10.768045 7f36f15c5840 [WARN] Compression type chosen is not supported: LZ4. RocksDB will not compress data.
Reviewers: rven, sdong, yhchiang
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35979
Summary:
When GNU parallel is available, "make check" tests are now run in parallel.
When /dev/shm is usable, we tell those tests to create temporary files therein.
Now, the longest-running single test, db_test (which is composed of hundreds of sub-tests),
is no longer run sequentially: instead, each of its sub-tests is run independently, and can
be parallelized along with all other tests. To make that process easier, this change
creates a temporary directory, "t/", in which it puts a small script for each of those
subtests. The output from each parallel-run test is now saved in t/log-TEST_NAME.
When GNU parallel is not available, we run the tests in sequence, just as before.
If GNU parallel is available and you don't like the default of running one subtest
per core, you can invoke "make J=1 check" to run only one test at a time.
Beware: this will take a long time, and it starts with the two longest-running tests, so you
will wait for a long time before seeing any results. Instead, if you want to use fewer resources
but still see useful progress, try "make J=60% check". That will attempt to ensure that 60% of
the cores are occupied by test runs.
To watch the progress of individual tests (duration, success (PASS-or-FAIL), name), run "make watch-log"
in the same directory from another window. That will start with something like the following header line,
and when complete should show numbers/names like this:
Every 0.1s: sort -k7,7nr -k4,4gr LOG|perl -n -e '@a=split("\t",$_,-1); $t=$a[8]; $t =~ s,^\./,,;' -e '$t =~ s, >.*,,; chomp $t;' -e '$t =~ /.*--gtest_filter=... Wed Apr 1 10:51:42 2015
152.221 PASS t/DBTest.FileCreationRandomFailure
109.280 PASS t/DBTest.EncodeDecompressedBlockSizeTest
82.315 PASS reduce_levels_test
77.812 PASS t/DBTest.CompactionFilterWithValueChange
73.236 PASS backupable_db_test
63.428 PASS deletefile_test
57.248 PASS table_test
55.665 PASS prefix_test
49.816 PASS t/DBTest.RateLimitingTest
...
Test Plan:
Timings (measured so as to exclude compile and link times):
With this change, all tests complete in 2m40s on a system for which nproc prints 32.
Prior to this change, "make check" would take 24.5 minutes on that same system.
Here are durations (in seconds) of the longest-running subtests:
152.435 PASS t/DBTest.FileCreationRandomFailure
107.070 PASS t/DBTest.EncodeDecompressedBlockSizeTest
81.391 PASS ./reduce_levels_test
71.587 PASS ./backupable_db_test
61.746 PASS ./deletefile_test
57.960 PASS ./table_test
55.230 PASS ./prefix_test
54.060 PASS t/DBTest.CompactionFilterWithValueChange
48.873 PASS t/DBTest.RateLimitingTest
47.569 PASS ./fault_injection_test
46.593 PASS t/DBTest.Randomized
42.662 PASS t/DBTest.CompactionFilter
31.793 PASS t/DBTest.SparseMerge
30.612 PASS t/DBTest.CompactionFilterV2
25.891 PASS t/DBTest.GroupCommitTest
23.863 PASS t/DBTest.DynamicLevelMaxBytesBase
22.976 PASS ./rate_limiter_test
18.942 PASS t/DBTest.OptimizeFiltersForHits
16.851 PASS ./env_test
15.399 PASS t/DBTest.CompactionFilterV2WithValueChange
14.827 PASS t/DBTest.CompactionFilterV2NULLPrefix
Reviewers: igor, sdong, rven, yhchiang, igor.sugak
Reviewed By: igor.sugak
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35379
Summary:
Fix build break on travis build:
$ OPT=-DTRAVIS V=1 make unity && make clean && OPT=-DTRAVIS V=1 make db_test && ./db_test
......
In file included from unity.cc:65:0:
./table/plain_table_key_coding.cc: In member function ‘rocksdb::Status rocksdb::PlainTableKeyDecoder::NextPrefixEncodingKey(const char*, const char*, rocksdb::ParsedInternalKey*, rocksdb::Slice*, size_t*, bool*)’:
./table/plain_table_key_coding.cc:224:3: error: reference to ‘EntryType’ is ambiguous
EntryType entry_type;
^
In file included from ./db/table_properties_collector.h:9:0,
from ./db/builder.h:11,
from ./db/builder.cc:10,
from unity.cc:1:
./include/rocksdb/table_properties.h:81:6: note: candidates are: enum rocksdb::EntryType
enum EntryType {
^
In file included from unity.cc:65:0:
./table/plain_table_key_coding.cc:16:6: note: enum rocksdb::{anonymous}::EntryType
enum EntryType : unsigned char {
^
./table/plain_table_key_coding.cc:231:51: error: ‘entry_type’ was not declared in this scope
const char* pos = DecodeSize(key_ptr, limit, &entry_type, &size);
^
make: *** [unity.o] Error 1
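The fix is to keep the file-local enum from clashing with the public rocksdb::EntryType in the unity build, along these lines (a sketch; the actual rename may differ):
  // table/plain_table_key_coding.cc
  namespace rocksdb {
  namespace {

  // Renamed so it can no longer be ambiguous with rocksdb::EntryType from
  // include/rocksdb/table_properties.h when every source file is
  // concatenated into unity.cc.
  enum PlainTableEntryType : unsigned char {
    kFullKey = 0,
    kPrefixFromPreviousKey = 1,
    kKeySuffix = 2,
  };

  }  // anonymous namespace
  }  // namespace rocksdb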
Test Plan:
OPT=-DTRAVIS V=1 make unity
And make sure it doesn't break anymore.
Reviewers: yhchiang, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36549
Summary:
This adds p99.9 and p99.99 response times to the benchmark report and
adds a second report, report2.txt, which lists tests in test order rather
than the order in which they were run, so overwrite tests are listed for
all thread counts, then update tests, and so on.
Also changes fillseq to compress all levels to avoid write-amp from rewriting
uncompressed files when they reach the first level to compress.
Increase max_write_buffer_number to avoid stalls during fillseq and make
max_background_flushes agree with max_write_buffer_number.
See https://gist.github.com/mdcallag/297ff4316a25cb2988f7 for an example
of the new report (report2.txt)
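Expressed in Options terms, the tuning amounts to roughly this (a sketch with illustrative values; the real change is in the benchmark scripts and db_bench flags):
  rocksdb::Options options;
  // More memtables so fillseq doesn't stall waiting on flushes...
  options.max_write_buffer_number = 8;
  // ...and background flush threads kept in line with that number.
  options.max_background_flushes = 8;
  // Compress every level, so files aren't rewritten just to compress them
  // once they reach the first compressed level.
  options.compression_per_level.assign(options.num_levels,
                                       rocksdb::kSnappyCompression);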
Test Plan:
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36537
Summary:
Currently users have no way to tell from the TablePropertiesCollector callback whether a key is an add, a delete, or a merge. Add a new function to expose it.
Also refactor the code so that
(1) the table property collector and the internal table property collector are two separate data structures, with the latter one now exposed
(2) table builders only receive internal table properties
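A sketch of a collector that uses the new per-entry type information (the callback signature below is paraphrased, not quoted from the header):
  #include <string>
  #include "rocksdb/table_properties.h"

  class DeleteCountingCollector : public rocksdb::TablePropertiesCollector {
   public:
    // New-style callback: the entry type (put/delete/merge/...) is passed in.
    rocksdb::Status AddUserKey(const rocksdb::Slice& key,
                               const rocksdb::Slice& value,
                               rocksdb::EntryType type,
                               rocksdb::SequenceNumber seq,
                               uint64_t file_size) override {
      if (type == rocksdb::kEntryDelete) {
        ++num_deletes_;
      }
      return rocksdb::Status::OK();
    }
    rocksdb::Status Finish(rocksdb::UserCollectedProperties* props) override {
      (*props)["num_deletes"] = std::to_string(num_deletes_);
      return rocksdb::Status::OK();
    }
    rocksdb::UserCollectedProperties GetReadableProperties() const override {
      return rocksdb::UserCollectedProperties();
    }
    const char* Name() const override { return "DeleteCountingCollector"; }

   private:
    uint64_t num_deletes_ = 0;
  };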
Test Plan: Add cases in table_properties_collector_test to cover both of old and new ways of using TablePropertiesCollector.
Reviewers: yhchiang, igor.sugak, rven, igor
Reviewed By: rven, igor
Subscribers: meyering, yoshinorim, maykov, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D35373