Commit Graph

2939 Commits

Author SHA1 Message Date
Robert
628a67b007 Reduce memory footprint in backupable db.
* Use emplace when possible.
* Make FileInfo shared among all BackupMeta, instead of storing filenames.
* Make checksum_value in FileInfo constant.
* Reserve space beforehand if container size is known.
* Make FileInfo and BackupMeta non-copyable and non-assignable to prevent future logic errors.
  It is very dangerous to copy a BackupMeta without carefully handling the refcounts of its FileInfo entries (sketched below).
* Remove a copy of BackupMeta when a corrupt backup is detected.
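
A minimal sketch of the sharing idea described above (FileInfo and BackupMeta follow the commit's naming, but the members and helper code here are illustrative, not the actual backupable_db implementation):

    #include <cstdint>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    // One entry per backed-up file; the checksum never changes once computed.
    struct FileInfo {
      FileInfo(std::string name, uint64_t sz, uint32_t checksum)
          : refs(0), filename(std::move(name)), size(sz),
            checksum_value(checksum) {}
      FileInfo(const FileInfo&) = delete;             // non-copyable
      FileInfo& operator=(const FileInfo&) = delete;  // non-assignable

      int refs;                       // number of backups referencing this file
      const std::string filename;
      const uint64_t size;
      const uint32_t checksum_value;  // constant after construction
    };

    // A backup holds shared FileInfo objects instead of copies of filenames.
    class BackupMeta {
     public:
      BackupMeta() = default;
      BackupMeta(const BackupMeta&) = delete;  // copying would desync refcounts
      BackupMeta& operator=(const BackupMeta&) = delete;

      void AddFile(std::shared_ptr<FileInfo> info) {
        ++info->refs;
        files_.emplace_back(std::move(info));  // emplace, no extra copy
      }

     private:
      std::vector<std::shared_ptr<FileInfo>> files_;
    };

    int main() {
      auto shared = std::make_shared<FileInfo>("000005.sst", 4096u, 0xdeadbeefu);
      BackupMeta backup1, backup2;
      backup1.AddFile(shared);  // both backups reference the same FileInfo,
      backup2.AddFile(shared);  // so its metadata is stored only once
      return shared->refs == 2 ? 0 : 1;
    }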
2015-01-09 16:58:13 +08:00
Igor Canadi
b89d58dfa3 :%s/build_config/make_config
Summary: I'm tired of double-tab when opening build_tools/<something>. This change will make bu<tab> fully complete my path :)

Test Plan: `vi bu<tab>` gives me `vi build_tools/` yay!

Reviewers: yhchiang, rven, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30639
2015-01-07 17:26:24 -08:00
Ameya Gupte
242b9769c3 Memtablerep Benchmark
Summary:
Create a benchmark for testing memtablereps. This diff is a bit rough, but it should do the trick until other bootcampers can clean it up.

Addressing comments
Removed the mutexes
Changed ReadWriteBenchmark to fix the number of reads and count the number of writes we can perform in that time.

Test Plan:
Run it.

The runs below pass:
./memtablerep_bench --benchmarks fillrandom,readrandom --memtablerep skiplist

./memtablerep_bench --benchmarks fillseq,readseq --memtablerep skiplist

./memtablerep_bench --benchmarks readwrite,seqreadwrite --memtablerep skiplist --num_operations 200 --num_threads 5

./memtablerep_bench --benchmarks fillrandom,readrandom --memtablerep hashskiplist

./memtablerep_bench --benchmarks fillseq,readseq --memtablerep hashskiplist --num_scans 2

./memtablerep_bench --benchmarks fillseq,readseq --memtablerep vector

Reviewers: jpaton, ikabiljo, sdong

Reviewed By: sdong

Subscribers: dhruba, ameyag

Differential Revision: https://reviews.facebook.net/D22683
2015-01-07 15:15:30 -08:00
sdong
73ee4febab Add comments about properties supported by DB::GetProperty() and DB::GetIntProperty()
Summary: Add comments in db.h to help users discover their options.
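
For reference, a typical call site looks roughly like this (the property names shown are examples; the authoritative list is the new comment block in db.h, and the path and error handling here are illustrative):

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include "rocksdb/db.h"

    int main() {
      rocksdb::DB* db = nullptr;
      rocksdb::Options options;
      options.create_if_missing = true;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/property_demo", &db);
      if (!s.ok()) return 1;

      // String-valued property: multi-line, human-readable statistics.
      std::string stats;
      if (db->GetProperty("rocksdb.stats", &stats)) {
        std::cout << stats << std::endl;
      }

      // Integer-valued property: estimated number of keys in the DB.
      uint64_t num_keys = 0;
      if (db->GetIntProperty("rocksdb.estimate-num-keys", &num_keys)) {
        std::cout << "approx keys: " << num_keys << std::endl;
      }

      delete db;
      return 0;
    }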

Test Plan: Compile

Reviewers: rven, yhchiang, igor

Reviewed By: igor

Subscribers: MarkCallaghan, yoshinorim, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D31077
2015-01-07 15:09:35 -08:00
Igor Canadi
2dca48f550 Merge pull request #451 from StanislavGlebik/document_db_improvement
Fixed negative numbers comparison in DocumentDB
2015-01-07 14:11:22 -08:00
stash93
4b57d9a820 Fixed negative numbers comparison in DocumentDB 2015-01-08 01:03:51 +03:00
sdong
9ef59a09a5 VersionSet::AddLiveFiles() to assert current version is included.
Summary: Add an extra assert to make sure current version is included in VersionSet::AddLiveFiles().

Test Plan: make all check

Reviewers: yhchiang, rven, igor

Reviewed By: igor

Subscribers: dhruba, hermanlee4, leveldb

Differential Revision: https://reviews.facebook.net/D30819
2015-01-07 12:03:40 -08:00
sdong
4d16a9a633 VersionBuilder to optimize for applying a later edit deleting files added by previous edits
Summary: During recovery, VersionBuilder::Apply() is called multiple times. If the DB has been open for long enough, most of the files added earlier will have been deleted by later deletes. In the current solution, the added files are sorted first and the deletes are applied afterward. In this patch, deletes are applied inside Apply() whenever possible, which can significantly reduce the sorting time in some cases.
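
A simplified sketch of that optimization (the real VersionBuilder works per level on FileMetaData; here a bare map of file numbers stands in for it, so names and types are illustrative):

    #include <cstdint>
    #include <map>
    #include <set>
    #include <string>

    // Stand-in for one manifest edit: files it adds and files it deletes.
    struct VersionEditSketch {
      std::map<uint64_t, std::string> added_files;  // file number -> metadata
      std::set<uint64_t> deleted_files;             // file numbers
    };

    class VersionBuilderSketch {
     public:
      void Apply(const VersionEditSketch& edit) {
        for (uint64_t num : edit.deleted_files) {
          // If a later edit deletes a file added by an earlier edit, drop it
          // right here instead of carrying it into the final sort.
          if (added_.erase(num) == 0) {
            deleted_.insert(num);  // deletes a file from the base version
          }
        }
        for (const auto& kv : edit.added_files) {
          added_.insert(kv);
        }
      }

      // Only files that survived every edit still need to be sorted/merged.
      size_t SurvivingAddedFiles() const { return added_.size(); }

     private:
      std::map<uint64_t, std::string> added_;
      std::set<uint64_t> deleted_;
    };

    int main() {
      VersionBuilderSketch b;
      b.Apply({{{10, "L0 file"}}, {}});  // edit 1 adds file 10
      b.Apply({{}, {10}});               // edit 2 deletes it -> never sorted
      return b.SurvivingAddedFiles() == 0 ? 0 : 1;
    }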

Test Plan:
Add unit tests in version_builder
valgrind_check
Open a manifest of 50MB, with 9K live files. The manifest read time reduced from 1.6 seconds to 0.7 seconds.

Reviewers: rven, yhchiang, igor

Reviewed By: igor

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30765
2015-01-07 10:36:58 -08:00
Igor Canadi
7731d51c82 Simplify column family concurrency
Summary:
This patch changes concurrency guarantees around ColumnFamilySet::column_families_ and ColumnFamilySet::column_families_data_.

Before:
* When mutating: lock DB mutex and spin lock
* When reading: lock DB mutex OR spin lock

After:
* When mutating: lock DB mutex and be in write thread
* When reading: lock DB mutex or be in write thread

That way, we can eliminate the spin lock that protects these hash maps and simplify concurrency: we no longer need to take the spin lock during writes, since writing is mutually exclusive with column family create/drop (the only operations that mutate those hash maps).

With these new restrictions, I also needed to move column family create to the write thread (column family drop was already in the write thread).

Even though we no longer take the spin lock during writes, the impact on performance should be minimal -- the spin lock is almost never busy, so locking it was almost free anyway.

This addresses task t5116919.
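
The new rule reads as a pair of preconditions; a hedged sketch (not the actual ColumnFamilySet code, which relies on the real DB mutex and write-thread bookkeeping rather than plain booleans):

    #include <cassert>
    #include <cstdint>
    #include <string>
    #include <unordered_map>

    class ColumnFamilySetSketch {
     public:
      // Reading the maps: hold the DB mutex OR be the single write thread.
      const std::string* GetColumnFamily(uint32_t id, bool holds_db_mutex,
                                         bool is_write_thread) const {
        assert(holds_db_mutex || is_write_thread);
        auto it = column_families_.find(id);
        return it == column_families_.end() ? nullptr : &it->second;
      }

      // Mutating the maps (create/drop): hold the DB mutex AND be the write
      // thread. With writes serialized behind the single write thread, the
      // old spin lock is no longer needed.
      void CreateColumnFamily(uint32_t id, const std::string& name,
                              bool holds_db_mutex, bool is_write_thread) {
        assert(holds_db_mutex && is_write_thread);
        column_families_.emplace(id, name);
      }

     private:
      std::unordered_map<uint32_t, std::string> column_families_;
    };

    int main() {
      ColumnFamilySetSketch set;
      set.CreateColumnFamily(1, "hot_cf", /*holds_db_mutex=*/true,
                             /*is_write_thread=*/true);
      // A reader in the write thread may look up CFs without the DB mutex.
      return set.GetColumnFamily(1, /*holds_db_mutex=*/false,
                                 /*is_write_thread=*/true) != nullptr ? 0 : 1;
    }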

Test Plan:
make check

Stress test with lots and lots of column family drop and create:

   time ./db_stress --threads=30 --ops_per_thread=5000000 --max_key=5000 --column_families=200 --clear_column_family_one_in=100000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress/

Reviewers: yhchiang, rven, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30651
2015-01-06 12:44:21 -08:00
Igor Canadi
07aa4e0e35 Fix compaction summary log for trivial move
Summary: When a trivial move commit is done, we log the summary of the input version instead of the current one. This is inconsistent with other log messages and confusing.

Test Plan: compiles

Reviewers: sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30939
2015-01-05 17:32:49 -08:00
Leonidas Galanis
9d5bd411be benchmark.sh won't run through all tests properly if one specifies wal_dir to be different than db directory.
Summary:
A command line like this to run all the tests:
source benchmark.config.sh && nohup ./benchmark.sh 'bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting'
where
benchmark.config.sh is:
export DB_DIR=/data/mysql/rocksdata
export WAL_DIR=/txlogs/rockswal
export OUTPUT_DIR=/root/rocks_benchmarking/output

will fail for the tests that need a new DB.

Also: 1) set disable_data_sync=0, and 2) add a debug mode to run through all the tests more quickly.

Test Plan: run ./benchmark.sh 'debug,bulkload,fillseq,overwrite,filluniquerandom,readrandom,readwhilewriting' and verify that there are no complaints about WAL dir not being empty.

Reviewers: sdong, yhchiang, rven, igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D30909
2015-01-05 15:36:47 -08:00
Igor Canadi
62ad0a9b19 Deprecating skip_log_error_on_recovery
Summary:
Since https://reviews.facebook.net/D16119, we ignore partial tailing writes. Because of that, we no longer need skip_log_error_on_recovery.

The documentation says "Skip log corruption error on recovery (If client is ok with losing most recent changes)", while the option actually ignores any corruption of the WAL (not just the most recent changes). This is very dangerous and can lead to DB inconsistencies. This was originally set up to ignore partial tailing writes, which we now do automatically (after D16119). I dug up old task t2416297, which confirms my findings.

Test Plan: There was actually no tests that verified correct behavior of skip_log_error_on_recovery.

Reviewers: yhchiang, rven, dhruba, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30603
2015-01-05 13:35:56 -08:00
Igor Canadi
fa0b126c0c Fix corruption_test -- if status is not OK, return status -- during recovery 2015-01-05 10:49:41 -08:00
Igor Canadi
d7b4bb62a7 Fail DB::Open() on WAL corruption
Summary:
This is a serious bug. If paranoid_checks == true and the WAL is corrupted, we don't fail DB::Open(). I tried going through the history and it seems we've been doing this for a long, long time.

I found this when investigating t5852041.

Test Plan: Added unit test to verify correct behavior.

Reviewers: yhchiang, rven, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30597
2015-01-05 10:26:34 -08:00
Igor Canadi
9619081d9b Merge pull request #449 from robertabcd/improve-backupable
Improve backupable db performance on loading BackupMeta
2015-01-05 09:59:50 -08:00
Robert
49376bfe87 Fix errors when using -Wshorten-64-to-32. 2015-01-05 21:21:04 +08:00
Robert
a8c5564a9d Do not issue extra GetFileSize() calls when loading BackupMeta. 2015-01-04 12:20:49 +08:00
Robert
caa1fd0e0e Improve performance when loading BackupMeta.
* Use strtoul() and strtoull() instead of sscanf().
  glibc's sscanf() does an implicit strlen() (see the sketch below).

* Move the implicit construction of Slice("crc32 ") out of the loop.
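
Roughly, both changes together look like this (a standalone sketch of the parsing pattern; the line format and names are illustrative, not the actual BackupEngine metadata code):

    #include <cstdint>
    #include <cstdlib>
    #include <string>
    #include <vector>

    int main() {
      // Each metadata line looks something like "<filename> crc32 <value>".
      std::vector<std::string> lines = {"000007.sst crc32 1234567890",
                                        "000008.sst crc32 987654321"};

      // Build the "crc32 " marker once, outside the loop, instead of
      // implicitly constructing it from a string literal on every iteration.
      const std::string crc_marker = "crc32 ";

      uint64_t checksum_sum = 0;
      for (const std::string& line : lines) {
        size_t pos = line.find(crc_marker);
        if (pos == std::string::npos) return 1;
        // strtoull() scans only the digits it needs; glibc's sscanf() would
        // first run strlen() over the whole remaining buffer.
        const char* digits = line.c_str() + pos + crc_marker.size();
        checksum_sum += std::strtoull(digits, /*endptr=*/nullptr, /*base=*/10);
      }
      return checksum_sum > 0 ? 0 : 1;
    }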
2015-01-04 12:19:32 +08:00
sdong
e9ca358157 Fix CLANG build for db_bench
Summary: The CLANG build was broken by a recent change in db_bench. Fix it.

Test Plan: Build db_bench using CLANG.

Reviewers: rven, igor, yhchiang

Reviewed By: yhchiang

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30801
2014-12-30 18:33:49 -08:00
Yueh-Hsuan Chiang
bf287b76e0 Add structures for exposing thread events and operations.
Summary:
Add structures for exposing events and operations.  An event describes a
high-level action of a thread, such as doing a compaction or a flush,
while an operation describes a lower-level action of a thread, such as
reading or writing an SST table, or waiting for a mutex.  Events and
operations are designed to be independent.  One thread is typically
involved in one event and one operation at a time.

Code instrumentation will be in a separate diff.
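
Conceptually, the exposed state per thread is one event plus one operation; an illustrative sketch (these names and fields are stand-ins, not the actual definitions in the thread status headers):

    #include <cstdint>
    #include <string>

    // High-level event a thread is participating in.
    enum class ThreadEvent : uint8_t {
      kUnknown = 0,
      kCompaction,  // thread is doing a compaction
      kFlush,       // thread is doing a flush
    };

    // Lower-level operation the thread is performing. Independent of the
    // event: both a flush and a compaction may be writing an SST table.
    enum class ThreadOperation : uint8_t {
      kUnknown = 0,
      kReadSstTable,
      kWriteSstTable,
      kWaitForMutex,
    };

    // One snapshot entry as it might be returned by a GetThreadList()-style call.
    struct ThreadStatusSketch {
      uint64_t thread_id = 0;
      ThreadEvent event = ThreadEvent::kUnknown;
      ThreadOperation operation = ThreadOperation::kUnknown;
      std::string db_name;
      std::string cf_name;
    };

    int main() {
      ThreadStatusSketch s;
      s.event = ThreadEvent::kFlush;                  // high-level: flushing
      s.operation = ThreadOperation::kWriteSstTable;  // low-level: writing SST
      return s.event == ThreadEvent::kFlush ? 0 : 1;
    }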

Test Plan:
Add unit-tests in thread_list_test
make dbg -j32
./thread_list_test
export ROCKSDB_TESTS=ThreadList
./db_test

Reviewers: ljin, igor, sdong

Reviewed By: sdong

Subscribers: rven, jonahcohen, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D29781
2014-12-30 10:39:13 -08:00
sdong
a801c1fb09 db_bench --num_hot_column_families to be default off
Summary: Having --num_hot_column_families on by default fails some existing regression tests, so turn it off by default.

Test Plan: Run db_bench to make sure it is default off.

Reviewers: yhchiang, rven, igor

Reviewed By: igor

Subscribers: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D30705
2014-12-24 09:00:23 -08:00
Manish Patil
2067058a60 Dump routine to BlockBasedTableReader (valgrind)
Summary: Fixed a valgrind issue in the dump routine.

Test Plan: valgrind check done

Reviewers: rven, sdong

Reviewed By: sdong

Subscribers: sdong, dhruba

Differential Revision: https://reviews.facebook.net/D30699
2014-12-23 18:01:29 -08:00
sdong
ddc81440d5 db_bench to add an option as number of hot column families to add to
Summary:
Add option --num_hot_column_families to db_bench. If it is set, write operations will first go to that number of column families and then move on to the next set of hot column families. The working set of column families can therefore be smaller than the total number of CFs.

This is to test how RocksDB handles cold column families.

Test Plan: Run db_bench with  --num_hot_column_families set and not set.

Reviewers: yhchiang, rven, igor

Reviewed By: igor

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30663
2014-12-23 16:52:05 -08:00
Yueh-Hsuan Chiang
a944afd356 Fixed a compile error in db/db_impl.cc on ROCKSDB_LITE 2014-12-23 16:19:40 -08:00
Manish Patil
7ea7bdf04d Dump routine to BlockBasedTableReader
Summary: Added the necessary routines for dumping a block-based SST file with a block filter.

Test Plan: Added a "raw" mode to the sst_dump utility.

Reviewers: sdong, rven

Reviewed By: rven

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D29679
2014-12-23 13:24:07 -08:00
Igor Canadi
ae508df90e Clean up compile for c_simple_example 2014-12-23 17:32:30 +01:00
Igor Canadi
b623009619 Fix compile of compact_file_example 2014-12-23 17:14:44 +01:00
Igor Canadi
ded26605f4 Merge pull request #444 from adamretter/java-api-fix
Fix the Java API build on Mac OS X
2014-12-23 15:26:45 +01:00
Adam Retter
98490bccf6 Fix the build on Mac OS X 2014-12-23 14:22:56 +00:00
Yueh-Hsuan Chiang
4d99729741 Merge pull request #443 from behanna/master
Fix the build with -DNDEBUG.
2014-12-22 23:57:19 -08:00
Lei Jin
5045c43944 add support for nested BlockBasedTableOptions in config string
Summary:
Add support for a nested config for the block-based table factory. The format looks like this:

"write_buffer_size=1024;block_based_table_factory={block_size=4k};max_write_buffer_num=2"

Test Plan: unit test

Reviewers: yhchiang, rven, igor, ljin, jonahcohen

Reviewed By: jonahcohen

Subscribers: jonahcohen, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D29223
2014-12-22 16:34:21 -08:00
Chris BeHanna
d232cb156b Fix the build with -DNDEBUG.
Dike out the body of VerifyCompactionResult.  With assert() compiled out, the
loop index variable in the inner loop was unused, breaking the build when
-Werror is enabled.
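
A minimal reproduction of that failure mode (illustrative only; the original was an inner-loop index used solely by assert() inside VerifyCompactionResult, simplified here to a single assert-only variable):

    #include <cassert>
    #include <vector>

    // Without the #ifndef guard, -DNDEBUG turns assert() into a no-op, `prev`
    // becomes assigned-but-never-read, and -Wunused-but-set-variable under
    // -Werror fails the build. Guarding the whole verification body keeps the
    // release build clean.
    void VerifyAscending(const std::vector<int>& v) {
    #ifndef NDEBUG
      int prev = v.empty() ? 0 : v[0];
      for (int x : v) {
        assert(prev <= x);
        prev = x;
      }
    #endif
    }

    int main() {
      VerifyAscending({1, 2, 3});
      return 0;
    }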
2014-12-22 17:06:18 -06:00
Yueh-Hsuan Chiang
45bab305f9 Move GetThreadList() feature under Env.
Summary:
GetThreadList() feature depends on the thread creation and destruction, which is currently handled under Env.
This patch moves GetThreadList() feature under Env to better manage the dependency of GetThreadList() feature
on thread creation and destruction.

Renamed ThreadStatusImpl to ThreadStatusUpdater.  Added ThreadStatusUtil, a static class that contains
utility functions for ThreadStatusUpdater.

Test Plan: run db_test, thread_list_test and db_bench and verify the life cycle of Env and ThreadStatusUpdater is properly managed.

Reviewers: igor, sdong

Reviewed By: sdong

Subscribers: ljin, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30057
2014-12-22 12:20:17 -08:00
Igor Canadi
4fd26f287c Only execute flush from compaction if max_background_flushes = 0
Summary: As title. We shouldn't need to execute flush from compaction if there are dedicated threads doing flushes.

Test Plan: make check

Reviewers: rven, yhchiang, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30579
2014-12-22 12:05:14 +01:00
Igor Canadi
0acc738810 Speed up FindObsoleteFiles()
Summary:
There are two versions of FindObsoleteFiles():
* full scan, which is executed every 6 hours (and it's terribly slow)
* no full scan, which is executed every time a background process finishes and when an iterator is deleted

This diff is optimizing the second case (no full scan). Here's what we do before the diff:
* Get the list of obsolete files (files with ref==0). Some files in the obsolete_files set might actually be live.
* Get the list of live files to avoid deleting files that are live.
* Delete files that are in obsolete_files and not in live_files.

After this diff:
* The only files with ref==0 that are still live are files that have been part of a move compaction, so don't include moved files in obsolete_files (see the sketch below).
* Get the list of obsolete files (which exclude moved files).
* No need to get the list of live files, since all files in obsolete_files need to be deleted.
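
A hedged sketch of the new fast path (FileMetaSketch and the `moved` flag are illustrative stand-ins for the real FileMetaData bookkeeping):

    #include <memory>
    #include <string>
    #include <vector>

    // Simplified file metadata: `refs` counts versions referencing the file
    // and `moved` marks files that were part of a trivial-move compaction.
    struct FileMetaSketch {
      std::string name;
      int refs = 0;
      bool moved = false;
    };

    // After the change: a file is an obsolete-file candidate only if nothing
    // references it AND it was never moved. Every candidate can then be
    // deleted without building the (expensive) full list of live files.
    std::vector<std::string> FindObsoleteFilesSketch(
        const std::vector<std::shared_ptr<FileMetaSketch>>& all_files) {
      std::vector<std::string> obsolete;
      obsolete.reserve(all_files.size());
      for (const auto& f : all_files) {
        if (f->refs == 0 && !f->moved) {
          obsolete.push_back(f->name);
        }
      }
      return obsolete;
    }

    int main() {
      auto dead = std::make_shared<FileMetaSketch>();
      dead->name = "000012.sst";
      auto moved = std::make_shared<FileMetaSketch>();
      moved->name = "000013.sst";
      moved->moved = true;  // still live in its new level; must not be deleted
      return FindObsoleteFilesSketch({dead, moved}).size() == 1 ? 0 : 1;
    }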

I'll post the benchmark results, but you can get the feel of it here: https://reviews.facebook.net/D30123

This depends on D30123.

P.S. We should do full scan only in failure scenarios, not every 6 hours. I'll do this in a follow-up diff.

Test Plan:
One new unit test. Made sure that unit test fails if we don't have a `if (!f->moved)` safeguard in ~Version.

make check

Big number of compactions and flushes:

  ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000

Reviewers: yhchiang, rven, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30249
2014-12-22 12:04:45 +01:00
Igor Canadi
d8c4ce6b50 Merge pull request #442 from alabid/alabid/fix-example-typo
fix really trivial typo in column families example
2014-12-22 09:07:51 +01:00
alabid
949bd71fd0 fix really trivial typo 2014-12-22 00:36:16 -05:00
Igor Canadi
f8999fcf31 Fix a SIGSEGV in BackgroundFlush
Summary:
This one wasn't easy to find :)

What happens is that we go through all cfds on flush_queue_ and find no cfds to flush, *but* cfd is left pointing at the last CF we looped through, and the following code assumes we want it flushed.

BTW @sdong do you think we should also make BackgroundFlush() only check a single cfd for flushing instead of doing this `while (!flush_queue_.empty())`?
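
In sketch form, the bug was a loop variable that outlived the loop; the guard below is illustrative, not the exact DBImpl::BackgroundFlush change:

    #include <deque>
    #include <string>

    struct ColumnFamilySketch {
      std::string name;
      bool imm_flush_needed = false;  // does this CF actually have data to flush?
    };

    // Buggy shape: `cfd` keeps pointing at the last element examined even when
    // no element qualified, and the caller then flushes it anyway. The fix is
    // to hand back only a CF that was explicitly selected.
    ColumnFamilySketch* PickFlushCandidate(std::deque<ColumnFamilySketch*>& queue) {
      ColumnFamilySketch* selected = nullptr;
      while (!queue.empty()) {
        ColumnFamilySketch* cfd = queue.front();
        queue.pop_front();
        if (cfd->imm_flush_needed) {
          selected = cfd;  // found real work
          break;
        }
        // otherwise: skip it and leave `selected` as nullptr
      }
      return selected;  // nullptr means "nothing to flush", never a stale cfd
    }

    int main() {
      ColumnFamilySketch idle;
      idle.name = "idle_cf";  // queued, but nothing to flush
      std::deque<ColumnFamilySketch*> queue{&idle};
      return PickFlushCandidate(queue) == nullptr ? 0 : 1;
    }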

Test Plan: regression test no longer fails

Reviewers: sdong, rven, yhchiang

Reviewed By: yhchiang

Subscribers: sdong, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30591
2014-12-21 00:23:28 -08:00
Igor Canadi
ade4034a9d MultiGet for DBWithTTL
Summary: This is a feature request from a RocksDB user. I didn't even realize we don't support MultiGet on a TTL DB :)

Test Plan: added a unit test

Reviewers: yhchiang, rven, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30561
2014-12-20 12:46:37 +01:00
Igor Canadi
fdb6be4e24 Rewritten system for scheduling background work
Summary:
When scaling to higher number of column families, the worst bottleneck was MaybeScheduleFlushOrCompaction(), which did a for loop over all column families while holding a mutex. This patch addresses the issue.

The approach is similar to our earlier efforts: instead of a pull model, where we do something for every column family, we use a push-based model -- when we detect that a column family is ready to be flushed/compacted, we add it to flush_queue_/compaction_queue_. That way we don't need to loop over every column family in MaybeScheduleFlushOrCompaction.
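
The queue discipline in miniature (hedged: the real code runs under the DB mutex, refcounts ColumnFamilyData, and dispatches to background threads; this sketch only shows the push-based bookkeeping):

    #include <deque>
    #include <iostream>
    #include <string>

    struct CFSketch {
      std::string name;
      bool pending_flush = false;  // already queued? avoid double-scheduling
    };

    class SchedulerSketch {
     public:
      // Push model: called at the moment a CF becomes ready (e.g. its
      // memtable fills up), instead of scanning every CF later.
      void SchedulePendingFlush(CFSketch* cfd) {
        if (!cfd->pending_flush) {
          cfd->pending_flush = true;
          flush_queue_.push_back(cfd);
        }
      }

      // Background thread pops one ready CF; cost is O(1), independent of
      // the total number of column families.
      CFSketch* PopFlushCandidate() {
        if (flush_queue_.empty()) return nullptr;
        CFSketch* cfd = flush_queue_.front();
        flush_queue_.pop_front();
        cfd->pending_flush = false;
        return cfd;
      }

     private:
      std::deque<CFSketch*> flush_queue_;
    };

    int main() {
      SchedulerSketch sched;
      CFSketch cf;
      cf.name = "cf_0";
      sched.SchedulePendingFlush(&cf);  // CF became ready -> pushed once
      sched.SchedulePendingFlush(&cf);  // second call is a no-op
      CFSketch* picked = sched.PopFlushCandidate();
      std::cout << (picked ? picked->name : "none") << std::endl;
      return picked == &cf ? 0 : 1;
    }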

Here are the performance results:

Command:

    ./db_bench --write_buffer_size=268435456 --db_write_buffer_size=268435456 --db=/fast-rocksdb-tmp/rocks_lots_of_cf --use_existing_db=0 --open_files=55000 --statistics=1 --histogram=1 --disable_data_sync=1 --max_write_buffer_number=2 --sync=0 --benchmarks=fillrandom --threads=16 --num_column_families=5000  --disable_wal=1 --max_background_flushes=16 --max_background_compactions=16 --level0_file_num_compaction_trigger=2 --level0_slowdown_writes_trigger=2 --level0_stop_writes_trigger=3 --hard_rate_limit=1 --num=33333333 --writes=33333333

Before the patch:

     fillrandom   :      26.950 micros/op 37105 ops/sec;    4.1 MB/s

After the patch:

      fillrandom   :      17.404 micros/op 57456 ops/sec;    6.4 MB/s

The next bottleneck is VersionSet::AddLiveFiles(), which is painfully slow when we have a lot of files. Fixing that is coming in the next patch, but when I removed that code, here's what I got:

      fillrandom   :       7.590 micros/op 131758 ops/sec;   14.6 MB/s

Test Plan:
make check

two stress tests:

Big number of compactions and flushes:

    ./db_stress --threads=30 --ops_per_thread=20000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=15 --max_background_compactions=10 --max_background_flushes=10 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000

max_background_flushes=0, to verify that this case also works correctly

    ./db_stress --threads=30 --ops_per_thread=2000000 --max_key=10000 --column_families=20 --clear_column_family_one_in=10000000 --verify_before_write=0  --reopen=3 --max_background_compactions=3 --max_background_flushes=0 --db=/fast-rocksdb-tmp/db_stress --prefixpercent=0 --iterpercent=0 --writepercent=75 --db_write_buffer_size=2000000

Reviewers: ljin, rven, yhchiang, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D30123
2014-12-19 20:38:12 +01:00
Igor Canadi
a3001b1d3d Remove -mtune=native because it's redundant 2014-12-19 09:06:45 -08:00
Yueh-Hsuan Chiang
e27c84522f Merge pull request #437 from fyrz/RocksJava-SliceTests-Fixes
[RocksJava] Slice / DirectSlice improvements
2014-12-18 13:53:35 -08:00
fyrz
1fed1282ad [RocksJava] Incorporated changes D30081 2014-12-18 22:27:50 +01:00
fyrz
5b9ceef01d [RocksJava] JavaDoc correction 2014-12-18 22:19:57 +01:00
fyrz
5fbba60b6a [RocksJava] Incorporated changes D30081 2014-12-18 22:15:00 +01:00
fyrz
b0230d7e09 [RocksJava] Incorporate additions for D30081 2014-12-18 22:05:07 +01:00
fyrz
b015ed0ca6 [RocksJava] Slice / DirectSlice improvements
Summary:
- AssertionError when initialized with Non-Direct Buffer
- Tests + coverage for DirectSlice
- Slice sigsegv fixes when initializing from String and byte arrays
- Slice Tests

Test Plan: Run tests without source modifications.

Reviewers: yhchiang, adamretter, ankgup87

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D30081
2014-12-18 22:05:07 +01:00
Yueh-Hsuan Chiang
4d422db010 Merge pull request #430 from adamretter/increase-parallelism
Added setIncreaseParallelism() to Java API Options
2014-12-18 11:02:13 -08:00
Yueh-Hsuan Chiang
04c4e49691 Merge pull request #411 from fyrz/RocksJava-RangeCompaction
[RocksJava] Range compaction
2014-12-18 11:01:23 -08:00
Igor Canadi
62d19b7b59 Merge pull request #427 from haneefmubarak/c-examples
C example
2014-12-18 16:01:15 +01:00