Commit Graph

5529 Commits

Author SHA1 Message Date
sdong
0ce258f9b3 Compaction picker to expand output level files for keys that cross file boundaries too.
Summary: We may wrongly drop a delete operation if we pick a file containing the delete entry while the put entry for the same user key sits in the next file in the level, and that next file is not picked. Now we expand compaction inputs for the output level too.
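A minimal sketch of the expansion idea, assuming simplified file metadata (the helper name and types are illustrative, not RocksDB's actual picker code):

```
#include <string>
#include <vector>

struct FileMeta {
  std::string smallest_user_key;
  std::string largest_user_key;
};

// Grow the picked range rightward while the next file begins with the same
// user key the current pick ends with, so a Put in the unpicked neighbor
// cannot outlive a Delete dropped from the picked file.
void ExpandForBoundaryKeys(const std::vector<FileMeta>& level_files,
                           size_t* last) {
  while (*last + 1 < level_files.size() &&
         level_files[*last + 1].smallest_user_key ==
             level_files[*last].largest_user_key) {
    ++*last;
  }
}
```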

Test Plan: Add unit tests that reproduce the bug of dropping the delete entry. Change compaction_picker_test to assert the new behavior.

Reviewers: IslamAbdelRahman, igor

Reviewed By: igor

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D61173
2016-07-26 17:56:36 -07:00
Alexander Jipa
ac0d93b08f fixes 1217: rocksdbjni javac and javah execute_processes fail on windows (#1218)
Pick the execute_process command based on the OS.
2016-07-27 00:18:49 +01:00
Islam AbdelRahman
1ae46094de Appveyor badge to show master branch
2016-07-26 13:54:08 -07:00
Islam AbdelRahman
8745f013ff [Fix java build] Stop using non-standard std::make_unique
Summary: std::make_unique is not part of standard C++11 and is not always available; remove it.
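For reference, a hedged sketch of two portable stand-ins on pre-C++14 toolchains (the helper name is illustrative, not necessarily what this diff uses):

```
#include <memory>
#include <utility>

// Option 1: construct the unique_ptr directly.
std::unique_ptr<int> p(new int(42));

// Option 2: a local helper with make_unique's shape, usable on C++11.
template <typename T, typename... Args>
std::unique_ptr<T> MakeUnique(Args&&... args) {
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```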

Test Plan: Run "make clean jclean rocksdbjava jtest -j8" on my mac

Reviewers: yhchiang, yiwu, sdong, andrewkr

Reviewed By: andrewkr

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D61143
2016-07-26 13:18:31 -07:00
Wanning Jiang
e12270dfee fix previous typo
Summary: Fix old typos involving FILTER/INDEX_CACHE.

Test Plan: This unit test still passes.

Reviewers: andrewkr

Reviewed By: andrewkr

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D61185
2016-07-26 11:15:14 -07:00
Andrew Kryczka
bbd6a5a184 ldb restore subcommand
Summary:
- Added a new subcommand, 'ldb restore', that restores from backup
- Made backup_env_uri optional (also for 'ldb backup') because it can use db_env when backup_env isn't provided

Test Plan:
verify backup and restore commands work:

  $ ./ldb backup --db=/data/users/andrewkr/rocksdb/db_bench-out/ --num_threads 1 --backup_dir=/data/users/andrewkr/rocksdb/db_bench-out/backup/
  $ ./ldb restore --db=/data/users/andrewkr/rocksdb/db_bench-out/restore/ --num_threads 1 --backup_dir=/data/users/andrewkr/rocksdb/db_bench-out/backup/

Reviewers: sdong, wanning

Reviewed By: wanning

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60849
2016-07-26 11:13:26 -07:00
Andrew Kryczka
9498069fc9 Run error-filtering script on diff-triggered tests
Summary:
This script creates a summary message based on the test type and
output. Previously it only ran during contbuild. This diff makes it also
run on commit-triggered diffs.

Test Plan: Will see if it works on the diff's sandcastle tests

Reviewers: kradhakrishnan, lightmark, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60879
2016-07-25 16:37:23 -07:00
Islam AbdelRahman
f8061a237e Fix Statistics TickersNameMap mismatch with Tickers enum
Summary:
TickersNameMap is not consistent with the Tickers enum.
This causes us to report wrong statistics and sometimes to access TickersNameMap outside its bounds, causing crashes (in Fb303 statistics).
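One common guard against this class of bug is a compile-time size check; a hedged sketch with a truncated enum (the static_assert is illustrative, not necessarily this diff's exact fix):

```
#include <cstdint>
#include <string>
#include <utility>

enum Tickers : uint32_t {
  BLOCK_CACHE_MISS = 0,
  BLOCK_CACHE_HIT,
  TICKER_ENUM_MAX  // sentinel: count of tickers
};

const std::pair<Tickers, std::string> TickersNameMap[] = {
    {BLOCK_CACHE_MISS, "rocksdb.block.cache.miss"},
    {BLOCK_CACHE_HIT, "rocksdb.block.cache.hit"},
};

// Fails to compile whenever an enum value is added without a map entry.
static_assert(sizeof(TickersNameMap) / sizeof(TickersNameMap[0]) ==
                  TICKER_ENUM_MAX,
              "TickersNameMap must have one entry per Tickers value");
```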

Test Plan: added new unit test

Reviewers: sdong, kradhakrishnan

Reviewed By: kradhakrishnan

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D61083
2016-07-25 16:05:50 -07:00
Islam AbdelRahman
16e225f70d Fix MergeContext::copied_operands_ strings moving
Summary:
MergeContext::copied_operands_ contains strings that the Slices in MergeContext::operand_list_ point to.
When MergeContext::copied_operands_ grows, these strings can be moved and their place in memory changed, leaving MergeContext::operand_list_ pointing to invalid memory.
Fix this problem by using unique_ptr<string> instead of string.
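A self-contained sketch of why the unique_ptr indirection helps (the Slice stand-in is simplified from the real type): vector growth relocates the owned strings in the first layout but only the pointers in the second.

```
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

struct SliceLike {  // stand-in for rocksdb::Slice: a non-owning view
  const char* data;
  size_t size;
};

int main() {
  std::vector<std::unique_ptr<std::string>> copied_operands;
  std::vector<SliceLike> operand_list;
  for (int i = 0; i < 1000; ++i) {
    copied_operands.emplace_back(new std::string(100, 'x'));
    const std::string& s = *copied_operands.back();
    // Valid across growth: reallocation moves the unique_ptrs, not the
    // heap-allocated strings they point to.
    operand_list.push_back({s.data(), s.size()});
  }
  return 0;
}
```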

Test Plan: run tests under mac/clang

Reviewers: sdong, yiwu

Reviewed By: yiwu

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D61023
2016-07-25 15:31:41 -07:00
krad
a4955b39ac Run sandcastle tests in /dev/shm 2016-07-25 15:10:24 -07:00
Yi Wu
ae0ad719de Fix flaky DBSSTTEST::DeleteObsoleteFilesPendingOutputs
Summary: The test is flaky on Travis in the OS X environment. The background flush that the test wants to block can run behind the L2 manual compaction, making the test block the L2 compaction instead and unable to proceed.

Test Plan: Test run on Travis

Reviewers: kradhakrishnan, sdong, andrewkr, IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D61101
2016-07-25 15:09:34 -07:00
zhang-tong
b2a8016df1 Update db_bench_tool.cc (#1239)
* Update db_bench_tool.cc
2016-07-25 14:50:44 -07:00
Yi Wu
c6654588bd Disable two dynamic options tests under lite build
Summary: RocksDB lite doesn't support dynamic options. Disable the two tests in the lite build, and assert that `SetOptions` should return `Status::OK`.
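A minimal sketch of the usual lite-build exclusion pattern (placeholder test name, simplified fixture):

```
#include <gtest/gtest.h>

class DBOptionsTest : public ::testing::Test {};  // simplified fixture

#ifndef ROCKSDB_LITE
TEST_F(DBOptionsTest, SomeDynamicOptionTest) {
  // would exercise DB::SetOptions(), which lite builds don't support
}
#endif  // ROCKSDB_LITE
```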

Test Plan: Run the db_options test under lite build and normal build.

Reviewers: sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D61119
2016-07-25 11:48:17 -07:00
sdong
2a6d0cde72 Ignore stale logs while restarting DBs
Summary:
Stale log files can be deleted out of order. This can happen for various reasons. One of the reasons is that no data is ever inserted to a column family and we have an optimization to update its log number, but not all the old log files are cleaned up (the case shown in the unit tests added). It can also happen when we simply delete multiple log files out of order.

This causes data corruption because we simply increase the seqID after processing the next row, and we may end up writing data with a smaller seqID than what is already flushed to memtables.

In DB recovery, for the oldest files we are replaying, if a file contains no data for any column family, we ignore the sequence IDs in that file.
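A conceptual sketch of that recovery rule, assuming a simplified per-log replay result (not the real recovery loop):

```
#include <cstdint>

struct LogReplayResult {
  bool any_cf_consumed_data;      // did any column family take data from it?
  uint64_t max_sequence_in_file;  // largest seqID seen in the file
};

uint64_t AdvanceRecoverySequence(uint64_t current_seq,
                                 const LogReplayResult& r) {
  if (!r.any_cf_consumed_data) {
    return current_seq;  // stale log: its sequence IDs are ignored
  }
  return r.max_sequence_in_file > current_seq ? r.max_sequence_in_file
                                              : current_seq;
}
```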

Test Plan: Add two unit tests that fail without the fix.

Reviewers: IslamAbdelRahman, igor, yiwu

Reviewed By: yiwu

Subscribers: hermanlee4, yoshinorim, leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60891
2016-07-25 11:47:31 -07:00
Alexander Jipa
ee8bf2e41f fixes 1228: rocksdbjni loadLibraryFromJarToTemp fails when file is already present (#1232)
Override the existing library in the tmp folder.
2016-07-24 11:19:00 +01:00
sdong
f85df120f2 Re-enable tsan crash white-box test with reduced killing odds
Summary:
The TSAN white-box crash test was disabled because it never ends. Re-enable it with reduced killing odds.
Add a parameter to the crash test script so we can pass it through an environment variable.

Test Plan: Run it manually.

Reviewers: IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D61053
2016-07-22 12:51:27 -07:00
Islam AbdelRahman
89e4c48820 Update README.md to include appveyor badge
2016-07-22 12:03:21 -07:00
Peter (Stig) Edwards
b06ca5f860 ldb load, prefer ifstream(/dev/stdin) to std::cin (#1207)
getline on std::cin can be very inefficient when ldb is loading large values, with high CPU usage in libc _IO_(un)getc. This is because of the performance penalty that comes from synchronizing stdio and iostream buffers.
See the reproducers and tests in #1133 .
If an ifstream on /dev/stdin is used (when available) then using ldb to load large values can be much more efficient.
For ldb load, I thought this approach preferable to using <cstdio> or std::ios_base::sync_with_stdio(false).
I couldn't think of a use case where ldb load would need to support reading unbuffered input; an alternative approach would be to add support for passing --input_file=/dev/stdin.
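A minimal sketch of the fallback described above, assuming /dev/stdin is available:

```
#include <fstream>
#include <iostream>
#include <string>

int main() {
  std::ifstream ifs("/dev/stdin");
  // Use the ifstream when it opens; otherwise fall back to std::cin.
  std::istream& in =
      ifs.is_open() ? static_cast<std::istream&>(ifs) : std::cin;
  std::string line;
  while (std::getline(in, line)) {
    // hand each line to the loader; the ifstream path avoids the
    // stdio/iostream synchronization penalty
  }
  return 0;
}
```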
I have a CLA in place, thanks.

The CI tests were failing at the time of https://github.com/facebook/rocksdb/pull/1156, so this change and PR will supersede it.
2016-07-22 11:46:40 -07:00
sdong
4ea0ab3cc5 Revert "Remove bashism from make check (#1225)"
This reverts commit 08ab1d83ac.
2016-07-22 10:42:56 -07:00
Alexander Jipa
12767b3130 fixes 1220: rocksjni build fails on Windows due to variable-size array declaration (#1223)
* fixes 1220: rocksjni build fails on Windows due to variable-size array declaration

Use the new keyword to create variable-size arrays in order to satisfy most of the compilers.

* fixes 1220: rocksjni build fails on Windows due to variable-size array declaration

Use unique_ptr to create variable-size arrays in order to satisfy most of the compilers.
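A hedged sketch of the two rewrites described above for a C99-style variable-length array, which MSVC rejects (names and sizes illustrative):

```
#include <cstddef>
#include <memory>

void Example(std::size_t len) {
  // char buf[len];  // VLA: not standard C++, fails on MSVC

  char* raw = new char[len];  // first attempt: plain new[], manual delete
  delete[] raw;

  // final form: unique_ptr-owned array, freed automatically on every path
  std::unique_ptr<char[]> buf(new char[len]);
}
```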
2016-07-22 10:17:10 -07:00
Andrew Kryczka
a9d512a76b Update .gitignore for internal release 2016-07-22 01:17:03 -07:00
sdong
d5a51d4de3 Need to make sure log file synced before flushing memtable of one column family
Summary: Multiput atomicity is broken across multiple column families if we don't sync the WAL before flushing one column family. The WAL file may contain a write batch with writes both to a key in the CF being flushed and to a key in another CF. If we don't sync the WAL before flushing and the machine crashes after the flush, the write batch will only be partially recovered: data for the other CFs is lost.
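A hedged illustration of the invariant via the public API (the fix itself lives inside the internal flush path; error handling omitted):

```
#include <rocksdb/db.h>

void FlushOneCf(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf) {
  db->SyncWAL();  // make the multi-CF write batch durable first...
  db->Flush(rocksdb::FlushOptions(), cf);  // ...then flush the single CF
}
```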

Test Plan: Add a new unit test which will fail without the diff.

Reviewers: yhchiang, IslamAbdelRahman, igor, yiwu

Reviewed By: yiwu

Subscribers: yiwu, leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60915
2016-07-21 16:29:06 -07:00
Yi Wu
89f319c2df Fix unit test which breaks lite build
Summary: Comment out the assertion on the number of table files in the lite build.

Test Plan:
    OPT=-DROCKSDB_LITE make check

Reviewers: lightmark

Reviewed By: lightmark

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60999
2016-07-21 14:32:12 -07:00
sdong
b50632920d Add unit test not on /dev/shm as part of the pre-commit tests
Summary: RocksDB behavior differs slightly between data on tmpfs and on normal file systems. Add a test case to run RocksDB on a normal file system.

Test Plan: See the tests launched by Phabricator

Reviewers: kradhakrishnan, IslamAbdelRahman, gunnarku

Reviewed By: gunnarku

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60963
2016-07-21 12:53:23 -07:00
krad
b9a97181aa Bump next release version
Summary: Bump master version to the next potential release version of 4.11

Test Plan: Compile

Reviewers: sdong

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60993
2016-07-21 12:22:14 -07:00
ryneli
663afef884 Add EnvLibrados - RocksDB Env of RADOS (#1222)
EnvLibrados is a customized RocksDB Env that uses RADOS as the backend file system of RocksDB. It overrides all file-system-related APIs of the default Env. The easiest way to use it is as follows:

	std::string db_name = "test_db";
	std::string config_path = "path/to/ceph/config";
	DB* db;
	Options options;
	options.env = new EnvLibrados(db_name, config_path);
	Status s = DB::Open(options, kDBPath, &db);

Then EnvLibrados will forward all file read/write operations to the RADOS cluster assigned by config_path. The default pool is db_name+"_pool".

There are some options that users can set for EnvLibrados:
- write_buffer_size. This variable is the max buffer size for WritableFile. After reaching write_buffer_size, EnvLibrados will sync the buffer content to RADOS, then clear the buffer.
- db_pool. Rather than using the default pool, users can set their own db pool name.
- wal_dir. The dir for WAL files. Because RocksDB only has a 2-level structure (dir_name/file_name), the format of wal_dir is "/dir_name" (it CAN'T be "/dir1/dir2"). The default wal_dir is "/wal".
- wal_pool. The corresponding pool name for WAL files. The default value is db_name+"_wal_pool".

An example of setting the options looks like the following:

	db_name = "test_db";
	db_pool = db_name+"_pool";
	wal_dir = "/wal";
	wal_pool = db_name+"_wal_pool";
	write_buffer_size = 1 << 20;
	env_ = new EnvLibrados(db_name, config, db_pool, wal_dir, wal_pool, write_buffer_size);

	DB* db;
	Options options;
	options.env = env_;
	// The last level dir name should match the dir name in prefix_pool_map
	options.wal_dir = "/tmp/wal";

	// open DB
	Status s = DB::Open(options, kDBPath, &db);

Librados is required to compile EnvLibrados. Use "$ make LIBRADOS=1" to compile RocksDB with it. If you want to compile only the EnvLibrados test, run "$ make env_librados_test LIBRADOS=1". To run env_librados_test, you need a running RADOS cluster with the configuration file located at "../ceph/src/ceph.conf" relative to "rocksdb/".
2016-07-21 11:16:34 -07:00
Yi Wu
32604e6601 Fix flush not being committed while writing manifest
Summary:
Fix flush not being committed while writing the manifest, which is a recent bug introduced by D60075.

The issue:
1. Options.max_background_flushes > 1.
2. Background thread A picks up a flush job, flushes, then commits to the manifest. (Note that the mutex is released before writing the manifest.)
3. Background thread B picks up another flush job and flushes. When it gets to `MemTableList::InstallMemtableFlushResults`, it notices another thread is committing, so it quits.
4. After the first commit, thread A doesn't double-check whether there are more flush results to commit, leaving the second flush uncommitted.

Test Plan: run the test. Also verify the new test hits deadlock without the fix.

Reviewers: sdong, igor, lightmark

Reviewed By: lightmark

Subscribers: andrewkr, omegaga, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60969
2016-07-21 10:10:41 -07:00
John Alexander
9ab38c45ad Remove %z Format Specifier and Fix Windows Build of sim_cache.cc (#1224)
* Replace the %zu format specifier with the Windows-compatible macro 'ROCKSDB_PRIszt' (see the sketch below)

* Added "port/port.h" include to sim_cache.cc for call to snprintf().

* Applied cleaner fix to windows build, reverting part of 7bedd94
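For reference, a hedged sketch of the macro-based pattern (the simplified definition below stands in for the per-platform one in port/):

```
#include <cstddef>
#include <cstdio>

#define ROCKSDB_PRIszt "zu"  // simplified; the Windows port maps it differently

void PrintSize(std::size_t n) {
  char buf[32];
  // String literals concatenate into "%zu" (or the MSVC-friendly form).
  std::snprintf(buf, sizeof(buf), "%" ROCKSDB_PRIszt, n);
  std::fputs(buf, stdout);
}
```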
2016-07-20 15:28:04 -07:00
Volker Mische
08ab1d83ac Remove bashism from make check (#1225)
The tests run by `make check` require Bash. On Debian you'd need to run
them as `make SHELL=/bin/bash check`. This commit makes it work on
all POSIX-compatible shells (tested on Debian with `dash`).
2016-07-20 14:41:11 -07:00
sdong
f9b14be495 Re-enable TSAN crash test but only with black box crash test
Summary: I tried on my host and the TSAN black-box test runs well. I didn't see any problem with the white-box crash test either, but its chance of hitting a crash point is so low that it may run almost forever. First re-enable the black-box crash test to unblock the job.

Test Plan: Run it locally.

Reviewers: kradhakrishnan, andrewkr, IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60951
2016-07-20 13:52:25 -07:00
sdong
68f3eb746c Run release build for CLANG and GCC 4.8.1 in pre-commit tests too
Summary: As Title.

Test Plan: See how the diff works.

Reviewers: kradhakrishnan, andrewkr, gunnarku

Reviewed By: gunnarku

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60933
2016-07-20 13:46:32 -07:00
omegaga
e70020e4f6 Only cache level 0 indexes and filter when opening table reader
Summary: In T8216281 we decided to disable prefetching the index and filter blocks while opening table handlers during startup (max_open_files = -1).
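A hedged configuration sketch of the setup this change concerns (option names as they existed around this release):

```
#include <rocksdb/options.h>
#include <rocksdb/table.h>

rocksdb::Options MakeOptions() {
  rocksdb::Options options;
  options.max_open_files = -1;  // keep all table readers open
  rocksdb::BlockBasedTableOptions table_options;
  table_options.cache_index_and_filter_blocks = true;
  // Keep level-0 index/filter blocks pinned in the block cache.
  table_options.pin_l0_filter_and_index_blocks_in_cache = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```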

Test Plan: Rely on `IndexAndFilterBlocksOfNewTableAddedToCache` to guarantee L0 indexes and filters are still cached, and change `PinL0IndexAndFilterBlocksTest` to make sure other levels are not cached (maybe add one more test to check that we don't cache other levels?)

Reviewers: sdong, andrewkr

Reviewed By: andrewkr

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D59913
2016-07-20 11:23:31 -07:00
krad
7bedd94406 Build break fix
Summary:
(1) Integer size correction (Mac build break)
(2) snprintf usage on Windows (Windows build break)

Test Plan: Build in windows and mac

Reviewers: sdong

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60927
2016-07-20 10:56:43 -07:00
Islam AbdelRahman
68a8e6b8fa Introduce FullMergeV2 (eliminate memcpy from merge operators)
Summary:
This diff updates the code to pin the merge operator operands while the merge operation runs, so that we can eliminate the memcpy cost. To do that we need a new public API for FullMerge that replaces the std::deque<std::string> with std::vector<Slice>; a sketch of the new interface follows the change list below.

This diff is stacked on top of D56493 and D56511

In this diff we
- Update FullMergeV2 arguments to be encapsulated in MergeOperationInput and MergeOperationOutput which will make it easier to add new arguments in the future
- Replace std::deque<std::string> with std::vector<Slice> to pass operands
- Replace MergeContext std::deque with std::vector (based on a simple benchmark I ran https://gist.github.com/IslamAbdelRahman/78fc86c9ab9f52b1df791e58943fb187)
- Allow FullMergeV2 output to be an existing operand
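A sketch of the new interface with a minimal "max" operator (interface per this diff; the operator body is illustrative):

```
#include <rocksdb/merge_operator.h>

class MaxOperator : public rocksdb::MergeOperator {
 public:
  bool FullMergeV2(const MergeOperationInput& merge_in,
                   MergeOperationOutput* merge_out) const override {
    rocksdb::Slice max;
    if (merge_in.existing_value != nullptr) {
      max = *merge_in.existing_value;
    }
    for (const rocksdb::Slice& op : merge_in.operand_list) {
      if (max.compare(op) < 0) {
        max = op;
      }
    }
    // Point at the winning operand instead of copying it out -- this is
    // the memcpy elimination the diff enables.
    merge_out->existing_operand = max;
    return true;
  }

  const char* Name() const override { return "MaxOperator"; }
};
```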

```
[Everything in Memtable | 10K operands | 10 KB each | 1 operand per key]

DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="mergerandom,readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --merge_keys=10000 --num=10000 --disable_auto_compactions --value_size=10240 --write_buffer_size=1000000000

[FullMergeV2]
readseq      :       0.607 micros/op 1648235 ops/sec; 16121.2 MB/s
readseq      :       0.478 micros/op 2091546 ops/sec; 20457.2 MB/s
readseq      :       0.252 micros/op 3972081 ops/sec; 38850.5 MB/s
readseq      :       0.237 micros/op 4218328 ops/sec; 41259.0 MB/s
readseq      :       0.247 micros/op 4043927 ops/sec; 39553.2 MB/s

[master]
readseq      :       3.935 micros/op 254140 ops/sec; 2485.7 MB/s
readseq      :       3.722 micros/op 268657 ops/sec; 2627.7 MB/s
readseq      :       3.149 micros/op 317605 ops/sec; 3106.5 MB/s
readseq      :       3.125 micros/op 320024 ops/sec; 3130.1 MB/s
readseq      :       4.075 micros/op 245374 ops/sec; 2400.0 MB/s
```

```
[Everything in Memtable | 10K operands | 10 KB each | 10 operand per key]

DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="mergerandom,readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --merge_keys=1000 --num=10000 --disable_auto_compactions --value_size=10240 --write_buffer_size=1000000000

[FullMergeV2]
readseq      :       3.472 micros/op 288018 ops/sec; 2817.1 MB/s
readseq      :       2.304 micros/op 434027 ops/sec; 4245.2 MB/s
readseq      :       1.163 micros/op 859845 ops/sec; 8410.0 MB/s
readseq      :       1.192 micros/op 838926 ops/sec; 8205.4 MB/s
readseq      :       1.250 micros/op 800000 ops/sec; 7824.7 MB/s

[master]
readseq      :      24.025 micros/op 41623 ops/sec;  407.1 MB/s
readseq      :      18.489 micros/op 54086 ops/sec;  529.0 MB/s
readseq      :      18.693 micros/op 53495 ops/sec;  523.2 MB/s
readseq      :      23.621 micros/op 42335 ops/sec;  414.1 MB/s
readseq      :      18.775 micros/op 53262 ops/sec;  521.0 MB/s

```

```
[Everything in Block cache | 10K operands | 10 KB each | 1 operand per key]

[FullMergeV2]
$ DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --num=100000 --db="/dev/shm/merge-random-10K-10KB" --cache_size=1000000000 --use_existing_db --disable_auto_compactions
readseq      :      14.741 micros/op 67837 ops/sec;  663.5 MB/s
readseq      :       1.029 micros/op 971446 ops/sec; 9501.6 MB/s
readseq      :       0.974 micros/op 1026229 ops/sec; 10037.4 MB/s
readseq      :       0.965 micros/op 1036080 ops/sec; 10133.8 MB/s
readseq      :       0.943 micros/op 1060657 ops/sec; 10374.2 MB/s

[master]
readseq      :      16.735 micros/op 59755 ops/sec;  584.5 MB/s
readseq      :       3.029 micros/op 330151 ops/sec; 3229.2 MB/s
readseq      :       3.136 micros/op 318883 ops/sec; 3119.0 MB/s
readseq      :       3.065 micros/op 326245 ops/sec; 3191.0 MB/s
readseq      :       3.014 micros/op 331813 ops/sec; 3245.4 MB/s
```

```
[Everything in Block cache | 10K operands | 10 KB each | 10 operand per key]

DEBUG_LEVEL=0 make db_bench -j64 && ./db_bench --benchmarks="readseq,readseq,readseq,readseq,readseq" --merge_operator="max" --num=100000 --db="/dev/shm/merge-random-10-operands-10K-10KB" --cache_size=1000000000 --use_existing_db --disable_auto_compactions

[FullMergeV2]
readseq      :      24.325 micros/op 41109 ops/sec;  402.1 MB/s
readseq      :       1.470 micros/op 680272 ops/sec; 6653.7 MB/s
readseq      :       1.231 micros/op 812347 ops/sec; 7945.5 MB/s
readseq      :       1.091 micros/op 916590 ops/sec; 8965.1 MB/s
readseq      :       1.109 micros/op 901713 ops/sec; 8819.6 MB/s

[master]
readseq      :      27.257 micros/op 36687 ops/sec;  358.8 MB/s
readseq      :       4.443 micros/op 225073 ops/sec; 2201.4 MB/s
readseq      :       5.830 micros/op 171526 ops/sec; 1677.7 MB/s
readseq      :       4.173 micros/op 239635 ops/sec; 2343.8 MB/s
readseq      :       4.150 micros/op 240963 ops/sec; 2356.8 MB/s
```

Test Plan: COMPILE_WITH_ASAN=1 make check -j64

Reviewers: yhchiang, andrewkr, sdong

Reviewed By: sdong

Subscribers: lovro, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D57075
2016-07-20 09:49:03 -07:00
sdong
e70ba4e40e MemTable::PostProcess() can skip updating num_deletes if the delta is 0
Summary: In many use cases there are no deletes. No need to pay the overhead of atomically updating num_deletes in that case.
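A minimal sketch of the guard (member names simplified from the real MemTable):

```
#include <atomic>
#include <cstdint>

struct Counters {
  std::atomic<uint64_t> num_deletes{0};

  void PostProcess(uint64_t delete_delta) {
    if (delete_delta != 0) {  // skip the atomic RMW when there are no deletes
      num_deletes.fetch_add(delete_delta, std::memory_order_relaxed);
    }
  }
};
```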

Test Plan: Run existing test.

Reviewers: ngbronson, yiwu, andrewkr, igor

Reviewed By: andrewkr

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60555
2016-07-19 18:10:18 -07:00
sdong
2a282e5f54 DBTablePropertiesTest.GetPropertiesOfTablesInRange: Fix Flaky
Summary:
There is a possibility that there is no L0 file after writing the data. Generate an L0 file to make it work.

Test Plan: Run the test many times.

Reviewers: andrewkr, yiwu

Reviewed By: yiwu

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60825
2016-07-19 15:46:20 -07:00
krad
d9cfaa2b16 Persistent Read Cache (6) Persistent cache tier implementation - File layout
Summary:
Persistent cache tier is the tier abstraction that can work for any block
device mounted on a file system. The design/implementation can
handle any generic block device.

Support for any generic block device is achieved by generalizing the access pattern as
{io-size, q-depth, direct-io/buffered}.

We have specifically tested and adapted the IO path for NVM and SSD.

Persistent cache tier consists of three parts:

1) File layout

Provides the implementation for handling IO path for reading and writing data
(key/value pair).

2) Meta-data
Provides the implementation for handling the index for persistent read cache.

3) Implementation
It binds (1) and (2) and fleshes out the PersistentCacheTier interface.

This patch provides the implementation for (1) and (2). A follow-up patch will provide (3)
and tests.

Test Plan: Compile and run check

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D57117
2016-07-19 12:01:46 -07:00
John Alexander
9430333f84 New Statistics to track Compression/Decompression (#1197)
* Added new statistics and refactored to allow ioptions to be passed around as required to access environment and statistics pointers (and, as a convenient side effect, info_log pointer).

* Prevent incrementing compression counter when compression is turned off in options.

* Added two more supported compression types to test code in db_test.cc

* Added new StatsLevel that excludes compression timing.

* Fixed casting error in coding.h

* Fixed CompressionStatsTest for new StatsLevel.

* Removed unused variable that was breaking the Linux build
2016-07-19 09:44:03 -07:00
Alexander Jipa
515b11ffab fixes #1210: rocksdb/java/CMakeLists.txt lacks cmake_minimum_required (#1214)
specifying minimum cmake version so that no warning has to be suppressed
2016-07-18 16:37:49 +02:00
Alexander Jipa
876cb8bfb4 fixes #1212: rocksdbjni maven build does not escape slashes in groovy script (#1213)
using gmaven property so that script is not broken by embedding unescaped Windows-style path
2016-07-17 09:43:34 +02:00
sdong
21c55bdb6e DBTest.DynamicLevelCompressionPerLevel: Tune Threshold
Summary: Each SST's file size increases after we add more table properties. The threshold in DBTest.DynamicLevelCompressionPerLevel needs to be adjusted accordingly to avoid occasional failures.

Test Plan: Run the test

Reviewers: andrewkr, yiwu

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60819
2016-07-15 16:10:09 -07:00
Yi Wu
4b95253587 Refactor cache.cc
Summary: Refactor cache.cc so that I can plug in clock cache (D55581). Mainly move `ShardedCache` to a separate file, move `LRUHandle` back to cache.cc, and rename it lru_cache.cc.

Test Plan:
    make check -j64

Reviewers: lightmark, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D59655
2016-07-15 10:41:36 -07:00
Igor Canadi
c6a8665b37 Update LANGUAGE-BINDINGS.md 2016-07-15 13:09:30 +02:00
Wanning Jiang
880ee363eb ldb backup support
Summary: Add backup support to the ldb tool, and use it to run a backup load test on two HDFS envs.

Test Plan: First generate some db, then compile against the load test in fbcode, and run load_test --db=<db path> backup --backup_env_uri=<URI of backup env> --backup_dir=<backup directory> --num_threads=<number of threads>

Reviewers: andrewkr

Reviewed By: andrewkr

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60633
2016-07-14 14:09:31 -07:00
sdong
6797e6ffac Avoid updating memtable allocated bytes if write_buffer_size is not set
Summary: If neither options.write_buffer_size nor options.write_buffer_manager is set, there is no need to update the bytes-allocated counter in MemTableAllocator, which is expensive in the parallel memtable insert case. Removing it improves parallel memtable insert throughput by 10% with write batch size 128.

Test Plan:
Run benchmarks
TEST_TMPDIR=/dev/shm/ ./db_bench --benchmarks=fillrandom -disable_auto_compactions -level0_slowdown_writes_trigger=9999 -level0_stop_writes_trigger=9999 -num=10000000 --writes=1000000 -max_background_flushes=16 -max_write_buffer_number=16 --threads=32 --batch_size=128   -allow_concurrent_memtable_write -enable_write_thread_adaptive_yield

The throughput grows 10% with the benchmark.

Reviewers: andrewkr, yiwu, IslamAbdelRahman, igor, ngbronson

Reviewed By: ngbronson

Subscribers: ngbronson, leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60465
2016-07-13 19:33:57 -07:00
Aaron Gao
dda6c72ac8 Add DestroyColumnFamilyHandle(ColumnFamilyHandle**) to db.h
Summary:
Add DestroyColumnFamilyHandle(ColumnFamilyHandle**) to close a column family instead of deleting the cfh* directly.
Users should call this to close a CF, so we can detect the deletion in this function.
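A hedged usage sketch; the title shows a double-pointer form, while the sketch assumes the single-pointer form that db.h ultimately exposes. Error handling omitted.

```
#include <rocksdb/db.h>

void CloseCf(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cfh) {
  // Instead of `delete cfh;` -- lets the DB observe the handle's teardown.
  db->DestroyColumnFamilyHandle(cfh);
}
```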

Test Plan: make all check -j64

Reviewers: andrewkr, yiwu, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60765
2016-07-13 17:59:25 -07:00
Andrew Kryczka
56222f57df Avoid FileMetaData copy
Summary: as titled

Test Plan: unit tests

Reviewers: sdong, lightmark

Reviewed By: lightmark

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60597
2016-07-13 15:36:22 -07:00
Jay Edgar
15b7a4ab88 Fixed output size and removed unneeded loop
Summary: In Zlib_Compress and BZip2_Compress the calculation for size was slightly off when using compression_format_version 2 (which includes the decompressed size in the output). Also there were unnecessary loops around the deflate/BZ2_bzCompress calls. In Zlib_Compress there was also a possible exit from the function after calling deflateInit2 that didn't call deflateEnd.
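A hedged sketch of the leak shape and an RAII guard that runs deflateEnd() on every exit path (zlib API; the guard class is illustrative, not this diff's exact fix):

```
#include <zlib.h>

class DeflateGuard {
 public:
  explicit DeflateGuard(z_stream* s) : s_(s) {}
  ~DeflateGuard() { deflateEnd(s_); }
 private:
  z_stream* s_;
};

bool CompressSetup(int level, int window_bits, int mem_level) {
  z_stream stream = {};
  if (deflateInit2(&stream, level, Z_DEFLATED, window_bits, mem_level,
                   Z_DEFAULT_STRATEGY) != Z_OK) {
    return false;  // init failed: nothing to clean up
  }
  DeflateGuard guard(&stream);
  // ... any early return after this point still runs deflateEnd() ...
  return true;
}
```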

Test Plan: Standard tests

Reviewers: sdong, IslamAbdelRahman, igor

Reviewed By: igor

Subscribers: sdong, IslamAbdelRahman, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D60537
2016-07-13 09:10:06 -07:00
Yi Wu
6ea41f8527 Fix deadlock when trying update options when write stalls
Summary:
When writes stall because auto compaction is disabled, or the stop-write trigger is reached,
the user may change these two options to unblock writes. Unfortunately we had an issue where the write
thread would block the attempt to persist the options, thus creating a deadlock. This diff
fixes the issue and adds two test cases to detect such a deadlock.
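A hedged sketch of the user-side calls involved in the stall scenario (option names are real; values illustrative):

```
#include <rocksdb/db.h>

void UnblockWrites(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf) {
  // Either of these can lift a stall caused by disabled auto compactions
  // or a reached stop-writes trigger; persisting the change must not wait
  // on the stalled writers.
  db->SetOptions(cf, {{"disable_auto_compactions", "false"}});
  db->SetOptions(cf, {{"level0_stop_writes_trigger", "40"}});
}
```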

Test Plan:
Run unit tests.

Also, revert db_impl.cc to master (but don't revert `DBImpl::BackgroundCompaction:Finish` sync point) and run db_options_test. Both tests should hit deadlock.

Reviewers: sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D60627
2016-07-12 15:30:38 -07:00
Jay Edgar
efd013d6d8 Miscellaneous performance improvements
Summary:
I was investigating performance issues in the SstFileWriter and found all of the following:

- The SstFileWriter::Add() function created a local InternalKey every time it was called, generating an allocation and a free each time.  Changed it to have an InternalKey member variable that can be reset with the new InternalKey::Set() function.
- In SstFileWriter::Add() the smallest_key and largest_key values were assigned the result of a ToString() call, but it is simpler to just assign them directly from the user's key.
- The Slice class had no move constructor, so each time one was returned from a function a new one had to be allocated, the old data copied to the new, and the old one freed.  I added the move constructor, which also required a copy constructor and an assignment operator.
- The BlockBuilder::CurrentSizeEstimate() function calculated the current estimated size, but was being called 2 or 3 times for each key added.  I changed the class to maintain a running estimate (equal to the original calculation) so that the function can return an already-calculated value (sketched below).
- The code in BlockBuilder::Add() that calculated the shared bytes between the last key and the new key duplicated what Slice::difference_offset does, so I replaced it with the standard function.
- BlockBuilder::Add() had code to copy just the changed portion into the last key value (and asserted that it now matched the new key).  It is more efficient just to copy the whole new key over.
- Moved this same code up into the 'if (use_delta_encoding_)' since the last key value is only needed when delta encoding is on.
- FlushBlockBySizePolicy::BlockAlmostFull calculated a standard deviation value each time it was called, but this information would only change if block_size or block_size_deviation changed, so I created a member variable to hold the value to avoid the calculation each time.
- Each PutVarint??() function has a buffer and calls std::string::append().  Two or three calls in a row could share a buffer and a single call to std::string::append().

Some of these will be helpful outside of the SstFileWriter.  I'm not 100% sure the addition of the move constructor is appropriate, as I wonder why this wasn't done before - maybe because of compiler compatibility?  I tried it on gcc 4.8 and 4.9.
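As one example, a hedged sketch of the running-estimate item above (class and fields illustrative, not the real BlockBuilder):

```
#include <cstddef>
#include <string>

class EstimatingBuilder {
 public:
  void Add(const std::string& key, const std::string& value) {
    buffer_ += key;
    buffer_ += value;
    // Maintain the estimate incrementally instead of recomputing it on
    // every CurrentSizeEstimate() call.
    estimate_ += key.size() + value.size();
  }
  size_t CurrentSizeEstimate() const { return estimate_; }  // O(1) now
 private:
  std::string buffer_;
  size_t estimate_ = 0;
};
```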

Test Plan: The changes should not affect the results so the existing tests should all still work and no new tests were added.  The value of the changes was seen by manually testing the SstFileWriter class through MyRocks and adding timing code to identify problem areas.

Reviewers: sdong, IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D59607
2016-07-12 14:15:32 -07:00