Summary:
Add kTypeBlobIndex value type, which will be used by blob db only, to insert a (key, blob_offset) KV pair. The purpose is to
1. Make it possible to open an existing rocksdb instance as blob db. Existing values will be of kTypeValue type, while values inserted by blob db will be of kTypeBlobIndex type.
2. Make rocksdb able to detect whether the db contains values written by blob db and, if so, return an error.
3. Make it possible for blob db to optionally store values in SST files (with kTypeValue type) or as blob values (with kTypeBlobIndex type).
On write, the root db (DBImpl) basically treats kTypeBlobIndex entries as normal values. On Get, if is_blob is provided, it returns whether the value read is of kTypeBlobIndex type, or returns Status::NotSupported() if is_blob is not provided. On scan, an allow_blob flag is passed, and if the flag is true, whether the value is of kTypeBlobIndex type is returned via iter->IsBlob().
Changes on blob db side will be in a separate patch.
Closes https://github.com/facebook/rocksdb/pull/2886
Differential Revision: D5838431
Pulled By: yiwu-arbug
fbshipit-source-id: 3c5306c62bc13bb11abc03422ec5cbcea1203cca
Summary:
This enables us to cross-build ppc64le RocksJava binaries with a suitably old version of glibc (2.17) on CentOS 7.
Closes https://github.com/facebook/rocksdb/pull/2491
Differential Revision: D5955301
Pulled By: sagar0
fbshipit-source-id: 69ef9746f1dc30ffde4063dc764583d8c7ae937e
Summary:
TSAN shows errors when we grab too many locks at the same time. In the TSAN crash test, make one shard key cover 2^22 keys so that not too many locks will be held at the same time.
Closes https://github.com/facebook/rocksdb/pull/2719
Differential Revision: D5609035
Pulled By: siying
fbshipit-source-id: 930e5d63fff92dbc193dc154c4c615efbdf06c6a
Summary:
Commit 4f81ab38bf got the test wrong.
clang doesn't support a -dumpversion option. By lucky coincidence,
clang and gcc --version both place a version number at the same output location
when --version is passed.
Example output (1st line only).
$ clang --version
clang version 3.9.1 (tags/RELEASE_391/final)
$ gcc --version
gcc (GCC) 6.4.1 20170727 (Red Hat 6.4.1-1)
During the test of the compiler we ensure that a minimum version is met,
as the Makefile doesn't support patterns.
Also, xcode9 doesn't seem affected by https://github.com/facebook/rocksdb/issues/2672
and doesn't have "clang" as the first part of its output, so the
fix implemented here is also Apple clang friendly.
$ clang --version
Apple LLVM version 9.0.0 (clang-900.0.31)
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Closes https://github.com/facebook/rocksdb/pull/2699
Differential Revision: D5600818
Pulled By: yiwu-arbug
fbshipit-source-id: 3b0f2751becb53c1c35468bf29f3f828e7cf2c2a
Summary:
Replace dynamic_cast<> so that users can choose to build with RTTI off, so that they can save several bytes per object and get a tiny bit more memory available.
Some nontrivial changes:
1. Add Comparator::GetRootComparator() to get around the internal comparator hack
2. Add the two experimental functions to DB
3. Add TableFactory::GetOptionString() to avoid unnecessary casting to get the option string
4. Since 3 is done, move the parsing option functions for table factory to table factory files too, to be symmetric.
Closes https://github.com/facebook/rocksdb/pull/2645
Differential Revision: D5502723
Pulled By: siying
fbshipit-source-id: fd13cec5601cf68a554d87bfcf056f2ffa5fbf7c
Summary:
The platform_dependent tests in Travis now build all tests, which is not needed. Only build those tests we need to run.
Closes https://github.com/facebook/rocksdb/pull/2647
Differential Revision: D5513954
Pulled By: siying
fbshipit-source-id: 4d540b146124e70dd25586c47939d19f93655b0a
Summary:
Major changes in this PR:
* Implement CassandraCompactionFilter to remove expired columns and rows (if all columns have expired)
* Move cassandra related code from utilities/merge_operators/cassandra to utilities/cassandra/*
* Switch from unique_ptr to shared_ptr<> for Column membership management in RowValue. Since columns have multiple owners in the Merge and GC process, using shared_ptr helps make RowValue immutable.
* Rename cassandra_merge_test to cassandra_functional_test and add two TTL compaction related tests there.
Closes https://github.com/facebook/rocksdb/pull/2588
Differential Revision: D5430010
Pulled By: wpc
fbshipit-source-id: 9566c21e06de17491d486a68c70f52d501f27687
Summary:
Instead of ignoring UBSan checks, fix the negative shifts in
Hash(). Also add test to make sure the hash values are stable over
time. The values were computed before this change, so the test also
verifies the correctness of the change.
Closes https://github.com/facebook/rocksdb/pull/2546
Differential Revision: D5386369
Pulled By: yiwu-arbug
fbshipit-source-id: 6de4b44461a544d6222cc5d72d8cda2c0373d17e
Summary:
1. The buckifier script assumes each test "foo" comes with a .cc file of the same name (i.e. foo.cc). Update cassandra tests to follow this pattern so that the buckifier script can recognize them.
2. add blob_db_test
Closes https://github.com/facebook/rocksdb/pull/2506
Differential Revision: D5331517
Pulled By: yiwu-arbug
fbshipit-source-id: 86f3eba471fc621186ab44cbd073b6162cde8e57
Summary:
This PR adds support for encrypting data stored by RocksDB when written to disk.
It adds an `EncryptedEnv` override of the `Env` class with matching overrides for sequential&random access files.
The encryption itself is done through a configurable `EncryptionProvider`. This class is asked to create a `BlockAccessCipherStream` for a file. This is where the actual encryption/decryption is done.
Currently there is a Counter mode implementation of `BlockAccessCipherStream` with a `ROT13` block cipher (NOTE the `ROT13` is for demo purposes only!!).
The Counter operation mode uses an initial counter & random initialization vector (IV).
Both are created randomly for each file and stored in a 4K (default size) block that is prefixed to that file. The `EncryptedEnv` implementation is such that clients of the `Env` class do not see this prefix (neither in the data nor in the file size).
The largest part of the prefix block is also encrypted, and there is room left for implementation specific settings/values/keys in there.
To test the encryption, the `DBTestBase` class has been extended to consider a new environment variable called `ENCRYPTED_ENV`. If set, the test will set up an encrypted instance of the `Env` class to use for all tests.
Typically you would run it like this:
```
ENCRYPTED_ENV=1 make check_some
```
There is also an added test that checks that some data inserted into the database is or is not "visible" on disk. With `ENCRYPTED_ENV` active it must not find the plain text strings; with `ENCRYPTED_ENV` unset, it must find them.
Closes https://github.com/facebook/rocksdb/pull/2424
Differential Revision: D5322178
Pulled By: sdwilsh
fbshipit-source-id: 253b0a9c2c498cc98f580df7f2623cbf7678a27f
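A rough usage sketch (assuming the `NewEncryptedEnv()`, `CTREncryptionProvider`, and `ROT13BlockCipher` names from the `rocksdb/env_encryption.h` header this change introduces; the ROT13 cipher is demo-only):
```
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/env_encryption.h"

using namespace rocksdb;

int main() {
  // Demo-only cipher; substitute a real BlockCipher for actual security.
  ROT13BlockCipher cipher(32 /* block size */);
  CTREncryptionProvider provider(cipher);
  std::unique_ptr<Env> encrypted_env(NewEncryptedEnv(Env::Default(), &provider));

  Options options;
  options.create_if_missing = true;
  options.env = encrypted_env.get();  // cipher/provider must outlive the DB

  DB* db = nullptr;
  Status s = DB::Open(options, "/tmp/encrypted_db", &db);
  if (s.ok()) {
    db->Put(WriteOptions(), "key", "value");  // stored encrypted on disk
    delete db;
  }
  return 0;
}
```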
Summary:
Improve write buffer manager in several ways:
1. Size is tracked when an arena block is allocated, rather than on every allocation, so that it better tracks actual memory usage and the tracking overhead is slightly lower.
2. We start to trigger memtable flush when 7/8 of the memory cap is hit, instead of 100%, and make 100% much harder to hit.
3. Allow a cache object to be passed into the buffer manager so that the memory allocated by memtables can be charged there. This can help users have one single memory cap across block cache and memtables.
Closes https://github.com/facebook/rocksdb/pull/2350
Differential Revision: D5110648
Pulled By: siying
fbshipit-source-id: b4238113094bf22574001e446b5d88523ba00017
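A minimal sketch of point 3, assuming the `WriteBufferManager(buffer_size, cache)` constructor and the `DBOptions::write_buffer_manager` field:
```
#include <memory>
#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/write_buffer_manager.h"

using namespace rocksdb;

Options MakeOptionsWithSingleMemoryCap() {
  // One LRU cache, intended to be shared with the block-based table factory.
  std::shared_ptr<Cache> cache = NewLRUCache(1 << 30 /* 1GB */);

  Options options;
  // Cap total memtable memory at 256MB and charge it against the same cache,
  // giving one memory budget across block cache and memtables.
  options.write_buffer_manager =
      std::make_shared<WriteBufferManager>(256 << 20, cache);
  return options;
}
```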
Summary:
Blob db relies on the base db returning the sequence number through the write batch after DB::Write(). However, after recent changes to the write path, DB::Write() no longer returns the sequence number in some cases. Fix it by having WriteBatchInternal::InsertInto() always encode the sequence number into the write batch.
Stacking on #2375.
Closes https://github.com/facebook/rocksdb/pull/2385
Differential Revision: D5148358
Pulled By: yiwu-arbug
fbshipit-source-id: 8bda0aa07b9334ed03ed381548b39d167dc20c33
Summary:
Re-enable blob_db_test with some update:
* Commented out delay at the end of GC tests. Will update the logic later with sync point to properly trigger GC.
* Added some helper functions.
Also update makefiles to include the blob_dump tool.
Closes https://github.com/facebook/rocksdb/pull/2375
Differential Revision: D5133793
Pulled By: yiwu-arbug
fbshipit-source-id: 95470b26d0c1f9592ba4b7637e027fdd263f425c
Summary:
zstd files are downloaded and used as part of JNI build, but are left behind even after doing a `make clean`. This PR updates the `clean` target to remove these zstd files as well.
Closes https://github.com/facebook/rocksdb/pull/2365
Differential Revision: D5123537
Pulled By: sagar0
fbshipit-source-id: a8f355da5ba961aa89d5852e35751ffc35de03ea
Summary:
Added ctags -e to the tags target in the Makefile. It creates an etags file suitable for emacs.
Closes https://github.com/facebook/rocksdb/pull/2193
Differential Revision: D4983535
Pulled By: siying
fbshipit-source-id: 1077ef0676025b8109df37433572533c9e8fe86e
Summary:
-pic seems not to be working in gcc-5 and it is currently broken. Remove it to fix the build.
Closes https://github.com/facebook/rocksdb/pull/2320
Differential Revision: D5082775
Pulled By: siying
fbshipit-source-id: 5055f987353f1417643a394e7ce05905670410a4
Summary:
snprintf is in <stdio.h> and not in namespace std.
Closes https://github.com/facebook/rocksdb/pull/2287
Reviewed By: anirbanr-fb
Differential Revision: D5054752
Pulled By: yiwu-arbug
fbshipit-source-id: 356807ec38f3c7d95951cdb41f31a3d3ae0714d4
Summary:
As an alternative to Vagrant, we can now also use Docker to cross-build RocksDB. The advantages are:
1. The Docker images are fixed; they include all the latest updates and build tools.
2. The Vagrant image required scripts that ran for every build to update CentOS and install the build tools. This led to slow repeat builds; we don't need to do this with Docker as the tools are already in the provided images.
The Docker images I have used have their Docker build files here: https://github.com/evolvedbinary/docker-rocksjava and the images themselves are available from Docker hub: https://hub.docker.com/r/evolvedbinary/rocksjava/
I have added the following targets to the `Makefile`:
1. `rocksdbjavastaticreleasedocker` this uses Docker to perform the cross-builds. It is basically the Docker version of the existing Vagrant `rocksdbjavastaticrelease` target.
2. `rocksdbjavastaticpublishdocker` delegates to `rocksdbjavastaticreleasedocker` and then `rocksdbjavastaticpublishcentral` to upload the artifacts to Maven Central. Equivalent to the existing Vagrant target: `rocksdbjavastaticpublish`
Closes https://github.com/facebook/rocksdb/pull/2278
Differential Revision: D5048206
Pulled By: yiwu-arbug
fbshipit-source-id: 78fa96ef9d966fe09638ed01de282cd4e31961a9
Summary:
When building rocksdb in fbcode using `make`, util/build_version.cc is always updated (gitignore/hgignore doesn't apply because the file is already checked into fbcode). To use the rocksdb makefile from our own makefile, I would like an option to prevent the metadata update, which is of no value for us.
Closes https://github.com/facebook/rocksdb/pull/2264
Differential Revision: D5037846
Pulled By: siying
fbshipit-source-id: 9fa005725c5ecb31d9cbe2e738cbee209591f08a
Summary:
I'm going to add more DB tests for statistics as currently we have very few. I started a file dedicated to this purpose and moved the existing stats-specific tests there.
Closes https://github.com/facebook/rocksdb/pull/2211
Differential Revision: D4951558
Pulled By: ajkr
fbshipit-source-id: 05d11c35079c40ecabdfd2cf5556ccb761f694a4
Summary:
Replacement of #2147
The change was squashed due to a lot of conflicts.
Closes https://github.com/facebook/rocksdb/pull/2194
Differential Revision: D4929799
Pulled By: siying
fbshipit-source-id: 5cd49c254737a1d5ac13f3c035f128e86524c581
Summary:
Previously, the shared library (make shared_lib) was built with only one
compile line, compiling all .cc files and linking the shared library in
one step. That step would often take 10+ minutes on one machine, and
could not take advantage of multiple CPUs (it's only one invocation of
the compiler).
This commit changes the shared_lib build to compile .o files
individually (placing the resulting .o files in the directory
shared-objects) and then link them into the shared library at the end,
similarly to how the java static build (jls) does it.
Tested by making sure that both static and shared libraries work, and by
making sure that "make clean" cleans up the shared-objects directory.
Closes https://github.com/facebook/rocksdb/pull/2165
Differential Revision: D4897121
Pulled By: yiwu-arbug
fbshipit-source-id: 9811e043d1c01e10503593f3489d186c786ee7d7
Summary:
In some CI test environments, compression libraries can't be successfully built. It is still helpful to be able to build RocksDB there. Provide an option to skip downloading and building the compression libraries.
Closes https://github.com/facebook/rocksdb/pull/2135
Differential Revision: D4872617
Pulled By: siying
fbshipit-source-id: bb21ac373bc62a2528cdf1ca4547e05fcae86214
Summary:
siying this is a resubmission of #2081 with the 4th commit fixed. From that commit message:
> Note that the previous use of quotes in PLATFORM_{CC,CXX}FLAGS was
incorrect and caused GCC to produce the incorrect define:
>
> #define ROCKSDB_JEMALLOC -DJEMALLOC_NO_DEMANGLE 1
>
> This was the cause of the Linux build failure on the previous version
of this change.
I've tested this locally, and the Linux build succeeds now.
Closes https://github.com/facebook/rocksdb/pull/2097
Differential Revision: D4839964
Pulled By: siying
fbshipit-source-id: cc51322
Summary:
Move some files under util/ to new directories env/, monitoring/ options/ and cache/
Closes https://github.com/facebook/rocksdb/pull/2090
Differential Revision: D4833681
Pulled By: siying
fbshipit-source-id: 2fd8bef
Summary:
I've needed Env timing measurements a few times now, so finally built something for it.
Closes https://github.com/facebook/rocksdb/pull/2073
Differential Revision: D4811231
Pulled By: ajkr
fbshipit-source-id: 218a249
Summary:
It is confusing to have auto_roll_logger stay under db/, as it has nothing to do with the database. Move filename along with it, as it is a dependency.
Closes https://github.com/facebook/rocksdb/pull/2080
Differential Revision: D4821141
Pulled By: siying
fbshipit-source-id: ca7d768
Summary:
Right now, building rocksdbjava on PowerPC is broken due to the JNI library name. I figured out that "uname -m" and Java's os.arch match on the PowerPC architecture. I made use of this to fix the issue. More info can be found in this issue --> https://github.com/facebook/rocksdb/issues/1317
Closes https://github.com/facebook/rocksdb/pull/2040
Differential Revision: D4779967
Pulled By: siying
fbshipit-source-id: 259f939
Summary:
After we have db_basic_test and external_sst_file_basic_test, we don't need to run db_test and external_sst_file_test in Travis's Mac OS run anymore. Move them out.
Closes https://github.com/facebook/rocksdb/pull/1940
Differential Revision: D4659361
Pulled By: siying
fbshipit-source-id: e64e291
Summary:
Separate the platform-dependent tests from external_sst_file_test. Only those tests need to run on platforms like OSX.
Closes https://github.com/facebook/rocksdb/pull/1923
Differential Revision: D4622461
Pulled By: siying
fbshipit-source-id: d2d6f04
Summary:
Separate a small subset of tests in DBTest into DBBasicTest. Tests in DBTest don't have to run in CI tests on platforms like OSX, as long as they are covered by Linux.
Closes https://github.com/facebook/rocksdb/pull/1924
Differential Revision: D4616702
Pulled By: siying
fbshipit-source-id: 13e6549
Summary:
Travis is short on OSX resources. Try to move platform-independent test suites out of the OSX run.
Closes https://github.com/facebook/rocksdb/pull/1922
Differential Revision: D4616070
Pulled By: siying
fbshipit-source-id: 786342c
Summary:
Valgrind tests always time out with the parallel run. Blacklist some of the slowest ones. It is better to run fewer tests than to always have the tests time out.
Closes https://github.com/facebook/rocksdb/pull/1908
Differential Revision: D4607875
Pulled By: siying
fbshipit-source-id: 7062664
Summary:
The previous version of zlib is no longer available. I have also updated the versions of the other static libraries and added checksum checks for the downloads. This is related to https://github.com/facebook/rocksdb/issues/1769
Closes https://github.com/facebook/rocksdb/pull/1863
Differential Revision: D4550742
Pulled By: yiwu-arbug
fbshipit-source-id: 4414150
Summary:
The Env registration framework supports registering client Envs and selecting which one to instantiate according to a text field. This enabled things like adding the -env_uri argument to db_bench, so the same binary could be reused with different Envs just by changing CLI config.
Now this problem has come up again in a non-Env context, as I want to instantiate a client Statistics implementation from db_bench, which is configured entirely via text parameters. Also, in the future we may wish to use it for deserializing client objects when loading OPTIONS file.
This diff generalizes the Env registration logic to work with arbitrary types.
- Generalized registration and instantiation code by templating them
- The entire implementation is in a header file as that's Google style guide's recommendation for template definitions
- Pattern match with std::regex_match rather than checking prefix, which was the previous behavior
- Rename functions/files to be non-Env-specific
Closes https://github.com/facebook/rocksdb/pull/1776
Differential Revision: D4421933
Pulled By: ajkr
fbshipit-source-id: 34647d1
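For illustration only, here is a self-contained sketch of the generalized registration technique described above (templated registrars plus `std::regex_match`); the `Registry`/`Registrar` names below are hypothetical, not RocksDB's actual header:
```
#include <functional>
#include <regex>
#include <string>
#include <utility>
#include <vector>

// Hypothetical names; a sketch of the technique, not RocksDB's real API.
template <typename T>
class Registry {
 public:
  using Factory = std::function<T*(const std::string& uri)>;

  static Registry* Instance() {
    static Registry instance;
    return &instance;
  }

  void Register(const std::string& pattern, Factory factory) {
    entries_.push_back({std::regex(pattern), std::move(factory)});
  }

  // Full-pattern match (std::regex_match) rather than a prefix check.
  T* NewObject(const std::string& uri) const {
    for (const auto& e : entries_) {
      if (std::regex_match(uri, e.pattern)) {
        return e.factory(uri);
      }
    }
    return nullptr;
  }

 private:
  struct Entry {
    std::regex pattern;
    Factory factory;
  };
  std::vector<Entry> entries_;
};

// Registers a factory during static initialization, so any type linked into
// the binary can make itself instantiable from a text parameter.
template <typename T>
struct Registrar {
  Registrar(const std::string& pattern, typename Registry<T>::Factory f) {
    Registry<T>::Instance()->Register(pattern, std::move(f));
  }
};
```
With this pattern, a client Statistics implementation could be registered with `static Registrar<Statistics> reg("stats://.*", factory);` and instantiated in db_bench via `Registry<Statistics>::Instance()->NewObject(uri)`.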
Summary:
Cleanable objects will perform the registered cleanups when
they are destructed. However, we sometimes would rather delay this cleaning, for example when
we are gathering the merge operands. The current approach is to create the
Cleanable object on the heap (instead of on the stack) and delay deleting it.
By allowing Cleanables to delegate their cleanups to another cleanable
object we can delay the cleaning without the need to create the
cleanable object on the heap and keep it around. This patch applies this
technique to the cleanups of BlockIter and shows improved performance
for some in-memory benchmarks:
+1.8% for the merge workload, +6.4% for the non-merge workload when the merge
operator is specified.
https://our.intern.facebook.com/intern/tasks?t=15168163
Non-merge benchmark:
TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench --benchmarks=fillrandom
--num=1000000 -value_size=100 -compression_type=none
Reading random with no merge operator specified:
TEST_TMPDIR=/dev/shm/v100nocomp/ ./db_bench
--benchmarks="read
Closes https://github.com/facebook/rocksdb/pull/1711
Differential Revision: D4361163
Pulled By: maysamyabandeh
fbshipit-source-id: 9801e07
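A minimal sketch of the delegation idea, assuming `Cleanable`'s public `RegisterCleanup()` and `DelegateCleanupsTo()` members (Cleanable is declared alongside Iterator):
```
#include <cstdio>
#include "rocksdb/iterator.h"  // Cleanable is declared here

using namespace rocksdb;

static void ReleaseBuffer(void* arg1, void* /*arg2*/) {
  delete[] static_cast<char*>(arg1);
  std::printf("buffer released\n");
}

void Demo() {
  char* buf = new char[128];

  Cleanable inner;
  inner.RegisterCleanup(ReleaseBuffer, buf, nullptr);

  {
    Cleanable outer;
    // Hand the pending cleanup over to `outer`: it now runs when `outer`
    // is destroyed instead of when `inner` is destroyed.
    inner.DelegateCleanupsTo(&outer);
  }  // "buffer released" is printed here

  // `inner` has nothing left to clean up when it goes out of scope.
}
```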
Summary:
Added a tombstone-collapsing mode to RangeDelAggregator, which eliminates overlap in the TombstoneMap. In this mode, we can check whether a tombstone covers a user key using upper_bound() (i.e., binary search). However, the tradeoff is the overhead to add tombstones is now higher, so at first I've only enabled it for range scans (compaction/flush/user iterators), where we expect a high number of calls to ShouldDelete() for the same tombstones. Point queries like Get() will still use the linear scan approach.
Also in this diff I changed RangeDelAggregator's TombstoneMap to use multimap with user keys instead of map with internal keys. Callers sometimes provided ParsedInternalKey directly, from which it would've required string copying to derive an internal key Slice with which we could search the map.
Closes https://github.com/facebook/rocksdb/pull/1614
Differential Revision: D4270397
Pulled By: ajkr
fbshipit-source-id: 93092c7
Summary:
The iterator should be in corrupted status if the merge operator returns false.
Also add a test to make sure that if max_successive_merges is hit during write,
data will not be lost.
Closes https://github.com/facebook/rocksdb/pull/1665
Differential Revision: D4322695
Pulled By: yiwu-arbug
fbshipit-source-id: b327b05
Summary:
Currently when running a portable build we have to do the following:
PORTABLE=1 make ...
This commit adds support for the following:
make PORTABLE=1 ...
This might seem subtle, but it makes PORTABLE like all other
makefile args and simplifies invocation from numerous build systems,
including things like ExternalProject_Add in cmake.
Signed-off-by: Bassam Tabbara <bassam.tabbara@quantum.com>
Closes https://github.com/facebook/rocksdb/pull/1643
Differential Revision: D4315870
Pulled By: yiwu-arbug
fbshipit-source-id: ee43755
Summary:
The second variable "SHELL" simply tells make explicitly which shell to use, instead of allowing it to default to "/bin/sh", which may or may not be Bash.
However, simply defining the second variable by itself causes make to throw an error concerning a circular definition, as it would be attempting to use the "shell" command while simultaneously trying to set which shell to use. Thus, the first variable "BASH_EXISTS" is defined such that make already knows about "/path/to/bash" before trying to use it to set "SHELL".
A more technically correct solution would be to edit the makefile itself to make it compatible with non-bash shells (see the original Issue discussion for details). However, as it seems very few of the people working on this project were building with non-bash shells, I figured this solution would be good enough.
Closes https://github.com/facebook/rocksdb/pull/1631
Differential Revision: D4295689
Pulled By: yiwu-arbug
fbshipit-source-id: e4f9532
Summary:
Travis now is building for ldb tests. Disable for now to unblock other tests while we are investigating.
Closes https://github.com/facebook/rocksdb/pull/1546
Differential Revision: D4209404
Pulled By: siying
fbshipit-source-id: 47edd97
Summary:
This diff includes an implementation of CompactionFilter that allows
users to write a CompactionFilter in Lua. With this ability, users can
dynamically change compaction filter logic without rebuilding
the rocksdb binary and restarting the database.
To compile, WITH_LUA_PATH must be set to the base directory
of Lua.
Closes https://github.com/facebook/rocksdb/pull/1478
Differential Revision: D4150138
Pulled By: yhchiang
fbshipit-source-id: ed84222
Summary:
Currently our skip-list has an optimization to speed up sequential
inserts from a single stream, by remembering the last insert position.
We extend the idea to support sequential inserts from multiple streams,
and even tolerate small reordering within each stream.
This PR is the interface part, adding the following:
- Add `memtable_insert_prefix_extractor` to allow specifying a prefix for each key.
- Add an `InsertWithHint()` interface to memtable, to allow the underlying
implementation to return a hint of the insert position, which can later be
passed back to optimize inserts.
- Memtable will maintain a map from prefix to hints and pass the hint
via `InsertWithHint()` if `memtable_insert_prefix_extractor` is non-null.
Closes https://github.com/facebook/rocksdb/pull/1419
Differential Revision: D4079367
Pulled By: yiwu-arbug
fbshipit-source-id: 3555326
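A hedged sketch of wiring this up; the option is assumed here under the name `memtable_insert_with_hint_prefix_extractor` used by the public options (the message above abbreviates it), together with the existing `NewCappedPrefixTransform()`:
```
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"

using namespace rocksdb;

ColumnFamilyOptions MakeHintedInsertOptions() {
  ColumnFamilyOptions cf_options;
  // Keys sharing the same 8-byte prefix form one "stream"; the memtable keeps
  // an insert hint per prefix to speed up sequential inserts in each stream.
  cf_options.memtable_insert_with_hint_prefix_extractor.reset(
      NewCappedPrefixTransform(8));
  return cf_options;
}
```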
* util/build_version.cc.in: add this file, so cmake and make can share the
template file for generating util/build_version.cc.
* CMakeLists.txt: also, cmake v2.8.11 does not support file(GENERATE ...),
so we are using configure_file() for creating build_version.cc.
* Makefile: use util/build_version.cc.in for creating build_version.cc.
Signed-off-by: Kefu Chai <tchaikov@gmail.com>
Summary:
Valgrind does not work well with JEMALLOC. If you run
a simple make valgrind_check, you will see lots of issues and
crashes. When precommit runs, this is taken care of. Here we
make sure valgrind_check is run with DISABLE_JEMALLOC=1.
Test Plan: Ran local valgrind_test and noticed the difference
Reviewers: IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D65379
Summary:
Revert the behavior where we don't read the sequence id from the WAL, but increase it as we replay the log. We still keep the behavior for 2PC for now but will fix it later.
This change fixes github issue 1339, where some writes come with WAL disabled and we may recover records with wrong sequence id.
Test Plan: Added unit test.
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D64275
Summary:
I just realized that when we run parallel valgrind we actually don't run the parallel tests under valgrind (we run them normally).
This patch make sure that we run both parallel and non-parallel tests with valgrind
Test Plan: DISABLE_JEMALLOC=1 make valgrind_check -j64
Reviewers: andrewkr, yiwu, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D62469
Summary:
Add mid-point insertion functionality to LRU cache. Caller of `Cache::Insert()` can set an additional parameter to make a cache entry have higher priority. The LRU cache will reserve at most `capacity * high_pri_pool_pct` bytes for high-pri cache entries. If `high_pri_pool_pct` is zero, the cache degenerates to normal LRU cache.
Context: If we are to put index and filter blocks into RocksDB block cache, index/filter block can be swap out too early. We want to add an option to RocksDB to reserve some capacity in block cache just for index/filter blocks, to mitigate the issue.
In later diffs I'll update block based table reader to use the interface to cache index/filter blocks at high priority, and expose the option to `DBOptions` and make it dynamic changeable.
Test Plan: unit test.
Reviewers: IslamAbdelRahman, sdong, lightmark
Reviewed By: lightmark
Subscribers: andrewkr, dhruba, march, leveldb
Differential Revision: https://reviews.facebook.net/D61977
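A minimal usage sketch, assuming the pool fraction is exposed as the `high_pri_pool_ratio` argument of `NewLRUCache()` (the message calls the knob `high_pri_pool_pct`) and that `Cache::Insert()` takes a `Cache::Priority`:
```
#include <string>
#include "rocksdb/cache.h"

using namespace rocksdb;

static void DeleteStringValue(const Slice& /*key*/, void* value) {
  delete static_cast<std::string*>(value);
}

void Demo() {
  // Reserve half of the capacity for high-priority entries.
  std::shared_ptr<Cache> cache =
      NewLRUCache(64 << 20 /* capacity */, 6 /* num_shard_bits */,
                  false /* strict_capacity_limit */,
                  0.5 /* high_pri_pool_ratio */);

  std::string* value = new std::string("filter block bytes");
  // Insert at high priority so the entry sits above the LRU mid-point and
  // outlives low-priority (e.g. data block) entries under pressure.
  Status s = cache->Insert("filter_block_key", value, value->size(),
                           &DeleteStringValue, nullptr /* handle */,
                           Cache::Priority::HIGH);
}
```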
Summary:
This is a proof of concept of a RocksDB blob log file. The actual value of the Put() is appended to a blob log using normal data block format, and the handle of the block is written as the value of the key in RocksDB.
The prototype only supports Put() and Get(). It doesn't support DB restart, garbage collection, Write() call, iterator, snapshots, etc.
Test Plan: Add unit tests.
Reviewers: arahut
Reviewed By: arahut
Subscribers: kradhakrishnan, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61485
Summary: With read_options.background_purge_on_iterator_cleanup=true, file deletion and closing can still happen in the forward iterator, or when closing a WAL file. Cover those cases too.
Test Plan: I am adding unit tests.
Reviewers: andrewkr, IslamAbdelRahman, yiwu
Reviewed By: yiwu
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61503
Summary:
compact_on_deletion_collector_test does not support --gtest_list_tests
since it isn't gtest, so the full program would run for the target
gen_parallel_tests. This caused gen_parallel_tests to take 8+ minutes for tsan
and prevented compact_on_deletion_collector_test from running during check_0
since no t/run-* script could be generated.
Test Plan:
run make check, verify generating t/run-* scripts is fast and
./compact_on_deletion_collector_test is now run
Reviewers: IslamAbdelRahman, wanning, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D61695
Summary: Implement a time series database that supports DateTieredCompactionStrategy. It wraps a db object and separates SST files into different column families (time windows).
Test Plan: Add `date_tiered_test`.
Reviewers: dhruba, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D61653
Summary: Add a utility function that triggers the necessary full compaction and puts the output at the correct level by looking at the new options and old options.
Test Plan: Add unit tests for it.
Reviewers: andrewkr, igor, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: muthu, sumeet, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D60783
Summary:
parallel tests are broken because gnu_parallel is reading deprecated options from `/etc/parallel/config`
Fix this by passing `--plain` to ignore `/etc/parallel/config`
Test Plan: make check -j64
Reviewers: kradhakrishnan, sdong, andrewkr, yiwu, arahut
Reviewed By: arahut
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61359
Summary:
Experiments on column-aware encodings. Supported features: 1) extract data blocks from SST file and encode with specified encodings; 2) Decode encoded data back into row format; 3) Directly extract data blocks and write in row format (without prefix encoding); 4) Get column distribution statistics for column format; 5) Dump data blocks separated by columns in human-readable format.
There is still on-going work on this diff. More refactoring is necessary.
Test Plan: Wrote tests in `column_aware_encoding_test.cc`. More tests should be added.
Reviewers: sdong
Reviewed By: sdong
Subscribers: arahut, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D60027
Summary:
Removing moreutils from sandcastle and adding gnu parallel.
Then passing in J= nproc command
Test Plan: Testing on sandcastle
Reviewers: sdong, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61017
Summary:
TickersNameMap is not consistent with the Tickers enum.
This causes us to report wrong statistics and sometimes to access TickersNameMap outside its boundary, causing crashes (in Fb303 statistics).
Test Plan: added new unit test
Reviewers: sdong, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61083
Summary:
The TSAN crash white-box test was disabled because it never ends. Re-enable it with reduced killing odds.
Add a parameter in the crash test script to allow us to pass it through an environment variable.
Test Plan: Run it manually.
Reviewers: IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D61053
Summary: Multiput atomicity is broken across multiple column families if we don't sync the WAL before flushing one column family. The WAL file may contain a write batch containing writes to a key in the CF to be flushed and a key in another CF. If we don't sync the WAL before flushing, and the machine crashes after flushing, the write batch will only be partially recovered. Data in other CFs is lost.
Test Plan: Add a new unit test which will fail without the diff.
Reviewers: yhchiang, IslamAbdelRahman, igor, yiwu
Reviewed By: yiwu
Subscribers: yiwu, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D60915
Summary: RocksDB behavior is slightly different between data on tmpfs and normal file systems. Add a test case to run RocksDB on normal file system.
Test Plan: See the tests launched by Phabricator
Reviewers: kradhakrishnan, IslamAbdelRahman, gunnarku
Reviewed By: gunnarku
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D60963
EnvLibrados is a customized RocksDB Env that uses RADOS as the backend file system of RocksDB. It overrides all file-system-related APIs of the default Env. The easiest way to use it is as follows:
std::string db_name = "test_db";
std::string config_path = "path/to/ceph/config";
DB* db;
Options options;
options.env = EnvLibrados(db_name, config_path);
Status s = DB::Open(options, kDBPath, &db);
Then EnvLibrados will forward all file read/write operations to the RADOS cluster assigned by config_path. The default pool is db_name+"_pool".
There are some options that users can set for EnvLibrados:
- write_buffer_size. This variable is the max buffer size for WritableFile. After reaching buffer_max_size, EnvLibrados will sync the buffer content to RADOS, then clear the buffer.
- db_pool. Rather than using the default pool, users can set their own db pool name.
- wal_dir. The dir for WAL files. Because RocksDB only has a 2-level structure (dir_name/file_name), the format of wal_dir is "/dir_name" (it CAN'T be "/dir1/dir2"). The default wal_dir is "/wal".
- wal_pool. The corresponding pool name for WAL files. The default value is db_name+"_wal_pool".
An example of setting the options looks like the following:
db_name = "test_db";
db_pool = db_name+"_pool";
wal_dir = "/wal";
wal_pool = db_name+"_wal_pool";
write_buffer_size = 1 << 20;
env_ = new EnvLibrados(db_name, config, db_pool, wal_dir, wal_pool, write_buffer_size);
DB* db;
Options options;
options.env = env_;
// The last level dir name should match the dir name in prefix_pool_map
options.wal_dir = "/tmp/wal";
// open DB
Status s = DB::Open(options, kDBPath, &db);
Librados is required to compile EnvLibrados. Then use "$ make LIBRADOS=1" to compile RocksDB. If you want to only compile the EnvLibrados test, just run "$ make env_librados_test LIBRADOS=1". To run env_librados_test, you need to have a running RADOS cluster with the configuration file located at "../ceph/src/ceph.conf" relative to "rocksdb/".
Summary:
Fix flush not being committed while writing the manifest, which is a recent bug introduced by D60075.
The issue:
1. Options.max_background_flushes > 1
2. Background thread A picks up a flush job, flushes, then commits to the manifest. (Note that the mutex is released before writing the manifest.)
3. Background thread B picks up another flush job and flushes. When it gets to `MemTableList::InstallMemtableFlushResults`, it notices another thread is committing, so it quits.
4. After the first commit, thread A doesn't double check if there are more flush results to commit, leaving the second flush uncommitted.
Test Plan: run the test. Also verify the new test hits the deadlock without the fix.
Reviewers: sdong, igor, lightmark
Reviewed By: lightmark
Subscribers: andrewkr, omegaga, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D60969
The tests run by `make check` require Bash. On Debian you'd need to run
the test as `make SHELL=/bin/bash check`. This commit makes it work on
all POSIX compatible shells (tested on Debian with `dash`).
Summary: As Title.
Test Plan: See how the diff works.
Reviewers: kradhakrishnan, andrewkr, gunnarku
Reviewed By: gunnarku
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D60933
Summary:
When writes stall because auto compaction is disabled, or the stop-write trigger is reached,
users may change these two options to unblock writes. Unfortunately we had an issue where the write
thread would block the attempt to persist the options, thus creating a deadlock. This diff
fixes the issue and adds two test cases to detect such a deadlock.
Test Plan:
Run unit tests.
Also, revert db_impl.cc to master (but don't revert `DBImpl::BackgroundCompaction:Finish` sync point) and run db_options_test. Both tests should hit deadlock.
Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D60627
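The two options referenced above are presumably `disable_auto_compactions` and `level0_stop_writes_trigger`; a minimal sketch of the runtime change that previously could deadlock, via `DB::SetOptions()`:
```
#include "rocksdb/db.h"

using namespace rocksdb;

// Unblock a stalled write path at runtime; SetOptions() also persists the new
// values to the options file, which is the step that used to deadlock.
Status UnblockWrites(DB* db) {
  return db->SetOptions({{"disable_auto_compactions", "false"},
                         {"level0_stop_writes_trigger", "40"}});
}
```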
Summary: In D33849 we updated Makefile to generate .d files for all .cc sources. Since we have more types of source files now, this needs to be updated so that this mechanism can work for new files.
Test Plan: change a dependent .h file, re-make and see if .o file is recompiled.
Reviewers: sdong, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D60591
Summary:
Update Makefile to show warnings when we have invalid paths in our make_config.mk file
sample output
```
$ make static_lib -j64
Makefile:150: Warning: /mnt/gvfs/third-party2/libgcc/53e0eac8911888a105aa98b9a35fe61cf1d8b278/4.9.x/gcc-4.9-glibc-2.20/024dbc3/libs dont exist
Makefile:150: Warning: /mnt/gvfs/third-party2/llvm-fb/b91de48a4974ec839946d824402b098d43454cef/stable/centos6-native/7aaccbe/../../src/clang/tools/scan-build/scan-build dont exist
GEN util/build_version.cc
```
Test Plan: check that warning is printed visually
Reviewers: sdong, yiwu, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D59523
Summary:
This provides an implementation of PersistentCacheTier that is
specialized for RAM. This tier does not persist data though.
Why do we need this tier?
This is ideal as tier 0. This tier can host data that is too hot.
Why can't we use Cache variants?
Yes, you can use them instead. This tier can potentially outperform BlockCache
in RAW mode by virtue of compression, and compressed cache in block cache doesn't
seem very popular. Potentially this tier can be modified to understand the
disadvantages of the tier below and retain data that the tier below is bad at
handling (for example index and bloom data that is huge in size).
Test Plan: Run unit tests added
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D57069
Summary:
- Provide env_test as a static library. We will build it for future releases so internal Envs can use env_test by linking against this library.
- Add tests for CustomEnv, which is configurable via ENV_TEST_URI environment variable. It uses the URI-based Env lookup (depends on D58449).
- Refactor env_basic_test cases to use a unique/configurable directory for test files.
Test Plan:
built a test binary against librocksdb_env_test.a. It registered the
default Env with URI prefix "a://".
- verify runs all CustomEnv tests when URI with correct prefix is provided
```
$ ENV_TEST_URI="a://ok" ./tmp --gtest_filter="CustomEnv/*"
...
[ PASSED ] 12 tests.
```
- verify runs no CustomEnv tests when URI with non-matching prefix is provided
```
$ ENV_TEST_URI="b://ok" ./tmp --gtest_filter="CustomEnv/*"
...
[ PASSED ] 0 tests.
```
Reviewers: ldemailly, IslamAbdelRahman, lightmark, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D58485
Summary:
Extracted basic Env-related tests from mock_env_test and memenv_test into a
parameterized test for Envs: env_basic_test.
Depends on D58449. (The dependency is here only so I can keep this series of
diffs in a chain -- there is no dependency on that diff's code.)
Test Plan: ran tests
Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D58635
Summary:
This enables configurable Envs without recompiling. For example, my
next diff will make env_test test an Env created by NewEnvFromUri(). Then,
users can determine which Env is tested simply by providing the URI for
NewEnvFromUri() (e.g., through a CLI argument or environment variable).
The registration process allows us to register any Env that is linked with the
RocksDB library, so we can register our internal Envs as well.
The registration code is inspired by our internal InitRegistry.
Test Plan: new unit test
Reviewers: IslamAbdelRahman, lightmark, ldemailly, sdong
Reviewed By: sdong
Subscribers: leveldb, dhruba, andrewkr
Differential Revision: https://reviews.facebook.net/D58449
Summary:
This patch allows db_bench to initialize its RocksDB Options via an
options file, specified by the --options_file flag. Note that if the
--options_file flag is set, then it has higher priority than the
command-line arguments.
Test Plan: db_bench_tool_test
Reviewers: sdong, IslamAbdelRahman, kradhakrishnan, yiwu, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D58533
Summary:
Currently all the tools are included in librocksdb.a (db_bench is not). With
this separate library, we can access db_bench functionality from our internal
repo and eventually move tools out of librocksdb.a.
Test Plan: built a simple binary against this library that invokes db_bench_tool().
Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D58977
Summary: Deprecate this one option and delete code and tests that are now superfluous.
Test Plan: all tests pass
Reviewers: igor, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: msalib, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55317
Summary:
Introduce MaxOperator, a simple merge operator that returns the max of all operands.
This merge operator helps me in benchmarking.
Test Plan: Add new unit tests
Reviewers: sdong, andrewkr, yhchiang
Reviewed By: yhchiang
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57873
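Not the exact MaxOperator added here, but a minimal sketch of an equivalent associative merge operator that keeps the byte-wise largest operand:
```
#include <string>
#include "rocksdb/merge_operator.h"

using namespace rocksdb;

class MaxBytesOperator : public AssociativeMergeOperator {
 public:
  bool Merge(const Slice& /*key*/, const Slice* existing_value,
             const Slice& value, std::string* new_value,
             Logger* /*logger*/) const override {
    // Keep whichever operand compares larger byte-wise.
    if (existing_value == nullptr || existing_value->compare(value) < 0) {
      new_value->assign(value.data(), value.size());
    } else {
      new_value->assign(existing_value->data(), existing_value->size());
    }
    return true;
  }

  const char* Name() const override { return "MaxBytesOperator"; }
};
```
It would be installed via `options.merge_operator = std::make_shared<MaxBytesOperator>();` before opening the DB.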
Summary:
This is a part of effort to reduce the size of db_test.cc. We move the following tests to a separate file `db_io_failure_test.cc`:
* DropWrites
* DropWritesFlush
* NoSpaceCompactRange
* NonWritableFileSystem
* ManifestWriteError
* PutFailsParanoid
Test Plan: Run `make check` to see if the tests are working properly.
Reviewers: sdong, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D58341
Summary:
We expect the persistent read cache to perform at speeds up to 8 GB/s. In order
to accomplish that, we need to build an index mechanism which operates in the order
of multiple millions of operations per sec.
This patch provide the basic data structure to accomplish that:
(1) Hash table implementation with lock contention spread
It is based on the StripedHashSet<T> implementation in
The Art of Multiprocessor Programming by Maurice Herlihy & Nir Shavit
(2) LRU implementation
Place holder algorithm for further optimizing
(3) Evictable Hash Table implementation
Building block for building index data structure that evicts data like files
etc
TODO:
(1) Figure out if the sharded hash table and LRU can be used instead
(2) Figure out if we need to support a configurable eviction algorithm for
EvictableHashTable
Test Plan: Run unit tests
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D55785
Summary:
Introduced an option to dump malloc statistics using a new option flag.
Added a new command line option to the db_bench tool to enable this
functionality.
Also extended build to support environments with/without jemalloc.
Test Plan:
1) Build rocksdb using `make` command. Launch the following command
`./db_bench --benchmarks=fillrandom --dump_malloc_stats=true
--num=10000000` and verified that the jemalloc dump is present in the LOG file.
2) Build rocksdb using `DISABLE_JEMALLOC=1 make db_bench -j32` and ran
the same db_bench tool and found the following message in LOG file:
"Please compile with jemalloc to enable malloc dump".
3) Also built rocksdb using `make` command on MacOS to verify behavior
in non-FB environment.
Also to debug build configuration change temporary changed
AM_DEFAULT_VERBOSITY = 1 in Makefile to see compiler and build
tools output. For case 1) -DROCKSDB_JEMALLOC was present in compiler
command line. For both 2) and 3) this flag was not present.
Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D57321
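Outside of db_bench, the same behavior can presumably be switched on directly in code (assuming the flag landed as `DBOptions::dump_malloc_stats`):
```
#include "rocksdb/options.h"

using namespace rocksdb;

Options MakeOptions() {
  Options options;
  // Dump malloc statistics into the LOG file alongside the periodic stats
  // dump; only effective when the binary is compiled with jemalloc support.
  options.dump_malloc_stats = true;
  return options;
}
```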
* Musl libc does not provide adaptive mutex. Added feature test for PTHREAD_MUTEX_ADAPTIVE_NP.
* Musl libc does not provide backtrace(3). Added a feature check for backtrace(3).
* Fixed compiler error.
* Musl libc does not implement backtrace(3). Added platform check for libexecinfo.
* Alpine does not appear to support gcc -pg option. By default (gcc has PIE option enabled) it fails with:
gcc: error: -pie and -pg|p|profile are incompatible when linking
When -fno-PIE and -nopie are used it fails with:
/usr/lib/gcc/x86_64-alpine-linux-musl/5.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find gcrt1.o: No such file or directory
Added gcc -pg platform test and output PROFILING_FLAGS accordingly. Replaced pg var in Makefile with PROFILING_FLAGS.
* fix segfault when TEST_IOCTL_FRIENDLY_TMPDIR is undefined and default candidates are not suitable
* use ASSERT_DOUBLE_EQ instead of ASSERT_EQ
* When compiled with ROCKSDB_MALLOC_USABLE_SIZE UniversalCompactionFourPaths and UniversalCompactionSecondPathRatio tests fail due to premature memtable flushes on systems with 16-byte alignment. Arena runs out of block space before GenerateNewFile() completes.
Increased options.write_buffer_size.
Summary:
Generate t/run-* scripts to run tests in $PARALLEL_TEST separately, then make check_0 rule execute all of them.
Run `time make check` after running `make all`.
master: 71 sec
with this diff: 63 sec.
It seems moving more tests to $PARALLEL_TEST doesn't help improve test time though.
Test Plan:
Run the following
make check
J=16 make check
J=1 make check
make valgrind_check
J=1 make valgrind_check
J=16 make valgrind_check
Reviewers: IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: leveldb, kradhakrishnan, dhruba, andrewkr, yhchiang
Differential Revision: https://reviews.facebook.net/D56805
Summary:
Before
{F1131675}
After
{F1131681}
This will have no effect on normal make check
Test Plan: make watch-log
Reviewers: sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56847
Summary:
Extend "J=<parallel>" to valgrind_check.
For DBTest, modify the script to run valgrind. For other tests, prefix launch command with valgrind.
Test Plan: Run valgrind_check with J=1 and J>1 and make sure tests run under valgrind. Manually change codes to introduce memory leak and make sure "make watch-log" correctly report it.
Reviewers: yhchiang, yiwu, andrewkr, kradhakrishnan, IslamAbdelRahman
Reviewed By: kradhakrishnan, IslamAbdelRahman
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56727
Summary:
- Update Makefile check_some command to run a subset of the tests
- Update travis config to split the unittests job into 2 jobs
-- job testing db_test, db_test2
-- job testing the rest of the unittests
Test Plan:
Run the new travis.yml on my own branch
https://travis-ci.org/facebook/rocksdb/builds/122691453
Reviewers: kradhakrishnan, yhchiang, sdong, andrewkr
Reviewed By: andrewkr
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56673
Summary: Test DBOptionsAllFieldsSettable sometimes fails under valgrind. Move the option settable tests to a separate test file and disable it in valgrind.
Test Plan: Run valgrind test and make sure the test doesn't run.
Reviewers: andrewkr, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: kradhakrishnan, yiwu, yhchiang, leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56529
Summary:
Undefined Behavior Sanitizer (ubsan) is //a good thing// which will help us to find sneaky bugs with low cost. Please see http://developerblog.redhat.com/2014/10/16/gcc-undefined-behavior-sanitizer-ubsan/ for more details and official GCC documentation for more context: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html.
Changes itself are quite simple and pretty much imitating whatever is implemented for ASan.
Hooking the UBsan validation build to Sandcastle is a separate step and will be dealt as separate diff because code is in internal repository.
Test Plan: Make sure that there are no regressions when it comes to builds and test pass rate.
Reviewers: leveldb, sdong, IslamAbdelRahman, yhchiang
Reviewed By: yhchiang
Subscribers: andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D56049
Summary:
Basic test cases:
- Manifest is lost or corrupt
- Manifest refers to too many or too few SST files
- SST file is corrupt
- Unflushed data is present when RepairDB is called
Depends on D55065 for its CreateFile() function in file_utils
Test Plan: Ran the tests.
Reviewers: IslamAbdelRahman, yhchiang, yoshinorim, sdong
Reviewed By: sdong
Subscribers: leveldb, andrewkr, dhruba
Differential Revision: https://reviews.facebook.net/D55485
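The public entry point exercised by these scenarios is `RepairDB()` from `rocksdb/db.h`; a minimal sketch of a salvage-then-reopen flow:
```
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"

using namespace rocksdb;

// Try to salvage as much data as possible from a DB with a lost or corrupt
// MANIFEST (or orphaned/corrupt SST files), then reopen it normally.
Status SalvageAndOpen(const std::string& dbname) {
  Options options;
  Status s = RepairDB(dbname, options);
  if (!s.ok()) {
    return s;
  }
  DB* db = nullptr;
  s = DB::Open(options, dbname, &db);
  delete db;
  return s;
}
```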
Summary:
Give Cache an option to fail Cache::Insert() when full. Update call sites to check the status and handle the error.
I totally have no idea what the correct behavior of all the call sites is when they encounter an error. Please let me know if you see something wrong or if more unit tests are needed.
Test Plan: make check -j32, see tests pass.
Reviewers: anthony, yhchiang, andrewkr, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: andrewkr, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D54705
Summary: We want to provide a way to detect whether an iterator is stale and needs to be recreated. Add an iterator property to return the version number.
Test Plan: Add two unit tests for it.
Reviewers: IslamAbdelRahman, yhchiang, anthony, kradhakrishnan, andrewkr
Reviewed By: andrewkr
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D54921
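A hedged sketch of the staleness check, assuming the property is exposed through `Iterator::GetProperty()` under the name `rocksdb.iterator.super-version-number`:
```
#include <memory>
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/iterator.h"

using namespace rocksdb;

// Returns true if `iter` was built on an older in-memory version of the DB
// than a freshly created iterator would be, i.e. it may be stale.
bool MayBeStale(DB* db, Iterator* iter) {
  std::string iter_version;
  if (!iter->GetProperty("rocksdb.iterator.super-version-number",
                         &iter_version).ok()) {
    return false;  // property not supported; assume not stale
  }
  std::unique_ptr<Iterator> fresh(db->NewIterator(ReadOptions()));
  std::string current_version;
  fresh->GetProperty("rocksdb.iterator.super-version-number",
                     &current_version);
  return iter_version != current_version;
}
```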
Summary: Similar to D52809, add an option to exclude zero counters.
Test Plan:
[yiwu@dev4504.prn1 ~/rocksdb] ./iostats_context_test
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from IOStatsContextTest
[ RUN ] IOStatsContextTest.ToString
[ OK ] IOStatsContextTest.ToString (0 ms)
[----------] 1 test from IOStatsContextTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[ PASSED ] 1 test.
Reviewers: anthony, yhchiang, andrewkr, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D54591
Summary:
Users are confused about how to get parallel compilation going. This
can help wire up the parallelism.
Test Plan: Run manually
Reviewers: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53931
Summary:
as titled, this will prevent the error that was printed because
test_names was evaluated before db_test was built.
Test Plan:
verified below command works and no longer prints errors:
$ make release -j32
verified below command still finds the right tests:
$ make J=32 parallel_check
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D54117
Summary: unit_481 is misspelt. Fixing it.
Test Plan: Running make commit_prereq
Reviewers: leveldb
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53757
Summary:
db/c_test.c uses the functions in db/c.cc. If we have tags generated
for one but not the other, it's easy to make mistakes like updating a function
signature and missing a call site.
Test Plan:
$ make tags
in vim:
:cscope find s rocksdb_options_set_compression_options
...
3 325 db/c_test.c <<main>>
rocksdb_options_set_compression_options(options, -14, -1, 0);
Reviewers: sdong, yhchiang, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53685
Summary:
Added make targets parallel_test and parallel_dbtest to run
tests in parallel. Each test is run 32 times in parallel. There is a
timeout to catch hangs. The test continues after a failure and reports
non-zero status on failure
Test Plan: Run the two make targets
Reviewers: anthony, yhchiang, IslamAbdelRahman, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D53079
Summary: This patch provides a mechanism to run pre-commit tests on the local
branch before committing. This can help prevent frequent build breaks.
The tests can be run in parallel by specifying the J=<..> environment
variable.
Test Plan: Run manually
Reviewers: sdong rven tec
CC: leveldb@
Task ID: #9689218
Blame Rev:
Summary:
Moved all the tests that verify property correctness into a separate
file. The goal is to reduce compile time and complexity of db_test. I didn't
add parallelism for db_properties_test, even though these tests were
parallelized in db_test, since the file is small enough that it won't matter.
Some of these moves may be controversial since it's hard to say whether the
test is "verifying property correctness," or "using properties to verify
rocksdb's correctness." I'm interested in any opinions.
Test Plan: ran db_properties_test, also waiting on "make commit-prereq -j32"
Reviewers: yhchiang, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D52995
These simple changes are required to allow builds on ppc64[le] systems
consistent with X86. The Makefile now recognizes both ppc64 and ppc64le, and
in the absence of PORTABLE=1, the code will be built analogously to the X86
-march=native.
Note that although GCC supports -mcpu=native -mtune=native on POWER, it
doesn't work correctly on all systems. This is why we need to get the actual
machine model from the AUX vector.
* Makefile: adjust paths for the Solaris build
* Makefile: enable _GLIBCXX_USE_C99 so that std::to_string is available
* db_compaction_test.cc: initialise a variable to avoid a compilation error
* db_impl.cc: include <alloca.h>
* db_test.cc: include <alloca.h>
* Environment.java: recognise the Solaris environment
* options_builder.cc: make log unambiguous
* geodb_impl.cc: make log and floor unambiguous
Summary:
GTEST doesn't compile under clang when -Werror=missing-field-initializers is set;
revert this change.
Test Plan:
USE_CLANG=1 make check
make check
Reviewers: rven
Reviewed By: rven
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D52455
Summary:
myrocks seems to build rocksdb using
-Wmissing-field-initializers (and treats warnings as errors). This diff
adds that flag to the rocksdb build, and fixes the compilation failures
that result. I have not checked for any other differences in the build
flags for rocksdb build as part of myrocks.
Test Plan: make check
Reviewers: sdong, rven
Reviewed By: rven
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D52443
This is an Env implementation that mirrors all storage-related methods on
two different backend Env's and verifies that they return the same
results (return status and read results). This is useful for implementing
a new Env and verifying its correctness.
Signed-off-by: Sage Weil <sage@redhat.com>
Summary: Ldb and sst_dump are not included in the shared library now. Add them.
Test Plan:
Build
make release
make shared_lib
Reviewers: igor, kradhakrishnan, rven, yhchiang, IslamAbdelRahman, anthony
Reviewed By: IslamAbdelRahman, anthony
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D51735
Summary:
This diff is 1/3 in a sequence that introduces a skip list optimized for
a key that is a freshly-allocated const char*. The diff is broken into
pieces to make it easier to review. This piece only introduces the new
type by copying the existing SkipList, with mechanical naming changes
and reformatting.
Test Plan: new unit test
Reviewers: igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D51279
Summary: forward_iterator_bench is not stable enough to build. Remove it for now.
Test Plan: Build it with both of CLANG and non-CLANG and make sure it builds.
Reviewers: rven, kradhakrishnan, anthony, yhchiang, igor, IslamAbdelRahman
Reviewed By: igor, IslamAbdelRahman
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D50991
Summary:
In case the rocksdb java package is built using make rocksdbjavastaticrelease,
only the rocksdb binaries built inside the virtual environments are release builds.
This patch fixes this issue.
Test Plan:
PORTABLE=1 V=2 make rocksdbjavastaticrelease -j32
and make sure -O2 and -NDEBUG is included when compiling all source files.
Reviewers: sdong, anthony, IslamAbdelRahman, rven, kradhakrishnan, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D50895
Summary:
Under a tailing workload, there were increased block cache
misses when a memtable was flushed because we were rebuilding iterators
in that case since the version set changed. This was exacerbated in the
case of iterate_upper_bound, since file iterators which were over the
iterate_upper_bound would have been deleted and are now brought back as
part of the Rebuild, only to be deleted again. We now renew the iterators
and only build iterators for files which are added and delete file
iterators for files which are deleted.
Refer to https://reviews.facebook.net/D50463 for previous version
Test Plan: DBTestTailingIterator.TailingIteratorTrimSeekToNext
Reviewers: anthony, IslamAbdelRahman, igor, tnovak, yhchiang, sdong
Reviewed By: sdong
Subscribers: yhchiang, march, dhruba, leveldb, lovro
Differential Revision: https://reviews.facebook.net/D50679
Summary:
This patch adds OptionsUtil::LoadOptionsFromFile() and
OptionsUtil::LoadLatestOptionsFromDB(), which allow developers
to construct DBOptions and ColumnFamilyOptions from a RocksDB
options file. Note that most pointer-typed options such as
merge_operator will not be constructed.
With this API, developers no longer need to remember all the
options in order to reopen an existing rocksdb instance like
the following:
DBOptions db_options;
std::vector<std::string> cf_names;
std::vector<ColumnFamilyOptions> cf_opts;
// Load primitive-typed options from an existing DB
OptionsUtil::LoadLatestOptionsFromDB(
dbname, &db_options, &cf_names, &cf_opts);
// Initialize necessary pointer-typed options
cf_opts[0].merge_operator.reset(new MyMergeOperator());
...
// Construct the vector of ColumnFamilyDescriptor
std::vector<ColumnFamilyDescriptor> cf_descs;
for (size_t i = 0; i < cf_opts.size(); ++i) {
cf_descs.emplace_back(cf_names[i], cf_opts[i]);
}
// Open the DB
DB* db = nullptr;
std::vector<ColumnFamilyHandle*> cf_handles;
auto s = DB::Open(db_options, dbname, cf_descs,
&cf_handles, &db);
Test Plan:
Augment existing tests in column_family_test
options_test
db_test
Reviewers: igor, IslamAbdelRahman, sdong, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D49095
Summary:
This patch allows rocksdb to persist options into a file on
DB::Open, SetOptions, and Create / Drop ColumnFamily.
Options files are created under the same directory as the rocksdb
instance.
In addition, this patch also adds a fail_if_missing_options_file in DBOptions
that makes any function call return non-ok status when it is not able to
persist options properly.
// If true, then DB::Open / CreateColumnFamily / DropColumnFamily
// / SetOptions will fail if options file is not detected or properly
// persisted.
//
// DEFAULT: false
bool fail_if_missing_options_file;
Options file names are formatted as OPTIONS-<number>, and RocksDB
will always keep the latest two options files.
Test Plan:
Add options_file_test.
options_test
column_family_test
Reviewers: igor, IslamAbdelRahman, sdong, anthony
Reviewed By: anthony
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48285
Summary:
Reverting c745f1d2c4 because it
was based on an incorrect understanding of the correct way to enable
TSAN tests (it assumes "make COMPILE_WITH_TSAN=1 check" but in fact only
"COMPILE_WITH_TSAN=1 make check" is supported).
Test Plan: COMPILE_WITH_TSAN=1 make check
Reviewers: kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D50445
Summary:
TSAN builds for gcc 4.9 need a PIC version of the libraries
taken from the fbcode platform. This is accomplished by assuming every
.a has a _pic.a sibling, and by fixing the third-party2 zlib build.
Test Plan: make COMPILE_WITH_TSAN=1 check
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D50331
Summary:
This patch introduces utilities/memory, which currently includes
GetApproximateMemoryUsageByType that reports different types of
rocksdb memory usage given a list of input DBs.
The API also takes care of the case where the Cache could be shared
across multiple column families / multiple db instances.
Currently, it reports memory usage of memtable, table-readers
and cache.
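A minimal usage sketch, assuming the MemoryUtil::GetApproximateMemoryUsageByType() entry point and header name (include/rocksdb/utilities/memory_util.h) as found in current RocksDB:
```lang=cpp
#include <cstdio>
#include <map>
#include <memory>
#include <unordered_set>
#include <vector>

#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/utilities/memory_util.h"

// Report approximate memory usage for a set of DBs that may share one Cache.
void ReportMemoryUsage(const std::vector<rocksdb::DB*>& dbs,
                       const std::shared_ptr<rocksdb::Cache>& shared_cache) {
  std::unordered_set<const rocksdb::Cache*> cache_set;
  cache_set.insert(shared_cache.get());  // count the shared cache only once

  std::map<rocksdb::MemoryUtil::UsageType, uint64_t> usage_by_type;
  rocksdb::Status s = rocksdb::MemoryUtil::GetApproximateMemoryUsageByType(
      dbs, cache_set, &usage_by_type);
  if (!s.ok()) return;

  for (const auto& entry : usage_by_type) {
    // entry.first: usage type (memtable, table readers, cache, ...).
    // entry.second: approximate number of bytes.
    std::fprintf(stderr, "usage type %d: %llu bytes\n",
                 static_cast<int>(entry.first),
                 static_cast<unsigned long long>(entry.second));
  }
}
```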
Test Plan: utilities/memory/memory_test.cc
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D49257
Summary:
The goal of this diff is to create a simple stress test with focus on catching:
* bugs in compaction/flush processes, especially the ones that cause assertion errors
* bugs in the code that deletes obsolete files
There are two parts of the test:
* write_stress, a binary that writes to the database
* write_stress_runner.py, a script that invokes and kills write_stress
Here are some interesting parts of write_stress:
* Runs with very high concurrency of compactions and flushes (32 threads total) and tries to create a huge amount of small files
* The keys written to the database are not uniformly distributed -- there is a 3-character prefix that mutates occasionally (in prefix mutator thread), in such a way that the first character mutates slower than second, which mutates slower than third character. That way, the compaction stress tests some interesting compaction features like trivial moves and bottommost level calculation
* There is a thread that creates an iterator, holds it for a couple of seconds and then iterates over all keys. This is supposed to test RocksDB's ability to keep the files alive when there are references to them.
* Some writes trigger WAL sync. This is stress testing our WAL sync code.
* At the end of the run, we make sure that we didn't leak any of the sst files
write_stress_runner.py changes the mode in which we run write_stress and also kills and restarts it. There are some interesting characteristics:
* At the beginning we divide the full test runtime into smaller parts -- shorter runtimes (a couple of seconds) and longer runtimes (100, 1000 seconds)
* The first time we run write_stress, we destroy the old DB. Every next time during the test, we use the same DB.
* We can run in kill mode or clean-restart mode. Kill mode kills the write_stress violently.
* We can run in mode where delete_obsolete_files_with_fullscan is true or false
* We can run with low_open_files mode turned on or off. When it's turned on, we configure table cache to only hold a couple of files -- that way we need to reopen files every time we access them.
Another goal was to create a stress test without a lot of parameters. So tools/write_stress_runner.py should only take one parameter -- runtime_sec and it should figure out everything else on its own.
In a separate diff, I'll add this new test to our nightly legocastle runs.
Test Plan:
The goal of this test was to retroactively catch the following bugs: D33045, D48201, D46899, D42399. I failed to reproduce D48201, but all others have been caught!
When i reverted https://reviews.facebook.net/D33045:
./write_stress --runtime_sec=200 --low_open_files_mode=true
Iterator statuts not OK: IO error: /fast-rocksdb-tmp/rocksdb_test/write_stress/089166.sst: No such file or directory
When i reverted https://reviews.facebook.net/D42399:
python tools/write_stress_runner.py --runtime_sec=5000
Running write_stress, will kill after 5 seconds: ./write_stress --runtime_sec=-1
Running write_stress, will kill after 2 seconds: ./write_stress --runtime_sec=-1 --destroy_db=false --delete_obsolete_files_with_fullscan=true
Running write_stress, will kill after 7 seconds: ./write_stress --runtime_sec=-1 --destroy_db=false
Running write_stress, will kill after 5 seconds: ./write_stress --runtime_sec=-1 --destroy_db=false
Running write_stress, will kill after 8 seconds: ./write_stress --runtime_sec=-1 --destroy_db=false --low_open_files_mode=true
Write to DB failed: IO error: /fast-rocksdb-tmp/rocksdb_test/write_stress/019250.sst: No such file or directory
ERROR: write_stress died with exitcode=-6
When i reverted https://reviews.facebook.net/D46899:
python tools/write_stress_runner.py --runtime_sec=1000
runtime: 1000
Going to execute write stress for [3, 3, 100, 3, 2, 100, 1, 788]
Running write_stress for 3 seconds: ./write_stress --runtime_sec=3 --low_open_files_mode=true
Running write_stress for 3 seconds: ./write_stress --runtime_sec=3 --destroy_db=false --delete_obsolete_files_with_fullscan=true
Running write_stress, will kill after 100 seconds: ./write_stress --runtime_sec=-1 --destroy_db=false --delete_obsolete_files_with_fullscan=true
write_stress: db/db_impl.cc:2070: void rocksdb::DBImpl::MarkLogsSynced(uint64_t, bool, const rocksdb::Status&): Assertion `log.getting_synced' failed.
ERROR: write_stress died with exitcode=-6
Reviewers: IslamAbdelRahman, yhchiang, rven, kradhakrishnan, sdong, anthony
Reviewed By: anthony
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D49533
Summary: Use DEBUG_LEVEL=0 in make release and make clean
Test Plan:
make clean
make release -j32
Reviewers: MarkCallaghan, sdong, anthony, IslamAbdelRahman, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D49125
Summary:
merge tools/db_crashtest2.py into tools/db_crashtest.py
python tools/db_crashtest.py -h # show help message, ALL parameters can be overwrite by arguments
Example usages:
python tools/db_crashtest.py blackbox # run blackbox with default parameters
python tools/db_crashtest.py blackbox --simple
python tools/db_crashtest.py whitebox # run whitebox with default parameters
python tools/db_crashtest.py whitebox --simple
All default parameters are identical to the previous version.
Test Plan: `make crash_test` and make sure it can run with the expected parameters passed to db_stress.
Reviewers: igor, rven, anthony, IslamAbdelRahman, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48567
Summary:
In the current makefile, DEBUG_LEVEL is forced to 1
instead of defaulting to 1. This patch fixes this issue.
Test Plan:
DEBUG_LEVEL=2 make db_bench -j32
DEBUG_LEVEL=0 make db_bench -j32
DEBUG_LEVEL=0 make release -j32
And see whether there's a warning pops up.
Reviewers: MarkCallaghan, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D48933
Summary: In MyRocks, it is sometimes important to get properties only for a subset of the database. This diff implements the API in RocksDB.
Test Plan: ran GetPropertiesOfTablesInRange
Reviewers: rven, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48651
Summary:
This diff excludes a lot of tests in db_test that are not compiling or are failing under ROCKSDB_LITE
Test Plan:
OPT=-DROCKSDB_LITE make check -j64
make check -j64
Reviewers: yhchiang, igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48771
Summary: As part of cleaning up dependencies for tech debt week, we are moving ldb and sst_dump tools from util to tools, since they are tools.
Test Plan: make check
Reviewers: sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48747
Summary: manual_compaction_test.cc was incorrectly in util. Moved it to db.
Test Plan: make check
Reviewers: sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D48687
1) There is no need to download and install the compression libraries twice
just to get access to their header files during the compile phase.
2) Ensure that the compression library headers files used during the
compile phase are the same ones used to build the static library that
is linked into the library.
Summary:
As part of tech debt week, we are cleaning up dependencies.
This diff moves db_test_util.[h,cc] from util to db directory.
Test Plan: make check
Reviewers: igor, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D48543
in the Java jar. Also build the linux libraries using the portable flag to fix a problem with
the linux32 build and improve the general portability of the RocksDB dynamic libraries.
==> linux32: util/crc32c.cc:318:39: error: ‘_mm_crc32_u64’ was not declared in this scope
Summary:
I see a lot of users compiling RocksDB with `make` or `make all` and then using those binaries in production. They end up running debug builds :(
This diff makes it harder for them:
1. I added an explicit warning to INSTALL.md
2. When you compile with `make all`, your resulting library will be librocksdb_debug.a
3. I also print out a warning when you compile in debug mode.
Hopefully should be enough :)
Test Plan: none
Reviewers: rven, yhchiang, kradhakrishnan, anthony, sdong, dhruba, IslamAbdelRahman
Reviewed By: IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D48093
Summary: unity_test will compile db_test with unity.a instead of librocksdb.a. This will test both the compilation and some small amount of runtime.
Test Plan: This is a test :)
Reviewers: kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D48297
This commit adds two new targets to the Makefile: rocksdb.cc and rocksdb.h
These files, when combined with the c.h header, are a self-contained RocksDB
source distribution called an amalgamation. (The name comes from SQLite's, which
is similar in concept.)
The main benefit of an amalgamation is that it's very easy to drop into a
new project. It also compiles faster compared to compiling individual source
files and potentially gives the compiler more opportunity to make optimizations
since it can see all functions at once.
rocksdb.cc and rocksdb.h are generated by a new script, amalgamate.py.
A detailed description of how amalgamate.py works is in a comment at the top of
the file.
There are also some small changes to existing files to enable the amalgamation:
* Use quotes for includes in unity build
* Fix an old header inclusion in util/xfunc.cc
* Move some includes outside ifdef in util/env_hdfs.cc
* Separate out tool sources in Makefile so they won't be included in unity.cc
* Unity build now produces a static library
Closes #733
Summary: Our valgrind testing is buggy and cumbersome in terms of locating the
error. That stems from the fact that we accumulate all output for the tests, which makes it
extremely hard to locate the point of error.
The communication between valgrind and the script is also not sturdy, and we are
experiencing bugs.
Simplify it to stop on the first error.
Test Plan: Run manually
Reviewers: sdong, igor
CC: leveldb@
Task ID: #6968635
Blame Rev:
Summary:
Fix hex2String performance issues by removing sscanf dependency.
Also fixed some edge case handling (odd length, bad input).
Test Plan: Created a test file which called old and new implementation, and validated results are the same. I'll paste results in the phabricator diff.
Reviewers: igor, rven, anthony, IslamAbdelRahman, kradhakrishnan, yhchiang, sdong
Reviewed By: sdong
Subscribers: thatsafunnyname, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D46785
Summary:
This patch fixes #7460559. It introduces SingleDelete as a new database
operation. This operation can be used to delete keys that were never
overwritten (no put following another put of the same key). If an overwritten
key is single deleted the behavior is undefined. Single deletion of a
non-existent key has no effect but multiple consecutive single deletions are
not allowed (see limitations).
In contrast to the conventional Delete() operation, the deletion entry is
removed along with the value when the two are lined up in a compaction. Note:
The semantics are similar to @igor's prototype that allowed having this
behavior at the granularity of a column family (
https://reviews.facebook.net/D42093 ). This new patch, however, is more
aggressive when it comes to removing tombstones: It removes the SingleDelete
together with the value whenever there is no snapshot between them while the
older patch only did this when the sequence number of the deletion was older
than the earliest snapshot.
Most of the complex additions are in the Compaction Iterator, all other changes
should be relatively straightforward. The patch also includes basic support for
single deletions in db_stress and db_bench.
Limitations:
- Not compatible with cuckoo hash tables
- Single deletions cannot be used in combination with merges and normal
deletions on the same key (other keys are not affected by this)
- Consecutive single deletions are currently not allowed (and older version of
this patch supported this so it could be resurrected if needed)
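A minimal usage sketch of the new operation (illustrative only; see include/rocksdb/db.h for the exact signature):
```lang=cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// SingleDelete contract: the key below is written exactly once and never
// overwritten, so it may be removed with SingleDelete instead of Delete.
void SingleDeleteExample(rocksdb::DB* db) {
  rocksdb::WriteOptions wo;
  rocksdb::Status s = db->Put(wo, "foo", "bar");
  assert(s.ok());

  s = db->SingleDelete(wo, "foo");  // allowed: "foo" was written once
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "foo", &value);
  assert(s.IsNotFound());

  // Undefined behavior (do not do this): single-deleting an overwritten key.
  //   db->Put(wo, "k", "v1"); db->Put(wo, "k", "v2"); db->SingleDelete(wo, "k");
}
```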
Test Plan: make all check
Reviewers: yhchiang, sdong, rven, anthony, yoshinorim, igor
Reviewed By: igor
Subscribers: maykov, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43179
Summary:
When running `make rocksdbjava V=1`, it currently does
not print out the original command but something odd instead:
mkdir -p jl/db
This patch tries to fix this issue.
Test Plan: make rocksdbjava V=1
Reviewers: igor, sdong, IslamAbdelRahman, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D47061
Summary: This patch fixes a couple of build issues in rocksdbjavastaticrelease.
Test Plan: make rocksdbjavastaticrelease
Reviewers: sdong, anthony, IslamAbdelRahman, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D45915
My use case is to build the rocksdb static library and tools, and
ideally I'd like to not spend time building the shared library and other
targets that I won't use.
Summary:
Initial implementation of Pessimistic Transactions. This diff contains the api changes discussed in D38913. This diff is pretty large, so let me know if people would prefer to meet up to discuss it.
MyRocks folks: please take a look at the API in include/rocksdb/utilities/transaction[_db].h and let me know if you have any issues.
Also, you'll notice a couple of TODOs in the implementation of RollbackToSavePoint(). After chatting with Siying, I'm going to send out a separate diff for an alternate implementation of this feature that implements the rollback inside of WriteBatch/WriteBatchWithIndex. We can then decide which route is preferable.
Next, I'm planning on doing some perf testing and then integrating this diff into MongoRocks for further testing.
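A minimal usage sketch against the public TransactionDB API (names as in include/rocksdb/utilities/transaction_db.h in released RocksDB; treat the exact option structs as assumptions of this sketch):
```lang=cpp
#include <cassert>
#include <string>

#include "rocksdb/options.h"
#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

// Open a pessimistic TransactionDB and commit two writes atomically.
void TransactionExample(const std::string& dbname) {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::TransactionDBOptions txn_db_options;

  rocksdb::TransactionDB* txn_db = nullptr;
  rocksdb::Status s =
      rocksdb::TransactionDB::Open(options, txn_db_options, dbname, &txn_db);
  assert(s.ok());

  rocksdb::Transaction* txn = txn_db->BeginTransaction(rocksdb::WriteOptions());
  txn->Put("a", "1");  // keys touched here stay locked until Commit/Rollback
  txn->Put("b", "2");
  s = txn->Commit();
  assert(s.ok());
  delete txn;
  delete txn_db;
}
```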
Test Plan: Unit tests, db_bench parallel testing.
Reviewers: igor, rven, sdong, yhchiang, yoshinorim
Reviewed By: sdong
Subscribers: hermanlee4, maykov, spetrunia, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D40869
Summary: Currently, valgrind_check doesn't fail on test failures, which creates confusion. valgrind_check should fail if a test fails.
Test Plan: Manually change tests to return test failure or cause memory leak and see valgrind_check has the correct behavior.
Reviewers: anthony, yhchiang, IslamAbdelRahman, igor, kradhakrishnan, rven
Reviewed By: rven
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43629
Summary:
Add two unit tests for SyncWAL(). One makes sure SyncWAL() doesn't block writes in the other thread. Another one makes sure SyncWAL() doesn't wait for ongoing writes to finish before being executed.
Create a new test file db_wal_test and move two WAL related tests from db_test to here.
Test Plan: Run the new tests
Reviewers: IslamAbdelRahman, rven, kradhakrishnan, kolmike, tnovak, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43605
Summary: "make commit-prereq" fails to clean up java, which can cause rocksjava failure.
Test Plan: Run commit-prereq
Reviewers: IslamAbdelRahman, rven, kradhakrishnan, yhchiang
Reviewed By: yhchiang
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43575
Summary:
Introduce DeleteScheduler, which allows enforcing a rate limit on file deletion.
Instead of deleting files immediately, files are moved to a trash directory and deleted in a background thread that applies a sleep penalty between deletes if needed.
I have updated PurgeObsoleteFiles and PurgeObsoleteWALFiles to use the delete_scheduler instead of env_->DeleteFile
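The DeleteScheduler itself is internal, but the trash-and-throttle idea can be illustrated in isolation (purely illustrative code, not the RocksDB implementation): a rename into a trash directory is cheap, and a background loop removes trash files while sleeping long enough to keep deletions under a bytes-per-second budget.
```lang=cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <filesystem>
#include <string>
#include <thread>
#include <vector>

// Illustrative only: move files into a trash directory, then delete them in
// the background no faster than rate_bytes_per_sec on average.
void RateLimitedDelete(const std::vector<std::string>& files,
                       const std::string& trash_dir,
                       uint64_t rate_bytes_per_sec) {
  namespace fs = std::filesystem;
  fs::create_directories(trash_dir);

  std::vector<fs::path> trash;
  for (const auto& f : files) {
    fs::path dest = fs::path(trash_dir) / fs::path(f).filename();
    fs::rename(f, dest);  // cheap: callers no longer see the original file
    trash.push_back(dest);
  }

  std::thread deleter([trash, rate_bytes_per_sec]() {
    for (const auto& p : trash) {
      const uint64_t bytes = fs::file_size(p);
      fs::remove(p);
      // Sleep penalty so that, on average, at most rate_bytes_per_sec bytes
      // are deleted per second.
      std::chrono::duration<double> penalty(
          static_cast<double>(bytes) /
          static_cast<double>(std::max<uint64_t>(rate_bytes_per_sec, 1)));
      std::this_thread::sleep_for(penalty);
    }
  });
  deleter.join();
}
```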
Test Plan:
added delete_scheduler_test
existing unit tests
Reviewers: kradhakrishnan, anthony, rven, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D43221
Summary:
crash_test now only runs complicated options: multiple column families, prefix hash, frequently changing options, many compaction threads, etc. These options are good for covering new features, but we lose coverage of the most common use cases. Furthermore, by running only with multiple column families, we are not able to create LSM trees that are large enough to cover some stress cases.
Make half of the crash_test runs the simple tests: single column family, default memtable, one compaction thread, no option changes.
Test Plan: Run crash_test
Reviewers: rven, yhchiang, IslamAbdelRahman, kradhakrishnan
Reviewed By: kradhakrishnan
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D43461
Summary:
This diff adds CompactOnDeletionCollector in utilities/table_properties_collectors,
which applies a sliding window to an sst file and marks the file as needing compaction
when it observes enough deletion entries within the consecutive keys covered by
the sliding window.
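A minimal usage sketch, assuming the factory is exposed as NewCompactOnDeletionCollectorFactory() in include/rocksdb/utilities/table_properties_collectors.h (the name in current headers; verify against the header of the version in use):
```lang=cpp
#include "rocksdb/options.h"
#include "rocksdb/utilities/table_properties_collectors.h"

// Mark an SST file as needing compaction when, within any window of 128K
// consecutive entries, at least 16K of them are deletion entries.
void EnableCompactOnDeletion(rocksdb::ColumnFamilyOptions* cf_opts) {
  const size_t kSlidingWindowSize = 128 * 1024;
  const size_t kDeletionTrigger = 16 * 1024;
  cf_opts->table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(kSlidingWindowSize,
                                                    kDeletionTrigger));
}
```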
Test Plan: compact_on_deletion_collector_test
Reviewers: igor, anthony, IslamAbdelRahman, kradhakrishnan, yoshinorim, sdong
Reviewed By: sdong
Subscribers: maykov, dhruba
Differential Revision: https://reviews.facebook.net/D41175
Summary:
Whenever a Java class implements equals(), it has to implement hashCode(), otherwise
there might be weird behavior when inserting instances of the class in a hash map for
example. This adds two missing hashCode() implementations and extends tests to test
the hashCode() implementations.
Test Plan: make jtest
Reviewers: rven, igor, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: anthony, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D43017
Summary:
From others' investigation:
"sync_file_range() behavior highly depends on kernel version and filesystem.
xfs does neighbor page flushing outside of the specified ranges. For example, sync_file_range(fd, 8192, 16384) does not only trigger flushing page #3 to #4, but also flushing many more dirty pages (i.e. up to page#16)... Ranges of the sync_file_range() should be far enough from write() offset (at least 1MB)."
Test Plan: make all check
Reviewers: igor, rven, kradhakrishnan, yhchiang, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: yoshinorim, MarkCallaghan, sumeet, domas, dhruba, leveldb, ljin
Differential Revision: https://reviews.facebook.net/D15807
Summary: Move general compaction tests from db_test.cc to db_compaction_test.cc
Test Plan:
db_test
db_compaction_test
Reviewers: igor, sdong, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42651
Summary: Add db_inplace_update_test back to Makefile
Test Plan: make all check
Reviewers: igor, sdong, anthony, IslamAbdelRahman
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42717
Summary:
This reverts commit 03467bdd4d.
It seems the previous git merge removed all the tabs in the Makefile
and made it not runnable. This patch reverts that incorrect
merge.
Test Plan: make
Reviewers: sdong, anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42705
Conflicts:
CMakeLists.txt
Makefile
Summary: Add missing db_log_iter_test in the test list
Test Plan: make all check
Reviewers: igor, anthony, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D42663
Summary:
Fixed the DBTestUniversalManualCompactionOutputPathId test
by changing the expected number of files when setting up
the test, as flushes no longer preempt compactions
in patch https://reviews.facebook.net/D41931.
Also, include db_universal_compaction_test in make all check.
Test Plan: db_universal_compaction_test
Reviewers: igor, sdong, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42639
Summary:
When we first started, max_background_flushes was 0 by default and compaction thread was executing flushes (since there was no flush thread). Then, we switched the default max_background_flushes to 1. However, we still support the case where there is no flush thread and flushes are done in compaction. This is making our code a bit more complicated. By not supporting this use-case we can make our code simpler.
We have a special case that when you set max_background_flushes to 0, we
schedule the flush to execute on the compaction thread.
Test Plan: make check (there might be some unit tests that depend on this behavior)
Reviewers: IslamAbdelRahman, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41931
Summary:
MergeUntil was not reporting a success when merging an operand with
a Value/Deletion despite the comments in MergeHelper and CompactionJob
indicating otherwise. This led to operands being written to the compaction
output unnecessarily:
M1 M2 M3 P M4 M5 --> (P+M1+M2+M3) M2 M3 M4 M5 (before the diff)
M1 M2 M3 P M4 M5 --> (P+M1+M2+M3) M4 M5 (after the diff)
In addition, the code handling Values/Deletion was basically identical.
This patch unifies the code. Finally, this patch also adds testing for
merge_helper.
Test Plan: make && make check
Reviewers: sdong, rven, yhchiang, tnovak, igor
Reviewed By: igor
Subscribers: tnovak, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42351
Summary:
For some reason, parallel tests are failing in Jenkins. We turned off parallel tests for all tests except `make coverage`. Let's turn it off here, too, so that we can have Jenkins test working again.
For the error log, see: https://ci-builds.fb.com/job/rocksdb_coverage/166/console
Test Plan: commit and wait for jenkins
Reviewers: meyering, sdong, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42057
Summary: Move UniversalCompaction related db-tests to db_universal_compaction_test.cc
Test Plan:
db_test
db_universal_compaction_test
Reviewers: igor, sdong, IslamAbdelRahman, anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42225
Summary: Move TransactionLogIterator-related tests from db_test.cc to db_log_iter_test.cc
Test Plan:
db_test
db_log_iter_test
Reviewers: sdong, IslamAbdelRahman, igor, anthony
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D42045
Summary: This is a revert of D40449. I'm not sure why our tests failed before; they seem to be working fine on my devbox.
Test Plan: ran `make check`
Reviewers: anthony
Reviewed By: anthony
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41955
Summary:
Move the reusable part of db_test.cc to util/db_test_util.h.
This makes it easier to partition db_test.cc into
multiple smaller test files.
Also, fixed many old lint errors in db_test.
Test Plan: db_test
Reviewers: igor, anthony, IslamAbdelRahman, sdong, kradhakrishnan
Reviewed By: sdong, kradhakrishnan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41973
Summary:
Remove the -Wl,--no-as-needed flag when making shared_lib on OS X and iOS, as
those environments do not have the linker option --no-as-needed:
ld: unknown option: --no-as-needed
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Test Plan: make shared_lib
Reviewers: meyering, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D40353
Summary: A recent checkin added ldb_test.py to the make check target, but the test fails. Remove it again for now and file a task.
Test Plan: No more ldb_tests.py running
Reviewers: igor, anthony
Reviewed By: anthony
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D40449
Summary: Hack up rocksdb_dump and rocksdb_undump utilities to get this task rolling/promote discussion.
Test Plan: Dump/undump databases recursively to see if nothing is lost.
Reviewers: sdong, yhchiang, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D37269
Summary:
When there are multiple column families, the flush in
GetLiveFiles is not atomic, so there are entries in the WAL files
which are needed to get a consistent RocksDB. We now add the log files to
the checkpoint.
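For context, a minimal sketch of taking a checkpoint through the public Checkpoint utility (names as in include/rocksdb/utilities/checkpoint.h); with this change the needed WAL files end up inside checkpoint_dir, so the result is consistent across column families:
```lang=cpp
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/utilities/checkpoint.h"

// Create an on-disk checkpoint of an open DB; with multiple column families
// the required WAL files are included so the checkpoint opens consistently.
rocksdb::Status CreateCheckpointFor(rocksdb::DB* db,
                                    const std::string& checkpoint_dir) {
  rocksdb::Checkpoint* checkpoint = nullptr;
  rocksdb::Status s = rocksdb::Checkpoint::Create(db, &checkpoint);
  if (s.ok()) {
    s = checkpoint->CreateCheckpoint(checkpoint_dir);
  }
  delete checkpoint;
  return s;
}
```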
Test Plan:
CheckpointCF - This test forces more data to be written to
the other column families after the flush of the first column family but
before the second.
Reviewers: igor, yhchiang, IslamAbdelRahman, anthony, kradhakrishnan, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D40323
Summary:
Remove compactor_test, which depends on a directory that does not exist
in our code base:
make compactor_test
GEN util/build_version.cc
GEN util/build_version.cc
make: *** No rule to make target `utilities/compaction/compactor_test.o', needed by `compactor_test'. Stop.
Test Plan: verify the output message of make compactor_test
Reviewers: rven, anthony, kradhakrishnan, igor, IslamAbdelRahman, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D40341
Summary:
The motivation is here: http://stackoverflow.com/questions/30706348/rocksdb-make-error
Apparently there is a bug (?) in g++ where we don't link the rt library properly when building RocksDB's shared library. It looks like this works when we add the option -Wl,--no-as-needed.
Test Plan: make shared_lib
Reviewers: meyering, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D39681
Summary:
Allow EventListener::OnCompactionCompleted to return CompactionJobStats,
which contains useful information about a compaction.
Example CompactionJobStats returned by OnCompactionCompleted():
smallest_output_key_prefix 05000000
largest_output_key_prefix 06990000
elapsed_time 42419
num_input_records 300
num_input_files 3
num_input_files_at_output_level 2
num_output_records 200
num_output_files 1
actual_bytes_input 167200
actual_bytes_output 110688
total_input_raw_key_bytes 5400
total_input_raw_value_bytes 300000
num_records_replaced 100
is_manual_compaction 1
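A sketch of consuming these stats from a listener, assuming the EventListener::OnCompactionCompleted(DB*, const CompactionJobInfo&) hook and a CompactionJobInfo::stats member of type CompactionJobStats, as in current include/rocksdb/listener.h (field names follow the list above):
```lang=cpp
#include <cstdio>

#include "rocksdb/compaction_job_stats.h"
#include "rocksdb/db.h"
#include "rocksdb/listener.h"

// Log a few CompactionJobStats fields whenever a compaction finishes.
class CompactionStatsLogger : public rocksdb::EventListener {
 public:
  void OnCompactionCompleted(rocksdb::DB* /*db*/,
                             const rocksdb::CompactionJobInfo& info) override {
    const rocksdb::CompactionJobStats& stats = info.stats;
    std::fprintf(stderr,
                 "compaction done: %llu input records, %llu output records, "
                 "manual=%d\n",
                 static_cast<unsigned long long>(stats.num_input_records),
                 static_cast<unsigned long long>(stats.num_output_records),
                 stats.is_manual_compaction ? 1 : 0);
  }
};
// Install via options.listeners.emplace_back(std::make_shared<CompactionStatsLogger>());
```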
Test Plan: Developed a mega test in db_test which covers 20 variables in CompactionJobStats.
Reviewers: rven, igor, anthony, sdong
Reviewed By: sdong
Subscribers: tnovak, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38463
Summary: Optimistic transactions supporting begin/commit/rollback semantics. Currently relies on checking the memtable to determine if there are any collisions at commit time. Not yet implemented would be a way of ensuring the memtable has some minimum amount of history so that we won't fail to commit when the memtable is empty. You should probably start with transaction.h to get an overview of what is currently supported.
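A hedged usage sketch against include/rocksdb/utilities/optimistic_transaction_db.h (names as in the public header; Commit() is expected to return a non-OK status, e.g. Busy, when a conflicting write is detected):
```lang=cpp
#include <string>

#include "rocksdb/options.h"
#include "rocksdb/utilities/optimistic_transaction_db.h"
#include "rocksdb/utilities/transaction.h"

// Begin/commit with optimistic conflict checking: no locks are taken, and
// conflicts are detected against the memtable when Commit() runs.
void OptimisticExample(const std::string& dbname) {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::OptimisticTransactionDB* txn_db = nullptr;
  rocksdb::Status s =
      rocksdb::OptimisticTransactionDB::Open(options, dbname, &txn_db);
  if (!s.ok()) return;

  rocksdb::Transaction* txn = txn_db->BeginTransaction(rocksdb::WriteOptions());
  txn->Put("counter", "1");
  s = txn->Commit();  // non-OK (e.g. Busy) if "counter" changed concurrently
  delete txn;
  delete txn_db;
}
```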
Test Plan: Added a new test, but still need to look into stress testing.
Reviewers: yhchiang, igor, rven, sdong
Reviewed By: sdong
Subscribers: adamretter, MarkCallaghan, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D33435
Summary: as title
Test Plan: make release
Reviewers: MarkCallaghan, sdong
Reviewed By: sdong
Subscribers: sdong, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38853
Summary: Sometimes we want to run tests sequentially. J=1 gives us that option
Test Plan:
make J=1 check -- sequential
make J=2 check -- parallel
Reviewers: sdong, yhchiang, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38805
Summary: In the third-party2 build we need to force the git sha because we're compiling from a different git repository.
Test Plan: `FORCE_GIT_SHA=igor make`
Reviewers: kradhakrishnan, sdong
Reviewed By: kradhakrishnan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38679
Summary: Now rocksdb_build_git_sha is determined from "git sha". This is a problem if the release is not built from the repository directly but from a source code copy. Change it to use the versions given in the Makefile.
Test Plan: Run "make util/build_version.cc"
Reviewers: kradhakrishnan, rven, meyering, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D38451
Summary: Don't treat warnings as error when building rocksdbjavastatic
Test Plan: make rocksdbjavastatic -j32
Reviewers: rven, fyrz, adamretter, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38187
Summary: Without this I get bunch of questions when I run `make clean`
Test Plan: no more questions!
Reviewers: rven, yhchiang, meyering, anthony
Reviewed By: meyering, anthony
Subscribers: meyering, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D38145
Summary: Based on comment from D37455
Test Plan: make install after make static_lib
Reviewers: meyering
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D37461
Summary: make install fails when there is no shared lib. We need to revert the conditions, which will have the same effect, but without the failure
Test Plan: make install after only compiling static library
Reviewers: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D37455
Summary: The current version tries to install librocksdb.so even though it doesn't exist. This version will install librocksdb.so.3.10.0 and then create soft links in place
Test Plan:
`make static_lib; sudo make install` does not try to install librocksdb.so
`make shared_lib; sudo make install` installs one library and 3 symlinks. Before, four libraries were installed
Reviewers: sdong, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D37425
Summary: Make target for running all xfunc tests
Test Plan: make xfunc
Reviewers: igor, sdong, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36873
Summary:
1. it doesn't work
2. we're not using it
In the future, if we need general benchmark framework, we should probably use https://github.com/google/benchmark
Test Plan: make all
Reviewers: yhchiang, rven, anthony, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36777
Summary: Add tests for MemTableList
Test Plan: run test
Reviewers: yhchiang, kradhakrishnan, sdong, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36735
Summary: "make commit-prereq" uses "make release" which overrides OPT, so ROCKSDB_LITE is not covered. Fix it by using "make static_lib"
Test Plan: Run it and see it fail (which is expected)
Reviewers: yhchiang, meyering, rven, anthony, kradhakrishnan, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D36813
Summary: I don't think we need to use whole-archive to include jemalloc. This change only affects our development builds -- it does not affect our open source builds (which don't support jemalloc) or our fbcode third-party2 builds (which use open-source build codepaths).
Test Plan:
make
verify that jemalloc is running by running `MALLOC_CONF="prof:true" ./cache_test` and observing that file was created
Reviewers: MarkCallaghan
Reviewed By: MarkCallaghan
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36783
Summary: These two files are test binaries and are not included in TESTS in Makefile.
Test Plan: `make clean` now deletes those files, too
Reviewers: sdong, kradhakrishnan, meyering
Reviewed By: kradhakrishnan, meyering
Subscribers: kradhakrishnan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36705
Summary:
When building rocksdbjava and rocksdbjavastatic, create -fPIC-enabled
binaries in a temporary subdirectory, jl/.
* Makefile (java_libobjects): New variable.
(java_libobjects): New rule.
(CLEAN_FILES): Arrange for "make clean" to remove that temporary dir.
(rocksdbjavastatic): Depend on the new variable.
Remove useless OPT=... line.
(rocksdbjava): Likewise.
Test Plan:
JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
make rocksdbjavastatic
Reviewers: yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36645
Summary:
* src.mk (JNI_NATIVE_SOURCES): New variable, so we don't have to use
a glob in Makefile
* Makefile (JNI_NATIVE_SOURCES): Remove glob-using definition, now
that the explicit list of sources is in src.mk.
Test Plan:
Run this:
JAVA_HOME=/usr/local/jdk-7u67-64 PATH=$JAVA_HOME/bin:$PATH \
make rocksdbjava
Reviewers: yhchiang
Reviewed By: yhchiang
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36633
Summary:
As described in https://github.com/facebook/rocksdb/issues/563, we should add minor version to SONAME, since we break ABI with minor releases.
I also turned PLATFORM_SHARED_VERSIONED to true by default. This is true in LevelDB and it was switched to false by D15117 for no apparent reason. It should only be false for iOS.
Test Plan: `make shared_lib` produced librocksdb.dylib.3.10.0
Reviewers: sdong, yhchiang, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36573
Summary:
When GNU parallel is available, "make check" tests are now run in parallel.
When /dev/shm is usable, we tell those tests to create temporary files therein.
Now, the longest-running single test, db_test, (which is composed of hundreds of sub-tests)
is no longer run sequentially: instead, each of its sub-tests is run independently, and can
be parallelized along with all other tests. To make that process easier, this change
creates a temporary directory, "t/", in which it puts a small script for each of those
subtests. The output from each parallel-run test is now saved in t/log-TEST_NAME.
When GNU parallel is not available, we run the tests in sequence, just as before.
If GNU parallel is available and you don't like the default of running one subtest
per core, you can invoke "make J=1 check" to run only one test at a time.
Beware: this will take a long time, and it starts with the two longest-running tests, so you
will wait for a long time before seeing any results. Instead, if you want to use fewer resources
but still see useful progress, try "make J=60% check". That will attempt to ensure that 60% of
the cores are occupied by test runs.
To watch progress of individual tests (duration, success (PASS-or-FAIL), name), run "make watch-log"
in the same directory from another window. That will start with something like this:
and when complete should show numbers/names like this:
Every 0.1s: sort -k7,7nr -k4,4gr LOG|perl -n -e '@a=split("\t",$_,-1); $t=$a[8]; $t =~ s,^\./,,;' -e '$t =~ s, >.*,,; chomp $t;' -e '$t =~ /.*--gtest_filter=... Wed Apr 1 10:51:42 2015
152.221 PASS t/DBTest.FileCreationRandomFailure
109.280 PASS t/DBTest.EncodeDecompressedBlockSizeTest
82.315 PASS reduce_levels_test
77.812 PASS t/DBTest.CompactionFilterWithValueChange
73.236 PASS backupable_db_test
63.428 PASS deletefile_test
57.248 PASS table_test
55.665 PASS prefix_test
49.816 PASS t/DBTest.RateLimitingTest
...
Test Plan:
Timings (measured so as to exclude compile and link times):
With this change, all tests complete in 2m40s on a system for which nproc prints 32.
Prior to this this change, "make check" would take 24.5 minutes on that same system.
Here are durations (in seconds) of the longest-running subtests:
152.435 PASS t/DBTest.FileCreationRandomFailure
107.070 PASS t/DBTest.EncodeDecompressedBlockSizeTest
81.391 PASS ./reduce_levels_test
71.587 PASS ./backupable_db_test
61.746 PASS ./deletefile_test
57.960 PASS ./table_test
55.230 PASS ./prefix_test
54.060 PASS t/DBTest.CompactionFilterWithValueChange
48.873 PASS t/DBTest.RateLimitingTest
47.569 PASS ./fault_injection_test
46.593 PASS t/DBTest.Randomized
42.662 PASS t/DBTest.CompactionFilter
31.793 PASS t/DBTest.SparseMerge
30.612 PASS t/DBTest.CompactionFilterV2
25.891 PASS t/DBTest.GroupCommitTest
23.863 PASS t/DBTest.DynamicLevelMaxBytesBase
22.976 PASS ./rate_limiter_test
18.942 PASS t/DBTest.OptimizeFiltersForHits
16.851 PASS ./env_test
15.399 PASS t/DBTest.CompactionFilterV2WithValueChange
14.827 PASS t/DBTest.CompactionFilterV2NULLPrefix
Reviewers: igor, sdong, rven, yhchiang, igor.sugak
Reviewed By: igor.sugak
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35379
Summary:
Just a couple of small changes:
1. removed signal_test, since it doesn't seem useful and we don't even run it as part of `make check`
2. moved perf_context_test to TESTS instead of PROGRAMS
3. `make release` probably shouldn't compile benchmarks. We currently rely on `make release` building db_bench (via Jenkins), so I left db_bench there.
This is just a minor cleanup. We need to rethink our targets since they are a bit messy right now. We can do this during our tech debt week.
Test Plan: make release
Reviewers: anthony, rven, yhchiang, sdong, meyering
Reviewed By: meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36171
Summary:
This fixes two bugs: "make clean" would never remove the generated
file, util/build_version.cc, and since D33591, would be regenerated
only if it were absent.
* Makefile (clean): Remove the generated file.
(util/build_version.cc): Depend on the no-prereq FORCE target,
so that this target's rules are always run.
Since this is a generated file, make it read-only.
Also, be sure to remove the temporary file when it is the same
as the original.
Test Plan:
Ensure that we attempt regeneration every time.
Make it empty with an up-to-date time stamp and demonstrate
that it is rebuilt with the expected content:
$ : > util/build_version.cc
$ make util/build_version.o
GEN util/build_version.cc
GEN util/build_version.d
GEN util/build_version.cc
CC util/build_version.o
$ cat util/build_version.cc
#include "build_version.h"
const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:v3.10-2-gb30e72a";
const char* rocksdb_build_git_date = "rocksdb_build_git_date:2015-03-27";
const char* rocksdb_build_compile_date = __DATE__;
Reviewers: igor.sugak, sdong, ljin, igor, rven
Reviewed By: rven
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D36087
Summary:
* Makefile (COMPILE_WITH_TSAN): Avoid a link failure by disabling
-pg when building with TSAN enabled.
Now that "make check" builds all $(PROGRAMS), it is linking
a few programs that were not normally linked before.
For example, this would fail to link with the following diagnostic:
COMPILE_WITH_TSAN=1 make -j40 log_and_apply_bench
CCLD log_and_apply_bench
ld: /usr/lib/../lib64/gcrt1.o: relocation R_X86_64_32S against `__libc_csu_fini' can not be used when making a shared object; recompile with -fPIC
/usr/lib/../lib64/gcrt1.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
Makefile:511: recipe for target 'log_and_apply_bench' failed
make: *** [log_and_apply_bench] Error 1
Since removing -pg is sufficient to get past this link
failure, and no one cares about profiling TSAN-enabled
binaries anyway, we will refrain from linking with -pg
when TSAN testing is enabled. Use a new variable, "pg"
which is set to "-pg" in most cases, but that is made
empty when COMPILE_WITH_TSAN is set.
Test Plan:
Now, this succeeds:
rm -f log_and_apply_bench
COMPILE_WITH_TSAN=1 make -j40 log_and_apply_bench
Reviewers: igor.sugak, rven, sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35943
Summary:
* Makefile (check): Cause "make check" to build all $(PROGRAMS),
so that it verifies that the few benchmark-only source files that are
not already built via "make check" do compile and link successfully.
Test Plan:
run "make clean; make check", and verify that
table/table_reader_bench.cc is now compiled.
Before, it was not, which led to an incomplete fix
for a build break.
Reviewers: ljin, igor.sugak, rven, igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D35883
Summary:
Changed the make unity target to use $LIB_SOURCES as the source of library source code. In the old way (using find), table/mock_table.h was added to the list of library objects, and this was the cause of the `make unity` break.
`build_tools/unity` contains some redundant code; I deleted it and moved the functionality into the Makefile.
Test Plan:
Make sure unity completes with no errors.
```lang=bash
% make unity
```
Reviewers: sdong, rven, igor, meyering
Reviewed By: igor, meyering
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D35385
Summary: Make it easier for people to run all the tests.
Test Plan: Run it.
Reviewers: rven, yhchiang, igor, MarkCallaghan, IslamAbdelRahman, igor.sugak, anthony, kradhakrishnan, meyering
Reviewed By: meyering
Subscribers: meyering, leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D35319
Summary:
Adds gtest fused source code into `third-party` directory. No manual changes.
The latest gtest release, 1.7, has clang dev compilation errors. The trunk version requires only one disabled warning (-Wno-missing-field-initializers)
Fused code is made as described here https://fburl.com/90806322
Details about why we need gtest source code instead of precompiled library https://fburl.com/90805763
Source used from http://googletest.googlecode.com/svn/trunk
Test Plan:
Build and notice no errors. Also check in the logs that gtest-all.o is being compiled.
```lang=bash
% USE_CLANG=1 make all
```
Reviewers: lgalanis, yufei.zhu, rven, sdong, igor, meyering
Reviewed By: meyering
Subscribers: meyering, yhchiang, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D33345
Summary:
Here's my proposal for making our LOGs easier to read by machines.
The idea is to dump all events as JSON objects. JSON is easy to read by humans, but more importantly, it's easy to read by machines. That way, we can parse this, load into SQLite/mongo and then query or visualize.
I started with table_create and table_delete events, but if everybody agrees, I'll continue by adding more events (flush/compaction/etc etc)
Test Plan:
Ran db_bench. Observed:
2015/01/15-14:13:25.788019 1105ef000 EVENT_LOG_v1 {"time_micros": 1421360005788015, "event": "table_file_creation", "file_number": 12, "file_size": 1909699}
2015/01/15-14:13:25.956500 110740000 EVENT_LOG_v1 {"time_micros": 1421360005956498, "event": "table_file_deletion", "file_number": 12}
Reviewers: yhchiang, rven, dhruba, MarkCallaghan, lgalanis, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D31647
Summary:
Any time one would modify a dependent of any *test*.cc file,
"make" would fail to rebuild the affected test binaries,
e.g., db_test. That was due to the fact that we deliberately
excluded those test-related files from the definition of SOURCES
and only $(SOURCES) was used to create the automatically-generated
.d dependency files. The fix is to generate a .d file for every
source file.
* src.mk: New file. Defines LIB_SOURCES, MOCK_SOURCES
and TEST_BENCH_SOURCES.
* Makefile: Include src.mk.
Reflect s/SOURCES/LIB_SOURCES/ renaming.
* build_tools/build_detect_platform: Remove the code
that was used to generate SOURCES= and MOCK_SOURCES=
definitions in make_config.mk. Those lists of files
are now hard-coded in src.mk. Hard-coding this list of
sources is desirable, because without that, one risks
including stray .cc files in a build, making it not reproducible.
Test Plan:
Touch a file used by db_test's dependent .o files and ensure that
they are all recompiled. Before, none would be:
$ touch db/db_impl.h && make db_test
CC db/db_test.o
CC db/column_family.o
CC db/db_filesnapshot.o
CC db/db_impl.o
CC db/db_impl_debug.o
CC db/db_impl_readonly.o
CC db/forward_iterator.o
CC db/internal_stats.o
CC db/managed_iterator.o
CC db/repair.o
CC db/write_batch.o
CC utilities/compacted_db/compacted_db_impl.o
CC utilities/ttl/db_ttl_impl.o
CC util/ldb_cmd.o
CC util/ldb_tool.o
CC util/sst_dump_tool.o
CC util/xfunc.o
CCLD db_test
Reviewers: ljin, igor.sugak, igor, rven, sdong
Reviewed By: sdong
Subscribers: yhchiang, adamretter, fyrz, dhruba
Differential Revision: https://reviews.facebook.net/D33849
Summary:
This patch will update the Makefile and source code so that we can build RocksDB successfully on FreeBSD 10 and 11 (64-bit and 32-bit)
I have also encountered some problems when running tests on FreeBSD, I will try to fix them individually in different diffs
Notes:
- FreeBSD uses clang as its default compiler (http://lists.freebsd.org/pipermail/freebsd-current/2012-September/036480.html)
- GNU C++ compiler have C++ 11 problems on FreeBSD (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193528)
- make is not gmake on FreeBSD (http://www.khmere.com/freebsd_book/html/ch01.html)
Test Plan:
Using VMWare Fusion Create 4 VM machines (FreeBSD 11 64-bit, FreeBSD 11 32-bit, FreeBSD 10 64-bit, FreeBSD 10 32-bit)
- pkg install git gmake gflags archivers/snappy
- git clone https://github.com/facebook/rocksdb.git
- apply this patch
- setenv CXX c++
- setenv CPATH /usr/local/include/
- setenv LIBRARY_PATH /usr/local/lib/
- gmake db_bench
- make sure compilation is successful and db_bench is running
- gmake all
- make sure compilation is successful
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33891
Summary: See https://www.facebook.com/groups/rocksdb.dev/permalink/718536351578215/. We should enable people to run without warning-as-error, since we break this so often for different compilers.
Test Plan: introduce a warning, compile successfully
Reviewers: yhchiang, meyering, igor.sugak, rven, sdong
Reviewed By: sdong
Subscribers: adamretter, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D33963
Summary:
* Updated Makefile to exit `make analyze` with status 1 if scan-build detected any bugs.
* scan-build automatically detects which C++ compiler to use, and sometimes it uses the wrong one (from $CPP). Added implicit parameters to use $CC and $CXX.
* Added `scan_build_report` directory to .gitignore file.
* Added `scan_build_report` directory to clean target.
Test Plan:
Run `make analyze` and verify that exit status is 1, if there are scan-build bugs detected.
Run `make clean` and verify that files in `scan_build_report` directory are deleted.
After running `make analyze; git status` and verify that no untracked files in `scan_build_report` directory.
Reviewers: meyering, sdong, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D33831
Summary:
* Makefile (WARNING_FLAGS): Add -W and -Wextra, and at least for now,
-Wno-unused-parameter. Without that latter one, there would be
many errors/warnings about unused parameters.
Test Plan:
run these and ensure there is no warning:
make clean && make
make clean && USE_CLANG=1 make
Reviewers: ljin, sdong, igor.sugak, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33819
Summary:
Prior to this change, "make check" would always waste a lot of
time relinking 60+ binaries. With this change, it does that
only when the generated file, util/build_version.cc, changes,
and that happens only when the date changes or when the
current git SHA changes.
This change makes some other improvements: before, there was no
rule to build a deleted util/build_version.cc. If it was somehow
removed, any attempt to link a program would fail.
There is no longer any need for the separate file,
build_tools/build_detect_version. Its functionality is
now in the Makefile.
* Makefile (DEPFILES): Don't filter-out util/build_version.cc.
No need, and besides, removing that dependency was wrong.
(date, git_sha, gen_build_version): New helper variables.
(util/build_version.cc): New rule, to create this file
and update it only if it would contain new information.
* build_tools/build_detect_platform: Remove file.
* db/db_impl.cc: Now, print only date (not the time).
* util/build_version.h (rocksdb_build_compile_time): Remove
declaration. No longer used.
Test Plan:
- Run "make check" twice, and note that the second time no linking is performed.
- Remove util/build_version.cc and ensure that any "make"
command regenerates it before doing anything else.
- Run this: strings librocksdb.a|grep _build_.
That prints output including the following:
rocksdb_build_git_date:2015-02-19
rocksdb_build_git_sha:2.8.fb-1792-g3cb6cc0
Reviewers: ljin, sdong, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33591
Summary:
The "ar" command was excessively verbose.
This abbreviates it to be like CC and LD.
Also, factor "rs" options into ARFLAGS.
Test Plan:
$ rm librocksdb.a
$ make librocksdb.a
AR librocksdb.a
ar: creating librocksdb.a
Reviewers: ljin, sdong, igor.sugak, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33699
Summary:
Many link commands were identical.
Factor that out into a variable, AM_LINK, and use
it in place of all of those open-coded commands.
Test Plan: run "make check"
Reviewers: ljin, igor, sdong
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33585
Summary:
With this change, make now prints a summary line for each
compiler and linker invocation, e.g.,:
CC db/builder.o
CC db/c.o
CC db/column_family.o
To see full commands, insert "V=1" into your make command.
E.g., run "make V=1 all" if you want it to print each command
in its full glory.
$^ is GNU make's abbreviation for the prerequisites of the current target.
These AM_V_... variables expand to some very short string like "CC" or
"LD", by default, so that the output of "make" is readable. If/when you
want more details, just build with "make V=1 ...", and make will print
each full command as it is executed. If you prefer to see the noise
all the time, and only want to optionally see the abbreviated output,
set AM_DEFAULT_VERBOSITY=1 in your environment, and then build with
V=0 to see the abbreviated command indicators.
Test Plan:
invoke make a few different ways and observe:
make clean; make # abbreviated
make clean; make V=0 # also abbreviated
make clean; make V=1 # full detail
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33579
Summary:
This is in preparation for some factorization.
* Makefile (deletefile_test): Add $(COVERAGEFLAGS) to link command.
(options_test): Remove explicit (redundant) dependency on
options_helper.o: that is already a dependent, via $(LIBOBJECTS)
Test Plan: run make
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33573
Summary:
There were Makefile rules to build those two targets,
but neither rule has worked for a long time, due to missing
dependent source files. Remove those rules.
Test Plan: run "make"
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33567
Summary:
Before this change, running "make" with no arguments would
silently run the rules for the "uninstall" target(!). Don't do that.
* Makefile (default): New, first target; depend on "all".
(uninstall, install): Do not hide the commands we run.
Test Plan:
Run "make" and verify that the rules for "uninstall" are no longer run.
Instead, note that many files are compiled and linked. Before, none were.
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33531
Summary:
* Makefile (dummy): Prefix this statement with "dummy := ",
so that it no longer triggers a syntax error from GNU make 3.80
and earlier. Reported by nielsl in
https://github.com/facebook/rocksdb/issues/509
Test Plan: run make
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33429
Summary:
The code being removed would invoke sed differently to work
around a portability difference in how sed -i works (different
on MacOS). Yet performing a host-type-based ifdef fails when
the tools installed do not match. That sed use was solely to
post-process the .d file. Instead, generate the desired output
directly, by using the compiler's -MT<FILE> option.
* Makefile (%.d: %.cc): With the prior use of Makefile-ifdef'd
sed, when building on MacOS with gnu sed, every run of this rule
would fail with a sed usage error. Also list each .d file as a
dependent.
Test Plan:
Ensure that a selected .d file is the same as before both with
g++ and with clang++. However, note that the new .d files each
contain a new reference to the .d file itself.
Reviewers: sdong, ljin, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D33369
Summary:
Added a new target, ##make analyze##, to the Makefile. This command runs the clang static analyzer and builds the sources the same way ##make all## does. The resulting report is put into ##$(RocksDbSourceRoot)/scan_build_report/##
If the development environment is a Facebook devserver and ##ROCKSDB_NO_FBCODE## is not set, then scan-build from fbcode is used. If it is not run on a Facebook devserver, scan-build should be available in ##$PATH##. I'll add details to the wiki on how to install scan-build in a non-Facebook devserver environment.
Test Plan:
Run the following commands on a Facebook devserver and Mac OS, and ensure there are no build or test errors.
```
% make all check -j32
% make clean
% USE_CLANG=1 make all -j32
% make analyze
% USE_CLANG=1 make analyze
```
Reviewers: sdong, lgalanis, leveldb, igor
Reviewed By: igor
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D32799
Summary: Add CappedFixTransform, which is the same as the fixed-length prefix extractor, except that when the slice is shorter than the fixed length, it uses the full key.
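For illustration, a minimal sketch of wiring the capped extractor into application code; it assumes the factory function is exposed as NewCappedPrefixTransform() (the name used in current RocksDB headers) and a scratch DB path, neither of which comes from this diff:
```
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Cap prefixes at 8 bytes: "0123456789" gets prefix "01234567", while a
  // shorter key such as "abc" uses the whole key as its prefix.
  options.prefix_extractor.reset(rocksdb::NewCappedPrefixTransform(8));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/capped_prefix_demo", &db);
  delete db;
  return s.ok() ? 0 : 1;
}
```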
Test Plan:
Add test cases to db_test and options_test, and add a new test.
Reviewers: yhchiang, rven, igor
Reviewed By: igor
Subscribers: MarkCallaghan, leveldb, dhruba, yoshinorim
Differential Revision: https://reviews.facebook.net/D31887
Summary: This was a feature request by osquery. See task t5617758
Test Plan: compiles and memenv_test runs
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D32115
Summary:
There's a bug in TSAN (or libstdc++?) with std::shared_ptr<> for some reason. In db_test, only FlushSchedule is affected.
See more: https://groups.google.com/forum/#!topic/thread-sanitizer/vz_s-t226Vg
With this change and all the other diffs from @sdong and me, our db_test should be TSAN-clean. I'll move on to other tests.
Test Plan: no more flush schedule when running TSAN
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: sdong, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D32469
Summary: We don't have plans to work on this in the short term. If we ever resurrect the project, we can find the code in the history. No need for it to linger around
Test Plan: no test
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D32349
Summary:
1. If the WAL directory is different from the DB directory, sync the WAL directory after creating a log file under it.
2. After creating an SST file, sync its parent directory instead of the DB directory.
3. Change the kResetDeleteUnsyncedFiles check in fault_injection_test. Since we changed the behavior to sync the log file's parent directory after the first WAL sync instead of on creation, kResetDeleteUnsyncedFiles no longer guarantees that post-sync updates are visible.
Test Plan: make all check
Reviewers: yhchiang, rven, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D32067
Summary: When you compile with COMPILE_WITH_TSAN=1, we will compile the code with -fsanitize=thread. This will help us find and resolve a bunch of data race issues we might have.
Test Plan: COMPILE_WITH_TSAN=1 m db_test
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D32019
Summary:
This is a port of [[ https://github.com/google/leveldb/blob/master/db/fault_injection_test.cc | LevelDB's fault_injection_test ]] to RocksDB. Unfortunately it fails with:
```
==== Test FaultInjectionTest.FaultTest
db/fault_injection_test.cc:491: Corruption: no meta-nextfile entry in descriptor
#0 ./fault_injection_test() [0x41477a] rocksdb::FaultInjectionTest::PartialCompactTestReopenWithFault(rocksdb::FaultInjectionTest::ResetMethod, int, int) /data/users/tomdzk/rocksdb/db/fault_injection_test.cc:491
#1 ./fault_injection_test() [0x40a38a] rocksdb::_Test_FaultTest::_Run() /data/users/tomdzk/rocksdb/db/fault_injection_test.cc:517
#2 ./fault_injection_test() [0x415bea] rocksdb::_Test_FaultTest::_RunIt() /data/users/tomdzk/rocksdb/db/fault_injection_test.cc:507
#3 ./fault_injection_test() [0x584367] rocksdb::test::RunAllTests() /data/users/tomdzk/rocksdb/util/testharness.cc:70
#4 /usr/local/fbcode/gcc-4.8.1-glibc-2.17/lib/libc.so.6(__libc_start_main+0x10e) [0x7f7a40857efe] ?? ??:0
#5 ./fault_injection_test() [0x408bb8] _start ??:0
```
so I commented out the test invocation in the source code for now (lines 514-520) so it can be merged.
Test Plan: This is a new test.
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D31587
Summary: I'm tired of double-tab when opening build_tools/<something>. This change will make bu<tab> fully complete my path :)
Test Plan: `vi bu<tab>` gives me `vi build_tools/` yay!
Reviewers: yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D30639
Summary:
Create a benchmark for testing memtablereps. This diff is a bit rough, but it should do the trick until other bootcampers can clean it up.
Updates to address review comments:
* Removed the mutexes.
* Changed ReadWriteBenchmark to fix the number of reads and count the number of writes we can perform in that time.
Test Plan:
Run it.
Below runs pass
./memtablerep_bench --benchmarks fillrandom,readrandom --memtablerep skiplist
./memtablerep_bench --benchmarks fillseq,readseq --memtablerep skiplist
./memtablerep_bench --benchmarks readwrite,seqreadwrite --memtablerep skiplist --num_operations 200 --num_threads 5
./memtablerep_bench --benchmarks fillrandom,readrandom --memtablerep hashskiplist
./memtablerep_bench --benchmarks fillseq,readseq --memtablerep hashskiplist --num_scans 2
./memtablerep_bench --benchmarks fillseq,readseq --memtablerep vector
Reviewers: jpaton, ikabiljo, sdong
Reviewed By: sdong
Subscribers: dhruba, ameyag
Differential Revision: https://reviews.facebook.net/D22683
Summary:
Add support for nested config for the block-based table factory. The format looks like this:
"write_buffer_size=1024;block_based_table_factory={block_size=4k};max_write_buffer_num=2"
Test Plan: unit test
Reviewers: yhchiang, rven, igor, ljin, jonahcohen
Reviewed By: jonahcohen
Subscribers: jonahcohen, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D29223
Summary: Add -fno-exceptions flag to ROCKSDB_LITE.
Test Plan: make OPT=-DROCKSDB_LITE shared_lib -j32
Reviewers: sdong, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D29901
Summary:
While running rocksdb tests, we sometimes encounter errors and
the test run stops. We now provide a new make target called check_some,
which restarts the test run from a specific test and continues from
there, depending on the value of the environment variable ROCKSDBTESTS_START.
Test Plan:
Run make check_some with different values of
ROCKSDBTESTS_START.
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D29913
Summary:
Add the GetThreadList API, which allows developers to track the
status of each thread. Currently, calling GetThreadList will
only get the list of background threads in RocksDB, with their
thread-id and thread-type (priority) set. More support will be
added in later diffs.
ThreadStatus currently has the following properties:
// A unique ID for the thread.
const uint64_t thread_id;
// The type of the thread, which could be ROCKSDB_HIGH_PRIORITY,
// ROCKSDB_LOW_PRIORITY, or USER_THREAD.
const ThreadType thread_type;
// The name of the DB instance the thread is currently
// involved with. It would be set to an empty string if the thread
// is not involved in any DB operation.
const std::string db_name;
// The name of the column family the thread is currently
// involved with. It would be set to an empty string if the thread
// is not involved with any column family.
const std::string cf_name;
// The event the current thread is involved in.
// It would be set to an empty string if information about the event
// is not currently available.
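A small usage sketch, assuming the Env::GetThreadList() entry point as exposed in current RocksDB headers; the DumpThreads helper and the print format are illustrative only:
```
#include <cstdio>
#include <vector>

#include "rocksdb/env.h"
#include "rocksdb/thread_status.h"

void DumpThreads() {
  std::vector<rocksdb::ThreadStatus> statuses;
  rocksdb::Status s = rocksdb::Env::Default()->GetThreadList(&statuses);
  if (!s.ok()) {
    return;  // e.g. thread tracking not supported by this Env
  }
  for (const auto& ts : statuses) {
    std::printf("thread %llu  type %d  db '%s'  cf '%s'\n",
                static_cast<unsigned long long>(ts.thread_id),
                static_cast<int>(ts.thread_type),
                ts.db_name.c_str(), ts.cf_name.c_str());
  }
}
```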
Test Plan:
./thread_list_test
export ROCKSDB_TESTS=GetThreadList
./db_test
Reviewers: rven, igor, sdong, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D25047
Currently maven publishing uses the library with debug symbols, which
leads to unnecessarily big library sizes. Included strip to remove the
unnecessary symbols. 40M -> 2.7M
Prior to this commit, too many targets had a dependency
on the javadocs target.
Introduced one additional target, "javalib", which resolves
that situation. JavaDoc will now be generated only once, while
executing a target with the "rocksdbjava" prefix.
Summary:
This is just a simple test that passes two files through a compaction. It shows the framework so that people can continue building new compaction *unit* tests.
In the future we might want to move some Compaction* tests from DBTest here. For example, CompactBetweenSnapshot seems a good candidate.
Hopefully this test can be simpler when we mock out VersionSet.
Test Plan: this is a test
Reviewers: ljin, rven, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28449
Summary:
We need to turn on -Wshorten-64-to-32 for mobile. See D1671432 (internal phabricator) for details.
This diff turns on the warning flag and fixes all the errors. There were also some interesting errors that I might call bugs, especially in plain table. Going forward, I think it makes sense to have this flag turned on and be very very careful when converting 64-bit to 32-bit variables.
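A hypothetical example (not code from this diff) of the pattern the flag complains about and the usual fix, i.e. making the 64-to-32-bit narrowing explicit and range-checked:
```
#include <cassert>
#include <cstdint>
#include <limits>

uint32_t ToBlockSize(uint64_t requested_size) {
  // Implicitly assigning a uint64_t to a uint32_t triggers -Wshorten-64-to-32;
  // checking the range and casting explicitly keeps the conversion deliberate.
  assert(requested_size <= std::numeric_limits<uint32_t>::max());
  return static_cast<uint32_t>(requested_size);
}
```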
Test Plan: compiles
Reviewers: ljin, rven, yhchiang, sdong
Reviewed By: yhchiang
Subscribers: bobbaldwin, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28689
Summary:
This diff adds three sets of APIs to RocksDB.
= GetColumnFamilyMetaData =
* This API allows users to obtain the current state of a RocksDB instance for one column family.
* See GetColumnFamilyMetaData in include/rocksdb/db.h
= EventListener =
* A virtual class that allows users to implement a set of
call-back functions which will be called when specific
events of a RocksDB instance happen.
* To register an EventListener, simply insert it into ColumnFamilyOptions::listeners
= CompactFiles =
* The CompactFiles API takes a set of file numbers and an output level, and RocksDB
will try to compact those files into the specified level.
= Example =
* Example code can be found in example/compact_files_example.cc, which implements
a simple external compactor using the EventListener, GetColumnFamilyMetaData, and
CompactFiles APIs.
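For orientation, a rough sketch (not the example file itself) of combining two of the new APIs, assuming the default-column-family overload of GetColumnFamilyMetaData and the three-argument form of CompactFiles; the CompactL0 helper is illustrative:
```
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Compact every file currently in L0 of the default column family into L1.
rocksdb::Status CompactL0(rocksdb::DB* db) {
  rocksdb::ColumnFamilyMetaData meta;
  db->GetColumnFamilyMetaData(&meta);

  std::vector<std::string> input_files;
  for (const auto& file : meta.levels[0].files) {
    input_files.push_back(file.name);
  }
  if (input_files.empty()) {
    return rocksdb::Status::OK();
  }

  rocksdb::CompactionOptions compact_options;
  return db->CompactFiles(compact_options, input_files, /*output_level=*/1);
}
```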
Test Plan:
listener_test
compactor_test
example/compact_files_example
export ROCKSDB_TESTS=CompactFiles
db_test
export ROCKSDB_TESTS=MetaData
db_test
Reviewers: ljin, igor, rven, sdong
Reviewed By: sdong
Subscribers: MarkCallaghan, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D24705
Summary:
Only one more try, I promise.
I talked to Jim and he mentioned that if we include our system includes with -isystem rather than with -I, that signals to the compiler that those are system includes and thus no warnings are issued. So I turned our glibc includes into system includes and now we no longer get the warning from there, making us shadow-warning-free!
Test Plan: compiles with both clang and gcc
Reviewers: sdong, yhchiang, rven, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28479
Summary:
So glibc is not -Wshadow-safe, so we need to turn it off :(
error: ‘int sigaction(int, const sigaction*, sigaction*)’ hides
constructor for ‘struct sigaction’
The rest of the changes in this diff fix places where we included .h files inside the rocksdb namespace, which is a no-no.
Test Plan: compiles now
Reviewers: ljin, yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28413
Summary: It turns out that -Wshadow has different rules for gcc than for clang. The previous commit fixed clang. This commit fixes the rest of the warnings for gcc.
Test Plan: compiles
Reviewers: ljin, yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28131
Summary:
TestMemEnv simulates all Env APIs using in-memory data structures.
We can use it to speed up the db_test run, which is now reduced to ~7 minutes
when it is enabled.
We can also add features to simulate power/disk failures in a next step.
TestMemEnv is derived from helper/mem_env.
mem_env cannot be used for rocksdb since some of its APIs do not give
the same results as env_posix, and its file read/write is not thread-safe.
Test Plan:
make all -j32
./db_test
./env_mem_test
Reviewers: sdong, yhchiang, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28035
Summary: as title
Test Plan: ./c_test
Reviewers: igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D28119
Summary:
...and fix all the errors :)
Jim suggested turning on -Wshadow because it helped him fix a number of critical bugs in fbcode. I think it's a good idea to be -Wshadow clean.
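A hypothetical illustration (not taken from this diff) of the kind of mistake -Wshadow surfaces; here the inner variable has already been renamed, which is the typical fix:
```
#include <cstddef>
#include <vector>

size_t CountLiveFiles(const std::vector<std::vector<bool>>& levels) {
  size_t count = 0;
  for (const auto& level : levels) {
    // Before the cleanup this local was also named `count`, hiding the outer
    // accumulator; gcc and clang flag that pattern under -Wshadow.
    size_t level_count = 0;
    for (bool live : level) {
      if (live) {
        ++level_count;
      }
    }
    count += level_count;
  }
  return count;
}
```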
Test Plan: compiles
Reviewers: yhchiang, rven, sdong, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D27711
Summary:
Rename Version::Builder to VersionBuilder and expose its definition to a header.
Make VersionBuilder not reference Version or ColumnFamilyData, working only with VersionStorageInfo.
Add version_builder_test which has a simple test.
Test Plan: make all check
Reviewers: rven, yhchiang, igor, ljin
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D27969
Summary: Decouple the code that deals with archived log files from DBImpl. That will make this code easier to reason about and test. It will also make the code easier to improve, because an improver doesn't have to understand DBImpl code in its entirety.
Test Plan: added test
Reviewers: ljin, yhchiang, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D27873
Summary:
Add some helper functions to make sure DB works well for non-default comparators.
Add a test for SimpleSuffixReverseComparator.
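As a hedged sketch of the scenario those helpers exercise (this ReverseComparator is illustrative only and is not the SimpleSuffixReverseComparator used by the test):
```
#include <string>

#include "rocksdb/comparator.h"
#include "rocksdb/options.h"
#include "rocksdb/slice.h"

// Orders keys in reverse bytewise order.
class ReverseComparator : public rocksdb::Comparator {
 public:
  int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
    return b.compare(a);  // swap arguments to invert the default order
  }
  const char* Name() const override { return "example.ReverseComparator"; }
  // No-op implementations are valid; they just give up the key-shortening
  // optimization for index blocks.
  void FindShortestSeparator(std::string*, const rocksdb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}
};

// Usage: the comparator must outlive the DB and must not change across reopens.
//   static ReverseComparator reverse_cmp;
//   rocksdb::Options options;
//   options.comparator = &reverse_cmp;
```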
Test Plan: Run the new test
Reviewers: ljin, rven, yhchiang, igor
Reviewed By: igor
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D27831
Summary:
Make compaction picker easier to test.
The basic idea is to separate a minimal subcomponent of Version into VersionStorageInfo, which is responsible only for the LSM tree. A stub VersionStorageInfo can then be easily created and passed into the compaction picker so that we can check the outputs.
It now passes most tests. Two things still need to be done:
(1) deal with the FIFO compaction's file size.
(2) write an example test to make sure the interface can do the job.
Add a compaction_picker_test to make sure compaction picker code can be easily unit tested.
Test Plan:
Pass all unit tests and compaction_picker_test
Reviewers: yhchiang, rven, igor, ljin
Reviewed By: ljin
Subscribers: leveldb, dhruba
Differential Revision: https://reviews.facebook.net/D27639
Summary:
This diff replaces BlockBasedTable in flush_job_test with TableMock, making it depend on fewer things and making it closer to a unit test than an integration test.
It also introduces a framework to compile mock classes -- any file named *mock.cc will not be compiled into the build; it will only get compiled into the tests. That way we can mock out most other classes: Version, VersionSet, DBImpl, etc.
Test Plan: flush_job_test
Reviewers: ljin, rven, yhchiang, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D27681
Summary:
Abstract out FlushProcess and take it out of DBImpl.
This also includes taking DeletionState outside of DBImpl.
Currently this diff is only doing the refactoring. Future work includes:
1. Decoupling flush_process.cc further, making it depend on less state
2. Writing flush_process_test, which will mock out everything that FlushProcess depends on and test it in isolation
Test Plan: make check
Reviewers: rven, yhchiang, sdong, ljin
Reviewed By: ljin
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D27561