Summary:
RocksDB allows specifying the number of files in L0 that triggers
compactions. Expose this API as a command-line parameter for
running db_stress.
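For illustration, this is roughly how such a flag could be wired into db_stress; the flag name and default below are assumptions, not necessarily the exact ones in this diff:

    #include "rocksdb/options.h"

    // Hypothetical flag; db_stress defines its FLAGS_* variables similarly.
    static int FLAGS_level0_file_num_compaction_trigger = 4;

    static void ApplyCompactionTrigger(rocksdb::Options* options) {
      // Number of L0 files that triggers a compaction into the next level.
      options->level0_file_num_compaction_trigger =
          FLAGS_level0_file_num_compaction_trigger;
    }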
Test Plan: Run test
Reviewers: sheki, emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D11343
Summary:
This diff simplifies EnvOptions by treating it as POD, similar to Options.
- virtual functions are removed and member fields are accessed directly.
- StorageOptions is removed.
- Options.allow_readahead and Options.allow_readahead_compactions are deprecated.
- Unused global variables are removed: useOsBuffer, useFsReadAhead, useMmapRead, useMmapWrite
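A rough sketch of the POD style described above (illustrative only; the exact member set in the diff may differ):

    // Plain fields with defaults replace virtual accessors and the globals.
    struct EnvOptions {
      bool use_os_buffer = true;     // replaces the global useOsBuffer
      bool use_readahead = true;     // replaces the global useFsReadAhead
      bool use_mmap_reads = false;   // replaces the global useMmapRead
      bool use_mmap_writes = true;   // replaces the global useMmapWrite
    };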
Test Plan: make check; db_stress
Reviewers: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D11175
Summary: These extra options caught some bugs. Will be run via Jenkins now with the crash_test
Test Plan: make crash_test
Reviewers: dhruba, vamsi
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D11151
Summary:
I think the check for "error" that I added had caused
false alarms. Fixed that.
Test Plan:
Revert Plan: OK
Task ID: #
Reviewers: emayanke, dhruba
Reviewed By: emayanke
Differential Revision: https://reviews.facebook.net/D11139
Summary: Make Statistics usable by client
Test Plan: make check; db_bench
Reviewers: dhruba
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D10899
Summary:
This is the initial version. A few ways in which this could
be extended in the future are:
(a) Killing from more places in the source code
(b) Hashing the stack and using that hash to determine whether to crash.
This is to avoid crashing more often at source lines that are executed
more often.
(c) Raising exceptions or returning errors instead of killing
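As a rough illustration of extension (a), a kill point of the following shape could be sprinkled through the write path; TestKillRandom and FLAGS_kill_odds are hypothetical names, not part of this diff:

    #include <cstdlib>

    static int FLAGS_kill_odds = 0;  // 0 disables self-killing

    // With probability 1/FLAGS_kill_odds, terminate the process without any
    // cleanup so the crash test can exercise recovery on the next open.
    inline void TestKillRandom() {
      if (FLAGS_kill_odds > 0 && std::rand() % FLAGS_kill_odds == 0) {
        std::_Exit(1);  // exit immediately, skipping destructors and flushes
      }
    }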
Test Plan:
This whole thing is for testing.
Here is part of output:
python2.7 tools/db_crashtest2.py -d 600
Running db_stress
db_stress retncode -15 output LevelDB version : 1.5
Number of threads : 32
Ops per thread : 10000000
Read percentage : 50
Write-buffer-size : 4194304
Delete percentage : 30
Max key : 1000
Ratio #ops/#keys : 320000
Num times DB reopens: 0
Batches/snapshots : 1
Purge redundant % : 50
Num keys per lock : 4
Compression : snappy
------------------------------------------------
No lock creation because test_batches_snapshots set
2013/04/26-17:55:17 Starting database operations
Created bg thread 0x7fc1f07ff700
... finished 60000 ops
Running db_stress
db_stress retncode -15 output LevelDB version : 1.5
Number of threads : 32
Ops per thread : 10000000
Read percentage : 50
Write-buffer-size : 4194304
Delete percentage : 30
Max key : 1000
Ratio #ops/#keys : 320000
Num times DB reopens: 0
Batches/snapshots : 1
Purge redundant % : 50
Num keys per lock : 4
Compression : snappy
------------------------------------------------
Created bg thread 0x7ff0137ff700
No lock creation because test_batches_snapshots set
2013/04/26-17:56:15 Starting database operations
... finished 90000 ops
Revert Plan: OK
Task ID: #2252691
Reviewers: dhruba, emayanke
Reviewed By: emayanke
CC: leveldb, haobo
Differential Revision: https://reviews.facebook.net/D10581
Summary: db can't reopen safely with disable_wal set!
Test Plan: make db_stress; run db_stress with disable_wal and reopens set and see error
Reviewers: dhruba, vamsi
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10857
Summary: Truncating the generated value at the proper length will help while debugging.
Test Plan: make db_stress; ./db_stress --max_key=10000 --db=/tmp/mcr --threads=1 --ops_per_thread=10000
Reviewers: dhruba, vamsi
Reviewed By: vamsi
Differential Revision: https://reviews.facebook.net/D10845
Summary: ldb works with raw data from the database and needs to be aware of a ttl-database to work with it meaningfully. The '-ttl' option now tells it that. Also extended the ldb_test.py test. This option may be specified along with put, get, scan or dump. There is no support for providing a ttl-value; it uses the default of forever because there is no use-case for this currently.
Test Plan: make ldb_test; python tools/ldb_test.py
Reviewers: dhruba, sheki, haobo, vamsi
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10797
Summary:
When opened with a DBTimestamp::Open call, timestamps are prepended to and stripped from the value during subsequent Put and Get calls respectively. The timestamp is used by Get and by a custom compaction filter to discard values that have exceeded the TTL specified during Open.
Have made a temporary change to the Makefile to let us test with the temporary file TestTime.cc. Have also changed the private members of db_impl.h to protected so that they can be inherited by the new class DBTimestamp.
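A minimal sketch of the value layout described above (illustrative only, not the actual DBTimestamp code): a 4-byte creation time travels with the value and is checked against the TTL on read.

    #include <cstdint>
    #include <cstring>
    #include <ctime>
    #include <string>

    // Prepend the current time to the user value before it is stored.
    static std::string PrependTimestamp(const std::string& value) {
      int32_t now = static_cast<int32_t>(std::time(nullptr));
      std::string with_ts(reinterpret_cast<const char*>(&now), sizeof(now));
      with_ts += value;
      return with_ts;
    }

    // Strip the timestamp on read; returns false if the value has expired,
    // which is also the check a TTL compaction filter would apply.
    static bool StripTimestampIfFresh(const std::string& stored, int32_t ttl,
                                      std::string* value) {
      if (stored.size() < sizeof(int32_t)) return false;
      int32_t ts;
      std::memcpy(&ts, stored.data(), sizeof(ts));
      if (ttl > 0 && std::time(nullptr) - ts > ttl) return false;  // expired
      value->assign(stored.data() + sizeof(ts), stored.size() - sizeof(ts));
      return true;
    }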
Test Plan: make db_timestamp; TestTime.cc (will not be checked in) shows how to use the APIs currently, but I will write unit tests shortly
Reviewers: dhruba, vamsi, haobo, sheki, heyongqiang, vkrest
Reviewed By: vamsi
CC: zshao, xjin, vkrest, MarkCallaghan
Differential Revision: https://reviews.facebook.net/D10311
Summary:
- Don't see a point in exposing table.h to the public.
- Fixed make clean to also remove *.d files.
Test Plan: make check; db_stress
Reviewers: dhruba, heyongqiang
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10479
Summary: Primarily a refactor. Introduced an LDBTool interface into which customers can plug their own options to create their own version of the ldb tool.
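For context, this is roughly what the customer-facing entry point looks like with the present-day headers (the 2013 signature and namespace may have differed slightly):

    #include "rocksdb/ldb_tool.h"
    #include "rocksdb/options.h"

    int main(int argc, char** argv) {
      rocksdb::Options options;
      // Plug customer-specific options in here (comparator, merge operator,
      // compression settings, ...) to get an ldb that understands the DB.
      rocksdb::LDBTool tool;
      tool.Run(argc, argv, options);
      return 0;
    }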
Test Plan: made ldb tool and tried it.
Reviewers: dhruba, heyongqiang
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10191
Summary: To know which options the crashtest was run with. Also changed print to sys.stdout.write, which is more standard.
Test Plan: python tools/db_crashtest.py
Reviewers: vamsi, akushner, dhruba
Reviewed By: akushner
Differential Revision: https://reviews.facebook.net/D10119
Summary: make crash_test will now invoke the crash_test. Also some cleanup in the db_crashtest.py file
Test Plan: make crash_test
Reviewers: akushner, vamsi, sheki, dhruba
Reviewed By: vamsi
Differential Revision: https://reviews.facebook.net/D9987
Summary: The crash_test depends on db_stress being able to work with a pre-existing dir
Test Plan: make db_stress; Run db_stress with 'destroy_db_initially=0'
Reviewers: vamsi, dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10041
Summary: For sanity w.r.t. the way we split up the reopens equally among the ops/thread
Test Plan: make db_stress; db_stress --ops_per_thread=10 --reopens=10 => error
Reviewers: vamsi, dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D10023
Summary:
When I run db_crashtest, I see a lot of warnings that say db_stress completed
before it was killed. To fix that I made ops per thread a very large value so that it keeps
running until it is killed.
I also set #reopens to 0. Since we are killing the process anyway, the 'simulated crash'
that happens during reopen may not add additional value.
I usually see 10-25K ops happening before the kill. So I increased max_key from 100 to
1000 so that we use more distinct keys.
Test Plan:
Ran a few times.
Revert Plan: OK
Task ID: #
Reviewers: emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9909
Summary: The script runs and kills the stress test periodically. Default values are used in the script for now. Should I make this a part of the Makefile or the automated rocksdb build? The values can easily be changed in the script right now, but should I add some support for variable values or input to the script? I believe the script achieves its objective of crashing unsafely and reopening while expecting the database to stay sane.
Test Plan: python tools/db_crashtest.py
Reviewers: dhruba, vamsi, MarkCallaghan
Reviewed By: vamsi
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9369
Summary:
Earlier the Statistics object was a raw pointer. This meant the user had to clean up
the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope; hence the Statistics object would never be deleted.
Now using a shared_ptr to manage this.
Want this in before the next release.
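A minimal sketch of the resulting usage, written with present-day rocksdb names (the namespace was still leveldb at the time):

    #include "rocksdb/options.h"
    #include "rocksdb/statistics.h"

    rocksdb::Options MakeOptionsWithStats() {
      rocksdb::Options options;
      // CreateDBStatistics() hands back a std::shared_ptr<Statistics>, so the
      // object stays alive as long as the options/DB hold it -- no manual
      // delete is needed after the creating function returns.
      options.statistics = rocksdb::CreateDBStatistics();
      return options;
    }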
Test Plan: make all check.
Reviewers: dhruba, emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9735
Summary: Simple sed command to replace NULL with nullptr in the tools directory. This was missed by the previous codemod.
Test Plan: it compiles
Reviewers: emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9621
Summary:
This patch allows an application to specify whether to use buffered I/O,
reads-via-mmap and writes-via-mmap, per database. Earlier, a global
static variable was used to configure this functionality.
The default setting remains the same (and is backward compatible):
1. use bufferedio
2. do not use mmaps for reads
3. use mmap for writes
4. use readaheads for reads needed for compaction
I also added a parameter to db_bench to be able to explicitly specify
whether to do readaheads for compactions or not.
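For reference, the mmap part of this looks roughly as follows with today's option names (the buffered-I/O and compaction-readahead knobs are omitted, and the original patch may have spelled the fields differently):

    #include "rocksdb/options.h"

    rocksdb::Options MakeIoOptions() {
      rocksdb::Options options;
      options.allow_mmap_reads = false;   // default: no mmap for reads
      options.allow_mmap_writes = true;   // default described above: mmap writes
      return options;
    }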
Test Plan: make check
Reviewers: sheki, heyongqiang, MarkCallaghan
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9429
Summary: ldb_test.py did a lot of assertFalse checks and displayed all the failure messages on standard output, making it hard to tell a successful run from a failed one. Also, many empty lines used to be needlessly printed. Added some progression "feel-good" lines to the tests.
Test Plan: python ldb_test.py
Reviewers: dhruba, sheki, dilipj, chip
Reviewed By: dilipj
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9297
Summary:
Also added some comments and fixed some bugs in
stats reporting. Now the stats seem to match what is expected.
Test Plan:
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --test_batches_snapshots=1 --ops_per_thread=1000 --threads=1 --max_key=320
LevelDB version : 1.5
Number of threads : 1
Ops per thread : 1000
Read percentage : 10
Delete percentage : 30
Max key : 320
Ratio #ops/#keys : 3
Num times DB reopens: 10
Batches/snapshots : 1
Num keys per lock : 4
Compression : snappy
------------------------------------------------
No lock creation because test_batches_snapshots set
2013/03/04-15:58:56 Starting database operations
2013/03/04-15:58:56 Reopening database for the 1th time
2013/03/04-15:58:56 Reopening database for the 2th time
2013/03/04-15:58:56 Reopening database for the 3th time
2013/03/04-15:58:56 Reopening database for the 4th time
Created bg thread 0x7f4542bff700
2013/03/04-15:58:56 Reopening database for the 5th time
2013/03/04-15:58:56 Reopening database for the 6th time
2013/03/04-15:58:56 Reopening database for the 7th time
2013/03/04-15:58:57 Reopening database for the 8th time
2013/03/04-15:58:57 Reopening database for the 9th time
2013/03/04-15:58:57 Reopening database for the 10th time
2013/03/04-15:58:57 Reopening database for the 11th time
2013/03/04-15:58:57 Limited verification already done during gets
Stress Test : 1811.551 micros/op 552 ops/sec
: Wrote 0.10 MB (0.05 MB/sec) (598% of 1011 ops)
: Wrote 6050 times
: Deleted 3050 times
: 500/900 gets found the key
: Got errors 0 times
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=1000 --threads=1 --max_key=320
LevelDB version : 1.5
Number of threads : 1
Ops per thread : 1000
Read percentage : 10
Delete percentage : 30
Max key : 320
Ratio #ops/#keys : 3
Num times DB reopens: 10
Batches/snapshots : 0
Num keys per lock : 4
Compression : snappy
------------------------------------------------
Creating 80 locks
2013/03/04-15:58:17 Starting database operations
2013/03/04-15:58:17 Reopening database for the 1th time
2013/03/04-15:58:17 Reopening database for the 2th time
2013/03/04-15:58:17 Reopening database for the 3th time
2013/03/04-15:58:17 Reopening database for the 4th time
Created bg thread 0x7fc0f5bff700
2013/03/04-15:58:17 Reopening database for the 5th time
2013/03/04-15:58:17 Reopening database for the 6th time
2013/03/04-15:58:18 Reopening database for the 7th time
2013/03/04-15:58:18 Reopening database for the 8th time
2013/03/04-15:58:18 Reopening database for the 9th time
2013/03/04-15:58:18 Reopening database for the 10th time
2013/03/04-15:58:18 Reopening database for the 11th time
2013/03/04-15:58:18 Starting verification
Stress Test : 1836.258 micros/op 544 ops/sec
: Wrote 0.01 MB (0.01 MB/sec) (59% of 1011 ops)
: Wrote 605 times
: Deleted 305 times
: 50/90 gets found the key
: Got errors 0 times
2013/03/04-15:58:18 Verification successful
Revert Plan: OK
Task ID: #
Reviewers: emayanke, dhruba
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9081
Summary: In light of the new option introduced by commit 806e264350, where the database has an option to compact before flushing to disk, we want the stress test to exercise both sides of the option. Made it deterministically and configurably change that option across reopens.
Test Plan: make db_stress; ./db_stress with some different options
Reviewers: dhruba, vamsi
Reviewed By: dhruba
CC: leveldb, sheki
Differential Revision: https://reviews.facebook.net/D9165
Summary:
Store the last flushed sequence number in db_impl. Check against it in the
transaction log iterator. Do not attempt to read ahead if we do not know
whether the data is flushed completely.
Does not work if flush is disabled. Any ideas on fixing that?
* Minor change: iter->Next is now called automatically the first time.
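The guard itself is simple; a hedged sketch (names are assumptions, not the real db_impl members):

    #include <cstdint>

    // Recorded by DBImpl after each successful flush.
    struct FlushState {
      uint64_t last_flushed_sequence = 0;
    };

    // The transaction log iterator only reads ahead when the requested
    // sequence number is known to have been flushed; otherwise it stops and
    // lets the caller retry later.
    bool SafeToReadAhead(const FlushState& state, uint64_t wanted_sequence) {
      return wanted_sequence <= state.last_flushed_sequence;
    }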
Test Plan:
existing test pass.
More ideas on testing this?
Planning to run some stress test.
Reviewers: dhruba, heyongqiang
CC: leveldb
Differential Revision: https://reviews.facebook.net/D9087
Summary:
Use only the counter mechanism. Do away with
incNumFileOpens, incNumFileClose, incNumFileErrors
s/NULL/nullptr/g in db/table_cache.cc
Test Plan: make clean check
Reviewers: dhruba, heyongqiang, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8841
Summary:
Currently the test tracks all writes in memory and
uses it for verification at the end. This has 4 problems:
(a) It needs a mutex for each write to ensure the in-memory update
and the leveldb update are done atomically. This slows down the
benchmark.
(b) The verification phase at the end is time consuming as well
(c) Does not test batch writes or snapshots
(d) We cannot kill the test and restart multiple times in a
loop because the in-memory state will be lost.
I am adding a FLAGS_multi that does MultiGet/MultiPut/MultiDelete
instead of get/put/delete to get/put/delete a group of related
keys with the same values atomically. Every get retrieves the group
of keys and checks that their values are the same. This does not have
the above problems, but the downside is that it does less
validation than the other approach.
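A hedged sketch of the key-group idea (the key formatting and helper names here are illustrative, not the actual db_stress code):

    #include <string>
    #include <vector>

    // For a base key K and group size N, the group is K.0 .. K.N-1; every
    // member is written with the same value in one atomic MultiPut batch.
    std::vector<std::string> GroupKeys(const std::string& base, int n) {
      std::vector<std::string> keys;
      for (int i = 0; i < n; ++i) {
        keys.push_back(base + "." + std::to_string(i));
      }
      return keys;
    }

    // A MultiGet of the group passes only if all members still carry the
    // same value -- the lighter-weight check this mode uses instead of full
    // in-memory verification.
    bool GroupIsConsistent(const std::vector<std::string>& values) {
      for (const auto& v : values) {
        if (v != values.front()) return false;
      }
      return true;
    }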
Test Plan:
This whole thing is a test! Here is a small run. I am doing a larger run now.
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=10000 --multi=1 --ops_per_key=25
LevelDB version : 1.5
Number of threads : 32
Ops per thread : 10000
Read percentage : 10
Delete percentage : 30
Max key : 2147483648
Num times DB reopens: 10
Num keys per lock : 4
Compression : snappy
------------------------------------------------
Creating 536870912 locks
2013/02/20-16:59:32 Starting database operations
Created bg thread 0x7f9ebcfff700
2013/02/20-16:59:37 Reopening database for the 1th time
2013/02/20-16:59:46 Reopening database for the 2th time
2013/02/20-16:59:57 Reopening database for the 3th time
2013/02/20-17:00:11 Reopening database for the 4th time
2013/02/20-17:00:25 Reopening database for the 5th time
2013/02/20-17:00:36 Reopening database for the 6th time
2013/02/20-17:00:47 Reopening database for the 7th time
2013/02/20-17:00:59 Reopening database for the 8th time
2013/02/20-17:01:10 Reopening database for the 9th time
2013/02/20-17:01:20 Reopening database for the 10th time
2013/02/20-17:01:31 Reopening database for the 11th time
2013/02/20-17:01:31 Starting verification
Stress Test : 109.125 micros/op 22191 ops/sec
: Wrote 0.00 MB (0.23 MB/sec) (59% of 32 ops)
: Deleted 10 times
2013/02/20-17:01:31 Verification successful
Revert Plan: OK
Task ID: #
Reviewers: dhruba, emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8733
Summary:
Fixed a bug in the stress-test where the correct size was not being
passed to GenerateValue. This bug had been there since the beginning, but assertions
were switched on in our code-base only recently.
Added comments at the top detailing how the stress test works and how to
quicken/slow it down for investigation.
Test Plan: make all check. ./db_stress
Reviewers: dhruba, asad
Reviewed By: dhruba
CC: vamsi, sheki, heyongqiang, zshao
Differential Revision: https://reviews.facebook.net/D8727
Summary:
* Introduce histograms in statistics.h
* A stopwatch to measure time.
* Introduce two timers as a proof of concept.
Replaced NULL with nullptr to fix some lint errors.
Should be useful for google.
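A stand-alone sketch of the stopwatch idea (the real one presumably uses Env::NowMicros; this version uses <chrono> and is illustrative only):

    #include <chrono>
    #include <cstdint>

    class SimpleStopWatch {
     public:
      SimpleStopWatch() : start_(std::chrono::steady_clock::now()) {}

      // Elapsed wall-clock time in microseconds, suitable for recording into
      // a latency histogram when the watched operation finishes.
      uint64_t ElapsedMicros() const {
        return std::chrono::duration_cast<std::chrono::microseconds>(
                   std::chrono::steady_clock::now() - start_)
            .count();
      }

     private:
      std::chrono::steady_clock::time_point start_;
    };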
Test Plan:
ran db_bench and check stats.
make all check
Reviewers: dhruba, heyongqiang
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8637
Summary:
Previously, if you opened a db with num_levels set lower than
the database, you received the unhelpful message "Corruption:
VersionEdit: new-file entry." Now you get a more verbose message
describing the issue.
Also, fix handling of compression_levels (both the run-over-the-end
issue and the memory management of it).
Lastly, unique_ptr'ify a couple of minor calls.
Test Plan: make check
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8151
Summary:
Replace manual memory management with std::unique_ptr in a
number of places; not exhaustive, but this fixes a few leaks with file
handles and clarifies the ownership semantics of file handles used by
the log classes.
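An illustrative before/after of the ownership pattern (WritableFile here is a stand-in type, not the code from the patch):

    #include <memory>

    class WritableFile { /* ... */ };

    class LogWriter {
     public:
      // The writer now owns the file handle explicitly; when the LogWriter is
      // destroyed the handle is released automatically, instead of relying on
      // a manual delete somewhere else.
      explicit LogWriter(std::unique_ptr<WritableFile> file)
          : file_(std::move(file)) {}

     private:
      std::unique_ptr<WritableFile> file_;
    };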
Test Plan: db_stress, make check
Reviewers: dhruba
Reviewed By: dhruba
CC: zshao, leveldb, heyongqiang
Differential Revision: https://reviews.facebook.net/D8043
Summary:
Found issues with `db_test` and `db_stress` when running valgrind.
`DBImpl` had an issue where, if a compaction failed, the uninitialised file size of an output file was used. This manifested as the final call to output to the log in `DoCompactionWork()` branching on uninitialized memory (all the way down in printf's innards).
Test Plan:
Ran `valgrind --track-origins=yes ./db_test` and `valgrind ./db_stress` to see if the issues disappeared.
Ran `make check` to confirm there were no regressions.
Reviewers: vamsi, dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D8001
Summary:
Check in LogAndApply whether the file size is more than the limit set in
Options.
Things to consider: will this be expensive?
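A hedged sketch of the check (presumably against the MANIFEST that LogAndApply writes; field names are assumptions, not the actual VersionSet code):

    #include <cstdint>

    struct ManifestState {
      uint64_t manifest_file_size = 0;
      uint64_t max_manifest_file_size = 0;  // the limit from Options; 0 = off
    };

    // When the current descriptor file grows past the configured limit, the
    // next LogAndApply can switch to a fresh manifest file.
    bool ShouldRollManifest(const ManifestState& s) {
      return s.max_manifest_file_size > 0 &&
             s.manifest_file_size > s.max_manifest_file_size;
    }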
Test Plan: make all check. Inputs on a new unit test?
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7701
Summary: The queries will come from stdin. One key per line. The output will be in stdout, in the format of "<key> ==> <value>" if found, or "<key>" if not found. "--hex" uses HEX-encoded keys and values in both input and output.
Test Plan: ldb query --db=leveldb_db --hex
Reviewers: dhruba, emayanke, sheki
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7617
Summary: The current parsing logic ignores any additional chars after the arg.
Test Plan: "./sst_dump --verify_checksumAAA" now outputs error.
Reviewers: dhruba, sheki, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7611
Summary:
Create an executable with
* A thread to do Puts
* A thread to use GetUpdatesSince and check whether we miss any sequence numbers.
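The check in the GetUpdatesSince thread is essentially sequence-number bookkeeping; a hedged sketch (the surrounding threading is omitted and the names are assumptions):

    #include <cstdint>

    // Each batch returned by GetUpdatesSince should start exactly at the next
    // expected sequence number; any gap means updates were missed.
    bool CheckContiguous(uint64_t* expected_seq, uint64_t batch_start_seq,
                         uint64_t batch_count) {
      if (batch_start_seq != *expected_seq) {
        return false;  // missed sequence numbers
      }
      *expected_seq += batch_count;
      return true;
    }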
Test Plan: It runs.
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7383
Summary: Now sst_dump has the same option --output_hex as "ldb dump" and also share the same output format. So we can do "sst_dump ... | ldb load ..." for an experiment.
Test Plan:
[zshao@dev485 ~/git/rocksdb] ./sst_dump --file=/data/users/zshao/test_leveldb/000005.sst --output_hex | head -n 2
0000027F4FBE00000101000000000000 ==> D901000000000000000057596F7520726563656976656420746F6461792773207370656369616C20676966742120436C69636B2041636365707420746F207669657720796F75722047696674206265666F72652069742064697361707065617273210000000000000000
000007F9C2D400000102000000000000 ==> D1010000000000000000544974277320676F6F6420746F206265204B696E67E280A6206F7220517565656E212048657265277320796F75722054756573646179204D79737465727920476966742066726F6D20436173746C6556696C6C65219B7B227A636F6465223A22633566306531633039663764222C227470223A22613275222C227A6B223A22663638663061343262666264303966383435666239626235366365396536643024246234506C345139382E734C4D33522169482D4F31315A64794C4B7A4F4653766D7863746534625F2A3968684E3433786521776C427636504A414355795F70222C227473223A313334343936313737357D0000000000000000
Reviewers: dhruba, sheki, emayanke
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7587
Summary: This command accepts key-value pairs from stdin with the same format of "ldb dump" command. This allows us to try out different compression algorithms/block sizes easily.
Test Plan: dump, load, dump, verify the data is the same.
Reviewers: dhruba
Reviewed By: dhruba
CC: leveldb
Differential Revision: https://reviews.facebook.net/D7443
Summary:
The manifest file contains a series of edits. If the verbose
option is switched on, then print each individual edit in the
manifest file. This helps in debugging.
Test Plan: make clean manifest_dump
Reviewers: emayanke, sheki
Reviewed By: sheki
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6807
Summary:
db_stress has an option to reopen the database. Make it such that the
previous handle is not closed before we reopen; this simulates a
situation similar to a process crash.
Added a new API to DBImpl to remove the lock file.
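A hedged sketch of the reopen-without-close flow (the lock-release hook name below is hypothetical):

    #include <string>
    #include "rocksdb/db.h"

    void ReopenWithoutClose(rocksdb::DB** db, const rocksdb::Options& options,
                            const std::string& dbname) {
      // Deliberately skip `delete *db`: no memtable flush, WAL sync or file
      // close happens, which is what makes this resemble a process crash.
      // The abandoned handle still holds the LOCK file, so the test needs the
      // new hook mentioned above to drop that lock first, e.g.:
      //   (*db)->TEST_ReleaseLockFile();   // hypothetical name
      *db = nullptr;  // leak the old handle on purpose
      rocksdb::Status s = rocksdb::DB::Open(options, dbname, db);
      (void)s;  // the stress test would assert s.ok() here
    }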
Test Plan: run db_stress
Reviewers: emayanke
Reviewed By: emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D6777
Summary:
Create more than one background compaction thread if specified.
This code piece is similar to what exists in db_bench.
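For reference, db_bench-style code does this through the public Env API; a sketch using today's names:

    #include "rocksdb/env.h"
    #include "rocksdb/options.h"

    void ConfigureCompactionThreads(rocksdb::Options* options, int n) {
      options->max_background_compactions = n;
      // Grow the shared low-priority thread pool so the extra compactions can
      // actually run in parallel.
      options->env->SetBackgroundThreads(n, rocksdb::Env::LOW);
    }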
Test Plan: make check
Differential Revision: https://reviews.facebook.net/D6753
Summary: FLAGS_reopen (configurable) specifies the number of times the database is to be reopened. FLAGS_ops_per_thread is divided into points based on that reopen field. At these points all threads come together to wait for the database to reopen. Each thread "votes" for the database to reopen and, when all have voted, the database reopens.
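A minimal sketch of such a rendezvous (illustrative only; the real db_stress uses its own shared-state bookkeeping):

    #include <condition_variable>
    #include <mutex>

    class ReopenBarrier {
     public:
      explicit ReopenBarrier(int num_threads)
          : total_(num_threads), remaining_(num_threads) {}

      // Each worker calls this at a reopen point; the last voter runs the
      // reopen callback, then every thread proceeds to its next stretch of ops.
      template <typename ReopenFn>
      void VoteAndWait(ReopenFn reopen) {
        std::unique_lock<std::mutex> lock(mu_);
        long my_generation = generation_;
        if (--remaining_ == 0) {
          reopen();               // all votes are in: reopen the database
          remaining_ = total_;    // reset for the next reopen point
          ++generation_;
          cv_.notify_all();
        } else {
          cv_.wait(lock, [&] { return generation_ != my_generation; });
        }
      }

     private:
      std::mutex mu_;
      std::condition_variable cv_;
      const int total_;
      int remaining_;
      long generation_ = 0;
    };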
Test Plan: make all;./db_stress
Reviewers: dhruba, MarkCallaghan, sheki, asad, heyongqiang
Reviewed By: dhruba
Differential Revision: https://reviews.facebook.net/D6627