Commit Graph

720 Commits

Chris Riccomini
9db13987b1 Update RocksDB's Java bindings to support multiple native RocksDB builds in the same Jar file. Cross build RocksDB for linux32 and linux64 using Vagrant. Build a cross-platform fat jar that contains osx, linux32, and linux64 RocksDB static builds. 2014-09-26 13:57:12 -07:00
Naveen
ba6d660f6d Resolving merge conflict 2014-09-23 16:00:54 -07:00
Igor Canadi
49aacd8d2b Fix make install
Summary: See https://github.com/facebook/rocksdb/issues/283

Test Plan: make install/uninstall

Reviewers: ljin, sdong, yhchiang

Reviewed By: yhchiang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23373
2014-09-15 15:30:17 -07:00
Yueh-Hsuan Chiang
ebb5c65e60 Add make install
Summary:
Add make install.  If INSTALL_PATH is not set, then rocksdb will be
installed under the "/usr/local" directory (/usr/local/include for headers
and /usr/local/lib for library files).

Test Plan:
Develop a simple rocksdb app, called test.cc, and do the following.

make clean
make static_lib -j32
sudo make install
g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
./test

sudo make uninstall
make clean
make shared_lib -j32
sudo make install
g++ -std=c++11 test.cc -lrocksdb -lbz2 -lz -o test
./test

make INSTALL_PATH=/tmp/path install
make INSTALL_PATH=/tmp/path uninstall
and make sure things are installed / uninstalled in the specified path.

Reviewers: ljin, sdong, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23211
2014-09-11 20:41:02 -07:00
Igor Canadi
6bb7e3ef25 Merger test
Summary: I abandoned https://reviews.facebook.net/D18789, but I wrote a good unit test there, so let's check it in. :)

Test Plan: this is a test

Reviewers: sdong, yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22827
2014-09-08 22:24:40 -07:00
Naveen
1d284db213 Addressing review comments 2014-09-08 17:44:52 -07:00
Igor Canadi
a2bb7c3c33 Push- instead of pull-model for managing Write stalls
Summary:
Introducing WriteController, which is a source of truth about per-DB write delays. Let's define a DB epoch as a period where there are no flushes and compactions (i.e. a new epoch is started when a flush or compaction finishes). Each epoch can either:
* proceed with all writes without delay
* delay all writes by a fixed time
* stop all writes

The three modes are recomputed at each epoch change (flush, compaction), rather than on every write (which is currently the case).

When we have a lot of column families, our current pull behavior adds a big overhead, since we need to loop over every column family for every write. With the new push model, the overhead on the write code path is minimal.

This is just the start. Next step is to also take care of stalls introduced by slow memtable flushes. The final goal is to eliminate function MakeRoomForWrite(), which currently needs to be called for every column family by every write.

Test Plan: make check for now. I'll add some unit tests later. Also, perf test.

Reviewers: dhruba, yhchiang, MarkCallaghan, sdong, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22791
2014-09-08 11:20:25 -07:00
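
To illustrate the push model described above, here is a minimal, hypothetical sketch (not the actual WriteController class): the mode is recomputed once per epoch by flush/compaction code, and the write path only performs a cheap read.

    #include <atomic>
    #include <cstdint>

    // Hypothetical sketch of a push-style write controller.
    class WriteControllerSketch {
     public:
      enum Mode : uint8_t { kProceed, kDelay, kStop };

      // Called when an epoch ends (a flush or compaction finishes).
      void RecomputeAtEpochChange(int l0_files, int slowdown_trigger,
                                  int stop_trigger, uint64_t delay_micros) {
        if (l0_files >= stop_trigger) {
          mode_.store(kStop);
        } else if (l0_files >= slowdown_trigger) {
          delay_micros_.store(delay_micros);
          mode_.store(kDelay);
        } else {
          mode_.store(kProceed);
        }
      }

      // Called on every write: one atomic load instead of looping over
      // every column family.
      Mode GetMode() const { return mode_.load(); }
      uint64_t delay_micros() const { return delay_micros_.load(); }

     private:
      std::atomic<Mode> mode_{kProceed};
      std::atomic<uint64_t> delay_micros_{0};
    };
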
Feng Zhu
0af157f9bf Implement full filter for block based table.
Summary:
1. Make the filter block in filter_block.h a base class. Derive block_based_filter_block and full_filter_block from it. The former is the traditional filter block; full_filter_block is newly added. It generates a filter block that contains all the keys in the SST file.

2. When querying a key, the table first checks if a full_filter is available. If not, it goes to the exact data block and checks using the block_based filter.

3. Users can choose to use full_filter or traditional (block_based_filter). They are stored in the SST file under different meta index names, "filter.filter_policy" or "full_filter.filter_policy", so the table reader is able to know the filter block type.

4. Some optimizations have been done for full_filter_block, thus it requires a different interface compared to the original one in filter_policy.h.

5. The actual implementation of filter bits coding/decoding is placed in util/bloom_impl.cc

Benchmark: base commit 1d23b5c470
Command:
db_bench --db=/dev/shm/rocksdb --num_levels=6 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --write_buffer_size=134217728 --max_write_buffer_number=2 --target_file_size_base=33554432 --max_bytes_for_level_base=1073741824 --verify_checksum=false --max_background_compactions=4 --use_plain_table=0 --memtablerep=prefix_hash --open_files=-1 --mmap_read=1 --mmap_write=0 --bloom_bits=10 --bloom_locality=1 --memtable_bloom_bits=500000 --compression_type=lz4 --num=393216000 --use_hash_search=1 --block_size=1024 --block_restart_interval=16 --use_existing_db=1 --threads=1 --benchmarks=readrandom --disable_auto_compactions=1
Read QPS increases by about 30%, from 2230002 to 2991411.

Test Plan:
make all check
valgrind db_test
db_stress --use_block_based_filter = 0
./auto_sanity_test.sh

Reviewers: igor, yhchiang, ljin, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D20979
2014-09-08 10:37:05 -07:00
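
As a usage illustration, here is a minimal sketch of opting into the full filter, assuming the two-argument NewBloomFilterPolicy() overload where the second argument selects between the block-based and full filter builders (paths and option values are placeholders):

    #include "rocksdb/db.h"
    #include "rocksdb/filter_policy.h"
    #include "rocksdb/table.h"

    int main() {
      rocksdb::BlockBasedTableOptions table_options;
      // false => build one full filter per SST file instead of one filter
      // per data block (assumed overload, see the note above).
      table_options.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10, /*use_block_based_builder=*/false));

      rocksdb::Options options;
      options.create_if_missing = true;
      options.table_factory.reset(
          rocksdb::NewBlockBasedTableFactory(table_options));

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/full_filter_db", &db);
      delete db;
      return s.ok() ? 0 : 1;
    }
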
Feng Zhu
40ddc3d6c4 add cache bench
Summary: 1. A benchmark for cache

Test Plan: ./cache_bench

Reviewers: yhchiang, dhruba, sdong, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22809
2014-09-05 15:55:43 -07:00
Radheshyam Balasundaram
b6fd7811eb Don't do memtable lookup in db_impl_readonly if memtables are empty while opening db.
Summary: In DBImpl::Recover method, while loading memtables, also check if memtables are empty. Use this in DBImplReadonly to determine whether to lookup memtable or not.

Test Plan:
db_test
make check all

Reviewers: sdong, yhchiang, ljin, igor

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22281
2014-08-26 17:19:03 -07:00
sdong
28b5c76004 WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index
Summary:
Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-its-own-write.

WriteBatchWithIndex uses a skiplist to store the binary index. The index stores the offset of the entry in the write batch. When searching for a key, the key for the entry is read by reading the entry from the write batch at that offset.

Define a new iterator class for querying data out of WriteBatchWithIndex. A user can create an iterator of the write batch for one column family, seek to a key and keep calling Next() to see next entries.

I will add more unit tests if people are OK with this API.

Test Plan:
make all check
Add unit tests.

Reviewers: yhchiang, igor, MarkCallaghan, ljin

Reviewed By: ljin

Subscribers: dhruba, leveldb, xjin

Differential Revision: https://reviews.facebook.net/D21381
2014-08-18 16:37:38 -07:00
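
A minimal usage sketch of reading your own writes back out of the batch, assuming the API roughly as it appears in rocksdb/utilities/write_batch_with_index.h (exact header and overloads may differ in this revision):

    #include <iostream>
    #include <memory>
    #include "rocksdb/utilities/write_batch_with_index.h"

    int main() {
      rocksdb::WriteBatchWithIndex batch;  // keeps a skiplist index of offsets
      batch.Put("key1", "value1");
      batch.Delete("key2");

      // Iterate over what is buffered in the batch before it is written to the DB.
      std::unique_ptr<rocksdb::WBWIIterator> it(batch.NewIterator());
      for (it->SeekToFirst(); it->Valid(); it->Next()) {
        const rocksdb::WriteEntry& e = it->Entry();
        std::cout << e.key.ToString() << " -> "
                  << (e.type == rocksdb::kPutRecord ? e.value.ToString()
                                                    : "<deleted>")
                  << std::endl;
      }
      return 0;
    }
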
Naveen
ddb8039e3a RocksDB static build
Makefile changes to download and build the dependencies.
Load the shared library when RocksDB is initialized.
2014-08-18 14:04:37 -07:00
Lei Jin
218857b3f5 remove tailing_iter.h/cc
Summary: as title

Test Plan:
make all check
ran db_bench and saw seek stats at the end

Reviewers: yhchiang, igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21651
2014-08-12 17:13:15 -07:00
Radheshyam Balasundaram
9674c11d01 Integrating Cuckoo Hash SST Table format into RocksDB
Summary:
Contains the following changes:
- Implementation of cuckoo_table_factory
- Adding cuckoo table into AdaptiveTableFactory
- Adding cuckoo_table_db_test, similar to lines of plain_table_db_test
- Minor fixes to Reader: When a key is found in the table, return the key found instead of the search key.
- Minor fixes to Builder: Add table properties that are required by Version::UpdateTemporaryStats() during Get operation. Don't define curr_node as a reference variable as the memory locations may get reassigned during tree.push_back operation, leading to invalid memory access.

Test Plan:
cuckoo_table_reader_test --enable_perf
cuckoo_table_builder_test
cuckoo_table_db_test
make check all
make valgrind_check
make asan_check

Reviewers: sdong, igor, yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21219
2014-08-11 20:21:07 -07:00
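
The cuckoo hashing at the core of this format can be illustrated with a generic sketch (not RocksDB's actual CuckooTableBuilder): every key has two candidate buckets, and on a collision the resident key is displaced into its alternate bucket, up to a bounded number of kicks.

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Generic illustration of cuckoo-hash insertion.
    class CuckooSketch {
     public:
      explicit CuckooSketch(size_t num_buckets) : slots_(num_buckets) {}

      bool Insert(std::string key) {
        size_t b = Bucket(key, 0);
        for (int kicks = 0; kicks < 500; ++kicks) {
          if (slots_[b].empty()) { slots_[b] = std::move(key); return true; }
          std::swap(key, slots_[b]);          // displace the resident key
          size_t b1 = Bucket(key, 0), b2 = Bucket(key, 1);
          b = (b == b1) ? b2 : b1;            // send it to its other bucket
        }
        return false;  // a real builder would rehash or fall back here
      }

     private:
      size_t Bucket(const std::string& k, int which) const {
        uint64_t h = std::hash<std::string>()(k);
        if (which == 1) h *= 0x9e3779b97f4a7c15ULL;  // second hash function
        return h % slots_.size();
      }
      std::vector<std::string> slots_;
    };
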
miguelportilla
93e6b5e9d9 Changes to support unity build:
* Script for building the unity.cc file via Makefile
* Unity executable Makefile target for testing builds
* Source code changes to fix compilation of unity build
2014-08-11 13:22:47 -04:00
Radheshyam Balasundaram
62f9b071ff Implementation of CuckooTableReader
Summary:
Contains:
- Implementation of TableReader based on Cuckoo Hashing
- Unittests for CuckooTableReader
- Performance test for TableReader

Test Plan:
make cuckoo_table_reader_test
./cuckoo_table_reader_test
make valgrind_check
make asan_check

Reviewers: yhchiang, sdong, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20511
2014-07-25 16:37:32 -07:00
Igor Canadi
6296330417 SpatialDB
Summary:
This diff is adding spatial index support to RocksDB.

When creating the DB user specifies a list of spatial indexes. Spatial indexes can cover different areas and have different resolution (i.e. number of tiles). This is useful for supporting different zoom levels.

Each element inserted into SpatialDB has:
* a bounding box, which determines how the element will be indexed
* a string blob, which will usually be the WKB representation of the polygon (http://en.wikipedia.org/wiki/Well-known_text)
* a feature set, which is a map of key-value pairs, where a value can be an int, double, bool, null or a string. The FeatureSet will be a set of tags associated with geo elements (for example, 'road': 'highway' and similar)
* a list of indexes to insert the element in. For example, a small river element will be inserted in the index for a high zoom level, while a country border will be inserted in all indexes (including the index for the low zoom level).

Each query is executed on a single spatial index. A query guarantees that it will return all elements intersecting the specified bounding box, but it might also return some extra non-intersecting elements.

Test Plan: Added bunch of unit tests in spatial_db_test

Reviewers: dhruba, yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20361
2014-07-23 14:22:58 -04:00
Yueh-Hsuan Chiang
b5c4c0b86b [Java] Add the missing ROCKSDB_JAR variable in Makefile
Summary:
Add the missing ROCKSDB_JAR variable in Makefile, which was mistakenly
removed in https://reviews.facebook.net/D20289.

Test Plan:
export ROCKSDB_JAR=
make rocksdbjava
2014-07-23 11:16:18 -07:00
Igor Canadi
f82d4a2498 Also bump version in Makefile 2014-07-23 10:28:41 -04:00
sdong
e6de02103a Add a utility function to guess optimized options based on constraints
Summary:
Add a function GetOptions(), which, based on four parameters users give (read/write amplification thresholds, memory budget for memtables, and target DB size), picks a compaction style and parameters for it. Background threads are not touched yet.

One limit of this algorithm: since the compression rate and key/value size are hard to predict, it's hard to predict the level-0 file size from the write buffer size. Simply assume a 1:1 ratio here.

Sample results: https://reviews.facebook.net/P477

Test Plan: Will add a unit test where some sample scenarios are given and verify that they pick results that make sense

Reviewers: yhchiang, dhruba, haobo, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D18741
2014-07-22 15:24:21 -07:00
Yueh-Hsuan Chiang
ae7743f226 Fixed some make and linking issues of RocksDBJava
Summary:
Fixed some make and linking issues of RocksDBJava. Specifically:
* Add JAVA_LDFLAGS, which does not include gflags
* rocksdbjava library now uses JAVA_LDFLAGS instead of LDFLAGS
* java/Makefile now includes build_config.mk
* rearrange make rocksdbjava workflow to ensure the library file is correctly
  included in the jar file.

Test Plan:
make rocksdbjava
make jdb_bench
java/jdb_bench.sh

Reviewers: dhruba, swapnilghike, zzbennett, rsumbaly, ankgup87

Reviewed By: ankgup87

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20289
2014-07-21 22:41:54 -07:00
Radheshyam Balasundaram
cf3da899b0 Adding a new SST table builder based on Cuckoo Hashing
Summary:
Cuckoo Hashing based SST table builder. Contains:
- Cuckoo Hashing logic and file storage logic.
- Unit tests for logic

Test Plan:
make cuckoo_table_builder_test
./cuckoo_table_builder_test
make check all

Reviewers: yhchiang, igor, sdong, ljin

Reviewed By: ljin

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D19545
2014-07-21 13:26:09 -07:00
Yueh-Hsuan Chiang
2f289dccf3 Add -Wsign-compare to WARNING_FLAGS in Makefile
Summary:
Add -Wsign-compare to WARNING_FLAGS in Makefile, as not all g++ compilers
include -Wsign-compare in -Wall when compiling '.h' files.

Test Plan: make -j32

Reviewers: ljin, igor, sdong

Reviewed By: sdong

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D20169
2014-07-17 17:26:12 -07:00
Stanislau Hlebik
1c9f190ae3 Fix db_test
Summary: Added deletion of DBIterators in DBIterator's tests

Test Plan: make valgrind_check

Reviewers: igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20043
2014-07-16 14:51:43 -07:00
sdong
01700b6911 Update master to version 3.3
Summary: As title

Test Plan: no need

Reviewers: igor, yhchiang, ljin

Reviewed By: ljin

Subscribers: haobo, dhruba, xjin, leveldb

Differential Revision: https://reviews.facebook.net/D19629
2014-07-10 11:59:35 -07:00
Igor Canadi
f0a8be253e JSON (Document) API sketch
Summary:
This is a rough sketch of our new document API. Would like to get some thoughts and comments about the high-level architecture and API.

I didn't optimize for performance at all. Leaving some low-hanging fruit so that we can be happy when we fix them! :)

Currently, a bunch of features are not supported at all. Indexes can only be specified when creating the database. There is no query planner whatsoever. This will all be added in due time.

Test Plan: Added a simple unit test

Reviewers: haobo, yhchiang, dhruba, sdong, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D18747
2014-07-10 09:31:42 -07:00
Lei Jin
5ef1ba7ff5 generic rate limiter
Summary:
A generic rate limiter that can be shared by threads and rocksdb
instances. Will use this to smooth out write traffic generated by
compaction and flush. This will help us get better p99 behavior on flash
storage.

Test Plan:
unit test output
==== Test RateLimiterTest.Rate
request size [1 - 1023], limit 10 KB/sec, actual rate: 10.374969 KB/sec, elapsed 2002265
request size [1 - 2047], limit 20 KB/sec, actual rate: 20.771242 KB/sec, elapsed 2002139
request size [1 - 4095], limit 40 KB/sec, actual rate: 41.285299 KB/sec, elapsed 2202424
request size [1 - 8191], limit 80 KB/sec, actual rate: 81.371605 KB/sec, elapsed 2402558
request size [1 - 16383], limit 160 KB/sec, actual rate: 162.541268 KB/sec, elapsed 3303500

Reviewers: yhchiang, igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D19359
2014-07-08 11:41:57 -07:00
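
A brief usage sketch, assuming the NewGenericRateLimiter() factory and the Options::rate_limiter field (the DB path and rate below are placeholders):

    #include "rocksdb/db.h"
    #include "rocksdb/options.h"
    #include "rocksdb/rate_limiter.h"

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;
      // Cap the bytes/sec that flushes and compactions may write, smoothing
      // out background I/O. A single limiter can be shared by several DBs.
      options.rate_limiter.reset(
          rocksdb::NewGenericRateLimiter(10 * 1024 * 1024 /* 10 MB/s */));

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rate_limited_db", &db);
      delete db;
      return s.ok() ? 0 : 1;
    }
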
Ankit Gupta
e0ebea6cc2 Add doc and end of line 2014-06-22 13:27:22 -07:00
Igor Canadi
00b26c3a83 JSONDocument
Summary:
After evaluating options for JSON storage, I decided to implement our own. The reason is that we'll be able to optimize it better and we get to reduce unnecessary dependencies (which is what we'd get with folly).

I also plan to write a serializer/deserializer for JSONDocument with our own binary format similar to BSON. That way we'll store binary JSON format in RocksDB instead of the plain-text JSON. This means less storage and faster deserialization.

There are still some inefficiencies left here. I plan to optimize them after we develop a functioning DocumentDB. That way we can move and iterate faster.

Test Plan: added a unit test

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: haobo

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D18831
2014-06-20 11:14:14 +02:00
Igor Canadi
f068d2a94d Move master version to 3.2 2014-05-23 10:27:56 -07:00
Mike Lin
76596b5318 Fix building RocksDB in paths containing spaces -- quote path names in Makefile and build_detect_platform. 2014-05-10 21:01:25 -07:00
Igor Canadi
313b2e5da1 Better INSTALL.md and Makefile rules
Summary: We have a lot of problems with gflags. However, when compiling the rocksdb static library, we don't need the gflags dependency. Reorganize INSTALL.md so that first-time customers don't need any dependency installed to actually build the rocksdb static library.

Test Plan: none

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18501
2014-05-07 16:51:30 -07:00
Igor Canadi
d2569fea47 log_and_apply_bench on a new benchmark framework
Summary:
db_test includes a benchmark for LogAndApply. This diff removes it from db_test and puts it into a separate log_and_apply bench. I just wanted to play around with our new benchmark framework and figure out how it works.

I would also like to show you how great it is! I believe the right set of microbenchmarks can speed up our productivity a lot and help catch regressions early.

Test Plan: no

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: yhchiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18261
2014-05-05 11:11:48 -07:00
Yueh-Hsuan Chiang
d56959a5fc [Java] Use environmental variable JAVA_HOME in Makefile for RocksJava. 2014-05-04 13:23:21 -07:00
Krzysztof Kowalczyk
2b7cf03e0d Update Makefile 2014-04-29 14:29:45 -07:00
Igor Canadi
f1c9aa6ebe More unsigned/signed compare fixes 2014-04-29 13:01:06 -07:00
Igor Canadi
38693d99c4 Fix more signed/unsigned comparisons 2014-04-29 12:40:18 -07:00
Igor Canadi
e525bb16ea Make kMajorVersion and kMinorVersion take version from version macros 2014-04-29 11:59:48 -04:00
Igor Canadi
a40970aa31 Run whitebox test before black box 2014-04-24 12:28:16 -04:00
Igor Canadi
d0939cdcea Single-threaded asan_crash_test 2014-04-21 15:42:28 -07:00
Igor Canadi
8dc34364d2 Rename "benchmark" back to "bench".
Also, `benchharness.cc` is no longer compiled into the rocksdb library.
2014-04-21 13:12:15 -07:00
Pratyush Seth
ff1b5df4c6 Added benchmark functionality on the lines of folly/Benchmark.h
Summary: Added benchmark functionality on the lines of folly/Benchmark.h

Test Plan: Added unit tests

Reviewers: igor, haobo, sdong, ljin, yhchiang, dhruba

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17973
2014-04-21 12:29:55 -07:00
Lei Jin
0f2d768191 hints for narrowing down the FindFile range and avoiding checking irrelevant L0 files
Summary:
The file tree structure in Version is prebuilt and the range of each file is known.
On the Get() code path, we do binary search in FindFile() by comparing
target key with each file's largest key and also check the range for each L0 file.
With some pre-calculated knowledge, each key comparison that has been done can serve
as a hint to narrow down further searches:
(1) If a key falls within a L0 file's range, we can safely skip the next
file if its range does not overlap with the current one.
(2) If a key falls within a file's range in level L0 - Ln-1, we should only
need to binary search in the next level for files that overlap with the current one.

(1) will be able to skip some files depending on the key distribution.
(2) can greatly reduce the range of binary search, especially for bottom
levels, given that one file most likely only overlaps with N files from
the level below (where N is max_bytes_for_level_multiplier). So on level
L, we will only look at ~N files instead of N^L files.

Some initial results: measured with a 500M key DB, when the write load is light (10k/s = 1.2M/s), this
improves QPS ~7% on top of blocked bloom. When the write load is heavier (80k/s =
9.6M/s), it gives us a ~13% improvement.

Test Plan: make all check

Reviewers: haobo, igor, dhruba, sdong, yhchiang

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17205
2014-04-21 09:10:12 -07:00
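
The hinting idea can be shown with a generic sketch (not the actual Version::Get() code): binary-search one sorted level for the first file whose largest key bounds the target, and let the caller reuse that position to narrow the search range on the next level.

    #include <string>
    #include <vector>

    struct FileRange { std::string smallest, largest; };

    // Generic illustration: return the index of the first file in a sorted,
    // non-overlapping level whose largest key is >= target, searching only
    // [lo, hi) -- the hint carried over from the previous level.
    size_t FindFileWithHint(const std::vector<FileRange>& level,
                            const std::string& target,
                            size_t lo, size_t hi) {
      while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (level[mid].largest < target) lo = mid + 1;
        else hi = mid;
      }
      return lo;  // caller narrows the next level's [lo, hi) around this file
    }
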
Ankit Gupta
ebd85e8f3a Fix build 2014-04-18 10:47:03 -07:00
Ankit Gupta
dc291f5bf0 Merge branch 'master' of https://github.com/facebook/rocksdb
Conflicts:
	Makefile
	java/Makefile
	java/org/rocksdb/Options.java
	java/rocksjni/portal.h
2014-04-18 10:32:14 -07:00
Yueh-Hsuan Chiang
bb6fd15a6e [Java] Add a basic binding and test for BackupableDB and StackableDB.
Summary:
Add a skeleton binding and test for BackupableDB which shows that BackupableDB
and RocksDB can share the same JNI calls.

Test Plan:
make rocksdbjava
make jtest

Reviewers: haobo, ankgup87, sdong, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17793
2014-04-17 17:28:51 -07:00
Igor Canadi
62551b1c4e Don't compile sync_point if NDEBUG
Summary:
We don't really need sync_point.o if we're compiling with NDEBUG.

This diff depends on D17823

Test Plan: compiles

Reviewers: haobo, ljin, sdong

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17829
2014-04-17 10:49:58 -07:00
Ankit Gupta
320ae72e17 Add histogramType for statistics 2014-04-16 21:38:33 -07:00
Igor Canadi
7d838856cf Fix compile issues when doing make release 2014-04-15 16:00:10 -07:00
Igor Canadi
588bca2020 RocksDBLite
Summary:
Introducing RocksDBLite! Removes all the non-essential features and reduces the binary size. This effort should help our adoption on mobile.

Binary size when compiling for IOS (`TARGET_OS=IOS m static_lib`) is down to 9MB from 15MB (without stripping)

Test Plan: compiles :)

Reviewers: dhruba, haobo, ljin, sdong, yhchiang

Reviewed By: yhchiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17835
2014-04-15 13:39:26 -07:00
Igor Canadi
23c8f89b57 Revert "Don't compile ldb tool into static library"
This reverts commit e296577ef6.
2014-04-15 11:29:02 -07:00
Igor Canadi
a347ffe92f Revert "Fix sst_dump and reduce_levels_test compile errors"
This reverts commit d8f00b4109.
2014-04-15 11:28:52 -07:00
Igor Canadi
d8f00b4109 Fix sst_dump and reduce_levels_test compile errors 2014-04-15 11:13:12 -07:00
Igor Canadi
e296577ef6 Don't compile ldb tool into static library
Summary:
This is the first step of my effort to reduce the size of librocksdb.a for use in mobile.

ldb object files are huge and are meant to be used as a command line tool. I moved them to the `tools/` directory and include them only when compiling `ldb`

This diff reduced librocksdb.a from 42MB to 39MB on my mac (not stripped).

Test Plan: ran ldb

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: yhchiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17823
2014-04-15 10:52:39 -07:00
Yueh-Hsuan Chiang
ca4fa2047e [Java] rename 'make jni' to 'make rocksdbjava' 2014-04-10 10:04:48 -07:00
Yueh-Hsuan Chiang
0f5cbcd798 [JNI] Add an initial benchmark for java binding for rocksdb.
Summary:
* Add a benchmark for java binding for rocksdb.  The java benchmark
  is a complete rewrite based on the c++ db/db_bench.cc and the
  DbBenchmark in dain's java leveldb.
* Support multithreading.
* 'readseq' is currently not supported as it requires RocksDB Iterator.

* usage:

  --benchmarks
    Comma-separated list of operations to run in the specified order
        Actual benchmarks:
                fillseq    -- write N values in sequential key order in async mode
                fillrandom -- write N values in random key order in async mode
                fillbatch  -- write N/1000 batch where each batch has 1000 values
                              in random key order in sync mode
                fillsync   -- write N/100 values in random key order in sync mode
                fill100K   -- write N/1000 100K values in random order in async mode
                readseq    -- read N times sequentially
                readrandom -- read N times in random order
                readhot    -- read N times in random order from 1% section of DB
        Meta Operations:
                delete     -- delete DB
    DEFAULT: [fillseq, readrandom, fillrandom]

  --compression_ratio
    Arrange to generate values that shrink to this fraction of
        their original size after compression
    DEFAULT: 0.5

  --use_existing_db
    If true, do not destroy the existing database.  If you set this
        flag and also specify a benchmark that wants a fresh database,  that benchmark will fail.
    DEFAULT: false

  --num
    Number of key/values to place in database.
    DEFAULT: 1000000

  --threads
    Number of concurrent threads to run.
    DEFAULT: 1

  --reads
    Number of read operations to do.  If negative, do --nums reads.

  --key_size
    The size of each key in bytes.
    DEFAULT: 16

  --value_size
    The size of each value in bytes.
    DEFAULT: 100

  --write_buffer_size
    Number of bytes to buffer in memtable before compacting
        (initialized to default value by 'main'.)
    DEFAULT: 4194304

  --cache_size
    Number of bytes to use as a cache of uncompressed data.
        Negative means use default settings.
    DEFAULT: -1

  --seed
    Seed base for random number generators.
    DEFAULT: 0

  --db
    Use the db with the following name.
    DEFAULT: /tmp/rocksdbjni-bench

* Add RocksDB.write().

Test Plan: make jbench

Reviewers: haobo, sdong, dhruba, ankgup87

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17433
2014-04-09 00:48:20 -07:00
Lei Jin
92c1eb0291 macros for perf_context
Summary: This will allow us to disable them completely for iOS or for better performance

Test Plan: will run make all check

Reviewers: igor, haobo, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17511
2014-04-08 10:58:07 -07:00
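
The usual pattern is a macro that expands to nothing when the counters are disabled (e.g. for iOS); here is a hedged sketch of the approach, with names that are illustrative rather than the real ones in util/perf_context_imp.h:

    #include <cstdint>

    struct PerfCountersSketch { uint64_t user_key_comparison_count = 0; };
    static PerfCountersSketch perf_counters_sketch;

    // Compiled away entirely when NPERF_CONTEXT_SKETCH is defined.
    #ifdef NPERF_CONTEXT_SKETCH
    #define PERF_COUNTER_ADD_SKETCH(metric, value)
    #else
    #define PERF_COUNTER_ADD_SKETCH(metric, value) \
      do { perf_counters_sketch.metric += (value); } while (0)
    #endif

    int main() {
      PERF_COUNTER_ADD_SKETCH(user_key_comparison_count, 1);
      return 0;
    }
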
Igor Canadi
3d2fe844ab Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/memtable_list.cc
	db/version_set.cc
2014-04-07 11:31:11 -07:00
Igor Canadi
bcd1f15b60 Remove -Wno-unused-const-variable 2014-04-04 16:15:47 -07:00
Igor Canadi
51023c3911 Make RocksDB compile for iOS
Summary:
I had to make a number of changes to the code and Makefile:
* Add `make lib`, which will create a static library without debug info. We need this to avoid growing the binary too much. Currently it's 14MB.
* Remove cpuinfo() function and use __SSE4_2__ macro. We actually used the macro as part of Fast_CRC32() function.
As a result, I also accidentally fixed this issue: https://www.facebook.com/groups/rocksdb.dev/permalink/549700778461774/?stream_ref=2
* Remove __thread locals in OS_MACOSX

Test Plan: `make lib PLATFORM=IOS`

Reviewers: ljin, haobo, dhruba, sdong

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17475
2014-04-04 13:11:44 -07:00
Yueh-Hsuan Chiang
da0887a3dc [JNI] Add java api and java tests for WriteBatch and WriteOptions, add put() and remove() to RocksDB.
Summary:
* Add java api for rocksdb::WriteBatch and rocksdb::WriteOptions, which are necessary components
  for running benchmark.
* Add java test for org.rocksdb.WriteBatch and org.rocksdb.WriteOptions.
* Add remove() to org.rocksdb.RocksDB, and add put() and remove() to RocksDB which take
  org.rocksdb.WriteOptions.

Test Plan: make jtest

Reviewers: haobo, sdong, dhruba

Reviewed By: sdong

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17373
2014-04-02 14:49:20 -07:00
Igor Canadi
8555ce2dec Merge branch 'master' into columnfamilies 2014-04-02 10:48:05 -07:00
Yueh-Hsuan Chiang
8c4a3bfa5b Add a java api for rocksdb::Options, currently only supports create_if_missing.
Summary:
* [java] Add a java api for rocksdb::Options, currently only supports create_if_missing.
* [java] Add a test for RocksDBException in RocksDBSample.

Test Plan: make jtest

Reviewers: haobo, sdong

Reviewed By: haobo

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D17385
2014-04-01 16:59:05 -07:00
Igor Canadi
05080dae3f fix db_sanity_test 2014-03-31 17:06:53 -07:00
Igor Canadi
ddbd1ece88 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_test.cc
	db/internal_stats.cc
	db/internal_stats.h
	db/version_edit.cc
	db/version_edit.h
	db/version_set.cc
	include/rocksdb/options.h
	util/options.cc
2014-03-31 13:39:24 -07:00
Dhruba Borthakur
4031b98373 A GIS implementation for rocksdb.
Summary:
    This patch stores gps locations in rocksdb.

    Each object is uniquely identified by an id. Each object has
    a gps (latitude, longitude) associated with it. The geodb
    supports looking up an object either by its gps location
    or by its id. There is a method to retrieve all objects
    within a circular radius centered at a specified gps location.

Test Plan: Simple unit-test attached.

Reviewers: leveldb, haobo

Reviewed By: haobo

CC: leveldb, tecbot, haobo

Differential Revision: https://reviews.facebook.net/D15567
2014-03-28 15:26:44 -07:00
Yueh-Hsuan Chiang
0d463a3685 Add a jni library for rocksdb which supports Open, Get, Put, and Close.
Summary:
This diff contains a simple jni library for rocksdb which supports open, get,
put and close using default options (including Options, ReadOptions, and
WriteOptions.)  In the usual case, Java developers can use the c++ rocksdb
library in a way similar to the following:

    RocksDB db = RocksDB.open(path_to_db);
    ...
    db.put("hello".getBytes(), "world".getBytes());
    byte[] value = db.get("hello".getBytes());
    ...
    db.close();

Specifically, this diff has the following major classes:

* RocksDB: a Java wrapper class which forwards the operations
  from the java side to c++ rocksdb library.
* RocksDBException: encapsulates the error of an operation.
  This exception type is used to describe an internal error from
  the c++ rocksdb library.

This diff also include a simple java sample code calling c++ rocksdb library.

To build the rocksdb jni library, simply run make jni, and make jtest will try to
build and run the sample code.

Note that if the rocksdb is not built with the default glibc that Java uses,
java will try to load the wrong glibc during the run time.  As a result,
the sample code might not work properly during the run time.

Test Plan:
* make jni
* make jtest

Reviewers: haobo, dhruba, sdong, igor, ljin

Reviewed By: dhruba

CC: leveldb, xjin

Differential Revision: https://reviews.facebook.net/D17109
2014-03-28 14:19:21 -07:00
Igor Canadi
e0c1211555 Merge branch 'master' into columnfamilies
Conflicts:
	db/version_set.cc
	tools/db_stress.cc
2014-03-17 12:21:05 -07:00
Igor Canadi
f9d0530213 Don't care about signed/unsigned compare
Summary:
We need to stop these:
https://github.com/facebook/rocksdb/pull/99
https://github.com/facebook/rocksdb/pull/83

Test Plan: no

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16905
2014-03-17 09:41:41 -07:00
Igor Canadi
1e0d47276c Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
2014-03-07 16:59:47 -08:00
Igor Canadi
9c8ad62691 DB Sanity Test
Summary:
@kailiu mentioned in the meeting yesterday that we sometimes have trouble opening a DB created by an old version with the new version. This will be very important to test for column families, since I'm changing the disk format for the MANIFEST.

I added a tool that can help us test that. Usage:
./db_sanity_test <path> create
will create a bunch of DBs under <path>
<change RocksDB version>
./db_sanity_test <path> verify
will verify consistency of DBs created under <path>

Test Plan: ran the db_sanity_test

Reviewers: kailiu, dhruba, haobo

Reviewed By: kailiu

CC: leveldb, kailiu, xjin

Differential Revision: https://reviews.facebook.net/D16605
2014-03-06 11:36:39 -08:00
Igor Canadi
fa34697237 Merge branch 'master' into columnfamilies 2014-03-04 09:39:14 -08:00
kailiu
906f3dca72 Add a hash-index component for block
Summary:
This is the key component extracted from diff: https://reviews.facebook.net/D14271
I separated it into a dedicated patch to make the review easier.

Test Plan: added a unit test and passed it.

Reviewers: haobo, sdong, dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16245
2014-03-03 21:11:49 -08:00
kailiu
6b9da48a03 Get rid of all optimization flags in debug mode 2014-03-03 21:06:24 -08:00
Igor Canadi
9d0577a6be Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/transaction_log_impl.cc
	db/transaction_log_impl.h
	include/rocksdb/options.h
	util/env.cc
	util/options.cc
2014-03-03 18:29:03 -08:00
Kai Liu
6ba1084f24 Fix some compilation bugs in different platforms
Summary:

Detected some problems when testing my 3rd party release tool.
2014-02-27 22:20:17 -08:00
Igor Canadi
944ff673d6 Merge branch 'master' into columnfamilies 2014-02-26 10:09:52 -08:00
Lei Jin
b2795b799e thread local pointer storage
Summary:
This is not a generic thread local implementation in the sense that it
only takes pointers. But it does support multiple instances per thread
and lets the user plug in a function to perform cleanup when a thread exits or an
instance gets destroyed.

Test Plan: unit test for now

Reviewers: haobo, igor, sdong, dhruba

Reviewed By: igor

CC: leveldb, kailiu

Differential Revision: https://reviews.facebook.net/D16131
2014-02-25 17:47:37 -08:00
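
A rough sketch of the idea using pthread thread-specific storage (simplified; the actual ThreadLocalPtr multiplexes many instances more cleverly): each instance owns a key, and a user-supplied cleanup handler runs when a thread exits.

    #include <pthread.h>

    // Generic illustration of a pointer-only thread local with cleanup.
    class ThreadLocalPointerSketch {
     public:
      using UnrefHandler = void (*)(void* ptr);

      explicit ThreadLocalPointerSketch(UnrefHandler handler = nullptr) {
        pthread_key_create(&key_, handler);  // handler runs on thread exit
      }
      ~ThreadLocalPointerSketch() { pthread_key_delete(key_); }

      void Reset(void* ptr) { pthread_setspecific(key_, ptr); }
      void* Get() const { return pthread_getspecific(key_); }

     private:
      pthread_key_t key_;
    };
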
Igor Canadi
d39da4b578 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
2014-02-24 17:09:05 -08:00
sdong
01c27be5fb A simple benchmark to measure WAL append latency
Summary: A simple benchmark that simulates WAL append. It can be used to test different platform/file system's performance on WAL.

Test Plan: run it.

Reviewers: haobo, kailiu

Reviewed By: haobo

CC: igor, dhruba, i.am.jin.lei, yhchiang, leveldb, nkg-

Differential Revision: https://reviews.facebook.net/D16239
2014-02-24 14:39:32 -08:00
sdong
b2d29675c8 Add a test in prefix_test to verify correctness of results
Summary:
Add a test to verify that HashLinkList and HashSkipList (mainly the former) return the correct results when inserting into the same bucket in different orders.

Some other changes:
(1) add the test to test list
(2) fix compile error
(3) add header

Test Plan: ./prefix_test

Reviewers: haobo, kailiu

Reviewed By: haobo

CC: igor, yhchiang, i.am.jin.lei, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D16143
2014-02-19 17:00:34 -08:00
Igor Canadi
76c048183c Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_test.cc
	include/rocksdb/db.h
2014-02-14 16:46:03 -08:00
sdong
b5140a0361 Fix table_reader_bench and add it to "make"
Summary: Fix table_reader_bench after some interface changes. Add it to make to avoid future breakage.

Test Plan: make table_reader_bench and run it with different options.

Reviewers: kailiu, haobo

Reviewed By: haobo

CC: igor, leveldb

Differential Revision: https://reviews.facebook.net/D16107
2014-02-12 18:31:02 -08:00
Igor Canadi
ccdb93e775 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/memtable_list.cc
	db/memtable_list.h
	db/version_set.cc
	db/version_set.h
2014-02-12 14:01:30 -08:00
Kai Liu
d4b789fdee Add LIBRARY back to make dbg 2014-02-10 20:15:09 -08:00
Igor Canadi
2b8c44639a Merge branch 'master' into columnfamilies 2014-02-07 11:42:56 -08:00
kailiu
1d08140e81 Compile -O2 by default and add make dbg
Summary: To speed up the compilation while allowing us to compile in debug mode.

Test Plan:
make: see -O2 enabled
make dbg: didn't see -O2

Reviewers: igor

Reviewed By: igor

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D15969
2014-02-06 16:11:35 -08:00
Igor Canadi
0143abdbb0 Merge branch 'master' into columnfamilies
Conflicts:
	HISTORY.md
	db/db_impl.cc
	db/db_impl.h
	db/db_iter.cc
	db/db_test.cc
	db/dbformat.h
	db/memtable.cc
	db/memtable_list.cc
	db/memtable_list.h
	db/table_cache.cc
	db/table_cache.h
	db/version_edit.h
	db/version_set.cc
	db/version_set.h
	db/write_batch.cc
	db/write_batch_test.cc
	include/rocksdb/options.h
	util/options.cc
2014-02-06 15:58:20 -08:00
Kai Liu
fd0ffbc7ca Disable the html-based coverage report by default 2014-02-06 12:58:13 -08:00
Igor Canadi
c37e7de669 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
2014-01-30 11:47:23 -08:00
Igor Canadi
ac92420fc5 Merge branch 'master' into performance
Conflicts:
	db/db_impl.h
2014-01-30 10:09:23 -08:00
Kai Liu
b7db241118 LIBNAME in Makefile is not really configurable
Summary:
In the new third-party release tool, `LIBNAME=<customized_library> make`
will not really change the LIBNAME.

However it's very odd that the same approach works with the old third-party
release tools. I checked the previous rocksdb version and both librocksdb.a
and librocksdb_debug.a were correctly generated and copied to the
right place.

Test Plan:
`LIBNAME=hello make -j32` generates hello.a
`make -j32` generates librocksdb.a

Reviewers: igor, sdong, haobo, dhruba

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15555
2014-01-29 11:35:05 -08:00
kailiu
a5e220f5ef Merge branch 'master' into performance
Conflicts:
	Makefile
	db/db_impl.cc
	db/db_test.cc
	db/memtable_list.cc
	db/memtable_list.h
	table/block_based_table_reader.cc
	table/table_test.cc
	util/cache.cc
	util/coding.cc
2014-01-28 10:35:55 -08:00
Igor Canadi
1423e7c9de Merge branch 'master' into columnfamilies
Conflicts:
	db/version_set.cc
	db/version_set_reduce_num_levels.cc
	util/ldb_cmd.cc
2014-01-24 15:03:54 -08:00
kailiu
f131d4c280 Add a make target for shared library
Summary:
Previously we made `make release` also compile the shared library. However, it takes a long time to complete.

To make our development process more efficient, I added a new make target, shared_lib.

Users can of course run `make <library_name>` for direct compilation. However, the <library_name> changes under certain conditions. Thus we need `make shared_lib` to spare users from having to memorize it.

Test Plan: make shared_lib

Reviewers: igor, sdong, haobo, dhruba

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15309
2014-01-24 11:56:01 -08:00
Igor Canadi
23f6791c9e Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl_readonly.cc
	db/db_test.cc
	db/version_edit.cc
	db/version_edit.h
	db/version_set.cc
	db/version_set.h
	db/version_set_reduce_num_levels.cc
2014-01-21 17:01:52 -08:00
Kai Liu
23576d773f Remove the extra line in "make release"
Summary:

That line was introduced during a merge.
2014-01-16 17:01:34 -08:00
kailiu
1304d8c8ce Merge branch 'master' into performance
Conflicts:
	Makefile
	db/db_impl.cc
	db/db_impl.h
	db/db_test.cc
	db/memtable.cc
	db/memtable.h
	db/version_edit.h
	db/version_set.cc
	include/rocksdb/options.h
	util/hash_skiplist_rep.cc
	util/options.cc
2014-01-15 23:12:31 -08:00
kailiu
481c77e526 Move the compilation of the shared libraries to "make release"
Compiling the shared libraries took a long time. Thus, to speed up development, it still makes sense to keep it separate from regular compilation.
2014-01-14 13:54:33 -08:00
Kai Liu
d702d8073e A script that automatically reformat affected lines
Summary:
Added a script that reformat only the affected lines in a given diff.

I planned to make that file a pre-commit hook, but it looks like it's a little bit more difficult than I thought. Since I don't want to spend too much time on this task right now, I eventually added a "make command" to achieve this with a few additional key strokes.

Also made the clang-format config solely inherited from Google's style -- there are still debates on some of the style issues, but we can address them later once we reach a consensus.

Test Plan: Did some ugly format change and ran "make format", all affected lines are formatted as expected.

Reviewers: igor, sdong, haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15147
2014-01-14 12:21:24 -08:00
kailiu
ac2fe72832 Compile dynamic library by default
Summary:
Per request, some users need to use the dynamic rocksdb library instead of the static one.

However, currently the dynamic libraries have to be compiled manually, which is inconvenient. I made dynamic libraries compile by default.

Test Plan: make clean; make; make clean;

Reviewers: haobo, sdong, dhruba, igor

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15117
2014-01-14 00:28:10 -08:00
Igor Canadi
ef6ad1708d [column families] Support to create and drop column families
Summary:
This diff provides basic implementations of CreateColumnFamily(), DropColumnFamily() and ListColumnFamilies(). It builds on top of https://reviews.facebook.net/D14733

It also includes a bug fix for DBImplReadOnly, where Get implementation would be redirected to DBImpl instead of DBImplReadOnly.

Test Plan: Added unit test

Reviewers: dhruba, haobo, kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15021
2014-01-03 01:12:16 -08:00
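
A brief usage sketch with the column family calls as they later stabilized in rocksdb/db.h (the path and options below are placeholders):

    #include <string>
    #include <vector>
    #include "rocksdb/db.h"

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/cf_db", &db);
      if (!s.ok()) return 1;

      // Create and then drop a column family.
      rocksdb::ColumnFamilyHandle* cf = nullptr;
      s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "new_cf", &cf);
      if (s.ok()) {
        db->DropColumnFamily(cf);
        delete cf;
      }
      delete db;

      // List the column families recorded for this DB.
      std::vector<std::string> families;
      rocksdb::DB::ListColumnFamilies(rocksdb::DBOptions(), "/tmp/cf_db",
                                      &families);
      return 0;
    }
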
kailiu
e72aa37cc5 Merge branch 'master' into performance
Conflicts:
	db/table_cache.cc
2014-01-02 16:34:59 -08:00
Igor Canadi
345fb94d26 moving autovector_test after db_test 2014-01-02 03:30:29 -08:00
kailiu
f1cec73a76 Merge branch 'master' into performance
Conflicts:
	db/db_impl.cc
	db/db_test.cc
	db/memtable.cc
	db/version_set.cc
	include/rocksdb/statistics.h
2013-12-27 12:23:17 -08:00
kailiu
c01676e46d Implement autovector
Summary:
A vector that leverages pre-allocated stack-based array to achieve better
performance for array with small amount of items.

Test Plan:
Added tests for both correctness and performance

Here is the performance benchmark between vector and autovector

Please note that in the test "Creation and Insertion Test", the test cases were designed with the motivation described below:

* no element inserted: the internal array of std::vector may not really get
  initialized.
* one element inserted: the internal array of std::vector must have been
  initialized.
* kSize elements inserted. This shows the most time we'll spend if we
  keep everything on the stack.
* 2 * kSize elements inserted. The internal vector of
  autovector must have been initialized.

Note: kSize is the capacity of autovector

  =====================================================
  Creation and Insertion Test
  =====================================================
  created 100000 vectors:
  	each was inserted with 0 elements
  	total time elapsed: 128000 (ns)
  created 100000 autovectors:
  	each was inserted with 0 elements
  	total time elapsed: 3641000 (ns)
  created 100000 VectorWithReserveSizes:
  	each was inserted with 0 elements
  	total time elapsed: 9896000 (ns)
  -----------------------------------
  created 100000 vectors:
  	each was inserted with 1 elements
  	total time elapsed: 11089000 (ns)
  created 100000 autovectors:
  	each was inserted with 1 elements
  	total time elapsed: 5008000 (ns)
  created 100000 VectorWithReserveSizes:
  	each was inserted with 1 elements
  	total time elapsed: 24271000 (ns)
  -----------------------------------
  created 100000 vectors:
  	each was inserted with 4 elements
  	total time elapsed: 39369000 (ns)
  created 100000 autovectors:
  	each was inserted with 4 elements
  	total time elapsed: 10121000 (ns)
  created 100000 VectorWithReserveSizes:
  	each was inserted with 4 elements
  	total time elapsed: 28473000 (ns)
  -----------------------------------
  created 100000 vectors:
  	each was inserted with 8 elements
  	total time elapsed: 75013000 (ns)
  created 100000 autovectors:
  	each was inserted with 8 elements
  	total time elapsed: 18237000 (ns)
  created 100000 VectorWithReserveSizes:
  	each was inserted with 8 elements
  	total time elapsed: 42464000 (ns)
  -----------------------------------
  created 100000 vectors:
  	each was inserted with 16 elements
  	total time elapsed: 102319000 (ns)
  created 100000 autovectors:
  	each was inserted with 16 elements
  	total time elapsed: 76724000 (ns)
  created 100000 VectorWithReserveSizes:
  	each was inserted with 16 elements
  	total time elapsed: 68285000 (ns)
  -----------------------------------
  =====================================================
  Sequence Access Test
  =====================================================
  performed 100000 sequence access against vector
  	size: 4
  	total time elapsed: 198000 (ns)
  performed 100000 sequence access against autovector
  	size: 4
  	total time elapsed: 306000 (ns)
  -----------------------------------
  performed 100000 sequence access against vector
  	size: 8
  	total time elapsed: 565000 (ns)
  performed 100000 sequence access against autovector
  	size: 8
  	total time elapsed: 512000 (ns)
  -----------------------------------
  performed 100000 sequence access against vector
  	size: 16
  	total time elapsed: 1076000 (ns)
  performed 100000 sequence access against autovector
  	size: 16
  	total time elapsed: 1070000 (ns)
  -----------------------------------

Reviewers: dhruba, haobo, sdong, chip

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14655
2013-12-26 15:03:47 -08:00
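
A condensed sketch of the idea (not the actual autovector): keep the first kSize elements in an in-object array and spill to heap-backed storage only beyond that.

    #include <cstddef>
    #include <vector>

    // Generic illustration of a small-size-optimized vector.
    template <typename T, size_t kSize = 8>
    class AutoVectorSketch {
     public:
      void push_back(const T& value) {
        if (num_stack_ < kSize) stack_[num_stack_++] = value;
        else overflow_.push_back(value);   // heap allocation only past kSize
      }
      size_t size() const { return num_stack_ + overflow_.size(); }
      const T& operator[](size_t i) const {
        return i < num_stack_ ? stack_[i] : overflow_[i - num_stack_];
      }

     private:
      T stack_[kSize];           // lives inside the object, typically on stack
      size_t num_stack_ = 0;
      std::vector<T> overflow_;  // used only when kSize is exceeded
    };
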
Igor Canadi
e914b6490d Reorder tests
Summary:
db_test should be the first to execute because it finds the most bugs.

Also, when third parties report issues, we don't want ldb error message, we prefer to have db_test error message. For example, see thread: https://github.com/facebook/rocksdb/issues/25

Test Plan: make check

Reviewers: dhruba, haobo, kailiu

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14715
2013-12-18 13:37:06 -08:00
Haobo Xu
3c02c363b3 [RocksDB] [Performance Branch] Added dynamic bloom, to be used for memable non-existing key filtering
Summary: as title

Test Plan: dynamic_bloom_test

Reviewers: dhruba, sdong, kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14385
2013-12-11 00:15:14 -08:00
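
A compact sketch of such a filter (generic, not the DynamicBloom class): k probe positions derived from one hash, set on insert and tested on lookup, so that a miss proves the key is absent from the memtable.

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Generic illustration of a bloom filter for non-existing-key filtering.
    class BloomSketch {
     public:
      BloomSketch(size_t total_bits, int num_probes)
          : bits_(total_bits), num_probes_(num_probes) {}

      void AddKey(const std::string& key) {
        uint64_t h = std::hash<std::string>()(key);
        for (int i = 0; i < num_probes_; ++i, h = Rotate(h))
          bits_[h % bits_.size()] = true;
      }

      bool MayContain(const std::string& key) const {
        uint64_t h = std::hash<std::string>()(key);
        for (int i = 0; i < num_probes_; ++i, h = Rotate(h))
          if (!bits_[h % bits_.size()]) return false;  // definitely absent
        return true;  // possibly present
      }

     private:
      static uint64_t Rotate(uint64_t h) { return (h >> 17) | (h << 47); }
      std::vector<bool> bits_;
      int num_probes_;
    };
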
Igor Canadi
0e2c966f58 Merge pull request #29 from sepeth/fix-shared-lib-build
Build shared lib and put linker flags to the end
2013-12-10 09:19:12 -08:00
Doğan Çeçen
f6012ab826 Fix shared lib build 2013-12-10 09:35:22 +02:00
Igor Canadi
fb9fce4fc3 [RocksDB] BackupableDB
Summary:
In this diff I present to you BackupableDB v1. You can easily use it to back up your DB and it will do incremental snapshots for you.
Let's first describe how you would use BackupableDB. It inherits the StackableDB interface, so you can easily construct it with your DB object -- it will add a method RollTheSnapshot() to the DB object. When you call RollTheSnapshot(), the current snapshot of the DB will be stored in the backup dir. To restore, you can just call RestoreDBFromBackup() on a BackupableDB (which is a static method) and it will restore all files from the backup dir. In the next version, it will even support automatic backups every X minutes.

There are multiple things you can configure:
1. backup_env and db_env can be different, which is awesome because then you can easily backup to HDFS or wherever you feel like.
2. sync - if true, it *guarantees* backup consistency on machine reboot
3. number of snapshots to keep - this will keep the last N snapshots around if you want, for some reason, to be able to restore from an earlier snapshot. All backups are done in an incremental fashion - if we already have 00010.sst, we will not copy it again. *IMPORTANT* -- This is based on the assumption that 00010.sst never changes - two files named 00010.sst from the same DB will always be exactly the same. Is this true? I always copy the manifest, current and log files.
4. You can decide if you want to flush the memtables before you backup, or you're fine with backing up the log files -- either way, you get a complete and consistent view of the database at a time of backup.
5. More things you can find in BackupableDBOptions

Here is the directory structure I use:

   backup_dir/CURRENT_SNAPSHOT - just 4 bytes holding the latest snapshot
               0, 1, 2, ... - files containing serialized version of each snapshot - containing a list of files
               files/*.sst - sst files shared between snapshots - if one snapshot references 00010.sst and another one needs to backup it from the DB, it will just reference the same file
               files/ 0/, 1/, 2/, ... - snapshot directories containing private snapshot files - current, manifest and log files

All the files are ref counted and deleted immediately when they go out of scope.

Some other stuff in this diff:
1. Added GetEnv() method to the DB. Discussed with @haobo and we agreed that it seems right thing to do.
2. Fixed StackableDB interface. The way it was set up before, I was not able to implement BackupableDB.

Test Plan:
I have a unittest, but please don't look at this yet. I just hacked it up to help me with debugging. I will write a lot of good tests and update the diff.

Also, `make asan_check`

Reviewers: dhruba, haobo, emayanke

Reviewed By: dhruba

CC: leveldb, haobo

Differential Revision: https://reviews.facebook.net/D14295
2013-12-09 14:06:52 -08:00
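
The incremental-backup bookkeeping can be illustrated with a generic sketch (not the actual BackupableDB code): shared table files are copied into files/ once and reference counted across snapshots, and a physical copy is removed only when the last snapshot referencing it is deleted.

    #include <map>
    #include <string>

    // Generic illustration of ref-counted shared backup files.
    class SharedFileRegistrySketch {
     public:
      // Returns true if the file still needs to be copied into files/.
      bool Reference(const std::string& sst_name) {
        return ++refs_[sst_name] == 1;   // first reference => copy it
      }

      // Returns true if the physical copy can now be deleted.
      bool Unreference(const std::string& sst_name) {
        auto it = refs_.find(sst_name);
        if (it == refs_.end() || --it->second > 0) return false;
        refs_.erase(it);
        return true;   // no snapshot references it any more
      }

     private:
      std::map<std::string, int> refs_;
    };
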
Kai Liu
8c424456fc Make the default compilation debug-friendly 2013-12-02 20:05:16 -08:00
Siying Dong
b59d4d5a50 A Simple Plain Table
Summary:
A simple plain table format with no block structure. When creating the table reader, it scans the full table to create indexes.

Test Plan: Add unit test

Reviewers: haobo, dhruba, kailiu

CC:

Task ID: #

Blame Rev:
2013-11-20 18:44:22 -08:00
Igor Canadi
8906ab59f7 Split asan_check into asan_check and asan_crash_test 2013-11-19 16:44:40 -08:00
Igor Canadi
92d905026b make asan_check
Summary: Add an asan_check rule to the Makefile. After we add this, we will create a Jenkins run that will check for asan errors!

Test Plan: make asan_check

Reviewers: dhruba, kailiu, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14205
2013-11-19 16:33:24 -08:00
kailiu
1415f8820d Improve the "table stats"
Summary:
The primary motivation of the changes is to make it easier to figure out what is inside the tables.

* rename "table stats" to "table properties" since now we have more than "integers" to store in the property block.
* Add filter block size to the basic table properties.
* Whenever a table is built, we'll log the table properties (the sample output is in Test Plan).
* Make an api to expose deleted keys.

Test Plan:
Passed all existing test. and the sample output of table stats:

    ==================================================================
        Basic Properties
    ------------------------------------------------------------------
                  # data blocks: 1
                      # entries: 1

                   raw key size: 9
           raw average key size: 9
                 raw value size: 9
         raw average value size: 0

                data block size: 25
               index block size: 27
              filter block size: 18
         (estimated) table size: 70

                  filter policy: rocksdb.BuiltinBloomFilter
    ==================================================================
        User collected properties: InternalKeyPropertiesCollector
    ------------------------------------------------------------------
                    kDeletedKeys: 1
    ==================================================================

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14187
2013-11-19 16:29:42 -08:00
Igor Canadi
f611aba559 Move the compiler back to 4.8.1 + more small fixes
Summary:
1. Moved the compiler back to 4.8.1 and use Centos 5.2 binaries if the OS is Centos 5.2.

2. Fixes this issue: https://github.com/facebook/rocksdb/issues/7

3. We use a lot of C++11 features, so we can't pretend we can compile without them. Make it a first-class dependency.

4. Fix blob_store_test, which fails on Ubuntu with a "too many files opened" error

5. Removed dependency on port/port_chromium.h, which does not even exist on our system

Test Plan: make clean; make check

Reviewers: dhruba, kailiu

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14145
2013-11-18 11:40:16 -08:00
kailiu
97d8e573a6 make util/env_posix.cc work under mac
Summary: This diff involves some more complicated issues in the posix environment.

Test Plan: Works under Mac OS. Will need to verify on the dev box.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14061
2013-11-16 23:44:39 -08:00
Kai Liu
aed9f1fa5e The updated sed still doesn't work on Mac; revert. 2013-11-12 21:40:25 -08:00
Kai Liu
f3b3316a07 Fix a sed command issue that cannot generated *.d files
Summary:

The original sed command is not recognized by Mac's sed, which generates a ".d-e" extension instead of ".d".

Test Plan:

make clean && make -j32
2013-11-12 21:26:19 -08:00
kailiu
21587760b9 Fixing the warning messages captured under mac os
Summary: The work to make sure Mac OS compiles rocksdb is not complete yet, but at least we can start cleaning up some warnings captured only by g++ on Mac OS.

Test Plan: ran make in mac os

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14049
2013-11-12 20:05:28 -08:00
Haobo Xu
fe4a449472 [RocksDB] prefixhash memtable test
Summary: As title, a half-baked test for the prefixhash memtable. Also contains a deadlock test option.

Test Plan: run it

Reviewers: igor, dhruba

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13887
2013-11-05 23:20:10 -08:00
Siying Dong
7caadf2e52 A very simple benchmark to measure Table implemenation's Get() And Iterator performance
Summary: It is a very simple benchmark to measure a Table implementation's Get() and iterator performance if all the data is in memory.

Test Plan: N/A

Reviewers: dhruba, haobo, kailiu

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13743
2013-10-31 13:38:54 -07:00
Siying Dong
d4eec30ed0 Make "Table" pluggable
Summary: This patch makes Table and TableBuilder abstract classes and moves all the implementation of the current table into BlockBasedTable and BlockBasedTableBuilder.

Test Plan: Make db_test.cc work with the block-based table. Add a new test, simple_table_db_test.cc, where a different simple table format is implemented.

Reviewers: dhruba, haobo, kailiu, emayanke, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13521
2013-10-28 17:54:09 -07:00
Kai Liu
994575c134 Support user-defined table stats collector
Summary:
1. Added a new option that supports user-defined table stats collection.
2. Added a deleted key stats collector in `utilities`

Test Plan:
Added a unit test for newly added code.
Also ran make check to make sure other tests are not broken.

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13491
2013-10-28 15:45:14 -07:00
Igor Canadi
7e2c1ba173 BlobStore Benchmark
Summary:
Finally, arc diff works again! This has been sitting in my repo for a while.

I would like some comments on my BlobStore benchmark. We don't have to check this in.

Also, I don't do any fsync in the BlobStore, so this is all extremely fast. I'm not sure what durability guarantees we need from the BlobStore.

Test Plan: Nope

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13527
2013-10-23 17:31:12 -07:00
Kai Liu
b37fda842d Improve the comment for the shared library in the Makefile 2013-10-22 22:40:16 -07:00
Igor Canadi
fc4616d898 External Value Store
Summary:
Developing a capability for storing values on external backing file(s).

This is just a highly unoptimized first pass - supports:
1) Allocating some portion of external file to be used to store value
2) Freeing the range, enabling it to be reused by other values

As next steps, I plan to:
1) Create some kind of stress testing. Once I can measure stuff, I can focus on optimizing.
2) Optimize locking.
3) Optimize freelist data structure. Currently we have O(n) for both freeing and allocation.
4) Figure out how to do recovery.

Test Plan: Created a unit test.

Reviewers: dhruba, haobo, kailiu

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13389
2013-10-16 17:33:49 -07:00
Dhruba Borthakur
0a9f873f4b Removed scribe, thrift and java modules.
Summary: Removed scribe, thrift and java modules.

Test Plan:
make release
make check

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13293
2013-10-04 15:36:00 -07:00
Kai Liu
4c6dc7a9ae Fix the gcov/lcov related issues
Summary:

Jenkins reports errors that:

* Linking errors on some machines. The error message shows it cannot find some gcov-related symbols.
* lcov errors due to version issues.

Test Plan:

run make in different platforms

Reviewers:

CC:

Task ID: #

Blame Rev:
2013-08-22 17:01:06 -07:00
Simha Venkataramaiah
60bf2b7d4a Add APIs to query SST file metadata and to delete specific SST files
Summary: An API to query the level, key ranges, size, etc. for each SST file, and an API to delete a specific file from the DB and all associated state in the bookkeeping data structures.

Notes: Editing the manifest version does not release the obsolete files right away. However, deleting the file directly will mess up the iterator. We may need a more aggressive/timely file-deletion API.

I have used std::unique_ptr - will switch to boost:: since this is external. thoughts?

The unit test is fragile right now as it expects compactions at certain levels.

Test Plan: unittest

Reviewers: dhruba, vamsi, emayanke

CC: zshao, leveldb, haobo

Task ID: #

Blame Rev:
2013-08-22 15:27:19 -07:00
Jim Paton
bc8eed12d9 Do not use relative paths in build system
Summary: Previously, RocksDB's build scripts used relative pathnames like ./build_detect_platform. This can cause problems if the user uses CDPATH. Also, it just doesn't seem right to me.

Test Plan:
make clean
make -j32 check

Reviewers: MarkCallaghan, dhruba, kailiu

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12459
2013-08-22 14:53:51 -07:00
Deon Nicholas
e1346968d8 Merge operator fixes part 1.
Summary:
-Added null checks and revisions to DBIter::MergeValuesNewToOld()
-Added DBIter test to stringappend_test
-Major fix with Merge and TTL
More plans for fixes later.

Test Plan:
-make clean; make stringappend_test -j 32; ./stringappend_test
-make all check;

Reviewers: haobo, emayanke, vamsi, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12315
2013-08-19 11:42:47 -07:00
Kai Liu
1635ea06c2 Remove PLATFORM_SHARED_CFLAGS when compiling .o files
Summary:

The flag was accidentally introduced and resulted in problems in the 3rd-party release.

Test Plan:

run make in all platforms (when doing the 3rd party release)
2013-08-16 16:39:23 -07:00
Kai Liu
fd2f47dbe5 Improve the build files to simplify the 3rd party release process
Summary:
* Added LIBNAME to enable a configurable library name.
* Remove/check fPIC for the Linux platform in build_detect_platform

Test Plan: make

Reviewers: emayanke

Differential Revision: https://reviews.facebook.net/D12321
2013-08-16 12:05:27 -07:00
Kai Liu
457dcc605a Clean up the Makefile and the build scripts
Summary: As Aaron suggested, there are quite a few problems with our Makefile and scripts. So in this diff I did some cleanup and revised parts of the scripts/Makefile to help people better understand some mysterious parts.

Test Plan:
Ran make in several modes;
Ran the updated scripts.

Reviewers: dhruba, emayanke, akushner

Differential Revision: https://reviews.facebook.net/D12285
2013-08-15 12:59:45 -07:00
Jim Paton
0307c5fe3a Implement log blobs
Summary:
This patch adds the ability for the user to add sequences of arbitrary data (blobs) to write batches. These blobs are saved to the log along with everything else in the write batch. You can add multiple blobs per WriteBatch, and the ordering of blobs, puts, merges, and deletes is preserved.

Blobs are not saved to SST files. RocksDB ignores blobs in every way except for writing them to the log.

Before committing this patch, I need to add some test code. But I'm submitting it now so people can comment on the API.

Test Plan: make -j32 check

Reviewers: dhruba, haobo, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12195
2013-08-14 16:32:46 -07:00
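A minimal, hedged sketch of how the blob API above might be used, assuming the WriteBatch method introduced here is named PutLogData (the method name is not spelled out in the summary):

  #include "leveldb/db.h"
  #include "leveldb/write_batch.h"

  // Writes a batch whose blob goes to the log but never to SST files.
  void WriteWithReplicationBlob(leveldb::DB* db) {
    leveldb::WriteBatch batch;
    batch.Put("key1", "value1");
    // The blob rides along in the log record for this batch only.
    batch.PutLogData("replication-metadata-for-this-batch");
    batch.Delete("key2");
    db->Write(leveldb::WriteOptions(), &batch);
  }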
Haobo Xu
d9dd2a1926 [RocksDB] Expose thread local perf counter for low overhead, per call level performance statistics.
Summary:
As title. No locking/atomics are needed since the counters are thread-local. There is also no need to modify the existing client interface in order to expose the related counters.

perf_context_test shows a simple example of retrieving the number of user key comparisons done for each Put and Get call. More counters could be added later.

Sample output
./perf_context_test 1000000
==== Test PerfContextTest.KeyComparisonCount
Inserting 1000000 key/value pairs
...
total user key comparison get: 43446523
total user key comparison put: 8017877
max user key comparison get: 88939
avg user key comparison get:43

Basically, the current skiplist does well on average, but could perform poorly in extreme cases.

Test Plan: run perf_context_test <total number of entries to put/get>

Reviewers: dhruba

Differential Revision: https://reviews.facebook.net/D12225
2013-08-14 15:24:06 -07:00
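A hedged sketch of reading the new thread-local counters around a single Get; the header path, the perf_context variable, and the user_key_comparison_count field are assumptions inferred from the sample output above:

  #include <cstdio>
  #include <string>

  #include "leveldb/db.h"
  #include "leveldb/perf_context.h"  // assumed location of the thread-local PerfContext

  void CountComparisonsForGet(leveldb::DB* db, const std::string& key) {
    leveldb::perf_context.Reset();  // assumed per-thread reset hook
    std::string value;
    db->Get(leveldb::ReadOptions(), key, &value);
    std::fprintf(stderr, "user key comparisons for this Get: %llu\n",
                 static_cast<unsigned long long>(
                     leveldb::perf_context.user_key_comparison_count));
  }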
Kai Liu
9f6b8f0032 Add automatic coverage report scripts
Summary:
Ultimate goals of the coverage report are:

* Report the coverage for all files (done in this diff)
* Report the coverage for recently updated files (not fully finished)
* Report is available in HTML form (done in this diff, but needs some extra work to integrate it into Jenkins)

Task link: https://our.intern.facebook.com/intern/tasks/?s=1154818042&t=2604914

Test Plan:
Ran: coverage/coverage_test.sh

The sample output can be found here: https://phabricator.fb.com/P2433892

Reviewers: dhruba, emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11943
2013-08-12 23:53:37 -07:00
Mayank Agarwal
7c9093ab53 Changing Makefile to have rocksdb instead of leveldb in binary-names
Summary: did a find-replace

Test Plan: make

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11979
2013-08-05 11:14:01 -07:00
Jim Paton
abc90b067c Use specific DB name in merge_test
Summary: Currently, merge_test uses /tmp/testdb for the test database. It should really use something more specific to merge_test. Most of the other tests use test::TmpDir() + "/<test name>db". This patch implements such behavior for merge_test; it makes merge_test use test::TmpDir() + "/merge_testdb"

Test Plan:
make clean
make -j32 merge_test
./merge_test

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11877
2013-07-29 13:26:38 -07:00
Mayank Agarwal
d56523c49c Update rocksdb version
Summary: rocksdb-2.0 released to third party

Test Plan: visual inspection

Reviewers: dhruba, haobo, sheki

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11559
2013-07-01 14:30:04 -07:00
Deon Nicholas
34ef873290 Added stringappend_test back into the unit tests.
Summary:
With the Makefile now updated to correctly update all .o files, this
should fix the issues recompiling stringappend_test. This should also fix the
"segmentation-fault" that we were getting earlier. Now, stringappend_test should
be clean, and I have added it back to the unit-tests. Also made some minor updates
to the tests themselves.

Test Plan:
1. make clean; make stringappend_test -j 32	(will test it by itself)
2. make clean; make all check -j 32		(to run all unit tests)
3. make clean; make release			(test in release mode)
4. valgrind ./stringappend_test 		(valgrind tests)

Reviewers: haobo, jpaton, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11505
2013-06-26 11:41:13 -07:00
Deon Nicholas
6894a50aa7 Updated "make clean" to remove all .o files
Summary:
The old Makefile did not remove ALL .o and .d files, but rather only
those that happened to be in the root folder and one-level deep. This was causing
issues when recompiling files in deeper folders. This fix now causes make clean
to find ALL .o and .d files via a unix "find" command, and then remove them.

Test Plan:
make clean;
make all -j 32;

Reviewers: haobo, jpaton, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11493
2013-06-25 11:30:37 -07:00
Abhishek Kona
00124683de [rocksdb] do not trim range for level0 in manual compaction
Summary:
https://code.google.com/p/leveldb/issues/detail?can=1&q=178&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&id=178

Ported the solution as is to RocksDB.

Test Plan: moved the unit test as manual_compaction_test

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11331
2013-06-17 13:58:17 -07:00
Deon Nicholas
8926b72751 Minor tweaks to StringAppend MergeOperator.
Summary:
I'm concerned about a random seg-fault that sometimes occurs when
running stringappend_test. I will investigate further. First, I am removing
stringappend_test from the regular release tests, and making some clean-ups
to the code.

Test Plan:
1. make stringappend_test
2. ./stringappend_test

Reviewers: haobo, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11313
2013-06-14 16:44:39 -07:00
Deon Nicholas
5679107b07 Completed the implementation and test cases for Redis API.
Summary:
Completed the implementation for the Redis API for Lists.
The Redis API uses rocksdb as a backend to persistently
store maps from key->list. It supports basic operations
for appending, inserting, pushing, popping, and accessing
a list, given its key.

Test Plan:
  - Compile with: make redis_test
  - Test with: ./redis_test
  - Run all unit tests (for all rocksdb) with: make all check
  - To use an interactive REDIS client use: ./redis_test -m
  - To clean the database before use:       ./redis_test -m -d

Reviewers: haobo, dhruba, zshao

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10833
2013-06-11 11:19:49 -07:00
Vamsi Ponnekanti
3bb9449906 [Fix whitebox crash test failure]
Summary:
I think the check for "error" that I added had caused
a false alarm. Fixed that.

Test Plan:
Revert Plan: OK

Task ID: #

Reviewers: emayanke, dhruba

Reviewed By: emayanke

Differential Revision: https://reviews.facebook.net/D11139
2013-06-07 11:34:46 -07:00
Jim Paton
8ef328ee6a ctags and cscope support to Makefile
Summary: Added a target to the Makefile called 'tags' that runs ctags and cscope on all *.cc and *.h files

Test Plan:
Run 'make tags'. Then start vim and do
:set tags=./tags
:cs add cscope.out

These commands should give you no error messages. You should then be able to access cscope db and ctags as normal in vim.

Reviewers: dhruba

Differential Revision: https://reviews.facebook.net/D11103
2013-06-07 09:13:40 -07:00
Vamsi Ponnekanti
5cf7a00bda [Make most of the changes suggested by Aaron]
Summary: $title

Test Plan:
Revert Plan: OK

Task ID: #

Reviewers: emayanke, akushner

Reviewed By: akushner

Differential Revision: https://reviews.facebook.net/D10923
2013-06-06 17:31:45 -07:00
Deon Nicholas
accd3debbb Implemented StringAppendOperator and unit tests.
Summary:
Implemented the StringAppendOperator class (subclass of MergeOperator).
Found in utilities/merge_operators/string_append/stringappend.{h,cc}

It is a rocksdb Merge Operator that supports string/list concatenation
 with a configurable delimiter.

The tests are found in .../stringappend_test.cc. It implements a
 map : key -> (list of strings), with core operations Append(list_key,val)
 and Get(list_key).

Test Plan:
1. Navigate to your rocksdb repository
2. Execute: make stringappend_test  (to compile)
3. Execute: ./stringappend_test (to run the tests)
4. Execute: make all check (to test the ENTIRE rocksdb codebase / regression)

Reviewers: haobo, dhruba, zshao

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10737
2013-05-13 15:09:42 -07:00
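A minimal usage sketch for the StringAppendOperator above. The delimiter-taking constructor and the header path follow the summary; the namespace and whether Options holds the merge operator in a shared_ptr are assumptions that have varied across versions:

  #include <cassert>
  #include <memory>
  #include <string>

  #include "leveldb/db.h"
  #include "utilities/merge_operators/string_append/stringappend.h"

  int main() {
    leveldb::Options options;
    options.create_if_missing = true;
    // Values merged under the same key are concatenated with ','.
    options.merge_operator = std::make_shared<leveldb::StringAppendOperator>(',');

    leveldb::DB* db;
    leveldb::Status s = leveldb::DB::Open(options, "/tmp/stringappend_example", &db);
    assert(s.ok());

    db->Merge(leveldb::WriteOptions(), "colors", "red");
    db->Merge(leveldb::WriteOptions(), "colors", "blue");

    std::string value;
    db->Get(leveldb::ReadOptions(), "colors", &value);
    assert(value == "red,blue");
    delete db;
    return 0;
  }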
Mayank Agarwal
85cccc5092 Replacing rocksdb by leveldb in Makefile
Summary: Since we are keeping 'leveldb' instead of 'rocksdb' in third-party, this is only logical.

Test Plan: make clean;make

Reviewers: sheki, dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10719
2013-05-09 18:39:25 -07:00
Haobo Xu
3c4efc4462 [RocksDB] fix build
Summary: Makefile change: LIBRARY => LIBOBJECTS.
Thanks to Abhishek for reproducing this locally.

Test Plan: make release

Reviewers: sheki

CC: leveldb

Task ID: #

Blame Rev:
2013-05-06 10:35:41 -07:00
Haobo Xu
05e8854085 [Rocksdb] Support Merge operation in rocksdb
Summary:
This diff introduces a new Merge operation into rocksdb.
The purpose of this review is mostly getting feedback from the team (everyone please) on the design.

Please focus on the four files under include/leveldb/, as they spell out the client-visible interface change.
include/leveldb/db.h
include/leveldb/merge_operator.h
include/leveldb/options.h
include/leveldb/write_batch.h

Please go over local/my_test.cc carefully, as it is a concrete use case.

Please also review the implementation files to see if the straw-man implementation makes sense.

Note that the diff does pass make check and truly supports a forward iterator over the DB and a version
of Get that is based on the iterator.

Future work:
- Integration with compaction
- A raw Get implementation

I am working on a wiki that explains the design and implementation choices, but coding comes
just naturally and I think it might be a good idea to share the code earlier. The code is
heavily commented.

Test Plan: run all local tests

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb, zshao, sheki, emayanke, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D9651
2013-05-03 16:59:02 -07:00
Mayank Agarwal
d786b25e2d Timestamp and TTL Wrapper for rocksdb
Summary:
When opened with the DBTimestamp::Open call, timestamps are prepended to and stripped from the value during subsequent Put and Get calls respectively. The timestamp is used, in Get and in a custom compaction filter, to discard values that have exceeded their TTL, which is specified during Open.
Have made a temporary change to the Makefile to let us test with the temporary file TestTime.cc. Have also changed the private members of db_impl.h to protected so they can be inherited by the new class DBTimestamp.

Test Plan: make db_timestamp; TestTime.cc(will not check it in) shows how to use the apis currently, but I will write unit-tests shortly

Reviewers: dhruba, vamsi, haobo, sheki, heyongqiang, vkrest

Reviewed By: vamsi

CC: zshao, xjin, vkrest, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D10311
2013-05-02 16:34:42 -07:00
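An illustrative sketch of the timestamp-prepend scheme described above: a 32-bit write time is prepended to each value on Put and stripped (and checked against the TTL) on Get and in the compaction filter. The helper names here are hypothetical, not the wrapper's actual API:

  #include <cstdint>
  #include <cstring>
  #include <ctime>
  #include <string>

  // Prepend the current time to the user value before Put.
  std::string EncodeWithTimestamp(const std::string& value) {
    uint32_t now = static_cast<uint32_t>(time(nullptr));
    std::string out(sizeof(now), '\0');
    memcpy(&out[0], &now, sizeof(now));
    return out + value;
  }

  // Strip the prefix on Get; returns false if the entry exceeded its TTL.
  bool DecodeAndCheckTTL(const std::string& stored, int32_t ttl_seconds,
                         std::string* value) {
    if (stored.size() < sizeof(uint32_t)) return false;
    uint32_t written;
    memcpy(&written, stored.data(), sizeof(written));
    if (ttl_seconds > 0 &&
        time(nullptr) > static_cast<time_t>(written) + ttl_seconds) {
      return false;  // expired; Get and the compaction filter would drop it
    }
    value->assign(stored.data() + sizeof(uint32_t),
                  stored.size() - sizeof(uint32_t));
    return true;
  }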
Haobo Xu
eb6d139666 [RocksDB] Move table.h to table/
Summary:
- Don't see a point in exposing table.h to the public.
- Fixed make clean to also remove *.d files.

Test Plan: make check; db_stress

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10479
2013-04-22 16:07:56 -07:00
Haobo Xu
1255dcd446 [RocksDB] Add stacktrace signal handler
Summary:
This diff provides the ability to print out a stacktrace when the process receives certain signals.
Currently, we enable this for the following signals (program error related):
SIGILL SIGSEGV SIGBUS SIGABRT
The application simply #includes "util/stack_trace.h" and calls leveldb::InstallStackTraceHandler() during initialization, if a signal handler is needed. This is not done automatically when opening the DB, because it is the application's (process's) responsibility to install signal handlers, and some applications might already have their own (like fbcode).

Sample output:
Received signal 11 (Segmentation fault)
#0  0x408ff0 ./signal_test() [0x408ff0] /home/haobo/rocksdb/util/signal_test.cc:4
#1  0x40827d ./signal_test() [0x40827d] /home/haobo/rocksdb/util/signal_test.cc:24
#2  0x7f8bb183172e /usr/local/fbcode/gcc-4.7.1-glibc-2.14.1/lib/libc.so.6(__libc_start_main+0x10e) [0x7f8bb183172e] ??:0
#3  0x408ebc ./signal_test() [0x408ebc] /home/engshare/third-party/src/glibc/glibc-2.14.1/glibc-2.14.1/csu/../sysdeps/x86_64/elf/start.S:113
Segmentation fault (core dumped)

For each frame, we print the raw pointer, the symbol provided by backtrace_symbols (still not good enough), and the source file/line. Note that address translation is done by shelling out directly to addr2line. ??:0 means addr2line failed to do the translation. Hacky, but I think it's good for now.

Test Plan: signal_test.cc

Reviewers: dhruba, MarkCallaghan

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10173
2013-04-20 10:26:50 -07:00
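A short sketch of wiring up the handler described above; the header path and function name are taken directly from the summary:

  #include "util/stack_trace.h"

  int main() {
    // Install handlers for SIGILL, SIGSEGV, SIGBUS, and SIGABRT so a
    // stack trace is printed if the process crashes.
    leveldb::InstallStackTraceHandler();

    // ... application or test code ...
    return 0;
  }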
Mayank Agarwal
f51b375062 Printing the options that db_crashtest.py is run with
Summary: To know which options the crashtest was run with. Also changed print to sys.stdout.write which is more standard.

Test Plan: python tools/db_crashtest.py

Reviewers: vamsi, akushner, dhruba

Reviewed By: akushner

Differential Revision: https://reviews.facebook.net/D10119
2013-04-10 14:03:10 -07:00
Mayank Agarwal
faa32a72a6 Invoke crash test from the Makefile
Summary: make crash_test will now invoke the crash_test. Also some cleanup in the db_crashtest.py file

Test Plan: make crash_test

Reviewers: akushner, vamsi, sheki, dhruba

Reviewed By: vamsi

Differential Revision: https://reviews.facebook.net/D9987
2013-04-08 18:11:11 -07:00
Mayank Agarwal
6203e4ccc0 Renaming the built leveldb.a library to rocksdb
Summary: Renames in the Makefile. It will be used like this in third-party.

Test Plan: make

Reviewers: dhruba, sheki, heyongqiang, haobo

Reviewed By: heyongqiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9633
2013-04-05 14:10:03 -07:00
Simon Marlow
a8bf8fe504 Integrate the manifest_dump command with ldb
Summary:
Syntax:

   manifest_dump [--verbose] --num=<manifest_num>

e.g.

$ ./ldb --db=/home/smarlow/tmp/testdb manifest_dump --num=12
manifest_file_number 13 next_file_number 14 last_sequence 3 log_number
11  prev_log_number 0
--- level 0 --- version# 0 ---
 6:116['a1' @ 1 : 1 .. 'a1' @ 1 : 1]
 10:130['a3' @ 2 : 1 .. 'a4' @ 3 : 1]
--- level 1 --- version# 0 ---
--- level 2 --- version# 0 ---
--- level 3 --- version# 0 ---
--- level 4 --- version# 0 ---
--- level 5 --- version# 0 ---
--- level 6 --- version# 0 ---

Test Plan: - Tested on an example DB (see output in summary)

Reviewers: sheki, dhruba

Reviewed By: sheki

CC: leveldb, heyongqiang

Differential Revision: https://reviews.facebook.net/D9609
2013-03-22 09:17:30 -07:00
Mayank Agarwal
487168cdcf Fixed sign-comparison in rocksdb code-base and fixed Makefile
Summary: Makefile had options to ignore sign-comparisons and unused-parameters, which should be there. Also fixed the specific errors in the code-base

Test Plan: make

Reviewers: chip, dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9531
2013-03-19 14:35:23 -07:00
amayank
3eed5c9c01 Putting option -std=gnu++0x in Makefile rather than fbcode.gcc471.sh
Summary: This option is needed for compilation, and the open-sourced rocksdb version will need to get it from the Makefile

Test Plan: make clean;make

Reviewers: MarkCallaghan, dhruba, sheki, chip

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9243
2013-03-08 11:56:18 -08:00
amayank
9e1c89cde8 Moving VALGRIND_VER which takes the valgrind version from third party to fbcode.gcc471.sh file
Summary:
The valgrind version being used lives in a Facebook-specific path and should be set in the fbcode.gcc471.sh file instead of the Makefile.
Execution falls back to the environment's default valgrind version if fbcode.gcc471.sh's valgrind_version is not available.

Test Plan: make valgrind_check

Reviewers: dhruba, sheki, akushner

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D9213
2013-03-08 11:51:26 -08:00
amayank
3b87e2bd2a Use version 3.8.1 for valgrind in third_party and do away with log files
Summary:
The currently used valgrind 3.7.0 has a bug that requires LD_PRELOAD to be set as a workaround. This caused problems when run on Jenkins. 3.8.1 has fixed this issue and we should use it from third party.
Also, we have done away with log files. The whole output will be on the terminal and the failed tests will be listed at the end. This is done because Jenkins only lets us download the different files and not view them in the browser, which is undesirable.

Test Plan: make valgrind_check

Reviewers: akushner, dhruba, vamsi, sheki, heyongqiang

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9171
2013-03-06 17:47:31 -08:00
Dhruba Borthakur
e7b726da08 Downgrade optimization level from -O3 to -O2.
Summary:
When we use -O3, the gcc 4.7.1 compiler generates 'pinsrd' which is
not supported on machines with "vendor_id       : AuthenticAMD".

Previous release of rocksdb used -O2.
Optimization -O2 was introduced at
772f75b3fb

Test Plan: make check

Reviewers: chip, heyongqiang, sheki

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9093
2013-03-05 10:54:02 -08:00
amayank
ec96ad5405 Automating valgrind to run with jenkins
Summary:
The script valgrind_test.sh runs Valgrind for all tests in the makefile
including leak-checks and outputs the logs for every test in a separate file
with the name "valgrind_log_<testname>". It prints the failed tests in the file
"valgrind_failed_tests". All these files are created in the directory
"VALGRIND_LOGS" which can be changed in the Makefile.
Finally it checks the line-count for the file "valgrind_failed_tests"
and returns 0 if no tests failed and 1 otherwise.

Test Plan: ./valgrind_test.sh; Changed the tests to incorporate leaks and verified correctness

Reviewers: dhruba, sheki, MarkCallaghan

Reviewed By: sheki

CC: zshao

Differential Revision: https://reviews.facebook.net/D8877
2013-03-01 11:44:40 -08:00
amayank
5024d72572 Adding a rule in the Makefile to run valgrind on the rocksdb tests
Summary: Added automated valgrind testing for rocksdb by adding valgrind_check in the Makefile

Test Plan: make clean; make all check

Reviewers: dhruba, sheki, MarkCallaghan, zshao

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8787
2013-02-21 18:41:00 -08:00
Kai Liu
45f0030458 Fix the "IO error" in auto_roll_logger_test
Summary:

I missed InitTestDb() in one of my tests. InitTestDb() initializes the test directory, without which the test will throw an IO error.

This problem didn't occur before because I had already run the tests, so the test directory was already there.

Test Plan:

Reviewers: dhruba

CC:

Task ID: #

Blame Rev:
2013-02-19 00:13:22 -08:00
Kai Liu
ae09544cd6 Temporary remove the auto_roll_logger_test.
Summary:

auto_roll_logger_test is failing because it cannot create the test dir, leading to IO error: /tmp/leveldbtest-6108/db_log_test/LOG: No such file or directory.
I'll temporarily remove the unit test and will restore it once this problem is solved.

Test Plan:

make all check

Reviewers: dhruba

CC: leveldb

Task ID: #

Blame Rev:
2013-02-18 23:30:26 -08:00
Kai Liu
b63aafce42 Allow the logs to be purged by TTL.
Summary:
* Add a SplitByTTLLogger to enable this feature. In this diff I implemented a generalized AutoSplitLoggerBase class to simplify the
development of such classes.
* Refactor the existing AutoSplitLogger and fix several bugs.

Test Plan:
* Added unit tests for the different types of "auto-splittable" loggers individually.
* Tested the composite logger, which allows the log files to be split by both TTL and log size.

Reviewers: heyongqiang, dhruba

Reviewed By: heyongqiang

CC: zshao, leveldb

Differential Revision: https://reviews.facebook.net/D8037
2013-02-04 19:42:40 -08:00
Abhishek Kona
009034cf12 Performant util/histogram.
Summary:
Earlier way to record in the histogram =>
linear-search the BucketLimit array to find the bucket and increment the
counter.
Current way to record in the histogram =>
store a HistMap statically which points to the bucket of each value in the
range [kFirstValue, kLastValue).

In the process, use vectors instead of arrays and refactor some code into a
HistogramHelper class.

Test Plan:
run db_bench with histogram=1 and see a histogram being
printed.

Reviewers: dhruba, chip, heyongqiang

Reviewed By: chip

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8265
2013-01-31 16:10:34 -08:00
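An illustrative sketch of the idea above: replace the per-record linear scan of BucketLimit with a precomputed ordered mapping from bucket upper bound to bucket index. The actual HistMap in util/histogram differs in detail:

  #include <cstddef>
  #include <cstdint>
  #include <map>
  #include <vector>

  class BucketMapper {
   public:
    // bucket_limits must be non-empty and sorted ascending.
    explicit BucketMapper(const std::vector<uint64_t>& bucket_limits) {
      for (size_t i = 0; i < bucket_limits.size(); i++) {
        limit_to_index_[bucket_limits[i]] = i;
      }
    }
    // O(log #buckets) per recorded value instead of a linear scan.
    size_t IndexForValue(uint64_t value) const {
      auto it = limit_to_index_.lower_bound(value);
      return it == limit_to_index_.end() ? limit_to_index_.size() - 1 : it->second;
    }
   private:
    std::map<uint64_t, size_t> limit_to_index_;
  };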
Dilip Antony Joseph
11ce6a060e Enhanced ldb to support data access commands
Summary: Added put/get/scan/batchput/delete/approxsize

Test Plan: Added pyunit script to test the newly added commands

Reviewers: chip, leveldb

Reviewed By: chip

CC: zshao, emayanke

Differential Revision: https://reviews.facebook.net/D7947
2013-01-28 11:38:26 -08:00
Chip Turner
772f75b3fb Stop continually re-creating build_version.c
Summary:
We continually rebuilt build_version.c because we put the
current date into it, but that's what __DATE__ already is.  This makes
builds faster.

This also fixes an issue with 'make clean FOO' not working properly.

Also tweak the build rules to be more consistent, always have warnings,
and add a 'make release' rule to handle flags for release builds.

Test Plan: make, make clean

Reviewers: dhruba

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D8139
2013-01-24 17:51:39 -08:00
Chip Turner
c0cb289d57 Various build cleanups/improvements
Summary:
Specific changes:

1) Turn on -Werror so all warnings are errors
2) Fix some warnings the above now complains about
3) Add proper dependency support so changing a .h file forces a .c file
to rebuild
4) Automatically use fbcode gcc on any internal machine rather than
whatever system compiler is laying around
5) Fix jemalloc to once again be used in the builds (seemed like it
wasn't being?)
6) Fix issue where 'git' would fail in build_detect_version because of
LD_LIBRARY_PATH being set in the third-party build system

Test Plan:
make, make check, make clean, touch a header file, make sure
rebuild is expected

Reviewers: dhruba

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D7887
2013-01-14 18:40:22 -08:00
Abhishek Kona
396c3aa08e Create a long running test to check GetUpdatesSince.
Summary:
Create an executable with
* A thread doing Puts
* A thread using GetUpdatesSince, checking whether we miss any sequence
  numbers.

Test Plan: It runs.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7383
2012-12-21 14:10:06 -08:00
Abhishek Kona
1aae609b92 Use the CRC32 SSE4.2 instruction. Load it dynamically.
Test Plan: make all check

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb, zshao

Differential Revision: https://reviews.facebook.net/D7503
2012-12-21 10:20:32 -08:00
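A hedged, standalone sketch of the runtime-detection idea: probe cpuid for SSE4.2 once, then dispatch between the hardware crc32 instruction and a plain bitwise CRC32C fallback (x86/gcc only; the actual util/crc32c code differs):

  #include <cpuid.h>
  #include <cstddef>
  #include <cstdint>

  static bool HaveSSE42() {
    unsigned int eax, ebx, ecx, edx;
    return __get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & bit_SSE4_2);
  }

  static uint32_t HardwareExtendByte(uint32_t crc, uint8_t b) {
    asm("crc32b %1, %0" : "+r"(crc) : "rm"(b));  // SSE4.2 crc32 instruction
    return crc;
  }

  static uint32_t SoftwareExtendByte(uint32_t crc, uint8_t b) {
    crc ^= b;
    for (int i = 0; i < 8; i++) {
      crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1)));  // CRC32C polynomial
    }
    return crc;
  }

  uint32_t Crc32cExtend(uint32_t crc, const uint8_t* data, size_t n) {
    static const bool have_sse42 = HaveSSE42();  // probed once per process
    for (size_t i = 0; i < n; i++) {
      crc = have_sse42 ? HardwareExtendByte(crc, data[i])
                       : SoftwareExtendByte(crc, data[i]);
    }
    return crc;
  }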
Dhruba Borthakur
551f01fb23 Unit test to test block format.
Summary: This is a standalone unit test to test the format of a block.

Test Plan: ./block_test

Reviewers: sheki

Reviewed By: sheki

Differential Revision: https://reviews.facebook.net/D7533
2012-12-20 14:55:07 -08:00
Dhruba Borthakur
aa42c66814 Fix all warnings generated by -Wall option to the compiler.
Summary:
The default compilation process now uses "-Wall" to compile.
Fix all compilation errors generated by gcc.

Test Plan: make all check

Reviewers: heyongqiang, emayanke, sheki

Reviewed By: heyongqiang

CC: MarkCallaghan

Differential Revision: https://reviews.facebook.net/D6525
2012-11-06 14:07:31 -08:00
heyongqiang
d55c2ba305 Add a tool to change number of levels
Summary: as subject.

Test Plan: manually test it, will add a testcase

Reviewers: dhruba, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D6345
2012-11-05 10:17:39 -08:00
Asad K Awan
24f7983b1f [tools] Add a tool to stress test concurrent writing to levelDB
Summary:
Created a tool that runs multiple threads that concurrently read and write to LevelDB.
All writes to the DB are stored in an in-memory hashtable and verified at the end of the
test. All writes for a given key are serialized.

Test Plan:
 - Verified by writing only a few keys and logging all writes and verifying that values read and written are correct.
 - Verified correctness of value generator.
 - Ran with various parameters of number of keys, locks, and threads.

Reviewers: dhruba, MarkCallaghan, heyongqiang

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D5829
2012-10-10 12:12:55 -07:00
gjain
92368ab8a2 Add db_dump tool to dump DB keys
Summary:
Create a tool to iterate through keys and dump values. Current options
are as follows:

db_dump --start=[START_KEY] --end=[END_KEY] --max_keys=[NUM] --stats
[PATH]

START_KEY: First key to start at
END_KEY: Key to end at (not inclusive)
NUM: Maximum number of keys to dump
PATH: Path to leveldb DB

The --stats command line argument prints out the DB stats before dumping
the keys.

Test Plan:
- Tested with invalid args
- Tested with invalid path
- Used empty DB
- Used filled DB
- Tried various permutations of command line options

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D5643
2012-09-27 09:53:58 -07:00
Dhruba Borthakur
eace74deac Add -fPIC to the shared library builds. Needed by libleveldbjni.
Summary: Add -fPIC to the shared library builds. Needed by libleveldbjni.

Test Plan: build

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D5667
2012-09-25 11:07:35 -07:00
Dhruba Borthakur
dd45b8cd8c Keep symbols even for production release.
Summary:
Keeping symbols in the binary increases the size of the library but makes
it easier to debug. The optimization level is still -O2, so this should
have no impact on performance.

Test Plan: make all

Reviewers: heyongqiang, MarkCallaghan

Reviewed By: MarkCallaghan

CC: MarkCallaghan

Differential Revision: https://reviews.facebook.net/D5601
2012-09-21 15:57:47 -07:00
heyongqiang
a4f9b8b49e merge 1.5
Summary:

as subject

Test Plan:

db_test table_test

Reviewers: dhruba
2012-08-28 11:43:33 -07:00
Dhruba Borthakur
fc20273e73 Introduce a new method Env->Fsync() that issues fsync (instead of fdatasync).
Summary:
Introduce a new method Env->Fsync() that issues fsync (instead of fdatasync).
This is needed for data durability when running on ext3 filesystems.
Added options to the benchmark db_bench to generate performance numbers
with either fsync or fdatasync enabled.

Cleaned up Makefile to build leveldb_shell only when building the thrift
leveldb server.

Test Plan: build and run benchmark

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D4911
2012-08-27 21:24:17 -07:00
Dhruba Borthakur
5d96f290b3 Record the version of the source repository that was used to build the leveldb library.
Summary: Record the version of the source that we are compiling. We keep a record of the git revision in util/version.cc. This source file is then built as a regular source file as part of the compilation process. One can run "strings executable_filename | grep _build_" to find the version of the source that we used to build the executable file.

Test Plan: none

Differential Revision: https://reviews.facebook.net/D4785
2012-08-24 15:18:43 -07:00
Dhruba Borthakur
d41316bc0f Prevent concurrent multiple opens of leveldb database.
Summary:
The fcntl call cannot detect lock conflicts when invoked multiple times
from the same thread.
Use a static lockedFiles set to record the paths that are locked.
A lock-file request checks whether this filename already exists in
lockedFiles; if so, it triggers an error. Otherwise, it inserts
the filename into the lockedFiles set.
An unlock-file request verifies that the filename is in the lockedFiles
set and removes it from the lockedFiles set.

Test Plan: unit test attached

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D4755
2012-08-24 15:18:01 -07:00
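A standalone sketch of the locked-file-set technique described above (fcntl cannot detect conflicts between locks taken within the same process), using illustrative names rather than the actual Env code:

  #include <mutex>
  #include <set>
  #include <string>

  static std::mutex locked_files_mutex;
  static std::set<std::string> locked_files;

  // Returns false if this process already holds a lock on fname.
  bool MarkFileLocked(const std::string& fname) {
    std::lock_guard<std::mutex> guard(locked_files_mutex);
    return locked_files.insert(fname).second;  // false if already present
  }

  // Called after the fcntl lock on fname has been released.
  void MarkFileUnlocked(const std::string& fname) {
    std::lock_guard<std::mutex> guard(locked_files_mutex);
    locked_files.erase(fname);
  }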
Dhruba Borthakur
f3ee54526f Utility to dump manifest contents.
Summary:
./manifest_dump --file=/tmp/dbbench/MANIFEST-000002

Output looks like

manifest_file_number 30 next_file_number 31 last_sequence 388082 log_number 28  prev_log_number 0
--- level 0 ---
--- level 1 ---
--- level 2 ---
 5:3244155['0000000000000000' @ 1 : 1 .. '0000000000028220' @ 28221 : 1]
 7:3244177['0000000000028221' @ 28222 : 1 .. '0000000000056441' @ 56442 : 1]
 9:3244156['0000000000056442' @ 56443 : 1 .. '0000000000084662' @ 84663 : 1]
 11:3244178['0000000000084663' @ 84664 : 1 .. '0000000000112883' @ 112884 : 1]
 13:3244158['0000000000112884' @ 112885 : 1 .. '0000000000141104' @ 141105 : 1]
 15:3244176['0000000000141105' @ 141106 : 1 .. '0000000000169325' @ 169326 : 1]
 17:3244156['0000000000169326' @ 169327 : 1 .. '0000000000197546' @ 197547 : 1]
 19:3244178['0000000000197547' @ 197548 : 1 .. '0000000000225767' @ 225768 : 1]
 21:3244155['0000000000225768' @ 225769 : 1 .. '0000000000253988' @ 253989 : 1]
 23:3244179['0000000000253989' @ 253990 : 1 .. '0000000000282209' @ 282210 : 1]
 25:3244157['0000000000282210' @ 282211 : 1 .. '0000000000310430' @ 310431 : 1]
 27:3244176['0000000000310431' @ 310432 : 1 .. '0000000000338651' @ 338652 : 1]
 29:3244156['0000000000338652' @ 338653 : 1 .. '0000000000366872' @ 366873 : 1]
--- level 3 ---
--- level 4 ---
--- level 5 ---
--- level 6 ---

Test Plan: run on test directory created by dbbench

Reviewers: heyongqiang

Reviewed By: heyongqiang

CC: hustliubo

Differential Revision: https://reviews.facebook.net/D4743
2012-08-24 15:17:09 -07:00
bol
89f0c25360 Merge remote-tracking branch 'origin/master' into task1181723
Conflicts:
	Makefile
2012-08-24 03:34:55 -07:00
Dhruba Borthakur
2b443ba887 Title: a command line shell to read/write data from a leveldb thrift server
Summary: Implemented a command line shell to talk with the leveldb thrift server; it is based on the state pattern and can be easily extended.

Test Plan: build and run

Reviewers: dhruba, zshao, heyongqiang

Differential Revision: https://reviews.facebook.net/D4713
2012-08-23 09:41:05 -07:00
Dhruba Borthakur
f4e7febf22 Record the version of the source repository that was used to build the leveldb library.
Summary: Record the version of the source that we are compiling. We keep a record of the git revision in util/version.cc. This source file is then built as a regular source file as part of the compilation process. One can run "strings executable_filename | grep _build_" to find the version of the source that we used to build the executable file.

Test Plan: none

Differential Revision: https://reviews.facebook.net/D4785
2012-08-21 14:47:15 -07:00
Dhruba Borthakur
e56b2c5a31 Prevent concurrent multiple opens of leveldb database.
Summary:
The fcntl call cannot detect lock conflicts when invoked multiple times
from the same thread.
Use a static lockedFiles set to record the paths that are locked.
A lock-file request checks whether this filename already exists in
lockedFiles; if so, it triggers an error. Otherwise, it inserts
the filename into the lockedFiles set.
An unlock-file request verifies that the filename is in the lockedFiles
set and removes it from the lockedFiles set.

Test Plan: unit test attached

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D4755
2012-08-20 23:55:04 -07:00
Dhruba Borthakur
2aa514ec8c Utility to dump manifest contents.
Summary:
./manifest_dump --file=/tmp/dbbench/MANIFEST-000002

Output looks like

manifest_file_number 30 next_file_number 31 last_sequence 388082 log_number 28  prev_log_number 0
--- level 0 ---
--- level 1 ---
--- level 2 ---
 5:3244155['0000000000000000' @ 1 : 1 .. '0000000000028220' @ 28221 : 1]
 7:3244177['0000000000028221' @ 28222 : 1 .. '0000000000056441' @ 56442 : 1]
 9:3244156['0000000000056442' @ 56443 : 1 .. '0000000000084662' @ 84663 : 1]
 11:3244178['0000000000084663' @ 84664 : 1 .. '0000000000112883' @ 112884 : 1]
 13:3244158['0000000000112884' @ 112885 : 1 .. '0000000000141104' @ 141105 : 1]
 15:3244176['0000000000141105' @ 141106 : 1 .. '0000000000169325' @ 169326 : 1]
 17:3244156['0000000000169326' @ 169327 : 1 .. '0000000000197546' @ 197547 : 1]
 19:3244178['0000000000197547' @ 197548 : 1 .. '0000000000225767' @ 225768 : 1]
 21:3244155['0000000000225768' @ 225769 : 1 .. '0000000000253988' @ 253989 : 1]
 23:3244179['0000000000253989' @ 253990 : 1 .. '0000000000282209' @ 282210 : 1]
 25:3244157['0000000000282210' @ 282211 : 1 .. '0000000000310430' @ 310431 : 1]
 27:3244176['0000000000310431' @ 310432 : 1 .. '0000000000338651' @ 338652 : 1]
 29:3244156['0000000000338652' @ 338653 : 1 .. '0000000000366872' @ 366873 : 1]
--- level 3 ---
--- level 4 ---
--- level 5 ---
--- level 6 ---

Test Plan: run on test directory created by dbbench

Reviewers: heyongqiang

Reviewed By: heyongqiang

CC: hustliubo

Differential Revision: https://reviews.facebook.net/D4743
2012-08-17 22:36:59 -07:00
Dhruba Borthakur
fe6119bd3a "make all check" fails
Summary:
If I run "make all check" it fails to build the leveldb-thrift-server tests. This is because the vanilla build is not supposed to build thrift and/or hdfs extensions. Remove the server test from auto-running.

The thrift/hdfs build instructions are more elaborate, and I would like to avoid them for embedded database builds.

Test Plan: build and run

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D4641
2012-08-14 15:55:38 -07:00
Dhruba Borthakur
80c663882a Create leveldb server via Thrift.
Summary:
First draft.
Unit tests pass.

Test Plan: unit tests attached

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D3969
2012-07-07 09:42:39 -07:00
Dhruba Borthakur
e9bc777163 While creating the leveldb shared library, use the shared library for jemalloc.
Summary:

Test Plan:

Reviewers:

CC:

Task ID: #

Blame Rev:
2012-06-19 00:56:07 -07:00
Dhruba Borthakur
8d41351666 Ability to switch to gcc 4.1.6 and also use jemalloc. Thanks to Chip for large amounts of help.
Summary:
Task ID: #

Blame Rev:

Test Plan: Revert Plan:

Reviewers: chip

Reviewed By: chip

CC: sc, adsharma

Differential Revision: https://reviews.facebook.net/D3225
2012-05-14 22:15:09 -07:00
Sanjay Ghemawat
85584d497e Added bloom filter support.
In particular, we add a new FilterPolicy class.  An instance
of this class can be supplied in Options when opening a
database.  If supplied, the instance is used to generate
summaries of keys (e.g., a bloom filter) which are placed in
sstables.  These summaries are consulted by DB::Get() so we
can avoid reading sstable blocks that are guaranteed to not
contain the key we are looking for.

This change provides one implementation of FilterPolicy
based on bloom filters.

Other changes:
- Updated version number to 1.4.
- Some build tweaks.
- C binding for CompactRange.
- A few more benchmarks: deleteseq, deleterandom, readmissing, seekrandom.
- Minor .gitignore update.
2012-04-17 08:36:46 -07:00
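A minimal sketch of supplying the new bloom-filter-based FilterPolicy when opening a database (standard API of this release; the path and bits-per-key value are just an example):

  #include <cassert>

  #include "leveldb/db.h"
  #include "leveldb/filter_policy.h"

  int main() {
    leveldb::Options options;
    options.create_if_missing = true;
    // Roughly 10 bits per key gives about a 1% false-positive rate.
    options.filter_policy = leveldb::NewBloomFilterPolicy(10);

    leveldb::DB* db;
    leveldb::Status s = leveldb::DB::Open(options, "/tmp/bloom_example", &db);
    assert(s.ok());
    // DB::Get() can now skip sstable blocks whose filter rules out the key.
    delete db;
    delete options.filter_policy;
    return 0;
  }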
Sanjay Ghemawat
bc1ee4d25e build shared libraries; updated version to 1.3; add Status accessors 2012-03-30 13:15:49 -07:00
Sanjay Ghemawat
a1ad4d1995 Build fixes and cleanups:
(1) Separate out C++ and CC flags (fixes c_test compilation)
(2) Move snappy/perftools detection to script
(3) Fix db_bench_sqlite3 and db_bench_tree_db build rules
2012-03-21 10:28:03 -07:00
Sanjay Ghemawat
9013f13b15 use mmap on 64-bit machines to speed-up reads; small build fixes 2012-03-15 09:14:00 -07:00
Hans Wennborg
c8c5866a86 Makefile fixes for systems with $CXX other than g++.
- Makefile: Use $(CXX) for compiling C++ files,
  don't override the environment's value of $CXX

- build_detect_platform: use $CXX instead of g++.

Based on bug report from Theo Schlossnagle:
http://code.google.com/p/leveldb/issues/detail?id=46

(Sync with upstream at 25807040.)
2011-11-30 10:59:40 +00:00
Hans Wennborg
42fb47f6ed Pass system's CFLAGS, remove exit time destructor, sstable bug fix.
- Pass system's values of CFLAGS,LDFLAGS.
  Don't override OPT if it's already set.
  Original patch by Alessio Treglia <alessio@debian.org>:
  http://code.google.com/p/leveldb/issues/detail?id=27#c6

- Remove 1 exit time destructor from leveldb.
  See http://crbug.com/101600

- Fix problem where sstable building code would pass an
  internal key to the user comparator.

(Sync with upstream at 25436817.)
2011-11-14 17:06:16 +00:00
Hans Wennborg
213a68eb68 Sync with upstream @23860137.
Fix GCC -Wshadow warnings in LevelDB's public header files,
reported by Dustin.

Add in-memory Env implementation (helpers/memenv/*).
This enables users to create LevelDB databases in-memory.

Initialize ShardedLRUCache::last_id_ to zero.
This fixes a Valgrind warning.

(Also delete port/sha1_* which were removed upstream some time ago.)
2011-09-12 10:21:10 +01:00
gabor@google.com
021ee9c32b C binding for leveldb, better readseq benchmark for SQLite.
- Added a C binding for LevelDB.
  May be useful as a stable ABI that can be used by 
  programs that keep leveldb in a shared library, 
  or for JNI API.

- Replaced SQLite's readseq benchmark with a more efficient version.
  SQLite readseq speeds increased by about 2x
  from the previous version. Also updated the benchmark page to
  reflect the readseq speed-up.



git-svn-id: https://leveldb.googlecode.com/svn/trunk@46 62dab493-f737-651d-591e-8d6aee1b9529
2011-08-05 20:40:49 +00:00
gabor@google.com
f122c6dfbb Adding FreeBSD support, removing Chromium files, adding benchmark.
- LevelDB patch for FreeBSD. This resolves Issue 22.
  Contributed by dforsythe (thanks!).

- Removing Chromium-specific files.
  They are now going to live in the Chromium repository.

- Adding a benchmark page comparing LevelDB performance
  to SQLite and Kyoto Cabinet's TreeDB, along with
  code to generate the benchmarks.
  Thanks to Kevin Tseng for compiling the benchmarks,
  and Scott Hess and Mikio Hirabayashi for their
  help and advice.



git-svn-id: https://leveldb.googlecode.com/svn/trunk@40 62dab493-f737-651d-591e-8d6aee1b9529
2011-07-27 01:46:25 +00:00
gabor@google.com
85f0ab1975 Fixing Makefile issue reported in Issue 15 (misspelled flag)
git-svn-id: https://leveldb.googlecode.com/svn/trunk@35 62dab493-f737-651d-591e-8d6aee1b9529
2011-06-29 22:53:17 +00:00
gabor@google.com
f57e23351f Platform detection during build, plus compatibility patches for machines without <cstdatomic>.
This revision adds two major changes:
1. build_detect_platform which generates build_config.mk
   with platform-dependent flags for the build process
2. /port/atomic_pointer.h with an AtomicPointer implementation
   for platforms without <cstdatomic>

Some of this code is loosely based on patches submitted to the 
LevelDB mailing list at https://groups.google.com/forum/#!forum/leveldb
Tip of the hat to Dave Smith and Edouard A, who both sent patches.

The presence of Snappy (http://code.google.com/p/snappy/) and
cstdatomic are now both detected in the build_detect_platform
   script (1.), which gets executed during make.

For (2.), instead of broadly importing atomicops_* from Chromium or
the Google performance tools, we chose to just implement AtomicPointer 
and the limited atomic load and store operations it needs. 
This resulted in much less code and fewer files - everything is 
contained in atomic_pointer.h.



git-svn-id: https://leveldb.googlecode.com/svn/trunk@34 62dab493-f737-651d-591e-8d6aee1b9529
2011-06-29 00:30:50 +00:00
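A sketch of the minimal AtomicPointer idea described above, written here with C++11 atomics for brevity; the actual port/atomic_pointer.h uses raw memory barriers precisely because <cstdatomic> may be unavailable on the target platform:

  #include <atomic>

  class AtomicPointer {
   public:
    AtomicPointer() : rep_(nullptr) {}
    explicit AtomicPointer(void* p) : rep_(p) {}
    // Unordered load/store, for values with no publication requirements.
    void* NoBarrier_Load() const { return rep_.load(std::memory_order_relaxed); }
    void NoBarrier_Store(void* v) { rep_.store(v, std::memory_order_relaxed); }
    // Acquire/release pair for publishing data built before the store.
    void* Acquire_Load() const { return rep_.load(std::memory_order_acquire); }
    void Release_Store(void* v) { rep_.store(v, std::memory_order_release); }
   private:
    std::atomic<void*> rep_;
  };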
gabor@google.com
ccf0fcd5c2 A number of smaller fixes and performance improvements:
- Implemented Get() directly instead of building on top of a full
  merging iterator stack.  This speeds up the "readrandom" benchmark
  by up to 15-30%.

- Fixed an opensource compilation problem.
  Added --db=<name> flag to control where the database is placed.

- Automatically compact a file when we have done enough
  overlapping seeks to that file.

- Fixed a performance bug where we would read from at least one
  file in a level even if none of the files overlapped the key
  being read.

- Makefile fix for Mac OSX installations that have XCode 4 without XCode 3.

- Unified the two occurrences of binary search in a file-list
  into one routine.

- Found and fixed a bug where we would unnecessarily search the
  last file when looking for a key larger than all data in the
  level.

- A fix to avoid the need for trivial move compactions and
  therefore gets rid of two out of five syncs in "fillseq".

- Removed the MANIFEST file write when switching to a new
  memtable/log-file for a 10-20% improvement on fill speed on ext4.

- Adding a SNAPPY setting in the Makefile for folks who have
  Snappy installed. Snappy compresses values and speeds up writes.



git-svn-id: https://leveldb.googlecode.com/svn/trunk@32 62dab493-f737-651d-591e-8d6aee1b9529
2011-06-22 02:36:45 +00:00
dgrogan@chromium.org
c4f5514948 sync with upstream @21627589
Minor changes:
* Reformat the bodies of the iterator interface routines in IteratorWrapper to
  make them a bit easier to read
* Switched the default in the leveldb makefile to be optimized mode, rather
  than debug mode
* Fix build problem in chromium port

git-svn-id: https://leveldb.googlecode.com/svn/trunk@30 62dab493-f737-651d-591e-8d6aee1b9529
2011-06-02 00:00:37 +00:00
dgrogan@chromium.org
740d8b3d00 Update from upstream @21551990
* Patch LevelDB to build for OSX and iOS
* Fix race condition in memtable iterator deletion.
* Other small fixes.

git-svn-id: https://leveldb.googlecode.com/svn/trunk@29 62dab493-f737-651d-591e-8d6aee1b9529
2011-05-28 00:53:58 +00:00
dgrogan@chromium.org
ba6dac0e80 @20776309
* env_chromium.cc should not export symbols.
* Fix MSVC warnings.
* Removed large value support.
* Fix broken reference to documentation file

git-svn-id: https://leveldb.googlecode.com/svn/trunk@24 62dab493-f737-651d-591e-8d6aee1b9529
2011-04-20 22:48:11 +00:00
dgrogan@chromium.org
69c6d38342 reverting disastrous MOE commit, returning to r21
git-svn-id: https://leveldb.googlecode.com/svn/trunk@23 62dab493-f737-651d-591e-8d6aee1b9529
2011-04-19 23:11:15 +00:00
dgrogan@chromium.org
b743906eea Revision created by MOE tool push_codebase.
MOE_MIGRATION=


git-svn-id: https://leveldb.googlecode.com/svn/trunk@22 62dab493-f737-651d-591e-8d6aee1b9529
2011-04-19 23:01:25 +00:00
dgrogan@chromium.org
b409afe968 chmod a-x
git-svn-id: https://leveldb.googlecode.com/svn/trunk@21 62dab493-f737-651d-591e-8d6aee1b9529
2011-04-18 23:15:58 +00:00
dgrogan@chromium.org
f779e7a5d8 @20602303. Default file permission is now 755.
git-svn-id: https://leveldb.googlecode.com/svn/trunk@20 62dab493-f737-651d-591e-8d6aee1b9529
2011-04-12 19:38:58 +00:00
jorlow@chromium.org
4671a695fc Move include files into a leveldb subdir.
git-svn-id: https://leveldb.googlecode.com/svn/trunk@18 62dab493-f737-651d-591e-8d6aee1b9529
2011-03-30 18:35:40 +00:00
jorlow@chromium.org
7a02caf2a4 Fix typo in Makefile.
git-svn-id: https://leveldb.googlecode.com/svn/trunk@4 62dab493-f737-651d-591e-8d6aee1b9529
2011-03-18 23:03:49 +00:00
jorlow@chromium.org
f67e15e50f Initial checkin.
git-svn-id: https://leveldb.googlecode.com/svn/trunk@2 62dab493-f737-651d-591e-8d6aee1b9529
2011-03-18 22:37:00 +00:00