A library that provides an embeddable, persistent key-value store for fast storage.
Improve sst_dump help message (commit 0522990358 by Islam AbdelRahman)
Summary:
Current message:

```
sst_dump [--command=check|scan|none|raw] [--verify_checksum] --file=data_dir_OR_sst_file [--output_hex] [--input_key_hex] [--from=<user_key>] [--to=<user_key>] [--read_num=NUM] [--show_properties] [--show_compression_sizes] [--show_compression_sizes [--set_block_size=<block_size>]]
```
New message:

```
sst_dump --file=<data_dir_OR_sst_file> [--command=check|scan|raw]
    --file=<data_dir_OR_sst_file>
      Path to SST file or directory containing SST files

    --command=check|scan|raw
        check: Iterate over entries in files but don't print anything except if an error is encountered (default command)
        scan: Iterate over entries in files and print them to screen
        raw: Dump all the table contents to <file_name>_dump.txt

    --output_hex
      Can be combined with scan command to print the keys and values in Hex

    --from=<user_key>
      Key to start reading from when executing check|scan

    --to=<user_key>
      Key to stop reading at when executing check|scan

    --read_num=<num>
      Maximum number of entries to read when executing check|scan

    --verify_checksum
      Verify file checksum when executing check|scan

    --input_key_hex
      Can be combined with --from and --to to indicate that these values are encoded in Hex

    --show_properties
      Print table properties after iterating over the file

    --show_compression_sizes
      Independent command that will recreate the SST file using a 16K block size with each
      supported compression algorithm and report the resulting file size for each

    --set_block_size=<block_size>
      Can be combined with --show_compression_sizes to set the block size that will be used
      when trying different compression algorithms
```
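
For illustration, here are a few example invocations that combine the flags above (the file paths and key values are placeholders, not taken from the commit):

```
# Check every SST file under a DB directory, verifying block checksums
sst_dump --file=/path/to/db_dir --command=check --verify_checksum

# Scan a key range and print keys/values in hex
sst_dump --file=/path/to/000123.sst --command=scan --from=user000 --to=user999 --output_hex

# Rebuild the file with a 64K block size under each compression type and report the sizes
sst_dump --file=/path/to/000123.sst --show_compression_sizes --set_block_size=65536
```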

Test Plan: none

Reviewers: yhchiang, andrewkr, kradhakrishnan, yiwu, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D56325
2016-04-08 12:05:02 -07:00
| Path | Latest commit | Date |
|------|---------------|------|
| arcanist_util | Updated all copyright headers to the new format. | 2016-02-09 15:12:00 -08:00 |
| build_tools | Add support for UBsan builds to RocksDB | 2016-03-30 15:59:24 -07:00 |
| coverage | Fix coverage script | 2014-11-03 14:53:00 -08:00 |
| db | Embed column family name in SST file | 2016-04-06 23:10:32 -07:00 |
| doc | Lint everything | 2015-11-16 12:56:21 -08:00 |
| examples | Adding pin_l0_filter_and_index_blocks_in_cache feature and related fixes. | 2016-04-01 10:42:39 -07:00 |
| hdfs | instructing people to use java7 for hdfs (#1063) | 2016-04-07 14:34:28 -07:00 |
| include/rocksdb | Update comments on include/rocksdb/perf_context.h | 2016-04-08 11:27:08 -07:00 |
| java | Merge pull request #1053 from adamretter/benchmark-java-comparator | 2016-04-01 13:53:15 -07:00 |
| memtable | Updated all copyright headers to the new format. | 2016-02-09 15:12:00 -08:00 |
| port | Fixed compile warnings in posix_logger.h and coding.h | 2016-03-31 16:01:47 -07:00 |
| table | Change default number of cache shard bit to be 6 and max_file_opening_threads to be 16. | 2016-04-07 13:55:10 -07:00 |
| third-party | Fix the build break on Ubuntu 15.10 when gcc 5.2.1 is used | 2016-03-15 10:30:10 -07:00 |
| tools | Improve sst_dump help message | 2016-04-08 12:05:02 -07:00 |
| util | Change default number of cache shard bit to be 6 and max_file_opening_threads to be 16. | 2016-04-07 13:55:10 -07:00 |
| utilities | fix wrong assignment of level0_stop_writes_trigger in spatialdb (#1061) | 2016-04-07 09:02:28 -07:00 |
| .arcconfig | Integrate Jenkins with Phabricator | 2015-04-07 11:56:29 -07:00 |
| .clang-format | A script that automatically reformat affected lines | 2014-01-14 12:21:24 -08:00 |
| .gitignore | Ignore db_test2 | 2016-03-07 15:56:16 -08:00 |
| .travis.yml | Travis CI to disable ROCKSDB_LITE tests | 2016-02-01 18:42:01 -08:00 |
| appveyor.yml | Exclude DBTest.FileCreationRandomFailure as a long running test | 2015-11-17 13:54:13 -08:00 |
| AUTHORS | Add AUTHORS file. Fix #203 | 2014-09-29 10:52:18 -07:00 |
| CMakeLists.txt | Add unit tests for RepairDB | 2016-03-18 15:18:42 -07:00 |
| CONTRIBUTING.md | facebook accounts are not required for CLA signers | 2014-07-08 05:57:54 -04:00 |
| DEFAULT_OPTIONS_HISTORY.md | Change default number of cache shard bit to be 6 and max_file_opening_threads to be 16. | 2016-04-07 13:55:10 -07:00 |
| DUMP_FORMAT.md | First version of rocksdb_dump and rocksdb_undump. | 2015-06-19 16:24:36 -07:00 |
| HISTORY.md | Change some RocksDB default options | 2016-03-31 17:12:18 -07:00 |
| INSTALL.md | Simple changes to support builds for ppc64[le] consistent with X86 | 2016-01-19 09:08:19 -06:00 |
| LANGUAGE-BINDINGS.md | Merge pull request #1056 from facebook/igorcanadi-patch-1 | 2016-04-04 08:08:52 -07:00 |
| LICENSE | Updated all copyright headers to the new format. | 2016-02-09 15:12:00 -08:00 |
| Makefile | Add support for UBsan builds to RocksDB | 2016-03-30 15:59:24 -07:00 |
| PATENTS | Update Patent Grant. | 2015-04-13 10:33:43 +01:00 |
| README.md | Replaced "built on on earlier work" by "built on earlier work" in README.md | 2014-09-17 01:16:17 -07:00 |
| ROCKSDB_LITE.md | Optimistic Transactions | 2015-05-29 14:36:35 -07:00 |
| src.mk | Merge pull request #1026 from SherlockNoMad/Hist | 2016-03-15 11:27:54 -07:00 |
| thirdparty.inc | Latest versions of Jemalloc library do not require je_init()/je_unint() | 2016-03-17 11:25:20 -07:00 |
| USERS.md | Added quasardb to the USERS.md file | 2016-03-14 23:48:28 +01:00 |
| Vagrantfile | RocksDB on FreeBSD support | 2015-02-26 15:19:17 -08:00 |
| WINDOWS_PORT.md | Commit both PR and internal code review changes | 2015-07-07 16:58:20 -07:00 |

RocksDB: A Persistent Key-Value Store for Flash and RAM Storage


RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).

This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF), and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.

Start with example usage here: https://github.com/facebook/rocksdb/tree/master/examples
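
For orientation, here is a minimal sketch of opening a database and doing a single put/get, in the spirit of the simple example in that directory (the database path and key/value strings are placeholders):

```
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  // Convenience helpers for reasonable multi-threaded compaction defaults.
  options.IncreaseParallelism();
  options.OptimizeLevelStyleCompaction();
  // Create the database if it does not already exist.
  options.create_if_missing = true;

  // "/tmp/rocksdb_simple_example" is a placeholder path.
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/rocksdb_simple_example", &db);
  assert(s.ok());

  // Write a key, then read it back.
  s = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(s.ok());
  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(s.ok() && value == "value1");

  delete db;
  return 0;
}
```

The IncreaseParallelism() and OptimizeLevelStyleCompaction() calls are the convenience helpers the bundled examples use to pick sensible thread counts and level-style compaction settings; production deployments typically tune these options explicitly.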

See the GitHub wiki for more explanation.

The public interface is in include/. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.

Design discussions are conducted in the Facebook group https://www.facebook.com/groups/rocksdb.dev/