<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="doc.css" />
<title>Leveldb</title>
</head>

<body>
<h1>Leveldb</h1>
<address>Jeff Dean, Sanjay Ghemawat</address>
<p>
The <code>leveldb</code> library provides a persistent key-value store. Keys and
values are arbitrary byte arrays. The keys are ordered within the key-value
store according to a user-specified comparator function.
<p>
<h1>Opening A Database</h1>
<p>
A <code>leveldb</code> database has a name which corresponds to a file system
directory. All of the contents of the database are stored in this
directory. The following example shows how to open a database,
creating it if necessary:
<p>
<pre>
  #include &lt;cassert&gt;
  #include "leveldb/db.h"

  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &amp;db);
  assert(status.ok());
  ...
</pre>
If you want to raise an error if the database already exists, add
the following line before the <code>rocksdb::DB::Open</code> call:
<pre>
  options.error_if_exists = true;
</pre>
<h1>Status</h1>
<p>
You may have noticed the <code>rocksdb::Status</code> type above. Values of this
type are returned by most functions in <code>leveldb</code> that may encounter an
error. You can check if such a result is ok, and also print an
associated error message:
<p>
<pre>
  rocksdb::Status s = ...;
  if (!s.ok()) cerr << s.ToString() << endl;
</pre>
<h1>Closing A Database</h1>
<p>
When you are done with a database, just delete the database object.
Example:
<p>
<pre>
  ... open the db as described above ...
  ... do something with db ...
  delete db;
</pre>
<h1>Reads And Writes</h1>
<p>
The database provides <code>Put</code>, <code>Delete</code>, and <code>Get</code> methods to
modify/query the database. For example, the following code
moves the value stored under key1 to key2.
<pre>
  std::string value;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), key1, &amp;value);
  if (s.ok()) s = db->Put(rocksdb::WriteOptions(), key2, value);
  if (s.ok()) s = db->Delete(rocksdb::WriteOptions(), key1);
</pre>
<h1>Atomic Updates</h1>
<p>
Note that if the process dies after the Put of key2 but before the
Delete of key1, the same value may be left stored under multiple keys.
Such problems can be avoided by using the <code>WriteBatch</code> class to
atomically apply a set of updates:
<p>
<pre>
  #include "leveldb/write_batch.h"
  ...
  std::string value;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), key1, &amp;value);
  if (s.ok()) {
    rocksdb::WriteBatch batch;
    batch.Delete(key1);
    batch.Put(key2, value);
    s = db->Write(rocksdb::WriteOptions(), &amp;batch);
  }
</pre>
The <code>WriteBatch</code> holds a sequence of edits to be made to the database,
and these edits within the batch are applied in order. Note that we
called <code>Delete</code> before <code>Put</code> so that if <code>key1</code> is identical to <code>key2</code>,
we do not end up erroneously dropping the value entirely.
<p>
Apart from its atomicity benefits, <code>WriteBatch</code> may also be used to
speed up bulk updates by placing lots of individual mutations into the
same batch.
<h1>Synchronous Writes</h1>
<p>
By default, each write to <code>leveldb</code> is asynchronous: it
returns after pushing the write from the process into the operating
system. The transfer from operating system memory to the underlying
persistent storage happens asynchronously. The <code>sync</code> flag
can be turned on for a particular write to make the write operation
not return until the data being written has been pushed all the way to
persistent storage. (On Posix systems, this is implemented by calling
either <code>fsync(...)</code> or <code>fdatasync(...)</code> or
<code>msync(..., MS_SYNC)</code> before the write operation returns.)
<pre>
  rocksdb::WriteOptions write_options;
  write_options.sync = true;
  db->Put(write_options, ...);
</pre>
Asynchronous writes are often more than a thousand times as fast as
synchronous writes. The downside of asynchronous writes is that a
crash of the machine may cause the last few updates to be lost. Note
that a crash of just the writing process (i.e., not a reboot) will not
cause any loss since even when <code>sync</code> is false, an update
is pushed from the process memory into the operating system before it
is considered done.
<p>
Asynchronous writes can often be used safely. For example, when
loading a large amount of data into the database you can handle lost
updates by restarting the bulk load after a crash. A hybrid scheme is
also possible where every Nth write is synchronous, and in the event
of a crash, the bulk load is restarted just after the last synchronous
write finished by the previous run. (The synchronous write can update
a marker that describes where to restart on a crash.)
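The every-Nth-write policy above is easy to state in code. The sketch below is illustrative only: <code>kSyncInterval</code> and <code>ShouldSync</code> are hypothetical names for this document, not part of the leveldb API.

```cpp
#include <cassert>

// Hypothetical sketch: make every Nth write of a bulk load synchronous,
// so a machine crash loses at most N-1 recent updates. kSyncInterval is
// an assumed tuning knob, not a leveldb option.
constexpr long kSyncInterval = 1000;

// Returns true if the i-th write (0-based) should set
// write_options.sync = true. The very first write is synchronous so
// that a restart marker exists from the beginning of the load.
bool ShouldSync(long write_index) {
  return write_index % kSyncInterval == 0;
}
```

A bulk loader would consult <code>ShouldSync(i)</code> before each <code>Put</code>, and record its restart marker inside each synchronous write.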
<p>
<code>WriteBatch</code> provides an alternative to asynchronous writes.
Multiple updates may be placed in the same <code>WriteBatch</code> and
applied together using a synchronous write (i.e.,
<code>write_options.sync</code> is set to true). The extra cost of
the synchronous write will be amortized across all of the writes in
the batch.
<p>
<h1>Concurrency</h1>
<p>
A database may only be opened by one process at a time.
The <code>leveldb</code> implementation acquires a lock from the
operating system to prevent misuse. Within a single process, the
same <code>rocksdb::DB</code> object may be safely shared by multiple
concurrent threads. E.g., different threads may write into or fetch
iterators or call <code>Get</code> on the same database without any
external synchronization (the leveldb implementation will
automatically do the required synchronization). However, other objects
(like Iterator and WriteBatch) may require external synchronization.
If two threads share such an object, they must protect access to it
using their own locking protocol. More details are available in
the public header files.
<p>
<h1>Iteration</h1>
<p>
The following example demonstrates how to print all key,value pairs
in a database.
<p>
<pre>
  rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    cout << it->key().ToString() << ": " << it->value().ToString() << endl;
  }
  assert(it->status().ok()); // Check for any errors found during the scan
  delete it;
</pre>
The following variation shows how to process just the keys in the
range <code>[start,limit)</code>:
<p>
<pre>
  for (it->Seek(start);
       it->Valid() &amp;&amp; it->key().ToString() &lt; limit;
       it->Next()) {
    ...
  }
</pre>
You can also process entries in reverse order. (Caveat: reverse
iteration may be somewhat slower than forward iteration.)
<p>
<pre>
  for (it->SeekToLast(); it->Valid(); it->Prev()) {
    ...
  }
</pre>
<h1>Snapshots</h1>
<p>
Snapshots provide consistent read-only views over the entire state of
the key-value store. <code>ReadOptions::snapshot</code> may be non-NULL to indicate
that a read should operate on a particular version of the DB state.
If <code>ReadOptions::snapshot</code> is NULL, the read will operate on an
implicit snapshot of the current state.
<p>
Snapshots are created by the <code>DB::GetSnapshot()</code> method:
<p>
<pre>
  rocksdb::ReadOptions options;
  options.snapshot = db->GetSnapshot();
  ... apply some updates to db ...
  rocksdb::Iterator* iter = db->NewIterator(options);
  ... read using iter to view the state when the snapshot was created ...
  delete iter;
  db->ReleaseSnapshot(options.snapshot);
</pre>
Note that when a snapshot is no longer needed, it should be released
using the <code>DB::ReleaseSnapshot</code> interface. This allows the
implementation to get rid of state that was being maintained just to
support reading as of that snapshot.
<h1>Slice</h1>
<p>
The return values of the <code>it->key()</code> and <code>it->value()</code> calls above
are instances of the <code>rocksdb::Slice</code> type. <code>Slice</code> is a simple
structure that contains a length and a pointer to an external byte
array. Returning a <code>Slice</code> is a cheaper alternative to returning a
<code>std::string</code> since we do not need to copy potentially large keys and
values. In addition, <code>leveldb</code> methods do not return null-terminated
C-style strings since <code>leveldb</code> keys and values are allowed to
contain '\0' bytes.
<p>
C++ strings and null-terminated C-style strings can be easily converted
to a Slice:
<p>
<pre>
  rocksdb::Slice s1 = "hello";

  std::string str("world");
  rocksdb::Slice s2 = str;
</pre>
A Slice can be easily converted back to a C++ string:
<pre>
  std::string str = s1.ToString();
  assert(str == std::string("hello"));
</pre>
Be careful when using Slices since it is up to the caller to ensure that
the external byte array into which the Slice points remains live while
the Slice is in use. For example, the following is buggy:
<p>
<pre>
  rocksdb::Slice slice;
  if (...) {
    std::string str = ...;
    slice = str;
  }
  Use(slice);
</pre>
When the <code>if</code> statement goes out of scope, <code>str</code> will be destroyed and the
backing storage for <code>slice</code> will disappear.
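The safe version of this pattern copies the bytes out of the slice while the backing string is still alive. The <code>View</code> struct below is a simplified stand-in for <code>rocksdb::Slice</code> (just a pointer and a length), written only to illustrate the lifetime rule:

```cpp
#include <cstddef>
#include <string>

// Minimal stand-in for a Slice: a non-owning pointer plus a length.
// It does NOT keep the underlying bytes alive.
struct View {
  const char* data = nullptr;
  std::size_t size = 0;
  View() = default;
  View(const std::string& s) : data(s.data()), size(s.size()) {}
  std::string ToString() const { return std::string(data, size); }
};

std::string SafeUse() {
  std::string copy;
  {
    std::string str = "short-lived";
    View v(str);          // v points into str's buffer
    copy = v.ToString();  // copy the bytes out while str is still alive
  }                       // str is destroyed here; v would now dangle
  return copy;            // the copy remains valid after str is gone
}
```

The buggy version in the document returns the dangling view itself; copying with <code>ToString()</code> (or keeping the backing string in an enclosing scope) avoids the problem.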
<p>
<h1>Comparators</h1>
<p>
The preceding examples used the default ordering function for keys,
which orders bytes lexicographically. You can however supply a custom
comparator when opening a database. For example, suppose each
database key consists of two numbers and we should sort by the first
number, breaking ties by the second number. First, define a proper
subclass of <code>rocksdb::Comparator</code> that expresses these rules:
<p>
<pre>
  class TwoPartComparator : public rocksdb::Comparator {
   public:
    // Three-way comparison function:
    //   if a &lt; b: negative result
    //   if a &gt; b: positive result
    //   else: zero result
    int Compare(const rocksdb::Slice&amp; a, const rocksdb::Slice&amp; b) const {
      int a1, a2, b1, b2;
      ParseKey(a, &amp;a1, &amp;a2);
      ParseKey(b, &amp;b1, &amp;b2);
      if (a1 &lt; b1) return -1;
      if (a1 &gt; b1) return +1;
      if (a2 &lt; b2) return -1;
      if (a2 &gt; b2) return +1;
      return 0;
    }

    // Ignore the following methods for now:
    const char* Name() const { return "TwoPartComparator"; }
    void FindShortestSeparator(std::string*, const rocksdb::Slice&amp;) const { }
    void FindShortSuccessor(std::string*) const { }
  };
</pre>
Now create a database using this custom comparator:
<p>
<pre>
  TwoPartComparator cmp;
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  options.comparator = &amp;cmp;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &amp;db);
  ...
</pre>
<h2>Backwards compatibility</h2>
<p>
The result of the comparator's <code>Name</code> method is attached to the
database when it is created, and is checked on every subsequent
database open. If the name changes, the <code>rocksdb::DB::Open</code> call will
fail. Therefore, change the name if and only if the new key format
and comparison function are incompatible with existing databases, and
it is ok to discard the contents of all existing databases.
<p>
You can however still gradually evolve your key format over time with
a little bit of pre-planning. For example, you could store a version
number at the end of each key (one byte should suffice for most uses).
When you wish to switch to a new key format (e.g., adding an optional
third part to the keys processed by <code>TwoPartComparator</code>),
(a) keep the same comparator name, (b) increment the version number
for new keys, and (c) change the comparator function so it uses the
version numbers found in the keys to decide how to interpret them.
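As a sketch, the version-byte idea from steps (a)-(c) might look like this. <code>MakeKeyV1</code>, <code>KeyVersion</code>, and <code>KeyPayload</code> are hypothetical helpers invented for illustration, not leveldb functions:

```cpp
#include <string>

// Sketch: the last byte of every key is a format version number.
// Version 1 keys are "payload" + '\x01'; a later version 2 could add
// an extra key part and end in '\x02', and the comparator would branch
// on KeyVersion() to decode each key correctly.
std::string MakeKeyV1(const std::string& payload) {
  return payload + '\x01';
}

int KeyVersion(const std::string& key) {
  return key.empty() ? 0 : static_cast<unsigned char>(key.back());
}

std::string KeyPayload(const std::string& key) {
  return key.empty() ? key : key.substr(0, key.size() - 1);
}
```

Because old and new keys carry their own version byte, one comparator (under one unchanged <code>Name</code>) can order a database containing both formats.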
<p>
<h1>Performance</h1>
<p>
Performance can be tuned by changing the default values of the
types defined in <code>include/leveldb/options.h</code>.
<p>
<h2>Block size</h2>
<p>
<code>leveldb</code> groups adjacent keys together into the same block and such a
block is the unit of transfer to and from persistent storage. The
default block size is approximately 4096 uncompressed bytes.
Applications that mostly do bulk scans over the contents of the
database may wish to increase this size. Applications that do a lot
of point reads of small values may wish to switch to a smaller block
size if performance measurements indicate an improvement. There isn't
much benefit in using blocks smaller than one kilobyte, or larger than
a few megabytes. Also note that compression will be more effective
with larger block sizes.
<p>
<h2>Compression</h2>
<p>
Each block is individually compressed before being written to
persistent storage. Compression is on by default since the default
compression method is very fast, and is automatically disabled for
uncompressible data. In rare cases, applications may want to disable
compression entirely, but should only do so if benchmarks show a
performance improvement:
<p>
<pre>
  rocksdb::Options options;
  options.compression = rocksdb::kNoCompression;
  ... rocksdb::DB::Open(options, name, ...) ....
</pre>
<h2>Cache</h2>
<p>
The contents of the database are stored in a set of files in the
filesystem and each file stores a sequence of compressed blocks. If
<code>options.cache</code> is non-NULL, it is used to cache frequently used
uncompressed block contents.
<p>
<pre>
  #include "leveldb/cache.h"

  rocksdb::Options options;
  options.cache = rocksdb::NewLRUCache(100 * 1048576); // 100MB cache
  rocksdb::DB* db;
  rocksdb::DB::Open(options, name, &amp;db);
  ... use the db ...
  delete db;
  delete options.cache;
</pre>
Note that the cache holds uncompressed data, and therefore it should
be sized according to application level data sizes, without any
reduction from compression. (Caching of compressed blocks is left to
the operating system buffer cache, or any custom <code>Env</code>
implementation provided by the client.)
<p>
When performing a bulk read, the application may wish to disable
caching so that the data processed by the bulk read does not end up
displacing most of the cached contents. A per-iterator option can be
used to achieve this:
<p>
<pre>
  rocksdb::ReadOptions options;
  options.fill_cache = false;
  rocksdb::Iterator* it = db->NewIterator(options);
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    ...
  }
</pre>
<h2>Key Layout</h2>
<p>
Note that the unit of disk transfer and caching is a block. Adjacent
keys (according to the database sort order) will usually be placed in
the same block. Therefore the application can improve its performance
by placing keys that are accessed together near each other and placing
infrequently used keys in a separate region of the key space.
<p>
For example, suppose we are implementing a simple file system on top
of <code>leveldb</code>. The types of entries we might wish to store are:
<p>
<pre>
  filename -> permission-bits, length, list of file_block_ids
  file_block_id -> data
</pre>
We might want to prefix <code>filename</code> keys with one letter (say '/') and the
<code>file_block_id</code> keys with a different letter (say '0') so that scans
over just the metadata do not force us to fetch and cache bulky file
contents.
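The prefixing idea can be sketched directly; <code>MetadataKey</code> and <code>DataKey</code> are hypothetical helpers for this example. Because <code>'/'</code> (0x2F) sorts just before <code>'0'</code> (0x30) in byte order, every metadata key lands in a contiguous range ahead of every data key:

```cpp
#include <string>

// Sketch of the prefixing scheme described above: metadata keys start
// with '/' and block-data keys with '0', so a scan over the metadata
// range never touches (or caches) bulky file contents.
std::string MetadataKey(const std::string& filename) {
  return "/" + filename;
}

std::string DataKey(const std::string& file_block_id) {
  return "0" + file_block_id;
}
```

A metadata scan is then simply an iterator over the key range <code>["/", "0")</code> under the default byte-wise comparator.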
<p>
<h2>Filters</h2>
<p>
Because of the way <code>leveldb</code> data is organized on disk,
a single <code>Get()</code> call may involve multiple reads from disk.
The optional <code>FilterPolicy</code> mechanism can be used to reduce
the number of disk reads substantially.
<pre>
  rocksdb::Options options;
  options.filter_policy = NewBloomFilter(10);
  rocksdb::DB* db;
  rocksdb::DB::Open(options, "/tmp/testdb", &amp;db);
  ... use the database ...
  delete db;
  delete options.filter_policy;
</pre>
The preceding code associates a
<a href="http://en.wikipedia.org/wiki/Bloom_filter">Bloom filter</a>
based filtering policy with the database. Bloom filter based
filtering relies on keeping some number of bits of data in memory per
key (in this case 10 bits per key since that is the argument we passed
to <code>NewBloomFilter</code>). This filter will reduce the number of unnecessary
disk reads needed for <code>Get()</code> calls by a factor of
approximately 100. Increasing the bits per key will lead to a
larger reduction at the cost of more memory usage. We recommend that
applications whose working set does not fit in memory and that do a
lot of random reads set a filter policy.
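The factor-of-100 figure lines up with standard Bloom filter math: with <code>b</code> bits per key and a near-optimal number of hash functions, the false-positive probability is approximately 0.6185<sup>b</sup>, which is just under 1% at <code>b = 10</code>. This is the textbook approximation, not a statement about leveldb's exact filter implementation:

```cpp
#include <cmath>

// Standard Bloom filter approximation: with b bits per key and the
// optimal number of hash functions (about b * ln 2), the probability
// of a false positive is roughly (1/2)^(b * ln 2) == 0.6185^b.
double BloomFalsePositiveRate(double bits_per_key) {
  return std::pow(0.6185, bits_per_key);
}
```

At 10 bits per key this evaluates to about 0.008, i.e. only ~1 in 100 filter probes for an absent key falls through to a disk read; doubling the bits per key roughly squares the rate.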
<p>
If you are using a custom comparator, you should ensure that the filter
policy you are using is compatible with your comparator. For example,
consider a comparator that ignores trailing spaces when comparing keys.
<code>NewBloomFilter</code> must not be used with such a comparator.
Instead, the application should provide a custom filter policy that
also ignores trailing spaces. For example:
<pre>
  class CustomFilterPolicy : public rocksdb::FilterPolicy {
   private:
    FilterPolicy* builtin_policy_;
   public:
    CustomFilterPolicy() : builtin_policy_(NewBloomFilter(10)) { }
    ~CustomFilterPolicy() { delete builtin_policy_; }

    const char* Name() const { return "IgnoreTrailingSpacesFilter"; }

    void CreateFilter(const Slice* keys, int n, std::string* dst) const {
      // Use builtin bloom filter code after removing trailing spaces
      std::vector&lt;Slice&gt; trimmed(n);
      for (int i = 0; i &lt; n; i++) {
        trimmed[i] = RemoveTrailingSpaces(keys[i]);
      }
      builtin_policy_->CreateFilter(&amp;trimmed[0], n, dst);
    }

    bool KeyMayMatch(const Slice&amp; key, const Slice&amp; filter) const {
      // Use builtin bloom filter code after removing trailing spaces
      return builtin_policy_->KeyMayMatch(RemoveTrailingSpaces(key), filter);
    }
  };
</pre>
<p>
Advanced applications may provide a filter policy that does not use
a bloom filter but uses some other mechanism for summarizing a set
of keys. See <code>leveldb/filter_policy.h</code> for details.
<p>
<h1>Checksums</h1>
<p>
<code>leveldb</code> associates checksums with all data it stores in the file system.
There are two separate controls provided over how aggressively these
checksums are verified:
<p>
<ul>
<li> <code>ReadOptions::verify_checksums</code> may be set to true to force
checksum verification of all data that is read from the file system on
behalf of a particular read. By default, no such verification is
done.
<p>
<li> <code>Options::paranoid_checks</code> may be set to true before opening a
database to make the database implementation raise an error as soon as
it detects an internal corruption. Depending on which portion of the
database has been corrupted, the error may be raised when the database
is opened, or later by another database operation. By default,
paranoid checking is off so that the database can be used even if
parts of its persistent storage have been corrupted.
<p>
If a database is corrupted (perhaps it cannot be opened when
paranoid checking is turned on), the <code>rocksdb::RepairDB</code> function
may be used to recover as much of the data as possible.
<p>
</ul>
<h1>Approximate Sizes</h1>
<p>
The <code>GetApproximateSizes</code> method can be used to get the approximate
number of bytes of file system space used by one or more key ranges.
<p>
<pre>
  rocksdb::Range ranges[2];
  ranges[0] = rocksdb::Range("a", "c");
  ranges[1] = rocksdb::Range("x", "z");
  uint64_t sizes[2];
  rocksdb::Status s = db->GetApproximateSizes(ranges, 2, sizes);
</pre>
The preceding call will set <code>sizes[0]</code> to the approximate number of
bytes of file system space used by the key range <code>[a..c)</code> and
<code>sizes[1]</code> to the approximate number of bytes used by the key range
<code>[x..z)</code>.
<p>
<h1>Environment</h1>
<p>
All file operations (and other operating system calls) issued by the
<code>leveldb</code> implementation are routed through a <code>rocksdb::Env</code> object.
Sophisticated clients may wish to provide their own <code>Env</code>
implementation to get better control. For example, an application may
introduce artificial delays in the file IO paths to limit the impact
of <code>leveldb</code> on other activities in the system.
<p>
<pre>
  class SlowEnv : public rocksdb::Env {
    .. implementation of the Env interface ...
  };

  SlowEnv env;
  rocksdb::Options options;
  options.env = &amp;env;
  Status s = rocksdb::DB::Open(options, ...);
</pre>
<h1>Porting</h1>
<p>
<code>leveldb</code> may be ported to a new platform by providing platform
specific implementations of the types/methods/functions exported by
<code>leveldb/port/port.h</code>. See <code>leveldb/port/port_example.h</code> for more
details.
<p>
In addition, the new platform may need a new default <code>rocksdb::Env</code>
implementation. See <code>leveldb/util/env_posix.h</code> for an example.
<h1>Other Information</h1>
<p>
Details about the <code>leveldb</code> implementation may be found in
the following documents:
<ul>
<li> <a href="impl.html">Implementation notes</a>
<li> <a href="table_format.txt">Format of an immutable Table file</a>
<li> <a href="log_format.txt">Format of a log file</a>
</ul>

</body>
</html>