Commit Graph

5493 Commits

Aaron Gao
21e8daced5 fix assertion failure in Prev()
Summary:
fix assertion failure in db_stress.
It happens because the prefix seek key is larger than the merge iterator key when they have the same user key

Test Plan: ./db_stress --max_background_compactions=1 --max_write_buffer_number=3 --sync=0 --reopen=20 --write_buffer_size=33554432 --delpercent=5 --log2_keys_per_lock=10 --block_size=16384 --allow_concurrent_memtable_write=0 --test_batches_snapshots=0 --max_bytes_for_level_base=67108864 --progress_reports=0 --mmap_read=0 --writepercent=35 --disable_data_sync=0 --readpercent=50 --subcompactions=4 --ops_per_thread=20000000 --memtablerep=skip_list --prefix_size=0 --target_file_size_multiplier=1 --column_families=1 --threads=32 --disable_wal=0 --open_files=500000 --destroy_db_initially=0 --target_file_size_base=16777216 --nooverwritepercent=1 --iterpercent=10 --max_key=100000000 --prefixpercent=0 --use_clock_cache=false --kill_random_test=888887 --cache_size=1048576 --verify_checksum=1

Reviewers: sdong, andrewkr, yiwu, yhchiang

Reviewed By: yhchiang

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D65025
2016-10-13 17:36:48 -07:00
Dmitri Smirnov
b9311aa65c Implement WinRandomRW file and improve code reuse (#1388) 2016-10-13 16:36:34 -07:00
Siying Dong
a249a0b75b check_format_compatible.sh to use some branch which allows to run with GCC 4.8 (#1393)
Summary:
Some older tags don't build with GCC 4.8 under the FB internal settings. Fixed them and created branches. Changed the format-compatibility script accordingly.

Also add more releases to check format compatibility.
2016-10-13 16:15:55 -07:00
Yueh-Hsuan Chiang
040328a30d Remove an assertion for single-delete in MergeHelper::MergeUntil
Summary:
Previously we had an assertion that triggers when we issue Merges
after a single delete. However, merges after a single delete are
unrelated to that single delete, so this behavior should be allowed.

This will address flakiness in db_stress.
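A minimal sketch of the write sequence that used to trip this assertion, assuming a DB opened with a trivial append-style merge operator; the operator class, keys, and DB path below are illustrative placeholders, not part of this change:

```
#include <cassert>
#include <memory>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/merge_operator.h"
#include "rocksdb/options.h"

// Trivial associative "append" operator, just to make Merge() legal.
class AppendOperator : public rocksdb::AssociativeMergeOperator {
 public:
  bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing,
             const rocksdb::Slice& value, std::string* new_value,
             rocksdb::Logger* /*logger*/) const override {
    new_value->assign(existing ? existing->ToString() : "");
    new_value->append(value.data(), value.size());
    return true;
  }
  const char* Name() const override { return "AppendOperator"; }
};

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.merge_operator = std::make_shared<AppendOperator>();

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/single_delete_merge_demo", &db);
  assert(s.ok());

  rocksdb::WriteOptions wo;
  s = db->Put(wo, "k", "v1");
  assert(s.ok());
  s = db->SingleDelete(wo, "k");
  assert(s.ok());
  // Merging the same key after a SingleDelete is unrelated to that
  // SingleDelete and is now allowed (it used to trip an assertion in
  // MergeHelper::MergeUntil during merging/compaction).
  s = db->Merge(wo, "k", "v2");
  assert(s.ok());

  delete db;
  return 0;
}
```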

Test Plan: db_stress

Reviewers: IslamAbdelRahman, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D64923
2016-10-13 14:26:57 -07:00
Yueh-Hsuan Chiang
8cbe3e10ca Relax the acceptable bias of the RateLimiterTest::Rate test to 25%
Summary:
In the current implementation of RateLimiter, the difference
between the configured rate and the actual rate might be more
than 20%, while our test only allows a 15% difference. This diff
relaxes the acceptable bias of the RateLimiterTest::Rate test to
25% to make the test less flaky.
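For reference, a small self-contained sketch of the kind of tolerance check involved; the measured numbers are made up for illustration, and only the 25% threshold comes from this diff:

```
#include <cassert>
#include <cmath>
#include <cstdio>

int main() {
  // Hypothetical numbers: configured rate vs. rate actually observed
  // by the test over its measurement window.
  const double target_bytes_per_sec = 10 * 1024 * 1024;   // 10 MB/s configured
  const double actual_bytes_per_sec = 8.2 * 1024 * 1024;  // ~18% below target

  const double bias = std::fabs(actual_bytes_per_sec - target_bytes_per_sec) /
                      target_bytes_per_sec;

  // The old 15% threshold would flag this run as a failure; the relaxed
  // 25% threshold accepts it.
  std::printf("bias = %.1f%%\n", bias * 100);
  assert(bias <= 0.25);
  return 0;
}
```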

Test Plan: rate_limiter_test

Reviewers: IslamAbdelRahman, andrewkr, yiwu, lightmark, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D64941
2016-10-13 14:26:12 -07:00
Islam AbdelRahman
f26a139d89 Log successful AddFile
Summary: Log successful AddFile

Test Plan: visually check LOG file

Reviewers: yiwu, andrewkr, lightmark, sdong

Reviewed By: sdong

Subscribers: andrewkr, jkedgar, dhruba

Differential Revision: https://reviews.facebook.net/D65019
2016-10-13 11:56:27 -07:00
Islam AbdelRahman
5691a1d8a4 Fix compaction conflict with running compaction
Summary:
Issue scenario:
(1) We have 3 files in L1 and we issue a compaction that will compact them into 1 file in L2
(2) While compaction (1) is running, we flush a file into L0 and trigger another compaction that decides to move this file to L1 and then move it again to L2 (this file doesn't overlap with any other files)
(3) Compaction (1) finishes and installs the file it generated in L2, but this file overlaps with the file we generated in (2), so we break LSM consistency

Looks like this issue can be triggered by using non-exclusive manual compaction or AddFile()

Test Plan: unit tests

Reviewers: sdong

Reviewed By: sdong

Subscribers: hermanlee4, jkedgar, andrewkr, dhruba, yoshinorim

Differential Revision: https://reviews.facebook.net/D64947
2016-10-13 10:49:06 -07:00
Andrew Kryczka
017de666c7 fixup commit
Summary: I accidentally left out these changes from my commit of D64053 due to
messing up the merge conflict resolution.

Test Plan: ./db_wal_test

Reviewers:

Subscribers:

Tasks:

Blame Revision: D64053
2016-10-13 08:48:40 -07:00
Andrew Kryczka
1b7af5fb1a Redo handling of recycled logs in full purge
Summary:
This reverts commit 9e4aa798c3,
which doesn't handle all cases (see inline comment).

I reimplemented the logic as suggested in the initial PR: https://github.com/facebook/rocksdb/pull/1313.

This approach has two benefits:

- All the parsing/filtering of full_scan_candidate_files is kept together in PurgeObsoleteFiles.
- We only need to check whether a log file is recycled in one place, where we've already determined it's a log file.

Test Plan:
new unit test; verified it fails before the original fix and still passes now.

Reviewers: IslamAbdelRahman, yiwu, sdong

Reviewed By: yiwu, sdong

Subscribers: leveldb, dhruba, andrewkr

Differential Revision: https://reviews.facebook.net/D64053
2016-10-12 23:13:09 -07:00
Joel Marcey
27bfe327b2 Editorial change to README.md 2016-10-12 20:24:50 -07:00
Joel Marcey
89cc404dea A bit of doc restructuring 2016-10-12 20:23:00 -07:00
Islam AbdelRahman
9e7fda8299 Fix arcanist
Summary: Set no_proxy to fix arcanist

Test Plan: will check if tests are triggered

Reviewers: arahut, yiwu, lightmark, andrewkr, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D65001
2016-10-12 20:11:30 -07:00
Adam Retter
2e4b5cab0e Add missing RateLimiter class to the Windows build (#1382) 2016-10-12 15:00:37 -07:00
Adam Retter
ce4963fdfc [doc] Document that Visual Studio 2015+ is now required for Windows builds (#1389)
Closes https://github.com/facebook/rocksdb/issues/1377
2016-10-12 13:40:20 -07:00
Dmitri Smirnov
e489270980 Fix scoped arena iterator (#1387) 2016-10-12 11:16:16 -07:00
Peter (Stig) Edwards
f8d8cf53fe Fix log_write_bench -bytes_per_sync option. (#1375)
Hello and thanks for RocksDB,
 
When log_write_bench is run with the -bytes_per_sync option, the option does not influence any *sync* behaviour.
 
> strace -e trace=write,sync_file_range ./log_write_bench -record_interval 0 -record_size 1048576 -num_records 11 -bytes_per_sync 2097152 2>&1 | egrep '^(sync|write.*XXXX)'
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
 
I suspect that this is because the bytes_per_sync option now needs to be applied through a `WritableFileWriter` and not a `WritableFile`.
 
With the diff below applied, it changes to:
 
> strace -e trace=write,sync_file_range ./log_write_bench -record_interval 0 -record_size 1048576 -num_records 11 -bytes_per_sync 2097152 2>&1 | egrep '^(sync|write.*XXXX)'
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
sync_file_range(0x3, 0, 0x200000, 0x2)  = 0
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
sync_file_range(0x3, 0x200000, 0x200000, 0x2) = 0
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
sync_file_range(0x3, 0x400000, 0x200000, 0x2) = 0
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
sync_file_range(0x3, 0x600000, 0x200000, 0x2) = 0
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 1048576) = 1048576
sync_file_range(0x3, 0x800000, 0x200000, 0x2) = 0
 
( Note that the first 1MB is not synced as mentioned in util/file_reader_writer.cc::WritableFileWriter::Flush() )
 
This diff also includes the fix from https://github.com/facebook/rocksdb/pull/1373
 
> diff -du util/log_write_bench.cc.orig util/log_write_bench.cc
--- util/log_write_bench.cc.orig        2016-10-04 12:06:29.115122580 -0400
+++ util/log_write_bench.cc     2016-10-05 07:24:09.677037576 -0400
@@ -14,6 +14,7 @@
 #include <gflags/gflags.h>

 #include "rocksdb/env.h"
+#include "util/file_reader_writer.h"
 #include "util/histogram.h"
 #include "util/testharness.h"
 #include "util/testutil.h"
@@ -38,19 +39,21 @@
   env_options.bytes_per_sync = FLAGS_bytes_per_sync;
   unique_ptr<WritableFile> file;
   env->NewWritableFile(file_name, &file, env_options);
+  unique_ptr<WritableFileWriter> writer;
+  writer.reset(new WritableFileWriter(std::move(file), env_options));

   std::string record;
-  record.assign('X', FLAGS_record_size);
+  record.assign(FLAGS_record_size, 'X');

   HistogramImpl hist;

   uint64_t start_time = env->NowMicros();
   for (int i = 0; i < FLAGS_num_records; i++) {
     uint64_t start_nanos = env->NowNanos();
-    file->Append(record);
-    file->Flush();
+    writer->Append(record);
+    writer->Flush();
     if (FLAGS_enable_sync) {
-      file->Sync();
+      writer->Sync(false);
     }
     hist.Add(env->NowNanos() - start_nanos);
2016-10-11 16:45:51 -07:00
Reid Horuff
02b3e3985c Make txn->GetState() const
Summary: makes Transaction::GetState() a const function.

Test Plan: compiles.

Reviewers: mung

Reviewed By: mung

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D64929
2016-10-11 15:48:50 -07:00
Aaron Gao
447f17127c new Prev() prefix support using SeekForPrev()
Summary:
1) The previous solution for Prev() prefix support was not clean.
Since I added the SeekForPrev() API, Prev() can now be symmetric to Next(),
and we no longer need SeekToLast() to be called in Prev().

Also, Next() will Seek(prefix_seek_key_) to solve the possible inconsistency between db_iter and merge_iter when
there is a merge_operator. And prefix_seek_key_ is only refreshed when the direction changes to forward.

2) This diff also fixes the bug in Iterator::SeekToLast() with iterate_upper_bound_ and a prefix extractor.

Test cases are added for the above two cases.

There are still some tests for SeekToLast() in Prev(); I will clean them up later.
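A minimal sketch of how SeekForPrev() is used from the public iterator API (keys and DB path are illustrative): it positions the iterator at the last key less than or equal to the target, which is what lets Prev() mirror Next():

```
#include <cassert>
#include <iostream>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/seek_for_prev_demo", &db);
  assert(s.ok());

  rocksdb::WriteOptions wo;
  db->Put(wo, "a", "1");
  db->Put(wo, "c", "3");
  db->Put(wo, "e", "5");

  rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
  // Seek() lands on the first key >= "d" (here: "e"), while
  // SeekForPrev() lands on the last key <= "d" (here: "c").
  it->SeekForPrev("d");
  assert(it->Valid() && it->key() == "c");
  std::cout << it->key().ToString() << std::endl;

  delete it;
  delete db;
  return 0;
}
```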

Test Plan: make all check

Reviewers: IslamAbdelRahman, andrewkr, yiwu, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D63933
2016-10-11 13:54:26 -07:00
Yi Wu
991b585ee0 More block cache tickers
Summary: Adding several missing block cache tickers.

Test Plan:
  make all check

Reviewers: IslamAbdelRahman, yhchiang, lightmark

Reviewed By: lightmark

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D64881
2016-10-11 11:59:05 -07:00
Yi Wu
d6ae6dec69 Add Statistics::getAndResetTickerCount().
Summary: A convenience method to atomically get and reset a ticker count. I want to use it in a thin wrapper around the statistics object to export ticker counts to ODS for LogDevice (since they don't even use fb303).
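A hedged sketch of how a periodic exporter might use the new call; the ticker chosen and the polling setup are illustrative only:

```
#include <cassert>
#include <cstdio>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/statistics.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.statistics = rocksdb::CreateDBStatistics();

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/ticker_demo", &db);
  assert(s.ok());

  db->Put(rocksdb::WriteOptions(), "k", "v");
  std::string value;
  db->Get(rocksdb::ReadOptions(), "k", &value);

  // In a periodic exporter, read-and-reset yields per-interval deltas
  // without racing against concurrent ticker updates.
  uint64_t misses =
      options.statistics->getAndResetTickerCount(rocksdb::BLOCK_CACHE_MISS);
  std::printf("block cache misses since last poll: %llu\n",
              static_cast<unsigned long long>(misses));

  delete db;
  return 0;
}
```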

Test Plan:
test in LogDevice shadow cluster.
https://fburl.com/461868822

Reviewers: andrewkr, yhchiang, IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D64869
2016-10-11 10:54:11 -07:00
Bassam Tabbara
aea3ce4c83 Avoid string CONCAT which is not supported in cmake 2.6 (#1383)
Signed-off-by: Bassam Tabbara <bassam.tabbara@quantum.com>
2016-10-10 17:32:04 -07:00
Islam AbdelRahman
2ad68b971a Support running consistency checks in release mode
Summary:
We always run consistency checks when compiling in debug mode.
Allow users to set Options::force_consistency_checks to true so that such checks also run when compiling in release mode.
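A minimal sketch of opting in from application code (the DB path is a placeholder):

```
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Run the LSM consistency checks that are normally debug-only,
  // even when RocksDB was built in release mode.
  options.force_consistency_checks = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/consistency_demo", &db);
  assert(s.ok());
  delete db;
  return 0;
}
```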

Test Plan:
make check -j64
make release

Reviewers: lightmark, sdong, yiwu

Reviewed By: yiwu

Subscribers: hermanlee4, andrewkr, yoshinorim, jkedgar, dhruba

Differential Revision: https://reviews.facebook.net/D64701
2016-10-07 17:21:45 -07:00
Islam AbdelRahman
67501cfc9a Fix -ve std::string::resize
Summary:
I saw this exception thrown because sometimes we may call resize() with a negative value
when the max_bytes_for_level_multiplier_additional vector is empty.
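For context, a standalone illustration of the failure mode: a negative int converts to a huge size_t when passed to resize(), which exceeds max_size() and throws std::length_error. The arithmetic below mimics, but is not copied from, the RocksDB code:

```
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

int main() {
  std::vector<int> multiplier_additional;  // imagine the option list is empty
  // Something like size() - 1 goes negative when the vector is empty.
  int len = static_cast<int>(multiplier_additional.size()) - 1;  // -1

  std::string result;
  try {
    result.resize(len);  // -1 wraps to SIZE_MAX after conversion to size_t
  } catch (const std::length_error& e) {
    std::cout << "length_error: " << e.what() << std::endl;
  }
  return 0;
}
```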

Test Plan: run the tests

Reviewers: yiwu

Reviewed By: yiwu

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D64791
2016-10-07 17:16:13 -07:00
Joel Marcey
04b02dd121 Testing asset links after config change 2016-10-07 16:28:44 -07:00
Reid Horuff
8c55bb87c8 Make Lock Info test multiple column families
Summary: Modifies the lock info export test to cover multiple column families, after I hit a bug while developing the MyRocks front-end for this.

Test Plan: the modified test itself.

Reviewers: mung

Reviewed By: mung

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D64725
2016-10-07 15:04:05 -07:00
Islam AbdelRahman
d062328977 Revert "Support SST files with Global sequence numbers"
This reverts commit ab01da5437.
2016-10-07 14:05:12 -07:00
Adam Retter
5cd28833a2 [RocksJava] Adjusted RateLimiter to 3.10.0 (#1368)
Summary:
- Deprecated RateLimiterConfig and GenericRateLimiterConfig
- Introduced RateLimiter

It is now possible to use all related C++ methods also in RocksJava.
A notable method is setBytesPerSecond, which can change the allowed
number of bytes per second at runtime.

Test Plan:
make rocksdbjava
make jtest

Reviewers: adamretter, yhchiang, ankgup87

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D35715
2016-10-07 12:32:21 -07:00
Reid Horuff
37737c3a6b Expose Transaction State Publicly
Summary:
This exposes a transaction's state through a public API rather than through a public member variable. I also do some name refactoring.
ExecutionStatus => TransactionState
exec_status_ => trx_state_
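A minimal sketch of reading the state through the new accessor; the STARTED enum value shown is an assumption, and the exact TransactionState names may differ between versions:

```
#include <cassert>

#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::TransactionDBOptions txn_db_options;

  rocksdb::TransactionDB* txn_db = nullptr;
  rocksdb::Status s = rocksdb::TransactionDB::Open(
      options, txn_db_options, "/tmp/txn_state_demo", &txn_db);
  assert(s.ok());

  rocksdb::Transaction* txn = txn_db->BeginTransaction(rocksdb::WriteOptions());
  // GetState() replaces poking at the formerly public exec_status_ member.
  // STARTED is assumed here; check the TransactionState enum in your version.
  assert(txn->GetState() == rocksdb::Transaction::STARTED);

  txn->Put("k", "v");
  txn->Commit();

  delete txn;
  delete txn_db;
  return 0;
}
```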

Test Plan: It compiles and transaction_test passes.

Reviewers: IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: andrewkr, mung, dhruba, sdong

Differential Revision: https://reviews.facebook.net/D64689
2016-10-07 11:58:53 -07:00
Reid Horuff
2c1f95291d Add facility to write only a portion of WriteBatch to WAL
Summary:
When constructing a write batch, a client may now call MarkWalTerminationPoint() on that batch. Batch operations after this call will not be written to the WAL but will still be inserted into the memtable. This facility is used to remove one of the three WriteImpl calls in 2PC transactions. This produces a ~1% perf improvement.

```
RocksDB - unoptimized 2pc, sync_binlog=1, disable_2pc=off
INFO 2016-08-31 14:30:38,814 [main]: REQUEST PHASE COMPLETED. 75000000 requests done in 2619 seconds. Requests/second = 28628

RocksDB - optimized 2pc , sync_binlog=1, disable_2pc=off
INFO 2016-08-31 16:26:59,442 [main]: REQUEST PHASE COMPLETED. 75000000 requests done in 2581 seconds. Requests/second = 29054
```
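A hedged sketch of the facility from the client side (keys and DB path are placeholders): operations added after the termination point are applied to the memtable but skipped in the WAL:

```
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/wal_term_demo", &db);
  assert(s.ok());

  rocksdb::WriteBatch batch;
  batch.Put("durable_key", "written to WAL and memtable");
  // Everything added after this call is applied to the memtable only;
  // it is not logged to the WAL.
  batch.MarkWalTerminationPoint();
  batch.Put("memtable_only_key", "skipped in the WAL");

  s = db->Write(rocksdb::WriteOptions(), &batch);
  assert(s.ok());

  delete db;
  return 0;
}
```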

Test Plan: Two unit tests added.

Reviewers: sdong, yiwu, IslamAbdelRahman

Reviewed By: yiwu

Subscribers: hermanlee4, dhruba, andrewkr

Differential Revision: https://reviews.facebook.net/D64599
2016-10-07 11:32:10 -07:00
Peter (Stig) Edwards
043cb62d63 Fix record_size in log_write_bench, swap args to std::string::assign. (#1373)
Hello and thank you for RocksDB,
 
I noticed when using log_write_bench that writes were always 88 bytes:
 
> strace -e trace=write ./log_write_bench -num_records 2 2>&1 | head -n 2
write(3, "\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371"..., 88) = 88
write(3, "\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371\371"..., 88) = 88

> strace -e trace=write ./log_write_bench -record_size 4096 -num_records 2 2>&1 | head -n 2
write(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 88) = 88
write(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 88) = 88
 
I think this should be:

<<    record.assign('X', FLAGS_record_size);
>>    record.assign(FLAGS_record_size, 'X');

So fill and not buffer. Otherwise I always see writes of size 88 (the decimal value for chr "X").

string& assign (const char* s, size_t n);
buffer - Copies the first n characters from the array of characters pointed by s.

string& assign (size_t n, char c);
fill   - Replaces the current value by n consecutive copies of character c.

perl -le 'print ord "X"'
88
 
With the change:
 
> strace -e trace=write ./log_write_bench -record_size 4096 -num_records 2 2>&1 | head -n 2
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 4096) = 4096
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 4096) = 4096
 
> strace -e trace=write ./log_write_bench -num_records 2 2>&1 | head -n 2
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 249) = 249
write(3, "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"..., 249) = 249

Thanks.

01c27be5fb
https://reviews.facebook.net/D16239
2016-10-06 10:45:31 -07:00
Sage Weil
4985f60fc8 env_mirror: fix a few leaks (#1363)
* env_mirror: fix leak from LockFile

Signed-off-by: Sage Weil <sage@redhat.com>

* env_mirror: instruct EnvMirror whether mirrored Envs should be destroyed

The lifecycle rules for Env are frustrating and undocumented.  Notably,
Env::Default() should *not* be freed, but any Env instances we created
should be.

Explicitly instruct EnvMirror whether to clean up child Env instances.
Default to false so that we do not affect existing callers.

Signed-off-by: Sage Weil <sage@redhat.com>
2016-10-06 10:43:05 -07:00
Igor Mihalik
5aded67ddb update of c.h (#1371)
Added the rocksdb_options_set_memtable_prefix_bloom_size_ratio function, which was implemented in c.cc but not exported via c.h
2016-10-06 10:37:19 -07:00
Joel Marcey
912aec13c3 "Recent Posts" -> "All Posts"
Blog sidebar shows all the posts, not just the most recent ones.
2016-10-05 10:29:11 -07:00
Joel Marcey
7cbb298dbe Make sure that when contributing we call out creating appropriate directories
.... if they do not exist
2016-10-04 15:38:41 -07:00
Joel Marcey
a06ad47112 Add top level doc information to CONTRIBUTING.md 2016-10-04 15:27:28 -07:00
Joel Marcey
3fdd5b9712 A little more generic CONTRIBUTING.md 2016-10-04 15:22:28 -07:00
Joel Marcey
ed4fc31db5 Add link to CONTRIBUTING.md to main docs README.md 2016-10-04 15:21:43 -07:00
Joel Marcey
e4922e181e Forgot to truncate one blog post 2016-10-04 15:20:15 -07:00
Joel Marcey
6d8cd7ede1 Add CONTRIBUTING.md for rocksdb.org contribution guidance 2016-10-04 15:19:00 -07:00
Joel Marcey
bd55e5a1e7 Fix some formatting of compaction blog post 2016-10-04 14:33:07 -07:00
Joel Marcey
0f60358b0b CRLF -> LF mod (including removing trailing whitespace for those files) 2016-10-04 14:31:36 -07:00
Joel Marcey
b90e29c901 Truncate posts on the main /blog/ page 2016-10-04 14:20:26 -07:00
Joel Marcey
0d7acadaf9 Add author fields to blog posts
Now the author associated with the fbid will be shown at the top of the blog post
2016-10-04 14:11:04 -07:00
Joel Marcey
01be44181b Add GitHub link to the landing page header 2016-10-04 13:49:13 -07:00
Islam AbdelRahman
9d6c961383 Fix Mac build 2016-10-03 18:25:10 -07:00
Islam AbdelRahman
ab01da5437 Support SST files with Global sequence numbers
Summary:
- Update SstFileWriter to include a property for a global sequence number in the SST file `rocksdb.external_sst_file.global_seqno`
- Update TableProperties to be aware of the offset of each property in the file
- Update BlockBasedTableReader and Block to be able to honor the sequence number in the `rocksdb.external_sst_file.global_seqno` property and use it to overwrite all sequence numbers in the file

Something worth mentioning is that we don't update the seqno in the index block when doing a binary search. The reason is that SST files with a global seqno are guaranteed to contain each user_key only once, and each key has seqno=0 encoded in it, which makes it greater than any other key with seqno > 0. That means we can keep the current logic for these blocks.
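As loose context, a sketch of the external-SST workflow that a global sequence number enables, written against the later public API names (SstFileWriter::Put, DB::IngestExternalFile); at the time of this commit the equivalent calls were SstFileWriter::Add and DB::AddFile, and the paths are placeholders:

```
#include <cassert>
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/sst_file_writer.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Build an SST file outside the DB; keys must be added in sorted order.
  rocksdb::SstFileWriter writer(rocksdb::EnvOptions(), options);
  rocksdb::Status s = writer.Open("/tmp/external_demo.sst");
  assert(s.ok());
  s = writer.Put("a", "1");
  assert(s.ok());
  s = writer.Put("b", "2");
  assert(s.ok());
  s = writer.Finish();
  assert(s.ok());

  rocksdb::DB* db = nullptr;
  s = rocksdb::DB::Open(options, "/tmp/global_seqno_demo", &db);
  assert(s.ok());

  // With a global seqno recorded in the file's properties, ingestion can
  // assign the file a sequence number at link time instead of requiring
  // that every key in it be older than everything already in the DB.
  std::vector<std::string> files = {"/tmp/external_demo.sst"};
  s = db->IngestExternalFile(files, rocksdb::IngestExternalFileOptions());
  assert(s.ok());

  delete db;
  return 0;
}
```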

Test Plan: unit tests

Reviewers: andrewkr, yhchiang, yiwu, sdong

Reviewed By: sdong

Subscribers: hcz, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D62523
2016-10-03 16:12:39 -07:00
Adam Retter
d346ba2468 Minor fixes around Windows 64 Java Artifacts (#1366) 2016-10-03 11:58:08 -07:00
krad
e91b4d0cf6 Add factory method for creating persistent cache that is accessible from public
Summary:
Currently there is no mechanism to create a persistent cache from the public
headers. This adds a simple factory method to create a simple persistent cache with
default or NVM-optimized settings.

Note: any ideas on how to test this factory are appreciated.

Test Plan: None

Reviewers: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D64527
2016-10-03 10:55:46 -07:00
Manuel Ung
be1f1092c9 Expose transaction id, lock state information and transaction wait information
Summary:
This diff does 3 things:

Expose TransactionID so that we can identify transactions when we retrieve locking and lock wait information. This is exposed as `Transaction::GetID`.

Expose lock state information by locking all stripes in all column families and copying their contents to a data structure. This is exposed as `TransactionDB::GetLockStatusData`.

Adds support for tracking the transaction and the key being waited on, and exposes this as `Transaction::GetWaitingTxn`.
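A hedged sketch of pulling this information from a TransactionDB; the lock-data handling is deliberately minimal since KeyLockInfo's field names may differ between versions, and keys/paths are placeholders:

```
#include <cassert>
#include <cstdio>

#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::TransactionDBOptions txn_db_options;

  rocksdb::TransactionDB* txn_db = nullptr;
  rocksdb::Status s = rocksdb::TransactionDB::Open(
      options, txn_db_options, "/tmp/lock_status_demo", &txn_db);
  assert(s.ok());

  rocksdb::Transaction* txn = txn_db->BeginTransaction(rocksdb::WriteOptions());
  txn->Put("k", "v");  // takes a lock on "k"

  // Each transaction now has a stable identifier.
  std::printf("txn id: %llu\n",
              static_cast<unsigned long long>(txn->GetID()));

  // Snapshot of all currently held locks, keyed by column family id;
  // each entry carries the locked key and the holding transaction(s).
  auto lock_data = txn_db->GetLockStatusData();
  std::printf("locked keys across all column families: %zu\n",
              lock_data.size());

  txn->Rollback();
  delete txn;
  delete txn_db;
  return 0;
}
```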

Test Plan: unit tests

Reviewers: horuff, sdong

Reviewed By: sdong

Subscribers: vasilep, hermanlee4, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D64413
2016-09-30 11:41:21 -07:00
Andrew Kryczka
6009c473c7 Store range tombstones in memtable
Summary:
- Store range tombstones in a separate MemTableRep instantiated with ColumnFamilyOptions::memtable_factory
- MemTable::NewRangeTombstoneIterator() returns a MemTableIterator over the separate MemTableRep
- Part of the read path is not implemented yet (i.e., MemTable::Get())

Test Plan: see unit tests

Reviewers: wanning

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D62217
2016-09-30 09:06:43 -07:00