Commit Graph

2336 Commits

Author SHA1 Message Date
Igor Canadi
536e9973e3 Remove assert in vector rep
Summary: This assert makes Insert O(n^2) instead of O(n) in debug mode. Memtable insert is in the critical path. No need to assert uniqueness of the key here, since we're adding a sequence number to it anyway.
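
A minimal hedged sketch of the idea (not the actual VectorRep code): insert becomes a plain O(1) append with no duplicate check, and ordering is deferred to read time.

```
// Hedged sketch, not the real MemTableRep implementation: every internal key
// already carries a unique sequence number, so no uniqueness assert is needed.
#include <algorithm>
#include <string>
#include <vector>

class VectorRepSketch {
 public:
  void Insert(std::string internal_key) {
    bucket_.push_back(std::move(internal_key));  // O(1) append, no duplicate check
    sorted_ = false;
  }
  // Sorting is deferred until the data actually needs to be iterated.
  void SortIfNeeded() {
    if (!sorted_) {
      std::sort(bucket_.begin(), bucket_.end());
      sorted_ = true;
    }
  }

 private:
  std::vector<std::string> bucket_;
  bool sorted_ = false;
};
```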

Test Plan: none

Reviewers: sdong, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22443
2014-08-27 11:05:41 -07:00
Radheshyam Balasundaram
4142a3e783 Adding a user comparator for comparing Uint64 slices.
Summary:
- New Uint64 comparator
- Modify Reader and Builder to take custom user comparators instead of bytewise comparator
- Modify logic for choosing unused user key in builder
- Modify iterator logic in reader
- test changes
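
A hedged sketch of what such a comparator can look like (the class name and details here are illustrative, not the exact code added in this diff): a rocksdb::Comparator that treats fixed-width 8-byte keys as uint64_t and compares them numerically.

```
// Illustrative sketch of a uint64 user comparator; assumes native-endian
// fixed-width keys and is not the exact implementation from this diff.
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include "rocksdb/comparator.h"
#include "rocksdb/slice.h"

class Uint64ComparatorSketch : public rocksdb::Comparator {
 public:
  const char* Name() const override { return "Uint64ComparatorSketch"; }

  int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
    assert(a.size() == sizeof(uint64_t) && b.size() == sizeof(uint64_t));
    uint64_t va, vb;
    memcpy(&va, a.data(), sizeof(va));  // keys are fixed-width 8-byte values
    memcpy(&vb, b.data(), sizeof(vb));
    if (va < vb) return -1;
    if (va > vb) return 1;
    return 0;
  }

  // Fixed-width integer keys cannot usefully be shortened.
  void FindShortestSeparator(std::string*, const rocksdb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}
};
```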

Test Plan:
cuckoo_table_{builder,reader,db}_test
make check all

Reviewers: ljin, sdong

Reviewed By: ljin

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D22377
2014-08-27 10:39:31 -07:00
Igor Canadi
1913ce27b9 more concurrent flushes in SpatialDB 2014-08-27 08:48:31 -07:00
Igor Canadi
808e809366 Adjust SpatialDB column family options 2014-08-27 07:56:10 -07:00
Igor Canadi
0c39f54dfb Use Vector memtable when bulk loading SpatialDB 2014-08-26 19:23:09 -07:00
Radheshyam Balasundaram
b6fd7811eb Don't do memtable lookup in db_impl_readonly if memtables are empty while opening db.
Summary: In the DBImpl::Recover method, while loading memtables, also check whether the memtables are empty. Use this in DBImplReadonly to decide whether to look up the memtable or not.

Test Plan:
db_test
make check all

Reviewers: sdong, yhchiang, ljin, igor

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22281
2014-08-26 17:19:03 -07:00
Stanislau Hlebik
9dcb75b6d9 Add is-file-deletions-enabled property
Summary:
Add property 'rocksdb.is-file-deletions-enabled', which reflects disable_delete_obsolete_files_
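
Like other DB properties, this one should be readable through DB::GetProperty(); a short hedged usage sketch (the property string follows the commit title, and the encoding of the returned value is not spelled out here):

```
#include <string>
#include "rocksdb/db.h"

std::string IsFileDeletionsEnabledSketch(rocksdb::DB* db) {
  std::string value;
  // Property name taken from this commit's title; the returned string reflects
  // whether obsolete-file deletions are currently enabled.
  db->GetProperty("rocksdb.is-file-deletions-enabled", &value);
  return value;
}
```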

Test Plan: make all check

Reviewers: sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22119
2014-08-26 16:26:29 -07:00
Lei Jin
1755581f19 improve OptimizeForPointLookup()
Summary: also fix HISTORY.md

Test Plan: make all check

Reviewers: sdong, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22437
2014-08-26 14:15:00 -07:00
Igor Canadi
d9c0785812 Fix assertion in PosixRandomAccessFile
Summary:
See https://github.com/facebook/rocksdb/issues/244#issuecomment-53372297
Also see this: https://github.com/facebook/rocksdb/blob/master/util/env_posix.cc#L1075

Test Plan: compiles

Reviewers: yhchiang, ljin, sdong

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22419
2014-08-26 15:28:36 -04:00
Lei Jin
bda6f3363d fix valgrind error in c_test caused by BlockBasedTableOptions
Summary:
It was creating BlockBasedTableOptions object in a loop without calling
destroy()
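
The fix pairs each create with a destroy; a hedged sketch of the corrected pattern using the C API (written here as C++-compilable code to match the other examples):

```
// Hedged sketch of the leak-free pattern: every *_create() in the loop gets a
// matching *_destroy() before the next iteration.
#include "rocksdb/c.h"

void run_iterations(rocksdb_options_t* options) {
  for (int i = 0; i < 100; ++i) {
    rocksdb_block_based_table_options_t* table_options =
        rocksdb_block_based_options_create();
    rocksdb_options_set_block_based_table_factory(options, table_options);
    // ... exercise the options ...
    rocksdb_block_based_options_destroy(table_options);  // this call was missing
  }
}
```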

Test Plan: valgrind ./c_test --leak-check=full --show-reachable=yes

Reviewers: sdong, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22431
2014-08-26 09:57:25 -07:00
Torrie Fischer
0db6b028e7 Update timeout to 50ms instead of 3. 2014-08-26 09:38:45 -07:00
Igor Canadi
ff6ec0eb17 Optimize SpatialDB
Summary:
Two things:
1. Use hash-based index for data column family
2. Use Get() instead of Iterator Seek() when DB is opened read-only
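
A hedged sketch of what point (1) looks like in terms of the public API (SpatialDB wires this up internally; the prefix length and other values here are illustrative):

```
// Hedged sketch: a hash-based index needs a prefix extractor plus the
// kHashSearch index type on the block-based table.
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"
#include "rocksdb/table.h"

rocksdb::ColumnFamilyOptions MakeDataCFOptionsSketch() {
  rocksdb::ColumnFamilyOptions cf_options;
  cf_options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(8));
  rocksdb::BlockBasedTableOptions table_options;
  table_options.index_type = rocksdb::BlockBasedTableOptions::kHashSearch;
  cf_options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  // Point (2) is then a matter of calling db->Get() directly in read-only mode
  // instead of Seek()-ing an iterator.
  return cf_options;
}
```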

Test Plan: added read-only test in unit test

Reviewers: yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22323
2014-08-25 17:50:18 -07:00
Lei Jin
23861857c4 ReadOptions.total_order_seek to allow total order seek for block-based table when hash index is enabled
Summary: as title
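
A short hedged usage sketch: with a hash index (or prefix_extractor) configured, setting this flag lets an iterator scan in full comparator order.

```
// Hedged usage sketch of the new ReadOptions flag, assuming an open DB handle.
#include <memory>
#include "rocksdb/db.h"

void TotalOrderScanSketch(rocksdb::DB* db) {
  rocksdb::ReadOptions read_options;
  read_options.total_order_seek = true;  // bypass the hash/prefix index for this iterator
  std::unique_ptr<rocksdb::Iterator> iter(db->NewIterator(read_options));
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    // keys come back in total order even when a hash index is enabled
  }
}
```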

Test Plan: table_test

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22239
2014-08-25 16:14:30 -07:00
Lei Jin
a98badff16 print table options
Summary: Add a virtual function in table factory that will print table options
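
A hedged sketch of the shape of such a virtual (the method name and format below are assumptions, not confirmed by this message): a table factory hook that returns its options as a human-readable string for the info log.

```
// Hedged sketch only; the exact virtual added by this diff may differ.
#include <string>

class MyTableFactorySketch /* : public rocksdb::TableFactory */ {
 public:
  // Hypothetical printable-options hook: dump each option as "name: value".
  std::string GetPrintableTableOptions() const {
    std::string result;
    result.append("  block_size: ").append(std::to_string(block_size_)).append("\n");
    result.append("  checksum: ").append(checksum_name_).append("\n");
    return result;
  }

 private:
  size_t block_size_ = 4096;
  std::string checksum_name_ = "crc32c";
};
```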

Test Plan: make release

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22149
2014-08-25 14:24:09 -07:00
Lei Jin
66f62e5c78 JNI changes corresponding to BlockBasedTableOptions migration
Summary: as title

Test Plan:
tested on my mac
make rocksdbjava
make jtest

Reviewers: sdong, igor, yhchiang

Reviewed By: yhchiang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21963
2014-08-25 14:22:55 -07:00
Lei Jin
384400128f move block based table related options to BlockBasedTableOptions
Summary:
I will move compression related options in a separate diff since this
diff is already pretty lengthy.
I guess I will also need to change JNI accordingly :(
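
After this change, block-based table settings are grouped on BlockBasedTableOptions and installed through the table factory; a hedged sketch of the resulting pattern (the specific values are illustrative):

```
// Hedged sketch of the post-migration pattern: options that used to live on
// Options are now set on BlockBasedTableOptions and plugged in via the factory.
#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

rocksdb::Options MakeOptionsSketch() {
  rocksdb::Options options;
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_size = 16 * 1024;
  table_options.block_cache = rocksdb::NewLRUCache(64 << 20);
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```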

Test Plan: make all check

Reviewers: yhchiang, igor, sdong

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21915
2014-08-25 14:22:05 -07:00
Igor Canadi
17b54aea53 Merge pull request #243 from andybons/patch-1
Add missing include to use std::unique_ptr
2014-08-24 10:54:04 -04:00
Andrew Bonventre
050869177e Add missing include to use std::unique_ptr
This was causing issues when including this header from another file.
2014-08-23 13:02:21 -04:00
Igor Canadi
42ea795209 Fix concurrency issue in CompactionPicker
Summary:
I am currently working on a project that uses RocksDB. While debugging some perf issues, I came across an interesting compaction concurrency issue. Namely, I had 15 idle threads and a good compaction to do, but CompactionPicker returned "Compaction nothing to do". Here's how Internal stats looked:

    2014/08/22-08:08:04.551982 7fc7fc3f5700 ------- DUMPING STATS -------
    2014/08/22-08:08:04.552000 7fc7fc3f5700
    ** Compaction Stats [default] **
    Level   Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s)  Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt)  Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0     7/5        353   1.0      0.0     0.0      0.0       2.3      2.3    0.0   0.0      0.0      9.4        0         0         0         0        247        46    5.359       8.53          1 8526.25
      L1     2/2         86   1.3      2.6     1.9      0.7       2.6      1.9    2.7   1.3     24.3     24.0       39        19        71        52        109        11    9.938       0.00          0    0.00
      L2    26/0        833   1.3      5.7     1.7      4.0       5.2      1.2    6.3   3.0     15.6     14.2       47       112       147        35        373        44    8.468       0.00          0    0.00
      L3    12/0        505   0.1      0.0     0.0      0.0       0.0      0.0    0.0   0.0      0.0      0.0        0         0         0         0          0         0    0.000       0.00          0    0.00
     Sum    47/7       1778   0.0      8.3     3.6      4.6      10.0      5.4    8.1   4.4     11.6     14.1       86       131       218        87        728       101    7.212       8.53          1 8526.25
     Int     0/0          0   0.0      2.4     0.8      1.6       2.7      1.2   11.5   6.1     12.0     13.6       20        43        63        20        203        23    8.845       0.00          0    0.00
    Flush(GB): accumulative 2.266, interval 0.444
    Stalls(secs): 0.000 level0_slowdown, 0.000 level0_numfiles, 8.526 memtable_compaction, 0.000 leveln_slowdown_soft, 0.000 leveln_slowdown_hard
    Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 1 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

    ** DB Stats **
    Uptime(secs): 336.8 total, 60.4 interval
    Cumulative writes: 61584000 writes, 6480589 batches, 9.5 writes per batch, 1.39 GB user ingest
    Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 GB written
    Interval writes: 11235257 writes, 1175050 batches, 9.6 writes per batch, 259.9 MB user ingest
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 MB written

To see what happened, go here: 47b452cfcf/db/compaction_picker.cc (L430)
* The for loop started with level 1, because it has the worst score.
* PickCompactionBySize on L429 returned nullptr because all files were being compacted
* ExpandWhileOverlapping(c) returned true (because that's what it does when it gets nullptr!?)
* the for loop then broke out, never trying compactions for level 2 :( :(

This bug was present at least since January. I have no idea how we didn't find this sooner.
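
In pseudocode, the fix amounts to treating "nothing to pick at this level" as a reason to try the next level rather than to give up; a hedged sketch with stand-in names (the real CompactionPicker code is more involved):

```
// Hedged sketch of the intended control flow; PickAtLevelSketch and
// ExpandWhileOverlappingSketch stand in for PickCompactionBySize and
// ExpandWhileOverlapping from the summary above.
struct CompactionSketch { int level = 0; };

CompactionSketch* PickAtLevelSketch(int /*level*/) { return nullptr; }
bool ExpandWhileOverlappingSketch(CompactionSketch* /*c*/) { return true; }

CompactionSketch* PickCompactionSketch(int num_levels) {
  for (int level = 0; level < num_levels - 1; ++level) {
    CompactionSketch* c = PickAtLevelSketch(level);
    if (c == nullptr) {
      // All files at this level are already being compacted; with the fix we
      // move on to the next level instead of giving up entirely.
      continue;
    }
    if (!ExpandWhileOverlappingSketch(c)) {
      delete c;
      continue;
    }
    return c;  // a runnable compaction was found at this level
  }
  return nullptr;  // genuinely nothing to do
}
```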

Test Plan:
Unit testing compaction picker is hard. I tested this by running my service and observing L0->L1 and L2->L3 compactions in parallel. However, for the long term, I opened task #4968469. @yhchiang is currently refactoring CompactionPicker, hopefully the new version will be unit-testable ;)

Here's how my compactions look like after the patch:

    2014/08/22-08:50:02.166699 7f3400ffb700 ------- DUMPING STATS -------
    2014/08/22-08:50:02.166722 7f3400ffb700
    ** Compaction Stats [default] **
    Level   Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) RW-Amp W-Amp Rd(MB/s) Wr(MB/s)  Rn(cnt) Rnp1(cnt) Wnp1(cnt) Wnew(cnt)  Comp(sec) Comp(cnt) Avg(sec) Stall(sec) Stall(cnt) Avg(ms)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0     8/5        404   1.5      0.0     0.0      0.0       4.3      4.3    0.0   0.0      0.0      9.6        0         0         0         0        463        88    5.260       0.00          0    0.00
      L1     2/2         60   0.9      4.8     3.9      0.8       4.7      3.9    2.4   1.2     23.9     23.6       80        23       131       108        204        19   10.747       0.00          0    0.00
      L2    23/3        697   1.0     11.6     3.5      8.1      10.9      2.8    6.4   3.1     17.7     16.6       95       242       317        75        669        92    7.268       0.00          0    0.00
      L3    58/14      2207   0.3      6.2     1.6      4.6       5.9      1.3    7.4   3.6     14.6     13.9       43       121       159        38        436        36   12.106       0.00          0    0.00
     Sum    91/24      3368   0.0     22.5     9.1     13.5      25.8     12.4   11.2   6.0     13.0     14.9      218       386       607       221       1772       235    7.538       0.00          0    0.00
     Int     0/0          0   0.0      3.2     0.9      2.3       3.6      1.3   15.3   8.0     12.4     13.7       24        66        89        23        266        27    9.838       0.00          0    0.00
    Flush(GB): accumulative 4.336, interval 0.444
    Stalls(secs): 0.000 level0_slowdown, 0.000 level0_numfiles, 0.000 memtable_compaction, 0.000 leveln_slowdown_soft, 0.000 leveln_slowdown_hard
    Stalls(count): 0 level0_slowdown, 0 level0_numfiles, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

    ** DB Stats **
    Uptime(secs): 577.7 total, 60.1 interval
    Cumulative writes: 116960736 writes, 11966220 batches, 9.8 writes per batch, 2.64 GB user ingest
    Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 GB written
    Interval writes: 11643735 writes, 1206136 batches, 9.7 writes per batch, 269.2 MB user ingest
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, 0.00 MB written

Yay for concurrent L0->L1 and L2->L3 compactions!

Reviewers: sdong, yhchiang, ljin

Reviewed By: yhchiang

Subscribers: yhchiang, leveldb

Differential Revision: https://reviews.facebook.net/D22305
2014-08-22 11:32:40 -07:00
Siying Dong
bb530c0f47 Merge pull request #240 from ShaoYuZhang/master
Fix compilation issue on OSX
2014-08-21 18:16:15 -07:00
Shao Yu Zhang
f76eda74d6 Fix compilation issue on OSX 2014-08-21 18:11:33 -07:00
Radheshyam Balasundaram
08be7f5266 Implement Prepare method in CuckooTableReader
Summary:
- Implement Prepare method
- Rewrite performance tests in cuckoo_table_reader_test to write a new file only if one doesn't already exist.
- Add performance tests for batch lookup along with prefetching.
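
A hedged sketch of the batched-lookup pattern the numbers below measure (Prepare is the method named in this diff; the surrounding reader interface is a simplified stand-in):

```
// Hedged sketch: prefetch the hash buckets for the whole batch first, then do
// the actual lookups, so memory latency overlaps across keys.
#include <vector>
#include "rocksdb/slice.h"

// Minimal stand-in; the real CuckooTableReader API differs.
struct CuckooReaderSketch {
  void Prepare(const rocksdb::Slice& /*key*/) { /* issue prefetches for this key's buckets */ }
  void Get(const rocksdb::Slice& /*key*/)     { /* probe the (now warm) buckets */ }
};

void BatchedLookupSketch(CuckooReaderSketch* reader,
                         const std::vector<rocksdb::Slice>& keys) {
  for (const rocksdb::Slice& key : keys) {
    reader->Prepare(key);  // first pass: prefetch for every key in the batch
  }
  for (const rocksdb::Slice& key : keys) {
    reader->Get(key);      // second pass: the actual lookups
  }
}
```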

Test Plan:
./cuckoo_table_reader_test --enable_perf
Results (we would get better results if we used an int64 comparator instead of a string comparator; TBD in future diffs):
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
Time taken per op is 0.208us (4.8 Mqps) with batch size of 0
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
Time taken per op is 0.182us (5.5 Mqps) with batch size of 10
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
Time taken per op is 0.161us (6.2 Mqps) with batch size of 25
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
Time taken per op is 0.161us (6.2 Mqps) with batch size of 50
With 100000000 items and hash table ratio 0.500000, number of hash functions used: 2.
Time taken per op is 0.163us (6.1 Mqps) with batch size of 100

With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
Time taken per op is 0.252us (4.0 Mqps) with batch size of 0
With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
Time taken per op is 0.192us (5.2 Mqps) with batch size of 10
With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
Time taken per op is 0.195us (5.1 Mqps) with batch size of 25
With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
Time taken per op is 0.191us (5.2 Mqps) with batch size of 50
With 100000000 items and hash table ratio 0.600000, number of hash functions used: 3.
Time taken per op is 0.194us (5.1 Mqps) with batch size of 100

With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
Time taken per op is 0.228us (4.4 Mqps) with batch size of 0
With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
Time taken per op is 0.185us (5.4 Mqps) with batch size of 10
With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
Time taken per op is 0.186us (5.4 Mqps) with batch size of 25
With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
Time taken per op is 0.189us (5.3 Mqps) with batch size of 50
With 100000000 items and hash table ratio 0.750000, number of hash functions used: 3.
Time taken per op is 0.188us (5.3 Mqps) with batch size of 100

With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
Time taken per op is 0.325us (3.1 Mqps) with batch size of 0
With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
Time taken per op is 0.196us (5.1 Mqps) with batch size of 10
With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
Time taken per op is 0.199us (5.0 Mqps) with batch size of 25
With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
Time taken per op is 0.196us (5.1 Mqps) with batch size of 50
With 100000000 items and hash table ratio 0.900000, number of hash functions used: 3.
Time taken per op is 0.209us (4.8 Mqps) with batch size of 100

Reviewers: sdong, yhchiang, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22167
2014-08-20 18:35:35 -07:00
Yueh-Hsuan Chiang
47b452cfcf Fix the error of c_test.c
Summary:
Fix the error of c_test.c

Test Plan:
make c_test
./c_test
2014-08-20 17:05:29 -07:00
Yueh-Hsuan Chiang
562b7a1f28 Add missing implementation of SanitizeDBOptions in simple_table_db_test.cc
Summary:
Add missing implementation of SanitizeDBOptions in simple_table_db_test.cc

Test Plan:
make simple_table_db_test.cc
2014-08-20 16:33:25 -07:00
Yueh-Hsuan Chiang
63a2215c63 Improve Options sanitization and add MmapReadRequired() to TableFactory
Summary:
Currently, PlainTable must use mmap_reads.  When PlainTable is used but
allow_mmap_reads is not set, rocksdb will fail in flush.

This diff improves Options sanitization and adds MmapReadRequired() to
TableFactory.
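
A hedged sketch of what the sanitization check amounts to (MmapReadRequired is the hook named in this summary; the exact error status and surrounding code are illustrative and may have changed since):

```
// Hedged sketch of the sanitization rule described above.
#include "rocksdb/options.h"
#include "rocksdb/status.h"
#include "rocksdb/table.h"

rocksdb::Status SanitizeTableFactorySketch(const rocksdb::Options& options) {
  if (options.table_factory->MmapReadRequired() && !options.allow_mmap_reads) {
    // PlainTable needs mmap reads; failing fast here beats failing later in flush.
    return rocksdb::Status::NotSupported(
        "table factory requires allow_mmap_reads to be set");
  }
  return rocksdb::Status::OK();
}
```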

Test Plan:
export ROCKSDB_TESTS=PlainTableOptionsSanitizeTest
make db_test -j32
./db_test

Reviewers: sdong, ljin

Reviewed By: ljin

Subscribers: you, leveldb

Differential Revision: https://reviews.facebook.net/D21939
2014-08-20 15:53:39 -07:00
Jonah Cohen
e173bf9c19 Eliminate VersionSet memory leak
Summary:
ManifestDumpCommand::DoCommand was allocating a VersionSet and never
freeing it.
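
A hedged sketch of the usual shape of such a fix (not the literal ldb code): owning the VersionSet with a unique_ptr so every return path frees it.

```
// Hedged sketch: replace a raw `new VersionSet(...)` that was never deleted
// with scoped ownership, so early returns can no longer leak it.
#include <memory>

struct VersionSetSketch {  // stand-in for the real VersionSet
  explicit VersionSetSketch(const char* /*dbname*/) {}
};

void DoCommandSketch() {
  std::unique_ptr<VersionSetSketch> versions(new VersionSetSketch("/tmp/db"));
  // ... dump the manifest through versions.get() ...
}  // versions is destroyed here on every path
```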

Test Plan: make

Reviewers: igor

Reviewed By: igor

Differential Revision: https://reviews.facebook.net/D22221
2014-08-20 13:52:03 -07:00
sdong
10720a5587 Revert the unintended change that DestroyDB() doesn't clean up info logs.
Summary: A previous change unintentionally made DestroyDB() keep info logs under the DB directory. Revert the unintended change.

Test Plan: Add a unit test case to verify it.

Reviewers: ljin, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22209
2014-08-20 12:07:32 -07:00
Igor Canadi
01cbdd2aae Optimize storage parameters for spatialDB
Summary: We need to start compression at level 1, while OptimizeForLevelCompaction() only sets up rocksdb to start compressing at level 2. I also adjusted some other things.
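
A hedged sketch of how "start compression at level 1" can be expressed with the public options (whether SpatialDB sets exactly this is not shown in the message):

```
// Hedged sketch: leave level 0 uncompressed (fresh flushes), compress level 1
// and deeper.
#include "rocksdb/options.h"

rocksdb::Options CompressFromLevel1Sketch() {
  rocksdb::Options options;
  options.compression_per_level.assign(options.num_levels,
                                       rocksdb::kSnappyCompression);
  options.compression_per_level[0] = rocksdb::kNoCompression;
  return options;
}
```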

Test Plan: compiles

Reviewers: yinwang

Reviewed By: yinwang

Differential Revision: https://reviews.facebook.net/D22203
2014-08-20 11:14:01 -07:00
sdong
045575ad0c Add CuckooHash table format to table_reader_bench
Summary: Make table_reader_bench cover all the three table formats.

Test Plan: Run it using three options

Reviewers: radheshyamb, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22137
2014-08-19 14:58:15 -07:00
Torrie Fischer
7c5173d27f test: db: fix test to have a smaller timeout for when it runs on faster hardware 2014-08-19 13:45:12 -07:00
Igor Canadi
6929b08616 Remove BitStream* tests 2014-08-19 09:52:54 -04:00
Igor Canadi
50b790c6d4 Removing BitStream* functions
Summary: I was checking some functions in coding.h and coding.cc when I noticed these unused functions. Let's remove them.

Test Plan: compiles

Reviewers: sdong, ljin, yhchiang, dhruba

Reviewed By: dhruba

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22077
2014-08-19 06:48:21 -07:00
Radheshyam Balasundaram
162b8151f1 Adding Column Family support in db_bench.
Summary:
Adding num_column_families flag. Adding support for column families in DoWrite and ReadRandom methods.
[Igor, please let me know if this approach sounds good. I shall add it to other methods too.]

Test Plan: Ran fillseq on 1M keys and 10 Column families and ran readrandom.

Reviewers: sdong, yhchiang, igor, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21387
2014-08-18 18:15:01 -07:00
sdong
28b5c76004 WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index
Summary:
Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-its-own-write.

WriteBatchWithIndex uses a skiplist to store the binary index. The index stores the offset of the entry in the write batch. When searching for a key, the key for the entry is read by reading the entry from the write batch at that offset.

Define a new iterator class for querying data out of WriteBatchWithIndex. A user can create an iterator of the write batch for one column family, seek to a key, and keep calling Next() to see the next entries.

I will add more unit tests if people are OK about this API.
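
A hedged usage sketch in terms of today's public header (details of the very first version of the API in this diff may differ slightly):

```
// Hedged usage sketch of read-your-own-writes from a WriteBatchWithIndex.
#include <memory>
#include "rocksdb/utilities/write_batch_with_index.h"

void ReadOwnWriteSketch() {
  rocksdb::WriteBatchWithIndex batch;
  batch.Put("key1", "value1");  // buffered in the batch, not yet written to the DB
  std::unique_ptr<rocksdb::WBWIIterator> iter(batch.NewIterator());
  iter->Seek("key1");
  if (iter->Valid()) {
    // iter->Entry() describes the buffered Put for "key1"
  }
}
```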

Test Plan:
make all check
Add unit tests.

Reviewers: yhchiang, igor, MarkCallaghan, ljin

Reviewed By: ljin

Subscribers: dhruba, leveldb, xjin

Differential Revision: https://reviews.facebook.net/D21381
2014-08-18 16:37:38 -07:00
sdong
5585e00279 Update release note of 3.4
Summary: N/A

Test Plan: N/A

Reviewers: yhchiang, ljin

Reviewed By: ljin

Subscribers: dhruba, leveldb, igor

Differential Revision: https://reviews.facebook.net/D22053
2014-08-18 16:37:03 -07:00
Naveen
343e98a7d1 Reverting import change 2014-08-18 14:08:10 -07:00
Naveen
ddb8039e3a RocksDB static build
Makefile changes to download and build the dependencies. Load the shared library when RocksDB is initialized.
2014-08-18 14:04:37 -07:00
sdong
68eed8c9f0 Bump up version
Summary: Bump up version after we've cut 3.4

Test Plan: N/A

Reviewers: yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb, dhruba, igor

Differential Revision: https://reviews.facebook.net/D22047
2014-08-18 12:02:02 -07:00
Radheshyam Balasundaram
36e759d199 Adding Cuckoo Table SST option to db_bench
Summary: Adding flags to use cuckoo table SST in db_bench.cc

Test Plan: Ran benchmark with fillseq and readrandom

Reviewers: sdong, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21729
2014-08-18 11:59:38 -07:00
Igor Canadi
a6fd14c881 Fix valgrind error in c_test 2014-08-18 11:08:51 -07:00
Lei Jin
c7037153c3 attempt to fix auto_roll_logger_test
Summary:
auto_roll_logger_test fails from time to time. I wasn't able to repro
the issue, but from looking at the code it seems the initial ctime_
value can be set right at the boundary of a second, so the log may still
get rolled even when the interval is set to 1 second.

```
util/auto_roll_logger_test.cc:120: failed: 118 > 708
==19470== Syscall param msync(start) points to unaddressable byte(s)
==19470==    at 0x4E46CE0: __msync_nocancel (in
/usr/local/fbcode/gcc-4.8.1-glibc-2.17/lib/libpthread-2.17.so)
==19470==    by 0x584EFB: access_mem (Ginit.c:137)
==19470==    by 0x5834E3: _ULx86_64_access_reg (libunwind_i.h:162)
==19470==    by 0x585601: apply_reg_state (Gparser.c:742)
==19470==    by 0x5866BE: _ULx86_64_dwarf_find_save_locs (Gparser.c:883)
==19470==    by 0x584550: _ULx86_64_dwarf_step (Gstep.c:34)
==19470==    by 0x583653: _ULx86_64_step (Gstep.c:71)
==19470==    by 0x583FD2: _ULx86_64_tdep_trace (Gtrace.c:217)
==19470==    by 0x5831C3: backtrace (backtrace.c:69)
```

Test Plan: ./auto_roll_logger_test

Reviewers: sdong, yhchiang, igor

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21951
2014-08-18 10:23:18 -07:00
Igor Canadi
c8ecfaedd0 Merge pull request #230 from cockroachdb/spencerkimball/send-user-keys-to-v2-filter
Pass parsed user key to prefix extractor in V2 compaction
2014-08-18 11:09:30 -04:00
Yueh-Hsuan Chiang
570ba5aca8 Avoid retrying to read property block from a table when it does not exist.
Summary:
Avoid retrying to read the property block from a table when it does not
exist, while updating stats for compensating deletion entries.

In addition, ReadTableProperties() now returns Status::NotFound instead
of Status::Corruption when table properties do not exist in the file.
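
For callers the distinction matters because a missing property block is now a benign condition; a hedged sketch of the calling pattern (the real ReadTableProperties signature is elided behind a stand-in):

```
// Hedged sketch of how a caller can treat the new return value.
#include "rocksdb/status.h"

// Stand-in for the real ReadTableProperties() call.
rocksdb::Status ReadTablePropertiesSketch() {
  return rocksdb::Status::NotFound("no property block");
}

void UpdateCompensationStatsSketch() {
  rocksdb::Status s = ReadTablePropertiesSketch();
  if (s.IsNotFound()) {
    // Old file with no property block: skip the compensated stats, don't retry.
  } else if (!s.ok()) {
    // Real corruption or I/O error: surface it as before.
  }
}
```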

Test Plan:
make db_test -j32
export ROCKSDB_TESTS=CompactionDeleteionTrigger
./db_test

Reviewers: ljin, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21867
2014-08-15 12:17:44 -07:00
Lei Jin
625b9ef4e0 Merge pull request #234 from bbiao/master
fix compile error under Mac OS X
2014-08-14 20:34:48 -07:00
Jonah Cohen
59a2763d5c Fix typo huage => huge
Test Plan: Inspection

Reviewers: sdong

Reviewed By: sdong

Differential Revision: https://reviews.facebook.net/D21891
2014-08-14 17:01:20 -07:00
Jonah Cohen
f611935e9a Fix autovector iterator increment/decrement comments
Summary: The prefix and postfix operators were mixed up in the autovector class.
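
For reference, the convention the corrected comments describe (generic C++, not the autovector code itself): prefix ++ mutates and returns the same object, postfix ++ returns a copy of the old state.

```
// Generic illustration of prefix vs. postfix increment on an iterator-like type.
#include <cstddef>

struct CounterIter {
  size_t pos = 0;

  // Prefix form: advance first, then return the updated object.
  CounterIter& operator++() {
    ++pos;
    return *this;
  }

  // Postfix form (note the int dummy parameter): copy the old state, advance,
  // and return the copy.
  CounterIter operator++(int) {
    CounterIter old = *this;
    ++pos;
    return old;
  }
};
```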

Test Plan: Inspection

Reviewers: sdong, kailiu

Reviewed By: kailiu

Differential Revision: https://reviews.facebook.net/D21873
2014-08-14 14:56:11 -07:00
sdong
58b0f9d890 Support purging logs from separate log directory
Summary:
1. Support purging info logs from a separate path from the DB path. Refactor the code that generates info log prefixes so that it can be called both when generating new files and when scanning the log directory.
2. Fix the bug of not scanning multiple DB paths (should only impact multiple DB paths)
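
Point 1 interacts with the existing db_log_dir option; a hedged usage sketch of keeping info logs outside the DB path (paths and sizes are illustrative):

```
// Hedged usage sketch: info logs are written (and now also purged) in a
// directory separate from the data files.
#include "rocksdb/options.h"

rocksdb::Options SeparateLogDirSketch() {
  rocksdb::Options options;
  options.db_log_dir = "/var/log/myapp/rocksdb";  // info LOG files live here
  options.db_paths.emplace_back("/data/rocksdb", 100ull << 30);  // data files
  return options;
}
```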

Test Plan:
Add unit test for generating and parsing info log files
Add end-to-end test in db_test

Reviewers: yhchiang, ljin

Reviewed By: ljin

Subscribers: leveldb, igor, dhruba

Differential Revision: https://reviews.facebook.net/D21801
2014-08-14 13:22:50 -07:00
Yueh-Hsuan Chiang
2da53b1e06 [Java] Add purgeOldBackups API
Summary:
1. Check status of CreateNewBackup. If status is not OK, then throw.
2. Add purgeOldBackups API

Test Plan:
make test
make sample

Reviewers: haobo, sdong, zzbennett, swapnilghike, yhchiang

Reviewed By: yhchiang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21753
2014-08-14 10:58:09 -07:00
Feng Zhu
6c4c159b2c fix_sst_dump_for_old_sst_format
Summary:
1. Fix segmentation error when dumping the old sst format (no properties nor stats)
2. Enable dumping the old sst format

Test Plan:
Generate a block based sst file with "properties", one with "stats", and one with neither.
Read it using sst_dump

Reviewers: ljin, igor, yhchiang, dhruba, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21837
2014-08-14 09:59:41 -07:00
ZHANG Biao
8dfe2fdd51 fix compile error under Mac OS X 2014-08-14 20:01:01 +08:00