ccf0fcd5c2

- Implemented Get() directly instead of building on top of a full
  merging iterator stack. This speeds up the "readrandom" benchmark by
  up to 15-30%.
- Fixed an open-source compilation problem. Added a --db=<name> flag to
  control where the database is placed.
- Automatically compact a file when we have done enough overlapping
  seeks to that file.
- Fixed a performance bug where we would read from at least one file in
  a level even if none of the files overlapped the key being read.
- Makefile fix for Mac OS X installations that have Xcode 4 without
  Xcode 3.
- Unified the two occurrences of binary search in a file list into one
  routine.
- Found and fixed a bug where we would unnecessarily search the last
  file when looking for a key larger than all data in the level.
- Avoided the need for trivial move compactions, which gets rid of two
  out of five syncs in "fillseq".
- Removed the MANIFEST file write when switching to a new memtable/log
  file, for a 10-20% improvement in fill speed on ext4.
- Added a SNAPPY setting to the Makefile for folks who have Snappy
  installed. Snappy compresses values and speeds up writes.

git-svn-id: https://leveldb.googlecode.com/svn/trunk@32 62dab493-f737-651d-591e-8d6aee1b9529
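Several of the changes above revolve around locating the right file in a sorted level (the unified binary search, and the bug of needlessly searching the last file when the key is larger than all data in the level). A minimal sketch of that search, using a hypothetical FileRange struct in place of LevelDB's actual FileMetaData:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for LevelDB's per-file metadata; the real
// struct also tracks file number, size, seek counts, etc.
struct FileRange {
  std::string smallest;  // smallest key stored in the file
  std::string largest;   // largest key stored in the file
};

// Binary search over files sorted by key range (non-overlapping, as in
// levels >= 1). Returns the index of the first file whose largest key
// is >= key, or files.size() when the key is larger than all data in
// the level -- in which case no file needs to be read at all.
size_t FindFile(const std::vector<FileRange>& files,
                const std::string& key) {
  size_t left = 0;
  size_t right = files.size();
  while (left < right) {
    size_t mid = left + (right - left) / 2;
    if (files[mid].largest < key) {
      // All files at or before mid end before key; skip them.
      left = mid + 1;
    } else {
      // files[mid] could contain key; keep searching earlier files.
      right = mid;
    }
  }
  return left;
}
```

The caller must still check that the key is >= the returned file's smallest key before reading it; a result equal to files.size() means the lookup can skip the level entirely, which is the fix for the "unnecessarily search the last file" bug.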
14 lines
494 B
Plaintext
ss
- Stats

db
- Maybe implement DB::BulkDeleteForRange(start_key, end_key)
  that would blow away files whose ranges are entirely contained
  within [start_key..end_key]? For Chrome, deletion of obsolete
  object stores, etc. can be done in the background anyway, so
  probably not that important.
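The containment test such a BulkDeleteForRange would need is simple to sketch. A file can be dropped wholesale (no tombstones, no compaction) only if its whole key range lies inside the deleted range; files that merely overlap a boundary still need ordinary deletes. A sketch, using a hypothetical TableRange struct rather than LevelDB's real metadata:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical per-file key range; LevelDB's real metadata also
// carries file numbers, sizes, and so on.
struct TableRange {
  std::string smallest;
  std::string largest;
};

// True iff the file's entire key range lies inside [start_key..end_key],
// i.e. the file could be deleted outright.
bool EntirelyContained(const TableRange& f,
                       const std::string& start_key,
                       const std::string& end_key) {
  return start_key <= f.smallest && f.largest <= end_key;
}

// Collect the indices of files in one level that could be blown away.
std::vector<size_t> FilesToBlowAway(const std::vector<TableRange>& level,
                                    const std::string& start_key,
                                    const std::string& end_key) {
  std::vector<size_t> out;
  for (size_t i = 0; i < level.size(); ++i) {
    if (EntirelyContained(level[i], start_key, end_key)) {
      out.push_back(i);
    }
  }
  return out;
}
```

For example, in a level with ranges [a..c], [d..f], [g..k], deleting [d..f] would drop only the middle file; the boundary files would still need per-key handling.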

After a range is completely deleted, what gets rid of the
corresponding files if we do no future changes to that range? Make
the conditions for triggering compactions fire in more situations?