Add redirects from old blog post links to new format

Summary:
Blog post links will be formatted differently after the move to gh-pages, but we can redirect from the old-style URLs to the new style for existing blog posts.

Test Plan:
Visual

https://www.facebook.com/pxlcld/pvWQ

Reviewers: lgalanis, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D63513
commit 1ec75ee76b
parent 607628d349
Author: Joel Marcey
Date: 2016-09-06 21:07:13 -07:00

36 changed files with 89 additions and 15 deletions

@@ -1,2 +1,3 @@
source 'https://rubygems.org'
gem 'github-pages', group: :jekyll_plugins
+gem 'jekyll-redirect-from'

@@ -21,7 +21,7 @@ GEM
    ffi (1.9.14)
    forwardable-extended (2.6.0)
    gemoji (2.1.0)
-   github-pages (93)
+   github-pages (94)
      activesupport (= 4.2.7)
      github-pages-health-check (= 1.2.0)
      jekyll (= 3.2.1)
@@ -29,7 +29,7 @@ GEM
      jekyll-feed (= 0.5.1)
      jekyll-gist (= 1.4.0)
      jekyll-github-metadata (= 2.0.2)
-     jekyll-mentions (= 1.1.3)
+     jekyll-mentions (= 1.2.0)
      jekyll-paginate (= 1.1.0)
      jekyll-redirect-from (= 0.11.0)
      jekyll-sass-converter (= 1.3.0)
@@ -71,7 +71,8 @@ GEM
    jekyll-github-metadata (2.0.2)
      jekyll (~> 3.1)
      octokit (~> 4.0)
-   jekyll-mentions (1.1.3)
+   jekyll-mentions (1.2.0)
+     activesupport (~> 4.0)
      html-pipeline (~> 2.3)
      jekyll (~> 3.0)
    jekyll-paginate (1.1.0)
@@ -119,18 +120,21 @@ GEM
    sawyer (0.7.0)
      addressable (>= 2.3.5, < 2.5)
      faraday (~> 0.8, < 0.10)
-   terminal-table (1.6.0)
+   terminal-table (1.7.0)
+     unicode-display_width (~> 1.1)
    thread_safe (0.3.5)
    typhoeus (0.8.0)
      ethon (>= 0.8.0)
    tzinfo (1.2.2)
      thread_safe (~> 0.1)
+   unicode-display_width (1.1.0)

PLATFORMS
  ruby

DEPENDENCIES
  github-pages
+ jekyll-redirect-from

BUNDLED WITH
   1.12.5

@@ -57,3 +57,6 @@ sass:
redcarpet:
  extensions: [with_toc_data]
+gems:
+  - jekyll-redirect-from

@@ -3,6 +3,8 @@ title: How to backup RocksDB?
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/191/how-to-backup-rocksdb/
---
In RocksDB, we have implemented an easy way to back up your DB. Here is a simple example:
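
The example itself falls outside the diff context shown here. As a hedged sketch only (paths are hypothetical, and the post predates the current `BackupEngine` API, so older releases may spell the header and class names differently):

```cpp
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/utilities/backup_engine.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/rocksdb_example", &db).ok());

  // Open a backup engine pointing at a separate backup directory.
  rocksdb::BackupEngine* backup_engine;
  rocksdb::Status s = rocksdb::BackupEngine::Open(
      rocksdb::Env::Default(),
      rocksdb::BackupEngineOptions("/tmp/rocksdb_example_backup"),
      &backup_engine);
  assert(s.ok());

  // Snapshot the current state of the DB into a new backup.
  s = backup_engine->CreateNewBackup(db);
  assert(s.ok());

  delete backup_engine;
  delete db;
}
```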

@@ -3,6 +3,8 @@ title: How to persist in-memory RocksDB database?
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/245/how-to-persist-in-memory-rocksdb-database/
---
In recent months, we have focused on optimizing RocksDB for in-memory workloads. With growing RAM sizes and strict low-latency requirements, lots of applications decide to keep their entire data in memory. Running an in-memory database with RocksDB is easy -- just mount your RocksDB directory on tmpfs or ramfs [1]. Even if the process crashes, RocksDB can recover all of your data from the in-memory filesystem. However, what happens if the machine reboots?
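
As a minimal sketch of the tmpfs approach described above (the `/dev/shm` path is just one example of a tmpfs mount on Linux):

```cpp
#include <cassert>

#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;

  // Point the DB at a directory on an in-memory filesystem.
  // /dev/shm is a tmpfs mount on most Linux distributions.
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/dev/shm/rocksdb_inmem", &db);
  assert(s.ok());

  // Reads and writes now never touch persistent storage.
  assert(db->Put(rocksdb::WriteOptions(), "key", "value").ok());
  delete db;
}
```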

@@ -3,6 +3,8 @@ title: The 1st RocksDB Local Meetup Held on March 27, 2014
layout: post
author: xjin
category: blog
+redirect_from:
+  - /blog/323/the-1st-rocksdb-local-meetup-held-on-march-27-2014/
---
On Mar 27, 2014, the RocksDB team @ Facebook held the 1st RocksDB local meetup at FB HQ (Menlo Park, California). We invited around 80 guests from 20+ local companies, including LinkedIn, Twitter, Dropbox, Square, Pinterest, MapR, Microsoft and IBM. In the end, around 50 guests showed up, a show-up rate of around 60%.

@@ -3,6 +3,8 @@ title: RocksDB 2.8 release
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/371/rocksdb-2-8-release/
---
Check out the new RocksDB 2.8 release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/2.8.fb).

@@ -3,6 +3,8 @@ title: Indexing SST Files for Better Lookup Performance
layout: post
author: leijin
category: blog
+redirect_from:
+  - /blog/431/indexing-sst-files-for-better-lookup-performance/
---
For a `Get()` request, RocksDB goes through the mutable memtable, the list of immutable memtables, and the SST files to look up the target key. SST files are organized in levels.

@@ -3,6 +3,8 @@ title: Reducing Lock Contention in RocksDB
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/521/lock/
---
In this post, we briefly introduce the recent improvements we made to RocksDB to reduce the cost of lock contention.

@@ -3,6 +3,8 @@ title: RocksDB 3.0 release
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/557/rocksdb-3-0-release/
---
Check out the new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/3.0.fb)!

@@ -3,6 +3,8 @@ title: RocksDB 3.1 release
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/575/rocksdb-3-1-release/
---
Check out the new release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.1)!

@@ -3,6 +3,8 @@ title: PlainTable — A New File Format
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/599/plaintable-a-new-file-format/
---
In this post, we are introducing "PlainTable" -- a file format we designed for RocksDB, initially to satisfy a production use case at Facebook.
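
As a hedged sketch of how a DB can be opened with the PlainTable format (the prefix length and path are arbitrary illustration values; PlainTable requires a prefix extractor and mmap reads):

```cpp
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/slice_transform.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // PlainTable indexes keys by prefix, so a prefix extractor is required.
  // The 8-byte length here is an arbitrary choice for illustration.
  options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(8));
  options.table_factory.reset(rocksdb::NewPlainTableFactory());
  options.allow_mmap_reads = true;  // PlainTable reads data via mmap

  rocksdb::DB* db;
  assert(rocksdb::DB::Open(options, "/tmp/plain_table_example", &db).ok());
  delete db;
}
```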

@@ -3,6 +3,8 @@ title: Avoid Expensive Locks in Get()
layout: post
author: leijin
category: blog
+redirect_from:
+  - /blog/677/avoid-expensive-locks-in-get/
---
As promised in the previous [blog post](blog/2014/05/14/lock.html)!

@@ -3,6 +3,8 @@ title: RocksDB 3.2 release
layout: post
author: leijin
category: blog
+redirect_from:
+  - /blog/647/rocksdb-3-2-release/
---
Check out the new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.2)!

@@ -3,6 +3,8 @@ title: RocksDB 3.3 Release
layout: post
author: yhchiang
category: blog
+redirect_from:
+  - /blog/1301/rocksdb-3-3-release/
---
Check out the new RocksDB release on [GitHub](https://github.com/facebook/rocksdb/releases/tag/rocksdb-3.3)!

@@ -3,6 +3,8 @@ title: Cuckoo Hashing Table Format
layout: post
author: radheshyam
category: blog
+redirect_from:
+  - /blog/1367/cuckoo/
---
## Introduction

@@ -3,6 +3,8 @@ title: New Bloom Filter Format
layout: post
author: zagfox
category: blog
+redirect_from:
+  - /blog/1427/new-bloom-filter-format/
---
## Introduction

@@ -3,6 +3,8 @@ title: RocksDB 3.5 Release!
layout: post
author: leijin
category: blog
+redirect_from:
+  - /blog/1547/rocksdb-3-5-release/
---
New RocksDB release - 3.5!

@@ -3,6 +3,8 @@ title: Migrating from LevelDB to RocksDB
layout: post
author: lgalanis
category: blog
+redirect_from:
+  - /blog/1811/migrating-from-leveldb-to-rocksdb-2/
---
If you have an existing application that uses LevelDB and would like to migrate to RocksDB, one problem you need to overcome is mapping the options for LevelDB to proper options for RocksDB. As of release 3.9, this can be done automatically using our option conversion utility found in rocksdb/utilities/leveldb_options.h. What is needed is to first replace `leveldb::Options` with `rocksdb::LevelDBOptions`. Then, use `rocksdb::ConvertOptions( )` to convert the `LevelDBOptions` struct into appropriate RocksDB options. Here is an example:
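
The example is cut off by the diff context. A minimal sketch (option values and the path are hypothetical):

```cpp
#include "rocksdb/db.h"
#include "rocksdb/utilities/leveldb_options.h"

int main() {
  // Fill in LevelDBOptions exactly as you would fill in leveldb::Options.
  rocksdb::LevelDBOptions leveldb_options;
  leveldb_options.create_if_missing = true;
  leveldb_options.write_buffer_size = 8 * 1024 * 1024;

  // Map them onto an equivalent rocksdb::Options struct.
  rocksdb::Options options = rocksdb::ConvertOptions(leveldb_options);

  rocksdb::DB* db;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/migrated_db", &db);
  if (s.ok()) delete db;
  return s.ok() ? 0 : 1;
}
```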

@@ -3,6 +3,8 @@ title: Reading RocksDB options from a file
layout: post
author: lgalanis
category: blog
+redirect_from:
+  - /blog/1883/reading-rocksdb-options-from-a-file/
---
RocksDB options can be provided to RocksDB using a file or any string. The format is straightforward: `write_buffer_size=1024;max_write_buffer_number=2`. Any whitespace around `=` and `;` is OK. Moreover, options can be nested as necessary. For example, `BlockBasedTableOptions` can be nested as follows: `write_buffer_size=1024; max_write_buffer_number=2; block_based_table_factory={block_size=4k};`. Similarly, any whitespace around `{` or `}` is OK. Here is what it looks like in code:
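
The code itself is cut off by the diff context. A hedged sketch using `GetOptionsFromString()` from `rocksdb/convenience.h` (recent releases also offer an overload taking a `ConfigOptions`):

```cpp
#include <cassert>
#include <string>

#include "rocksdb/convenience.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options base_options;  // defaults to start from
  rocksdb::Options new_options;

  // Parse the option string from the post, including the nested
  // block_based_table_factory options.
  rocksdb::Status s = rocksdb::GetOptionsFromString(
      base_options,
      "write_buffer_size=1024;max_write_buffer_number=2;"
      "block_based_table_factory={block_size=4k};",
      &new_options);
  assert(s.ok());
}
```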

@@ -3,14 +3,16 @@ title: 'WriteBatchWithIndex: Utility for Implementing Read-Your-Own-Writes'
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/1901/write-batch-with-index/
---
RocksDB can be used as a storage engine of a higher-level database. In fact, we are currently plugging RocksDB into MySQL and MongoDB as one of their storage engines. RocksDB can help with guaranteeing some of the ACID properties: durability is guaranteed by RocksDB by design; consistency and isolation need to be enforced by concurrency controls on top of RocksDB; atomicity can be implemented by committing a transaction's writes to RocksDB in one write batch at the end.
However, if we enforce atomicity by only committing all writes at the end of the transaction in one batch, you cannot read back an updated value previously written by the same transaction (read-your-own-writes). To read the updated value, the databases on top of RocksDB need to maintain an internal buffer for all the written keys, and when a read happens they need to merge the result from RocksDB with the result from this buffer. This is a problem we faced when building the RocksDB storage engine in MongoDB. We solved it by creating a utility class, WriteBatchWithIndex (a write batch with a searchable index), and made it part of the public API so that the community can also benefit from it.
Before talking about the index part, let me introduce the write batch first. The write batch class, `WriteBatch`, is a RocksDB data structure for atomic writes of multiple keys. Users can buffer their updates in a `WriteBatch` by calling `write_batch.Put("key1", "value1")` or `write_batch.Delete("key2")`, similar to calling RocksDB's functions of the same names. In the end, they call `db->Write(write_batch)` to atomically apply all those batched operations to the DB. This is how a database can guarantee atomicity, as shown above. Adding a searchable index to `WriteBatch`, we get `WriteBatchWithIndex`. Users can put updates into a `WriteBatchWithIndex` in the same way as into a `WriteBatch`. In the end, they can get a `WriteBatch` object from it and issue `db->Write()`. Additionally, users can create an iterator over a `WriteBatchWithIndex`, seek to any key location and iterate from there.
To implement read-your-own-writes using `WriteBatchWithIndex`, every time the user creates a transaction, we create a `WriteBatchWithIndex` attached to it. All the writes of the transaction go to the `WriteBatchWithIndex` first. When we commit the transaction, we atomically write the batch to RocksDB. When the user wants to call `Get()`, we first check whether the value exists in the `WriteBatchWithIndex` and return it if it does, by seeking and reading from an iterator of the write batch, before checking data in RocksDB. For example, here is how we implement it in MongoDB's RocksDB storage engine: [link](https://github.com/mongodb/mongo/blob/a31cc114a89a3645e97645805ba77db32c433dce/src/mongo/db/storage/rocks/rocks_recovery_unit.cpp#L245-L260). If a range query comes in, we pass a DB iterator to `WriteBatchWithIndex`, which creates a super iterator that combines the results from the DB iterator and the batch's iterator. Using this super iterator, we can iterate over the DB together with the transaction's own writes. Here is the iterator creation code in MongoDB's RocksDB storage engine: [link](https://github.com/mongodb/mongo/blob/a31cc114a89a3645e97645805ba77db32c433dce/src/mongo/db/storage/rocks/rocks_recovery_unit.cpp#L266-L269). In this way, the database can solve the read-your-own-writes problem by letting RocksDB handle a transaction's uncommitted writes.
Using `WriteBatchWithIndex`, we successfully implemented read-your-own-writes in the RocksDB storage engine of MongoDB. If you also face a read-your-own-writes problem, `WriteBatchWithIndex` can help you solve it quickly and correctly.
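
As a hedged sketch of the read-your-own-writes flow described above (the `GetFromBatchAndDB()` convenience call is from the current API and may postdate the post; paths and keys are illustrative):

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/utilities/write_batch_with_index.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/wbwi_example", &db).ok());

  rocksdb::WriteBatchWithIndex batch;
  batch.Put("key1", "value1");  // buffered in the batch, not yet in the DB

  // Read-your-own-writes: consult the batch first, then fall back to the DB.
  std::string value;
  rocksdb::Status s =
      batch.GetFromBatchAndDB(db, rocksdb::ReadOptions(), "key1", &value);
  assert(s.ok() && value == "value1");

  // Commit: atomically apply everything buffered in the batch.
  assert(db->Write(rocksdb::WriteOptions(), batch.GetWriteBatch()).ok());
  delete db;
}
```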

@@ -3,6 +3,8 @@ title: Integrating RocksDB with MongoDB
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/1967/integrating-rocksdb-with-mongodb-2/
---
Over the last couple of years, we have been busy integrating RocksDB with various services here at Facebook that needed to store key-value pairs locally. We have also seen other companies using RocksDB as local storage components of their distributed systems.

@@ -3,6 +3,8 @@ title: RocksDB in osquery
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/1997/rocksdb-in-osquery/
---
Check out [this](https://code.facebook.com/posts/1411870269134471/how-rocksdb-is-used-in-osquery/) blog post by [Mike Arpaia](https://www.facebook.com/mike.arpaia) and [Ted Reed](https://www.facebook.com/treeded) about how osquery leverages RocksDB to build an embedded pub-sub system. This article is a great read and contains insights on how to properly use RocksDB.

@@ -3,6 +3,8 @@ title: RocksDB 2015 H2 roadmap
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/2015/rocksdb-2015-h2-roadmap/
---
Every 6 months, the RocksDB team gets together to prioritize the work ahead of us. We just went through this exercise and we wanted to share the results with the community. Here's what the RocksDB team will be focusing on for the next 6 months:

@@ -3,6 +3,8 @@ title: Spatial indexing in RocksDB
layout: post
author: icanadi
category: blog
+redirect_from:
+  - /blog/2039/spatial-indexing-in-rocksdb/
---
About a year ago, there was a need to develop a spatial database at Facebook. We needed to store and index Earth's map data. Before building our own, we looked at the existing spatial databases. They were all very good technology, but also general purpose. We could sacrifice a general-purpose API, so we thought we could build a more performant database, since it would be specifically designed for our use-case. Furthermore, we decided to build the spatial database on top of RocksDB, because we have a lot of operational experience with running and tuning RocksDB at a large scale.

@@ -3,6 +3,8 @@ title: RocksDB is now available in Windows Platform
layout: post
author: dmitrism
category: blog
+redirect_from:
+  - /blog/2033/rocksdb-is-now-available-in-windows-platform/
---
Over the past 6 months we have seen a number of use cases where RocksDB is successfully used by the community and various companies to achieve high throughput and volume in a modern server environment.

@@ -3,19 +3,21 @@ title: Dynamic Level Size for Level-Based Compaction
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/2207/dynamic-level/
---
In this article, we follow up on the first part of an answer to one of the questions in our [AMA](https://www.reddit.com/r/IAmA/comments/3de3cv/we_are_rocksdb_engineering_team_ask_us_anything/ct4a8tb): the dynamic level size in level-based compaction.
Level-based compaction is the original LevelDB compaction style and one of the two major compaction styles in RocksDB (see [our wiki](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics#multi-threaded-compactions)). In RocksDB we introduced parallelism and more configurable options, but the main algorithm stayed the same, until we recently introduced the dynamic level size mode.
In level-based compaction, we organize data into different sorted runs, called levels. Each level has a target size, and usually target sizes grow by the same size multiplier. For example, if you set the target size of level 1 to 1GB and the size multiplier to 10, the target sizes of levels 1, 2, 3 and 4 will be 1GB, 10GB, 100GB and 1000GB. Before level 1, there are staging files flushed from memtables, called level 0 files, which are later merged into level 1. A compaction is triggered as soon as the actual size of a level exceeds its target size: we merge a subset of that level's data into the next level to reduce the level's size. More compactions are triggered until the sizes of all levels are below their target sizes. In a steady state, the size of each level stays around its target size.
@@ -30,7 +32,7 @@ How do we estimate space amplification of level-based compaction? We focus speci
Applying the equation, if we have four non-zero levels with sizes 1GB, 10GB, 100GB and 1000GB, the space amplification will be approximately (1000GB + 100GB + 10GB + 1GB) / 1000GB = 1.111, which is a very good number. However, there is a catch: how do we make sure the last level's size is 1000GB, the same as the level's size target? A user has to fine-tune level sizes to achieve this number, and will need to re-tune if the DB size changes. The theoretical number 1.11 is hard to achieve in practice. In a worse case, if the target size of the last level is 1000GB but the user data is only 200GB, the actual space amplification will be (200GB + 100GB + 10GB + 1GB) / 200GB = 1.555, a much worse number.
To solve this problem, my colleague Igor Kabiljo came up with the dynamic level size target mode. You can enable it by setting `options.level_compaction_dynamic_level_bytes=true`. In this mode, level size targets are adjusted dynamically based on the size of the last level. Suppose the level size multiplier is 10 and the DB size is 200GB. The target size of the last level is automatically set to the actual size of the level, 200GB; the second-to-last level's target is automatically set to size_last_level / 10 = 20GB; the third-to-last level's to size_last_level / 100 = 2GB; and the next level's to size_last_level / 1000 = 200MB. We stop here because 200MB is within the range of the first level. In this way, we can achieve the 1.111 space amplification without fine-tuning the level size targets. More details can be found in [code comments of the option](https://github.com/facebook/rocksdb/blob/v3.11/include/rocksdb/options.h#L366-L423) in the header file.
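
As a small sketch of enabling the mode discussed above (the multiplier matches the post's example; everything else is left at defaults):

```cpp
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;

  // Derive level size targets dynamically from the actual size of the
  // last level, as described in the post.
  options.level_compaction_dynamic_level_bytes = true;

  // With a multiplier of 10, a 200GB last level yields targets of
  // 20GB, 2GB and 200MB for the levels above it.
  options.max_bytes_for_level_multiplier = 10;
  return 0;
}
```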

@@ -3,6 +3,8 @@ title: GetThreadList
layout: post
author: yhchiang
category: blog
+redirect_from:
+  - /blog/2261/getthreadlist/
---
We recently added a new API, called `GetThreadList()`, that exposes the RocksDB background thread activity. With this feature, developers are able to obtain real-time information about currently running compactions and flushes, such as input/output size, elapsed time, and the number of bytes written so far. Below is an example output of `GetThreadList`. To better illustrate the example, we have put a sample output of `GetThreadList` into a table where each column represents a thread status:
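
As a hedged sketch of calling the API (note `GetThreadList()` lives on `Env`, and `enable_thread_tracking` must be set to get per-operation details; the path is illustrative):

```cpp
#include <iostream>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/thread_status.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.enable_thread_tracking = true;  // required for thread status

  rocksdb::DB* db;
  if (!rocksdb::DB::Open(options, "/tmp/thread_list_example", &db).ok())
    return 1;

  // Snapshot the status of all RocksDB background threads.
  std::vector<rocksdb::ThreadStatus> thread_list;
  db->GetEnv()->GetThreadList(&thread_list);
  for (const auto& ts : thread_list) {
    std::cout << ts.thread_id << " " << ts.db_name << " " << ts.cf_name
              << "\n";
  }
  delete db;
}
```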

@@ -3,6 +3,8 @@ title: Use Checkpoints for Efficient Snapshots
layout: post
author: rven2
category: blog
+redirect_from:
+  - /blog/2609/use-checkpoints-for-efficient-snapshots/
---
**Checkpoint** is a feature in RocksDB which provides the ability to take a snapshot of a running RocksDB database in a separate directory. A checkpoint can be used as a point-in-time snapshot, which can be opened read-only to query rows as of that point in time, or opened read-write as a writable snapshot. Checkpoints can be used for both full and incremental backups.
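
A minimal sketch of taking a checkpoint (directory names are hypothetical; the target directory must not exist yet):

```cpp
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/utilities/checkpoint.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/checkpoint_example", &db).ok());

  // Create a checkpoint object bound to the running DB...
  rocksdb::Checkpoint* checkpoint;
  assert(rocksdb::Checkpoint::Create(db, &checkpoint).ok());

  // ...and materialize a point-in-time snapshot in a separate directory,
  // which can later be opened read-only or read-write like any DB.
  assert(checkpoint->CreateCheckpoint("/tmp/checkpoint_example_snap").ok());

  delete checkpoint;
  delete db;
}
```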

@@ -3,6 +3,8 @@ title: Analysis File Read Latency by Level
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/2537/analysis-file-read-latency-by-level/
---
In many use cases of RocksDB, people rely on the OS page cache for caching compressed data. With this approach, verifying the effectiveness of OS page caching is challenging, because the file system is a black box to users.

@@ -3,6 +3,8 @@ title: Option of Compaction Priority
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/2921/compaction_pri/
---
The most popular compaction style in RocksDB is level-based compaction, an improved version of LevelDB's compaction algorithm. Pages 9-16 of these [slides](https://github.com/facebook/rocksdb/blob/gh-pages/talks/2015-09-29-HPTS-Siying-RocksDB.pdf) give an illustrated introduction to this compaction style. The basic idea is that data is organized into multiple levels with exponentially increasing target sizes. Except for a special level 0, every level is key-range partitioned into many files. When the size of a level exceeds its target size, we pick one or more of its files and merge them into the next level.
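
As a hedged sketch of setting the option this post introduces (`kOldestSmallestSeqFirst` is one of the available `CompactionPri` values; which value fits best depends on the workload):

```cpp
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;

  // Choose which file is picked first when a level needs compaction.
  options.compaction_pri = rocksdb::kOldestSmallestSeqFirst;
  return 0;
}
```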

@@ -3,6 +3,8 @@ title: RocksDB 4.2 Release!
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/3017/rocksdb-4-2-release/
---
New RocksDB release - 4.2!

@@ -3,6 +3,8 @@ title: RocksDB AMA
layout: post
author: yhchiang
category: blog
+redirect_from:
+  - /blog/3065/rocksdb-ama/
---
RocksDB developers are doing a Reddit Ask-Me-Anything now, from 10AM to 11AM PDT! We welcome you to stop by and ask any RocksDB-related questions, including existing / upcoming features, tuning tips, or database design.

@@ -3,6 +3,8 @@ title: RocksDB Options File
layout: post
author: yhchiang
category: blog
+redirect_from:
+  - /blog/3089/rocksdb-options-file/
---
In RocksDB 4.3, we added a new set of features that makes managing RocksDB options easier. Specifically:
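
The feature list is cut off by the diff context. As one hedged example of the options-file utilities, here is a sketch that reloads the options a DB was last opened with from the OPTIONS file RocksDB persists (the path is hypothetical; newer releases also offer a `ConfigOptions` overload):

```cpp
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/utilities/options_util.h"

int main() {
  // Reconstruct DB and column-family options from the OPTIONS file
  // RocksDB persists in the DB directory.
  rocksdb::DBOptions db_options;
  std::vector<rocksdb::ColumnFamilyDescriptor> cf_descs;
  rocksdb::Status s = rocksdb::LoadLatestOptions(
      "/tmp/options_file_example", rocksdb::Env::Default(), &db_options,
      &cf_descs);
  return s.ok() ? 0 : 1;
}
```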

@@ -3,6 +3,8 @@ title: RocksDB 4.5.1 Released!
layout: post
author: sdong
category: blog
+redirect_from:
+  - /blog/3179/rocksdb-4-5-1-released/
---
## 4.5.1 (3/25/2016)

@@ -3,6 +3,8 @@ title: RocksDB 4.8 Released!
layout: post
author: yiwu
category: blog
+redirect_from:
+  - /blog/3239/rocksdb-4-8-released/
---
## 4.8.0 (5/2/2016)