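# generated build configuration and pkg-config file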
make_config.mk
rocksdb.pc
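
# compiler, editor, and build-system artifacts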
*.a
*.arc
*.d
*.dylib*
*.gcda
*.gcno
*.o
*.o.tmp
*.so
*.so.*
*_test
*_bench
*_stress
*.out
*.class
*.jar
*.*jnilib*
*.d-e
*.o-*
*.swp
*~
*.vcxproj
*.vcxproj.filters
*.sln
*.cmake
.watchmanconfig
CMakeCache.txt
CMakeFiles/
build/
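
# generated tools, test binaries, and build byproducts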
ldb
manifest_dump
sst_dump
blob_dump
block_cache_trace_analyzer
db_with_timestamp_basic_test
tools/block_cache_analyzer/*.pyc
column_aware_encoding_exp
util/build_version.cc
build_tools/VALGRIND_LOGS/
coverage/COVERAGE_REPORT
.gdbhistory
.gdb_history
package/
unity.a
tags
etags
rocksdb_dump
rocksdb_undump
db_test2
trace_analyzer
trace_analyzer_test
io_tracer_parser
.DS_Store
.vs
.vscode
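
# Java build output and generated JNI headers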
java/out
java/target
java/test-libs
java/*.log
java/include/org_rocksdb_*.h
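
# IDE project files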
.idea/
*.iml
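
# generated amalgamation/unity sources and packaging byproducts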
rocksdb.cc
rocksdb.h
unity.cc
java/crossbuild/.vagrant
.vagrant/
java/**/*.asc
java/javadoc
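
# scan-build reports and scratch files from parallel "make check" runs (t/, LOG)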
scan_build_report/
t
LOG
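
# local scratch, Meta-internal dependency dirs, and buckifier caches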
db_logs/
tp2/
fbcode/
fbcode
buckifier/*.pyc
buckifier/__pycache__
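
# clang tooling output and local Python environment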
compile_commands.json
clang-format-diff.py
.py3/
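
# fuzzing artifacts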
fuzz/proto/gen/
fuzz/crash-*
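
# out-of-tree CMake build directories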
cmake-build-*
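# local folly checkout used by optional USE_FOLLY builds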
third-party/folly/