Pick the number of writes dynamically and wait for compactions when using the overwrite benchmark

Summary:
The patch makes two small changes/fixes to the overwrite benchmark in
`benchmark.sh`:

1) Currently, the benchmark uses a fixed number of writes per thread
(125,000,000) no matter how many (or few) keys the database has. The
patch changes this to `num_keys/num_threads`, so that one overwrite
"pass" overwrites each key in the database once on average. I tend to
think this was the original intention as well, for two reasons:
a) it's pretty standard practice, and b) the math checks out with the
default values of `num_keys=8000000000` and `num_threads=64`, since
8,000,000,000 / 64 = 125,000,000, i.e. exactly the old hard-coded
value (see the sketch after this list).
2) As a heavy write-only benchmark, overwrite can create significant
*durability debt* (see
http://smalldatum.blogspot.com/2018/09/durability-debt.html) by
building up a backlog of compactions, which can skew the write
amplification values of subsequent benchmarks (see e.g.
https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks#test-6-multi-threaded-read-and-single-threaded-write-benchmarksh-readwhilewriting).
To avoid this, the patch adds a "waitforcompaction" step at the end of
the benchmark, after all writes have finished.
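
As a quick sanity check on point 1), here is a minimal shell sketch (the
variable values are just the script's defaults restated) confirming that
the new expression reproduces the old hard-coded write count:

```
$ num_keys=8000000000
$ num_threads=64
$ echo $((num_keys / num_threads))  # writes per thread for one overwrite pass
125000000
```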

Note that these changes will affect our "official" performance numbers,
so if we land this, we might want to make a note on the Wiki about the
change in methodology.

Test Plan:
```
$ DB_DIR=/tmp/rocksdbtest/dbbench/ WAL_DIR=/tmp/rocksdbtest/dbbench/ NUM_KEYS=20000000 NUM_THREADS=32 tools/benchmark.sh overwrite --enable_blob_files=1 --enable_blob_garbage_collection=1
===== Benchmark =====
Starting overwrite (ID: ) at Wed Sep 15 13:57:05 PDT 2021
Do 20000000 random overwrite
./db_bench --benchmarks=overwrite,waitforcompaction --use_existing_db=1 --sync=0 --level0_file_num_compaction_trigger=4 --level0_stop_writes_trigger=20 --max_background_compactions=16 --max_write_buffer_number=8 --max_background_flushes=7 --db=/tmp/rocksdbtest/dbbench/ --wal_dir=/tmp/rocksdbtest/dbbench/ --num=20000000 --num_levels=6 --key_size=20 --value_size=400 --block_size=8192 --cache_size=17179869184 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=zstd --level_compaction_dynamic_level_bytes=true --bytes_per_sync=8388608 --cache_index_and_filter_blocks=0 --pin_l0_filter_and_index_blocks_in_cache=1 --benchmark_write_rate_limit=0 --hard_rate_limit=3 --rate_limit_delay_max_milliseconds=1000000 --write_buffer_size=134217728 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=60 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --enable_blob_files=1 --enable_blob_garbage_collection=1 --writes=625000 --subcompactions=4 --soft_pending_compaction_bytes_limit=1099511627776 --hard_pending_compaction_bytes_limit=4398046511104 --threads=32 --merge_operator="put" --seed=1631739425 2>&1 | tee -a /tmp/benchmark_overwrite.t32.s0.log
RocksDB:    version 6.24
Date:       Wed Sep 15 13:57:07 2021
CPU:        24 * Intel Core Processor (Broadwell)
CPUCache:   16384 KB

...

waitforcompaction(/tmp/rocksdbtest/dbbench/): started
waitforcompaction(/tmp/rocksdbtest/dbbench/): finished
```
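
For reference, the wait step can also be run on its own against an
existing database to drain the compaction backlog without issuing any
further writes. A minimal sketch, reusing the paths and flags from the
test plan above:

```
$ ./db_bench --benchmarks=waitforcompaction --use_existing_db=1 \
    --db=/tmp/rocksdbtest/dbbench/ --wal_dir=/tmp/rocksdbtest/dbbench/
```
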
Author: Levi Tamasi, 2021-09-15 14:03:41 -07:00
Commit: 5c90f3ad25 (parent: 8df334342e)

```diff
--- a/tools/benchmark.sh
+++ b/tools/benchmark.sh
@@ -463,7 +463,7 @@ function run_change {
   operation=$1
   echo "Do $num_keys random $operation"
   log_file_name="$output_dir/benchmark_${operation}.t${num_threads}.s${syncval}.log"
-  cmd="./db_bench --benchmarks=$operation \
+  cmd="./db_bench --benchmarks=$operation,waitforcompaction \
       --use_existing_db=1 \
       --sync=$syncval \
       $params_w \
@@ -650,7 +650,7 @@ for job in ${jobs[@]}; do
   elif [ $job = overwrite ]; then
     syncval="0"
     params_w="$params_w \
-      --writes=125000000 \
+      --writes=$(($num_keys / $num_threads)) \
       --subcompactions=4 \
       --soft_pending_compaction_bytes_limit=$((1 * T)) \
       --hard_pending_compaction_bytes_limit=$((4 * T)) "
```