78a309bf86
Summary: Adds a new Cache::ApplyToAllEntries API that we expect to use (in follow-up PRs) for efficiently gathering block cache statistics. Notable features vs. the old ApplyToAllCacheEntries:

* Includes key and deleter (in addition to value and charge). We could have passed in a Handle, but then more virtual function calls would be needed to get the "fields" of each entry. We expect to use the deleter to identify the origin of entries, perhaps even more.
* Heavily tuned to minimize latency impact on an operating cache. It does this by iterating over small sections of each cache shard while cycling through the shards.
* Supports tuning roughly how many entries to operate on per lock acquire and release, to control the impact on the latency of other operations without excessive lock acquires and releases. The right balance can depend on the cost of the callback. A good default seems to be around 256.
* There should be no need to disable thread safety. (I would expect uncontended locks to be sufficiently fast.)

I have enhanced cache_bench to validate this approach:

* Reports a histogram of ns per operation, so we can look at the distribution of times, not just throughput (average).
* Can add a thread for simulated "gather stats" that calls ApplyToAllEntries at a specified interval. We also generate a histogram of time to run ApplyToAllEntries.

To make the iteration over some entries of each shard work as cleanly as possible, even with resizes between sets of entries, I have re-arranged which hash bits are used for sharding and which for indexing within a shard.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8225

Test Plan: A couple of unit tests are added, but the primary validation is manual, as the primary risk is to performance. The primary validation is using cache_bench to ensure that neither the minor hashing changes nor the simulated stats gathering significantly impact QPS or latency distribution. Note that adding the op latency histogram seriously impacts benchmark QPS, so for a fair baseline we need the cache_bench changes (except with simulated stats gathering removed to make it compile). In short, we don't see any reproducible difference in ops/sec or op latency unless we are gathering stats nearly continuously. The test uses a 10GB block cache with 8KB values to be somewhat realistic in the number of items to iterate over.
Baseline typical output:
```
Complete in 92.017 s; Rough parallel ops/sec = 869401
Thread ops/sec = 54662

Operation latency (ns):
Count: 80000000 Average: 11223.9494  StdDev: 29.61
Min: 0  Median: 7759.3973  Max: 9620500
Percentiles: P50: 7759.40 P75: 14190.73 P99: 46922.75 P99.9: 77509.84 P99.99: 217030.58
------------------------------------------------------
[ 0, 1 ] 68 0.000% 0.000%
( 2900, 4400 ] 89 0.000% 0.000%
( 4400, 6600 ] 33630240 42.038% 42.038% ########
( 6600, 9900 ] 18129842 22.662% 64.700% #####
( 9900, 14000 ] 7877533 9.847% 74.547% ##
( 14000, 22000 ] 15193238 18.992% 93.539% ####
( 22000, 33000 ] 3037061 3.796% 97.335% #
( 33000, 50000 ] 1626316 2.033% 99.368%
( 50000, 75000 ] 421532 0.527% 99.895%
( 75000, 110000 ] 56910 0.071% 99.966%
( 110000, 170000 ] 16134 0.020% 99.986%
( 170000, 250000 ] 5166 0.006% 99.993%
( 250000, 380000 ] 3017 0.004% 99.996%
( 380000, 570000 ] 1337 0.002% 99.998%
( 570000, 860000 ] 805 0.001% 99.999%
( 860000, 1200000 ] 319 0.000% 100.000%
( 1200000, 1900000 ] 231 0.000% 100.000%
( 1900000, 2900000 ] 100 0.000% 100.000%
( 2900000, 4300000 ] 39 0.000% 100.000%
( 4300000, 6500000 ] 16 0.000% 100.000%
( 6500000, 9800000 ] 7 0.000% 100.000%
```

New, gather_stats=false. Median thread ops/sec of 5 runs:
```
Complete in 92.030 s; Rough parallel ops/sec = 869285
Thread ops/sec = 54458

Operation latency (ns):
Count: 80000000 Average: 11298.1027  StdDev: 42.18
Min: 0  Median: 7722.0822  Max: 6398720
Percentiles: P50: 7722.08 P75: 14294.68 P99: 47522.95 P99.9: 85292.16 P99.99: 228077.78
------------------------------------------------------
[ 0, 1 ] 109 0.000% 0.000%
( 2900, 4400 ] 793 0.001% 0.001%
( 4400, 6600 ] 34054563 42.568% 42.569% #########
( 6600, 9900 ] 17482646 21.853% 64.423% ####
( 9900, 14000 ] 7908180 9.885% 74.308% ##
( 14000, 22000 ] 15032072 18.790% 93.098% ####
( 22000, 33000 ] 3237834 4.047% 97.145% #
( 33000, 50000 ] 1736882 2.171% 99.316%
( 50000, 75000 ] 446851 0.559% 99.875%
( 75000, 110000 ] 68251 0.085% 99.960%
( 110000, 170000 ] 18592 0.023% 99.983%
( 170000, 250000 ] 7200 0.009% 99.992%
( 250000, 380000 ] 3334 0.004% 99.997%
( 380000, 570000 ] 1393 0.002% 99.998%
( 570000, 860000 ] 700 0.001% 99.999%
( 860000, 1200000 ] 293 0.000% 100.000%
( 1200000, 1900000 ] 196 0.000% 100.000%
( 1900000, 2900000 ] 69 0.000% 100.000%
( 2900000, 4300000 ] 32 0.000% 100.000%
( 4300000, 6500000 ] 10 0.000% 100.000%
```

New, gather_stats=true, 1 second delay between scans. Scans take about 1 second here, so it's spending about 50% of its time scanning. Still, the effect on ops/sec and latency seems to be in the noise.
Median thread ops/sec of 5 runs:
```
Complete in 91.890 s; Rough parallel ops/sec = 870608
Thread ops/sec = 54551

Operation latency (ns):
Count: 80000000 Average: 11311.2629  StdDev: 45.28
Min: 0  Median: 7686.5458  Max: 10018340
Percentiles: P50: 7686.55 P75: 14481.95 P99: 47232.60 P99.9: 79230.18 P99.99: 232998.86
------------------------------------------------------
[ 0, 1 ] 71 0.000% 0.000%
( 2900, 4400 ] 291 0.000% 0.000%
( 4400, 6600 ] 34492060 43.115% 43.116% #########
( 6600, 9900 ] 16727328 20.909% 64.025% ####
( 9900, 14000 ] 7845828 9.807% 73.832% ##
( 14000, 22000 ] 15510654 19.388% 93.220% ####
( 22000, 33000 ] 3216533 4.021% 97.241% #
( 33000, 50000 ] 1680859 2.101% 99.342%
( 50000, 75000 ] 439059 0.549% 99.891%
( 75000, 110000 ] 60540 0.076% 99.967%
( 110000, 170000 ] 14649 0.018% 99.985%
( 170000, 250000 ] 5242 0.007% 99.991%
( 250000, 380000 ] 3260 0.004% 99.995%
( 380000, 570000 ] 1599 0.002% 99.997%
( 570000, 860000 ] 1043 0.001% 99.999%
( 860000, 1200000 ] 471 0.001% 99.999%
( 1200000, 1900000 ] 275 0.000% 100.000%
( 1900000, 2900000 ] 143 0.000% 100.000%
( 2900000, 4300000 ] 60 0.000% 100.000%
( 4300000, 6500000 ] 27 0.000% 100.000%
( 6500000, 9800000 ] 7 0.000% 100.000%
( 9800000, 14000000 ] 1 0.000% 100.000%

Gather stats latency (us):
Count: 46 Average: 980387.5870  StdDev: 60911.18
Min: 879155  Median: 1033777.7778  Max: 1261431
Percentiles: P50: 1033777.78 P75: 1120666.67 P99: 1261431.00 P99.9: 1261431.00 P99.99: 1261431.00
------------------------------------------------------
( 860000, 1200000 ] 45 97.826% 97.826% ####################
( 1200000, 1900000 ] 1 2.174% 100.000%

Most recent cache entry stats:
Number of entries: 1295133
Total charge: 9.88 GB
Average key size: 23.4982
Average charge: 8.00 KB
Unique deleters: 3
```

Reviewed By: mrambacher

Differential Revision: D28295742

Pulled By: pdillinger

fbshipit-source-id: bbc4a552f91ba0fe10e5cc025c42cef5a81f2b95
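For orientation before reading the source that follows, here is a minimal sketch of how the new API described in the summary might be called from application code, modeled on the StatsBody() stats-gathering helper in cache_bench below. The function name GatherStatsExample and the choice of 256 are illustrative only; the authoritative declarations of ApplyToAllEntries and ApplyToAllEntriesOptions are in include/rocksdb/cache.h.

```
// Hedged sketch: gather simple statistics from a live cache with
// Cache::ApplyToAllEntries, mirroring cache_bench's StatsBody() below.
// Assumes `cache` came from NewLRUCache() or similar.
#include <cstddef>
#include <cstdint>
#include <memory>
#include <set>

#include "rocksdb/cache.h"
#include "rocksdb/slice.h"

void GatherStatsExample(
    const std::shared_ptr<ROCKSDB_NAMESPACE::Cache>& cache) {
  using ROCKSDB_NAMESPACE::Cache;
  using ROCKSDB_NAMESPACE::Slice;

  uint64_t total_charge = 0;
  uint64_t entry_count = 0;
  std::set<Cache::DeleterFn> deleters;  // stand-in for "stats by entry origin"

  // The callback sees key, value, charge, and deleter for each entry.
  auto fn = [&](const Slice& /*key*/, void* /*value*/, size_t charge,
                Cache::DeleterFn deleter) {
    total_charge += charge;
    ++entry_count;
    deleters.insert(deleter);
  };

  Cache::ApplyToAllEntriesOptions opts;
  // Roughly how many entries to visit per lock acquire/release; ~256 is the
  // default suggested in the summary above.
  opts.average_entries_per_lock = 256;
  cache->ApplyToAllEntries(fn, opts);
}
```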
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include <cinttypes>
#include <cstdio>
#include <limits>
#include <set>
#include <sstream>

#include "monitoring/histogram.h"
#include "port/port.h"
#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/system_clock.h"
#include "table/block_based/cachable_entry.h"
#include "util/coding.h"
#include "util/hash.h"
#include "util/mutexlock.h"
#include "util/random.h"
#include "util/stop_watch.h"
#include "util/string_util.h"

#ifndef GFLAGS
int main() {
  fprintf(stderr, "Please install gflags to run rocksdb tools\n");
  return 1;
}
#else

#include "util/gflags_compat.h"

using GFLAGS_NAMESPACE::ParseCommandLineFlags;

static constexpr uint32_t KiB = uint32_t{1} << 10;
static constexpr uint32_t MiB = KiB << 10;
static constexpr uint64_t GiB = MiB << 10;

DEFINE_uint32(threads, 16, "Number of concurrent threads to run.");
DEFINE_uint64(cache_size, 1 * GiB,
              "Number of bytes to use as a cache of uncompressed data.");
DEFINE_uint32(num_shard_bits, 6, "shard_bits.");

DEFINE_double(resident_ratio, 0.25,
              "Ratio of keys fitting in cache to keyspace.");
DEFINE_uint64(ops_per_thread, 2000000U, "Number of operations per thread.");
DEFINE_uint32(value_bytes, 8 * KiB, "Size of each value added.");

DEFINE_uint32(skew, 5, "Degree of skew in key selection");
DEFINE_bool(populate_cache, true, "Populate cache before operations");

DEFINE_uint32(lookup_insert_percent, 87,
              "Ratio of lookup (+ insert on not found) to total workload "
              "(expressed as a percentage)");
DEFINE_uint32(insert_percent, 2,
              "Ratio of insert to total workload (expressed as a percentage)");
DEFINE_uint32(lookup_percent, 10,
              "Ratio of lookup to total workload (expressed as a percentage)");
DEFINE_uint32(erase_percent, 1,
              "Ratio of erase to total workload (expressed as a percentage)");
DEFINE_bool(gather_stats, false,
            "Whether to periodically simulate gathering block cache stats, "
            "using one more thread.");
DEFINE_uint32(
    gather_stats_sleep_ms, 1000,
    "How many milliseconds to sleep between each gathering of stats.");

DEFINE_uint32(gather_stats_entries_per_lock, 256,
              "For Cache::ApplyToAllEntries");

DEFINE_bool(use_clock_cache, false, "");

namespace ROCKSDB_NAMESPACE {

class CacheBench;
namespace {
// State shared by all concurrent executions of the same benchmark.
class SharedState {
 public:
  explicit SharedState(CacheBench* cache_bench)
      : cv_(&mu_),
        num_initialized_(0),
        start_(false),
        num_done_(0),
        cache_bench_(cache_bench) {}

  ~SharedState() {}

  port::Mutex* GetMutex() {
    return &mu_;
  }

  port::CondVar* GetCondVar() {
    return &cv_;
  }

  CacheBench* GetCacheBench() const {
    return cache_bench_;
  }

  void IncInitialized() {
    num_initialized_++;
  }

  void IncDone() {
    num_done_++;
  }

  bool AllInitialized() const { return num_initialized_ >= FLAGS_threads; }

  bool AllDone() const { return num_done_ >= FLAGS_threads; }

  void SetStart() {
    start_ = true;
  }

  bool Started() const {
    return start_;
  }

 private:
  port::Mutex mu_;
  port::CondVar cv_;

  uint64_t num_initialized_;
  bool start_;
  uint64_t num_done_;

  CacheBench* cache_bench_;
};

// Per-thread state for concurrent executions of the same benchmark.
struct ThreadState {
  uint32_t tid;
  Random64 rnd;
  SharedState* shared;
  HistogramImpl latency_ns_hist;
  uint64_t duration_us = 0;

  ThreadState(uint32_t index, SharedState* _shared)
      : tid(index), rnd(1000 + index), shared(_shared) {}
};

struct KeyGen {
  char key_data[27];

  Slice GetRand(Random64& rnd, uint64_t max_key) {
    uint64_t raw = rnd.Next();
    // Skew according to setting
    for (uint32_t i = 0; i < FLAGS_skew; ++i) {
      raw = std::min(raw, rnd.Next());
    }
    uint64_t key = FastRange64(raw, max_key);
    // Variable size and alignment
    size_t off = key % 8;
    key_data[0] = char{42};
    EncodeFixed64(key_data + 1, key);
    key_data[9] = char{11};
    EncodeFixed64(key_data + 10, key);
    key_data[18] = char{4};
    EncodeFixed64(key_data + 19, key);
    return Slice(&key_data[off], sizeof(key_data) - off);
  }
};

char* createValue(Random64& rnd) {
  char* rv = new char[FLAGS_value_bytes];
  // Fill with some filler data, and take some CPU time
  for (uint32_t i = 0; i < FLAGS_value_bytes; i += 8) {
    EncodeFixed64(rv + i, rnd.Next());
  }
  return rv;
}

// Different deleters to simulate using deleter to gather
// stats on the code origin and kind of cache entries.
void deleter1(const Slice& /*key*/, void* value) {
  delete[] static_cast<char*>(value);
}
void deleter2(const Slice& /*key*/, void* value) {
  delete[] static_cast<char*>(value);
}
void deleter3(const Slice& /*key*/, void* value) {
  delete[] static_cast<char*>(value);
}
} // namespace

class CacheBench {
  static constexpr uint64_t kHundredthUint64 =
      std::numeric_limits<uint64_t>::max() / 100U;

 public:
  CacheBench()
      : max_key_(static_cast<uint64_t>(FLAGS_cache_size / FLAGS_resident_ratio /
                                       FLAGS_value_bytes)),
        lookup_insert_threshold_(kHundredthUint64 *
                                 FLAGS_lookup_insert_percent),
        insert_threshold_(lookup_insert_threshold_ +
                          kHundredthUint64 * FLAGS_insert_percent),
        lookup_threshold_(insert_threshold_ +
                          kHundredthUint64 * FLAGS_lookup_percent),
        erase_threshold_(lookup_threshold_ +
                         kHundredthUint64 * FLAGS_erase_percent) {
    if (erase_threshold_ != 100U * kHundredthUint64) {
      fprintf(stderr, "Percentages must add to 100.\n");
      exit(1);
    }
    if (FLAGS_use_clock_cache) {
      cache_ = NewClockCache(FLAGS_cache_size, FLAGS_num_shard_bits);
      if (!cache_) {
        fprintf(stderr, "Clock cache not supported.\n");
        exit(1);
      }
    } else {
      cache_ = NewLRUCache(FLAGS_cache_size, FLAGS_num_shard_bits);
    }
  }

  ~CacheBench() {}

  void PopulateCache() {
    Random64 rnd(1);
    KeyGen keygen;
    for (uint64_t i = 0; i < 2 * FLAGS_cache_size; i += FLAGS_value_bytes) {
      cache_->Insert(keygen.GetRand(rnd, max_key_), createValue(rnd),
                     FLAGS_value_bytes, &deleter1);
    }
  }

  bool Run() {
    const auto clock = SystemClock::Default().get();

    PrintEnv();
    SharedState shared(this);
    std::vector<std::unique_ptr<ThreadState> > threads(FLAGS_threads);
    for (uint32_t i = 0; i < FLAGS_threads; i++) {
      threads[i].reset(new ThreadState(i, &shared));
      std::thread(ThreadBody, threads[i].get()).detach();
    }

    HistogramImpl stats_hist;
    std::string stats_report;
    std::thread stats_thread(StatsBody, &shared, &stats_hist, &stats_report);

    uint64_t start_time;
    {
      MutexLock l(shared.GetMutex());
      while (!shared.AllInitialized()) {
        shared.GetCondVar()->Wait();
      }
      // Record start time
      start_time = clock->NowMicros();

      // Start all threads
      shared.SetStart();
      shared.GetCondVar()->SignalAll();

      // Wait threads to complete
      while (!shared.AllDone()) {
        shared.GetCondVar()->Wait();
      }
    }

    // Stats gathering is considered background work. This time measurement
    // is for foreground work, and not really ideal for that. See below.
    uint64_t end_time = clock->NowMicros();
    stats_thread.join();

    // Wall clock time - includes idle time if threads
    // finish at different times (not ideal).
    double elapsed_secs = static_cast<double>(end_time - start_time) * 1e-6;
    uint32_t ops_per_sec = static_cast<uint32_t>(
        1.0 * FLAGS_threads * FLAGS_ops_per_thread / elapsed_secs);
    printf("Complete in %.3f s; Rough parallel ops/sec = %u\n", elapsed_secs,
           ops_per_sec);

    // Total time in each thread (more accurate throughput measure)
    elapsed_secs = 0;
    for (uint32_t i = 0; i < FLAGS_threads; i++) {
      elapsed_secs += threads[i]->duration_us * 1e-6;
    }
    ops_per_sec = static_cast<uint32_t>(1.0 * FLAGS_threads *
                                        FLAGS_ops_per_thread / elapsed_secs);
    printf("Thread ops/sec = %u\n", ops_per_sec);

    printf("\nOperation latency (ns):\n");
    HistogramImpl combined;
    for (uint32_t i = 0; i < FLAGS_threads; i++) {
      combined.Merge(threads[i]->latency_ns_hist);
    }
    printf("%s", combined.ToString().c_str());

    if (FLAGS_gather_stats) {
      printf("\nGather stats latency (us):\n");
      printf("%s", stats_hist.ToString().c_str());
    }

    printf("\n%s", stats_report.c_str());

    return true;
  }

 private:
  std::shared_ptr<Cache> cache_;
  const uint64_t max_key_;
  // Cumulative thresholds in the space of a random uint64_t
  const uint64_t lookup_insert_threshold_;
  const uint64_t insert_threshold_;
  const uint64_t lookup_threshold_;
  const uint64_t erase_threshold_;

  // A benchmark version of gathering stats on an active block cache by
  // iterating over it. The primary purpose is to measure the impact of
  // gathering stats with ApplyToAllEntries on throughput- and
  // latency-sensitive Cache users. Performance of stats gathering is
  // also reported. The last set of gathered stats is also reported, for
  // manual sanity checking for logical errors or other unexpected
  // behavior of cache_bench or the underlying Cache.
  static void StatsBody(SharedState* shared, HistogramImpl* stats_hist,
                        std::string* stats_report) {
    if (!FLAGS_gather_stats) {
      return;
    }
    const auto clock = SystemClock::Default().get();
    uint64_t total_key_size = 0;
    uint64_t total_charge = 0;
    uint64_t total_entry_count = 0;
    std::set<Cache::DeleterFn> deleters;
    StopWatchNano timer(clock);

    for (;;) {
      uint64_t time;
      time = clock->NowMicros();
      uint64_t deadline = time + uint64_t{FLAGS_gather_stats_sleep_ms} * 1000;

      {
        MutexLock l(shared->GetMutex());
        for (;;) {
          if (shared->AllDone()) {
            std::ostringstream ostr;
            ostr << "Most recent cache entry stats:\n"
                 << "Number of entries: " << total_entry_count << "\n"
                 << "Total charge: " << BytesToHumanString(total_charge) << "\n"
                 << "Average key size: "
                 << (1.0 * total_key_size / total_entry_count) << "\n"
                 << "Average charge: "
                 << BytesToHumanString(1.0 * total_charge / total_entry_count)
                 << "\n"
                 << "Unique deleters: " << deleters.size() << "\n";
            *stats_report = ostr.str();
            return;
          }
          if (clock->NowMicros() >= deadline) {
            break;
          }
          uint64_t diff = deadline - std::min(clock->NowMicros(), deadline);
          shared->GetCondVar()->TimedWait(diff + 1);
        }
      }

      // Now gather stats, outside of mutex
      total_key_size = 0;
      total_charge = 0;
      total_entry_count = 0;
      deleters.clear();
      auto fn = [&](const Slice& key, void* /*value*/, size_t charge,
                    Cache::DeleterFn deleter) {
        total_key_size += key.size();
        total_charge += charge;
        ++total_entry_count;
        // Something slightly more expensive as in (future) stats by category
        deleters.insert(deleter);
      };
      timer.Start();
      Cache::ApplyToAllEntriesOptions opts;
      opts.average_entries_per_lock = FLAGS_gather_stats_entries_per_lock;
      shared->GetCacheBench()->cache_->ApplyToAllEntries(fn, opts);
      stats_hist->Add(timer.ElapsedNanos() / 1000);
    }
  }

  static void ThreadBody(ThreadState* thread) {
    SharedState* shared = thread->shared;

    {
      MutexLock l(shared->GetMutex());
      shared->IncInitialized();
      if (shared->AllInitialized()) {
        shared->GetCondVar()->SignalAll();
      }
      while (!shared->Started()) {
        shared->GetCondVar()->Wait();
      }
    }
    thread->shared->GetCacheBench()->OperateCache(thread);

    {
      MutexLock l(shared->GetMutex());
      shared->IncDone();
      if (shared->AllDone()) {
        shared->GetCondVar()->SignalAll();
      }
    }
  }

  void OperateCache(ThreadState* thread) {
    // To use looked-up values
    uint64_t result = 0;
    // To hold handles for a non-trivial amount of time
    Cache::Handle* handle = nullptr;
    KeyGen gen;
    const auto clock = SystemClock::Default().get();
    uint64_t start_time = clock->NowMicros();
    StopWatchNano timer(clock);

    for (uint64_t i = 0; i < FLAGS_ops_per_thread; i++) {
      timer.Start();
      Slice key = gen.GetRand(thread->rnd, max_key_);
      uint64_t random_op = thread->rnd.Next();
      if (random_op < lookup_insert_threshold_) {
        if (handle) {
          cache_->Release(handle);
          handle = nullptr;
        }
        // do lookup
        handle = cache_->Lookup(key);
        if (handle) {
          // do something with the data
          result += NPHash64(static_cast<char*>(cache_->Value(handle)),
                             FLAGS_value_bytes);
        } else {
          // do insert
          cache_->Insert(key, createValue(thread->rnd), FLAGS_value_bytes,
                         &deleter2, &handle);
        }
      } else if (random_op < insert_threshold_) {
        if (handle) {
          cache_->Release(handle);
          handle = nullptr;
        }
        // do insert
        cache_->Insert(key, createValue(thread->rnd), FLAGS_value_bytes,
                       &deleter3, &handle);
      } else if (random_op < lookup_threshold_) {
        if (handle) {
          cache_->Release(handle);
          handle = nullptr;
        }
        // do lookup
        handle = cache_->Lookup(key);
        if (handle) {
          // do something with the data
          result += NPHash64(static_cast<char*>(cache_->Value(handle)),
                             FLAGS_value_bytes);
        }
      } else if (random_op < erase_threshold_) {
        // do erase
        cache_->Erase(key);
      } else {
        // Should be extremely unlikely (noop)
        assert(random_op >= kHundredthUint64 * 100U);
      }
      thread->latency_ns_hist.Add(timer.ElapsedNanos());
    }
    if (handle) {
      cache_->Release(handle);
      handle = nullptr;
    }
    // Ensure computations on `result` are not optimized away.
    if (result == 1) {
      printf("You are extremely unlucky(2). Try again.\n");
      exit(1);
    }
    thread->duration_us = clock->NowMicros() - start_time;
  }

  void PrintEnv() const {
    printf("RocksDB version : %d.%d\n", kMajorVersion, kMinorVersion);
    printf("Number of threads : %u\n", FLAGS_threads);
    printf("Ops per thread : %" PRIu64 "\n", FLAGS_ops_per_thread);
    printf("Cache size : %s\n",
           BytesToHumanString(FLAGS_cache_size).c_str());
    printf("Num shard bits : %u\n", FLAGS_num_shard_bits);
    printf("Max key : %" PRIu64 "\n", max_key_);
    printf("Resident ratio : %g\n", FLAGS_resident_ratio);
    printf("Skew degree : %u\n", FLAGS_skew);
    printf("Populate cache : %d\n", int{FLAGS_populate_cache});
    printf("Lookup+Insert pct : %u%%\n", FLAGS_lookup_insert_percent);
    printf("Insert percentage : %u%%\n", FLAGS_insert_percent);
    printf("Lookup percentage : %u%%\n", FLAGS_lookup_percent);
    printf("Erase percentage : %u%%\n", FLAGS_erase_percent);
    std::ostringstream stats;
    if (FLAGS_gather_stats) {
      stats << "enabled (" << FLAGS_gather_stats_sleep_ms << "ms, "
            << FLAGS_gather_stats_entries_per_lock << "/lock)";
    } else {
      stats << "disabled";
    }
    printf("Gather stats : %s\n", stats.str().c_str());
    printf("----------------------------\n");
  }
};
} // namespace ROCKSDB_NAMESPACE

int main(int argc, char** argv) {
  ParseCommandLineFlags(&argc, &argv, true);

  if (FLAGS_threads <= 0) {
    fprintf(stderr, "threads number <= 0\n");
    exit(1);
  }

  ROCKSDB_NAMESPACE::CacheBench bench;
  if (FLAGS_populate_cache) {
    bench.PopulateCache();
    printf("Population complete\n");
    printf("----------------------------\n");
  }
  if (bench.Run()) {
    return 0;
  } else {
    return 1;
  }
}

#endif // GFLAGS