49623f9c8e
Summary:
**Context:**
Through heap profiling, we discovered that `BlockBasedTableReader` objects can accumulate and lead to high memory usage (e.g., with `max_open_files = -1`). This memory is currently not saved, not tracked, not constrained, and not evictable from cache. As a first step toward improving this, and similar to https://github.com/facebook/rocksdb/pull/8428, this PR tracks an estimate of each `BlockBasedTableReader` object's memory in block cache and fails future reader creation if the memory usage would exceed the space available in the cache at creation time (a configuration sketch is shown below the test plan).

**Summary:**
- Approximate the memory usage of the big memory users (`BlockBasedTable::Rep` and `TableProperties`) in addition to the already estimated ones (filter block / index block / uncompression dictionary)
- Charge all of these memory usages to block cache on `BlockBasedTable::Open()` and release them on `~BlockBasedTable()`, as there is no memory usage fluctuation of concern in between
- Refactor CacheReservationManager (and its call sites) to add the concurrency support needed by BlockBasedTable in this PR

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9748

Test Plan:
- New unit tests
- db bench, `OpenDb`: **-0.52% in ms**
  - Setup: `./db_bench -benchmarks=fillseq -db=/dev/shm/testdb -disable_auto_compactions=1 -write_buffer_size=1048576`
  - Repeated runs with pre-change (feature off) and post-change (feature on) builds, benchmarking `OpenDb`: `./db_bench -benchmarks=readrandom -use_existing_db=1 -db=/dev/shm/testdb -reserve_table_reader_memory=true -file_opening_threads=3 -open_files=-1 -report_open_timing=true | egrep 'OpenDb:'` (remove `-reserve_table_reader_memory=true` when running with the feature off)

#-run | (feature-off) avg milliseconds | std milliseconds | (feature-on) avg milliseconds | std milliseconds | change (%)
-- | -- | -- | -- | -- | --
10 | 11.4018 | 5.95173 | 9.47788 | 1.57538 | -16.87382694
20 | 9.23746 | 0.841053 | 9.32377 | 1.14074 | 0.9343477536
40 | 9.0876 | 0.671129 | 9.35053 | 1.11713 | 2.893283155
80 | 9.72514 | 2.28459 | 9.52013 | 1.0894 | -2.108041632
160 | 9.74677 | 0.991234 | 9.84743 | 1.73396 | 1.032752389
320 | 10.7297 | 5.11555 | 10.547 | 1.97692 | **-1.70275031**
640 | 11.7092 | 2.36565 | 11.7869 | 2.69377 | **0.6635807741**

- db bench on write with cost to cache in WriteBufferManager (just in case this PR's CRM refactoring accidentally slows down anything in WBM), `fillseq`: **+0.54% in micros/op**
  - `./db_bench -benchmarks=fillseq -db=/dev/shm/testdb -disable_auto_compactions=1 -cost_write_buffer_to_cache=true -write_buffer_size=10000000000 | egrep 'fillseq'`

#-run | (pre-PR) avg micros/op | std micros/op | (post-PR) avg micros/op | std micros/op | change (%)
-- | -- | -- | -- | -- | --
10 | 6.15 | 0.260187 | 6.289 | 0.371192 | 2.260162602
20 | 7.28025 | 0.465402 | 7.37255 | 0.451256 | 1.267813605
40 | 7.06312 | 0.490654 | 7.13803 | 0.478676 | **1.060579461**
80 | 7.14035 | 0.972831 | 7.14196 | 0.92971 | **0.02254791432**

- filter bench, `bloom filter`: **-0.78% in ns/key**
  - `./filter_bench -impl=2 -quick -reserve_table_builder_memory=true | grep 'Build avg'`

#-run | (pre-PR) avg ns/key | std ns/key | (post-PR) avg ns/key | std ns/key | change (%)
-- | -- | -- | -- | -- | --
10 | 26.4369 | 0.442182 | 26.3273 | 0.422919 | **-0.4145720565**
20 | 26.4451 | 0.592787 | 26.1419 | 0.62451 | **-1.1465262**

- Crash test: `python3 tools/db_crashtest.py blackbox --reserve_table_reader_memory=1 --cache_size=1` killed as normal

Reviewed By: ajkr

Differential Revision: D35136549

Pulled By: hx235

fbshipit-source-id: 146978858d0f900f43f4eb09bfd3e83195e3be28
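
A minimal sketch of opting into this charging behavior is shown below. The exact option name is an assumption inferred from the `-reserve_table_reader_memory` db_bench flag in the test plan, not a confirmed API; the helper function is hypothetical.

```cpp
#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: charge an estimate of each BlockBasedTableReader's memory to the
// block cache, so that opening a table fails when the cache cannot
// accommodate the reservation.
rocksdb::Options MakeOptionsWithTableReaderCharging() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = rocksdb::NewLRUCache(1 << 30);  // 1 GiB cache
  table_options.reserve_table_reader_memory = true;  // assumed option name
  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return options;
}
```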
138 lines
5.3 KiB
C++
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#pragma once

#include <array>
#include <cstdint>
#include <memory>
#include <type_traits>
#include <unordered_map>

#include "rocksdb/cache.h"

namespace ROCKSDB_NAMESPACE {

// Classifications of block cache entries, for reporting statistics
// Adding new enum to this class requires corresponding updates to
// kCacheEntryRoleToCamelString and kCacheEntryRoleToHyphenString
enum class CacheEntryRole {
  // Block-based table data block
  kDataBlock,
  // Block-based table filter block (full or partitioned)
  kFilterBlock,
  // Block-based table metadata block for partitioned filter
  kFilterMetaBlock,
  // Block-based table deprecated filter block (old "block-based" filter)
  kDeprecatedFilterBlock,
  // Block-based table index block
  kIndexBlock,
  // Other kinds of block-based table block
  kOtherBlock,
  // WriteBufferManager reservations to account for memtable usage
  kWriteBuffer,
  // BlockBasedTableBuilder reservations to account for
  // compression dictionary building buffer's memory usage
  kCompressionDictionaryBuildingBuffer,
  // Filter reservations to account for
  // (new) bloom and ribbon filter construction's memory usage
  kFilterConstruction,
  // BlockBasedTableReader reservations to account for
  // its memory usage
  kBlockBasedTableReader,
  // Default bucket, for miscellaneous cache entries. Do not use for
  // entries that could potentially add up to large usage.
  kMisc,
};
constexpr uint32_t kNumCacheEntryRoles =
    static_cast<uint32_t>(CacheEntryRole::kMisc) + 1;

extern std::array<const char*, kNumCacheEntryRoles>
    kCacheEntryRoleToCamelString;
extern std::array<const char*, kNumCacheEntryRoles>
    kCacheEntryRoleToHyphenString;

// To associate cache entries with their role, we use a hack on the
// existing Cache interface. Because the deleter of an entry can authenticate
// the code origin of an entry, we can elaborate the choice of deleter to
// also encode role information, without inferring false role information
// from entries not choosing to encode a role.
//
// The rest of this file is for handling mappings between deleters and
// roles.

// To infer a role from a deleter, the deleter must be registered. This
// can be done "manually" with this function. This function is thread-safe,
// and the registration mappings go into private but static storage. (Note
// that DeleterFn is a function pointer, not std::function. Registrations
// should not be too many.)
void RegisterCacheDeleterRole(Cache::DeleterFn fn, CacheEntryRole role);

// Gets a copy of the registered deleter -> role mappings. This is the only
// function for reading the mappings made with RegisterCacheDeleterRole.
// Why only this interface for reading?
// * This function has to be thread safe, which could incur substantial
//   overhead. We should not pay this overhead for every deleter look-up.
// * This is suitable for preparing for batch operations, like with
//   CacheEntryStatsCollector.
// * The number of mappings should be sufficiently small (dozens).
std::unordered_map<Cache::DeleterFn, CacheEntryRole> CopyCacheDeleterRoleMap();
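
// Illustrative sketch (not part of the original header; names such as
// `entry_deleter_fn` are hypothetical): a stats collector can take one
// snapshot of the mappings and then classify entries by their deleter
// without further synchronization:
//
//   std::unordered_map<Cache::DeleterFn, CacheEntryRole> role_map =
//       CopyCacheDeleterRoleMap();
//   auto it = role_map.find(entry_deleter_fn);
//   CacheEntryRole role =
//       it != role_map.end() ? it->second : CacheEntryRole::kMisc;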

// ************************************************************** //
// An automatic registration infrastructure. This enables code
// to simply ask for a deleter associated with a particular type
// and role, and registration is automatic. In a sense, this is
// a small dependency injection infrastructure, because linking
// in new deleter instantiations is essentially sufficient for
// making stats collection (using CopyCacheDeleterRoleMap) aware
// of them.

namespace cache_entry_roles_detail {

template <typename T, CacheEntryRole R>
struct RegisteredDeleter {
  RegisteredDeleter() { RegisterCacheDeleterRole(Delete, R); }

  // These have global linkage to help ensure compiler optimizations do not
  // break uniqueness for each <T,R>
  static void Delete(const Slice& /* key */, void* value) {
    // Supports T == Something[], unlike delete operator
    std::default_delete<T>()(
        static_cast<typename std::remove_extent<T>::type*>(value));
  }
};

template <CacheEntryRole R>
struct RegisteredNoopDeleter {
  RegisteredNoopDeleter() { RegisterCacheDeleterRole(Delete, R); }

  static void Delete(const Slice& /* key */, void* /* value */) {
    // Here was `assert(value == nullptr);` but we can also put pointers
    // to static data in Cache, for testing at least.
  }
};

}  // namespace cache_entry_roles_detail

// Get an automatically registered deleter for value type T and role R.
// Based on C++ semantics, registration is invoked exactly once in a
// thread-safe way on first call to this function, for each <T, R>.
template <typename T, CacheEntryRole R>
Cache::DeleterFn GetCacheEntryDeleterForRole() {
  static cache_entry_roles_detail::RegisteredDeleter<T, R> reg;
  return reg.Delete;
}
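
// Illustrative sketch (not part of the original header; names such as
// `cache`, `key`, and `block_charge` are hypothetical): a call site that
// owns a heap-allocated value can hand it to the cache with a role-tagged
// deleter so that stats collection attributes it to kDataBlock:
//
//   Block* block = /* ... */;
//   Status s = cache->Insert(
//       key, block, block_charge,
//       GetCacheEntryDeleterForRole<Block, CacheEntryRole::kDataBlock>());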

// Get an automatically registered no-op deleter (value should be nullptr)
// and associated with role R. This is used for Cache "reservation" entries
// such as for WriteBufferManager.
template <CacheEntryRole R>
Cache::DeleterFn GetNoopDeleterForRole() {
  static cache_entry_roles_detail::RegisteredNoopDeleter<R> reg;
  return reg.Delete;
}
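
// Illustrative sketch (not part of the original header; names such as
// `reservation_key` and `reserved_bytes` are hypothetical): a "reservation"
// entry carries no real value; it only adds a charge so that memory held
// elsewhere (e.g. by a BlockBasedTableReader) counts against cache capacity:
//
//   Status s = cache->Insert(
//       reservation_key, /*value=*/nullptr, /*charge=*/reserved_bytes,
//       GetNoopDeleterForRole<CacheEntryRole::kBlockBasedTableReader>());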

}  // namespace ROCKSDB_NAMESPACE