// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under the BSD-style license found in the
// LICENSE file in the root directory of this source tree. An additional grant
// of patent rights can be found in the PATENTS file in the same directory.
// This source code is also licensed under the GPLv2 license found in the
// COPYING file in the root directory of this source tree.
//

#include <sstream>
|
2017-04-06 04:02:00 +02:00
|
|
|
#include "monitoring/perf_context_imp.h"

namespace rocksdb {

#if defined(NPERF_CONTEXT) || !defined(ROCKSDB_SUPPORT_THREAD_LOCAL)
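// With perf context compiled out (NPERF_CONTEXT) or without thread-local
// storage support, a single shared instance is used instead of one per thread.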
PerfContext perf_context;
#else
#if defined(OS_SOLARIS)
__thread PerfContext perf_context_;
#else
__thread PerfContext perf_context;
#endif
#endif
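
// Typical usage, sketched here for reference only (not part of this
// translation unit; `db`, `key` and `value` are placeholders, and
// SetPerfLevel() comes from include/rocksdb/perf_level.h):
//
//   rocksdb::SetPerfLevel(rocksdb::PerfLevel::kEnableTime);
//   rocksdb::perf_context.Reset();
//   db->Get(rocksdb::ReadOptions(), key, &value);
//   std::string counters = rocksdb::perf_context.ToString();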

void PerfContext::Reset() {
#if !defined(NPERF_CONTEXT) && defined(ROCKSDB_SUPPORT_THREAD_LOCAL)
  user_key_comparison_count = 0;
  block_cache_hit_count = 0;
  block_read_count = 0;
  block_read_byte = 0;
  block_read_time = 0;
  block_checksum_time = 0;
  block_decompress_time = 0;
  internal_key_skipped_count = 0;
  internal_delete_skipped_count = 0;
  internal_recent_skipped_count = 0;
  internal_merge_count = 0;
  write_wal_time = 0;

  get_snapshot_time = 0;
  get_from_memtable_time = 0;
  get_from_memtable_count = 0;
  get_post_process_time = 0;
  get_from_output_files_time = 0;
  seek_on_memtable_time = 0;
  seek_on_memtable_count = 0;
  next_on_memtable_count = 0;
  prev_on_memtable_count = 0;
  seek_child_seek_time = 0;
  seek_child_seek_count = 0;
  seek_min_heap_time = 0;
  seek_internal_seek_time = 0;
  find_next_user_entry_time = 0;
  write_pre_and_post_process_time = 0;
  write_memtable_time = 0;
  write_delay_time = 0;
  db_mutex_lock_nanos = 0;
  db_condition_wait_nanos = 0;
  merge_operator_time_nanos = 0;
  read_index_block_nanos = 0;
  read_filter_block_nanos = 0;
  new_table_block_iter_nanos = 0;
  new_table_iterator_nanos = 0;
  block_seek_nanos = 0;
  find_table_nanos = 0;
  bloom_memtable_hit_count = 0;
  bloom_memtable_miss_count = 0;
  bloom_sst_hit_count = 0;
  bloom_sst_miss_count = 0;
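
  // Timings of operations that go through the Env layer (file system calls).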
  env_new_sequential_file_nanos = 0;
  env_new_random_access_file_nanos = 0;
  env_new_writable_file_nanos = 0;
  env_reuse_writable_file_nanos = 0;
  env_new_random_rw_file_nanos = 0;
  env_new_directory_nanos = 0;
  env_file_exists_nanos = 0;
  env_get_children_nanos = 0;
  env_get_children_file_attributes_nanos = 0;
  env_delete_file_nanos = 0;
  env_create_dir_nanos = 0;
  env_create_dir_if_missing_nanos = 0;
  env_delete_dir_nanos = 0;
  env_get_file_size_nanos = 0;
  env_get_file_modification_time_nanos = 0;
  env_rename_file_nanos = 0;
  env_link_file_nanos = 0;
  env_lock_file_nanos = 0;
  env_unlock_file_nanos = 0;
  env_new_logger_nanos = 0;
#endif
}

#define PERF_CONTEXT_OUTPUT(counter)             \
  if (!exclude_zero_counters || (counter > 0)) { \
    ss << #counter << " = " << counter << ", ";  \
  }
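
// Each PERF_CONTEXT_OUTPUT(counter) expansion appends "<counter> = <value>, "
// to the local ostringstream `ss` in ToString() below; when
// exclude_zero_counters is set, counters whose value is zero are skipped.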

std::string PerfContext::ToString(bool exclude_zero_counters) const {
#if defined(NPERF_CONTEXT) || !defined(ROCKSDB_SUPPORT_THREAD_LOCAL)
  return "";
#else
  std::ostringstream ss;
  PERF_CONTEXT_OUTPUT(user_key_comparison_count);
  PERF_CONTEXT_OUTPUT(block_cache_hit_count);
  PERF_CONTEXT_OUTPUT(block_read_count);
  PERF_CONTEXT_OUTPUT(block_read_byte);
  PERF_CONTEXT_OUTPUT(block_read_time);
  PERF_CONTEXT_OUTPUT(block_checksum_time);
  PERF_CONTEXT_OUTPUT(block_decompress_time);
  PERF_CONTEXT_OUTPUT(internal_key_skipped_count);
  PERF_CONTEXT_OUTPUT(internal_delete_skipped_count);
  PERF_CONTEXT_OUTPUT(internal_recent_skipped_count);
  PERF_CONTEXT_OUTPUT(internal_merge_count);
  PERF_CONTEXT_OUTPUT(write_wal_time);
  PERF_CONTEXT_OUTPUT(get_snapshot_time);
  PERF_CONTEXT_OUTPUT(get_from_memtable_time);
  PERF_CONTEXT_OUTPUT(get_from_memtable_count);
  PERF_CONTEXT_OUTPUT(get_post_process_time);
  PERF_CONTEXT_OUTPUT(get_from_output_files_time);
  PERF_CONTEXT_OUTPUT(seek_on_memtable_time);
  PERF_CONTEXT_OUTPUT(seek_on_memtable_count);
  PERF_CONTEXT_OUTPUT(next_on_memtable_count);
  PERF_CONTEXT_OUTPUT(prev_on_memtable_count);
  PERF_CONTEXT_OUTPUT(seek_child_seek_time);
  PERF_CONTEXT_OUTPUT(seek_child_seek_count);
  PERF_CONTEXT_OUTPUT(seek_min_heap_time);
  PERF_CONTEXT_OUTPUT(seek_internal_seek_time);
  PERF_CONTEXT_OUTPUT(find_next_user_entry_time);
  PERF_CONTEXT_OUTPUT(write_pre_and_post_process_time);
  PERF_CONTEXT_OUTPUT(write_memtable_time);
  PERF_CONTEXT_OUTPUT(db_mutex_lock_nanos);
  PERF_CONTEXT_OUTPUT(db_condition_wait_nanos);
  PERF_CONTEXT_OUTPUT(merge_operator_time_nanos);
  PERF_CONTEXT_OUTPUT(write_delay_time);
  PERF_CONTEXT_OUTPUT(read_index_block_nanos);
  PERF_CONTEXT_OUTPUT(read_filter_block_nanos);
  PERF_CONTEXT_OUTPUT(new_table_block_iter_nanos);
  PERF_CONTEXT_OUTPUT(new_table_iterator_nanos);
  PERF_CONTEXT_OUTPUT(block_seek_nanos);
  PERF_CONTEXT_OUTPUT(find_table_nanos);
  PERF_CONTEXT_OUTPUT(bloom_memtable_hit_count);
  PERF_CONTEXT_OUTPUT(bloom_memtable_miss_count);
  PERF_CONTEXT_OUTPUT(bloom_sst_hit_count);
  PERF_CONTEXT_OUTPUT(bloom_sst_miss_count);
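
  // Timings of Env operations, mirroring the counters reset above.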
  PERF_CONTEXT_OUTPUT(env_new_sequential_file_nanos);
  PERF_CONTEXT_OUTPUT(env_new_random_access_file_nanos);
  PERF_CONTEXT_OUTPUT(env_new_writable_file_nanos);
  PERF_CONTEXT_OUTPUT(env_reuse_writable_file_nanos);
  PERF_CONTEXT_OUTPUT(env_new_random_rw_file_nanos);
  PERF_CONTEXT_OUTPUT(env_new_directory_nanos);
  PERF_CONTEXT_OUTPUT(env_file_exists_nanos);
  PERF_CONTEXT_OUTPUT(env_get_children_nanos);
  PERF_CONTEXT_OUTPUT(env_get_children_file_attributes_nanos);
  PERF_CONTEXT_OUTPUT(env_delete_file_nanos);
  PERF_CONTEXT_OUTPUT(env_create_dir_nanos);
  PERF_CONTEXT_OUTPUT(env_create_dir_if_missing_nanos);
  PERF_CONTEXT_OUTPUT(env_delete_dir_nanos);
  PERF_CONTEXT_OUTPUT(env_get_file_size_nanos);
  PERF_CONTEXT_OUTPUT(env_get_file_modification_time_nanos);
  PERF_CONTEXT_OUTPUT(env_rename_file_nanos);
  PERF_CONTEXT_OUTPUT(env_link_file_nanos);
  PERF_CONTEXT_OUTPUT(env_lock_file_nanos);
  PERF_CONTEXT_OUTPUT(env_unlock_file_nanos);
  PERF_CONTEXT_OUTPUT(env_new_logger_nanos);

  return ss.str();
#endif
}
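
// On Solaris the thread-local instance is named perf_context_ (see the
// declaration above), so callers reach it through this accessor.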
#if defined(OS_SOLARIS)
PerfContext *getPerfContext() {
  return &perf_context_;
}
#endif

}  // namespace rocksdb