Sanitize negative request bytes in GenericRateLimiter::Request and clarify API (#9112)

Summary:
Context:
Surprisingly, there isn't any sanitization against a negative `int64_t bytes` in `GenericRateLimiter::Request(int64_t bytes, const Env::IOPriority pri, Statistics* stats)`. A negative `bytes` can be passed in and incorrectly increase `available_bytes_`, because subtracting the negative value from `available_bytes_` grows it, for example [here](https://github.com/facebook/rocksdb/blob/main/util/rate_limiter.cc#L138) and [here](https://github.com/facebook/rocksdb/blob/main/util/rate_limiter.cc#L283).
- Sanitized negative request bytes by rounding them up to 0 (see the sketch after this list)
- Added notes to public and internal API
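
For illustration only, here is a minimal sketch of why an unsanitized negative `bytes` inflates the budget and how the clamp prevents it. This is not the actual RocksDB code; `RequestSketch` and `available_bytes` are simplified stand-ins for the accounting inside `GenericRateLimiter::Request`:

```cpp
#include <algorithm>
#include <cstdint>

// Simplified stand-in for the token accounting in GenericRateLimiter::Request;
// `available_bytes` mimics the limiter's available_bytes_ member.
void RequestSketch(int64_t bytes, int64_t& available_bytes) {
  // The fix: round a negative request up to 0 before any accounting.
  bytes = std::max(static_cast<int64_t>(0), bytes);
  if (available_bytes > 0) {
    // Without the clamp, a negative `bytes` would be picked by std::min and
    // then subtracted, *increasing* available_bytes instead of consuming it.
    int64_t through = std::min(available_bytes, bytes);
    available_bytes -= through;
    bytes -= through;
  }
}
```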

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9112

Test Plan: - Rely on existing tests

Reviewed By: ajkr

Differential Revision: D32085364

Pulled By: hx235

fbshipit-source-id: b1b6066b2dd5ffc7bcbfb07069ca65a33578251b
Hui Xiao 2021-11-04 10:10:32 -07:00 committed by Facebook GitHub Bot
parent dfedc74d82
commit a64c8ca7a8
4 changed files with 8 additions and 2 deletions


@@ -10,6 +10,7 @@
 * Fix ticker WRITE_WITH_WAL("rocksdb.write.wal"), this bug is caused by a bad extra `RecordTick(stats_, WRITE_WITH_WAL)` (at 2 place), this fix remove the extra `RecordTick`s and fix the corresponding test case.
 * EventListener::OnTableFileCreated was previously called with OK status and file_size==0 in cases of no SST file contents written (because there was no content to add) and the empty file deleted before calling the listener. Now the status is Aborted.
 * Fixed a bug in CompactionIterator when write-preared transaction is used. Releasing earliest_snapshot during compaction may cause a SingleDelete to be output after a PUT of the same user key whose seq has been zeroed.
+* Added input sanitization on negative bytes passed into `GenericRateLimiter::Request`.
 ### Behavior Changes
 * `NUM_FILES_IN_SINGLE_COMPACTION` was only counting the first input level files, now it's including all input files.


@@ -66,7 +66,8 @@ class RateLimiter {
   // Requests token to read or write bytes and potentially updates statistics.
   //
   // If this request can not be satisfied, the call is blocked. Caller is
-  // responsible to make sure bytes <= GetSingleBurstBytes().
+  // responsible to make sure bytes <= GetSingleBurstBytes()
+  // and bytes >= 0.
   virtual void Request(const int64_t bytes, const Env::IOPriority pri,
                        Statistics* stats, OpType op_type) {
     if (IsRateLimited(op_type)) {
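
As a usage illustration of the contract documented above, a caller can split a large write into bursts no larger than `GetSingleBurstBytes()`. This is a hedged sketch; `RequestInBursts` and the parameter choices are hypothetical, not part of RocksDB:

```cpp
#include <algorithm>
#include <cstdint>
#include <memory>

#include "rocksdb/env.h"
#include "rocksdb/rate_limiter.h"

// Hypothetical helper: charge `total_bytes` against a limiter in chunks that
// respect the documented contract (0 <= chunk <= GetSingleBurstBytes()).
void RequestInBursts(rocksdb::RateLimiter* limiter, int64_t total_bytes) {
  while (total_bytes > 0) {
    const int64_t chunk =
        std::min(total_bytes, limiter->GetSingleBurstBytes());
    // May block until enough tokens have been refilled for `chunk` bytes.
    limiter->Request(chunk, rocksdb::Env::IO_LOW, /*stats=*/nullptr,
                     rocksdb::RateLimiter::OpType::kWrite);
    total_bytes -= chunk;
  }
}

int main() {
  // 1 MB/s limiter with default refill period and fairness.
  std::unique_ptr<rocksdb::RateLimiter> limiter(
      rocksdb::NewGenericRateLimiter(1 << 20));
  RequestInBursts(limiter.get(), 4 << 20);
  return 0;
}
```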


@@ -9,6 +9,8 @@
 #include "util/rate_limiter.h"
 
+#include <algorithm>
+
 #include "monitoring/statistics.h"
 #include "port/port.h"
 #include "rocksdb/system_clock.h"
@@ -107,6 +109,7 @@ void GenericRateLimiter::SetBytesPerSecond(int64_t bytes_per_second) {
 void GenericRateLimiter::Request(int64_t bytes, const Env::IOPriority pri,
                                  Statistics* stats) {
   assert(bytes <= refill_bytes_per_period_.load(std::memory_order_relaxed));
+  bytes = std::max(static_cast<int64_t>(0), bytes);
   TEST_SYNC_POINT("GenericRateLimiter::Request");
   TEST_SYNC_POINT_CALLBACK("GenericRateLimiter::Request:1",
                            &rate_bytes_per_sec_);


@@ -38,7 +38,8 @@ class GenericRateLimiter : public RateLimiter {
   // Request for token to write bytes. If this request can not be satisfied,
   // the call is blocked. Caller is responsible to make sure
-  // bytes <= GetSingleBurstBytes()
+  // bytes <= GetSingleBurstBytes() and bytes >= 0. Negative bytes
+  // passed in will be rounded up to 0.
   using RateLimiter::Request;
   virtual void Request(const int64_t bytes, const Env::IOPriority pri,
                        Statistics* stats) override;