// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "util/hash.h"

#include <string.h>

#include "port/lang.h"
#include "util/coding.h"
#include "util/hash128.h"
#include "util/math128.h"
#include "util/xxhash.h"
#include "util/xxph3.h"

namespace ROCKSDB_NAMESPACE {

uint64_t (*kGetSliceNPHash64UnseededFnPtr)(const Slice&) = &GetSliceHash64;

uint32_t Hash(const char* data, size_t n, uint32_t seed) {
  // MurmurHash1 - fast but mediocre quality
  // https://github.com/aappleby/smhasher/wiki/MurmurHash1
  //
  const uint32_t m = 0xc6a4a793;
  const uint32_t r = 24;
  const char* limit = data + n;
  uint32_t h = static_cast<uint32_t>(seed ^ (n * m));

  // Pick up four bytes at a time
  while (data + 4 <= limit) {
    uint32_t w = DecodeFixed32(data);
    data += 4;
    h += w;
    h *= m;
    h ^= (h >> 16);
  }

  // Pick up remaining bytes
  switch (limit - data) {
    // Note: The original hash implementation used data[i] << shift, which
    // promotes the char to int and then performs the shift. If the char is
    // negative, the shift is undefined behavior in C++. The hash algorithm is
    // part of the format definition, so we cannot change it; to obtain the
    // same behavior in a legal way we just cast to uint32_t, which will do
    // sign-extension. To guarantee compatibility with architectures where
    // chars are unsigned, we first cast the char to int8_t.
    case 3:
      h += static_cast<uint32_t>(static_cast<int8_t>(data[2])) << 16;
      FALLTHROUGH_INTENDED;
    case 2:
      h += static_cast<uint32_t>(static_cast<int8_t>(data[1])) << 8;
      FALLTHROUGH_INTENDED;
    case 1:
      h += static_cast<uint32_t>(static_cast<int8_t>(data[0]));
      h *= m;
      h ^= (h >> r);
      break;
  }
  return h;
}

// We are standardizing on a preview release of XXH3, because that's
// the best available at time of standardizing.
//
// In testing (mostly Intel Skylake), this hash function is much more
// thorough than Hash32 and is almost universally faster. Hash() only
// seems faster when passing runtime-sized keys of the same small size
// (less than about 24 bytes) thousands of times in a row; this seems
// to allow the branch predictor to work some magic. XXH3's speed is
// much less dependent on branch prediction.
//
// Hashing with a prefix extractor is potentially a common case of
// hashing objects of small, predictable size. We could consider
// bundling hash functions specialized for particular lengths with
// the prefix extractors.
uint64_t Hash64(const char* data, size_t n, uint64_t seed) {
  return XXPH3_64bits_withSeed(data, n, seed);
}

uint64_t Hash64(const char* data, size_t n) {
  // Same as seed = 0
  return XXPH3_64bits(data, n);
}

uint64_t GetSlicePartsNPHash64(const SliceParts& data, uint64_t seed) {
  // TODO(ajkr): use XXH3 streaming APIs to avoid the copy/allocation.
  size_t concat_len = 0;
  for (int i = 0; i < data.num_parts; ++i) {
    concat_len += data.parts[i].size();
  }
  std::string concat_data;
  concat_data.reserve(concat_len);
  for (int i = 0; i < data.num_parts; ++i) {
    concat_data.append(data.parts[i].data(), data.parts[i].size());
  }
  assert(concat_data.size() == concat_len);
  return NPHash64(concat_data.data(), concat_len, seed);
}

Unsigned128 Hash128(const char* data, size_t n, uint64_t seed) {
  auto h = XXH3_128bits_withSeed(data, n, seed);
  return (Unsigned128{h.high64} << 64) | (h.low64);
}

Unsigned128 Hash128(const char* data, size_t n) {
  // Same as seed = 0
  auto h = XXH3_128bits(data, n);
  return (Unsigned128{h.high64} << 64) | (h.low64);
}

}  // namespace ROCKSDB_NAMESPACE