Compare commits


18 Commits

Author SHA1 Message Date
Andrew Kryczka
abd4b1ff15 bump version and update HISTORY.md for 6.15.5 2021-02-05 17:01:51 -08:00
Andrew Kryczka
5893d5e2a1 Allow range deletions in *TransactionDB only when safe (#7929)
Summary:
Explicitly reject all range deletions on `TransactionDB` or `OptimisticTransactionDB`, except when the user provides sufficient promises that allow us to proceed safely. The necessary promises are described in the API doc for `TransactionDB::DeleteRange()`. There is currently no way to provide enough promises to make it safe in `OptimisticTransactionDB`.
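
The gating amounts to a guard at the write path. A hedged Python sketch (not the actual RocksDB C++ code; `db_kind` and `caller_promises_safety` are invented names for illustration):

```python
class Status:
    """Minimal stand-in for rocksdb::Status."""
    def __init__(self, ok, msg=""):
        self._ok, self.msg = ok, msg

    def ok(self):
        return self._ok

def delete_range(db_kind, caller_promises_safety=False):
    """Reject range deletions on transactional DBs unless the caller has
    made the safety promises described for DeleteRange()."""
    if db_kind == "OptimisticTransactionDB":
        # No set of promises currently makes this safe.
        return Status(False, "DeleteRange is unsupported")
    if db_kind == "TransactionDB" and not caller_promises_safety:
        return Status(False, "DeleteRange requires safety promises")
    return Status(True)
```

Plain DBs are unaffected; only the transactional wrappers gain the guard.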

Fixes https://github.com/facebook/rocksdb/issues/7913.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7929

Test Plan: unit tests covering the cases it's permitted/rejected

Reviewed By: ltamasi

Differential Revision: D26240254

Pulled By: ajkr

fbshipit-source-id: 2834a0ce64cc3e4c3799e35b885a5e79c2f4f6d9
2021-02-05 17:00:34 -08:00
Andrew Kryczka
fbed72f03c bump version and update HISTORY.md for 6.15.4 2021-01-21 12:41:08 -08:00
Andrew Kryczka
1420cbf09d workaround race conditions during PeriodicWorkScheduler registration (#7888)
Summary:
This provides a workaround for two race conditions that will be fixed in
a more sophisticated way later. This PR:

(1) Makes the client serialize calls to `Timer::Start()` and `Timer::Shutdown()` (see https://github.com/facebook/rocksdb/issues/7711). The long-term fix will be to make those functions thread-safe.
(2) Makes `PeriodicWorkScheduler` atomically add/cancel work together with starting/shutting down its `Timer`. The long-term fix will be for `Timer` API to offer more specialized APIs so the client will not need to synchronize.
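
The workaround in both points reduces to one mutex covering the timer transition together with the add/cancel of work. A hedged Python sketch of the shape of the fix (the class and method names here are invented; this is not the RocksDB implementation):

```python
import threading

class Timer:
    """Stand-in for a timer whose Start()/Shutdown() are not thread-safe."""
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def shutdown(self):
        self.running = False

class SerializingScheduler:
    """Serializes timer start/shutdown and registers or cancels work
    atomically with those transitions, as the workaround requires."""
    def __init__(self):
        self.timer = Timer()
        self.mutex = threading.Lock()
        self.work = []

    def register(self, fn):
        with self.mutex:  # one critical section: add work + start timer
            self.work.append(fn)
            if not self.timer.running:
                self.timer.start()

    def unregister_all(self):
        with self.mutex:  # cancel work + shutdown together
            self.work.clear()
            self.timer.shutdown()
```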

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7888

Test Plan: ran the repro provided in https://github.com/facebook/rocksdb/issues/7881

Reviewed By: jay-zhuang

Differential Revision: D25990891

Pulled By: ajkr

fbshipit-source-id: a97fdaebbda6d7db7ddb1b146738b68c16c5be38
2021-01-21 12:40:16 -08:00
Ramkumar Vadivelu
6002cce223 Bump the version file to 6.15.3 and update HISTORY.md with fix information. 2021-01-07 09:32:27 -08:00
Adam Retter
c9e00cce65 Attempt to fix build errors around missing compression library includes (#7803)
Summary:
This fixes an issue introduced in https://github.com/facebook/rocksdb/pull/7769 that caused many errors about missing compression libraries to be displayed during compilation, although compilation actually succeeded. This PR fixes the compilation so the compression libraries are only introduced where strictly needed.

It likely needs to be merged into the same branches as https://github.com/facebook/rocksdb/pull/7769 which I think are:
1. master
2. 6.15.fb
3. 6.16.fb

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7803

Reviewed By: ramvadiv

Differential Revision: D25733743

Pulled By: pdillinger

fbshipit-source-id: 6c04f6864b2ff4a345841d791a89b19e0e3f5bf7
2021-01-07 09:27:26 -08:00
Ramkumar Vadivelu
ca8082762f Bump the version file to 6.15.2 and update HISTORY.md with fix information.
Also, fix a merge issue with .circleci/config.yml (missing
install-cmake-on-macos in commands section)
2020-12-22 09:43:58 -08:00
Adam Retter
12c87b16c6 Fix various small build issues, Java API naming (#7776)
Summary:
* Compatibility with older GCC.
* Compatibility with older jemalloc libraries.
* Remove Docker warning when building i686 binaries.
* Fix case inconsistency in Java API naming (potential update to HISTORY.md deferred)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7776

Reviewed By: akankshamahajan15

Differential Revision: D25607235

Pulled By: pdillinger

fbshipit-source-id: 7ab0fb7fa7a34e97ed0bec991f5081acb095777d
2020-12-22 09:29:16 -08:00
Adam Retter
b39b6d7711 Fix jemalloc compilation problem on macOS (#7624)
Summary:
Closes https://github.com/facebook/rocksdb/issues/7269

I have only tested this on macOS, let's see what CI makes of it for the other platforms...

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7624

Reviewed By: ajkr

Differential Revision: D24834305

Pulled By: pdillinger

fbshipit-source-id: ba818d8424297ccebd18ed854b044764c2dbab5f
2020-12-22 09:27:57 -08:00
Adam Retter
ceee8ad97d Fix failing RocksJava test compilation and add CI (#7769)
Summary:
* Fixes a Java test compilation issue on macOS
* Cleans up CircleCI RocksDBJava build config
* Adds CircleCI for RocksDBJava on macOS
* Ensures backwards compatibility with older macOS via CircleCI
* Fixes RocksJava static builds ordering
* Adds missing RocksJava static builds to CircleCI for Mac and Linux
* Improves parallelism in RocksJava builds
* Reduces the size of the machines used for RocksJava CircleCI as they don't need to be so large (Saves credits)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7769

Reviewed By: akankshamahajan15

Differential Revision: D25601293

Pulled By: pdillinger

fbshipit-source-id: 0a0bb9906f65438fe143487d78e37e1947364d08
2020-12-22 09:22:47 -08:00
anand76
c43be4f30c Use default FileSystem in GenerateUniqueId (#7672)
Summary:
Use `FileSystem::Default` to read `/proc/sys/kernel/uuid`, so it works for `Env`s with a remote `FileSystem` as well.
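
The idea is dependency injection: read through the configured filesystem abstraction rather than calling the OS directly. A hedged Python sketch (toy classes, not the RocksDB API):

```python
class RemoteFS:
    """Toy stand-in for an Env whose FileSystem is remote: file contents are
    served by the FileSystem object itself, not by the local OS."""
    def __init__(self, files):
        self.files = files

    def read_file(self, path):
        return self.files[path]

def generate_unique_id(fs):
    # Read through the provided FileSystem abstraction instead of opening
    # the path with the OS directly, so remote filesystems work too.
    try:
        return fs.read_file("/proc/sys/kernel/uuid").strip()
    except (KeyError, OSError):
        return ""
```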

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7672

Reviewed By: riversand963

Differential Revision: D24998702

Pulled By: anand1976

fbshipit-source-id: fa95c1d70f0e4ed17561201f047aa055046d06c3
2020-12-10 19:14:32 -08:00
Neil Mitchell
d15cc91241 Make the TARGETS file Starlark compliant (#7743)
Summary:
Buck TARGETS files are sometimes parsed with Python, and sometimes with Starlark - this TARGETS file was not Starlark compliant. In Starlark you can't have a top-level if in a TARGETS file, but you can have a ternary `a if b else c`. Therefore I converted TARGETS, and updated the generator for it.
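
The conversion is mechanical: a top-level `if` guarding a rule becomes a trailing conditional expression. Starlark shares Python's conditional-expression syntax, so the pattern can be shown directly in Python (`cpp_binary` below is a dummy stand-in for the Buck rule, not the real macro):

```python
is_opt_mode = False

def cpp_binary(name, **kwargs):
    """Dummy stand-in for Buck's cpp_binary rule; returns a dict so the
    effect of the conditional is observable."""
    return {"name": name}

# Before (valid Python, but not Starlark -- top-level `if` is disallowed):
#   if not is_opt_mode:
#       cpp_binary(name="c_test_bin")
#
# After (Starlark-compliant conditional expression):
target = cpp_binary(name="c_test_bin") if not is_opt_mode else None
```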

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7743

Reviewed By: pdillinger

Differential Revision: D25342587

Pulled By: ndmitchell

fbshipit-source-id: 88cbe8632071a45a3ea8675812967614c62c78d1
2020-12-10 11:12:58 -08:00
anand76
eea9a027f6 Ensure that MultiGet works properly with compressed cache (#7756)
Summary:
Ensure that when direct IO is enabled and a compressed block cache is
configured, MultiGet inserts compressed data blocks into the compressed
block cache.
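
The behavior being ensured can be sketched in a few lines of Python (a simulation with invented names, not the MultiGet implementation): blocks read from storage are inserted into the compressed block cache, so a second lookup hits the cache instead of re-reading from disk.

```python
def multi_get(keys, storage, compressed_cache, read_counter):
    """Fetch blocks for keys; read_counter is a one-element list used as a
    mutable disk-read counter."""
    results = []
    for key in keys:
        block = compressed_cache.get(key)
        if block is None:
            read_counter[0] += 1           # simulated disk read
            block = storage[key]
            compressed_cache[key] = block  # the insertion the fix ensures
        results.append(block)
    return results
```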

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7756

Test Plan: Add unit test to db_basic_test

Reviewed By: cheng-chang

Differential Revision: D25416240

Pulled By: anand1976

fbshipit-source-id: 75d57526370c9c0a45ff72651f3278dbd8a9086f
2020-12-09 22:57:35 -08:00
Andrew Kryczka
4b879eeb9d update HISTORY.md and bump version to 6.15.1 2020-12-01 15:00:23 -08:00
Andrew Kryczka
cac1fc4dfb Fix kPointInTimeRecovery handling of truncated WAL (#7701)
Summary:
WAL may be truncated to an incomplete record due to crash while writing
the last record or corruption. In the former case, no hole will be
produced since no ACK'd data was lost. In the latter case, a hole could
be produced without this PR since we proceeded to recover the next WAL
as if nothing happened. This PR changes the record reading code to
always report a corruption for incomplete records in
`kPointInTimeRecovery` mode, and the upper layer will only ignore them
if the next WAL has consecutive seqnum (i.e., we are guaranteed no
hole).

While this solves the hole problem for the case of incomplete
records, the possibility is still there if the WAL is corrupted by
truncation to an exact record boundary.
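
The recovery policy described above can be sketched as follows (a simplified Python model, assuming each WAL is summarized as its list of consecutive sequence numbers plus a flag for an incomplete tail record; the real code works on log records, not these tuples):

```python
def recover(wals):
    """wals: list of (seqnums, tail_incomplete) pairs, one per WAL, in order.
    An incomplete tail record is treated as a corruption; the upper layer
    ignores it only when the next WAL's first seqnum is consecutive,
    proving there is no hole."""
    recovered = []
    for i, (seqnums, tail_incomplete) in enumerate(wals):
        recovered.extend(seqnums)
        if tail_incomplete and i + 1 < len(wals):
            next_seqnums = wals[i + 1][0]
            if next_seqnums and recovered and next_seqnums[0] != recovered[-1] + 1:
                raise ValueError("corruption: possible hole after truncated record")
    return recovered
```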

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7701

Test Plan:
Interestingly there already was a test for this case
(`DBWALTestWithParams.kPointInTimeRecovery`); it just had a typo bug in
the verification that prevented it from noticing holes in recovery.

Reviewed By: anand1976

Differential Revision: D25111765

Pulled By: ajkr

fbshipit-source-id: 5e330b13b1ee2b5be096cea9d0ff6075843e57b6
2020-12-01 14:59:42 -08:00
anand76
a5a5be4b4c Fix initialization order of DBOptions and kHostnameForDbHostId (#7702)
Summary:
Fix initialization order of DBOptions and kHostnameForDbHostId by making the initialization of the latter static rather than dynamic.
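
The commit's fix makes the constant's initialization static (order-independent) rather than dynamic. A related, commonly used remedy for C++'s static-initialization-order problem is lazy construction behind an accessor, sketched here in Python with invented names (the sentinel string is illustrative, not necessarily RocksDB's value):

```python
_hostname_sentinel = None

def hostname_for_db_host_id():
    """Lazy accessor: the value is built on first use, so no caller can
    observe it uninitialized regardless of construction order."""
    global _hostname_sentinel
    if _hostname_sentinel is None:
        _hostname_sentinel = "__hostname__"  # illustrative sentinel value
    return _hostname_sentinel

class DBOptions:
    def __init__(self):
        # Safe even if a DBOptions is constructed during program startup,
        # before other module-level initialization has run.
        self.db_host_id = hostname_for_db_host_id()
```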

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7702

Reviewed By: ajkr

Differential Revision: D25111633

Pulled By: anand1976

fbshipit-source-id: 7afad834a66e40bcd8694a43b40d378695212224
2020-11-21 07:13:37 -08:00
Ramkumar Vadivelu
46923e5ca4 Fixed a formatting issue in HISTORY.md 2020-11-17 14:18:54 -08:00
Yanqin Jin
518431d78e cherry-picked PR-7680 and fixed HISTORY.md 2020-11-17 07:16:30 -08:00
33 changed files with 1095 additions and 419 deletions


@@ -17,6 +17,13 @@ commands:
command: |
HOMEBREW_NO_AUTO_UPDATE=1 brew install pyenv
install-cmake-on-macos:
steps:
- run:
name: Install cmake on macos
command: |
HOMEBREW_NO_AUTO_UPDATE=1 brew install cmake
increase-max-open-files-on-macos:
steps:
- run:
@@ -27,10 +34,14 @@ commands:
sudo launchctl limit maxfiles 1048576
pre-steps:
parameters:
python-version:
default: "3.5.9"
type: string
steps:
- checkout
- run: pyenv install --skip-existing 3.5.9
- run: pyenv global 3.5.9
- run: pyenv install --skip-existing <<parameters.python-version>>
- run: pyenv global <<parameters.python-version>>
- run:
name: Setup Environment Variables
command: |
@@ -39,6 +50,11 @@ commands:
echo "export SKIP_FORMAT_BUCK_CHECKS=1" >> $BASH_ENV
echo "export PRINT_PARALLEL_OUTPUTS=1" >> $BASH_ENV
pre-steps-macos:
steps:
- pre-steps:
python-version: "3.6.0"
post-steps:
steps:
- slack/status: *notify-on-master-failure
@@ -89,15 +105,27 @@ executors:
jobs:
build-macos:
macos:
xcode: 11.3.0
xcode: 9.4.1
steps:
- increase-max-open-files-on-macos
- install-pyenv-on-macos
- pre-steps
- install-gflags-on-macos
- pre-steps-macos
- run: ulimit -S -n 1048576 && OPT=-DCIRCLECI make V=1 J=32 -j32 check | .circleci/cat_ignore_eagain
- post-steps
build-macos-cmake:
macos:
xcode: 9.4.1
steps:
- increase-max-open-files-on-macos
- install-pyenv-on-macos
- install-cmake-on-macos
- install-gflags-on-macos
- pre-steps-macos
- run: ulimit -S -n 1048576 && (mkdir build && cd build && cmake -DWITH_GFLAGS=0 .. && make V=1 -j32) | .circleci/cat_ignore_eagain
- post-steps
build-linux:
machine:
image: ubuntu-1604:202007-01
@@ -336,19 +364,96 @@ jobs:
build-linux-java:
machine:
image: ubuntu-1604:202007-01
resource_class: 2xlarge
resource_class: large
environment:
JAVA_HOME: /usr/lib/jvm/java-1.8.0-openjdk-amd64
steps:
- pre-steps
- install-gflags
- run:
name: "Build RocksDBJava"
name: "Set Java Environment"
command: |
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH
echo "JAVA_HOME=${JAVA_HOME}"
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> $BASH_ENV
which java && java -version
which javac && javac -version
make V=1 J=32 -j32 rocksdbjava jtest | .circleci/cat_ignore_eagain
- run:
name: "Build RocksDBJava Shared Library"
command: make V=1 J=8 -j8 rocksdbjava | .circleci/cat_ignore_eagain
- run:
name: "Test RocksDBJava"
command: make V=1 J=8 -j8 jtest | .circleci/cat_ignore_eagain
- post-steps
build-linux-java-static:
machine:
image: ubuntu-1604:202007-01
resource_class: large
environment:
JAVA_HOME: /usr/lib/jvm/java-1.8.0-openjdk-amd64
steps:
- pre-steps
- install-gflags
- run:
name: "Set Java Environment"
command: |
echo "JAVA_HOME=${JAVA_HOME}"
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> $BASH_ENV
which java && java -version
which javac && javac -version
- run:
name: "Build RocksDBJava Static Library"
command: make V=1 J=8 -j8 rocksdbjavastatic | .circleci/cat_ignore_eagain
- post-steps
build-macos-java:
macos:
xcode: 9.4.1
resource_class: medium
environment:
JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home
steps:
- increase-max-open-files-on-macos
- install-pyenv-on-macos
- install-gflags-on-macos
- pre-steps-macos
- run:
name: "Set Java Environment"
command: |
echo "JAVA_HOME=${JAVA_HOME}"
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> $BASH_ENV
which java && java -version
which javac && javac -version
- run:
name: "Build RocksDBJava Shared Library"
command: make V=1 J=8 -j8 rocksdbjava | .circleci/cat_ignore_eagain
- run:
name: "Test RocksDBJava"
command: make V=1 J=8 -j8 jtest | .circleci/cat_ignore_eagain
- post-steps
build-macos-java-static:
macos:
xcode: 9.4.1
resource_class: medium
environment:
JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home
steps:
- increase-max-open-files-on-macos
- install-pyenv-on-macos
- install-gflags-on-macos
- install-cmake-on-macos
- pre-steps-macos
- run:
name: "Set Java Environment"
command: |
echo "JAVA_HOME=${JAVA_HOME}"
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> $BASH_ENV
which java && java -version
which javac && javac -version
- run:
name: "Build RocksDBJava Static Library"
command: make V=1 J=8 -j8 rocksdbjavastatic | .circleci/cat_ignore_eagain
- post-steps
build-examples:
@@ -460,6 +565,9 @@ workflows:
build-java:
jobs:
- build-linux-java
- build-linux-java-static
- build-macos-java
- build-macos-java-static
build-examples:
jobs:
- build-examples


@@ -10,7 +10,7 @@ $process = Start-Process "${PWD}\vs_installer.exe" -ArgumentList $VS_INSTALL_ARG
Remove-Item -Path vs_installer.exe -Force
$exitCode = $process.ExitCode
if (($exitCode -ne 0) -and ($exitCode -ne 3010)) {
echo "VS 2017 installer exited with code $exitCode, which should be one of [0, 3010]."
echo "VS 2015 installer exited with code $exitCode, which should be one of [0, 3010]."
curl.exe --retry 3 -kL $COLLECT_DOWNLOAD_LINK --output Collect.exe
if ($LASTEXITCODE -ne 0) {
echo "Download of the VS Collect tool failed."


@@ -1,4 +1,28 @@
# Rocksdb Change Log
## 6.15.5 (02/05/2021)
### Bug Fixes
* Since 6.15.0, `TransactionDB` returns error `Status`es from calls to `DeleteRange()` and calls to `Write()` where the `WriteBatch` contains a range deletion. Previously such operations may have succeeded while not providing the expected transactional guarantees. There are certain cases where range deletion can still be used on such DBs; see the API doc on `TransactionDB::DeleteRange()` for details.
* `OptimisticTransactionDB` now returns error `Status`es from calls to `DeleteRange()` and calls to `Write()` where the `WriteBatch` contains a range deletion. Previously such operations may have succeeded while not providing the expected transactional guarantees.
## 6.15.4 (01/21/2021)
### Bug Fixes
* Fix a race condition between DB startups and shutdowns in managing the periodic background worker threads. One effect of this race condition could be the process being terminated.
## 6.15.3 (01/07/2021)
### Bug Fixes
* For Java builds, fix errors due to missing compression library includes.
## 6.15.2 (12/22/2020)
### Bug Fixes
* Fix failing RocksJava test compilation and add CI jobs
* Fix jemalloc compilation issue on macOS
* Fix build issues: compatibility with older GCC and older jemalloc libraries, and a Docker warning when building i686 binaries
## 6.15.1 (12/01/2020)
### Bug Fixes
* Truncated WALs ending in incomplete records can no longer produce gaps in the recovered data when `WALRecoveryMode::kPointInTimeRecovery` is used. Gaps are still possible when WALs are truncated exactly on record boundaries.
* Fix a bug where compressed blocks read by MultiGet are not inserted into the compressed block cache when use_direct_reads = true.
## 6.15.0 (11/13/2020)
### Bug Fixes
* Fixed a bug in the following combination of features: indexes with user keys (`format_version >= 3`), indexes are partitioned (`index_type == kTwoLevelIndexSearch`), and some index partitions are pinned in memory (`BlockBasedTableOptions::pin_l0_filter_and_index_blocks_in_cache`). The bug could cause keys to be truncated when read from the index leading to wrong read results or other unexpected behavior.
@@ -13,6 +37,9 @@
* Fixed MultiGet bugs where it doesn't return valid data with a user-defined timestamp.
* Fixed a potential bug caused by evaluating `TableBuilder::NeedCompact()` before `TableBuilder::Finish()` in compaction job. For example, the `NeedCompact()` method of `CompactOnDeletionCollector` returned by built-in `CompactOnDeletionCollectorFactory` requires `BlockBasedTable::Finish()` to return the correct result. The bug can cause a compaction-generated file not to be marked for future compaction based on deletion ratio.
* Fixed a seek issue with prefix extractor and timestamp.
* Fixed a bug in encoding and parsing `BlockBasedTableOptions::read_amp_bytes_per_bit` as a 64-bit integer.
* Fixed the logic of populating the native data structure for `read_amp_bytes_per_bit` during OPTIONS file parsing on big-endian architectures. Without this fix, the original code introduced in PR7659, when running on a big-endian machine, could mistakenly store `read_amp_bytes_per_bit` (a uint32) in little-endian format, and future accesses to `read_amp_bytes_per_bit` would give wrong values. Little-endian architectures are not affected.
### Public API Change
* Deprecate `BlockBasedTableOptions::pin_l0_filter_and_index_blocks_in_cache` and `BlockBasedTableOptions::pin_top_level_index_and_filter`. These options still take effect until users migrate to the replacement APIs in `BlockBasedTableOptions::metadata_cache_options`. Migration guidance can be found in the API comments on the deprecated options.


@@ -2111,73 +2111,73 @@ ifeq ($(PLATFORM), OS_OPENBSD)
ROCKSDB_JAR = rocksdbjni-$(ROCKSDB_JAVA_VERSION)-openbsd$(ARCH).jar
endif
libz.a:
-rm -rf zlib-$(ZLIB_VER)
ifeq (,$(wildcard ./zlib-$(ZLIB_VER).tar.gz))
zlib-$(ZLIB_VER).tar.gz:
curl --fail --output zlib-$(ZLIB_VER).tar.gz --location ${ZLIB_DOWNLOAD_BASE}/zlib-$(ZLIB_VER).tar.gz
endif
ZLIB_SHA256_ACTUAL=`$(SHA256_CMD) zlib-$(ZLIB_VER).tar.gz | cut -d ' ' -f 1`; \
if [ "$(ZLIB_SHA256)" != "$$ZLIB_SHA256_ACTUAL" ]; then \
echo zlib-$(ZLIB_VER).tar.gz checksum mismatch, expected=\"$(ZLIB_SHA256)\" actual=\"$$ZLIB_SHA256_ACTUAL\"; \
exit 1; \
fi
libz.a: zlib-$(ZLIB_VER).tar.gz
-rm -rf zlib-$(ZLIB_VER)
tar xvzf zlib-$(ZLIB_VER).tar.gz
cd zlib-$(ZLIB_VER) && CFLAGS='-fPIC ${EXTRA_CFLAGS}' LDFLAGS='${EXTRA_LDFLAGS}' ./configure --static && $(MAKE)
cp zlib-$(ZLIB_VER)/libz.a .
libbz2.a:
-rm -rf bzip2-$(BZIP2_VER)
ifeq (,$(wildcard ./bzip2-$(BZIP2_VER).tar.gz))
bzip2-$(BZIP2_VER).tar.gz:
curl --fail --output bzip2-$(BZIP2_VER).tar.gz --location ${CURL_SSL_OPTS} ${BZIP2_DOWNLOAD_BASE}/bzip2-$(BZIP2_VER).tar.gz
endif
BZIP2_SHA256_ACTUAL=`$(SHA256_CMD) bzip2-$(BZIP2_VER).tar.gz | cut -d ' ' -f 1`; \
if [ "$(BZIP2_SHA256)" != "$$BZIP2_SHA256_ACTUAL" ]; then \
echo bzip2-$(BZIP2_VER).tar.gz checksum mismatch, expected=\"$(BZIP2_SHA256)\" actual=\"$$BZIP2_SHA256_ACTUAL\"; \
exit 1; \
fi
libbz2.a: bzip2-$(BZIP2_VER).tar.gz
-rm -rf bzip2-$(BZIP2_VER)
tar xvzf bzip2-$(BZIP2_VER).tar.gz
cd bzip2-$(BZIP2_VER) && $(MAKE) CFLAGS='-fPIC -O2 -g -D_FILE_OFFSET_BITS=64 ${EXTRA_CFLAGS}' AR='ar ${EXTRA_ARFLAGS}'
cp bzip2-$(BZIP2_VER)/libbz2.a .
libsnappy.a:
-rm -rf snappy-$(SNAPPY_VER)
ifeq (,$(wildcard ./snappy-$(SNAPPY_VER).tar.gz))
snappy-$(SNAPPY_VER).tar.gz:
curl --fail --output snappy-$(SNAPPY_VER).tar.gz --location ${CURL_SSL_OPTS} ${SNAPPY_DOWNLOAD_BASE}/$(SNAPPY_VER).tar.gz
endif
SNAPPY_SHA256_ACTUAL=`$(SHA256_CMD) snappy-$(SNAPPY_VER).tar.gz | cut -d ' ' -f 1`; \
if [ "$(SNAPPY_SHA256)" != "$$SNAPPY_SHA256_ACTUAL" ]; then \
echo snappy-$(SNAPPY_VER).tar.gz checksum mismatch, expected=\"$(SNAPPY_SHA256)\" actual=\"$$SNAPPY_SHA256_ACTUAL\"; \
exit 1; \
fi
libsnappy.a: snappy-$(SNAPPY_VER).tar.gz
-rm -rf snappy-$(SNAPPY_VER)
tar xvzf snappy-$(SNAPPY_VER).tar.gz
mkdir snappy-$(SNAPPY_VER)/build
cd snappy-$(SNAPPY_VER)/build && CFLAGS='${EXTRA_CFLAGS}' CXXFLAGS='${EXTRA_CXXFLAGS}' LDFLAGS='${EXTRA_LDFLAGS}' cmake -DCMAKE_POSITION_INDEPENDENT_CODE=ON .. && $(MAKE) ${SNAPPY_MAKE_TARGET}
cp snappy-$(SNAPPY_VER)/build/libsnappy.a .
liblz4.a:
-rm -rf lz4-$(LZ4_VER)
ifeq (,$(wildcard ./lz4-$(LZ4_VER).tar.gz))
lz4-$(LZ4_VER).tar.gz:
curl --fail --output lz4-$(LZ4_VER).tar.gz --location ${CURL_SSL_OPTS} ${LZ4_DOWNLOAD_BASE}/v$(LZ4_VER).tar.gz
endif
LZ4_SHA256_ACTUAL=`$(SHA256_CMD) lz4-$(LZ4_VER).tar.gz | cut -d ' ' -f 1`; \
if [ "$(LZ4_SHA256)" != "$$LZ4_SHA256_ACTUAL" ]; then \
echo lz4-$(LZ4_VER).tar.gz checksum mismatch, expected=\"$(LZ4_SHA256)\" actual=\"$$LZ4_SHA256_ACTUAL\"; \
exit 1; \
fi
liblz4.a: lz4-$(LZ4_VER).tar.gz
-rm -rf lz4-$(LZ4_VER)
tar xvzf lz4-$(LZ4_VER).tar.gz
cd lz4-$(LZ4_VER)/lib && $(MAKE) CFLAGS='-fPIC -O2 ${EXTRA_CFLAGS}' all
cp lz4-$(LZ4_VER)/lib/liblz4.a .
libzstd.a:
-rm -rf zstd-$(ZSTD_VER)
ifeq (,$(wildcard ./zstd-$(ZSTD_VER).tar.gz))
zstd-$(ZSTD_VER).tar.gz:
curl --fail --output zstd-$(ZSTD_VER).tar.gz --location ${CURL_SSL_OPTS} ${ZSTD_DOWNLOAD_BASE}/v$(ZSTD_VER).tar.gz
endif
ZSTD_SHA256_ACTUAL=`$(SHA256_CMD) zstd-$(ZSTD_VER).tar.gz | cut -d ' ' -f 1`; \
if [ "$(ZSTD_SHA256)" != "$$ZSTD_SHA256_ACTUAL" ]; then \
echo zstd-$(ZSTD_VER).tar.gz checksum mismatch, expected=\"$(ZSTD_SHA256)\" actual=\"$$ZSTD_SHA256_ACTUAL\"; \
exit 1; \
fi
libzstd.a: zstd-$(ZSTD_VER).tar.gz
-rm -rf zstd-$(ZSTD_VER)
tar xvzf zstd-$(ZSTD_VER).tar.gz
cd zstd-$(ZSTD_VER)/lib && DESTDIR=. PREFIX= $(MAKE) CFLAGS='-fPIC -O2 ${EXTRA_CFLAGS}' install
cp zstd-$(ZSTD_VER)/lib/libzstd.a .
@@ -2188,14 +2188,23 @@ JAVA_COMPRESSIONS = libz.a libbz2.a libsnappy.a liblz4.a libzstd.a
endif
JAVA_STATIC_FLAGS = -DZLIB -DBZIP2 -DSNAPPY -DLZ4 -DZSTD
JAVA_STATIC_INCLUDES = -I./zlib-$(ZLIB_VER) -I./bzip2-$(BZIP2_VER) -I./snappy-$(SNAPPY_VER) -I./lz4-$(LZ4_VER)/lib -I./zstd-$(ZSTD_VER)/lib/include
ifneq ($(findstring rocksdbjavastatic, $(MAKECMDGOALS)),)
JAVA_STATIC_INCLUDES = -I./zlib-$(ZLIB_VER) -I./bzip2-$(BZIP2_VER) -I./snappy-$(SNAPPY_VER) -I./snappy-$(SNAPPY_VER)/build -I./lz4-$(LZ4_VER)/lib -I./zstd-$(ZSTD_VER)/lib -I./zstd-$(ZSTD_VER)/lib/dictBuilder
ifneq ($(findstring rocksdbjavastatic, $(filter-out rocksdbjavastatic_deps, $(MAKECMDGOALS))),)
CXXFLAGS += $(JAVA_STATIC_FLAGS) $(JAVA_STATIC_INCLUDES)
CFLAGS += $(JAVA_STATIC_FLAGS) $(JAVA_STATIC_INCLUDES)
endif
rocksdbjavastatic: $(LIB_OBJECTS) $(JAVA_COMPRESSIONS)
cd java;$(MAKE) javalib;
rm -f ./java/target/$(ROCKSDBJNILIB)
rocksdbjavastatic:
ifeq ($(JAVA_HOME),)
$(error JAVA_HOME is not set)
endif
$(MAKE) rocksdbjavastatic_deps
$(MAKE) rocksdbjavastatic_libobjects
$(MAKE) rocksdbjavastatic_javalib
rocksdbjavastatic_javalib:
cd java;$(MAKE) javalib
rm -f java/target/$(ROCKSDBJNILIB)
$(CXX) $(CXXFLAGS) -I./java/. $(JAVA_INCLUDE) -shared -fPIC \
-o ./java/target/$(ROCKSDBJNILIB) $(JNI_NATIVE_SOURCES) \
$(LIB_OBJECTS) $(COVERAGEFLAGS) \
@@ -2212,6 +2221,10 @@ rocksdbjavastatic: $(LIB_OBJECTS) $(JAVA_COMPRESSIONS)
openssl sha1 java/target/$(ROCKSDB_JAVADOCS_JAR) | sed 's/.*= \([0-9a-f]*\)/\1/' > java/target/$(ROCKSDB_JAVADOCS_JAR).sha1
openssl sha1 java/target/$(ROCKSDB_SOURCES_JAR) | sed 's/.*= \([0-9a-f]*\)/\1/' > java/target/$(ROCKSDB_SOURCES_JAR).sha1
rocksdbjavastatic_deps: $(JAVA_COMPRESSIONS)
rocksdbjavastatic_libobjects: $(LIB_OBJECTS)
rocksdbjavastaticrelease: rocksdbjavastatic
cd java/crossbuild && (vagrant destroy -f || true) && vagrant up linux32 && vagrant halt linux32 && vagrant up linux64 && vagrant halt linux64 && vagrant up linux64-musl && vagrant halt linux64-musl
cd java;jar -cf target/$(ROCKSDB_JAR_ALL) HISTORY*.md
@@ -2227,7 +2240,7 @@ rocksdbjavastaticreleasedocker: rocksdbjavastatic rocksdbjavastaticdockerx86 roc
rocksdbjavastaticdockerx86:
mkdir -p java/target
docker run --rm --name rocksdb_linux_x86-be --attach stdin --attach stdout --attach stderr --volume $(HOME)/.m2:/root/.m2:ro --volume `pwd`:/rocksdb-host:ro --volume /rocksdb-local-build --volume `pwd`/java/target:/rocksdb-java-target --env DEBUG_LEVEL=$(DEBUG_LEVEL) evolvedbinary/rocksjava:centos6_x86-be /rocksdb-host/java/crossbuild/docker-build-linux-centos.sh
docker run --rm --name rocksdb_linux_x86-be --platform linux/386 --attach stdin --attach stdout --attach stderr --volume $(HOME)/.m2:/root/.m2:ro --volume `pwd`:/rocksdb-host:ro --volume /rocksdb-local-build --volume `pwd`/java/target:/rocksdb-java-target --env DEBUG_LEVEL=$(DEBUG_LEVEL) evolvedbinary/rocksjava:centos6_x86-be /rocksdb-host/java/crossbuild/docker-build-linux-centos.sh
rocksdbjavastaticdockerx86_64:
mkdir -p java/target
@@ -2243,7 +2256,7 @@ rocksdbjavastaticdockerarm64v8:
rocksdbjavastaticdockerx86musl:
mkdir -p java/target
docker run --rm --name rocksdb_linux_x86-musl-be --attach stdin --attach stdout --attach stderr --volume $(HOME)/.m2:/root/.m2:ro --volume `pwd`:/rocksdb-host:ro --volume /rocksdb-local-build --volume `pwd`/java/target:/rocksdb-java-target --env DEBUG_LEVEL=$(DEBUG_LEVEL) evolvedbinary/rocksjava:alpine3_x86-be /rocksdb-host/java/crossbuild/docker-build-linux-centos.sh
docker run --rm --name rocksdb_linux_x86-musl-be --platform linux/386 --attach stdin --attach stdout --attach stderr --volume $(HOME)/.m2:/root/.m2:ro --volume `pwd`:/rocksdb-host:ro --volume /rocksdb-local-build --volume `pwd`/java/target:/rocksdb-java-target --env DEBUG_LEVEL=$(DEBUG_LEVEL) evolvedbinary/rocksjava:alpine3_x86-be /rocksdb-host/java/crossbuild/docker-build-linux-centos.sh
rocksdbjavastaticdockerx86_64musl:
mkdir -p java/target
@@ -2281,6 +2294,9 @@ jl/%.o: %.cc
$(AM_V_CC)mkdir -p $(@D) && $(CXX) $(CXXFLAGS) -fPIC -c $< -o $@ $(COVERAGEFLAGS)
rocksdbjava: $(LIB_OBJECTS)
ifeq ($(JAVA_HOME),)
$(error JAVA_HOME is not set)
endif
$(AM_V_GEN)cd java;$(MAKE) javalib;
$(AM_V_at)rm -f ./java/target/$(ROCKSDBJNILIB)
$(AM_V_at)$(CXX) $(CXXFLAGS) -I./java/. $(JAVA_INCLUDE) -shared -fPIC -o ./java/target/$(ROCKSDBJNILIB) $(JNI_NATIVE_SOURCES) $(LIB_OBJECTS) $(JAVA_LDFLAGS) $(COVERAGEFLAGS)
@@ -2411,6 +2427,8 @@ ifneq ($(MAKECMDGOALS),clean)
ifneq ($(MAKECMDGOALS),format)
ifneq ($(MAKECMDGOALS),jclean)
ifneq ($(MAKECMDGOALS),jtest)
ifneq ($(MAKECMDGOALS),rocksdbjavastatic)
ifneq ($(MAKECMDGOALS),rocksdbjavastatic_deps)
ifneq ($(MAKECMDGOALS),package)
ifneq ($(MAKECMDGOALS),analyze)
-include $(DEPFILES)
@@ -2420,3 +2438,5 @@ endif
endif
endif
endif
endif
endif

TARGETS

@@ -775,8 +775,7 @@ cpp_library(
external_deps = ROCKSDB_EXTERNAL_DEPS,
)
if not is_opt_mode:
cpp_binary(
cpp_binary(
name = "c_test_bin",
srcs = ["db/c_test.c"],
arch_preprocessor_flags = ROCKSDB_ARCH_PREPROCESSOR_FLAGS,
@@ -784,17 +783,16 @@ if not is_opt_mode:
compiler_flags = ROCKSDB_COMPILER_FLAGS,
preprocessor_flags = ROCKSDB_PREPROCESSOR_FLAGS,
deps = [":rocksdb_test_lib"],
)
) if not is_opt_mode else None
if not is_opt_mode:
custom_unittest(
custom_unittest(
"c_test",
command = [
native.package_name() + "/buckifier/rocks_test_runner.sh",
"$(location :{})".format("c_test_bin"),
],
type = "simple",
)
) if not is_opt_mode else None
cpp_library(
name = "env_basic_test_lib",


@@ -79,8 +79,7 @@ class TARGETSBuilder(object):
def add_c_test(self):
self.targets_file.write(b"""
if not is_opt_mode:
cpp_binary(
cpp_binary(
name = "c_test_bin",
srcs = ["db/c_test.c"],
arch_preprocessor_flags = ROCKSDB_ARCH_PREPROCESSOR_FLAGS,
@@ -88,17 +87,16 @@ if not is_opt_mode:
compiler_flags = ROCKSDB_COMPILER_FLAGS,
preprocessor_flags = ROCKSDB_PREPROCESSOR_FLAGS,
deps = [":rocksdb_test_lib"],
)
) if not is_opt_mode else None
if not is_opt_mode:
custom_unittest(
custom_unittest(
"c_test",
command = [
native.package_name() + "/buckifier/rocks_test_runner.sh",
"$(location :{})".format("c_test_bin"),
],
type = "simple",
)
) if not is_opt_mode else None
""")
def register_test(self,


@@ -2593,6 +2593,7 @@ class DBBasicTestMultiGet : public DBTestBase {
} else {
options.compression_opts.parallel_threads = compression_parallel_threads;
}
options_ = options;
Reopen(options);
if (num_cfs > 1) {
@@ -2662,6 +2663,7 @@ class DBBasicTestMultiGet : public DBTestBase {
bool compression_enabled() { return compression_enabled_; }
bool has_compressed_cache() { return compressed_cache_ != nullptr; }
bool has_uncompressed_cache() { return uncompressed_cache_ != nullptr; }
Options get_options() { return options_; }
static void SetUpTestCase() {}
static void TearDownTestCase() {}
@@ -2747,6 +2749,7 @@ class DBBasicTestMultiGet : public DBTestBase {
std::shared_ptr<MyBlockCache> compressed_cache_;
std::shared_ptr<MyBlockCache> uncompressed_cache_;
Options options_;
bool compression_enabled_;
std::vector<std::string> values_;
std::vector<std::string> uncompressable_values_;
@@ -2889,6 +2892,123 @@ TEST_P(DBBasicTestWithParallelIO, MultiGet) {
}
}
#ifndef ROCKSDB_LITE
TEST_P(DBBasicTestWithParallelIO, MultiGetDirectIO) {
class FakeDirectIOEnv : public EnvWrapper {
class FakeDirectIOSequentialFile;
class FakeDirectIORandomAccessFile;
public:
FakeDirectIOEnv(Env* env) : EnvWrapper(env) {}
Status NewRandomAccessFile(const std::string& fname,
std::unique_ptr<RandomAccessFile>* result,
const EnvOptions& options) override {
std::unique_ptr<RandomAccessFile> file;
assert(options.use_direct_reads);
EnvOptions opts = options;
opts.use_direct_reads = false;
Status s = target()->NewRandomAccessFile(fname, &file, opts);
if (!s.ok()) {
return s;
}
result->reset(new FakeDirectIORandomAccessFile(std::move(file)));
return s;
}
private:
class FakeDirectIOSequentialFile : public SequentialFileWrapper {
public:
FakeDirectIOSequentialFile(std::unique_ptr<SequentialFile>&& file)
: SequentialFileWrapper(file.get()), file_(std::move(file)) {}
~FakeDirectIOSequentialFile() {}
bool use_direct_io() const override { return true; }
size_t GetRequiredBufferAlignment() const override { return 1; }
private:
std::unique_ptr<SequentialFile> file_;
};
class FakeDirectIORandomAccessFile : public RandomAccessFileWrapper {
public:
FakeDirectIORandomAccessFile(std::unique_ptr<RandomAccessFile>&& file)
: RandomAccessFileWrapper(file.get()), file_(std::move(file)) {}
~FakeDirectIORandomAccessFile() {}
bool use_direct_io() const override { return true; }
size_t GetRequiredBufferAlignment() const override { return 1; }
private:
std::unique_ptr<RandomAccessFile> file_;
};
};
std::unique_ptr<FakeDirectIOEnv> env(new FakeDirectIOEnv(env_));
Options opts = get_options();
opts.env = env.get();
opts.use_direct_reads = true;
Reopen(opts);
std::vector<std::string> key_data(10);
std::vector<Slice> keys;
// We cannot resize a PinnableSlice vector, so just set initial size to
// largest we think we will need
std::vector<PinnableSlice> values(10);
std::vector<Status> statuses;
ReadOptions ro;
ro.fill_cache = fill_cache();
// Warm up the cache first
key_data.emplace_back(Key(0));
keys.emplace_back(Slice(key_data.back()));
key_data.emplace_back(Key(50));
keys.emplace_back(Slice(key_data.back()));
statuses.resize(keys.size());
dbfull()->MultiGet(ro, dbfull()->DefaultColumnFamily(), keys.size(),
keys.data(), values.data(), statuses.data(), true);
ASSERT_TRUE(CheckValue(0, values[0].ToString()));
ASSERT_TRUE(CheckValue(50, values[1].ToString()));
int random_reads = env_->random_read_counter_.Read();
key_data[0] = Key(1);
key_data[1] = Key(51);
keys[0] = Slice(key_data[0]);
keys[1] = Slice(key_data[1]);
values[0].Reset();
values[1].Reset();
if (uncompressed_cache_) {
uncompressed_cache_->SetCapacity(0);
uncompressed_cache_->SetCapacity(1048576);
}
dbfull()->MultiGet(ro, dbfull()->DefaultColumnFamily(), keys.size(),
keys.data(), values.data(), statuses.data(), true);
ASSERT_TRUE(CheckValue(1, values[0].ToString()));
ASSERT_TRUE(CheckValue(51, values[1].ToString()));
bool read_from_cache = false;
if (fill_cache()) {
if (has_uncompressed_cache()) {
read_from_cache = true;
} else if (has_compressed_cache() && compression_enabled()) {
read_from_cache = true;
}
}
int expected_reads = random_reads;
if (!compression_enabled() || !has_compressed_cache()) {
expected_reads += 2;
} else {
expected_reads += (read_from_cache ? 0 : 2);
}
if (env_->random_read_counter_.Read() != expected_reads) {
ASSERT_EQ(env_->random_read_counter_.Read(), expected_reads);
}
Close();
}
#endif // ROCKSDB_LITE
TEST_P(DBBasicTestWithParallelIO, MultiGetWithChecksumMismatch) {
std::vector<std::string> key_data(10);
std::vector<Slice> keys;


@@ -1348,14 +1348,20 @@ TEST_P(DBWALTestWithParams, kPointInTimeRecovery) {
size_t recovered_row_count = RecoveryTestHelper::GetData(this);
ASSERT_LT(recovered_row_count, row_count);
// Verify a prefix of keys were recovered. But not in the case of full WAL
// truncation, because we have no way to know there was a corruption when
// truncation happened on record boundaries (preventing recovery holes in
// that case requires using `track_and_verify_wals_in_manifest`).
if (!trunc || corrupt_offset != 0) {
bool expect_data = true;
for (size_t k = 0; k < maxkeys; ++k) {
- bool found = Get("key" + ToString(corrupt_offset)) != "NOT_FOUND";
+ bool found = Get("key" + ToString(k)) != "NOT_FOUND";
if (expect_data && !found) {
expect_data = false;
}
ASSERT_EQ(found, expect_data);
}
}
const size_t min = RecoveryTestHelper::kKeysPerWALFile *
(wal_file_id - RecoveryTestHelper::kWALFileOffset);
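The loop above verifies the point-in-time recovery invariant: the recovered keys must form a prefix, so once any key is missing, every later key must be missing too. A self-contained sketch of the same property check (hypothetical helper, not the test code itself):

```cpp
#include <cassert>
#include <vector>

// Returns true iff the "found" flags form a prefix: all true entries come
// before all false entries -- the invariant point-in-time WAL recovery must
// preserve (no holes in the recovered data).
bool IsPrefixRecovery(const std::vector<bool>& found) {
  bool expect_data = true;
  for (bool f : found) {
    if (expect_data && !f) {
      expect_data = false;  // first missing key: nothing later may be present
    }
    if (f != expect_data) {
      return false;  // a key reappeared after a gap -- a recovery hole
    }
  }
  return true;
}
```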


@ -119,16 +119,26 @@ bool Reader::ReadRecord(Slice* record, std::string* scratch,
break;
case kBadHeader:
- if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency) {
-   // in clean shutdown we don't expect any error in the log files
+ if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency ||
+     wal_recovery_mode == WALRecoveryMode::kPointInTimeRecovery) {
+   // In clean shutdown we don't expect any error in the log files.
+   // In point-in-time recovery an incomplete record at the end could
+   // produce a hole in the recovered data. Report an error here, which
+   // higher layers can choose to ignore when it's provable there is no
+   // hole.
ReportCorruption(drop_size, "truncated header");
}
FALLTHROUGH_INTENDED;
case kEof:
if (in_fragmented_record) {
- if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency) {
-   // in clean shutdown we don't expect any error in the log files
+ if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency ||
+     wal_recovery_mode == WALRecoveryMode::kPointInTimeRecovery) {
+   // In clean shutdown we don't expect any error in the log files.
+   // In point-in-time recovery an incomplete record at the end could
+   // produce a hole in the recovered data. Report an error here, which
+   // higher layers can choose to ignore when it's provable there is no
+   // hole.
ReportCorruption(scratch->size(), "error reading trailing data");
}
// This can be caused by the writer dying immediately after
@ -142,8 +152,13 @@ bool Reader::ReadRecord(Slice* record, std::string* scratch,
if (wal_recovery_mode != WALRecoveryMode::kSkipAnyCorruptedRecords) {
// Treat a record from a previous instance of the log as EOF.
if (in_fragmented_record) {
- if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency) {
-   // in clean shutdown we don't expect any error in the log files
+ if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency ||
+     wal_recovery_mode == WALRecoveryMode::kPointInTimeRecovery) {
+   // In clean shutdown we don't expect any error in the log files.
+   // In point-in-time recovery an incomplete record at the end could
+   // produce a hole in the recovered data. Report an error here,
+   // which higher layers can choose to ignore when it's provable
+   // there is no hole.
ReportCorruption(scratch->size(), "error reading trailing data");
}
// This can be caused by the writer dying immediately after
@ -164,6 +179,20 @@ bool Reader::ReadRecord(Slice* record, std::string* scratch,
break;
case kBadRecordLen:
+ if (eof_) {
+   if (wal_recovery_mode == WALRecoveryMode::kAbsoluteConsistency ||
+       wal_recovery_mode == WALRecoveryMode::kPointInTimeRecovery) {
+     // In clean shutdown we don't expect any error in the log files.
+     // In point-in-time recovery an incomplete record at the end could
+     // produce a hole in the recovered data. Report an error here, which
+     // higher layers can choose to ignore when it's provable there is no
+     // hole.
+     ReportCorruption(drop_size, "truncated record body");
+   }
+   return false;
+ }
+ FALLTHROUGH_INTENDED;
case kBadRecordChecksum:
if (recycled_ &&
wal_recovery_mode ==
@ -355,19 +384,15 @@ unsigned int Reader::ReadPhysicalRecord(Slice* result, size_t* drop_size) {
}
}
  if (header_size + length > buffer_.size()) {
+   assert(buffer_.size() >= static_cast<size_t>(header_size));
    *drop_size = buffer_.size();
    buffer_.clear();
-   if (!eof_) {
-     return kBadRecordLen;
-   }
-   // If the end of the file has been reached without reading |length|
-   // bytes of payload, assume the writer died in the middle of writing the
-   // record. Don't report a corruption unless requested.
-   if (*drop_size) {
-     return kBadHeader;
-   }
-   return kEof;
+   // If the end of the read has been reached without seeing
+   // `header_size + length` bytes of payload, report a corruption. The
+   // higher layers can decide how to handle it based on the recovery mode,
+   // whether this occurred at EOF, whether this is the final WAL, etc.
+   return kBadRecordLen;
  }
if (type == kZeroType && length == 0) {
// Skip zero length record without reporting any drops since


@ -465,7 +465,7 @@ TEST_P(LogTest, BadLengthAtEndIsNotIgnored) {
ShrinkSize(1);
ASSERT_EQ("EOF", Read(WALRecoveryMode::kAbsoluteConsistency));
ASSERT_GT(DroppedBytes(), 0U);
- ASSERT_EQ("OK", MatchError("Corruption: truncated header"));
+ ASSERT_EQ("OK", MatchError("Corruption: truncated record body"));
}
TEST_P(LogTest, ChecksumMismatch) {
@ -573,9 +573,7 @@ TEST_P(LogTest, PartialLastIsNotIgnored) {
ShrinkSize(1);
ASSERT_EQ("EOF", Read(WALRecoveryMode::kAbsoluteConsistency));
ASSERT_GT(DroppedBytes(), 0U);
- ASSERT_EQ("OK", MatchError(
-     "Corruption: truncated headerCorruption: "
-     "error reading trailing data"));
+ ASSERT_EQ("OK", MatchError("Corruption: truncated record body"));
}
TEST_P(LogTest, ErrorJoinsRecords) {


@ -10,13 +10,14 @@
#ifndef ROCKSDB_LITE
namespace ROCKSDB_NAMESPACE {
- PeriodicWorkScheduler::PeriodicWorkScheduler(Env* env) {
+ PeriodicWorkScheduler::PeriodicWorkScheduler(Env* env) : timer_mu_(env) {
timer = std::unique_ptr<Timer>(new Timer(env));
}
void PeriodicWorkScheduler::Register(DBImpl* dbi,
unsigned int stats_dump_period_sec,
unsigned int stats_persist_period_sec) {
+ MutexLock l(&timer_mu_);
static std::atomic<uint64_t> initial_delay(0);
timer->Start();
if (stats_dump_period_sec > 0) {
@ -41,6 +42,7 @@ void PeriodicWorkScheduler::Register(DBImpl* dbi,
}
void PeriodicWorkScheduler::Unregister(DBImpl* dbi) {
+ MutexLock l(&timer_mu_);
timer->Cancel(GetTaskName(dbi, "dump_st"));
timer->Cancel(GetTaskName(dbi, "pst_st"));
timer->Cancel(GetTaskName(dbi, "flush_info_log"));
@ -78,7 +80,10 @@ PeriodicWorkTestScheduler* PeriodicWorkTestScheduler::Default(Env* env) {
MutexLock l(&mutex);
if (scheduler.timer.get() != nullptr &&
scheduler.timer->TEST_GetPendingTaskNum() == 0) {
+ {
+   MutexLock timer_mu_guard(&scheduler.timer_mu_);
+   scheduler.timer->Shutdown();
+ }
scheduler.timer.reset(new Timer(env));
}
}


@ -42,6 +42,12 @@ class PeriodicWorkScheduler {
protected:
std::unique_ptr<Timer> timer;
+ // `timer_mu_` serves two purposes currently:
+ // (1) to ensure calls to `Start()` and `Shutdown()` are serialized, as
+ // they are currently not implemented in a thread-safe way; and
+ // (2) to ensure the `Timer::Add()`s and `Timer::Start()` run atomically, and
+ // the `Timer::Cancel()`s and `Timer::Shutdown()` run atomically.
+ port::Mutex timer_mu_;
explicit PeriodicWorkScheduler(Env* env);
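The comment above describes one mutex doing double duty: serializing the non-thread-safe `Start()`/`Shutdown()` calls, and making "add tasks + start" and "cancel tasks + shutdown" atomic units. A minimal sketch of that locking scheme with stand-in types (`TimerSketch`/`SchedulerSketch` are hypothetical, not the RocksDB classes):

```cpp
#include <mutex>
#include <string>
#include <vector>

// Stand-in for RocksDB's Timer: not thread-safe on its own.
class TimerSketch {
 public:
  void Start() { running_ = true; }
  void Shutdown() {
    running_ = false;
    tasks_.clear();
  }
  void Add(const std::string& name) { tasks_.push_back(name); }
  size_t TaskCount() const { return tasks_.size(); }
  bool running() const { return running_; }

 private:
  bool running_ = false;
  std::vector<std::string> tasks_;
};

// Stand-in for PeriodicWorkScheduler: one mutex both serializes
// Start()/Shutdown() and keeps register/unregister atomic with them.
class SchedulerSketch {
 public:
  void Register(const std::string& task) {
    std::lock_guard<std::mutex> l(timer_mu_);  // start + add atomically
    timer_.Start();
    timer_.Add(task);
  }
  void Unregister() {
    std::lock_guard<std::mutex> l(timer_mu_);  // cancel + shutdown atomically
    timer_.Shutdown();
  }
  const TimerSketch& timer() const { return timer_; }

 private:
  std::mutex timer_mu_;
  TimerSketch timer_;
};
```

Without the shared mutex, a `Register()` racing an `Unregister()` could interleave a `Timer::Add()` between another thread's `Cancel()`s and `Shutdown()`, which is exactly the race this workaround closes.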

env/env_posix.cc vendored

@ -463,11 +463,12 @@ void PosixEnv::WaitForJoin() {
std::string Env::GenerateUniqueId() {
std::string uuid_file = "/proc/sys/kernel/random/uuid";
std::shared_ptr<FileSystem> fs = FileSystem::Default();
- Status s = FileExists(uuid_file);
+ Status s = fs->FileExists(uuid_file, IOOptions(), nullptr);
if (s.ok()) {
std::string uuid;
- s = ReadFileToString(this, uuid_file, &uuid);
+ s = ReadFileToString(fs.get(), uuid_file, &uuid);
if (s.ok()) {
return uuid;
}


@ -349,7 +349,7 @@ struct DbPath {
DbPath(const std::string& p, uint64_t t) : path(p), target_size(t) {}
};
- static const std::string kHostnameForDbHostId = "__hostname__";
+ extern const char* kHostnameForDbHostId;
struct DBOptions {
// The function recovers options to the option as in version 4.6.


@ -51,6 +51,8 @@ struct OptimisticTransactionDBOptions {
uint32_t occ_lock_buckets = (1 << 20);
};
+ // Range deletions (including those in `WriteBatch`es passed to `Write()`) are
+ // incompatible with `OptimisticTransactionDB` and will return a non-OK `Status`
class OptimisticTransactionDB : public StackableDB {
public:
// Open an OptimisticTransactionDB similar to DB::Open().


@ -244,6 +244,17 @@ class TransactionDB : public StackableDB {
// falls back to the un-optimized version of ::Write
return Write(opts, updates);
}
+ // Transactional `DeleteRange()` is not yet supported.
+ // However, users who know their deleted range does not conflict with
+ // anything can still use it via the `Write()` API. In all cases, the
+ // `Write()` overload specifying `TransactionDBWriteOptimizations` must be
+ // used and `skip_concurrency_control` must be set. When using either
+ // WRITE_PREPARED or WRITE_UNPREPARED, `skip_duplicate_key_check` must
+ // additionally be set.
+ virtual Status DeleteRange(const WriteOptions&, ColumnFamilyHandle*,
+                            const Slice&, const Slice&) override {
+   return Status::NotSupported();
+ }
// Open a TransactionDB similar to DB::Open().
// Internally call PrepareWrap() and WrapDB()
// If the return status is not ok, then dbptr is set to nullptr.


@ -6,7 +6,7 @@
#define ROCKSDB_MAJOR 6
#define ROCKSDB_MINOR 15
- #define ROCKSDB_PATCH 0
+ #define ROCKSDB_PATCH 5
// Do not use these. We made the mistake of declaring macros starting with
// double underscore. Now we have to live with our choice. We'll deprecate these


@ -88,7 +88,9 @@ NATIVE_JAVA_CLASSES = \
org.rocksdb.WriteBufferManager\
org.rocksdb.WBWIRocksIterator
- NATIVE_JAVA_TEST_CLASSES = org.rocksdb.RocksDBExceptionTest\
+ NATIVE_JAVA_TEST_CLASSES = \
+     org.rocksdb.RocksDBExceptionTest\
+     org.rocksdb.test.TestableEventListener\
org.rocksdb.NativeComparatorWrapperTest.NativeStringComparatorWrapper\
org.rocksdb.WriteBatchTest\
org.rocksdb.WriteBatchTestInternalHelper
@ -207,12 +209,22 @@ SAMPLES_OUTPUT = samples/target
SAMPLES_MAIN_CLASSES = $(SAMPLES_OUTPUT)/classes
JAVA_TEST_LIBDIR = test-libs
- JAVA_JUNIT_JAR = $(JAVA_TEST_LIBDIR)/junit-4.12.jar
- JAVA_HAMCR_JAR = $(JAVA_TEST_LIBDIR)/hamcrest-core-1.3.jar
- JAVA_MOCKITO_JAR = $(JAVA_TEST_LIBDIR)/mockito-all-1.10.19.jar
- JAVA_CGLIB_JAR = $(JAVA_TEST_LIBDIR)/cglib-2.2.2.jar
- JAVA_ASSERTJ_JAR = $(JAVA_TEST_LIBDIR)/assertj-core-1.7.1.jar
- JAVA_TESTCLASSPATH = $(JAVA_JUNIT_JAR):$(JAVA_HAMCR_JAR):$(JAVA_MOCKITO_JAR):$(JAVA_CGLIB_JAR):$(JAVA_ASSERTJ_JAR)
+ JAVA_JUNIT_VER = 4.12
+ JAVA_JUNIT_JAR = junit-$(JAVA_JUNIT_VER).jar
+ JAVA_JUNIT_JAR_PATH = $(JAVA_TEST_LIBDIR)/$(JAVA_JUNIT_JAR)
+ JAVA_HAMCREST_VER = 1.3
+ JAVA_HAMCREST_JAR = hamcrest-core-$(JAVA_HAMCREST_VER).jar
+ JAVA_HAMCREST_JAR_PATH = $(JAVA_TEST_LIBDIR)/$(JAVA_HAMCREST_JAR)
+ JAVA_MOCKITO_VER = 1.10.19
+ JAVA_MOCKITO_JAR = mockito-all-$(JAVA_MOCKITO_VER).jar
+ JAVA_MOCKITO_JAR_PATH = $(JAVA_TEST_LIBDIR)/$(JAVA_MOCKITO_JAR)
+ JAVA_CGLIB_VER = 2.2.2
+ JAVA_CGLIB_JAR = cglib-$(JAVA_CGLIB_VER).jar
+ JAVA_CGLIB_JAR_PATH = $(JAVA_TEST_LIBDIR)/$(JAVA_CGLIB_JAR)
+ JAVA_ASSERTJ_VER = 1.7.1
+ JAVA_ASSERTJ_JAR = assertj-core-$(JAVA_ASSERTJ_VER).jar
+ JAVA_ASSERTJ_JAR_PATH = $(JAVA_TEST_LIBDIR)/$(JAVA_ASSERTJ_JAR)
+ JAVA_TESTCLASSPATH = $(JAVA_JUNIT_JAR_PATH):$(JAVA_HAMCREST_JAR_PATH):$(JAVA_MOCKITO_JAR_PATH):$(JAVA_CGLIB_JAR_PATH):$(JAVA_ASSERTJ_JAR_PATH)
MVN_LOCAL = ~/.m2/repository
@ -296,13 +308,45 @@ optimistic_transaction_sample: java
java -ea -Xcheck:jni -Djava.library.path=target -cp $(MAIN_CLASSES):$(SAMPLES_MAIN_CLASSES) OptimisticTransactionSample /tmp/rocksdbjni
$(AM_V_at)@rm -rf /tmp/rocksdbjni
- resolve_test_deps:
- 	test -d "$(JAVA_TEST_LIBDIR)" || mkdir -p "$(JAVA_TEST_LIBDIR)"
- 	test -s "$(JAVA_JUNIT_JAR)" || cp $(MVN_LOCAL)/junit/junit/4.12/junit-4.12.jar $(JAVA_TEST_LIBDIR) || curl --fail --insecure --output $(JAVA_JUNIT_JAR) --location $(DEPS_URL)/junit-4.12.jar
- 	test -s "$(JAVA_HAMCR_JAR)" || cp $(MVN_LOCAL)/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar $(JAVA_TEST_LIBDIR) || curl --fail --insecure --output $(JAVA_HAMCR_JAR) --location $(DEPS_URL)/hamcrest-core-1.3.jar
- 	test -s "$(JAVA_MOCKITO_JAR)" || cp $(MVN_LOCAL)/org/mockito/mockito-all/1.10.19/mockito-all-1.10.19.jar $(JAVA_TEST_LIBDIR) || curl --fail --insecure --output "$(JAVA_MOCKITO_JAR)" --location $(DEPS_URL)/mockito-all-1.10.19.jar
- 	test -s "$(JAVA_CGLIB_JAR)" || cp $(MVN_LOCAL)/cglib/cglib/2.2.2/cglib-2.2.2.jar $(JAVA_TEST_LIBDIR) || curl --fail --insecure --output "$(JAVA_CGLIB_JAR)" --location $(DEPS_URL)/cglib-2.2.2.jar
- 	test -s "$(JAVA_ASSERTJ_JAR)" || cp $(MVN_LOCAL)/org/assertj/assertj-core/1.7.1/assertj-core-1.7.1.jar $(JAVA_TEST_LIBDIR) || curl --fail --insecure --output "$(JAVA_ASSERTJ_JAR)" --location $(DEPS_URL)/assertj-core-1.7.1.jar
+ $(JAVA_TEST_LIBDIR):
+ 	mkdir -p "$(JAVA_TEST_LIBDIR)"
+ $(JAVA_JUNIT_JAR_PATH): $(JAVA_TEST_LIBDIR)
+ ifneq (,$(wildcard $(MVN_LOCAL)/junit/junit/$(JAVA_JUNIT_VER)/$(JAVA_JUNIT_JAR)))
+ 	cp -v $(MVN_LOCAL)/junit/junit/$(JAVA_JUNIT_VER)/$(JAVA_JUNIT_JAR) $(JAVA_TEST_LIBDIR)
+ else
+ 	curl --fail --insecure --output $(JAVA_JUNIT_JAR_PATH) --location $(DEPS_URL)/$(JAVA_JUNIT_JAR)
+ endif
+ $(JAVA_HAMCREST_JAR_PATH): $(JAVA_TEST_LIBDIR)
+ ifneq (,$(wildcard $(MVN_LOCAL)/org/hamcrest/hamcrest-core/$(JAVA_HAMCREST_VER)/$(JAVA_HAMCREST_JAR)))
+ 	cp -v $(MVN_LOCAL)/org/hamcrest/hamcrest-core/$(JAVA_HAMCREST_VER)/$(JAVA_HAMCREST_JAR) $(JAVA_TEST_LIBDIR)
+ else
+ 	curl --fail --insecure --output $(JAVA_HAMCREST_JAR_PATH) --location $(DEPS_URL)/$(JAVA_HAMCREST_JAR)
+ endif
+ $(JAVA_MOCKITO_JAR_PATH): $(JAVA_TEST_LIBDIR)
+ ifneq (,$(wildcard $(MVN_LOCAL)/org/mockito/mockito-all/$(JAVA_MOCKITO_VER)/$(JAVA_MOCKITO_JAR)))
+ 	cp -v $(MVN_LOCAL)/org/mockito/mockito-all/$(JAVA_MOCKITO_VER)/$(JAVA_MOCKITO_JAR) $(JAVA_TEST_LIBDIR)
+ else
+ 	curl --fail --insecure --output "$(JAVA_MOCKITO_JAR_PATH)" --location $(DEPS_URL)/$(JAVA_MOCKITO_JAR)
+ endif
+ $(JAVA_CGLIB_JAR_PATH): $(JAVA_TEST_LIBDIR)
+ ifneq (,$(wildcard $(MVN_LOCAL)/cglib/cglib/$(JAVA_CGLIB_VER)/$(JAVA_CGLIB_JAR)))
+ 	cp -v $(MVN_LOCAL)/cglib/cglib/$(JAVA_CGLIB_VER)/$(JAVA_CGLIB_JAR) $(JAVA_TEST_LIBDIR)
+ else
+ 	curl --fail --insecure --output "$(JAVA_CGLIB_JAR_PATH)" --location $(DEPS_URL)/$(JAVA_CGLIB_JAR)
+ endif
+ $(JAVA_ASSERTJ_JAR_PATH): $(JAVA_TEST_LIBDIR)
+ ifneq (,$(wildcard $(MVN_LOCAL)/org/assertj/assertj-core/$(JAVA_ASSERTJ_VER)/$(JAVA_ASSERTJ_JAR)))
+ 	cp -v $(MVN_LOCAL)/org/assertj/assertj-core/$(JAVA_ASSERTJ_VER)/$(JAVA_ASSERTJ_JAR) $(JAVA_TEST_LIBDIR)
+ else
+ 	curl --fail --insecure --output "$(JAVA_ASSERTJ_JAR_PATH)" --location $(DEPS_URL)/$(JAVA_ASSERTJ_JAR)
+ endif
+ resolve_test_deps: $(JAVA_JUNIT_JAR_PATH) $(JAVA_HAMCREST_JAR_PATH) $(JAVA_MOCKITO_JAR_PATH) $(JAVA_CGLIB_JAR_PATH) $(JAVA_ASSERTJ_JAR_PATH)
java_test: java resolve_test_deps
$(AM_V_GEN)mkdir -p $(TEST_CLASSES)


@ -7955,7 +7955,7 @@ class AbstractEventListenerJni
}
/**
- * Get the Java Method: AbstractEventListener#OnFileFlushFinish
+ * Get the Java Method: AbstractEventListener#onFileFlushFinish
*
* @param env A pointer to the Java environment
*
@ -7965,13 +7965,13 @@ class AbstractEventListenerJni
jclass jclazz = getJClass(env);
assert(jclazz != nullptr);
static jmethodID mid = env->GetMethodID(
- jclazz, "OnFileFlushFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
+ jclazz, "onFileFlushFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
assert(mid != nullptr);
return mid;
}
/**
- * Get the Java Method: AbstractEventListener#OnFileSyncFinish
+ * Get the Java Method: AbstractEventListener#onFileSyncFinish
*
* @param env A pointer to the Java environment
*
@ -7981,13 +7981,13 @@ class AbstractEventListenerJni
jclass jclazz = getJClass(env);
assert(jclazz != nullptr);
static jmethodID mid = env->GetMethodID(
- jclazz, "OnFileSyncFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
+ jclazz, "onFileSyncFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
assert(mid != nullptr);
return mid;
}
/**
- * Get the Java Method: AbstractEventListener#OnFileRangeSyncFinish
+ * Get the Java Method: AbstractEventListener#onFileRangeSyncFinish
*
* @param env A pointer to the Java environment
*
@ -7997,13 +7997,13 @@ class AbstractEventListenerJni
jclass jclazz = getJClass(env);
assert(jclazz != nullptr);
static jmethodID mid = env->GetMethodID(
- jclazz, "OnFileRangeSyncFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
+ jclazz, "onFileRangeSyncFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
assert(mid != nullptr);
return mid;
}
/**
- * Get the Java Method: AbstractEventListener#OnFileTruncateFinish
+ * Get the Java Method: AbstractEventListener#onFileTruncateFinish
*
* @param env A pointer to the Java environment
*
@ -8013,13 +8013,13 @@ class AbstractEventListenerJni
jclass jclazz = getJClass(env);
assert(jclazz != nullptr);
static jmethodID mid = env->GetMethodID(
- jclazz, "OnFileTruncateFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
+ jclazz, "onFileTruncateFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
assert(mid != nullptr);
return mid;
}
/**
- * Get the Java Method: AbstractEventListener#OnFileCloseFinish
+ * Get the Java Method: AbstractEventListener#onFileCloseFinish
*
* @param env A pointer to the Java environment
*
@ -8029,7 +8029,7 @@ class AbstractEventListenerJni
jclass jclazz = getJClass(env);
assert(jclazz != nullptr);
static jmethodID mid = env->GetMethodID(
- jclazz, "OnFileCloseFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
+ jclazz, "onFileCloseFinish", "(Lorg/rocksdb/FileOperationInfo;)V");
assert(mid != nullptr);
return mid;
}


@ -1,33 +1,40 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#include <climits>
#include <cstdint>
#include <utility>
#include "include/org_rocksdb_test_TestableEventListener.h"
#include "rocksdb/listener.h"
#include "rocksdb/status.h"
#include "rocksdb/table_properties.h"
using namespace ROCKSDB_NAMESPACE;
static TableProperties newTablePropertiesForTest() {
TableProperties table_properties;
- table_properties.data_size = LLONG_MAX;
- table_properties.index_size = LLONG_MAX;
- table_properties.index_partitions = LLONG_MAX;
- table_properties.top_level_index_size = LLONG_MAX;
- table_properties.index_key_is_user_key = LLONG_MAX;
- table_properties.index_value_is_delta_encoded = LLONG_MAX;
- table_properties.filter_size = LLONG_MAX;
- table_properties.raw_key_size = LLONG_MAX;
- table_properties.raw_value_size = LLONG_MAX;
- table_properties.num_data_blocks = LLONG_MAX;
- table_properties.num_entries = LLONG_MAX;
- table_properties.num_deletions = LLONG_MAX;
- table_properties.num_merge_operands = LLONG_MAX;
- table_properties.num_range_deletions = LLONG_MAX;
- table_properties.format_version = LLONG_MAX;
- table_properties.fixed_key_len = LLONG_MAX;
- table_properties.column_family_id = LLONG_MAX;
- table_properties.creation_time = LLONG_MAX;
- table_properties.oldest_key_time = LLONG_MAX;
- table_properties.file_creation_time = LLONG_MAX;
+ table_properties.data_size = UINT64_MAX;
+ table_properties.index_size = UINT64_MAX;
+ table_properties.index_partitions = UINT64_MAX;
+ table_properties.top_level_index_size = UINT64_MAX;
+ table_properties.index_key_is_user_key = UINT64_MAX;
+ table_properties.index_value_is_delta_encoded = UINT64_MAX;
+ table_properties.filter_size = UINT64_MAX;
+ table_properties.raw_key_size = UINT64_MAX;
+ table_properties.raw_value_size = UINT64_MAX;
+ table_properties.num_data_blocks = UINT64_MAX;
+ table_properties.num_entries = UINT64_MAX;
+ table_properties.num_deletions = UINT64_MAX;
+ table_properties.num_merge_operands = UINT64_MAX;
+ table_properties.num_range_deletions = UINT64_MAX;
+ table_properties.format_version = UINT64_MAX;
+ table_properties.fixed_key_len = UINT64_MAX;
+ table_properties.column_family_id = UINT64_MAX;
+ table_properties.creation_time = UINT64_MAX;
+ table_properties.oldest_key_time = UINT64_MAX;
+ table_properties.file_creation_time = UINT64_MAX;
table_properties.db_id = "dbId";
table_properties.db_session_id = "sessionId";
table_properties.column_family_name = "columnFamilyName";
@ -40,7 +47,7 @@ static TableProperties newTablePropertiesForTest() {
table_properties.compression_options = "compressionOptions";
table_properties.user_collected_properties = {{"key", "value"}};
table_properties.readable_properties = {{"key", "value"}};
- table_properties.properties_offsets = {{"key", LLONG_MAX}};
+ table_properties.properties_offsets = {{"key", UINT64_MAX}};
return table_properties;
}
@ -61,14 +68,14 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
flush_job_info.cf_id = INT_MAX;
flush_job_info.cf_name = "testColumnFamily";
flush_job_info.file_path = "/file/path";
- flush_job_info.file_number = LLONG_MAX;
- flush_job_info.oldest_blob_file_number = LLONG_MAX;
- flush_job_info.thread_id = LLONG_MAX;
+ flush_job_info.file_number = UINT64_MAX;
+ flush_job_info.oldest_blob_file_number = UINT64_MAX;
+ flush_job_info.thread_id = UINT64_MAX;
flush_job_info.job_id = INT_MAX;
flush_job_info.triggered_writes_slowdown = true;
flush_job_info.triggered_writes_stop = true;
- flush_job_info.smallest_seqno = LLONG_MAX;
- flush_job_info.largest_seqno = LLONG_MAX;
+ flush_job_info.smallest_seqno = UINT64_MAX;
+ flush_job_info.largest_seqno = UINT64_MAX;
flush_job_info.table_properties = table_properties;
flush_job_info.flush_reason = FlushReason::kManualFlush;
@ -86,10 +93,10 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
el->OnTableFileDeleted(file_deletion_info);
CompactionJobInfo compaction_job_info;
- compaction_job_info.cf_id = INT_MAX;
+ compaction_job_info.cf_id = UINT32_MAX;
compaction_job_info.cf_name = "compactionColumnFamily";
compaction_job_info.status = status;
- compaction_job_info.thread_id = LLONG_MAX;
+ compaction_job_info.thread_id = UINT64_MAX;
compaction_job_info.job_id = INT_MAX;
compaction_job_info.base_input_level = INT_MAX;
compaction_job_info.output_level = INT_MAX;
@ -109,7 +116,7 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
el->OnCompactionCompleted(nullptr, compaction_job_info);
TableFileCreationInfo file_creation_info;
- file_creation_info.file_size = LLONG_MAX;
+ file_creation_info.file_size = UINT64_MAX;
file_creation_info.table_properties = table_properties;
file_creation_info.status = status;
file_creation_info.file_checksum = "fileChecksum";
@ -133,10 +140,10 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
MemTableInfo mem_table_info;
mem_table_info.cf_name = "columnFamilyName";
- mem_table_info.first_seqno = LLONG_MAX;
- mem_table_info.earliest_seqno = LLONG_MAX;
- mem_table_info.num_entries = LLONG_MAX;
- mem_table_info.num_deletes = LLONG_MAX;
+ mem_table_info.first_seqno = UINT64_MAX;
+ mem_table_info.earliest_seqno = UINT64_MAX;
+ mem_table_info.num_entries = UINT64_MAX;
+ mem_table_info.num_deletes = UINT64_MAX;
el->OnMemTableSealed(mem_table_info);
el->OnColumnFamilyHandleDeletionStarted(nullptr);
@ -145,7 +152,7 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
file_ingestion_info.cf_name = "columnFamilyName";
file_ingestion_info.external_file_path = "/external/file/path";
file_ingestion_info.internal_file_path = "/internal/file/path";
- file_ingestion_info.global_seqno = LLONG_MAX;
+ file_ingestion_info.global_seqno = UINT64_MAX;
file_ingestion_info.table_properties = table_properties;
el->OnExternalFileIngested(nullptr, file_ingestion_info);
@ -169,8 +176,8 @@ void Java_org_rocksdb_test_TestableEventListener_invokeAllCallbacks(
std::chrono::nanoseconds>(
std::chrono::nanoseconds(1600699425000000000ll)),
status);
- op_info.offset = LLONG_MAX;
- op_info.length = LLONG_MAX;
+ op_info.offset = UINT64_MAX;
+ op_info.length = SIZE_MAX;
op_info.status = status;
el->OnFileReadFinish(op_info);


@ -265,27 +265,27 @@ public abstract class AbstractEventListener extends RocksCallbackObject implemen
}
@Override
- public void OnFileFlushFinish(final FileOperationInfo fileOperationInfo) {
+ public void onFileFlushFinish(final FileOperationInfo fileOperationInfo) {
// no-op
}
@Override
- public void OnFileSyncFinish(final FileOperationInfo fileOperationInfo) {
+ public void onFileSyncFinish(final FileOperationInfo fileOperationInfo) {
// no-op
}
@Override
- public void OnFileRangeSyncFinish(final FileOperationInfo fileOperationInfo) {
+ public void onFileRangeSyncFinish(final FileOperationInfo fileOperationInfo) {
// no-op
}
@Override
- public void OnFileTruncateFinish(final FileOperationInfo fileOperationInfo) {
+ public void onFileTruncateFinish(final FileOperationInfo fileOperationInfo) {
// no-op
}
@Override
- public void OnFileCloseFinish(final FileOperationInfo fileOperationInfo) {
+ public void onFileCloseFinish(final FileOperationInfo fileOperationInfo) {
// no-op
}


@ -259,7 +259,7 @@ public interface EventListener {
* @param fileOperationInfo file operation info,
* contains data copied from respective native structure.
*/
- void OnFileFlushFinish(final FileOperationInfo fileOperationInfo);
+ void onFileFlushFinish(final FileOperationInfo fileOperationInfo);
/**
* A callback function for RocksDB which will be called whenever a file sync
@ -268,7 +268,7 @@ public interface EventListener {
* @param fileOperationInfo file operation info,
* contains data copied from respective native structure.
*/
- void OnFileSyncFinish(final FileOperationInfo fileOperationInfo);
+ void onFileSyncFinish(final FileOperationInfo fileOperationInfo);
/**
* A callback function for RocksDB which will be called whenever a file
@ -277,7 +277,7 @@ public interface EventListener {
* @param fileOperationInfo file operation info,
* contains data copied from respective native structure.
*/
- void OnFileRangeSyncFinish(final FileOperationInfo fileOperationInfo);
+ void onFileRangeSyncFinish(final FileOperationInfo fileOperationInfo);
/**
* A callback function for RocksDB which will be called whenever a file
@ -286,7 +286,7 @@ public interface EventListener {
* @param fileOperationInfo file operation info,
* contains data copied from respective native structure.
*/
- void OnFileTruncateFinish(final FileOperationInfo fileOperationInfo);
+ void onFileTruncateFinish(final FileOperationInfo fileOperationInfo);
/**
* A callback function for RocksDB which will be called whenever a file close
@ -295,7 +295,7 @@ public interface EventListener {
* @param fileOperationInfo file operation info,
* contains data copied from respective native structure.
*/
- void OnFileCloseFinish(final FileOperationInfo fileOperationInfo);
+ void onFileCloseFinish(final FileOperationInfo fileOperationInfo);
/**
* If true, the {@link #onFileReadFinish(FileOperationInfo)}


@ -1,3 +1,7 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
package org.rocksdb;
import static org.assertj.core.api.Assertions.assertThat;
@ -5,15 +9,13 @@ import static org.junit.Assert.*;
import java.nio.file.Path;
import java.nio.file.Paths;
- import java.util.Collections;
- import java.util.Map;
- import java.util.Random;
- import java.util.UUID;
+ import java.util.*;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
+ import org.rocksdb.AbstractEventListener.EnabledEventCallback;
import org.rocksdb.test.TestableEventListener;
public class EventListenerTest {
@ -41,10 +43,8 @@ public class EventListenerTest {
@Test
public void onFlushCompleted() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onFlushCompletedListener = new AbstractEventListener() {
+ final AbstractEventListener onFlushCompletedListener = new AbstractEventListener() {
@Override
public void onFlushCompleted(final RocksDB rocksDb, final FlushJobInfo flushJobInfo) {
assertNotNull(flushJobInfo.getColumnFamilyName());
@ -57,10 +57,8 @@ public class EventListenerTest {
@Test
public void onFlushBegin() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onFlushBeginListener = new AbstractEventListener() {
+ final AbstractEventListener onFlushBeginListener = new AbstractEventListener() {
@Override
public void onFlushBegin(final RocksDB rocksDb, final FlushJobInfo flushJobInfo) {
assertNotNull(flushJobInfo.getColumnFamilyName());
@ -72,7 +70,7 @@ public class EventListenerTest {
}
void deleteTableFile(final AbstractEventListener el, final AtomicBoolean wasCbCalled)
- throws RocksDBException, InterruptedException {
+ throws RocksDBException {
try (final Options opt =
new Options().setCreateIfMissing(true).setListeners(Collections.singletonList(el));
final RocksDB db = RocksDB.open(opt, dbFolder.getRoot().getAbsolutePath())) {
@ -80,7 +78,7 @@ public class EventListenerTest {
final byte[] value = new byte[24];
rand.nextBytes(value);
db.put("testKey".getBytes(), value);
- RocksDB.LiveFiles liveFiles = db.getLiveFiles();
+ final RocksDB.LiveFiles liveFiles = db.getLiveFiles();
assertNotNull(liveFiles);
assertNotNull(liveFiles.files);
assertFalse(liveFiles.files.isEmpty());
@ -91,10 +89,8 @@ public class EventListenerTest {
@Test
public void onTableFileDeleted() throws RocksDBException, InterruptedException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onTableFileDeletedListener = new AbstractEventListener() {
+ final AbstractEventListener onTableFileDeletedListener = new AbstractEventListener() {
@Override
public void onTableFileDeleted(final TableFileDeletionInfo tableFileDeletionInfo) {
assertNotNull(tableFileDeletionInfo.getDbName());
@ -120,10 +116,8 @@ public class EventListenerTest {
@Test
public void onCompactionBegin() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onCompactionBeginListener = new AbstractEventListener() {
+ final AbstractEventListener onCompactionBeginListener = new AbstractEventListener() {
@Override
public void onCompactionBegin(final RocksDB db, final CompactionJobInfo compactionJobInfo) {
assertEquals(CompactionReason.kManualCompaction, compactionJobInfo.compactionReason());
@ -135,10 +129,8 @@ public class EventListenerTest {
@Test
public void onCompactionCompleted() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onCompactionCompletedListener = new AbstractEventListener() {
+ final AbstractEventListener onCompactionCompletedListener = new AbstractEventListener() {
@Override
public void onCompactionCompleted(
final RocksDB db, final CompactionJobInfo compactionJobInfo) {
@ -151,10 +143,8 @@ public class EventListenerTest {
@Test
public void onTableFileCreated() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onTableFileCreatedListener = new AbstractEventListener() {
+ final AbstractEventListener onTableFileCreatedListener = new AbstractEventListener() {
@Override
public void onTableFileCreated(final TableFileCreationInfo tableFileCreationInfo) {
assertEquals(TableFileCreationReason.FLUSH, tableFileCreationInfo.getReason());
@ -166,10 +156,8 @@ public class EventListenerTest {
@Test
public void onTableFileCreationStarted() throws RocksDBException {
- // Callback is synchronous, but we need mutable container to update boolean value in other
- // method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
- AbstractEventListener onTableFileCreationStartedListener = new AbstractEventListener() {
+ final AbstractEventListener onTableFileCreationStartedListener = new AbstractEventListener() {
@Override
public void onTableFileCreationStarted(
final TableFileCreationBriefInfo tableFileCreationBriefInfo) {
@@ -197,10 +185,8 @@ public class EventListenerTest {
@Test
public void onColumnFamilyHandleDeletionStarted() throws RocksDBException {
// Callback is synchronous, but we need mutable container to update boolean value in other
// method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
final AbstractEventListener onColumnFamilyHandleDeletionStartedListener =
new AbstractEventListener() {
@Override
public void onColumnFamilyHandleDeletionStarted(
@@ -218,9 +204,9 @@ public class EventListenerTest {
new Options().setCreateIfMissing(true).setListeners(Collections.singletonList(el));
final RocksDB db = RocksDB.open(opt, dbFolder.getRoot().getAbsolutePath())) {
assertThat(db).isNotNull();
final String uuid = UUID.randomUUID().toString();
final SstFileWriter sstFileWriter = new SstFileWriter(new EnvOptions(), opt);
final Path externalFilePath = Paths.get(db.getName(), uuid);
sstFileWriter.open(externalFilePath.toString());
sstFileWriter.put("testKey".getBytes(), uuid.getBytes());
sstFileWriter.finish();
@@ -232,10 +218,8 @@ public class EventListenerTest {
@Test
public void onExternalFileIngested() throws RocksDBException {
// Callback is synchronous, but we need mutable container to update boolean value in other
// method
final AtomicBoolean wasCbCalled = new AtomicBoolean();
final AbstractEventListener onExternalFileIngestedListener = new AbstractEventListener() {
@Override
public void onExternalFileIngested(
final RocksDB db, final ExternalFileIngestionInfo externalFileIngestionInfo) {
@@ -248,8 +232,8 @@ public class EventListenerTest {
@Test
public void testAllCallbacksInvocation() {
final int TEST_INT_VAL = -1;
final long TEST_LONG_VAL = -1;
// Expected test data objects
final Map<String, String> userCollectedPropertiesTestData =
Collections.singletonMap("key", "value");
@@ -263,18 +247,18 @@ public class EventListenerTest {
"columnFamilyName".getBytes(), "filterPolicyName", "comparatorName", "mergeOperatorName",
"prefixExtractorName", "propertyCollectorsNames", "compressionName",
userCollectedPropertiesTestData, readablePropertiesTestData, propertiesOffsetsTestData);
final FlushJobInfo flushJobInfoTestData = new FlushJobInfo(Integer.MAX_VALUE,
"testColumnFamily", "/file/path", TEST_LONG_VAL, Integer.MAX_VALUE, true, true,
TEST_LONG_VAL, TEST_LONG_VAL, tablePropertiesTestData, (byte) 0x0a);
final Status statusTestData = new Status(Status.Code.Incomplete, Status.SubCode.NoSpace, null);
final TableFileDeletionInfo tableFileDeletionInfoTestData =
new TableFileDeletionInfo("dbName", "/file/path", Integer.MAX_VALUE, statusTestData);
final TableFileCreationInfo tableFileCreationInfoTestData =
new TableFileCreationInfo(TEST_LONG_VAL, tablePropertiesTestData, statusTestData, "dbName",
"columnFamilyName", "/file/path", Integer.MAX_VALUE, (byte) 0x03);
final TableFileCreationBriefInfo tableFileCreationBriefInfoTestData =
new TableFileCreationBriefInfo(
"dbName", "columnFamilyName", "/file/path", Integer.MAX_VALUE, (byte) 0x03);
final MemTableInfo memTableInfoTestData = new MemTableInfo(
"columnFamilyName", TEST_LONG_VAL, TEST_LONG_VAL, TEST_LONG_VAL, TEST_LONG_VAL);
final FileOperationInfo fileOperationInfoTestData = new FileOperationInfo("/file/path",
@@ -285,305 +269,496 @@ public class EventListenerTest {
new ExternalFileIngestionInfo("columnFamilyName", "/external/file/path",
"/internal/file/path", TEST_LONG_VAL, tablePropertiesTestData);
final CapturingTestableEventListener listener = new CapturingTestableEventListener() {
@Override
public void onFlushCompleted(final RocksDB db, final FlushJobInfo flushJobInfo) {
super.onFlushCompleted(db, flushJobInfo);
assertEquals(flushJobInfoTestData, flushJobInfo);
}
@Override
public void onFlushBegin(final RocksDB db, final FlushJobInfo flushJobInfo) {
super.onFlushBegin(db, flushJobInfo);
assertEquals(flushJobInfoTestData, flushJobInfo);
}
@Override
public void onTableFileDeleted(final TableFileDeletionInfo tableFileDeletionInfo) {
super.onTableFileDeleted(tableFileDeletionInfo);
assertEquals(tableFileDeletionInfoTestData, tableFileDeletionInfo);
}
@Override
public void onCompactionBegin(final RocksDB db, final CompactionJobInfo compactionJobInfo) {
super.onCompactionBegin(db, compactionJobInfo);
assertArrayEquals(
"compactionColumnFamily".getBytes(), compactionJobInfo.columnFamilyName());
assertEquals(statusTestData, compactionJobInfo.status());
assertEquals(TEST_LONG_VAL, compactionJobInfo.threadId());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.jobId());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.baseInputLevel());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.outputLevel());
assertEquals(Collections.singletonList("inputFile.sst"), compactionJobInfo.inputFiles());
assertEquals(Collections.singletonList("outputFile.sst"), compactionJobInfo.outputFiles());
assertEquals(Collections.singletonMap("tableProperties", tablePropertiesTestData),
compactionJobInfo.tableProperties());
assertEquals(CompactionReason.kFlush, compactionJobInfo.compactionReason());
assertEquals(CompressionType.SNAPPY_COMPRESSION, compactionJobInfo.compression());
}
@Override
public void onCompactionCompleted(
final RocksDB db, final CompactionJobInfo compactionJobInfo) {
super.onCompactionCompleted(db, compactionJobInfo);
assertArrayEquals(
"compactionColumnFamily".getBytes(), compactionJobInfo.columnFamilyName());
assertEquals(statusTestData, compactionJobInfo.status());
assertEquals(TEST_LONG_VAL, compactionJobInfo.threadId());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.jobId());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.baseInputLevel());
assertEquals(Integer.MAX_VALUE, compactionJobInfo.outputLevel());
assertEquals(Collections.singletonList("inputFile.sst"), compactionJobInfo.inputFiles());
assertEquals(Collections.singletonList("outputFile.sst"), compactionJobInfo.outputFiles());
assertEquals(Collections.singletonMap("tableProperties", tablePropertiesTestData),
compactionJobInfo.tableProperties());
assertEquals(CompactionReason.kFlush, compactionJobInfo.compactionReason());
assertEquals(CompressionType.SNAPPY_COMPRESSION, compactionJobInfo.compression());
}
@Override
public void onTableFileCreated(final TableFileCreationInfo tableFileCreationInfo) {
super.onTableFileCreated(tableFileCreationInfo);
assertEquals(tableFileCreationInfoTestData, tableFileCreationInfo);
}
@Override
public void onTableFileCreationStarted(
final TableFileCreationBriefInfo tableFileCreationBriefInfo) {
super.onTableFileCreationStarted(tableFileCreationBriefInfo);
assertEquals(tableFileCreationBriefInfoTestData, tableFileCreationBriefInfo);
}
@Override
public void onMemTableSealed(final MemTableInfo memTableInfo) {
super.onMemTableSealed(memTableInfo);
assertEquals(memTableInfoTestData, memTableInfo);
}
@Override
public void onColumnFamilyHandleDeletionStarted(final ColumnFamilyHandle columnFamilyHandle) {
super.onColumnFamilyHandleDeletionStarted(columnFamilyHandle);
}
@Override
public void onExternalFileIngested(
final RocksDB db, final ExternalFileIngestionInfo externalFileIngestionInfo) {
super.onExternalFileIngested(db, externalFileIngestionInfo);
assertEquals(externalFileIngestionInfoTestData, externalFileIngestionInfo);
}
@Override
public void onBackgroundError(
final BackgroundErrorReason backgroundErrorReason, final Status backgroundError) {
super.onBackgroundError(backgroundErrorReason, backgroundError);
}
@Override
public void onStallConditionsChanged(final WriteStallInfo writeStallInfo) {
super.onStallConditionsChanged(writeStallInfo);
assertEquals(writeStallInfoTestData, writeStallInfo);
}
@Override
public void onFileReadFinish(final FileOperationInfo fileOperationInfo) {
super.onFileReadFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public void onFileWriteFinish(final FileOperationInfo fileOperationInfo) {
super.onFileWriteFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public void onFileFlushFinish(final FileOperationInfo fileOperationInfo) {
super.onFileFlushFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public void onFileSyncFinish(final FileOperationInfo fileOperationInfo) {
super.onFileSyncFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public void onFileRangeSyncFinish(final FileOperationInfo fileOperationInfo) {
super.onFileRangeSyncFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public void onFileTruncateFinish(final FileOperationInfo fileOperationInfo) {
assertEquals(fileOperationInfoTestData, fileOperationInfo);
super.onFileTruncateFinish(fileOperationInfo);
}
@Override
public void onFileCloseFinish(final FileOperationInfo fileOperationInfo) {
super.onFileCloseFinish(fileOperationInfo);
assertEquals(fileOperationInfoTestData, fileOperationInfo);
}
@Override
public boolean shouldBeNotifiedOnFileIO() {
super.shouldBeNotifiedOnFileIO();
return false;
}
@Override
public boolean onErrorRecoveryBegin(
final BackgroundErrorReason backgroundErrorReason, final Status backgroundError) {
super.onErrorRecoveryBegin(backgroundErrorReason, backgroundError);
assertEquals(BackgroundErrorReason.FLUSH, backgroundErrorReason);
assertEquals(statusTestData, backgroundError);
return true;
}
@Override
public void onErrorRecoveryCompleted(final Status oldBackgroundError) {
super.onErrorRecoveryCompleted(oldBackgroundError);
assertEquals(statusTestData, oldBackgroundError);
}
};
// test action
listener.invokeAllCallbacks();
// assert
assertAllEventsCalled(listener);
}
@Test
public void testEnabledCallbacks() {
final EnabledEventCallback enabledEvents[] = {
EnabledEventCallback.ON_MEMTABLE_SEALED, EnabledEventCallback.ON_ERROR_RECOVERY_COMPLETED};
final CapturingTestableEventListener listener =
new CapturingTestableEventListener(enabledEvents);
// test action
listener.invokeAllCallbacks();
// assert
assertEventsCalled(listener, enabledEvents);
}
private static void assertAllEventsCalled(
final CapturingTestableEventListener capturingTestableEventListener) {
assertEventsCalled(capturingTestableEventListener, EnumSet.allOf(EnabledEventCallback.class));
}
private static void assertEventsCalled(
final CapturingTestableEventListener capturingTestableEventListener,
final EnabledEventCallback[] expected) {
assertEventsCalled(capturingTestableEventListener, EnumSet.copyOf(Arrays.asList(expected)));
}
private static void assertEventsCalled(
final CapturingTestableEventListener capturingTestableEventListener,
final EnumSet<EnabledEventCallback> expected) {
final ListenerEvents capturedEvents = capturingTestableEventListener.capturedListenerEvents;
if (expected.contains(EnabledEventCallback.ON_FLUSH_COMPLETED)) {
assertTrue("onFlushCompleted was not called", capturedEvents.flushCompleted);
} else {
assertFalse("onFlushCompleted was called", capturedEvents.flushCompleted);
}
if (expected.contains(EnabledEventCallback.ON_FLUSH_BEGIN)) {
assertTrue("onFlushBegin was not called", capturedEvents.flushBegin);
} else {
assertFalse("onFlushBegin was called", capturedEvents.flushBegin);
}
if (expected.contains(EnabledEventCallback.ON_TABLE_FILE_DELETED)) {
assertTrue("onTableFileDeleted was not called", capturedEvents.tableFileDeleted);
} else {
assertFalse("onTableFileDeleted was called", capturedEvents.tableFileDeleted);
}
if (expected.contains(EnabledEventCallback.ON_COMPACTION_BEGIN)) {
assertTrue("onCompactionBegin was not called", capturedEvents.compactionBegin);
} else {
assertFalse("onCompactionBegin was called", capturedEvents.compactionBegin);
}
if (expected.contains(EnabledEventCallback.ON_COMPACTION_COMPLETED)) {
assertTrue("onCompactionCompleted was not called", capturedEvents.compactionCompleted);
} else {
assertFalse("onCompactionCompleted was called", capturedEvents.compactionCompleted);
}
if (expected.contains(EnabledEventCallback.ON_TABLE_FILE_CREATED)) {
assertTrue("onTableFileCreated was not called", capturedEvents.tableFileCreated);
} else {
assertFalse("onTableFileCreated was called", capturedEvents.tableFileCreated);
}
if (expected.contains(EnabledEventCallback.ON_TABLE_FILE_CREATION_STARTED)) {
assertTrue(
"onTableFileCreationStarted was not called", capturedEvents.tableFileCreationStarted);
} else {
assertFalse("onTableFileCreationStarted was called", capturedEvents.tableFileCreationStarted);
}
if (expected.contains(EnabledEventCallback.ON_MEMTABLE_SEALED)) {
assertTrue("onMemTableSealed was not called", capturedEvents.memTableSealed);
} else {
assertFalse("onMemTableSealed was called", capturedEvents.memTableSealed);
}
if (expected.contains(EnabledEventCallback.ON_COLUMN_FAMILY_HANDLE_DELETION_STARTED)) {
assertTrue("onColumnFamilyHandleDeletionStarted was not called",
capturedEvents.columnFamilyHandleDeletionStarted);
} else {
assertFalse("onColumnFamilyHandleDeletionStarted was called",
capturedEvents.columnFamilyHandleDeletionStarted);
}
if (expected.contains(EnabledEventCallback.ON_EXTERNAL_FILE_INGESTED)) {
assertTrue("onExternalFileIngested was not called", capturedEvents.externalFileIngested);
} else {
assertFalse("onExternalFileIngested was called", capturedEvents.externalFileIngested);
}
if (expected.contains(EnabledEventCallback.ON_BACKGROUND_ERROR)) {
assertTrue("onBackgroundError was not called", capturedEvents.backgroundError);
} else {
assertFalse("onBackgroundError was called", capturedEvents.backgroundError);
}
if (expected.contains(EnabledEventCallback.ON_STALL_CONDITIONS_CHANGED)) {
assertTrue("onStallConditionsChanged was not called", capturedEvents.stallConditionsChanged);
} else {
assertFalse("onStallConditionsChanged was called", capturedEvents.stallConditionsChanged);
}
if (expected.contains(EnabledEventCallback.ON_FILE_READ_FINISH)) {
assertTrue("onFileReadFinish was not called", capturedEvents.fileReadFinish);
} else {
assertFalse("onFileReadFinish was called", capturedEvents.fileReadFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_WRITE_FINISH)) {
assertTrue("onFileWriteFinish was not called", capturedEvents.fileWriteFinish);
} else {
assertFalse("onFileWriteFinish was called", capturedEvents.fileWriteFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_FLUSH_FINISH)) {
assertTrue("onFileFlushFinish was not called", capturedEvents.fileFlushFinish);
} else {
assertFalse("onFileFlushFinish was called", capturedEvents.fileFlushFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_SYNC_FINISH)) {
assertTrue("onFileSyncFinish was not called", capturedEvents.fileSyncFinish);
} else {
assertFalse("onFileSyncFinish was called", capturedEvents.fileSyncFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_RANGE_SYNC_FINISH)) {
assertTrue("onFileRangeSyncFinish was not called", capturedEvents.fileRangeSyncFinish);
} else {
assertFalse("onFileRangeSyncFinish was called", capturedEvents.fileRangeSyncFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_TRUNCATE_FINISH)) {
assertTrue("onFileTruncateFinish was not called", capturedEvents.fileTruncateFinish);
} else {
assertFalse("onFileTruncateFinish was called", capturedEvents.fileTruncateFinish);
}
if (expected.contains(EnabledEventCallback.ON_FILE_CLOSE_FINISH)) {
assertTrue("onFileCloseFinish was not called", capturedEvents.fileCloseFinish);
} else {
assertFalse("onFileCloseFinish was called", capturedEvents.fileCloseFinish);
}
if (expected.contains(EnabledEventCallback.SHOULD_BE_NOTIFIED_ON_FILE_IO)) {
assertTrue(
"shouldBeNotifiedOnFileIO was not called", capturedEvents.shouldBeNotifiedOnFileIO);
} else {
assertFalse("shouldBeNotifiedOnFileIO was called", capturedEvents.shouldBeNotifiedOnFileIO);
}
if (expected.contains(EnabledEventCallback.ON_ERROR_RECOVERY_BEGIN)) {
assertTrue("onErrorRecoveryBegin was not called", capturedEvents.errorRecoveryBegin);
} else {
assertFalse("onErrorRecoveryBegin was called", capturedEvents.errorRecoveryBegin);
}
if (expected.contains(EnabledEventCallback.ON_ERROR_RECOVERY_COMPLETED)) {
assertTrue("onErrorRecoveryCompleted was not called", capturedEvents.errorRecoveryCompleted);
} else {
assertFalse("onErrorRecoveryCompleted was called", capturedEvents.errorRecoveryCompleted);
}
}
/**
* Members are volatile as they may be written
* and read by different threads.
*/
private static class ListenerEvents {
volatile boolean flushCompleted;
volatile boolean flushBegin;
volatile boolean tableFileDeleted;
volatile boolean compactionBegin;
volatile boolean compactionCompleted;
volatile boolean tableFileCreated;
volatile boolean tableFileCreationStarted;
volatile boolean memTableSealed;
volatile boolean columnFamilyHandleDeletionStarted;
volatile boolean externalFileIngested;
volatile boolean backgroundError;
volatile boolean stallConditionsChanged;
volatile boolean fileReadFinish;
volatile boolean fileWriteFinish;
volatile boolean fileFlushFinish;
volatile boolean fileSyncFinish;
volatile boolean fileRangeSyncFinish;
volatile boolean fileTruncateFinish;
volatile boolean fileCloseFinish;
volatile boolean shouldBeNotifiedOnFileIO;
volatile boolean errorRecoveryBegin;
volatile boolean errorRecoveryCompleted;
}
private static class CapturingTestableEventListener extends TestableEventListener {
final ListenerEvents capturedListenerEvents = new ListenerEvents();
public CapturingTestableEventListener() {}
public CapturingTestableEventListener(final EnabledEventCallback... enabledEventCallbacks) {
super(enabledEventCallbacks);
}
@Override
public void onFlushCompleted(final RocksDB db, final FlushJobInfo flushJobInfo) {
capturedListenerEvents.flushCompleted = true;
}
@Override
public void onFlushBegin(final RocksDB db, final FlushJobInfo flushJobInfo) {
capturedListenerEvents.flushBegin = true;
}
@Override
public void onTableFileDeleted(final TableFileDeletionInfo tableFileDeletionInfo) {
capturedListenerEvents.tableFileDeleted = true;
}
@Override
public void onCompactionBegin(final RocksDB db, final CompactionJobInfo compactionJobInfo) {
capturedListenerEvents.compactionBegin = true;
}
@Override
public void onCompactionCompleted(final RocksDB db, final CompactionJobInfo compactionJobInfo) {
capturedListenerEvents.compactionCompleted = true;
}
@Override
public void onTableFileCreated(final TableFileCreationInfo tableFileCreationInfo) {
capturedListenerEvents.tableFileCreated = true;
}
@Override
public void onTableFileCreationStarted(
final TableFileCreationBriefInfo tableFileCreationBriefInfo) {
capturedListenerEvents.tableFileCreationStarted = true;
}
@Override
public void onMemTableSealed(final MemTableInfo memTableInfo) {
capturedListenerEvents.memTableSealed = true;
}
@Override
public void onColumnFamilyHandleDeletionStarted(final ColumnFamilyHandle columnFamilyHandle) {
capturedListenerEvents.columnFamilyHandleDeletionStarted = true;
}
@Override
public void onExternalFileIngested(
final RocksDB db, final ExternalFileIngestionInfo externalFileIngestionInfo) {
capturedListenerEvents.externalFileIngested = true;
}
@Override
public void onBackgroundError(
final BackgroundErrorReason backgroundErrorReason, final Status backgroundError) {
capturedListenerEvents.backgroundError = true;
}
@Override
public void onStallConditionsChanged(final WriteStallInfo writeStallInfo) {
capturedListenerEvents.stallConditionsChanged = true;
}
@Override
public void onFileReadFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileReadFinish = true;
}
@Override
public void onFileWriteFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileWriteFinish = true;
}
@Override
public void onFileFlushFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileFlushFinish = true;
}
@Override
public void onFileSyncFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileSyncFinish = true;
}
@Override
public void onFileRangeSyncFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileRangeSyncFinish = true;
}
@Override
public void onFileTruncateFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileTruncateFinish = true;
}
@Override
public void onFileCloseFinish(final FileOperationInfo fileOperationInfo) {
capturedListenerEvents.fileCloseFinish = true;
}
@Override
public boolean shouldBeNotifiedOnFileIO() {
capturedListenerEvents.shouldBeNotifiedOnFileIO = true;
return false;
}
@Override
public boolean onErrorRecoveryBegin(
final BackgroundErrorReason backgroundErrorReason, final Status backgroundError) {
capturedListenerEvents.errorRecoveryBegin = true;
return true;
}
@Override
public void onErrorRecoveryCompleted(final Status oldBackgroundError) {
capturedListenerEvents.errorRecoveryCompleted = true;
}
}
}

View File

@@ -1,3 +1,7 @@
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
package org.rocksdb.test;
import org.rocksdb.AbstractEventListener;

View File

@@ -38,25 +38,54 @@ static inline bool HasJemalloc() { return true; }
#else
// definitions for compatibility with older versions of jemalloc
#if !defined(JEMALLOC_ALLOCATOR)
#define JEMALLOC_ALLOCATOR
#endif
#if !defined(JEMALLOC_RESTRICT_RETURN)
#define JEMALLOC_RESTRICT_RETURN
#endif
#if !defined(JEMALLOC_NOTHROW)
#define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)
#endif
#if !defined(JEMALLOC_ALLOC_SIZE)
#ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE
#define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))
#else
#define JEMALLOC_ALLOC_SIZE(s)
#endif
#endif
// Declare non-standard jemalloc APIs as weak symbols. We can null-check these
// symbols to detect whether jemalloc is linked with the binary.
extern "C" JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *
mallocx(size_t, int) JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1)
__attribute__((__weak__));
extern "C" JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN void JEMALLOC_NOTHROW *
rallocx(void *, size_t, int) JEMALLOC_ALLOC_SIZE(2) __attribute__((__weak__));
extern "C" size_t JEMALLOC_NOTHROW xallocx(void *, size_t, size_t, int)
__attribute__((__weak__));
extern "C" size_t JEMALLOC_NOTHROW sallocx(const void *, int)
JEMALLOC_ATTR(pure) __attribute__((__weak__));
extern "C" void JEMALLOC_NOTHROW dallocx(void *, int) __attribute__((__weak__));
extern "C" void JEMALLOC_NOTHROW sdallocx(void *, size_t, int)
__attribute__((__weak__));
extern "C" size_t JEMALLOC_NOTHROW nallocx(size_t, int) JEMALLOC_ATTR(pure)
__attribute__((__weak__));
extern "C" int JEMALLOC_NOTHROW mallctl(const char *, void *, size_t *, void *,
size_t) __attribute__((__weak__));
extern "C" int JEMALLOC_NOTHROW mallctlnametomib(const char *, size_t *,
size_t *)
__attribute__((__weak__));
extern "C" int JEMALLOC_NOTHROW mallctlbymib(const size_t *, size_t, void *,
size_t *, void *, size_t)
__attribute__((__weak__));
extern "C" void JEMALLOC_NOTHROW
malloc_stats_print(void (*)(void *, const char *), void *, const char *)
__attribute__((__weak__));
extern "C" size_t JEMALLOC_NOTHROW
malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *) JEMALLOC_CXX_THROW
__attribute__((__weak__));
// Check if Jemalloc is linked with the binary. Note the main program might be
// using a different memory allocator even if this method returns true.
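The null-check idea behind these declarations can be sketched in isolation. The following is an illustrative standalone example, not the RocksDB header itself: it declares a single jemalloc entry point (`mallocx`) as a weak symbol and reports whether the linker resolved it.

```cpp
#include <cassert>
#include <cstddef>

// Declare one non-standard jemalloc API as a weak symbol. If jemalloc is
// not linked into the binary, the linker resolves this reference to null
// instead of reporting an undefined symbol.
extern "C" void *mallocx(size_t, int) __attribute__((__nothrow__, __weak__));

// jemalloc is available only when the weak symbol found a real definition.
inline bool HasJemallocLinked() { return mallocx != nullptr; }
```

In a build that does not link jemalloc, `HasJemallocLinked()` returns false; linking with `-ljemalloc` makes the weak reference resolve and the check flips to true.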

View File

@@ -373,7 +373,8 @@ static std::unordered_map<std::string, OptionTypeInfo>
// generated by affected releases before the fix, we need to
// manually parse read_amp_bytes_per_bit with this special hack.
uint64_t read_amp_bytes_per_bit = ParseUint64(value);
*(reinterpret_cast<uint32_t*>(addr)) =
static_cast<uint32_t>(read_amp_bytes_per_bit);
return Status::OK();
}}},
{"enable_index_compression",

View File

@@ -1821,13 +1821,21 @@ void BlockBasedTable::RetrieveMultipleBlocks(
if (s.ok()) {
// When the blocks share the same underlying buffer (scratch or direct io
// buffer), we may need to manually copy the block into heap if the raw
// block has to be inserted into a cache. That falls into the following
// cases:
// 1. Raw block is not compressed, it needs to be inserted into the
// uncompressed block cache if there is one
// 2. If the raw block is compressed, it needs to be inserted into the
// compressed block cache if there is one
//
// In all other cases, the raw block is either uncompressed into a heap
// buffer or there is no cache at all.
CompressionType compression_type =
raw_block_contents.get_compression_type();
if (use_shared_buffer && (compression_type == kNoCompression ||
(compression_type != kNoCompression &&
rep_->table_options.block_cache_compressed))) {
Slice raw = Slice(req.result.data() + req_offset, block_size(handle));
raw_block_contents = BlockContents(
CopyBufferToHeap(GetMemoryAllocator(rep_->table_options), raw),
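The two cases in the comment above reduce to a small predicate. The following is a simplified, hypothetical sketch of that condition; the names and signature are illustrative, not RocksDB's actual code:

```cpp
#include <cassert>

enum CompressionType { kNoCompression, kSnappyCompression };

// Decide whether a block read into a shared (scratch / direct-IO) buffer
// must be copied onto the heap before caching:
//  1. an uncompressed raw block is copied so it can go into the block cache;
//  2. a compressed raw block is copied only when a compressed block cache
//     is configured.
bool NeedsHeapCopy(bool use_shared_buffer, CompressionType type,
                   bool has_compressed_block_cache) {
  if (!use_shared_buffer) {
    return false;  // the block already owns its buffer
  }
  if (type == kNoCompression) {
    return true;  // case 1
  }
  return has_compressed_block_cache;  // case 2
}
```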

View File

@@ -41,6 +41,7 @@ extern const uint64_t kPlainTableMagicNumber;
const uint64_t kLegacyPlainTableMagicNumber = 0;
const uint64_t kPlainTableMagicNumber = 0;
#endif
const char* kHostnameForDbHostId = "__hostname__";
bool ShouldReportDetailedTime(Env* env, Statistics* stats) {
return env != nullptr && stats != nullptr &&

View File

@@ -22,6 +22,9 @@ namespace ROCKSDB_NAMESPACE {
// A Timer class to handle repeated work.
//
// `Start()` and `Shutdown()` are currently not thread-safe. The client must
// serialize calls to these two member functions.
//
// A single timer instance can handle multiple functions via a single thread.
// It is better to leave long running work to a dedicated thread pool.
//
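The serialization requirement stated above falls on the client. A minimal sketch of that workaround, assuming a hypothetical `PlainTimer` stand-in (not RocksDB's `Timer`): every lifecycle call goes through one mutex, so `Start()` and `Shutdown()` can never interleave.

```cpp
#include <cassert>
#include <mutex>

// Hypothetical stand-in for a timer whose Start()/Shutdown() are not
// thread-safe on their own.
class PlainTimer {
 public:
  void Start() { running_ = true; }
  void Shutdown() { running_ = false; }
  bool IsRunning() const { return running_; }

 private:
  bool running_ = false;
};

// Client-side serialization: all lifecycle calls take the same mutex.
class SerializedTimer {
 public:
  void Start() {
    std::lock_guard<std::mutex> lock(mu_);
    timer_.Start();
  }
  void Shutdown() {
    std::lock_guard<std::mutex> lock(mu_);
    timer_.Shutdown();
  }
  bool IsRunning() {
    std::lock_guard<std::mutex> lock(mu_);
    return timer_.IsRunning();
  }

 private:
  std::mutex mu_;
  PlainTimer timer_;
};
```

This mirrors the commit's approach of having `PeriodicWorkScheduler` guard the timer externally until `Timer` itself becomes thread-safe.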

View File

@@ -46,6 +46,22 @@ class OptimisticTransactionDBImpl : public OptimisticTransactionDB {
const OptimisticTransactionOptions& txn_options,
Transaction* old_txn) override;
// Transactional `DeleteRange()` is not yet supported.
virtual Status DeleteRange(const WriteOptions&, ColumnFamilyHandle*,
const Slice&, const Slice&) override {
return Status::NotSupported();
}
// Range deletions must also not be snuck in via `WriteBatch`es, as they
// are incompatible with `OptimisticTransactionDB`.
virtual Status Write(const WriteOptions& write_opts,
WriteBatch* batch) override {
if (batch->HasDeleteRange()) {
return Status::NotSupported();
}
return OptimisticTransactionDB::Write(write_opts, batch);
}
size_t GetLockBucketsSize() const { return bucketed_locks_.size(); }
OccValidationPolicy GetValidatePolicy() const { return validate_policy_; }


@@ -1098,6 +1098,17 @@ TEST_P(OptimisticTransactionTest, IteratorTest) {
delete txn;
}
TEST_P(OptimisticTransactionTest, DeleteRangeSupportTest) {
// `OptimisticTransactionDB` does not allow range deletion in any API.
ASSERT_TRUE(
txn_db
->DeleteRange(WriteOptions(), txn_db->DefaultColumnFamily(), "a", "b")
.IsNotSupported());
WriteBatch wb;
ASSERT_OK(wb.DeleteRange("a", "b"));
ASSERT_NOK(txn_db->Write(WriteOptions(), &wb));
}
TEST_P(OptimisticTransactionTest, SavepointTest) {
WriteOptions write_options;
ReadOptions read_options, snapshot_read_options;


@@ -4835,6 +4835,56 @@ TEST_P(TransactionTest, MergeTest) {
ASSERT_EQ("a,3", value);
}
TEST_P(TransactionTest, DeleteRangeSupportTest) {
// The `DeleteRange()` API is banned everywhere.
ASSERT_TRUE(
db->DeleteRange(WriteOptions(), db->DefaultColumnFamily(), "a", "b")
.IsNotSupported());
// But range deletions can be added via the `Write()` API by specifying the
// proper flags to promise there are no conflicts according to the DB type
// (see `TransactionDB::DeleteRange()` API doc for details).
for (bool skip_concurrency_control : {false, true}) {
for (bool skip_duplicate_key_check : {false, true}) {
ASSERT_OK(db->Put(WriteOptions(), "a", "val"));
WriteBatch wb;
ASSERT_OK(wb.DeleteRange("a", "b"));
TransactionDBWriteOptimizations flags;
flags.skip_concurrency_control = skip_concurrency_control;
flags.skip_duplicate_key_check = skip_duplicate_key_check;
Status s = db->Write(WriteOptions(), flags, &wb);
std::string value;
switch (txn_db_options.write_policy) {
case WRITE_COMMITTED:
if (skip_concurrency_control) {
ASSERT_OK(s);
ASSERT_TRUE(db->Get(ReadOptions(), "a", &value).IsNotFound());
} else {
ASSERT_NOK(s);
ASSERT_OK(db->Get(ReadOptions(), "a", &value));
}
break;
case WRITE_PREPARED:
// Intentional fall-through
case WRITE_UNPREPARED:
if (skip_concurrency_control && skip_duplicate_key_check) {
ASSERT_OK(s);
ASSERT_TRUE(db->Get(ReadOptions(), "a", &value).IsNotFound());
} else {
ASSERT_NOK(s);
ASSERT_OK(db->Get(ReadOptions(), "a", &value));
}
break;
}
// Without any promises from the user, range deletions via other
// `Write()` APIs are still banned.
ASSERT_OK(db->Put(WriteOptions(), "a", "val"));
ASSERT_NOK(db->Write(WriteOptions(), &wb));
ASSERT_OK(db->Get(ReadOptions(), "a", &value));
}
}
}
TEST_P(TransactionTest, DeferSnapshotTest) {
WriteOptions write_options;
ReadOptions read_options;


@@ -157,7 +157,9 @@ Status WritePreparedTxnDB::WriteInternal(const WriteOptions& write_options_orig,
// TODO(myabandeh): add an option to allow user skipping this cost
SubBatchCounter counter(*GetCFComparatorMap());
auto s = batch->Iterate(&counter);
if (!s.ok()) {
return s;
}
batch_cnt = counter.BatchCount();
WPRecordTick(TXN_DUPLICATE_KEY_OVERHEAD);
ROCKS_LOG_DETAILS(info_log_, "Duplicate key overhead: %" PRIu64 " batches",