run 'make check's rules (and even subtests) in parallel

Summary:
When GNU parallel is available, "make check" tests are now run in parallel.
When /dev/shm is usable, we tell those tests to create temporary files therein.
Now, the longest-running single test, db_test (which is composed of hundreds of sub-tests),
is no longer run sequentially: instead, each of its sub-tests is run independently, and can
be parallelized along with all other tests. To make that process easier, this change
creates a temporary directory, "t/", in which it puts a small script for each of those
subtests. The output from each parallel-run test is now saved in t/log-TEST_NAME.
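
For illustration, each generated script is a tiny sh wrapper along these lines (the temporary
directory and the sub-test name below are placeholders; the real values come from the TMPD
setting and from whichever db_test sub-test is being wrapped):

  #!/bin/sh
  # Give this sub-test its own scratch directory under the shared temporary directory.
  d=/dev/shm/rocksdb.XXXX/DBTest.Empty
  mkdir -p $d
  # Run exactly one db_test sub-test, selected via --gtest_filter.
  TEST_TMPDIR=$d ./db_test --gtest_filter=DBTest.Empty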

When GNU parallel is not available, we run the tests in sequence, just as before.
If GNU parallel is available and you don't like the default of running one subtest
per core, you can invoke "make J=1 check" to run only one test at a time.
Beware: this will take a long time, and because it starts with the two longest-running tests, you
will see no results for quite a while. Instead, if you want to use fewer resources
but still see useful progress, try "make J=60% check". That will attempt to ensure that 60% of
the cores are occupied by test runs.
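
For example, assuming GNU parallel is installed, any of these invocations works from the top of
the source tree:

  make check           # default: J=100%, i.e., one subtest per core
  make J=60% check     # keep roughly 60% of the cores busy
  make J=1 check       # run only one test at a time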

To watch the progress of individual tests (duration, PASS/FAIL, name), run "make watch-log"
in the same directory from another window.  That will start with something like this:

and when complete should show numbers/names like this:

  Every 0.1s: sort -k7,7nr -k4,4gr LOG|perl -n -e '@a=split("\t",$_,-1); $t=$a[8]; $t =~ s,^\./,,;' -e '$t =~ s, >.*,,; chomp $t;' -e '$t =~ /.*--gtest_filter=...  Wed Apr  1 10:51:42 2015

  152.221 PASS t/DBTest.FileCreationRandomFailure
  109.280 PASS t/DBTest.EncodeDecompressedBlockSizeTest
   82.315 PASS reduce_levels_test
   77.812 PASS t/DBTest.CompactionFilterWithValueChange
   73.236 PASS backupable_db_test
   63.428 PASS deletefile_test
   57.248 PASS table_test
   55.665 PASS prefix_test
   49.816 PASS t/DBTest.RateLimitingTest
  ...

Test Plan:
Timings (measured so as to exclude compile and link times):
With this change, all tests complete in 2m40s on a system for which nproc prints 32.
Prior to this change, "make check" took 24.5 minutes on that same system, so this is
roughly a 9x speed-up.

Here are the durations (in seconds) of the longest-running tests and sub-tests:

152.435 PASS t/DBTest.FileCreationRandomFailure
107.070 PASS t/DBTest.EncodeDecompressedBlockSizeTest
 81.391 PASS ./reduce_levels_test
 71.587 PASS ./backupable_db_test
 61.746 PASS ./deletefile_test
 57.960 PASS ./table_test
 55.230 PASS ./prefix_test
 54.060 PASS t/DBTest.CompactionFilterWithValueChange
 48.873 PASS t/DBTest.RateLimitingTest
 47.569 PASS ./fault_injection_test
 46.593 PASS t/DBTest.Randomized
 42.662 PASS t/DBTest.CompactionFilter
 31.793 PASS t/DBTest.SparseMerge
 30.612 PASS t/DBTest.CompactionFilterV2
 25.891 PASS t/DBTest.GroupCommitTest
 23.863 PASS t/DBTest.DynamicLevelMaxBytesBase
 22.976 PASS ./rate_limiter_test
 18.942 PASS t/DBTest.OptimizeFiltersForHits
 16.851 PASS ./env_test
 15.399 PASS t/DBTest.CompactionFilterV2WithValueChange
 14.827 PASS t/DBTest.CompactionFilterV2NULLPrefix

Reviewers: igor, sdong, rven, yhchiang, igor.sugak

Reviewed By: igor.sugak

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D35379
Jim Meyering, 2015-04-06 12:35:25 -07:00
commit e3ee98b38a (parent a45e7581b7)
2 changed files with 159 additions and 15 deletions

.gitignore

@@ -50,3 +50,5 @@ java/**.asc
java/javadoc
scan_build_report/
t
LOG

Makefile

@@ -6,12 +6,21 @@
#-----------------------------------------------
CLEAN_FILES = # deliberately empty, so we can append below.
CFLAGS += ${EXTRA_CFLAGS}
CXXFLAGS += ${EXTRA_CXXFLAGS}
LDFLAGS += $(EXTRA_LDFLAGS)
MACHINE ?= $(shell uname -m)
ARFLAGS = rs
# Transform parallel LOG output into something more readable.
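# (GNU parallel's --joblog file is tab-separated, with columns
# Seq, Host, Starttime, JobRuntime, Send, Receive, Exitval, Signal, Command;
# below, $$a[3] is the job runtime, $$a[6] the exit status, and $$a[8] the
# command line from which the test name is extracted.)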
perl_command = perl -n \
-e '@a=split("\t",$$_,-1); $$t=$$a[8]; $$t =~ s,^\./,,;' \
-e '$$t =~ s, >.*,,; chomp $$t;' \
-e '$$t =~ /.*--gtest_filter=(.*?\.[\w\/]+)/ and $$t=$$1;' \
-e 'printf "%7.3f %s %s\n", $$a[3], $$a[6] == 0 ? "PASS" : "FAIL", $$t'
quoted_perl_command = $(subst ','\'',$(perl_command))
ifneq ($(MAKECMDGOALS),dbg)
OPT += -O2 -fno-omit-frame-pointer
ifneq ($(MACHINE),ppc64) # ppc64 doesn't support -momit-leaf-frame-pointer
@@ -64,6 +73,7 @@ AM_LINK = $(AM_V_CCLD)$(CXX) $^ $(EXEC_LDFLAGS) -o $@ $(LDFLAGS) $(COVERAGEFLAGS
dummy := $(shell (export ROCKSDB_ROOT="$(CURDIR)"; "$(CURDIR)/build_tools/build_detect_platform" "$(CURDIR)/make_config.mk"))
# this file is generated by the previous line to set build flags and sources
include make_config.mk
CLEAN_FILES += make_config.mk
ifneq ($(PLATFORM), IOS)
CFLAGS += -g
@@ -163,6 +173,7 @@ gen_build_version = \
# as a regular source file as part of the compilation process.
# One can run "strings executable_filename | grep _build_" to find
# the version of the source that we used to build the executable file.
CLEAN_FILES += util/build_version.cc
FORCE:
util/build_version.cc: FORCE
$(AM_V_GEN)rm -f $@-t
@@ -332,18 +343,149 @@ release:
coverage:
$(MAKE) clean
COVERAGEFLAGS="-fprofile-arcs -ftest-coverage" LDFLAGS+="-lgcov" $(MAKE) all check -j32
(cd coverage; ./coverage_test.sh)
cd coverage && ./coverage_test.sh
# Delete intermediate files
find . -type f -regex ".*\.\(\(gcda\)\|\(gcno\)\)" -exec rm {} \;
# Extract the names of its tests by running db_test with --gtest_list_tests.
# This filter removes the "#"-introduced comments, and expands to
# fully-qualified names by changing input like this:
#
# DBTest.
# Empty
# WriteEmptyBatch
# MultiThreaded/MultiThreadedDBTest.
# MultiThreaded/0 # GetParam() = 0
# MultiThreaded/1 # GetParam() = 1
#
# into this:
#
# DBTest.Empty
# DBTest.WriteEmptyBatch
# MultiThreaded/MultiThreadedDBTest.MultiThreaded/0
# MultiThreaded/MultiThreadedDBTest.MultiThreaded/1
#
test_names = \
./db_test --gtest_list_tests \
| perl -n \
-e 's/ *\#.*//;' \
-e '/^(\s*)(\S+)/; !$$1 and do {$$p=$$2; next};' \
-e 'print qq! $$p$$2!'
ifeq ($(MAKECMDGOALS),check)
# Use /dev/shm if it has the sticky bit set (otherwise, /tmp),
# and create a randomly-named rocksdb.XXXX directory therein.
# We'll use that directory in the "make check" rules.
ifeq ($(TMPD),)
TMPD := $(shell f=/dev/shm; test -k $$f || f=/tmp; \
perl -le 'use File::Temp "tempdir";' \
-e 'print tempdir("'$$f'/rocksdb.XXXX", CLEANUP => 0)')
endif
endif
ifneq ($(T),)
# Run all tests in parallel, accumulating per-test logs in t/log-*.
# t_sanitized is each $(T) with "-" in place of each "/".
t_sanitized = $(subst /,-,$(T))
# t_run is each sanitized name with a leading "t/".
t_run = $(patsubst %,t/%,$(t_sanitized))
# Each t_run file is a tiny generated bourne shell script
# that invokes one of db_tests's sub-tests. Why use a file
# for this? Because that makes the invocation of parallel
# below simpler, which in turn makes the parsing of parallel's
# LOG simpler (the latter is for live monitoring as parallel
# tests run).
filter = --gtest_filter=$(subst -,/,$(@F))
$(t_run): Makefile db_test
$(AM_V_GEN)mkdir -p t
$(AM_V_at)rm -f $@ $@-t
$(AM_V_at)printf '%s\n' \
'#!/bin/sh' \
'd=$(TMPD)/$(@F)' \
'mkdir -p $$d' \
'TEST_TMPDIR=$$d ./db_test $(filter)' \
> $@-t
$(AM_V_at)chmod a=rx $@-t
$(AM_V_at)mv $@-t $@
# Reorder input lines (which are one per test) so that the
# longest-running tests appear first in the output.
# Do this by prefixing each selected name with its duration,
# sort the resulting names, and remove the leading numbers.
# FIXME: the "100" we prepend is a fake time, for now.
# FIXME: squirrel away timings from each run and use them
# (when present) on subsequent runs to order these tests.
#
# Without this reordering, these two tests would happen to start only
# after almost all other tests had completed, thus adding 100 seconds
# to the duration of parallel "make check". That's the difference
# between 4 minutes (old) and 2m20s (new).
#
# 152.120 PASS t/DBTest.FileCreationRandomFailure
# 107.816 PASS t/DBTest.EncodeDecompressedBlockSizeTest
#
slow_test_regexp = \
^t/DBTest\.(?:FileCreationRandomFailure|EncodeDecompressedBlockSizeTest)$$
prioritize_long_running_tests = \
perl -pe 's,($(slow_test_regexp)),100 $$1,' \
| sort -k1,1gr \
| sed 's/^[.0-9]* //'
# "make check" uses
# Run with "make J=1 check" to disable parallelism in "make check".
# Run with "make J=200% check" to run two parallel jobs per core.
# The default is to run one job per core (J=100%).
# See "man parallel" for its "-j ..." option.
J = 100%
.PHONY: check_0
check_0: $(t_run)
$(AM_V_GEN)export TEST_TMPDIR=$(TMPD); \
printf '%s\n' '' \
'To monitor subtest <duration,pass/fail,name>,' \
' run "make watch-log" in a separate window' ''; \
test -t 1 && eta=--eta || eta=; \
{ \
printf './%s\n' $(filter-out db_test, $(TESTS)); \
printf '%s\n' $(t_run); \
} \
| $(prioritize_long_running_tests) \
| parallel -j$(J) --joblog=LOG $$eta --gnu '{} >& t/log-{/}'
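# In the parallel command above, {} is the test command read from stdin and {/} is its
# basename, so each test's combined stdout/stderr ends up in t/log-TEST_NAME.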
endif
CLEAN_FILES += t LOG $(TMPD)
# When running parallel "make check", you can monitor its progress
# from another window.
# Run "make watch_LOG" to show the duration,PASS/FAIL,name of parallel
# tests as they are being run. We sort them so that longer-running ones
# appear at the top of the list and any failing tests remain at the top
# regardless of their duration. As with any use of "watch", hit ^C to
# interrupt.
watch-log:
watch --interval=0 'sort -k7,7nr -k4,4gr LOG|$(quoted_perl_command)'
# If GNU parallel is installed, run the tests in parallel,
# via the check_0 rule above. Otherwise, run them sequentially.
check: all
for t in $(TESTS); do echo "***** Running $$t"; ./$$t || exit 1; done
python tools/ldb_test.py
$(AM_V_GEN)case $$(parallel --gnu --help 2>/dev/null) in \
*'GNU Parallel'*) \
t=$$($(test_names)); \
$(MAKE) T="$$t" TMPD=$(TMPD) check_0;; \
*) \
for t in $(TESTS); do \
echo "===== Running $$t"; ./$$t || exit 1; done;; \
esac
rm -rf $(TMPD)
check_some: $(SUBSET) ldb
for t in $(SUBSET); do echo "***** Running $$t"; ./$$t || exit 1; done
python tools/ldb_test.py
check_some: $(SUBSET) ldb_tests
for t in $(SUBSET); do echo "===== Running $$t"; ./$$t || exit 1; done
.PHONY: ldb_tests
ldb_tests: ldb
python tools/ldb_test.py
@@ -385,7 +527,8 @@ analyze: clean
-o $(CURDIR)/scan_build_report \
$(MAKE) dbg
unity.cc:
CLEAN_FILES += unity.cc
unity.cc: Makefile
rm -f $@ $@-t
for source_file in $(LIB_SOURCES); do \
echo "#include <$$source_file>" >> $@-t; \
@@ -398,9 +541,8 @@ unity: unity.o
$(AM_LINK)
clean:
rm -f $(BENCHMARKS) $(TOOLS) $(TESTS) $(LIBRARY) $(SHARED) make_config.mk unity.cc
rm -f util/build_version.cc
rm -rf ios-x86 ios-arm scan_build_report
rm -f $(BENCHMARKS) $(TOOLS) $(TESTS) $(LIBRARY) $(SHARED)
rm -rf $(CLEAN_FILES) ios-x86 ios-arm scan_build_report
find . -name "*.[oda]" -exec rm {} \;
find . -type f -regex ".*\.\(\(gcda\)\|\(gcno\)\)" -exec rm {} \;
rm -rf bzip2* snappy* zlib* lz4*
@@ -811,18 +953,18 @@ endif
# Source files dependencies detection
# ---------------------------------------------------------------------------
all_sources = $(LIB_SOURCES) $(TEST_BENCH_SOURCES) $(MOCK_SOURCES)
DEPFILES = $(all_sources:.cc=.d)
# Add proper dependency support so changing a .h file forces a .cc file to
# rebuild.
# The .d file indicates .cc file's dependencies on .h files. We generate such
# dependency by g++'s -MM option, whose output is a make dependency rule.
%.d: %.cc
$(DEPFILES): %.d: %.cc
@$(CXX) $(CXXFLAGS) $(PLATFORM_SHARED_CFLAGS) \
-MM -MT'$@' -MT'$(<:.cc=.o)' "$<" -o '$@'
all_sources = $(LIB_SOURCES) $(TEST_BENCH_SOURCES) $(MOCK_SOURCES)
DEPFILES = $(all_sources:.cc=.d)
depend: $(DEPFILES)
# if the make goal is either "clean" or "format", we shouldn't