netty5/common/src/main/java/io/netty/util/ResourceLeakDetector.java


/*
* Copyright 2013 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.util;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import static io.netty.util.internal.StringUtil.EMPTY_STRING;
import static io.netty.util.internal.StringUtil.NEWLINE;
import static io.netty.util.internal.StringUtil.simpleClassName;
import static java.util.Objects.requireNonNull;
public class ResourceLeakDetector<T> {
private static final String PROP_LEVEL = "io.netty.leakDetection.level";
private static final Level DEFAULT_LEVEL = Level.SIMPLE;
private static final String PROP_TARGET_RECORDS = "io.netty.leakDetection.targetRecords";
private static final int DEFAULT_TARGET_RECORDS = 4;
private static final String PROP_SAMPLING_INTERVAL = "io.netty.leakDetection.samplingInterval";
// There is a minor performance benefit in TLR if this is a power of 2.
private static final int DEFAULT_SAMPLING_INTERVAL = 128;
private static final int TARGET_RECORDS;
static final int SAMPLING_INTERVAL;
/**
* Represents the level of resource leak detection.
*/
public enum Level {
/**
* Disables resource leak detection.
*/
DISABLED,
/**
* Enables simplistic sampling resource leak detection which reports whether there is a leak or not,
* at the cost of small overhead (default).
*/
SIMPLE,
/**
* Enables advanced sampling resource leak detection which reports where the leaked object was accessed
* recently, at the cost of high overhead.
*/
ADVANCED,
/**
* Enables paranoid resource leak detection which reports where the leaked object was accessed recently,
* at the cost of the highest possible overhead (for testing purposes only).
*/
PARANOID;
/**
* Returns the level based on its string value. Also accepts a string that represents the ordinal number of the enum.
*
* @param levelStr the level string: DISABLED, SIMPLE, ADVANCED, PARANOID. Case is ignored.
* @return the corresponding level, or the SIMPLE level if there is no match.
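*
* <p>A minimal usage sketch (values are illustrative):
* <pre>{@code
* Level a = Level.parseLevel("paranoid"); // case-insensitive name, yields PARANOID
* Level b = Level.parseLevel("1");        // ordinal string, yields SIMPLE
* Level c = Level.parseLevel("unknown");  // no match, falls back to SIMPLE
* }</pre>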
*/
static Level parseLevel(String levelStr) {
String trimmedLevelStr = levelStr.trim();
for (Level l : values()) {
if (trimmedLevelStr.equalsIgnoreCase(l.name()) || trimmedLevelStr.equals(String.valueOf(l.ordinal()))) {
return l;
}
}
return DEFAULT_LEVEL;
}
}
private static Level level;
private static final InternalLogger logger = InternalLoggerFactory.getInstance(ResourceLeakDetector.class);
static {
String levelStr = SystemPropertyUtil.get(PROP_LEVEL, DEFAULT_LEVEL.name());
Level level = Level.parseLevel(levelStr);
TARGET_RECORDS = SystemPropertyUtil.getInt(PROP_TARGET_RECORDS, DEFAULT_TARGET_RECORDS);
SAMPLING_INTERVAL = SystemPropertyUtil.getInt(PROP_SAMPLING_INTERVAL, DEFAULT_SAMPLING_INTERVAL);
ResourceLeakDetector.level = level;
if (logger.isDebugEnabled()) {
logger.debug("-D{}: {}", PROP_LEVEL, level.name().toLowerCase());
logger.debug("-D{}: {}", PROP_TARGET_RECORDS, TARGET_RECORDS);
}
}
/**
* @deprecated Use {@link #setLevel(Level)} instead.
*/
@Deprecated
public static void setEnabled(boolean enabled) {
setLevel(enabled? Level.SIMPLE : Level.DISABLED);
}
/**
* Returns {@code true} if resource leak detection is enabled.
*/
public static boolean isEnabled() {
return getLevel().ordinal() > Level.DISABLED.ordinal();
}
/**
* Sets the resource leak detection level.
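*
* <p>A minimal sketch of switching the level programmatically; the same effect can be achieved at startup
* via the {@code -Dio.netty.leakDetection.level} system property:
* <pre>{@code
* ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
* assert ResourceLeakDetector.getLevel() == ResourceLeakDetector.Level.PARANOID;
* }</pre>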
*/
public static void setLevel(Level level) {
requireNonNull(level, "level");
ResourceLeakDetector.level = level;
}
/**
* Returns the current resource leak detection level.
*/
public static Level getLevel() {
return level;
}
/** the collection of active resources */
private final Set<DefaultResourceLeak<?>> allLeaks = ConcurrentHashMap.newKeySet();
private final ReferenceQueue<Object> refQueue = new ReferenceQueue<>();
private final ConcurrentMap<String, Boolean> reportedLeaks = new ConcurrentHashMap<>();
private final String resourceType;
private final int samplingInterval;
/**
* @deprecated use {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class, int, long)}.
*/
@Deprecated
public ResourceLeakDetector(Class<?> resourceType) {
this(simpleClassName(resourceType));
}
/**
* @deprecated use {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class, int, long)}.
*/
@Deprecated
public ResourceLeakDetector(String resourceType) {
this(resourceType, DEFAULT_SAMPLING_INTERVAL, Long.MAX_VALUE);
}
/**
* @deprecated Use {@link ResourceLeakDetector#ResourceLeakDetector(Class, int)}.
* <p>
* This should not be used directly by users of {@link ResourceLeakDetector}.
* Please use {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class)}
* or {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class, int, long)}
*
* @param maxActive This is deprecated and will be ignored.
*/
@Deprecated
public ResourceLeakDetector(Class<?> resourceType, int samplingInterval, long maxActive) {
this(resourceType, samplingInterval);
}
/**
* This should not be used directly by users of {@link ResourceLeakDetector}.
* Please use {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class)}
* or {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class, int, long)}
*/
@SuppressWarnings("deprecation")
public ResourceLeakDetector(Class<?> resourceType, int samplingInterval) {
this(simpleClassName(resourceType), samplingInterval, Long.MAX_VALUE);
}
/**
* @deprecated use {@link ResourceLeakDetectorFactory#newResourceLeakDetector(Class, int, long)}.
* <p>
* @param maxActive This is deprecated and will be ignored.
*/
@Deprecated
public ResourceLeakDetector(String resourceType, int samplingInterval, long maxActive) {
requireNonNull(resourceType, "resourceType");
this.resourceType = resourceType;
this.samplingInterval = samplingInterval;
}
/**
* Creates a new {@link ResourceLeak} which is expected to be closed via {@link ResourceLeak#close()} when the
* related resource is deallocated.
*
* @return the {@link ResourceLeak} or {@code null}
* @deprecated use {@link #track(Object)}
*/
@Deprecated
public final ResourceLeak open(T obj) {
return track0(obj);
}
/**
* Creates a new {@link ResourceLeakTracker} which is expected to be closed via
* {@link ResourceLeakTracker#close(Object)} when the related resource is deallocated.
*
* @return the {@link ResourceLeakTracker} or {@code null}
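*
* <p>A hedged usage sketch; {@code leakDetector} and {@code buf} are illustrative names, and the returned
* tracker is {@code null} when detection is disabled or this access was not sampled:
* <pre>{@code
* ResourceLeakTracker<ByteBuf> tracker = leakDetector.track(buf);
* try {
*     // ... use the resource ...
* } finally {
*     if (tracker != null) {
*         tracker.close(buf); // records that the resource was released properly
*     }
* }
* }</pre>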
*/
@SuppressWarnings("unchecked")
public final ResourceLeakTracker<T> track(T obj) {
return track0(obj);
}
@SuppressWarnings("unchecked")
private DefaultResourceLeak track0(T obj) {
Level level = ResourceLeakDetector.level;
if (level == Level.DISABLED) {
return null;
}
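// For SIMPLE and ADVANCED, only about 1 out of every samplingInterval objects is tracked;
// PARANOID (handled below) tracks every object.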
if (level.ordinal() < Level.PARANOID.ordinal()) {
if (ThreadLocalRandom.current().nextInt(samplingInterval) == 0) {
reportLeak();
return new DefaultResourceLeak(obj, refQueue, allLeaks);
}
return null;
}
reportLeak();
return new DefaultResourceLeak(obj, refQueue, allLeaks);
}
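// Drains the reference queue and disposes the collected trackers without reporting anything;
// used when error logging is disabled and leaks could not be reported anyway.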
private void clearRefQueue() {
for (;;) {
@SuppressWarnings("unchecked")
DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();
if (ref == null) {
break;
}
ref.dispose();
}
}
private void reportLeak() {
if (!logger.isErrorEnabled()) {
clearRefQueue();
return;
}
// Detect and report previous leaks.
for (;;) {
@SuppressWarnings("unchecked")
DefaultResourceLeak ref = (DefaultResourceLeak) refQueue.poll();
if (ref == null) {
break;
}
if (!ref.dispose()) {
continue;
}
String records = ref.toString();
if (reportedLeaks.putIfAbsent(records, Boolean.TRUE) == null) {
if (records.isEmpty()) {
reportUntracedLeak(resourceType);
} else {
reportTracedLeak(resourceType, records);
}
}
}
}
/**
* This method is called when a traced leak is detected. It can be overridden for tracking how many times leaks
* have been detected.
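*
* <p>A minimal sketch of such an override (class and field names are illustrative):
* <pre>{@code
* public class CountingResourceLeakDetector<T> extends ResourceLeakDetector<T> {
*     private final AtomicInteger leakCounter = new AtomicInteger();
*
*     public CountingResourceLeakDetector(Class<?> resourceType, int samplingInterval) {
*         super(resourceType, samplingInterval);
*     }
*
*     @Override
*     protected void reportTracedLeak(String resourceType, String records) {
*         super.reportTracedLeak(resourceType, records);
*         leakCounter.incrementAndGet();
*     }
* }
* }</pre>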
*/
protected void reportTracedLeak(String resourceType, String records) {
logger.error(
"LEAK: {}.release() was not called before it's garbage-collected. " +
"See https://netty.io/wiki/reference-counted-objects.html for more information.{}",
resourceType, records);
}
/**
* This method is called when an untraced leak is detected. It can be overridden for tracking how many times leaks
* have been detected.
*/
protected void reportUntracedLeak(String resourceType) {
logger.error("LEAK: {}.release() was not called before it's garbage-collected. " +
"Enable advanced leak reporting to find out where the leak occurred. " +
"To enable advanced leak reporting, " +
"specify the JVM option '-D{}={}' or call {}.setLevel() " +
"See https://netty.io/wiki/reference-counted-objects.html for more information.",
resourceType, PROP_LEVEL, Level.ADVANCED.name().toLowerCase(), simpleClassName(this));
}
/**
* @deprecated This method will no longer be invoked by {@link ResourceLeakDetector}.
*/
@Deprecated
protected void reportInstancesLeak(String resourceType) {
}
@SuppressWarnings("deprecation")
private static final class DefaultResourceLeak<T>
extends WeakReference<Object> implements ResourceLeakTracker<T>, ResourceLeak {
@SuppressWarnings("unchecked") // generics and updaters do not mix.
private static final AtomicReferenceFieldUpdater<DefaultResourceLeak<?>, Record> headUpdater =
(AtomicReferenceFieldUpdater)
AtomicReferenceFieldUpdater.newUpdater(DefaultResourceLeak.class, Record.class, "head");
@SuppressWarnings("unchecked") // generics and updaters do not mix.
private static final AtomicIntegerFieldUpdater<DefaultResourceLeak<?>> droppedRecordsUpdater =
(AtomicIntegerFieldUpdater)
AtomicIntegerFieldUpdater.newUpdater(DefaultResourceLeak.class, "droppedRecords");
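// The fields below are only written through the two updaters above; this keeps DefaultResourceLeak
// small by avoiding a dedicated AtomicReference/AtomicInteger allocation per tracked object.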
@SuppressWarnings("unused")
private volatile Record head;
@SuppressWarnings("unused")
private volatile int droppedRecords;
private final Set<DefaultResourceLeak<?>> allLeaks;
private final int trackedHash;
DefaultResourceLeak(
Object referent,
ReferenceQueue<Object> refQueue,
Set<DefaultResourceLeak<?>> allLeaks) {
super(referent, refQueue);
assert referent != null;
// Store the hash of the tracked object to later assert it in the close(...) method.
// It's important that we do not store a reference to the referent as this would prevent it from
// being collected via the WeakReference.
trackedHash = System.identityHashCode(referent);
allLeaks.add(this);
// Create a new Record so we always have the creation stacktrace included.
headUpdater.set(this, new Record(Record.BOTTOM));
this.allLeaks = allLeaks;
}
@Override
public void record() {
record0(null);
}
@Override
public void record(Object hint) {
record0(hint);
}
/**
* This method works by exponentially backing off as more records are present in the stack. Each record has a
* 1 / 2^n chance of dropping the topmost record and replacing it with itself. This has a number of convenient
* properties:
*
* <ol>
* <li> The current record is always recorded. This is due to the compare and swap dropping the topmost
* record, rather than the to-be-pushed record.
* <li> The very last access will always be recorded. This follows from property 1.
* <li> It is possible to retain more records than the target, based upon the probability distribution.
* <li> It is easy to keep a precise record of the number of elements in the stack, since each element has to
* know how tall the stack is.
* </ol>
*
* In this particular implementation, there are also some advantages. A thread local random is used to decide
* if something should be recorded. This means that if there is a deterministic access pattern, it is now
* possible to see what other accesses occur, rather than always dropping them. Second, after
* {@link #TARGET_RECORDS} accesses, backoff occurs. This matches typical access patterns,
* where there are either a high number of accesses (i.e. a cached buffer), or low (an ephemeral buffer), but
* not many in between.
*
* The use of atomics avoids serializing a high number of accesses, when most of the records will be thrown
* away. High contention only happens when there are very few existing records, which is only likely when the
* object isn't shared! If this is a problem, the loop can be aborted and the record dropped, because another
* thread won the race.
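*
* As a rough illustration (assuming the default TARGET_RECORDS of 4): while fewer than 4 records are
* stored, every access is recorded; once the stack holds 4 + k records, a new access only grows the
* stack with probability 1 / 2^k and otherwise replaces the current head.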
*/
private void record0(Object hint) {
// Check TARGET_RECORDS > 0 here so that record keeping can be disabled completely by setting it to 0.
if (TARGET_RECORDS > 0) {
Record oldHead;
Record prevHead;
Record newHead;
boolean dropped;
do {
if ((prevHead = oldHead = headUpdater.get(this)) == null) {
// already closed.
return;
}
final int numElements = oldHead.pos + 1;
if (numElements >= TARGET_RECORDS) {
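// At or above the target size: usually replace the current head instead of pushing, so the
// stack stays near TARGET_RECORDS entries. The chance of still growing the stack is
// 1 / 2^backOffFactor, where backOffFactor is capped at 30.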
final int backOffFactor = Math.min(numElements - TARGET_RECORDS, 30);
if (dropped = ThreadLocalRandom.current().nextInt(1 << backOffFactor) != 0) {
prevHead = oldHead.next;
}
} else {
dropped = false;
}
newHead = hint != null ? new Record(prevHead, hint) : new Record(prevHead);
} while (!headUpdater.compareAndSet(this, oldHead, newHead));
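// If the previous head was replaced rather than kept, count it; the detector can then report
// how many access records were discarded for this leak.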
if (dropped) {
droppedRecordsUpdater.incrementAndGet(this);
}
}
}
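// Clears the weak reference and removes this tracker from the live set; returns true if it was
// still registered, i.e. close() was never called for the tracked object.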
boolean dispose() {
clear();
return allLeaks.remove(this);
}
@Override
public boolean close() {
if (allLeaks.remove(this)) {
// Call clear so the reference is not even enqueued.
clear();
headUpdater.set(this, null);
return true;
}
return false;
}
@Override
public boolean close(T trackedObject) {
// Ensure that the object that was tracked is the same as the one that was passed to close(...).
assert trackedHash == System.identityHashCode(trackedObject);
try {
return close();
} finally {
// This method will do `synchronized(trackedObject)` and we should be sure this will not cause deadlock.
// It should not, because somewhere up the callstack should be a (successful) `trackedObject.release`,
// therefore it is unreasonable that anyone else, anywhere, is holding a lock on the trackedObject.
// (Unreasonable but possible, unfortunately.)
reachabilityFence0(trackedObject);
}
}
/**
* Ensures that the object referenced by the given reference remains
* <a href="package-summary.html#reachability"><em>strongly reachable</em></a>,
* regardless of any prior actions of the program that might otherwise cause
* the object to become unreachable; thus, the referenced object is not
* reclaimable by garbage collection at least until after the invocation of
* this method.
*
* <p> Recent versions of the JDK have a nasty habit of prematurely deciding objects are unreachable.
* see: https://stackoverflow.com/questions/26642153/finalize-called-on-strongly-reachable-object-in-java-8
* The Java 9 method Reference.reachabilityFence offers a solution to this problem.
*
* <p> This method is always implemented as a synchronization on {@code ref}, not as
* {@code Reference.reachabilityFence} for consistency across platforms and to allow building on JDK 6-8.
* <b>It is the caller's responsibility to ensure that this synchronization will not cause deadlock.</b>
*
* @param ref the reference. If {@code null}, this method has no effect.
* @see java.lang.ref.Reference#reachabilityFence
*/
private static void reachabilityFence0(Object ref) {
if (ref != null) {
synchronized (ref) {
// Empty synchronized is ok: https://stackoverflow.com/a/31933260/1151521
}
}
}
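/**
 * Renders the retained access records, most recent first, ending with the
 * "Created at:" record; duplicate records are collapsed and counted.
 * Rendering consumes the record stack, so a second call returns an empty string.
 */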
@Override
public String toString() {
Record oldHead = headUpdater.getAndSet(this, null);
if (oldHead == null) {
// Already closed
return EMPTY_STRING;
}
final int dropped = droppedRecordsUpdater.get(this);
int duped = 0;
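// pos is the zero-based depth of the record stack, so pos + 1 records are currently present.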
int present = oldHead.pos + 1;
// Guess about 2 kilobytes per stack trace
StringBuilder buf = new StringBuilder(present * 2048).append(NEWLINE);
buf.append("Recent access records: ").append(NEWLINE);
int i = 1;
Set<String> seen = new HashSet<>(present);
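// Walk the record stack from the most recent access down to the Record.BOTTOM sentinel;
// the record just above BOTTOM is the creation record.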
for (; oldHead != Record.BOTTOM; oldHead = oldHead.next) {
String s = oldHead.toString();
if (seen.add(s)) {
if (oldHead.next == Record.BOTTOM) {
buf.append("Created at:").append(NEWLINE).append(s);
} else {
buf.append('#').append(i++).append(':').append(NEWLINE).append(s);
}
} else {
duped++;
}
}
if (duped > 0) {
buf.append(": ")
.append(duped)
.append(" leak records were discarded because they were duplicates")
.append(NEWLINE);
}
if (dropped > 0) {
buf.append(": ")
.append(dropped)
.append(" leak records were discarded because the leak record count is targeted to ")
.append(TARGET_RECORDS)
.append(". Use system property ")
.append(PROP_TARGET_RECORDS)
.append(" to increase the limit.")
.append(NEWLINE);
}
buf.setLength(buf.length() - NEWLINE.length());
return buf.toString();
}
}
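// Flattened (declaring class name, method name) pairs of stack frames that are filtered
// out of leak records; updated copy-on-write by addExclusions(...).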
private static final AtomicReference<String[]> excludedMethods =
new AtomicReference<>(EmptyArrays.EMPTY_STRINGS);
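/**
 * Excludes the named methods of {@code clz} from leak records. Every given method name
 * must match a method declared on {@code clz}; otherwise an {@link IllegalArgumentException}
 * is thrown. Example (class and method names purely illustrative):
 * {@code addExclusions(MyBuffer.class, "touch", "record")}.
 */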
public static void addExclusions(Class clz, String ... methodNames) {
Set<String> nameSet = new HashSet<>(Arrays.asList(methodNames));
// Use loop rather than lookup. This avoids knowing the parameters, and doesn't have to handle
// NoSuchMethodException.
for (Method method : clz.getDeclaredMethods()) {
if (nameSet.remove(method.getName()) && nameSet.isEmpty()) {
break;
}
}
if (!nameSet.isEmpty()) {
throw new IllegalArgumentException("Can't find '" + nameSet + "' in " + clz.getName());
}
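// Copy-on-write append: retry the CAS until the new (class name, method name) pairs are
// added without losing a concurrent update to excludedMethods.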
String[] oldMethods;
String[] newMethods;
do {
oldMethods = excludedMethods.get();
newMethods = Arrays.copyOf(oldMethods, oldMethods.length + 2 * methodNames.length);
for (int i = 0; i < methodNames.length; i++) {
newMethods[oldMethods.length + i * 2] = clz.getName();
newMethods[oldMethods.length + i * 2 + 1] = methodNames[i];
}
} while (!excludedMethods.compareAndSet(oldMethods, newMethods));
}
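    // A single access record. Extending Throwable captures the stack trace at construction time;
    // each Record also keeps an optional hint and a link to the previously stored Record, so the
    // records form an immutable, singly linked stack.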
private static final class Record extends Throwable {
private static final long serialVersionUID = 6065153674892850720L;
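        // Shared sentinel that marks the bottom of every record stack.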
private static final Record BOTTOM = new Record();
private final String hintString;
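        // Previously stored record and this record's depth in the stack (BOTTOM sits at -1).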
private final Record next;
private final int pos;
Record(Record next, Object hint) {
// This needs to be generated even if toString() is never called as it may change later on.
hintString = hint instanceof ResourceLeakHint ? ((ResourceLeakHint) hint).toHintString() : hint.toString();
this.next = next;
this.pos = next.pos + 1;
}
Record(Record next) {
hintString = null;
this.next = next;
this.pos = next.pos + 1;
}
// Used to terminate the stack
private Record() {
hintString = null;
next = null;
pos = -1;
}
@Override
public String toString() {
StringBuilder buf = new StringBuilder(2048);
if (hintString != null) {
buf.append("\tHint: ").append(hintString).append(NEWLINE);
}
// Append the stack trace.
StackTraceElement[] array = getStackTrace();
// Skip the first three elements.
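            // (These are most likely the Record construction and detector-internal record(...) frames,
            // not user code.)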
out: for (int i = 3; i < array.length; i++) {
StackTraceElement element = array[i];
// Strip the noisy stack trace elements.
String[] exclusions = excludedMethods.get();
for (int k = 0; k < exclusions.length; k += 2) {
if (exclusions[k].equals(element.getClassName())
&& exclusions[k + 1].equals(element.getMethodName())) {
continue out;
}
}
buf.append('\t');
buf.append(element.toString());
buf.append(NEWLINE);
}
return buf.toString();
}
}
}