Motivation:
AbstractConstant.compareTo seems complex and hard to understand. It also unnecessarily allocates 1 byte in a direct buffer and holds an unnecessary pointer to this byte buffer.
Modifications:
The uniquifier (id) variable is now initialized during Constant creation, so it no longer needs to be volatile and the uniquifier() method is no longer needed, as it can easily be replaced with an AtomicLong.
Result:
Every Constant instance now consumes fewer bytes (no extra pointer) and no longer consumes anything in a direct buffer.
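For illustration, a minimal sketch of the approach (field names are illustrative):
```java
import java.util.concurrent.atomic.AtomicLong;

public abstract class AbstractConstant<T extends AbstractConstant<T>> implements Comparable<T> {

    // A single shared counter hands out ids eagerly at construction time.
    private static final AtomicLong uniqueIdGenerator = new AtomicLong();

    // Assigned once in the constructor, so no volatile field and no lazy uniquifier() method.
    private final long uniquifier = uniqueIdGenerator.getAndIncrement();

    @Override
    public int compareTo(T other) {
        if (this == other) {
            return 0;
        }
        // No direct-buffer address is needed as a tie-breaker anymore.
        return Long.compare(uniquifier, ((AbstractConstant<?>) other).uniquifier);
    }
}
```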
Motivation:
The old code is not needed anymore due to the logger factory initialization.
Modifications:
Removed static section and useless static variables;
Logging concatenations replaced with placeholders.
Result:
Cleaner, simpler code doing the same thing.
Motivation:
Some usages of findNextPositivePowerOfTwo assume that bounds checking is taken care of by this method. However, bounds checking is not taken care of by findNextPositivePowerOfTwo; instead, assert statements are used to imply the caller has checked the bounds. This can lead to unexpected non-power-of-2 return values if the caller is not careful and thus invalidate any logic which depends upon a power of 2.
Modifications:
- Add a safeFindNextPositivePowerOfTwo method which will do runtime bounds checks and always return a power of 2
Result:
Fixes https://github.com/netty/netty/issues/5601
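For illustration, a sketch of what the bounds-checked variant can look like:
```java
public static int safeFindNextPositivePowerOfTwo(final int value) {
    // Clamp out-of-range inputs instead of relying on the caller to respect asserts.
    return value <= 0 ? 1
         : value >= 0x40000000 ? 0x40000000
         : findNextPositivePowerOfTwo(value);
}

public static int findNextPositivePowerOfTwo(final int value) {
    assert value > Integer.MIN_VALUE && value < 0x40000000;
    return 1 << (32 - Integer.numberOfLeadingZeros(value - 1));
}
```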
Motivation:
NewLine initialization is complex, non-standard, and performs unnecessary allocations.
The static section is overloaded with StringBuilders for a simple "s" + "s" concatenation pattern that the compiler already optimizes perfectly.
Modifications:
NewLine initialization replaced with the standard System.getProperty("line.separator").
Removed the StringBuilders in the static section.
Result:
Less complex code.
Motivation:
When we try to close the Channel due to a timeout, we need to ensure we do not log if the notification of the promise fails, as it may have been completed in the meantime.
Modifications:
Add another constructor to ChannelPromiseNotifier and PromiseNotifier which allows specifying whether to log on notification failure.
Result:
No more misleading logs.
Motivation:
At the moment the Recycler is very sensitive to allocation bursts: if there is a one-time need for X objects, these will most likely end up in the Recycler and sit there forever, as the normal workload only needs a subset of this number.
Modifications:
Add a ratio which sets how many objects should be pooled for each new allocation. This allows the number of objects in the Recycler to grow slowly while not being too sensitive to bursts.
Result:
Fewer unused objects in the Recycler if the allocation rate sometimes bursts.
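For illustration, a simplified sketch of ratio-based dropping (names are illustrative, not the actual Recycler internals):
```java
// Illustrative sketch: only every RATIO-th first-time candidate is accepted into the pool,
// so a one-off burst of recycled objects does not fill the Recycler.
final class RatioDropPolicy {
    private static final int RATIO = 8;   // must be a power of two for the mask below
    private int recycleCount;

    /** Returns true if the candidate should be dropped instead of pooled. */
    boolean drop(boolean hasBeenRecycledBefore) {
        return !hasBeenRecycledBefore && (++recycleCount & (RATIO - 1)) != 0;
    }
}
```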
Motivation:
Using Attribute.remove() and Attribute.getAndRemove() in a multi-threaded environment has its drawbacks. Make sure we document these.
Modifications:
Add javadocs and mark Attribute.remove() and Attribute.getAndRemove() as @Deprecated.
Result:
Hopefully less surprising behaviour.
Motivation:
We used a very high number for DEFAULT_INITIAL_MAX_CAPACITY (over 200k), which is not very realistic and may lead to very surprising memory usage if allocations happen in bursts.
Modifications:
Use a more sane default value of 32k
Result:
Less possible memory usage by default
Motivation:
AbstractReferenceCounted has independent conditional statements to check the bounds for the IllegalReferenceCountException condition in retain. One of the exceptions also uses the incorrect increment.
Modifications:
- Combined independent conditional checks into 1 where possible
- Correct the IllegalReferenceCountException that used the incorrect increment
- Remove the subtraction used to check for overflow and instead re-use the addition's result to check for overflow, removing 1 arithmetic operation (see http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.2)
Result:
AbstractReferenceCounted has fewer independent branch statements and a more correct IllegalReferenceCountException. The compiled size of AbstractReferenceCounted.retain() is reduced from 58 bytes to 47 bytes.
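For illustration, a sketch of the combined check (not the exact retain0 code; the real method throws IllegalReferenceCountException). Per JLS 15.18.2, adding two positive ints that overflow wraps to a value <= the increment, so one comparison covers both the resurrection case (refCnt == 0) and overflow:
```java
// Sketch only: increment is assumed > 0; the real code throws IllegalReferenceCountException.
static int checkedNextRefCnt(int refCnt, int increment) {
    final int nextCnt = refCnt + increment;
    // refCnt == 0 gives nextCnt == increment; an int overflow wraps nextCnt below increment.
    if (nextCnt <= increment) {
        throw new IllegalStateException("refCnt: " + refCnt + ", increment: " + increment);
    }
    return nextCnt;
}
```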
Motivation:
We saw some sporadic test failures for the GlobalEventExecutorTest.testAutomaticStartStop test. This is caused by parallel execution of tests in combination with assert checks that are affected by it.
Modifications:
Remove fragile assert checks.
Result:
No more sporadic test failures
Motivation:
We use a shared capacity per Stack for all its WeakOrderQueue elements. These relations are stored in a WeakHashMap to allow dropping them if memory pressure arises. The problem is that we do not "reclaim" previously reserved space when this happens. This can lead to a Stack which has no shared capacity left, which will then lead to an AssertionError when we try to allocate a new WeakOrderQueue.
Modifications:
- Ensure we never throw an AssertionError if we do not have enough space left for a new WeakOrderQueue
- Ensure we reclaim space when a WeakOrderQueue is collected.
Result:
No more AssertionError possible when a new WeakOrderQueue is created, and space that was reserved from the shared capacity is now correctly reclaimed.
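For illustration, a rough sketch of the reserve/reclaim bookkeeping on a shared counter (names and sizes are illustrative, not the actual Recycler internals):
```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: creation of a WeakOrderQueue simply fails when no shared capacity is
// left (no AssertionError), and the space is handed back once the queue goes away.
final class SharedCapacitySketch {
    final AtomicInteger availableSharedCapacity = new AtomicInteger(16 * 1024);

    boolean tryReserve(int space) {
        for (;;) {
            int available = availableSharedCapacity.get();
            if (available < space) {
                return false;                 // caller skips creating the new queue
            }
            if (availableSharedCapacity.compareAndSet(available, available - space)) {
                return true;
            }
        }
    }

    void reclaim(int space) {                 // invoked when the owning queue is freed/collected
        availableSharedCapacity.addAndGet(space);
    }
}
```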
Motivation:
Due to an implementation flaw in DefaultAttributeMap it was possible that an attribute and its stored value/key could not be collected until the DefaultAttributeMap itself could be collected. This can lead to unexpected memory usage and strong reachability of objects that should be collected.
Modifications:
Use a special empty DefaultAttribute as the head of each bucket, which does not store any key/value. With this change everything can be collected as expected, as we no longer use a user-created DefaultAttribute as the head of a bucket.
Result:
DefaultAttributeMap does not store user data and thus the lifetime of this user data is not tied to the lifetime of the DefaultAttributeMap.
Motivation:
Commit afafadd3d7 introduced a change which stored the Stack in the WeakOrderQueue as a field. This unfortunately had the effect that it was not removed from the WeakHashMap anymore, as the Stack is also used as the key.
Modifications:
Do not store a reference to the Stack in WeakOrderQueue.
Result:
WeakOrderQueue can be collected correctly again.
Motivation:
When resolving localhost on Windows, where the hosts file does not contain a localhost entry by default, the InetAddress object returned by the resolver does not have the hostname set, so getHostName returns the IP address 127.0.0.1. This behaviour is inconsistent with Windows where the hosts file does contain a localhost entry, and with Linux in any case. It breaks at least some unit tests.
Modifications:
Create the LOCALHOST4 and LOCALHOST6 objects with the hostname localhost in addition to the address.
Add the unit test domain localhost to DnsNameResolverTest to check the resolution of localhost with IPv4 at least.
Result:
The resolver returns an InetAddress object for localhost with the hostname localhost in all cases.
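For illustration, a sketch of how such a constant can be created with the hostname attached (error handling simplified):
```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: attach the hostname "localhost" to the address so getHostName() is consistent
// whether or not the OS hosts file contains a localhost entry.
static Inet4Address localhost4() {
    try {
        return (Inet4Address) InetAddress.getByAddress("localhost", new byte[] { 127, 0, 0, 1 });
    } catch (UnknownHostException e) {
        throw new IllegalStateException(e); // cannot happen, the address literal is valid
    }
}
```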
Motivation:
The Java version is used for platform dependent logic. Yet, the logic
for acquiring the Java version requires special permissions (the runtime
permission "getClassLoader") that some downstream projects will never
grant. As such, these projects are doomed to have Netty act as if their
Java major version is six. While there are ways to maintain the same
logic without requiring these special permissions, the logic is
needlessly complicated because it relies on loading classes that exist
in version n but not version n - 1. This complexity can be removed. As a
bonus, the dangerous permission is no longer required.
Modifications:
Rather than attempting to load classes that exist in version n but not
in version n - 1, we can just parse the Java specification version. This
only requires a benign property (property permission
"java.specification.version") and is simple.
Result:
Acquisition of the Java version is safe and simple.
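For illustration, a sketch of the parsing approach; it handles both the legacy "1.x" form and the "9"-and-later form, and the fallback default shown here is an assumption:
```java
// Sketch: derive the major Java version from the specification version string.
static int javaSpecificationMajorVersion() {
    // The "1.6" fallback here is an assumption for illustration only.
    String version = System.getProperty("java.specification.version", "1.6");
    String[] components = version.split("\\.");
    int first = Integer.parseInt(components[0]);
    // "1.6", "1.7", "1.8" -> 6, 7, 8; "9", "10", ... are taken as-is.
    return first == 1 ? Integer.parseInt(components[1]) : first;
}
```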
Motivation:
The clean method in java.base/jdk.internal.ref.Cleaner is not accessible
to methods outside java.base. This prevents Cleaner0.freeDirectBuffer
from actually calling the clean method on JDK9.
The issue could have been caught earlier if Cleaner0 were initialized when
PlatformDependent0 is initialized and the logging statements in the static
initializer of Cleaner0 were close to the logging statements in the static
initializer of PlatformDependent0.
Modifications:
Try casting the cleaner obtained from a ByteBuffer to Runnable and use
Runnable.run if possible. All Cleaners in JDK9 implement Runnable. Fall
back to the clean method if the cleaner does not implement Runnable.
The fallback preserves the behavior on JDK8 and earlier.
Try to free the direct ByteBuffer allocated during static initialization
of PlatformDependent0. This causes Cleaner0 to be initialized when
PlatformDependent0 is initialized, and logging statements from the
static initializers will be close together.
Result:
Cleaner0.freeDirectBuffer works as intended on JDK9 and logging shows
that Cleaner0.freeDirectBuffer works as intended.
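For illustration, a sketch of the described fallback logic (cleaner is whatever object was obtained reflectively from the ByteBuffer):
```java
// Sketch of the described fallback: JDK9 cleaners implement Runnable, older JDKs expose clean().
static void freeDirectBuffer(Object cleaner) throws Exception {
    if (cleaner instanceof Runnable) {
        ((Runnable) cleaner).run();                       // JDK9 path, no illegal access
    } else {
        // JDK8 and earlier: invoke sun.misc.Cleaner.clean() reflectively.
        cleaner.getClass().getMethod("clean").invoke(cleaner);
    }
}
```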
Motivation:
A recent change to DefaultThreadFactory modified it so that it is sticky
with respect to thread groups. In particular, this change made it so
that DefaultThreadFactory would hold on to the thread group that created
it, and then use that thread group to create threads.
This can have problematic semantics since it can lead to violations of a
tenet of thread groups that a thread can only modify threads in its own
thread group and descendant thread groups. With a sticky thread group, a
thread triggering the creation of a new thread via
DefaultThreadFactory#newThread will be modifying a thread from the
sticky thread group which will not necessarily be its own nor a
descendant thread group. When a security manager is in place that
enforces this requirement, these modifications are now impossible. This
is especially problematic in the context of Netty because certain global
singletons like GlobalEventExecutor will create a
DefaultThreadFactory. If all DefaultThreadFactory instances are sticky
about their thread groups, it means that submitting tasks to the
GlobalEventExecutor singleton can cause a thread to be created from the
DefaultThreadFactory sticky thread group, exactly the problem with
DefaultThreadFactory being sticky about thread groups. A similar problem
arises from the ThreadDeathWatcher.
Modifications:
This commit modifies DefaultThreadFactory so that a null thread group
can be set with the behavior that all threads created by such an
instance will inherit the default thread group (the thread group
provided by the security manager if there is one, otherwise the thread
group of the creating thread). The construction of the instances of
DefaultThreadFactory used by the GlobalEventExecutor singleton and
ThreadDeathWatcher are modified to use this behavior. Additionally, we
also modify the chained constructor invocations of the
DefaultThreadFactory that do not have a parameter to specify a thread
group to use the thread group from the security manager if available,
otherwise the creating thread's thread group. We also add unit tests
ensuring that all of this behavior is maintained.
Result:
It will be possible to have DefaultThreadFactory instances that are not
sticky about the thread group that led to their creation. Instead,
threads created by such a DefaultThreadFactory will inherit the default
thread group which will either be the thread group from the security
manager or the current thread's thread group.
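For reference, selecting the default thread group when none is configured follows the usual pattern (a sketch):
```java
// Sketch: with a null configured thread group, fall back to the security manager's thread
// group if present, otherwise to the current thread's thread group.
static ThreadGroup defaultThreadGroup() {
    SecurityManager securityManager = System.getSecurityManager();
    return securityManager != null
            ? securityManager.getThreadGroup()
            : Thread.currentThread().getThreadGroup();
}
```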
Motivation:
Currently, the recycler max capacity is only enforced on the
thread-local stack which is used when the recycling happens on the
same thread that requested the object.
When the recycling happens in a different thread, then the objects
will be queued into a linked list (where each node holds N objects,
default=16). These objects are then transferred into the stack when
new objects are requested and the stack is empty.
The problem is that the queue doesn't have a max capacity and that
can lead to bad scenarios. Eg:
- Allocate 1M object from recycler
- Recycle all of them from different thread
- Recycler WeakOrderQueue will contain 1M objects
- Reference graph will be very long to traverse and GC time seems to be negatively impacted
- Size of the queue will never shrink after this
Modifications:
Add a shared counter which is used to manage capacity limits when recycling from a different thread than the allocation thread. We modify the counter whenever we allocate a new Link to reduce the overhead of incrementing / decrementing it.
Result:
A more predictable number of objects maintained in the recycler pool.
Motivation:
This change is part of the change done in PR #5395 to provide an `AUTO_FLUSH` capability.
Splitting this change out makes it possible to try other ways of implementing `AUTO_FLUSH`.
Modifications:
Two methods:
```java
void executeAfterEventLoopIteration(Runnable task);
boolean removeAfterEventLoopIterationTask(Runnable task);
```
are added to `SingleThreadEventLoop` class for adding/removing a task to be executed at the end of current/next iteration of this `eventloop`.
In order to support the above, a few methods are added to `SingleThreadEventExecutor`
```java
protected void afterRunningAllTasks() { }
```
This is invoked after all tasks are run for this executor OR if the passed timeout value for `runAllTasks(long timeoutNanos)` has expired.
Added a queue of `tailTasks` to `SingleThreadEventLoop` to hold all tasks to be executed at the end of every iteration.
Result:
`SingleThreadEventLoop` now has the ability to execute tasks at the end of an eventloop iteration.
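A hypothetical usage sketch, assuming a `channel` variable whose event loop is a `SingleThreadEventLoop`; the flush is just an example of per-iteration work:
```java
// Hypothetical usage: run a task once at the end of the current/next event loop iteration.
SingleThreadEventLoop eventLoop = (SingleThreadEventLoop) channel.eventLoop();
eventLoop.executeAfterEventLoopIteration(() -> {
    // e.g. coalesce writes and flush once per iteration instead of once per write
    channel.flush();
});
```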
Motivation:
Today when awaiting uninterruptibly on a default promise, a race
condition can lead to a missed signal. Quite simply, the check for
whether the condition holds is not made inside a lock before
waiting. This means that the waiting thread can enter the wait after the
promise has completed and will thus not be notified, thus missing the
signal. This leads the waiting thread to enter a timed wait that will
only trip when the timeout elapses, leading to unnecessarily long waits
(imagine a connection timeout, and the waiting thread missed the signal
that the connection is ready).
Modification:
This commit fixes this missed signal by checking the condition inside a
lock. We also add a test that reliably fails without the non-racy
condition check.
Result:
Timed uninterruptible waits on default promise will not race against the
condition and possibly wait longer than necessary.
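For illustration, a simplified sketch of the non-racy pattern (not the exact DefaultPromise code):
```java
// Simplified sketch: the isDone() check and the wait happen under the same monitor, so a
// notifyAll() from the completing thread cannot slip in between them.
static void awaitUninterruptiblySketch(Object lock, java.util.function.BooleanSupplier isDone) {
    boolean interrupted = false;
    synchronized (lock) {
        while (!isDone.getAsBoolean()) {
            try {
                lock.wait();
            } catch (InterruptedException e) {
                interrupted = true;            // remember the interrupt, restore it afterwards
            }
        }
    }
    if (interrupted) {
        Thread.currentThread().interrupt();
    }
}
```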
Motivation:
ExecutorService.invoke*(...) methods may block by API definition. This can lead to deadlocks if called from inside the EventLoop in SingleThreadEventExecutor as it only has one Thread that does all the work.
Modifications:
Throw a RejectedExecutionException if someone tries to call SingleThreadEventExecutor.invoke*(...) while in the EventLoop.
Result:
No more deadlock possible.
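For illustration, a sketch of the guard used by the blocking invoke* overrides (inEventLoop() is assumed to come from the enclosing executor):
```java
// Sketch of the guard added to the blocking invokeAll/invokeAny overrides.
private void throwIfInEventLoop(String method) {
    if (inEventLoop()) {
        // A blocking call on the single event loop thread could never complete: the tasks it
        // waits for would have to run on that very same thread.
        throw new java.util.concurrent.RejectedExecutionException(
                "Calling " + method + " from within the EventLoop is not allowed");
    }
}
```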
Motivation:
The current DnsNameResolver does not support search domains resolution. Search domains resolution is supported out of the box by the java.net resolver, making the DnsNameResolver unable to be a drop-in replacement for io.netty.resolver.DefaultNameResolver.
Modifications:
The DnsNameResolverContext resolution has been modified to resolve against a list of search domains first when it is configured to do so. The resolve method now uses the following algorithm:
if (hostname is absolute (starts with dot) || no search domains) {
    searchAsIs
} else {
    if (numDots(name) >= ndots) {
        searchAsIs
    }
    if (searchAsIs wasn't performed or failed) {
        searchWithSearchDomainsSequentiallyUntilOneSucceeds
    }
}
The DnsNameResolverBuilder provides configuration for the search domains and the ndots value. The default search domains value is configured with the OS search domains using the same native configuration the java.net resolver uses.
Result:
The DnsNameResolver performs search domains resolution when they are present.
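For illustration, a hedged usage sketch (assumes the usual imports, an existing eventLoop, and the builder's searchDomains/ndots setters added by this change):
```java
// Sketch: explicitly configuring search domains and ndots; by default they are taken
// from the OS resolver configuration.
DnsNameResolver resolver = new DnsNameResolverBuilder(eventLoop)
        .channelType(NioDatagramChannel.class)
        .searchDomains(Arrays.asList("eng.example.com", "example.com"))
        .ndots(1)
        .build();

// "host" contains fewer dots than ndots, so "host.eng.example.com" and "host.example.com"
// are tried via the search domains.
resolver.resolve("host");
```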
Motivation:
DefaultPromise has a listeners member variable which is volatile to allow for an optimization which makes notification of listeners less expensive when there are no listeners to notify. However this change makes all other operations involving the listeners member variable more costly. This optimization which requires listeners to be volatile can be removed to avoid volatile writes/reads for every access on the listeners member variable.
Modifications:
- DefaultPromise listeners is made non-volatile and the null check optimization is removed
Result:
DefaultPromise.listeners is no longer volatile.
Motivation:
In commit f984870ccc I made a change which operated under the invalid assumption that tasks executed by an EventExecutor will always be processed in a serial fashion. This is true for SingleThreadEventExecutor sub-classes but not part of the EventExecutor interface contract.
Because of this change, implementations of EventExecutor which do not strictly execute tasks in a serial fashion may miss events before handlerAdded(...) is called. This is strictly speaking not correct, as there is no guarantee in this case that handlerAdded(...) will be called as the first task (as there is no ordering guarantee).
Cassandra itself ships such an EventExecutor implementation which has no strict ordering to spread load across multiple threads.
Modifications:
- Add a new OrderedEventExecutor interface and let SingleThreadEventExecutor / EventLoop implement / extend it.
- Only expose the "restriction" of skipping events until handlerAdded(...) is called for OrderedEventExecutor implementations
- Add a ThreadPoolEventExecutor implementation which executes tasks in an unordered fashion. This is used in the added unit test but can also be used for protocols which do not require strict ordering.
- Add unit test.
Result:
Resurrect the possibility to implement an EventExecutor which does not enforce serial execution of events and to use it with the DefaultChannelPipeline.
Motivation:
The logging statements in i.n.u.c.DefaultPromise do not emit the
caught Throwable when a Throwable is thrown while a listener is being
notified of completed or progressed operations.
Modifications:
This issue arises because the logging message has a single placeholder
but is passing two additional arguments, the second one being the
caught Throwable that is thus quietly not logged. We address this by
modifying the logging statements to ensure the caught Throwable is
logged. In this case, the preferred approach is to use the logger
override that accepts a message and a Throwable parameter since logger
implementations might have special handling for this case.
Result:
Log messages from i.n.u.c.DefaultPromise when a Throwable is thrown
while notifying a listener of completed or progressed operations will
contain the caught Throwable.
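For illustration, the general shape of the fix ("listenerName" stands in for whatever is interpolated into the message):
```java
// Broken: one "{}" placeholder but two extra arguments; the Throwable is quietly dropped.
logger.warn("An exception was thrown by {}.operationComplete()", listenerName, cause);

// Fixed: use the (String, Throwable) override so the stacktrace is always emitted.
logger.warn("An exception was thrown by " + listenerName + ".operationComplete()", cause);
```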
Motivation:
A race detector found that DefaultPromise.listeners is improperly synchronized [1].
Worst case a listener will not be executed when the promise is completed.
Modifications:
Make DefaultPromise.listeners a volatile.
Result:
Hopefully, DefaultPromise is more correct under concurrent execution.
[1] https://github.com/grpc/grpc-java/issues/2015
Motivation:
For AsciiString.hashCode(o), if "o" is a substring (backed by part of a char array), the hash code is not always the same as for an equal string. Observed with Netty 4.1.1.Final and JDK 1.6.
Modifications:
Add a test to assert that hash codes are equal between a new string and any substring (a part of a char array) if their values are equal.
Result:
A new test method is added to AsciiStringCharacterTest.
Motivation:
Currently, in the single-threaded and global event executors, when the scheduled task queue is drained there is a call to hasScheduledTasks(). If there are scheduled tasks then the code polls the queue for tasks. The poll method duplicates the exact logic of hasScheduledTasks(). This involves two calls to nanoTime when one seems sufficient.
Modifications:
Directly poll the queue for tasks and break if the task returned is null.
Result:
Should be no noticeable impact on functionality. Two calls to nanoTime have been coarsened into a single call.
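For illustration, a sketch of the resulting drain loop inside the executor (names follow the real classes, but treat this as a sketch):
```java
// Sketch: poll directly and stop when null is returned; hasScheduledTasks() duplicated the
// same deadline check and cost an extra nanoTime() call.
long nanoTime = AbstractScheduledEventExecutor.nanoTime();
Runnable scheduledTask = pollScheduledTask(nanoTime);
while (scheduledTask != null) {
    taskQueue.add(scheduledTask);
    scheduledTask = pollScheduledTask(nanoTime);
}
```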
Motivation:
Project Jigsaw in JDK9 has moved the direct byte buffer cleaner from
sun.misc.Cleaner to java.lang.ref.Cleaner$Cleanable. This causes the
current platform tests to throw a ClassNotFoundException, disabling the
use of direct byte buffer cleaners.
Modifications:
I use reflection to find the clean method in either sun.misc.Cleaner or
java.lang.ref.Cleaner$Cleanable.
Result:
Netty uses direct byte buffers on JDK9 as it already does on earlier JDKs.
Motivation:
In JDK9 heap byte buffers have an address field, so we have to remove
the current check as it is invalid in JDK9.
Modifications:
Removed the address field check for heap byte buffers.
Result:
Netty continues to find sun.misc.Unsafe in JDK9 as in previous JDKs.
Motivation:
Netty's platform dependent parts should know about JDK9.
Modifications:
JDK9 introduces Runtime$Version Runtime.version() which has an int major()
method that always returns the major Java version. I call that method to
get the Java major version.
Result:
Netty will recognize all future JDK versions.
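Since the code must still compile on older JDKs, the call has to go through reflection; a sketch:
```java
// Sketch: call Runtime.version().major() reflectively so this still compiles on JDK 8 and earlier.
static int majorVersionFromRuntime() throws Exception {
    Object version = Runtime.class.getMethod("version").invoke(null);
    return (Integer) version.getClass().getMethod("major").invoke(version);
}
```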
Motivation:
HPACK Encoder has a data structure which is similar to a previous version of DefaultHeaders. Some of the same improvements can be made.
Modifications:
- Enforce the restriction that the Encoder's headerFields length must be a power of two so we can use masking instead of modulo
- Use AsciiString.hashCode which already has optimizations instead of having yet another hash code algorithm in Encoder
Result:
Fixes https://github.com/netty/netty/issues/5357
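For illustration, a sketch of the masking trick enabled by the power-of-two restriction:
```java
// Sketch: table.length is a power of two, so the mask is equivalent to a non-negative modulo.
static int bucketIndex(int hash, Object[] table) {
    return hash & (table.length - 1);
}
```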
Motivation:
PlatformDependent attempts to use reflection to get the underlying char[] (or byte[]) from String objects. This is fragile: if the String implementation does not utilize the full array and instead uses a subset of it, this optimization is invalid. OpenJDK6 and some earlier versions of OpenJDK7 String have the capability to use a subsection of the underlying char[].
Modifications:
- PlatformDependent should not attempt to use the underlying array from String (or other data types) via reflection
Result:
PlatformDependent hash code generation for CharSequence does not depend upon specific JDK implementation details.
Motivation:
We recently added the ResourceLeakDetectorFactory but missed updating HashedWheelTimer to use it.
Modifications:
- Add a new abstract method to ResourceLeakDetectorFactory that also allows providing the samplingInterval and maxActive args.
- Deprecate most constructors in ResourceLeakDetector and add docs explaining that people should use ResourceLeakDetectorFactory
Result:
A custom ResourceLeakDetectorFactory will also be used in HashedWheelTimer if configured.
Motivation:
Sometimes a shared HashedWheelTimer cannot easily be stopped in a good place. If the worker thread is a daemon thread this is not a big deal, and we should allow not logging a leak.
Modifications:
Add another constructor which allows disabling resource leak detection when a daemon worker thread is used.
Result:
No resource leak is logged when the HashedWheelTimer is not stopped and the worker thread is a daemon thread.
Motivation:
Some Netty use cases may want to configure the max allowed stack depth for promise listener notification.
Modifications:
- Add a system property so that this value can be configured.
Result:
DefaultPromise's max stack depth is configurable.
Motivation:
PR #5355 modified interfaces to reduce GC related to the HPACK code. However this came with an anticipated performance regression related to HpackUtil.equals due to AsciiString's increased cost of charAt(..). We should mitigate this performance regression.
Modifications:
- Introduce an equals method in PlatformDependent which doesn't leak timing information and use this in HpackUtil.equals
Result:
Fixes https://github.com/netty/netty/issues/5436
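For illustration, the usual shape of such a constant-time comparison (the real method lives in PlatformDependent):
```java
// Sketch of a constant-time comparison: all bytes are always inspected, so the time taken
// does not reveal where the first mismatch occurred.
static boolean constantTimeEquals(byte[] a, int aStart, byte[] b, int bStart, int length) {
    int diff = 0;
    for (int i = 0; i < length; i++) {
        diff |= a[aStart + i] ^ b[bStart + i];
    }
    return diff == 0;
}
```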
Motivation:
It is good to keep the used dependencies and plugins up-to-date to pick up any undiscovered bugs already fixed by their authors.
Modification:
Scanned dependencies and plugins and carefully updated them one by one.
Result:
Dependencies and plugins are up-to-date.
Motivation:
Sometimes it is useful to allow specifying a custom strategy to handle rejected tasks. For example, if someone tries to add tasks from outside the eventloop, it may make sense to back off and retry, giving the executor time to recover.
Modification:
Add a RejectedEventExecutor interface and implementations and allow injecting it.
Result:
More flexible handling of executor overload.
Motivation:
To restrict the memory usage of a system it is sometimes necessary to adjust the maximum number of pending tasks in the task queue.
Modifications:
- Add new constructors to modify the number of allowed pending tasks.
- Add system properties to configure the default values.
Result:
More flexible configuration.
Motivation:
We should merge ThrowableUtils into ThrowableUtil as this name is more consistent with the naming of utility classes in netty.
Modifications:
Merge classes.
Result:
More consistent naming
Motivation:
We use pre-instantiated exceptions in various places for performance reasons. These exceptions don't include a stacktrace, which makes it hard to know where the exception was thrown. This is especially true as we use the same exception type (for example ChannelClosedException) in different places. Setting some StackTraceElements will provide more context as to where these exceptions originate and make debugging easier.
Modifications:
Set a generated StackTraceElement on these pre-instantiated exceptions which at least contains the originating class and method name. The file name and line number are specified as unknown (as allowed by the javadocs of StackTraceElement).
Result:
Easier to find the origin of a pre-instantiated exception.
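For illustration, a sketch of stamping a single synthetic frame onto a shared exception instance (the helper name is illustrative):
```java
// Sketch: give the pre-instantiated exception one synthetic frame naming the origin,
// with file name and line number left unknown (null / -1 as allowed by StackTraceElement).
static <T extends Throwable> T unknownStackTrace(T cause, Class<?> clazz, String method) {
    cause.setStackTrace(new StackTraceElement[] {
            new StackTraceElement(clazz.getName(), method, null, -1)
    });
    return cause;
}
```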
Allow users of Netty to plug in their own leak detector for the purpose
of instrumentation.
Motivation:
We are rolling out a large Netty deployment and want to be able to
track the amount of leaks we're seeing in production via custom
instrumentation. In order to achieve this today, I had to plug in a
custom `ByteBufAllocator` into the bootstrap and have it initialize a
custom `ResourceLeakDetector`. Due to these classes mostly being marked
`final` or having private or static methods, a lot of the code had to
be copy-pasted and it's quite ugly.
Modifications:
* I've added a static loader method for the `ResourceLeakDetector` in
`AbstractByteBuf` that tries to instantiate the class passed in via the
`-Dio.netty.customResourceLeakDetector`, otherwise falling back to the
default one.
* I've modified `ResourceLeakDetector` to be non-final and to have the
reporting broken out in to methods that can be overridden.
Result:
You can instrument leaks in your application by just adding something
like the following:
```java
public class InstrumentedResourceLeakDetector<T> extends ResourceLeakDetector<T> {

    @Monitor("InstanceLeakCounter")
    private final AtomicInteger instancesLeakCounter;

    @Monitor("LeakCounter")
    private final AtomicInteger leakCounter;

    public InstrumentedResourceLeakDetector(Class<T> resource) {
        super(resource);
        this.instancesLeakCounter = new AtomicInteger();
        this.leakCounter = new AtomicInteger();
    }

    @Override
    protected void reportTracedLeak(String records) {
        super.reportTracedLeak(records);
        leakCounter.incrementAndGet();
    }

    @Override
    protected void reportUntracedLeak() {
        super.reportUntracedLeak();
        leakCounter.incrementAndGet();
    }

    @Override
    protected void reportInstancesLeak() {
        super.reportInstancesLeak();
        instancesLeakCounter.incrementAndGet();
    }
}
```
Motivation:
DefaultThreadFactory allows overriding the newThread(...) method, so subclasses should have access to all fields that are set via the constructor.
Modifications:
Change threadGroup from private to protected visibility.
Result:
Easier to extend DefaultThreadFactory.
Motivation:
In case of an exception in invokeExceptionCaught(), only the original exception passed to invokeExceptionCaught() will be logged, on any log level.
In addition, AbstractChannelHandlerContext and CombinedChannelDuplexHandler log different exceptions.
Modifications:
Fix the inconsistent logging code and add the ability to see both stacktraces on DEBUG level.
Result:
Both handlers now log both the original exception and the one thrown from invokeExceptionCaught(). To see the full stacktrace of the exception thrown from invokeExceptionCaught(), the DEBUG log level must be enabled.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues as well as JDK6, which allows us to shade the library into netty-common, allowing it to stay "zero dependency".
Modifications:
- Remove the copy-pasted JCTools code and shade the library instead (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure as the queue implementation does not create as much garbage.