Motivation:
We do not need to do an extra conditional check in retain(...) as we can simply check for overflow after the increment has been done.
Modifications:
- Remove extra conditional check
- Add test code.
Result:
One less conditional check.
Motivation:
OpenSslEngine and OpenSslContext currently rely on finalizers to ensure that native resources are cleaned up. Finalizers require the GC to do extra work, and this extra work can be avoided if the user instead takes responsibility for releasing the native resources.
Modifications:
- Make a base class for OpenSslEngine and OpenSslContext which does not have a finalizer but instead implements ReferenceCounted. If this engine is inserted into the pipeline, it will be released by the SslHandler
- Add a new SslProvider which can be used to enable this new feature
Result:
Users can opt-in to a finalizer free OpenSslEngine and OpenSslContext.
Fixes https://github.com/netty/netty/issues/4958
Motivation:
AbstractReferenceCountedByteBuf has independent conditional statements to check the bounds of the retain IllegalReferenceCountException condition. One of the exceptions also uses the incorrect increment. The same fix was done for AbstractReferenceCounted in 01523e7835.
Modifications:
- Combined independent conditional checks into 1 where possible
- Correct IllegalReferenceCountException with incorrect increment
- Remove the subtract to check for overflow and re-use the addition and check for overflow to remove 1 arithmetic operation (see http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.2)
Result:
AbstractReferenceCountedByteBuf has fewer independent branch statements and throws IllegalReferenceCountException with the correct values. The compiled size of AbstractReferenceCountedByteBuf.retain() is reduced (a sketch of the combined check follows).
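For illustration, a minimal sketch of the combined check described above (class and exception names here are hypothetical stand-ins, not the actual Netty code): the single comparison nextCnt <= increment covers both the "already released" case (refCnt <= 0) and overflow of the addition, so no separate subtraction-based check is needed.

    import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

    // Hypothetical sketch of the combined bounds/overflow check on retain().
    abstract class SketchRefCounted {
        private static final AtomicIntegerFieldUpdater<SketchRefCounted> UPDATER =
                AtomicIntegerFieldUpdater.newUpdater(SketchRefCounted.class, "refCnt");
        private volatile int refCnt = 1;

        SketchRefCounted retain(int increment) {
            for (;;) {
                int current = refCnt;
                int next = current + increment;
                // next <= increment iff current <= 0 (already released) or the
                // addition overflowed, assuming increment > 0.
                if (next <= increment) {
                    // IllegalStateException stands in for IllegalReferenceCountException.
                    throw new IllegalStateException("refCnt: " + current + ", increment: " + increment);
                }
                if (UPDATER.compareAndSet(this, current, next)) {
                    return this;
                }
            }
        }
    }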
Motivation:
If the user uses 0 as the quiet period we should shut down without any delay if possible.
Modifications:
Ensure we do not introduce extra delay when a shutdown quiet period of 0 is used.
Result:
The EventLoop shuts down as fast as expected.
Motivation:
Sometimes it may be useful to explicitly disable the usage of the KeyManagerFactory when using OpenSsl.
Modifications:
Add io.netty.handler.ssl.openssl.useKeyManagerFactory which can be used to explicitly disable KeyManagerFactory usage.
Result:
More flexible usage.
Motivation:
We should take the readerIndex into account when we write into the BIO. It's currently not a problem, as we slice beforehand and so the readerIndex is always 0, but we should not depend on this as it will break easily if we ever refactor the code and no longer slice.
Modifications:
Take the readerIndex into account.
Result:
Safer and more correct usage.
Motivation:
The current constant pool holds all data within a HashMap and all access to this HashMap is done via synchronized blocks. A ConcurrentHashMap will be more efficient here as it is designed for higher throughput and uses fewer locks. Also, the valueOf method was not very efficient as it performed the get operation twice.
Modifications:
HashMap -> PlatformDependent.newConcurrentHashMap().
valueOf is now more efficient, thread-safe and uses fewer locks. The downside is that final T tempConstant = newConstant(nextId(), name); could be called more than once under high contention (see the sketch after this message).
Result:
Less contention, cleaner code.
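A minimal sketch of the lock-free lookup pattern described above, assuming a hypothetical newConstant factory passed in as a function (the real ConstantPool creates constants internally); only the instance that wins the putIfAbsent race is kept, any extra instance is discarded:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Function;

    // Hypothetical sketch of the valueOf pattern over a ConcurrentHashMap.
    final class SketchConstantPool<T> {
        private final ConcurrentMap<String, T> constants = new ConcurrentHashMap<>();

        T valueOf(String name, Function<String, T> newConstant) {
            T constant = constants.get(name);
            if (constant == null) {
                // May be invoked by several threads under contention; only the
                // instance that wins the putIfAbsent race is kept.
                T tempConstant = newConstant.apply(name);
                constant = constants.putIfAbsent(name, tempConstant);
                if (constant == null) {
                    constant = tempConstant;
                }
            }
            return constant;
        }
    }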
Motivation:
We offer DefaultEventExecutorGroup as an EventExecutorGroup which returns OrderedEventExecutor instances and so provides strict ordering of event execution. One limitation of this implementation is that each contained DefaultEventExecutor will always be tied to a single thread, which can lead to very unbalanced execution as one thread may be very busy while others are idling.
Modifications:
- Add NonStickyEventExecutorGroup which can be used to wrap another EventExecutorGroup (like UnorderedThreadPoolEventExecutor) and expose ordering while not being sticky with the thread that is used for a given EventExecutor. This basically means that threads may change between the execution of tasks for an EventExecutor, but ordering is still guaranteed (a usage sketch follows this message).
Result:
Better utilization of threads in some use cases.
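A usage sketch under the assumption that NonStickyEventExecutorGroup wraps an existing EventExecutorGroup via its constructor, as described above:

    import io.netty.util.concurrent.EventExecutor;
    import io.netty.util.concurrent.EventExecutorGroup;
    import io.netty.util.concurrent.NonStickyEventExecutorGroup;
    import io.netty.util.concurrent.UnorderedThreadPoolEventExecutor;

    public class NonStickyExample {
        public static void main(String[] args) {
            EventExecutorGroup pool = new UnorderedThreadPoolEventExecutor(8);
            EventExecutorGroup ordered = new NonStickyEventExecutorGroup(pool);
            EventExecutor executor = ordered.next();
            // Both tasks run in submission order, but possibly on different threads.
            executor.execute(() -> System.out.println("task 1"));
            executor.execute(() -> System.out.println("task 2"));
            ordered.shutdownGracefully();
        }
    }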
Motivation:
To better restrict resource usage we should limit the number of WeakOrderQueue instances per Thread. Once this limit is reached, objects that are recycled from a different Thread than the allocating Thread are dropped on the floor.
Modifications:
Add a new system property io.netty.recycler.maxDelayedQueuesPerThread and a constructor that allows limiting the max number of WeakOrderQueue instances per Thread for a Recycler instance. The default is 2 * cores (the same as the default number of EventLoop instances per EventLoopGroup).
Result:
Better way to restrict resource / memory usage per Recycler instance.
Motivation:
Commons logger is dead and has not been updated for more than 2 years. #5615.
Modifications:
Added @Deprecated annotation to CommonsLoggerFactory and CommonsLogger.
Result:
Commons logger is now deprecated.
Motivation:
When Netty components are initialized, Netty attempts to determine if it
has access to unsafe. If Netty is not able to access unsafe (because of
security permissions, or because the JVM was started with an explicit
flag to tell Netty to not look for unsafe), Netty logs an info-level
message that looks like a warning:
Your platform does not provide complete low-level API for accessing
direct buffers reliably. Unless explicitly requested, heap buffer will
always be preferred to avoid potential system unstability.
This log message can appear in applications that depend on Netty for
networking, and this log message can be scary to end-users of such
applications. This log message should not be emitted if the application was
started with an explicit flag telling Netty to not look for unsafe.
Modifications:
This commit refactors the unsafe detection logic to expose whether or
not the JVM was started with a flag telling Netty to not look for
unsafe. With this exposed information, the log message on unsafe being
unavailable can be modified to not be emitted when Netty is explicitly
told to not look for unsafe.
Result:
No log message is produced when unsafe is unavailable because Netty was
told to not look for it.
Motivation:
AbstractConstant.compareTo seems complex and hard to understand. It also allocates an unnecessary 1-byte direct buffer and holds an unnecessary pointer to this byte buffer.
Modifications:
The uniquifier (id) variable is now initialized during Constant creation, so there is no need for a volatile field or the uniquifier() method, as it can easily be replaced with an AtomicLong.
Result:
Every Constant instance now consumes fewer bytes (no pointer field) and no longer allocates anything in a direct buffer.
Motivation:
At the moment we call initChannel(...) in the channelRegistered(...) method, which has the effect that if another ChannelInitializer is added within the initChannel(...) method, the ordering of the added handlers is incorrect and surprising. This is because the whole initChannel(...) method block is executed before the initChannel(...) block of the added ChannelInitializer is handled (see the sketch after this message).
Modifications:
Call initChannel(...) from within handlerAdded(...) if the Channel is registered already. This is true in all cases for our DefaultChannelPipeline implementation. This way the ordering is always as expected. We still keep the old behaviour as well to not break code for other ChannelPipeline implementations (if someone ever wrote one).
Result:
Correct and expected ordering of ChannelHandlers.
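A sketch of the nesting scenario described above (handler names and the LoggingHandler placeholders are just for illustration): with handlerAdded(...)-based initialization the nested handler ends up between "A" and "B", as one would expect.

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelInitializer;
    import io.netty.handler.logging.LoggingHandler;

    // Expected pipeline order after initialization: "A", "nested", "B".
    public class NestedInitializerExample extends ChannelInitializer<Channel> {
        @Override
        protected void initChannel(Channel ch) {
            ch.pipeline().addLast("A", new LoggingHandler());
            ch.pipeline().addLast(new ChannelInitializer<Channel>() {
                @Override
                protected void initChannel(Channel nestedCh) {
                    nestedCh.pipeline().addLast("nested", new LoggingHandler());
                }
            });
            ch.pipeline().addLast("B", new LoggingHandler());
        }
    }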
Motivation:
The old code is not needed anymore due to logger factory initialization.
Modifications:
Removed the static section and useless static variables.
Logging concatenations replaced with placeholders.
Result:
Cleaner, simpler code doing the same thing.
Motivation:
Some usages of findNextPositivePowerOfTwo assume that bounds checking is taken care of by this method. However, bounds checking is not taken care of by findNextPositivePowerOfTwo; instead assert statements are used to imply the caller has checked the bounds. This can lead to unexpected non-power-of-2 return values if the caller is not careful and thus invalidate any logic which depends upon a power of 2.
Modifications:
- Add a safeFindNextPositivePowerOfTwo method which will do runtime bounds checks and always return a power of 2 (see the sketch after this message)
Result:
Fixes https://github.com/netty/netty/issues/5601
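A minimal sketch of the runtime-checked variant, assuming out-of-range inputs are clamped rather than rejected (the exact behaviour is an assumption here, not the verbatim implementation):

    // Hypothetical sketch: the safe variant clamps out-of-range inputs instead of
    // relying on asserts, so it always returns a power of two.
    final class PowerOfTwoUtil {
        static int findNextPositivePowerOfTwo(int value) {
            assert value > 0 && value <= 0x40000000;
            return 1 << (32 - Integer.numberOfLeadingZeros(value - 1));
        }

        static int safeFindNextPositivePowerOfTwo(int value) {
            return value <= 0 ? 1
                 : value >= 0x40000000 ? 0x40000000
                 : findNextPositivePowerOfTwo(value);
        }
    }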
Motivation:
AddressResolverGroup adds a listener to the termination future of an
EventExecutor when a new AddressResolver is created. The listener calls
AddressResolver.close() when the EventExecutor is terminated to give the
AddressResolver a chance to release its resources.
When using DnsAddressResolverGroup, the AddressResolver.close() will
eventually trigger DnsNameResolver.close(), which closes its underlying
DatagramChannel.
DatagramChannel.close() (or any Channel.close()) will travel through
the pipeline and trigger EventExecutor.execute() because
DnsNameResolver.close() has been invoked from a non-I/O thread.
(NB: A terminationFuture is always notified from the GlobalEventExecutor
thread.)
However, because we are doing this in the listener of the termination
future of the terminated EventLoop we are trying to execute a task upon,
the attempt to close the channel fails with a RejectedExecutionException.
Modifications:
- Do not call Channel.close() in DnsNameResolver.close() if the Channel
has been closed by EventLoop already
Result:
No more RejectedExecutionException when shutting down an event loop.
Motivation:
When a hostname cannot be resolved, the message in the UnknownHostException mentions the hostname with the last attempted search domain appended, which is kind of confusing. I would prefer to see the original hostname supplied to the method in the exception.
Modifications:
Store the pristine hostname in the resolver context and use it to create the exception message instead of the hostname with search domain.
Add unit test to check that the exception does not mention the search domain.
Result:
The exception mentions the unmodified hostname in the message.
Motivation:
The NewLine initialization is complex, non-standard, and performs unnecessary allocations.
The static section is overloaded with StringBuilders for a simple "s" + "s" concatenation pattern that the compiler optimizes perfectly well.
Modifications:
NewLine initialization replaced with the standard System.getProperty("line.separator").
Removed StringBuilders in static section.
Result:
Less complex code.
Motivation:
Boxing operations allocate unnecessary objects when they could be avoided.
Modifications:
Replaced Float.valueOf with Float.parseFloat where possible.
Result:
Fewer unnecessary object allocations.
Motivation:
When Unpooled.wrappedBuffer(...) is called with an array of ByteBufs of length >= 2 and the first ByteBuf is not readable, the empty buffers are released twice when release() is called on the returned buffer (see the sketch after this message).
Modifications:
- Ensure we only wrap readable buffers.
- Add unit test
Result:
No double release of buffers.
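A sketch of the scenario described above (assumed to trigger the problem before the fix): the first buffer passed to wrappedBuffer(...) is empty and therefore not readable.

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    public class WrappedBufferExample {
        public static void main(String[] args) {
            ByteBuf empty = Unpooled.buffer(0); // not readable
            ByteBuf data = Unpooled.copiedBuffer("data", CharsetUtil.US_ASCII);
            ByteBuf wrapped = Unpooled.wrappedBuffer(empty, data);
            // Releasing the wrapper must release each component exactly once.
            wrapped.release();
        }
    }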
Motivation:
When we try to close the Channel due to a timeout we need to ensure we do not log if the notification of the promise fails, as it may already have been completed in the meantime.
Modifications:
Add another constructor to ChannelPromiseNotifier and PromiseNotifier which allows choosing whether to log on notification failure.
Result:
No more misleading logs.
Motivation:
retainSlice() currently does not unwrap the ByteBuf when creating the ByteBuf wrapper. This effectively forms a linked list of ByteBufs when it is only necessary to maintain a reference to the unwrapped ByteBuf.
Modifications:
- retainSlice() and retainDuplicate() variants should only maintain a reference to the unwrapped ByteBuf
- create new unit tests which generally verify the retainSlice() behavior
- Remove unnecessary generic arguments from AbstractPooledDerivedByteBuf
- Remove unnecessary int length member variable from the unpooled sliced ByteBuf implementation
- Rename the unpooled sliced/derived ByteBuf to include Unpooled in their name to be more consistent with the Pooled variants
Result:
Fixes https://github.com/netty/netty/issues/5582
Motivation:
At the moment the Recycler is very sensitive to allocation bursts, which means that if there is a one-time need for X objects, these will most likely end up in the Recycler and sit there forever, as the normal workload only needs a subset of this number.
Modifications:
Add a ratio which sets how many objects should be pooled for each new allocation. This allows the number of objects in the Recycler to increase slowly while not being too sensitive to bursts (see the sketch after this message).
Result:
Fewer unused objects in the Recycler if the allocation rate sometimes bursts.
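A minimal sketch of the ratio idea with hypothetical names (not the actual Recycler code): only every n-th recycled object is kept, so a one-off burst cannot permanently grow the pool.

    // Hypothetical drop policy: keep roughly 1 out of every RATIO recycled objects.
    final class SketchDropPolicy {
        private static final int RATIO_MASK = 8 - 1; // RATIO = 8, must be a power of two
        private int recycleCount;

        boolean drop() {
            // Returns true when the object should be dropped instead of pooled.
            return (++recycleCount & RATIO_MASK) != 0;
        }
    }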
Motivation:
Using Attribute.remove() and Attribute.getAndRemove() in a multi-threaded environment has its drawbacks. Make sure we document these.
Modifications:
Add javadocs and mark Attribute.remove() and Attribute.getAndRemove() as @Deprecated.
Result:
Hopefully less surprising behaviour.
Motivation:
SwappedByteBuf.retainedSlice(...) did not return a retained buffer.
Modifications:
Correctly delegate to retainedSlice(...) calls.
Result:
Correctly return retained slice.
Motivation:
Currently, QueryStringDecoder#path simply returns the path info as is, without decoding it as the Javadoc states.
Modifications:
* Make QueryStringDecoder#path decode the path info.
* Add tests to QueryStringDecoderTest.
Result:
QueryStringDecoder#path now decodes the path info as expected.
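A usage sketch of the behaviour described above (the literal URI is just an example):

    import io.netty.handler.codec.http.QueryStringDecoder;

    public class PathDecodingExample {
        public static void main(String[] args) {
            QueryStringDecoder decoder = new QueryStringDecoder("/hello%20world?a=1");
            // path() now percent-decodes the path component.
            System.out.println(decoder.path()); // prints "/hello world"
        }
    }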
Motivation:
I received a report that it's not possible to add another ChannelInitializer in the initChannel(...) method, so we should add a test case for it.
Modifications:
Added a test case.
Result:
Validate that everything works as expected.
Motivation:
When a ChannelInitializer is used via ServerBootstrap.handler(...) the user's handlers may be added after the internal ServerBootstrapAcceptor. This should not happen.
Modifications:
Delay adding the ServerBootstrapAcceptor until the initChannel(...) method returns.
Result:
Correct order of handlers in the ServerChannel's ChannelPipeline.
Motivation:
We used a very high number for DEFAULT_INITIAL_MAX_CAPACITY (over 200k), which is not very realistic and may lead to very surprising memory usage if allocations happen in bursts.
Modifications:
Use a more sane default value of 32k
Result:
Lower potential memory usage by default
Motivation:
We used Promise.setFailure(...) when failing a Promise in SimpleChannelPool. As this happens on multiple levels, it can result in a StackOverflowError, as setFailure(...) may throw an IllegalStateException which is then propagated again (see the sketch after this message).
Modifications:
Use tryFailure(...)
Result:
No more possibility to cause a stack overflow when failing the promise.
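For illustration, the difference between the two calls (a generic sketch, not the SimpleChannelPool code): setFailure(...) throws if the promise is already done, while tryFailure(...) simply returns false.

    import io.netty.util.concurrent.ImmediateEventExecutor;
    import io.netty.util.concurrent.Promise;

    public class TryFailureExample {
        public static void main(String[] args) {
            Promise<Void> promise = ImmediateEventExecutor.INSTANCE.newPromise();
            promise.trySuccess(null); // completed elsewhere in the meantime
            Throwable cause = new IllegalStateException("connect timed out");
            // setFailure(cause) would throw IllegalStateException here;
            // tryFailure(cause) just reports that the promise was already done.
            boolean failed = promise.tryFailure(cause);
            System.out.println(failed); // prints "false"
        }
    }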
Motivation:
FlushConsolidationHandler#flushIfNeeded takes a boolean whose value is fixed for each code path. This parameter can be removed and the state instead set manually in each code path.
Modifications:
- Remove the boolean parameter on FlushConsolidationHandler#flushIfNeeded and set readInProgress to false manually when necessary
Result:
Fewer conditionals in FlushConsolidationHandler
Motivation:
AbstractReferenceCounted has independent conditional statements to check the bounds of the retain IllegalReferenceCountException condition. One of the exceptions also uses the incorrect increment.
Modifications:
- Combined independent conditional checks into 1 where possible
- Correct IllegalReferenceCountException with incorrect increment
- Remove the subtract to check for overflow and re-use the addition and check for overflow to remove 1 arithmetic operation (see http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.2)
Result:
AbstractReferenceCounted has fewer independent branch statements and throws IllegalReferenceCountException with the correct values. The compiled size of AbstractReferenceCounted.retain() is reduced from 58 bytes to 47 bytes.
Motivation:
We saw sporadic failures of the GlobalEventExecutorTest.testAutomaticStartStop test. This is caused by parallel execution of tests in combination with assert checks that are affected by it.
Modifications:
Remove fragile assert checks.
Result:
No more sporadic test failures
Motivation:
We use a shared capacity per Stack for all its WeakOrderQueue elements. These relations are stored in a WeakHashMap to allow dropping them if memory pressure arises. The problem is that we do not "reclaim" previously reserved space when this happens. This can lead to a Stack which has no shared capacity left, which will then lead to an AssertionError when we try to allocate a new WeakOrderQueue.
Modifications:
- Ensure we never throw an AssertionError if we do not have enough space left for a new WeakOrderQueue
- Ensure we reclaim space when a WeakOrderQueue is collected.
Result:
No more AssertionError is possible when a new WeakOrderQueue is created, and space that was reserved from the shared capacity is correctly reclaimed.
Motivation:
ndots = 0 is a valid value for ndots; it means that when using a non-dotted name, the resolution should first be attempted directly and, if that fails, fall back to the search domains. Currently this value is not allowed. Docker Compose relies on this when wiring up containers, as names usually have no dots in them.
Modification:
Modify DnsNameResolver to accept ndots = 0 and handle this case in the resolution procedure: a direct lookup is done first, and then a fallback to the search path is performed (a configuration sketch follows this message).
Result:
The ndots = 0 case is implemented.
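A configuration sketch assuming the DnsNameResolverBuilder exposes an ndots(...) setter and an EventLoop-based constructor (builder details are assumptions here, not confirmed by the message above):

    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioDatagramChannel;
    import io.netty.resolver.dns.DnsNameResolver;
    import io.netty.resolver.dns.DnsNameResolverBuilder;

    public class NdotsExample {
        public static void main(String[] args) {
            NioEventLoopGroup group = new NioEventLoopGroup(1);
            DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                    .channelType(NioDatagramChannel.class)
                    .ndots(0) // try the name as-is first, then fall back to the search path
                    .build();
            resolver.close();
            group.shutdownGracefully();
        }
    }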
Motivation:
We are currently doing a memory copy to verify the Snappy version. This is not needed.
Modifications:
Remove the memory copy and just compare byte by byte.
Result:
Fewer memory copies and allocations
Motivation:
Due to an implementation flaw in DefaultAttributeMap, it was possible that an attribute and its stored value/key could not be collected until the DefaultAttributeMap itself could be collected. This can lead to unexpected memory usage and strong reachability of objects that should be collectable.
Modifications:
Use a special empty DefaultAttribute as the head of each bucket, which will not store any key/value. With this change everything can be collected as expected, as we do not use any user-created DefaultAttribute as the head of a bucket.
Result:
DefaultAttributeMap does not store user data and thus the lifetime of this user data is not tied to the lifetime of the DefaultAttributeMap.
Motivation:
We need to ensure the uncompressed ByteBuf is released if an exception happens while calling decode(...). If we fail to do so we leak buffers.
Modifications:
Correctly release buffer on exception.
Result:
No more memory leak.
Motivation:
Commit afafadd3d7 introduced a change which stored the Stack in the WeakOrderQueue as a field. This unfortunately had the effect that the entry was no longer removed from the WeakHashMap, as the Stack is also used as the key.
Modifications:
Do not store a reference to the Stack in WeakOrderQueue.
Result:
WeakOrderQueue can be collected correctly again.
Motivation:
We do not need to do any memory copies when doing CRC32 processing.
Modifications:
Use ByteBufChecksum to eliminate memory copies.
Result:
Fewer memory copies.
Motivation:
PR #5493 added support for KeyManagerFactories when using the OpenSsl context. This commit corrects a bug causing a NullPointerException that occurs when using a KeyManagerFactory without a certificate chain and private key.
Modifications:
Removes the assertNotNull() assertions which were causing a certificate chain and private key to be required even when using a KeyManagerFactory. Also removes a redundant call to buildKeyManagerFactory() which was causing an exception when a KeyManagerFactory is provided but a certificate chain and private key are not.
Result:
A KeyManagerFactory can now be used in the OpenSslServerContext without an independent certificate chain and private key.
Motivation:
We should try to minimize memory copies whenever possible.
Modifications:
- Refactor ByteBufChecksum to work with heap and direct ByteBuf always
- Remove the memory copy in Snappy by letting Crc32c extend ByteBufChecksum
Result:
Fewer memory copies when using Snappy
Motivation:
We did an unnecessary memory copy when doing bzip2 encoding.
Modifications:
Remove memory copy and just use a ByteProcessor.
Result:
Fewer memory copies and thus faster.