Motivation:
We should not log by default if the promise is a VoidChannelPromise as its try* methods will always return false.
Modifications:
Do an instanceof check to determine if we should log or not by default
Result:
No more noise in the logs when using a VoidChannelPromise.
Motivation:
If AutoClose is false and there is an IOException then AbstractChannel will not close the channel but instead just fail the flushed elements in the ChannelOutboundBuffer. AbstractChannel also notifies of writability changes, which may lead to an infinite loop if the peer has closed its read side of the socket, because we will keep accepting more data but continuously fail because the peer isn't accepting writes.
Modifications:
- If the transport throws on a write we should acknowledge that the output side of the channel has been shut down and clean up. If the channel can't accept more data because it is full but is still healthy, it is not expected to throw. However if the channel is not healthy it will throw and is not expected to accept any more writes. In this case we should shutdown the output for Channels that support this feature and otherwise just close.
- Connection-less protocols like UDP can remain the same because the channel may be disconnected only temporarily.
- Make sure AbstractUnsafe#shutdownOutput is called, because the shutdown on the socket may throw an exception.
Result:
More correct handling of write failure when AutoClose is false.
Motivation:
ShutdownOutput now fails all pending writes in the ChannelOutboundBuffer and sets it to null. However the Close code path uses the ChannelOutboundBuffer as an indication that the close operation is in progress and so exits early without calling doClose. This will lead to the Channel not actually being fully closed.
Bug introduced by 237a4da1b7
Modifications:
- AbstractChannel#close shouldn't exit early just because outboundBuffer is null, and instead should use additional state closeInitiated to avoid duplicate close operations
Result:
AbstractChannel#close(..) after AbstractChannel#shutdownOutbound() will still invoke doClose and cleanup Channel state.
Motivation:
Continuing to make Netty happy when compiling through ErrorProne.
Modification:
Mostly comments, some minor switch statement changes.
Result:
No more compiler errors!
Motivation:
Calling `newInstance()` on a Class object can bypass compile time
checked Exception propagation. This is noted in Java Puzzlers,
as well as in ErrorProne:
http://errorprone.info/bugpattern/ClassNewInstance
Modifications:
Use the niladic constructor to create a new instance.
Result:
Compile time safety for checked exceptions
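A minimal sketch of the pattern (the class and call site below are illustrative, not Netty's actual code): `Constructor#newInstance()` wraps checked exceptions thrown by the constructor in an `InvocationTargetException`, whereas `Class#newInstance()` rethrows them without any compile-time checking.
```java
import java.lang.reflect.InvocationTargetException;

final class NewInstanceExample {
    // Illustrative helper: prefer the no-arg constructor over Class#newInstance().
    static <T> T create(Class<T> clazz) {
        try {
            // Checked exceptions from the constructor surface as InvocationTargetException.
            return clazz.getDeclaredConstructor().newInstance();
        } catch (InvocationTargetException | InstantiationException
                | IllegalAccessException | NoSuchMethodException e) {
            throw new IllegalStateException("Failed to instantiate " + clazz, e);
        }
    }

    private NewInstanceExample() { }
}
```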
Motivation:
Our http2 child channel implementation was not 100% complete and had a few bugs. Besides this the performance overhead was non-trivial.
Modifications:
There are a lot of modifications, the most important....
* Http2FrameCodec extends Http2ConnectionHandler and Http2MultiplexCodec extends Http2FrameCodec to reduce performance overhead and inter-dependencies on handlers in the pipeline
* Correctly handle outbound flow control for child channels
* Support unknown frame types in Http2FrameCodec and Http2MultiplexCodec
* Use a consistent way to create Http2ConnectionHandler, Http2FrameCodec and Http2MultiplexCodec (via a builder)
* Remove Http2Codec and Http2CodecBuilder as the user should just use Http2MultiplexCodec and Http2MultiplexCodecBuilder now
* Smart handling of flushes from child channels to reduce overhead
* Reduce object allocations
* Child channels always use the same EventLoop as the parent Channel to reduce overhead and simplify the implementation.
* Do not extend AbstractChannel for the child channel implementation to reduce overhead in terms of performance and memory usage
* Remove Http2FrameStream.managedState(...) as the user of the child channel api should just use Channel.attr(...)
Result:
Http2MultiplexCodec (and so child channels) and Http2FrameCodec are more correct, faster and more feature complete.
Motivation:
This PR (unfortunately) does 4 things:
1) Add outbound flow control to the Http2MultiplexCodec:
The HTTP/2 child channel API should interact with HTTP/2 outbound/remote flow control. That is,
if a H2 stream used up all its flow control window, the corresponding child channel should be
marked unwritable and a writability-changed event should be fired. Similarly, an unwritable
child channel should be marked writable and a writability-changed event should be fired, once a
WINDOW_UPDATE frame has been received. The changes are (mostly) contained in ChannelOutboundBuffer,
AbstractHttp2StreamChannel and Http2MultiplexCodec.
2) Introduce a Http2Stream2 object, that is used instead of stream identifiers on stream frames. A
Http2Stream2 object allows an application to attach state to it, and so an application handler
no longer needs to maintain stream state (e.g. in a map(id -> state)) itself.
3) Remove stream state events, which are no longer necessary due to the introduction of Http2Stream2.
Also those stream state events have been found to be hard and complex to work with when porting gRPC
to the Http2FrameCodec.
4) Add support for HTTP/2 frames that have not yet been implemented, like PING and SETTINGS. Also add
a Http2FrameCodecBuilder that exposes options from the Http2ConnectionHandler API that couldn't otherwise
be used with the frame codec, like buffering outbound streams, window update ratio, frame logger, etc.
Modifications:
1) A child channel's writability and a H2 stream's outbound flow control window interact, as described
in the motivation. A channel handler is free to ignore the channel's writability, in which case the
parent channel is responsible for buffering writes until a WINDOW_UPDATE is received.
The connection-level flow control window is ignored for now. That is, a child channel's writability
is only affected by the stream-level flow control window. So a child channel could be marked writable,
even though the connection-level flow control window is zero.
2) Modify Http2StreamFrame and the Http2FrameCodec to take a Http2Stream2 object instead of a primitive
integer. Introduce a special Http2ChannelDuplexHandler that has newStream() and forEachActiveStream()
methods. It's recommended for a user to extend from this handler, to use those advanced features.
3) As explained in the documentation, a newly active inbound stream can be detected by checking if the
Http2Stream2.managedState() of a Http2HeadersFrame is null. A newly active outbound stream can be detected
by adding a listener to the ChannelPromise of the write of the first Http2HeadersFrame. A stream
closed event can be listened to by adding a listener to the Http2Stream2.closeFuture().
4) Add a simple Http2FrameCodecBuilder and implement the missing frame types.
Result:
1) The Http2MultiplexCodec supports outbound flow control.
2) The Http2FrameCodec API makes it easy for a user to manage custom stream specific state and to create
new outbound streams.
3) The Http2FrameCodec API is much cleaner and easier to work with. Hacks like the ChannelCarryingHeadersFrame
are no longer necessary.
4) The Http2FrameCodec now also supports PING and SETTINGS frames. The Http2FrameCodecBuilder allows the Http2FrameCodec
to use some of the rich features of the Http2ConnectionHandler API.
Motivation:
When using the OIO transport we need to act on byte[] when writing and reading from / to the underlying Socket. So we should ensure we use heap buffers by default to reduce memory copies.
Modifications:
Ensure we prefer heap buffers by default for the OIO transport.
Result:
Possibly fewer memory copies.
Motivation:
We need to ensure we always null out (or set) the address on the java.net.DatagramPacket when doing read or write operation as the same instance is used across different calls.
Modifications:
Null out the address if needed.
Result:
Ensure the correct remote address is used when connecting / disconnecting between calls and also when mixing these with calls that directly specify the remote address for datagram packets.
Motivation:
We need to support SO_TIMEOUT for the OioDatagramChannel but we currently miss this as we do not have special handling for it in the DatagramChannelConfig impl that we use. Because of this the following log lines showed up when running the testsuite:
20:31:26.299 [main] WARN io.netty.bootstrap.Bootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x7cb9183c]'
Modifications:
- Add OioDatagramChannelConfig and impl
- Correctly set SO_TIMEOUT in testsuite
Result:
Support SO_TIMEOUT for OioDatagramChannel and so faster execution of datagram related tests in the testsuite
Motivation:
Implementations of DuplexChannel delegate shutdownOutput to the underlying transport, but do not take any action on the ChannelOutboundBuffer. In the event of a write failure due to the underlying transport failing, an application may attempt to shutdown the output and allow the read side of the transport to finish and detect the close. However this may result in an issue where writes are failed, this generates a writability change, we continue to write more data, this may lead to another writability change, and the loop may continue. Shutting down the output should fail all pending writes and not allow any future writes to avoid this scenario.
Modifications:
- Implementations of DuplexChannel should null out the ChannelOutboundBuffer and fail all pending writes
Result:
More controlled sequencing for shutting down the output side of a channel.
Motivation:
When a user called ctx.close() and used the EmbeddedChannel we did not correctly run all pending tasks which means channelInactive was never called.
Modifications:
Ensure we run all pending tasks after all operations that may change the Channel state and are part of the Channel.Unsafe impl.
Result:
Fixes [#6894].
Motivation:
ErrorProne complains that the array override doesn't match the
vararg super call. See http://errorprone.info/bugpattern/Overrides
Additionally, almost every other Future uses the vararg form, so
it would be stylistically consistent to keep it that way.
Modifications:
Use vararg override.
Result:
Cleaner, less naggy code.
Motivation:
DefaultChannelPipeline.estimatorHandle needs to be volatile as it is accessed from different threads.
Modifications:
Make DefaultChannelPipeline.estimatorHandle volatile and correctly init it via CAS
Result:
No more race.
Motivation:
We previously used pollLast() to retrieve a Channel from the queue that backs SimpleChannelPool. This could lead to the problem that some Channels are used very infrequently, and so when they are finally used the connection may already have been closed and could not be reused.
Modifications:
Allow configuring whether the most recently used Channel should be used or the "oldest" one.
Result:
More flexible usage of ChannelPools
Motivation:
Some ChannelOptions must be set before the Channel is really registered to have the desired effect.
Modifications:
Add another constructor argument which allows not registering the EmbeddedChannel with its EventLoop until the user calls register().
Result:
More flexible usage of EmbeddedChannel. Also Fixes [#6968].
Motivation:
We recently had a report that the issue [#6607] is still not fixed.
Modifications:
Add a testcase to prove the issue is fixed.
Result:
More tests.
Motivation:
JCTools 2.0.2 provides an unbounded MPSC linked queue. Before we shaded JCTools we had our own unbounded MPSC linked queue and used it in various places but gave this up because there was no public equivalent available in JCTools at the time.
Modifications:
- Use JCTool's MPSC linked queue when no upper bound is specified
Result:
Fixes https://github.com/netty/netty/issues/5951
Motivation:
Each call to SSL_write may introduce about ~100 bytes of overhead. The OpenSslEngine (based upon OpenSSL) is not able to do gathering writes so this means each wrap operation will incur the ~100 byte overhead. This commit attempts to increase goodput by aggregating the plaintext in chunks of 2^14 bytes (https://tools.ietf.org/html/rfc5246#section-6.2). If many small chunks are written this can increase goodput, decrease the amount of calls to SSL_write, and decrease overall encryption operations.
Modifications:
- Introduce SslHandlerCoalescingBufferQueue in SslHandler which will aggregate plaintext in chunks of up to 2^14 bytes by default
- Introduce SslHandler#setWrapDataSize to control how much data should be aggregated for each write. Aggregation can be disabled by setting this value to <= 0.
Result:
Better goodput when using SslHandler and the OpenSslEngine.
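A hedged usage sketch (the helper class, `sslCtx`, and `ch` names below are assumptions, not from the commit): configuring how much plaintext the SslHandler aggregates before each wrap.
```java
import io.netty.channel.Channel;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslHandler;

final class SslWrapSizeExample {
    // Usage sketch: configure the plaintext aggregation size before adding the handler.
    static void addTls(Channel ch, SslContext sslCtx) {
        SslHandler sslHandler = sslCtx.newHandler(ch.alloc());
        sslHandler.setWrapDataSize(16 * 1024); // aggregate up to ~2^14 bytes per wrap
        // sslHandler.setWrapDataSize(0);      // a value <= 0 disables aggregation
        ch.pipeline().addLast(sslHandler);
    }

    private SslWrapSizeExample() { }
}
```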
Motivation:
The behaviour of FixedChannelPool.release was inconsistent with the
SimpleChannelPool implementation, in which the given promise is returned.
In the FixedChannelPool implementation a new promise was returned, and
this meant that the completion of that promise could differ.
Specifically, on releasing a channel to a closed pool, the parameter
promise is failed with an IllegalStateException but the returned one
will have been successful (as it was completed by the call to
super.release).
Modification:
Return the given promise as the result of FixedChannelPool.release
Result:
Returned promise will reflect the result of the release operation.
Motivation:
Channels returned to a FixedChannelPool after closing it remain active.
Since channels that were acquired from the pool are not closed during the close operation, they remain open even after being released back to the pool, where they are then inaccessible and become, in effect, a connection leak.
Modification:
Close the released channel on releasing back to a closed pool.
Result:
Much harder to create a connection leak by closing an active
FixedChannelPool instance.
Motivation:
We should not fail the promise when a closed Channel is offered back to the ChannelPool as we explicitly mention that the Channel must always be returned.
Modifications:
- Do not fail the promise
- Add test-case
Result:
Fixes [#6831]
Motivation:
ChannelPipeline will happily add a handler to a closed Channel's pipeline and will call handlerAdded(...) but will not call handlerRemoved(...).
Modifications:
Check if the pipeline was destroyed and if so do not add the handler at all but propagate an exception.
Result:
Fixes [#6768]
Motivation:
We currently don't have a native transport which supports kqueue https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2. This can be useful for BSD systems such as MacOS to take advantage of native features, and provide feature parity with the Linux native transport.
Modifications:
- Make a new transport-native-unix-common module with all the java classes and JNI code for generic unix items. This module will build a static library for each unix platform, and be included in the dynamic libraries used for JNI (e.g. transport-native-epoll, and eventually kqueue).
- Make a new transport-native-unix-common-tests module where the tests for the transport-native-unix-common module will live. This is so each unix platform can inherit from these test and ensure they pass.
- Add a new transport-native-kqueue module which uses JNI to directly interact with kqueue
Result:
JNI support for kqueue.
Fixes https://github.com/netty/netty/issues/2448
Fixes https://github.com/netty/netty/issues/4231
This fixes #6652.
Rationale
The invocation of initChannel of ChannelInitializer has been moved so that it
happens as early as when handlerAdded is invoked, in 26aa34853, whereas before
that it was only invoked when channelRegistered was invoked. So the
comment no longer describes how handlers are added in normal circumstances.
However, the code is kept as-is since there might be unusual cases, and
adding ServerBootstrapAcceptor via the event loop is always safe to
enforce the correct order.
Motivation:
In cases when an application is running in a container or is otherwise
constrained to the number of processors that it is using, the JVM
invocation Runtime#availableProcessors will not return the constrained
value but rather the number of processors available to the virtual
machine. Netty uses this number in sizing various resources.
Additionally, some applications will constrain the number of threads
that they are using independently of the number of processors available
on the system. Thus, applications should have a way to globally
configure the number of processors.
Modifications:
Rather than invoking Runtime#availableProcessors, Netty should rely on a
method that enables configuration when the JVM is started or by the
application. This commit exposes a new class NettyRuntime for enabling
such configuration. This value can only be set once. Its default value
is Runtime#availableProcessors so that there is no visible change to
existing applications, but enables configuring either a system property
or configuring during application startup (e.g., based on settings used
to configure the application).
Additionally, we introduce the usage of forbidden-apis to prevent future
uses of Runtime#availableProcessors from creeping in. Future work should
enable the bundled signatures and clean up uses of deprecated and
other forbidden methods.
Result:
Netty can be configured to not use the underlying number of processors,
but rather the constrained number of processors.
Motivation:
We need to release all the buffers that may have been put into our inbound queue once the Channel was closed to ensure we do not leak any memory. This is fine as it basically gives the same guarantees as TCP, which means that even if the promise was notified beforehand it is not really guaranteed that the "remote peer" will see the buffer at all.
Modifications:
Ensure we release all buffers in the inbound buffer if a doClose() is called.
Result:
No more leaks.
Motivation:
1. The use of InternetProtocolFamily is not consistent:
the DnsNameResolverContext and DnsNameResolver contain switches
instead of using the appropriate methods.
2. The InternetProtocolFamily class contains redundant switches in the
constructor.
Modifications:
1. Replacing the switches with the use of the appropriate methods.
2. Simplifying the InternetProtocolFamily constructor.
Result:
Code is cleaner and simpler.
Motivation:
When a VoidChannelPromise is used by the user we need to ensure we propagate the exception through the ChannelPipeline, otherwise the exception will just be swallowed and the user has no idea what's going on.
Modifications:
- Always call tryFailure / trySuccess even when we use the VoidChannelPromise
- Add unit test
Result:
Fixes [#6622].
Motivation:
Commit 795f318 simplified some code related to the special case Set for the selected keys and introduced a Selector wrapper to make sure this set was properly reset. However the JDK makes assumptions about the type of Selector and this type is not extensible. This means whenever we call into the JDK we must provide the unwrapped version of the Selector or we get a ClassCastException. We missed a case of unwrapping in NioEventLoop#rebuildSelector0.
Modifications:
- NioEventLoop#openSelector should return a tuple so we can atomically set the wrapped and unwrapped Selector
- NioEventLoop#rebuildSelector0 should use the unwrapped version of the selector
Result:
Fixes https://github.com/netty/netty/issues/6607.
Motivation:
The code accidentally passes channel twice instead of value, resulting in logs like:
Failed to set channel option 'SO_SNDBUF' with value '[id: 0x2c5b2eb4]' for channel '[id: 0x2c5b2eb4]'
Modifications:
Pass value instead of channel where it needs to be.
Result:
Failed to set channel option 'SO_SNDBUF' with value '0' for channel '[id: 0x9bd3c5b8]'
Motivation:
We forked a new process to detect if the program is run by root. We should better just use the user.name system property.
Modifications:
- Change PlatformDependent.isRoot0() to read the user.name system property to detect if root runs the program and rename it to maybeSuperUser0().
- Rename PlatformDependent.isRoot() to maybeSuperUser() and let it init directly in the static block
Result:
Less heavy way to detect if the program is run by root.
Make the FileRegion comments about which transports are supported more accurate.
Also, eliminate any outstanding references to FileRegion.transfered as the method was renamed for spelling.
Modifications:
Update the class-level comment on FileRegion, and call the renamed method.
Result:
More accurate documentation and fewer calls to deprecated methods.
Motivation:
There are numerous usages of internalNioBuffer which hard code 0 for the index when the intention was to use the readerIndex().
Modifications:
- Remove hard coded 0 for the index and use readerIndex()
Result:
We are less susceptible to using the wrong index, and don't make assumptions about the ByteBufAllocator.
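An illustrative before/after (the helper class and `buf` parameter are assumptions): the readable bytes of a `ByteBuf` do not necessarily start at index 0, so the index passed to `internalNioBuffer` should come from `readerIndex()`.
```java
import io.netty.buffer.ByteBuf;
import java.nio.ByteBuffer;

final class InternalNioBufferExample {
    static ByteBuffer readableNioBuffer(ByteBuf buf) {
        // Wrong: assumes the readable bytes start at index 0.
        // return buf.internalNioBuffer(0, buf.readableBytes());

        // Right: use the reader index so no assumption is made about the allocator.
        return buf.internalNioBuffer(buf.readerIndex(), buf.readableBytes());
    }

    private InternalNioBufferExample() { }
}
```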
Motivation:
Calling a static method is faster than a dynamic one
Modifications:
Add the 'static' keyword to methods where it was missing
Result:
Slightly faster method calls
Motivation:
When "Too many open files" happens,the URLClassLoader cannot do any classloading because URLClassLoader need a FD for findClass. Because of this the anonymous inner class that is created to re-enable auto read may cause a problem.
Modification:
Pre-create the Runnable that is scheduled and so ensure it is not lazily loaded.
Result:
No more problems when trying to recover.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.
Modification:
Use the ThreadLocalRandom implementation of the JDK when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
SelectedSelectionKeySet currently uses 2 arrays internally and users are expected to call flip() to access the underlying array and switch the active array. However we do not concurrently use 2 arrays at the same time and we can get away with using a single array if we are careful about when we reset the elements of the array.
Modifications:
- Introduce SelectedSelectionKeySetSelector which wraps a Selector and ensures we reset the underlying SelectedSelectionKeySet data structures before we select
- The loop bounds in NioEventLoop#processSelectedKeysOptimized can be defined more precisely because we know the real size of the underlying array
Result:
Fixes https://github.com/netty/netty/issues/6058
Motivation:
Simplify implementation of compareTo/equals/hashCode for ChannelIds.
Modifications:
We simplify the hashCode implementation for DefaultChannelId by not
making it random, but making it based on the underlying data. We fix the
compareTo implementation for DefaultChannelId by using lexicographic
comparison of the underlying data array. We fix the compareTo
implementation for CustomChannelId to avoid the possibility of overflow.
Result:
Cleaner code that is easier to maintain.
Motivation:
Initialization of PlatformDependent0 fails on Java 9 in static initializer when calling setAccessible(true).
Modifications:
Add ReflectionUtil which can be used to safely check whether setAccessible(true) can be used or not, and if not fall back to not using reflection.
Result:
Fixes [#6345]
Motivation:
EPOLL annotates some exceptions to provide the remote address, but the original exception is not preserved. This may make determining a root cause more difficult. The static EPOLL exceptions references the native method that failed, but does not provide a description of the actual error number. Without the description users have to know intimate details about the native calls and how they may fail to debug issues.
Modifications:
- annotated exceptions should preserve the original exception
- static exceptions should include the string description of the expected errno
Result:
EPOLL exceptions provide more context and are more useful to end users.
Motivation:
EpollRecvByteAllocatorHandle intends to override the meaning of "maybe more data to read" which is a concept also used in all existing implementations of RecvByteBufAllocator$Handle but the interface doesn't support overriding. Because the interfaces lack the ability to propagate this computation EpollRecvByteAllocatorHandle attempts to implement a heuristic on top of the delegate, which may lead to reading when we shouldn't, or to not reading data at all.
Modifications:
- Create new interfaces ExtendedRecvByteBufAllocator and ExtendedHandle which allow propagating the "maybe more data to read" state between interfaces
- Deprecate RecvByteBufAllocator and change all existing implementations to extend ExtendedRecvByteBufAllocator
- transport-native-epoll should require ExtendedRecvByteBufAllocator so the "maybe more data to read" can be propagated to the ExtendedHandle
Result:
Fixes https://github.com/netty/netty/issues/6303.
Motivation:
Result of validatePromise() is always inverted with if (!validatePromise()).
Modification:
validatePromise() is renamed to isNotValidPromise() and now returns the inverted state so you don't need to invert it in conditions. The name is also more meaningful with respect to the returned result.
Added more tests for validatePromise corner cases with Exceptions.
Result:
Code is easier to read. No need to invert the result.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
NioDatagramChannel fails a write with NotYetConnectedException when the DatagramChannel was not yet connected and a ByteBuf is written. The same should be done for OioDatagramChannel as well.
Modifications:
Make OioDatagramChannel consistent with NioDatagramChannel
Result:
Correct and consistent implementations of DatagramChannel
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
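A minimal sketch of the pattern (not the actual SocketUtils API, whose method names and signatures may differ): wrapping the connect call in a privileged block so only Netty's jar needs the SocketPermission.
```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

final class SocketUtilsSketch {
    // Illustrative helper: only this code needs to be granted SocketPermission.
    static void connect(final Socket socket, final InetSocketAddress remote,
                        final int timeoutMillis) throws IOException {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                    socket.connect(remote, timeoutMillis);
                    return null;
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }

    private SocketUtilsSketch() { }
}
```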
Motivation:
https://github.com/netty/netty/pull/6042 only addressed PlatformDependent#getSystemClassLoader but getClassLoader is also called in an optional manner in some common code paths but fails to catch a general enough exception to continue working.
Modifications:
- Calls to getClassLoader which can continue if results fail should catch Throwable
Result:
More resilient code in the presence of restrictive class loaders.
Fixes https://github.com/netty/netty/issues/6246.
Motivation:
We did not warn about unsupported ChannelOptions when setting the options for the ServerChannel.
Modifications:
- Share code for setting ChannelOptions during bootstrap
Result:
A warning is logged when a ChannelOption is used that is not supported when bootstrapping a Channel. See also [#6192]
Motivation:
The comment on AbstractChannelHandlerContext.invokeHandler() is incorrect and misleading. See [#6177]
Modifications:
Change true to false to correct the comment.
Result:
Fix the misleading and incorrect comment.
Motivation:
`SimpleChannelPool` subclasses are likely to override the `connectChannel` method, and are likely to clobber the cloned `Bootstrap` handler in the process. To allow subclasses to properly notify the pool listener of new connections, we should expose (at least) the `handler` property of the pool to subclasses.
Modifications:
Expose `SimpleChannelPool` properties to subclasses via `protected` getters.
Result:
Subclasses can now use the bootstrap, handler, health checker, and health-check-on-release properties from their superclass.
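A hedged sketch of how a subclass might use the newly exposed accessors (the `handler()` getter name follows the description above but should be treated as an assumption): a custom `connectChannel` can still notify the pool's `ChannelPoolHandler` even though it replaces the cloned Bootstrap's handler.
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.pool.ChannelPoolHandler;
import io.netty.channel.pool.SimpleChannelPool;

public class NotifyingChannelPool extends SimpleChannelPool {
    public NotifyingChannelPool(Bootstrap bootstrap, ChannelPoolHandler handler) {
        super(bootstrap, handler);
    }

    @Override
    protected ChannelFuture connectChannel(Bootstrap bs) {
        // We clobber the cloned Bootstrap's handler, so notify the pool listener ourselves.
        bs.handler(new ChannelInitializer<Channel>() {
            @Override
            protected void initChannel(Channel ch) throws Exception {
                handler().channelCreated(ch);
            }
        });
        return bs.connect();
    }
}
```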
Motivation:
DefaultChannelId provides a regular expression which validates if a user provided MAC address is valid. This regular expression may allow invalid MAC addresses and also not allow valid MAC addresses.
Modifications:
- Introduce a MacAddressUtil#parseMac method which can parse and validate the MAC address at the same time. The regular expression check beforehand is additional overhead if we have to parse the MAC address.
Result:
Fixes https://github.com/netty/netty/issues/6132.
Motivation:
On some platforms the PID may be bigger than 4194304, so we should not limit it to 4194304.
Modifications:
Only check that the PID is a valid Integer
Result:
No more warnings on systems where the PID is bigger than 4194304.
Motivation:
In later Java 8 versions our Atomic*FieldUpdaters are slower than the JDK implementations, so we should not use ours anymore. On top of that, the JDK implementations provide, for example, an optimized version of addAndGet(...) using intrinsics which makes it a lot faster for this use-case.
Modifications:
- Remove methods that return our own Atomic*FieldUpdaters.
- Use the JDK implementations everywhere.
Result:
Faster code.
Motivation:
e102a008b6 changed a conditional where previously the NIO ServerChannel would not be closed in the event of an exception.
Modifications:
- Restore the logic prior to e102a008b6 which does not automatically close ServerChannels for IOExceptions
Result:
NIO ServerChannel doesn't close automatically for an IOException.
Motivation:
We should not catch ConcurrentModificationException as this can never happen because things are executed on the EventLoop thread.
Modifications:
Remove try / catch
Result:
Cleaner code.
Modifications:
LocalChannel#releaseInboundBuffers should always clear/release the queue and set readInProgress to false
Result:
LocalChannel queue is more reliably cleaned up.
Motivation:
LocalChannel attempts to close its peer socket whenever it is closed. However if the channels are on different EventLoops we may attempt to process events for the peer channel on the wrong EventLoop.
Modifications:
- Ensure the close process ensures we are on the correct thread before accessing data
Result:
More correct LocalChannel close code.
Motivation:
PlatformDependent#getSystemClassLoader may throw a wide variety of exceptions based upon the environment. We should handle all exceptions and continue initializing the slow path if an exception occurs.
Modifications:
- Catch Throwable in cases where PlatformDependent#getSystemClassLoader is used
Result:
Fixes https://github.com/netty/netty/issues/6038
Motivation:
Netty provides an adapter from ByteBuf to Java's InputStream interface. The JDK Stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from the ByteBuf which is wrapped, and is controlled by the interface which accepts the JDK Stream. However Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. There may be no way for existing classes which only accept the InputStream interface to communicate when they are done with the stream, other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as the ownership of the underlying ByteBuf resource may be transferred to the Java Stream.
Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf
Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
Motivation:
To guard against the case that a user will enqueue a lot of empty or small buffers and so cause an OOME, we need to also take the overhead of the ChannelOutboundBuffer / PendingWriteQueue into account when detecting whether a Channel is writable or not. This is related to #5856.
Modifications:
When calculating the memory used for a message that is enqueued, also add some extra bytes depending on the implementation.
Result:
Better guard against OOME.
Motivation
It's possible to extend LocalChannel as well as LocalServerChannel but the LocalServerChannel's serve(peer) method is hardcoded to create only instances of LocalChannel.
Modifications
Add a protected factory method that by default returns a new LocalChannel(...) but which users may override to customize it.
Result
It's possible to customize the LocalChannel instance on either end of the virtual connection.
Motivation:
Some unit tests in SingleThreadEventLoopTest rely upon Thread.sleep for sequencing events between threads. This can be unreliable and result in spurious test failures if thread scheduling does not occur in a fair predictable manner.
Modifications:
- Reduce the reliance on Thread.sleep in SingleThreadEventLoopTest
Result:
Fixes https://github.com/netty/netty/issues/5851
Motivation:
The local transport is used to communicate in the same JVM so we should use heap buffers.
Modifications:
Use heap buffers by default if not requested otherwise.
Result:
No allocation of direct buffers by default when using the local transport.
Motivation:
When using java.nio.DatagramChannel we should not close the channel when a SocketException was thrown as we can still use the channel.
Modifications:
Do not close the Channel when a SocketException is thrown
Result:
More robust and correct handling of exceptions when using NioDatagramChannel.
Motivation:
If an exception is thrown while processing the ready channels in the EventLoop we should still run all tasks as this may allow recovery. For example an OutOfMemoryError may be thrown and runAllTasks() will free up memory again. Besides this we should also ensure we always allow shutdown even if an exception was thrown.
Modifications:
- Call runAllTasks() in a finally block
- Ensure shutdown is always handled.
Result:
More robust EventLoop implementations for NIO and Epoll.
Motivation:
We should better first process OP_WRITE before OP_READ as this may allow us to free memory in a faster fashion for previously queued writes.
Modifications:
Process OP_WRITE before OP_READ
Result:
Free memory faster for queued writes.
Motivation:
the build doesn't seem to enforce this, so they piled up
Modifications:
removed unused import lines
Result:
fewer unused imports
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
the implicit #fireChannelReadComplete() in EmbeddedChannel#writeInbound().
Motivation
We use EmbeddedChannels to implement a ProxyChannel of some sorts that shovels
messages between a source and a destination Channel. The latter are real network
channels (such as Epoll) and they may or may not be managed in a ChannelPool. We
could fuse both ends directly together but the EmbeddedChannel provides a nice
disposable section of a ChannelPipeline that can be used to instrument the messages
that are passing through the proxy portion.
The ideal flow looks about like this:
source#channelRead() -> proxy#writeOutbound() -> destination#write()
source#channelReadComplete() -> proxy#flushOutbound() -> destination#flush()
destination#channelRead() -> proxy#writeInbound() -> source#write()
destination#channelReadComplete() -> proxy#flushInbound() -> source#flush()
The problem is that #writeOutbound() and #writeInbound() emit surplus #flush()
and #fireChannelReadComplete() events which in turn yield to surplus #flush()
calls on both ends of the pipeline.
Modifications
Introduce a new set of write methods that retain the same semantics as the #write()
method, as well as #flushOutbound() and #flushInbound().
Result
It's possible to implement the above ideal flow.
Fix for EmbeddedChannel#ensureOpen() and Unit Tests for it
Some PR stuff.
Motivation:
To make it easier to debug why notification of a promise failed we should log extra info and make it consistent.
Modifications:
- Create a new PromiseNotificationUtil that has static methods that can be used to try notify a promise and log.
- Reuse this in AbstractChannelHandlerContext, ChannelOutboundBuffer and PromiseNotifier
Result:
Easier to debug why a promise could not be notified.
Motivation:
RFC 7871 defines an extension which allows requesting responses for a given client subnet.
Modifications:
- Add DnsOptPseudoRrRecord which can act as base class for extensions based on EDNS(0) as defined in RFC6891
- Add DnsOptEcsRecord to support the Client Subnet in DNS Queries extension
- Add tests
Result:
Client Subnet in DNS Queries extension is now supported.
Motivation:
For use cases that demand frequent updates of the write watermarks, an
API that requires immutable WriteWaterMark objects is not ideal, as it
implies a lot of object allocation.
For example, the HTTP/2 child channel API uses write watermarks for outbound
flow control and updates the write watermarks on every DATA frame write.
Modifications:
Remove the @Deprecated tag from the primitive getters and setters; however, the corresponding
channel options remain deprecated.
Result:
Primitive getters and setters for write watermarks are no longer marked @Deprecated.
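A short usage sketch (the helper class and `ch` parameter are assumptions): with the primitive setters no longer deprecated, frequent watermark updates can avoid allocating a new water-mark object each time.
```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelConfig;

final class WatermarkUpdateExample {
    // Assumes low <= the currently configured high watermark so no validation error occurs.
    static void updateWatermarks(Channel ch, int low, int high) {
        ChannelConfig config = ch.config();
        // Two primitive setters instead of allocating an immutable water-mark object.
        config.setWriteBufferLowWaterMark(low);
        config.setWriteBufferHighWaterMark(high);
    }

    private WatermarkUpdateExample() { }
}
```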
Motivation:
The JDK implementation of SocketChannel has an internal state that is tracked for its operations. Because of this we need to ensure we call finishConnect() before trying to call read(...) / write(...) as otherwise it may produce a NotYetConnectedException.
Modifications:
First process OP_CONNECT flag.
Result:
No more possibility of a NotYetConnectedException caused by OP_CONNECT not being handled early enough when processing the interestedOps for a Channel.
Motivation:
The DefaultEventLoopGroup class extends MultithreadEventExecutorGroup but doesn't expose the ctor variants that accept a custom Executor like NioEventLoopGroup and EpollEventLoopGroup do.
Modifications:
Add missing constructor.
Result:
Be able to use custom Executor with DefaultEventLoopGroup.
Motivation:
When attempting to set the selectedKeys fields on the selector
implementation, JDK 9 can throw an inaccessible object exception.
Modifications:
Catch and log this exception as a possible course of action if the
sun.nio.ch package is not exported from java.base.
Result:
The selector replacement will fail gracefully as an expected course of
action if the sun.nio.ch package is not exported from java.base.
Motivation:
The NIO transport used an IllegalStateException if a user tried to issue another connect(...) while the connect was still in progress. For this case the JDK specifies a ConnectPendingException which we should use. The same issue exists in the EPOLL transport. Besides this the EPOLL transport also does not throw the right exceptions for ENETUNREACH and EISCONN errno codes.
Modifications:
- Replace IllegalStateException with ConnectPendingException in NIO and EPOLL transport
- throw correct exceptions for ENETUNREACH and EISCONN in EPOLL transport
- Add test case
Result:
More correct error handling for connect attempts when using NIO and EPOLL transport
Motivation:
The API documentation in ChannelConfig states that a channel is writable,
if the number of pending bytes is below the low watermark and a
channel is not writable, if the number of pending bytes exceeds the high
watermark.
Therefore, we should use < operators instead of <= as well as > instead of >=.
Using <= and >= is also problematic, if the low watermark is equal to the high watermark,
as then a channel could be both writable and unwritable with the same number of pending
bytes (depending on whether remove() or addMessage() is called first).
The use of <= and >= was introduced in PR https://github.com/netty/netty/pull/3036, but
I don't understand why, as there doesn't seem to have been any discussion around that.
Modifications:
Use < and > operators instead of <= and >=.
Result:
High and low watermarks are treated as stated in the API docs.
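A self-contained sketch of the intended comparisons (not the actual ChannelOutboundBuffer code; all names below are illustrative): the strict < and > operators keep the writable and unwritable states from both applying at the same byte count, even when the two watermarks are equal.
```java
// Sketch: a channel becomes unwritable only when pending bytes strictly exceed the high
// watermark and writable again only when they fall strictly below the low watermark.
final class WritabilitySketch {
    private boolean writable = true;

    void update(long pendingBytes, int lowWaterMark, int highWaterMark) {
        if (pendingBytes > highWaterMark) {        // was >= before the fix
            writable = false;
        } else if (pendingBytes < lowWaterMark) {  // was <= before the fix
            writable = true;
        }
    }

    boolean isWritable() {
        return writable;
    }
}
```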
Motivation:
We need to ensure we also call fireChannelActive() if the Channel is directly closed in a ChannelFutureListener that belongs to the promise for the connect. Otherwise we will see missing active events.
Modifications:
Ensure we always call fireChannelActive() if the Channel was active.
Result:
No missing events.
Motivation:
We often use javaChannel().socket().* in NIO as these methods exist in Java 6. The problem is that these often throw very general Exceptions (like SocketException) while it is more expected to throw the Exceptions listed in the NIO interfaces. When possible we should use the new methods available in Java 7+ which throw the correct exceptions.
Modifications:
Check the Java version and, depending on it, use the socket or the java channel.
Result:
Throw expected Exceptions.
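An illustrative helper (not Netty's actual code path; the class and method are assumptions) showing the described version check for a bind operation: on Java 7+ the NIO channel method is used, which throws the NIO-specified exception types, otherwise the java.net.Socket API is used.
```java
import java.io.IOException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;
import io.netty.util.internal.PlatformDependent;

final class BindCompat {
    static void bind(SocketChannel ch, SocketAddress localAddress) throws IOException {
        if (PlatformDependent.javaVersion() >= 7) {
            ch.bind(localAddress);           // throws the exceptions listed in the NIO interfaces
        } else {
            ch.socket().bind(localAddress);  // may throw more general exceptions like SocketException
        }
    }

    private BindCompat() { }
}
```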
Motivation:
To make it easier to debug connect exceptions we create new exceptions which also contain the remote address. For this we basically create a new instance and call setStackTrace(...). When doing this we pay an extra penalty because fillInStackTrace() is called by the super constructor.
Modifications:
Create special sub-classes of Exceptions that override the fillInStackTrace() method and so eliminate the overhead.
Result:
Less overhead when "annotate" connect exceptions.
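A minimal sketch of the technique (the class below is illustrative; Netty's actual annotated exception types may differ): overriding fillInStackTrace() avoids the cost of capturing a stack trace that is immediately replaced by the original exception's trace anyway.
```java
import java.net.ConnectException;
import java.net.SocketAddress;

final class AnnotatedConnectException extends ConnectException {
    AnnotatedConnectException(ConnectException cause, SocketAddress remote) {
        super(cause.getMessage() + ": " + remote);
        initCause(cause);
        setStackTrace(cause.getStackTrace());
    }

    @Override
    public Throwable fillInStackTrace() {
        // The stack trace is taken from the original exception, so filling it in
        // again here would be pure overhead.
        return this;
    }
}
```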
Motivation:
Comments stating that AUTO_CLOSE will be removed in Netty 5.0 are wrong,
as there is no Netty 5.0.
Modifications:
Removed comment.
Result:
No more references to Netty 5.0
Motivation:
PendingWriteQueue should guard against re-entrant writes once removeAndWriteAll() is run.
Modifications:
Continue writing until queue is empty.
Result:
Correctly guard against re-entrance.
Motivation:
Instrumenting the NIO selector implementation requires special
permissions. Yet, the code for performing this instrumentation is
executed in a manner that would require all code leading up to the
initialization to have the requisite permissions. In a restrictive
environment (e.g., under a security policy that only grants the
requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), then instrumenting the
selector will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary blocks as privileged. This enables
access to the necessary resources for instrumenting the selector. The
idea is that we are saying the Netty code is trusted, and as long as the
Netty code has been granted the necessary permissions, then we will
allow the caller access to these resources even though the caller itself
might not have the requisite permissions.
Result:
The selector can be instrumented in a restrictive security environment.
Motivation:
Writing to a system property requires permissions. Yet the code for
setting sun.nio.ch.bugLevel is not marked as privileged. In a
restrictive environment (e.g., under a security policy that only grants
the requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), writing to this system
property will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary code block as privileged. This enables
writing to this system property. The idea is that we are saying the
Netty code is trusted, and as long as the Netty code has been granted
the necessary permissions, then we will allow the caller access to these
resources even though the caller itself might not have the requisite
permissions.
Result:
The system property sun.nio.ch.bugLevel can be written to in a
restrictive security environment.
Motivation:
If the user uses 0 as the quiet period we should shut down without any delay if possible.
Modifications:
Ensure we do not introduce extra delay when a shutdown quiet period of 0 is used.
Result:
The EventLoop shuts down as fast as expected.
Motivation:
At the moment we call initChannel(...) in the channelRegistered(...) method which has the effect that if another ChannelInitializer is added within the initChannel(...) method the ordering of the added handlers is not correct and surprising. This is as the whole initChannel(...) method block is executed before the initChannel(...) block of the added ChannelInitializer is handled.
Modifications:
Call initChannel(...) from within handlerAdded(...) if the Channel is registered already. This is true in all cases for our DefaultChannelPipeline implementation. This way the ordering is always as expected. We still keep the old behaviour as well to not break code for other ChannelPipeline implementations (if someone ever wrote one).
Result:
Correct and expected ordering of ChannelHandlers.
Motivation:
When we try to close the Channel due to a timeout we need to ensure we do not log if the notification of the promise fails, as it may have been completed in the meantime.
Modifications:
Add another constructor to ChannelPromiseNotifier and PromiseNotifier which allows specifying whether to log on notification failure.
Result:
No more misleading logs.
Motivation:
I received a report that it's not possible to add another ChannelInitializer in the initChannel(...) method, so we should add a test case for it.
Modifications:
Added testcase.
Result:
Validate that all works as expected.
Motivation:
When a ChannelInitializer is used via ServerBootstrap.handler(...) the user's handlers may be added after the internal ServerBootstrapAcceptor. This should not happen.
Modifications:
Delay the adding of the ServerBootstrapAcceptor until the initChannel(....) method returns.
Result:
Correct order of handlers in the ServerChannels ChannelPipeline.
Motivation:
We used Promise.setFailure(...) when failing a Promise in SimpleChannelPool. As this happens at multiple levels this can result in a stack overflow, as setFailure(...) may throw an IllegalStateException which then again is propagated.
Modifications:
Use tryFailure(...)
Result:
No more possibility to cause a stack overflow when failing the promise.
Motivation:
The SimpleChannelPool#notifyConnect() method will leak Channels if the user cancelled the Promise in between.
Modifications:
Release the channel if the Promise was already completed.
Result:
No more channel leaks.
Motivation:
DefaultChannelId attempts to acquire a default process ID by determining
the process PID. However, to do this it attempts to punch through to the
system classloader, a permission that in the face of a restrictive
security manager is unlikely to be granted. Looking past this, it then
attempts to load a declared method off a reflectively loaded class,
another permission that is not likely to be granted in the face of a
restrictive security manager. However, neither of these permissions are
necessary as the punching through to the system classloader is
completely unneeded, and there is no need to load a public method as a
declared method.
Modifications:
Instead of punching through to the system classloader requiring
restricted permissions, we can just use current classloader. To address
the access declared method permission, we instead just reflectively
obtain the desired public method via Class#getMethod.
Result:
Acquiring the default process ID from the PID will succeed without
requiring the runtime permissions "getClassLoader" and
"accessDeclaredMembers".
Motivation:
In 4.0 AbstractNioByteChannel has a default of 16 max messages per read. However in 4.1 that constraint was applied at the NioSocketChannel which is not equivalent. In 4.1 AbstractEpollStreamChannel also did not have the default of 16 max messages per read applied.
Modifications:
- Make Nio consistent with 4.0
- Make Epoll consistent with Nio
Result:
Nio and Epoll both have consistent ChannelMetadata and are consistent with 4.0.
Motivation:
This change is part of the change done in PR #5395 to provide an `AUTO_FLUSH` capability.
Splitting this change will enable trying other ways of implementing `AUTO_FLUSH`.
Modifications:


Two methods:
```java
void executeAfterEventLoopIteration(Runnable task);


boolean removeAfterEventLoopIterationTask(Runnable task);
```
are added to the `SingleThreadEventLoop` class for adding/removing a task to be executed at the end of the current/next iteration of this `eventloop`.
In order to support the above, a few methods are added to `SingleThreadEventExecutor`
```java
protected void afterRunningAllTasks() { }
```
This is invoked after all tasks are run for this executor OR if the passed timeout value for `runAllTasks(long timeoutNanos)` has expired.
Added a queue of `tailTasks` to `SingleThreadEventLoop` to hold all tasks to be executed at the end of every iteration.


Result:


`SingleThreadEventLoop` now has the ability to execute tasks at the end of an eventloop iteration.
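A hedged usage sketch (the helper class and `channel` parameter are assumptions): scheduling a flush to run once the current event loop iteration has finished, which is the building block for an `AUTO_FLUSH`-style behaviour.
```java
import io.netty.channel.Channel;
import io.netty.channel.EventLoop;
import io.netty.channel.SingleThreadEventLoop;

final class AfterIterationFlushExample {
    static void flushAtEndOfIteration(final Channel channel) {
        EventLoop loop = channel.eventLoop();
        if (loop instanceof SingleThreadEventLoop) {
            ((SingleThreadEventLoop) loop).executeAfterEventLoopIteration(new Runnable() {
                @Override
                public void run() {
                    // Runs after all tasks of this event loop iteration have completed.
                    channel.flush();
                }
            });
        }
    }

    private AfterIterationFlushExample() { }
}
```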
Motivation:
For some use-cases it would be useful to know the number of bytes queued in the PendingWriteQueue without the need to dequeue them.
Modifications:
Add PendingWriteQueue.bytes().
Result:
Be able to get the number of bytes queued.
Motivation:
Commit 4c048d069d moved the logic of calling handlerAdded(...) to the channelRegistered(...) callback of the head of the DefaultChannelPipeline. Unfortunately this may execute the callbacks too late as a user may add handlers to the pipeline in the ChannelFutureListener attached to the registration future. This can lead to incorrect ordering.
Modifications:
Ensure we always invoke ChannelHandler.handlerAdded(...) for all handlers before the registration promise is notified.
Result:
No possibility of incorrect ordering or missed events.
Motivation:
We pinned the EventExecutor for a Channel in DefaultChannelPipeline, which means that if the user added multiple handlers with the same EventExecutorGroup to the ChannelPipeline, the same EventExecutor was used for all of these handlers. This may be unexpected and not what the user wants. If the user wants to use the same one for all of them, this can be done by obtaining an EventExecutor and passing the same instance to the add methods. Because of this we should allow not pinning.
Modifications:
Allow disabling pinning of the EventExecutor for a Channel based on the EventExecutorGroup via a ChannelOption.
Result:
Less confusing and more flexible usage of EventExecutorGroup when adding ChannelHandlers to the ChannelPipeline.
Motivation
When I override ChannelHandler methods I usually (always) refire events myself via
ChannelHandlerContext instead of relying on calling the super method (say
`super.write(ctx, ...)`). This works great and the IDE actually auto completes/generates
the right code for it except `#fireUserEventTriggered()` and `#userEventTriggered()`
which have mismatching argument names and I have to manually "intervene".
Modification
Rename `ChannelHandlerContext#fireUserEventTriggered()` argument from `event` to `evt`
to match its handler counterpart.
Result
The IDE's auto generated code will reference the correct variable.
Motivation:
In commit f984870ccc I made a change which operated under the invalid assumption that tasks executed by an EventExecutor will always be processed in a serial fashion. This is true for SingleThreadEventExecutor sub-classes but not part of the EventExecutor interface contract.
Because of this change implementations of EventExecutor which do not strictly execute tasks in a serial fashion may miss events before handlerAdded(...) is called. This is strictly speaking not correct as there is no guarantee in this case that handlerAdded(...) will be called as the first task (as there is no ordering guarantee).
Cassandra itself ships such an EventExecutor implementation which has no strict ordering to spread load across multiple threads.
Modifications:
- Add new OrderedEventExecutor interface and let SingleThreadEventExecutor / EventLoop implement / extend it.
- Only expose "restriction" of skipping events until handlerAdded(...) is called for OrderedEventExecutor implementations
- Add ThreadPoolEventExecutor implementation which executes tasks in an unordered fashion. This is used in the added unit test but can also be used for protocols which do not expose a strict ordering.
- Add unit test.
Result:
Resurrect the possibility to implement an EventExecutor which does not enforce serial execution of events and be able to use it with the DefaultChannelPipeline.
Motivation:
We should make it clear that each acquired Channel needs to be released in all cases.
Modifications:
More clear javadocs.
Result:
Harder for users to leak Channel.
Motivation:
The field can be read from arbitrary threads via Channel.(isWritable()|bytesBeforeWritable()|bytesBeforeUnwritable()), WriteAndFlushTask.newInstance(), PendingWriteQueue, etc.
Modifications:
Make AbstractChannel.outboundBuffer volatile.
Result:
More correct in a concurrent use case.
Motivation:
We used future in many methods of ChannelDuplexHandler as the argument name for a ChannelPromise. We should make it more consistent and correct.
Modifications:
Replace future with promise.
Result:
More correct and consistent naming.
Motivation:
Sometimes it is useful to allow specifying a custom strategy to handle rejected tasks. For example if someone tries to add tasks from outside the eventloop it may make sense to back off and retry and so give the executor time to recover.
Modification:
Add RejectedEventExecutor interface and implementations and allow injecting it.
Result:
More flexible handling of executor overload.
Motivation:
To restrict the memory usage of a system it is sometimes needed to adjust the number of max pending tasks in the task queue.
Modifications:
- Add new constructors to modify the number of allowed pending tasks.
- Add system properties to configure the default values.
Result:
More flexible configuration.
Motivation:
We should merge ThrowableUtils into ThrowableUtil as this name is more consistent with the naming of utility classes in netty.
Modifications:
Merge classes.
Result:
More consistent naming
Motivation:
These methods were recently deprecated. However, they remained in use in several locations in Netty's codebase.
Modifications:
Netty's code will now access the bootstrap config to get the group or child group.
Result:
No impact on functionality.
Motivation:
If a user writes their own NIO-based transport which uses a special SelectorProvider, it is useful to be able to get the SelectorProvider that is used by a NioEventLoop. This way it can be used when implementing AbstractChannel.isCompatible(...) to check that the SelectorProvider is the correct one.
Modifications:
Expose the SelectorProvider.
Result:
Be able to get the SelectorProvider used by a NioEventLoop.
Motivation:
We use pre-instantiated exceptions in various places for performance reasons. These exceptions don't include a stacktrace which makes it hard to know where the exception was thrown. This is especially true as we use the same exception type (for example ChannelClosedException) in different places. Setting some StackTraceElements will provide more context as to where these exceptions originate and make debugging easier.
Modifications:
Set a generated StackTraceElement on these pre-instantiated exceptions which at least contains the origin class and method name. The filename and line number are specified as unknown (as stated in the javadocs of StackTraceElement).
Result:
Easier to find the origin of a pre-instantiated exception.
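A minimal sketch of the idea (the helper class and method name are assumptions; Netty's own helper lives in ThrowableUtil): give a shared, pre-created exception a single synthetic frame naming its origin class and method.
```java
final class OriginFrameExample {
    // Illustrative helper: mark where a pre-instantiated exception "comes from".
    static <T extends Throwable> T withOriginFrame(T cause, Class<?> clazz, String method) {
        cause.setStackTrace(new StackTraceElement[] {
                // File name and line number are unknown, as permitted by StackTraceElement.
                new StackTraceElement(clazz.getName(), method, null, -1)
        });
        return cause;
    }

    private OriginFrameExample() { }
}
```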
Motivation:
To better debug why a Selector needs to be rebuilt it is useful to also log the instance of the Selector.
Modifications:
Add the Selector instance to the log message.
Result:
More useful log message.
Motivation:
When `ChannelFactory#newChannel` crashed, `AbstractBootstrap#initAndRegister` propagates the exception to the caller instead of failing the promise.
Modifications:
- Catch exceptions from `ChannelFactory#newChannel`.
- Notify promise of such failure.
Result:
`AbstractBootstrap` gracefully handles connect failures.
Motivation:
In case of an exception in invokeExceptionCaught(), only the original exception passed to invokeExceptionCaught() will be logged, on any log level.
+ AbstractChannelHandlerContext and CombinedChannelDuplexHandler log different exceptions.
Modifications:
Fix inconsistent logging code and add ability to see both stacktraces on DEBUG level.
Result:
Both handlers now log both the original exception and the one thrown from invokeExceptionCaught. To see the full stacktrace of the exception thrown from invokeExceptionCaught, the DEBUG log level must be enabled.
Motivation:
JCTools supports both non-unsafe and unsafe versions of queues as well as JDK6, which allows us to shade the library in netty-common, allowing it to stay "zero dependency".
Modifications:
- Remove copy paste JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it all together.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure as the queue implementation does not create so much garbage.
Motivation:
`Bootstrap` has a notion of a default resolver group, but it's hidden from the public. To allow callers to reset a `Bootstrap` instance's resolver group, we could either make `DEFAULT_RESOLVER` public, or we could allow callers to pass `null` as an argument to `Bootstrap#resolver(AddressResolverGroup<?>)`. This pull request does the latter.
Modifications:
- Allow `Bootstrap#resolver(AddressResolverGroup<?>)` to accept `null` as an argument
Result:
Callers may pass `null` to `Bootstrap#resolver(AddressResolverGroup<?>)` to cause the `Bootstrap` instance to use its default resolver group.
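A short usage sketch (the helper class and `customResolverGroup` parameter are placeholders, not from the PR): passing `null` restores the default resolver group.
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.resolver.AddressResolverGroup;

final class ResolverResetExample {
    static void configure(Bootstrap bootstrap, AddressResolverGroup<?> customResolverGroup) {
        bootstrap.resolver(customResolverGroup); // use a custom resolver group
        bootstrap.resolver(null);                // reset back to the default resolver group
    }

    private ResolverResetExample() { }
}
```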
Motivation:
Sometimes it may be beneficial for a user to specify a custom algorithm for choosing the next EventExecutor/EventLoop.
Modifications:
Allow specifying a custom EventExecutorChooserFactory that makes the algorithm customizable.
Result:
More flexible API.
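As an illustration only, a custom chooser could look roughly like the sketch below; it assumes the factory exposes a newChooser(EventExecutor[]) method and a nested EventExecutorChooser with next(), which this message does not spell out:

    import io.netty.util.concurrent.EventExecutor;
    import io.netty.util.concurrent.EventExecutorChooserFactory;
    import java.util.concurrent.ThreadLocalRandom;

    // Chooses a random executor instead of the default round-robin strategy.
    public final class RandomChooserFactory implements EventExecutorChooserFactory {
        @Override
        public EventExecutorChooser newChooser(final EventExecutor[] executors) {
            return new EventExecutorChooser() {
                @Override
                public EventExecutor next() {
                    return executors[ThreadLocalRandom.current().nextInt(executors.length)];
                }
            };
        }
    }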
Motivation:
We need to ensure we do not hold a lock while executing callHandlerRemoved(...), as this may lead to a deadlock if handlerRemoved(...) calls another method in DefaultChannelPipeline from another thread that also needs to obtain the lock and wait for the result.
Modifications:
Release the lock before calling handlerRemoved0(...).
Result:
No more deadlocks possible.
Motivation:
We did not correctly catch errors during resolution in Bootstrap and so may not have notified the future correctly.
Modifications:
Move code into try / catch block and try to fail the promise.
Result:
The promise is always notified.
Motivation:
The user may specify a different allocator than the default. In this case we need to ensure it is shared when creating the EmbeddedChannel inside of a ChannelHandler.
Modifications:
Use the config of the "original" Channel in the EmbeddedChannel and so share the same allocator etc.
Result:
Same type of buffers are used.
Motivation:
There is a small race while adding handlers to the pipeline because callHandlerAddedForAllHandlers() may not be run when the user calls add* but the Channel is already registered.
Modifications:
Ensure we always delay handlerAdded(..) / handlerRemoved(...) until callHandlerAddedForAllHandlers() was called.
Result:
No more race on pipeline modifications possible.
Motivation:
We can remove the volatile keyword from the cached Runnables as at worst these will just be re-created.
Modifications:
Remove volatile.
Result:
Less overhead.
Motivation:
SingleThreadEventExecutor.pendingTasks() will call taskQueue.size() to get the number of pending tasks in the queue. This is not safe when using MpscLinkedQueue as size() is only allowed to be called by a single consumer.
Modifications:
Ensure size() is only called from the EventLoop.
Result:
No more livelock possible when calling pendingTasks(), no matter from which thread it is called.
Motivation:
While doing 8fe3c83e4c I made a change which disallowed using null as the name for handlers in the pipeline (previously a new name was generated in that case).
Modifications:
Revert to the old behaviour and add a test case.
Result:
Allow a null name again.
Motivation:
At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user as this is the most popular usage of the composite buffer.
Modifications:
Add new methods that automatically increase the writerIndex when buffers are added.
Result:
Easier usage of CompositeByteBuf.
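A minimal sketch of the intended usage; the exact method shape (a boolean flag on addComponent(...)) is an assumption here:

    CompositeByteBuf composite = Unpooled.compositeBuffer();
    ByteBuf part = Unpooled.copiedBuffer("hello", CharsetUtil.US_ASCII);
    // Passing true also advances the writerIndex, so the added bytes become readable
    // without a manual composite.writerIndex(...) call.
    composite.addComponent(true, part);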
Motivation:
EventLoopGroup.register doesn't need the Channel parameter when a ChannelPromise is provided as we can get the Channel from the ChannelPromise. Resolves #2422.
Modifications:
- Add EventLoopGroup.register(ChannelPromise)
- Deprecate EventLoopGroup.register(Channel, ChannelPromise)
Result:
EventLoopGroup.register is more convenient as people only need to set one parameter.
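A minimal sketch of the new overload; the channel is carried by the promise:

    EventLoopGroup group = new NioEventLoopGroup();
    Channel channel = new NioSocketChannel();
    // No separate Channel argument needed: register(...) takes the channel from the promise.
    ChannelFuture future = group.register(channel.newPromise());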
Motivation:
DefaultChannelPipeline was tightly coupled to AbstractChannel which is not really needed.
Modifications:
Move logic of calling handlerAdded(...) for handlers that were added before the Channel was registered to DefaultChannelPipeline by making it part of the head context.
Result:
Less coupling, making it possible to use DefaultChannelPipeline with other Channel implementations that do not extend AbstractChannel.
Motivation:
We do a "blind" cast to AbstractChannel in AbstractChannelHandlerContext which we should better no do. It would be better to decouble AbstractChannelHandlerContext from AbstractChannel.
Modifications:
Decouple AbstractChannelHandlerContext from AbstractChannel by moving the logic to DefaultChannelPipeline.
Result:
Less coupling and less casting.
Motivation:
If the user uses addLast(...) on the ChannelPipeline of an EmbeddedChannel after its constructor has run, it breaks the EmbeddedChannel as it will not be able to collect inbound messages and exceptions.
Modifications:
Ensure addLast(...) works as expected by moving the logic for handling messages and exceptions to protected methods of DefaultChannelPipeline and using a custom implementation for EmbeddedChannel.
Result:
addLast(...) works as expected when using EmbeddedChannel.
Motivation:
Recycler.recycle(...) should not be used anymore and be replaced by Handle.recycle().
Modifications:
Mark it as deprecated and update usage.
Result:
Correctly document the deprecated API.
Motivation:
The Bootstrap class (applies also to AbstractBootstrap and ServerBootstrap) has a few package private getter methods and some things such as #attr() and #options() aren't exposed at all.
Modifications:
Expose "getters" for configured things in a safe-manner via the config() method.
Result:
Easier for the user to check what is configured for a Bootstrap/ServerBootstrap.
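A minimal sketch of reading back configured values via config(); the exact getter names on the config object (group(), childGroup(), options()) are assumptions here:

    ServerBootstrap sb = new ServerBootstrap();
    sb.group(bossGroup, workerGroup)                        // bossGroup/workerGroup: hypothetical EventLoopGroups
      .channel(NioServerSocketChannel.class);
    EventLoopGroup parent = sb.config().group();            // instead of a package-private getter
    EventLoopGroup children = sb.config().childGroup();
    Map<ChannelOption<?>, Object> options = sb.config().options();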
Motivation:
The DuplexChannel is currently incomplete and only supports shutting down the output side of a channel. This interface should also support shutting down the input side of the channel.
Modifications:
- Add shutdownInput and shutdown methods to the DuplexChannel interface
- Remove state in NIO and OIO for tracking input being shutdown independent of the underlying transport's socket type. Tracking the state independently may lead to inconsistent state.
Result:
DuplexChannel supports shutting down the input side of the channel
Fixes https://github.com/netty/netty/issues/5175
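A minimal sketch of the extended interface in action on a socket channel (which implements DuplexChannel):

    void halfClose(SocketChannel ch) {
        ch.shutdownOutput();   // already supported: stop writing, keep reading
        ch.shutdownInput();    // new: stop reading, the peer may still receive our data
        // ch.shutdown() shuts down both directions at once.
    }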
Motivation:
When a user has multiple EventLoops in an EventLoopGroup and calls pipeline.add* / remove* / replace from an EventLoop that belongs to another Channel it is possible to deadlock if the other EventLoop does the same.
Modification:
- Only ensure the actual modification takes place in a synchronized block and do not wait until the handlerAdded(...) / handlerRemoved(...) method is called. This is ok as we submit the task to the executor while still holding the lock and so ensure the correct order of pipeline modifications.
- Ensure if an AbstractChannelHandlerContext is put in the linked-list structure but the handlerAdded(...) method was not called we skip it until handlerAdded(...) was called. This is needed to ensure handlerAdded(...) is always called first.
Result:
It is not possible to deadlock when modifying the DefaultChannelPipeline.
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods which create a retained derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it so that its callers don't need to add dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
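A minimal sketch contrasting the old and the new pattern:

    ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16);
    // Before: the derived wrapper object itself could not be recycled.
    ByteBuf oldStyle = buf.slice(0, 8).retain();
    // After: one call, and the wrapper can be recycled when derived from a pooled buffer.
    ByteBuf newStyle = buf.retainedSlice(0, 8);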
Motivation:
We tried to provide the ability for the user to change the semantics of the threading model by delegating the invocation of the ChannelHandler to the ChannelHandlerInvoker. Unfortunately this did not really work out well and resulted in just more complexity and splitting of code that belongs together. We should remove the ChannelHandlerInvoker again and just do the same as in 4.0.
Modifications:
Remove ChannelHandlerInvoker again and replace its usage in Http2MultiplexCodec
Result:
Easier code and less bad abstractions.
Motivation:
If a task was submitted while the wakenUp value was true, the task didn't get a chance to call Selector#wakeup.
So we need to check the task queue again before executing the select operation. If we don't, the task might be left pending until the select operation times out.
It might even be left pending until the idle timeout if an IdleStateHandler exists in the pipeline.
Modifications:
Execute Selector#select in a non-blocking manner if there is a task that was submitted while the wakenUp value was true.
Result:
Tasks in NioEventLoop will no longer be left pending.
Motivation:
Users sometimes want to use Channel.voidPromise() when writing to a Channel to reduce GC pressure. This should also be possible when writing via a ChannelGroup.
Modifications:
Add new write(...) and writeAndFlush(...) overloads which allow signalling that a void promise should be used when writing to the Channels.
Result:
Users can write with a void promise when using ChannelGroup.
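A minimal sketch of the intended usage; the exact overload shape (a trailing boolean selecting the void promise) is an assumption here:

    ChannelGroup group = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
    // true = write with each channel's void promise, so no per-channel future is allocated.
    group.writeAndFlush(Unpooled.copiedBuffer("ping", CharsetUtil.US_ASCII),
            ChannelMatchers.all(), true);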
Motivation:
ChannelHandlerContext, ChannelPipeline and Channel share various method signatures. We should provide an interface to share.
Modifications:
Add ChannelInboundInvoker and ChannelOutboundInvoker and extend these.
Result:
Better API abstraction.
Motivation:
To be more consistent we should use ConnectException when we fail the connect attempt because no LocalServerChannel exists with the given address.
Modifications:
Use correct exception.
Result:
More consistent handling of connection refused between different transports.
Motivation:
Bootstrap.connect(...) tries to obtain the EventLoop of a Channel before it may be registered. This will cause an IllegalStateException. We need to ensure we handle the case of late registration and do not throw in this case.
Modifications:
Ensure we only try to access the EventLoop after the Channel is registered and handle the case of late registration.
Result:
Bootstrap.connect(...) no longer fails on late registration.
Motivation:
EventExecutor.children uses generics in such a way that an entire collection must be cast to a specific type of object. This interface is not very flexible and is impossible to implement if the EventExecutor type must be wrapped. The current usage of this method also does not have any clear need within Netty. The Iterator interface allows an EventExecutor to be wrapped and forces the caller to make assumptions about types instead of building the assumptions into the interface.
Modifications:
- Remove EventExecutor.children and undeprecate the iterator() interface
Result:
EventExecutor interface has one less method and is easier to wrap.
Motivation:
The ChannelHandlerContext.attr(...) and ChannelHandlerContext.hasAttr(...) delegated to Channel for the attributes, which is a semantic change compared to the 4.0 releases. We should not change the semantics, to not break users' applications when upgrading to 4.1.0.
Modifications:
- Revert semantic change
- Mark ChannelHandlerContext.attr(...) and hasAttr(...) as @deprecated so we can remove these later
Result:
Semantic of attribute operations on ChannelHandlerContext is the same in 4.1 as in 4.0 again.
Motivation:
We should not try to call bind if registration failed.
Modifications:
Only call doBind0(...) when the registration did not fail.
Result:
Do not try to bind if the registration failed.
Motivation:
We use channel.unsafe().invoker().executor() in DefaultChannelPipeline.executorSafe(...) which is an unnecessary indirection to channel.eventLoop().
Modifications:
Remove indirection by using channel.eventLoop().
Result:
Cleaner code.
Motivation:
The LateListener logic is prone to infinite loops and relies on being processed in the EventExecutor's thread for synchronization, but this EventExecutor may not be constant. An infinite loop can occur if the EventExecutor's execute method does not introduce a context switch in LateListener.run. The EventExecutor can be changed by classes which inherit from DefaultPromise. For example the DefaultChannelPromise will return whatever EventLoop the channel is registered to, but this EventLoop can change (re-registration).
Modifications:
- Remove the LateListener concept and instead use a single Object to maintain the listeners while still preserving notification order
- Make the result member variable an atomic variable so it can be accessed outside the synchronized(this) blocks
- Cleanup/simplify existing state management code
Result:
Fixes https://github.com/netty/netty/issues/5185
Motivation:
NioDatagramChannelConfig currently uses NetworkChannel in its static { } block and so fails to initialize on Android, which does not have this class.
Modifications:
Use reflection to load the NetworkChannel.class
Result:
Be able to use NIO datagram channels on Android as well.
Motivation:
We may produce an NPE due to a race that can happen while checking if a resolution was done and failed.
Modifications:
First correctly check if the resolution is done before trying to detect if it failed, and so remove the race that can produce an NPE.
Result:
No more race possible while resolving the address during connect.
Motivation:
Reduce nag warnings when compiling, make it easier for IDEs to display what's deprecated.
Modifications:
Added @Deprecated in a few places
Result:
No more warnings.
Motivation:
If a channel is deregistered from a NioEventLoop the associated SelectionKey is cancelled. If the NioEventLoop has yet to process a pending event as a result of that SelectionKey then the NioEventLoop will see the SelectionKey is invalid and close the channel. The NioEventLoop should not close a channel if it is not registered with that NioEventLoop.
Modifications:
- NioEventLoop.processSelectedKeys should check that the channel is still registered to itself before closing the channel
Result:
NioEventLoop doesn't close a channel that is no longer registered to it when the SelectionKey is invalid
Fixes https://github.com/netty/netty/issues/5125
Motivation:
Revert d0943dcd30. Delaying the notification of a writability change may lead to the notification being missed. This is an ABA type of concurrency problem.
Modifications:
- Revert d0943dcd30.
Result:
channelWritabilityChange will be called on every change, and will not be suppressed due to ABA scenario.
Motivation:
Commit 0bc93dd introduced a potential assertion failure, if the deprecated method would be used.
Modifications:
Fix the potential assertion error.
Result:
Regression removed
Motivation:
There is one extra } for WriteBufferWaterMark's javadoc:
{@linkplain #high} high water mark}
The generated javadoc will show the content: "the high high water mark}"
Modifications:
remove the }
Result:
The generated javadoc will show the content: "the high water mark" instead of "the high high water mark}"
Motivation:
Prior to 5b48fc284e setting readPending to false when autoReadClear was called was enough because when/if the EventLoop woke up with a read event we would first check if readPending was true and if it is not, we would return early. Now that readPending will only be set in the EventLoop it is not necessary to check readPending at the start of the read methods, and we should instead remove the OP_READ from the SelectionKey when we also set readPending to false.
Modifications:
- autoReadCleared should call AbstractNioUnsafe.removeReadOp
Result:
NIO is now consistent with EPOLL and removes the READ operation on the selector when autoRead is cleared.
Motivation:
OIO/NIO use a volatile variable to track if a read is pending. EPOLL does not use a volatile and executes a Runnable on the event loop thread to set readPending to false. These mechanisms should be consistent, and not using a volatile variable is preferable because the variable is written frequently in the event loop thread.
OIO also does not set readPending to false before each fireChannelRead operation, which may result in reading more data than the user desires.
Modifications:
- OIO/NIO should not use a volatile variable for readPending
- OIO should set readPending to false before each fireChannelRead
Result:
OIO/NIO/EPOLL are more consistent w.r.t. readPending and volatile variable operations are reduced
Fixes https://github.com/netty/netty/issues/5069
Motivation:
We always triggered fireChannelWritabilityChanged() directly when the update of the pending bytes in the ChannelOutboundBuffer was made from within the EventLoop. This is problematic as it can cause re-entrance issues if the user has a custom ChannelOutboundHandler that does multiple writes from within the write(...) method and also has a handler that intercepts the channelWritabilityChanged event and triggers another write when the Channel is writable. This can also easily happen if the user just uses a MessageToMessageEncoder subclass and triggers a write from channelWritabilityChanged().
Besides this we also triggered fireChannelWritabilityChanged() too often when a user did a write from outside the EventLoop. In this case we increased the pending bytes of the outbound buffer before scheduling the actual write and decreased them again before the write takes place. Both of these may trigger a fireChannelWritabilityChanged() event which then may be re-triggered once the actual write ends in the ChannelOutboundBuffer.
The third gotcha was that a user may get multiple events even if the writability of the channel did not change.
Modification:
- Always invoke the fireChannelWritabilityChanged() later on the EventLoop.
- Only trigger the fireChannelWritabilityChanged() if the channel is still active and if the writability of the channel changed. No need to cause events that were already triggered without a real writability change.
- when write(...) is called from outside the EventLoop we only increase the pending bytes in the outbound buffer (so that Channel.isWritable() is updated directly) but not cause a fireChannelWritabilityChanged(). The fireChannelWritabilityChanged() is then triggered once the task is picked up by the EventLoop as usual.
Result:
No more re-entrance possible because of writes from within channelWritabilityChanged(...) method and no events without a real writability change.
Motivation:
441aa4c575 introduced a bug in transport-native-epoll where readPending is set to false before a read is attempted, but this should happen before fireChannelRead is called. The NIO transport also only sets the readPending variable to false on the first read in the event loop. This means that if the user only calls read() on the first channelRead(..) the select loop will still listen for read events even if the user does not call read() in subsequent channelRead() or channelReadComplete() in the same event loop run. If the user only needs 2 channelRead() calls then by default they may get 14 more channelRead() calls in the current event loop, and then 16 more when the event loop is woken up for a read event. This will also read data off the TCP stack and allow the peer to queue more data in the local RECV buffers.
Modifications:
- readPending should be set to false before each call to channelRead()
- make NIO readPending set to false consistent with EPOLL
Result:
NIO and EPOLL transport set readPending to false at correct times which don't read more data than intended by the user.
Fixes https://github.com/netty/netty/issues/5082
Motivation:
When a promise that was already added to the ChannelOutboundBuffer has been notified and we try to notify it later on, we only see a warning that it was notified before. This is often not very useful as we have no idea where it was notified at all. We can do better in case it was failed before (which is most often the case) and also log the cause that was used for it.
Modifications:
Add the cause that was used to notify the promise when we fail to notify it as part of the ChannelOutboundBuffer.
Result:
Easier to debug user errors.
Motivation:
There is a spelling error in FileRegion.transfered() as it should be transferred().
Modifications:
Deprecate old method and add a new one.
Result:
Fix typo and can remove the old method later.
Motivation:
DefaultChannelHandlerInvoker currently blindly casts to AbstractChannelHandlerContext without checking if the ChannelHandlerContext is really a sub-type of it. It should check first and, if not, just use the slow-path implementation.
Modifications:
Do the instanceof check first and, if it fails, create a new Runnable instance instead of using the cached one.
Result:
DefaultChannelHandlerInvoker works with any ChannelHandlerContext implementations.
Motivation:
Setting the WRITE_BUFFER_LOW_WATER_MARK before WRITE_BUFFER_HIGH_WATER_MARK results in an internal Exception (appears only in the logs) if the value is larger than the default high water mark value. The WRITE_BUFFER_HIGH_WATER_MARK call appears to have no effect in this context.
Setting the values in the reverse order works.
Modifications:
- deprecated ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK and
ChannelOption.WRITE_BUFFER_LOW_WATER_MARK.
- add one new option called ChannelOption.WRITE_BUFFER_WATER_MARK.
Result:
The limits on the high/low water mark values caused by the default values are removed.
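A minimal sketch of setting both values atomically with the new option (the WriteBufferWaterMark constructor argument order, low then high, is assumed here):

    ServerBootstrap b = new ServerBootstrap();
    // Both values are applied together, so there is no ordering pitfall anymore.
    b.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
            new WriteBufferWaterMark(8 * 1024, 32 * 1024));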
Motivation:
If a handler is added to the pipeline within ChannelInitializer::initChannel via
addFirst(...) then it will not receive the channelRegistered event. The same
handler added via addLast(...) will receive the event. This different behavior
is unlikely to be expected by users and can cause confusion.
Modifications:
Let ChannelInitializer::channelRegistered propagate the event by passing it to
the pipeline instead of firing it on the ChannelHandlerContext.
Result:
The channelRegistered event is propagated to handlers regardless of the method
used to add it to the pipeline (addFirst/addLast).
Motivation:
NIO now supports a pluggable select strategy, but EPOLL currently doesn't support this. We should strive for feature parity for EPOLL.
Modifications:
- Add SelectStrategy to EPOLL transport.
Result:
EPOLL transport supports SelectStrategy.
Motivation:
Under high throughput/low latency workloads, selector wakeups are
degrading performance when the incoming operations are triggered
from outside of the event loop. This is a common scenario for
"client" applications where the originating input is coming from
application threads rather from the socket attached inside the
event loops.
As a result, it can be desirable to defer the blocking select
so that incoming tasks (write/flush) do not need to wakeup
the selector.
Modifications:
This changeset adds the notion of a generic SelectStrategy which,
based on its contract, allows the implementation to optionally
defer the blocking select based on some custom criteria.
The default implementation resembles the original behaviour, that
is if tasks are in the queue `selectNow()` and move on, and if no
tasks need to be processed go into the blocking select and wait
for wakeup.
The strategy can be customized per `NioEventLoopGroup` in the
constructor.
Result:
High performance client applications are now given the chance to
customize for how long the actual selector blocking should be
deferred by employing a custom select strategy.
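A rough sketch of such a strategy; it assumes the SelectStrategy contract of returning either one of its constants or the result of the selectNow() supplier, which this message does not spell out, and it simply mirrors the default behaviour described above:

    import io.netty.channel.SelectStrategy;
    import io.netty.channel.SelectStrategyFactory;
    import io.netty.util.IntSupplier;

    // With pending tasks: selectNow() and continue; otherwise fall back to the blocking select.
    public final class NonBlockingWhenBusyStrategyFactory implements SelectStrategyFactory {
        @Override
        public SelectStrategy newSelectStrategy() {
            return new SelectStrategy() {
                @Override
                public int calculateStrategy(IntSupplier selectNowSupplier, boolean hasTasks) throws Exception {
                    return hasTasks ? selectNowSupplier.get() : SelectStrategy.SELECT;
                }
            };
        }
    }

An instance of such a factory would then be handed to the NioEventLoopGroup constructor mentioned above.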
Motivation:
There is no need to make DefaultChannelId package private as it may be useful for the user. For example EmbeddedChannel allows to inject a ChannelId when it is constructed. For this case the user can just use DefaultChannelId.
Modifications:
Change visibility of DefaultChannelId to public.
Result:
It's possible to create a new instance of DefaultChannelId by the user.
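A minimal sketch of the use case mentioned above:

    // DefaultChannelId is now public, so it can be injected into an EmbeddedChannel.
    EmbeddedChannel channel = new EmbeddedChannel(DefaultChannelId.newInstance(),
            new ChannelInboundHandlerAdapter());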
Motivation:
We need to ensure we run all pending tasks before doing any flush in writeOutbound(...). Also we should remove the assert on the future and just add a listener to it so it is processed later if needed. This is necessary as a user may schedule a write for later execution.
Modifications:
- Remove assert of future in writeOutbound(...)
- Correctly run pending tasks before doing the flush and also before doing the close of the channel.
- Add unit tests to proof the defect is fixed.
Result:
Correctly handle the situation of delayed writes.
Motivation:
cf171ff525 introduced a change in behavior when dealing with closing channel in the read loop. This changed behavior may use stale state to determine if a channel should be shutdown and may be incorrect.
Modifications:
- Revert the usage of potentially stale state
Result:
Closing a channel in the read loop is based upon current state instead of potentially stale state.
Motivation:
Often the user uses EmbeddedChannel within unit tests where the only "important" thing is to know if any pending messages were in the buffer and then release these.
We should provide methods for this so the user does not need to manually loop through them and release them.
Modifications:
Add methods to easily handle releasing of messages.
Result:
Less boiler-plate code for the user to write.
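A minimal test-style sketch; the helper names releaseInbound()/releaseOutbound() are assumptions here, not spelled out by this message:

    EmbeddedChannel channel = new EmbeddedChannel();
    channel.writeInbound(Unpooled.wrappedBuffer(new byte[]{1, 2, 3}));
    // Release whatever is still sitting in the inbound/outbound buffers instead of looping manually.
    channel.releaseInbound();
    channel.releaseOutbound();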
Motivation:
If an error occurs during a write operation then DefaultHttp2ConnectionEncoder.FlowControlledData will clear the CoalescingBufferQueue which will reset the queue's readable bytes to 0. To recover from an error the DefaultHttp2RemoteFlowController will attempt to return bytes to the flow control window, but since the frame has reset its own size this will lead to invalid flow control accounting.
Modifications:
- DefaultHttp2ConnectionEncoder.FlowControlledData should not reset its size if an error occurs
Result:
No more flow controller errors due to DefaultHttp2ConnectionEncoder.FlowControlledData setting its size to 0 if an error occurs.
Motivation:
PromiseAggregator's API allows for the aggregate promise to complete before the user is done adding promises. In order to support this use case the API structure would need to change in a breaking manner.
Modifications:
- Deprecate PromiseAggregator and subclasses
- Introduce PromiseCombiner which corrects these issues
Result:
PromiseCombiner corrects the deficiencies in PromiseAggregator.
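A minimal sketch of the new aggregation flow; the add(...)/finish(...) method names are assumptions, not spelled out by this message:

    ChannelPromise p1 = channel.newPromise();    // channel: a hypothetical, already registered Channel
    ChannelPromise p2 = channel.newPromise();
    channel.write(msg1, p1);                     // msg1/msg2: hypothetical messages
    channel.write(msg2, p2);

    PromiseCombiner combiner = new PromiseCombiner();
    combiner.add(p1);
    combiner.add(p2);
    ChannelPromise aggregate = channel.newPromise();
    combiner.finish(aggregate);                  // aggregate completes only after both p1 and p2 complete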
Motivation:
PendingWriteQueue should guard against re-entrant writes once
removeAndFailAll() is run.
Modifications:
removeAndFailAll() should repeat until the queue is finally empty.
Result:
assertEmpty() will always hold.
Motivation:
We should guard users from using Unsafe methods from outside the EventLoop if not designed to do so.
Modifications:
Add asserts
Result:
Easier for users to detect misuse.
Motivation:
e2f5012 added unit tests which did not verify the buffer was released as it was intended to.
Modification:
- Unit tests must verify release is called
Result:
Unit tests enforce that ByteBufs are released.
Motivation:
DefaultChannelHandlerInvoker.invokeWrite calls a utility method validatePromise which may throw if the arguments are not valid. If this method throws then the message will not be released.
Modifications:
- If an exception is thrown the message should be released
Result:
No more leak in DefaultChannelHandlerInvoker.invokeWrite
Motivation:
See #3746.
Modifications:
Fork SpscLinkedQueue and SpscLinkedAtomicQueue from JCTools based on 7846450e28
Result:
Add SpscLinkedQueue and SpscLinkedAtomicQueue and apply it in LocalChannel.
Motivation:
We should ensure that handlerAdded(...) and handlerRemoved(...) are always called from the EventExecutor that also invokes the other methods of the ChannelHandler. Also we need to ensure we always call handlerAdded(...) before any other method can be called, to ensure correct ordering.
Modifications:
- Ensure that the right thread is used to call the methods
- Ensure correct ordering
- Add tests
Result:
Respect the thread-model for handlerAdded(...) and handlerRemoved(...) and preserve correct ordering in all cases.
Motivation:
The implementation of obtaining the best possible MAC address is very good. There are many sub-par implementations proposed on Stack Overflow.
While not strictly a Netty concern, it would be nice to offer this util to Netty users as well.
Modifications:
Extract the DefaultChannelId#defaultMachineId code obtaining the "best" MAC into a new helper called MacAddress, and keep the random-bytes fallback in DefaultChannelId.
Result:
New helper available.
Motivation:
When a channel was registered before and is re-registered we need to respect ChannelConfig.isAutoRead() and so start reading once the registration task completes. This was done "by luck" before 15162202fb.
Modifications:
Explicitly start reading once a Channel was re-registered if isAutoRead() is true.
Result:
Correctly receive data after re-registration completes.
Motivation:
Due to a regression introduced by e969b6917c we failed to pass the original ChannelPromise to the next ChannelOutboundHandler and so
may never notify the original ChannelPromise. This is related to #4805.
Modifications:
- Correctly pass the ChannelPromise
- Add unit test.
Result:
Correctly pass the ChannelPromise on deregister(...)
Motivation:
Fix issue netty#2944.
Modifications:
Use - instead of => and ! instead of :> because the connection is bidirectional. Furthermore, the toString() method does not know the direction, and there is no need to know it when only logging channel information.
Add L: before the local address and R: before the remote address.
Result:
After the fix, the log won't confuse the user.
Motivation:
The AbstractChannel(Channel parent) constructor was previously hard-coded to always
call DefaultChannelId.newInstance(), and this made it difficult to use a custom
ChannelId implementation with some commonly used Channel implementations.
Modifications:
Introduced newId() method in AbstractChannel, which by default returns
DefaultChannelId.newInstance() but can be overridden by subclasses. Added
ensureDefaultChannelId() test to AbstractChannelTest, to ensure the prior
behavior of calling DefaultChannelId.newInstance() still holds with the
AbstractChannel(Channel parent) constructor.
Result:
AbstractChannel now has the protected newId() method, but there is no functional
difference.
Motivation:
A few implementations of OioServerChannel have a default max messages per read set to 16. We should set the default to 1 to prevent blocking on a read before setting a socket that has just been accepted.
Modifications:
- OioSctpServerChannel and OioServerSocketChannel metadata changed to use the default (1) max messages per read
Result:
Oio based servers will complete accepting a socket before potentially blocking waiting to accept other sockets.
Motivation:
If a user adds a ChannelHandler from outside the EventLoop it is possible to get into the situation that handlerAdded(...) is scheduled on the EventLoop and so called after other methods of the ChannelHandler, as the EventLoop may already be executing at this point in time.
Modification:
- Ensure we always check if the handlerAdded(...) method was called already and if not add the currently needed call to the EventLoop so it will be picked up after handlerAdded(...) was called. This works as if the handler is added to the ChannelPipeline from outside the EventLoop the actual handlerAdded(...) operation is scheduled on the EventLoop.
- Some cleanup in the DefaultChannelPipeline
Result:
Correct ordering of method executions of the ChannelHandler.
Motivation:
ChannelOutboundHandlerAdapter's javadoc has some minor issues.
Modifications:
Fix the minor javadoc issues; resolves #4752.
Result:
ChannelOutboundHandlerAdapter's javadoc issues are fixed.
Motivation:
Being able to access the invoker() is useful when adding additional
handlers that should be running in the same thread. Since an application
may be using a threading model unsupported by the default invoker, they
can specify their own. Because of that, in a handler that auto-adds
other handlers:
// This is a good pattern
ctx.pipeline().addBefore(ctx.invoker(), ctx.name(), null, newHandler);
// This will generally work, but prevents using custom invoker.
ctx.pipeline().addBefore(ctx.executor(), ctx.name(), null, newHandler);
That's why I believe in commit 110745b0, for the now-defunct 5.0 branch,
when ChannelHandlerAppender was added the invoker() method was also
necessary.
There is a side-benefit to exposing the invoker: in certain advanced
use-cases using the invoker for a particular handler is useful. Using
the invoker you are able to invoke a _particular_ handler, from possibly
a different thread yet still using standard exception processing.
ChannelHandlerContext does part of that, but is unwieldy when trying to
invoke a particular handler because it invokes the prev or next handler,
not the one the context is for. A workaround is to use the next or prev
context (respectively), but this breaks when the pipeline changes.
This came up during writing the Http2MultiplexCodec which uses a
separate child channel for each http/2 stream and wants to send messages
from the child channel directly to the Http2MultiplexCodec handler that
created it.
Modifications:
Add the invoker() method to ChannelHandlerContext. It was already being
implemented by AbstractChannelHandlerContext. The two other
implementations of ChannelHandlerContext needed minor tweaks.
Result:
Access to the invoker used for a particular handler, for either reusing
for other handlers or for advanced use-cases. Fixes #4738
Motivation:
ChannelInboundHandler and ChannelOutboundHandler both can implement exceptionCaught(...) method and so we need to dispatch to both of them.
Modifications:
- Correctly first dispatch exceptionCaught to the ChannelInboundHandler but also make sure the next handler it will be dispatched to will be the ChannelOutboundHandler
- Add removeInboundHandler() and removeOutboundHandler() which allows to remove one of the combined handlers
- Let *Codec extend it and not ChannelHandlerAppender
- Remove ChannelHandlerAppender
Result:
Correctly handle events and also have same behavior as in 4.0
Motivation:
Our contract in Channels is that the promise should always be notified before the actual callbacks of the ChannelInboundHandler are called. This was not done in the LocalChannel and so the behavior was different to other Channel implementations.
Modifications:
- First complete the ChannelPromise then call fireChannelActive()
- Guard against NPE when doClose() was called before the task was executed.
Result:
Consistent behavior between LocalChannel and other Channel implementations.
Motivation:
In AbstractChannelTest we did not correctly mock some methods, which could lead to test errors. That said, it only showed up here when running from the IDE and not from the command line.
Modifications:
Mock methods that are needed for the test
Result:
Test pass in the IDE as well.
Motivation:
At the moment EmbeddedChannel always handles close() and disconnect() the same way, which also means that ChannelOutboundHandler.disconnect(...) will never be called. We should allow specifying whether these are handled differently or not, to make the use of EmbeddedChannel more flexible.
Modifications:
Add 2 other constructors which allow specifying whether disconnect / close are handled the same way or differently.
Result:
More flexible usage of EmbeddedChannel possible.
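A minimal sketch, assuming the new constructors take a boolean hasDisconnect flag (the exact parameter name is an assumption):

    // true: disconnect() is dispatched to ChannelOutboundHandler.disconnect(...);
    // false (old behaviour): disconnect() is handled like close().
    EmbeddedChannel channel = new EmbeddedChannel(true, new ChannelOutboundHandlerAdapter() {
        @Override
        public void disconnect(ChannelHandlerContext ctx, ChannelPromise promise) {
            promise.setSuccess();   // observed only because hasDisconnect is true
        }
    });
    channel.disconnect();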
Motivation:
Boxing/unboxing can be avoided.
Modifications:
Use parseInt/parseLong to avoid unnecessary boxing/unboxing.
Result:
Remove unnecessary boxing/unboxing.
Motivation:
The prefix fix of #2363 did not correctly handle the case when selectAgain is true and so failed to null out entries.
Modifications:
Move the i++ from end of loop to beginning of loop
Result:
Entries in the array will be nulled out, allowing them to be GC'ed once the Channel is closed.
Motivation:
We often used synchronized(this) blocks that spanned the whole method body, which can be simplified by just marking the whole method as synchronized.
Modifications:
Replace synchronized(this) with synchronized on the method
Result:
Cleaner code
Motivation:
Warnings in IDE, unclean code, negligible performance impact.
Modification:
Deletion of unused imports
Result:
No more warnings in IDE, cleaner code, negligible performance improvement.
Motivation:
Javadoc reports errors about invalid docs.
Modifications:
Fix some errors reported by javadoc.
Result:
A lot of javadoc errors are fixed by this patch.
Motivation:
There are some wrong links and tags in javadoc.
Modifications:
Fix the wrong links and tags in javadoc.
Result:
These links will work correctly in javadoc.
Motivation:
As a user may call deregister() from within any method while doing processing in the ChannelPipeline, we need to ensure we do the actual deregister operation later. This is needed as, for example, we may be in the ByteToMessageDecoder.callDecode(...) method and so still try to do processing in the old EventLoop while the user already registered the Channel to a new EventLoop. Without delaying the deregister operation, this could lead to a handler being invoked by different EventLoops and therefore different threads.
Modifications:
Ensure the actual deregister will be done later on and not directly when invoked.
Result:
Calling deregister() within ByteToMessageDecoder.decode(..) is safe.
Motivation:
Estimation algorithm currently used for WriteTasks is complicated and
wrong. Additionally, some code relies on outbound buffer size
incremented only on actual writes to the outbound buffer.
Modifications:
- Throw away the old estimator and replace with a simple algorithm that
uses the client-provided estimator along with a statically configured
WriteTask overhead (io.netty.transport.writeTaskSizeOverhead system
property with the default value of 48 bytes)
- Add a io.netty.transport.estimateSizeOnSubmit boolean system property
allowing the clients to disable the message estimation outside the
event loop
Result:
Task estimation is user controllable and produces better results by
default
Motivation:
As discussed in #4529, NameResolver design shouldn't be resolving SocketAddresses (or String name + port) and return InetSocketAddresses. It should resolve String names and return InetAddresses.
This SocketAddress to InetSocketAddresses resolution is actually a different concern, used by Bootstrap.
Modifications:
Extract SocketAddress to InetSocketAddresses resolution concern to a new class hierarchy named AddressResolver.
These AddressResolvers delegate to NameResolvers.
Result:
Better separation of concerns.
Note that new AddressResolvers generate a bit more allocations because of the intermediate Promise and List<InetAddress>.
Motivation:
We need to remove all registered events for a Channel from the EventLoop before doing the actual close, to ensure we do not produce a CPU spin when the actual close operation is delayed or executed outside of the EventLoop.
Modifications:
Deregister for events for NIO and EPOLL socket implementations when SO_LINGER is used.
Result:
No more CPU spin.
Motivation:
Fix a race condition when closing a NioSocketChannel or EpollSocketChannel while trying to detect if a close executor should be used and the underlying socket was already closed. This could lead to an exception that then leaves the channel in an invalid state and so could lead to side effects like heavy CPU usage.
Modifications:
Catch a possible socket exception while trying to get the SO_LINGER option from the underlying socket.
Result:
No more race condition with bad side effects possible when closing the channel.
Motivation:
ChannelMetadata has a field minMaxMessagesPerRead which can be confusing. There are also some cases where static instances are used and the default value for channel type is not being applied.
Modifications:
- use a default value which is set unconditionally to simplify
- make sure static instances of MaxMessagesRecvByteBufAllocator are not used if the intention is that the default maxMessagesPerRead should be derived from the channel type.
Result:
Less confusing interfaces in ChannelMetadata and ChannelConfig. Default maxMessagesPerRead is correctly applied.
Motivation:
The CoalescingBufferQueueTest is somewhat relaxed with its releasing of test buffers, using safeRelease to generically deal with tests that may or may not release the buffers. SafeRelease generates logs, however, when the release fails.
Modifications:
Tightened up the individual test methods to verify that the buffers are released properly.
Result:
Fixes #4497
Motivation:
The javadocs for ChannelOption.AUTO_CLOSE say the default is false, but the default is currently true.
Modifications:
- Make javadocs consistent with code
Result:
Less confusing docs.
Motivation:
exceptionCaught(...) will only handle inbound exceptions, which means it makes little sense to also have it on ChannelOutboundHandler. Because of this we should move it to ChannelInboundHandler.
Modifications:
Add @deprecated annotation to ChannelHandler.exceptionCaught(...).
Result:
Prepare to clean up the API in a later release.
Motivation:
We should use OneTimeTask where possible to reduce object creation.
Modifications:
Replace Runnable with OneTimeTask
Result:
Less object creation
Motivation:
We do not need to store another reference to AbstractChannel as we can access it through DefaultChannelHandlerContext.
Modifications:
Remove reference.
Result:
Cleaner code.
Motivation:
If you start to have 1M+ concurrent connections the memory footprint can become a big issue. We should try to reduce it as much as possible in the core of Netty.
Modifications:
- Remove the HashMap that was used to store the name-to-ctx mapping. It was only used for validation and to access a handler by name. As a pipeline is not expected to be very long (like 100+ handlers) we can just walk the linked-list structure to find the ctx with a given name.
Result:
Less memory footprint of the DefaultChannelPipeline.
Motivation:
The previous DefaultChannelPipeline#destroy() implementation, introduced in #3156, is suboptimal as it can cause the for loop to continuously spin if the executor used by a given handler is unable to "recognize" the event loop.
It could be objected that it's the custom executor's responsibility to properly implement the inEventLoop() method, but some implementations might not be able to do that for performance reasons, and even so, it's always better to be safe against API misuse, in particular when it is not possible to fail fast and the alternative is rather some subtle behaviour.
Modifications:
The patch simply avoids the recursive spin by explicitly passing the "in event loop" condition as a boolean parameter, preserving the same guarantees offered by #3156. A unit test has also been added.
Result:
All channel events are correctly called and no high CPU usage is seen anymore.
Motivation:
Change the cache of generated names to use a cache per thread. This removes the bottleneck when many event loops are used and names need to be generated.
Modifications:
Use a FastThreadLocal to store the cached names.
Result:
Less locking between threads.
Motivation:
We should only call ReferenceCountUtil.touch(...) if needed, as otherwise we pay the overhead of instanceof and cast every time.
Modifications:
Add boolean flag which indicates if touch(...) should be called.
Result:
Less overhead when leak detection is not enabled.
Motivation:
If a user writes from outside the EventLoop we increase the pending bytes of the outbound buffer before submitting the write request. This is done so the user can stop writing as soon as possible once the channel turns unwritable. Unfortunately this doesn't take the overhead of adding the task into account, and so it is very easy for a user to fill up the task queue. Besides this, we use a value of 0 for an unknown message by default, which is not ideal.
Modifications:
- Port the message-size calculation we used in Netty 3.x into AbstractChannelHandlerContext and so better account for the overhead of a message that is submitted from outside the EventLoop
- change the default estimated size for an unknown message to 8.
Result:
Better behaviour when submitting writes from outside the EventLoop.
Motivation:
Once a FixedChannelPool was closed we must not allow acquiring or releasing Channels, to prevent assert errors.
Modifications:
Fail release and acquire calls when FixedChannelPool is closed.
Result:
No more assert errors.
Motivation:
As MaxMessageHandle is stateful we can not share the same HandleImpl instance as otherwise we will see race conditions.
Modifications:
Create a new HandleImpl instance on each newHandle() call.
Result:
No more races.
Motivation:
EmbeddedChannelId#hashCode() and equals() do not behave correctly if an
instance is serialized and then deserialized. Additionally,
EmbeddedChannel does not allow use of any other type of ChannelId, and
EmbeddedChannelId is (mostly) a singleton instance. This creates a
problem for unit tests that construct multiple EmbeddedChannels and
expect each channel to have a unique ID.
Modifications:
EmbeddedChannelId is modified so equals() will return true and
hashCode() will return the same value for any instance of the class.
EmbeddedChannel is modified to allow a ChannelId to be specified when
the channel is constructed. Tests added for both changes.
Result:
EmbeddedChannelId now behaves correctly when deserialized, and
EmbeddedChannels can now have unique IDs.
Motivation:
PendingWriteQueueTest needs some cleanup.
Modifications:
- Cleanup code to remove deprecation warnings
- use static imports
Result:
No more warnings
Motivation:
We failed to run all pending tasks when EmbeddedChannel.close(...) or disconnect(...) was called. Because of this, channelInactive(...) / channelUnregistered(...) of the handlers were never called.
Modifications:
Correctly run all pending tasks and cancel all not-yet-ready scheduled tasks when close or disconnect is called.
Result:
Correctly run tasks on close / disconnect and have channelInactive(...) / channelUnregistered(...) called.
Motivation:
We need to ensure the actual close of the transport takes place before the promise of the write that triggered it is notified. This is needed as otherwise Channel.isActive(), isOpen() and isWritable() may return true even though the Channel should already be closed.
Modifications:
- Ensure the close takes place first
Result:
ChannelFutureListener will see the correct state of the Channel.
Motivation:
writeBytes(...) failed to set the EPOLLOUT flag when not all bytes were written. This could lead to the EpollEventLoop not trying to flush the remaining bytes once the socket becomes writable again.
Modifications:
- Move setting EPOLLOUT flag logic to one point so we are sure we always do it.
- Move OP_WRITE flag logic to one point as well.
Result:
Correctly try to write pending data if socket becomes writable again.
Motivation:
RecvByteBufAllocator.DelegatingHandle does not provide an accessor to get the delegate handle. This may be useful for classes that extend DelegatingHandle.
Modifications:
- add delegate() method to DelegatingHandle
Result:
Classes which inherit from DelegatingHandle can now access the delegate Handle.
Motivation:
If LocalChannel doWrite executes while the peer's state changes from CONNECTED to CLOSED it is possible that some promises won't be completed and buffers will be leaked.
Modifications:
- Check the peer's state in doWrite to avoid a race condition
Result:
All write operations should release, and the associated promise should be completed.
Motivation:
https://github.com/netty/netty/pull/4143 addressed a few ordering issues but an ordering issue still remained if the Promise for a write completes, and a listener of that promise does a write on a peer channel. The ordering was subject to how potentially 2 different executors would run a task, but it should be coordinated such that the first write is read first.
Modifications:
- Keep track of the finishPeerRead task run on the executor if necessary and ensure it completes before current channel read occurs
Result:
Ordering of events for echo type situations is preserved.
Motivation:
When a LocalChannel write operation occurs, the promise associated with the write operation is marked successful when it is added to the peer's queue, but before the peer has actually received the data. If the promise callback closes the channel then a race condition exists where the close event may occur before the data is delivered. We should preserve ordering of events.
Modifications:
- LocalChannel should track when a write is in progress, and if a close operation happens make sure the peer gets all pending read operations.
Result:
LocalChannel preserves order of operations.
Fixes https://github.com/netty/netty/issues/4118
Motivation:
The doc of ChannelGroup says that it can be used to manage server and child channels at once.
However, in DefaultChannelGroup, there is a race condition. When a server channel accepts a child, it schedules its
registration on an event loop, which takes some time. If the ChannelGroup, which is supposed
to close server and child channels at once, is closed after the child channel has been scheduled
for registration and before this registration actually happens, this child channel is not closed
and remains connected. This could lead to connection leaks.
Modifications:
To fix this, the DefaultChannelGroup is changed to have a closed flag.
This flag is set to true, just before the close() method is actually closing channels.
The add() method checks, after adding a new channel, if this flag has been set to true.
If yes, the new channel is closed. If not, we have the guarantee that this channel will be
closed by the ChannelGroup, because setting the closed flag to true happens-before closing any channels.
This behaviour can be activated by two new constructors. The old constructors are still there and behave like before.
Therefore, no existing code should be affected directly.
Result:
If this feature is activated, the DefaultChannelGroup can be used for managing server and child channels at once.
But activating this feature also means that a ChannelGroup cannot be reused after calling close().
Related issues:
- #3971
- #3973
- #3976
- #4035
Motivation:
1. Previously, DnsNameResolver.query() retried the request query by its
own. It prevents a user from deciding when to retry or stop. It is also
impossible to get the response object whose code is not NOERROR.
2. NameResolver does not have an operation that resolves a host name
into multiple addresses, like InetAddress.getAllByName()
Modifications:
- Changes related with DnsNameResolver.query()
- Make query() not retry
- Move the retry logic to DnsNameResolver.resolve() instead.
- Make query() fail the promise only when I/O error occurred or it
failed to get a response
- Add DnsNameResolverException and use it when query() fails so that
the resolver can give more information about the failure
- query() does not cache anymore.
- Changes related with NameResolver.resolveAll()
- Add NameResolver.resolveAll()
- Add SimpleNameResolver.doResolveAll()
- Changes related with DnsNameResolver.resolve() and resolveAll()
- Make DnsNameResolveContext abstract so that DnsNameResolver can
decide to get single or multiple addresses from it
- Re-implement cache so that the cache works for resolve() and
resolveAll()
- Add 'traceEnabled' property to enable/disable trace information
- Miscellaneous changes
- Use ObjectUtil.checkNotNull() wherever possible
- Add InternetProtocolFamily.addressType() to remove repetitive
switch-case blocks in DnsNameResolver(Context)
- Do not raise an exception when decoding a truncated DNS response
Result:
- Full control over query()
- A user can now retrieve all addresses via (Dns)NameResolver.resolveAll()
- DNS cache works only for resolve() and resolveAll() now.
Motivation:
When releasing an unhealthy channel back to a pool we don't have to offer it, since on acquire it will be discarded anyway.
Also, checking healthiness at release is a good idea so we don't end up having tons of unhealthy channels in the pool (unless they became unhealthy after being offered).
Modifications:
A private SimpleChannelPool.offerIfHealthy() method was added that is called from SimpleChannelPool.doReleaseChannel(). SimpleChannelPool.offerIfHealthy() offers the channel back to the pool only if the channel is healthy.
Otherwise it fails the promise via setFailure.
Result:
The pool is now much cleaner and not spammed with unhealthy channels.
Also added the ability to choose whether channel health has to be validated on release by passing a boolean flag.
Motivation:
Depending on performance preferences and individual use cases, sometimes we would like to be able to force a health check of a channel at release time and not offer it back to the pool. Other times we would want to just release the channel, offer it back to the pool, and check health only when we try to acquire that channel from the pool. See more details here: https://github.com/netty/netty/issues/4077#issuecomment-130461684
Modifications:
Future<Void> release(Channel channel, Promise<Void> promise, boolean offerHealthyOnly);
The offerHealthyOnly boolean flag allows developers to choose whether to do channel validation before offering it back to the pool or not.
Appropriate modifications were made to the hierarchy of implementations of ChannelPool. offerHealthyOnly=true forces the channel health to be checked before offering it back to the pool. offerHealthyOnly=false ignores the channel health check and just tries to offer it back to the pool.
offerHealthyOnly=true by default.
Result:
Channel health check before offer back to pool is controlled by a flag now.
Code changed to satisfy checkstyle requirements.
Motivation:
Code needs to satisfy checkstyle requirements.
Modifications:
SimpleChannelPool.java:279 line split to be less than 120 characters.
SimpleChannelPool.java:280:31 space added after '{'.
SimpleChannelPool.java:282:17 space added after '{'.
SimpleChannelPoolTest.java:198 - extra white space line removed.
Result:
Code satisfies checkstyle requirements.
offerHealthyOnly is passed as a constructor parameter now.
Motivation:
Instead of passing offerHealthyOnly as a method parameter it is better to pass it in to the SimpleChannelPool or FixedChannelPool constructor.
Modifications:
The redundant release method that takes offerHealthyOnly was removed from ChannelPool.
The offerHealthyOnly parameter was added to the constructors of FixedChannelPool and SimpleChannelPool.
Result:
SimpleChannelPool and FixedChannelPool now take offerHealthyOnly as a constructor parameter. Default behavior is: offerHealthyOnly=true.
Code changed to satisfy checkstyle requirements.
Motivation:
Code needs to satisfy checkstyle requirements.
Modifications:
SimpleChannelPool.java:84: line made to be no longer than 120 characters.
SimpleChannelPool.java:237: extra whitespace line removed.
Result:
Code satisfies checkstyle requirements.
Tests do not need to be too coupled to the code. The exception message should not be validated.
Motivation:
We don't need our tests to be too coupled to the code. Exception type validation in tests is just good enough.
Modifications:
Exception validation message removed from SimpleChannelPoolTest.testUnhealthyChannelIsNotOffered() test.
Result:
The SimpleChannelPoolTest test is less coupled to the code now.
Stack trace set to empty for UNHEALTHY_NON_OFFERED_TO_POOL.
Motivation:
We don't need stack trace for UNHEALTHY_NON_OFFERED_TO_POOL.
Modifications:
Added UNHEALTHY_NON_OFFERED_TO_POOL.setStackTrace(EmptyArrays.EMPTY_STACK_TRACE) to static init block.
Result:
UNHEALTHY_NON_OFFERED_TO_POOL's stack trace set to empty.
Minor code re-factorings.
Motivation:
For better code readability we need to apply several minor code re-factorings.
Modifications:
javadocs true -> {@code true}
offerHealthyOnly variable name changed to releaseHealthCheck
<p/> -> <p> in javadocs
offerHealthyOnly removed from doReleaseChannel as it is not needed there.
Result:
Code quality is improved.
Code changed to satisfy checkstyle requirements.
Motivation:
Code needs to satisfy checkstyle requirements.
Modifications:
SimpleChannelPool.java:87: line made to be no longer than 120 characters.
Result:
Code satisfies checkstyle requirements.
Pull request needs to contain only necessary changes
Motivation:
The pull request should not contain unnecessary changes that are not needed as part of required functionality of pull request.
Modifications:
private void doReleaseChannel(final Channel channel, final Promise<Void> promise) - > private void doReleaseChannel(Channel channel, Promise<Void> promise)
Result:
Pull request contains less unnecessary modifications.
Motivation:
The acquire channel function could end up calling itself several times when a channel polled from the pool queue was unhealthy, which caused FixedChannelPool.acquire() to be called several times and, in turn, acquiredChannelCount to be increased unnecessarily.
Example use case:
1) Create FixedChannelPool instance with one channel in the pool: new FixedChannelPool(cb, handler, 1)
2) Acquire channel A from the pool
3) close the channel A
4) Return it back to the pool
5) Acquire channel from the same pool again
Expected result:
A new channel is created and acquired; channel A, which has been closed, is discarded and removed from the pool for being unhealthy.
Actual result:
Channel A was removed from the pool, however the new channel was never acquired; instead the acquire request was added to the pending queue in FixedChannelPool and acquiredChannelCount was increased by one. The reason is that when SimpleChannelPool determined the channel was unhealthy, it called FixedChannelPool.acquire() to try to acquire a new channel; however the request was added to the pending queue because by that time acquiredChannelCount was already 1, so no new channel could be created due to maxChannelsLimit=1.
Modifications:
The suggested approach modifies SimpleChannelPool so that when a channel is detected to be unhealthy, the private method SimpleChannelPool.acquireHealthyFromPoolOrNew() is called. This guarantees that SimpleChannelPool either finds a healthy channel in the pool and returns it, or fails the promise when a new channel could not be created.
Result:
The ```acquiredChannelCount``` is now calculated correctly because SimpleChannelPool.acquire() no longer recurses into the overridable acquire method.
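A deliberately simplified, synchronous sketch of the idea (illustrative only; the real implementation is asynchronous and promise-based): the retry loop lives in the private helper and never re-enters the overridable acquire(), so a subclass such as FixedChannelPool counts each external acquire() exactly once.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only, not the actual SimpleChannelPool code.
class SketchPool {
    private final Deque<FakeChannel> deque = new ArrayDeque<>();

    // Overridable entry point, analogous to SimpleChannelPool.acquire().
    FakeChannel acquire() {
        return acquireHealthyFromPoolOrNew();
    }

    // Private helper: retries internally until it finds a healthy channel
    // or creates a new one, without calling the overridable acquire().
    private FakeChannel acquireHealthyFromPoolOrNew() {
        for (;;) {
            FakeChannel ch = deque.pollLast();
            if (ch == null) {
                return connectNew();   // pool empty -> create a fresh channel
            }
            if (ch.healthy) {
                return ch;             // healthy cached channel -> hand it out
            }
            ch.close();                // unhealthy -> discard and try the next one
        }
    }

    void release(FakeChannel ch) {
        deque.offerLast(ch);
    }

    FakeChannel connectNew() {
        return new FakeChannel();
    }

    static final class FakeChannel {
        boolean healthy = true;
        void close() { healthy = false; }
    }
}
```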
Motivation:
The current read loops don't facilitate reading a maximum amount of bytes. This capability is useful for more fine-grained control over how much data is ingested.
Modifications:
- Add a setMaxBytesPerRead(int) and getMaxBytesPerRead() to ChannelConfig
- Add a setMaxBytesPerIndividualRead(int) and getMaxBytesPerIndividualRead to ChannelConfig
- Add methods to RecvByteBufAllocator so that a pluggable scheme can be used to control the behavior of the read loop.
- Modify read loop for all transport types to respect the new RecvByteBufAllocator API
Result:
The ability to control how many bytes are read for each read operation/loop, and a more extensible read loop.
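For illustration, a minimal sketch of bounding the read loop; the allocator class name and its (maxBytesPerRead, maxBytesPerIndividualRead) constructor arguments are assumptions based on the description above:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.DefaultMaxBytesRecvByteBufAllocator;

public final class BoundedReadExample {
    // Sketch only: caps each read loop at 64 KiB in total and each
    // individual read at 16 KiB via a pluggable RecvByteBufAllocator.
    public static ServerBootstrap configure(ServerBootstrap b) {
        return b.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                new DefaultMaxBytesRecvByteBufAllocator(64 * 1024, 16 * 1024));
    }

    private BoundedReadExample() { }
}
```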
Motivation:
We don't decrease the acquired channel count in FixedChannelPool when a timeout occurs with AcquireTimeoutAction.NEW and the acquire eventually fails.
Modifications:
Set AcquireTask.acquired=true so that decrementAndRunTaskQueue is called when the timeout action fails.
Result:
Acquired channel count decreases correctly.
Motivation:
We failed to correctly count acquired channels in FixedChannelPool, which could produce an assertion error.
Modifications:
Only try to decrement the acquired count if the channel was really acquired.
Result:
No more assertion errors possible.
Motivation:
If the Channel is already closed when the PendingWriteQueue is created it will generate an NPE when add or remove is called later.
Modifications:
Add null checks to guard against NPE.
Result:
No more NPE possible.
Motivation:
Test was leaving composite buffers taken from the queue unreleased.
Modifications:
Make the test release buffers.
Result:
Nagging about leaked buffers should stop.
Motivation:
Simplifies writing code that needs to merge or slice a sequence of buffer & promise pairs into chunks of arbitrary sizes.
For example in HTTP2 we merge or split buffers across fixed-size DATA frame boundaries.
Modifications:
Add new utility class CoalescingBufferQueue
Result:
Following this change the HTTP2 code will switch to using it instead of CompositeByteBuf for DATA frame coalescing.
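A minimal sketch of the intended usage, assuming the queue exposes add(ByteBuf, ChannelPromise), readableBytes() and remove(int, ChannelPromise) as suggested by the description above:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;
import io.netty.channel.CoalescingBufferQueue;

public final class FrameChunker {
    private static final int MAX_FRAME_PAYLOAD = 16 * 1024;

    // Sketch only: slices queued data into fixed-size chunks, similar to the
    // DATA-frame coalescing use case mentioned above.
    public static void writeAsFrames(Channel ch, CoalescingBufferQueue queue,
                                     ByteBuf payload, ChannelPromise promise) {
        queue.add(payload, promise);
        while (queue.readableBytes() > 0) {
            // Each removed chunk is at most one frame payload in size; the
            // original write promise completes once its bytes are written.
            ChannelPromise chunkPromise = ch.newPromise();
            ByteBuf chunk = queue.remove(MAX_FRAME_PAYLOAD, chunkPromise);
            ch.write(chunk, chunkPromise);
        }
        ch.flush();
    }

    private FrameChunker() { }
}
```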
Motivation:
While cherry-picking 11f9e9084b I changed the EmbeddedChannel implementation to not allow constructing it without any ChannelHandlers.
This was done by mistake.
Modifications:
Revert change and add unit test.
Result:
Restore old behavior.
Motivation:
When using an EmbeddedChannel it often does either inbound or outbound processing, which means we often need only one queue.
Modifications:
Lazy init the inbound and outbound message queues.
Result:
Less memory usage.
Motivation:
At the moment we directly close the Channel when an exception occurs during initChannel(...) without giving the user any way to do extra or special handling.
Modifications:
Handle the exception in exceptionCaught(...) of the ChannelInitializer which will by default log and close the Channel. This way the user can override this.
Result:
More flexible handling of exceptions.
Motivation:
Currently in EmbeddedChannel we add the ChannelHandlers before the Channel is registered, which causes the handlerAdded(...) callback
to be called from outside the EventLoop and also prevents the user from obtaining a reference to the EventLoop in the callback itself.
Modifications:
Delay adding ChannelHandlers until EmbeddedChannel is registered.
Result:
Correctly call handlerAdded(...) after EmbeddedChannel is registered.
Motivation:
If you set a ChannelHandler via ServerBootstrap.handler(...) it is added to the ChannelPipeline before the Channel is registered. This will lead to an IllegalStateException if a user tries to access the EventLoop in the ChannelHandler.handlerAdded(...) method.
Modifications:
Delay the adding of the ChannelHandler until the Channel is registered.
Result:
No more IllegalStateException.
Motivation:
Only one of the three FixedChannelPool constructors validates the constructor
arguments. Therefore it was possible to create a pool with zero maxConnections.
Modifications:
This change chains all constructors together, so that the last one
in the chain always checks the validity of the arguments, regardless of the
constructor used.
Result:
It is no longer possible to create a FixedChannelPool instance with invalid
maxConnections or maxPendingAcquires parameters.
Motivation:
FixedChannelPool should enforce a maximum number of used channels, but due to a bug we fail to correctly enforce this.
Modifications:
Change the check so that a channel is only acquired if we have not hit the limit yet.
Result:
Correct limiting.
Motivation:
To avoid buffering too much it would be useful to get an estimate of how many bytes can be written to a Channel before it becomes unwritable.
Modifications:
- Update the Channel interface to support 2 new methods: one that gives the number of bytes before the Channel becomes unwritable, and one that gives the number of bytes before it becomes writable again.
- Update the AbstractChannel implementation to delegate to the ChannelOutboundBuffer.
Result:
The Channel interface supports 2 new methods which provide more visibility into writability.
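For illustration, a minimal sketch of how a writer could use the new accessor to avoid pushing the Channel into the unwritable state (assuming the accessor is exposed as bytesBeforeUnwritable()):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;

public final class WritabilityAwareWriter {
    // Sketch: stop queueing once the next chunk would not fit before the
    // channel turns unwritable, and wait for channelWritabilityChanged(...).
    public static boolean tryWrite(Channel ch, ByteBuf chunk) {
        if (ch.bytesBeforeUnwritable() < chunk.readableBytes()) {
            // Writing now would likely flip the channel to unwritable; back off.
            return false;
        }
        ch.writeAndFlush(chunk);
        return true;
    }

    private WritabilityAwareWriter() { }
}
```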
Motivation:
It's useful to be notified once all Channels that are part of the ChannelGroup are closed. This can for example be useful if you want to do a graceful shutdown.
Modifications:
- Add ChannelGroup.newCloseFuture(...) which will be notified once all Channels that are part of the ChannelGroup at the time of calling are closed.
Result:
Easier to be notified once all Channels within a ChannelGroup are closed.
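A minimal sketch of the graceful-shutdown use case mentioned above (names other than newCloseFuture(...) are illustrative):

```java
import io.netty.channel.Channel;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.ChannelGroupFuture;

public final class GracefulShutdown {
    // Sketch: stop accepting new connections, then wait until every child
    // Channel currently in the group has closed on its own.
    public static void shutdown(Channel serverChannel, ChannelGroup childChannels)
            throws InterruptedException {
        // Snapshot the close future first so it covers exactly the channels
        // that are part of the group at this point in time.
        ChannelGroupFuture allClosed = childChannels.newCloseFuture();
        serverChannel.close().syncUninterruptibly();
        allClosed.await();
    }

    private GracefulShutdown() { }
}
```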
Motivation:
There are currently no accessors which provide visibility into how many bytes must be written in order for a writability change to occur. This feature would be useful for codecs which intend to control how many bytes are queued at any given time.
Modifications:
- add bytesBeforeUnWritable() which will give the number of bytes before the buffer (and associated channel) transitions to not writable
- add bytesBeforeWritable() which will give the number of bytes that must be drained from the queue until the channel becomes writable.
Result:
More visibility into writability for the ChannelOutboundBuffer.
Motivation:
The ByteBuffer[] that we keep in the ThreadLocal is never nulled out, which can lead to ByteBuffer instances sitting there forever.
This is an even bigger problem if nioBuffer() of a ByteBuf returns a new ByteBuffer that can not be destroyed by ByteBuf.release().
Modifications:
Null out ByteBuffer array after processing.
Result:
No more dangling references after processing is done.
Motivation:
SingleThreadEventLoopTest.testScheduleTaskAtFixedRate() fails often due to:
- too little tolerance
- incorrect assertion (it compares only with the previous timestamp)
Modifications:
- Increase the timestamp difference tolerance from 10ms to 20ms
- Improve the timestamp assertion so that the comparison is performed against the first recorded timestamp
- Misc: Fix broken Javadoc tag
Result:
More build stability
Motivation:
When trying to write more than Integer.MAX_VALUE / SSIZE_MAX via writev(...) the OS may return EINVAL depending on the kernel or the actual OS (BSD / OS X always return EINVAL). This will trigger an IOException.
Modifications:
Never try to write more than Integer.MAX_VALUE / SSIZE_MAX when using writev.
Result:
No more IOException when writing more data than Integer.MAX_VALUE / SSIZE_MAX via writev.
Motivation:
In the SslHandler we schedule a timeout after which we close the Channel if the close_notify did not complete in time. Because this can race with the notification of the flushFuture, we could see an IllegalStateException when the Channel is closed.
Modifications:
- Use a trySuccess() and tryFailure(...) to guard against race.
Result:
No more race.
Motivation:
We should not trigger channelWritabilityChanged while failing messages when we are about to close the Channel, as otherwise the user may try to write again even though the Channel is about to be closed.
Modifications:
Add a new boolean parameter to ChannelOutboundBuffer.failFlushed(...) which allows specifying whether we should notify or not.
Result:
channelWritabilityChanged is not triggered anymore if we close the Channel because of an IOException during write.
Motivation:
Previously, we deferred the closing of the Channel while we were flushing. This is problematic as it means that if the user adds a ChannelFutureListener that will close the Channel, the close will not happen until we are done with flushing. This can lead to more data being sent than expected.
Modifications:
- Do not defer closing when in flush
Result:
Correctly respect the order of events and close the Channel ASAP.
Motivation:
The semantics of LocalChannel.doWrite(...) were a bit off, as it notified the ChannelFuture before the data was actually moved to the peer buffer.
Modifications:
- Use our MPSC queue as inbound buffer
- Directly copy the data to the inbound buffer of the peer and either succeed or fail the promise after each copy.
Result:
Correct semantics and fewer memory copies.
Motivation:
If user events or exceptions reach the tail end of the pipeline they are not released. This could result in buffer leaks.
Modifications:
- Use ReferenceCountUtil.release(...) to release objects in the userEventTriggered and exceptionCaught methods on DefaultChannelPipeline
Result:
2 fewer areas where buffer leaks can occur.
Motivation:
There are various known issues in netty-codec-dns:
- Message types are not interfaces, which can make it difficult for a
user to implement his/her own message implementation.
- Some class names and field names do not match with the terms in the
RFC.
- The support for decoding a DNS record was limited. A user had to
encode and decode by him/herself.
- The separation of DnsHeader from DnsMessage was unnecessary, although
it is fine conceptually.
- Buffer leak caused by DnsMessage was difficult to analyze, because the
leak detector tracks down the underlying ByteBuf rather than the
DnsMessage itself.
- DnsMessage assumes DNS-over-UDP.
- To send an EDNS message, a user has to create a new DNS record class
instance unnecessarily.
Modifications:
- Make all message types interfaces and add default implementations
- Rename some classes, properties, and constants to match the RFCs
- DnsResource -> DnsRecord
- DnsType -> DnsRecordType
- and many more
- Remove DnsClass and use an integer to support EDNS better
- Add DnsRecordEncoder/DnsRecordDecoder and their default
implementations
- DnsRecord does not require RDATA to be ByteBuf anymore.
- Add DnsRawRecord as the catch-all record type
- Merge DnsHeader into DnsMessage
- Make ResourceLeakDetector track AbstractDnsMessage
- Remove DnsMessage.sender/recipient properties
- Wrap DnsMessage with AddressedEnvelope
- Add DatagramDnsQuery and DatagramDnsResponse for ease of use
- Rename DnsQueryEncoder to DatagramDnsQueryEncoder
- Rename DnsResponseDecoder to DatagramDnsResponseDecoder
- Miscellaneous changes
- Add StringUtil.TAB
Result:
- Cleaner API
- Can support DNS-over-TCP more easily in the future
- Reduced memory footprint in the default DnsQuery/Response
implementations
- Better leak tracking for DnsMessages
- Possibility to introduce new DnsRecord types in the future and provide
full record encoder/decoder implementation.
- No unnecessary instantiation for an EDNS pseudo resource record
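For illustration, a minimal sketch of building a UDP query with the reworked API; the exact constructor shape is an assumption based on the renames listed above:

```java
import java.net.InetSocketAddress;

import io.netty.handler.codec.dns.DatagramDnsQuery;
import io.netty.handler.codec.dns.DefaultDnsQuestion;
import io.netty.handler.codec.dns.DnsRecordType;
import io.netty.handler.codec.dns.DnsSection;

public final class DnsQueryExample {
    // Sketch: build a query carrying a single QUESTION record, to be written
    // through the DatagramDnsQueryEncoder mentioned above.
    public static DatagramDnsQuery newAQuery(InetSocketAddress dnsServer, int id) {
        DatagramDnsQuery query = new DatagramDnsQuery(null, dnsServer, id);
        query.setRecord(DnsSection.QUESTION,
                new DefaultDnsQuestion("netty.io.", DnsRecordType.A));
        return query;
    }

    private DnsQueryExample() { }
}
```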
Motivation:
Many projects need some kind of Channel/Connection pool implementation. While the protocols are different many things can be shared, so we should provide a generic API and implementation.
Modifications:
Add ChannelPool / ChannelPoolMap API and implementations.
Result:
Reusable / Generic pool implementation that users can use.
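A minimal sketch of the basic acquire/release cycle with the new API (blocking for brevity; real code would typically add a listener to the returned Future instead):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.ChannelPool;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.util.concurrent.Future;

public final class PoolUsageExample {
    // Sketch: the handler callback is where per-connection pipeline setup goes.
    public static void useOnce(Bootstrap bootstrap) {
        ChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                // add handlers to ch.pipeline() here
            }
        });
        Future<Channel> acquire = pool.acquire();
        Channel ch = acquire.syncUninterruptibly().getNow();
        // ... use the channel ...
        pool.release(ch);   // hand it back so it can be reused
    }

    private PoolUsageExample() { }
}
```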
Motivation:
Because of a bug we did not fail the connect future when doClose() was called. This can lead to a future which is never notified and so may lead to deadlocks in user programs.
Modifications:
Correctly fail the connect future when doClose() is called and the connection was not established yet.
Result:
Connect future is always notified.