Motivation:
In the next major version of Netty, users should use ChannelHandler everywhere. We should ensure we do the same in our own code.
Modifications:
Replace usage of deprecated classes / interfaces with ChannelHandler
Result:
Use non-deprecated code
Motivation:
97361fa2c8 replaced synchronized blocks with ConcurrentHashMap in the *Bootstrap classes but missed doing the same for the Http2 variant.
Modifications:
- Use ConcurrentHashMap
- Simplify code in *Bootstrap classes
Result:
Less contention
Motivation:
In Netty 5.x all pipeline operations are executed on the EventLoop, so there is no longer any need for an atomic operation to update the handler state.
Modifications:
Remove atomic usage to handle the handler state
Result:
Simpler code and less overhead
Motivation:
In most cases, users want to construct a MultithreadEventLoopGroup without specifying the number of threads, only the thread name / factory. We should simplify this for them.
Modifications:
Add a new constructor
Result:
Users can pass just a ThreadFactory / Executor without having to pass 0 as the thread number.
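A minimal usage sketch of the new overloads (the exact constructor signatures shown here are an assumption based on the description above):
```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.nio.NioHandler;
import io.netty.util.concurrent.DefaultThreadFactory;

final class NamedEventLoopGroupExample {
    // Before: the thread count had to be passed explicitly (0 meaning "use the default").
    static EventLoopGroup withExplicitThreadCount() {
        return new MultithreadEventLoopGroup(0, new DefaultThreadFactory("worker"), NioHandler.newFactory());
    }

    // After: only the ThreadFactory (or Executor) is passed and the default thread count is used.
    static EventLoopGroup withThreadFactoryOnly() {
        return new MultithreadEventLoopGroup(new DefaultThreadFactory("worker"), NioHandler.newFactory());
    }
}
```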
Motivation
Per javadoc in 4.1.x SimpleChannelInboundHandler:
"Please keep in mind that channelRead0(ChannelHandlerContext, I) will be
renamed to messageReceived(ChannelHandlerContext, I) in 5.0."
Modifications
Rename aforementioned method and all references/overrides.
Result
Method is renamed.
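For reference, a handler after the rename looks like this (sketch; assumes a suitable encoder later in the pipeline for the write):
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

final class EchoHandler extends SimpleChannelInboundHandler<String> {
    @Override
    protected void messageReceived(ChannelHandlerContext ctx, String msg) {
        // Formerly channelRead0(...); the message is still released automatically afterwards.
        ctx.writeAndFlush(msg);
    }
}
```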
Motivation:
At the moment we directly extend the Recycler base class in our code, which makes it hard to experiment with different object pool implementations. It would be nice to be able to switch from one to another via a system property in the future. This would also allow us to more easily test things like https://github.com/netty/netty/pull/8052.
Modifications:
- Introduce an ObjectPool class with a static method that we now use internally to obtain an ObjectPool implementation.
- Wrap the Recycler into an ObjectPool and return it for now
Result:
Preparation for different ObjectPool implementations
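A sketch of the intended usage pattern (the method names follow the description above and may differ in detail):
```
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;

final class PooledCounter {
    // Obtain an ObjectPool via the static factory instead of extending Recycler directly.
    private static final ObjectPool<PooledCounter> POOL =
            ObjectPool.newPool(handle -> new PooledCounter(handle));

    private final Handle<PooledCounter> handle;
    long value;

    private PooledCounter(Handle<PooledCounter> handle) {
        this.handle = handle;
    }

    static PooledCounter newInstance() {
        return POOL.get();
    }

    void recycle() {
        value = 0;
        handle.recycle(this);
    }
}
```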
Motivation
The current event loop shutdown logic is quite fragile and in the
epoll/NIO cases relies on the default 1 second wait/select timeout that
applies when there are no scheduled tasks. Without this default timeout
the shutdown would hang indefinitely.
The timeout only takes effect in this case because queued scheduled
tasks are first cancelled in
SingleThreadEventExecutor#confirmShutdown(), but I _think_ even this
isn't robust, since the main task queue is subsequently serviced which
could result in some new scheduled task being queued with much later
deadline.
It also means shutdowns are unnecessarily delayed by up to 1 second.
Modifications
- Add/extend unit tests to expose the issue
- Adjust SingleThreadEventExecutor shutdown and confirmShutdown methods
to explicitly add no-op tasks to the taskQueue so that the subsequent
event loop iteration doesn't enter a blocking wait (as appears to have been
originally intended)
Results
Faster and more robust shutdown of event loops, allows removal of the default wait timeout.
This is a port of https://github.com/netty/netty/pull/9616
Motivation:
On JDK > 9 Netty uses Unsafe to write two internal JDK fields: sun.nio.ch.SelectorImpl.selectedKeys and sun.nio.ch.SelectorImpl.publicSelectedKeys. This is done in transport/src/main/java/io/netty/channel/nio/NioEventLoop.java:225, in the openSelector() method. The GraalVM analysis cannot do the Unsafe registration automatically because the object field offset computation is hidden behind two layers of calls.
Modifications:
This PR updates the Netty GraalVM configuration by registering those fields for unsafe access.
Result:
Improved support for Netty on GraalVM with JDK > 9.
Motivation:
Due to a bug, we did not always correctly calculate the next buffer size in AdaptiveRecvByteBufAllocator.
Modification:
Fix calculation and add unit test
Result:
Correct calculation is always used.
Motivation:
A lot of reentrancy bugs and cycles can happen because DefaultPromise notifies FutureListeners directly in the calling Thread when that Thread is the EventExecutor Thread. To reduce this risk we should always notify the listeners via the EventExecutor, which basically means we put a task into the task queue of the EventExecutor and pick it up for execution after the setSuccess / setFailure methods complete the promise.
Modifications:
- Always notify via the EventExecutor
- Adjust test to ensure we correctly account for this
- Adjust tests that use the EmbeddedChannel to ensure we execute the scheduled work.
Result:
Reentrancy bugs related to the FutureListeners can't happen anymore.
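Simplified sketch of the new notification rule (illustrative only, not the actual DefaultPromise code):
```
import io.netty.util.concurrent.EventExecutor;

final class ListenerNotification {
    // Listener callbacks are always dispatched through the EventExecutor, even when the
    // completing thread already is the executor thread, so they only run after the
    // current call stack has unwound.
    static void notifyListener(EventExecutor executor, Runnable listenerCallback) {
        executor.execute(listenerCallback);
    }
}
```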
Motivation:
We should correctly reset the cached local and remote address when a Channel.disconnect() is called and the channel has a notion of disconnect vs close (for example DatagramChannel implementations).
Modifications:
- Correctly reset the cached local and remote address
- Update the testcase to cover it and so ensure all transports work in a consistent way
Result:
Correctly handle disconnect()
Motivation:
calculateMaxBytesPerGatheringWrite() contains duplicated calculation: getSendBufferSize() << 1
Modifications:
Remove the duplicated calculation
Result:
The method is clearer.
Motivation:
EmbeddedChannel currently behaves quite differently from other Channel implementations in terms of semantics. We should change it to be more closely aligned so that testing code is more robust.
Modifications:
- Change EmbeddedEventLoop.inEventLoop() to only return true if we currently run pending / scheduled tasks
- Change EmbeddedEventLoop.execute(...) to automatically process pending tasks if not already doing so
- Adjust a few tests for the new semantics (which is closer to other Channel implementations)
Result:
EmbeddedChannel works more like other Channel implementations
Motivation:
At the moment it is quite easy to hit reentrancy issues when you have multiple handlers in the pipeline and each of the handlers does not correctly protect against these. To make it easier for the user we should try to protect against these. The issue usually arises when an inbound event triggers an outbound event and this outbound event then again triggers an inbound event. This may result in a ChannelHandler method being re-entered, so state can be corrupted or messages re-ordered.
Modifications:
- Keep track of inbound / outbound operations in DefaultChannelHandlerContext and if reentrancy is detected break it by scheduling the action on the EventLoop. This will then be picked up once the method returns and so the reentrancy is broken up.
- Adjust tests which made strange assumptions about execution order
Result:
No more reentrancy of handlers possible.
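Simplified sketch of the reentrancy guard (illustrative only, not the actual DefaultChannelHandlerContext code):
```
import io.netty.util.concurrent.EventExecutor;

final class ReentrancyGuard {
    private boolean invoking;

    void invoke(EventExecutor executor, Runnable event) {
        if (invoking) {
            // A handler method is already on the call stack: defer the new event to the
            // EventLoop instead of re-entering the handler.
            executor.execute(event);
            return;
        }
        invoking = true;
        try {
            event.run();
        } finally {
            invoking = false;
        }
    }
}
```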
Motivation:
There are some redundant log level checks (logger.isWarnEnabled()).
Modification:
Remove log level checks (logger.isWarnEnabled()) from io.netty.channel.epoll.AbstractEpollStreamChannel, io.netty.channel.DefaultFileRegion, io.netty.channel.nio.AbstractNioChannel, io.netty.util.HashedWheelTimer, io.netty.handler.stream.ChunkedWriteHandler and io.netty.channel.udt.nio.NioUdtMessageConnectorChannel
Result:
Fixes #9456
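Illustrative before/after at a hypothetical call site (the guard is redundant because the logger performs the level check itself):
```
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

final class LogLevelCheckExample {
    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(LogLevelCheckExample.class);

    static void before(Throwable cause) {
        if (logger.isWarnEnabled()) { // redundant guard
            logger.warn("Failed to close a file.", cause);
        }
    }

    static void after(Throwable cause) {
        logger.warn("Failed to close a file.", cause);
    }
}
```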
Motivation:
In AbstractBootstrap, options and attrs are LinkedHashMaps that are synchronized on for every read, copy/clone, and write operation.
When a lot of connections are triggered concurrently on the same bootstrap instance, the synchronized blocks lead to contention, Netty IO threads get blocked, and performance may be severely degraded.
Modifications:
Use ConcurrentHashMap
Result:
Less contention. Fixes https://github.com/netty/netty/issues/9426
It looks like `EmbeddedChannelPipeline` should also override `onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg)` in order not to print "Discarded message pipeline", because in the case of `EmbeddedChannelPipeline` no discarding actually happens.
This fixes the following warning in the latest Netty version with WebSocket and `WebSocketServerCompressionHandler`:
```
13:36:36.231 DEBUG- Decoding WebSocket Frame opCode=2
13:36:36.231 DEBUG- Decoding WebSocket Frame length=5
13:36:36.231 DEBUG- Discarded message pipeline : [JdkZlibDecoder#0, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xembedded, L:embedded - R:embedded].
```
Modification:
Override the correct method.
Result:
Follow up fix after https://github.com/netty/netty/pull/9286
On servers with many pipelines or dynamic pipelines, it is easy for the end user to make a mistake during pipeline configuration. The current message:
`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`
is not always meaningful and doesn't allow the problematic pipeline to be found quickly.
Modification:
Added an additional log placeholder that identifies the pipeline handlers and channel info. This will allow end users to quickly find the problematic pipeline.
Result:
Meaningful warning when the message reaches the end of the pipeline. Fixes #7285
Motivation:
Currently GraalVM substrate returns null for reflective calls if the reflection access is not declared up front.
A change introduced in Netty 4.1.35 results in needing to register every Netty handler for reflection. This complicates matters as it is difficult to know all the possible handlers that need to be registered.
Modification:
This change adds a simple null check so that Netty does not break on GraalVM substrate without the reflection information registration.
Result:
Fixes #9278
Motivation:
Sometimes it is beneficial to be able to set a parent Channel in EmbeddedChannel if the handler that should be tested depends on the parent.
Modifications:
- Add another constructor which allows specifying a parent
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9228.
Motivation:
When Netty is run through ProGuard, seemingly unused methods are removed. This breaks reflection, making the handler-skipping optimization throw a reflective error.
Modification:
If a method is seemingly absent, just disable the optimization.
Result:
Dealing with ProGuard sucks infinitesimally less.
Motivation:
8e72071d76 adjusted how synchronization is done but missed updating one block, which still used synchronized (this) while it should use synchronized (handlers).
Modifications:
Use synchronized (handlers)
Result:
Correctly synchronize
Motivation:
When initializing the AnnotatedSocketException in AbstractChannel, both
the cause and the stack trace are set, leaving a trailing "Caused By"
that is compressed when printing the trace.
Modification:
Don't include the stack trace in the exception, but leave it in the cause.
Result:
Clearer stack trace
Motivation:
An OOME can occur because suppressedExceptions keeps growing when other libraries call Throwable#addSuppressed. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.
Modifications:
Only use static instances of the Exceptions if we can either disable addSuppressed or we run on Java 6.
Result:
Not possible to OOME because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
Motivation:
GraalVM native images are a new way to deliver Java applications. Netty is one of the most popular libraries; however, there are a few limitations that make it impossible to use with native images out of the box. Adding a bit of metadata in specific modules allows the compilation to succeed and produce working binaries.
Modification:
Added properties files in `META-INF` and substitution classes (under `internal.svm`) to solve the compilation issues. The substitution classes do not have a public constructor, so they are not visible to end users.
Result:
Fixes #8959
This fix is very conservative as it applies the minimum config required to build:
* pure netty servers
* vert.x applications
* grpc applications
The build is having trouble due to checkstyle which does not seem to be able to find the copyright notice on property files.
Motivation
The optimization in #8988 didn't correctly handle the specific case
where the channel hasDisconnect == false, and a
ChannelOutboundHandlerAdapter subclass overrides only the close(ctx,
promise) method without also overriding the disconnect(ctx, promise)
method.
Modifications
Adjust AbstractChannelHandlerContext.disconnect(...) to divert to
close(...) in the !hasDisconnect case before computing the target
context for the event.
Result
Fixes #9092
Motivation:
806dace32d introduced a compilation error due to a bad cherry-pick from 4.1.
Modifications:
Use correct API for master branch.
Result:
No compile error anymore
Motivation:
When a Channel has been closed, its isActive() method must return false.
Modifications:
First check isOpen() before isBound(), as isBound() will continue to return true even after the underlying fd was closed.
Result:
Fixes https://github.com/netty/netty/issues/9026.
Motivation:
CompletionStage is the new standard for async operation chaining on JDK 8+ and is supported by various libraries. To make it easier to interoperate with other libraries and to allow users to make good use of lambdas and a functional programming style, we should allow converting our Future to a CompletionStage while still providing the same ordering guarantees.
The reason we expose this as toStage() instead of just having Future extend CompletionStage is twofold:
- Keep our interface narrow
- Keep semantics clear (Future.addListener(...) methods return this while all chaining methods of CompletionStage return a new instance).
Modifications:
- Merge implements in AbstractFuture to Future (by make these default methods)
- Add Future.toStage() as a default method and a special implementation in DefaultPromise (to reduce GC).
- Add Future.executor() which returns the EventExecutor that is pinned to the Future
- Introduce FutureCompletionStage that extends CompletionStage to clarify threading semantics and guarantees.
Result:
Easier inter-op with other Java 8+ libraries. Related to https://github.com/netty/netty/issues/8523.
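A usage sketch based on the API described above (the method name toStage() follows this description and may differ in later revisions):
```
import java.util.concurrent.CompletionStage;

import io.netty.channel.Channel;

final class ToStageExample {
    static CompletionStage<Void> closeAndLog(Channel channel) {
        // The returned stage keeps the Future's EventExecutor for the non-*Async methods,
        // preserving the usual ordering guarantees.
        return channel.close().toStage()
                .thenRun(() -> System.out.println("Channel closed: " + channel));
    }
}
```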
Motivation:
We should not throw checked exceptions when the user calls sync*() but should rather wrap them in a CompletionException to make it easier for people to reason about what happens.
Modifications:
- Change sync*() to throw CompletionException
- Adjust tests
- Add some more tests
Result:
Fixes https://github.com/netty/netty/issues/8521.
Motivation:
IdleStateHandler may trigger unexpected idle events when flushing large entries to slow clients.
Modification:
In Netty's design, we check the identity hash code and the total pending write bytes of the current flush entry to determine whether there is a change in output. But if a large entry is being flushed slowly (for example because the network is slow, or the client processes data too slowly and the TCP sliding window drops to zero), the total pending write bytes and the identity hash code remain unchanged.
Avoid this issue by also checking the flush progress of the current entry.
Result:
Fixes #8912.
Motivation:
Deprecate ChannelOption.newInstance(...) as it is not used.
Modifications:
Deprecate ChannelOption.newInstance(...) as valueOf(...) should be used as a replacement.
Result:
Fixes https://github.com/netty/netty/issues/8983.
Motivation:
DefaultPromise requires an EventExecutor which provides the thread to notify listeners on, and this EventExecutor can never change. We can remove the code that supported the possibility of changing the executor as this is not possible anymore.
Modifications:
- Remove the constructor which allowed constructing a *Promise without an EventExecutor
- Remove extra state
- Adjusted SslHandler and ProxyHandler for new code
Result:
Fixes https://github.com/netty/netty/issues/8517.
Motivation:
8fdf373557 introduced the @Skip annotation which allows optimizing the way we invoke ChannelHandlers when traversing the pipeline. Now that we moved all the "default" code to the ChannelHandler interface, we can make the annotation package-private to guard users from making mistakes that would lead to hard-to-debug issues.
Modifications:
Move ChannelHandler.Skip to ChannelHandlerMask.Skip and make it package-private
Result:
Guard users from introducing hard-to-debug issues.
Motivation:
In 42742e233f we already added default methods to Channel*Handler and deprecated the Adapter classes to simplify the class hierarchy. With this change we go even further and merge everything into just ChannelHandler. This simplifies things even more in terms of class-hierarchy.
Modifications:
- Merge ChannelInboundHandler | ChannelOutboundHandler into ChannelHandler
- Adjust code to just use ChannelHandler
- Deprecate old interfaces.
Result:
Cleaner and simpler code in terms of class-hierarchy.
Motivation:
It appears this was an oversight, maybe was valid at some point in the past. Noticed while reviewing #8958.
Modifications:
Change DefaultChannelHandlerContext to not extend DefaultAttributeMap.
Result:
Simpler hierarchy, eliminate unused attributes field from each context instance.
Motivation:
As we now use Java 8 as the minimum Java version, we can deprecate ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter and just move the default implementations into the interfaces. This makes things a bit more flexible for the end user and also simplifies the class hierarchy.
Modifications:
- Mark ChannelInboundHandlerAdapter and ChannelOutboundHandlerAdapter as deprecated
- Add default implementations to ChannelInboundHandler / ChannelOutboundHandler
- Refactor our code to not use ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter anymore
Result:
Clean up the class hierarchy and make things a bit more flexible.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor.
- Check if methods are called from within the EventExecutor thread and if not fail
- Correctly dispatch on the right EventExecutor if the Future uses a different EventExecutor to eliminate concurrency issues.
Result:
Safer use of PromiseCombiner and enforcement of the correct usage / contract.
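A usage sketch of the new constructor (illustrative; assumes the caller is already on the channel's EventLoop, as the new contract requires):
```
import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseCombiner;

final class PromiseCombinerExample {
    // Must be invoked from ch.eventLoop(); the combiner is now bound to that executor.
    static void writeBoth(Channel ch, Object first, Object second, ChannelPromise aggregate) {
        PromiseCombiner combiner = new PromiseCombiner(ch.eventLoop());
        combiner.add(ch.write(first));
        combiner.add(ch.write(second));
        combiner.finish(aggregate);
        ch.flush();
    }
}
```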
Motivation:
`DefaultFileRegion.transferTo` will return 0 all the time when we request more data than the actual file size. This may result in a busy spin while processing the FileRegion during writes.
Modifications:
- If we wrote 0 bytes, check if the underlying file size is smaller than the requested count and if so throw an IOException
- Add DefaultFileRegionTest
- Add a test to the testsuite
Result:
Fixes https://github.com/netty/netty/issues/8868.
Motivation:
To make it easier to understand why a Channel was closed previously and so why the operation failed with a ClosedChannelException we should include the original Exception.
Modifications:
- Store the original exception that led to the closed Channel and include it in the ClosedChannelException that is used to fail the operation.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/8862.
Motivation:
We can just use Objects.requireNonNull(...) as a replacement for ObjectUtil.checkNotNull(...).
Modifications:
- Use Objects.requireNonNull(...)
Result:
Less code to maintain.
Motivation:
ChannelHandler.exceptionCaught(...) was marked as @deprecated as it should only exist in inbound handlers.
Modifications:
Remove ChannelHandler.exceptionCaught(...) and adjust code / tests.
Result:
Fixes https://github.com/netty/netty/issues/8527
Motivation:
We have utility methods to check for > 0 and >= 0 arguments. We should use them.
Modification:
Use checkPositive / checkPositiveOrZero instead of hand-written if statements.
Result:
Re-use the utility methods.
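Illustrative before/after in a hypothetical constructor:
```
import static io.netty.util.internal.ObjectUtil.checkPositive;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;

final class FrameConfig {
    private final int maxFrameLength;
    private final int initialBytesToStrip;

    FrameConfig(int maxFrameLength, int initialBytesToStrip) {
        // Instead of: if (maxFrameLength <= 0) { throw new IllegalArgumentException(...); }
        this.maxFrameLength = checkPositive(maxFrameLength, "maxFrameLength");
        this.initialBytesToStrip = checkPositiveOrZero(initialBytesToStrip, "initialBytesToStrip");
    }
}
```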
Motivation:
Make the @Sharable annotation work with anonymous inner types. Adding the Java 8 ElementType.TYPE_USE target makes it easy to use the @Sharable annotation in these cases.
Modification:
transport/src/main/java/io/netty/channel/ChannelHandler.java - Target ElementType.TYPE_USE added.
transport/src/main/java/io/netty/channel/ChannelHandlerAdapter.java - isSharable method improved to verify AnnotatedSuperclass for annotation.
transport/src/test/java/io/netty/channel/ChannelHandlerAdapterTest.java - Tests added.
Result:
ChannelInboundHandler handler = new @Sharable ChannelInboundHandlerAdapter() {
    @Override
    public void channelRead(ChannelHandlerContext context, Object message) {
        context.write(message);
    }
};
Note:
The following usages (local variable annotations) are not supported:
ChannelInboundHandler handler1 = new @Sharable ChannelInboundHandlerAdapter();
@Sharable ChannelInboundHandler handler2 = new ChannelInboundHandlerAdapter();
Fixes #7756
Motivation:
The DefaultChannelPipeline implementation can be cleaned up a bit, and so we can remove the need for AbstractChannelHandlerContext altogether.
Modifications:
- Merge DefaultChannelHandlerContext and AbstractChannelHandlerContext
- Remove some unnecessary fields
- Some other minor cleanup
Result:
Cleaner code.
Motivation:
In the past we allowed using different EventExecutors for different ChannelHandlers in the ChannelPipeline. This introduced a lot of complexity while not providing much gain. It also made the pipeline racy in terms of adding / removing handlers in some situations. This feature is not really used in the wild and can easily be achieved by the user offloading heavy logic to an Executor.
Modifications:
- Remove the ability to provide custom EventExecutor when adding handlers to the pipeline.
- Remove testcode that is not needed any more
- Ensure a handler is correctly visible in the pipeline when asked for by the user while not being used until the EventLoop runs. This ensures correct ordering and visibility.
- Correctly remove ChannelHandlers from pipeline when scheduling of handlerAdded(...) callbacks fail.
Result:
Remove races in DefaultChannelPipeline and simplify implementation of AbstractChannelHandlerContext.
Motivation:
We cache the Runnables for some tasks in 4 different fields to reduce GC pressure. This adds memory overhead in all cases, even if we always execute on the EventExecutor (which is the case most of the time).
Modifications:
Move the 4 fields to another class and only keep one reference to it in AbstractChannelHandlerContext. This adds a small overhead when execution happens outside of the EventExecutor but reduces the memory footprint in the more likely case.
Result:
Less memory used per AbstractChannelHandlerContext in most cases.
Motivation:
We can use lambdas instead of anonymous inner classes to improve readability
Modification:
Replace anonymous inner classes with lambdas
Result:
Cleaner code that uses Java8 features
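Illustrative before/after at a hypothetical call site:
```
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;

final class LambdaExample {
    static void writeThenClose(Channel ch, Object msg) {
        // Before:
        // ch.writeAndFlush(msg).addListener(new ChannelFutureListener() {
        //     @Override
        //     public void operationComplete(ChannelFuture future) {
        //         future.channel().close();
        //     }
        // });

        // After: the same listener expressed as a lambda.
        ch.writeAndFlush(msg).addListener((ChannelFutureListener) future -> future.channel().close());
    }
}
```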
Motivation:
As Netty 4.x supported Java 6, we had various if statements checking for Java versions < 8. We can remove these now.
Modification:
Remove unnecessary if statements that check for java versions < 8.
Result:
Cleanup code.
Motivation:
We need to update to a new checkstyle plugin to allow the usage of lambdas.
Modifications:
- Update to new plugin version.
- Fix checkstyle problems.
Result:
Be able to use checkstyle plugin which supports new Java syntax.
Motivation:
We need to release the message when we throw an IllegalArgumentException because of a promise validation failure, to eliminate the risk of a memory leak.
Modifications:
- Consistently release the message before rethrowing
- Add testcase.
Result:
Fixes https://github.com/netty/netty/issues/8765.
* Decouple EventLoop details from the IO handling for each transport to allow easy re-use of code and customization
Motivation:
Today, extending EventLoop implementations to add custom logic / metrics / instrumentation is only possible in a very limited way, if at all. This is due to the fact that most implementations are final or even package-private. Even if they were public, the ability to do something useful with them is very limited as the IO processing and task processing are very tightly coupled. All of this is a big pain point in Netty 4.x and needs improvement.
Modifications:
This changeset decouples the IO processing logic from the task processing logic for the main transports (NIO, Epoll, KQueue) by introducing the concept of an IoHandler. The IoHandler is responsible for waiting for IO readiness and processing the resulting IO events. The execution of the IoHandler itself is done by the SingleThreadEventLoop as part of its EventLoop processing. This allows the same EventLoopGroup (MultithreadEventLoopGroup) to be used for all the mentioned transports by just specifying a different IoHandlerFactory during construction (see the construction sketch at the end of this entry).
Besides this core API change, this changeset also makes it easy to extend SingleThreadEventExecutor / SingleThreadEventLoop to add custom logic which can then be reused by all the transports. The ideas are very similar to what ScheduledThreadPoolExecutor (part of the JDK) provides. This allows, for example, things like:
* Adding instrumentation / metrics:
* how many Channels are registered on a SingleThreadEventLoop
* how many Channels were handled during the IO processing in an EventLoop run
* how many tasks were handled during the last EventLoop / EventExecutor run
* how many outstanding tasks we have
...
* Implementing custom strategies for choosing the next EventExecutor / EventLoop to use based on these metrics.
* Use different Promise / Future / ScheduledFuture implementations
* Decorate Runnables / Callables when submitted to the EventExecutor / EventLoop
As a lot of functionality is folded into MultithreadEventLoopGroup and SingleThreadEventLoopGroup, this changeset also removes:
* AbstractEventLoop
* AbstractEventLoopGroup
* EventExecutorChooser
* EventExecutorChooserFactory
* DefaultEventLoopGroup
* DefaultEventExecutor
* DefaultEventExecutorGroup
Result:
Fixes https://github.com/netty/netty/issues/8514.
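A construction sketch of the new shape (the exact constructor arguments are an assumption; the point is that only the IoHandlerFactory differs per transport):
```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.epoll.EpollHandler;
import io.netty.channel.nio.NioHandler;

final class IoHandlerFactoryExample {
    // Same event loop group class for every transport; only the IoHandlerFactory changes.
    static EventLoopGroup newNioGroup() {
        return new MultithreadEventLoopGroup(NioHandler.newFactory());
    }

    static EventLoopGroup newEpollGroup() {
        return new MultithreadEventLoopGroup(EpollHandler.newFactory());
    }
}
```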
Motivation:
We should leave the responsibility to choose the EventExecutor for a ChannelHandler to the user for more flexibility and to keep things simple.
Modification:
- Change method signatures to take an EventExecutor and not an EventExecutorGroup
- Remove special ChannelOption that allowed to enable / disable EventExecutor pinning
Result:
Simpler and more flexible code.
Motivation:
Custom Netty ThreadLocalRandom and ThreadLocalRandomProvider classes are no longer needed and can be removed.
Modification:
Remove own ThreadLocalRandom
Result:
Less code to maintain
Motivation:
PlatformDependent.newConcurrentHashMap() is no longer needed, so it can be removed and new ConcurrentHashMap<>() used directly instead.
Modification:
Use ConcurrentHashMap provided by the JDK directly.
Result:
Less code to maintain.
Motivation:
We can use the diamond operator these days.
Modification:
Use diamond operator whenever possible.
Result:
More modern code and less boiler-plate.
Motivation:
Netty has its own Integer.compare and Long.compare implementations. Since Java 7 we can use the JDK implementations instead.
Modification:
Remove own implementation
Result:
Less code to maintain
Motivation:
Invoking ChannelHandlers is not free and can result in some overhead when the ChannelPipeline becomes very long. This is especially true if most handlers just forward the call to the next handler in the pipeline. When the user extends Channel*HandlerAdapter we can easily detect whether the handler can be skipped and the next handler in the pipeline invoked directly. This reduces the dispatch overhead and also shortens the call stack in many cases.
Modifications:
Detect if we can skip the handler when walking the pipeline.
Result:
Reduce overhead for long pipelines.
Benchmark (extraHandlers) Mode Cnt Score Error Units
DefaultChannelPipelineBenchmark.propagateEventOld 4 thrpt 10 267313.031 ± 9131.140 ops/s
DefaultChannelPipelineBenchmark.propagateEvent 4 thrpt 10 824825.673 ± 12727.594 ops/s
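Illustrative handler benefiting from the optimization (hypothetical): only channelRead is overridden, so all other events can bypass this handler entirely.
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

final class CountingHandler extends ChannelInboundHandlerAdapter {
    private long messages;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        messages++;
        ctx.fireChannelRead(msg);
    }
}
```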
Motivation:
While we are not yet quite sure if we want to require Java 11 as the minimum, we are at least sure we want to use Java 8 as the minimum.
Modifications:
Change the minimum version to Java 8 and update some tests which failed to compile after this change.
Result:
Use Java 8 as the minimum and be able to use Java 8 features.
Motivation:
testChannelInitializerEventExecutor() sometimes failed as we sometimes missed counting down the latch. This can happen when we remove the handler from the pipeline before channelUnregistered(...) was called for it.
Modifications:
Count down the latch in handlerRemoved(...).
Result:
Fix flaky test.
Motivation:
testWriteTaskRejected was racy as we did not ensure we dispatched all events to the executor before shutting it down.
Modifications:
Add a latch to ensure we dispatched everything.
Result:
Fix racy test that failed sometimes before.
Motivation:
Because of how we implemented the registration / deregistration of an EventLoop it was not possible to wrap an EventLoop implementation and use it with a Channel.
Modification:
- Introduce EventLoop.Unsafe which is responsible for the actual registration.
- Move validation of EventLoop / Channel combo to the EventLoop
- Add unit test that verifies that wrapping works
Result:
Be able to wrap an EventLoop and so add some extra functionality.
Motivation:
We should look up the Constructor of the passed-in class in the constructor of ReflectiveChannelFactory, both to reduce overhead and to fail fast.
Modifications:
Access the Constructor early.
Result:
Fail fast and reduce performance overhead.
Motivation:
At the moment it's possible to have a Channel in Netty that is not registered / assigned to an EventLoop until register(...) is called. This is suboptimal, as if the Channel is not registered it is also not possible to do anything useful with a ChannelFuture that belongs to the Channel. We should think about having the EventLoop as a constructor argument of a Channel and having the register / deregister methods only have the effect of adding a Channel to KQueue/Epoll/... It is also currently possible to deregister a Channel from one EventLoop and register it with another EventLoop. This operation defeats the threading model assumptions that are widespread in Netty, and requires careful user-level coordination to pull off without any concurrency issues. It is not a commonly used feature in practice, may be better handled by other means (e.g. client-side load balancing), and therefore we propose removing this feature.
Modifications:
- Change all Channel implementations to require an EventLoop for construction ( + an EventLoopGroup for all ServerChannel implementations)
- Remove all register(...) methods from EventLoopGroup
- Add ChannelOutboundInvoker.register(...) which now basically means we want to register on the EventLoop for IO.
- Change ChannelUnsafe.register(...) to not take an EventLoop as parameter (as the EventLoop is supplied on construction).
- Change ChannelFactory to take an EventLoop to create new Channels and introduce ServerChannelFactory which takes an EventLoop and one EventLoopGroup to create new ServerChannel instances.
- Add ServerChannel.childEventLoopGroup()
- Ensure all operations on the accepted Channel are done in the EventLoop of the Channel in ServerBootstrap
- Change unit tests for new behaviour
Result:
A Channel always has an EventLoop assigned which will never change during its life-time. This ensures we are always able to call any operation on the Channel once it is constructed (until the EventLoop is shut down). This also simplifies the logic in DefaultChannelPipeline a lot, as we can always call handlerAdded / handlerRemoved directly without the need to wait for register() to happen.
Also note that it's still possible to deregister a Channel and register it again. It's just not possible anymore to move from one EventLoop to another (which was not really safe anyway).
Fixes https://github.com/netty/netty/issues/8513.
Motivation:
We should remove the ChannelPool and related implementations. It is often the case that having protocol knowledge can result in more effective pooling and ChannelPool currently doesn’t have this knowledge. This responsibility is assumed to be implemented at layers higher in the stack than Netty.
Modifications:
Remove io.netty.channel.pool.*
Result:
Less code to maintain, fixes https://github.com/netty/netty/issues/8549.
Motivation:
Due to a race in DefaultChannelPipeline / AbstractChannelHandlerContext it was possible to have only handlerRemoved(...) called while tearing down the pipeline, even when handlerAdded(...) was never called. We need to ensure we either call both or none to guarantee a proper lifecycle of the handler.
Modifications:
- Enforce handlerAdded(...) / handlerRemoved(...) semantics / ordering
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8676 / https://github.com/netty/netty/issues/6536.
* Handling AUTO_READ should not be the responsibility of DefaultChannelPipeline but the Channel itself.
Motivation:
At the moment we automatically call read() in DefaultChannelPipeline when fireChannelReadComplete() / fireChannelActive() is called and the Channel is using auto-read. This is nice in terms of sharing code but, imho, is not the responsibility of the ChannelPipeline implementation but of the Channel implementation.
Modifications:
Move handling of auto-read from DefaultChannelPipeline to the Channel implementations.
Result:
Clearer responsibility and no dependence on implementation details of the ChannelPipeline.
Motivation:
executeAfterEventLoopIteration is an Unstable API and isn't used in Netty. We should remove it to reduce complexity.
Changes:
This reverts commit 77770374fb.
Result:
Simplify implementation / cleanup.
Motivation:
8331248671 made some changes to fix a race in ChannelInitializer when used with a custom EventExecutor. Unfortunately these were themselves a bit racy, so the testcase failed sometimes.
Modifications:
- More correct fix when using a custom EventExecutor
- Adjust the testcase to be more correct.
Result:
Proper fix for https://github.com/netty/netty/issues/8616.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
The ChannelInitializer may be invoked multiple times when used with a custom EventExecutor as the removal operation may be done asynchronously. We need to guard against this.
Modifications:
- Change Map to Set which is more correct in terms of how we use it.
- Ensure we only modify the internal Set if the handler has not been removed yet
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8616.
Motivation:
java.nio.channels.spi.AbstractSelectableChannel.register(...) needs to obtain multiple locks during execution, which may produce a long wait time if we are currently selecting. This led to multiple CI failures in the past.
Modifications:
Ensure the register call takes place on the EventLoop.
Result:
No more flaky CI test timeouts.
Motivation:
During benchmarks two methods showed up as "hot method too big". We can easily make these smaller by factoring out a less common code path into an extra method and so allow inlining.
Modifications:
Factor out the less common code path into an extra method.
Result:
Hot methods can be inlined.
Motivation:
Our HeadContext in DefaultChannelPipeline handles both inbound and outbound events but we only marked it as outbound. While this does not have any effect in the current code-base it can lead to problems when we change our internals (this is also how I found the bug).
Modifications:
Construct HeadContext so it is also marked as handling inbound.
Result:
More correct code.
Motivation:
This transport is unique because it uses Java's blocking IO (java.io / java.net) under the hood. However it is not clear if this transport is actually useful so it should be removed.
Modifications:
- Remove OIO transport and RXTX transport which depend on it.
- Remove Oio*Sctp* implementations
- Remove PerThreadEventLoop* which was only used by OIO transport.
Result:
Fixes https://github.com/netty/netty/issues/8510.
Motivation:
We plan to remove the OIO based transports in Netty 5 so we should mark these as deprecated already.
Modifications:
Mark all OIO based transports as deprecated.
Result:
Give the user a heads-up for removal.
Motivation:
When the Selector throws an IOException during our EventLoop processing we should rebuild it and transfer the registered Channels. At the moment we will continue trying to use it which will never work.
Modifications:
- Rebuild Selector when an IOException is thrown during any select*(...) methods.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8566.
Motivation:
Besides the error caused by closing a socket on Windows, a bunch of other errors may happen at this place which won't be logged at all. For instance, any VirtualMachineError such as OutOfMemoryError will simply be ignored. The library should at least log the problem.
Modification:
Added logging of the throwable object.
Result:
Fixes #8499.
Motivation:
A racy UnsupportedOperationException can be thrown because task removal is delegated to MpscChunkedArrayQueue, which does not support removal. This happens with SingleThreadEventExecutor subclasses that override newTaskQueue to return an MPSC queue instead of the LinkedBlockingQueue returned by the base class, such as NioEventLoop, EpollEventLoop and KQueueEventLoop.
Modifications:
- Catch the UnsupportedOperationException
- Add unit test.
Result:
Fixes #8475
Motivation:
There are currently many more places where this could be used which were
possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding a copy
of the backing array when possible (although that would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
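Illustrative before/after for a temporary buffer at a hypothetical call site:
```
import io.netty.util.internal.PlatformDependent;

final class TempArrayExample {
    static byte[] newTempBuffer(int length) {
        // Before: return new byte[length];
        // After: skips zeroing on Java 9+ when the appropriate compile flags are set,
        // and falls back to a plain new byte[length] otherwise.
        return PlatformDependent.allocateUninitializedArray(length);
    }
}
```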
Motivation:
It has been shown that the test timeout used may be too low when the CI is busy.
Modifications:
Increase timeout to 3 seconds.
Result:
Less false-positives.
Motivation:
Currently we may end up in a situation where we increment the pending bytes before submitting the AbstractWriteTask but never decrement them again if submitting the task fails. This may result in incorrect watermark handling.
Modifications:
- Correctly decrement the pending bytes if submitting the task fails and also ensure we recycle the task correctly.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8343.
Motivation:
Unless the 'io.netty.noKeySetOptimization' system property is set,
registering a SelectableChannel instance to a NioEventLoop results
in a ClassCastException:
io.netty.channel.nio.SelectedSelectionKeySetSelector cannot be cast
to java.nio.channels.spi.AbstractSelector
Modifications:
Instead of 'selector', pass 'unwrappedSelector' to SelectableChannel.
Result:
It is possible to register a SelectableChannel instance without
setting the 'io.netty.noKeySetOptimization' system property.
Motivation:
Add an option (through a SelectStrategy return code) to have the Netty event loop thread busy-wait on the epoll.
The reason for this change is to avoid the context switch cost that comes when the event loop thread is blocked on the epoll_wait() call.
On average, the context switch has a penalty of ~13usec.
This benefits both:
- the latency when reading from a socket
- scheduling tasks to be executed on the event loop thread
The tradeoff, when enabling this feature, is that the event loop thread will be using 100% cpu, even when inactive.
Modification:
- Added a SelectStrategy option to return BUSY_WAIT
- The epoll loop will do an epoll_wait() with no timeout
- Use the pause instruction to hint to the processor that we're in a busy loop
Result:
When enabled, minimizes impact of context switch in the critical path
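A sketch of how the option could be enabled through a custom SelectStrategy (illustrative; the factory would be supplied to the epoll event loop group at construction time):
```
import io.netty.channel.SelectStrategy;
import io.netty.channel.SelectStrategyFactory;
import io.netty.util.IntSupplier;

final class BusyWaitSelectStrategyFactory implements SelectStrategyFactory {
    @Override
    public SelectStrategy newSelectStrategy() {
        return new SelectStrategy() {
            @Override
            public int calculateStrategy(IntSupplier selectSupplier, boolean hasTasks) throws Exception {
                // Never block in epoll_wait(): run pending tasks if there are any,
                // otherwise ask the event loop to busy-wait.
                return hasTasks ? selectSupplier.get() : SelectStrategy.BUSY_WAIT;
            }
        };
    }
}
```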