Motivation:
It was noticed that SimpleChannelPool's POOL_KEY attribute name, channelPool, easily conflicts with user code and throws an exception: 'channelPool' is already in use. Being a generic framework, it would be great if we could give the attribute a unique name - maybe use a UUID, since the name is not required later.
Modifications:
This change makes sure that the POOL_KEY used inside SimpleChannelPool is unique by appending the object hash code to the name.
Result:
No unwanted channel attribute name conflict with user code.
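A minimal sketch of the idea (the exact name format here is illustrative, not necessarily the committed code):
```
// Illustrative: derive a per-instance attribute name so two pools (or user
// code using "channelPool") can never collide on the same AttributeKey.
private final AttributeKey<SimpleChannelPool> poolKey =
        AttributeKey.newInstance("channelPool." + System.identityHashCode(this));
```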
Motivation:
Following up on discussion with @normanmaurer with suggestion to improve code clarity.
Modification:
Method is synchronized, no need for assert or verbose sync blocks around calls.
Result:
Reduced verbosity and more idiomatic use of the keyword. Also renamed the method to better describe what it's for.
Motivation:
We need to update the doubly-linked list nodes while holding a lock via synchronized in all cases as otherwise we may end up with a corrupted pipeline. We missed this when calling remove0(...) due to handlerAdded(...) throwing an exception.
Modifications:
- Correctly hold the lock while updating nodes
- Add assert
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9528
Motivation:
There are some extra log level checks (logger.isWarnEnabled()).
Modification:
Remove log level checks (logger.isWarnEnabled()) from io.netty.channel.epoll.AbstractEpollStreamChannel, io.netty.channel.DefaultFileRegion, io.netty.channel.nio.AbstractNioChannel, io.netty.util.HashedWheelTimer, io.netty.handler.stream.ChunkedWriteHandler and io.netty.channel.udt.nio.NioUdtMessageConnectorChannel
Result:
Fixes #9456
Motivation
The epoll transport was updated in #7834 to decouple setting of the
timerFd from the event loop, so that scheduling delayed tasks does not
require waking up epoll_wait. To achieve this, new overridable hooks
were added in the AbstractScheduledEventExecutor and
SingleThreadEventExecutor superclasses.
However, the minimumDelayScheduledTaskRemoved hook has no current
purpose and I can't envisage a _practical_ need for it. Removing
it would reduce complexity and avoid supporting this specific
API indefinitely. We can add something similar later if needed
but the opposite is not true.
There also isn't a _nice_ way to use the abstractions for
wakeup-avoidance optimizations in other EventLoops that don't have a
decoupled timer.
This PR replaces executeScheduledRunnable and
wakesUpForScheduledRunnable
with two new methods before/afterFutureTaskScheduled that have slightly
different semantics:
- They only apply to additions; given the current internals there's no
practical use for removals
- They allow per-submission wakeup decisions via a boolean return val,
which makes them easier to exploit from other existing EL impls (e.g.
NIO/KQueue)
- They are subjectively "cleaner", taking just the deadline parameter
and not exposing Runnables
- For current EL/queue impls, only the "after" hook is really needed,
but specialized blocking queue impls can conditionally wake on task
submission (I have one lined up)
Also included are further optimization/simplification/fixes to the
timerFd manipulation logic.
Modifications
- Remove AbstractScheduledEventExecutor#minimumDelayScheduledTaskRemoved()
and supporting methods
- Uplift NonWakeupRunnable and corresponding default wakesUpForTask()
impl from SingleThreadEventLoop to SingleThreadEventExecutor
- Change executeScheduledRunnable() to be package-private, and have a
final impl in SingleThreadEventExecutor which triggers new overridable
hooks before/afterFutureTaskScheduled()
- Remove unnecessary use of bookend tasks while draining the task queue
- Use new hooks to add simpler wake-up avoidance optimization to
NioEventLoop (primarily to demonstrate utility/simplicity)
- Reinstate removed EpollTest class
In EpollEventLoop:
- Refactor to use only the new afterFutureTaskScheduled() hook for
updating timerFd
- Fix setTimerFd race condition using a monitor
- Set nextDeadlineNanos to a negative value while the EL is awake and
use this to block timer changes from outside the EL. Restore the
known-set value prior to sleeping, updating timerFd first if necessary
- Don't read from timerFd when processing expiry event
Result
- Cleaner API for integrating with different EL/queue timing impls
- Fixed race condition to avoid missing scheduled wakeups
- Eliminate unnecessary timerFd updates while EL is awake, and
unnecessary expired timerFd reads
- Avoid unnecessary scheduled-task wakeups when using NIO transport
I did not yet further explore the suggestion of using
TFD_TIMER_ABSTIME for the timerFd.
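For orientation, the shape of the two hooks as described (names taken from this PR's text; the default bodies are illustrative):
```
// Return value decides whether the event loop must be woken up for the newly
// scheduled deadline; subclasses with a decoupled timer (epoll) or a
// wakeup-aware queue can override one of these.
protected boolean beforeFutureTaskScheduled(long deadlineNanos) {
    return true; // default: a wakeup is required
}

protected boolean afterFutureTaskScheduled(long deadlineNanos) {
    return true;
}
```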
Motivation:
In AbstractBootstrap, options and attrs are LinkedHashMaps that are synchronized on for every read, copy/clone, and write operation.
When a lot of connections are triggered concurrently on the same bootstrap instance, the synchronized blocks lead to contention, Netty IO threads get blocked, and performance may be severely degraded.
Modifications:
Use ConcurrentHashMap
Result:
Less contention. Fixes https://github.com/netty/netty/issues/9426
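A sketch of the shape of the change, assuming the existing option(...) contract where a null value unsets the option:
```
// Lock-free map instead of a LinkedHashMap guarded by synchronized blocks.
// Note: ConcurrentHashMap rejects null values, so null must become remove().
private final Map<ChannelOption<?>, Object> options = new ConcurrentHashMap<ChannelOption<?>, Object>();

public <T> B option(ChannelOption<T> option, T value) {
    if (option == null) {
        throw new NullPointerException("option");
    }
    if (value == null) {
        options.remove(option);
    } else {
        options.put(option, value);
    }
    return self();
}
```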
Motivation:
EPOLL supports decoupling the timed wakeup mechanism from the selector call. The EPOLL transport takes advantage of this in order to offer more fine-grained timer resolution. However we are currently calling timerfd_settime on each call to epoll_wait, and this is expensive. We don't have to re-arm the timer on every call to epoll_wait and instead only have to arm the timer when a task is scheduled with an earlier expiration than any other existing scheduled task.
Modifications:
- Before scheduled tasks are added to the task queue, we determine if the new
duration is the soonest to expire, and if so update with timerfd_settime. We
also drain all the tasks at the end of the event loop to make sure we service
any expired tasks and get an accurate next time delay.
- EpollEventLoop maintains a volatile variable which represents the next deadline to expire. This variable is modified inside the event loop thread (before calling epoll_wait) and outside the event loop thread (immediately, to ensure proper wakeup time).
- Execute the task queue before the schedule task priority queue. This means we
may delay the processing of scheduled tasks but it ensures we transfer all
pending tasks from the task queue to the scheduled priority queue to run the
soonest to expire scheduled task first.
- Deprecate IORatio on EpollEventLoop, and drain the executor and scheduled queue on each event loop wakeup. Coupling the amount of time we are allowed to drain the executor queue to a proportion of time we process inbound IO may lead to unbounded queue sizes and unpredictable latency.
Result:
Fixes https://github.com/netty/netty/issues/7829
- In most cases this results in fewer calls to timerfd_settime
- Fewer event loop wakeups just to check for scheduled tasks executed outside the event loop
- More predictable executor queue and scheduled task queue draining
- More accurate and responsive scheduled task execution
Motivation:
It looks like `EmbeddedChannelPipeline` should also override `onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg)` so that it does not print "Discarded message pipeline", because in the case of `EmbeddedChannelPipeline` no discarding actually happens.
This fixes the following warning in the latest netty version with websocket and `WebSocketServerCompressionHandler`:
```
13:36:36.231 DEBUG- Decoding WebSocket Frame opCode=2
13:36:36.231 DEBUG- Decoding WebSocket Frame length=5
13:36:36.231 DEBUG- Discarded message pipeline : [JdkZlibDecoder#0, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xembedded, L:embedded - R:embedded].
```
Modification:
Override correct method
Result:
Follow up fix after https://github.com/netty/netty/pull/9286
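The override presumably amounts to delegating straight to the single-argument variant, so the message still reaches the embedded inbound queue but without the log line; a sketch:
```
// Inside EmbeddedChannelPipeline: EmbeddedChannel queues unhandled inbound
// messages rather than discarding them, so skip the debug log entirely.
@Override
protected void onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg) {
    onUnhandledInboundMessage(msg);
}
```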
Motivation:
On servers with many pipelines or dynamic pipelines, it is easy for end users to make mistakes during pipeline configuration. The current message:
`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`
is not always meaningful and doesn't allow finding the misconfigured pipeline quickly.
Modification:
Added an additional log placeholder that identifies the pipeline handlers and channel info. This will allow end users to quickly find the problematic pipeline.
Result:
Meaningful warning when the message reaches the end of the pipeline. Fixes #7285
Motivation:
We observed some test failures in the CI which happened if sc.close() was not completed before the next test ran. If this happened the bind(...) would fail as the LocalAddress was still in use.
Modifications:
Await the close before return
Result:
Fixes a race in the testsuite which resulted in FixedChannelPoolTest.testAcquireNewConnection failing if FixedChannelPoolTest.testCloseAsync() ran before it.
Motivation:
Currently GraalVM substrate returns null for reflective calls if the reflection access is not declared up front.
A change introduced in Netty 4.1.35 results in needing to register every Netty handler for reflection. This complicates matters as it is difficult to know all the possible handlers that need to be registered.
Modification:
This change adds a simple
null check such that Netty does not break on GraalVM substrate without the reflection information registration.
Result:
Fixes #9278
Motivation:
c9aaa93d83 added the ability to specify an EventLoopTaskQueueFactory but placed it under MultithreadEventLoopGroup, where it does not really belong.
Modifications:
Make EventLoopTaskQueueFactory a top-level interface
Result:
More logical code layout.
Motivation:
Sometimes it is desirable to be able to use a different Queue implementation for the EventLoop of a Channel. This is currently not possible without resorting to reflection.
Modifications:
- Add a new constructor to Nio|Epoll|KQueueEventLoopGroup which allows specifying a factory that is used to create the task queue. This way the user can override the default implementation.
- Add test
Result:
Be able to change Queue that is used for the EventLoop.
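For illustration, the factory's single method receives the maximum capacity and returns any Queue implementation; the MPSC queue below is just an example choice:
```
// imports omitted: EventLoopTaskQueueFactory's package differs across 4.1.x
EventLoopTaskQueueFactory queueFactory = new EventLoopTaskQueueFactory() {
    @Override
    public Queue<Runnable> newTaskQueue(int maxCapacity) {
        // swap in any Queue implementation; here a JCTools-backed MPSC queue
        return io.netty.util.internal.PlatformDependent.newMpscQueue(maxCapacity);
    }
};
```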
Motivation:
In the current implementation, the synchronous close() method for FixedChannelPool returns
after scheduling the channels to close via a single threaded executor asynchronously. Closing a channel
requires the event loop group; however, there might be a scenario where the application has closed
the event loop group after the sync close() completes. In this scenario an exception is thrown
(the event loop rejected the execution) when the single threaded executor tries to close the channel.
Modifications:
Complete the close function only after all the channels have been closed, and introduce a
closeAsync() method for cases where the current/existing behaviour is desired.
Result:
The close function completes only after the channels have been closed.
Motivation:
Sometimes it is beneficial to be able to set a parent Channel in EmbeddedChannel if the handler that should be tested depends on the parent.
Modifications:
- Add another constructor which allows to specify a parent
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9228.
Motivation:
When Netty is run through ProGuard, seemingly unused methods are removed. This breaks reflection, making the Handler skipping throw a reflective error.
Modification:
If a method is seemingly absent, just disable the optimization.
Result:
Dealing with ProGuard sucks infinitesimally less.
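A hedged sketch of the guard; the helper and the Skip annotation stand in for Netty's internal skip-detection, and the exact patch may differ:
```
// If ProGuard stripped the "unused" adapter method, getMethod() fails with
// NoSuchMethodException; treat that as "cannot skip" instead of crashing.
private static boolean isSkippable(Class<?> handlerType, String methodName, Class<?>... paramTypes) {
    try {
        return handlerType.getMethod(methodName, paramTypes).isAnnotationPresent(Skip.class);
    } catch (NoSuchMethodException e) {
        return false; // method absent: just disable the optimization
    }
}
```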
Motivation:
When initializing the AnnotatedSocketException in AbstractChannel, both
the cause and the stack trace are set, leaving a trailing "Caused By"
that is compressed when printing the trace.
Modification:
Don't include the stack trace in the exception, but leave it in the cause.
Result:
Clearer stack trace
Motivation:
An OOME can occur through an ever-growing suppressedExceptions list when other libraries call Throwable#addSuppressed on our static exception instances. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.
Modifications:
Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.
Result:
Not possible to OOME because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
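For reference, since Java 7 an exception can opt out of suppression via the protected Throwable constructor, which turns addSuppressed() into a no-op on the shared instance; a minimal sketch:
```
// Safe to share statically: suppression and stack trace are both disabled,
// so third-party addSuppressed() calls cannot accumulate memory.
final class StaticException extends Exception {
    StaticException(String message) {
        super(message, null, false, false); // enableSuppression=false, writableStackTrace=false
    }
}
```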
Motivation:
Clean up the code by replacing all null-check logic with the ObjectUtil utility class in the bootstrap package
Modification:
Replace all logic that checks null with the ObjectUtil utility class
Result:
Less verbose code.
Motivation:
SimpleChannelPool provides ability to provide custom callbacks/handlers
on major events such as "channel acquired", "channel created" and
"channel released". In the current implementation, when a request to
acquire a channel is made for the first time, the internal channel pool
creates the channel lazily. This triggers the "channel created" callback
but does not invoke the "channel acquired" callback. This is contrary to
the caller's expectation that "channel acquired" will be invoked
at the end of every successful acquire call. It also leads to an
inconsistent API experience where the acquired callback is sometimes
invoked and sometimes isn't, depending on whether the internal
mechanism is creating a new channel or re-using an existing one.
Modifications:
Invoke the acquired callback consistently, even when creating a new channel,
and modify the tests to support this behaviour
Result:
Consistent experience for the caller of acquire API. Every time they
call the API, the acquired callback will be invoked.
Motivation:
GraalVM native images are a new way to deliver Java applications. Netty is one of the most popular libraries, however there are a few limitations that make it impossible to use with native images out of the box. Adding a bit of metadata in specific modules will allow the compilation to succeed and produce working binaries.
Modification:
Added properties files in `META-INF` and substitution classes (under `internal.svm`) to solve the compilation issues. The substitution classes are not visible and do not have a public constructor, so they are not visible to end users.
Result:
Fixes #8959
This fix is very conservative as it applies the minimum config required to build:
* pure netty servers
* vert.x applications
* grpc applications
The build is having trouble due to checkstyle which does not seem to be able to find the copyright notice on property files.
Motivation
The optimization in #8988 didn't correctly handle the specific case
where the channel hasDisconnect == false, and a
ChannelOutboundHandlerAdapter subclass overrides only the close(ctx,
promise) method without also overriding the disconnect(ctx, promise)
method.
Modifications
Adjust the AbstractChannelHandlerContext.disconnect(...) method to divert to
close(...) in the !hasDisconnect case before computing the target context for
the event.
Result
Fixes #9092
Motivation:
When a Channel was closed its isActive() method must return false.
Modifications:
First check isOpen() before isBound(), as isBound() will continue to return true even after the underlying fd was closed.
Result:
Fixes https://github.com/netty/netty/issues/9026.
Motivation:
IdleStateHandler may trigger unexpected idle events when flushing large entries to slow clients.
Modification:
In Netty's design, we check the identity hash code and total pending write bytes of the current flush entry to determine whether there is a change in output. But if a large entry is being flushed slowly (for some reason: the network is slow, or the client processes too slowly, causing the TCP sliding window to drop to zero), the total pending write bytes and the identity hash code remain unchanged.
Avoid this issue by adding a check on the flush progress of the current entry.
Result:
Fixes #8912.
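A sketch of the extra check, assuming the surrounding IdleStateHandler state (the last* values being what was recorded on the previous check, and currentProgress() the per-entry accessor this change relies on):
```
// A slowly draining large entry keeps the identity hash code and the
// pending-byte total stable, but its per-entry flush progress still advances.
ChannelOutboundBuffer buf = ctx.channel().unsafe().outboundBuffer();
if (buf != null) {
    int messageHashCode = System.identityHashCode(buf.current());
    long pendingWriteBytes = buf.totalPendingWriteBytes();
    long flushProgress = buf.currentProgress();
    boolean outputChanged = messageHashCode != lastMessageHashCode
            || pendingWriteBytes != lastPendingWriteBytes
            || flushProgress != lastFlushProgress;
}
```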
Motivation:
Invoking ChannelHandlers is not free and can result in some overhead when the ChannelPipeline becomes very long. This is especially true if most handlers just forward the call to the next handler in the pipeline. When the user extends Channel*HandlerAdapter we can easily detect if we can skip the handler and invoke the next handler in the pipeline directly. This reduces the overhead of dispatch but also reduces the call-stack depth in many cases.
This backports https://github.com/netty/netty/pull/8723 and https://github.com/netty/netty/pull/8987 to 4.1
Modifications:
Detect if we can skip the handler when walking the pipeline.
Result:
Reduce overhead for long pipelines.
Benchmark                                          (extraHandlers)   Mode  Cnt       Score       Error  Units
DefaultChannelPipelineBenchmark.propagateEventOld                4  thrpt   10  267313.031 ±  9131.140  ops/s
DefaultChannelPipelineBenchmark.propagateEvent                   4  thrpt   10  824825.673 ± 12727.594  ops/s
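One way to picture the detection (this sketch checks the declaring class via reflection; the actual change caches a per-class mask instead):
```
// If channelRead(...) was not overridden below the adapter, the handler would
// only forward the event, so the pipeline may skip it and invoke the next
// handler directly.
static boolean canSkipChannelRead(ChannelHandler handler) throws NoSuchMethodException {
    return handler.getClass()
            .getMethod("channelRead", ChannelHandlerContext.class, Object.class)
            .getDeclaringClass() == ChannelInboundHandlerAdapter.class;
}
```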
Motivation:
Deprecate ChannelOption.newInstance(...) as it is not used.
Modifications:
Deprecate ChannelOption.newInstance(...) as valueOf(...) should be used as a replacement.
Result:
Fixes https://github.com/netty/netty/issues/8983.
Motivation:
0a0da67f43 introduced a testcase which is flaky. We need to fix it and enable it again.
Modifications:
Mark the flaky test as ignored.
Result:
No more flaky builds.
Motivation:
https://github.com/netty/netty/pull/8826 added @Deprecated to the exceptionCaught(...) method but we missed adding @SuppressWarnings(...) to its sub-types. Besides this, we can make the deprecation docs a bit clearer.
Modifications:
- Add @SuppressWarnings("deprecation")
- Clarify docs.
Result:
Fewer warnings and clearer deprecation docs.
Motivation:
Systems depending on Netty may benefit (telemetry, alternative event loop scheduling algorithms) from knowing the number of channels assigned to each EventLoop.
Modification:
Expose the number of channels registered in the EventLoop via SingleThreadEventLoop.registeredChannels.
Result:
Fixes #8276.
Motivation:
It appears this was an oversight; maybe it was valid at some point in the past. Noticed while reviewing #8958.
Modifications:
Change AbstractChannelHandlerContext to not extend DefaultAttributeMap.
Result:
Simpler hierarchy, eliminate unused attributes field from each context instance.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor.
- Check that methods are called from within the EventExecutor thread and fail if not
- Correctly dispatch on the right EventExecutor if a Future uses a different EventExecutor, to eliminate concurrency issues.
Result:
Safer use of PromiseCombiner + enforcement of correct usage / contract.
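Usage then looks like this; the combiner is bound to the executor passed in, and all calls must happen on that executor's thread:
```
// must run on channel.eventLoop()
PromiseCombiner combiner = new PromiseCombiner(channel.eventLoop());
combiner.add(channel.write(first));
combiner.add(channel.write(second));
ChannelPromise aggregate = channel.newPromise();
combiner.finish(aggregate); // completes once both writes complete
```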
Motivation:
`DefaultFileRegion.transferTo` will return 0 all the time when we request more data than the actual file size. This may result in a busy spin while processing the FileRegion during writes.
Modifications:
- If we wrote 0 bytes, check if the underlying file size is smaller than the requested count and if so throw an IOException
- Add DefaultFileRegionTest
- Add a test to the testsuite
Result:
Fixes https://github.com/netty/netty/issues/8868.
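The guard amounts to something like this sketch, assuming the surrounding DefaultFileRegion state (file, position, count, transferred):
```
long written = file.transferTo(position + transferred, count - transferred, target);
if (written == 0 && transferred < count && file.size() < position + count) {
    // nothing transferred although bytes remain: the requested count exceeds
    // the actual file size, so fail instead of busy-spinning on transferTo()
    throw new IOException("Underlying file size " + file.size()
            + " smaller than requested count " + count);
}
```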
Motivation:
To make it easier to understand why a Channel was closed previously, and so why an operation failed with a ClosedChannelException, we should include the original Exception.
Modifications:
- Store the original exception that led to the closed Channel and include it in the ClosedChannelException that is used to fail the operation.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/8862.
Motivation:
41e03adf24 marked ChannelHandler.exceptionCaught(...) as @deprecated but missed also marking ChannelHandlerAdapter.exceptionCaught(...) as @deprecated. We should do so as most people extend the base classes rather than implementing the interfaces directly.
Modifications:
Mark ChannelHandlerAdapter.exceptionCaught(...) as @deprecated as well.
Result:
Mark method as @deprecated to warn users about its removal.
Motivation:
We have utility methods to check for > 0 and >= 0 arguments. We should use them.
Modification:
Use checkPositive/checkPositiveOrZero instead of if statements.
Result:
Re-use utility method.
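For example:
```
import static io.netty.util.internal.ObjectUtil.checkPositive;

// before: if (writeSpinCount <= 0) { throw new IllegalArgumentException(...); }
this.writeSpinCount = checkPositive(writeSpinCount, "writeSpinCount");
```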
Motivation:
We cache the Runnables for some tasks in 4 different fields to reduce GC pressure. This adds memory overhead in all cases, even if we always execute on the EventExecutor (which is the case most of the time).
Modifications:
Move the 4 fields to another class and keep only one reference to it in AbstractChannelHandlerContext. This adds a small overhead when execution happens outside of the EventExecutor but reduces the memory footprint in the more likely execution case.
Result:
Less memory used per AbstractChannelHandlerContext in most cases.
Motivation:
We need to update to a new checkstyle plugin to allow the usage of lambdas.
Modifications:
- Update to new plugin version.
- Fix checkstyle problems.
Result:
Be able to use checkstyle plugin which supports new Java syntax.
Motivation:
We need to release the message when we throw an IllegalArgumentException because of a validation failure of the promise to eliminate the risk of a memory leak.
Modifications:
- Consistently release the message before rethrow
- Add testcase.
Result:
Fixes https://github.com/netty/netty/issues/8765.
Motivation:
testChannelInitializerEventExecutor() sometimes failed as we sometimes missed counting down the latch. This can happen when we remove the handler from the pipeline before channelUnregistered(...) was called for it.
Modifications:
Count down the latch in handlerRemoved(...).
Result:
Fix flaky test.
Motivation:
testWriteTaskRejected was racy as we did not ensure we dispatched all events to the executor before shutting it down.
Modifications:
Add a latch to ensure we dispatched everything.
Result:
Fix racy test that failed sometimes before.
Motivation:
We should look up the Constructor of the passed-in class in the constructor of ReflectiveChannelFactory, both to reduce overhead and to fail fast.
Modifications:
Access the Constructor early.
Result:
Fails fast, with less performance overhead.
Motivation:
Due to a race in DefaultChannelPipeline / AbstractChannelHandlerContext it was possible to have only handlerRemoved(...) called while tearing down the pipeline, even when handlerAdded(...) was never called. We need to ensure we call either both or none to guarantee a proper lifecycle of the handler.
Modifications:
- Enforce handlerAdded(...) / handlerRemoved(...) semantics / ordering
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8676 / https://github.com/netty/netty/issues/6536 .
Motivation:
8331248671 made some changes to fix a race in ChannelInitializer when used with a custom EventExecutor. Unfortunately these were themselves a bit racy, and so the testcase failed sometimes.
Modifications:
- A more correct fix when using a custom EventExecutor
- Adjust the testcase to be more correct.
Result:
Proper fix for https://github.com/netty/netty/issues/8616.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
The ChannelInitializer may be invoked multiple times when used with a custom EventExecutor, as the removal operation may be done asynchronously. We need to guard against this.
Modifications:
- Change Map to Set which is more correct in terms of how we use it.
- Ensure we only modify the internal Set if the handler was not removed yet
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8616.
Motivation:
java.nio.channels.spi.AbstractSelectableChannel.register(...) needs to obtain multiple locks during execution, which may produce a long wait if we are currently selecting. This led to multiple CI failures in the past.
Modifications:
Ensure the register call takes place on the EventLoop.
Result:
No more flaky CI test timeouts.
Motivation:
During benchmarks two methods showed up as "hot method too big". We can easily make these smaller by factoring out some less common code-paths into extra methods and so allow inlining.
Modifications:
Factor out less common code paths into extra methods.
Result:
Hot methods can be inlined.
Motivation:
Our HeadContext in DefaultChannelPipeline handles inbound and outbound but we only marked it as outbound. While this does not have any effect in the current code-base, it can lead to problems when we change our internals (this is also how I found the bug).
Modifications:
Construct HeadContext so it is also marked as handling inbound.
Result:
More correct code.
Motivation:
31fd66b617 added @Deprecated to some classes but also to the package-info.java files. IntelliJ does not like having these annotations in package-info.java.
Modifications:
Remove annotation from package-info.java
Result:
Be able to compile via IntelliJ
Motivation:
We plan to remove the OIO based transports in Netty 5 so we should mark these as deprecated already.
Modifications:
Mark all OIO based transports as deprecated.
Result:
Give the user a heads-up for removal.
Motivation:
When the Selector throws an IOException during our EventLoop processing we should rebuild it and transfer the registered Channels. At the moment we will continue trying to use it which will never work.
Modifications:
- Rebuild Selector when an IOException is thrown during any select*(...) methods.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8566.
Motivation:
Besides an error caused by closing the socket on Windows, a bunch of other errors may happen at this place which won't be logged at all. For instance any VirtualMachineError such as OutOfMemoryError will simply be ignored. The library should at least log the problem.
Modification:
Added logging of the throwable object.
Result:
Fixes #8499.
Motivation:
There is a racy UnsupportedOperationException because task removal is delegated to MpscChunkedArrayQueue, which does not support removal. This happens with SingleThreadEventExecutor subclasses that override newTaskQueue to return an MPSC queue instead of the LinkedBlockingQueue returned by the base class, such as NioEventLoop, EpollEventLoop and KQueueEventLoop.
Modifications:
- Catch the UnsupportedOperationException
- Add unit test.
Result:
Fixes #8475
Motivation:
There are currently many more places where this could be used which were
possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding a copy of
the backing array if possible (although that would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
Motivation:
It has been shown that the test timeout used may be too low when the CI is busy.
Modifications:
Increase timeout to 3 seconds.
Result:
Less false-positives.
Motivation:
Currently we may end up in a situation where we incremented the pending bytes before submitting the AbstractWriteTask but never decrement them again if submitting the task fails. This may result in incorrect watermark handling.
Modifications:
- Correctly decrement the pending bytes if submitting the task fails and also ensure we recycle the task correctly.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8343.
Motivation:
Unless the 'io.netty.noKeySetOptimization' system property is set,
registering a SelectableChannel instance to a NioEventLoop results
in a ClassCastException:
io.netty.channel.nio.SelectedSelectionKeySetSelector cannot be cast
to java.nio.channels.spi.AbstractSelector
Modifications:
Instead of 'selector', pass 'unwrappedSelector' to SelectableChannel.
Result:
It is possible to register a SelectableChannel instance without
setting the 'io.netty.noKeySetOptimization' system property.
Motivation:
Add an option (through a SelectStrategy return code) to have the Netty event loop thread busy-wait on epoll.
The reason for this change is to avoid the context switch cost that comes when the event loop thread is blocked on the epoll_wait() call.
On average, the context switch has a penalty of ~13usec.
This benefits both:
- The latency when reading from a socket
- Scheduling tasks to be executed on the event loop thread.
The tradeoff, when enabling this feature, is that the event loop thread will be using 100% cpu, even when inactive.
Modification:
- Added a SelectStrategy option to return BUSY_WAIT
- The epoll loop will do an epoll_wait() with no timeout
- Use the pause instruction to hint to the processor that we're in a busy loop
Result:
When enabled, minimizes impact of context switch in the critical path
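Opting in means supplying a SelectStrategyFactory whose strategy returns the new BUSY_WAIT code (honoured by the epoll transport); a minimal sketch:
```
import io.netty.channel.SelectStrategy;
import io.netty.channel.SelectStrategyFactory;
import io.netty.util.IntSupplier;

final class BusyWaitSelectStrategyFactory implements SelectStrategyFactory {
    @Override
    public SelectStrategy newSelectStrategy() {
        return new SelectStrategy() {
            @Override
            public int calculateStrategy(IntSupplier selectSupplier, boolean hasTasks) throws Exception {
                // poll without blocking when tasks are pending, otherwise spin
                return hasTasks ? selectSupplier.get() : SelectStrategy.BUSY_WAIT;
            }
        };
    }
}
```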
Motivation:
In Java 8 and earlier we used reflection to replace the used key set if not otherwise told. This does not work on Java 9 and later without special flags, as it is not possible to call setAccessible(true) on the Field anymore.
Modifications:
- Use Unsafe to instrument the Selector with our special set when sun.misc.Unsafe is present and we are using Java 9+.
Result:
The NIO transport produces less GC on Java 9 and later as well.
Motivation:
We need to implement remove() ourselves to make it work on Java 7, as otherwise it will throw an AbstractMethodError. This is a followup to c1a335446d.
Modifications:
Just implemented remove()
Result:
Works on Java7 as well.
Motivation:
c1a335446d reimplemented remove(...) and contains(...) in a way which made them no longer work when used by the Selector.
Modifications:
Partly revert changes in c1a335446d.
Result:
Works again as expected
Motivation:
Our SelectedSelectionKeySet does not implement various methods which could be implemented without any performance overhead.
Modifications:
Implement iterator(), contains(...) and remove(...)
Result:
Related to https://github.com/netty/netty/issues/8242.
Motivation:
People are sometimes confused about what to do to replace setMaxMessagesPerRead(...).
Modifications:
Add some more details to the javadocs about the correct replacement.
Result:
Related to https://github.com/netty/netty/issues/8214.
Motivation:
We had a report that the exception may not be correctly propagated. This test shows it is.
Modifications:
Add testcase.
Result:
Test for https://github.com/netty/netty/issues/8158
Motivation:
There is a JDK bug which will return IP_TOS as a supported option for ServerSocketChannel even if it is not actually supported, which afterwards causes an AssertionError.
See http://mail.openjdk.java.net/pipermail/nio-dev/2018-August/005365.html.
Modifications:
Add a workaround for the JDK bug.
Result:
ServerSocketChannel.config().getOptions() will not throw anymore and work as expected.
Motivation:
952eeb8e1e introduced the possibility to use any JDK SocketOption when using the NIO transport but broke the ability to use Netty with Java 6.
Modifications:
Do not use java7 types in method signatures of the static methods in NioChannelOption to prevent class-loader issues on java6.
Result:
Fixes https://github.com/netty/netty/issues/8166.
* Support the usage of SocketOption when nio is used and the java version >= 7.
Motivation:
The JDK uses SocketOption since Java 7 to support configuration options on the underlying Channel. We should allow creating a ChannelOption from a given SocketOption when NIO is used. This also allows us to expose the same feature-set in terms of configuration as the Java NIO implementation does, without any extra effort.
Modifications:
- Add NioChannelOption which allows to wrap an existing SocketOption which then can be applied to the nio transport.
- Add test-cases
Result:
Support the same configuration options as the JDK. Also fixes https://github.com/netty/netty/issues/8072.
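Usage sketch:
```
import java.net.StandardSocketOptions;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.socket.nio.NioChannelOption;

Bootstrap b = new Bootstrap();
// wrap any JDK SocketOption as a ChannelOption for the NIO transport
b.option(NioChannelOption.of(StandardSocketOptions.SO_REUSEADDR), true);
```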
Motivation:
Some code that was shown as part of the ChannelHandler javadoc was not 100% correct and used some constructs from Netty 3. Also, we never called flush() in the code, which is a bad example for users.
Modifications:
- Remove netty 3 code references
- Replace channel.write(...) with ctx.writeAndFlush(...)
Result:
More correct code in the javadocs.
Motivation:
Currently, the vast majority of userEventTriggered() implementations
require the user to supply the boilerplate behavior of performing an
instanceof check, handling if appropriate, and calling
fireUserEventTriggered() otherwise.
We can simplify this very common use case by creating a class that only
matches user events of a given type, similar to the existing
SimpleChannelInboundHandler class.
Modifications:
Create a new SimpleUserEventChannelHandler class
Create accompanying SimpleUserEventChannelHandlerTest class
Result:
Users will be able to handle most events in a less verbose manner.
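For example, handling a single event type without the instanceof boilerplate (SslHandshakeCompletionEvent is just an example event):
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleUserEventChannelHandler;
import io.netty.handler.ssl.SslHandshakeCompletionEvent;

public class HandshakeDoneHandler extends SimpleUserEventChannelHandler<SslHandshakeCompletionEvent> {
    @Override
    protected void eventReceived(ChannelHandlerContext ctx, SslHandshakeCompletionEvent evt) {
        // only matching events arrive here; all others are forwarded onwards
        System.out.println("Handshake done: " + evt.isSuccess());
    }
}
```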
Motivation:
We use FixedChannelPool in production, and we believe we have a leak that doesn't return sockets to the pool (but they should be closed), thus blocking us from creating new connections when we need them. I haven't confirmed this yet, but right now I have to resort to reflection to access this field which makes me sad.
Modification:
Expose the acquiredChannelCount field through a getter method.
Result:
Allows introspection of the pool size in FixedChannelPool.
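A usage sketch, assuming pool is an existing FixedChannelPool:
```
// sample the pool for telemetry instead of reaching in via reflection
int inUse = pool.acquiredChannelCount();
System.out.println("acquired channels: " + inUse);
```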
Motivation
There is a cost to concatenating strings and calling methods that will be wasted if the Logger's level is not enabled.
Modifications
Check if Log level is enabled before producing log statement. These are just a few cases found by RegEx'ing in the code.
Result
Tiny bit more efficient code.
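The pattern, with illustrative names:
```
// the string concatenation (and any toString() calls) are skipped entirely
// when debug logging is disabled
if (logger.isDebugEnabled()) {
    logger.debug("Discarding frame " + frame + " on channel " + channel);
}
```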
Motivation:
If we cannot replace the internally used Set of the Selector, there is no need to create a SelectedSelectionKeySet instance.
Modification:
Only create SelectedSelectionKeySet if we will replace the internal set.
Result:
Less object creation in some cases and cleaner code.
Motivation:
We should allow scheduling tasks with a delay up to Long.MAX_VALUE, as we did before 4.1.25.Final.
Modifications:
Just ensure we do not overflow and put the correct max limits in place when scheduling a timer. At worst we will get a wakeup too early and then schedule a new timeout.
Result:
Fixes https://github.com/netty/netty/issues/7970.
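The guard boils down to saturating the deadline arithmetic, roughly (nanoTime() being the scheduler's relative clock, delay/unit the caller's arguments):
```
long deadlineNanos = nanoTime() + unit.toNanos(delay);
if (deadlineNanos < 0) {
    // the addition overflowed: clamp to "effectively never"; at worst we get
    // an early wakeup and simply schedule a new timeout
    deadlineNanos = Long.MAX_VALUE;
}
```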
Motivation:
A long time ago we deprecated AUTO_CLOSE, but it turned out this feature is still useful: if a write error is detected there may still be data to read, and if we close the channel automatically we will lose that data.
Modifications:
- Remove `@Deprecated` tag for AUTO_CLOSE, setAutoClose(...) and isAutoClose(...)
- Fix javadocs on ChannelConfig to correctly tell the default value of AUTO_CLOSE.
Result:
Fewer warnings.
Motivation:
We need to ensure we only return from close() after all work is done as otherwise we may close the EventExecutor before we dispatched everything.
Modifications:
Correctly wait for operations to complete before returning.
Result:
Fixes https://github.com/netty/netty/issues/7901.
Motivation:
We added some code to guard against thread.interrupt() in NioEventLoop but did not add a test.
Modifications:
Add testcase.
Result:
Verify that we correctly handle interrupt().
Motivation:
Closed `FixedChannelPool` fails acquire and release operations with
`IllegalStateException`s. These exceptions had message
"FixedChannelPooled was closed". Here "FixedChannelPooled" looks like
a typo and should probably be "FixedChannelPool".
Modifications:
Changed exception message to "FixedChannelPool was closed".
Result:
A tiny bit cleaner exception message.
Motivation:
ChannelReadHandler is used in tests added via f4d7e8de14. In the handler we verify the number of messages we receive per read() call, but sometimes missed resetting the counter, which resulted in exceptions.
Modifications:
Correctly reset read counter in all cases.
Result:
No more unexpected exceptions when running LocalChannel tests.
Motivation:
LocalChannel / LocalServerChannel did not respect read limits and just always read all of the messages.
Modifications:
- Correctly respect the MAX_MESSAGES_PER_READ setting
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/7880.
Motivation:
Using a very huge delay when calling schedule(...) may cause a Selector error when calling select(...) later on. We should guard against such big values.
Modifications:
- Add a guard against very huge values.
- Add tests.
Result:
Fixes [#7365]
Motivation:
We need to ensure we only reset readInProgress if the outboundBuffer is not empty, as otherwise we may fail to call fireChannelRead(...) later on when using the LocalChannel.
Modifications:
Also check that the outboundBuffer is not empty before setting readInProgress to false again.
Result:
Fixes https://github.com/netty/netty/issues/7855
Motivation:
Some `if` statements contain common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and clearer.
Motivation:
AbstractNioByteChannel will detect that the remote end of the socket has
been closed and propagate a user event through the pipeline. However if
the user has auto read on, or calls read again, we may propagate the
same user events again. If the underlying transport continuously
notifies us that there is read activity this will happen in a spin loop
which consumes unnecessary CPU.
Modifications:
- AbstractNioByteChannel's unsafe read() should check if the input side
of the socket has been shutdown before processing the event. This is
consistent with EPOLL and KQUEUE transports.
- add unit test with @normanmaurer's help, and make transports consistent with respect to user events
Result:
No more read spin loop in NIO when the channel is half closed.
Motivation:
Sometimes it is very convenient to remove a handler from the pipeline without an exception being thrown in case the handler doesn't exist in the pipeline.
Modification:
Added 3 overloaded methods to DefaultChannelPipeline, but not to the ChannelPipeline interface, for backwards compatibility.
Result:
Fixes #7662
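Usage sketch of the added overloads (they mirror the existing remove(...) variants; handler names here are examples):
```
DefaultChannelPipeline pipeline = (DefaultChannelPipeline) ch.pipeline();
// none of these throw if the handler is absent
pipeline.removeIfExists("decoder");            // by name
pipeline.removeIfExists(MyHandler.class);      // by type
pipeline.removeIfExists(myHandlerInstance);    // by reference
```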
Motivation:
Our code in AbstractNioMessageChannel.closeOnReadError(...) was not correct, which led to the situation that we always tried to continue reading, no matter what exception was thrown, when using the NioServerSocketChannel. Also, even on an IOException we should check if the Channel itself is still active and if not, stop reading.
Modifications:
Fix the closeOnReadError impl and add a test.
Result:
Correctly stop reading on NioServerSocketChannel when an error happens during a read.
Motivation:
DefaultChannelGroup.contains(...) did one more instanceof check than needed.
Modifications:
Simplify contains(...) and remove one instanceof check.
Result:
Simpler and cheaper implementation.
Motivation:
Right now PendingWriteQueue.removeAndWriteAll collects all promises into a
PromiseCombiner instance, which adds a listener to each given promise. This
throws an IllegalStateException on a VoidChannelPromise, which breaks the
while loop and "reports" the operation as failed (when in fact part of the
writes might actually have been written).
Modifications:
Check if the promise is not void before adding it to the PromiseCombiner
instance.
Result:
PendingWriteQueue.removeAndWriteAll successfully writes all pending writes
even in case a void promise was used.
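The guard is essentially (promise/combiner as in the removeAndWriteAll loop):
```
// void promises cannot carry listeners, so keep them out of the combiner
if (!(promise instanceof VoidChannelPromise)) {
    combiner.add(promise);
}
```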
Motivation:
The flush task currently uses flush(), which has the effect of traversing the whole ChannelPipeline and also flushing messages that were written since we gave up flushing. This is not really correct, as we should only continue flushing messages that were flushed at the point in time the flush task was submitted for execution, unless the user explicitly called flush() themselves.
Modification:
Call *Unsafe.flush0() via the flush task which will only continue flushing messages that were marked as flushed before.
Result:
More correct behaviour when the flush task is used.
Motivation:
b215794de3 recently introduced a change in behavior where writeSpinCount provided a limit for how many write operations were attempted per flush operation. However when the write quantum was met, the selector write flag was not cleared, and the channel unsafe flush0 method has an optimization which prematurely exits if the write flag is set. This may lead to no write progress being made under the following scenario:
- flush is called, but the socket can't accept all data, so we set the write flag
- the selector wakes us up because the socket is writable, we write data and use up the writeSpinCount quantum
- we then schedule a flush() on the EventLoop to execute later, however the flush0 optimization prematurely exits because the write flag is still set
In this scenario the socket is still writable so the EventLoop may never notify us that the socket is writable, and therefore we may never attempt to flush data to the OS.
Modifications:
- When the writeSpinCount quantum is exceeded we should clear the selector write flag
Result:
Fixes https://github.com/netty/netty/issues/7729
Motivation:
NioDatagramChannel attempts to unpack an AddressedEnvelope and unconditionally uses internalNioBuffer. However if the ByteBuf is a CompositeByteBuf with more than one component, the write will fail and throw an exception.
Modifications:
- NioDatagramChannel should check the nioBufferCount before attempting
to use internalNioBuffer
Result:
No more failure to write UDP packets on NIO when a CompositeByteBuf is
used.
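The fix follows the usual ByteBuf contract (data/length taken from the surrounding write path):
```
// internalNioBuffer() is only valid when the ByteBuf is backed by exactly one
// NIO buffer; a CompositeByteBuf with several components needs nioBuffer()
ByteBuffer nioData = data.nioBufferCount() == 1
        ? data.internalNioBuffer(data.readerIndex(), length)
        : data.nioBuffer(data.readerIndex(), length);
```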
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using Java 9+, even though Netty still works. That said, users may feel uncomfortable with these warnings, so we should not attempt it by default on Java 9+.
Modifications:
Add an io.netty.tryReflectionSetAccessible system property which controls whether setAccessible(...) will be used. By default it will be set to false when using Java 9+.
Result:
Fixes [#7254].
Motivation:
The methods implement io.netty.util.concurrent.Future#cancel(boolean mayInterruptIfRunning), which actually ignores the param mayInterruptIfRunning. We need to add comments for the `mayInterruptIfRunning` param.
Modifications:
Add comments for the `mayInterruptIfRunning` param.
Result:
People who call the `cancel` method will be clearer about the effect of the `mayInterruptIfRunning` param.
Motivation:
When VoidChannelPromise.unvoid() was called we created a new ChannelFutureListener every time. This is not needed as it is stateless.
Modifications:
Reuse the ChannelFutureListener.
Result:
Fewer object allocations
Motivation:
DefaultChannelPipeline and AbstractChannelHandlerContext maintain state
which indicates if a ChannelHandler should be invoked or not. However
the state is updated to allow the handler to be invoked only after the
handlerAdded method completes. If the handlerAdded method generates
events which may result in other methods being invoked on that handler
they will be missed.
Modifications:
- DefaultChannelPipeline should set the state before calling
handlerAdded
Result:
DefaultChannelPipeline will allow events to be processed during the
handlerAdded process.