Motivation:
JUnit deprecated Assert.assertThat(...)
Modifications:
Use MatcherAssert.assertThat(...) as replacement for deprecated method
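For illustration, a minimal before/after (the test class is hypothetical):
```java
import static org.hamcrest.CoreMatchers.is;
// Hamcrest's own assertThat(...) is the drop-in replacement for the
// deprecated org.junit.Assert.assertThat(...).
import static org.hamcrest.MatcherAssert.assertThat;

import org.junit.Test;

public class AssertThatMigrationTest {
    @Test
    public void migratedAssertion() {
        // Before: org.junit.Assert.assertThat("netty", is("netty"));
        assertThat("netty", is("netty"));
    }
}
```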
Result:
Fewer deprecation warnings
Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues and got rid of false alarms in the LGTM report.
Motivation:
We wish to use Unsafe as little as possible, and Java 8 allows us
to take some short-cuts or play some tricks with generics,
for the purpose of working around having to declare all checked
exceptions. Ideally all checked exceptions would be declared, but
the code base is not ready for that yet.
Modification:
The call to UNSAFE.throwException has been removed, so when we need
that feature, we instead use the generic exception trick.
In many cases, Java 8 allows us to throw Throwable directly. This
happens in cases where no exception is declared to be thrown in a
scope.
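A minimal sketch of the generic exception trick (names are illustrative, not the exact Netty code):
```java
public final class SneakyThrow {
    private SneakyThrow() { }

    public static void throwException(Throwable t) {
        // The compiler resolves E to RuntimeException at this call site...
        SneakyThrow.<RuntimeException>sneakyThrow0(t);
    }

    @SuppressWarnings("unchecked")
    private static <E extends Throwable> void sneakyThrow0(Throwable t) throws E {
        // ...but erasure means any Throwable can be rethrown here without
        // being declared as a checked exception.
        throw (E) t;
    }
}
```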
Finally, some warnings have also been fixed, and some imports have
been reorganised and cleaned up while I was modifying the files
anyway.
Result:
We no longer use Unsafe for throwing any exceptions.
Motivation:
We should use an initial buffer size which is >= 1500 (a common setting for the MTU) to reduce the need for memory copies when a new connection is established. This is especially interesting when SSL / TLS comes into the mix.
This was ported from SwiftNIO:
https://github.com/apple/swift-nio/pull/1641
Modifications:
Increase the initial size from 1024 to 2048.
Result:
Possibly fewer memory copies on new connections
Motivation:
Creating exceptions is expensive so we should only do so if really needed.
Modifications:
Only create the ConnectTimeoutException if we really need it.
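A simplified sketch of the pattern (the scheduling plumbing is illustrative, not the actual transport code):
```java
import java.net.SocketAddress;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import io.netty.channel.ConnectTimeoutException;

final class LazyConnectTimeout {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void scheduleConnectTimeout(SocketAddress remote, CompletableFuture<Void> promise) {
        scheduler.schedule(() -> {
            if (!promise.isDone()) {
                // The exception (and its costly fillInStackTrace()) is only
                // created on the timeout path, not for every connect attempt.
                promise.completeExceptionally(
                        new ConnectTimeoutException("connection timed out: " + remote));
            }
        }, 10, TimeUnit.SECONDS);
    }
}
```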
Result:
Less overhead
Motivation:
In some benchmarks closing the Channel added a lot of overhead due to the call of fillInStackTrace(). We should reduce this overhead.
Modifications:
- Create a StacklessClosedChannelException and use it to reduce overhead (see the sketch below).
- Only call ChannelOutboundBuffer.failFlushed(...) when there was a flushed message at all.
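A minimal sketch of the stackless idea (the class shape is illustrative):
```java
import java.nio.channels.ClosedChannelException;

final class StacklessClosedChannelException extends ClosedChannelException {
    // Overriding fillInStackTrace() skips the expensive stack walk that
    // normally runs in the Throwable constructor; since no per-instance
    // stack is captured, a shared instance becomes cheap to reuse.
    @Override
    public Throwable fillInStackTrace() {
        return this;
    }
}
```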
Result:
Less performance overhead when closing the Channel
Motivation:
I was working on the transport part of Netty (ofc, solving a major issue) and found this typo, so I thought I'd fix it.
Modification:
Fixed Typo
Result:
No more confusion between `us` and `use`.
Motivation:
Noticed we had some unused non-public classes.
There is no reason to keep these around.
Modification:
Remove unused non-public classes.
Result:
Less code to worry about.
Motivation:
When we were using Netty's HTTP protocol support, an OOM occurred. This problem was fixed in 4.1.51.Final via [#10424](https://github.com/netty/netty/issues/10424), but even after the OOM is raised the service still receives new connection events, the OOM occurs again, and eventually the connections are never released.
The allocation in question: `byteBuf = allocHandle.allocate(allocator);`
Modification:
When buffer allocation fails while trying to receive new data, determine whether the cause is an OOM and, if so, close on read so the connection is released:
```java
if (close || cause instanceof OutOfMemoryError || cause instanceof IOException) {
    closeOnRead(pipeline);
}
```
Result:
Fixes [#10434](https://github.com/netty/netty/issues/10434).
Motivation:
When a switch statement is used we should always define a `default:` so we don't introduce bugs due to fall-through.
Modification:
Add missing `default:`s
Result:
Less error-prone code
Motivation:
AbstractCoalescingBufferQueue had a bug which could lead to an empty queue that still reported bytes left. This was due to the fact that we decremented the pending bytes before draining the queue one-by-one. The problem is that while the queue is drained we may notify a promise, which may add buffers to the queue again, and we never decrement the bytes for these while draining.
Modifications:
- Decrement the pending bytes every time we drain a buffer from the queue
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10286
Motivation:
In DefaultChannelHandlerContext we had some code where we tried to guard against endless loops caused by exceptions thrown by exceptionCaught(...) that would trigger exceptionCaught(...) again. This code was problematic for two reasons:
- It is quite expensive as we need to compare all the stacks
- We may end up not notifying another handler's exceptionCaught(...) if our execution stack triggered actions that cause an exceptionCaught somewhere else in the pipeline
Modifications:
- Just remove the detection code as we already handle everything correctly when we invoke exceptionCaught(...)
- Add unit tests
Result:
Ensure we always notify correctly; this also fixes the performance issue reported in https://github.com/netty/netty/issues/10165
Motivation:
The maxCount parameter needs a unit test.
Modifications:
1. Change the conditional statement to avoid an ineffective maxCount (merely enhancing the robustness of the code).
2. Add the unit test for maxCount.
Result:
Enable this parameter to be tested.
Motivation:
For whatever reason IntelliJ produces some test failures when using mock(ChannelHandler.class). In this case it does not "correctly" detect that the @Skip annotation should not take effect.
Modifications:
Use spy(...) to ensure we correctly invoke methods
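Roughly the difference, as a sketch (the handler and assertion are illustrative):
```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;
import org.junit.Test;

public class SpyInsteadOfMockTest {
    @Test
    public void spyKeepsRealClassMetadata() throws Exception {
        // A spy wraps a real handler instance, so annotation lookups (such
        // as @Skip detection) see the real class, while invocations can
        // still be verified as with a mock.
        ChannelInboundHandlerAdapter handler = spy(new ChannelInboundHandlerAdapter());
        EmbeddedChannel channel = new EmbeddedChannel(handler);
        channel.writeInbound("msg");
        verify(handler).channelRead(any(ChannelHandlerContext.class), eq("msg"));
    }
}
```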
Result:
No more test failures when running via IntelliJ.
Motivation:
As we have Java 8 as a minimum target we can use MethodHandles. We should do so when we expect a method to be called multiple times.
Modifications:
- Replace usage of reflection with MethodHandles where it makes sense (see the sketch below)
- Remove some code which was there to support Java < 8
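A sketch of the pattern, using a JDK method for illustration:
```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public final class MethodHandleExample {
    // Look the handle up once and reuse it: access checks happen at lookup
    // time, so repeated invocations are cheaper than Method.invoke(...).
    private static final MethodHandle PARSE_INT;

    static {
        try {
            PARSE_INT = MethodHandles.lookup().findStatic(
                    Integer.class, "parseInt",
                    MethodType.methodType(int.class, String.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static int parse(String s) throws Throwable {
        // invokeExact requires the call site to match the handle's type exactly.
        return (int) PARSE_INT.invokeExact(s);
    }
}
```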
Result:
Faster code
Motivation:
In general, we disable debug logging in a production environment. However, logging without an external level check may still affect performance, as the varargs call needs to allocate an array.
Modification:
Add a log level check before logging.
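A minimal sketch of such a guard (class and message are illustrative):
```java
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

final class LoggingGuardExample {
    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(LoggingGuardExample.class);

    void onRead(Object channel, int bytesRead, long total) {
        // The guard ensures the varargs Object[] (and any toString() work)
        // is only paid for when debug logging is actually enabled.
        if (logger.isDebugEnabled()) {
            logger.debug("Channel {} read {} bytes (total {})",
                    channel, bytesRead, total);
        }
    }
}
```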
Result:
Improve performance slightly in a production environment.
Motivation:
At the moment the next / prev references are not set to "null" in the DefaultChannelHandlerContext once the ChannelHandler is removed. This is bad as it basically lets users still use the ChannelHandlerContext of a ChannelHandler after it is removed and may produce very surprising behaviour.
Modifications:
- Fail if someone tries to use the ChannelHandlerContext once the ChannelHandler was removed (for outbound operations fail the promise, for inbound fire the error through the ChannelPipeline)
- Fix some handlers to ensure we do not use the ChannelHandlerContext after the handler was removed
- Adjust DefaultChannelPipeline / DefaultChannelHandlerContext to fix races with removal / replacement of handlers
Result:
Clean up behaviour and make it more predictable for pipeline modifications
Motivation:
We should not include the number of ServerChannels that are part of the DefaultChannelGroup when specifying the initial size of the LinkedHashMap
Modifications:
Only use the number of non-ServerChannel instances
Result:
Reduce memory-footprint
Motivation:
Current pipeline handler replace tests replace a handler with one of the same name.
We need tests that verify a handler can be renamed and that old handlers are removed from the pipeline.
Modification:
Add the missing test coverage related to the renaming of handlers.
Result:
Adds missing tests
Co-authored-by: phani254 <phani254@yahoo.com>
Motivation:
https://github.com/netty/netty/pull/9458 changed how we handle ChannelOptions internally to use a ConcurrentHashMap. This unfortunately had the side-effect that the ordering may be affected and is not stable anymore. The problem is that sometimes we do validation based on two different ChannelOptions (for example, we validate the high and low watermarks against each other). Thus even if the user specified the options in the same order, we may fail to configure them.
Modifications:
- Use again a LinkedHashMap to preserve order
- Add unit test
Result:
Apply ChannelOptions in the correct and expected order
Motivation:
Due to a bug in DefaultChannelPipeline we did not call handlerRemoved(...) if a handler was removed via removeLast() or removeFirst()
Modifications:
- Correctly implement removeLast() / removeFirst()
- Add unit tests
Result:
handlerRemoved(...) always called on removal
Motivation:
Because we now tie an EventLoop to a Channel from the beginning, it's possible that we schedule something on it before the registration with the Selector is done. This can lead to AssertionErrors in the current form of the code, like:
java.lang.AssertionError
at io.netty.channel.nio.AbstractNioChannel.selectionKey(AbstractNioChannel.java:109)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.isFlushPending(AbstractNioChannel.java:344)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:332)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:855)
at io.netty.channel.DefaultChannelPipeline$HeadHandler.flush(DefaultChannelPipeline.java:1253)
at io.netty.channel.DefaultChannelHandlerContext.invokeFlush(DefaultChannelHandlerContext.java:747)
at io.netty.channel.DefaultChannelHandlerContext.access$1300(DefaultChannelHandlerContext.java:38)
at io.netty.channel.DefaultChannelHandlerContext$WriteAndFlushTask.write(DefaultChannelHandlerContext.java:1144)
at io.netty.channel.DefaultChannelHandlerContext$AbstractWriteTask.run(DefaultChannelHandlerContext.java:1055)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:360)
at io.netty.channel.SingleThreadEventLoop.run(SingleThreadEventLoop.java:200)
at io.netty.util.concurrent.SingleThreadEventExecutor.lambda$doStartThread$4(SingleThreadEventExecutor.java:855)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:75)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Modifications:
Remove assert and add a few null checks.
Result:
No more java.lang.AssertionError.
Motivation:
In the next major version of Netty users should use ChannelHandler everywhere. We should ensure we do the same
Modifications:
Replace usage of deprecated classes / interfaces with ChannelHandler
Result:
Use non-deprecated code
Motivation:
97361fa2c8 replaced synchronized with ConcurrentHashMap in the *Bootstrap classes but missed doing the same for the Http2 variant.
Modifications:
- Use ConcurrentHashMap
- Simplify code in *Bootstrap classes
Result:
Less contention
Motivation:
In Netty 5.x we changed all pipeline operations to be executed on the EventLoop, so there is no need to involve an atomic operation anymore to update the handler state.
Modifications:
Remove atomic usage to handle the handler state
Result:
Simpler code and less overhead
Motivation:
In most cases, we want to use MultithreadEventLoopGroup without setting the number of threads, only the thread name. We should simplify this for the user
Modifications:
Add a new constructor
Result:
Users can set just a ThreadFactory / Executor without having to pass 0 as the thread number
Motivation
Per javadoc in 4.1.x SimpleChannelInboundHandler:
"Please keep in mind that channelRead0(ChannelHandlerContext, I) will be
renamed to messageReceived(ChannelHandlerContext, I) in 5.0."
Modifications
Rename aforementioned method and all references/overrides.
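For illustration, a typical override after the rename (the handler body is hypothetical):
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class EchoHandler extends SimpleChannelInboundHandler<ByteBuf> {
    // 4.1.x signature: protected void channelRead0(ChannelHandlerContext, ByteBuf)
    @Override
    protected void messageReceived(ChannelHandlerContext ctx, ByteBuf msg) {
        // retain() because SimpleChannelInboundHandler releases msg afterwards.
        ctx.writeAndFlush(msg.retain());
    }
}
```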
Result
Method is renamed.
Motivation:
At the moment we directly extend the Recycler base class in our code, which makes it hard to experiment with different object pool implementations. It would be nice to be able to switch from one to another via a system property in the future. This would also allow us to more easily test things like https://github.com/netty/netty/pull/8052.
Modifications:
- Introduce an ObjectPool class with a static method that we now use internally to obtain an ObjectPool implementation (see the sketch below).
- Wrap the Recycler into an ObjectPool and return it for now
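A simplified, self-contained sketch of the indirection (in Netty the factory hands out a Recycler wrapper; a queue-backed pool stands in for it here):
```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public abstract class ObjectPool<T> {
    public interface Handle<T> { void recycle(T self); }
    public interface ObjectCreator<T> { T newObject(Handle<T> handle); }

    public abstract T get();

    // Callers depend only on ObjectPool; this factory is the single place
    // that decides which implementation to return (e.g. via a system
    // property later on).
    public static <T> ObjectPool<T> newPool(ObjectCreator<T> creator) {
        return new QueueBackedPool<>(creator);
    }

    private static final class QueueBackedPool<T> extends ObjectPool<T>
            implements Handle<T> {
        private final Queue<T> queue = new ConcurrentLinkedQueue<>();
        private final ObjectCreator<T> creator;

        QueueBackedPool(ObjectCreator<T> creator) {
            this.creator = creator;
        }

        @Override
        public T get() {
            T pooled = queue.poll();
            return pooled != null ? pooled : creator.newObject(this);
        }

        @Override
        public void recycle(T self) {
            queue.offer(self);
        }
    }
}
```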
Result:
Preparation for different ObjectPool implementations
Motivation
The current event loop shutdown logic is quite fragile and in the
epoll/NIO cases relies on the default 1 second wait/select timeout that
applies when there are no scheduled tasks. Without this default timeout
the shutdown would hang indefinitely.
The timeout only takes effect in this case because queued scheduled
tasks are first cancelled in
SingleThreadEventExecutor#confirmShutdown(), but I _think_ even this
isn't robust, since the main task queue is subsequently serviced which
could result in some new scheduled task being queued with much later
deadline.
It also means shutdowns are unnecessarily delayed by up to 1 second.
Modifications
- Add/extend unit tests to expose the issue
- Adjust SingleThreadEventExecutor shutdown and confirmShutdown methods
to explicitly add no-op tasks to the taskQueue so that the subsequent
event loop iteration doesn't enter a blocking wait (as appears to have been
originally intended); see the sketch below
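A simplified, self-contained sketch of the wake-up trick (not the actual SingleThreadEventExecutor code):
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class WakeupExample {
    private static final Runnable WAKEUP_TASK = () -> { };

    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private volatile boolean shuttingDown;

    void shutdown() {
        shuttingDown = true;
        // The no-op task makes a blocked take() return immediately, so the
        // loop re-checks its shutdown state without a fallback wait timeout.
        taskQueue.offer(WAKEUP_TASK);
    }

    void runLoop() throws InterruptedException {
        while (!shuttingDown) {
            taskQueue.take().run(); // would otherwise block indefinitely
        }
    }
}
```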
Results
Faster and more robust shutdown of event loops, allows removal of the default wait timeout.
This is a port of https://github.com/netty/netty/pull/9616
Motivation:
On JDK > 9, Netty uses Unsafe to write two internal JDK fields: sun.nio.ch.SelectorImpl.selectedKeys and sun.nio.ch.SelectorImpl.publicSelectedKeys. This is done in transport/src/main/java/io/netty/channel/nio/NioEventLoop.java:225, in the openSelector() method. The GraalVM analysis cannot do the Unsafe registration automatically because the object field offset computation is hidden behind two layers of calls.
Modifications:
This PR updates the Netty GraalVM configuration by registering those fields for unsafe access.
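For illustration, roughly what such a registration can look like with the GraalVM Feature API (the feature class is hypothetical; the actual change may use a different mechanism):
```java
import java.lang.reflect.Field;

import org.graalvm.nativeimage.hosted.Feature;

public class SelectorUnsafeFeature implements Feature {
    @Override
    public void beforeAnalysis(BeforeAnalysisAccess access) {
        try {
            Class<?> selectorImpl = Class.forName("sun.nio.ch.SelectorImpl");
            // Tell the analysis these fields are written via Unsafe, so their
            // offsets are recomputed for the native image.
            access.registerAsUnsafeAccessed(selectorImpl.getDeclaredField("selectedKeys"));
            access.registerAsUnsafeAccessed(selectorImpl.getDeclaredField("publicSelectedKeys"));
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```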
Result:
Improved support for Netty on GraalVM with JDK > 9.
Motivation:
Due to a bug we did not always correctly calculate the next buffer size in AdaptiveRecvByteBufAllocator.
Modification:
Fix calculation and add unit test
Result:
Correct calculation is always used.
Motivation:
A lot of reentrancy bugs and cycles can happen because DefaultPromise notifies the FutureListeners directly in the calling Thread when that Thread is the EventExecutor Thread. To reduce this risk we should always notify the listeners via the EventExecutor, which basically means that we put a task into the task queue of the EventExecutor and pick it up for execution after the setSuccess / setFailure methods complete the promise.
Modifications:
- Always notify via the EventExecutor (see the sketch below)
- Adjust test to ensure we correctly account for this
- Adjust tests that use the EmbeddedChannel to ensure we execute the scheduled work.
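A simplified sketch of the scheme (not the actual DefaultPromise code):
```java
import io.netty.util.concurrent.EventExecutor;

final class DeferredNotify {
    private final EventExecutor executor;

    DeferredNotify(EventExecutor executor) {
        this.executor = executor;
    }

    void complete() {
        // ... set the result first, then always hand notification off as a
        // task: listeners run only after this method has returned, even when
        // the caller is already on the executor thread.
        executor.execute(this::notifyListenersNow);
    }

    private void notifyListenersNow() {
        // invoke FutureListener.operationComplete(...) for each listener
    }
}
```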
Result:
Reentrancy bugs related to the FutureListeners can't happen anymore.
Motivation:
We should correctly reset the cached local and remote address when a Channel.disconnect() is called and the channel has a notion of disconnect vs close (for example DatagramChannel implementations).
Modifications:
- Correctly reset the cached local and remote address
- Update the testcase to cover it and so ensure all transports work in a consistent way
Result:
Correctly handle disconnect()
Motivation:
calculateMaxBytesPerGatheringWrite() contains a duplicated calculation: getSendBufferSize() << 1
Modifications:
Remove the duplicated calculation
Result:
The method becomes clearer and simpler
Motivation:
EmbeddedChannel currently differs quite a bit in semantics from other Channel implementations. We should change it to be more closely aligned so that testing code is more robust.
Modifications:
- Change EmbeddedEventLoop.inEventLoop() to only return true if we currently run pending / scheduled tasks
- Change EmbeddedEventLoop.execute(...) to automatically process pending tasks if not already doing so
- Adjust a few tests for the new semantics (which is closer to other Channel implementations)
Result:
EmbeddedChannel works more like other Channel implementations
Motivation:
At the moment it is quite easy to hit reentrancy issues when there are multiple handlers in the pipeline and the handlers do not correctly protect against these. To make it easier for the user we should try to protect them. The issue usually arises when an inbound event triggers an outbound event and this outbound event in turn triggers an inbound event. This may result in a ChannelHandler method being re-entered, so state can be corrupted or messages re-ordered.
Modifications:
- Keep track of inbound / outbound operations in DefaultChannelHandlerContext and, if reentrancy is detected, break it by scheduling the action on the EventLoop. This will then be picked up once the method returns and so the reentrancy is broken up (see the sketch below).
- Adjust tests which made strange assumptions about execution order
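A simplified, self-contained sketch of breaking reentrancy (not the actual DefaultChannelHandlerContext code):
```java
import io.netty.util.concurrent.EventExecutor;

final class ReentrancyGuard {
    private final EventExecutor executor;
    private boolean dispatching; // only touched from the event loop thread

    ReentrancyGuard(EventExecutor executor) {
        this.executor = executor;
    }

    void dispatch(Runnable event) {
        if (dispatching) {
            // A nested event is queued and picked up after the current
            // handler method returns, breaking the reentrant call chain.
            executor.execute(event);
            return;
        }
        dispatching = true;
        try {
            event.run();
        } finally {
            dispatching = false;
        }
    }
}
```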
Result:
No more reentrancy of handlers possible.
Motivation:
There are some extra log level checks (logger.isWarnEnabled()).
Modification:
Remove log level checks (logger.isWarnEnabled()) from io.netty.channel.epoll.AbstractEpollStreamChannel, io.netty.channel.DefaultFileRegion, io.netty.channel.nio.AbstractNioChannel, io.netty.util.HashedWheelTimer, io.netty.handler.stream.ChunkedWriteHandler and io.netty.channel.udt.nio.NioUdtMessageConnectorChannel
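For illustration (class and message are hypothetical): with a constant message and no argument allocation the guard is redundant, because the logger performs the same level check internally.
```java
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

final class RedundantGuardExample {
    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(RedundantGuardExample.class);

    void before() {
        if (logger.isWarnEnabled()) { // redundant: warn(...) checks this itself
            logger.warn("Failed to close the file.");
        }
    }

    void after() {
        logger.warn("Failed to close the file."); // nothing allocated up front
    }
}
```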
Result:
Fixes #9456
Motivation:
In AbstractBootstrap, options and attrs are LinkedHashMaps that are synchronized on for every read, copy/clone and write operation.
When a lot of connections are triggered concurrently on the same bootstrap instance, the synchronized blocks lead to contention, Netty IO threads get blocked, and performance may be severely degraded.
Modifications:
Use ConcurrentHashMap
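A rough sketch of the change (field and method names are illustrative):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.netty.channel.ChannelOption;

final class BootstrapOptions {
    // Before: a LinkedHashMap guarded by synchronized blocks for every read,
    // copy/clone and write. After: lock-free reads on the hot connect path.
    private final Map<ChannelOption<?>, Object> options = new ConcurrentHashMap<>();

    <T> void option(ChannelOption<T> option, T value) {
        options.put(option, value); // no external synchronization needed
    }

    Map<ChannelOption<?>, Object> snapshot() {
        return new HashMap<>(options); // copy without holding a lock
    }
}
```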
Result:
Less contention. Fixes https://github.com/netty/netty/issues/9426
Looks like `EmbeddedChannelPipeline` should also override `onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg)` in order not to print "Discarded message pipeline", because in the case of `EmbeddedChannelPipeline` no discarding actually happens.
This fixes the following warning in the latest Netty version with websocket and `WebSocketServerCompressionHandler`:
```
13:36:36.231 DEBUG- Decoding WebSocket Frame opCode=2
13:36:36.231 DEBUG- Decoding WebSocket Frame length=5
13:36:36.231 DEBUG- Discarded message pipeline : [JdkZlibDecoder#0, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xembedded, L:embedded - R:embedded].
```
Modification:
Override the correct method
Result:
Follow up fix after https://github.com/netty/netty/pull/9286
On servers with many pipelines or dynamic pipelines it is easy for the end user to make a mistake during pipeline configuration. The current message:
`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`
is not always meaningful and does not allow finding the offending pipeline quickly.
Modification:
Added an additional log placeholder that identifies the pipeline handlers and channel info. This will allow end users to quickly find the problematic pipeline.
Result:
Meaningful warning when the message reaches the end of the pipeline. Fixes #7285
Motivation:
Currently GraalVM substrate returns null for reflective calls if the reflection access is not declared up front.
A change introduced in Netty 4.1.35 results in needing to register every Netty handler for reflection. This complicates matters as it is difficult to know all the possible handlers that need to be registered.
Modification:
This change adds a simple null check such that Netty does not break on GraalVM substrate without the reflection information registration.
Result:
Fixes #9278
Motivation:
Sometimes it is beneficial to be able to set a parent Channel in EmbeddedChannel if the handler under test depends on the parent.
Modifications:
- Add another constructor which allows specifying a parent Channel
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9228.
Motivation:
When Netty is run through ProGuard, seemingly unused methods are removed. This breaks reflection, making the handler-skipping logic throw a reflective error.
Modification:
If a method is seemingly absent, just disable the optimization.
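A hedged sketch of the fallback (names are hypothetical):
```java
final class SkipOptimization {
    static boolean optimizationAvailable(Class<?> handlerType, String name,
                                         Class<?>... paramTypes) {
        try {
            // Probe for the method up front instead of failing reflectively
            // later when a shrinker such as ProGuard has stripped it.
            handlerType.getDeclaredMethod(name, paramTypes);
            return true;
        } catch (NoSuchMethodException e) {
            return false; // method absent -> just disable the optimization
        }
    }
}
```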
Result:
Dealing with ProGuard sucks infinitesimally less.
Motivation:
8e72071d76 adjusted how synchronization is done but missed updating one block, which used synchronized (this) while it should be synchronized (handlers).
Modifications:
Use synchronized (handlers)
Result:
Correctly synchronize
Motivation:
When initializing the AnnotatedSocketException in AbstractChannel, both
the cause and the stack trace are set, leaving a trailing "Caused By"
that is compressed when printing the trace.
Modification:
Don't include the stack trace in the exception, but leave it in the cause.
Result:
Clearer stack trace
Motivation:
An OOME can occur through ever-growing suppressedExceptions, because other libraries call Throwable#addSuppressed. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.
Modifications:
Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.
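A sketch of how suppression can be disabled (the class is illustrative): the four-argument Throwable constructor makes addSuppressed(...) a no-op, so a shared static instance can never grow.
```java
final class SharedStaticException extends RuntimeException {
    static final SharedStaticException INSTANCE =
            new SharedStaticException("static instance, suppression disabled");

    private SharedStaticException(String message) {
        // (message, cause, enableSuppression, writableStackTrace)
        super(message, null, false, false);
    }
}
```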
Result:
An OOME because of addSuppressed is no longer possible. Fixes https://github.com/netty/netty/issues/9151.
Motivation:
GraalVM native images are a new way to deliver Java applications. Netty is one of the most popular libraries; however, a few limitations make it impossible to use with native images out of the box. Adding a bit of metadata in specific modules will allow the compilation to succeed and produce working binaries.
Modification:
Added properties files in `META-INF` and substitution classes (under `internal.svm`) to solve the compilation issues. The substitution classes are package-private and have no public constructor, so they are not visible to end users.
Result:
Fixes #8959
This fix is very conservative as it applies the minimum config required to build:
* pure netty servers
* vert.x applications
* grpc applications
The build is having trouble due to Checkstyle, which does not seem to be able to find the copyright notice in property files.