Motivation:
executeAfterEventLoopIteration is an unstable API and isn't used in Netty. We should remove it to reduce complexity.
Changes:
This reverts commit 77770374fb.
Result:
Simplify implementation / cleanup.
Motivation:
8331248671 made some changes to fix a race in ChannelInitializer when used with a custom EventExecutor. Unfortunately these were themselves a bit racy, so the testcase failed sometimes.
Modifications:
- More correct fix when using a custom EventExecutor
- Adjust the testcase to be more correct.
Result:
Proper fix for https://github.com/netty/netty/issues/8616.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml.
Motivation:
The ChannelInitializer may be invoked multiple times when used with a custom EventExecutor, as the removal operation may happen asynchronously. We need to guard against this.
Modifications:
- Change Map to Set which is more correct in terms of how we use it.
- Ensure we only modify the internal Set when the handler was not removed yet
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8616.
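A minimal sketch of such a run-once guard, assuming a hypothetical MyInitializer (this is not the actual ChannelInitializer source):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class MyInitializer extends ChannelInboundHandlerAdapter {
        // Set.add(...) returns false if the context is already present, so
        // the init body runs at most once per context even if registration
        // and removal race on a custom EventExecutor.
        private final Set<ChannelHandlerContext> initialized =
                ConcurrentHashMap.newKeySet();

        @Override
        public void channelRegistered(ChannelHandlerContext ctx) {
            if (initialized.add(ctx)) {
                // ... add the real handlers to ctx.pipeline() here, once ...
            }
            ctx.fireChannelRegistered();
        }
    }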
Motivation:
java.nio.channels.spi.AbstractSelectableChannel.register(...) needs to obtain multiple locks during execution, which may produce a long wait time if a select is currently in progress. This led to multiple CI failures in the past.
Modifications:
Ensure the register call takes place on the EventLoop.
Result:
No more flaky CI test timeouts.
Motivation:
During benchmarks two methods showed up as "hot method too big". We can easily make these smaller by factoring out some less common code paths into an extra method and so allow inlining.
Modifications:
Factor out the less common code path into an extra method.
Result:
Hot methods can be inlined.
Motivation:
Our HeadContext in DefaultChannelPipeline handles both inbound and outbound events, but we only marked it as outbound. While this has no effect in the current code-base, it can lead to problems when we change our internals (this is also how I found the bug).
Modifications:
Construct HeadContext so it is also marked as handling inbound.
Result:
More correct code.
Motivation:
This transport is unique because it uses Java's blocking IO (java.io / java.net) under the hood. However it is not clear that this transport is actually useful, so it should be removed.
Modifications:
- Remove OIO transport and RXTX transport which depend on it.
- Remove Oio*Sctp* implementations
- Remove PerThreadEventLoop* which was only used by OIO transport.
Result:
Fixes https://github.com/netty/netty/issues/8510.
Motivation:
We plan to remove the OIO based transports in Netty 5 so we should mark these as deprecated already.
Modifications:
Mark all OIO based transports as deprecated.
Result:
Give the user a heads-up for removal.
Motivation:
When the Selector throws an IOException during our EventLoop processing we should rebuild it and transfer the registered Channels. At the moment we will continue trying to use it which will never work.
Modifications:
- Rebuild Selector when an IOException is thrown during any select*(...) methods.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8566.
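A simplified sketch of the recovery pattern; the real logic lives in NioEventLoop, and rebuildSelector here is a stand-in for its rebuild routine:

    import java.io.IOException;
    import java.nio.channels.Selector;

    final class SafeSelect {
        static int select(Selector selector, Runnable rebuildSelector) {
            try {
                return selector.select(1000L);
            } catch (IOException e) {
                // The Selector is broken: rebuild it and transfer the
                // registered Channels instead of retrying the same instance.
                rebuildSelector.run();
                return 0;
            }
        }
    }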
Motivation:
Besides the error caused by closing a socket on Windows, a bunch of other errors may happen at this place and won't be logged at all. For instance, any VirtualMachineError such as OutOfMemoryError will simply be ignored. The library should at least log the problem.
Modification:
Added logging of the throwable object.
Result:
Fixes #8499.
Motivation:
A racy UnsupportedOperationException can be thrown because task removal is delegated to MpscChunkedArrayQueue, which does not support removal. This happens with SingleThreadEventExecutor subclasses such as NioEventLoop, EpollEventLoop and KQueueEventLoop, which override newTaskQueue to return an MPSC queue instead of the LinkedBlockingQueue returned by the base class.
Modifications:
- Catch the UnsupportedOperationException
- Add unit test.
Result:
Fixes #8475.
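The shape of the fix as a hedged sketch; the queue and helper names are illustrative:

    import java.util.Queue;

    final class SafeTaskRemoval {
        static boolean removeTask(Queue<Runnable> taskQueue, Runnable task) {
            try {
                return taskQueue.remove(task);
            } catch (UnsupportedOperationException ignore) {
                // MPSC queues such as MpscChunkedArrayQueue do not support
                // removal; the task will simply be a no-op when it runs.
                return false;
            }
        }
    }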
Motivation:
There are currently many more places where this could be used which were
possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding copy of
backing array if possible (although would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
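For illustration, a hedged sketch of the replacement pattern, assuming Netty's internal PlatformDependent helper:

    import io.netty.util.internal.PlatformDependent;

    final class TempArrayExample {
        static byte[] newTempArray(int length) {
            // On Java 9+ with the appropriate compile flags this can skip
            // zeroing the array; on older JVMs it falls back to new byte[].
            return PlatformDependent.allocateUninitializedArray(length);
        }
    }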
Motivation:
It has shown that the used test timeout may be too low when the CI is busy.
Modifications:
Increase timeout to 3 seconds.
Result:
Less false-positives.
Motivation:
Currently we may end up in the situation that we increment the pending bytes before submitting the AbstractWriteTask but never decrement them again if submitting the task fails. This may result in incorrect watermark handling.
Modifications:
- Correctly decrement the pending bytes if submitting the task fails, and also ensure we recycle the task correctly.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8343.
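A hedged sketch of the shape of the fix; PendingBytesTracker is a hypothetical stand-in for the real watermark bookkeeping:

    import io.netty.util.concurrent.EventExecutor;

    final class SafeSubmit {
        // Hypothetical stand-in for the real watermark bookkeeping.
        interface PendingBytesTracker {
            void decrementPendingOutboundBytes(long size);
        }

        static void submit(EventExecutor executor, Runnable writeTask,
                           PendingBytesTracker tracker, long size) {
            try {
                executor.execute(writeTask);
            } catch (RuntimeException cause) {
                // Roll back the increment done before submission so the
                // watermark accounting stays correct, then rethrow.
                tracker.decrementPendingOutboundBytes(size);
                throw cause;
            }
        }
    }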
Motivation:
Unless the 'io.netty.noKeySetOptimization' system property is set,
registering a SelectableChannel instance to a NioEventLoop results
in a ClassCastException:
io.netty.channel.nio.SelectedSelectionKeySetSelector cannot be cast
to java.nio.channels.spi.AbstractSelector
Modifications:
Instead of 'selector', pass 'unwrappedSelector' to SelectableChannel.
Result:
It is possible to register a SelectableChannel instance without
setting the 'io.netty.noKeySetOptimization' system property.
Motivation:
Add an option (through a SelectStrategy return code) to have the Netty event loop thread to do busy-wait on the epoll.
The reason for this change is to avoid the context switch cost that comes when the event loop thread is blocked on the epoll_wait() call.
On average, the context switch has a penalty of ~13usec.
This benefits both:
- the latency when reading from a socket
- the scheduling of tasks to be executed on the event loop thread.
The tradeoff, when enabling this feature, is that the event loop thread will be using 100% cpu, even when inactive.
Modification:
- Added a SelectStrategy option to return BUSY_WAIT
- The epoll loop will do an epoll_wait() with no timeout
- Use the pause instruction to hint to the processor that we're in a busy loop
Result:
When enabled, minimizes impact of context switch in the critical path
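A sketch of how such a strategy could look; BusyWaitStrategyFactory is illustrative, and wiring it into the event loop group constructor is assumed:

    import io.netty.channel.SelectStrategy;
    import io.netty.channel.SelectStrategyFactory;
    import io.netty.util.IntSupplier;

    final class BusyWaitStrategyFactory implements SelectStrategyFactory {
        @Override
        public SelectStrategy newSelectStrategy() {
            return new SelectStrategy() {
                @Override
                public int calculateStrategy(IntSupplier selectSupplier,
                                             boolean hasTasks) throws Exception {
                    // With pending tasks, do a non-blocking select like the
                    // default strategy; otherwise busy-wait instead of
                    // blocking in epoll_wait().
                    return hasTasks ? selectSupplier.get() : SelectStrategy.BUSY_WAIT;
                }
            };
        }
    }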
Motivation:
In Java8 and earlier we used reflection to replace the used key set if not otherwise told. This does not work on Java9 and later without special flags, as it's not possible to call setAccessible(true) on the Field anymore.
Modifications:
- Use Unsafe to instrument the Selector with our special set when sun.misc.Unsafe is present and we are using Java9+.
Result:
The NIO transport produces less GC on Java9 and later as well.
Motivation:
We need to implement remove() by ourselves to make it work on Java7 as otherwise it will throw an AbstractMethodError. This is a followup of c1a335446d.
Modifications:
Just implemented remove()
Result:
Works on Java7 as well.
Motivation:
c1a335446d reimplemented remove(...) and contains(...) in a way that made them not work anymore when used by the Selector.
Modifications:
Partly revert changes in c1a335446d.
Result:
Works again as expected.
Motivation:
Our SelectedSelectionKeySet does not correctly implement various methods which can be done without any performance overhead.
Modifications:
Implement iterator(), contains(...) and remove(...)
Result:
Related to https://github.com/netty/netty/issues/8242.
Motivation:
It seems to sometimes confuse people what they should use to replace setMaxMessagePerRead(...).
Modifications:
Add some more details to the javadocs about the correct replacement.
Result:
Related to https://github.com/netty/netty/issues/8214.
Motivation:
We had a report that the exception may not be correctly propagated. This test shows it is.
Modifications:
Add testcase.
Result:
Test for https://github.com/netty/netty/issues/8158
Motivation:
There is a JDK bug that returns IP_TOS as a supported option for ServerSocketChannel even though it's not actually supported, which later causes an AssertionError.
See http://mail.openjdk.java.net/pipermail/nio-dev/2018-August/005365.html.
Modifications:
Add a workaround for the JDK bug.
Result:
ServerSocketChannel.config().getOptions() will not throw anymore and work as expected.
Motivation:
952eeb8e1e introduced the possibility to use any JDK SocketOption with the NIO transport, but broke the ability to use Netty with Java 6.
Modifications:
Do not use java7 types in method signatures of the static methods in NioChannelOption to prevent class-loader issues on java6.
Result:
Fixes https://github.com/netty/netty/issues/8166.
* Support the usage of SocketOption when nio is used and the java version >= 7.
Motivation:
Since Java 7 the JDK uses SocketOption to support configuration options on the underlying Channel. We should allow creating a ChannelOption from a given SocketOption when NIO is used. This also allows us to expose the same feature set in terms of configuration as the Java NIO implementation does, without any extra effort.
Modifications:
- Add NioChannelOption, which allows wrapping an existing SocketOption so it can be applied to the NIO transport.
- Add test-cases
Result:
Support the same configuration options as the JDK. Also fixes https://github.com/netty/netty/issues/8072.
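Example usage, assuming the NIO transport is used:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.socket.nio.NioChannelOption;
    import java.net.StandardSocketOptions;

    final class NioOptionExample {
        static void configure(ServerBootstrap bootstrap) {
            // Wrap a JDK SocketOption so it can be applied through the usual
            // ChannelOption mechanism; this only works on the NIO transport.
            bootstrap.option(NioChannelOption.of(StandardSocketOptions.SO_REUSEADDR),
                    Boolean.TRUE);
        }
    }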
Motivation:
Some code shown as part of the ChannelHandler javadoc was not 100% correct and used constructs from Netty 3. Also we never called flush() in the code, which is a bad example for users.
Modifications:
- Remove netty 3 code references
- Replace channel.write(...) with ctx.writeAndFlush(...)
Result:
More correct code in the javadocs.
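A minimal example of the corrected style; the handler itself is illustrative:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class EchoBackHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            // Write through the context and flush, instead of the Netty 3
            // style channel.write(...) that never flushes.
            ctx.writeAndFlush(msg);
        }
    }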
Motivation:
Currently, the vast majority of userEventTriggered() implementations
require the user to supply the boilerplate behavior of performing an
instanceof check, handling if appropriate, and calling
fireUserEventTriggered() otherwise.
We can simplify this very common use case by creating a class that only
matches user events of a given type, similar to the existing
SimpleChannelInboundHandler class.
Modifications:
Create a new SimpleUserEventChannelHandler class
Create accompanying SimpleUserEventChannelHandlerTest class
Result:
Users will be able to handle most events in a less verbose manner.
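Example usage; IdleStateEvent is just one possible event type:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.SimpleUserEventChannelHandler;
    import io.netty.handler.timeout.IdleStateEvent;

    public class IdleEventHandler extends SimpleUserEventChannelHandler<IdleStateEvent> {
        @Override
        protected void eventReceived(ChannelHandlerContext ctx, IdleStateEvent evt) {
            // Only IdleStateEvent instances arrive here; all other user
            // events are forwarded down the pipeline automatically.
            ctx.close();
        }
    }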
Motivation:
We use FixedChannelPool in production, and we believe we have a leak that doesn't return sockets to the pool (but they should be closed), thus blocking us from creating new connections when we need them. I haven't confirmed this yet, but right now I have to resort to reflection to access this field which makes me sad.
Modification:
Expose the acquiredChannelCount field through a getter method.
Result:
Allows introspection of the pool size in FixedChannelPool.
Motivation
There is a cost to concatenating strings and calling methods that will be wasted if the Logger's level is not enabled.
Modifications
Check if the log level is enabled before producing the log statement. These are just a few cases found by RegEx'ing the code.
Result
Tiny bit more efficient code.
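The pattern in question, sketched with Netty's internal logger (the message is illustrative):

    import io.netty.util.internal.logging.InternalLogger;
    import io.netty.util.internal.logging.InternalLoggerFactory;

    final class GuardedLogging {
        private static final InternalLogger logger =
                InternalLoggerFactory.getInstance(GuardedLogging.class);

        static void logState(Object state) {
            // Guard the call so the string concatenation only happens when
            // the level is actually enabled.
            if (logger.isDebugEnabled()) {
                logger.debug("current state: " + state);
            }
        }
    }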
Motivation:
If we cannot replace the internally used Set of the Selector, there is no need to create a SelectedSelectionKeySet instance.
Modification:
Only create SelectedSelectionKeySet if we will replace the internal set.
Result:
Less object creation in some cases and cleaner code.
Motivation:
We should allow scheduling tasks with a delay up to Long.MAX_VALUE, as we did before 4.1.25.Final.
Modifications:
Just ensure we do not overflow and put the correct max limits in place when scheduling a timer. At worst we will get a wakeup too early and then schedule a new timeout.
Result:
Fixes https://github.com/netty/netty/issues/7970.
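The essence of the guard as a hedged sketch; names are illustrative:

    final class DeadlineMath {
        static long deadlineNanos(long nowNanos, long delayNanos) {
            long deadline = nowNanos + delayNanos;
            // A negative result means the addition overflowed; clamp to
            // Long.MAX_VALUE. At worst we wake up too early and simply
            // schedule a new timeout.
            return deadline < 0 ? Long.MAX_VALUE : deadline;
        }
    }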
Motivation:
A long time ago we deprecated AUTO_CLOSE, but it turned out this feature is still useful: if a write error is detected there may still be data to read, and if we close the channel automatically we will lose that data.
Modifications:
- Remove `@Deprecated` tag for AUTO_CLOSE, setAutoClose(...) and isAutoClose(...)
- Fix javadocs on ChannelConfig to correctly tell the default value of AUTO_CLOSE.
Result:
Less warnings.
Motivation:
We need to ensure we only return from close() after all work is done as otherwise we may close the EventExecutor before we dispatched everything.
Modifications:
Correctly wait for operations to complete before returning.
Result:
Fixes https://github.com/netty/netty/issues/7901.
Motivation:
We added some code to guard against thread.interrupt() in NioEventLoop but did not add a test.
Modifications:
Add testcase.
Result:
Verify that we correctly handle interrupt().
Motivation:
Closed `FixedChannelPool` fails acquire and release operations with
`IllegalStateException`s. These exceptions had message
"FixedChannelPooled was closed". Here "FixedChannelPooled" looks like
a typo and should probably be "FixedChannelPool".
Modifications:
Changed exception message to "FixedChannelPool was closed".
Result:
A tiny bit cleaner exception message.
Motivation:
ChannelReadHandler is used in the tests added via f4d7e8de14. In the handler we verify the number of messages we receive per read() call, but sometimes missed resetting the counter, which resulted in exceptions.
Modifications:
Correctly reset read counter in all cases.
Result:
No more unexpected exceptions when running LocalChannel tests.
Motivation:
LocalChannel / LocalServerChannel did not respect read limits and just always read all of the messages.
Modifications:
- Correctly respect the MAX_MESSAGES_PER_READ setting
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/7880.
Motivation:
Using a huge delay when calling schedule(...) may cause a Selector error when calling select(...) later on. We should guard against such big values.
Modifications:
- Add a guard against huge values.
- Added tests.
Result:
Fixes #7365.
Motivation:
We need to ensure we only reset readInProgress if the outboundBuffer is not empty, as otherwise we may fail to call fireChannelRead(...) later on when using the LocalChannel.
Modifications:
Also check if the outboundBuffer is not empty before setting readInProgress to false again.
Result:
Fixes https://github.com/netty/netty/issues/7855
Motivation:
Some `if` statements contains common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and more clear.
Motivation:
AbstractNioByteChannel will detect that the remote end of the socket has
been closed and propagate a user event through the pipeline. However if
the user has auto read on, or calls read again, we may propagate the
same user events again. If the underlying transport continuously
notifies us that there is read activity this will happen in a spin loop
which consumes unnecessary CPU.
Modifications:
- AbstractNioByteChannel's unsafe read() should check if the input side
of the socket has been shutdown before processing the event. This is
consistent with EPOLL and KQUEUE transports.
- add unit test with @normanmaurer's help, and make transports consistent with respect to user events
Result:
No more read spin loop in NIO when the channel is half closed.
Motivation:
Sometimes it is very convenient to remove a handler from the pipeline without an exception being thrown in case the handler doesn't exist in the pipeline.
Modification:
Added 3 overloaded methods to DefaultChannelPipeline, but did not add them to ChannelHandler for backward-compatibility reasons.
Result:
Fixes #7662.
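A similar effect can be sketched with the public API (non-atomic and purely illustrative; the commit adds the overloads directly to DefaultChannelPipeline):

    import io.netty.channel.ChannelPipeline;

    final class PipelineHelper {
        static void removeIfPresent(ChannelPipeline pipeline, String name) {
            // context(name) returns null instead of throwing when the
            // handler is absent, so we can remove conditionally. Note this
            // check-then-act is not atomic.
            if (pipeline.context(name) != null) {
                pipeline.remove(name);
            }
        }
    }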
Motivation:
Our code in AbstractNioMessageChannel.closeOnReadError(...) was not correct, which led to the situation that we always tried to continue reading, no matter what exception was thrown, when using the NioServerSocketChannel. Also, even on an IOException we should check if the Channel itself is still active and, if not, stop reading.
Modifications:
Fix the closeOnReadError implementation and add a test.
Result:
Correctly stop reading on the NioServerSocketChannel when an error happens during read.
Motivation:
DefaultChannelGroup.contains(...) did one more instanceof check than needed.
Modifications:
Simplify contains(...) and remove one instanceof check.
Result:
Simpler and cheaper implementation.
Motivation:
Right now PendingWriteQueue.removeAndWriteAll collects all promises into a
PromiseCombiner instance, which sets a listener on each given promise. This
throws an IllegalStateException on a VoidChannelPromise, which breaks the
while loop and "reports" the operation as failed (when in fact part of the
writes might actually have been written).
Modifications:
Check if the promise is not void before adding it to the PromiseCombiner
instance.
Result:
PendingWriteQueue.removeAndWriteAll successfully writes all pending writes
even when a void promise was used.
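The guard, sketched; the real change lives inside PendingWriteQueue:

    import io.netty.channel.ChannelPromise;
    import io.netty.util.concurrent.PromiseCombiner;

    final class CombineHelper {
        static void addIfNotVoid(PromiseCombiner combiner, ChannelPromise promise) {
            // Void promises do not support listeners, so handing them to
            // the PromiseCombiner would throw an IllegalStateException.
            if (!promise.isVoid()) {
                combiner.add(promise);
            }
        }
    }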
Motivation:
The flush task currently uses flush(), which has the effect of having the flush traverse the whole ChannelPipeline and also flush messages that were written since we gave up flushing. This is not really correct, as we should only continue to flush messages that were flushed at the point in time when the flush task was submitted for execution, unless the user explicitly called flush() him/herself.
Modification:
Call *Unsafe.flush0() via the flush task which will only continue flushing messages that were marked as flushed before.
Result:
More correct behaviour when the flush task is used.
Motivation:
b215794de3 recently introduced a change in behavior where writeSpinCount provided a limit for how many write operations were attempted per flush operation. However when the write quantum was met the selector write flag was not cleared, and the channel unsafe flush0 method has an optimization which prematurely exits if the write flag is set. This may lead to no write progress being made under the following scenario:
- flush is called, but the socket can't accept all data, we set the write flag
- the selector wakes us up because the socket is writable, we write data and use the writeSpinCount quantum
- we then schedule a flush() on the EventLoop to execute later, however the flush0 optimization prematurely exits because the write flag is still set
In this scenario the socket is still writable so the EventLoop may never notify us that the socket is writable, and therefore we may never attempt to flush data to the OS.
Modifications:
- When the writeSpinCount quantum is exceeded we should clear the selector write flag
Result:
Fixes https://github.com/netty/netty/issues/7729
Motivation:
NioDatagramChannel attempts to unpack an AddressedEnvelope and unconditionally uses internalNioBuffer. However if the ByteBuf is a CompositeByteBuf with more than one component, the write will fail and throw an exception.
Modifications:
- NioDatagramChannel should check the nioBufferCount before attempting
to use internalNioBuffer
Result:
No more failure to write UDP packets on NIO when a CompositeByteBuf is
used.
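The check, sketched; the helper is illustrative:

    import io.netty.buffer.ByteBuf;
    import java.nio.ByteBuffer;

    final class NioDataHelper {
        static ByteBuffer toNioBuffer(ByteBuf data) {
            int index = data.readerIndex();
            int length = data.readableBytes();
            // internalNioBuffer(...) is only valid when the ByteBuf is
            // backed by exactly one NIO buffer; composites need nioBuffer(...).
            return data.nioBufferCount() == 1
                    ? data.internalNioBuffer(index, length)
                    : data.nioBuffer(index, length);
        }
    }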
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using Java 9+, even though Netty still works. As users may feel uncomfortable with these warnings, we should not try it by default on Java 9+.
Modifications:
Add the io.netty.tryReflectionSetAccessible system property, which controls if setAccessible(...) will be used. By default it will be set to false when using Java 9+.
Result:
Fixes #7254.
Motivation:
The methods implementing io.netty.util.concurrent.Future#cancel(boolean mayInterruptIfRunning) actually ignore the mayInterruptIfRunning param. We need to add comments for the `mayInterruptIfRunning` param.
Modifications:
Add comments for the `mayInterruptIfRunning` param.
Result:
People who call the `cancel` method will be clearer about the effect of the `mayInterruptIfRunning` param.
Motivation:
When VoidChannelPromise.unvoid() was called we created a new ChannelFutureListener every time. This is not needed as it is stateless.
Modifications:
Reuse the ChannelFutureListener.
Result:
Fewer object allocations.
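For example, a stateless listener can live in a static final field and be shared; this sketch is not the actual VoidChannelPromise code:

    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;

    final class SharedListenerExample {
        // One shared instance instead of a fresh allocation per unvoid() call.
        static final ChannelFutureListener FIRE_EXCEPTION_ON_FAILURE =
                new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (!future.isSuccess()) {
                    future.channel().pipeline().fireExceptionCaught(future.cause());
                }
            }
        };
    }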
Motivation:
DefaultChannelPipeline and AbstractChannelHandlerContext maintain state
which indicates if a ChannelHandler should be invoked or not. However
the state is updated to allow the handler to be invoked only after the
handlerAdded method completes. If the handlerAdded method generates
events which may result in other methods being invoked on that handler
they will be missed.
Modifications:
- DefaultChannelPipeline should set the state before calling
handlerAdded
Result:
DefaultChannelPipeline will allow events to be processed during the
handlerAdded process.
Motivation:
We should fail fast when DefaultChannelPromise is constructed with a null Channel, as otherwise it will fail with an NPE once we call setSuccess / setFailure.
Modifications:
Add null check and test.
Result:
Fail fast.
Motivation:
Will allow easy removal of deprecated methods in the future.
Modification:
Replaced ctx.attr(), ctx.hasAttr() with ctx.channel().attr(), ctx.channel().hasAttr().
Result:
No deprecated ctx.attr(), ctx.hasAttr() methods usage.
Motivation:
As shown in issues, it is sometimes hard to understand why a leak was reported when the user just calls EmbeddedChannel.readInbound() / EmbeddedChannel.readOutbound() and drops the message on the floor.
Modifications:
Add a hint before handing over the message to the user and transferring ownership.
Result:
Easier debugging of leaks caused by EmbeddedChannel.read*().
Motivation:
Avoid unnecessary array allocation when using the function with varargs in the DefaultChannelPipeline class.
Modifications:
Added addLast and addFirst overloaded methods with 1 handler instead of varargs.
Result:
No array allocation when using simple construction like pipeline.addLast(new Handler());
Motivation
There is currently no way to enforce the position of a handler in a ChannelPipeline. Assume you wanted to write something like a custom Channel type that acts as a proxy between two other Channels:
ProxyChannel(Channel client, Channel server) {
client calls write(msg) -> server.write(msg)
client calls flush() -> server.flush()
server calls fireChannelRead(msg) -> client.write(msg)
server calls fireChannelReadComplete() -> client.flush()
}
In order to make it work reliably one needs to be able to scoop up the various events at the head and tail of the pipeline. The head side of the pipeline is covered by Unsafe, and it's also relatively safe to count on the user not to use the addFirst() method to manipulate the pipeline. The tail side is always at risk of getting broken because addLast() is the go-to method for adding handlers.
Modifications
Adding a few extra methods to DefaultChannelPipeline that expose some of the events that reach the pipeline's TailContext.
Result
Fixes #7484.
* FIX: force a read operation for peer instead of self
Motivation:
When A is in `writeInProgress` and closes itself, A should call
`finishPeerRead` for B (A's peer).
Modifications:
Call `finishPeerRead` with peer in `LocalChannel#doClose`
Result:
Clears up confusing code logic.
* FIX: preserves order of close after write in same event loop
Motivation:
If the client and server (the client's peer channel) are in the same event
loop and the client writes data to the server in `ChannelActive`, the server
receives the data and writes it back. The client's read can't be triggered
because the client's `ChannelActive` is not finished at this point and its
`readInProgress` is false. Then the server closes itself, which also closes
the client's channel, and the client has no chance to receive the data.
Modifications:
1. Add a test case to demonstrate the problem.
2. When we `doClose` the peer, always call `peer.eventLoop().execute()`;
`registerInProgress` is not needed.
3. Remove the test case
`testClosePeerInWritePromiseCompleteSameEventLoopPreservesOrder`. This
test case can't pass because of this commit. IMHO, I think it is OK,
because it is reasonable that the client flushes the data to the socket and
the server then closes the channel without having received the data.
4. For the mismatch test in SniClientTest, the client should receive the
server's alert before being closed (caused by the server's close).
Result:
The problem is gone.
Motivation:
The writeSpinCount currently loops over the same buffer, gathering
write, file write, or other write operation multiple times but will
continue writing until there is nothing left or the OS doesn't accept
any data for that specific write. However if the OS keeps accepting
writes there is no way to limit how much time we spend on a specific
socket. This can lead to unfair consumption of resources dedicated to a
single socket.
We currently don't limit the amount of bytes we attempt to write per
gathering write. If there are many more bytes pending relative to the
SO_SNDBUF size we will end up building iov arrays with more elements
than can be written, which results in extra iteration, conditionals,
and bookkeeping.
Modifications:
- writeSpinCount should limit the number of system calls we make to
write data, instead of applying to individual write operations
- IovArray should support a maximum number of bytes
- IovArray should support composite buffers of greater than size 1024
- We should auto-scale the amount of data that we attempt to write per
gathering write operation relative to SO_SNDBUF and how much data is
successfully written
- The non-unsafe path should also support a maximum number of bytes,
and respect the IOV_MAX limit
Result:
Write resource consumption can be bounded and gathering writes have
a limit relative to the amount of data which can actually be accepted
by the socket.
Motivation:
If large amounts of data are being transferred it is difficult to correlate the amount we attempt to read with the maximum amount that the OS will actually buffer and deliver to the application. For example some OSes may dynamically update the SO_RCVBUF size or otherwise dynamically adjust how much data is delivered to the application. In these circumstances it can reduce latency to just call read() on the socket another time to see if there is really any data remaining, instead of giving up the maxMessagesPerRead quantum and going back to the selector to read later.
Modifications:
- Add DefaultMaxMessagesRecvByteBufAllocator#respectMaybeMoreData, which provides a way to ignore the maybeMoreData function, which may not account for the data currently pending; even if it does, this may be racy.
Result:
Option to always use the full maxMessagesPerRead quantum before going back to the selector.
Motivation:
SslHandler will do aggregation of writes by default in an attempt to improve goodput and reduce the number of discrete buffers which must be accumulated. However if aggregation is not possible then a CompositeByteBuf is used to accumulate multiple buffers. Using a CompositeByteBuf doesn't provide any of the benefits of better goodput and in the case of small + large writes (e.g. http/2 frame header + data) this can reduce the amount of data that can be passed to writev by about half. This has the impact of increasing latency as well as reducing goodput.
Modifications:
- SslHandler should prefer copying instead of using a CompositeByteBuf
Result:
Better goodput (and potentially improved latency) at the cost of copy operations.
Motivation:
AdaptiveRecvByteBufAllocator currently adjusts the ByteBuf allocation size guess when readComplete is called. However the default configuration for the number of reads before readComplete is called is 16. This means that there will be 16 reads done before any adjustment happens. If there is a large amount of data pending, AdaptiveRecvByteBufAllocator will be slow to adjust the allocation size guess. In addition to being slow, only updating the guess in readComplete means that we must go back to the selector and wait to be woken up again when data is ready to read. Going back to the selector is an expensive operation and can add significant latency if there is a large amount of data pending to read.
Modifications:
- AdaptiveRecvByteBufAllocator should check on each read if a step up is necessary. The step down process is left unchanged and can be more gradual at the cost of potentially over allocating.
Result:
AdaptiveRecvByteBufAllocator increases the guess size during the read loop to reduce latency when large amounts of data are being read.
Motivation:
The Automatic-Module-Name entry provides a stable JDK9 module name when Netty is used in modular JDK9 applications. More info: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
When Netty migrates to JDK9 in the future, the entry can be replaced by an actual module-info descriptor.
Modification:
The POMs are configured to put the correct module names into the manifest.
Result:
Fixes #7218.
Motivation:
`FixedChannelPool` allows users to configure `acquireTimeoutMillis`
and expects the given value to be greater than or equal to zero when a
timeout action is supplied. However, the validation error message said
that the value is expected to be greater than or equal to one, while the
code performs the check against zero.
Modifications:
Changed the error message to say that a value greater than or equal to
zero is expected. Added a test to check that zero is an acceptable
value.
Result:
An exception with the right error message is thrown.