Motivation:
There is a racy UnsupportedOperationException instead, because the task removal is delegated to MpscChunkedArrayQueue, which does not support removal. This happens with SingleThreadEventExecutor subclasses that override newTaskQueue to return an MPSC queue instead of the LinkedBlockingQueue returned by the base class, such as NioEventLoop, EpollEventLoop and KQueueEventLoop.
Modifications:
- Catch the UnsupportedOperationException
- Add unit test.
Result:
Fixes #8475
Motivation:
There are currently many more places where this could be used which were
possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding copy of
backing array if possible (although would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
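For illustration only (not part of the change set), the replacement pattern is roughly the following; the length variable and surrounding code are assumed:

// Before: a zero-filled array is always allocated.
// byte[] tmp = new byte[length];
// After: the zeroing can be skipped on java9+ with appropriate compile flags,
// otherwise this falls back to a plain new byte[length] internally.
byte[] tmp = io.netty.util.internal.PlatformDependent.allocateUninitializedArray(length);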
Motivation:
It has been shown that the test timeout may be too low when the CI is busy.
Modifications:
Increase timeout to 3 seconds.
Result:
Fewer false positives.
Motivation:
Currently we may end up in the situation that we incremented the pending bytes before submitting the AbstractWriteTask but never decrement these again if the submitting of the task fails. This may result in incorrect watermark handling.
Modifications:
- Correctly decrement the pending bytes if submitting the task fails and also ensure we recycle it correctly.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8343.
Motivation:
Unless the 'io.netty.noKeySetOptimization' system property is set,
registering a SelectableChannel instance to a NioEventLoop results
in a ClassCastException:
io.netty.channel.nio.SelectedSelectionKeySetSelector cannot be cast
to java.nio.channels.spi.AbstractSelector
Modifications:
Instead of 'selector', pass 'unwrappedSelector' to SelectableChannel.
Result:
It is possible to register a SelectableChannel instance without
setting the 'io.netty.noKeySetOptimization' system property.
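As a sketch only, the registration call after this change looks roughly like the following; the surrounding doRegister() method and helper names are taken from AbstractNioChannel and assumed here:

// Register with the unwrapped JDK Selector, not the SelectedSelectionKeySetSelector wrapper,
// because the JDK internally casts the Selector to AbstractSelector.
selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);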
Motivation:
Add an option (through a SelectStrategy return code) to have the Netty event loop thread busy-wait on the epoll.
The reason for this change is to avoid the context switch cost that comes when the event loop thread is blocked on the epoll_wait() call.
On average, the context switch has a penalty of ~13usec.
This benefits both:
The latency when reading from a socket
Scheduling tasks to be executed on the event loop thread.
The tradeoff, when enabling this feature, is that the event loop thread will be using 100% cpu, even when inactive.
Modification:
Added SelectStrategy option to return BUSY_WAIT
Epoll loop will do a epoll_wait() with no timeout
Use pause instruction to hint to processor that we're in a busy loop
Result:
When enabled, minimizes impact of context switch in the critical path
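A minimal sketch of a strategy that opts into busy-waiting; the factory is illustrative and assumes it is passed to the event loop group via the existing SelectStrategyFactory hook:

import io.netty.channel.SelectStrategy;
import io.netty.channel.SelectStrategyFactory;
import io.netty.util.IntSupplier;

public final class BusyWaitSelectStrategyFactory implements SelectStrategyFactory {
    @Override
    public SelectStrategy newSelectStrategy() {
        return new SelectStrategy() {
            @Override
            public int calculateStrategy(IntSupplier selectSupplier, boolean hasTasks) throws Exception {
                // If tasks are pending do a non-blocking select, otherwise ask the loop to busy-wait.
                return hasTasks ? selectSupplier.get() : SelectStrategy.BUSY_WAIT;
            }
        };
    }
}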
Motivation:
In Java8 and earlier we used reflection to replace the used key set if not otherwise told. This does not work on Java9 and later without special flags, as it's not possible to call setAccessible(true) on the Field anymore.
Modifications:
- Use Unsafe to instrument the Selector with our special set when sun.misc.Unsafe is present and we are using Java9+.
Result:
The NIO transport produces less GC on Java9 and later as well.
Motivation:
We need to implement remove() by ourselves to make it work on Java7 as otherwise it will throw an AbstractMethodError. This is a followup of c1a335446d.
Modifications:
Just implemented remove()
Result:
Works on Java7 as well.
Motivation:
c1a335446d reimplemented remove(...) and contains(...) in a way which made it not work anymore when used by the Selector.
Modifications:
Partly revert changes in c1a335446d.
Result:
Works again as expected
Motivation:
Our SelectedSelectionKeySet does not correctly implement various methods which can be done without any performance overhead.
Modifications:
Implement iterator(), contains(...) and remove(...)
Result:
Related to https://github.com/netty/netty/issues/8242.
Motivation:
It sometimes confuses people what to use as a replacement for setMaxMessagePerRead(...).
Modifications:
Add some more details to the javadocs about the correct replacement.
Result:
Related to https://github.com/netty/netty/issues/8214.
Motivation:
We had a report that the exception may not be correctly propagated. This test shows it is.
Modifications:
Add testcase.
Result:
Test for https://github.com/netty/netty/issues/8158
Motivation:
There is a JDK bug which will return IP_TOS as a supported option for ServerSocketChannel even if it's not supported afterwards, which causes an AssertionError.
See http://mail.openjdk.java.net/pipermail/nio-dev/2018-August/005365.html.
Modifications:
Add a workaround for the JDK bug.
Result:
ServerSocketChannel.config().getOptions() will not throw anymore and work as expected.
Motivation:
952eeb8e1e introduced the possibility to use any JDK SocketOption when using the NIO transport but broke the possibility to use netty with java6.
Modifications:
Do not use java7 types in method signatures of the static methods in NioChannelOption to prevent class-loader issues on java6.
Result:
Fixes https://github.com/netty/netty/issues/8166.
* Support the usage of SocketOption when nio is used and the java version >= 7.
Motivation:
The JDK uses SocketOption since java7 to support configuration options on the underlying Channel. We should allow creating a ChannelOption from a given SocketOption when nio is used. This also allows us to expose the same featureset in terms of configuration as the java nio implementation does without any extra effort.
Modifications:
- Add NioChannelOption which allows wrapping an existing SocketOption which can then be applied to the nio transport.
- Add test-cases
Result:
Support the same configuration options as the JDK. Also fixes https://github.com/netty/netty/issues/8072.
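A hypothetical usage sketch; the bootstrap wiring is assumed and only NioChannelOption.of(...) comes from this change:

import java.net.StandardSocketOptions;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioChannelOption;
import io.netty.channel.socket.nio.NioSocketChannel;

Bootstrap b = new Bootstrap()
        .group(new NioEventLoopGroup())
        .channel(NioSocketChannel.class)
        // Wrap a JDK SocketOption so it can be applied like any other ChannelOption.
        .option(NioChannelOption.of(StandardSocketOptions.TCP_NODELAY), true);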
Motivation:
Some code that was shown as part of the ChannelHandler javadoc was not 100 % correct and used some constructs that we used in netty 3. Also we never called flush() in the code, which is a bad example for users.
Modifications:
- Remove netty 3 code references
- Replace channel.write(...) with ctx.writeAndFlush(...)
Result:
More correct code in the javadocs.
Motivation:
Currently, the vast majority of userEventTriggered() implementations
require the user to supply the boilerplate behavior of performing an
instanceof check, handling if appropriate, and calling
fireUserEventTriggered() otherwise.
We can simplify this very common use case by creating a class that only
matches user events of a given type, similar to the existing
SimpleChannelInboundHandler class.
Modifications:
Create a new SimpleUserEventChannelHandler class
Create accompanying SimpleUserEventChannelHandlerTest class
Result:
Users will be able to handle most events in a less verbose manner.
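A hypothetical usage sketch; the event type and the reaction to it are illustrative only:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleUserEventChannelHandler;
import io.netty.handler.ssl.SslHandshakeCompletionEvent;

public class HandshakeCompleteHandler extends SimpleUserEventChannelHandler<SslHandshakeCompletionEvent> {
    @Override
    protected void eventReceived(ChannelHandlerContext ctx, SslHandshakeCompletionEvent evt) {
        // Only events of the matched type arrive here; all other user events are forwarded automatically.
        System.out.println("Handshake finished, success=" + evt.isSuccess());
    }
}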
Motivation:
We use FixedChannelPool in production, and we believe we have a leak that doesn't return sockets to the pool (but they should be closed), thus blocking us from creating new connections when we need them. I haven't confirmed this yet, but right now I have to resort to reflection to access this field which makes me sad.
Modification:
Expose the acquiredChannelCount field through a getter method.
Result:
Allows introspection of the pool size in FixedChannelPool.
Motivation
There is a cost to concatenating strings and calling methods that will be wasted if the Logger's level is not enabled.
Modifications
Check if the log level is enabled before producing the log statement. These are just a few cases found by RegEx'ing in the code.
Result
Tiny bit more efficient code.
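For illustration, the guard pattern looks like this; the logger and message are placeholders:

// Skip the string concatenation and the msg.toString() call entirely when DEBUG is disabled.
if (logger.isDebugEnabled()) {
    logger.debug("Discarded message " + msg + " that reached the tail of the pipeline.");
}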
Motivation:
If we cannot replace the internally used Set of the Selector there is no need to create a SelectedSelectionKeySet instance.
Modification:
Only create SelectedSelectionKeySet if we will replace the internal set.
Result:
Less object creation in some cases and cleaner code.
Motivation:
We should allow scheduling tasks with a delay of up to Long.MAX_VALUE as we did before 4.1.25.Final.
Modifications:
Just ensure we do not overflow and put the correct max limits in place when scheduling a timer. At worst we will get a wakeup too early and then schedule a new timeout.
Result:
Fixes https://github.com/netty/netty/issues/7970.
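A minimal sketch of the overflow guard when computing a deadline; the method shape is illustrative:

static long deadlineNanos(long nanoTime, long delayNanos) {
    long deadline = nanoTime + delayNanos;
    // A delay close to Long.MAX_VALUE overflows to a negative value; clamp it so that,
    // at worst, we wake up too early and simply schedule a new timeout.
    return deadline < 0 ? Long.MAX_VALUE : deadline;
}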
Motivation:
A long time ago we deprecated AUTO_CLOSE, but it turned out this feature is still useful, because if a write error is detected there may still be data to read, and if we close the channel automatically we will lose that data.
Modifications:
- Remove `@Deprecated` tag for AUTO_CLOSE, setAutoClose(...) and isAutoClose(...)
- Fix the javadocs on ChannelConfig to correctly state the default value of AUTO_CLOSE.
Result:
Fewer warnings.
Motivation:
We need to ensure we only return from close() after all work is done as otherwise we may close the EventExecutor before we dispatched everything.
Modifications:
Correctly wait on operations to complete before returning.
Result:
Fixes https://github.com/netty/netty/issues/7901.
Motivation:
We added some code to guard against thread.interrupt() in NioEventLoop but did not add a test.
Modifications:
Add testcase.
Result:
Verify that we correctly handle interrupt().
Motivation:
Closed `FixedChannelPool` fails acquire and release operations with
`IllegalStateException`s. These exceptions had the message
"FixedChannelPooled was closed". Here "FixedChannelPooled" looks like
a typo and should probably be "FixedChannelPool".
Modifications:
Changed exception message to "FixedChannelPool was closed".
Result:
A tiny bit cleaner exception message.
Motivation:
ChannelReadHandler is used in tests added via f4d7e8de14. In the handler we verify the number of messages we receive per read() call but sometimes missed resetting the counter, which resulted in exceptions.
Modifications:
Correctly reset read counter in all cases.
Result:
No more unexpected exceptions when running LocalChannel tests.
Motivation:
LocalChannel / LocalServerChannel did not respect read limits and just always read all of the messages.
Modifications:
- Correctly respect the MAX_MESSAGES_PER_READ setting
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/7880.
Motivation:
Using a very large delay when calling schedule(...) may cause a Selector error when calling select(...) later on. We should guard against such a big value.
Modifications:
- Add a guard against very large values.
- Added tests.
Result:
Fixes [#7365]
Motivation:
We need to ensure we only reset readInProgress if the outboundBuffer is not empty, as otherwise we may fail to call fireChannelRead(...) later on when using the LocalChannel.
Modifications:
Also check if the outboundBuffer is not empty before setting readInProgress to false again.
Result:
Fixes https://github.com/netty/netty/issues/7855
Motivation:
Some `if` statements contains common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and more clear.
Motivation:
AbstractNioByteChannel will detect that the remote end of the socket has
been closed and propagate a user event through the pipeline. However if
the user has auto read on, or calls read again, we may propagate the
same user events again. If the underlying transport continuously
notifies us that there is read activity this will happen in a spin loop
which consumes unnecessary CPU.
Modifications:
- AbstractNioByteChannel's unsafe read() should check if the input side
of the socket has been shutdown before processing the event. This is
consistent with EPOLL and KQUEUE transports.
- add unit test with @normanmaurer's help, and make transports consistent with respect to user events
Result:
No more read spin loop in NIO when the channel is half closed.
Motivation:
Sometimes it is very convenient to remove a handler from the pipeline without an exception being thrown in case the handler doesn't exist in the pipeline.
Modification:
Added 3 overloaded methods to DefaultChannelPipeline, but not to ChannelHandler due to backward compatibility.
Result:
Fixes #7662
Motivation:
Our code was not correct in AbstractNioMessageChannel.closeOnReadError(....), which led to the situation that we always tried to continue reading no matter what exception was thrown when using the NioServerSocketChannel. Also even on an IOException we should check if the Channel itself is still active or not and if not stop reading.
Modifications:
Fix the closeOnReadError impl and add a test.
Result:
Correctly stop reading on NioServerSocketChannel when an error happens during a read.
Motivation:
DefaultChannelGroup.contains(...) did one more instanceof check than needed.
Modifications:
Simplify contains(...) and remove one instanceof check.
Result:
Simpler and cheaper implementation.
Motivation:
Right now PendingWriteQueue.removeAndWriteAll collects all promises into a
PromiseCombiner instance, which sets a listener on each given promise, throwing
an IllegalStateException on VoidChannelPromise. This breaks the while loop
and "reports" the operation as failed (when in fact part of the writes might
have actually been written).
Modifications:
Check if the promise is not void before adding it to the PromiseCombiner
instance.
Result:
PendingWriteQueue.removeAndWriteAll successfully writes all pending writes
even in case a void promise was used.
Motivation:
The flush task currently uses flush(), which has the effect of having the flush traverse the whole ChannelPipeline and also flush messages that were written since we gave up flushing. This is not really correct as we should only continue to flush messages that were flushed at the point in time when the flush task was submitted for execution, unless the user explicitly calls flush() themselves.
Modification:
Call *Unsafe.flush0() via the flush task which will only continue flushing messages that were marked as flushed before.
Result:
More correct behaviour when the flush task is used.
Motivation:
b215794de3 recently introduced a change in behavior where writeSpinCount provided a limit for how many write operations were attempted per flush operation. However when the write quantum was met the selector write flag was not cleared, and the channel unsafe flush0 method has an optimization which prematurely exits if the write flag is set. This may lead to no write progress being made under the following scenario:
- flush is called, but the socket can't accept all data, we set the write flag
- the selector wakes us up because the socket is writable, we write data and use the writeSpinCount quantum
- we then schedule a flush() on the EventLoop to execute later, however the flush0 optimization prematurely exits because the write flag is still set
In this scenario the socket is still writable so the EventLoop may never notify us that the socket is writable, and therefore we may never attempt to flush data to the OS.
Modifications:
- When the writeSpinCount quantum is exceeded we should clear the selector write flag
Result:
Fixes https://github.com/netty/netty/issues/7729
Motivation:
NioDatagramChannel attempts to unpack an AddressedEnvelope and unconditionally uses internalNioBuffer. However if the ByteBuf is a CompositeByteBuf with more than 1 component, the write will fail and throw an exception.
Modifications:
- NioDatagramChannel should check the nioBufferCount before attempting
to use internalNioBuffer
Result:
No more failure to write UDP packets on NIO when a CompositeByteBuf is
used.
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using java9+, while netty still works. That said users may feel uncomfortable with these warnings, we should not try to do it by default when using java9+.
Modifications:
Add the io.netty.tryReflectionSetAccessible system property which controls if setAccessible(...) will be used. By default it will be set to false when using java9+.
Result:
Fixes [#7254].
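Users who still want the old behaviour can opt back in on the command line, for example:

-Dio.netty.tryReflectionSetAccessible=true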
Motivation:
The methods implement io.netty.util.concurrent.Future#cancel(boolean mayInterruptIfRunning), which actually ignores the param mayInterruptIfRunning. We need to add comments for the `mayInterruptIfRunning` param.
Modifications:
Add comments for the `mayInterruptIfRunning` param.
Result:
People who call the `cancel` method will be clearer about the effect of the `mayInterruptIfRunning` param.
Motivation:
When VoidChannelPromise.unvoid() was called we created a new ChannelFutureListener every time. This is not needed as it's stateless.
Modifications:
Reuse the ChannelFutureListener.
Result:
Fewer object allocations
Motivation:
DefaultChannelPipeline and AbstractChannelHandlerContext maintain state
which indicates if a ChannelHandler should be invoked or not. However
the state is updated to allow the handler to be invoked only after the
handlerAdded method completes. If the handlerAdded method generates
events which may result in other methods being invoked on that handler
they will be missed.
Modifications:
- DefaultChannelPipeline should set the state before calling
handlerAdded
Result:
DefaultChannelPipeline will allow events to be processed during the
handlerAdded process.
Motivation:
We should fail fast when DefaultChannelPromise is constructed with null as the Channel, as otherwise it will fail with an NPE once we call setSuccess / setFailure.
Modifications:
Add null check and test.
Result:
Fail fast.
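A minimal sketch of the fail-fast check, assuming ObjectUtil.checkNotNull is used; the constructor body is illustrative:

public DefaultChannelPromise(Channel channel) {
    // Throw a descriptive NPE here instead of failing later in setSuccess()/setFailure().
    this.channel = io.netty.util.internal.ObjectUtil.checkNotNull(channel, "channel");
}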
Motivation:
Will allow easy removal of deprecated methods in future.
Modification:
Replaced ctx.attr(), ctx.hasAttr() with ctx.channel().attr(), ctx.channel().hasAttr().
Result:
No deprecated ctx.attr(), ctx.hasAttr() methods usage.
Motivation:
As shown in issues it is sometimes hard to understand why a leak was reported when the user just calls EmbeddedChannel.readInbound() / EmbeddedChannel.readOutbound() and drops the message on the floor.
Modifications:
Add a hint before handing over the message to the user and transferring the ownership.
Result:
Easier debugging of leaks caused by EmbeddedChannel.read*().
Motivation :
Avoid unnecessary array allocation when using the function with varargs in the DefaultChannelPipeline class.
Modifications :
Added addLast and addFirst overloaded methods with 1 handler instead of varargs.
Result :
No array allocation when using simple construction like pipeline.addLast(new Handler());
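For illustration, such an overload could simply delegate to the existing named variant (the delegation target is assumed), avoiding the Object[] that a varargs call allocates:

public final ChannelPipeline addLast(ChannelHandler handler) {
    // Delegates to addLast(String name, ChannelHandler handler); a null name is auto-generated.
    return addLast((String) null, handler);
}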
Motivation
There is currently no way to enforce the position of a handler in a ChannelPipeline. Assume you wanted to write something like a custom Channel type that acts as a proxy between two other Channels.
ProxyChannel(Channel client, Channel server) {
client calls write(msg) -> server.write(msg)
client calls flush() -> server.flush()
server calls fireChannelRead(msg) -> client.write(msg)
server calls fireChannelReadComplete() -> client.flush()
}
In order to make it work reliably one needs to be able to scoop up the various events at the head and tail of the pipeline. The head side of the pipeline is covered by Unsafe and it's also relatively safe to count on the user to not use the addFirst() method to manipulate the pipeline. The tail side is always at risk of getting broken because addLast() is the go-to method for adding handlers.
Modifications
Adding a few extra methods to DefaultChannelPipeline that expose some of the events that reach the pipeline's TailContext.
Result
Fixes #7484
* FIX: force a read operation for peer instead of self
Motivation:
When A is in `writeInProgress` and closes itself, A should call
`finishPeerRead` for B (A's peer).
Modifications:
Call `finishPeerRead` with peer in `LocalChannel#doClose`
Result:
Clearer code logic.
* FIX: preserves order of close after write in same event loop
Motivation:
If client and server (client's peer channel) are in the same event loop, the client writes data to
the server in `ChannelActive`. The server receives the data and writes it
back. The client's read can't be triggered because the client's
`ChannelActive` is not finished at this point and its `readInProgress`
is false. Then the server closes itself, which will also close the client's
channel. And the client has no chance to receive the data.
Modifications:
1. Add a test case to demonstrate the problem
2. When we `doClose` the peer, we always call
`peer.eventLoop().execute()` and `registerInProgress` is not needed.
3. Remove test case
`testClosePeerInWritePromiseCompleteSameEventLoopPreservesOrder`. This
test case can't pass because of this commit. IMHO, I think it is OK,
because it is reasonable that the client flushes the data to the socket,
then the server closes the channel without having received the data.
4. For the mismatch test in SniClientTest, the client should receive the server's alert before being closed (caused by the server's close)
Result:
The problem is gone.
Motivation:
The writeSpinCount currently loops over the same buffer, gathering
write, file write, or other write operation multiple times but will
continue writing until there is nothing left or the OS doesn't accept
any data for that specific write. However if the OS keeps accepting
writes there is no way to limit how much time we spend on a specific
socket. This can lead to unfair consumption of resources dedicated to a
single socket.
We currently don't limit the amount of bytes we attempt to write per
gathering write. If there are many more bytes pending relative to the
SO_SNDBUF size we will end up building iov arrays with more elements
than can be written, which results in extra iteration, conditionals,
and bookkeeping.
Modifications:
- writeSpinCount should limit the number of system calls we make to
write data, instead of applying to individual write operations
- IovArray should support a maximum number of bytes
- IovArray should support composite buffers of greater than size 1024
- We should auto-scale the amount of data that we attempt to write per
gathering write operation relative to SO_SNDBUF and how much data is
successfully written
- The non-unsafe path should also support a maximum number of bytes,
and respect the IOV_MAX limit
Result:
Write resource consumption can be bounded and gathering writes have
a limit relative to the amount of data which can actually be accepted
by the socket.
Motivation:
If large amounts of data are being transferred it is difficult to correlate the amount we attempt to read vs the maximum amount that the OS will actually buffer and deliver to the application. For example some OSes may dynamically update the SO_RCVBUF size or otherwise dynamically adjust how much data is delivered to the application. In these circumstances it can reduce latency to just call read() on the socket another time to see if there is really any data remaining instead of giving up the maxMessagesPerRead quantum and going back to the selector to read later.
Modifications:
- Add DefaultMaxMessagesRecvByteBufAllocator#respectMaybeMoreData which provides a way to ignore the maybeMoreData function, which may not account for the current data pending, and even if it does, this may be racy.
Result:
Option to always use the full maxMessagesPerRead quantum before going back to the selector.
Motivation:
SslHandler will do aggregation of writes by default in an attempt to improve goodput and reduce the number of discrete buffers which must be accumulated. However if aggregation is not possible then a CompositeByteBuf is used to accumulate multiple buffers. Using a CompositeByteBuf doesn't provide any of the benefits of better goodput and in the case of small + large writes (e.g. http/2 frame header + data) this can reduce the amount of data that can be passed to writev by about half. This has the impact of increasing latency as well as reducing goodput.
Modifications:
- SslHandler should prefer copying instead of using a CompositeByteBuf
Result:
Better goodput (and potentially improved latency) at the cost of copy operations.
Motivation:
AdaptiveRecvByteBufAllocator currently adjusts the ByteBuf allocation size guess when readComplete is called. However the default configuration for the number of reads before readComplete is called is 16. This means that there will be 16 reads done before any adjustment is done. If there is a large amount of data pending AdaptiveRecvByteBufAllocator will be slow to adjust the allocation size guess. In addition to being slow, the result of only updating the guess in readComplete means that we must go back to the selector and wait to be woken up again when data is ready to read. Going back to the selector is an expensive operation and can add significant latency if there is a large amount of data pending to read.
Modifications:
- AdaptiveRecvByteBufAllocator should check on each read if a step up is necessary. The step down process is left unchanged and can be more gradual at the cost of potentially over allocating.
Result:
AdaptiveRecvByteBufAllocator increases the guess size during the read loop to reduce latency when large amounts of data are being read.
Motivation:
Automatic-Module-Name entry provides a stable JDK9 module name, when Netty is used in modular JDK9 applications. More info: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
When Netty migrates to JDK9 in the future, the entry can be replaced by actual module-info descriptor.
Modification:
The POM-s are configured to put the correct module names to the manifest.
Result:
Fixes #7218.
Motivation:
`FixedChannelPool` allows users to configure `acquireTimeoutMillis`
and expects the given value to be greater than or equal to zero when a
timeout action is supplied. However, the validation error message said
that the value is expected to be greater than or equal to one, while the
code performs the check against zero.
Modifications:
Changed the error message to say that a value greater than or equal to
zero is expected. Added a test to check that zero is an acceptable
value.
Result:
An exception with the right error message is thrown.
Motivation:
AbstractCoalescingBufferQueue#add accounts for void promises, but AbstractCoalescingBufferQueue#addFirst does not. These methods should be consistent.
Modifications:
- AbstractCoalescingBufferQueue#addFirst should account for void promises and share code with AbstractCoalescingBufferQueue#add
Result:
More correct void promise handling in AbstractCoalescingBufferQueue.
Motivation:
SslHandler removes a Buffer/Promise pair from
AbstractCoalescingBufferQueue when wrapping data. However it is possible
the SSLEngine will not consume the entire buffer. In this case
SslHandler adds the Buffer back to the queue, but doesn't add the
Promise back to the queue. This may result in the promise completing
immediately in finishFlush, and generally not correlating to the
completion of writing the corresponding Buffer
Modifications:
- AbstractCoalescingBufferQueue#addFirst should also support adding the
ChannelPromise
- In the event of a handshake timeout we should fail pending
writes immediately to get a more accurate exception
Result:
Fixes https://github.com/netty/netty/issues/7378.
Motivation:
We need to set readPending to false when we detect an EOF while issuing a read, as otherwise we may not unregister from the Selector / Epoll / KQueue and so keep on receiving wakeups.
The important bit is that we may even get a wakeup for a read event but will still only be able to read 0 bytes from the socket, so we need to be very careful when we clear readPending. This can happen because we generally use edge-triggered mode for our native transports, and because of the nature of edge-triggered we may schedule a read event just to find out there is nothing left to read atm (because we completely drained the socket on the previous read).
Modifications:
Set readPending to false when EOF is detected.
Result:
Fixes [#7255].
Motivation:
HTTP/2 allows writes of 0 length data frames. However in some cases EMPTY_BUFFER is used instead of the actual buffer that was written. This may mask writes of released buffers or otherwise invalid buffer objects. It is also possible that if the buffer is invalid AbstractCoalescingBufferQueue will not release the aggregated buffer nor fail the associated promise.
Modifications:
- DefaultHttp2FrameCodec should take care to fail the promise, even if releasing the data throws
- AbstractCoalescingBufferQueue should release any aggregated data and fail the associated promise if something goes wrong during aggregation
Result:
More correct handling of invalid buffers in HTTP/2 code.
This reverts commit 413c7c2cd8 as it introduced a regression when edge-triggered mode is used, which is true for our native transports by default. With 413c7c2cd8 included it was possible that we set readPending to false by mistake even though we would be interested in reading more.
Motivation:
readPending is currently only set to false if data is delivered to the application, however this may result in duplicate events being received from the selector in the event that the socket was closed.
Modifications:
- We should set readPending to false before each read attempt for all
transports besides NIO.
- Based upon the Javadocs it is possible that NIO may have spurious
wakeups [1]. In this case we should be more cautious and only set
readPending to false if data was actually read.
[1] https://docs.oracle.com/javase/7/docs/api/java/nio/channels/SelectionKey.html
That a selection key's ready set indicates that its channel is ready for some operation category is a hint, but not a guarantee, that an operation in such a category may be performed by a thread without causing the thread to block.
Result:
Notification from the selector (or simulated events from kqueue/epoll ET) in the event of socket closure.
Fixes https://github.com/netty/netty/issues/7255
Motivation:
A regression was introduced in 86e653e which had the effect that the writability was not updated for a Channel while queueing data in the SslHandler.
Modifications:
- Factor out code that will increment / decrement pending bytes and use it in AbstractCoalescingBufferQueue and PendingWriteQueue
- Add test-case
Result:
Channel writability changes are triggered again.
Motivation:
Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.
Modifications:
Add the missing 'serialVersionUID' field for all Serializable
classes.
Result:
Proper deserialization of previously serialized objects.
Motivation:
There are many @SuppressWarnings("unchecked") annotations in the code, all for the same purpose: we want to do this return:
@SuppressWarnings("unchecked")
public B someMethod() {
......
return (B) this;
}
Modification:
Add a method self() and reuse in all these return lines:
@SuppressWarnings("unchecked")
private B self() {
return (B) this;
}
Result:
Then only one @SuppressWarnings("unchecked") is left in the code.
Motivation:
When SO_LINGER is used we run doClose() on the GlobalEventExecutor by default, so we need to ensure we schedule all code that needs to be run on the EventLoop on the EventLoop in doClose. Besides this there are also threading issues when calling shutdownOutput(...).
Modifications:
- Schedule removal from EventLoop to the EventLoop
- Correctly handle shutdownOutput and shutdown in respect with threading-model
- Add unit tests
Result:
Fixes [#7159].
Motivation:
A `DefaultChannelId` has a final `hashCode` field calculated in the constructor. We can use it in `equals` for a fast return for different objects.
Modifications:
Use `hashCode` field in `DefaultChannelId.equals()`.
Result:
Fast `equals` on negative scenarios.
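A sketch of the fast negative path; the data field name is assumed (java.util.Arrays is needed for the array comparison):

@Override
public boolean equals(Object obj) {
    if (obj == this) {
        return true;
    }
    if (!(obj instanceof DefaultChannelId)) {
        return false;
    }
    DefaultChannelId other = (DefaultChannelId) obj;
    // The precomputed hashCode differs for most non-equal ids, so the array comparison is usually skipped.
    return hashCode == other.hashCode && Arrays.equals(data, other.data);
}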
Motivation:
We should not log by default if the promise is a VoidChannelPromise as its try* methods will always return false.
Modifications:
Do an instanceof check to determine if we should log or not by default
Result:
No more noise in the logs when using a VoidChannelPromise.
Motivation:
If AutoClose is false and there is an IOException then AbstractChannel will not close the channel but instead just fail the flushed elements in the ChannelOutboundBuffer. AbstractChannel also notifies of writability changes, which may lead to an infinite loop if the peer has closed its read side of the socket because we will keep accepting more data but continuously fail because the peer isn't accepting writes.
Modifications:
- If the transport throws on a write we should acknowledge that the output side of the channel has been shutdown and cleanup. If the channel can't accept more data because it is full, and still healthy it is not expected to throw. However if the channel is not healthy it will throw and is not expected to accept any more writes. In this case we should shutdown the output for Channels that support this feature and otherwise just close.
- Connection-less protocols like UDP can remain the same because the channel may be disconnected temporarily.
- Make sure AbstractUnsafe#shutdownOutput is called because the shutdown on the socket may throw an exception.
Result:
More correct handling of write failure when AutoClose is false.
Motivation:
ShutdownOutput now fails all pending writes in the ChannelOutboundBuffer and sets it to null. However the Close code path uses the ChannelOutboundBuffer as an indication that the close operation is in progress and exits early and will not call doClose. This will lead to the Channel not actually being fully closed.
Bug introduced by 237a4da1b7
Modifications:
- AbstractChannel#close shouldn't exit early just because outboundBuffer is null, and instead should use additional state closeInitiated to avoid duplicate close operations
Result:
AbstractChannel#close(..) after AbstractChannel#shutdownOutbound() will still invoke doClose and cleanup Channel state.
Motivation:
Continuing to make netty happy when compiling through errorprone.
Modification:
Mostly comments, some minor switch statement changes.
Result:
No more compiler errors!
Motivation:
Calling `newInstance()` on a Class object can bypass compile time
checked Exception propagation. This is noted in Java Puzzlers,
as well as in ErrorProne:
http://errorprone.info/bugpattern/ClassNewInstance
Modifications:
Use the niladic constructor to create a new instance.
Result:
Compile time safety for checked exceptions
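For illustration, one common replacement (per the linked ErrorProne bug pattern) is to go through the no-arg constructor so checked exceptions surface explicitly; handlerClass is a placeholder:

// Before: handlerClass.newInstance() can propagate undeclared checked exceptions.
// After: checked exceptions show up as ReflectiveOperationException and must be handled.
ChannelHandler handler = handlerClass.getDeclaredConstructor().newInstance();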
Motivation:
Our http2 child channel implementation was not 100 % complete and had a few bugs. Besides this the performance overhead was non-trivial.
Modifications:
There are a lot of modifications, the most important....
* Http2FrameCodec extends Http2ConnectionHandler and Http2MultiplexCodec extends Http2FrameCodec to reduce performance overhead and inter-dependencies on handlers in the pipeline
* Correctly handle outbound flow control for child channels
* Support unknown frame types in Http2FrameCodec and Http2MultiplexCodec
* Use a consistent way to create Http2ConnectionHandler, Http2FrameCodec and Http2MultiplexCodec (via a builder)
* Remove Http2Codec and Http2CodecBuilder as the user should just use Http2MultiplexCodec and Http2MultiplexCodecBuilder now
* Smart handling of flushes from child channels to reduce overhead
* Reduce object allocations
* child channels always use the same EventLoop as the parent Channel to reduce overhead and simplify implementation.
* Not extend AbstractChannel for the child channel implementation to reduce overhead in terms of performance and memory usage
* Remove Http2FrameStream.managedState(...) as the user of the child channel api should just use Channel.attr(...)
Result:
Http2MultiplexCodec (and so child channels) and Http2FrameCodec are more correct, faster and more feature complete.
Motivation:
This PR (unfortunately) does 4 things:
1) Add outbound flow control to the Http2MultiplexCodec:
The HTTP/2 child channel API should interact with HTTP/2 outbound/remote flow control. That is,
if a H2 stream used up all its flow control window, the corresponding child channel should be
marked unwritable and a writability-changed event should be fired. Similarly, an unwritable
child channel should be marked writable and a writability-changed event should be fired, once a
WINDOW_UPDATE frame has been received. The changes are (mostly) contained in ChannelOutboundBuffer,
AbstractHttp2StreamChannel and Http2MultiplexCodec.
2) Introduce a Http2Stream2 object, that is used instead of stream identifiers on stream frames. A
Http2Stream2 object allows an application to attach state to it, and so an application handler
no longer needs to maintain stream state (i.e. in a map(id -> state)) itself.
3) Remove stream state events, which are no longer necessary due to the introduction of Http2Stream2.
Also those stream state events have been found hard and complex to work with, when porting gRPC
to the Http2FrameCodec.
4) Add support for HTTP/2 frames that have not yet been implemented, like PING and SETTINGS. Also add
a Http2FrameCodecBuilder that exposes options from the Http2ConnectionHandler API that couldn't else
be used with the frame codec, like buffering outbound streams, window update ratio, frame logger, etc.
Modifications:
1) A child channel's writability and a H2 stream's outbound flow control window interact, as described
in the motivation. A channel handler is free to ignore the channel's writability, in which case the
parent channel is responsible for buffering writes until a WINDOW_UPDATE is received.
The connection-level flow control window is ignored for now. That is, a child channel's writability
is only affected by the stream-level flow control window. So a child channel could be marked writable,
even though the connection-level flow control window is zero.
2) Modify Http2StreamFrame and the Http2FrameCodec to take a Http2Stream2 object instead of a primitive
integer. Introduce a special Http2ChannelDuplexHandler that has newStream() and forEachActiveStream()
methods. It's recommended for a user to extend from this handler, to use those advanced features.
3) As explained in the documentation, a new inbound stream active can be detected by checking if the
Http2Stream2.managedState() of a Http2HeadersFrame is null. An outbound stream active can be detected
by adding a listener to the ChannelPromise of the write of the first Http2HeadersFrame. A stream
closed event can be listened to by adding a listener to the Http2Stream2.closeFuture().
4) Add a simple Http2FrameCodecBuilder and implement the missing frame types.
Result:
1) The Http2MultiplexCodec supports outbound flow control.
2) The Http2FrameCodec API makes it easy for a user to manage custom stream specific state and to create
new outbound streams.
3) The Http2FrameCodec API is much cleaner and easier to work with. Hacks like the ChannelCarryingHeadersFrame
are no longer necessary.
4) The Http2FrameCodec now also supports PING and SETTINGS frames. The Http2FrameCodecBuilder allows the Http2FrameCodec
to use some of the rich features of the Http2ConnectionHandler API.
Motivation:
When using the OIO transport we need to act on byte[] when writing and reading from / to the underlying Socket. So we should ensure we use heap buffers by default to reduce memory copies.
Modifications:
Ensure we prefer heap buffers by default for the OIO transport.
Result:
Possibly fewer memory copies.
Motivation:
We need to ensure we always null out (or set) the address on the java.net.DatagramPacket when doing read or write operations as the same instance is used across different calls.
Modifications:
Null out the address if needed.
Result:
Ensure the correct remote address is used when connecting / disconnecting between calls and also when mixing these with calls that directly specify the remote address for datagram packets.
Motivation:
We need to support SO_TIMEOUT for the OioDatagramChannel but we miss this atm as we do not have special handling for it in the DatagramChannelConfig impl that we use. Because of this the following log line showed up when running the testsuite:
20:31:26.299 [main] WARN io.netty.bootstrap.Bootstrap - Unknown channel option 'SO_TIMEOUT' for channel '[id: 0x7cb9183c]'
Modifications:
- Add OioDatagramChannelConfig and impl
- Correctly set SO_TIMEOUT in testsuite
Result:
Support SO_TIMEOUT for OioDatagramChannel and so faster execution of datagram related tests in the testsuite
Motivation:
Implementations of DuplexChannel delegate the shutdownOutput to the underlying transport, but do not take any action on the ChannelOutboundBuffer. In the event of a write failure due to the underlying transport failing, an application may attempt to shutdown the output and allow the read side of the transport to finish and detect the close. However this may result in an issue where writes are failed, this generates a writability change, we continue to write more data, and this may lead to another writability change, and this loop may continue. Shutting down the output should fail all pending writes and not allow any future writes to avoid this scenario.
Modifications:
- Implementations of DuplexChannel should null out the ChannelOutboundBuffer and fail all pending writes
Result:
More controlled sequencing for shutting down the output side of a channel.
Motivation:
When a user called ctx.close() and used the EmbeddedChannel we did not correctly run all pending tasks which means channelInactive was never called.
Modifications:
Ensure we run all pending tasks after all operations that may change the Channel state and are part of the Channel.Unsafe impl.
Result:
Fixes [#6894].
Motivation:
ErrorProne complains that the array override doesn't match the
vararg super call. See http://errorprone.info/bugpattern/Overrides
Additionally, almost every other Future uses the vararg form, so
it would be stylistically consistent to keep it that way.
Modifications:
Use vararg override.
Result:
Cleaner, less naggy code.
Motivation:
DefaultChannelPipeline.estimatorHandle needs to be volatile as it's accessed from different threads.
Modifications:
Make DefaultChannelPipeline.estimatorHandle volatile and correctly init it via CAS
Result:
No more race.
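A sketch of the volatile field plus CAS-based lazy initialization; the field, updater and channel names are assumed (java.util.concurrent.atomic.AtomicReferenceFieldUpdater):

private static final AtomicReferenceFieldUpdater<DefaultChannelPipeline, MessageSizeEstimator.Handle> ESTIMATOR =
        AtomicReferenceFieldUpdater.newUpdater(
                DefaultChannelPipeline.class, MessageSizeEstimator.Handle.class, "estimatorHandle");

private volatile MessageSizeEstimator.Handle estimatorHandle;

final MessageSizeEstimator.Handle estimatorHandle() {
    MessageSizeEstimator.Handle handle = estimatorHandle;
    if (handle == null) {
        handle = channel.config().getMessageSizeEstimator().newHandle();
        // Another thread may have won the race; in that case use its handle and drop ours.
        if (!ESTIMATOR.compareAndSet(this, null, handle)) {
            handle = estimatorHandle;
        }
    }
    return handle;
}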
Motivation:
We previously used pollLast() to retrieve a Channel from the queue that backs SimpleChannelPool. This could lead to the problem that some Channels are used very infrequently and so, when these are finally used, the connection was already closed and so could not be reused.
Modifications:
Allow configuring whether the most recently used Channel should be used or the "oldest".
Result:
More flexible usage of ChannelPools
Motivation:
Some ChannelOptions must be set before the Channel is really registered to have the desired effect.
Modifications:
Add another constructor argument which allows not registering the EmbeddedChannel to its EventLoop until the user calls register().
Result:
More flexible usage of EmbeddedChannel. Also Fixes [#6968].
Motivation:
We recently had a report that the issue [#6607] is still not fixed.
Modifications:
Add a testcase to prove the issue is fixed.
Result:
More tests.
Motivation:
JCTools 2.0.2 provides an unbounded MPSC linked queue. Before we shaded JCTools we had our own unbounded MPSC linked queue and used it in various places but gave this up because there was no public equivalent available in JCTools at the time.
Modifications:
- Use JCTool's MPSC linked queue when no upper bound is specified
Result:
Fixes https://github.com/netty/netty/issues/5951
Motivation:
Each call to SSL_write may introduce about ~100 bytes of overhead. The OpenSslEngine (based upon OpenSSL) is not able to do gathering writes so this means each wrap operation will incur the ~100 byte overhead. This commit attempts to increase goodput by aggregating the plaintext in chunks of <a href="https://tools.ietf.org/html/rfc5246#section-6.2">2^14</a>. If many small chunks are written this can increase goodput, decrease the amount of calls to SSL_write, and decrease overall encryption operations.
Modifications:
- Introduce SslHandlerCoalescingBufferQueue in SslHandler which will aggregate up to 2^14 chunks of plaintext by default
- Introduce SslHandler#setWrapDataSize to control how much data should be aggregated for each write. Aggregation can be disabled by setting this value to <= 0.
Result:
Better goodput when using SslHandler and the OpenSslEngine.
Motivation:
The behaviour of FixedChannelPool.release was inconsistent with the
SimpleChannelPool implementation, in which the given promise is returned.
In the FixedChannelPool implementation a new promise was returned and
this meant that the completion of that promise could be different.
Specifically, on releasing a channel to a closed pool, the parameter
promise is failed with an IllegalStateException but the returned one
will have been successful (as it was completed by the call to
super.release).
Modification:
Return the given promise as the result of FixedChannelPool.release
Result:
Returned promise will reflect the result of the release operation.
Motivation:
Channels returned to a FixedChannelPool after closing it remain active.
Since channels that were acquired from the pool are not closed during the close operation, they remain open even after being released back to the pool, where they are then inaccessible and become in effect a connection leak.
Modification:
Close the released channel on releasing back to a closed pool.
Result:
Much harder to create a connection leak by closing an active
FixedChannelPool instance.
Motivation:
We should not fail the promise when a closed Channel is offered back to the ChannelPool as we explicitly mention that the Channel must always be returned.
Modifications:
- Do not fail the promise
- Add test-case
Result:
Fixes [#6831]
Motivation:
ChannelPipeline will happily add a handler to a closed Channel's pipeline and will call handlerAdded(...) but will not call handlerRemoved(...).
Modifications:
Check if the pipeline was destroyed and if so do not add the handler at all but propagate an exception.
Result:
Fixes [#6768]
Motivation:
We currently don't have a native transport which supports kqueue https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2. This can be useful for BSD systems such as MacOS to take advantage of native features, and provide feature parity with the Linux native transport.
Modifications:
- Make a new transport-native-unix-common module with all the java classes and JNI code for generic unix items. This module will build a static library for each unix platform, and be included in the dynamic libraries used for JNI (e.g. transport-native-epoll, and eventually kqueue).
- Make a new transport-native-unix-common-tests module where the tests for the transport-native-unix-common module will live. This is so each unix platform can inherit from these tests and ensure they pass.
- Add a new transport-native-kqueue module which uses JNI to directly interact with kqueue
Result:
JNI support for kqueue.
Fixes https://github.com/netty/netty/issues/2448
Fixes https://github.com/netty/netty/issues/4231
This fixes #6652.
Rationale
The invocation of initChannel of ChannelInitializer has been moved to as
early as when handlerAdded is invoked in 26aa34853, whereas before that
it was only invoked when channelRegistered was invoked. So the comment
does not describe how handlers are added in normal circumstances
anymore.
However, the code is kept as-is since there might be unusual cases, and
adding ServerBootstrapAcceptor via the event loop is always safe to
enforce the correct order.
Motivation:
In cases when an application is running in a container or is otherwise
constrained to the number of processors that it is using, the JVM
invocation Runtime#availableProcessors will not return the constrained
value but rather the number of processors available to the virtual
machine. Netty uses this number in sizing various resources.
Additionally, some applications will constrain the number of threads
that they are using independently of the number of processors available
on the system. Thus, applications should have a way to globally
configure the number of processors.
Modifications:
Rather than invoking Runtime#availableProcessors, Netty should rely on a
method that enables configuration when the JVM is started or by the
application. This commit exposes a new class NettyRuntime for enabling
such configuration. This value can only be set once. Its default value
is Runtime#availableProcessors so that there is no visible change to
existing applications, but enables configuring either a system property
or configuring during application startup (e.g., based on settings used
to configure the application).
Additionally, we introduce the usage of forbidden-apis to prevent future
uses of Runtime#availableProcessors from creeping in. Future work should
enable the bundled signatures and clean up uses of deprecated and
other forbidden methods.
Result:
Netty can be configured to not use the underlying number of processors,
but rather the constrained number of processors.
Motivation:
We need to release all the buffers that may have been put into our inbound queue since we closed the Channel, to ensure we do not leak any memory. This is fine as it basically gives the same guarantees as TCP, which means even if the promise was notified before, it's not really guaranteed that the "remote peer" will see the buffer at all.
Modifications:
Ensure we release all buffers in the inbound buffer if a doClose() is called.
Result:
No more leaks.
Motivation:
1. The use of InternetProtocolFamily is not consistent:
the DnsNameResolverContext and DnsNameResolver contain switches
instead of using the appropriate methods.
2. The InternetProtocolFamily class contains redundant switches in the
constructor.
Modifications:
1. Replacing switches with the use of appropriate methods.
2. Simplifying the InternetProtocolFamily constructor.
Result:
Code is cleaner and simpler.
Motivation:
When a VoidChannelPromise is used by the user we need to ensure we propagate the exception through the ChannelPipeline, as otherwise the exception will just be swallowed and the user has no idea what's going on.
Modifications:
- Always call tryFailure / trySuccess even when we use the VoidChannelPromise
- Add unit test
Result:
Fixes [#6622].
Motivation:
Commit 795f318 simplified some code related to the special case Set for the selected keys and introduced a Selector wrapper to make sure this set was properly reset. However the JDK makes assumptions about the type of Selector and this type is not extensible. This means whenever we call into the JDK we must provide the unwrapped version of the Selector or we get a ClassCastException. We missed a case of unwrapping in NioEventLoop#rebuildSelector0.
Modifications:
- NioEventLoop#openSelector should return a tuple so we can atomically set the wrapped and unwrapped Selector
- NioEventLoop#rebuildSelector0 should use the unwrapped version of the selector
Result:
Fixes https://github.com/netty/netty/issues/6607.
Motivation:
The code accidentally passes channel twice instead of value, resulting in logs like:
Failed to set channel option 'SO_SNDBUF' with value '[id: 0x2c5b2eb4]' for channel '[id: 0x2c5b2eb4]'
Modifications:
Pass value instead of channel where it needs to be.
Result:
Failed to set channel option 'SO_SNDBUF' with value '0' for channel '[id: 0x9bd3c5b8]'
Motivation:
We forked a new process to detect if the program is run by root. We should rather just use the user.name system property.
Modifications:
- Change PlatformDependent.isRoot0() to read the user.name system property to detect if root runs the program and rename it to maybeSuperUser0().
- Rename PlatformDependent.isRoot() to maybeSuperUser() and let it init directly in the static block
Result:
Less heavy way to detect if the program is run by root.
Motivation:
Make the FileRegion comments about which transports are supported more accurate.
Also, eliminate any outstanding references to FileRegion.transfered as the method was renamed for spelling.
Modifications:
Update the class-level comment on FileRegion and call the renamed method.
Result:
More accurate documentation and fewer calls to deprecated methods.
Motivation:
There are numerous usages of internalNioBuffer which hard code 0 for the index when the intention was to use the readerIndex().
Modifications:
- Remove hard coded 0 for the index and use readerIndex()
Result:
We are less susceptible to using the wrong index, and don't make assumptions about the ByteBufAllocator.
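For illustration, the corrected call shape; buf is a placeholder ByteBuf:

// Before: buf.internalNioBuffer(0, buf.readableBytes()) silently assumes readerIndex() == 0.
java.nio.ByteBuffer nioBuffer = buf.internalNioBuffer(buf.readerIndex(), buf.readableBytes());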
Motivation:
Calling a static method is faster than a dynamic one.
Modifications:
Add the 'static' keyword to methods where it was missing.
Result:
Slightly faster method calls.
Motivation:
When "Too many open files" happens,the URLClassLoader cannot do any classloading because URLClassLoader need a FD for findClass. Because of this the anonymous inner class that is created to re-enable auto read may cause a problem.
Modification:
Pre-create the Runnable that is scheduled and so ensure it is not lazily loaded.
Result:
No more problems when trying to recover.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided one when running on JDK >= 7.
Modification:
Use the ThreadLocalRandom implementation of the JDK when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
SelectedSelectionKeySet currently uses 2 arrays internally and users are expected to call flip() to access the underlying array and switch the active array. However we do not concurrently use 2 arrays at the same time and we can get away with using a single array if we are careful about when we reset the elements of the array.
Modifications:
- Introduce SelectedSelectionKeySetSelector which wraps a Selector and ensures we reset the underlying SelectedSelectionKeySet data structures before we select
- The loop bounds in NioEventLoop#processSelectedKeysOptimized can be defined more precisely because we know the real size of the underlying array
Result:
Fixes https://github.com/netty/netty/issues/6058
Motivation:
Simplify implementation of compareTo/equals/hashCode for ChannelIds.
Modifications:
We simplify the hashCode implementation for DefaultChannelId by not
making it random, but making it based on the underlying data. We fix the
compareTo implementation for DefaultChannelId by using lexicographic
comparison of the underlying data array. We fix the compareTo
implementation for CustomChannelId to avoid the possibility of overflow.
Result:
Cleaner code that is easier to maintain.
Motivation:
Initialization of PlatformDependent0 fails on Java 9 in static initializer when calling setAccessible(true).
Modifications:
Add ReflectionUtil which can be used to safely try if setAccessible(true) can be used or not, and if not fall back to not using reflection.
Result:
Fixed [#6345]
Motivation:
EPOLL annotates some exceptions to provide the remote address, but the original exception is not preserved. This may make determining a root cause more difficult. The static EPOLL exceptions reference the native method that failed, but do not provide a description of the actual error number. Without the description, users have to know intimate details about the native calls and how they may fail in order to debug issues.
Modifications:
- annotated exceptions should preserve the original exception
- static exceptions should include the string description of the expected errno
Result:
EPOLL exceptions provide more context and are more useful to end users.
Motivation:
EpollRecvByteAllocatorHandle intends to override the meaning of "maybe more data to read" which is a concept also used in all existing implementations of RecvByteBufAllocator$Handle but the interface doesn't support overriding. Because the interfaces lack the ability to propagate this computation EpollRecvByteAllocatorHandle attempts to implement a heuristic on top of the delegate which may lead to reading when we shouldn't or not reading data.
Modifications:
- Create a new interface ExtendedRecvByteBufAllocator and ExtendedHandle which allows the "maybe more data to read" computation to be propagated between interfaces
- Deprecate RecvByteBufAllocator and change all existing implementations to extend ExtendedRecvByteBufAllocator
- transport-native-epoll should require ExtendedRecvByteBufAllocator so the "maybe more data to read" can be propagated to the ExtendedHandle
Result:
Fixes https://github.com/netty/netty/issues/6303.
Motivation:
The result of validatePromise() is always inverted with if (!validatePromise()).
Modification:
validatePromise() was renamed to isNotValidPromise() and now returns the inverted state so you don't need to invert it in conditions. Also the name is now more meaningful with respect to the returned result.
Added more tests for validatePromise corner cases with Exceptions.
Result:
Code is easier to read. No need to invert the result.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
NioDatagramChannel fails a write with NotYetConnectedException when the DatagramChannel was not yet connected and a ByteBuf is written. The same should be done for OioDatagramChannel as well.
Modifications:
Make OioDatagramChannel consistent with NioDatagramChannel
Result:
Correct and consistent implementations of DatagramChannel
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
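A rough sketch of the doPrivileged wrapping pattern, assuming an illustrative helper (names are not the actual SocketUtils API):
```java
import java.io.IOException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

final class PrivilegedSocketOps {
    private PrivilegedSocketOps() { }

    // Performs the connect inside a doPrivileged block so that only the code in
    // this jar needs SocketPermission, not the whole call stack above it.
    static boolean connect(final SocketChannel channel, final SocketAddress remote) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<Boolean>() {
                @Override
                public Boolean run() throws IOException {
                    return channel.connect(remote);
                }
            });
        } catch (PrivilegedActionException e) {
            // run() only throws IOException, so the cause is safe to cast.
            throw (IOException) e.getCause();
        }
    }
}
```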
Motivation:
https://github.com/netty/netty/pull/6042 only addressed PlatformDependent#getSystemClassLoader but getClassLoader is also called in an optional manner in some common code paths but fails to catch a general enough exception to continue working.
Modifications:
- Calls to getClassLoader which can continue if results fail should catch Throwable
Result:
More resilient code in the presence of restrictive class loaders.
Fixes https://github.com/netty/netty/issues/6246.
Motivation:
We did not warn about unsupported ChannelOptions when setting the options for the ServerChannel.
Modifications:
- Share code for setting ChannelOptions during bootstrap
Result:
A warning is logged when an unsupported ChannelOption is used while bootstrapping a Channel. See also [#6192]
Motivation:
The comment on AbstractChannelHandlerContext.invokeHandler() is incorrect and misleading. See [#6177]
Modifications:
Change true to false to correct the comment.
Result:
Fix misleading and incorrect comment.
Motivation:
`SimpleChannelPool` subclasses are likely to override the `connectChannel` method, and are likely to clobber the cloned `Bootstrap` handler in the process. To allow subclasses to properly notify the pool listener of new connections, we should expose (at least) the `handler` property of the pool to subclasses.
Modifications:
Expose `SimpleChannelPool` properties to subclasses via `protected` getters.
Result:
Subclasses can now use the bootstrap, handler, health checker, and health-check-on-release properties from their superclass.
Motivation:
DefaultChannelId provides a regular expression which validates if a user provided MAC address is valid. This regular expression may allow invalid MAC addresses and also not allow valid MAC addresses.
Modifications:
- Introduce a MacAddressUtil#parseMac method which can parse and validate the MAC address at the same time. The regular expression check before hand is additional overhead if we have to parse the MAC address.
Result:
Fixes https://github.com/netty/netty/issues/6132.
Motivation:
On some platforms the PID may be bigger than 4194304, so we should not limit it to 4194304.
Modifications:
Only check that the PID is a valid Integer
Result:
No more warnings on systems where the PID is bigger than 4194304.
Motivation:
In later Java 8 versions our Atomic*FieldUpdaters are slower than the JDK implementations, so we should not use ours anymore. Even worse, the JDK implementations provide, for example, an optimized version of addAndGet(...) using intrinsics, which makes it a lot faster for this use-case.
Modifications:
- Remove methods that return our own Atomic*FieldUpdaters.
- Use the JDK implementations everywhere.
Result:
Faster code.
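For illustration, a minimal sketch of how the JDK field updaters are used (class and field names are hypothetical):
```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

class PendingBytesCounter {
    private static final AtomicLongFieldUpdater<PendingBytesCounter> PENDING_UPDATER =
            AtomicLongFieldUpdater.newUpdater(PendingBytesCounter.class, "pending");

    // The field must be volatile; private access works because the updater is
    // created from within the declaring class.
    private volatile long pending;

    long add(long bytes) {
        // addAndGet(...) is intrinsified by the JVM on modern JDKs, which is what
        // makes the JDK updater faster than a hand-rolled alternative.
        return PENDING_UPDATER.addAndGet(this, bytes);
    }
}
```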
Motivation:
e102a008b6 changed a conditional where previously the NIO ServerChannel would not be closed in the event of an exception.
Modifications:
- Restore the logic prior to e102a008b6 which does not automatically close ServerChannels for IOExceptions
Result:
NIO ServerChannel doesn't close automatically for an IOException.
Motivation:
We should not catch ConcurrentModificationException as this can never happen because things are executed on the EventLoop thread.
Modifications:
Remove try / catch
Result:
Cleaner code.
Modifications:
LocalChannel#releaseInboundBuffers should always clear/release the queue and set readInProgress to false
Result:
LocalChannel queue is more reliably cleaned up.
Motivation:
LocalChannel attempts to close its peer socket whenever it is closed. However if the channels are on different EventLoops we may attempt to process events for the peer channel on the wrong EventLoop.
Modifications:
- Ensure the close process ensures we are on the correct thread before accessing data
Result:
More correct LocalChannel close code.
Motivation:
PlatformDependent#getSystemClassLoader may throw a wide variety of exceptions based upon the environment. We should handle all exceptions and continue initializing the slow path if an exception occurs.
Modifications:
- Catch Throwable in cases where PlatformDependent#getSystemClassLoader is used
Result:
Fixes https://github.com/netty/netty/issues/6038
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf and is controlled by the code which accepts the JDK stream. However Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. There may be no way for existing classes which only accept the InputStream interface to communicate when they are done with the stream, other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as the ownership of the underlying ByteBuf resource may be transferred to the Java stream.
Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf
Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
Motivation:
To guard against the case that a user enqueues a lot of empty or small buffers and so triggers an OOME, we need to also take the overhead of the ChannelOutboundBuffer / PendingWriteQueue into account when detecting whether a Channel is writable or not. This is related to #5856.
Modifications:
When calculating the memory for an enqueued message, also add some extra bytes depending on the implementation.
Result:
Better guard against OOME.
Motivation
It's possible to extend LocalChannel as well as LocalServerChannel but the LocalServerChannel's serve(peer) method is hardcoded to create only instances of LocalChannel.
Modifications
Add a protected factory method that returns by default new LocalChannel(...) but users may override it to customize it.
Result
It's possible to customize the LocalChannel instance on either end of the virtual connection.
Motivation:
Some unit tests in SingleThreadEventLoopTest rely upon Thread.sleep for sequencing events between threads. This can be unreliable and result in spurious test failures if thread scheduling does not occur in a fair predictable manner.
Modifications:
- Reduce the reliance on Thread.sleep in SingleThreadEventLoopTest
Result:
Fixes https://github.com/netty/netty/issues/5851
Motivation:
The local transport is used to communicate in the same JVM so we should use heap buffers.
Modifications:
Use heap buffers by default unless requested otherwise.
Result:
No allocation of direct buffers by default when using the local transport.
Motivation:
When using java.nio.DatagramChannel we should not close the channel when a SocketException was thrown as we can still use the channel.
Modifications:
Do not close the Channel when a SocketException is thrown.
Result:
More robust and correct handling of exceptions when using NioDatagramChannel.
Motivation:
If an exception is thrown while processing the ready channels in the EventLoop we should still run all tasks, as this may allow recovery. For example, an OutOfMemoryError may be thrown and runAllTasks() will free up memory again. Besides this, we should also ensure we always allow shutdown even if an exception was thrown.
Modifications:
- Call runAllTasks() in a finally block
- Ensure shutdown is always handled.
Result:
More robust EventLoop implementations for NIO and Epoll.
Motivation:
We should process OP_WRITE before OP_READ, as this may allow us to free memory faster for previously queued writes.
Modifications:
Process OP_WRITE before OP_READ
Result:
Free memory faster for queued writes.
Motivation:
The build doesn't seem to enforce this, so unused imports piled up.
Modifications:
Removed unused import lines.
Result:
Fewer unused imports.
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
the implicit #fireChannelReadComplete() in EmbeddedChannel#writeInbound().
Motivation
We use EmbeddedChannels to implement a ProxyChannel of some sorts that shovels
messages between a source and a destination Channel. The latter are real network
channels (such as Epoll) and they may or may not be managed in a ChannelPool. We
could fuse both ends directly together but the EmbeddedChannel provides a nice
disposable section of a ChannelPipeline that can be used to instrument the messages
that are passing through the proxy portion.
The ideal flow looks about like this:
source#channelRead() -> proxy#writeOutbound() -> destination#write()
source#channelReadComplete() -> proxy#flushOutbound() -> destination#flush()
destination#channelRead() -> proxy#writeInbound() -> source#write()
destination#channelReadComplete() -> proxy#flushInbound() -> source#flush()
The problem is that #writeOutbound() and #writeInbound() emit surplus #flush()
and #fireChannelReadComplete() events which in turn yield to surplus #flush()
calls on both ends of the pipeline.
Modifications
Introduce a new set of write methods that retain the same semantics as the #write()
method, plus #flushOutbound() and #flushInbound().
Result
It's possible to implement the above ideal flow.
Fix for EmbeddedChannel#ensureOpen() and Unit Tests for it
Some PR stuff.
Motivation:
To make it easier to debug why notification of a promise failed we should log extra info and make it consistent.
Modifications:
- Create a new PromiseNotificationUtil that has static methods that can be used to try notify a promise and log.
- Reuse this in AbstractChannelHandlerContext, ChannelOutboundBuffer and PromiseNotifier
Result:
Easier to debug why a promise could not be notified.
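A hedged sketch of the try-notify-and-log pattern described above (class and method names are illustrative, not necessarily the exact PromiseNotificationUtil signatures):
```java
import io.netty.util.concurrent.Promise;
import io.netty.util.internal.logging.InternalLogger;

final class NotifyOnce {
    private NotifyOnce() { }

    static <V> void trySuccess(Promise<? super V> p, V result, InternalLogger logger) {
        if (!p.trySuccess(result) && logger != null) {
            // The promise may already be done (e.g. cancelled); log why notification failed.
            logger.warn("Failed to mark a promise as success because it is done already: {}", p);
        }
    }

    static void tryFailure(Promise<?> p, Throwable cause, InternalLogger logger) {
        if (!p.tryFailure(cause) && logger != null) {
            logger.warn("Failed to mark a promise as failure because it is done already: {}", p, cause);
        }
    }
}
```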
Motivation:
RFC 7871 defines an extension which allows requesting responses for a given client subnet.
Modifications:
- Add DnsOptPseudoRrRecord which can act as base class for extensions based on EDNS(0) as defined in RFC6891
- Add DnsOptEcsRecord to support the Client Subnet in DNS Queries extension
- Add tests
Result:
Client Subnet in DNS Queries extension is now supported.
Motivation:
For use cases that demand frequent updates of the write watermarks, an
API that requires immutable WriteWaterMark objects is not ideal, as it
implies a lot of object allocation.
For example, the HTTP/2 child channel API uses write watermarks for outbound
flow control and updates the write watermarks on every DATA frame write.
Modifications:
Remove @Deprecated tag from primitive getters and setters; however, the corresponding
channel options remain deprecated.
Result:
Primitive getters and setters for write watermarks are no longer marked @Deprecated.
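A minimal usage sketch, assuming the standard ChannelConfig watermark setters (helper class and values are illustrative):
```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelConfig;

final class WatermarkTuner {
    private WatermarkTuner() { }

    // Adjust the watermarks frequently without allocating a new WriteBufferWaterMark
    // object for every update.
    static void resize(Channel ch, int low, int high) {
        ChannelConfig config = ch.config();
        // Raise the high watermark first when growing, so the config never sees
        // an intermediate state where high < low.
        config.setWriteBufferHighWaterMark(high);
        config.setWriteBufferLowWaterMark(low);
    }
}
```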
Motivation:
The JDK implementation of SocketChannel has an internal state that is tracked for its operations. Because of this we need to ensure we call finishConnect() before trying to call read(...) / write(...), as otherwise it may produce a NotYetConnectedException.
Modifications:
First process OP_CONNECT flag.
Result:
No more possibility of a NotYetConnectedException caused by OP_CONNECT not being handled early enough when processing the interestOps for a Channel.
Motivation:
The DefaultEventLoopGroup class extends MultithreadEventExecutorGroup but doesn't expose the ctor variants that accept a custom Executor like NioEventLoopGroup and EpollEventLoopGroup do.
Modifications:
Add missing constructor.
Result:
Be able to use custom Executor with DefaultEventLoopGroup.
Motivation:
When attempting to set the selectedKeys fields on the selector
implementation, JDK 9 can throw an inaccessible object exception.
Modifications:
Catch and log this exception as a possible course of action if the
sun.nio.ch package is not exported from java.base.
Result:
The selector replacement will fail gracefully as an expected course of
action if the sun.nio.ch package is not exported from java.base.
Motivation:
The NIO transport used an IllegalStateException if a user tried to issue another connect(...) while the connect was still in process. For this case the JDK specified a ConnectPendingException which we should use. The same issues exists in the EPOLL transport. Beside this the EPOLL transport also does not throw the right exceptions for ENETUNREACH and EISCONN errno codes.
Modifications:
- Replace IllegalStateException with ConnectPendingException in NIO and EPOLL transport
- throw correct exceptions for ENETUNREACH and EISCONN in EPOLL transport
- Add test case
Result:
More correct error handling for connect attempts when using NIO and EPOLL transport
Motivation:
The API documentation in ChannelConfig states that a channel is writable,
if the number of pending bytes is below the low watermark and a
channel is not writable, if the number of pending bytes exceeds the high
watermark.
Therefore, we should use < operators instead of <= as well as > instead of >=.
Using <= and >= is also problematic, if the low watermark is equal to the high watermark,
as then a channel could be both writable and unwritable with the same number of pending
bytes (depending on whether remove() or addMessage() is called first).
The use of <= and >= was introduced in PR https://github.com/netty/netty/pull/3036, but
I don't understand why, as there doesn't seem to have been any discussion around that.
Modifications:
Use < and > operators instead of <= and >=.
Result:
High and low watermarks are treated as stated in the API docs.
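A small sketch of the intended semantics with strict comparisons (illustrative helper, not the actual ChannelOutboundBuffer code):
```java
final class WritabilityCheck {
    private WritabilityCheck() { }

    // A channel becomes unwritable only when pending bytes exceed the high watermark ('>', not '>=').
    static boolean exceedsHighWaterMark(long pendingBytes, long highWaterMark) {
        return pendingBytes > highWaterMark;
    }

    // A channel becomes writable again only when pending bytes drop below the low watermark ('<', not '<=').
    static boolean belowLowWaterMark(long pendingBytes, long lowWaterMark) {
        return pendingBytes < lowWaterMark;
    }
}
```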
Motivation:
We need to ensure we also call fireChannelActive() if the Channel is directly closed in a ChannelFutureListener that belongs to the promise for the connect. Otherwise we will see missing active events.
Modifications:
Ensure we always call fireChannelActive() if the Channel was active.
Result:
No missing events.
Motivation:
We often use javaChannel().socket().* in NIO as these methods exist in Java 6. The problem is that these often throw very general exceptions (like SocketException) while it is more expected to throw the exceptions listed in the NIO interfaces. When possible we should use the newer methods available in Java 7+ which throw the correct exceptions.
Modifications:
Check the Java version and, depending on it, use either the socket or the javaChannel().
Result:
Throw expected Exceptions.
Motivation:
To make it easier to debug connect exceptions we create new exceptions which also contain the remote address. For this we basically create a new instance and call setStackTrace(...). When doing this we pay an extra penalty because fillInStackTrace() is invoked by the super constructor.
Modifications:
Create special sub-classes of Exceptions that override the fillInStackTrace() method and so eliminate the overhead.
Result:
Less overhead when annotating connect exceptions.
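A rough sketch of the fillInStackTrace() trick (class name and constructor are illustrative, not the exact Netty implementation):
```java
import java.net.ConnectException;

// An "annotated" connect exception that carries the remote address in its message
// but skips the expensive stack trace capture in the constructor; the stack trace
// is copied from the original exception instead.
final class AnnotatedConnectException extends ConnectException {
    AnnotatedConnectException(ConnectException original, String remoteAddress) {
        super(original.getMessage() + ": " + remoteAddress);
        initCause(original);                       // hypothetical: keep the original as the cause
        setStackTrace(original.getStackTrace());   // reuse the already-captured trace
    }

    @Override
    public Throwable fillInStackTrace() {
        // Skip the costly stack walk normally performed by the super constructor.
        return this;
    }
}
```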
Motivation:
Comments stating that AUTO_CLOSE will be removed in Netty 5.0 are wrong,
as there is no Netty 5.0.
Modifications:
Removed comment.
Result:
No more references to Netty 5.0
Motivation:
PendingWriteQueue should guard against re-entrant writes once removeAndWriteAll() is run.
Modifications:
Continue writing until queue is empty.
Result:
Correctly guard against re-entrance.
Motivation:
Instrumenting the NIO selector implementation requires special
permissions. Yet, the code for performing this instrumentation is
executed in a manner that would require all code leading up to the
initialization to have the requisite permissions. In a restrictive
environment (e.g., under a security policy that only grants the
requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), instrumenting the
selector will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary blocks as privileged. This enables
access to the necessary resources for instrumenting the selector. The
idea is that we are saying the Netty code is trusted, and as long as the
Netty code has been granted the necessary permissions, then we will
allow the caller access to these resources even though the caller itself
might not have the requisite permissions.
Result:
The selector can be instrumented in a restrictive security environment.
Motivation:
Writing to a system property requires permissions. Yet the code for
setting sun.nio.ch.bugLevel is not marked as privileged. In a
restrictive environment (e.g., under a security policy that only grants
the requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), writing to this system
property will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary code block as privileged. This enables
writing to this system property. The idea is that we are saying the
Netty code is trusted, and as long as the Netty code has been granted
the necessary permissions, then we will allow the caller access to these
resources even though the caller itself might not have the requisite
permissions.
Result:
The system property sun.nio.ch.bugLevel can be written to in a
restrictive security environment.
Motivation:
If the user uses 0 as the quiet period we should shut down without any delay if possible.
Modifications:
Ensure we do not introduce extra delay when a shutdown quiet period of 0 is used.
Result:
EventLoop shutdown as fast as expected.
Motivation:
At the moment we call initChannel(...) in the channelRegistered(...) method which has the effect that if another ChannelInitializer is added within the initChannel(...) method the ordering of the added handlers is not correct and surprising. This is as the whole initChannel(...) method block is executed before the initChannel(...) block of the added ChannelInitializer is handled.
Modifications:
Call initChannel(...) from within handlerAdded(...) if the Channel is registered already. This is true in all cases for our DefaultChannelPipeline implementation. This way the ordering is always as expected. We still keep the old behaviour as well to not break code for other ChannelPipeline implementations (if someone ever wrote one).
Result:
Correct and expected ordering of ChannelHandlers.
Motivation:
When we try to close the Channel due to a timeout, we need to ensure we do not log if the notification of the promise fails, as it may have been completed in the meantime.
Modifications:
Add another constructor to ChannelPromiseNotifier and PromiseNotifier which allows controlling whether to log on notification failure.
Result:
No more misleading logs.
Motivation:
I received a report that it is not possible to add another ChannelInitializer in the initChannel(...) method, so we should add a test case for it.
Modifications:
Added testcase.
Result:
Validate that all works as expected.
Motivation:
When a ChannelInitializer is used via ServerBootstrap.handler(...) the user's handlers may be added after the internal ServerBootstrapAcceptor. This should not happen.
Modifications:
Delay the adding of the ServerBootstrapAcceptor until the initChannel(...) method returns.
Result:
Correct order of handlers in the ServerChannels ChannelPipeline.
Motivation:
We used Promise.setFailure(...) when failing a Promise in SimpleChannelPool. As this happens at multiple levels this can result in a stack overflow, as setFailure(...) may throw an IllegalStateException which is then propagated again.
Modifications:
Use tryFailure(...)
Result:
No more possibility to cause a stack overflow when failing the promise.
Motivation:
The SimpleChannelPool#notifyConnect() method will leak Channels if the user cancelled the Promise in between.
Modifications:
Release the channel if the Promise was already completed.
Result:
No more channel leaks.
Motivation:
DefaultChannelId attempts to acquire a default process ID by determining
the process PID. However, to do this it attempts to punch through to the
system classloader, a permission that in the face of a restrictive
security manager is unlikely to be granted. Looking past this, it then
attempts to load a declared method off a reflectively loaded class,
another permission that is not likely to be granted in the face of a
restrictive security manager. However, neither of these permissions is
necessary, as punching through to the system classloader is
completely unneeded, and there is no need to load a public method as a
declared method.
Modifications:
Instead of punching through to the system classloader requiring
restricted permissions, we can just use the current classloader. To address
the access declared method permission, we instead just reflectively
obtain the desired public method via Class#getMethod.
Result:
Acquiring the default process ID from the PID will succeed without
requiring the runtime permissions "getClassLoader" and
"accessDeclaredMembers".
Motivation:
In 4.0 AbstractNioByteChannel has a default of 16 max messages per read. However in 4.1 that constraint was applied at the NioSocketChannel which is not equivalent. In 4.1 AbstractEpollStreamChannel also did not have the default of 16 max messages per read applied.
Modifications:
- Make Nio consistent with 4.0
- Make Epoll consistent with Nio
Result:
Nio and Epoll both have consistent ChannelMetadata and are consistent with 4.0.
Motivation:
This change is part of the change done in PR #5395 to provide an `AUTO_FLUSH` capability.
Splitting this change will make it possible to try other ways of implementing `AUTO_FLUSH`.
Modifications:
Two methods:
```java
void executeAfterEventLoopIteration(Runnable task);
boolean removeAfterEventLoopIterationTask(Runnable task);
```
are added to the `SingleThreadEventLoop` class for adding/removing a task to be executed at the end of the current/next iteration of this `eventloop`.
In order to support the above, a few methods are added to `SingleThreadEventExecutor`
```java
protected void afterRunningAllTasks() { }
```
This is invoked after all tasks are run for this executor OR if the passed timeout value for `runAllTasks(long timeoutNanos)` has expired.
Added a queue of `tailTasks` to `SingleThreadEventLoop` to hold all tasks to be executed at the end of every iteration.
Result:
`SingleThreadEventLoop` now has the ability to execute tasks at the end of an eventloop iteration.
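A hedged usage sketch of executeAfterEventLoopIteration (the helper class and the cast to SingleThreadEventLoop are illustrative assumptions):
```java
import io.netty.channel.Channel;
import io.netty.channel.SingleThreadEventLoop;

final class EndOfIterationFlush {
    private EndOfIterationFlush() { }

    // Schedules a single flush to run once the event loop has finished the tasks of the
    // current (or next) iteration, instead of flushing after every individual write.
    static void flushAtEndOfIteration(final Channel channel) {
        SingleThreadEventLoop eventLoop = (SingleThreadEventLoop) channel.eventLoop();
        eventLoop.executeAfterEventLoopIteration(new Runnable() {
            @Override
            public void run() {
                channel.flush();
            }
        });
    }
}
```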
Motivation:
For some use-cases it would be useful to know the number of bytes queued in the PendingWriteQueue without the need to dequeue them.
Modifications:
Add PendingWriteQueue.bytes().
Result:
Be able to get the number of bytes queued.
Motivation:
Commit 4c048d069d moved the logic of calling handlerAdded(...) to the channelRegistered(...) callback of the head of the DefaultChannelPipeline. Unfortunately this may execute the callbacks too late, as a user may add handlers to the pipeline in the ChannelFutureListener attached to the registration future. This can lead to incorrect ordering.
Modifications:
Ensure we always invoke ChannelHandler.handlerAdded(...) for all handlers before the registration promise is notified.
Result:
No more possibility of incorrect ordering or missed events.
Motivation:
We pinned the EventExecutor for a Channel in DefaultChannelPipeline, which means that if the user added multiple handlers with the same EventExecutorGroup to the ChannelPipeline, the same EventExecutor was used for all of these handlers. This may be unexpected and not what the user wants. If the user wants to use the same one for all of them, this can be done by obtaining an EventExecutor and passing the same instance to the add methods. Because of this we should allow not pinning.
Modifications:
Allow disabling pinning of the EventExecutor for a Channel based on the EventExecutorGroup via a ChannelOption.
Result:
Less confusing and more flexible usage of EventExecutorGroup when adding ChannelHandlers to the ChannelPipeline.
Motivation
When I override ChannelHandler methods I usually (always) refire events myself via
ChannelHandlerContext instead of relieing on calling the super method (say
`super.write(ctx, ...)`). This works great and the IDE actually auto completes/generates
the right code for it except `#fireUserEventTriggered()` and `#userEventTriggered()`
which have mismatching argument names and I have to manually "intervene".
Modification
Rename `ChannelHandlerContext#fireUserEventTriggered()` argument from `event` to `evt`
to match its handler counterpart.
Result
The IDE's auto generated code will reference the correct variable.
Motivation:
In commit f984870ccc I made a change which operated under the invalid assumption that tasks executed by an EventExecutor will always be processed in a serial fashion. This is true for SingleThreadEventExecutor sub-classes but not part of the EventExecutor interface contract.
Because of this change, implementations of EventExecutor which do not strictly execute tasks in a serial fashion may miss events before handlerAdded(...) is called. This is, strictly speaking, not correct, as there is no guarantee in this case that handlerAdded(...) will be called as the first task (there is no ordering guarantee).
Cassandra itself ships such an EventExecutor implementation which has no strict ordering to spread load across multiple threads.
Modifications:
- Add new OrderedEventExecutor interface and let SingleThreadEventExecutor / EventLoop implement / extend it.
- Only expose "restriction" of skipping events until handlerAdded(...) is called for OrderedEventExecutor implementations
- Add ThreadPoolEventExecutor implementation which executes tasks in an unordered fashion. This is used in the added unit test but can also be used for protocols which do not expose a strict ordering.
- Add unit test.
Result:
Restore the possibility of implementing an EventExecutor which does not enforce serial execution of events and of using it with the DefaultChannelPipeline.
Motivation:
We should make it clear that each acquired Channel needs to be released in all cases.
Modifications:
More clear javadocs.
Result:
Harder for users to leak Channel.
Motivation:
The field can be read from arbitrary threads via Channel.(isWritable()|bytesBeforeWritable()|bytesBeforeUnwritable()), WriteAndFlushTask.newInstance(), PendingWriteQueue, etc.
Modifications:
Make AbstractChannel.outboundBuffer volatile.
Result:
More correct in a concurrent use case.
Motivation:
We used "future" as the argument name for the ChannelPromise in many methods of ChannelDuplexHandler. We should make it more consistent and correct.
Modifications:
Replace future with promise.
Result:
More correct and consistent naming.