Motivation:
ByteToMessageDecoder's default MERGE_CUMULATOR will allocate a new buffer and
copy if the refCnt() of the cumulation is > 1. However, this is overly
conservative because we may be able to avoid the allocation/copy if the current
cumulation can accommodate the input buffer without a reallocation. Also, when the
reallocation and copy do occur, the new buffer is sized just large enough to
accommodate the current amount of data. If some data remains in the
cumulation after decode, this will require another allocation/copy when more data
arrives.
Modifications:
- Use maxFastWritableBytes to avoid allocation/copy if the current buffer can
accommodate the input data without a reallocation operation.
- Use ByteBufAllocator#calculateNewCapacity(..) to get the size of the buffer
when a reallocation/copy operation is necessary.
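A rough sketch of the intended cumulation strategy (simplified and illustrative only, not the actual MERGE_CUMULATOR code, which also has to handle additional edge cases):

    static ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
        int required = in.readableBytes();
        if (required > cumulation.maxFastWritableBytes() || cumulation.refCnt() > 1) {
            // An expansion/copy is unavoidable: let the allocator pick the new size so
            // there is likely spare room for future data instead of an exact fit.
            int newCapacity = alloc.calculateNewCapacity(
                    cumulation.readableBytes() + required, Integer.MAX_VALUE);
            ByteBuf newCumulation = alloc.buffer(newCapacity);
            newCumulation.writeBytes(cumulation);
            cumulation.release();
            cumulation = newCumulation;
        }
        // Fast path: the existing cumulation can take the input without a reallocation.
        cumulation.writeBytes(in);
        in.release();
        return cumulation;
    }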
Result:
ByteToMessageDecoder MERGE_CUMULATOR won't allocate/copy if the cumulation
buffer can accommodate data without a reallocation, and when a reallocation
occurs we are more likely to leave additional space for future data in an effort
to reduce overall reallocations.
Motivation:
The Channel Pool tests commonly use the same fixed local address String. This
has been observed to result in test failures if a single test fails and cleanup
is not done properly, or if tests are run in parallel. Also each test should
close any channel pool objects or maps to make sure resources are reclaimed.
Modifications:
- Use a random string for the address on each test to reduce the chance of
collision.
- Close all ChannelPool and ChannelPoolMap objects at the end of each test
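For illustration, a per-test address could be created along these lines (the UUID-based name is just one way to get uniqueness; the exact scheme in the tests may differ):

    LocalAddress addr = new LocalAddress("test-" + UUID.randomUUID());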
Result:
Less likely to observe spurious failures due to LocalAddress collisions and more
complete test cleanup.
Motivation:
RFC7230 states that we should not accept multiple content-length headers, nor a content-length header in combination with transfer-encoding: chunked.
Modifications:
- Check for multiple content-length headers and, if found, mark the message as invalid
- Check if we found a content-length header together with transfer-encoding: chunked and, if so, mark the message as invalid
- Add unit test
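Roughly, the validation looks like the following sketch (markAsInvalid is a placeholder for however the decoder flags a bad message; message is the HttpMessage being decoded):

    List<String> contentLengths = message.headers().getAll(HttpHeaderNames.CONTENT_LENGTH);
    boolean chunked = HttpUtil.isTransferEncodingChunked(message);
    if (contentLengths.size() > 1 || (!contentLengths.isEmpty() && chunked)) {
        // RFC7230: the body length is ambiguous, so the message must be rejected.
        markAsInvalid(message);
    }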
Result:
Fixes https://github.com/netty/netty/issues/9861
Motivation:
Technically speaking, it is valid to have HTTP headers with no values, so we should support them. That said, we need to detect whether these were "generated" because of an "invalid" fold.
Modifications:
- Detect if a colon is missing when parsing headers.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9866
Motivation:
A client can split data into varying numbers of fragments, and sometimes the last frame may contain garbage data that does not affect the decompression process.
Modification:
Added a check: if the last frame is a `ContinuationWebSocketFrame` and the decompressed
data is empty, don't throw an exception.
Result:
Fixes #9770
Motivation:
In #9603 the executor hung on shutdown because of an abandoned task
on another executor that the first one was waiting for.
Modifications:
This commit modifies the executor shutdown sequence to include
switching to SHUTDOWN state and then running all remaining tasks.
This ensures that no more tasks are scheduled after SHUTDOWN and
that the final pass of running remaining tasks picks them all up.
Any tasks scheduled after SHUTDOWN will be rejected.
This change preserves the functionality of graceful shutdown with
quiet period and only adds one more pass of task execution after
the default shutdown process has finished and the executor is
ready for termination.
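From a user's perspective, the resulting contract can be illustrated roughly like this (a sketch, not a test from this change):

    EventExecutor executor = new DefaultEventExecutor();
    executor.shutdownGracefully();
    try {
        // Either the task is accepted and will be executed before termination...
        executor.execute(() -> System.out.println("ran before termination"));
    } catch (RejectedExecutionException e) {
        // ...or it came too late and is rejected - never silently abandoned.
    }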
Result:
After this change, tasks that are successfully added to the executor will
always be executed. Tasks which arrive too late will be rejected instead of
abandoned.
Motivation:
In #9830 the get/remove/close methods implementation changed to avoid
deadlocks on event loops. The change involved modifying the methods to
close the managed ChannelPools asynchronously and return immediately.
While this behavior might be fine for get/remove, it changes what
a user expects from a close() method: after close() returns, there
might still be open resources.
Modifications:
This change is a follow-up for #9830 to preserve the synchronous
behavior of the AbstractChannelPoolMap#close() method.
Result:
AbstractChannelPoolMap#close() returns once the managed pools have
been closed.
Motivation:
Multiple cookie values can be present in a single header.
Modification:
Add `decodeAll` overload which returns all cookies
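Hypothetical usage of the new overload (shown for ServerCookieDecoder; the exact signature may differ):

    List<Cookie> cookies = ServerCookieDecoder.STRICT.decodeAll("name=value1; name=value2");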
Result:
Fixes #7210
Note:
This change is not as prescriptive as the ideas brought up in the linked issue. Changing the Set implementation or the equals/compareTo definition is likely a breaking change, so those options are not practical.
# Motivation:
`DefaultByteBufHolder.equals()` considers another object equal if it's an instance of `ByteBufHolder` and if the contents of the two objects are equal. However, the behavior of the `equals` method is not part of the `ByteBufHolder` contract, so `DefaultByteBufHolder`'s version may be causing a violation of the symmetric property if other classes have different logic.
There are already a few classes that are affected by this: `DefaultHttp2GoAwayFrame`, `DefaultHttp2UnknownFrame`, and `SctpMessage` are all overriding `equals` method breaking the symmetric property.
Another effect of this behavior is that all instances with empty data are considered equal. That may not be desirable in situations when instances are created as predefined constants, e.g. `FullBulkStringRedisMessage.NULL_INSTANCE` and `FullBulkStringRedisMessage.EMPTY_INSTANCE` in `codec-redis`.
# Modification:
Make `DefaultByteBufHolder.equals()` implementation only work for the objects of the same class.
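A simplified sketch of the stricter check (the real DefaultByteBufHolder code may differ in details):

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false; // only instances of the exact same class can be equal
        }
        return content().equals(((ByteBufHolder) o).content());
    }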
# Result:
- The symmetric property of the `equals` method is restored for the classes in question.
- Instances of different classes are not considered equal even if the content of the data they hold is the same.
Motivation:
The current implementation delegates to writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int streamDependency, short weight, boolean exclusive, int padding, boolean endStream, ChannelPromise promise), which will send a headers frame with the priority flag set and the default priority values even if the user didn't want to.
Modifications:
- Change DefaultHttp2ConnectionEncoder to call the correct Http2FrameWriter method depending on whether the user wants to use priorities or not
- Adjust tests
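Conceptually the dispatch becomes something like this (simplified; both writeHeaders overloads already exist on Http2FrameWriter):

    if (hasPriority) {
        frameWriter.writeHeaders(ctx, streamId, headers, streamDependency, weight,
                exclusive, padding, endStream, promise);
    } else {
        // No priority information was given, so use the overload without the PRIORITY flag.
        frameWriter.writeHeaders(ctx, streamId, headers, padding, endStream, promise);
    }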
Result:
Fixes https://github.com/netty/netty/issues/9842
Motivation:
SnappyFrameDecoderTest has a few tests which fail to close the EmbeddedChannel
and therefore may leak ByteBuf objects.
Modifications:
- Make sure EmbeddedChannel#finishAndReleaseAll() is called in all tests
Result:
No more leaks from SnappyFrameDecoderTest.
Motivation:
We did not correctly close the `EmbeddedChannel`, which means `handlerRemoved(...)` was not called. This can lead to leaks. Besides this, we also did not correctly consume produced data, which could also show up as a leak.
Modifications:
- Always call `EmbeddedChannel.finish()`
- Ensure we consume all produced data and release it
Result:
No more leaks in test. This showed up in https://github.com/netty/netty/pull/9850#issuecomment-562504863.
Motivation:
97361fa2c8 replaced synchronized with ConcurrentHashMap in the *Bootstrap classes but missed doing the same for the Http2 variant.
Modifications:
- Use ConcurrentHashMap
- Simplify code in *Bootstrap classes
Result:
Less contention
Motivation
While working on other changes I noticed some opportunities to
streamline a few things in AbstractByteBuf.
Modifications
- Avoid duplicate ensureAccessible() checks in discard(Some)ReadBytes()
and ensureWritable0(int) methods
- Simplify ensureWritable0(int) logic
- Make some conditional checks more concise
Result
Cleaner, possibly faster code
Motivation:
In certain scenarios multiple concurrent AbstractChannelPoolMap
operations might be called from event loops that also handle
ChannelPool close operations. If the map uses a synchronous close,
it could end up blocking the event loop, and if multiple threads
are waiting for each other a deadlock might occur.
Modifications:
Previously #9226 introduced a closeAsync operation for
FixedChannelPool, which is now extended to the SimpleChannelPool class.
The AbstractChannelPoolMap now uses the closeAsync operation when
closing redundant or removed SimpleChannelPool instances.
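For illustration, the asynchronous close is used along these lines (a sketch, where pool is a SimpleChannelPool):

    Future<Void> closeFuture = pool.closeAsync();
    closeFuture.addListener(f -> {
        // The pool's resources have been released without blocking the calling event loop.
    });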
Result:
The AbstractChannelPoolMap get/remove operations will not wait
until the pools are closed, as the closes happen asynchronously,
avoiding situations where the event loop could end up blocked in a
deadlock.
Motivation:
We should include the shaded sources for JCTools in our sources jar to make it easier to debug.
Modifications:
- Adjust plugin configuration to execute plugins in correct order
- Update source plugin
- Add configuration for shade plugin to generate source jar content
Result:
Fixes https://github.com/netty/netty/issues/6640.
Motivation:
https://github.com/netty/netty/pull/9797 changed the code for recvmmsg and sendmmsg to use the syscalls directly to remove the dependency on newer GLIBC versions. Unfortunately it made the assumption that the syscall numbers are the same for different architectures, which is not the case.
Thanks to @jayv for pointing it out.
Modifications:
Add #if, #elif and #else declarations to ensure we pick the correct syscall number (or not support it if the architecture is not supported at the moment).
Result:
Pick the correct syscall number depending on the architecture.
Motivation:
HttpPostStandardRequestDecoder may throw several different exceptions in the constructor, which could lead to memory leaks. We need to guard against this by explicitly catching all of them and rethrowing after we have released any allocated memory.
Modifications:
- Catch, destroy and rethrow in any case
- Ensure we correctly wrap IllegalArgumentExceptions
- Add unit tests
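The general shape of the guard is roughly as follows (illustrative; parseBody/destroy stand in for the decoder's internal parsing and cleanup steps, and the wrapping of IllegalArgumentException is omitted):

    try {
        parseBody(); // may throw while buffers have already been allocated
    } catch (Throwable t) {
        destroy();   // release everything allocated so far
        PlatformDependent.throwException(t); // then rethrow
    }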
Result:
Fixes https://github.com/netty/netty/issues/9829
Motivation:
Replace the Map with a Set. `reportedLeaks` has better semantics as a Set; as a Map, its values carry no meaning for us.
Modifications:
Use Set.
Result:
Cleaner code
Motivation:
At the moment our AbstractSniHandler makes the assumption that Handshake messages are not fragmented. This is incorrect as it is completely valid to split these across multiple TLSPlaintext records.
Thanks to @sskrobotov for bringing this to my attention and to @Lukasa for the help.
Modifications:
- Adjust logic in AbstractSniHandler to handle fragmentation
- Add unit tests
Result:
Correctly handle fragmented Handshake messages in AbstractSniHandler (and so SniHandler).
Motivation:
The buffer which the decoder allocates for the expansion can be
leaked if there is a subsequent issue writing to it.
Modifications:
The error handling has been improved so that the new buffer is
always released if the expansion fails.
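The fix follows the usual release-on-failure pattern, roughly (an illustrative sketch, not the exact decoder code):

    ByteBuf newBuffer = alloc.buffer(newCapacity);
    try {
        newBuffer.writeBytes(oldBuffer);
    } catch (Throwable t) {
        newBuffer.release(); // don't leak the freshly allocated buffer
        throw t;
    }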
Result:
The decoder will not leak in this scenario any more.
Fixes: https://github.com/netty/netty/issues/9812
Motivation:
We use the onStreamClosed(...) callback to return unconsumed bytes back to the window of the connection when needed. When this happens we will write a window update frame but not automatically call ctx.flush(). As the user has no insight into this, it could in the worst case result in a "deadlock" as the frame is never written out to the socket.
Modifications:
- If onStreamClosed(...) produces a window update frame call ctx.flush()
- Add unit test
Result:
No stalls are possible due to unflushed window update frames produced by onStreamClosed(...) when not all bytes were consumed before the stream was closed.
Motivation:
We can make use of EmbeddedChannel.finishAndReleaseAll() and so remove some code
Modifications:
Use finishAndReleaseAll()
Result:
Less code to maintain
Motivation:
At the moment we send a window update frame for the connection + stream when a stream is closed and there are unconsumed bytes left. While we need to do this for the connection, it makes no sense to write a window update frame for the stream itself, as it is already closed.
Modifications:
- Don't write the window update frame for the stream when the stream is closed
- Add unit test
Result:
Don't write the window update frame for closed streams
Motivation:
Due to a bug we did not correctly set the writerIndex of the ByteBuf when a
user specified EpollChannelOption.MAX_DATAGRAM_PAYLOAD_SIZE but we ended
up with a non-scattering read.
Modifications:
- Set writerIndex to the correct value
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9788
### Motivation:
Those who need 'Origin' or 'Sec-WebSocket-Origin' headers should provide them explicitly, as stated in the WebSocket specs.
E.g. through custom headers:
HttpHeaders customHeaders = new DefaultHttpHeaders()
.add(HttpHeaderNames.ORIGIN, "http://localhost:8080");
new WebSocketClientProtocolHandler(
new URI("ws://localhost:1234/test"), WebSocketVersion.V13, subprotocol,
allowExtensions, customHeaders, maxFramePayloadLength, handshakeTimeoutMillis)
### Modification:
* Remove enforced origin headers.
* Update tests
### Result:
Fixes #9673: Origin header is always sent from WebSocket client
Motivation
#9800 was just merged which consolidates the flush/no-flush WriteTasks
in AbstractChannelHandlerContext, but after looking at the changes again
I noticed a tiny simplification that, in my opinion, would be good to make.
Modification
Remove use of conditional operator in decrementPendingOutboundBytes()
Result
Simpler code, one less branch
Motivation:
Http2ConnectionHandler tries to guard against sending multiple RST frames for the same stream. Unfortunately the code is not 100% correct, as it only updates the state after it calls write. This may lead to a situation where an extra RST frame slips through if the second write of the RST frame is done from a listener attached to the promise.
Modifications:
- Update state before calling write
- Add unit test
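In sketch form, the ordering change looks like this (simplified; the state is recorded before the write so a promise listener cannot slip a second frame through):

    stream.resetSent(); // mark the RST_STREAM as sent *before* writing it
    frameWriter.writeRstStream(ctx, stream.id(), errorCode, promise);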
Result:
Only ever send one RST frame per stream
Motivation:
Modern versions of FreeBSD define IP_RECVORIGDSTADDR, but don't define
SOL_IP. This causes the build to fail.
Modifications:
The equivalent to SOL_IP on FreeBSD is IPPROTO_IP. Define SOL_IP as that
if SOL_IP is not defined and IPPROTO_IP is.
Result:
This allows a successful build on FreeBSD
Motivation:
This is a PR to solve the problem described here: https://github.com/netty/netty/issues/9767
Basically, this PR adds two more APIs to SslContextBuilder, for users to directly specify
the KeyManager or TrustManager they want to use when building SslContext. This is very helpful
when users want to pass in some customized implementation of KeyManager or TrustManager.
Modification:
This PR takes the first approach described here:
https://github.com/netty/netty/issues/9767#issuecomment-551927994
which is to immediately convert the managers into factories and let factories continue to pass
through Netty.
1. Add in SslContextBuilder the two APIs mentioned above
2. Create a KeyManagerFactoryWrapper and a TrustManagerFactoryWrapper, which take a KeyManager
and a TrustManager respectively. These are two simple wrappers that do the conversion from
XXXManager class to XXXManagerFactory class
3. Create a SimpleKeyManagerFactory class (and internally an X509KeyManagerWrapper for compatibility),
which hides unnecessary details such as KeyManagerFactorySpi. This serves a similar
purpose to SimpleTrustManagerFactory, which already exists inside Netty.
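Illustrative usage of the new builder methods (myKeyManager/myTrustManager stand in for the user's custom implementations):

    KeyManager keyManager = myKeyManager();       // user's custom KeyManager
    TrustManager trustManager = myTrustManager(); // user's custom TrustManager
    SslContext sslContext = SslContextBuilder.forClient()
            .keyManager(keyManager)
            .trustManager(trustManager)
            .build();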
Result:
Easier usage.
Motivation
The event loop implementations had become somewhat tangled over time and
work was done recently to streamline EpollEventLoop. NioEventLoop would
benefit from the same treatment, and it is more straightforward now that
we can follow the same structure as was done for epoll.
Modifications
Untangle NioEventLoop logic and mirror what's now done in EpollEventLoop
w.r.t. the volatile selector wake-up guard and scheduled task deadline
handling.
Some common refinements to EpollEventLoop have also been included - to
use constants for the "special" deadline/wakeup volatile values and to
avoid some unnecessary calls to System.nanoTime() on task-only
iterations.
Result
Hopefully cleaner, more efficient and less fragile NIO transport
implementation.
Motivation:
21720e4a78 introduced a change which aimed to enable the not_x86_64 profile when building on an x86_64 platform. Unfortunately it made an assumption which does not hold true, and so the profile was already enabled. This led to the situation that native SSL tests were skipped if a non-BoringSSL implementation was used.
Modifications:
Fix profile activation to work as expected
Result:
Correctly run all native SSL tests.
Motivation:
There is an intrinsic race between a local session resetting a stream
and the peer no longer sending any frames. This can result in the
session receiving frames for a stream that the local peer no longer
tracks. This results in a StreamException being thrown which triggers a
RST_STREAM frame, which is a good thing, but also logging at level WARN,
which is noisy for an expected and benign condition.
Modification:
Change the log level to DEBUG when logging stream errors with code
STREAM_CLOSED. All others are more interesting and will continue to be
logged at level WARN.
Additionally, it was found that DATA frames for streams that could not
have existed only resulted in a StreamException when the spec is clear
that such a situation should be fatal to the connection, resulting in a
GOAWAY(PROTOCOL_ERROR).
Fixes #8025.
Motivation:
394a1b3485 introduced a hard dependency on GLIBC 2.12, which was not the case before. This had the effect of not being able to use the native epoll transports on platforms which ship with earlier versions of GLIBC.
To keep things as backward compatible as possible, we should not introduce such changes in a bugfix release.
Special thanks to @weissi for all the help to fix this.
Modifications:
- Use syscalls directly to remove dependency on GLIBC 2.12
- Make code consistent that needs newer GLIBC versions
- Adjust scattering read test to only run if recvmmsg syscall is supported
- Cleanup pom.xml as some stuff is not needed anymore after using syscalls.
Result:
Fixes https://github.com/netty/netty/issues/9758.
Motivation
AbstractChannelHandlerContext uses recyclable tasks when performing
writes from outside of the event loop. There are currently two distinct
classes, WriteTask and WriteAndFlushTask, used for executing writes versus
writeAndFlushes, and these are recycled in separate pools. However it is
straightforward to just have a single class / recycler pool, with a
flush flag.
Modifications
- Unify WriteTasks into a single class using the sign bit of the
existing size field to indicate whether a flush should be performed
- Use the new executor lazyExecute() method to lazily execute the
non-flush write tasks explicitly
- Change AbstractChannelHandlerContext#invokeWrite and
AbstractChannelHandlerContext#invokeWriteAndFlush from private to
package-private to avoid synthetic methods
- Correct the default object size estimate for WriteTask
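The sign-bit trick can be illustrated like this (a conceptual sketch, not the exact WriteTask fields):

    private int size; // the sign bit doubles as the flush flag

    void setSize(int estimatedSize, boolean flush) {
        size = flush ? estimatedSize | Integer.MIN_VALUE : estimatedSize;
    }

    boolean needsFlush() {
        return size < 0;                 // flag set => behave like writeAndFlush
    }

    int estimatedSize() {
        return size & Integer.MAX_VALUE; // strip the flag to recover the real size
    }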
Results
- Possibly improved reuse of recycled write tasks
- Fewer virtual method calls and shorter path lengths
- Less code
Motivation
JMH 1.22 was released recently, we might as well use the latest when
running benchmarks.
Summary of changes:
https://mail.openjdk.java.net/pipermail/jmh-dev/2019-November/002879.html
Modifications
Update jmh dependencies in microbench module from version 1.21 to 1.22.
Result
Benchmarks run using latest JMH
Motivation:
By default CloseWebSocketFrames are handled automatically.
However, I need to manually manage their sending both on the client and the server side.
Modification:
Automatically send a close frame on channel close when it was not explicitly sent before.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
Motivation:
The latest recommended Maven version is 3.6.2, so we should use it.
Modifications:
Update from 3.5.2 to 3.6.2
Result:
Use latest recommended maven version
Motivation:
In most cases, we want to use a MultithreadEventLoopGroup such as NioEventLoopGroup without setting the number of threads, only the thread name. So we need to use code like the following:
NioEventLoopGroup boss = new NioEventLoopGroup(0, new DefaultThreadFactory("boss"));
The number 0 looks a bit confusing or strange, since we only want to set the thread name. So it would be better to add a new constructor for this case.
Modifications:
Add a new constructor to all event loop groups, for example: public NioEventLoopGroup(ThreadFactory threadFactory)
Result:
Users can set only the thread factory, without setting the thread number to 0:
NioEventLoopGroup boss = new NioEventLoopGroup(new DefaultThreadFactory("boss"));
Motivation:
Netty currently doesn't build and distribute the test JARs. Having easy access to the test JARs would enable downstream projects (such as GraalVM) to integrate the Netty unit tests in their CI pipeline to ensure continuous compatibility with Netty features. The alternative would be to build Netty from source every time to obtain the test JARs; however, depending on the CI setup, that may not always be possible.
Modifications:
Modify `pom.xml` to enable generation of test JARs and corresponding source JARs.
Result:
Running the Maven build will create the test JARs and corresponding source JARs. This change was tested locally via `mvn install` and the test JARs are correctly copied under the Maven cache. The expectation is that running `mvn deploy` will also copy the additional JARs to the maven repository.