Motivation:
HttpPostRequestEncoder maintains an internal buffer that holds the
current encoded data. There are use cases in which this internal buffer
becomes null; the next chunk processing implementation should take
this into consideration.
Modifications:
- When preparing the last chunk, if currentBuffer is null, mark
isLastChunkSent as true and send LastHttpContent.EMPTY_LAST_CONTENT
- When calculating the remaining size, take into consideration that
currentBuffer might be null
- Tests are based on those provided in the issue by @nebhale and @bfiorini
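Roughly, the last-chunk handling now looks like this (simplified sketch; the surrounding method and field names follow the description above, not necessarily the exact patch):
    // If nothing is buffered, finish with an empty last chunk instead of
    // dereferencing the null currentBuffer.
    if (currentBuffer == null) {
        isLastChunkSent = true;
        return LastHttpContent.EMPTY_LAST_CONTENT;
    }
    return new DefaultLastHttpContent(currentBuffer);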
Result:
Fixes #5478
Motivation:
There was a typo in a dependency in the bom pom.xml which led it to specify a non-existing artifact and also prevented the maven release plugin from updating the version correctly.
Modifications:
Rename netty-transport-unix-common to netty-transport-native-unix-common and also fix the version.
Result:
Fixes [#6979]
Motivation:
Http2ServerUpgradeCodec should support Http2FrameCodec.
Modifications:
- Add support for Http2FrameCodec
- Add example that uses Http2FrameCodec
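A usage sketch of the new combination, assuming the new Http2ServerUpgradeCodec constructor that accepts a Http2FrameCodec plus additional handlers (MyHttp2FrameHandler is a placeholder for an application handler; the surrounding upgrade setup is omitted):
    HttpServerUpgradeHandler.UpgradeCodecFactory upgradeFactory = new HttpServerUpgradeHandler.UpgradeCodecFactory() {
        @Override
        public HttpServerUpgradeHandler.UpgradeCodec newUpgradeCodec(CharSequence protocol) {
            if (AsciiString.contentEquals(Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME, protocol)) {
                // Http2ServerUpgradeCodec can now wrap a Http2FrameCodec plus user handlers
                return new Http2ServerUpgradeCodec(
                        Http2FrameCodecBuilder.forServer().build(), new MyHttp2FrameHandler());
            }
            return null;
        }
    };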
Result:
More flexible use of Http2ServerUpgradeCodec
Motivation:
DnsNameResolverTest has been observed to timeout on the CI servers. We should increase the timeout from 5 seconds to 30 seconds.
Modifications:
- Increase timeout from 5 to 30 seconds.
Result:
Fewer false failures due to slower CI machines.
Motivation:
AbstractByteBuf.ensureWritable(...) should check if the buffer was released and, if so, throw an IllegalReferenceCountException
Modifications:
Ensure we throw in all cases.
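Expected behaviour after the change (assuming default accessibility checks are enabled):
    ByteBuf buf = Unpooled.buffer(8);
    buf.release();
    buf.ensureWritable(16); // now consistently throws IllegalReferenceCountException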
Result:
More consistent and correct behaviour
Motivation:
ResourceLeakDetector records at most MAX_RECORDS+1 records
Modifications:
Make room before adding to lastRecords
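Simplified sketch of the change (lastRecords is the internal record deque, MAX_RECORDS the existing limit; synchronization omitted):
    // Evict the oldest entry *before* adding the new one, so the deque
    // can never grow beyond MAX_RECORDS entries.
    if (lastRecords.size() >= MAX_RECORDS) {
        lastRecords.removeFirst();
    }
    lastRecords.add(newRecord);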
Result:
ResourceLeakDetector will record at most MAX_RECORDS records
Motivation:
An unknown MQTT message type isn't handled as a decoding error
Modification:
Catch exceptions thrown while decoding the MQTT fixed header
Add a unit test for an unknown MQTT message type
Result:
Fixes #6984.
Motivation:
- A `HttpPostMultipartRequestDecoder` contains two pairs of the same methods: `readFileUploadByteMultipartStandard`+`readFileUploadByteMultipart` and `loadFieldMultipartStandard`+`loadFieldMultipart`.
- These methods use `NotEnoughDataDecoderException` to detect that the current chunk is not the last one (exception handling is very expensive).
- These methods can be greatly simplified.
- Methods `loadFieldMultipart` and `loadFieldMultipartStandard` have an unnecessary catch of `IndexOutOfBoundsException`.
Modifications:
- Remove duplicate methods.
- Replace handling of `NotEnoughDataDecoderException` with a boolean return value.
- Simplify code.
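The refactoring follows this pattern (readUntilDelimiter/tryReadUntilDelimiter are illustrative names, not the real methods):
    // Before: "not enough data" is signalled through an exception.
    try {
        readUntilDelimiter(undecodedChunk);
    } catch (NotEnoughDataDecoderException ignored) {
        return; // not the last chunk yet, wait for more data
    }

    // After: the same condition is reported via the return value.
    if (!tryReadUntilDelimiter(undecodedChunk)) {
        return; // not the last chunk yet, wait for more data
    }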
Result:
The code is cleaner and easier to maintain. Less exception-handling logic.
Motivation:
6152990073 introduced a test case in SSLEngineTest which used OpenSsl.*; this should not be done as SSLEngineTest is an abstract base class that is also used for non-OpenSsl tests.
Modifications:
Move the protocol definitions into SslUtils.
Result:
Cleaner code.
Motivation:
It would be easier to find where a release call is missing among several retain/release calls on a ByteBuf.
Modifications:
Remove the final modifier from the release methods of SimpleLeakAwareByteBuf and SimpleLeakAwareCompositeByteBuf and override them in AdvancedLeakAwareByteBuf and AdvancedLeakAwareCompositeByteBuf to record the release.
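Roughly, the advanced wrappers now do the following (simplified sketch, not the exact patch):
    // In AdvancedLeakAwareByteBuf (and its composite counterpart):
    @Override
    public boolean release() {
        leak.record();           // remember where this release() happened
        return super.release();  // delegate to the simple leak-aware buffer
    }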
Result:
Releases will be recorded when detailed leak detection is enabled.
Motivation:
When running the current testsuite on Docker (Mac) a few tests fail with:
io.netty.channel.AbstractChannel$AnnotatedConnectException: connect(..) failed: Cannot assign requested address: /0:0:0:0:0:0:0:0%0:46607
Caused by: java.net.ConnectException: connect(..) failed: Cannot assign requested address
Modifications:
Specify the host explicitly, as done in other tests, to only use IPv6 when it is really supported.
Result:
Build passes on Docker as well.
Motivation:
Code introduced in 6152990073 can be cleaned up and use array initializer expressions.
Modifications:
Use array initializer expressions.
Result:
Cleaner code.
Motivation:
We recently had a report that issue [#6607] is still not fixed.
Modifications:
Add a testcase to prove the issue is fixed.
Result:
More tests.
Motivation:
Calling JsonObjectDecoder#reset while streaming Json array over multiple
writes causes CorruptedFrameException to be thrown.
Modifications:
While streaming a Json array, if the current readerIndex has been reset,
ensure that the decoder state is not reset as well.
Result:
Fixes #6969
Motivation:
TLS doesn't support a way to advertise non-contiguous versions from the client's perspective, and the client just advertises the max supported version. The TLS protocol also doesn't support all different combinations of discrete protocols, and instead assumes contiguous ranges. OpenSSL has some unexpected behavior (e.g. handshake failures) if non-contiguous protocols are used even where there is a compatible set of protocols and ciphers. For these reasons this method will determine the minimum protocol and the maximum protocol and enable a contiguous range from [min protocol, max protocol] in OpenSSL.
Modifications:
- ReferenceCountedOpenSslEngine#setEnabledProtocols should determine the min/max protocol versions and enable a contiguous range
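Sketch of the selection logic (enableProtocolRange stands in for the actual OpenSSL option handling, and requestedProtocols is assumed to be a non-empty set of known protocol names):
    List<String> ordered = Arrays.asList("SSLv2", "SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2");
    int min = Integer.MAX_VALUE;
    int max = -1;
    for (String p : requestedProtocols) {
        int i = ordered.indexOf(p);
        min = Math.min(min, i);
        max = Math.max(max, i);
    }
    // Enable the whole contiguous range, even if the requested set had gaps.
    enableProtocolRange(ordered.subList(min, max + 1));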
Result:
OpenSslEngine is more consistent with the JDK's SslEngineImpl and no more unexpected handshake failures due to protocol selection quirks.
Motivation:
Currently the default cipher suites are set independently between JDK and OpenSSL. We should use a common approach to setting the default ciphers. Also the OpenSsl default ciphers are expressed in terms of the OpenSSL cipher name conventions, which is not correct and may be exposed to the end user. OpenSSL should also use the RFC cipher names like the JDK defaults.
Modifications:
- Move the default cipher definition to a common location and use it in JDK and OpenSSL initialization
- OpenSSL should not expose OpenSSL cipher names externally
Result:
Common initialization and OpenSSL doesn't expose custom cipher names.
Motivation:
The DNS resolver may use default configuration inherited from the environment. This means the ndots value may change and result in test failure if the tests don't explicitly set the assumed value.
Modifications:
- Explicitly set ndots in resolver-dns unit tests so we don't fail if the environment overrides the search domain and ndots
Result:
Unit tests are less dependent upon the environment they run in.
Fixes https://github.com/netty/netty/issues/6966.
Motivation:
PR https://github.com/netty/netty/pull/6803 corrected an error in the return status of the OpenSslEngine. We should now be able to restore the SslHandler optimization.
Modifications:
- This reverts commit 7f3b75a509.
Result:
SslHandler optimization is restored.
Motivation:
JCTools 2.0.2 provides an unbounded MPSC linked queue. Before we shaded JCTools we had our own unbounded MPSC linked queue and used it in various places but gave this up because there was no public equivalent available in JCTools at the time.
Modifications:
- Use JCTools' MPSC linked queue when no upper bound is specified
Result:
Fixes https://github.com/netty/netty/issues/5951
Motivation:
Each call to SSL_write may introduce about ~100 bytes of overhead. The OpenSslEngine (based upon OpenSSL) is not able to do gathering writes so this means each wrap operation will incur the ~100 byte overhead. This commit attempts to increase goodput by aggregating the plaintext in chunks of 2^14 bytes (https://tools.ietf.org/html/rfc5246#section-6.2). If many small chunks are written this can increase goodput, decrease the amount of calls to SSL_write, and decrease overall encryption operations.
Modifications:
- Introduce SslHandlerCoalescingBufferQueue in SslHandler which will aggregate plaintext into chunks of up to 2^14 bytes by default
- Introduce SslHandler#setWrapDataSize to control how much data should be aggregated for each write. Aggregation can be disabled by setting this value to <= 0.
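Usage sketch (sslContext and channel are assumed to exist; values are illustrative, and by default the handler should already aggregate up to 2^14 bytes):
    SslHandler sslHandler = sslContext.newHandler(channel.alloc());
    sslHandler.setWrapDataSize(16 * 1024); // aggregate up to ~16 KiB of plaintext per SSL_write
    // sslHandler.setWrapDataSize(0);      // a value <= 0 disables aggregation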
Result:
Better goodput when using SslHandler and the OpenSslEngine.
Motivation:
The JDK SSLEngine documentation says that a call to wrap/unwrap "will attempt to consume one complete SSL/TLS network packet" [1]. This limitation can result in thrashing in the pipeline to decode and encode data that may be spread amongst multiple SSL/TLS network packets.
ReferenceCountedOpenSslEngine also does not correctly account for the overhead introduced by each individual SSL_write call if there are multiple ByteBuffers passed to the wrap() method.
Modifications:
- OpenSslEngine and SslHandler support a mode that does not comply with the limitation of only dealing with a single SSL/TLS network packet per call
- ReferenceCountedOpenSslEngine correctly accounts for the overhead of each call to SSL_write
- SslHandler shouldn't cache maxPacketBufferSize as aggressively because this value may change before/after the handshake.
Result:
OpenSslEngine and SslHandler can handle multiple SSL/TLS network packets per call.
[1] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html
Motivation:
1. Some encoders used `ByteBuf#writeBytes` to write short constant byte arrays (2-3 bytes). These can be replaced with the faster `ByteBuf#writeShort` or `ByteBuf#writeMedium`, which avoid the array access.
2. Two chained calls of the `ByteBuf#setByte` with constants can be replaced with one `ByteBuf#setShort` to reduce index checks.
3. The signature of method `HttpHeadersEncoder#encoderHeader` has an unnecessary `throws`.
Modifications:
1. Use `ByteBuf#writeShort` or `ByteBuf#writeMedium` instead of `ByteBuf#writeBytes` for the constants.
2. Use `ByteBuf#setShort` instead of chained call of the `ByteBuf#setByte` with constants.
3. Remove an unnecessary `throws` from `HttpHeadersEncoder#encoderHeader`.
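For example, for a two-byte CRLF constant:
    byte[] CRLF = { '\r', '\n' };
    buf.writeBytes(CRLF);                // before: bounds check plus array copy

    buf.writeShort(('\r' << 8) | '\n');  // after: one short write, no array access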
Result:
Slightly faster writes of constants into buffers.
Motivation:
We should also use realloc when shrinking the buffer, to eliminate extra allocations / memory copies when possible.
Modifications:
Use realloc for expanding and shrinking when possible.
Result:
Less memory copies and allocations
Motivation:
In some environments, the HTTP CONNECT handshake requires special headers to work.
Modification:
Update HttpProxyHandler to accept an HttpHeaders argument.
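Usage sketch (the header name/value, ch and proxyAddress are placeholders for whatever the environment requires):
    HttpHeaders headers = new DefaultHttpHeaders()
            .add("Proxy-Authorization", "Basic placeholder-credentials");
    ch.pipeline().addFirst(new HttpProxyHandler(proxyAddress, headers));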
Result:
The headers are passed along in the HTTP CONNECT request, and the proxy request can be successfully completed.
Motivation:
An intermediate list is created in `EpollEventLoop#closeAll` to prevent a ConcurrentModificationException, but this purpose is not obvious because there is no comment.
Modifications:
Add a comment to clarify the purpose of the intermediate collection.
Result:
Clearer code.
Motivation:
UnixResolverDnsServerAddressStreamProvider currently throws an exception if /etc/resolver exists but is empty. This shouldn't be an exception and can be tolerated as if there is no contribution from /etc/resolver.
Modifications:
- Treat /etc/resolver as present and empty the same as not being present
Result:
UnixResolverDnsServerAddressStreamProvider initialization can tolerate empty /etc/resolver directory.
Motivation:
InetSocketAddress#getHostName() may attempt a reverse lookup which may lead to test failures because the expected address will not match.
Modifications:
- Use InetSocketAddress#getHostString() which will not attempt any lookups and instead return the original String
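The difference in a nutshell (exception handling omitted):
    InetSocketAddress addr = new InetSocketAddress(InetAddress.getByName("10.0.0.1"), 53);
    addr.getHostName();   // may trigger a reverse DNS lookup on the address
    addr.getHostString(); // returns the original literal "10.0.0.1", no lookup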
Result:
UnixResolverDnsServerAddressStreamProviderTest is more reliable.
Motivation:
SslHandlerTest#testCompositeBufSizeEstimationGuaranteesSynchronousWrite has been observed to fail on CI servers. Knowing how many bytes were seen by the client would be helpful.
Modifications:
- Add bytesSeen to the exception if the client closes early.
Result:
More debug info available.
Motivation:
We used an intermediate collection to store the read DatagramPackets and only fired these through the pipeline once we were done with the reading loop. This is not needed and can also increase memory usage.
Modifications:
Remove intermediate collection
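Sketch of the simplification (readPacket() is a placeholder for the transport-specific read):
    // Fire each packet through the pipeline as soon as it is read,
    // instead of collecting everything into a temporary List first.
    DatagramPacket packet;
    while ((packet = readPacket()) != null) {
        pipeline.fireChannelRead(packet);
    }
    pipeline.fireChannelReadComplete();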
Result:
Less overhead and possibly less memory usage during the read loop.
Motivation:
If there are multiple DNS servers to query, Java's DNS resolver will attempt to resolve A and AAAA records in sequential order and will terminate with a failure once all DNS servers have been exhausted. Netty's DNS resolver will share the same DnsServerAddressStream for the different record types, which may send the A question to the first host and the AAAA question to the second host. Netty's DNS resolution also may not progress to the next DNS server in all situations and doesn't have a means to know when resolution has completed.
Modifications:
- DnsServerAddressStream should support new methods to allow the same stream to be used to issue multiple queries (e.g. A and AAAA) against the same host.
- DnsServerAddressStream should support a method to determine when the stream will start to repeat, and therefore a failure can be returned.
- Introduce SequentialDnsServerAddressStreamProvider for sequential use cases
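A sketch of how the new methods are intended to be used (provider is any DnsServerAddressStreamProvider):
    DnsServerAddressStream stream = provider.nameServerAddressStream("netty.io");
    // Duplicates start from the same position, so the A and AAAA questions
    // can be sent to the same server in the same order.
    DnsServerAddressStream forA = stream.duplicate();
    DnsServerAddressStream forAAAA = stream.duplicate();
    // size() tells us after how many addresses the stream starts to repeat,
    // i.e. when resolution can be failed instead of cycling forever.
    int maxAttempts = stream.size();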
Result:
Fixes https://github.com/netty/netty/issues/6926.
Motivation:
`SocketChannelUDT` from barchart-udt does not have the java 7 `public abstract SocketChannel bind(SocketAddress local)` method. Calling the abstract method `SocketChannel.bind(SocketAddress localAddress)` for `SocketChannelUDT` leads to an `AbstractMethodError` runtime error.
Modifications:
Work around this with an explicit call to `SocketChannelUDT.bind(SocketAddress local)`, as is done in `NioUdtByteConnectorChannel`.
Result:
Fixes [#6934].
Motivation:
The kqueue documentation states that 'Calling close() on a file descriptor will remove any kevents that reference the descriptor.' [1], but doesn't mention whether this cleanup is done synchronously. Under some circumstances it has been observed that cleanup was not done immediately, and when KQueueEventLoop attempted to access the channel associated with the event the JVM would crash, a ClassCastException would be thrown, or generally undefined behavior would occur because of invalid pointer references.
[1] https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2
Modifications:
- AbstractKqueueChannel#doClose should not rely upon this assumption and instead should call doDeregister() to ensure cleanup is done synchronously.
- Deleting a kevent should also set the jniSelfPtr stored in the udata of that kevent to NULL, to ensure we will not dereference it later.
Result:
No more kqueue crash due to close/cleanup sequencing.
Motivation:
In the SocksCmdRequest and SocksCmdResponse constructors the host param is converted from IDN to ASCII-compatible form regardless of the address type.
Modifications:
Use `IDN#toASCII` only for `DOMAIN` address type.
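The intent, roughly (field and parameter names simplified):
    // Only DOMAIN addresses are internationalised domain names;
    // IPv4/IPv6 literals must be left untouched.
    this.host = addressType == SocksAddressType.DOMAIN ? IDN.toASCII(host) : host;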
Result:
More correct host handling in socks commands.
Motivation:
Methods `ByteBufUtil#writeUtf8` and `ByteBufUtil#writeAscii` contain a `ByteBuf#ensureWritable` check before calling `ByteBuf#writeBytes`. But `ByteBuf#writeBytes` already does such a check internally.
Modifications:
Make checks more targeted.
Result:
Less redundant method calls.
Motivation:
The behaviour of FixedChannelPool.release was inconsistent with the
SimpleChannelPool implementation, in which the given promise is returned.
The FixedChannelPool implementation returned a new promise, which
meant that the completion of that promise could differ.
Specifically, on releasing a channel to a closed pool, the parameter
promise is failed with an IllegalStateException but the returned one
will have been successful (as it was completed by the call to
super.release).
Modification:
Return the given promise as the result of FixedChannelPool.release
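The observable contract after the change, sketched:
    Promise<Void> releasePromise = channel.eventLoop().newPromise();
    Future<Void> returned = pool.release(channel, releasePromise);
    // returned is now the very promise that was passed in, so both reflect the
    // same outcome, including the IllegalStateException on a closed pool.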
Result:
Returned promise will reflect the result of the release operation.
Motivation:
Channels returned to a FixedChannelPool after closing it remain active.
Since channels that were acquired from the pool are not closed during the close operation, they remain open even after releasing the channel back to the pool, where they are then inaccessible and become, in effect, a connection leak.
Modification:
Close the released channel on releasing back to a closed pool.
Result:
Much harder to create a connection leak by closing an active
FixedChannelPool instance.
Motivation:
DefaultHttp2ConnectionEncoder#writeHeaders attempts to find a stream object, and if one doesn't exist it tries to create one. However, in the event that the local endpoint has received a RST_STREAM frame before writing the response headers, we attempt to create a stream. Since this stream ID is for the incorrect endpoint, we then generate a GO_AWAY for what appears to be a protocol error, when the write can instead be failed locally.
Modifications:
- Just fail the local promise in the above situation instead of sending a GO_AWAY
Result:
Less severe consequences if the server asynchronously sends headers after a RST_STREAM has been received.
Fixes https://github.com/netty/netty/issues/6906.
Motivation:
We rely upon the linker being non-lazy to test the native library compatibility for kqueue, but the default mode of operation is to link lazily.
Modifications:
- We should modify the build scripts to inform the linker that this library should not be lazy linked
- The error message changes from a failure when the missing symbol is first used:
dyld: lazy symbol binding failed: Symbol not found: _clock_gettime
to a failure when the library is loaded:
java.lang.UnsatisfiedLinkError: unsupported JNI version 0xFFFFFFFF required by .../libnetty-transport-native-kqueue.dylib
Result:
Link errors are detected upon library load time.
Motivation:
We had a typo in the method name of the EpollSocketChannelConfig.
Modifications:
Deprecate old method and introduce a new one.
Result:
Fixes [#6909]
Motivation:
Right now HttpRequestEncoder inserts a slash for URLs like http://localhost?param=1 before the question mark. This is not done efficiently.
Modification:
Code:
    new StringBuilder(len + 1)
        .append(uri, 0, index)
        .append(SLASH)
        .append(uri, index, len)
        .toString();
Replaced with:
    new StringBuilder(uri)
        .insert(index, SLASH)
        .toString();
Result:
Faster HttpRequestEncoder. Additional small test. Attached benchmark in PR.
Benchmark                                      Mode  Cnt        Score        Error  Units
HttpRequestEncoderInsertBenchmark.newEncoder  thrpt   40  3704843.303 ±  98950.919  ops/s
HttpRequestEncoderInsertBenchmark.oldEncoder  thrpt   40  3284236.960 ± 134433.217  ops/s