Motivation:
DefaultChannelPipeline.estimatorHandle needs to be volatile as it's accessed from different threads.
Modifications:
Make DefaultChannelPipeline.estimatorHandle volatile and correctly initialize it via CAS.
Result:
No more race.
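A minimal sketch of the pattern (not the actual Netty code): the field is volatile so other threads see the published instance, and a CAS ensures only one lazily created handle wins.

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

public final class LazyHandleHolder {
    // Hypothetical handle type standing in for the estimator handle.
    public static final class Handle { }

    private static final AtomicReferenceFieldUpdater<LazyHandleHolder, Handle> UPDATER =
            AtomicReferenceFieldUpdater.newUpdater(LazyHandleHolder.class, Handle.class, "handle");

    // volatile so reads from other threads see the published instance.
    private volatile Handle handle;

    public Handle handle() {
        Handle h = handle;
        if (h == null) {
            h = new Handle();
            // Only one thread wins the race; the losers use the winner's instance.
            if (!UPDATER.compareAndSet(this, null, h)) {
                h = handle;
            }
        }
        return h;
    }
}
```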
Motivation:
Currently the logger creates a hex dump even if the log write will not apply.
That is unnecessary GC overhead.
Modifications:
Restore optimization from #3492
Result:
Fixes #7025
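A sketch of the restored optimization, assuming an SLF4J-style logger for illustration: the hex dump is only produced when the target log level is actually enabled.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class GuardedDump {
    private static final Logger logger = LoggerFactory.getLogger(GuardedDump.class);

    static void logBuffer(ByteBuf buf) {
        // Build the (expensive) hex dump only if the message will actually be written.
        if (logger.isDebugEnabled()) {
            logger.debug("received: {}", ByteBufUtil.hexDump(buf));
        }
    }
}
```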
Motivation:
There should be a way to allow graceful shutdown to wait for all open streams to close without a timeout. Using gracefulShutdownTimeoutMillis with a large value is a bit of a hack, and has a gotcha that sufficiently large values will overflow the long, resulting in a ClosingChannelFutureListener that executes immediately.
Modification:
Allow using gracefulShutdownTimeoutMillis(-1) to express waiting until all streams are closed.
Result:
We can now shut down the connection without a forced timeout.
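Hedged usage sketch, assuming the setter described above is the one exposed by Http2ConnectionHandler:

```java
import io.netty.handler.codec.http2.Http2ConnectionHandler;

public final class GracefulShutdownExample {
    static void waitForAllStreams(Http2ConnectionHandler handler) {
        // -1 means: do not force-close after a timeout, wait until every open stream has closed.
        handler.gracefulShutdownTimeoutMillis(-1);
    }
}
```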
Motivation:
We previously used pollLast() to retrieve a Channel from the queue that backs SimpleChannelPool. This could lead to the problem that some Channels are used very infrequently, so by the time they are used the connection had already been closed and so could not be reused.
Modifications:
Allow configuring whether the most recently used Channel or the "oldest" one should be acquired.
Result:
More flexible usage of ChannelPools
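Independent of the pool internals, the difference boils down to which end of the backing deque a Channel is taken from; a small illustrative sketch using a plain JDK deque:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class PoolOrderSketch {
    public static void main(String[] args) {
        Deque<String> idle = new ArrayDeque<>();
        idle.offer("oldest");
        idle.offer("newest");

        // LIFO (most recently used): hot channels are reused, cold ones may sit idle and get closed.
        String lifo = idle.peekLast();   // "newest"
        // FIFO: the oldest idle channel is reused first, so none stays idle for too long.
        String fifo = idle.peekFirst();  // "oldest"

        System.out.println(lifo + " vs " + fifo);
    }
}
```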
Motivation:
We should not use the IPv4 Google DNS servers if the app is configured to run with IPv6.
Modifications:
Use either IPv4 or IPv6 DNS servers depending on the system configuration.
Result:
More correct behaviour
Motivation:
We used an int[] to store all values that are returned in the struct for TCP_INFO, which is not good enough as the struct uses unsigned int values.
Modifications:
- Change int[] to long[] and correctly cast values.
Result:
No more truncated values.
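The underlying issue is plain Java arithmetic: a C unsigned int does not fit into a signed Java int, so the value has to be widened into a long and masked. A minimal illustration:

```java
public final class UnsignedIntExample {
    public static void main(String[] args) {
        // A C "unsigned int" value larger than Integer.MAX_VALUE, as it arrives via JNI.
        int raw = (int) 3_000_000_000L;       // stored bits, interpreted as negative in Java
        long truncated = raw;                 // -1294967296: what an int[] field would report
        long correct = raw & 0xFFFFFFFFL;     // 3000000000: widen and mask to recover the value

        System.out.println(truncated + " vs " + correct);
    }
}
```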
Motivation:
When the hostname portion can not be extracted we should just skip the server, as otherwise we will produce an exception when trying to create the InetSocketAddress.
This was happening when trying to run the test-suite on a system using Java 7:
java.lang.IllegalArgumentException: hostname can't be null
at java.net.InetSocketAddress.checkHost(InetSocketAddress.java:149)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:216)
at io.netty.util.internal.SocketUtils$10.run(SocketUtils.java:171)
at io.netty.util.internal.SocketUtils$10.run(SocketUtils.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.socketAddress(SocketUtils.java:168)
at io.netty.resolver.dns.DefaultDnsServerAddressStreamProvider.<clinit>(DefaultDnsServerAddressStreamProvider.java:74)
at io.netty.resolver.dns.DnsServerAddressesTest.testDefaultAddresses(DnsServerAddressesTest.java:39)
Modifications:
Skip if hostname can not be extracted.
Result:
No more java.lang.ExceptionInInitializerError.
Motivation:
JNDI allows specifying a port, so we should respect it.
Modifications:
Use the specified port and, if none is specified, use 53.
Result:
Correct handling of JNDI configured DNS.
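A hedged sketch of the parsing rule (not the actual resolver code): take the port from the configured server string when present, otherwise fall back to 53.

```java
import java.net.InetSocketAddress;

public final class DnsServerParseSketch {
    static final int DNS_PORT = 53;

    // Accepts entries like "10.0.0.1" or "10.0.0.1:5353" (IPv4 shown for simplicity).
    static InetSocketAddress parse(String server) {
        int idx = server.lastIndexOf(':');
        if (idx > 0 && server.indexOf(':') == idx) {   // exactly one ':' => host:port
            String host = server.substring(0, idx);
            int port = Integer.parseInt(server.substring(idx + 1));
            return InetSocketAddress.createUnresolved(host, port);
        }
        return InetSocketAddress.createUnresolved(server, DNS_PORT);
    }

    public static void main(String[] args) {
        System.out.println(parse("8.8.8.8"));        // port 53
        System.out.println(parse("8.8.8.8:5353"));   // port 5353
    }
}
```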
Motivation:
`Epoll.ensureAvailability()` is called multiple times, once in
static initialization and in a couple of the constructors. This is
redundant and confusing to read.
Modifications:
Move `Epoll.ensureAvailability()` call into an instance initializer
and remove all other references. This ensures that every EELG
checks availability, while still delaying the check until
construction. This pattern is used when there are multiple ctors,
as in this class.
Result:
Easier to read code.
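A generic sketch of the pattern (class and method names are placeholders, not the EpollEventLoopGroup code): an instance initializer runs exactly once per object construction, no matter which constructor is used, so the availability check is written in a single place.

```java
public class AvailabilityCheckedGroup {
    // Instance initializer: runs once per object construction, as part of the
    // constructor that (implicitly) calls super(), regardless of which ctor was invoked.
    {
        ensureAvailability();
    }

    public AvailabilityCheckedGroup() {
        this(0);
    }

    public AvailabilityCheckedGroup(int nThreads) {
        // No explicit availability check needed in any constructor body.
    }

    private static void ensureAvailability() {
        // Stand-in for Epoll.ensureAvailability(); would throw if the native transport is unusable.
    }
}
```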
Motivation:
ByteBuf#ensureWritable(int,boolean) returns an int indicating the status of the resize operation. For buffers that are unmodifiable or cannot be resized this method shouldn't throw but just return 1.
ByteBuf#ensureWritable(int) should throw for unmodifiable buffers.
Modifications:
- ReadOnlyByteBuf should be updated as described above.
- Add a unit test to SslHandler which verifies the read only buffer can be tolerated in the aggregation algorithm.
Result:
Fixes https://github.com/netty/netty/issues/7002.
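A hedged illustration of the two contracts, assuming a Netty version that contains this fix and using Unpooled's read-only wrapper:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class EnsureWritableExample {
    public static void main(String[] args) {
        ByteBuf readOnly = Unpooled.buffer(8).asReadOnly();
        try {
            // Non-throwing overload: 1 means "not enough writable bytes, capacity unchanged".
            int status = readOnly.ensureWritable(16, false);
            System.out.println(status);

            // Throwing overload: a buffer that cannot be expanded should fail here.
            readOnly.ensureWritable(16);
        } catch (Exception expected) {
            System.out.println(expected.getClass().getSimpleName());
        } finally {
            readOnly.release();
        }
    }
}
```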
Motivation:
Lots of usages of SelfSignedCertificates were not deleting the certs at
the end of the test. This includes setupHandlers() which is used by
extending classes. Although these files will be deleted at JVM exit and
deleting them early does not free the JVM from trying to delete them at
shutdown, it's good practice to delete eagerly and since users sometimes
use tests as a form of documentation, it'd be good for them to see the
explicit deletes.
Modifications:
Add missing delete() calls to ½ of the SelfSignedCertificates-using
tests.
Result:
Tests that more clearly communicate which resources are created and
may accumulate without an early delete.
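A minimal sketch of the eager cleanup the tests now perform; SelfSignedCertificate writes a throwaway certificate and key to temp files:

```java
import io.netty.handler.ssl.util.SelfSignedCertificate;

public final class SelfSignedCertExample {
    public static void main(String[] args) throws Exception {
        SelfSignedCertificate ssc = new SelfSignedCertificate();
        try {
            System.out.println(ssc.certificate());   // temp file backing the certificate
            System.out.println(ssc.privateKey());    // temp file backing the private key
        } finally {
            // Delete the temp files eagerly instead of relying on JVM-exit cleanup.
            ssc.delete();
        }
    }
}
```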
Motivation:
Previously filterCipherSuites was being passed the OpenSSL-formatted
cipher names. Commit 43ae974 introduced a regression as it swapped to the
RFC/JDK format, except that user-provided ciphers were not converted and
remained in the OpenSSL format.
This mismatch would cause all user-provided ciphers to be thrown away, leading
to a failure when trying to set zero ciphers:
Exception in thread "main" javax.net.ssl.SSLException: failed to set cipher suite: []
at io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:299)
at io.netty.handler.ssl.OpenSslContext.<init>(OpenSslContext.java:43)
at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:347)
at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:335)
at io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:421)
at io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:441)
Caused by: java.lang.Exception: Unable to configure permitted SSL ciphers (error:100000b1:SSL routines:OPENSSL_internal:NO_CIPHER_MATCH)
at io.netty.internal.tcnative.SSLContext.setCipherSuite(Native Method)
at io.netty.handler.ssl.ReferenceCountedOpenSslContext.<init>(ReferenceCountedOpenSslContext.java:295)
... 7 more
Modifications:
Remove the reformatting of user-provided ciphers, as they are already in
the RFC/JDK format.
Result:
No regression, and the internals stay sane using the RFC/JDK format.
Motivation:
08748344d8 introduced two new tests which did not take into account that the multipart delimiter can be between 2 and 16 bytes long.
Modifications:
Take the multipart delimiter length into account.
Result:
Fixes [#7001]
Motivation:
We need to ensure we do not allow calling set/writeCharSequence on a released ByteBuf.
Modifications:
Add test-cases
Result:
Proves fix of [#6951].
Motivation:
We need to fail the promise if a failure during handshake happens.
Modification:
Correctly fail the promise.
Result:
Correct websocket client example. Fixes [#6998]
Motivation:
We should not try to detect a free port in tests but just use 0 when binding, so there is no race in which the system may bind something to the port we chose before.
Modifications:
- Remove the usage of TestUtils.getFreePort() in the testsuite
- Remove the hack to work around bind errors, which will not happen anymore now
Result:
Less flaky tests.
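A sketch of the race-free alternative: bind to port 0 and ask the channel which port the OS actually assigned.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import java.net.InetSocketAddress;

public final class EphemeralPortExample {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup(1);
        try {
            Channel ch = new ServerBootstrap()
                    .group(group)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<Channel>() {
                        @Override
                        protected void initChannel(Channel ch) { }
                    })
                    .bind(0)                       // let the OS pick a free port, no race
                    .sync()
                    .channel();
            int port = ((InetSocketAddress) ch.localAddress()).getPort();
            System.out.println("bound to port " + port);
            ch.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```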
Motivation:
Shading requires renaming binary components (.so, .dll; for tcnative,
epoll, etc). But the rename then requires setting the
io.netty.packagePrefix system property on the command line or runtime,
which is either a burden or not feasible.
If you don't rename the binary components everything appears to
work, until a dependency on a second version of the binary component is
added. At that point, only one version of the binary will be loaded...
which is what shading is supposed to prevent. So for valid shading, the
binaries must be renamed.
Modifications:
Automatically detect the package prefix by comparing the actual class
name to the non-shaded expected class name. The expected class name must
be obfuscated to prevent shading utilities from changing it.
Result:
When shading and using binary components, runtime configuration is no
longer necessary.
Pre-existing shading users that were not renaming the binary components
will break, because the packagePrefix previously defaulted to "". Since
these pre-existing users had broken configurations that only _appeared_
to work, this breakage is considered a Good Thing. Users may workaround
this breakage temporarily by setting -Dio.netty.packagePrefix= to
restore packagePrefix to "".
Fixes #6963
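A hedged sketch of the detection idea (class name and helper are illustrative, not Netty's implementation): compare the runtime class name against the expected unshaded name and treat any leading text as the shading prefix.

```java
public final class PackagePrefixSketch {
    // The unshaded name we expect; in Netty the real string is obfuscated so shading
    // tools cannot rewrite it. The class name here is just an illustration.
    private static final String EXPECTED = "io.netty.util.internal.NativeLibraryLoader";

    // "com.myapp.shaded.io.netty.util.internal.NativeLibraryLoader" -> "com.myapp.shaded."
    static String detectPrefix(String actualClassName) {
        int idx = actualClassName.indexOf(EXPECTED);
        return idx > 0 ? actualClassName.substring(0, idx) : "";
    }

    public static void main(String[] args) {
        System.out.println(detectPrefix("com.myapp.shaded." + EXPECTED));  // "com.myapp.shaded."
        System.out.println(detectPrefix(EXPECTED));                        // "" (not shaded)
    }
}
```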
Motivation:
In some environments, IPv4 may be disabled (at a kernel level).
Google has such an environment for testing v4 -> v6 transition
paths. This gives confidence that code is v6 ready.
Modifications:
Change native socket code to ignore failures of trying to enter
dual stack mode. This change has been made to Google's internal
JDK, and will/should be upstreamed to OpenJDK eventually.
Results:
Netty works in IPv6 only environments
Fixes: #6993
Motivation:
codec-http2 currently does not strictly enforce the HTTP/1.x semantics with respect to the number of headers defined in RFC 7540 Section 8.1 [1]. We currently don't validate the number of headers nor do we validate that the trailing headers should indicate EOS.
[1] https://tools.ietf.org/html/rfc7540#section-8.1
Modifications:
- DefaultHttp2ConnectionDecoder should only allow decoding of a single headers and a single trailers
- DefaultHttp2ConnectionEncoder should only allow encoding of a single headers and optionally a single trailers
Result:
Constraints of RFC 7540 restricting the number of headers/trailers are enforced.
Motivation:
AbstractByteBuf.setCharSequence(...) must not expand the buffer if not enough writable space is present in the buffer to be consistent with all the other set operations.
Modifications:
- Ensure we only expand the buffer on writeCharSequence(...) but not on setCharSequence(...)
- Add unit tests.
Result:
Consistent and correct behavior.
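A small hedged illustration of the intended contract, assuming a Netty version with this fix: writeCharSequence may grow the buffer up to maxCapacity, setCharSequence must not.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class SetVsWriteCharSequence {
    public static void main(String[] args) {
        // write* operations may expand the buffer up to maxCapacity.
        ByteBuf writeBuf = Unpooled.buffer(2, 64);
        writeBuf.writeCharSequence("hello world", CharsetUtil.UTF_8);
        System.out.println("capacity after write: " + writeBuf.capacity());

        // set* operations must not expand; with too little space this fails.
        ByteBuf setBuf = Unpooled.buffer(2, 64);
        try {
            setBuf.setCharSequence(0, "hello world", CharsetUtil.UTF_8);
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("setCharSequence did not expand: " + expected);
        }

        writeBuf.release();
        setBuf.release();
    }
}
```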
Motivation:
Some ChannelOptions must be set before the Channel is really registered to have the desired effect.
Modifications:
Add another constructor argument which allows not registering the EmbeddedChannel with its EventLoop until the user calls register().
Result:
More flexible usage of EmbeddedChannel. Also Fixes [#6968].
Motivation:
HttpPostRequestEncoder maintains an internal buffer that holds the
current encoded data. There are use cases when this internal buffer
becomes null, the next chunk processing implementation should take
this into consideration.
Modifications:
- When preparing the last chunk if currentBuffer is null, mark
isLastChunkSent as true and send LastHttpContent.EMPTY_LAST_CONTENT
- When calculating the remaining size take into consideration that the
currentBuffer might be null
- Tests are based on those provided in the issue by @nebhale and @bfiorini
Result:
Fixes#5478
Motivation:
There was a typo in a dependency in the bom pom.xml which led it to specify a non-existent artifact and also prevented the maven release plugin from updating the version correctly.
Modifications:
Rename netty-transport-unix-common to netty-transport-native-unix-common and also fix the version.
Result:
Fixes [#6979]
Motivation:
Http2ServerUpgradeCodec should support Http2FrameCodec.
Modifications:
- Add support for Http2FrameCodec
- Add example that uses Http2FrameCodec
Result:
More flexible use of Http2ServerUpgradeCodec
Motivation:
DnsNameResolverTest has been observed to timeout on the CI servers. We should increase the timeout from 5 seconds to 30 seconds.
Modifications:
- Increase timeout from 5 to 30 seconds.
Result:
Less false failures due to slower CI machines.
Motivation:
AbstractByteBuf.ensureWritable(...) should check if buffer was released and if so throw an IllegalReferenceCountException
Modifications:
Ensure we throw in all cases.
Result:
More consistent and correct behaviour
Motivation:
ResourceLeakDetector records at most MAX_RECORDS+1 records
Modifications:
Make room before adding to lastRecords.
Result:
ResourceLeakDetector will record at most MAX_RECORDS records
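A generic sketch of the idea (not the detector code): evicting before the add keeps the bound exact, while trimming after the add allows MAX_RECORDS + 1.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class BoundedRecords {
    private static final int MAX_RECORDS = 4;                  // illustrative bound
    private final Deque<String> lastRecords = new ArrayDeque<>();

    void record(String record) {
        // Make room *before* adding, so the deque never holds more than MAX_RECORDS entries.
        if (lastRecords.size() >= MAX_RECORDS) {
            lastRecords.removeFirst();
        }
        lastRecords.addLast(record);
    }
}
```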
Motivation:
An unknown MQTT message type isn't handled as a decoding error.
Modification:
Catch exceptions during decoding of the MQTT fixed header.
Add a unit test for an unknown MQTT message type.
Result:
Fixes #6984.
Motivation:
- A `HttpPostMultipartRequestDecoder` contains two pairs of the same methods: `readFileUploadByteMultipartStandard`+`readFileUploadByteMultipart` and `loadFieldMultipartStandard`+`loadFieldMultipart`.
- These methods use `NotEnoughDataDecoderException` to detect that the current data chunk is not the last one (exception handling is very expensive).
- These methods can be greatly simplified.
- Methods `loadFieldMultipart` and `loadFieldMultipartStandard` have unnecessary catching of the `IndexOutOfBoundsException`.
Modifications:
- Remove duplicate methods.
- Replace handling `NotEnoughDataDecoderException` by the return of a boolean result.
- Simplify code.
Result:
The code is cleaner and easier to support. Less exception handling logic.
Motivation:
6152990073 introduced a test-case in SSLEngineTest which used OpenSsl.*, which should not be done as this is an abstract base class that is also used for non-OpenSsl tests.
Modifications:
Move the protocol definitions into SslUtils.
Result:
Cleaner code.
Motivation:
It would be easier to find which release call is missing among several retain/release calls on a ByteBuf.
Modifications:
Remove the final modifier on the SimpleLeakAwareByteBuf and SimpleLeakAwareCompositeByteBuf release functions and override them to record the release in AdvancedLeakAwareByteBuf and AdvancedLeakAwareCompositeByteBuf.
Result:
Releases will be recorded when detailed leak detection is enabled.
Motivation:
When running the current testsuite on Docker (Mac) a few tests fail with:
io.netty.channel.AbstractChannel$AnnotatedConnectException: connect(..) failed: Cannot assign requested address: /0:0:0:0:0:0:0:0%0:46607
Caused by: java.net.ConnectException: connect(..) failed: Cannot assign requested address
Modifications:
Specify the host explicitly, as done in other tests, to only use IPv6 when it is really supported.
Result:
Build passes on Docker as well.
Motivation:
Code introduced in 6152990073 can be cleaned up and use array initializer expressions.
Modifications:
Use array initializer expressions.
Result:
Cleaner code.
Motivation:
We recently had a report that issue [#6607] is still not fixed.
Modifications:
Add a testcase to prove the issue is fixed.
Result:
More tests.
Motivation:
Calling JsonObjectDecoder#reset while streaming a JSON array over multiple
writes causes CorruptedFrameException to be thrown.
Modifications:
While streaming a JSON array, if the current readerIndex has been reset,
ensure that the internal states will not be reset.
Result:
Fixes#6969
Motivation:
TLS doesn't support a way to advertise non-contiguous versions from the client's perspective, and the client just advertises the max supported version. The TLS protocol also doesn't support all different combinations of discrete protocols, and instead assumes contiguous ranges. OpenSSL has some unexpected behavior (e.g. handshake failures) if non-contiguous protocols are used even where there is a compatible set of protocols and ciphers. For these reasons this method will determine the minimum protocol and the maximum protocol and enable a contiguous range from [min protocol, max protocol] in OpenSSL.
Modifications:
- ReferenceCountedOpenSslEngine#setEnabledProtocols should determine the min/max protocol versions and enable a contiguous range
Result:
OpenSslEngine is more consistent with the JDK's SslEngineImpl and no more unexpected handshake failures due to protocol selection quirks.
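A hedged sketch of the range computation (not the engine code): map each requested protocol to its position in an ordered list and enable everything between the minimum and the maximum.

```java
import java.util.Arrays;
import java.util.List;

public final class ProtocolRangeSketch {
    // Ordered from oldest to newest; names follow the JDK convention.
    private static final List<String> ORDERED =
            Arrays.asList("SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2");

    // Returns the contiguous range [min, max] implied by the requested protocols.
    static List<String> contiguousRange(String... requested) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (String p : requested) {
            int idx = ORDERED.indexOf(p);
            if (idx < 0) {
                throw new IllegalArgumentException("unsupported protocol: " + p);
            }
            min = Math.min(min, idx);
            max = Math.max(max, idx);
        }
        return ORDERED.subList(min, max + 1);
    }

    public static void main(String[] args) {
        // A non-contiguous request such as {TLSv1, TLSv1.2} ends up enabling TLSv1.1 as well.
        System.out.println(contiguousRange("TLSv1", "TLSv1.2"));
    }
}
```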
Motivation:
Currently the default cipher suites are set independently between JDK and OpenSSL. We should use a common approach to setting the default ciphers. Also the OpenSsl default ciphers are expressed in terms of the OpenSSL cipher name conventions, which is not correct and may be exposed to the end user. OpenSSL should also use the RFC cipher names like the JDK defaults.
Modifications:
- Move the default cipher definition to a common location and use it in JDK and OpenSSL initialization
- OpenSSL should not expose OpenSSL cipher names externally
Result:
Common initialization and OpenSSL doesn't expose custom cipher names.
Motivation:
The DNS resolver may use default configuration inherited from the environment. This means the ndots value may change and result in test failure if the tests don't explicitly set the assumed value.
Modifications:
- Explicitly set ndots in resolver-dns unit tests so we don't fail if the environment overrides the search domain and ndots
Result:
Unit tests are less dependent upon the environment they run in.
Fixes https://github.com/netty/netty/issues/6966.
Motivation:
PR https://github.com/netty/netty/pull/6803 corrected an error in the return status of the OpenSslEngine. We should now be able to restore the SslHandler optimization.
Modifications:
- This reverts commit 7f3b75a509.
Result:
SslHandler optimization is restored.
Motivation:
JCTools 2.0.2 provides an unbounded MPSC linked queue. Before we shaded JCTools we had our own unbounded MPSC linked queue and used it in various places but gave this up because there was no public equivalent available in JCTools at the time.
Modifications:
- Use JCTools' MPSC linked queue when no upper bound is specified
Result:
Fixes https://github.com/netty/netty/issues/5951
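A hedged usage note, assuming the internal PlatformDependent factory is available in your Netty version (io.netty.util.internal is internal API):

```java
import java.util.Queue;
import io.netty.util.internal.PlatformDependent;

public final class MpscQueueExample {
    public static void main(String[] args) {
        // Unbounded MPSC queue (backed by JCTools' linked queue when no upper bound is given).
        Queue<Runnable> unbounded = PlatformDependent.newMpscQueue();
        // Bounded variant for comparison.
        Queue<Runnable> bounded = PlatformDependent.newMpscQueue(1024);

        unbounded.offer(() -> { });
        System.out.println(unbounded.size() + " " + bounded.size());
    }
}
```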
Motivation:
Each call to SSL_write may introduce about ~100 bytes of overhead. The OpenSslEngine (based upon OpenSSL) is not able to do gathering writes so this means each wrap operation will incur the ~100 byte overhead. This commit attempts to increase goodput by aggregating the plaintext in chunks of 2^14 (https://tools.ietf.org/html/rfc5246#section-6.2). If many small chunks are written this can increase goodput, decrease the amount of calls to SSL_write, and decrease overall encryption operations.
Modifications:
- Introduce SslHandlerCoalescingBufferQueue in SslHandler which will aggregate up to 2^14 chunks of plaintext by default
- Introduce SslHandler#setWrapDataSize to control how much data should be aggregated for each write. Aggregation can be disabled by setting this value to <= 0.
Result:
Better goodput when using SslHandler and the OpenSslEngine.
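Hedged usage of the setter introduced above; the value is in bytes and a non-positive value disables aggregation:

```java
import io.netty.handler.ssl.SslHandler;

public final class WrapDataSizeExample {
    static void configure(SslHandler handler) {
        handler.setWrapDataSize(16 * 1024);   // aggregate plaintext up to ~2^14 bytes per SSL_write
        // handler.setWrapDataSize(0);        // disable aggregation entirely
    }
}
```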
Motivation:
The JDK SSLEngine documentation says that a call to wrap/unwrap "will attempt to consume one complete SSL/TLS network packet" [1]. This limitation can result in thrashing in the pipeline to decode and encode data that may be spread amongst multiple SSL/TLS network packets.
ReferenceCountedOpenSslEngine also does not correctly account for the overhead introduced by each individual SSL_write call if there are multiple ByteBuffers passed to the wrap() method.
Modifications:
- OpenSslEngine and SslHandler support a mode to not comply with the limitation to only deal with a single SSL/TLS network packet per call
- ReferenceCountedOpenSslEngine correctly accounts for the overhead of each call to SSL_write
- SslHandler shouldn't cache maxPacketBufferSize as aggressively because this value may change before/after the handshake.
Result:
OpenSslEngine and SslHandler can handle multiple SSL/TLS network packets per call.
[1] https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html
Motivation:
1. Some encoders used `ByteBuf#writeBytes` to write short constant byte arrays (2-3 bytes). This can be replaced with the faster `ByteBuf#writeShort` or `ByteBuf#writeMedium`, which do not need to read the constant from memory.
2. Two chained calls of the `ByteBuf#setByte` with constants can be replaced with one `ByteBuf#setShort` to reduce index checks.
3. The signature of method `HttpHeadersEncoder#encoderHeader` has an unnecessary `throws`.
Modifications:
1. Use `ByteBuf#writeShort` or `ByteBuf#writeMedium` instead of `ByteBuf#writeBytes` for the constants.
2. Use `ByteBuf#setShort` instead of chained call of the `ByteBuf#setByte` with constants.
3. Remove an unnecessary `throws` from `HttpHeadersEncoder#encoderHeader`.
Result:
Slightly faster writes of constants into buffers.
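An illustrative before/after; the CRLF constant here is hypothetical, the real encoders use their own delimiters:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class ConstantWriteExample {
    private static final byte[] CRLF_BYTES = { '\r', '\n' };
    private static final int CRLF_SHORT = ('\r' << 8) | '\n';   // same two bytes packed into an int

    public static void main(String[] args) {
        ByteBuf before = Unpooled.buffer(2).writeBytes(CRLF_BYTES);   // reads from the array
        ByteBuf after = Unpooled.buffer(2).writeShort(CRLF_SHORT);    // writes the constant directly

        System.out.println(before.equals(after));   // true: identical content
        before.release();
        after.release();
    }
}
```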
Motivation:
We should also use realloc when shrinking the buffer to eliminate extra allocations/memory copies when possible.
Modifications:
Use realloc for expanding and shrinking when possible.
Result:
Fewer memory copies and allocations