Motivation:
If ByteBufUtil.getBytes() is called with copy=false, it does not
correctly check that the underlying array can be shared in some cases.
In particular:
* It does not check that the arrayOffset() is zero. This causes it to
incorrectly return the underlying array if the other conditions are
met. The returned array will be longer than requested, with additional
unwanted bytes at its start.
* It assumes that the capacity() of the ByteBuf is equal to the backing
array length. This is not true for some types of ByteBuf, such as
PooledHeapByteBuf. This causes it to incorrectly return the underlying
array if the other conditions are met. The returned array will be
longer than requested, with additional unwanted bytes at its end.
Modifications:
This commit fixes the two bugs by:
* Checking that the arrayOffset() is zero before returning the
underlying array.
* Comparing the requested length to the underlying array's length,
rather than the ByteBuf's capacity, before returning the underlying
array.
This commit also adds a series of test cases for ByteBufUtil.getBytes().
Result:
ByteBufUtil.getBytes() now correctly checks whether the underlying array
can be shared or not.
The test cases will ensure the bug is not reintroduced in the future.
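A minimal sketch of the corrected sharing check described above, assuming a method of this general shape (the real ByteBufUtil code may differ in detail):
```java
import java.util.Arrays;
import io.netty.buffer.ByteBuf;

// Sketch: only hand out the backing array itself when it lines up exactly with the
// requested region; otherwise copy the requested range.
static byte[] getBytes(ByteBuf buf, int start, int length, boolean copy) {
    if (buf.hasArray()) {
        byte[] array = buf.array();
        int baseOffset = buf.arrayOffset() + start;
        if (!copy && baseOffset == 0 && length == array.length) {
            // No leading offset and no trailing bytes beyond the request: safe to share.
            return array;
        }
        return Arrays.copyOfRange(array, baseOffset, baseOffset + length);
    }
    byte[] bytes = new byte[length];
    buf.getBytes(start, bytes);
    return bytes;
}
```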
Motivation:
This reverts commit 825916c7f0 as it turns out it introduced a big performance regression.
Modifications:
Revert 825916c7f0
Result:
Performance of TLS is back to normal
Motivation:
Be able to modify the Http2Settings of Http2ConnectionHandlerBuilder.
Modifications:
Make initialSettings() of Http2ConnectionHandlerBuilder public.
Result:
The public initialSettings() of Http2ConnectionHandlerBuilder allows Http2Settings modification.
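A hedged usage sketch of what this enables (the settings values are illustrative; the rest of the builder configuration, e.g. a frame listener, is elided):
```java
Http2ConnectionHandlerBuilder builder = new Http2ConnectionHandlerBuilder();
// initialSettings() is now public, so callers can tweak the advertised settings in place.
builder.initialSettings()
       .maxConcurrentStreams(256)
       .initialWindowSize(1024 * 1024);
```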
Motivation:
In some benchmarks, closing the Channel contributes a lot of overhead due to the call of fillInStackTrace(). We should reduce this overhead.
Modifications:
- Create a StacklessClosedChannelException and use it to reduce overhead.
- Only call ChannelOutboundBuffer.failFlushed(...) when there was a flushed message at all.
Result:
Less performance overhead when closing the Channel
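A minimal sketch of the stackless-exception idea, assuming the cost comes from Throwable.fillInStackTrace() (the actual class lives in Netty's internals and may differ):
```java
import java.nio.channels.ClosedChannelException;

final class StacklessClosedChannelException extends ClosedChannelException {
    @Override
    public Throwable fillInStackTrace() {
        // Skip the expensive stack capture; creating (or reusing) the exception becomes cheap.
        return this;
    }
}
```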
Motivation:
`IpSubnetFilter` uses binary search for the IP address lookup, which is fast if we have a large set of IP addresses to filter.
Modification:
Added `IpSubnetFilter` which takes `IpSubnetFilterRule` for filtering.
Result:
Faster IP address filter.
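A hedged usage sketch of the filter described above (the constructor and rule signatures are assumptions based on the description):
```java
IpSubnetFilterRule rejectPrivate = new IpSubnetFilterRule("10.0.0.0", 8, IpFilterRuleType.REJECT);
IpSubnetFilterRule acceptLocal = new IpSubnetFilterRule("127.0.0.1", 32, IpFilterRuleType.ACCEPT);
// The filter can sort the rules internally so lookups use binary search.
IpSubnetFilter filter = new IpSubnetFilter(rejectPrivate, acceptLocal);
```
The handler would then be added early in the pipeline so rejected addresses are closed before any further processing.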
Motivation:
`RuleBasedIpFilter` had JavaDoc `{@link #channelRejected(ChannelHandlerContext, SocketAddress)}` instead of `{@link AbstractRemoteAddressFilter#channelRejected(ChannelHandlerContext, SocketAddress)}`.
Modification:
Added `AbstractRemoteAddressFilter` reference.
Result:
Fixed the JavaDoc error and made the documentation clearer.
Motivation:
Right now after a SslMasterKeyHandler is added to the pipeline,
it also needs to be enabled via a system property explicitly. In
some environments where the handler is conditionally added to
the pipeline this is redundant and a bit confusing.
Modifications:
This changeset keeps the default behavior, but allows child
implementations to tweak how they detect whether they should
actually handle the event when it is raised.
Also removed an unused private static field from the Wireshark
handler.
Result:
Child implementations can be more flexible in deciding when
and how the handler should perform its work (without changing
any of the defaults).
Motivation:
The MQTT specification version 5 was released over a year ago, so
netty-codec-mqtt should be changed to support it.
Modifications:
Added more message and header types in `io.netty.handler.codec.mqtt`
package in the `netty-codec-mqtt` subproject,
changed `MqttEncoder` and `MqttDecoder` to handle them properly,
added the attribute `NETTY_CODEC_MQTT_VERSION` to track the protocol version.
Result:
`netty-codec-mqtt` supports both MQTT5 and MQTT3 now.
Motivation:
We're converting `Inet4Address` to `Integer` quite frequently, so it's a good idea to keep that code in `NetUtil`.
Modification:
Added ipv4AddressToInt(Inet4Address) in NetUtil.
Result:
Easy conversion of `Inet4Address` to `Integer`.
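A minimal sketch of the conversion idea (not necessarily the exact implementation added here): an IPv4 address is four octets packed big-endian into a single int.
```java
import java.net.Inet4Address;

static int ipv4AddressToInt(Inet4Address address) {
    byte[] octets = address.getAddress(); // always 4 bytes for an Inet4Address
    return (octets[0] & 0xFF) << 24
            | (octets[1] & 0xFF) << 16
            | (octets[2] & 0xFF) << 8
            | (octets[3] & 0xFF);
}
```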
Motivation:
Due to a change introduced in 68105b257d, we incorrectly skipped the usage of nameservers in some cases.
Modifications:
Only fetch a new stream of nameservers if the hostname does not match the original hostname in the query.
Result:
Use all configured nameservers. Fixes https://github.com/netty/netty/issues/10499
Motivation:
JDK15 is about to be released as GA, and we should ensure Netty works and builds on it. SSLSession#getPeerCertificateChain() throws UnsupportedOperationException in JDK15 and later, as it was deprecated before and people should use SSLSession#getPeerCertificates(). We need to account for that in our tests.
Modifications:
- Catch UnsupportedOperationException in our testsuite and ignore it when on JDK15+ while rethrowing it otherwise.
Result:
Testsuite passes on JDK15+
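A hedged sketch of the test-side handling described above (the exact testsuite code may differ; PlatformDependent.javaVersion() is Netty's existing helper for the version check):
```java
import javax.net.ssl.SSLSession;
import io.netty.util.internal.PlatformDependent;

static void verifyPeerCertificateChain(SSLSession session) throws Exception {
    try {
        session.getPeerCertificateChain();
    } catch (UnsupportedOperationException e) {
        // JDK 15+ always throws for this deprecated method; ignore it there, rethrow on older JDKs.
        if (PlatformDependent.javaVersion() < 15) {
            throw e;
        }
    }
}
```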
Motivation:
Avoid keeping unused dependencies around.
Modification:
Remove all references to javassist dependency, since it does not appear to be used by anything.
Result:
One less dependency to worry about.
Motivation:
I was working on the transport part of Netty (ofc, solving a major issue) and found this typo, so I thought I would fix it.
Modification:
Fixed Typo
Result:
No more confusion between `us` and `use`.
Motivation
This is used solely for the DataOutput#writeUTF() method, which may
often not be used.
Modifications
Lazily construct the contained DataOutputStream in ByteBufOutputStream.
Result
Saves an allocation in some common cases
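A hedged sketch of the lazy initialization as it might look inside ByteBufOutputStream (the field name utf8out is an assumption):
```java
private DataOutputStream utf8out; // no longer created eagerly in the constructor

@Override
public void writeUTF(String s) throws IOException {
    DataOutputStream out = utf8out;
    if (out == null) {
        // Pay for the wrapper allocation only if writeUTF() is actually used.
        utf8out = out = new DataOutputStream(this);
    }
    out.writeUTF(s);
}
```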
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private)
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
Result
- More efficient accessibility checks in more places
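A hedged sketch of how a holder class can use the new static helper instead of reading refCnt() directly (DefaultByteBufHolder's real code may differ):
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.util.IllegalReferenceCountException;

final class ContentHolder {
    private final ByteBuf content;

    ContentHolder(ByteBuf content) {
        this.content = content;
    }

    ByteBuf content() {
        // Before: "if (content.refCnt() <= 0) ..." forced a volatile read on every access.
        if (!ByteBufUtil.isAccessible(content)) {
            throw new IllegalReferenceCountException(content.refCnt());
        }
        return content;
    }
}
```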
Motivation:
It makes no sense to remove the task when the underlying executor fails, as we may be able to pick it up later. Besides this, the used Queue doesn't support remove(...) and so will throw.
Modifications:
Remove the queue.remove(...) call
Result:
Fixes https://github.com/netty/netty/issues/10501.
Motivation:
There should be validation of the input arguments when constructing an Http2FrameLogger.
Modification:
Check that the provided arguments are not null
Result:
Proper validation when constructing Http2FrameLogger
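A hedged sketch of the intended validation using Netty's ObjectUtil helper (the constructor shown is illustrative, not the exact Http2FrameLogger code):
```java
public Http2FrameLogger(LogLevel level, String name) {
    this.level = ObjectUtil.checkNotNull(level, "level");
    this.logger = InternalLoggerFactory.getInstance(ObjectUtil.checkNotNull(name, "name"));
}
```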
Motivation:
These are small fixes for PR #10453, following the conversation between @njhill and @normanmaurer.
Modification:
A simple refactoring that takes the remainder into account when calculating the size.
Result:
Behavior is now correct.
Expose a LoggingDnsQueryLifeCycleObserverFactory
Motivation:
There is a use case for having logging in the DnsNameResolver, similar to the LoggingHandler.
Previously, one could set `traceEnabled` on the DnsNameResolverBuilder, but this is not very configurable.
Specifically, the log level and the logger context cannot be changed.
Modification:
Expose a LoggingDnsQueryLifeCycleObserverFactory that permits changing the log level
and logger context.
Result:
It is now possible to get logging in the DnsNameResolver at a custom log level and logger,
without very much effort.
Fixes #10485
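A hedged usage sketch of the factory described above (its constructor arguments are assumptions based on the description; eventLoop stands for an existing EventLoop):
```java
DnsNameResolver resolver = new DnsNameResolverBuilder(eventLoop)
        .channelType(NioDatagramChannel.class)
        .dnsQueryLifecycleObserverFactory(
                new LoggingDnsQueryLifeCycleObserverFactory(LogLevel.DEBUG))
        .build();
```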
Motivation:
At some point a ChannelHandlerContext did have its own AttributeMap, which has not been the case since 4.1.x was released. Unfortunately we missed updating the javadocs, so they don't reflect reality.
Modifications:
Update javadocs
Result:
Fixes https://github.com/netty/netty/issues/10477
Motivation:
To reduce latency and RTTs we should use TLS False Start when jdkCompatibilityMode is not required and it is supported.
Modifications:
Use SSL_MODE_ENABLE_FALSE_START when jdkCompatibilityMode is false
Result:
Less RTTs and so lower latency when TLS False Start is supported
Motivation:
We need to limit the number of queries done to resolve an unresolved nameserver, as otherwise we may never fail the resolution when there is a misconfigured DNS server that leads to a "resolve loop".
Modifications:
- When resolving unresolved nameservers, ensure we use allowedQueries as the upper limit for queries so resolution will eventually fail
- Add unit tests
Result:
No more possibility of looping forever while resolving. This is related to https://github.com/netty/netty/issues/10420
Motivation:
AbstractDiskHttpData#getChunk opens and closes the fileChannel every time it is invoked;
as a result the uploaded file is corrupted. This is a regression caused by #10270.
Modifications:
- Close the fileChannel only if everything was read or an exception is thrown
- Add unit test
Result:
AbstractDiskHttpData#getChunk closes fileChannel only if everything was read or an exception is thrown
Motivation:
We should include TLSv1.3 ciphers, as these are the recommended ciphers these days for HTTP/2. That is especially true as Java supports TLSv1.3 out of the box these days.
Modifications:
- Add TLSv1.3 ciphers that are recommended by Mozilla for HTTP/2
- Add unit test
Result:
Include TLSv1.3 ciphers as well
Motivation:
DnsAddressResolverGroup can be constructed with a DnsNameResolverBuilder and so should respect its configured EventLoop.
Modifications:
- Correctly respect the configured EventLoop
- Ensure there are no thread-issues by calling copy()
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10460
Motivation:
When we were using the Netty HTTP protocol, an OOM occurred. This problem was fixed in 4.1.51.Final ([#10424](https://github.com/netty/netty/issues/10424)), but even when an OOM has occurred the service still receives new connection events, the OOM occurs again, and eventually the connections are not released.
The affected code is `byteBuf = allocHandle.allocate(allocator);`.
Modification:
When creating the buffer fails while trying to receive new data, determine whether the cause is an OOM and, if so, close the read so that the connection is released.
```java
if (close || cause instanceof OutOfMemoryError || cause instanceof IOException) {
    closeOnRead(pipeline);
}
```
Result:
Fixes [#10434](https://github.com/netty/netty/issues/10434).
Motivation:
Setting a dependency on the connection is normal and permitted; streams
actually default to depending on the connection. Using a PRIORITY frame
with a dependency on the connection could reset a previous PRIORITY,
change the relative weight, or make all streams dependent on one stream.
The previous code was disallowing these usages as it considered
depending on the connection to be a validation failure.
Modifications:
Loosen validation check to also allow depending on the connection. Fix
error message when the validation check fails.
Result:
Setting a dependency on the connection is now permitted. Fixes #10416
Motivation:
How we initialized our static fields in Conscrypt was kind of error-prone and might even lead to an NPE later on when methods were invoked out of order.
Modifications:
- Move all the init code to a static block
- Remove static field which is not needed anymore
Result:
Cleanup and also fixes https://github.com/netty/netty/issues/10413
Motivation:
The executor chooser should pluck executors in round-robin, but at the 32-bit
overflow boundary, the round-robin sequence was disrupted when the number of
executors are not a power of 2.
Modification:
Changed the index counter from a 32-bit to a 64-bit long. The overflow bug is
still technically there, but it now takes so long to reach that it will never
happen in practice. For example, 2^63 nanoseconds is almost 300 years.
Result:
The round-robin behaviour for all EventExecutorChoosers is now preserved in
practice.
This fixes #10423.
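A minimal sketch of the idea behind the fix, assuming a chooser of this general shape (the real implementation may differ in detail):
```java
import java.util.concurrent.atomic.AtomicLong;
import io.netty.util.concurrent.EventExecutor;

final class RoundRobinChooser {
    private final AtomicLong idx = new AtomicLong(); // 64-bit counter instead of 32-bit
    private final EventExecutor[] executors;

    RoundRobinChooser(EventExecutor[] executors) {
        this.executors = executors;
    }

    EventExecutor next() {
        // Math.abs guards the (practically unreachable) point where the long counter overflows.
        return executors[(int) Math.abs(idx.getAndIncrement() % executors.length)];
    }
}
```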
Motivation:
Recent Intellij versions are starting to anticipate
future versions of Java that include a
`java.lang.Record` class, and the Intellij compiler
gets confused by the `Record` class in our
`ResourceLeakDetector`.
Modification:
Rename our `Record` class to `TracerRecord`.
This matches what the class is doing, while avoiding
any future name clashes.
Result:
Intellij can now compile the project again, even when
configured to use a future (snapshot or early access)
version of Java.
Motivation:
The feature test macros did not work as expected on MacOS, so we ended up compiling for the GNU
variant which resulted in compilation errors.
Modification:
Add `__APPLE__` as another indicator to use the XSI variant of `strerror_r`.
Result:
The project now once again compiles on MacOS.
Motivation:
We previously relied on `strerror`, but this function is unfortunately not thread-safe.
Modification:
The use of `strerror` has been changed to `strerror_r`, which is thread-safe.
This function has a more complicated API, and has portability concerns that need to be handled.
This accounts for the relatively large increase in lines of code.
Result:
Error messages from JNI are now always generated in a thread-safe way.
Motivation:
Even if the system does not support IPv6, we should try to use it if the user explicitly passes an Inet6Address. This way we ensure we fail rather than trying to convert it to an IPv4 address internally.
This incorrect behavior was introduced by 70731bfa7e
Modifications:
If the user explicitly passed an Inet6Address, we force the usage of IPv6.
Result:
Fixes https://github.com/netty/netty/issues/10402
Motivation:
When TLSv1.3 was introduced almost 2 years ago, it was decided to disable it by default, even when it's supported by the underlying TLS implementation.
TLSv1.3 is pretty stable now in Java (out of the box in Java 11, OpenJSSE for Java 8, BoringSSL and OpenSSL) and may be enabled by default.
Modifications:
Ensure TLSv1.3 is enabled by default when the underlying JDK SSLEngine implementation enables it as well.
Result:
TLSv1.3 is now enabled by default, so users don't have to explicitly enable it.
Co-authored-by: Stephane Landelle <slandelle@gatling.io>
Motivation:
AlgorithmId.sha256WithRSAEncryption_oid was removed in JDK15 and later so we should not depend on it as otherwise we will see compilation errors
Modifications:
Replace AlgorithmId.sha256WithRSAEncryption_oid usage by specifying the OID directly.
Result:
Compiles on JDK15+
Motivation:
JCTools 3.1.0 is out and includes several fixes, see https://github.com/JCTools/JCTools/releases/tag/v3.1.0
Modification:
Upgrade jctools-core version in pom.xml
Result:
Netty ships latest version of jctools.
Motivation:
SSLEngineImpl.unwrap(...) may call FileInputStream.read(...) internally when TLS1.3 is used. This will cause a BlockingOperationError when BlockHound is enabled.
For more details see https://mail.openjdk.java.net/pipermail/security-dev/2020-August/022271.html
Modifications:
- Add whitelist entry to BlockHound config for now
- Update NettyBlockHoundIntegrationTest to include testing for this workaround
Result:
No BlockingOperationError when TLS1.3 is used with JDK SSL implementation and BlockHound is enabled
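A hedged sketch of what such a whitelist entry looks like via BlockHound's integration SPI (the exact class and method whitelisted in Netty's config are assumptions):
```java
import reactor.blockhound.BlockHound;
import reactor.blockhound.integration.BlockHoundIntegration;

public final class NettyBlockHoundIntegration implements BlockHoundIntegration {
    @Override
    public void applyTo(BlockHound.Builder builder) {
        // JDK TLSv1.3 may touch FileInputStream.read(...) inside unwrap(); allow it for now.
        builder.allowBlockingCallsInside("sun.security.ssl.SSLEngineImpl", "unwrap");
    }
}
```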
Motivation:
A regression appeared after the changes made in fix #10360.
The main problem here is that `buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize)`
doesn't change the channel position after writes.
Modification:
Manually set position according to the written bytes.
Result:
Fixes #10449.
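A hedged sketch of the fix's idea, wrapped as a helper for clarity (variable names follow the snippet quoted above):
```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import io.netty.buffer.ByteBuf;

static void writeChunkTo(ByteBuf buffer, FileChannel fileChannel, int localsize) throws IOException {
    long position = fileChannel.position();
    int written = buffer.getBytes(buffer.readerIndex(), fileChannel, position, localsize);
    // getBytes(...) writes at an absolute position without moving the channel's own
    // position, so advance it manually by the number of bytes actually written.
    if (written > 0) {
        fileChannel.position(position + written);
    }
}
```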
Motivation:
If a request to open a new h2 stream was made from outside of the
EventLoop it will be scheduled for future execution on the EventLoop.
However, during the time before the `open0` task will be executed the
parent channel may already be closed. As the result,
`Http2MultiplexHandler#newOutboundStream()` will throw an
`IllegalStateException` with the message that is hard to
interpret correctly for this use-case: "Http2FrameCodec not found. Has
the handler been added to a pipeline?".
Modifications:
- Check that the parent h2 `Channel` is still active before creating a
new stream when `open0` task is picked up by EventLoop;
Result:
Users see a correct `ClosedChannelException` in case the parent h2
`Channel` was closed concurrently with a request for a new stream.
Motivation:
Replace Class.getClassLoader with io.netty.util.internal.PlatformDependent.getClassLoader in OpenSsl so it also works when a SecurityManager is in place.
Modification:
Replace Class.getClassLoader with io.netty.util.internal.PlatformDependent.getClassLoader in OpenSsl.
Result:
No issues when a SecurityManager is in place
Motivation:
`Http2StreamChannelBootstrap#open0` invokes
`Http2MultiplexHandler#newOutboundStream()` which may throw an
`IllegalStateException`. In this case, it will never complete
the passed promise.
Modifications:
- `try-catch` all invocations of `newOutboundStream()` and fail
promise in case of any exception;
Result:
New H2 stream promise always completes.
Motivation:
GraalVM's native images built with native-image tool do not support Unsafe.staticFieldOffset() method (at least, currently). If an application using Netty (and causing initialization of io.netty.util.internal.PlatformDependent0 class) is built into a native image and run, this results in the following error thrown during initialization:
Exception in thread "main" com.oracle.svm.core.jdk.UnsupportedFeatureError: Unsupported method of Unsafe
at com.oracle.svm.core.util.VMError.unsupportedFeature(VMError.java:86)
at jdk.internal.misc.Unsafe.staticFieldOffset(Unsafe.java:230)
at sun.misc.Unsafe.staticFieldOffset(Unsafe.java:662)
at io.netty.util.internal.PlatformDependent0$5.run(PlatformDependent0.java:294)
at java.security.AccessController.doPrivileged(AccessController.java:83)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:279)
This seems to be the reason for the behavior described in #10051.
Modification:
The idea of this commit is to only invoke Unsafe.staticFieldOffset() if we are not in a native image; if we are, behave as if we could not find the field at all.
GraalDetector is borrowed from the Spring Framework.
Result:
Fixes #10051
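A hedged sketch of the native-image detection idea described above; the system-property check mirrors what Spring's detector uses, though whether the borrowed GraalDetector is identical is an assumption:
```java
final class GraalDetector {
    private GraalDetector() { }

    static boolean inImageCode() {
        // GraalVM sets this property both while building and while running a native image.
        return System.getProperty("org.graalvm.nativeimage.imagecode") != null;
    }
}
```
Unsafe.staticFieldOffset() is then only invoked when inImageCode() returns false.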
Motivation:
If we cancel the future returned by a resolve query, we may get a LEAK. Try to release the ByteBuf if Netty can't pass the DnsRawRecord to the caller.
Modification:
Using debug mode, I saw there are two places that don't release the record when trySuccess(...) fails. Release it there.
Result:
Fixes #10447.
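A hedged sketch of the release-on-failed-trySuccess pattern applied by the fix (the helper shape is illustrative):
```java
import io.netty.util.ReferenceCountUtil;
import io.netty.util.concurrent.Promise;

static <T> void trySuccessOrRelease(Promise<T> promise, T result) {
    if (!promise.trySuccess(result)) {
        // The promise was cancelled (or already completed), so the caller will never see
        // the result; release it here to avoid leaking the backing ByteBuf.
        ReferenceCountUtil.release(result);
    }
}
```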
Motivation:
`DefaultHttp2FrameStream.stream` is not used outside of its class and
therefore can be private.
Modifications:
- Make `DefaultHttp2FrameStream.stream` private;
Result:
Correct visibility scoping for `DefaultHttp2FrameStream.stream`.
Motivation:
This request only has a headers frame, so it should set the endOfStream flag, or
it will never get a response.
Modifications:
Set endOfStream=true in the headers frame.
Result:
Http2FrameClient can get a response now.
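A hedged usage sketch (streamChannel stands for an existing H2 stream channel; the header values are illustrative):
```java
Http2Headers headers = new DefaultHttp2Headers()
        .method("GET")
        .path("/")
        .scheme("https");
// endStream = true: this headers-only request carries no body, so close our side of the stream.
streamChannel.writeAndFlush(new DefaultHttp2HeadersFrame(headers, true));
```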
Motivation:
IntelliJ IDEA may generate a local folder `.shelf/` for version control.
For more information, see
https://www.jetbrains.com/help/idea/shelving-and-unshelving-changes.html
Modifications:
- Add `.shelf/` folder to the `.gitignore` file;
Result:
IntelliJ IDEA's `.shelf/` folder is ignored by git.
Motivation:
From a recent benchmark using gRPC-Java based on Netty's HTTP/2, it appears that it prefers `ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` over `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, since it uses the Netty HTTPS cipher list as is. Both are considered safe, but `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` has a good chance of getting a more optimized implementation (e.g. AES-NI). When running both on a GCP Intel Haswell VM, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` spent 3x the CPU time of `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`. (Note that this VM supports AES-NI.)
From the cipher suites listed on `Intermediate compatibility (recommended)` of [Security/Server Side TLS](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility), they have a cipher preference which is aligned with this PR.
```
0x13,0x01 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
0x13,0x02 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
0xC0,0x2B - ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD
0xC0,0x2F - ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD
0xC0,0x2C - ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
0xC0,0x30 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
0xCC,0xA9 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
0xCC,0xA8 - ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
0x00,0x9E - DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD
0x00,0x9F - DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD
```
Modification:
Move `AES_128_GCM_SHA256` up in the HTTPS CIPHERS list so that it gets priority.
Result:
When connecting to a server that supports both `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` and `ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` and respects the client's priority of cipher suites, this will save significant CPU time when running on machines with AES-NI support.