Motivation:
We should stop as soon as we were able to set the key material on the server side, as otherwise we may select key material that "belongs" to a less preferred cipher. Besides this, it is also just useless work.
We also need to propagate the exception when it happens during key material selection on the client side so OpenSSL will produce the right alert.
Modifications:
- Stop once we were able to select a key material on the server side
- Ensure we do not call choose*Alias more often than needed
- Propagate exceptions during selection of the key material on the client side.
Result:
Less overhead and more correct behaviour
Motivation:
We need to let OpenSSL know that we failed to find the key material so it will produce an alert for the remote peer to consume. Besides this, we also need to ensure we wrap(...) until we have produced everything, as otherwise the remote peer may see partial data when an alert is produced in multiple steps.
Modifications:
- Correctly throw if we could not find the key material
- Wrap until we have produced everything
- Add test
Result:
Correctly handle the case when key material could not be found
Motivation:
Calling chooseServerAlias(...) may be expensive, so we should ensure we do not call it multiple times for the same auth methods.
Modifications:
Remove duplicates from authMethods before trying to call chooseServerAlias(...), as sketched below.
Result:
Less performance overhead during key material selection
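A minimal sketch of the de-duplication idea referenced above (hypothetical helper, not the actual Netty key material selection code): keep only the first occurrence of each auth method before the expensive chooseServerAlias(...) calls.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical helper illustrating the de-duplication described above.
final class AuthMethodDedup {
    static String[] dedup(String... authMethods) {
        Set<String> unique = new LinkedHashSet<String>();
        for (String method : authMethods) {
            unique.add(method); // keeps the first occurrence, preserves order
        }
        return unique.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // chooseServerAlias(...) would now be called once per unique method.
        for (String method : dedup("RSA", "EC", "RSA")) {
            System.out.println(method);
        }
    }
}
```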
Motivation:
Follow the Javadoc standard.
Modification:
Change from `@param KeyManager` to `@param keyManager`
Result:
The `@param` matches the actual parameter variable name
Motivation:
We should use an initial buffer size which is >= 1500 (a common MTU setting) to reduce the need for memory copies when a new connection is established. This is especially interesting when SSL / TLS comes into the mix.
This was ported from SwiftNIO:
https://github.com/apple/swift-nio/pull/1641
Modifications:
Increase the initial size from 1024 to 2048.
Result:
Possibly fewer memory copies on new connections
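A hedged sketch, assuming the buffer in question is the adaptive receive-buffer allocator's initial guess: the same effect can be configured explicitly per bootstrap (the 64/65536 bounds here are illustrative).

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;

// Hedged sketch: explicitly configure an initial receive-buffer guess of 2048 bytes
// (>= a typical 1500-byte MTU frame).
final class RecvBufferConfigSketch {
    static void configure(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                new AdaptiveRecvByteBufAllocator(64, 2048, 65536));
    }
}
```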
Motivation:
The reason code for MQTT SUBACK messages is truncated, which prevents getting the new codes supported by MQTT v5.
Modification:
Changed `MqttCodecTest.testUnsubAckMessageForMqtt5` to catch this case, and removed the truncation in `MqttDecoder.decodeSubackPayload` to make it pass.
Result:
Codec users can get all reason codes supported by MQTT5 now.
Motivation:
Creating exceptions is expensive so we should only do so if really needed.
Modifications:
Only create the ConnectTimeoutException if we really need it.
Result:
Less overhead
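A simplified sketch of the lazy-creation pattern described above (helper name is illustrative, not Netty's actual connect-timeout code): the ConnectTimeoutException, and therefore its fillInStackTrace() cost, is only paid on the timeout path.

```java
import java.net.SocketAddress;

import io.netty.channel.ChannelPromise;
import io.netty.channel.ConnectTimeoutException;

// Illustrative sketch: only build the exception when the promise really has to fail.
final class ConnectTimeoutSketch {
    static void onConnectTimeout(ChannelPromise connectPromise, SocketAddress remote) {
        if (connectPromise != null && !connectPromise.isDone()) {
            // The exception (and its stack trace) is created only when needed.
            connectPromise.tryFailure(
                    new ConnectTimeoutException("connection timed out: " + remote));
        }
    }
}
```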
Motivation:
Consider a scenario where the client initiates a WebSocket handshake but, before the handshake is complete,
the channel is closed for some reason. In such a scenario, the handshake timeout scheduled on the executor
is not cleared. The reason it is not cleared is that in such cases the handshakePromise is not completed.
Modifications:
This change completes the handshakePromise exceptionally in the channelInactive callback, if it has not been
completed so far. This triggers the callback on completion of the promise which clears the timeout scheduled
on the executor.
This PR also adds a test case which reproduces the scenario described above. The test case fails before the
fix is added and succeeds when the fix is applied.
Result:
After this change, the timeout scheduled on the executor will be cleared, thus freeing up thread resources.
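A minimal sketch of the fix described above (not the actual Netty handler): failing the handshake promise in channelInactive triggers the promise listener that cancels the scheduled timeout.

```java
import java.nio.channels.ClosedChannelException;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

// Illustrative sketch, not the real WebSocket client protocol handler.
class HandshakeCleanupSketch extends ChannelInboundHandlerAdapter {
    private final ChannelPromise handshakePromise;

    HandshakeCleanupSketch(ChannelPromise handshakePromise) {
        this.handshakePromise = handshakePromise;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (!handshakePromise.isDone()) {
            // Completing the promise exceptionally fires its listeners, which
            // cancel the handshake timeout scheduled on the executor.
            handshakePromise.tryFailure(new ClosedChannelException());
        }
        ctx.fireChannelInactive();
    }
}
```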
Motivation:
There was no way to read MQTT properties outside of the `io.netty.handler.codec.mqtt` package
Modification:
Expose `MqttProperties.MqttProperty` fields `value` and `propertyId` via public methods
Result:
It's possible to read MQTT properties now
Motivation:
If deleteOnExit is enabled and the program runs for a long time, the set of files tracked by DeleteOnExitHook keeps growing.
This results in a memory leak.
Modification:
Added a custom DeleteOnExitHook. If deleteOnExit is turned on, this hook is registered when a temporary file is created. After the request ends and the corresponding resources are released, the file is removed from the hook again, so that the tracked set does not grow indefinitely.
Result:
Fixes https://github.com/netty/netty/issues/10351
Bumps ant from 1.9.7 to 1.9.15.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Recent changes for MQTT5 may cause an NPE if an UNSUBACK message is created with `MqttMessageFactory`
and the client code isn't up-to-date.
Modifications:
Added a default body in the `MqttUnsubAckMessage` constructor if a null body is passed,
and added null checks in `encodeUnsubAckMessage`
Result:
`MqttUnsubAckMessage` created with `MqttMessageFactory` no longer causes an NPE,
even if a null body is supplied.
Motivation:
Being able to write TCP and UDP packets into a Pcap `OutputStream` helps a lot in debugging.
Modification:
Added TCP and UDP Pcap writer.
Result:
New handler can write packets into an `OutputStream`, e.g. a file that can be opened with Wireshark.
Fixes #10385.
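A hedged usage sketch (handler name follows the description above; the exact constructor may differ between versions): adding the pcap handler first in the pipeline captures everything the channel reads and writes into a file Wireshark can open.

```java
import java.io.FileOutputStream;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.pcap.PcapWriteHandler;

// Hedged sketch: capture the channel's traffic into capture.pcap.
class PcapInitializerSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new PcapWriteHandler(new FileOutputStream("capture.pcap")));
        // ... add the actual protocol handlers after the pcap handler ...
    }
}
```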
Motivation:
Impossible to specify retained messages handling policy because the corresponding enum
isn't public (https://github.com/netty/netty/issues/10562)
Modification:
Made `RetainedHandlingPolicy` public
Result:
Fixes #10562.
Motivation:
The builds often fail when downloading dependencies.
This might be caused by the build taking a long time, causing pooled connections to be closed
by the remote end if they are idle for too long.
Modification:
Disable connection pooling. This should force Maven to reestablish the connection for each download,
thus reducing the likelihood of the remote end closing connections we wish to use.
Result:
I'll leave it up to the statistics of our CI to confirm, but we should see more stable builds.
Motivation: Code failed to compile because ByteBuf index marking has been removed.
Modification: Index marking wasn't really used anyway, so just set the relevant index to zero.
Result: Code compiles again.
Motivation:
writeUtf8 can suffer from inlining issues and/or megamorphic call-sites on the hot path due to the ByteBuf hierarchy
Modifications:
Duplicate and specialize the code paths to reduce the need for polymorphic calls
Result:
Performance is more stable in user code
Motivation:
When we try to parse the kernel version we need to be careful what to
expect. Especially when a custom kernel is used we may get extra chars
in the version numbers. For example I had this one fail because of my
custom kernel that I built for io_uring:
5.8.7ioring-fixes+
Modifications:
- Try to be a bit more lenient when parsing
- If we can't parse the kernel version, just use 0.0.0
Result:
Tests are more robust
Motivation:
4b7dba1 introduced a change which was not 100 % complete and so
introduced a regression when a user specified to use
InetProtocolFamily.IPv4 and tried to bind to a port (without specifying
the IP).
Modifications:
- Fix the regression by respecting the InetProtocolFamily
- Add unit test
Result:
Fix the regression when explicitly binding to a port without specifying the IP
Reduce garbage on MQTT encoding
Motivation:
MQTT encoding and decoding is doing unnecessary object allocation in a number of places:
- MqttEncoder creates many byte[] to encode Strings into UTF-8 bytes
- MqttProperties uses Integer keys instead of int
- Some enums' valueOf calls create unnecessary arrays on the hot paths
- MqttDecoder was using an unnecessary Result<T>
Modification:
- ByteBufUtil::utf8Bytes and ByteBufUtil::reserveAndWriteUtf8 allow performing the same operation GC-free (see the sketch below)
- MqttProperties uses a primitive key map
- Implemented GC-free constant-table lookup/switch valueOf
- Use some bit tricks to pack 2 ints into a single primitive long to store both the result and numberOfBytesConsumed, and use byte[].length to compute numberOfBytesConsumed on the fly. These changes avoid creating Result<T>.
Result:
Significantly less garbage produced in MQTT encoding/decoding
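A sketch of the two techniques named above (illustrative helpers, not the actual MqttEncoder/MqttDecoder code): ByteBufUtil.utf8Bytes(...) and ByteBufUtil.reserveAndWriteUtf8(...) write a String's UTF-8 bytes without an intermediate byte[], and two ints can be packed into one long to avoid a Result<T> wrapper.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;

final class MqttCodecSketch {
    // GC-free encoding of an MQTT length-prefixed string.
    static void writeMqttString(ByteBuf buf, String s) {
        int utf8Len = ByteBufUtil.utf8Bytes(s);      // size without allocating a byte[]
        buf.writeShort(utf8Len);                     // 2-byte length prefix
        ByteBufUtil.reserveAndWriteUtf8(buf, s, utf8Len);
    }

    // Pack "value" and "numberOfBytesConsumed" into one long instead of a Result<T>.
    static long packResult(int value, int numberOfBytesConsumed) {
        return ((long) value << 32) | (numberOfBytesConsumed & 0xFFFFFFFFL);
    }

    static int unpackValue(long packed) {
        return (int) (packed >>> 32);
    }

    static int unpackBytesConsumed(long packed) {
        return (int) packed;
    }

    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer();
        writeMqttString(buf, "a/b/c");
        System.out.println(buf.readableBytes()); // 7 = 2 (prefix) + 5 (payload)
        buf.release();

        long packed = packResult(42, 7);
        System.out.println(unpackValue(packed) + " " + unpackBytesConsumed(packed));
    }
}
```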
Motivation:
If ByteBufUtil.getBytes() is called with copy=false, it does not
correctly check that the underlying array can be shared in some cases.
In particular:
* It does not check that the arrayOffset() is zero. This causes it to
incorrectly return the underlying array if the other conditions are
met. The returned array will be longer than requested, with additional
unwanted bytes at its start.
* It assumes that the capacity() of the ByteBuf is equal to the backing
array length. This is not true for some types of ByteBuf, such as
PooledHeapByteBuf. This causes it to incorrectly return the underlying
array if the other conditions are met. The returned array will be
longer than requested, with additional unwanted bytes at its end.
Modifications:
This commit fixes the two bugs by:
* Checking that the arrayOffset() is zero before returning the
underlying array.
* Comparing the requested length to the underlying array's length,
rather than the ByteBuf's capacity, before returning the underlying
array.
This commit also adds a series of test cases for ByteBufUtil.getBytes().
Result:
ByteBufUtil.getBytes() now correctly checks whether the underlying array
can be shared or not.
The test cases will ensure the bug is not reintroduced in the future.
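A simplified sketch of the corrected sharing check described above (not the exact ByteBufUtil code): the backing array may only be returned un-copied when it lines up exactly with the requested region.

```java
import io.netty.buffer.ByteBuf;

// Simplified sketch of the sharing rules described above.
final class GetBytesSketch {
    static byte[] getBytes(ByteBuf buf, int start, int length, boolean copy) {
        if (buf.hasArray()) {
            byte[] array = buf.array();
            if (!copy
                    && start == 0
                    && buf.arrayOffset() == 0    // no hidden leading bytes
                    && length == array.length) { // no hidden trailing bytes
                return array; // the array covers exactly the requested region
            }
        }
        byte[] bytes = new byte[length];
        buf.getBytes(start, bytes);
        return bytes;
    }
}
```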
Motivation:
This reverts commit 7bf1ffb2d4 as it turns out it introduced a big performance regression.
Modifications:
Revert 7bf1ffb2d4
Result:
Performance of TLS is back to normal
Motivation:
Be able to modify the Http2Settings of Http2ConnectionHandlerBuilder.
Modifications:
Make initialSettings() of Http2ConnectionHandlerBuilder public.
Result:
The public initialSettings() of Http2ConnectionHandlerBuilder allows Http2Settings modification.
Motivation:
In some benchmarks, closing the Channel contributes a lot of overhead due to the call to fillInStackTrace(). We should reduce this overhead.
Modifications:
- Create a StacklessClosedChannelException and use it to reduce overhead.
- Only call ChannelOutboundBuffer.failFlushed(...) when there was a flushed message at all.
Result:
Less performance overhead when closing the Channel
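A sketch of the "stackless" exception idea, assuming the usual pattern of overriding fillInStackTrace(); the actual class body in Netty may differ.

```java
import java.nio.channels.ClosedChannelException;

// Sketch: an exception that skips the expensive stack-trace capture because it is
// only used as a signal when closing the Channel.
final class StacklessClosedChannelExceptionSketch extends ClosedChannelException {
    @Override
    public Throwable fillInStackTrace() {
        // No stack walk: this is where most of the cost of creating the exception lies.
        return this;
    }
}
```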
Motivation:
`IpSubnetFilter` uses binary search for IP address lookup, which is fast if we have a large set of IP addresses to filter.
Modification:
Added `IpSubnetFilter` which takes `IpSubnetFilterRule` for filtering.
Result:
Faster IP address filter.
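A hedged usage sketch (constructor overloads may differ by version): build the filter from CIDR rules and add it at the front of the pipeline.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.ipfilter.IpFilterRuleType;
import io.netty.handler.ipfilter.IpSubnetFilter;
import io.netty.handler.ipfilter.IpSubnetFilterRule;

// Hedged sketch: reject one subnet, accept another; lookups use binary search.
final class IpFilterSetupSketch {
    static void install(ChannelPipeline pipeline) {
        IpSubnetFilter filter = new IpSubnetFilter(
                new IpSubnetFilterRule("10.0.0.0", 8, IpFilterRuleType.REJECT),
                new IpSubnetFilterRule("192.168.0.0", 16, IpFilterRuleType.ACCEPT));
        pipeline.addFirst(filter);
    }
}
```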
Motivation:
`RuleBasedIpFilter` had JavaDoc `{@link #channelRejected(ChannelHandlerContext, SocketAddress)}` instead of `{@link AbstractRemoteAddressFilter#channelRejected(ChannelHandlerContext, SocketAddress)}`.
Modification:
Added `AbstractRemoteAddressFilter` reference.
Result:
Fixed JavaDoc error and made documentation more clear.
Motivation:
Fix a TODO that was due since the "master" branch is baselined on at least Java 8.
Modification:
Remove our own copy of the Consumer interface and fix usage sites to use j.u.Consumer.
Also some cleanup.
Result:
Cleaner code.
Motivation:
Right now after a SslMasterKeyHandler is added to the pipeline,
it also needs to be enabled via a system property explicitly. In
some environments where the handler is conditionally added to
the pipeline this is redundant and a bit confusing.
Modifications:
This changeset keeps the default behavior, but allows child
implementations to tweak the way on how it detects that it
should actually handle the event when it is being raised.
Also, removed a private static that is not used in the wireshark
handler.
Result:
Child implementations can be more flexible in deciding when
and how the handler should perform its work (without changing
any of the defaults).
Motivation:
The MQTT specification version 5 was released over a year ago;
netty-codec-mqtt should be changed to support it.
Modifications:
Added more message and header types in the `io.netty.handler.codec.mqtt`
package in the `netty-codec-mqtt` subproject,
changed `MqttEncoder` and `MqttDecoder` to handle them properly,
added attribute `NETTY_CODEC_MQTT_VERSION` to track protocol version
Result:
`netty-codec-mqtt` supports both MQTT5 and MQTT3 now.
Motivation:
We're converting `Inet4Address` to `Integer` quite frequently so it's a good idea to keep that code in `NetUtils`.
Modification:
Added ipv4AddressToInt(Inet4Address) in NetUtils
Result:
Easy conversion of `Inet4Address` to `Integer`.
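A standalone sketch of the conversion itself (the actual helper is the one named above; this version just shows the big-endian packing of the four octets into an int).

```java
import java.net.Inet4Address;
import java.net.InetAddress;

final class Ipv4ToIntSketch {
    static int ipv4AddressToInt(Inet4Address address) {
        byte[] octets = address.getAddress(); // always 4 bytes for an Inet4Address
        return (octets[0] & 0xFF) << 24
                | (octets[1] & 0xFF) << 16
                | (octets[2] & 0xFF) << 8
                | (octets[3] & 0xFF);
    }

    public static void main(String[] args) throws Exception {
        Inet4Address addr = (Inet4Address) InetAddress.getByName("10.0.0.1");
        System.out.printf("0x%08X%n", ipv4AddressToInt(addr)); // 0x0A000001
    }
}
```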
Motivation:
Due to a change introduced in 68105b257d, we incorrectly skipped the usage of nameservers in some cases.
Modifications:
Only fetch a new stream of nameservers if the hostname does not match the original hostname in the query.
Result:
Use all configured nameservers. Fixes https://github.com/netty/netty/issues/10499
Motivation:
JDK 15 is about to be released as GA, so we should ensure Netty works and builds on it. SSLSession#getPeerCertificateChain() throws UnsupportedOperationException in JDK 15 and later, as it was deprecated before and people should use SSLSession#getPeerCertificates() instead. We need to account for that in our tests.
Modifications:
- Catch UnsupportedOperationException in our testsuite and ignore it when on JDK15+ while rethrowing it otherwise.
Result:
Testsuite passes on JDK15+
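A sketch of the test-side handling described above (the helper and its javaMajorVersion parameter are illustrative, not the actual testsuite code): tolerate the UnsupportedOperationException on JDK 15+, rethrow otherwise.

```java
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
import javax.security.cert.X509Certificate;

final class PeerCertChainSketch {
    static X509Certificate[] peerCertificateChainOrNull(SSLSession session, int javaMajorVersion)
            throws SSLPeerUnverifiedException {
        try {
            return session.getPeerCertificateChain();
        } catch (UnsupportedOperationException e) {
            if (javaMajorVersion >= 15) {
                // Expected on JDK 15+; callers should use getPeerCertificates() instead.
                return null;
            }
            throw e;
        }
    }
}
```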
Motivation:
Avoid keeping unused dependencies around.
Modification:
Remove all references to javassist dependency, since it does not appear to be used by anything.
Result:
One less dependency to worry about.
Motivation:
I was working on the transport part in Netty (ofc, solving a major issue) and I found this typo, so I thought I'd fix it.
Modification:
Fixed Typo
Result:
No more confusion between `us` and `use`.
Motivation
This is used solely for the DataOutput#writeUTF() method, which may
often not be used.
Modifications
Lazily construct the contained DataOutputStream in ByteBufOutputStream.
Result
Saves an allocation in some common cases
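A simplified sketch of the lazy-initialization described above (not the real ByteBufOutputStream): the DataOutputStream used only by writeUTF(String) is created on first use instead of in the constructor.

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import io.netty.buffer.ByteBuf;

class LazyByteBufOutputStreamSketch extends OutputStream {
    private final ByteBuf buffer;
    private DataOutputStream utf8out; // lazily created

    LazyByteBufOutputStreamSketch(ByteBuf buffer) {
        this.buffer = buffer;
    }

    @Override
    public void write(int b) {
        buffer.writeByte(b);
    }

    public void writeUTF(String s) throws IOException {
        DataOutputStream out = utf8out;
        if (out == null) {
            // Saves the allocation for users who never call writeUTF().
            utf8out = out = new DataOutputStream(this);
        }
        out.writeUTF(s);
    }
}
```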
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private)
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
Result
- More efficient accessibility checks in more places
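A small sketch of the intended call pattern in holder-style classes (the holder here is illustrative, not DefaultByteBufHolder): go through the new ByteBufUtil statics instead of reading refCnt() directly.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;

final class ContentHolderSketch {
    private final ByteBuf content;

    ContentHolderSketch(ByteBuf content) {
        this.content = content;
    }

    ByteBuf content() {
        // Previously something like: if (content.refCnt() <= 0) { throw ... }
        ByteBufUtil.ensureAccessible(content);
        return content;
    }

    boolean isAccessible() {
        return ByteBufUtil.isAccessible(content);
    }
}
```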
Motivation:
It makes no sense to remove the task when the underlying executor fails, as we may be able to pick it up later. Besides this, the used Queue doesn't support remove(...) and so will throw.
Modifications:
Remove the queue.remove(...) call
Result:
Fixes https://github.com/netty/netty/issues/10501.
Motivation:
There should be validation of the input arguments when constructing Http2FrameLogger.
Modification:
Check that the provided arguments are not null
Result:
Proper validation when constructing Http2FrameLogger
Motivation:
This is a small fix for PR #10453, following the conversation between @njhill and @normanmaurer.
Modification:
Simple refactoring, and take the remainder into account when calculating the size.
Result:
Behavior is correct
Expose a LoggingDnsQueryLifeCycleObserverFactory
Motivation:
There is a use case for having logging in the DnsNameResolver, similar to the LoggingHandler.
Previously, one could set `traceEnabled` on the DnsNameResolverBuilder, but this is not very configurable.
Specifically, the log level and the logger context cannot be changed.
Modification:
Expose a LoggingDnsQueryLifeCycleObserverFactory that permits changing the log level
and logger context.
Result:
It is now possible to get logging in the DnsNameResolver at a custom log level and logger,
without very much effort.
Fixes #10485
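A hedged usage sketch (the LogLevel constructor argument is an assumption based on the description above): plug the factory into the resolver builder in place of the old traceEnabled flag.

```java
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.resolver.dns.LoggingDnsQueryLifeCycleObserverFactory;

// Hedged sketch: DNS query lifecycle events are logged at DEBUG level.
final class DnsLoggingSketch {
    static DnsNameResolver build(NioEventLoopGroup group) {
        return new DnsNameResolverBuilder(group.next())
                .channelType(NioDatagramChannel.class)
                .dnsQueryLifecycleObserverFactory(
                        new LoggingDnsQueryLifeCycleObserverFactory(LogLevel.DEBUG))
                .build();
    }
}
```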
Motivation:
To reduce latency and RTTs, we should use TLS False Start when jdkCompatibilityMode is not required and it is supported.
Modifications:
Use SSL_MODE_ENABLE_FALSE_START when jdkCompatibilityMode is false
Result:
Fewer RTTs and so lower latency when TLS False Start is supported
Motivation:
We need to limit the number of queries done to resolve unresolved nameservers, as otherwise we may never fail the resolution when a misconfigured DNS server leads to a "resolve loop".
Modifications:
- When resolving unresolved nameservers, ensure we use allowedQueries as the upper limit for queries so resolution will eventually fail
- Add unit tests
Result:
No more possibility of failing in a loop while resolving. This is related to https://github.com/netty/netty/issues/10420
Motivation:
AbstractDiskHttpData#getChunk opens and closes the fileChannel every time it is invoked;
as a result the uploaded file is corrupted. This is a regression caused by #10270.
Modifications:
- Close the fileChannel only if everything was read or an exception is thrown
- Add unit test
Result:
AbstractDiskHttpData#getChunk closes fileChannel only if everything was read or an exception is thrown
Motivation:
We should include TLSv1.3 ciphers, as these are among the recommended ciphers for HTTP/2 these days. That is especially true as Java now supports TLSv1.3 out of the box.
Modifications:
- Add the TLSv1.3 ciphers that are recommended by Mozilla for HTTP/2
- Add unit test
Result:
Include TLSv1.3 ciphers as well
Motivation:
DnsAddressResolverGroup can be constructed with a DnsNameResolverBuilder and so should respect its configured EventLoop.
Modifications:
- Correctly respect the configured EventLoop
- Ensure there are no thread-issues by calling copy()
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10460
Motivation:
We don't have example code for SMTP.
Modifications:
Add SmtpClient and SmtpClientHandler.
Result:
Provide an example for developers who want to write an SMTP client.
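A minimal sketch of the pipeline such an example would set up (not the example's code itself), using the existing codec-smtp encoder/decoder.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.smtp.SmtpRequestEncoder;
import io.netty.handler.codec.smtp.SmtpResponseDecoder;

// Minimal sketch of an SMTP client pipeline; the application handler that issues
// the SMTP requests (HELO, MAIL, RCPT, DATA, QUIT) is left out.
class SmtpClientInitializerSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new SmtpResponseDecoder(512),   // max response line length
                new SmtpRequestEncoder());
        // ... addLast(new SmtpClientHandler()) in the actual example ...
    }
}
```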
Motivation:
Noticed we had some unused non-public classes.
There is no reason to keep these around.
Modification:
Remove unused non-public classes.
Result:
Less code to worry about.