Motivation:
There were new releases of various Java versions.
Modifications:
Adjust the Java versions used to the latest releases and use these on our CI.
Result:
Use the latest Java versions on our CI.
Motivation:
While OpenSslPrivateKeyMethod.* should never return null, we should still guard against it to prevent any possible segfault.
Modifications:
- Throw SignatureException if null is returned
- Add unit test
Result:
No segfault when user returns null.
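For illustration, a minimal sketch of the guard described above; the class and method names are made up, the real check lives inside the OpenSSL engine code:

```java
import java.security.SignatureException;

// Illustrative guard only: if the user-provided callback returns null we fail
// with a SignatureException instead of handing a null result to the native
// layer, which could otherwise segfault.
final class SignResultGuard {
    static byte[] requireSigned(byte[] result) throws SignatureException {
        if (result == null) {
            throw new SignatureException("sign(...) returned null");
        }
        return result;
    }
}
```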
Motivation:
Http2ConnectionHandler#close(..) always runs the GOAWAY and graceful close
logic. This coupling means that a user would have to override
Http2ConnectionHandler#close(..) to modify the behavior, and the
Http2FrameCodec and Http2MultiplexCodec are not extendable so you cannot
override at this layer. Ideally we could completely decouple the close(..) of the
transport from the GOAWAY graceful closure process, but to preserve backwards
compatibility we can add an opt-out option where the application is responsible
for sending a GOAWAY with error code NO_ERROR, as described in
https://tools.ietf.org/html/rfc7540#section-6.8, in order to initiate graceful
close.
Modifications:
- Http2ConnectionHandler supports an additional boolean constructor argument to
opt out of close(..) going through the graceful close path.
- Http2FrameCodecBuilder and Http2MultiplexCodec expose
gracefulShutdownTimeoutMillis but do not hook them up properly. Since these
are already exposed we should hook them up and make sure the timeout is applied
properly.
- Http2ConnectionHandler's goAway(..) method from Http2LifecycleManager should
initiate the graceful closure process after writing a GOAWAY frame if the error
code is NO_ERROR. This means that writing a Http2GoAwayFrame from
Http2FrameCodec will initiate graceful close.
Result:
Http2ConnectionHandler#close(..) can now be decoupled from the graceful close
process, and immediately close the underlying transport if desired.
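A rough sketch of how this could look from the builder side; gracefulShutdownTimeoutMillis(..) is the setter mentioned above, while the name of the opt-out setter (decoupleCloseAndGoAway) is an assumption:

```java
import io.netty.handler.codec.http2.Http2FrameCodec;
import io.netty.handler.codec.http2.Http2FrameCodecBuilder;

final class Http2CodecSetup {
    static Http2FrameCodec newServerCodec() {
        return Http2FrameCodecBuilder.forServer()
                .gracefulShutdownTimeoutMillis(30_000)  // now actually applied
                .decoupleCloseAndGoAway(true)           // assumed opt-out setter name
                .build();
    }
}
```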
Motivation
The optimization in #8988 didn't correctly handle the specific case
where the channel hasDisconnect == false, and a
ChannelOutboundHandlerAdapter subclass overrides only the close(ctx,
promise) method without also overriding the disconnect(ctx, promise)
method.
Modifications
Adjust AbstractChannelHandlerContext.disconnect(...) method to divert to
close(...) in !hasDisconnect case before computing target context for
the event.
Result
Fixes #9092
Motivation:
DefaultHeaders entries maintain two linked lists: one for overall insertion order
and one for "in bucket" order. DefaultHeaders#valueIterator removal (introduced in 1d9090aab2) only reliably
removes the entry from the overall insertion order, but may not remove it from the
bucket unless the element is the first entry.
Modifications:
- DefaultHeaders$ValueIterator should track 2 elements behind the next entry so
that the single linked "in bucket" list can be patched up when removing the
previous entry.
Result:
More correct DefaultHeaders#valueIterator removal.
Motivation:
Http2FrameCodec currently fails the write promise associated with creating a
stream with an Http2NoMoreStreamIdsException. However this means the user code
will have to listen to all write futures in order to catch this scenario which
is the same as receiving a GOAWAY frame. We can also simulate receiving a GOAWAY
frame from our remote peer and that allows users to consolidate graceful close
logic in the GOAWAY processing.
Modifications:
- Http2FrameCodec should simulate a DefaultHttp2GoAwayFrame when trying to
create a stream but the stream IDs have been exhausted.
Result:
Applications can rely upon GOAWAY for graceful close processing instead of also
processing write futures.
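For illustration, a handler placed after Http2FrameCodec can then treat stream-ID exhaustion and a real GOAWAY from the peer the same way (handler name and close logic below are placeholders):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http2.Http2GoAwayFrame;

// Sketch: graceful-close logic lives in one place, whether the GOAWAY came
// from the remote peer or was simulated because stream IDs ran out.
final class GracefulCloseHandler extends SimpleChannelInboundHandler<Http2GoAwayFrame> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Http2GoAwayFrame frame) {
        // stop opening new streams, let in-flight streams finish, then close
        ctx.close();
    }
}
```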
Motivation:
DefaultHttp2ConnectionEncoder uses SimpleChannelPromiseAggregator to combine two
operations into a single future status. However it directly uses the
SimpleChannelPromiseAggregator object instead of using the newPromise() method
in one case. This may result in premature completion of the aggregated future.
Modifications:
- DefaultHttp2ConnectionEncoder to use
SimpleChannelPromiseAggregator#newPromise() instead of directly using the
SimpleChannelPromiseAggregator instance when writing the settings ACK frame
Result:
More correct status for the SETTINGS ACK frame write when auto SETTINGS ACK is
disabled.
Motivation:
The HTTP/2 codec will synchronously respond to a SETTINGS frame with a SETTINGS
ACK before the application sees the SETTINGS frame. The application may need to
adjust its state depending upon what is in the SETTINGS frame before applying
the remote settings and responding with an ACK (e.g. to adjust for max
concurrent streams). In order to accomplish this the HTTP/2 codec should allow
for the application to opt-in to sending the SETTINGS ACK.
Modifications:
- DefaultHttp2ConnectionDecoder should support a mode where SETTINGS frames can
be queued instead of immediately applying and ACKing.
- DefaultHttp2ConnectionEncoder should attempt to poll from the queue (if it
exists) to apply the earliest received but not yet ACKed SETTINGS frame.
- AbstractHttp2ConnectionHandlerBuilder (and subclasses) should support a new
option to enable the application to opt-in to managing SETTINGS ACK.
Result:
HTTP/2 allows for asynchronous SETTINGS ACK managed by the application.
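A sketch of the opt-in, assuming the new builder option is called autoAckSettingsFrame(..); the ACK write shown in the comment uses the existing encoder API:

```java
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2ConnectionHandlerBuilder;

final class ManualSettingsAckSetup {
    static Http2ConnectionHandler newHandler() {
        // Once the application has adjusted its state for a received SETTINGS
        // frame (e.g. max concurrent streams), it ACKs explicitly, for example
        // via handler.encoder().writeSettingsAck(ctx, ctx.newPromise()).
        return new Http2ConnectionHandlerBuilder()
                .autoAckSettingsFrame(false) // assumed name of the new opt-in
                .build();
    }
}
```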
Motivation:
SmtpRequestEncoderTest#testThrowsIfContentExpected has a ByteBuf leak.
Modifications:
- SmtpRequestEncoderTest#testThrowsIfContentExpected should release buffers in a finally block
Result:
No more leaks in SmtpRequestEncoderTest#testThrowsIfContentExpected.
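The fix follows the standard release-in-finally pattern; a minimal illustration (buffer contents and assertions elided):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.ReferenceCountUtil;

final class ReleaseInFinallyExample {
    static void runTest() {
        ByteBuf content = Unpooled.copiedBuffer(new byte[]{1, 2, 3});
        try {
            // exercise the encoder / assertions that may throw ...
        } finally {
            // released even when the test fails, so the leak detector stays quiet
            ReferenceCountUtil.release(content);
        }
    }
}
```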
Motivation
These aren't needed; only one field from each class is used. It also showed as an ambiguous identifier compilation error in my IDE even though javac is obviously fine with it.
Modifications
Static-import explicit ChannelOption fields in EpollDomainSocketChannelConfig instead of using .* wildcard.
Result
Cleaner / more consistent code.
Motivation:
1f93bd3 introduced a regression that could lead to the lastAccessed field not being correctly nulled out when the endOffset of the internal Component == CompositeByteBuf.readerIndex()
Modifications:
- Correctly null out the lastAccessed field in any case
- Add unit tests
Result:
Fixes regression in CompositeByteBuf.discard*ReadBytes()
Motivation:
We should only try to use OpenSslX509TrustManagerWrapper when using Java 7+, as otherwise it fails to init in its static block because X509ExtendedTrustManager was only introduced in Java 7.
Modifications:
Only use OpenSslX509TrustManagerWrapper if we are on Java 7+.
Result:
Fixes https://github.com/netty/netty/issues/9064.
Motivation:
While iterating values it is often desirable to be able to remove individual
entries. The existing mechanism to do this involves removing all entries and
conditionally re-inserting them, which is heavyweight when the goal is to
remove a single value.
Modifications:
- DefaultHeaders$ValueIterator supports removal
Result:
It is possible to remove entries while iterating the values in DefaultHeaders.
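For example, against the generic Headers API backed by DefaultHeaders (the header name and predicate below are made up):

```java
import java.util.Iterator;
import io.netty.handler.codec.Headers;

final class ValueIteratorRemovalExample {
    // Remove a single matching value while iterating, instead of removing all
    // values for the name and re-inserting the ones to keep.
    static void dropMatching(Headers<CharSequence, CharSequence, ?> headers,
                             CharSequence name, String prefix) {
        Iterator<CharSequence> it = headers.valueIterator(name);
        while (it.hasNext()) {
            if (it.next().toString().startsWith(prefix)) {
                it.remove(); // supported by DefaultHeaders since this change
            }
        }
    }
}
```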
Motivation
These implementations delegate most of their methods to an existing Handle and previously extended RecvByteBufAllocator.DelegatingHandle. This was reverted in #6322 with the introduction of ExtendedHandle but it's not clear to me why it needed to be - the code looks a lot cleaner.
Modifications
Have (Epoll|KQueue)RecvByteAllocatorHandle extend DelegatingHandle again, while still implementing ExtendedHandle.
Result
Less code.
Motivation:
At the moment resolve(...) just delegates to resolveAll(...) and so will only notify the future once all records have been resolved. This is wasteful as we are only interested in the first record anyway. We should notify the promise as soon as one record that matches the preferred record type is resolved.
Modifications:
- Introduce DnsResolveContext.isCompleteEarly(...) to be able to detect when we should notify the promise early.
- Make use of this early detection when resolve(...) is called
- Remove a FutureListener which could lead to IllegalReferenceCountException due to double releases
- Add unit test
Result:
Be able to notify about the resolved host more quickly.
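The observable difference, assuming resolver is a DnsNameResolver instance:

```java
import java.net.InetAddress;
import java.util.List;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.util.concurrent.Future;

final class ResolveExample {
    static void resolve(DnsNameResolver resolver) {
        // Completes as soon as the first record of the preferred type arrives.
        Future<InetAddress> first = resolver.resolve("netty.io");
        // Still waits until all records have been resolved.
        Future<List<InetAddress>> all = resolver.resolveAll("netty.io");
    }
}
```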
Motivation:
We did not correctly calculate the new TTL as we forgot to add `this.`.
Modifications:
Add `this.` and so correctly calculate the TTL.
Result:
Use correct TTL for authoritative nameservers when updating these.
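Illustrative only (not the actual resolver code), showing the kind of bug a missing `this.` causes when a parameter shadows a field:

```java
final class AuthoritativeNameServerEntry {
    private long ttl;

    void updateTtl(long ttl) {
        // Bug: "ttl = Math.min(ttl, ttl);" compares the parameter with itself
        // because the field is shadowed. Adding "this." uses the stored value.
        this.ttl = Math.min(this.ttl, ttl);
    }
}
```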
Motivation:
86dd388637 reverted the IPv6 multicast test (adding `@Ignore`). This commit makes the whole multicast testing a lot more robust by selecting the correct interface in any case, and also reverts the `@Ignore`.
Modifications:
- More robust multicast testing by selecting the right NetworkInterface
- Remove the `@Ignore` again for the IPv6 test
Result:
More robust multicast testing
Motivation:
Results are just wrong for small delays.
Modifications:
Switching to AverageTime avoids relying on the OS nanoTime granularity.
Result:
Uncontended low-delay results are now reliable.
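AverageTime refers to JMH's Mode.AverageTime; a minimal sketch of the mode switch (benchmark name and body are placeholders):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

// Average time per operation is measured over many invocations, so the result
// no longer depends on the granularity of a single nanoTime() sample.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class SmallDelayBenchmark {
    @Benchmark
    public void smallDelay() {
        // schedule / await a task with a very small delay ...
    }
}
```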
Motivation:
To closely mimic what the JDK does we should not try to resolve AAAA records if the system itself does not support IPv6 at all, as it is impossible to connect to these addresses later on. In this case we need to use ResolvedAddressTypes.IPV4_ONLY.
Modifications:
Add a static method to detect if IPv6 is supported and, if not, use ResolvedAddressTypes.IPV4_ONLY.
Result:
More consistent behaviour between JDK and our resolver implementation.
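A rough sketch of the kind of check meant here (the real helper lives in the resolver / NetUtil code and may differ):

```java
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;
import io.netty.resolver.ResolvedAddressTypes;

final class IpV6SupportCheck {
    static ResolvedAddressTypes defaultResolvedAddressTypes() throws Exception {
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                if (addr instanceof Inet6Address) {
                    return ResolvedAddressTypes.IPV4_PREFERRED;
                }
            }
        }
        // No IPv6 address configured anywhere: don't bother asking for AAAA records.
        return ResolvedAddressTypes.IPV4_ONLY;
    }
}
```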
Motivation:
At the moment we basically drop all non-preferred addresses when calling DnsNameResolver.resolveAll(...). This is just incorrect and was introduced by 4cd39cc4b3. More correct is to still retain these but sort the returned List so the preferred addresses are at the beginning of the List. This also ensures resolve(...) will return the correct address type.
Modifications:
- Introduce PreferredAddressTypeComperator which we use to sort the List so it will contain the preferred address type first.
- Add unit test to verify behaviour
Result:
The List returned by resolveAll(...) includes all resolved addresses, not only the preferred ones.
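An illustrative comparator (not the actual PreferredAddressTypeComperator) showing the sorting idea:

```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.util.Comparator;
import java.util.List;

final class PreferIpv4First implements Comparator<InetAddress> {
    @Override
    public int compare(InetAddress a, InetAddress b) {
        boolean a4 = a instanceof Inet4Address;
        boolean b4 = b instanceof Inet4Address;
        // Preferred family first; List.sort is stable, so everything else
        // keeps its relative order.
        return a4 == b4 ? 0 : a4 ? -1 : 1;
    }

    static void sortPreferredFirst(List<InetAddress> resolved) {
        resolved.sort(new PreferIpv4First());
    }
}
```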
Motivation:
IKVM.NET seems to ship a buggy sun.misc.Unsafe class; for this reason we should disable our sun.misc.Unsafe usage when we detect that IKVM.NET is used.
Modifications:
Check if IKVM.NET is used and if so do not use sun.misc.Unsafe by default.
Result:
Fixes https://github.com/netty/netty/issues/9035 and https://github.com/netty/netty/issues/8916.
Motivation:
Some analyzer / validation tools scan code to detect whether it may pose a security risk by blindly accepting certificates. Such a tool flagged our code because we have such an implementation (which is actually never used). We should just change the implementation to not do this, as it does not matter for us and it makes such tools happier.
Modifications:
Throw CertificateException
Result:
Fixes https://github.com/netty/netty/issues/9032
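For illustration, a placeholder trust manager that rejects instead of silently accepting, which is the shape of the change (class name is made up):

```java
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Sketch: the never-used placeholder now rejects everything, so scanners no
// longer flag it as an insecure "trust all" implementation.
final class RejectingTrustManager implements X509TrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        throw new CertificateException("Not trusted");
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        throw new CertificateException("Not trusted");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```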
Motivation:
We should update to netty-tcnative 2.0.25.Final as it fixes a possible segfault on systems that use openssl < 1.0.2 and for which we compiled with gcc.
See https://github.com/netty/netty-tcnative/pull/457
Modifications:
Update netty-tcnative
Result:
No more segfault possible.
Motivation:
The multicast IPv6 test fails on some systems. As I just added it, let me ignore it for now while investigating.
Modifications:
Add @Ignore
Result:
Stable testsuite while investigating.
Motivation:
At the moment we throw a ChannelException if netty_epoll_linuxsocket_setTcpMd5Sig fails. This is inconsistent with other methods, which throw an IOException.
Modifications:
Throw IOException
Result:
More correct and consistent exception usage in epoll transport
Motivation:
We currently only cover IPv4 multicast in the testsuite but we should also have tests for IPv6.
Modifications:
- Add a test for IPv6
- Ensure we only try to run the multicast tests for IPv4 / IPv6 if the loopback interface supports it.
Result:
Better test coverage
Motivation:
When a Channel has been closed, its isActive() method must return false.
Modifications:
First check isOpen() before isBound(), as isBound() will continue to return true even after the underlying fd was closed.
Result:
Fixes https://github.com/netty/netty/issues/9026.
Motivation:
When netty_epoll_linuxsocket_setTcpMd5Sig fails to init the sockaddr we should throw an exception and not silently return.
Modifications:
Throw exception if init of sockaddr fails.
Result:
Correctly report back error to user.
Motivation:
RFC 6455 defines that, generally, a WebSocket client should not close the TCP
connection, as the server is the one responsible for doing that.
In practice, though, it's not always possible to control the server. The server's
misbehavior may lead to connections being leaked (if the server does not
comply with the RFC).
RFC 6455 #7.1.1 says
> In abnormal cases (such as not having received a TCP Close from the server
after a reasonable amount of time) a client MAY initiate the TCP Close.
Modifications:
* WebSocket client handshaker gets an additional param `forceCloseAfterMillis`
* Use 10 seconds as default
Result:
The WebSocket client handshaker complies with the RFC. Fixes #8883.
Motivation:
When kevent(...) returns with EINTR we do not correctly decrement the timespec
structure contents to account for the time duration. This may lead to negative
values for tv_nsec which will result in an EINVAL and raise an IOException to
the event loop selection loop.
Modifications:
Correctly calculate new timeoutTs when EINTR is detected
Result:
Fixes #9013.
Motivation:
While investigating some other bug I noticed that we log at warn level if we fail to notify the promise because it is already fulfilled. This is not correct and misleading, as there is nothing wrong with it in general. A promise may already be fulfilled because we did multiple queries and one of these was successful.
Modifications:
- Change log level to trace
- Add a unit test which previously logged at warn level but now logs at trace level.
Result:
Less misleading noise in the log.
Motivation:
IdleStateHandler may trigger unexpected idle events when flushing large entries to slow clients.
Modification:
In Netty's design, we check the identity hash code and total pending write bytes of the current flush entry to determine whether there is a change in output. But if a large entry has been flushing slowly (for example because the network is slow, or the client processes data so slowly that the TCP sliding window drops to zero), the total pending write bytes and identity hash code remain unchanged.
Avoid this issue by adding checks for the current entry flush progress.
Result:
Fixes #8912.
Motivation
AbstractReferenceCounted and AbstractReferenceCountedByteBuf contain
duplicate logic for managing the volatile refcount in an optimized and
consistent manner, which increased in complexity in #8583. It's possible
to extract this into a common helper class now that all access is via an
AtomicIntegerFieldUpdater.
Modifications
- Move duplicate logic into a shared ReferenceCountUpdater class
- Incorporate some additional simplification for the most common single
increment/decrement cases (fewer checks/operations)
Result
Less code duplication, better encapsulation of the "non-trivial"
internal volatile refcount manipulation
Motivation:
DnsNameResolver#resolveAll(String) may return duplicate results in the event that the original hostname DNS response includes an IP address X and a CNAME that ends up resolving to the same IP address X. This behavior is inconsistent with the JDK's resolver, and it is unexpected for a resolveAll(..) call to return a List with duplicate entries.
Modifications:
- Filter out duplicates
- Add unit test
Result:
More consistent and less surprising behavior
Motivation:
Invoking ChannelHandlers is not free and can result in some overhead when the ChannelPipeline becomes very long. This is especially true if most handlers just forward the call to the next handler in the pipeline. When the user extends Channel*HandlerAdapter we can easily detect whether we can just skip the handler and invoke the next handler in the pipeline directly. This reduces the overhead of dispatch and also shortens the call stack in many cases.
This backports https://github.com/netty/netty/pull/8723 and https://github.com/netty/netty/pull/8987 to 4.1
Modifications:
Detect if we can skip the handler when walking the pipeline.
Result:
Reduce overhead for long pipelines.
Benchmark                                          (extraHandlers)   Mode  Cnt       Score       Error  Units
DefaultChannelPipelineBenchmark.propagateEventOld                4  thrpt   10  267313.031 ±  9131.140  ops/s
DefaultChannelPipelineBenchmark.propagateEvent                   4  thrpt   10  824825.673 ± 12727.594  ops/s
Motivation:
4079189f6b introduced OpenSslPrivateKeyMethodTest which will only be run when BoringSSL is used. As the assumeTrue(...) also guards the init of the static fields, we need to ensure we only try to destroy these if BoringSSL is used, as otherwise it will produce an NPE.
Modifications:
Check if BoringSSL is used before trying to destroy the resources.
Result:
No more NPE when BoringSSL is not used.
Motivation:
32563bfcc1 introduced a regression in which we no longer discard messages after an oversized message has been handled.
Modifications:
- Do not set aggregating to false after handleOversizedMessage is called
- Adjust unit tests to verify the behaviour is correct again.
Result:
Fixes https://github.com/netty/netty/issues/9007.
Motivation:
The CompositeByteBuf discardReadBytes / discardReadComponents methods are currently quite inefficient, including when there are no read components to discard. We would like to call the latter more frequently in ByteToMessageDecoder#COMPOSITE_CUMULATOR.
In the same context it would be beneficial to perform a "shallow copy" of a composite buffer (for example when it has a refcount > 1) to avoid having to allocate and copy the contained bytes just to obtain an "independent" cumulation.
Modifications:
- Optimize discardReadBytes() and discardReadComponents() implementations (start at first comp rather than performing a binary search for the readerIndex).
- New addFlattenedComponents(boolean,ByteBuf) method which performs a shallow copy if the provided buffer is also composite and avoids adding any empty buffers, plus unit test.
- Other minor optimizations to avoid unnecessary checks.
Results:
discardReadXX methods are faster, composite buffers can be easily appended without deepening the buffer "tree" or retaining unused components.
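A sketch of how a cumulator could use the new method (the signature is the one listed above, the surrounding code is illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;

final class CompositeCumulationExample {
    static CompositeByteBuf cumulate(CompositeByteBuf cumulation, ByteBuf in) {
        // Adds `in` without deepening the composite tree: if `in` is itself
        // composite its components are added flat, and empty buffers are skipped.
        cumulation.addFlattenedComponents(true, in);
        // Now cheap even when there is nothing to discard.
        cumulation.discardReadComponents();
        return cumulation;
    }
}
```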
Motivation:
4079189f6b changed the dependency to netty-tcnative-boringssl-static but it should still be netty-tcnative.
Modifications:
Change back to netty-tcnative
Result:
Correct dependency is used
Motivation:
BoringSSL allows customizing the way key signing is done and even offloading it from the IO thread. We should provide a way to plug in a custom implementation when BoringSSL is used.
Modifications:
- Introduce OpenSslPrivateKeyMethod that can be used by the user to implement custom signing by using ReferenceCountedOpenSslContext.setPrivateKeyMethod(...)
- Introduce static methods on OpenSslKeyManagerFactory which allow creating a KeyManagerFactory that supports keyless operations by letting the user handle everything in OpenSslPrivateKeyMethod.
- Add testcase which verifies that everything works as expected
Result:
A user is able to customize the way keys are signed.
Motivation:
Provide epoll/native multicast to support high load multicast users (we are using it for a high load telecomm app at my day job).
Modification:
Added support for (IPv4-only) source-specific and any-source multicast for the epoll transport. Some caveats (beyond no IPv6 support initially - there's a bit of work to add in join and leave group, specifically around SSM, as IPv6 uses different data structures for this): no support for disabling loopback mode, retrieval of the interface, or the block operation, all of which tend to be less frequently used.
Result:
Provides epoll transport multicast for IPv4 for common use cases. I understand if you'd prefer to hold off until IPv6 is included, but I'm not sure when I'll be able to get to that.
Motivation:
During OpenSsl.java initialization, a SelfSignedCertificate is created
during the static initialization block to determine if OpenSsl
can be used.
The default key strength for SelfSignedCertificate was too low if FIPS
mode is used and BouncyCastle-FIPS is the only available provider
(necessary for compliance). A simple fix is to just raise the key
strength to the minimum required by FIPS.
Modification:
Set the default key bit length to 2048 but also allow it to be dynamically set via a system property to future-proof against stricter security compliance requirements.
Result:
Fixes #9018
Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
Motivation:
Different SslProviders support different types of keys and chains. We should fail fast if we cannot support the type.
Related to https://github.com/netty/netty-tcnative/issues/455.
Modifications:
- Try to parse the key / chain first and if this fails throw an SslException
- Add tests.
Result:
Fail fast.
Motivation:
Deprecate ChannelOption.newInstance(...) as it is not used.
Modifications:
Deprecate ChannelOption.newInstance(...) as valueOf(...) should be used as a replacement.
Result:
Fixes https://github.com/netty/netty/issues/8983.
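For reference, the replacement looks like this:

```java
import io.netty.channel.ChannelOption;

final class OptionLookupExample {
    // valueOf(...) returns the existing constant for a known name and only
    // creates a new ChannelOption when none exists yet, which is why it
    // replaces the deprecated newInstance(...).
    static final ChannelOption<Boolean> KEEP_ALIVE = ChannelOption.valueOf("SO_KEEPALIVE");
}
```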