Motivation:
Fixes #6681.
Modification:
For the sake of better timer observability, expose the number of pending timeouts through the new HashedWheelTimer.pendingTimeouts method.
Result:
It is now straightforward to observe a basic yet important metric of Netty's timer: the number of pending tasks/timeouts.
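As an illustration, a minimal sketch of polling the new counter; it assumes only the pendingTimeouts() accessor described above, and how the value is reported is left to the application:

```java
import io.netty.util.HashedWheelTimer;
import java.util.concurrent.TimeUnit;

// Minimal sketch: schedule a couple of timeouts and read the pending count.
public final class TimerMetricsExample {
    public static void main(String[] args) {
        HashedWheelTimer timer = new HashedWheelTimer();
        timer.newTimeout(timeout -> System.out.println("fired"), 10, TimeUnit.SECONDS);
        timer.newTimeout(timeout -> System.out.println("fired"), 20, TimeUnit.SECONDS);

        // Expected to report 2 while both timeouts are still pending.
        System.out.println("pending timeouts: " + timer.pendingTimeouts());

        timer.stop();
    }
}
```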
Motivation:
Conscrypt is not needed when using the handler module, so it should be marked as optional
Modifications:
Mark Conscrypt as optional.
Result:
The handler module can be used without Conscrypt.
Motivation:
The Uptime example lacks a server.
UptimeClient's code style is a little different from the other examples, which may confuse readers.
We don't need to create a new Bootstrap instance each time the client reconnects to the server.
Modification:
Add UptimeServer and UptimeServerHandler, which simply accept all connections and discard all messages.
Change UptimeClient's code style.
Share a single Bootstrap instance.
Result:
Uptime server support.
Consistent code style.
A single Bootstrap is reused for all reconnections (sketched below).
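A hedged sketch of the single-Bootstrap pattern described above (not the actual example code; host, port and the handler are placeholders): the Bootstrap is configured once and reused from the close future when scheduling each reconnect.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import java.util.concurrent.TimeUnit;

// Hedged sketch: one Bootstrap instance shared by every (re)connect attempt.
public final class ReconnectExample {
    static final String HOST = "127.0.0.1"; // placeholder
    static final int PORT = 8080;           // placeholder
    static final EventLoopGroup GROUP = new NioEventLoopGroup();
    static final Bootstrap BOOTSTRAP = new Bootstrap()
            .group(GROUP)
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    ch.pipeline().addLast(new ChannelInboundHandlerAdapter()); // placeholder handler
                }
            });

    public static void main(String[] args) {
        connect();
    }

    static void connect() {
        ChannelFuture f = BOOTSTRAP.connect(HOST, PORT);
        // Whether the connect succeeds or fails, the channel's close future will
        // eventually complete; schedule the next attempt from there.
        f.channel().closeFuture().addListener(closed ->
                GROUP.schedule(ReconnectExample::connect, 5, TimeUnit.SECONDS));
    }
}
```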
Motivation:
NetUtil#isValidIpV6Address and NetUtil#getIPv6ByName allowed an invalid form of mapped IPv4 addresses, which led to accepting invalid IPv6 addresses as valid.
Modifications:
- NetUtil#isValidIpV6Address and NetUtil#getIPv6ByName should only allow 7 colons for an IPv4 address if they are the first 2 characters.
Result:
More correct implementation of NetUtil#isValidIpV6Address and NetUtil#getIPv6ByName
Motivation:
It is generally useful to have origin HTTP servers respond to
"Expect: 100-continue" as soon as possible, but applications without an
HttpObjectAggregator in their pipelines must use boilerplate to do so.
Modifications:
Introduce the HttpServerExpectContinueHandler handler to make it easier.
Result:
Less boilerplate for HTTP application authors.
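A brief sketch of where the new handler sits in a server pipeline; the trailing application handler is a placeholder:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerExpectContinueHandler;

// Place the handler right after the HTTP codec so it can answer
// "Expect: 100-continue" requests before they reach the application handler.
public class ExpectContinueInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());
        p.addLast(new HttpServerExpectContinueHandler());
        // p.addLast(new MyBusinessLogicHandler()); // placeholder application handler
    }
}
```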
Motivation:
DiskFileUpload creates temporary files for storing user uploads that contain the user-provided file name as part of the temporary file name. While most security problems are prevented by using "new File(userFileName).getName()", a small risk of bugs or security issues remains.
Modifications:
Use a constant string as the file name and rely on the caller's use of File.createTempFile to ensure unique disk file names.
Result:
A slight security improvement at the cost of a little more obfuscated temp file names.
Motivation:
A single DNS query may follow many different paths through resolver-dns. The query may fail for various reasons related to the DNS protocol or general IO errors, it may be cancelled because the query count was exceeded, or it may fail for other reasons. A query may also result in other queries as we follow the DNS protocol (e.g. redirects, CNAME, etc...). It is currently impossible to collect information about the life cycle of an individual query through resolver-dns. This information may be valuable when considering which DNS servers are preferred over others.
Modifications:
- Introduce an interface which can provide visibility into all the potential outcomes of an individual DNS query
Result:
resolver-dns provides visibility into individual DNS queries which can be used to avoid poorly performing DNS servers.
Motivation:
A BOM should only expose dependency management for the artifacts that the project produces (so in this case, netty artifacts). It should not have an opinion about third party dependencies (otherwise including that BOM in a project potentially overrides decisions that were made about them).
Modifications:
Stop inheriting from netty-parent and explicitly set the versions for the artifacts.
Result:
Valid BOM.
Motivation:
We did not correctly release all buffers in UnpooledTest and so demonstrated a "bad" way of handling buffers to people who inspect our code to understand when a buffer needs to be released.
Modifications:
Explicitly release all buffers.
Result:
Cleaner and more correct code.
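For illustration, the release discipline the tests now follow is the usual try/finally pattern (a minimal sketch, not the test code itself):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Every allocated buffer is explicitly released, even if an assertion or
// exception fires in between.
public final class ReleasePatternExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer(new byte[] { 1, 2, 3 });
        try {
            System.out.println(buf.readableBytes()); // work with the buffer
        } finally {
            buf.release();
        }
    }
}
```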
Motivation:
We need to notify the promise after the codecs are removed to allow writing from the listener and to ensure we do not try to do any encoding anymore. If we do not, we will end up with corrupt data.
Modifications:
Notify promise after codecs are removed.
Result:
Fixes [#6671].
Motivation
SniHandler is "hardcoded" to use hostname -> SslContext mappings, but there are use-cases where it's desirable and necessary to return more information than an SslContext. The only option so far has been to use a delegation pattern.
Modifications
Extract parts of the existing SniHandler into an abstract base class and extend SniHandler from it. Users can do the same by extending the new abstract base class and implement custom behavior that is possibly very different from the common/default SniHandler.
Touches
- f97866dbc6
- b604a22395
Result
Fixes #6603
Motivation:
NetUtil#getByName and NetUtil#isValidIpV6Address do not strictly enforce the format of IPv4 addresses that are allowed to be embedded in IPv6 addresses as specified in https://tools.ietf.org/html/rfc4291#section-2.5.5. This may lead to invalid addresses being parsed, or invalid addresses being considered valid. Compression of a single IPv6 word was also not handled correctly if there are 7 : characters.
Modifications:
- NetUtil#isValidIpV6Address should enforce that IPv4-Compatible and IPv4-Mapped are the only valid formats for including IPv4 addresses as specified in https://tools.ietf.org/html/rfc4291#section-2.5.5
- NetUtil#getByName should more strictly parse IPv6 addresses which contain IPv4 addresses as specified in https://tools.ietf.org/html/rfc4291#section-2.5.5
- NetUtil should allow compression even if the number of : characters is 7.
- NetUtil#createByteArrayFromIpAddressString should use the same IP string to byte[] translation which is used in NetUtil#getByName
Result:
NetUtil#getByName and NetUtil#isValidIpV6Address respect the IPv6 RFC which defines the valid formats for embedding IPv4 addresses.
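To illustrate the stricter behaviour, a hedged example against NetUtil#isValidIpV6Address; the expected booleans follow RFC 4291 and are not verified output:

```java
import io.netty.util.NetUtil;

// IPv4-Mapped and IPv4-Compatible forms should validate; an IPv4 address
// embedded anywhere else should not.
public final class IpV6ValidationExample {
    public static void main(String[] args) {
        System.out.println(NetUtil.isValidIpV6Address("::ffff:192.168.0.1"));        // expected: true (IPv4-Mapped)
        System.out.println(NetUtil.isValidIpV6Address("::192.168.0.1"));             // expected: true (IPv4-Compatible)
        System.out.println(NetUtil.isValidIpV6Address("1:2:3:4:192.168.0.1:6:7:8")); // expected: false
    }
}
```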
Motivation:
Each StreamByteDistributor may allow for priority in different ways, but there are certain characteristics which are invalid regardless of the distribution algorithm. We should validate these invalid characteristics at the flow controller level.
Modifications:
- Disallow negative stream IDs from being used. These streams may be accepted by the WeightedFairQueueByteDistributor and cause state for other valid streams to be evicted.
- Improve unit tests to verify limits are enforced.
Result:
Boundary conditions related to the priority parameters are validated more strictly.
Motivation:
We should skip the forbidden API check when running the examples, as otherwise it may fail.
Modifications:
Skip the API check in run-example.sh
Result:
Be able to run the examples in all cases.
Motivation:
Some work needs to be done to allow using the forbidden API check plugin with Java 9.
Modifications:
Skip the forbidden API check when using Java 9
Result:
Builds again with Java 9
Motivation:
In cases when an application is running in a container or is otherwise
constrained to the number of processors that it is using, the JVM
invocation Runtime#availableProcessors will not return the constrained
value but rather the number of processors available to the virtual
machine. Netty uses this number in sizing various resources.
Additionally, some applications will constrain the number of threads
that they are using independently of the number of processors available
on the system. Thus, applications should have a way to globally
configure the number of processors.
Modifications:
Rather than invoking Runtime#availableProcessors, Netty should rely on a
method that enables configuration when the JVM is started or by the
application. This commit exposes a new class NettyRuntime for enabling
such configuration. This value can only be set once. Its default value
is Runtime#availableProcessors so that there is no visible change to
existing applications, but enables configuration either via a system property
or during application startup (e.g., based on settings used
to configure the application).
Additionally, we introduce the usage of forbidden-apis to prevent future
uses of Runtime#availableProcessors from creeping in. Future work should
enable the bundled signatures and clean up uses of deprecated and
other forbidden methods.
Result:
Netty can be configured to not use the underlying number of processors,
but rather the constrained number of processors.
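A hedged sketch of the programmatic path; NettyRuntime#setAvailableProcessors and #availableProcessors are the methods described above, while the exact system property name shown in the comment is an assumption:

```java
import io.netty.util.NettyRuntime;

public final class ConstrainedProcessorsExample {
    public static void main(String[] args) {
        // Alternative (assumed property name): -Dio.netty.availableProcessors=2 at JVM startup.
        // The value can only be set once and must be set before Netty sizes its resources.
        NettyRuntime.setAvailableProcessors(2);

        // Everything in Netty that sizes resources by CPU count now sees 2.
        System.out.println(NettyRuntime.availableProcessors());
    }
}
```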
Motivation:
Chrome was randomly getting stuck loading the tiles examples.
Investigation showed that the Netty flow controller thought it had
nothing to send for the connection even though some streams had queued
data and window available.
Modifications:
Fixed an accounting error where an implicitly created parent was not
being added to the dependency tree, thus it and all of its children were
orphaned from the connection's tree and would never have data written.
Result:
Fixes #6621
Motivation:
Although effectively unused, the toString() of
WeightedFairQueueByteDistributor.State is useful for debugging. It
accidentally had an infinite loop, as it would recurse infinitely
between a parent and its child, which makes it less useful for
debugging.
Modifications:
Prune the infinite loop by using the parent's streamId instead of the
parent's toString().
Result:
Faster, less stack-overflowing toString()
Motivation:
Java 8 adds support for SNIMatcher to reject SNI when the hostname does not match what is expected. We did not support this when using SslProvider.OPENSSL*.
Modifications:
- Add support for SNIMatcher when using SslProvider.OPENSSL*
- Add unit tests
Result:
SNIMatcher is now supported with our own SSLEngine as well.
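For reference, the JDK side of the feature; with this change the same SSLParameters also take effect when the SSLEngine comes from SslProvider.OPENSSL*. The regex below is just an example:

```java
import java.util.Collections;
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIMatcher;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

// Reject any SNI hostname that does not match the given pattern.
public final class SniMatcherExample {
    public static void main(String[] args) throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();

        SNIMatcher matcher = SNIHostName.createSNIMatcher("(.*\\.)?netty\\.io");
        SSLParameters params = engine.getSSLParameters();
        params.setSNIMatchers(Collections.singleton(matcher));
        engine.setSSLParameters(params);
    }
}
```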
Motivation:
We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer may lead to an overflow when calculating the size of the output buffer.
Modifications:
Correctly guard against overflow.
Result:
Fixes [#6620].
Motivation:
We need to release all the buffers that may have been put into our inbound queue once we closed the Channel, to ensure we do not leak any memory. This is fine as it basically gives the same guarantees as TCP, which means that even if the promise was notified before, it is not really guaranteed that the "remote peer" will see the buffer at all.
Modifications:
Ensure we release all buffers in the inbound queue when doClose() is called.
Result:
No more leaks.
Motivation:
We need to ensure we only test with the SslProviders that are supported when running the SslHandlerTest.testCompositeBufSizeEstimationGuaranteesSynchronousWrite test.
Modifications:
Skip SslProvider.OPENSSL* if not supported.
Result:
No more test-failures if openssl is not installed on the system.
Motivation:
Unsafe.invokeCleaner(...) checks if the passed-in ByteBuffer is a slice or duplicate and, if so, throws an IllegalArgumentException on Java 9. We need to ensure we never try to free a ByteBuffer that was provided directly by the user, as we do not know whether it is a slice / duplicate or not.
Modifications:
Never try to free a ByteBuffer that was passed into the UnpooledUnsafeDirectByteBuf constructor by a user (via Unpooled.wrappedBuffer(...)).
Result:
The build passes again on Java 9
Motivation:
1. The use of InternetProtocolFamily is not consistent:
DnsNameResolverContext and DnsNameResolver contain switches
instead of using the appropriate methods.
2. The InternetProtocolFamily class contains redundant switches in the
constructor.
Modifications:
1. Replace the switches with calls to the appropriate methods.
2. Simplify the InternetProtocolFamily constructor.
Result:
Code is cleaner and simpler.
Motivation:
We did not correctly manage the life-cycle of the buffers / frames in our HTTP/2 multiplex example, which led to a memory leak.
Modifications:
- Correctly release the frame if it is not echoed back to the remote peer.
- Do not retain the content before echoing it back to the remote peer.
Result:
No more leak in the example, fixes [#6636].
Motivation:
Using reflection to obtain the default name servers may fail in Java9 and also in previous Java versions if a SecurityManager is present.
Modifications:
Try using jndi-dns to obtain default name servers and only try using reflection if this fails.
Result:
Be able to detect default name servers in all cases. Fixes [#6347].
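A hedged sketch of the JNDI-based lookup (exact behaviour varies by JDK and OS; the provider URL returned typically lists the servers as "dns://x.x.x.x" entries):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

// Ask the JDK's DNS context factory for the effective provider URL, which
// reflects the platform's default name servers.
public final class DefaultNameServersExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put(Context.PROVIDER_URL, "dns://");

        InitialDirContext ctx = new InitialDirContext(env);
        String servers = (String) ctx.getEnvironment().get(Context.PROVIDER_URL);
        ctx.close();

        System.out.println(servers);
    }
}
```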
Motivation:
Java 9 added a new method to Unsafe which allows allocating a byte[] without zeroing (memset) it. This can have a massive impact on allocation times when the byte[] is big. This change allows enabling this via the io.netty.tryAllocateUninitializedArray system property when running on Java 9+. Please note that you will need to open up the jdk.internal.misc package via '--add-opens java.base/jdk.internal.misc=ALL-UNNAMED' as well.
Modifications:
Allow allocating a byte[] without zeroing it on Java 9+
Result:
Better performance when allocating big heap buffers on Java 9.
Motivation:
We missed retaining a slice before returning it to the user, so a reference count error may occur later on.
Modifications:
Use readRetainedSlice(...) to ensure we retain the buffer before handing it off to the user.
Result:
Fixes [#6626].
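An illustrative sketch of why the retained slice matters: the slice stays valid for the caller even after the decoder releases the original buffer.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class RetainedSliceExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer(new byte[] { 1, 2, 3, 4 });

        ByteBuf slice = buf.readRetainedSlice(2); // retained: safe to hand off to the user
        buf.release();                            // the decoder is done with the original

        System.out.println(slice.readableBytes()); // the slice is still usable here
        slice.release();                           // the caller releases its own reference
    }
}
```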
Motivation:
As the javadoc of ScheduledExecutorService states:
Zero and negative delays (but not periods) are also allowed in schedule methods, and are treated as requests for immediate execution.
Modifications:
- Correctly handle delay <= 0.
- Add unit tests.
Result:
Fixes [#6627].
Motivation:
When a VoidChannelPromise is used by the user, we need to ensure we propagate the exception through the ChannelPipeline, otherwise the exception will just be swallowed and the user has no idea what's going on.
Modifications:
- Always call tryFailure / trySuccess even when we use the VoidChannelPromise
- Add unit test
Result:
Fixes [#6622].
Motivation:
If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method, we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can, for example, happen if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we are using the local transport and write a read-only buffer. This buffer is then passed to the peer channel, fired through the pipeline, and so ends up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received, we tried to merge the buffers and failed with a ReadOnlyBufferException, which then produced a leak.
Modifications:
Ensure we copy the buffer if its read-only.
Result:
No more exceptions and no more leaks when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...).
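A hedged sketch of the defensive copy (the helper name is hypothetical, not the actual implementation): a read-only cumulation buffer cannot be appended to, so it is swapped for a writable copy before the next fragment is merged in.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

final class CumulationUtil {
    // Returns a cumulation buffer that is guaranteed to be writable.
    static ByteBuf ensureWritable(ByteBufAllocator alloc, ByteBuf cumulation) {
        if (!cumulation.isReadOnly()) {
            return cumulation;
        }
        ByteBuf writable = alloc.buffer(cumulation.readableBytes());
        writable.writeBytes(cumulation);
        cumulation.release();
        return writable;
    }

    private CumulationUtil() { }
}
```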
Motivation:
When debugging netty memory leaks, it's sometimes helpful to
print the object's reference count.
Modifications:
Add `refCnt` methods to the set of already existing helpers for ref counted
objects.
Result:
Users have a utility to print an object's reference count without much
boilerplate.
Motivation:
Java 9 adds a new method to Unsafe which allows freeing a direct ByteBuffer via the cleaner without the need for any command-line arguments.
Modifications:
- Add Cleaner interface
- Add CleanerJava9, which will be used on Java 9+ and takes care of releasing direct ByteBuffers
- Let Cleaner0 implement Cleaner
Result:
Direct ByteBuffers can be freed on Java 9+ again without any command-line arguments.
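For reference, a hedged sketch of the underlying Java 9 mechanism invoked reflectively (Java 9+ only); Netty's Cleaner abstraction wraps this kind of call:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// Call sun.misc.Unsafe#invokeCleaner(ByteBuffer) to free a direct buffer.
// Only the original buffer may be passed in, never a slice or duplicate,
// otherwise invokeCleaner throws an IllegalArgumentException.
public final class FreeDirectBufferExample {
    public static void main(String[] args) throws Exception {
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);

        Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, direct); // the direct memory is released here
    }
}
```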
Motivation:
Commit d675febf07d14d4dff82471829f974369705655a introduced a regression in QueryStringEncoder, resulting in whitespace being converted into a literal `+` sign instead of `%20`.
Modification:
Modify `encodeComponent` to pattern match and replace on the result of the call to `URLEncoder#encode`
Result:
Fixes the regression.
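For context, URLEncoder applies form encoding, which turns a space into '+', while a query component should use "%20"; a simplified illustration of the post-processing (not Netty's exact implementation):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// URLEncoder already encodes a literal '+' as "%2B", so any remaining '+'
// in its output can only come from spaces and may safely be replaced.
public final class EncodeComponentExample {
    public static void main(String[] args) throws Exception {
        String formEncoded = URLEncoder.encode("a b", StandardCharsets.UTF_8.name());
        System.out.println(formEncoded);                     // a+b
        System.out.println(formEncoded.replace("+", "%20")); // a%20b
    }
}
```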
Motivation:
DnsServerAddresses loads the default DNS servers used for DNS resolution in a static initialization block. This is subject to blocking and may cause unexpected delays. We can move this initialization to DefaultDnsServerAddressStreamProvider where it is more expected to load the JDK's default configuration.
Modifications:
- Move all the static initialization from DnsServerAddresses to DefaultDnsServerAddressStreamProvider
- Deprecate static methods in DnsServerAddresses which have moved to DefaultDnsServerAddressStreamProvider
- Remove usage of deprecated methods in DnsServerAddresses
Result:
Usage of JDK's blocking DNS resolver is not required to use resolver-dns.
Motivation:
DnsNameResolverContext completes its DNS query promise automatically
when no queries are in progress, which means there's no need to fail the
promise explicitly.
Modifications:
- Do not fail a DNS query promise explicitly but add an informational
trace
Result:
- Fixes #6600
- Unexpected exception on one question type does not fail the promise
too soon. If the other question succeeds, the query will succeed,
making the resolver more robust.
Motivation:
The CI servers have reported leaks while building the HTTP/2 unit tests. The unit tests attempt to wait for the channels to be closed before exiting the test, but we should wait in case there are any tasks pending on the EventLoopGroup's task queues.
Modifications:
- Change the Future.sync() operations to Future.syncUninterruptibly()
- HTTP/2 unit tests which use local channel should wait for 5 seconds before shutting down the EventLoopGroups
Result:
More likely that any cleanup-related tasks will execute before the unit tests are shut down.
Motivation:
Commit 795f318 simplified some code related to the special case Set for the selected keys and introduced a Selector wrapper to make sure this set was properly reset. However the JDK makes assumptions about the type of Selector and this type is not extensible. This means whenever we call into the JDK we must provide the unwrapped version of the Selector or we get a ClassCastException. We missed a case of unwrapping in NioEventLoop#rebuildSelector0.
Modifications:
- NioEventLoop#openSelector should return a tuple so we can atomically set the wrapped and unwrapped Selector
- NioEventLoop#rebuildSelector0 should use the unwrapped version of the selector
Result:
Fixes https://github.com/netty/netty/issues/6607.
https://github.com/netty/netty-tcnative/pull/215
Motivation
OCSP stapling (formally known as the TLS Certificate Status Request extension) is an alternative approach for checking the revocation status of X.509 certificates. Servers can preemptively fetch the OCSP response from the CA's responder, cache it for some period of time, and pass it along (a.k.a. staple it) during the TLS handshake. The client no longer has to reach out on its own to the CA to check the validity of a certificate. Some of the key benefits are:
1) Speed. The client doesn't have to crosscheck the certificate.
2) Efficiency. The Internet is no longer DDoS'ing the CA's OCSP responder servers.
3) Safety. Less operational dependence on the CA. Certificate owners can sustain short CA outages.
4) Privacy. The CA can no longer track the users of a certificate.
https://en.wikipedia.org/wiki/OCSP_stapling
https://letsencrypt.org/2016/10/24/squarespace-ocsp-impl.html
Modifications
https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html
Result
High-level API to enable OCSP stapling
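A hedged sketch of the server side, assuming the high-level API consists of SslContextBuilder#enableOcsp together with ReferenceCountedOpenSslEngine#setOcspResponse; the certificate/key files and the cached DER-encoded responder bytes are placeholders:

```java
import java.io.File;

import io.netty.handler.ssl.ReferenceCountedOpenSslEngine;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

public final class OcspStaplingSketch {
    // Build a server context with OCSP stapling enabled (requires the OpenSSL provider).
    static SslContext buildContext(File certChain, File privateKey) throws Exception {
        return SslContextBuilder.forServer(certChain, privateKey)
                .sslProvider(SslProvider.OPENSSL)
                .enableOcsp(true)
                .build();
    }

    // Attach the pre-fetched OCSP response so it is stapled during the handshake.
    static void staple(ReferenceCountedOpenSslEngine engine, byte[] cachedOcspResponse) {
        engine.setOcspResponse(cachedOcspResponse);
    }
}
```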