Motivation:
Java 9 added a new method to Unsafe which allows allocating a byte[] without zeroing (memset) it first. This can have a massive impact on allocation times when the byte[] is big. This change makes it possible to enable this behaviour via the io.netty.tryAllocateUninitializedArray property when running on Java 9+. Please note that you will also need to open up the jdk.internal.misc package via '--add-opens java.base/jdk.internal.misc=ALL-UNNAMED'.
Modifications:
Allow allocating a byte[] without zeroing it on Java 9+.
Result:
Better performance when allocating big heap buffers on Java 9+.
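A minimal sketch of how such an allocation could be performed via reflection, assuming the jdk.internal.misc package has been opened as described above; the class and field names here are illustrative and not Netty's internal implementation:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public final class UninitializedArrayExample {
    public static void main(String[] args) throws Exception {
        // Grab the internal Unsafe instance; this needs
        // '--add-opens java.base/jdk.internal.misc=ALL-UNNAMED' on Java 9+.
        Class<?> unsafeClass = Class.forName("jdk.internal.misc.Unsafe");
        Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);

        // allocateUninitializedArray skips the zeroing that 'new byte[size]' implies.
        Method allocate = unsafeClass.getMethod(
                "allocateUninitializedArray", Class.class, int.class);
        byte[] array = (byte[]) allocate.invoke(unsafe, byte.class, 8 * 1024 * 1024);
        System.out.println("Allocated " + array.length + " bytes without zeroing");
    }
}
```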
Motivation:
We failed to retain a slice before returning it to the user, so a reference count error may occur later on.
Modifications:
Use readRetainedSlice(...) to ensure the buffer is retained before it is handed off to the user.
Result:
Fixes [#6626].
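For illustration, a hedged sketch of the pattern inside a decoder; `in`, `out` and `frameLength` are hypothetical local variables:

```java
// readSlice(...) would share the reference count with 'in', so the slice has to be
// retained before it is handed to the caller; readRetainedSlice(...) does both in one call.
ByteBuf frame = in.readRetainedSlice(frameLength);
out.add(frame);
```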
Motivation:
As the javadoc of ScheduledExecutorService states:
"Zero and negative delays (but not periods) are also allowed in schedule methods, and are treated as requests for immediate execution."
Modifications:
- Correctly handle delay <= 0.
- Add unit tests.
Result:
Fixes [#6627].
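A hedged illustration of that contract; `executor` is a placeholder for any ScheduledExecutorService (or Netty EventExecutor):

```java
// A zero or negative delay is a request for immediate execution, not an error.
ScheduledFuture<?> future = executor.schedule(
        () -> System.out.println("runs right away"), -5, TimeUnit.SECONDS);
```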
Motivation:
When a VoidChannelPromise is used we need to ensure we propagate the exception through the ChannelPipeline, otherwise the exception will just be swallowed and the user has no idea what is going on.
Modifications:
- Always call tryFailure / trySuccess even when we use the VoidChannelPromise
- Add unit test
Result:
Fixes [#6622].
Motivation:
If a read-only ByteBuf is passed to ByteToMessageDecoder.channelRead(...) we need to make a copy of it once we try to merge buffers for cumulation. This is usually not the case but can happen, for example, when the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we use the local transport and write a read-only buffer. This buffer is then passed to the peer channel, fired through the pipeline and ends up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received we tried to merge the two and failed with a ReadOnlyBufferException, which then produced a leak.
Modifications:
Ensure we copy the buffer if it is read-only.
Result:
No more exceptions, and therefore no leak, when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...).
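A minimal sketch of the idea (not the exact cumulator code); `alloc`, `cumulation` and `in` are assumed local variables:

```java
// A read-only cumulation cannot be written into, so swap it for a writable copy
// before merging in the next fragment.
if (cumulation.isReadOnly()) {
    ByteBuf writable = alloc.buffer(cumulation.readableBytes());
    writable.writeBytes(cumulation);
    cumulation.release();
    cumulation = writable;
}
cumulation.writeBytes(in);
in.release();
```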
Motivation:
Java 8 added SNIMatcher support to reject SNI when the hostname does not match what is expected. We did not support this when using SslProvider.OPENSSL*.
Modifications:
- Add support for SNIMatcher when using SslProvider.OPENSSL*
- Add unit tests
Result:
SNIMatcher is now supported with our own SSLEngine as well.
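A hedged example of the JDK SNIMatcher API that this enables; the host pattern and context setup are illustrative:

```java
// Reject any handshake whose SNI host name does not match the expected pattern.
// With this change the same configuration works for an OpenSSL-backed SSLEngine.
SSLEngine engine = sslCtx.newEngine(ByteBufAllocator.DEFAULT);
SSLParameters parameters = engine.getSSLParameters();
parameters.setSNIMatchers(Collections.singletonList(
        SNIHostName.createSNIMatcher("netty\\.io")));
engine.setSSLParameters(parameters);
```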
Motivation:
When debugging netty memory leaks, it's sometimes helpful to
print the object's reference count.
Modifications:
Add `refCnt` methods to the set of already existing helpers for
reference-counted objects.
Result:
Users will have a utility to print an object's reference count without much
boilerplate.
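A hedged usage sketch; `logger` and `msg` are placeholders:

```java
// Query the reference count without casting to ReferenceCounted; handy when
// chasing a suspected leak.
logger.debug("refCnt of {} is {}", msg, ReferenceCountUtil.refCnt(msg));
```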
Motivation:
Java 9 adds a new method to Unsafe which allows freeing a direct ByteBuffer via the cleaner without the need for command-line arguments.
Modifications:
- Add Cleaner interface
- Add CleanerJava9 which will be used on Java 9+ and take care of releasing direct ByteBuffers
- Let Cleaner0 implement Cleaner
Result:
Be able to free direct ByteBuffers on Java 9+ again without any command-line arguments.
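The new Java 9 method is Unsafe.invokeCleaner(ByteBuffer). A hedged sketch of how it might be invoked reflectively; the names are illustrative and not the actual CleanerJava9 code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

final class DirectBufferFreeSketch {
    static void free(ByteBuffer directBuffer) throws Exception {
        // sun.misc.Unsafe gained invokeCleaner(ByteBuffer) in Java 9; it releases a
        // direct buffer without reflective access to the internal Cleaner class.
        Field theUnsafe = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);
        Method invokeCleaner = unsafe.getClass().getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, directBuffer);
    }
}
```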
Motivation:
Commit d675febf07d14d4dff82471829f974369705655a introduced a regression in QueryStringEncoder, resulting in whitespace being converted into a literal `+` sign instead of `%20`.
Modification:
Modify `encodeComponent` to pattern match and replace on the result of the call to `URLEncoder#encode`
Result:
Fixes the regression.
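A hedged sketch of the resulting encoding step (not the exact Netty code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

final class EncodeComponentSketch {
    // URLEncoder.encode emits '+' for a space (form encoding); a query component
    // should use percent-encoding, so rewrite '+' to "%20". A literal '+' in the
    // input has already been encoded as "%2B", so the replacement is unambiguous.
    static String encodeComponent(String s) throws UnsupportedEncodingException {
        return URLEncoder.encode(s, "UTF-8").replace("+", "%20");
    }
}
```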
Motivation:
DnsServerAddresses loads the default DNS servers used for DNS resolution in a static initialization block. This is subject to blocking and may cause unexpected delays. We can move this initialization to DefaultDnsServerAddressStreamProvider where it is more expected to load the JDK's default configuration.
Modifications:
- Move all the static initialization from DnsServerAddresses to DefaultDnsServerAddressStreamProvider
- Deprecate static methods in DnsServerAddresses which have moved to DefaultDnsServerAddressStreamProvider
- Remove usage of deprecated methods in DnsServerAddresses
Result:
Usage of JDK's blocking DNS resolver is not required to use resolver-dns.
Motivation:
DnsNameResolverContext completes its DNS query promise automatically
when no queries are in progress, which means there's no need to fail the
promise explicitly.
Modifications:
- Do not fail a DNS query promise explicitly but add an informational
trace
Result:
- Fixes #6600
- Unexpected exception on one question type does not fail the promise
too soon. If the other question succeeds, the query will succeed,
making the resolver more robust.
Motivation:
The CI servers have reported leaks while building the HTTP/2 unit tests. The unit tests attempt to wait for the channels to be closed before exiting the test, but we should wait in case there are any tasks pending on the EventLoopGroup's task queues.
Modifications:
- Change the Future.sync() operations to Future.syncUninterruptibly()
- HTTP/2 unit tests which use local channel should wait for 5 seconds before shutting down the EventLoopGroups
Result:
More likely that any cleanup-related tasks will execute before the unit tests are shut down.
Motivation:
Commit 795f318 simplified some code related to the special case Set for the selected keys and introduced a Selector wrapper to make sure this set was properly reset. However the JDK makes assumptions about the type of Selector and this type is not extensible. This means whenever we call into the JDK we must provide the unwrapped version of the Selector or we get a ClassCastException. We missed a case of unwrapping in NioEventLoop#rebuildSelector0.
Modifications:
- NioEventLoop#openSelector should return a tuple so we can atomically set the wrapped and unwrapped Selector
- NioEventLoop#rebuildSelector0 should use the unwrapped version of the selector
Result:
Fixes https://github.com/netty/netty/issues/6607.
https://github.com/netty/netty-tcnative/pull/215
Motivation
OCSP stapling (formally known as the TLS Certificate Status Request extension) is an alternative approach to checking the revocation status of X.509 certificates. Servers can preemptively fetch the OCSP response from the CA's responder, cache it for some period of time, and pass it along (a.k.a. staple it) during the TLS handshake. The client no longer has to reach out on its own to the CA to check the validity of a certificate. Some of the key benefits are:
1) Speed. The client doesn't have to cross-check the certificate.
2) Efficiency. The Internet is no longer DDoS'ing the CA's OCSP responder servers.
3) Safety. Less operational dependence on the CA. Certificate owners can sustain short CA outages.
4) Privacy. The CA can no longer track the users of a certificate.
https://en.wikipedia.org/wiki/OCSP_stapling
https://letsencrypt.org/2016/10/24/squarespace-ocsp-impl.html
Modifications
https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html
Result
High-level API to enable OCSP stapling
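A hedged sketch of what server-side usage of such a high-level API could look like; the helper for fetching the response is hypothetical and the exact method names may differ:

```java
// Enable OCSP on an OpenSSL-backed context, then staple a cached DER-encoded
// response onto each engine before the handshake starts.
SslContext sslCtx = SslContextBuilder.forServer(keyCertChainFile, keyFile)
        .sslProvider(SslProvider.OPENSSL)
        .enableOcsp(true)
        .build();

SSLEngine engine = sslCtx.newEngine(channel.alloc());
byte[] staple = fetchOcspResponseFromResponder(); // hypothetical helper
((ReferenceCountedOpenSslEngine) engine).setOcspResponse(staple);
```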
Motivation:
`io.netty.handler.logging.LoggingHandler` does not log when these
events happen.
Modifications:
Add overrides with logging to these methods.
Result:
Logging now happens for these two events.
Motivation:
It is generally useful to override DefaultHttp2HeadersDecoder's creation of a new Http2Headers object so more optimized versions can be substituted if the use case allows for it.
Modifications:
- DefaultHttp2HeadersDecoder should support an overridable method to generate the new Http2Headers object for each decode operation
Result:
DefaultHttp2HeadersDecoder is more extensible.
Fixes https://github.com/netty/netty/issues/6591.
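A hedged sketch of how a subclass could use the extension point, assuming the overridable factory method is a protected newHeaders():

```java
public final class CustomHeadersDecoder extends DefaultHttp2HeadersDecoder {
    @Override
    protected Http2Headers newHeaders() {
        // Hypothetical optimization: size the headers object for small header lists.
        return new DefaultHttp2Headers(true, 4);
    }
}
```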
Motivation:
In OpenSslCertificateException we tried to validate the supplied error code but did not correctly account for all different valid error codes and so threw an IllegalArgumentException.
Modifications:
- Fix validation by updating to latest netty-tcnative and use CertificateVerifier.isValid
- Add unit tests
Result:
Validation of error code works as expected.
Motivation:
1419f5b601 added support for Conscrypt but the CI started to fail when running tests with Java 7, as Conscrypt is compiled with Java 8. This was partly fixed in c4832cd9d9, but we also need to ensure we do not even try to load the classes.
Modifications:
Only try to load the Conscrypt classes when on Java 8+.
Result:
The CI does not fail anymore.
Motivation:
1419f5b601 added support for Conscrypt but the CI started to fail when running tests with Java 7, as Conscrypt is compiled with Java 8.
Modifications:
Only support Conscrypt on Java 8+.
Result:
The CI does not fail anymore.
Motivation:
UnreleasableByteBuf operations are designed to not modify the reference count of the underlying buffer. The Retained[Duplicate|Slice] operations violate this assumption and can cause the underlying buffer's reference count to be increased, but never allow for it to be decreased. This may lead to memory leaks.
Modifications:
- UnreleasableByteBuf's Retained[Duplicate|Slice] should leave the reference count of the parent buffer unchanged after the operation completes.
Result:
No more memory leaks due to usage of the Retained[Duplicate|Slice] on an UnreleasableByteBuf object.
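A hedged illustration of the expected behaviour after the fix:

```java
ByteBuf wrapped = Unpooled.buffer(16);
ByteBuf unreleasable = Unpooled.unreleasableBuffer(wrapped);

int before = wrapped.refCnt();
ByteBuf dup = unreleasable.retainedDuplicate();
// The wrapped buffer's reference count must be unchanged; an increment could
// never be undone through the unreleasable wrapper and would leak.
assert wrapped.refCnt() == before;
```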
Motivation:
https://tools.ietf.org/html/rfc7230#section-3.3.2 states that a 204 response MUST NOT include a Content-Length header. If the HTTP version permits keep-alive, these responses should be treated as keeping the connection alive even if there is no Content-Length header.
Modifications:
- HttpServerKeepAliveHandler#isSelfDefinedMessageLength should account for 204 responses
Result:
Fixes https://github.com/netty/netty/issues/6549.
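A hedged sketch of the rule; the surrounding handler logic is omitted and `response` is a placeholder HttpResponse:

```java
// A 204 response defines its own (empty) message length, so the connection may
// stay open even though no Content-Length header is present.
boolean selfDefined =
        response.status().code() == HttpResponseStatus.NO_CONTENT.code()
        || HttpUtil.isContentLengthSet(response)
        || HttpUtil.isTransferEncodingChunked(response);
```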
Motivation:
Recently DnsServerAddressStreamProvider was introduced to allow control, for each query, over which DNS server should be used for resolution, so the local host's default DNS server configuration can be respected. However resolver-dns also accepts a stream of DNS servers to use by default, but this stream is not host name aware. This creates an ambiguity as to which method is used to determine the DNS server to use during resolution, and in which order. We can remove this ambiguity and provide a more general API by just supporting DnsServerAddressStreamProvider.
Modifications:
- Remove the fixed DnsServerAddresses and instead only accept a DnsServerAddressStreamProvider.
- Add utility methods to help use DnsServerAddressStreamProvider for a single entry, a list of entries, and get the default for the current machine.
Result:
Fixes https://github.com/netty/netty/issues/6573.
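A hedged usage sketch of the resulting API shape; the builder method name is as assumed here:

```java
// Resolution is configured with a DnsServerAddressStreamProvider, which can pick
// name servers per host name; the platform default mirrors the machine's own
// DNS configuration.
DnsNameResolver resolver = new DnsNameResolverBuilder(eventLoop)
        .channelType(NioDatagramChannel.class)
        .nameServerProvider(DnsServerAddressStreamProviders.platformDefault())
        .build();
```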
Motivation:
codec-http2 has some helper methods to write to a ByteBuf in big endian fashion. This is the default byte order for ByteBuf, so these helper methods are not necessary.
Modifications:
- remove writeUnsignedInt and writeUnsignedShort
Result:
codec-http2 has less ByteBuf helper methods which are not necessary.
Motivation:
HTTP/2 supports two ways to start on a cleartext (non-TLS) TCP connection:
the HTTP/1.1 upgrade and the prior knowledge approach.
Currently, the http2-server example only supports starting via upgrade.
We can do a simple dispatch by peeking at the first inbound bytes to see
whether they match the prior knowledge preface or not and then determine
which handlers to set up in the pipeline.
Modifications:
Add ClearTextHttp2ServerUpgradeHandler to support starting HTTP/2 over
cleartext with both approaches, and update example/http2-server to support
this functionality.
Result:
Netty HTTP/2 and the example http2-server accept both ways of starting
HTTP/2 over cleartext.
Fixed memory leak problem
Update fields to final
Rename ClearText to cleartext
Addressed comments for code improvement
- Always prefer static, final, and private if possible
- Add UnstableApi annotation
- Used EmbeddedChannel.readInbound instead of unhandled inbound handler
- More assertions
Update javadoc for CleartextHttp2ServerUpgradeHandler
Rename ClearTextHttp2ServerUpgradeHandler to CleartextHttp2ServerUpgradeHandler
Removed redundant code for configuring the pipeline
nit: PriorKnowledgeHandler
Removed Mockito.spy, investigate conn state instead
Add Http2UpgradeEvent
Check null of the constructor arguments
Rename Http2UpgradeEvent to PriorKnowledgeUpgradeEvent
Update unit test
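A hedged sketch of the resulting server pipeline setup; http2Handler stands in for whatever Http2ConnectionHandler the server normally installs, and only one of the two paths will actually add it:

```java
final Http2ConnectionHandler http2Handler = buildHttp2Handler(); // hypothetical factory

HttpServerCodec sourceCodec = new HttpServerCodec();
HttpServerUpgradeHandler upgradeHandler = new HttpServerUpgradeHandler(
        sourceCodec,
        protocol -> AsciiString.contentEquals(Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME, protocol)
                ? new Http2ServerUpgradeCodec(http2Handler) // h2c via HTTP/1.1 Upgrade
                : null);

// Peeks at the first inbound bytes: if they match the prior knowledge preface the
// HTTP/2 handler is installed directly, otherwise the upgrade path is taken.
ch.pipeline().addLast(new CleartextHttp2ServerUpgradeHandler(
        sourceCodec, upgradeHandler, http2Handler));
```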
Motivation:
Conscrypt is a Java Security provider that wraps OpenSSL (specifically BoringSSL). It's a possible alternative to Netty-tcnative that we should explore. So this commit is just to enable us to further investigate its use.
Modifications:
Modifying the SslContext creation path to support the Conscrypt provider.
Result:
Netty will support OpenSSL via the Conscrypt provider.
Motivation:
gRPC (and potentially other libraries) has an optimized header processor that requires direct access to the HpackDecoder.
Modifications:
Make the HpackDecoder and its constructors public.
Result:
Fixes #6579
Motivation:
We should limit the size of the allocated outbound buffer to MAX_ENCRYPTED_PACKET_LENGTH to ensure we do not cause an OOME when the user tries to encrypt a very big buffer.
Modifications:
Limit the size of the allocated outbound buffer to MAX_ENCRYPTED_PACKET_LENGTH
Result:
Fixes [#6564]
Motivation:
UnsafeByteBufUtil has some bugs related to using an incorrect index, and also omitting the array parameter when dealing with byte[] objects. There is also some simplification possible with respect to type casting, and minor formatting consistency issues.
Modifications:
- Ensure indexing is correct when dealing with native memory
- Fix the native access and endianness for the medium/unsigned medium methods
- Ensure array is used when dealing with heap memory
- Remove unnecessary casts when using long
- Fix formatting and alignment
Result:
UnsafeByteBufUtil is more correct and won't access direct memory when heap arrays are used.
Motivation:
When UNSAFE.allocateMemory returns an address whose high bit is set we currently throw an IllegalArgumentException. This is not correct, as it may legitimately return a negative number on at least SPARC.
Modifications:
- Allow passing in a negative memoryAddress
- Add unit tests
Result:
Correctly validate the memoryAddress and so also work on SPARC as expected. Fixes [#6574].
Motivation:
There are two files that still use `System.out.println` to log their status.
Modification:
Replace `System.out.println` with a `debug` call on an `InternalLogger` obtained from `InternalLoggerFactory`.
Result:
A logger obtained from `InternalLoggerFactory` is introduced in `AbstractMicrobenchmark` and `AbstractSharedExecutorMicrobenchmark`, replacing the direct printing to standard out.
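A hedged sketch of the replacement pattern; the field placement and message text are illustrative:

```java
private static final InternalLogger logger =
        InternalLoggerFactory.getInstance(AbstractMicrobenchmark.class);

// Instead of System.out.println(...):
logger.debug("Using class path: {}", classPath);
```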
Motivation:
Currently Netty receives the HTTP request as a ByteBuf and stores it as a CharSequence in HttpObjectDecoder. During this operation, every character in the ByteBuf is moved into a char[] and should keep its encoding.
But in the process() function, the cast from byte to char does not account for the MSB (sign bit), so a value over 127 can be cast to the wrong value (e.g. the byte 0xEC becomes the char 0xFFEC). This is a type-casting bug.
Modification:
Fix type casting
Result:
Non-Latin characters now work correctly.
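A hedged illustration of the sign-extension problem and the masking fix:

```java
byte b = (byte) 0xEC;
char wrong = (char) b;          // 0xFFEC: the sign bit is propagated into the char
char right = (char) (b & 0xFF); // 0x00EC: mask first so the value stays in 0-255
```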
Motivation:
The widely used SSL implementation OpenSSL already supports the Heartbeat extension, both sending and responding to heartbeat messages. But since Netty does not recognize that extension as a valid packet, peers are not able to use it.
Modification:
Update SslUtils.java to recognize the Heartbeat extension as a valid TLS packet.
Result:
With this change, software using Netty + OpenSSL will be able to respond to TLS heartbeat requests (actually taken care of by OpenSSL, with no extra implementation needed from clients).
Motivation
Http2StreamChannel ignores the options of its parent channel when being created. That leads to surprising results when, for example, an unpooled allocator is silently replaced with the pooled allocator (the default setting).
Modification
Copy parent channel's options over to the Http2StreamChannel.
Result
Channel options are now consistent between Http2StreamChannel and its parent channel. Newly added test passes on this branch and fails on master. Fixes #6551.
Motivation:
The code accidentally passes the channel twice instead of the value, resulting in logs like:
Failed to set channel option 'SO_SNDBUF' with value '[id: 0x2c5b2eb4]' for channel '[id: 0x2c5b2eb4]'
Modifications:
Pass the value instead of the channel where it is needed.
Result:
The log now shows the actual value:
Failed to set channel option 'SO_SNDBUF' with value '0' for channel '[id: 0x9bd3c5b8]'
Motivation:
The contract of `ByteBuf.writeBytes(ByteBuf src)` is such that it will
throw an `IndexOutOfBoundsException` if `src.readableBytes()` is greater than
`this.writableBytes()`. The EmptyByteBuf class will throw the exception,
even if the source buffer has zero readable bytes, in violation of the
contract.
Modifications:
Use the helper method `checkLength(..)` to check the length and throw
the exception, if appropriate.
Result:
Conformance with the stated behavior of ByteBuf.
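A hedged illustration of the contract, using Unpooled buffers for brevity:

```java
ByteBuf empty = Unpooled.EMPTY_BUFFER;

// Nothing to copy, so this must succeed instead of throwing.
empty.writeBytes(Unpooled.EMPTY_BUFFER);

// One readable byte exceeds the zero writable bytes of the empty buffer,
// so this must throw IndexOutOfBoundsException.
empty.writeBytes(Unpooled.copiedBuffer("x", CharsetUtil.UTF_8));
```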
Motivation:
The updated HTTP/1.x RFC allows for header values to be CSV and separated by OWS [1]. CombinedHttpHeaders should remove this OWS on insertion.
[1] https://tools.ietf.org/html/rfc7230#section-7
Modification:
Make CombinedHttpHeaders account for the OWS and strip it so it is not returned to the user as part of the value.
Result:
Fixes#6452
Motivation:
We should use SystemPropertyUtil to access system properties and so always handle SecurityExceptions.
Modifications:
Use SystemPropertyUtil everywhere.
Result:
Better and consistent handling of SecurityException.
Motivation:
In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.
Modification:
Add a new test method to XmlFrameDecoder to assert that the reframing works as described.
Result:
New test method is added to XmlFrameDecoder
Motivation:
ChunkedWriteHandler queues written messages and actually writes them
when flush is called. In its doFlush method, it needs to flush after
each chunk is written to preserve memory. However, non-chunked messages
(those that aren't of type ChunkedInput) are treated in the same way,
which means that flush is called after each message is written.
Modifications:
Moved the call to flush() inside the if block that tests if the message
is an instance of ChunkedInput. To ensure flush is called at least once,
the existing boolean flushed is checked at the end of doFlush. This
check was previously in ChunkedWriteHandler.flush(), but wasn't checked in
other invocations of doFlush, e.g. in channelInactive.
Result:
When this handler is present in a pipeline, a series of non-chunked
messages will be flushed as the developer intended.
Motivation:
We used some deprecated Mockito methods.
Modifications:
- Replace deprecated method usage
- Some cleanup
Result:
No more usage of deprecated Mockito methods. Fixes [#6482].