Motivation:
There is a possibility to end up with a StackOverflowError when trying to resolve authoritative nameservers because of incorrectly wrapping the AuthoritativeDnsServerCache.
Modifications:
Ensure we don't end up with an overflow due to wrapping
Result:
Fixes https://github.com/netty/netty/issues/10246
Motivation:
`testSwallowedReadComplete(...)` may occasionally fail with an AssertionError. The relevant stack trace is as follows:
```
java.lang.AssertionError: expected:<IdleStateEvent(READER_IDLE, first)> but was:<null>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at io.netty.handler.flow.FlowControlHandlerTest.testSwallowedReadComplete(FlowControlHandlerTest.java:478)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:89)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:41)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:542)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:770)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:464)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210)
```
The `readerIdleTime` of the `IdleStateHandler` and the thread sleep time before `EmbeddedChannel.runPendingTasks()` are both 100ms. If `userEvents.poll()` happens before `userEvents.add(...)`, or no `IdleStateEvent` is fired at all, the test fails.
Modification:
Sleep for a little more time before running all pending tasks in the `EmbeddedChannel`.
Result:
The test no longer fails intermittently.
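For illustration, a rough sketch of the timing involved, assuming a test shape similar to the one described above (the handler setup and the `userEvents` queue are assumptions, not the exact test code):
```
Queue<Object> userEvents = new LinkedList<>();
EmbeddedChannel channel = new EmbeddedChannel(
        new FlowControlHandler(),
        new IdleStateHandler(100, 0, 0, TimeUnit.MILLISECONDS),
        new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
                userEvents.add(evt);
            }
        });

// Sleeping exactly readerIdleTime (100ms) races with the idle timeout;
// sleeping noticeably longer makes the READER_IDLE event reliably due.
Thread.sleep(200);
channel.runPendingTasks();
assertEquals(IdleStateEvent.FIRST_READER_IDLE_STATE_EVENT, userEvents.poll());
```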
Motivation:
`containsValue()` checks each of the comma-separated values of the given header, so we need to use it instead of `contains()` for the `Transfer-Encoding` header to cover the case where multiple values are defined, e.g. `Transfer-Encoding: gzip, chunked`.
Modification:
Change from `contains()` to `containsValue()` in `HttpUtil.isTransferEncodingChunked()` method.
Result:
Fixes #10320
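For illustration, a minimal sketch of the difference (header value chosen to show the multi-value case):
```
HttpHeaders headers = new DefaultHttpHeaders();
headers.set(HttpHeaderNames.TRANSFER_ENCODING, "gzip, chunked");

// contains(..) compares the complete header value and misses "chunked" here
headers.contains(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED, true);      // false
// containsValue(..) checks each comma-separated value individually
headers.containsValue(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED, true); // true
```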
Motivation:
Currently we pass custom websocket handshake response headers to a `WebSocketServerHandshaker`, but they can contain reserved headers (e.g. Connection, Upgrade, Sec-WebSocket-Accept), which leads to duplication because we use `response.headers().add(..)` instead of `response.headers().set(..)`.
Modification:
In each implementation from `WebSocketServerHandshaker00` to `WebSocketServerHandshaker13`, replace `add(..)` with `set(..)` for reserved response headers.
Result:
Less error-prone
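A minimal sketch of the change for the reserved headers (the `accept` value is a placeholder):
```
// add(..) would append a second value if the caller already supplied these headers;
// set(..) replaces any caller-supplied value instead.
response.headers()
        .set(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET)
        .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.UPGRADE)
        .set(HttpHeaderNames.SEC_WEBSOCKET_ACCEPT, accept);
```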
Motivation:
A new GraalVM based on JDK 11 was released, adding Java 11 support
Modification:
- Update GraalVM JDK 8 version
- Add GraalVM JDK 11 support
Result:
Build with GraalVM JDK 11 and use latest GraalVM JDK 8 version
Motivation:
We should include as many details as possible when throwing an IllegalArgumentException because of overflow in CompositeByteBuf
Modifications:
Add more details and factor out the check into a static method to share code
Result:
Make it more clear why an operation failed
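A hedged sketch of the kind of shared check meant here (method name and message are illustrative, not the exact code):
```
private static void checkForOverflow(int capacity, int readableBytes) {
    if (capacity + readableBytes < 0) {
        throw new IllegalArgumentException("Can't increase by " + readableBytes +
                " as capacity(" + capacity + ") would overflow " + Integer.MAX_VALUE);
    }
}
```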
Motivation:
SslHandler currently throws a general SSLException if a wrap attempt
fails due to the SSLEngine being closed. If writes are queued, the
failure typically requires more investigation to track down the
original failure from a previous event. We may have a more informative
rationale for the failure, and so we should use it.
Modifications:
- SslHandler#wrap to use failure information from the handshake or prior
transport closure if available
Result:
More informative exceptions from SslHandler#wrap if the SSLEngine has
been previously closed.
Motivation:
There is a tiny comment error in Recycler
Modifications:
Fix very tiny comment error in Recycler
Result:
Correct the comment about dropping in WeakOrderQueue#add
Motivation:
Once a CNAME loop is detected we can fail fast and so reduce the number of queries.
Modifications:
Fail fast once a CNAME loop is detected
Result:
Fail fast
Motivation:
The `FlowControlHandler` may cache received messages in a queue in order to do flow control. However, if this handler is manually removed from the pipeline at runtime, those cached messages might never be passed to the next channel handler.
Modification:
Dequeue all these cached messages and call `ChannelHandlerContext.fireChannelRead(...)` in `handlerRemoved(...)` method.
Result:
Avoid losing the received messages.
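A minimal sketch of the idea, assuming the handler keeps its backlog in a `queue` field:
```
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
    // forward everything that was held back so no message is lost
    Object msg;
    while ((msg = queue.poll()) != null) {
        ctx.fireChannelRead(msg);
    }
}
```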
Motivation:
We used a HashSet to detect CNAME cache loops, which requires allocations. We can use an algorithm that doesn't need any allocations.
Modifications:
Use an algorithm that doesn't need allocations
Result:
Fewer allocations on the slow path
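As an illustration, a generic allocation-free cycle check over the cached name-to-CNAME mappings (two cursors advancing at different speeds; not necessarily the exact algorithm used in the change):
```
String slow = hostname;
String fast = hostname;
for (;;) {
    String fastNext = cache.get(fast);   // assumed name -> CNAME lookup
    if (fastNext == null) {
        break;                           // chain terminates, no loop
    }
    fast = cache.get(fastNext);
    slow = cache.get(slow);
    if (fast == null) {
        break;                           // chain terminates, no loop
    }
    if (fast.equals(slow)) {
        throw new IllegalStateException("CNAME loop detected for " + hostname);
    }
}
```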
Motivation:
`SslHandlerTest.testSessionTicketsWithTLSv12AndNoKey` does not require
BoringSSL and works with OpenSSL as well.
Modifications:
- Remove assume statement that expected BoringSSL;
Result:
Test works for any implementation of `OPENSSL` provider.
Motivation:
To make sure we don't crash, we should check that the SSL pointer was not freed before using it.
Modifications:
Add missing `isDestroyed()` checks
Result:
Ensure we don't crash due to usage of a freed pointer.
Motivation:
The linux-aarch64 packages are not declared in netty-bom. There are no consistency checks for netty bom, hence it can easily miss updates when artifacts are added.
Modifications:
- Add declarations.
- Modify netty-all to depend on netty-bom for version declarations,
thus requiring netty-bom to be up to date.
Result:
Be able to reference aarch64 packages without an explicit version. The content of netty-all is guaranteed to be declared in netty-bom, adding a safety net.
Signed-off-by: Robert Varga <robert.varga@pantheon.tech>
Motivation:
Make the existing setter `ReferenceCountedOpenSslContext.setUseTasks` public.
Modification:
Added the `public` modifier and removed the "for tests only" comment
Result:
Fixes #10288
Motivation:
We should respect jdk.tls.client.enableSessionTicketExtension and jdk.tls.server.enableSessionTicketExtension when using the native SSL implementation as well, to make its usage easier and more consistent. These properties were introduced in JDK 13:
https://seanjmullan.org/blog/2019/08/05/jdk13
Modifications:
Check if the properties are set to true and if so enable tickets
Result:
Easier to enable tickets and be more consistent
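A minimal sketch of the check, assuming the Netty-internal `SystemPropertyUtil` helper is used for the lookup:
```
// enable session tickets for the native engine if the JDK properties opt in
if (SystemPropertyUtil.getBoolean("jdk.tls.client.enableSessionTicketExtension", false)) {
    // enable ticket support for client contexts
}
if (SystemPropertyUtil.getBoolean("jdk.tls.server.enableSessionTicketExtension", false)) {
    // enable ticket support for server contexts
}
```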
Motivation:
AbstractCoalescingBufferQueue had a bug which could lead to an empty queue that still reports bytes left. This was due to the fact that we decremented the pending bytes before draining the queue one by one. The problem is that while the queue is drained we may notify a promise, which may add buffers to the queue again, and for these we never decrement the bytes while we drain.
Modifications:
- Decrement the pending bytes every time we drain a buffer from the queue
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10286
Motivation:
There are no DNS client examples in run-example.sh at the moment.
Modification:
Add DNS client examples for run-example.sh.
Result:
Help run the examples.
Motivation:
BoringSSL supports automatically managing the session tickets to be used and also rotating them. This is often preferred by users as it removes some complexity. We should support making use of this.
Modifications:
- Allow setSessionTickets() to be called without an argument or with an empty array
- Add tests
Result:
Easier usage of session tickets
Motivation:
`FileChannel.read()` may throw an IOException. We must deal with this in case an I/O error occurs.
Modification:
Place the `FileChannel.read()` method call in the `try-finally` block.
Result:
Avoid an fd leak.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
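A minimal sketch of the pattern (file and buffer names are placeholders):
```
FileChannel fileChannel = new RandomAccessFile(file, "r").getChannel();
try {
    // read(..) may throw an IOException; without the finally block the
    // FileChannel (and its fd) would never be closed in that case
    fileChannel.read(byteBuffer);
} finally {
    fileChannel.close();
}
```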
Motivation:
The defined classifier for aarch64 was not correct
Modifications:
Fix classifier
Result:
Be able to correctly include the aarch64 native libs
Motivations
-----------
HttpPostStandardRequestDecoder was changed in 4.1.50 to provide its own
ByteBuf UrlDecoder. Prior to this change, it was using the decodeComponent
method from QueryStringDecoder which decoded + characters to
whitespaces. This behavior needs to be preserved to maintain backward
compatibility.
Modifications
-------------
Changed HttpPostStandardRequestDecoder to detect + bytes and decode them
to whitespaces. Added a test.
Results
-------
Addresses issue #10284
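A hedged sketch of the byte-level handling (not the exact decoder code):
```
// inside the ByteBuf based url-decoding loop
byte b = buf.readByte();
if (b == '+') {
    out.writeByte(' ');   // '+' must keep decoding to a space for backward compatibility
} else if (b == '%') {
    // decode the %XX escape sequence as before
} else {
    out.writeByte(b);
}
```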
Motivation:
OpenSslSession.getLocalCertificates() and getLocalPrincipal() must return null on the client side if mTLS is not used, as stated in the API documentation. At the moment this is not always the case.
Modifications:
- Ensure we only return non-null if mTLS is used
- Add unit tests
Result:
Follow SSLSession API contract
Motivation:
The nameserver that should / must be used to resolve a CNAME may be different from the nameserver that was selected for the hostname to resolve. Failing to select the correct nameserver may result in problems during resolution.
Modifications:
Use the correct DnsServerAddressStream for CNAMEs
Result:
Always use the correct DnsServerAddressStream for CNAMEs and so fix resolution failures which could occur when CNAMEs are in the mix that use a different domain than the original hostname that we try to resolve
Motivation:
After searching the whole Netty project, I found that the private method `translateHeader(...)`, which has an empty body, is never actually used, so it can be safely removed.
Modification:
Just remove this unused method.
Result:
Clean up the code.
Motivation:
`io.netty.handler.codec.http2.Http2Stream.State` is never used in DefaultHttp2LocalFlowController.java, and `io.netty.handler.codec.http2.HpackUtil.equalsConstantTime` is never used in HpackStaticTable.java.
Modification:
Just remove these unused imports.
Result:
Make imports cleaner.
Motivation:
`Date`, `Expires`, and `Set-Cookie` headers are being generated with a 1-digit day of month,
e.g. `Sun, 6 Nov 1994 08:49:37 GMT`. RFC 2616 specifies that `Date` and `Expires` headers should
use "a fixed-length subset of that defined by RFC 1123" which includes a 2-digit day of month.
RFC 6265 is lax in its specification of the `Set-Cookie` header and permits a 2-digit day of month.
See: https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html
See: https://tools.ietf.org/html/rfc1123#page-55
See: https://tools.ietf.org/html/rfc6265#section-5.1.1
Modifications:
- Update `DateFormatter` to correctly implement RFC 2616 headers
Result:
```
Date: Sun, 06 Nov 1994 08:49:37 GMT
Expires: Sun, 06 Nov 1994 08:49:37 GMT
Set-Cookie: id=a3fWa; Expires=Sun, 06 Nov 1994 08:49:37 GMT
```
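For example (the epoch value corresponds to the RFC sample date; output comments show before/after):
```
// 784111777000L == Sun, 06 Nov 1994 08:49:37 GMT
String formatted = DateFormatter.format(new Date(784111777000L));
// before: "Sun, 6 Nov 1994 08:49:37 GMT"
// after:  "Sun, 06 Nov 1994 08:49:37 GMT"
```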
Motivation:
GlobalEventExecutor#addTask may be called during SingleThreadEventExecutor shutdown.
This may result in a blocking call, because GlobalEventExecutor#taskQueue is a BlockingQueue.
Modifications:
Add allowBlockingCallsInside configuration for GlobalEventExecutor#addTask.
Result:
Fixes #10257.
When BlockHound is installed, GlobalEventExecutor#addTask is not reported as a blocking call.
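Conceptually the allowance looks as follows (in Netty it is registered through the `BlockHoundIntegration` SPI rather than inline):
```
BlockHound.builder()
          .allowBlockingCallsInside("io.netty.util.concurrent.GlobalEventExecutor", "addTask")
          .install();
```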
Motivation:
JsonObjectDecoderTest included 3 println(...) calls which were leftovers from debugging.
Modifications:
Removed println(...)
Result:
Cleanup
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
[DNS-over-TLS (DoT)](https://tools.ietf.org/html/rfc7858.html) encrypts DNS queries and sends them over a TLS connection to make sure queries are secure in transit.
[TCP DNS](https://tools.ietf.org/html/rfc7766) sends DNS queries over a TCP connection (unencrypted).
Modification:
Add DNS-over-TLS (DoT) Client Example which uses TLSv1.2 and TLSv1.3.
Add TCP DNS Client Example
Result:
DNS-over-TLS (DoT) Client Example
TCP DNS Client Example
Motivation
- Recycler stack and delayed queue drop ratio can only be configured with
the same value. The overall drop ratio is ratio^2.
- #10251 shows that enabling drop in `WeakOrderQueue` may introduce
performance degradation. Though the final reason is not clear yet,
it would be better to add an option to configure the delayed queue drop ratio separately.
Modification
- "io.netty.recycler.delayedQueue.ratio" as the drop ratio of delayed queue
- default "delayedQueue.ratio" is same as "ratio"
Results
Able to configure recycler delayed queue drop ratio separately
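For example (set before the `Recycler` class is initialized; values are illustrative):
```
// configure the stack drop ratio and the delayed-queue drop ratio independently
System.setProperty("io.netty.recycler.ratio", "8");
System.setProperty("io.netty.recycler.delayedQueue.ratio", "16");
```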
Motivation
1. It is impossible to collect every object because RATIO is always >= 1 after
`safeFindNextPositivePowerOfTwo`
2. Enabling object dropping in `WeakOrderQueue` (commit:
71860e5b94) increases the overall drop ratio. We
can control the overall drop ratio more precisely by using `io.netty.recycler.ratio` directly.
Modification
- Remove `safeFindNextPositivePowerOfTwo` before setting the ratio
Results
Able to disable dropping when recycling objects
Motivation:
We can't reuse the ChannelPromise as it will cause an error when trying to fulfill it multiple times.
Modifications:
- Use a new promise and chain it with the old one
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10240
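A minimal sketch of the chaining (variable names are placeholders):
```
// don't pass the caller's promise down again; use a fresh one and propagate its outcome
ChannelPromise newPromise = ctx.newPromise();
newPromise.addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        originalPromise.trySuccess();
    } else {
        originalPromise.tryFailure(f.cause());
    }
});
ctx.write(msg, newPromise);
```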
Motivation:
`io.netty.handler.codec.http2.Http2CodecUtil.DEFAULT_PRIORITY_WEIGHT` is never used in DefaultHttp2ConnectionEncoder.java.
Modification:
Just remove this unused import.
Result:
Make DefaultHttp2ConnectionEncoder.java's imports clean.
Motivation
An NPE was reported in #10245, caused by a regression introduced in
#8939. This in particular affects ByteToMessageDecoders that use the
COMPOSITE_CUMULATOR.
Modification
- Unwrap WrappedCompositeByteBufs passed to
CompositeByteBuf#addFlattenedComponents(...) method before accessing
internal components field
- Extend unit test to cover this case and ensure more of the
CompositeByteBuf tests are also run on the wrapped variant
Results
Fixes #10245
Motivation:
EC is better than RSA because of its small key size while remaining efficient and secure, which makes it perfect for testing purposes.
Modification:
Added support for specifying an algorithm (EC or RSA) in the constructors for key pair generation. The default key size is 256 bits, as promulgated by NSA Suite B.
Result:
Able to generate a self-signed certificate of EC or RSA.
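The underlying idea, sketched with plain JCA (the actual `SelfSignedCertificate` constructor arguments are not shown here):
```
// a 256-bit EC key pair instead of the much larger RSA keys generated before
KeyPairGenerator keyGen = KeyPairGenerator.getInstance("EC");
keyGen.initialize(256, new SecureRandom());
KeyPair keyPair = keyGen.generateKeyPair();
```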
Motivation:
We should not use Unpooled to allocate buffers if possible, to ensure we can make use of pooling etc.
Modifications:
- Only allocate a buffer if really needed
- Use the ByteBufAllocator of the offered ByteBuf
- Ensure we don't use buffer.copy() but explicitly allocate a buffer and then copy into it, to not hit the limit of maxCapacity()
Result:
Improve allocations
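A hedged sketch of the allocation change (variable names are illustrative):
```
// allocating explicitly from the offered buffer's allocator keeps pooling and
// avoids inheriting the maxCapacity() limit that copy() would impose
ByteBuf aggregated = offered.alloc().buffer(offered.readableBytes());
aggregated.writeBytes(offered, offered.readerIndex(), offered.readableBytes());
```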
Motivation:
We need to release all ByteBufs that we allocate to prevent leaks. We missed releasing the ByteBufs that are used for aggregation in two cases.
Modifications:
Add release() calls
Result:
No more memory leak in HttpPostMultipartRequestDecoder
Motivation:
We should only log with warn level if something really critical happens, as otherwise we may spam the logs and confuse the user.
Modifications:
- Change log level to debug for most cases
Result:
Less noisy logging
Motivation:
We need to detect CNAME loops during lookups in the DnsCnameCache, as otherwise we may try to follow CNAMEs forever.
Modifications:
- Correctly detect CNAME loops in the cache
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10220
Motivation:
We are far behind with the version of Conscrypt we are using during testing. We should ensure we use the latest.
Modifications:
- Update conscrypt dependency
- Ensure we use conscrypt provider in tests
- Add workarounds for conscrypt bugs in testsuite
Result:
Use latest Conscrypt release
Motivations
-----------
There is no need to copy the "offered" ByteBuf in HttpPostRequestDecoder
when the first HttpContent ByteBuf is also the last (LastHttpContent) as
the full content can immediately be decoded. No extra bookkeeping needed.
Modifications
-------------
HttpPostMultipartRequestDecoder
- Retain the first ByteBuf when it is both the first HttpContent offered
to the decoder and is also LastHttpContent.
- Retain slices of the final buffers values
Results
-------
ByteBufs of FullHttpMessage decoded by HttpPostRequestDecoder are no longer
unnecessarily copied. Attributes are extracted as retained slices when
the content is multi-part. Non-multi-part content continues to return
Unpooled buffers.
Partially addresses issue #10200
Motivation:
`AbstractHttpData.checkSize` may throw an IOException if we set a max size limit via `AbstractHttpData.setMaxSize`. However, if this exception happens, the `AbstractDiskHttpData.file` and the `AbstractHttpData.size` are still modified. In other words, it may break the failure atomicity here.
Modification:
Just move up the size check.
Result:
Keep the failure atomicity even if `AbstractHttpData.checkSize` fails.
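A minimal sketch of the reordering (field and method names follow the description above):
```
// check first, so a failing checkSize(..) leaves size (and the backing file) untouched
long newSize = size + buffer.readableBytes();
checkSize(newSize);
// ... only now update internal state and write to the file
size = newSize;
```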