Motivation:
IntelliJ IDEA may generate a local folder `.shelf/` for version control.
For more information, see
https://www.jetbrains.com/help/idea/shelving-and-unshelving-changes.html
Modifications:
- Add `.shelf/` folder to the `.gitignore` file;
Result:
IntelliJ IDEA's `.shelf/` folder is ignored by git.
Motivation:
From a recent benchmark using gRPC-Java based on Netty's HTTP/2, it appears that gRPC-Java prefers `ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` over `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`, since it uses Netty's HTTPS cipher list as is. Both are considered safe, but `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` has a good chance of getting a more optimized implementation (e.g. AES-NI). When running both on a GCP Intel Haswell VM, `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` spent 3x the CPU time of `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`. (Note that this VM supports AES-NI.)
The cipher suites listed under `Intermediate compatibility (recommended)` in [Security/Server Side TLS](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility) follow a cipher preference that is aligned with this PR:
```
0x13,0x01 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
0x13,0x02 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
0xC0,0x2B - ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD
0xC0,0x2F - ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD
0xC0,0x2C - ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
0xC0,0x30 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
0xCC,0xA9 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
0xCC,0xA8 - ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
0x00,0x9E - DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD
0x00,0x9F - DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD
```
Modification:
Move `AES_128_GCM_SHA256` up in the HTTPS CIPHERS list so that it gets priority.
Result:
When connecting to a server that supports both `ECDHE_ECDSA_WITH_AES_128_GCM_SHA256` and `ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256` and respects the client's cipher suite priority, this saves significant CPU time on machines with AES-NI support.
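For illustration, here is a minimal sketch (an assumption for demonstration only; the actual change simply reorders the cipher names in Netty's HTTPS cipher list) of how a client could pin the same preference explicitly via `SslContextBuilder`:
```
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

import javax.net.ssl.SSLException;
import java.util.Arrays;

public final class PreferAesGcm {
    // Build a client SslContext whose cipher preference tries the AES-NI friendly
    // AES-128-GCM suites before the ChaCha20-Poly1305 suites.
    public static SslContext clientContext() throws SSLException {
        return SslContextBuilder.forClient()
                .ciphers(Arrays.asList(
                        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
                        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
                        "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
                        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"))
                .build();
    }
}
```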
Motivation:
TLSv1.3 is not strictly limited to Java 11+ anymore, as different vendors have backported TLSv1.3 to Java 8 as well. We should ensure that the detection of whether TLSv1.3 is supported does not depend on the Java version in use.
Modifications:
- Add SslProvider.isTlsv13Supported(...) and use it in tests to detect whether we should run tests against TLSv1.3 as well
- Adjust the test case to work on the latest JDK 8 release as well
Result:
Correct detection of TLSv1.3 support even if Java version < 11.
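A small usage sketch (assuming the new helper is exposed as a static method on `SslProvider`): tests can gate TLSv1.3 runs on actual provider support instead of the Java version.
```
import io.netty.handler.ssl.SslProvider;

public final class Tls13Check {
    public static void main(String[] args) {
        // True on Java 11+, but also on Java 8 distributions that backport TLSv1.3.
        boolean jdkTls13 = SslProvider.isTlsv13Supported(SslProvider.JDK);
        // For OPENSSL this depends on the netty-tcnative / BoringSSL build in use.
        boolean opensslTls13 = SslProvider.isTlsv13Supported(SslProvider.OPENSSL);
        System.out.println("JDK: " + jdkTls13 + ", OPENSSL: " + opensslTls13);
    }
}
```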
Motivation:
We shouldn't call incSmallAllocation() in a synchronized block, as it is backed by a concurrent data structure.
Modifications:
Move the call to incSmallAllocation() out of the synchronized block
Result:
Minimize scope of synchronized block
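An illustrative sketch of the pattern (names are placeholders mirroring the description, not PoolArena's actual code): the counter is backed by a concurrent data structure, so it can be incremented after the lock is released.
```
import java.util.concurrent.atomic.LongAdder;

final class SmallAllocationCounter {
    private final LongAdder smallAllocations = new LongAdder();

    void allocateSmall() {
        synchronized (this) {
            // ... only the work that actually needs the lock ...
        }
        // Safe to call outside the synchronized block: LongAdder is thread-safe.
        incSmallAllocation();
    }

    void incSmallAllocation() {
        smallAllocations.increment();
    }
}
```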
Motivation:
We should provide details about why an IOOBE was thrown
Modification:
Add IndexOutOfBoundsException error information to the io.netty.util.internal.AppendableCharSequence and io.netty.handler.codec.CodecOutputList classes
Result:
Easier to debug
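A hedged sketch of the pattern (not the exact Netty messages): include the offending index and the valid bound in the exception instead of throwing a bare IndexOutOfBoundsException.
```
final class AppendableBuffer {
    private final char[] chars = new char[64];
    private int pos;

    char charAt(int index) {
        if (index > pos) {
            // The message makes it obvious which index was requested and what the limit was.
            throw new IndexOutOfBoundsException("index: " + index + " must be <= position: " + pos);
        }
        return chars[index];
    }
}
```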
Motivation:
When a switch statement is used we should always define a `default:` so we don't introduce bugs due to fall-through.
Modification:
Add missing `default:`s
Result:
Less error-prone code
Motivation:
For sizes from 512 bytes to chunkSize we use a buddy algorithm. Its
drawback is large internal fragmentation.
Modifications:
1. add SizeClassesMetric and SizeClasses
2. remove tiny size; now we have small, normal and huge sizes
3. rewrite the structure of PoolChunk
4. rewrite the pooled allocation algorithm in PoolChunk
5. when allocating a subpage, use the lowest common multiple of pageSize and
elemSize instead of pageSize
6. add more tests in PooledByteBufAllocatorTest and PoolArenaTest
Result:
Reduce internal fragmentation. Closes #3910
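A small worked example of point 5 above (simplified numbers, not Netty's actual code): sizing a subpage run as the lowest common multiple of pageSize and elemSize removes the per-page remainder that a single-page subpage would waste.
```
public final class SubpageRunSize {
    static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    static long lcm(long a, long b) {
        return a / gcd(a, b) * b;
    }

    public static void main(String[] args) {
        long pageSize = 8192;
        long elemSize = 1280;
        long runSize = lcm(pageSize, elemSize); // 40960 bytes = 5 pages
        // 40960 / 1280 = 32 elements with zero leftover bytes, whereas a single
        // 8192-byte page would fit only 6 elements and waste 512 bytes per page.
        System.out.println(runSize + " bytes hold " + (runSize / elemSize) + " elements exactly");
    }
}
```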
Motivation:
At the moment we don't support session caching on the client side when using the native SSLEngine implementation, and our implementation of SSLSessionContext is incomplete.
Modification:
- Consume netty-tcnative changes to be able to cache session in an external cache
- Add and adjust unit tests to test session caching
- Add an in-memory session cache that is hooked into the native SSLEngine
Result:
Support session caching on the client and server side
Motivation:
The result of `header.size()` is already cached in `headerSize`. There is no need to call it again.
Modification:
Replace the second `header.size()` with `headerSize` directly.
Result:
Improve performance slightly.
Motivation:
The jdk.tls.client.enableSessionTicketExtension property must be respected by the OPENSSL and OPENSSL_REFCNT SslProviders to ensure consistent behavior. Due to a bug this was not the case, and it only worked for OPENSSL_REFCNT but not for OPENSSL.
Modifications:
Move the property check into a static method that is used by both
Result:
Correctly respect jdk.tls.client.enableSessionTicketExtension
Motivation:
At the moment we may report BUFFER_OVERFLOW when wrap(...) fails to consume data but still produces some. This is not correct; we should rather report NEED_WRAP, as we have already produced some data (for example tickets). This way the user will try again without increasing the buffer size, which is more correct and may reduce the number of allocations.
Modifications:
Return NEED_WRAP when we produced some data but did not consume any.
Result:
Fix ReferenceCountedOpenSslEngine.wrap(...) state machine
**Motivation:**
We are interested in building Netty libraries with Ahead-of-time compilation with GraalVM. We saw there was [prior work done on this](https://github.com/netty/netty/search?q=graalvm&unscoped_q=graalvm). We want to introduce a change which will unblock GraalVM support for applications using netty and `netty-tcnative`.
This solves the error [that some others encounter](https://github.com/oracle/graal/search?q=UnsatisfiedLinkError+sslOpCipherServerPreference&type=Issues):
```
Exception in thread "main" java.lang.UnsatisfiedLinkError: io.grpc.netty.shaded.io.netty.internal.tcnative.NativeStaticallyReferencedJniMethods.sslOpCipherServerPreference()I [symbol: Java_io_grpc_netty_shaded_io_netty_internal_tcnative_NativeStaticallyReferencedJniMethods_sslOpCipherServerPreference or Java_io_grpc_netty_shaded_io_netty_internal_tcnative_NativeStaticallyReferencedJniMethods_sslOpCipherServerPreference__]
```
**Modification:**
The root cause of the issue is that in the tcnative libraries, the [`SSL.java` class](783a8b6b69/openssl-dynamic/src/main/java/io/netty/internal/tcnative/SSL.java (L67)) references a native call in the static initialization of the class - the method `sslOpCipherServerPreference()` on line 67 is used to initialize a static variable. But `OpenSsl` also uses [`SSL.class` to check whether `netty-tcnative` is on the classpath before loading the native library](cbe238a95b/handler/src/main/java/io/netty/handler/ssl/OpenSsl.java (L123)).
This is a problem because, with ahead-of-time compilation, referencing the SSL class runs its static initializers and makes the native library call, which cannot succeed because the native library has not been loaded yet - the `SSL` class is being referenced precisely to check whether the library should be loaded in the first place.
**Solution:** The proposed solution is to reference a different class in the `tcnative` package which does not make a native library call during static initialization; `SSLContext` was chosen.
This should have no negative effects other than unblocking the GraalVM use-case.
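A hedged sketch of the idea (not the exact `OpenSsl` code): probe for netty-tcnative via a class whose static initialization performs no native calls, so the probe itself cannot trigger an `UnsatisfiedLinkError` under ahead-of-time compilation.
```
public final class TcnativeProbe {
    static boolean isTcnativeOnClasspath(ClassLoader loader) {
        try {
            // io.netty.internal.tcnative.SSLContext does not call into the native
            // library during static initialization, unlike io.netty.internal.tcnative.SSL.
            Class.forName("io.netty.internal.tcnative.SSLContext", false, loader);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```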
**Result:**
It fixes the UnsatisfiedLinkError. Because this is a common error that people encounter, the change will also help future users.
Motivation:
There was a new netty-tcnative release which we should use. Besides this, the SSLErrorTest was quite fragile and should be adjusted.
Modifications:
Update netty-tcnative and adjust test
Result:
Use latest netty-tcnative release
Motivation:
Since https://github.com/netty/netty/pull/9865 (Netty 4.1.44) the
default behavior of the HttpObjectDecoder has been to reject any HTTP
message that is found to have multiple Content-Length headers when
decoding. This behavior is well-justified as per the risks outlined in
https://github.com/netty/netty/issues/9861, however, we can see from the
cited RFC section that there are multiple possible options offered for
responding to this scenario:
> If a message is received that has multiple Content-Length header
> fields with field-values consisting of the same decimal value, or a
> single Content-Length header field with a field value containing a
> list of identical decimal values (e.g., "Content-Length: 42, 42"),
> indicating that duplicate Content-Length header fields have been
> generated or combined by an upstream message processor, then the
> recipient MUST either reject the message as invalid or replace the
> duplicated field-values with a single valid Content-Length field
> containing that decimal value prior to determining the message body
> length or forwarding the message.
https://tools.ietf.org/html/rfc7230#section-3.3.2
Netty opted for the first option (rejecting as invalid), which seems
like the safest, but the second option (replacing duplicate values with
a single value) is also valid behavior.
Modifications:
* Introduce "allowDuplicateContentLengths" parameter to
HttpObjectDecoder (defaulting to false).
* When set to true, will allow multiple Content-Length headers only if
they are all the same value. The duplicated field-values will be
replaced with a single valid Content-Length field.
* Add new parameterized test class for testing different variations of
multiple Content-Length headers.
Result:
This is a backwards-compatible change with no functional change to the
existing behavior.
Note that the existing logic would result in NumberFormatExceptions
for header values like "Content-Length: 42, 42". The new logic correctly
reports these as IllegalArgumentException with the proper error message.
Additionally note that this behavior is only applied to HTTP/1.1, but I
suspect that we may want to expand that to include HTTP/1.0 as well...
That behavior is not modified here to minimize the scope of this change.
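An illustrative sketch of the behavior enabled by allowDuplicateContentLengths=true (not the decoder's actual code): duplicated Content-Length values are accepted only when they are all identical and are then folded into a single value; anything else is rejected.
```
import java.util.Arrays;
import java.util.List;

public final class ContentLengthFolding {
    static long fold(List<String> fieldValues) {
        long contentLength = -1;
        for (String fieldValue : fieldValues) {
            // A single field value may itself contain a list, e.g. "Content-Length: 42, 42".
            for (String part : fieldValue.split(",")) {
                long parsed = Long.parseLong(part.trim());
                if (contentLength != -1 && parsed != contentLength) {
                    throw new IllegalArgumentException(
                            "Multiple Content-Length values differ: " + fieldValues);
                }
                contentLength = parsed;
            }
        }
        return contentLength;
    }

    public static void main(String[] args) {
        System.out.println(fold(Arrays.asList("42", "42"))); // 42
        System.out.println(fold(Arrays.asList("42, 42")));   // 42
        // fold(Arrays.asList("42", "43")) -> IllegalArgumentException
    }
}
```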
Motivation:
`getEntry(...)` may throw an IndexOutOfBoundsException without any error messages.
Modification:
Add detailed error message corresponding to the IndexOutOfBoundsException while calling `getEntry(...)` in HpackDynamicTable.java.
Result:
Easier to debug
Motivation:
When BlockHound is installed,
ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify
is reported as blocking call.
Modifications:
Add allowBlockingCallsInside configuration for
ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify
Result:
Fixes #10384
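A hedged sketch of the equivalent BlockHound configuration in user code (Netty itself registers this through its BlockHound integration rather than calling the builder directly):
```
import reactor.blockhound.BlockHound;

public final class BlockHoundSetup {
    public static void install() {
        BlockHound.builder()
                .allowBlockingCallsInside(
                        "io.netty.handler.ssl.ReferenceCountedOpenSslClientContext"
                                + "$ExtendedTrustManagerVerifyCallback",
                        "verify")
                .install();
    }
}
```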
Motivation:
`HpackDynamicTable` needs some test cases to ensure it is bug-free.
Modification:
Add unit test for `HpackDynamicTable`.
Result:
Improve test coverage slightly.
Motivation:
BoringSSL behaves differently than OpenSSL and does not include any TLSv1.3 ciphers in the array returned by SSL_get_ciphers(...). This is due to the fact that it also does not allow explicitly configuring which TLSv1.3 ciphers are supported and which are not. To mimic the behaviour expected by the SSLEngine API we should work around this.
Modifications:
- Add a unit test that verifies enabled protocols / ciphers
- Add special handling for BoringSSL and TLSv1.3
Result:
Make behaviour consistent
Motivation:
newResourceLeakDetector(...) did not correctly pass the samplingInterval parameter, so it was ignored.
Modification:
ResourceLeakDetectorFactory.newResourceLeakDetector(Class, int) now uses the second parameter as the sampling interval of the newly created ResourceLeakDetector.
Result:
Fixes #10378
Motivation:
We did not correctly preserve the original cause of the SSLException when all the following are true:
* SSL_read failed
* handshake was finished
* some outbound data was produced during SSL_read (for example an SSL alert) that needs to be picked up by wrap(...)
Modifications:
Ensure we correctly preserve the original cause of the SSLException and rethrow it once we produced all data via wrap(...)
Result:
Be able to understand why we see an error in all cases
Motivation:
Users with Unsafe get MpscChunkedArrayQueue, while users without Unsafe get MpscGrowableAtomicArrayQueue.
Modifications:
MpscChunkedAtomicArrayQueue should be used for users without Unsafe (matching the MpscChunkedArrayQueue behaviour)
Result:
Users without Unsafe now use MpscChunkedAtomicArrayQueue when allocating bounded Mpsc queues
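An illustrative sketch of the selection (the real logic lives inside Netty's PlatformDependent; the JCTools types are the ones named above):
```
import org.jctools.queues.MpscChunkedArrayQueue;
import org.jctools.queues.atomic.MpscChunkedAtomicArrayQueue;

import java.util.Queue;

public final class BoundedMpscQueues {
    static <T> Queue<T> newBoundedMpscQueue(boolean hasUnsafe, int maxCapacity) {
        // Previously the non-Unsafe branch returned an MpscGrowableAtomicArrayQueue.
        return hasUnsafe
                ? new MpscChunkedArrayQueue<T>(maxCapacity)
                : new MpscChunkedAtomicArrayQueue<T>(maxCapacity);
    }
}
```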
Motivation:
Due to a bug in our test we may have dropped data on the floor that was generated during handshaking (or slightly after). This could lead to corrupt state in the engine itself and so fail tests. This is especially true for TLSv1.3, which generates the sessions on the server after the "actual handshake" is done.
Modifications:
Continue with wrap / unwrap until all data is consumed
Result:
Correctly feed all data to the engine during testing
Motivation:
When ApplicationProtocolNegotiationHandler is in the pipeline we should expect that its handshakeFailure(...) method will be able to completely handle the handshake error. At the moment this is not the case, as it only handles the SslHandshakeCompletionEvent but not the exceptionCaught(...) that is also triggered in this case.
Modifications:
- Call handshakeFailure(...) in exceptionCaught(...) and so fix the double notification.
- Add test cases
Result:
Fixes https://github.com/netty/netty/issues/10342
Motivation:
It seems that some users are surprised by the behaviour of DefaultEventExecutor when used within the ChannelPipeline. We should clarify the semantics and also mention UnorderedThreadPoolEventExecutor, which may be more in line with their expectations.
Modifications:
Add a javadocs section about UnorderedThreadPoolEventExecutor and expand the details for DefaultEventExecutor
Result:
Clarify semantics
Motivation:
AbstractDiskHttpData may cause a memory leak when a CompositeByteBuf is used. This happens because we may call copy() but never release the newly created ByteBuf.
Modifications:
- Remove copy() call and just use ByteBuf.getBytes(...) which will internally handle the writing to the FileChannel without any extra copies that need to be released later on.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10354
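A hedged sketch of the pattern (simplified relative to AbstractDiskHttpData): stream the buffer to the FileChannel with getBytes(...) instead of materializing a copy() that would have to be released.
```
import io.netty.buffer.ByteBuf;

import java.io.IOException;
import java.nio.channels.FileChannel;

public final class WriteBufToFile {
    static void write(ByteBuf buf, FileChannel out) throws IOException {
        int index = buf.readerIndex();
        int length = buf.readableBytes();
        int written = 0;
        while (written < length) {
            // Works for CompositeByteBuf as well; no intermediate ByteBuf is created.
            written += buf.getBytes(index + written, out, written, length - written);
        }
    }
}
```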
Motivation:
HttpObjectDecoder and its associated classes make frequent use of
default values for maxInitialLineLength, maxHeaderSize, maxChunkSize,
etc. Today, these defaults are defined in-line in constructors and
duplicated across many classes. This repetition makes errors and
inconsistencies more likely.
Furthermore, due to the current lack of builder support, if a user wants
to change just one of these values (e.g., maxHeaderSize), they are also
required to know and repeat the other default values (e.g.,
maxInitialLineLength and maxChunkSize).
The primary motivation for this change is that, as we are considering adding
another constructor parameter (for multiple content length behavior),
appending this parameter may require some users to have prior knowledge
of the default initialBufferSize, and it would be cleaner to allow them
to reference the default constant.
Modifications:
* Consolidate the HttpObjectDecoder default values into public constants
* Reference these constants where possible
Result:
No functional change. Additional telescoping constructors will be easier
and safer to write. Users may have an easier experience changing single
parameters.
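A hedged usage sketch (the constant names below are assumed from the description; check HttpObjectDecoder for the exact identifiers): change only maxHeaderSize while reusing the shared defaults.
```
import io.netty.handler.codec.http.HttpObjectDecoder;
import io.netty.handler.codec.http.HttpRequestDecoder;

public final class LargeHeaderDecoder {
    static HttpRequestDecoder create() {
        return new HttpRequestDecoder(
                HttpObjectDecoder.DEFAULT_MAX_INITIAL_LINE_LENGTH,
                64 * 1024, // the only value we actually want to change
                HttpObjectDecoder.DEFAULT_MAX_CHUNK_SIZE);
    }
}
```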
Motivation:
The current FlowControlHandler keeps a boolean flag to track whether a read() call is pending.
This can lead to a scenario where read() is called multiple times while the queue is empty;
once the FlowControlHandler's queue starts receiving messages, channelRead will be fired only once,
when it should have been fired once for each time the downstream handlers called read().
Modifications:
Minor change to replace the boolean flag with a counter, plus a unit test for this scenario.
Result:
Following TDD, the test was written first and confirmed to fail; the code was then updated
and the test re-run to confirm it passes after the changes.
Co-authored-by: Kareem Ali <kali@localhost.localdomain>
Motivation:
The current `StompSubframeEncoder` implementation can encode a `StompFrame` into several separate chunks, or encode a `StompHeadersSubframe` and a `StompContentSubframe` separately. But some client libraries (e.g. stomp.js) do not support aggregation.
Modification:
Add a StompWebSocketFrameEncoder that integrates the original STOMP subframe encoder with `ContinuationWebSocketFrame` to support chunking at the transport level.
Result:
Fixes #10261
Motivation:
9b7e091 added a special SSLHandshakeException sub-class to signal handshake timeouts, but we missed updating a test case to directly assert the type of the exception.
Modifications:
Assert directly that SslHandshakeTimeoutException is used
Result:
Test cleanup
Motivation:
There is a possibility of ending up with a StackOverflowError when trying to resolve authoritative nameservers, because of incorrectly wrapping the AuthoritativeDnsServerCache.
Modifications:
Ensure we don't end up with an overflow due to the wrapping
Result:
Fixes https://github.com/netty/netty/issues/10246
Motivation:
It seems that `testSwallowedReadComplete(...)` may sometimes fail with an AssertionError in my test runs. The relevant stack trace is as follows:
```
java.lang.AssertionError: expected:<IdleStateEvent(READER_IDLE, first)> but was:<null>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at io.netty.handler.flow.FlowControlHandlerTest.testSwallowedReadComplete(FlowControlHandlerTest.java:478)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:89)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:41)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:542)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:770)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:464)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210)
```
The `readerIdleTime` of the `IdleStateHandler` and the thread sleep time before `EmbeddedChannel.runPendingTasks()` are both 100ms. If `userEvents.poll()` happens before `userEvents.add(...)`, or no `IdleStateEvent` is fired at all, this test case fails.
Modification:
Sleep slightly longer before running all pending tasks in the `EmbeddedChannel`.
Result:
Fix the occasional, low-probability test failure.
Motivation:
`containsValue()` checks whether the given value appears among the possibly multiple values defined for a specific header name. We need to use this method instead of `contains()` for the `Transfer-Encoding` header to cover the case where multiple values are defined, e.g. `Transfer-Encoding: gzip, chunked`.
Modification:
Change `contains()` to `containsValue()` in the `HttpUtil.isTransferEncodingChunked()` method.
Result:
Fixes #10320
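A small sketch of the difference (the literal header value is just an example): contains(...) compares against the whole field value, while containsValue(...) also matches an element of a comma-separated value list.
```
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;

public final class TransferEncodingCheck {
    public static void main(String[] args) {
        HttpHeaders headers = new DefaultHttpHeaders();
        headers.set(HttpHeaderNames.TRANSFER_ENCODING, "gzip, chunked");

        // false: "gzip, chunked" is not equal to "chunked"
        System.out.println(headers.contains(HttpHeaderNames.TRANSFER_ENCODING,
                HttpHeaderValues.CHUNKED, true));
        // true: "chunked" is one of the comma-separated values
        System.out.println(headers.containsValue(HttpHeaderNames.TRANSFER_ENCODING,
                HttpHeaderValues.CHUNKED, true));
    }
}
```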
Motivation:
Currently we pass custom websocket handshaker response headers to a `WebSocketServerHandshaker`, but they can contain reserved headers (e.g. Connection, Upgrade, Sec-WebSocket-Accept), which leads to duplication because we use response.headers().add(..) instead of response.headers().set(..).
Modification:
In each implementation from `WebSocketServerHandshaker00`, ... `WebSocketServerHandshaker13`, replace add(..) with set(..) for reserved response headers.
Result:
Less error-prone
Motivation:
A new GraalVM with JDK 11 support was released.
Modification:
- Update GraalVM JDK 8 version
- Add GraalVM JDK 11 support
Result:
Build with GraalVM JDK 11 and use latest GraalVM JDK 8 version
Motivation:
We should include as many details as possible when throwing an IllegalArgumentException because of an overflow in CompositeByteBuf.
Modifications:
Add more details and factor the check out into a static method to share code
Result:
Make it clearer why an operation failed
Motivation:
SslHandler currently throws a general SSLException if a wrap attempt
fails due to the SSLEngine being closed. If writes are queued the
failure rationale typically requires more investigation to track down the
original failure from a previous event. We may have a more informative
rationale for the failure and so we should use it.
Modifications:
- SslHandler#wrap to use failure information from the handshake or prior
transport closure if available
Result:
More informative exceptions from SslHandler#wrap if the SSLEngine has
been previously closed.
Motivation:
There is a very tiny comment error in Recycler.
Modifications:
Fix the comment error in Recycler
Result:
Correctly comment about drop in WeakOrderQueue#add
Motivation:
Once a CNAME loop is detected we can just fail fast and so reduce the number of queries.
Modifications:
Fail fast once a CNAME loop is detected
Result:
Fail fast
Motivation:
The `FlowControlHandler` may cache received messages in a queue in order to do flow control. However, if this handler is manually removed from the pipeline at runtime, those cached messages will never be passed to the next channel handler.
Modification:
Dequeue all the cached messages and call `ChannelHandlerContext.fireChannelRead(...)` for them in the `handlerRemoved(...)` method.
Result:
Avoid losing the received messages.
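An illustrative, simplified sketch of the fix (not FlowControlHandler's actual code): on removal, forward whatever is still queued instead of dropping it.
```
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;

import java.util.ArrayDeque;
import java.util.Queue;

public class QueueingHandler extends ChannelDuplexHandler {
    private final Queue<Object> queue = new ArrayDeque<Object>();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Messages are held back until a read() request dequeues them (flow control).
        queue.offer(msg);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        // Do not lose the cached messages: hand them to the next handler before leaving.
        for (Object msg; (msg = queue.poll()) != null;) {
            ctx.fireChannelRead(msg);
        }
    }
}
```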
Motivation:
We used a HashSet to detect CNAME cache loops, which needs allocations. We can use an algorithm that doesn't need any allocations.
Modifications:
Use an algorithm that doesn't need allocations
Result:
Fewer allocations on the slow path
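One allocation-free way to detect such a loop is Floyd's tortoise-and-hare cycle detection over the CNAME mapping; this is shown purely as an illustration of the idea, not as the algorithm Netty actually adopted.
```
import java.util.Map;

public final class CnameLoopCheck {
    // Returns true if following CNAMEs from 'start' eventually revisits a name.
    static boolean hasLoop(Map<String, String> cnames, String start) {
        String slow = start;
        String fast = start;
        while (fast != null) {
            slow = cnames.get(slow);          // advance one step
            fast = cnames.get(fast);          // advance two steps
            if (fast == null) {
                return false;
            }
            fast = cnames.get(fast);
            if (fast != null && fast.equals(slow)) {
                return true;                  // the pointers met inside a cycle
            }
        }
        return false;
    }
}
```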
Motivation:
`SslHandlerTest.testSessionTicketsWithTLSv12AndNoKey` does not require
BoringSSL and works with OpenSSL as well.
Modifications:
- Remove assume statement that expected BoringSSL;
Result:
Test works for any implementation of `OPENSSL` provider.
Motivation:
To ensure we do not crash in any case, we should check that the SSL pointer was not freed before using it.
Modifications:
Add missing `isDestroyed()` checks
Result:
Ensure we do not crash due to usage of a freed pointer.
Motivation:
The linux-aarch64 packages are not declared in netty-bom. There are no consistency checks for netty-bom, hence it can easily miss updates when artifacts are added.
Modifications:
- Add declarations.
- Modify netty-all to depend on netty-bom for version declarations,
thus requiring netty-bom to be up to date.
Result:
Be able to reference aarch64 packages without an explicit version. The content of netty-all is guaranteed to be declared in netty-bom, adding a safety net.
Signed-off-by: Robert Varga <robert.varga@pantheon.tech>
Motivation:
Make the existing setter `ReferenceCountedOpenSslContext.setUseTasks` public.
Modification:
Added the `public` qualifier and removed the "for tests only" comment
Result:
Fixes #10288