Motivation:
FastLzFrameDecoder currently does not use the allocator to allocate the output buffer. This means that even if you use the PooledByteBufAllocator you can't make use of the pooling. Besides this, the decoder also does an unnecessary memory copy when no compression is used.
Modifications:
- Allocate the output buffer via the allocator
- Don't allocate and copy if we handle an uncompressed chunk
- Make use of ByteBufChecksum for a few optimizations when running on a recent JDK
Result:
Fewer allocations when using FastLzFrameDecoder
Motivation:
There is no test case for `StringDecoder`.
Modification:
Add a test case for `StringDecoder`.
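A minimal sketch of such a test using `EmbeddedChannel` (illustrative only, not necessarily the exact test added here):

```java
import static org.junit.jupiter.api.Assertions.*;

import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;
import org.junit.jupiter.api.Test;

class StringDecoderSketchTest {
    @Test
    void decodesByteBufToString() {
        EmbeddedChannel channel = new EmbeddedChannel(new StringDecoder(CharsetUtil.UTF_8));
        // Write an inbound ByteBuf and expect the decoder to emit a String.
        assertTrue(channel.writeInbound(Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8)));
        String decoded = channel.readInbound();
        assertEquals("hello", decoded);
        assertFalse(channel.finish());
    }
}
```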
Result:
Added a test case for `StringDecoder`.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
At the moment we do not correctly propagate cancellation in some cases when we use the PromiseNotifier.
Modifications:
- Add PromiseNotifier static method which takes care of cancellation
- Add unit test
- Deprecate ChannelPromiseNotifier
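A hedged usage sketch of the new static helper (the `cascade` name is taken from this change; the exact signature is assumed):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseNotifier;

final class CascadeExample {
    // Assumed usage: propagate completion, failure and cancellation between
    // the write future and the caller-visible promise, including the case
    // where the promise itself is cancelled.
    static ChannelFuture writeWithOwnPromise(Channel channel, Object msg) {
        ChannelPromise promise = channel.newPromise();
        ChannelFuture writeFuture = channel.writeAndFlush(msg);
        PromiseNotifier.cascade(writeFuture, promise);
        return promise;
    }
}
```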
Result:
Correctly propagate cancellation of the operation
Co-authored-by: Nitesh Kant <nitesh_kant@apple.com>
Motivation:
New versions of Bouncy Castle libraries are out and we should upgrade to them.
Modification:
Upgraded all Bouncy Castle libraries to the latest version.
Result:
We now use the latest versions of the Bouncy Castle libraries.
Motivation:
As suggested in [section 5.3.4 in http2 spec](https://datatracker.ietf.org/doc/html/rfc7540#section-5.3.4):
> When a stream is removed from the dependency tree, its dependencies can be moved to become dependent on the parent of the closed stream. The weights of new dependencies are recalculated by distributing the weight of the dependency of the closed stream proportionally based on the weights of its dependencies.
For example, we have streams A and B depending on the connection stream with the default weight (16), and stream C depending on A with the maximum weight (256). When stream A is closed, we move stream C to become dependent on the connection stream; then we should distribute the weight of stream A to its children (only stream C), so the new weight of stream C will be 16. If we kept the weight of stream C unchanged, it would get more resources than stream B.
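A small worked sketch of the proportional redistribution from the example above (illustrative only, not the internal Netty code):

```java
public class WeightRedistributionSketch {
    public static void main(String[] args) {
        // Closed stream A has weight 16; its only child C has weight 256.
        int closedStreamWeight = 16;
        int[] childWeights = {256};          // only stream C
        int totalChildWeight = 256;

        // New weight of each child = closedWeight * childWeight / totalChildWeight.
        for (int childWeight : childWeights) {
            int newWeight = closedStreamWeight * childWeight / totalChildWeight;
            System.out.println("redistributed weight = " + newWeight);  // prints 16
        }
    }
}
```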
Modification:
- distribute weight to its children when closing a stream
- add a unit test for the case above and fix other related unit tests
Result:
More spec-compliant and more appropriate stream reprioritization
Co-authored-by: Heng Zhang <zhangheng@imo.im>
Motivation:
The TLS handshake must be able to finish on its own, without being driven by outside read calls.
This is currently not the case when TCP FastOpen is enabled.
Reads must be permitted and marked as pending, even when a channel is not active.
This is important because, with TCP FastOpen, the handshake processing of a TLS connection will start
before the connection has been established -- before the process of connecting has even been started.
The SslHandler on the client side will add the Client Hello message to the ChannelOutboundBuffer, then
issue a `ctx.read` call for the anticipated Server Hello response, and then flush the Client Hello
message which, in the case of TCP FastOpen, will cause the TCP connection to be established.
In this transaction, it is important that the `ctx.read` call is not ignored since, if auto-read is
turned off, this could delay or even prevent the Server Hello message from being processed, causing
the server-side handshake to time out.
Modification:
Attach a listener to the SslHandler.handshakeFuture in the EchoClient that will call ctx.read.
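A hedged sketch of the idea (the handler name is illustrative; the actual change lives in the EchoClient example):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.ssl.SslHandler;

// Illustrative handler: once the TLS handshake future completes, issue a read
// so the handshake response is processed even when auto-read is disabled.
public class HandshakeReadTrigger extends ChannelInboundHandlerAdapter {
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        SslHandler sslHandler = ctx.pipeline().get(SslHandler.class);
        if (sslHandler != null) {
            sslHandler.handshakeFuture().addListener(future -> ctx.read());
        }
    }
}
```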
Result:
The SocketSslEchoTest now tests that the SslHandler can finish handshakes on its own, without being driven by 3rd party ctx.read calls.
The various channel implementations have been updated to comply with this behaviour.
Motivation:
JdkOpenSslEngineInteroptTest.testMutualAuthSameCerts() is flaky on the CI and so fails the PR build quite often.
Let's disable it for now until we are able to reproduce it locally and fix it.
Modifications:
Disable flaky test
Result:
More stable CI builds
__Motivation__
Upon receiving a DNS answer, we check whether the name in the question matches the name in the record. Some DNS servers we have encountered append a search domain to the record name, which fails this match. E.g. for question name `netty` and search domains `io` and `com`, we will do 2 queries: `netty.io.` and `netty.com.`; if the answer for `netty.io` contains `netty.com`, we then ignore this record.
__Modification__
If the name in the record does not match the name in the question, append configured search domains to the question name to see if it matches the record name.
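A hedged sketch of the matching idea (the helper name and exact normalization are illustrative, not the resolver's internal code):

```java
import java.util.List;

final class RecordNameMatcher {
    // Returns true if the record name equals the question name, either directly
    // or with one of the configured search domains appended.
    static boolean matches(String questionName, String recordName, List<String> searchDomains) {
        if (recordName.equalsIgnoreCase(questionName)) {
            return true;
        }
        String base = questionName.endsWith(".")
                ? questionName.substring(0, questionName.length() - 1)
                : questionName;
        for (String searchDomain : searchDomains) {
            String candidate = base + '.' + searchDomain;
            if (recordName.equalsIgnoreCase(candidate)
                    || recordName.equalsIgnoreCase(candidate + '.')) {
                return true;
            }
        }
        return false;
    }
}
```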
__Result__
Record names with appended search domains are still returned as valid answers.
Motivation:
We need to add `--add-exports java.base/sun.security.x509=ALL-UNNAMED` when running the tests for codec-http2 as some of the tests use SelfSignedCertificate.
Modifications:
- Add `--add-exports java.base/sun.security.x509=ALL-UNNAMED` when running the tests for codec-http2
- Ensure we export correctly when running with JDK 12, 13, 14 and 15 as well
Result:
No more test failures due to not being able to access classes
Motivation:
#11468 was merged but didn't fix the tests completely. There is a conflict between `LF` and `CRLF` line endings, so to eliminate this we should just get rid of them.
Modification:
Use a small sample dataset without `LF` and `CRLF`.
Result:
Simple and passing test.
Motivation:
We migrated all these modules to junit5 before but missed a few usages of junit4.
Modifications:
Replace all junit4 imports with junit5 APIs
Result:
Part of https://github.com/netty/netty/issues/10757
Motivation:
JavaDoc of StandardCompressionOptions should point towards public methods. Also, Brotli tests were failing on Windows.
Modification:
Fixed JavaDoc and enabled Brotli tests on Windows.
Result:
Better JavaDoc and Brotli tests will run on Windows
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
In #11256, we introduced `Iterable` as a parameter, but it was later removed during review. However, we forgot to change `compressionOptionsIterable` to just `compressionOptions`.
Modification:
Changed `compressionOptionsIterable` to `compressionOptions`.
Result:
Correct ObjectUtil message
Motivation:
There are use cases when Unix domain datagram sockets are needed for communication.
This PR adds such support for Epoll/KQueue.
Modification:
- Expose Channel, Config and Packet interfaces/classes for Unix domain datagram sockets.
All interfaces/classes are in `transport-native-unix-common` module in order to be available
for KQueue and Epoll implementations
- Add JNI code for Unix domain datagram sockets
- Refactor `DatagramUnicastTest` so that it can be used for testing also Unix domain datagram sockets
- Add Unix domain datagram sockets implementation for KQueue transport
- Add Unix domain datagram sockets implementation for Epoll transport
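A hedged usage sketch of the new channel type (class names as listed above; exact constructors and overloads may differ slightly):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.epoll.EpollDomainDatagramChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.unix.DomainDatagramPacket;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.util.CharsetUtil;

public final class DomainDatagramSketch {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(EpollDomainDatagramChannel.class)
                    .handler(new SimpleChannelInboundHandler<DomainDatagramPacket>() {
                        @Override
                        protected void channelRead0(ChannelHandlerContext ctx, DomainDatagramPacket packet) {
                            // Print whatever datagram arrives on the Unix domain socket.
                            System.out.println(packet.content().toString(CharsetUtil.UTF_8));
                        }
                    });
            Channel channel = bootstrap
                    .bind(new DomainSocketAddress("/tmp/echo-local.sock"))
                    .sync()
                    .channel();
            // Send one datagram to a peer path and then shut down.
            channel.writeAndFlush(new DomainDatagramPacket(
                    Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8),
                    new DomainSocketAddress("/tmp/echo-peer.sock"))).sync();
            channel.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```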
Result:
Fixes #6737
Motivation:
At the moment we only support signing / decrypting the private key in a synchronous fashion. This is quite limited as we may want to do a network call to do so on a remote system for example.
Modifications:
- Update to latest netty-tcnative which supports running tasks in an asynchronous fashion.
- Add OpenSslAsyncPrivateKeyMethod interface
- Adjust SslHandler to be able to handle asynchronous task execution
- Adjust unit tests to test that asynchronous task execution works in all cases
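A heavily hedged sketch of what an implementation might look like (the `OpenSslAsyncPrivateKeyMethod` name comes from this change; the method shape shown here is an assumption, so the interface is only referenced in a comment):

```java
import javax.net.ssl.SSLEngine;

import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.ImmediateEventExecutor;
import io.netty.util.concurrent.Promise;

// Assumed shape: signing returns a netty Future so the result can be produced
// asynchronously, e.g. by a remote signing service.
final class RemoteSigningKeyMethod /* implements OpenSslAsyncPrivateKeyMethod (assumed) */ {
    Future<byte[]> sign(SSLEngine engine, int signatureAlgorithm, byte[] input) {
        Promise<byte[]> promise = ImmediateEventExecutor.INSTANCE.newPromise();
        // Hypothetical remote call that completes the promise when done.
        dispatchToRemoteSigner(signatureAlgorithm, input, promise);
        return promise;
    }

    private void dispatchToRemoteSigner(int algorithm, byte[] input, Promise<byte[]> promise) {
        // Placeholder: a real implementation would issue a network request and
        // call promise.setSuccess(signature) or promise.setFailure(cause).
        promise.setFailure(new UnsupportedOperationException("not implemented"));
    }
}
```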
Result:
Be able to do key signing operations asynchronously
Motivation:
Currently, Netty only has a BrotliDecoder, which can decode Brotli encoded data. However, a BrotliEncoder, which would encode normal data into Brotli encoded data, is missing.
Modification:
Added BrotliEncoder and CompressionOption
Result:
Fixes #6899.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
Zstd has a wide range of uses on the Internet, so we should consider adding an `application/zstd` HTTP media-type and `zstd` content-encoding, see https://tools.ietf.org/html/rfc8478
Modification:
Add `application/zstd` HTTP media-type and `zstd` content-encoding
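A hedged sketch of using these values when building a response (the string literals are the values described above; whether they are exposed as named constants is not assumed here):

```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;

final class ZstdResponseExample {
    static FullHttpResponse zstdResponse(byte[] zstdCompressedBody) {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1,
                HttpResponseStatus.OK,
                Unpooled.wrappedBuffer(zstdCompressedBody));
        // Advertise the zstd media-type and content-encoding on the response.
        response.headers()
                .set(HttpHeaderNames.CONTENT_TYPE, "application/zstd")
                .set(HttpHeaderNames.CONTENT_ENCODING, "zstd")
                .setInt(HttpHeaderNames.CONTENT_LENGTH, zstdCompressedBody.length);
        return response;
    }
}
```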
Result:
Netty provides the `application/zstd` HTTP media-type and `zstd` content-encoding as HTTP header values
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
At the moment we always build all modules, even when only a few of them are affected by a given change.
Modifications:
Add script that will only build modules that are affected by a change
Result:
More targeted build
Motivation:
At the moment all methods in `ChannelHandler` declare `throws Exception` as part of their method signature. While this is fine for methods that handle inbound events, it is quite confusing for methods that handle outbound events. This is due to the fact that these methods also take a `ChannelPromise`, which actually needs to be fulfilled to signal back either success or failure. Declaring `throws ...` for these methods is confusing at best. We should just always require the implementation to use the passed-in promise to signal back success or failure. Doing so also clears up the semantics in general. Because we can't "forbid" throwing `RuntimeException`, we still need to handle it in some way; in this case we should just consider it a "bug", log it and close the `Channel` in question. The user should never let an exception "escape" their implementation and should just use the promise. This also clears up the ownership of the passed-in message etc.
As `flush(ChannelHandlerContext)` and `read(ChannelHandlerContext)` don't take a `ChannelPromise` as an argument, this also means that these methods can never produce an error. This makes sense, as these really are just "signals" for the underlying transports to do something. For `RuntimeException` the same rule is used as for the other outbound event handling methods: logging and closing the `Channel`.
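An illustrative sketch of the intended style, shown against the familiar adapter API for clarity (the handler name is hypothetical, and the exact Netty 5 handler types may differ): failures are reported through the promise instead of being thrown.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.util.ReferenceCountUtil;

public class PromiseAwareOutboundHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        try {
            // ... validate or transform msg ...
            ctx.write(msg, promise);
        } catch (RuntimeException e) {
            // Don't let the exception escape the handler: release the message
            // and fail the promise so the caller is notified.
            ReferenceCountUtil.release(msg);
            promise.setFailure(e);
        }
    }
}
```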
Modifications:
- Remove `throws Exception` from signature
- Adjust code to not throw and just notify the promise directly
- Adjust unit tests
Result:
Much cleaner API and semantics.
Motivation:
We should only run one SSL task per delegation to allow more SSLEngines to make progress in a timely manner
Modifications:
- Only run one task per delegation to the executor
- Only create new SSL task if really needed
- Only schedule if not on the EventExecutor thread
Result:
Fairer usage of resources and fewer allocations
Motivation:
Protocol and cipher suite constants prevent typos in protocol and cipher suite names and improve ease of use.
Modification:
Added Protocols and Cipher suites as constants in their respective classes.
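A hedged usage sketch (the constant holder and field names, e.g. an `SslProtocols` class, are assumed from this change):

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProtocols;
import io.netty.handler.ssl.util.InsecureTrustManagerFactory;

final class ProtocolConstantsExample {
    static SslContext clientContext() throws Exception {
        return SslContextBuilder.forClient()
                .trustManager(InsecureTrustManagerFactory.INSTANCE)
                // Constants instead of hand-typed strings such as "TLSv1.3".
                .protocols(SslProtocols.TLS_v1_2, SslProtocols.TLS_v1_3)
                .build();
    }
}
```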
Result:
Fixes #11393
Motivation:
We should call fireUserEventTriggered(...) before we try to modify the pipeline as otherwise we may end up in the situation that the handler was already removed.
Modifications:
Change ordering of calls
Result:
Test pass again
__Motivation__
`ApplicationProtocolNegotiationHandler` buffers messages which are read before the SSL handshake completion event is received and drains them when the handler is removed. However, the channel may be closed (or its input shut down) before the SSL handshake event is received, in which case we may fire a channel read after channel closure (from `handlerRemoved()`).
__Modification__
Intercept `channelInactive()` and the input-closed event and drain the buffer.
__Result__
If the channel is closed before the SSL handshake completion event is received, we still maintain the order of message reads and channel closure.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
We need to change the reflection config to match the constructor that is used
Modifications:
Adjust config
Result:
Graal PR jobs pass again
Motivation:
The `PerMessageDeflateClientExtensionHandler` has the following strange behaviors currently:
* The `requestedServerNoContext` parameter doesn't actually add the `server_no_context_takeover` parameter to the client offer; instead it depends on the requested server window size.
* The handshake will fail if the server responds with a `server_no_context_takeover` parameter and `requestedServerNoContext` is false. According to RFC 7692 (7.1.1.1) the server may do this, and this means that to cover both cases one needs to use two handshakers in the channel pipeline: one with `requestedServerNoContext = true` and one with `requestedServerNoContext = false`.
* The value of the `server_max_window_bits` parameter in the server response is never checked (should be between 8 and 15). And the value of `client_max_window_bits` is checked only in the branch handling the server window parameter.
Modification:
* Add the `server_no_context_takeover` parameter if `requestedServerNoContext` is true.
* Accept a server handshake response which includes the server no context takeover parameter even if we did not request it.
* Check the values of the client and server window size in their respective branches and fail the handshake if they are out of bounds.
Result:
There will be no need to use two handshakers in the pipeline to be lenient in what handshakes are accepted.
Motivation:
Including codec-http in the project and building a native-image out of it using a GraalVM 21.2 nightly can result in a failure.
Modification:
By delaying the initialization of `io.netty.handler.codec.compression.BrotliDecoder` to runtime, native-image will not try to eagerly initialize the class during the image build, avoiding the build failure described in the issue.
Result:
Fixes #11427
Motivation:
Currently, Netty cannot handle HTTP/2 preface messages if the client uses the prior-knowledge technique. With prior knowledge, the client sends an HTTP/2 preface message immediately after finishing the TLS handshake. But in Netty, when the TLS handshake is finished, the ALPN handler is triggered to configure the pipeline, and if an HTTP/2 preface message arrives between these two operations, it gets dropped.
Modification:
Buffer messages until we are done with the ALPN handling.
Result:
Fixes #11403.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
A deprecated flag caused test failures due to the deprecation warning and produced a dumpstream.
Modification:
Replace deprecated flag with recommended one.
Result:
Fix deprecation and cause of test failure in codec project.
Motivation:
The native module is not yet available on aarch64 Mac / Windows thus causing tests in codec/ to fail (specifically all the Brotli ones, since the module could not be loaded).
Modification:
Disable Brotli tests when platform is not supported
Result:
Tests under codec/ now pass under Mac/aarch64 and Windows/aarch64
__Motivation__
Add support for GMSSL protocol to SslUtils.
__Modification__
Modify `SslUtils.getEncryptedPacketLength(ByteBuf buffer, int offset)` to get packet length when protocol is GMSSL.
Modify `SslUtils.getEncryptedPacketLength(ByteBuffer buffer)` to get packet length when protocol is GMSSL.
__Result__
`SslUtils.getEncryptedPacketLength` now supports GMSSL protocol. Fixes https://github.com/netty/netty/issues/11406
Motivation:
HTTP header values are case-sensitive. The expected value for the `x-requested-with` header is `XMLHttpRequest`, not `XmlHttpRequest`.
Modification:
Fix constant's case.
Result:
Correct `XMLHttpRequest` HTTP header value.
Motivation:
We failed to account for the last header when estimating the buffer
size. If the data does not compress enough to make space for the
last header we would exceed the ByteBuf's capacity.
Modifications:
Call `ensureWritable` with the appropriate capacity for the footer ByteBuf before writing the footer.
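A hedged sketch of the fix's shape (names are illustrative, not the exact encoder code):

```java
import io.netty.buffer.ByteBuf;

final class FooterWriteSketch {
    static void writeFooter(ByteBuf out, byte[] footer) {
        // If the compressed data left too little room, grow the buffer
        // instead of exceeding its capacity.
        out.ensureWritable(footer.length);
        out.writeBytes(footer);
    }
}
```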
Result:
If there is not enough space left in the buffer, the buffer will be
expanded.
Motivation:
We should update to use junit5 in all modules.
Modifications:
Adjust codec-redis tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
Motivation:
8c73dbe9bd migrated the codec-http2 code to use junit5 but missed two classes.
Modifications:
Adjust the rest of codec-http2 tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
Motivation:
We should update to use junit5 in all modules.
Modifications:
Adjust codec-http2 tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
__Motivation__
`LoggingHandler` lacks a constructor variant that only takes `ByteBufFormat`.
__Modification__
Added the missing constructor variant.
__Result__
`LoggingHandler` can be constructed with `ByteBufFormat` only.
Co-authored-by: Nitesh Kant <nitesh_kant@apple.com>
Motivation:
In Netty 5 we wish to have a simpler, safe, future proof, and more consistent buffer API.
We developed such an API in the incubating buffer repository and took it through multiple rounds of review and adjustment.
This PR/commit brings the results of that work into the Netty 5 branch of the main Netty repository.
Modifications:
* `Buffer` is an interface, and all implementations are hidden behind it.
There is no longer an inheritance hierarchy of abstract classes and implementations.
* Reference counting is gone.
After a buffer has been allocated, calling `close` on it will deallocate it.
It is then up to users and integrators to ensure that the life-times of buffers are managed correctly.
This is usually not a problem as buffers tend to flow through the pipeline to be released after a terminal IO operation.
* Slice and duplicate methods are replaced with `split`.
By removing slices, duplicate, and reference counting, there is no longer a possibility that a buffer and/or its memory can be shared and accessible through multiple routes.
This solves the problem of data being accessed from multiple places in an uncoordinated way, and the problem of buffer memory being closed while being in use by some unsuspecting piece of code.
Some adjustments will have to be made to other APIs, idioms, and usages, since `split` is not always a replacement for `slice` in some use cases.
* The `split` method allows memory to be shared among multiple buffers, but in non-overlapping regions.
When the memory regions don't overlap, it will not be possible for the different buffers to interfere with each other.
An internal, and completely transparent, reference counting system ensures that the backing memory is released once the last buffer view is closed.
* A Send API has been introduced that can be used to enforce (in the type system) the transfer of buffer ownership.
This is not expected to be used in the pipeline flow itself, but rather for other objects that wrap buffers and wish to avoid becoming "shared views" — the absence of "shared views" of memory is important for avoiding bugs in the absence of reference counting.
* A new BufferAllocator API, where the choice of implementation determines factors like on-/off-heap, pooling or not.
How access to the different allocators will be exposed to integrators will be decided later.
Perhaps they'll be directly accessible on the `ChannelHandlerContext`.
* The `PooledBufferAllocator` has been copied and modified to match the new allocator API.
This includes unifying its implementation that was previously split across on-heap and off-heap.
* The `PooledBufferAllocator` implementation has also been adjusted to allocate 4 MiB chunks by default, and a few changes have been made to the implementation to make a newly created, empty allocator use significantly less heap memory.
* A `Resource` interface has been added, which defines the life-cycle methods and the `send` method.
The `Buffer` interface extends this.
* Analogues for `ByteBufHolder` have been added in the `BufferHolder` and `BufferRef` classes.
* `ByteCursor` is added as a new way to iterate the data in buffers.
The byte cursor API is designed to be more JIT friendly than an iterator, or the existing `ByteProcessor` interface.
* `CompositeBuffer` no longer permits the same level of access to its internal components.
The composite buffer enforces its ownership of its components via the `Send` API, and the components can only be individually accessed with the `forEachReadable` and `forEachWritable` methods.
This keeps the API and behavioral differences between composite and non-composite buffers to a minimum.
* Two implementations of the `Buffer` interface are provided with the API: One based on `ByteBuffer`, and one based on `sun.misc.Unsafe`.
The `ByteBuffer` implementation is used by default.
More implementations can be loaded from the classpath via service loading.
The `MemorySegment` based implementation is left behind in the incubator repository.
* An extensive and highly parameterised test suite has been added, to ensure that all implementations have consistent and correct behaviour, regardless of their configuration or composition.
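A hedged sketch of how the new API is intended to be used, based on the description above (package, class and method names may differ from the final code):

```java
import io.netty.buffer.api.Buffer;
import io.netty.buffer.api.BufferAllocator;

final class NewBufferApiSketch {
    static void demo() {
        BufferAllocator allocator = BufferAllocator.onHeapUnpooled();
        // No reference counting: the buffer is deallocated when close() is
        // called, so try-with-resources manages its life-time.
        try (Buffer buffer = allocator.allocate(16)) {
            buffer.writeInt(42);
            // split() hands off the written region as a new buffer that shares
            // the backing memory in a non-overlapping way.
            try (Buffer first = buffer.split()) {
                System.out.println(first.readInt());  // 42
            }
            // 'buffer' still owns the remaining writable region here.
        }
    }
}
```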
Result:
We have a new buffer API that is simpler, better tested, more consistent in behaviour, and safer by design, than the existing `ByteBuf` API.
The next legs of this journey will be about integrating this new API into Netty proper, and deprecating (and eventually removing) the `ByteBuf` API.
This fixes #11024, #8601, #8543, #8542, #8534, #3358, and #3306.