Motivation:
Currently, Netty only has a BrotliDecoder, which can decode Brotli-encoded data. However, a BrotliEncoder, which encodes plain data into the Brotli format, is missing.
Modification:
Added BrotliEncoder and CompressionOption
Result:
Fixes #6899.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
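A minimal usage sketch for pairing the new encoder with the existing decoder in a pipeline (the no-arg `BrotliEncoder` constructor is an assumption, not necessarily the exact API surface of this change):
```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.compression.BrotliDecoder;
import io.netty.handler.codec.compression.BrotliEncoder;

// Sketch: compress outbound data with Brotli and decode inbound Brotli data.
public class BrotliInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new BrotliEncoder())   // assumption: no-arg constructor with default options
          .addLast(new BrotliDecoder());
    }
}
```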
Motivation:
ZSTD has a wide range of uses on the Internet, so we should consider adding the `application/zstd` HTTP media-type and the `zstd` content-encoding, see https://tools.ietf.org/html/rfc8478
Modification:
Add `application/zstd` HTTP media-type and `zstd` content-encoding
Result:
Netty provides the `application/zstd` HTTP media-type and the `zstd` content-encoding as HTTP header values
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
At the moment we always build all modules. This script can be used to build only the modules affected by a given change.
Modifications:
Add script that will only build modules that are affected by a change
Result:
More targeted build
Motivation:
At the moment all methods in `ChannelHandler` declare `throws Exception` as part of their method signature. While this is fine for methods that handle inbound events, it is quite confusing for methods that handle outbound events. This is due to the fact that these methods also take a `ChannelPromise`, which actually needs to be fulfilled to signal back either success or failure. Declaring `throws ...` for these methods is confusing at best. We should just always require the implementation to use the passed-in promise to signal back success or failure. Doing so also clears up the semantics in general. Because we can't "forbid" throwing `RuntimeException`, we still need to handle it in some way though. In this case we should just consider it a bug, log it, and close the `Channel` in question. The user should never let an exception "escape" their implementation and should just use the promise. This also clears up the ownership of the passed-in message etc.
As `flush(ChannelHandlerContext)` and `read(ChannelHandlerContext)` don't take a `ChannelPromise` as an argument, this also means that these methods can never produce an error. This makes sense, as these really are just "signals" for the underlying transports to do something. For `RuntimeException` the same rule is used as for the other outbound event handling methods: logging and closing the `Channel`.
Modifications:
- Remove `throws Exception` from signature
- Adjust code to not throw and just notify the promise directly
- Adjust unit tests
Result:
Much cleaner API and semantics.
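A sketch of the resulting handler pattern, using the familiar `ChannelOutboundHandlerAdapter` and a hypothetical `validate(...)` check for illustration: failures are reported through the promise instead of being thrown.
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.util.ReferenceCountUtil;

// Sketch: an outbound handler that never lets an exception escape.
public final class ValidatingWriteHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        try {
            validate(msg);               // hypothetical application-level check
            ctx.write(msg, promise);
        } catch (RuntimeException e) {
            // Release the message and fail the promise so the caller learns about the error.
            ReferenceCountUtil.release(msg);
            promise.setFailure(e);
        }
    }

    private void validate(Object msg) {
        // application-specific validation (placeholder)
    }
}
```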
Motivation:
We should only run one SSL task per delegation to allow more SSLEngines to make progress in a timely manner
Modifications:
- Only run one task per delegation to the executor
- Only create new SSL task if really needed
- Only schedule if not on the EventExecutor thread
Result:
More fair usage of resources and less allocations
Motivation:
Add protocol and cipher suite constants to prevent typos in protocol and cipher suite names and for ease of use.
Modification:
Added protocol and cipher suite constants in their respective classes.
Result:
Fixes #11393
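For illustration, configuring a context with such constants might look like the following (the `SslProtocols` constant holder is assumed here; the cipher string is just an example):
```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProtocols; // assumed constant holder

import java.util.Collections;
import javax.net.ssl.SSLException;

final class TlsConfig {
    static SslContext clientContext() throws SSLException {
        return SslContextBuilder.forClient()
                // Constants instead of hand-typed strings such as "TLSv1.3":
                .protocols(SslProtocols.TLS_v1_3)
                .ciphers(Collections.singletonList("TLS_AES_128_GCM_SHA256"))
                .build();
    }
}
```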
Motivation:
We should call fireUserEventTriggered(...) before we try to modify the pipeline as otherwise we may end up in the situation that the handler was already removed.
Modifications:
Change ordering of calls
Result:
Test pass again
__Motivation__
`ApplicationProtocolNegotiationHandler` buffers messages that are read before the SSL handshake complete event is received and drains them when the handler is removed. However, the channel may be closed (or its input shut down) before the SSL handshake event is received, in which case we may fire channel read after channel closure (from `handlerRemoved()`).
__Modification__
Intercept `channelInactive()` and the input-closed event and drain the buffer.
__Result__
If the channel is closed before the SSL handshake complete event is received, we still maintain the order of message reads and channel closure.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
At the moment we only support signing / decrypting with the private key in a synchronous fashion. This is quite limiting, as we may, for example, want to make a network call to do this on a remote system.
Modifications:
- Update to latest netty-tcnative which supports running tasks in an asynchronous fashion.
- Add OpenSslAsyncPrivateKeyMethod interface
- Adjust SslHandler to be able to handle asynchronous task execution
- Adjust unit tests to test that asynchronous task execution works in all cases
Result:
Be able to do key signing operations asynchronously
Motivation:
We need to change the reflection config to match the constructor that is used
Modifications:
Adjust config
Result:
Graal PR jobs pass again
Motivation:
The `PerMessageDeflateClientExtensionHandler` has the following strange behaviors currently:
* The `requestedServerNoContext` parameter doesn't actually add the `server_no_context_takeover` parameter to the client offer; instead it depends on the requested server window size.
* The handshake will fail if the server responds with a `server_no_context_takeover` parameter and `requestedServerNoContext` is false. According to RFC 7692 (7.1.1.1) the server may do this, and this means that to cover both cases one needs to use two handshakers in the channel pipeline: one with `requestedServerNoContext = true` and one with `requestedServerNoContext = false`.
* The value of the `server_max_window_bits` parameter in the server response is never checked (should be between 8 and 15). And the value of `client_max_window_bits` is checked only in the branch handling the server window parameter.
Modification:
* Add the `server_no_context_takeover` parameter if `requestedServerNoContext` is true.
* Accept a server handshake response which includes the server no context takeover parameter even if we did not request it.
* Check the values of the client and server window size in their respective branches and fail the handshake if they are out of bounds.
Result:
There will be no need to use two handshakers in the pipeline to be lenient in what handshakes are accepted.
Motivation:
Including codec-http in the project and building a native-image out of it using a GraalVM 21.2 nightly can result in a failure.
Modification:
By delaying the initialization of `io.netty.handler.codec.compression.BrotliDecoder` to runtime, native-image will not try to eagerly initialize the class during the image build, avoiding the build failure described in the issue.
Result:
Fixes #11427
Motivation:
Currently, Netty cannot handle HTTP/2 preface messages if the client uses the prior-knowledge technique. With prior knowledge, the client sends an HTTP/2 preface message immediately after finishing the TLS handshake. But in Netty, when the TLS handshake is finished, the ALPN handler is triggered to configure the pipeline. If an HTTP/2 preface message arrives between these two operations, it gets dropped.
Modification:
Buffer messages until we are done with the ALPN handling.
Result:
Fixes #11403.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
This caused test failures due to the deprecation warning and produced a
dumpstream.
Modification:
Replace deprecated flag with recommended one.
Result:
Fix deprecation and cause of test failure in codec project.
Motivation:
The native module is not yet available on aarch64 Mac / Windows thus causing tests in codec/ to fail (specifically all the Brotli ones, since the module could not be loaded).
Modification:
Disable Brotli tests when platform is not supported
Result:
Tests under codec/ now pass under Mac/aarch64 and Windows/aarch64
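One way to express such a guard with JUnit 5 (a sketch only; the `Brotli.isAvailable()` availability check is an assumption, and the actual change may use a different mechanism):
```java
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import io.netty.handler.codec.compression.Brotli;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class BrotliRoundTripTest {
    @BeforeEach
    void brotliMustBeAvailable() {
        // Skips (rather than fails) the tests on platforms where the native
        // Brotli library cannot be loaded, e.g. aarch64 Mac / Windows.
        assumeTrue(Brotli.isAvailable()); // assumed helper
    }

    @Test
    void compressAndDecompress() {
        // ... actual round-trip assertions would go here ...
    }
}
```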
__Motivation__
Add support for GMSSL protocol to SslUtils.
__Modification__
Modify `SslUtils.getEncryptedPacketLength(ByteBuf buffer, int offset)` to get packet length when protocol is GMSSL.
Modify `SslUtils.getEncryptedPacketLength(ByteBuffer buffer)` to get packet length when protocol is GMSSL.
__Result__
`SslUtils.getEncryptedPacketLength` now supports GMSSL protocol. Fixes https://github.com/netty/netty/issues/11406
Motivation:
HTTP header values are case-sensitive. The expected value for the `x-requested-with` header is `XMLHttpRequest`, not `XmlHttpRequest`.
Modification:
Fix constant's case.
Result:
Correct `XMLHttpRequest` HTTP header value.
Motivation:
We failed to account for the last header when estimating the buffer
size. If the data does not compress enough to make space for the
last header we would exceed the ByteBuf's capacity.
Modifications:
Call #ensureWritable with the appropriate capacity on the footer ByteBuf
before writing the footer.
Result:
If there is not enough space left in the buffer, the buffer will be
expanded.
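The fix follows this general pattern (an illustrative sketch only, not the codec's actual code; the 8-byte CRC + size footer layout is an assumption):
```java
import io.netty.buffer.ByteBuf;

final class FooterWriter {
    // Illustrative pattern: make sure there is room for the fixed-size footer
    // before writing it, so the write can never exceed the buffer's capacity.
    static void writeFooter(ByteBuf footer, int crc32, int uncompressedSize) {
        footer.ensureWritable(8);            // expands the buffer if necessary
        footer.writeIntLE(crc32);            // 4-byte checksum (assumed layout)
        footer.writeIntLE(uncompressedSize); // 4-byte original size (assumed layout)
    }
}
```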
Motivation:
We should update to use junit5 in all modules.
Modifications:
Adjust codec-redis tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
Motivation:
8c73dbe9bd migrated the codec-http2 code to junit5 but missed two classes.
Modifications:
Adjust the rest of codec-http2 tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
Motivation:
We should update to use junit5 in all modules.
Modifications:
Adjust codec-http2 tests to use junit5
Result:
Part of https://github.com/netty/netty/issues/10757
__Motivation__
`LoggingHandler` is missing a constructor variant that only takes a `ByteBufFormat`.
__Modification__
Added the missing constructor variant.
__Result__
`LoggingHandler` can be constructed with `ByteBufFormat` only.
Co-authored-by: Nitesh Kant <nitesh_kant@apple.com>
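Usage of the new variant (the surrounding pipeline setup is illustrative):
```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.logging.ByteBufFormat;
import io.netty.handler.logging.LoggingHandler;

public class LoggingInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // Only the ByteBufFormat is specified; log level and logger name use their defaults.
        ch.pipeline().addLast(new LoggingHandler(ByteBufFormat.SIMPLE));
    }
}
```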
Motivation:
In Netty 5 we wish to have a simpler, safe, future proof, and more consistent buffer API.
We developed such an API in the incubating buffer repository and took it through multiple rounds of review and adjustment.
This PR/commit brings the results of that work into the Netty 5 branch of the main Netty repository.
Modifications:
* `Buffer` is an interface, and all implementations are hidden behind it.
There is no longer an inheritance hierarchy of abstract classes and implementations.
* Reference counting is gone.
After a buffer has been allocated, calling `close` on it will deallocate it.
It is then up to users and integrators to ensure that the life-times of buffers are managed correctly.
This is usually not a problem as buffers tend to flow through the pipeline to be released after a terminal IO operation.
* Slice and duplicate methods are replaced with `split`.
By removing slice, duplicate, and reference counting, it is no longer possible for a buffer and/or its memory to be shared and accessible through multiple routes.
This solves the problem of data being accessed from multiple places in an uncoordinated way, and the problem of buffer memory being closed while being in use by some unsuspecting piece of code.
Some adjustments will have to be made to other APIs, idioms, and usages, since `split` is not a drop-in replacement for `slice` in every use case.
The `split` method allows memory to be shared among multiple buffers, but only in non-overlapping regions.
When the memory regions don't overlap, it will not be possible for the different buffers to interfere with each other.
An internal, and completely transparent, reference counting system ensures that the backing memory is released once the last buffer view is closed.
* A Send API has been introduced that can be used to enforce (in the type system) the transfer of buffer ownership.
This is not expected to be used in the pipeline flow itself, but rather by other objects that wrap buffers and wish to avoid becoming "shared views"; avoiding shared views of memory is important for avoiding bugs now that there is no reference counting.
* A new BufferAllocator API, where the choice of implementation determines factors like on-/off-heap, pooling or not.
How access to the different allocators will be exposed to integrators will be decided later.
Perhaps they'll be directly accessible on the `ChannelHandlerContext`.
* The `PooledBufferAllocator` has been copied and modified to match the new allocator API.
This includes unifying its implementation that was previously split across on-heap and off-heap.
* The `PooledBufferAllocator` implementation has also been adjusted to allocate 4 MiB chunks by default, and a few changes have been made to the implementation to make a newly created, empty allocator use significantly less heap memory.
* A `Resource` interface has been added, which defines the life-cycle methods and the `send` method.
The `Buffer` interface extends this.
Analogues of `ByteBufHolder` have been added in the `BufferHolder` and `BufferRef` classes.
* `ByteCursor` is added as a new way to iterate the data in buffers.
The byte cursor API is designed to be more JIT friendly than an iterator, or the existing `ByteProcessor` interface.
`CompositeBuffer` no longer permits the same level of access to its internal components.
The composite buffer enforces its ownership of its components via the `Send` API, and the components can only be individually accessed with the `forEachReadable` and `forEachWritable` methods.
This keeps the API and behavioral differences between composite and non-composite buffers to a minimum.
* Two implementations of the `Buffer` interface are provided with the API: One based on `ByteBuffer`, and one based on `sun.misc.Unsafe`.
The `ByteBuffer` implementation is used by default.
More implementations can be loaded from the classpath via service loading.
The `MemorySegment` based implementation is left behind in the incubator repository.
* An extensive and highly parameterised test suite has been added, to ensure that all implementations have consistent and correct behaviour, regardless of their configuration or composition.
Result:
We have a new buffer API that is simpler, better tested, more consistent in behaviour, and safer by design, than the existing `ByteBuf` API.
The next legs of this journey will be about integrating this new API into Netty proper, and deprecating (and eventually removing) the `ByteBuf` API.
This fixes #11024, #8601, #8543, #8542, #8534, #3358, and #3306.
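A hypothetical usage sketch of the API described above; the package, factory, and method names (`onHeapUnpooled`, `allocate`, `writeInt`, `readInt`, `split`) follow the description but are assumptions rather than the final published signatures.
```java
import io.netty.buffer.api.Buffer;           // assumed package
import io.netty.buffer.api.BufferAllocator;  // assumed package

public final class BufferApiSketch {
    public static void main(String[] args) {
        try (BufferAllocator allocator = BufferAllocator.onHeapUnpooled(); // assumed factory name
             Buffer buf = allocator.allocate(64)) {                        // close() frees; no reference counting
            buf.writeInt(42);
            // split() hands the written region to a new buffer; the regions never
            // overlap, and the backing memory is released once all derived buffers close.
            try (Buffer readable = buf.split()) {
                System.out.println(readable.readInt()); // prints 42
            }
        }
    }
}
```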
Use the Two-Way algorithm to optimize the ByteBufUtil.indexOf() method
Motivation:
ByteBufUtil.indexOf can be inefficient for substring search on a ByteBuf in terms of algorithmic complexity (O(needle.readableBytes * haystack.readableBytes)). Consider using the Two-Way algorithm to optimize the ByteBufUtil.indexOf() method.
Modification:
Use the Two-Way algorithm to optimize the ByteBufUtil.indexOf() method.
Result:
The performance of the ByteBufUtil.indexOf() method is better than that of the original implementation.
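The call site is unchanged; only the search algorithm behind it differs. For example:
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class IndexOfExample {
    public static void main(String[] args) {
        ByteBuf haystack = Unpooled.copiedBuffer("hello netty world", CharsetUtil.US_ASCII);
        ByteBuf needle = Unpooled.copiedBuffer("netty", CharsetUtil.US_ASCII);
        // Returns the index of the first occurrence of needle in haystack, or -1.
        System.out.println(ByteBufUtil.indexOf(needle, haystack)); // 6
        needle.release();
        haystack.release();
    }
}
```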
Motivation:
Due to a bug, we did not pass the correct remote and local address to the next handler if the outbound portion of the CombinedChannelDuplexHandler was removed
Modifications:
- Call the correct connect(...) method
- Refactor tests to test that the parameters are correctly passed on
- Remove some code duplication in the tests
Result:
CombinedChannelDuplexHandler correctly passes parameters on
Motivation:
We need to ensure we always "consume" all alerts etc. via SSLEngine.wrap(...) before we tear down the engine. Failing to do so may lead to a situation where the remote peer will not be able to see the actual cause of the handshake failure and will just see the connection being closed.
Modifications:
Correctly return HandshakeStatus.NEED_WRAP when we need to wrap some data before we shut down the engine because of a handshake failure.
Result:
Fixes https://github.com/netty/netty/issues/11388
__Motivation__
`HttpUtil#normalizeAndGetContentLength()` throws `StringIndexOutOfBoundsException` for empty `content-length` values; it should instead throw `IllegalArgumentException` for all invalid values.
__Modification__
- Throw `IllegalArgumentException` if the `content-length` value is empty.
- Add tests
__Result__
Fixes https://github.com/netty/netty/issues/11408
Motivation:
WeakOrderQueue would drop an object that has already been recycled, even when it has space for it.
WeakOrderQueue#add should check the DefaultHandle.hasBeenRecycled field first.
Modifications:
WeakOrderQueue tests DefaultHandle.hasBeenRecycled first
Result:
WeakOrderQueue no longer drops objects that have already been recycled when there is space
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Co-authored-by: Trustin Lee <t@motd.kr>
Motivation:
We need to use a GraalVM dependency which uses GPL2 + CE.
Modifications:
- Update all graalvm dependencies to the new GAV, which introduces a license change from GPL2 to GPL2 + CE
- This also required a small bump of the general version from 19.2 to 19.3, which should be fine as 19.3 is an officially maintained LTS version, while 19.2 wasn't.
Result:
Fixes: #11398
Signed-off-by: Paulo Lopes <pmlopes@gmail.com>
Motivation:
Native image compatibility is fragile and breaks easily, so we need a PR build to tell us when this happens.
Modification:
Add a graalvm-based build to the PR build matrix.
Result:
Every PR is now also tested on Graal.
Motivation:
At the moment BoringSSL doesn't support explicitly setting the TLSv1.3 ciphers that should be used. If TLSv1.3 is used, it just enables all ciphers. We should log if the user tries to explicitly set specific ciphers while using BoringSSL, to inform the user that this doesn't really work.
Modifications:
Log if the user tries to not use all TLSv1.3 ciphers while using BoringSSL
Result:
Easier for the user to understand why all TLSv1.3 ciphers are always enabled when using BoringSSL
Co-authored-by: Trustin Lee <trustin@gmail.com>
Motivation:
Netty will fail a handshake for the Per-Message Deflate WebSocket
extension if the server response contains a smaller
`server_max_window_bits` value than the client offered.
However, this is allowed by RFC 7692:
> A server accepts an extension negotiation offer with this parameter
> by including the “server_max_window_bits” extension parameter in the
> extension negotiation response to send back to the client with the
> same or smaller value as the offer.
Modifications:
- Allow the server to respond with a smaller value than offered.
- Change the unit tests to test for this.
Result:
The client will not fail when the server indicates it is using a
smaller window size than offered by the client.
Motivation:
Various compression codecs are currently hard-coded to only support buffers that are backed by byte-arrays that they are willing to expose.
This is efficient for most of the codecs, but compatibility suffers, as we are not able to freely choose our buffer implementations when compression codecs are involved.
Modification:
Add code to the compression codecs that allows them to handle buffers that don't have arrays.
For many of the codecs, this unfortunately involves allocating temporary byte-arrays, and copying back-and-forth.
We have to do it that way since some codecs can _only_ work with byte-arrays.
Also add tests to verify that this works.
Result:
It is now possible to use all of our compression codecs with both on-heap and off-heap buffers.
The default buffer choice has not changed, however, so performance should be unaffected.
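The general shape of the compatibility code is sketched below (illustrative only; real codecs pass offsets and lengths rather than copying in the array-backed case):
```java
import java.util.Arrays;

import io.netty.buffer.ByteBuf;

final class HeapAccess {
    // Illustrative pattern: codecs that require a byte[] can use the backing
    // array when one is accessible and fall back to a temporary copy for
    // direct (off-heap) buffers.
    static byte[] readableArray(ByteBuf in) {
        int length = in.readableBytes();
        if (in.hasArray()) {
            int offset = in.arrayOffset() + in.readerIndex();
            return Arrays.copyOfRange(in.array(), offset, offset + length);
        }
        byte[] copy = new byte[length];
        in.getBytes(in.readerIndex(), copy);
        return copy;
    }
}
```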
Motivation:
The tests must be executed only when there is no hosts file or
there is no entry for localhost in the hosts file. The tested functionality
is relevant only in these use cases.
Modifications:
Skip the windows tests when there is an entry for localhost in the hosts file.
Result:
Fix failing tests on Windows CI when using GitHub Actions
Related to #11384
Motivation:
Commit c32c520edd incorrectly skips the bytes of the replay decoder buffer. The number of bytes to skip is determined by ByteBuf#readableBytes() instead of ByteToMessageDecoder#actualReadableBytes(). As a result it throws an exception, because the ByteBuf provided will return a too-large value (Integer.MAX_VALUE - reader index), causing a bounds check error in the skipBytes method. This is not detected by the tests because most tests call the decode(...) method with a regular ByteBuf, whereas in practice the method is called with a specialized ByteBuf when channelRead(...) is invoked. Such tests should actually use channelRead with proper mocking of the ChannelHandlerContext.
Modification:
- Rewrite the MqttCodecTest to use channelRead(...) instead of decode(...) and use proper mocking of ChannelHandlerContext to get the message emitted by the decoder.
- Use actualReadableBytes() instead of buff.readableBytes() to compute the number of bytes to skip
Result:
Correctly skip the number of bytes when a too-large message is found, and improve testing. See #11361
Signed-off-by: Julien Viet <julien@julienviet.com>
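The skipping pattern, shown as a standalone sketch (not the actual MQTT decoder; the length-prefixed framing and size limit are assumptions):
```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;

// Inside a ReplayingDecoder the replayed ByteBuf lies about readableBytes(),
// so the number of bytes that can safely be skipped is actualReadableBytes().
public final class TooLargeSkippingDecoder extends ReplayingDecoder<Void> {
    private static final int MAX_MESSAGE_SIZE = 8192; // hypothetical limit

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int length = in.readInt(); // hypothetical length-prefixed framing
        if (length > MAX_MESSAGE_SIZE) {
            in.skipBytes(actualReadableBytes()); // not in.readableBytes()!
            return;
        }
        out.add(in.readRetainedSlice(length));
    }
}
```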
Motivation:
We don't publish any tarballs these days, so we can just remove the module.
Modifications:
Remove tarball module and also adjust release scripts
Result:
Less code / config to maintain
Motivation:
To simplify retrieving pooled messages, add enums that can be used as keys.
Modifications:
- Modify pooled collections from List to Map in FixedRedisMessagePool
- Allow using an enum as the key to easily get a pooled message.
- Add unit tests
Result:
Users can get a pooled message by enum instead of by the whole string
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
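For illustration, usage might look like the following; the enum and accessor names (`RedisReplyKey`, `getSimpleString(RedisReplyKey)`) are assumptions here, not a confirmed API:
```java
import io.netty.handler.codec.redis.FixedRedisMessagePool;
import io.netty.handler.codec.redis.SimpleStringRedisMessage;

final class PooledReplyExample {
    static SimpleStringRedisMessage okReply() {
        // Assumed enum key and accessor; previously the lookup required the exact string ("OK").
        return FixedRedisMessagePool.INSTANCE.getSimpleString(FixedRedisMessagePool.RedisReplyKey.OK);
    }
}
```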
__Motivation__
As described in https://github.com/netty/netty/issues/11370 we should support quoted charset values
__Modification__
Modify `HttpUtil.getCharset(CharSequence contentTypeValue, Charset defaultCharset)` to trim the double-quotes if present.
__Result__
`HttpUtil.getCharset()` now supports quoted charsets. Fixes https://github.com/netty/netty/issues/11370
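For example, both of these now resolve to UTF-8 (previously the quoted form was not resolved to UTF-8):
```java
import io.netty.handler.codec.http.HttpUtil;
import io.netty.util.CharsetUtil;

import java.nio.charset.Charset;

public final class CharsetExample {
    public static void main(String[] args) {
        Charset bare = HttpUtil.getCharset("text/html; charset=utf-8", CharsetUtil.ISO_8859_1);
        Charset quoted = HttpUtil.getCharset("text/html; charset=\"utf-8\"", CharsetUtil.ISO_8859_1);
        System.out.println(bare);   // UTF-8
        System.out.println(quoted); // UTF-8
    }
}
```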
Motivation:
This special-case implementation of Promise / Future requires the implementations responsible for completing the promise to have knowledge of this class to provide any value. It also requires that the implementations are able to provide intermediate status while the work is being done. Even throughout the core of Netty it is not really supported most of the time and so just brings more complexity without real gain.
Let's remove it completely, which is better than only supporting it sometimes.
Modifications:
Remove Progressive* API
Result:
Code cleanup. Fixes https://github.com/netty/netty/issues/8519
Motivation:
We didn't really have a good use case for removeListener* and addListeners. Because of this we should just remove these methods and so make things simpler.
Modifications:
Remove methods
Result:
Cleanup