Motivation:
Creating certificates from a byte[] while lazily parsing them is generally useful and is also needed by https://github.com/netty/netty-incubator-codec-quic/pull/141
Modifications:
Move the classes, rename them, and make them public
Result:
Be able to reuse code
Motivation:
Some of the features we want to support can only be supported by some of the SslContext implementations. We should allow configuring these in a consistent way, just as we do with Channel / ChannelOption
Modifications:
- Add SslContextOption and add builder methods that take these
- Add OpenSslContextOption and define two options there which are specific to openssl
Result:
More flexible configuration and implementation of SslContext
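As a hedged sketch of the intent (the `option(...)` builder method and the concrete `USE_TASKS` option are assumed here as one plausible OpenSSL-specific option), configuration could look like:
```java
import io.netty.handler.ssl.OpenSslContextOption;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

final class SslContextOptionExample {
    static SslContext build() throws Exception {
        // Options unknown to the chosen SslContext implementation can simply be
        // ignored or rejected by it, mirroring Channel / ChannelOption semantics.
        return SslContextBuilder.forClient()
                .sslProvider(SslProvider.OPENSSL)
                .option(OpenSslContextOption.USE_TASKS, true) // OpenSSL-specific
                .build();
    }
}
```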
Motivation:
The JDK deflate implementation added support for operating on ByteBuffers in Java 11 or so.
This means that we don't need to restrict that implementation to ByteBufs that are heap-based and can expose arrays.
Modification:
Add clauses to JdkZlibEncoder and JdkZlibDecoder for handling ByteBufs that don't have arrays but do have exactly one NIO ByteBuffer.
Expand the test coverage in JdkZlibTest to include all relevant combinations of buffer types and data types.
Result:
The JdkZlibEncoder and JdkZlibDecoder should now work on basically all non-composite ByteBufs, and likely also composite ByteBufs that have exactly one component.
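For reference, this is the Java 11 `Deflater` ByteBuffer API the new clauses can lean on (a minimal sketch, not Netty code):
```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

final class DeflateByteBufferExample {
    static int compress() {
        Deflater deflater = new Deflater();
        // A direct buffer has no backing array, which the pre-Java-11 API required.
        ByteBuffer input = ByteBuffer.allocateDirect(16);
        input.put("hello".getBytes(StandardCharsets.US_ASCII));
        input.flip();
        deflater.setInput(input);   // ByteBuffer overload, since Java 11
        deflater.finish();
        ByteBuffer output = ByteBuffer.allocateDirect(64);
        int written = deflater.deflate(output); // ByteBuffer overload, since Java 11
        deflater.end();
        return written;
    }
}
```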
Motivation:
This library is obsolete; it hasn't been updated since 2013.
Modification:
Remove jzlib dependency, integration code and tests.
Result:
- No more jzlib support.
- Less code.
- The JdkZlib* code can now be simplified because it no longer shares anything with jzlib.
Motivation:
In WriteTimeoutHandler we assumed that the executor used to schedule the timeout is the same one backing the write promise. This may not be true, which can cause concurrency issues.
Modifications:
Ensure we are on the right thread when trying to modify the doubly-linked list and, if not, schedule the modification on the right thread.
Result:
Fixes https://github.com/netty/netty/issues/11053
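The fix follows the usual Netty executor-affinity idiom, roughly as in this sketch (names here are illustrative, not the exact WriteTimeoutHandler code):
```java
import io.netty.util.concurrent.EventExecutor;

final class ExecutorAffinityExample {
    // Mutate the shared doubly-linked list only on its owning executor;
    // if called from another thread, hop over to that executor first.
    static void removeTask(EventExecutor executor, Runnable unlinkFromList) {
        if (executor.inEventLoop()) {
            unlinkFromList.run();
        } else {
            executor.execute(unlinkFromList);
        }
    }
}
```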
Motivation:
Channels need to have their configurations applied before we can call out to user-code via handlerAdded and initChannel.
Modification:
This adds tests for this behaviour, and fixes their failures.
Result:
Channel initialisers now have access to channel configuration and attributes.
Motivation:
We need to ensure that we call queue.remove() before we call writeAndFlush(), as this operation may trigger an event that also touches the queue and removes from it. If we fail to do so we may see NoSuchElementExceptions.
Modifications:
- Call queue.remove() before calling writeAndFlush(...)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/11046
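A minimal sketch of the fixed ordering (types simplified; not the exact code):
```java
import io.netty.channel.ChannelHandlerContext;
import java.util.Queue;

final class RemoveBeforeWriteExample {
    static void drainOne(ChannelHandlerContext ctx, Queue<Object> queue) {
        // Remove first: writeAndFlush(...) may fire an event that also removes
        // from the queue, and removing afterwards could then throw
        // NoSuchElementException.
        Object msg = queue.remove();
        ctx.writeAndFlush(msg);
    }
}
```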
This brings forward the build and release automation changes from 4.1 (#10879, #10883, #10884, #10886, #10888, #10889, #10893, #10900, #10933, #10945, #10966, #10968, #11002, and #11019) to 5.0.
Details are as follows:
* Use GitHub workflows for CI (#10879)
Motivation:
We should just use GitHub Actions for the CI
Modifications:
- Adjust docker / docker compose files
- Add different workflows and jobs to deploy and build the project
Result:
Don't depend on external CI services
* Fix non leak build condition
* Only use build and deploy workflows for 4.1 for now
* Add deploy job for cross compiled aarch64 (#10883)
Motivation:
We should also deploy snapshots for our cross compiled native jars.
Modifications:
- Add job and docker files for deploying cross compiled native jars
- Ensure we map the maven cache into our docker containers
Result:
Deploy aarch64 jars and re-use cache
* Use correct docker-compose file to deploy cross compiled artifacts
* Use correct docker-compose task to deploy for cross compiled artifacts
* Split pr and normal build (#10884)
Motivation:
We should use separate workflows for PR and normal builds
Modifications:
- Split workflows
- Better cache reuse
Result:
Cleanup
* Only deploy snapshots for one arch
Motivation:
We need to find a way to deploy SNAPSHOTS for different archs with the same timestamp; otherwise it will cause problems.
See https://github.com/netty/netty/issues/10887
Modification:
Skip all deploys other than x86_64
Result:
Users are able to use SNAPSHOTS for x86_64
* Use maven cache when running analyze job (#10888)
Motivation:
To prevent failures due to problems while downloading dependencies, we should cache them
Modifications:
Add maven cache
Result:
No more failures due to problems while downloading dependencies
* Also include one PR job that uses boringssl (#10886)
Motivation:
When validating PRs we should also at least run one job that uses boringssl
Modifications:
- Add job that uses boringssl
- Cleanup docker compose files
- Fix buffer leak in test
Result:
Also run with boringssl when PRs are validated
* Use matrix for job configurations (#10889)
Motivation:
We can use the matrix feature to define our jobs. This reduces a lot of config
Modification:
Use job matrix
Result:
Easier to maintain
* Correctly deploy artifacts that are built on different archs (#10893)
Motivation:
We need to take special care when deploying snapshots as we need to generate the jars in multiple steps
Modifications:
- Use the nexus staging plugin to stage jars locally in multiple steps
- Add extra job that will merge these staged jars and deploy these
Result:
Fixes https://github.com/netty/netty/issues/10887
* Don't use cron for PRs
Motivation:
It doesn't make sense to use cron for PRs
Modifications:
Remove cron config
Result:
Cleanup
* We run all combinations when validating the PR; let's just use one type for normal pushes
Motivation:
Let us just only use one build config when building the 4.1 branch.
Modifications:
As we already do a full validation when doing the PR builds, we can use just one build config for pushes to the "main" branches
Result:
Faster build times
* Update action-docker-layer-caching (#10900)
Motivation:
We are three releases behind.
Modifications:
Update to latest version
Result:
Use up-to-date action-docker-layer-caching version
* Verify we can load native modules and add job that verifies on aarch64 as well (#10933)
Motivation:
As shown in the past, we need to verify that we can actually load the native code, as otherwise we may introduce regressions.
Modifications:
- Add new maven module which tests loading of native modules
- Add job that will also test loading on aarch64
Result:
Less likely to introduce regressions related to loading native code in the future
* Let script fail if one command fails (#10945)
Motivation:
We should use `set -e` to ensure we fail the script if one command fails.
Modifications:
Add set -e to script
Result:
Fail fast
* Use action to report unit test errors (#10966)
Motivation:
To make it easier to understand why the build fails, let's use an action that will report which unit test failed
Modifications:
- Replace custom script with action-surefire-report
Result:
Easier to understand test failures
* Use custom script to check for build failures (#10968)
Motivation:
It turns out we can't use the action to check for build failures as it can't be used when a PR is done from a fork. Let's just use our simple script.
Modifications:
- Replace action with custom script
Result:
Builds for PRs that are done via forks work again.
* Publish test results after PR run (#11002)
Motivation:
To make it easier to understand why a build failed, let us publish the test results
Modifications:
Use a new workflow to be able to publish the test reports
Result:
Easier to understand why a PR failed
* Fix test reports name
* Add workflow to cut releases (#11019)
Motivation:
Doing releases manually is error-prone; it would be better if we could do it via a workflow
Modification:
- Add workflow to cut releases
- Add related scripts
Result:
Be able to easily cut a release via a workflow
* Update build for master branch
Motivation:
The build changes were brought forward from 4.1, and contain many things specific to 4.1.
Modification:
Changed baseline Java version from 8 to 11, and changed branch references from "4.1" to "master".
Result:
Builds should now work for the master branch.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
For protocols like QUIC, using UDP_SEGMENT (GSO) can help reduce the
overhead quite a bit. We should support it.
Modifications:
- Add a SegmentedDatagramPacket which can be used to make use of UDP_SEGMENT
- Add unit test
Result:
Be able to make use of UDP_SEGMENT
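A hedged usage sketch (the 1200-byte segment size is illustrative; the class location in the epoll transport is as added by this change):
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.epoll.SegmentedDatagramPacket;
import java.net.InetSocketAddress;

final class GsoExample {
    static void send(Channel ch, ByteBuf payload, InetSocketAddress recipient) {
        // The kernel splits the single large payload into datagrams of at most
        // 1200 bytes via UDP_SEGMENT, instead of one syscall per datagram.
        ch.writeAndFlush(new SegmentedDatagramPacket(payload, 1200, recipient));
    }
}
```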
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface that allows arbitrary objects to convert themselves into ByteBuf objects.
- Everywhere in the code where we did an instanceof check for ByteBuf, we now do an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
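Based on the description, the interface amounts to something like this sketch:
```java
import io.netty.buffer.ByteBuf;

// ByteBuf implements this itself and returns `this`, so existing code paths
// keep working; alternative implementations convert on demand.
public interface ByteBufConvertible {
    ByteBuf asByteBuf();
}
```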
Motivation:
- Underlying buffer usage might be erroneous when releasing buffers internally
in HttpPostMultipartRequestDecoder.
Two bugs occur:
1) The final file upload seems not to be of the right size.
2) Memory, even in Disk mode, increases continuously, while it shouldn't.
- The method `getByte(position)` is called too often within the current implementation
of the HttpPostMultipartRequestDecoder.
This implies too much activity, which is visible when PARANOID mode is active.
This is also true in standard mode.
Apply the same fix on the buffer from HttpPostMultipartRequestDecoder to HttpPostStandardRequestDecoder
as was made previously.
Finally, in order to ensure we do not rewrite already decoded HttpData when decoding
the next ones within a multipart body, we must ensure the buffers are copied and not a retained slice.
Modification:
- Add some tests to check consistency of HttpPostMultipartRequestDecoder.
Add a package-protected method for testing purposes only.
- Use the `bytesBefore(...)` method instead of `getByte(pos)` in order to limit external
access to the underlying buffer, by iteratively retrieving the beginning of the correct start
position. It is used to find both LF/CRLF and the delimiter.
Two methods in HttpPostBodyUtil were created for that.
- The undecodedChunk is copied when a chunk is added to a multipart HttpData as it is loaded.
The same buffer is also rewritten in order to release the copied memory part.
Result:
Just for note: for Memory, Disk, or Mixed mode factories, the release has to be done as follows:
```java
for (InterfaceHttpData httpData : decoder.getBodyHttpDatas()) {
    httpData.release();
    factory.removeHttpDataFromClean(request, httpData);
}
factory.cleanAllHttpData();
decoder.destroy();
```
The memory used is minimal in Disk or Mixed mode. In Memory mode, a big file is still
held in memory, but no longer in the undecodedChunk; it is in its own (copied) buffer.
In terms of benchmarking, the results are:
```
Original code:
Benchmark                                                                           Mode  Cnt  Score   Error  Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel   thrpt    6  0,152 ± 0,100  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel   thrpt    6  0,543 ± 0,218  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel   thrpt    6  0,001 ± 0,001  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel     thrpt    6  0,615 ± 0,070  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel  thrpt    6  0,114 ± 0,063  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel  thrpt    6  0,664 ± 0,034  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel  thrpt    6  0,001 ± 0,001  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel    thrpt    6  0,620 ± 0,140  ops/ms

New code:
Benchmark                                                                           Mode  Cnt  Score   Error  Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel   thrpt    6  4,037 ± 0,358  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel   thrpt    6  4,226 ± 0,471  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel   thrpt    6  0,875 ± 0,029  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel     thrpt    6  4,346 ± 0,275  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel  thrpt    6  2,044 ± 0,020  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel  thrpt    6  2,278 ± 0,159  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel  thrpt    6  0,174 ± 0,004  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel    thrpt    6  2,370 ± 0,065  ops/ms
```
In short, at Simple level the new code is about 7 times faster for big file transfers
and about 4 times faster for a high number of HttpData.
At Paranoid level, the new code is about 800 times faster for big file transfers
and about 170 times faster for a high number of HttpData.
Motivation:
It is possible for two separate threads to race on recycling an object.
If this happens, the object might be added to a WeakOrderQueue when it shouldn't be.
The end result of this is that an object could be acquired multiple times, without a recycle in between.
Effectively, it ends up in circulation twice.
Modification:
We fix this by making the update to the lastRecycledId field of the handle an atomic state transition.
Only the thread that "wins" the race and succeeds in their state transition will be allowed to recycle the object.
The others will bail out on their recycling.
We use weakCompareAndSet because we only need the atomicity guarantee, and the program order within each thread is sufficient.
Also, spurious failures just means we won't recycle that particular object, which is fine.
Result:
Objects no longer risk circulating twice due to a recycle race.
This fixes #10986
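An illustrative sketch of the guarded state transition (field and updater names assumed, not the exact Recycler code):
```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

final class RecycleRaceSketch {
    private volatile int lastRecycledId;
    private static final AtomicIntegerFieldUpdater<RecycleRaceSketch> LAST_RECYCLED_ID =
            AtomicIntegerFieldUpdater.newUpdater(RecycleRaceSketch.class, "lastRecycledId");

    boolean tryRecycle(int recycleId) {
        // Only the winner of this transition may recycle; weakCompareAndSet is
        // enough since a spurious failure merely skips recycling this object.
        return LAST_RECYCLED_ID.weakCompareAndSet(this, 0, recycleId);
    }
}
```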
Motivation:
It is not uncommon to run Netty on OS X without the specific
`MacOSDnsServerAddressStreamProvider`. The current log message is too
verbose because it prints a full stack trace on the console while a
simple logging message would have been enough.
Modifications:
- Print a `WARN` message when the `MacOSDnsServerAddressStreamProvider`
class is not found;
- Print an `ERROR` message with a stack trace when the class was found
but could not be loaded due to some other reason;
Result:
Less noise in logs.
Motivation:
If two different headers end up in the same hash bucket, and you are iterating the header that is not the first in the bucket, and you use the iterator to remove the first element returned from the iterator, then you would get a NullPointerException.
Modification:
Change the DefaultHeaders iterator remove method to re-iterate the hash bucket and unlink the entry once found, if we don't have an existing iteration starting point.
Also made DefaultHeaders.remove0 package private to avoid a synthetic method indirection.
Result:
Removing via DefaultHeaders iterators is now robust against hash collisions.
Motivation:
When TLSv1.3 is used (or TLS_FALSE_START) together with mTLS, the handshake is considered successful before the server has actually verified the key material provided by the client. If the verification fails we currently just close the stream without any extra information, which makes it very hard to debug on the client side.
Modifications:
- Propagate SSLExceptions to the active streams
- Add unit test
Result:
Better visibility into why a stream was closed
Motivation:
#10995
When `io.netty.channel.unix.Socket` is IPv6 and joins a multicast group with an IPv4 address, it causes `io.netty.channel.ChannelException: setsockopt() failed: Invalid argument` (at least on `Linux centos.dev 4.18.0-240.10.1.el8_3.x86_64 #1 SMP Mon Jan 18 17:05:51 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux`).
Modification:
Check if the target group address is IPv6 before calling `io.netty.channel.epoll.LinuxSocket#joinGroup(java.net.InetAddress, java.net.NetworkInterface, java.net.InetAddress)`.
I'm not sure if this modification is correct, but I checked the source code of Java NIO:
```
Java_sun_nio_ch_Net_canJoin6WithIPv4Group0(JNIEnv* env, jclass cl)
{
#if defined(__APPLE__)
/* IPV6_ADD_MEMBERSHIP can be used to join IPv4 multicast groups */
return JNI_TRUE;
#else
/* IPV6_ADD_MEMBERSHIP cannot be used to join IPv4 multicast groups */
return JNI_FALSE;
#endif
}
```
It seems an IPv6 socket can't join an IPv4 group, except on OS X.
Result:
Tested on `Linux 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux`; the `setsockopt() failed: Invalid argument` exception is fixed.
Fixes #10995
Motivation:
The `!fastOpen` part of `active || !fastOpen` is always false.
Modification:
- Remove `!fastOpen` and keep only `active` as a `flushAtEnd` flag for
`startHandshakeProcessing`;
- Update comment;
Result:
Simplified `flushAtEnd` flag computation in `SslHandler#handlerAdded`.
Support TCP Fast Open for clients and make SslHandler take advantage
Motivation:
- TCP Fast Open allows us to send a small amount of data alongside the initial SYN packet when establishing a TCP connection.
- The TLS Client Hello packet is small enough to fit in there, and is also idempotent (another requirement for using TCP Fast Open), so we can save a round-trip when establishing TLS connections with TFO.
Modification:
- Add support for client-side TCP Fast Open for Epoll, and also lower the Linux kernel version requirement to 3.6.
- When adding the SslHandler to a pipeline, if TCP Fast Open is enabled for the channel (and the channel is not already active) then start the handshake early by writing it to the outbound buffer.
- An important detail to note here is that the outbound buffer is not flushed at this point, like it would be for normal handshakes. The flushing happens later, as part of establishing the TCP connection.
Result:
- It is now possible for clients (on epoll) to open connections with TCP Fast Open.
- The SslHandler automatically detects when this is the case, and now sends its Client Hello message as part of the initial data in the TCP Fast Open flow when available, saving a round-trip when establishing TLS connections.
Co-authored-by: Colin Godsey <crgodsey@gmail.com>
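A hedged sketch of the client-side setup (option name as exposed by the epoll transport; group/handler wiring omitted):
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollSocketChannel;

final class TfoClientExample {
    static Bootstrap configure(EventLoopGroup group) {
        // With TCP_FASTOPEN_CONNECT enabled, data written before the channel is
        // active (such as SslHandler's early Client Hello) can ride along with
        // the SYN once connect() is issued.
        return new Bootstrap()
                .group(group)
                .channel(EpollSocketChannel.class)
                .option(EpollChannelOption.TCP_FASTOPEN_CONNECT, true);
    }
}
```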
Motivation:
Netty's current GraalVM dependency version is too old, so the plugin needs to be upgraded.
Modification:
Upgrade the GraalVM version to the latest version.
Result:
Use up-to-date version.
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
The testGlobalWriteThrottle test is flaky and has failed our build multiple times now. Let's disable it for now, until we have time to investigate.
Modifications:
Disable flaky test
Result:
Fewer failures during the build
Motivation:
When etcResolver/hosts files are parsed, FileInputStream.read(...) is internally called by
- UnixResolverDnsServerAddressStreamProvider#parseEtcResolverSearchDomains
- UnixResolverDnsServerAddressStreamProvider#parseEtcResolverOptions
- HostsFileParser#parse
This causes the error below when BlockHound is enabled:
```
reactor.blockhound.BlockingOperationError: Blocking call! java.io.FileInputStream#readBytes
    at java.io.FileInputStream.readBytes(FileInputStream.java)
    at java.io.FileInputStream.read(FileInputStream.java:255)
```
Modifications:
- Add whitelist entries to BlockHound configuration
- Fix typos in UnixResolverDnsServerAddressStreamProvider
- Add tests
Result:
Fixes #11004
* Revert "Add a profile for debugging tests that run from Maven (#11011)"
This reverts commit 83895f0f
The same functionality is already natively available in surefire, by adding the `-Dmaven.surefire.debug` flag to Maven.
* Update surefire/failsafe version
These new versions cope better when our tests print to STDOUT and disturb the progress processing that these plugins do.
Motivation:
At the moment we always set SSL_OP_NO_TICKET when building our context. The problem is that this also disables resumption for TLSv1.3 in BoringSSL, as BoringSSL only supports stateless resumption for TLSv1.3, which uses tickets.
We should clear this option when TLSv1.3 is enabled, to be able to resume sessions. This is also in line with the OpenJDK, which enables this for TLSv1.3 by default as well.
Modifications:
Check the enabled protocols and, if TLSv1.3 is among them, clear SSL_OP_NO_TICKET.
Result:
Be able to resume sessions for TLSv1.3 when using BoringSSL.
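Roughly, against the netty-tcnative API (a sketch under the assumption that the protocol check happens at context build time):
```java
import io.netty.internal.tcnative.SSL;
import io.netty.internal.tcnative.SSLContext;

final class Tls13SessionTicketSketch {
    static void maybeEnableTickets(long ctx, String[] enabledProtocols) {
        for (String protocol : enabledProtocols) {
            if ("TLSv1.3".equals(protocol)) {
                // BoringSSL only supports stateless (ticket-based) resumption
                // for TLSv1.3, so SSL_OP_NO_TICKET must not stay set.
                SSLContext.clearOptions(ctx, SSL.SSL_OP_NO_TICKET);
                return;
            }
        }
    }
}
```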
Motivation:
`File.createTempFile(String, String)` will create a temporary file in the system temporary directory ('java.io.tmpdir'). The permissions on that file utilize the umask. In a majority of cases, this means that the file java creates has the permissions `-rw-r--r--`; thus, any other local user on that system can read the contents of that file.
This can be a security concern if any sensitive data is stored in this file.
This was reported by Jonathan Leitschuh <jonathan.leitschuh@gmail.com> as a security problem.
Modifications:
Use Files.createTempFile(...), which will use safe defaults when running on Java 7 and later. If running on Java 6 there isn't much we can do, which is fair enough, as Java 6 shouldn't be considered "safe" anyway.
Result:
Create temporary files with sane permissions by default.
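For illustration, the safer JDK call (on POSIX file systems, `Files.createTempFile` applies owner-only permissions instead of honoring the umask):
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class TempFileExample {
    static Path newTempFile() throws IOException {
        // Created as rw------- on POSIX file systems by default.
        return Files.createTempFile("netty", ".tmp");
    }
}
```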
Motivation:
ReadOnlyByteBuf and ReadOnlyByteBuffer are not writable, but their writableBytes-related
methods return non-zero values. This is inconsistent with the read-only behavior
of these buffer types.
Modifications:
- ReadOnlyByteBuf and ReadOnlyByteBuffer writableBytes-related methods should
return 0
Result:
More correct ReadOnlyByteBuf and ReadOnlyByteBuffer behavior with respect to
writability.
Motivation:
DecodeHexBenchmark needs to be less branch-predictor-friendly
to mimic the "real" behaviour while decoding.
Modifications:
DecodeHexBenchmark uses a larger set of inputs, picking them at
random on each iteration, and the benchmarked method is made non-inlineable.
Result:
DecodeHexBenchmark is more trustworthy when showing the performance
difference between different decoding methods.
Motivation:
When a user needs a 'byteBuf.capacity' in the range [7168, 8192], the size of 'chunk.subpages' may become inflated as many such byteBufs are released, which is not consistent with other 'byteBuf.capacity' values.
Modification:
When maxNumElems == 1, we need to consider removing the subpage from the pool.
Result:
Fixes#10896.
Co-authored-by: zxingy <zxingy@servyou.com.cn>
Motivation:
The changes introduced in 1c230405fd caused various issues, while the fix itself is not considered critical. For now it is considered best to just roll back and investigate more.
Modifications:
- Revert changes done in 1c230405fd (and later) for
the post decoders.
- Ensure we give memory back to the system as soon as possible in a safe manner
Result:
Fixes https://github.com/netty/netty/issues/10973
Motivation:
It was not 100% clear who is responsible for calling close() on the InputStream.
Modifications:
Clarify javadocs.
Result:
Related to https://github.com/netty/netty/issues/10974
Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
Motivation:
TLS_FALSE_START slightly changes the "flow" during the handshake, which may cause surprises for the end-user. We should disable it by default again and later add a way for the user to enable it.
Modification:
This reverts commit 514d349e1f.
Result:
Restore "old flow" during TLS handshakes.
Allow and skip null handlers when adding a vararg list of handlers
Motivation
Allowing null handlers allows for more convenient idioms when
conditionally adding handlers, e.g.:
```java
ch.pipeline().addLast(
    new FooHandler(),
    condition ? new BarHandler() : null,
    new BazHandler()
);
```
Modifications
* Change addFirst(..) and addLast(..) to skip null handlers, rather than
break or short-circuit.
* Add new unit tests.
Result
* Makes addFirst(..) and addLast(..) behavior more consistent
* Resolves https://github.com/netty/netty/issues/10728
Motivation:
We should add some more NULL checks to ensure we don't SEGV in some cases
Modifications:
Add more NULL checks
Result:
More robust implementation
Motivation:
We didn't correctly filter out TLSv1.3 ciphers when TLSv1.3 is not enabled.
Modifications:
- Filter out ciphers that are not supported due to the selected TLS version
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10911
Co-authored-by: Bryce Anderson <banderson@twitter.com>
Motivation:
The testWriteAfterShutdownOutputNoWritabilityChange() failed a few times on the CI randomly. Let's skip it for now while we investigate and see if there is anything we can do to make the test less flaky on the CI.
Modifications:
Add @Ignore on the testWriteAfterShutdownOutputNoWritabilityChange method
Result:
Less flaky CI
Motivation:
We had the architecture hardcoded in the dependency, which is not correct, as this makes the build fail on Apple Silicon (M1). We also missed some dependencies on other BSDs.
Modifications:
- Fix classifier
- Add missing dependencies
Result:
Be able to build on Apple Silicon (M1)
Motivation:
We need to ensure we copy the attributes and options when bootstrapping the channel, as otherwise we may change the underlying Entry.
This is similar to what was reported in https://github.com/netty/netty-incubator-codec-quic/issues/152.
Modifications:
- Do a copy and re-use methods
- Add unit tests
Result:
Don't affect attributes / options of channels that are already bootstrapped
Motivation:
If the given port is already bound, the PcapWriteHandlerTest will sometimes fail.
Modification:
Use a dynamic port by binding to `0`, which is more reliable
Result:
Less flaky
Motivation:
We currently append extensions to the user-defined "sec-websocket-extensions" header. This can cause duplicated entries.
Modifications:
* Replace existing `WebSocketExtensionUtil#appendExtension` private helper with a new `computeMergeExtensionsHeaderValue`. User defined parameters have higher precedence.
* Add tests (existing method wasn't tested)
* Reuse code for both client and server side (code was duplicated).
Result:
No more duplicated entries when user defined extensions overlap with the ones Netty generated.
Motivation:
An exception occurs in the MQTT message builder class (`io.netty.handler.codec.mqtt.MqttMessageBuilders`) when trying to create a message with a packetId > 32767.
Modification:
- Add a method that takes an int.
- Deprecate the old methods that take a short.
Result:
Fixes #10929.
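A hedged sketch of the resulting usage (builder and method names assumed from the description, not verified against the final API):
```java
import io.netty.handler.codec.mqtt.MqttMessage;
import io.netty.handler.codec.mqtt.MqttMessageBuilders;

final class MqttPacketIdExample {
    static MqttMessage ack() {
        // MQTT packet ids are unsigned 16-bit (up to 65535); 40000 does not fit
        // in a signed short, so the builder now accepts an int.
        return MqttMessageBuilders.pubAck()
                .packetId(40000)
                .build();
    }
}
```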
Motivation:
We should override the get*ApplicationProtocol() methods in ReferenceCountedOpenSslEngine to make it easier for users to obtain the selected application protocol
Modifications:
Add missing overrides
Result:
Easier for the user to get the selected application protocol (if any)
Motivation:
We should expose some methods as protected to make it easier to write custom SslContext implementations.
This will be reused by the code for https://github.com/netty/netty-incubator-codec-quic/issues/97
Modifications:
- Add protected to some static methods which are useful for sub-classes
- Remove some unused methods
- Move the *Wrapper classes to the util package and make them public
Result:
Easier to write custom SslContext implementations
Motivation:
https://github.com/netty/netty/pull/10765 added support for push promise and priority frames when using the Http2FrameCodec. Unfortunately it didn't correctly guard against the possibility of receiving a priority frame for a non-existent stream, which resulted in an NPE.
Modifications:
- Ignore priority frame for non existing stream
- Correctly implement equals / hashCode for DefaultHttp2PriorityFrame
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10941
Motivation:
We had multiple bugs in JdkZlibDecoder which could lead to decoding errors when the data was received in a fragmented manner.
Modifications:
- Correctly handle skipping of comments
- Correctly handle footer / header decoding
- Add unit test that verifies the correct handling of fragmentation
Result:
Fixes https://github.com/netty/netty/issues/10875
Motivation:
We produced a lot of noise when loading native libraries, as we always included the stack trace if one loading mechanism failed. We should not include the stack trace in the debug logging when a single mechanism fails; we log all the stack traces anyway when all of the mechanisms fail.
Modifications:
Make logging less aggressive
Result:
Less confusing behaviour for the end-user