Motivation:
Older versions of logback are affected by [CVE-2017-5929]. While it is an optional dependency, we should upgrade
Modifications:
Upgrade to 1.2.3
Result:
No more reports about using affected logback version
Motivation:
c443bc40fa removed the CI profile, but we missed adjusting the docker-compose.yaml file accordingly
Modifications:
Remove -Pci as the profile doesn't exist anymore
Result:
No more warning during build
Motivation:
We used assumeTrue(...) in some places before to detect if we could load the native library, but this could lead to a situation where we do not notice that native loading is broken.
Modifications:
Always fail if we can't load the native library
Result:
Ensure we don't cause any regressions in the native loading code in the future
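As a minimal sketch of the idea (using the epoll transport as an example; the actual tests may guard a different native component):

    import io.netty.channel.epoll.Epoll;
    import org.junit.Test;

    public class NativeLoadingTest {
        @Test
        public void nativeLibraryMustLoad() {
            // Previously assumeTrue(Epoll.isAvailable()) would silently skip the test.
            // ensureAvailability() rethrows the original cause and fails the build instead.
            Epoll.ensureAvailability();
        }
    }
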
Motivation:
ccca3959fd fixed a regression which caused the native resolver code to not be loaded but this was not reported by the tests.
Modifications:
Adjust tests to actually fail the build if we can't load the native lib
Result:
Ensure we don't introduce another regression in the future
Motivation:
Cannot load the native library for DNS resolutions on MacOS.
The exception below is observed:
18:02:43.453 [Test worker] ERROR i.n.r.d.DnsServerAddressStreamProviders - Unable to load io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider, fallback to system defaults. This may result in incorrect DNS resolutions on MacOS.
java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.netty.resolver.dns.DnsServerAddressStreamProviders.<clinit>(DnsServerAddressStreamProviders.java:64)
at io.netty.resolver.dns.DnsNameResolverBuilder.<init>(DnsNameResolverBuilder.java:60)
at reactor.netty.transport.NameResolverProvider.newNameResolverGroup(NameResolverProvider.java:432)
...
Caused by: java.lang.UnsatisfiedLinkError: io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.resolvers()[Lio/netty/resolver/dns/macos/DnsResolver;
at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.resolvers(Native Method)
at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.retrieveCurrentMappings(MacOSDnsServerAddressStreamProvider.java:127)
at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.<init>(MacOSDnsServerAddressStreamProvider.java:123)
This is a regression introduced by #11239
Modification:
When checking for the OS, an exception must be thrown when the OS is not MacOS
Result:
The native library for DNS resolutions on MacOS can be loaded
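A minimal sketch of such a guard (illustrative only; the method and message below are assumptions, not the exact change):

    import io.netty.util.internal.PlatformDependent;

    final class MacOsGuard {
        // Fail fast when we are not on MacOS so a broken load is reported
        // instead of being silently swallowed.
        static void checkMacOs() {
            if (!PlatformDependent.isOsx()) {
                throw new IllegalStateException("Only supported on MacOS/OSX");
            }
        }
    }
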
Motivation:
When we changed the netty-all artifact to not include any sources, we also removed the ability to generate the javadocs / xref files for our website
Modifications:
- Add a new profile which will generate the files
- Add a script which generates all the files and copies them over to the netty-website
Result:
Easier to generate the files for the website
Motivation:
The current initialization of Slf4JLoggerFactory does not use a singleton.
Modification:
Use Slf4JLoggerFactory.INSTANCE to initialize Slf4JLoggerFactory.
Result:
The instance of Slf4JLoggerFactory is now a singleton.
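A small sketch of the difference (assuming the factory is installed via InternalLoggerFactory; the exact call site may differ):

    import io.netty.util.internal.logging.InternalLoggerFactory;
    import io.netty.util.internal.logging.Slf4JLoggerFactory;

    final class LoggingSetup {
        static void install() {
            // Before: new Slf4JLoggerFactory() created a fresh factory object each time.
            // After: reuse the shared singleton instance.
            InternalLoggerFactory.setDefaultFactory(Slf4JLoggerFactory.INSTANCE);
        }
    }
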
Motivation:
After a release is done we also need to copy the apidocs and xref to the netty-website
Modifications:
Add a script that does the copying etc.
Result:
Less manual steps to remember
Motivation:
ParserImpl is stateless, so we can use the same instance multiple times
Modifications:
- Make constructor private
- Return the same instance all the time
Result:
Less object creation
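A minimal sketch of the pattern (class name reused for illustration; the real ParserImpl is nested in HostsFileEntriesProvider):

    // Stateless parser, so a single shared instance is safe to reuse.
    final class ParserImpl {
        static final ParserImpl INSTANCE = new ParserImpl();

        private ParserImpl() {
            // private constructor: callers must use the shared INSTANCE
        }
    }
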
Motivation:
DefaultHostsFileEntriesResolver should provide all of the hosts file's entries for a hostname when
DnsNameResolver#resolveAll is used, as opposed to the current implementation where only the first
entry is taken into consideration
Modification:
- Add DefaultHostsFileEntriesResolver#addresses to provide all hosts file's entries for a hostname
- Add HostsFileEntriesProvider to provide all hosts file's entries for a hostname and to keep
backwards compatibility for HostsFileEntries and HostsFileParser
- DnsNameResolver#resolveAll uses the new DefaultHostsFileEntriesResolver#addresses
- BlockHound configuration: replace HostsFileParser#parse with HostsFileEntriesProvider$ParserImpl#parse
as the latter does the parsing
- Add JUnit tests
Result:
Fixes #10834
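A short usage sketch of the new lookup (the return type and parameter order shown here are assumptions based on the description above):

    import io.netty.resolver.DefaultHostsFileEntriesResolver;
    import io.netty.resolver.ResolvedAddressTypes;
    import java.net.InetAddress;
    import java.util.List;

    final class HostsLookupSketch {
        static List<InetAddress> lookup(String hostname) {
            DefaultHostsFileEntriesResolver resolver = new DefaultHostsFileEntriesResolver();
            // Returns every matching hosts file entry instead of only the first one.
            return resolver.addresses(hostname, ResolvedAddressTypes.IPV4_PREFERRED);
        }
    }
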
Motivation:
`SelfSignedCertificate` creates certificate and private key files and stores them in a temporary directory. However, if the certificate uses a wildcard hostname that contains an asterisk, e.g. `*.shieldblaze.com`, it will throw an error because `*` is not a valid character in the file system.
Modification:
Replace the asterisk with 'x'
Result:
Fixes #11240
Motivation:
Just use MAVEN_OPTS to set up all the timeouts etc. for dependency downloads. This way we can at least be sure they are applied.
Modifications:
- Use MAVEN_OPTS
- Remove ci profile
- Remove unused settings.xml file
- Always use ./mvnw
Result:
Build stability improvements
Motivation:
We introduced the ability to offload certain operations to an executor that may take some time to complete. At the moment this is not enabled by default when using the OpenSSL based SSL provider. Let's enable it by default as we have had this support for a while now and haven't seen any issues yet. This will also make things less confusing and more consistent with the JDK based provider.
Modifications:
Use true as default value for io.netty.handler.ssl.openssl.useTasks.
Result:
Offloading works with openssl based SSL provider as well by default
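For reference, a hedged sketch of how to opt out again via the property named above (it must be set before Netty's OpenSSL classes are initialized, e.g. very early at startup or via a JVM flag):

    public final class DisableOpenSslTaskOffloading {
        public static void main(String[] args) {
            // Must run before the OpenSSL classes are loaded, otherwise the new
            // default (true) has already been picked up.
            System.setProperty("io.netty.handler.ssl.openssl.useTasks", "false");
            // ... bootstrap the application as usual ...
        }
    }
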
Motivation:
When trying to compile with java16 we should use adopt@1.16*
Modifications:
- Use adopt@1.16.0-1-
- Upgrade to BlockHound 1.0.6 to be able to support Java 16
Result:
Use correct java version / flavor
Motivation:
We should set up the caching so that it can use different restore keys and almost never needs to start from scratch
Modifications:
Adjust the caching config to make use of different restore keys for both the Maven and the Docker caches
Result:
Better cache usage
Motivation:
We should use the same maven cache for all builds so we can re-use as much of the downloaded maven dependencies as possible
Modifications:
- Just use the same cache for all
Result:
Hopefully be able to re-use most of the dependencies
Motivation:
TLSv1 and TLSv1.1 are considered insecure. Let's follow the JDK and disable them by default
Modifications:
- Disable TLSv1 and TLSv1.1 by default when using OpenSSL.
- Add unit tests
Result:
Use only strong TLS versions by default when using OpenSSL
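Applications that really still need the legacy protocols can opt back in explicitly; a minimal sketch using the public builder API:

    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.SslProvider;
    import javax.net.ssl.SSLException;

    final class LegacyTlsContext {
        static SslContext create() throws SSLException {
            return SslContextBuilder.forClient()
                    .sslProvider(SslProvider.OPENSSL)
                    // Explicitly re-enable the legacy versions only if really required.
                    .protocols("TLSv1", "TLSv1.1", "TLSv1.2")
                    .build();
        }
    }
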
Motivation:
Conscrypt does not correctly filter out unsupported TLS versions, which may lead to test failures.
Related to https://github.com/google/conscrypt/issues/1013
Modifications:
- Bump up to latest patch release
- Add workaround
Result:
No more test failures caused by Conscrypt
Motivation:
`PlatformDependent#normalizedOs()` already caches the normalized variant of
the value of the `os.name` system property. Instead of inconsistently
normalizing it in every case, use the utility method.
Modifications:
- `PlatformDependent`: `isWindows0()` and `isOsx0()` use `NORMALIZED_OS`;
- `PlatformDependent#normalizeOs(String)` defines `darwin` as `osx`;
- `OpenSsl#loadTcNative()` does not require `equalsIgnoreCase` because `os`
is already normalized;
- Epoll and KQueue: `Native#loadNativeLibrary()` use `normalizedOs()`;
- Use consistent `Locale.US` for lower case conversion of `os.name`;
- `MacOSDnsServerAddressStreamProvider#loadNativeLibrary()` uses
`PlatformDependent.isOsx()`;
Result:
Consistent approach for `os.name` parsing.
Motivation:
Mac OS specific DNS resolver fails to take into account the search order
of resolvers, causing the wrong resolver to be used in some circumstances
Modifications:
Re-order the array of resolvers using their sort order as an ordering key.
The final order is the opposite of the search order, to make sure that the resolver
with the lower sort order goes last (so it overrides previous ones
in the `resolverMap`).
Result:
Fixes issue https://github.com/netty/netty/issues/11225
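A sketch of the ordering idea (the entry type and its sortOrder field are stand-ins, not the actual native resolver classes):

    import java.util.Arrays;
    import java.util.Comparator;

    final class ResolverEntry {
        final int sortOrder;

        ResolverEntry(int sortOrder) {
            this.sortOrder = sortOrder;
        }

        // Sort descending by sort order so the entry with the LOWEST sort order
        // comes last and overrides earlier entries when the resolver map is built.
        static void orderForMapInsertion(ResolverEntry[] resolvers) {
            Arrays.sort(resolvers, Comparator.comparingInt((ResolverEntry r) -> r.sortOrder).reversed());
        }
    }
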
Motivation:
Netty lacks client side support for decompressing Brotli compressed response bodies.
Modification:
* Introduce an optional dependency on brotli4j by @hyperxpro. It will be up to the user to provide the brotli4j libraries for the target platform on the classpath. brotli4j is currently available for Linux, OSX and Windows, all for x86 only.
* Introduce BrotliDecoder in codec module
* Plug it onto `HttpContentDecompressor` for HTTP/1 and `DelegatingDecompressorFrameListener` for HTTP/2
* Add test in `HttpContentDecoderTest`
* Add `BrotliDecoderTest` that doesn't extend `AbstractDecoderTest`, which looks flaky
Result:
Netty now supports decompressing Brotli compressed response bodies.
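A hedged sketch of a client pipeline that benefits from this (assuming the brotli4j artifacts for the target platform are on the classpath):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.http.HttpClientCodec;
    import io.netty.handler.codec.http.HttpContentDecompressor;

    public class HttpClientInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(new HttpClientCodec());
            // With this change, HttpContentDecompressor also handles
            // "Content-Encoding: br" responses when Brotli support is available.
            ch.pipeline().addLast(new HttpContentDecompressor());
        }
    }
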
Motivation:
0f25213918 introduced some properties that were used to make builds more stable on the CI. All of these properties were duplicated everywhere, which made them hard to maintain.
Modifications:
- Add a profile which sets the properties.
- Just use the profile when building on the CI
Result:
Easier to maintain custom properties for the ci build
Motivation:
It seems to be a known issue that Maven frequently sees connection resets / connection timeouts during CI builds. We should work around these issues like others did:
- https://github.com/kiegroup/kie-wb-common/pull/3416
Modifications:
Add extra Maven options during the build to reduce the likelihood of timeouts / resets
Result:
More stable builds
Motivation:
We should add explicit null checks so it's easier for people to understand why it throws.
Modification:
Add explicit checkNotNull(...)
Result:
Easier for users to understand why it fails.
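A minimal sketch using Netty's own utility (the class and parameter names are illustrative):

    import static io.netty.util.internal.ObjectUtil.checkNotNull;

    final class Example {
        private final Object handler;

        Example(Object handler) {
            // Fails immediately with a clear message ("handler") instead of a
            // harder-to-diagnose NullPointerException later on.
            this.handler = checkNotNull(handler, "handler");
        }
    }
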
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
When checking the latest commit I saw some bad exception messages in MqttVersion, hence I improved them.
Modification:
Improved exception messages in MqttVersion.
Result:
Better exception messages in MqttVersion.
Motivation:
We've seen (very rare) flaky test failures due to timeouts.
They are too rare to analyse properly, but a theory is that on overloaded, small cloud CI instances, it can sometimes take a surprising amount of time to start a thread.
It could be that the event loop thread is getting an unlucky schedule, and takes seconds to start, causing the timeouts to elapse.
Modification:
Increase the initial timeouts in the SSLEngineTest, that could end up waiting for the event loop thread to start.
Also fix a few simple warnings from IntelliJ.
Result:
Hopefully we will not see these tests be flaky again.
Motivation:
The DNS resolver falls back to trying CNAME if no records are found, but it should
only do this for A/AAAA queries. It does not make sense for other query
types and results in a redundant CNAME query that is just going to fail.
Modification:
Check query type before deciding to try CNAME. Only proceed if type is A
or AAAA.
Added unit test to verify CNAME is only tried after A/AAAA queries.
Result:
Fixes #11214.
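A sketch of the guard (illustrative only; the surrounding resolver code and names are assumptions):

    import io.netty.handler.codec.dns.DnsRecordType;

    final class CnameFallback {
        // Only A and AAAA lookups should fall back to a CNAME query; other query
        // types would just produce a redundant query that is going to fail anyway.
        static boolean shouldTryCname(DnsRecordType type) {
            return type == DnsRecordType.A || type == DnsRecordType.AAAA;
        }
    }
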
Motivation:
`Http2FrameCodecBuilder` defines static factory methods `forClient()`
and `forServer()` that should be used to create a new instance.
The default ctor is useful only when users need to override behavior
of the existing builder. Those users should define another way to create
an instance.
Modifications:
- Decrease visibility of `Http2FrameCodecBuilder` default ctor from
`public` to `protected`;
- Add javadoc to clarify responsibilities;
Result:
Users of `Http2FrameCodecBuilder` are not confused why
`new Http2FrameCodecBuilder().build()` works for the server-side, but
does not work for the client-side.
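For reference, a short sketch of the supported entry points via the public factory methods:

    import io.netty.handler.codec.http2.Http2FrameCodec;
    import io.netty.handler.codec.http2.Http2FrameCodecBuilder;

    final class Http2CodecFactory {
        // Use the static factories instead of the (now protected) default constructor.
        static Http2FrameCodec newServerCodec() {
            return Http2FrameCodecBuilder.forServer().build();
        }

        static Http2FrameCodec newClientCodec() {
            return Http2FrameCodecBuilder.forClient().build();
        }
    }
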
Motivation:
The last non-LTS release is JDK16 now.
Modifications:
Update from JDK15 to JDK16 for building, as this is the last non-LTS release at the moment
Result:
Build with latest non-LTS release as well
Motivation:
netty-all already depends on the other netty-* packages so there's no need to also bundle them.
The duplicated classes cause classpath issues, particularly with Java > 8, which reports errors like this:
The package io.netty.buffer is accessible from more than one module: io.netty.all, io.netty.buffer
Modifications:
- Removed bundling tasks from netty-all's maven pom.xml
Result:
- netty-all no longer bundles all classes. Instead, classes are provided by expressed dependencies.
Fixes #4671
Motivation:
It turns out it is quite easy to cause a classloader deadlock in more recent Java updates if you trigger classloading while you are in native code. Because of this we should work around the issue by pre-loading all the classes that need to be accessed in the OnLoad function.
Modifications:
- Preload all classes that would otherwise be loaded by native OnLoad functions.
Result:
Workaround for https://github.com/netty/netty/issues/11209 and https://bugs.openjdk.java.net/browse/JDK-8266310
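A hedged sketch of the idea (class and method names are placeholders; the real change preloads the concrete classes each native OnLoad function touches):

    final class ClassPreloader {
        // Force-loading these classes up front means the JNI OnLoad code only sees
        // already-loaded classes and can no longer trigger classloading (and the
        // associated deadlock) from native code.
        static void tryLoadClasses(ClassLoader loader, String... classNames) {
            for (String name : classNames) {
                try {
                    Class.forName(name, true, loader);
                } catch (ClassNotFoundException ignore) {
                    // best effort: a later failure will produce a clearer error
                }
            }
        }
    }
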
Motivation:
Some of the HttpPostMultipartRequestDecoder specific tests were included in HttpPostRequestDecoderTest. We should rather move these into the correct test class.
Modifications:
Move specific tests
Result:
Cleanup
Motivation:
Two years ago a change removed the default clearing of all HttpData, whether
they are disk based or memory based.
A lot of users were probably releasing HttpData directly, so there was no issue.
But now it seems, and as the Javadoc says, that `decoder.destroy()` shall also clean up
Memory based HttpData, and not only Disk based HttpData as it does currently.
Change:
- Add to the `destroy()` method the necessary code to release, if necessary,
the underlying Memory based HttpData.
- Change one JUnit test (using Mixed, Memory and Disk based factories)
in order to check the correctness of this behavior and to really act
as a handler (releasing buffers or requests).
- Modify one JUnit test to check validity when a delimiter is present in the chunk
but no CRLF/LF (false delimiter), to ensure correctness.
Result:
No more memory leak.
Note that the List and the Map are still not cleaned, since they were not cleaned
before. No change is done on this, since it could introduce backward compatibility issues.
Fixes issues #11175 and #11184
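A short usage sketch of how a handler is expected to clean up (handler and variable names are illustrative):

    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;

    final class UploadHandlerSketch {
        void handle(HttpRequest request) {
            HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(request);
            try {
                // ... offer the HttpContent chunks and consume the HttpData items ...
            } finally {
                // destroy() now also releases Memory based HttpData, not only Disk based ones.
                decoder.destroy();
            }
        }
    }
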
__Motivation__
`Http2FrameCodecBuilder` constructor calls `server()` internally which disallows using certain methods on the builder later. Additionally, the constructor is package private which limits extension of the `Builder` as well as usage outside the available `forClient()` and `forServer()` static methods.
__Modification__
- Introduce a `public` no-arg constructor to `Http2FrameCodecBuilder`.
__Result__
`Http2FrameCodecBuilder` can now be used to create the codec with `Http2Connection` or `Http2ConnectionDecoder` and `Http2ConnectionEncoder` which was earlier not possible due to implicit call to `server()` by the `Http2FrameCodecBuilder` constructor.
Motivation:
ReferenceCountedOpenSslEngine may unwrap data and complete the handshake
in a single unwrap() call. However, it may return a HandshakeStatus of
NEED_UNWRAP instead of FINISHED. This may result in the SslHandler sending
the unwrapped data up the pipeline before notifying that the handshake has
completed, resulting in out-of-order events.
Modifications:
- If the ReferenceCountedOpenSslEngine handshake status is NEED_UNWRAP and data was
produced, or NEED_WRAP and data was consumed, we should call
handshake() to get the current state.
Result:
ReferenceCountedOpenSslEngine correctly indicates when the handshake has
finished if at the same time data was produced or consumed.
Motivation:
Before throwing a TooLongFrameException, we should call skipBytes(...) to skip the bytes that were going to be read
Modification:
- Skip the bytes before throwing
Result:
Actually skip the bytes when we detect too much data
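An illustrative sketch of the pattern in a length-checking decoder (hypothetical helper, not the exact change):

    import io.netty.buffer.ByteBuf;
    import io.netty.handler.codec.TooLongFrameException;

    final class FrameLengthCheck {
        // Discard the oversized payload before throwing so that decoding can
        // continue with the next frame once the exception has been handled.
        static void checkLength(ByteBuf in, int length, int maxLength) {
            if (length > maxLength) {
                in.skipBytes(Math.min(length, in.readableBytes()));
                throw new TooLongFrameException("frame length " + length + " exceeds " + maxLength);
            }
        }
    }
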
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
In the MQTT v3.1 protocol, the default maximum Client Identifier length is 23. However, as discussed in #11114, there are many cases where the server may still receive a client ID with a length greater than 23. Perhaps we should consider letting the user decide whether to accept a client ID longer than 23 on the server side.
Modification:
- Allow to specify max length.
Result:
Give applications a choice to extend the clientId length limitation even for MQTT v3.1 on the server side.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
We need to call destroy() if the constructor of HttpPostMultipartRequestDecoder throws as otherwise we may leak memory.
Modifications:
- Call destroy() if we throw
- Add unit test
Result:
No more leaks when constructor throws
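A minimal sketch of the constructor pattern (the class, parsing step and cleanup are illustrative stand-ins for the decoder internals):

    final class DecoderSketch {
        DecoderSketch(Object request) {
            try {
                parseBody(request); // may throw while HttpData has already been allocated
            } catch (RuntimeException e) {
                // Release everything allocated so far before propagating, otherwise we leak.
                destroy();
                throw e;
            }
        }

        private void parseBody(Object request) { /* ... */ }

        void destroy() { /* release all allocated HttpData */ }
    }
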
Co-authored-by: Frederic Bregier <frederic.bregier@waarp.fr>
Motivation:
We didn't correctly handle the case where no content-type header was found or the charset was illegal, and just threw an NPE or ICE. We should in both cases throw an ErrorDataDecoderException to reflect what is documented in the javadocs.
Modifications:
- Throw correct exception
- Merge private method into the constructor as it is only used there
- Add unit tests
Result:
Throw expected exceptions on decoding errors
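A hedged sketch of one of the two cases, the illegal charset (the missing content-type case is handled similarly; the helper is illustrative):

    import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.ErrorDataDecoderException;
    import java.nio.charset.Charset;
    import java.nio.charset.IllegalCharsetNameException;
    import java.nio.charset.UnsupportedCharsetException;

    final class CharsetParsing {
        static Charset parseCharset(String name) {
            try {
                return Charset.forName(name);
            } catch (IllegalCharsetNameException | UnsupportedCharsetException e) {
                // Surface the documented exception type instead of a raw runtime exception.
                throw new ErrorDataDecoderException(e);
            }
        }
    }
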
Motivation:
In the past we sometimes saw problems when run-on-arch-action was used. We are multiple releases behind; let's update and maybe fix the problems.
Modifications:
Update to latest release
Result:
Use latest run-on-arch-action release
Motivation:
It turned out we didn't run the OpenSSL tests on the CI when we used the non-static version of netty-tcnative.
Modifications:
- Upgrade netty-tcnative to fix segfault when using shared openssl
- Adjust tests to only run session cache tests when openssl supports it
- Fix some more tests to only depend on KeyManager if the underlying openssl version supports it
Result:
Run all OpenSSL tests on the CI even when the shared library is used
Motivation:
When creating a WebSocketServerProtocolConfig, the check that the URI path starts with '/'
only accepts '/' or '//subPath', but '/subPath' should be
accepted as well
Modifications:
In `WebSocketServerProtocolHandshakeHandler.isWebSocketPath()`, treat '/' as a special case
Result:
'/subPath' now passes the check
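A sketch of the adjusted check (illustrative only; parameter names and exact logic are assumptions based on the description above):

    final class WebSocketPathCheck {
        // '/' must be treated as a special case: every request path starts with it,
        // so appending another '/' would wrongly require paths like "//subPath".
        static boolean isWebSocketPath(String websocketPath, String requestedPath, boolean checkStartsWith) {
            if (!checkStartsWith) {
                return requestedPath.equals(websocketPath);
            }
            String prefix = "/".equals(websocketPath) ? websocketPath : websocketPath + '/';
            return requestedPath.startsWith(prefix) || requestedPath.equals(websocketPath);
        }
    }
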