Motivation:
We introduced the ability to offload certain operations to an executor that may take some time to complete. At the moment this is not enabled by default when using the OpenSSL based SSL provider. Let's enable it by default as we have had this support for a while now and haven't seen any issues yet. This will also make things less confusing and more consistent with the JDK based provider.
Modifications:
Use true as the default value for io.netty.handler.ssl.openssl.useTasks.
Result:
Offloading works with the OpenSSL based SSL provider by default as well
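For applications that need the old behaviour, a minimal sketch of overriding the new default via the system property named above, assuming it is set before any Netty SSL classes are initialised:
```
public final class DisableOpenSslTasks {
    public static void main(String[] args) {
        // Property name taken from this change; "false" restores the previous behaviour.
        System.setProperty("io.netty.handler.ssl.openssl.useTasks", "false");
        // ... build the SslContext / bootstrap the application afterwards ...
    }
}
```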
Motivation:
TLSv1 and TLSv1.1 are considered insecure. Let's follow the JDK and disable these by default
Modifications:
- Disable TLSv1 and TLSv1.1 by default when using OpenSSL.
- Add unit tests
Result:
Use only strong TLS versions by default when using OpenSSL
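For applications that still have to talk to legacy peers, a minimal sketch of explicitly opting back in via SslContextBuilder; the protocol list shown is purely illustrative:
```
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

final class LegacyTlsClientContext {
    static SslContext build() throws Exception {
        return SslContextBuilder.forClient()
                .sslProvider(SslProvider.OPENSSL)
                // explicitly opt back in to the legacy protocol versions
                .protocols("TLSv1", "TLSv1.1", "TLSv1.2")
                .build();
    }
}
```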
Motivation:
Conscrypt does not correctly filter out unsupported TLS versions, which may lead to test failures.
Related to https://github.com/google/conscrypt/issues/1013
Modifications:
- Bump up to latest patch release
- Add workaround
Result:
No more test failures caused by Conscrypt
Motivation:
`PlatformDependent#normalizedOs()` already caches normalized variant of
the value of `os.name` system property. Instead of inconsistently
normalizing it in every case, use the utility method.
Modifications:
- `PlatformDependent`: `isWindows0()` and `isOsx0()` use `NORMALIZED_OS`;
- `PlatformDependent#normalizeOs(String)` define `darwin` as `osx`;
- `OpenSsl#loadTcNative()` does not require `equalsIgnoreCase` because `os`
is already normalized;
- Epoll and KQueue: `Native#loadNativeLibrary()` use `normalizedOs()`;
- Use consistent `Locale.US` for lower case conversion of `os.name`;
- `MacOSDnsServerAddressStreamProvider#loadNativeLibrary()` uses
`PlatformDependent.isOsx()`;
Result:
Consistent approach for `os.name` parsing.
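A minimal sketch of how callers can rely on the cached value instead of reading and normalizing `os.name` themselves (the class name is illustrative):
```
import io.netty.util.internal.PlatformDependent;

final class OsCheck {
    // normalizedOs() returns the cached, normalized form of "os.name",
    // so callers don't have to lower-case or trim the property themselves.
    static boolean isLinux() {
        return "linux".equals(PlatformDependent.normalizedOs());
    }
}
```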
Motivation:
We should add explicit null checks so it's easier for people to understand why it throws.
Modification:
Add explicit checkNotNull(...)
Result:
Easier to understand for users why it fails.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
We've seen (very rare) flaky test failures due to timeouts.
They are too rare to analyse properly, but a theory is that on overloaded, small cloud CI instances, it can sometimes take a surprising amount of time to start a thread.
It could be that the event loop thread is getting an unlucky schedule, and takes seconds to start, causing the timeouts to elapse.
Modification:
Increase the initial timeouts in the SSLEngineTest, that could end up waiting for the event loop thread to start.
Also fix a few simple warnings from Intellij.
Result:
Hopefully we will not see these tests be flaky again.
Motivation:
ReferenceCountedOpenSslEngine may unwrap data and complete the handshake
in a single unwrap() call. However it may return a HandshakeStatus of
NEED_UNWRAP instead of FINISHED. This may result in
the SslHandler sending the unwrapped data up the pipeline before
notifying that the handshake has completed, and result in out-of-order
events.
Modifications:
- if ReferenceCountedOpenSslEngine handshake status is NEED_UNWRAP and
produced data, or NEED_WRAP and consumed some data, we should call
handshake() to get the current state.
Result:
ReferenceCountedOpenSslEngine correctly indicates when the handshake has
finished if at the same time data was produced or consumed.
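For illustration, a small sketch against the plain JSSE API showing the ordering callers rely on; this is not the engine's internal code:
```
import java.nio.ByteBuffer;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;
import javax.net.ssl.SSLException;

final class UnwrapStep {
    // A single unwrap() may finish the handshake and produce plaintext at the
    // same time; the FINISHED status has to be acted on before that plaintext
    // is handed further up, otherwise data overtakes the handshake event.
    static boolean finishedHandshake(SSLEngine engine, ByteBuffer in, ByteBuffer out) throws SSLException {
        SSLEngineResult result = engine.unwrap(in, out);
        return result.getHandshakeStatus() == HandshakeStatus.FINISHED;
    }
}
```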
Motivation:
It turned out we didn't run the OpenSSL tests on the CI when we used the non-static version of netty-tcnative.
Modifications:
- Upgrade netty-tcnative to fix segfault when using shared openssl
- Adjust tests to only run session cache tests when openssl supports it
- Fix some more tests to only depend on KeyManager if the underlying openssl version supports it
Result:
Run all OpenSSL tests on the CI even when the shared library is used
Motivation:
In the latest version of BouncyCastle, BCJSSE now supports 'TLSv1.3' for both client and server, so we should enable TLSv1.3 when it is available.
Modification:
Enable TLSv1.3 when using BouncyCastle ALPN support.
Result:
Enable TLSv1.3 when using BouncyCastle ALPN support
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
Null checks resulting in a NullPointerException or IllegalArgumentException, numeric range checks (>0, >=0), and non-empty string/array checks must never be anonymous, but must include the name of the parameter or variable that is checked. They must be specific and should not be combined with OR logic (if (a == null || b == null) throw new NullPointerException(...)).
Modifications:
* import static relevant checks
* Replace manual checks with ObjectUtil methods
Result:
All checks needed are done with ObjectUtil, some exception texts are improved.
Fixes #11170
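A minimal sketch of the resulting pattern (the class and parameter names are illustrative):
```
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositive;

final class Endpoint {
    private final String host;
    private final int port;

    Endpoint(String host, int port) {
        // Each check names the parameter, so the resulting
        // NullPointerException / IllegalArgumentException is self-explanatory.
        this.host = checkNotNull(host, "host");
        this.port = checkPositive(port, "port");
    }
}
```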
Motivation:
Under Android it was not possible to load a specific web page. It might be related to the (missing?) ALPN support of the internal TLS implementation. BouncyCastle as a replacement works, but so far it was not supported by Netty.
BouncyCastle also has the benefit of being a pure Java solution; all the other providers (OpenSSL, Conscrypt) require native libraries which are not available under Android, at least.
Modification:
BouncyCastleAlpnSslEngine.java and support classes have been added. It relies on the JDK code, hence some support classes had to be opened up to prevent code duplication.
Result:
BouncyCastle can be used as TLS provider.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
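A rough sketch of wiring BouncyCastle's JSSE provider into a JDK-based SslContext, assuming the bctls artifact is on the classpath; the exact setup may differ per application:
```
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;

final class BouncyCastleClientContext {
    static SslContext build() throws Exception {
        return SslContextBuilder.forClient()
                .sslProvider(SslProvider.JDK)
                // hand the BC JSSE provider to the JDK-based SslContext
                .sslContextProvider(new BouncyCastleJsseProvider())
                .build();
    }
}
```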
Motivation:
We are increasingly running in environments where Unsafe, setAccessible, etc. are not available.
When debug logging is enabled, we log a complete stack trace every time one of these initialisations fail.
Seeing these stack traces can cause people unnecessary concern.
For instance, people might have alerts that are triggered by a stack trace showing up in logs, regardless of its log level.
Modification:
We continue to print debug log messages on the result of our initialisations, but now we only include the full stack trace if _trace_ logging (or FINEST, or the equivalent in whatever logging framework is configured) is enabled.
Result:
We now only log these initialisation stack traces when the lowest possible log level is enabled.
Fixes #7817
Motivation:
SslHandler invokes channel.read() during the handshake process. For some
channel implementations (e.g. LocalChannel) this may result in re-entry
conditions into unwrap. Unwrap currently defers updating the input
buffer indexes until the unwrap method returns to avoid intermediate
updates if not necessary, but this may result in unwrapping the same
contents multiple times which leads to handshake failures [1][2].
[1] ssl3_get_record:decryption failed or bad record mac
[2] ssl3_read_bytes:sslv3 alert bad record mac
Modifications:
- SslHandler#unwrap updates buffer indexes on each iteration so that if
reentry scenario happens the correct indexes will be visible.
Result:
Fixes https://github.com/netty/netty/issues/11146
Motivation:
SslHandler has many independent boolean member variables. They can be
collapsed into a single variable to save memory.
Modifications:
- SslHandler boolean state consolidated into a single short variable.
Result:
Savings of 8 bytes per SslHandler (which is per connection) observed on
OpenJDK.
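A minimal sketch of the bit-packing idea (the flag names are illustrative, not SslHandler's actual state bits):
```
final class FlagPacking {
    // Each former boolean becomes one bit of a single short field.
    private static final short STATE_HANDSHAKE_STARTED = 1;
    private static final short STATE_FLUSHED_BEFORE_HANDSHAKE = 1 << 1;
    private static final short STATE_SENT_FIRST_MESSAGE = 1 << 2;

    private short state;

    boolean isStateSet(short bit) {
        return (state & bit) == bit;
    }

    void setState(short bit) {
        state |= bit;
    }

    void clearState(short bit) {
        state = (short) (state & ~bit);
    }
}
```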
Motivation:
`SslHandler#unwrap` may produce `SslHandshakeCompletionEvent` if it
receives a `close_notify` alert. This alert indicates that the engine is
closed and no more data are expected in the pipeline. However, it fires
the event before the last data chunk. As a result, further handlers
may lose data if they handle `SslHandshakeCompletionEvent`.
This issue was not visible before #11133 because we did not write the
`close_notify` alert reliably.
Modifications:
- Add tests to reproduce described behavior;
- Move `notifyClosePromise` after fire of the last `decodeOut`;
Result:
`SslHandshakeCompletionEvent` correctly indicates that the engine is
closed and no more data are expected on the pipeline.
Motivation:
We should avoid blocking in the event loop as much as possible.
The InputStream.read() is a blocking method, and we don't need to call it if available() returns a positive number.
Modification:
Bypass calling InputStream.read() if available() returns a positive number.
Result:
Fewer blocking calls in the event loop, in general, when ChunkedStream is used.
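A small sketch of the idea using a PushbackInputStream so the probed byte is not lost; this is not ChunkedStream's exact code:
```
import java.io.IOException;
import java.io.PushbackInputStream;

final class EndOfInputCheck {
    // Prefer the non-blocking available() over the blocking read(); only probe
    // with read() when available() reports nothing, and push the byte back so
    // no data is lost.
    static boolean isEndOfInput(PushbackInputStream in) throws IOException {
        if (in.available() > 0) {
            return false; // data is ready, no need to block
        }
        int b = in.read(); // may block
        if (b < 0) {
            return true;
        }
        in.unread(b);
        return false;
    }
}
```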
Motivation:
SslHandler's wrap method notifies the handshakeFuture and sends a
SslHandshakeCompletionEvent user event down the pipeline before writing
the plaintext that has just been wrapped. It is possible the application
may write as a result of these events and re-enter wrap to write
more data. This will result in out-of-sequence data and alerts
such as SSLV3_ALERT_BAD_RECORD_MAC.
Modifications:
- SslHandler wrap should write any pending data before notifying
promises, generating user events, or anything else that may create a
re-entry scenario.
Result:
SslHandler will wrap/write data in the same order.
Motivation:
SslHandler owns the responsibility to flush non-application data
(e.g. handshake, renegotiation, etc.) to the socket. However when
TCP Fast Open is supported but the client_hello cannot be written
in the SYN the client_hello may not always be flushed. SslHandler
may not wrap/flush previously written/flushed data in the event
it was not able to be wrapped due to NEED_UNWRAP state being
encountered in wrap (e.g. peer initiated renegotiation).
Modifications:
- SslHandler to flush in channelActive() if TFO is enabled and
the client_hello cannot be written in the SYN.
- SslHandler to wrap application data after non-application data
wrap and handshake status is FINISHED.
- SocketSslEchoTest only flushes when writes are done, and waits
for the handshake to complete before writing.
Result:
SslHandler flushes handshake data for TFO, and previously flushed
application data after peer initiated renegotiation finishes.
Motivation:
At the moment we don't support session caching on the client side at all when using the native SSL implementation. We should at least allow enabling it.
Modification:
Allow enabling the session cache for the client side, but disable it by default for now due to a JDK bug.
Result:
Be able to cache sessions on the client side when using the native SSL implementation.
Motivation:
Creating certificates from a byte[] and parsing them lazily is generally useful and is also needed by https://github.com/netty/netty-incubator-codec-quic/pull/141
Modifications:
Move classes, rename them and make them public
Result:
Be able to reuse code
Motivation:
Some of the features we want to support can only be supported by some of the SslContext implementations. We should allow configuring these in a consistent way, the same way as we do with Channel / ChannelOption.
Modifications:
- Add SslContextOption and add builder methods that take these
- Add OpenSslContextOption and define two options there which are specific to openssl
Result:
More flexible configuration and implementation of SslContext
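A rough sketch of the intended usage, assuming a boolean option such as OpenSslContextOption.USE_TASKS; the available options depend on the provider and version:
```
import io.netty.handler.ssl.OpenSslContextOption;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

final class OptionConfiguredContext {
    static SslContext build() throws Exception {
        return SslContextBuilder.forClient()
                .sslProvider(SslProvider.OPENSSL)
                // provider-specific option, analogous to ChannelOption on a Channel
                .option(OpenSslContextOption.USE_TASKS, true)
                .build();
    }
}
```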
Motivation:
In WriteTimeoutHandler we made the assumption that the executor used to schedule the timeout is the same one that is backing the write promise. This may not be true, which will cause concurrency issues.
Modifications:
Ensure we are on the right thread when trying to modify the doubly-linked list and, if not, schedule the modification on the right thread.
Result:
Fixes https://github.com/netty/netty/issues/11053
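A minimal sketch of the scheduling pattern used to stay on the right thread (the method names are illustrative):
```
import io.netty.util.concurrent.EventExecutor;

final class TimeoutTaskScheduling {
    // The doubly-linked list of timeout tasks must only be touched from the
    // executor's own thread; otherwise the modification is re-scheduled there.
    static void addTask(EventExecutor executor, Runnable addToList) {
        if (executor.inEventLoop()) {
            addToList.run();
        } else {
            executor.execute(addToList);
        }
    }
}
```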
Motivation:
We need to ensure that we call queue.remove() before we call writeAndFlush() as this operation may cause an event that also touches the queue and removes from it. If we fail to do so we may see NoSuchElementExceptions.
Modifications:
- Call queue.remove() before calling writeAndFlush(...)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/11046
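A minimal sketch of the required ordering (illustrative types; the real queue holds pending writes):
```
import io.netty.channel.ChannelHandlerContext;
import java.util.Queue;

final class PendingWrites {
    // Remove the element first: writeAndFlush() may fire an event that also
    // walks the queue, and it must not see the element still sitting there.
    static void writeNext(ChannelHandlerContext ctx, Queue<Object> queue) {
        Object msg = queue.remove();
        ctx.writeAndFlush(msg);
    }
}
```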
This brings forward the build and release automation changes from 4.1 (#10879, #10883, #10884, #10886, #10888, #10889, #10893, #10900, #10933, #10945, #10966, #10968, #11002, and #11019) to 5.0.
Details are as follows:
* Use Github workflows for CI (#10879)
Motivation:
We should just use GitHub Actions for the CI
Modifications:
- Adjust docker / docker compose files
- Add different workflows and jobs to deploy and build the project
Result:
Don't depend on external CI services
* Fix non leak build condition
* Only use build and deploy workflows for 4.1 for now
* Add deploy job for cross compiled aarch64 (#10883)
Motivation:
We should also deploy snapshots for our cross compiled native jars.
Modifications:
- Add job and docker files for deploying cross compiled native jars
- Ensure we map the maven cache into our docker containers
Result:
Deploy aarch64 jars and re-use cache
* Use correct docker-compose file to deploy cross compiled artifacts
* Use correct docker-compose task to deploy for cross compiled artifacts
* Split pr and normal build (#10884)
Motivation:
We should better use separate workflows for PR and normal builds
Modifications:
- Split workflows
- Better cache reuse
Result:
Cleanup
* Only deploy snapshots for one arch
Motivation:
We need to find a way to deploy SNAPSHOTS for different archs with the same timestamp. Otherwise it will cause problems.
See https://github.com/netty/netty/issues/10887
Modification:
Skip all other deploys than x86_64
Result:
Users are able to use SNAPSHOTS for x86_64
* Use maven cache when running analyze job (#10888)
Motivation:
To prevent failures due to problems while downloading dependencies we should cache these
Modifications:
Add maven cache
Result:
No more failures due to problems while downloading dependencies
* Also include one PR job that uses boringssl (#10886)
Motivation:
When validating PRs we should also at least run one job that uses boringssl
Modifications:
- Add job that uses boringssl
- Cleanup docker compose files
- Fix buffer leak in test
Result:
Also run with boringssl when PRs are validated
* Use matrix for job configurations (#10889)
Motivation:
We can use the matrix feature to define our jobs. This reduces a lot of config
Modification:
Use job matrix
Result:
Easier to maintain
* Correctly deploy artifacts that are built on different archs (#10893)
Motivation:
We need to take special care when deploying snapshots as we need to generate the jars in multiple steps
Modifications:
- Use the nexus staging plugin to stage jars locally in multiple steps
- Add extra job that will merge these staged jars and deploy these
Result:
Fixes https://github.com/netty/netty/issues/10887
* Don't use cron for PRs
Motivation:
It doesn't make sense to use cron for PRs
Modifications:
Remove cron config
Result:
Cleanup
* We run all combinations when validating the PR, let's just use one type for normal pushes
Motivation:
Let us just only use one build config when building the 4.1 branch.
Modifications:
As we already do a full validation when doing PR builds we can use just one build config for pushes to the "main" branches
Result:
Faster build times
* Update action-docker-layer-caching (#10900)
Motivation:
We are three releases behind.
Modifications:
Update to latest version
Result:
Use up-to-date action-docker-layer-caching version
* Verify we can load native modules and add job that verifies on aarch64 as well (#10933)
Motivation:
As shown in the past we need to verify that we actually can load the native modules as otherwise we may introduce regressions.
Modifications:
- Add new maven module which tests loading of native modules
- Add job that will also test loading on aarch64
Result:
Less likely to introduce regressions related to loading native code in the future
* Let script fail if one command fail (#10945)
Motivation:
We should use `set -e` to ensure we fail the script if one command fails.
Modifications:
Add set -e to script
Result:
Fail fast
* Use action to report unit test errors (#10966)
Motivation:
To make it easier to understand why the build fails, let's use an action that will report which unit test failed
Modifications:
- Replace custom script with action-surefire-report
Result:
Easier to understand test failures
* Use custom script to check for build failures (#10968)
Motivation:
It turns out we can't use the action to check for build failures as it can't be used when a PR is done from a fork. Let's just use our simple script.
Modifications:
- Replace action with custom script
Result:
Builds for PRs that are done via forks work again.
* Publish test results after PR run (#11002)
Motivation:
To make it easier to understand why a build failed, let us publish the test results
Modifications:
Use a new workflow to be able to publish the test reports
Result:
Easier to understand why a PR failed
* Fix test reports name
* Add workflow to cut releases (#11019)
Motivation:
Doing releases manually is error-prone, it would be better if we could do it via a workflow
Modification:
- Add workflow to cut releases
- Add related scripts
Result:
Be able to easily cut a release via a workflow
* Update build for master branch
Motivation:
The build changes were brought forward from 4.1, and contain many things specific to 4.1.
Modification:
Changed baseline Java version from 8 to 11, and changed branch references from "4.1" to "master".
Result:
Builds should now work for the master branch.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface that allows arbitrary objects to convert themselves into ByteBuf objects.
- Every place in the code where we did an instanceof check for ByteBuf, we now do an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
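A minimal sketch of the new check, assuming ByteBufConvertible lives next to ByteBuf in io.netty.buffer:
```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufConvertible;

final class BufferExtraction {
    static ByteBuf asByteBufOrNull(Object msg) {
        // ByteBuf implements ByteBufConvertible itself, so plain buffers
        // and alternative implementations take the same path.
        if (msg instanceof ByteBufConvertible) {
            return ((ByteBufConvertible) msg).asByteBuf();
        }
        return null;
    }
}
```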
Motivation:
The `!fastOpen` part of `active || !fastOpen` is always false.
Modification:
- Remove `!fastOpen` and keep only `active` as a `flushAtEnd` flag for
`startHandshakeProcessing`;
- Update comment;
Result:
Simplified `flushAtEnd` flag computation in `SslHandler#handlerAdded`.
Support TCP Fast Open for clients and make SslHandler take advantage
Motivation:
- TCP Fast Open allows us to send a small amount of data alongside the initial SYN packet when establishing a TCP connection.
- The TLS Client Hello packet is small enough to fit in there, and is also idempotent (another requirement for using TCP Fast Open), so we can save a round-trip when establishing TLS connections with TFO.
Modification:
- Add support for client-side TCP Fast Open for Epoll, and also lower the Linux kernel version requirement to 3.6.
- When adding the SslHandler to a pipeline, if TCP Fast Open is enabled for the channel (and the channel is not already active) then start the handshake early by writing it to the outbound buffer.
- An important detail to note here, is that the outbound buffer is not flushed at this point, like it would for normal handshakes. The flushing happens later as part of establishing the TCP connection.
Result:
- It is now possible for clients (on epoll) to open connections with TCP Fast Open.
- The SslHandler automatically detects when this is the case, and now sends its Client Hello message as part of the initial data in the TCP Fast Open flow when available, saving a round-trip when establishing TLS connections.
Co-authored-by: Colin Godsey <crgodsey@gmail.com>
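A rough sketch of a client opting in on the epoll transport, assuming ChannelOption.TCP_FASTOPEN_CONNECT as the switch for this feature:
```
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.epoll.EpollSocketChannel;

final class TfoClientBootstrap {
    static Bootstrap configure(Bootstrap bootstrap) {
        return bootstrap
                .channel(EpollSocketChannel.class)
                // ask the transport to carry pending data in the SYN when possible
                .option(ChannelOption.TCP_FASTOPEN_CONNECT, true);
    }
}
```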
Motivation:
The testGlobalWriteThrottle is flaky and has failed our build multiple times now. Let's disable it for now until we have had time to investigate.
Modifications:
Disable flaky test
Result:
Fewer failures during the build
Motivation:
At the moment we always set SSL_OP_NO_TICKET when building our context. The problem with this is that this also disables resumption for TLSv1.3 in BoringSSL as it only supports stateless resumption for TLSv1.3 which uses tickets.
We should rather clear this option when TLSv1.3 is enabled to be able to resume sessions. This is also in line with the OpenJDK, which enables this for TLSv1.3 by default as well.
Modifications:
Check the enabled protocols and, if TLSv1.3 is among them, clear SSL_OP_NO_TICKET.
Result:
Be able to resume sessions for TLSv1.3 when using BoringSSL.
Motivation:
`File.createTempFile(String, String)` will create a temporary file in the system temporary directory named by the 'java.io.tmpdir' system property. The permissions on that file utilize the umask. In a majority of cases, this means that the file that Java creates has the permissions `-rw-r--r--`; thus, any other local user on that system can read the contents of that file.
This can be a security concern if any sensitive data is stored in this file.
This was reported by Jonathan Leitschuh <jonathan.leitschuh@gmail.com> as a security problem.
Modifications:
Use Files.createTempFile(...) which will use safe defaults when running on Java 7 and later. If running on Java 6 there isn't much we can do, which is fair enough as Java 6 shouldn't be considered "safe" anyway.
Result:
Create temporary files with sane permissions by default.
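A minimal sketch of the safer variant; on POSIX file systems the NIO call creates the file with owner-only permissions:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class TempFiles {
    static Path create() throws IOException {
        // Created with owner-only permissions (rw-------) on POSIX file systems,
        // instead of relying on the process umask.
        return Files.createTempFile("netty", ".tmp");
    }
}
```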
Motivation:
It was not 100% clear who is responsible for calling close() on the InputStream.
Modifications:
Clarify javadocs.
Result:
Related to https://github.com/netty/netty/issues/10974
Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
Motivation:
TLS_FALSE_START slightly changes the "flow" during the handshake which may cause surprises for the end-user. We should rather disable it by default again and later add a way for the user to enable it.
Modification:
This reverts commit 514d349e1f.
Result:
Restore "old flow" during TLS handshakes.
Motivation:
We didn't correctly filter out TLSv1.3 ciphers when TLSv1.3 is not enabled.
Modifications:
- Filter out ciphers that are not supported due to the selected TLS version
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10911
Co-authored-by: Bryce Anderson <banderson@twitter.com>
Motivation:
If the given port is already bound, the PcapWriteHandlerTest will sometimes fail.
Modification:
Use a dynamic port by binding to `0`, which is more reliable
Result:
Less flaky
Motivation:
We should override the get*ApplicationProtocol() methods in ReferenceCountedOpenSslEngine to make it easier for users to obtain the selected application protocol
Modifications:
Add missing overrides
Result:
Easier for the user to get the selected application protocol (if any)
Motivation:
We should expose some methods as protected to make it easier to write custom SslContext implementations.
This will be reused by the code for https://github.com/netty/netty-incubator-codec-quic/issues/97
Modifications:
- Add protected to some static methods which are useful for sub-classes
- Remove some unused methods
- Move *Wrapper classes to util package and make these public
Result:
Easier to write custom SslContext implementations
Motivation:
We need to ensure we always drain the error stack when a callback throws as otherwise we may pick up the error on a different SSL instance which uses the same thread.
Modifications:
- Correctly drain the error stack if native method throws
- Add a unit test which failed before the change
Result:
Always drain the error stack
Motivation:
When using the JDK's SSLEngineImpl with TLSv1.3 it sometimes returns HandshakeStatus.FINISHED multiple times. This can lead to SslHandshakeCompletionEvents being fired multiple times.
Modifications:
- Keep track of whether we notified before and, if so, do not do so again when TLSv1.3 is used
- Add unit test
Result:
Consistent usage of events
Motivation:
We can make use of internalNioBuffer(...) if we can't access the memoryAddress. This will at least reduce the object creations.
Modifications:
Use internalNioBuffer(...) and so reduce the GC
Result:
Less object creation if we can't access the memory address.
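A minimal sketch of the fallback (illustrative method; the native hand-off is elided):
```
import io.netty.buffer.ByteBuf;
import java.nio.ByteBuffer;

final class NativeHandOff {
    static void process(ByteBuf buf) {
        if (buf.hasMemoryAddress()) {
            long address = buf.memoryAddress() + buf.readerIndex();
            // ... pass the raw address to the native code ...
        } else {
            // reuses the buffer's internal view instead of allocating a new ByteBuffer
            ByteBuffer nioBuffer = buf.internalNioBuffer(buf.readerIndex(), buf.readableBytes());
            // ... pass the ByteBuffer to the native code ...
        }
    }
}
```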
Motivation:
https in xmlns URIs does not work and will make the maven release plugin fail:
```
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.779 s
[INFO] Finished at: 2020-11-10T07:45:21Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare (default-cli) on project netty-parent: Execution default-cli of goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare failed: The namespace xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" could not be added as a namespace to "project": The namespace prefix "xsi" collides with an additional namespace declared by the element -> [Help 1]
[ERROR]
```
See also https://issues.apache.org/jira/browse/HBASE-24014.
Modifications:
Use http for xmlns
Result:
Be able to use maven release plugin
Motivation:
Sometimes it would be helpful to easily detect if an operation failed due to the SSLEngine already being closed.
Modifications:
Add special exception that is used when the engine was closed
Result:
Easier to detect a failure caused by a closed SSLEngine
Motivation:
FingerprintTrustManagerFactory can only use SHA-1, which is considered
insecure.
Modifications:
- Updated FingerprintTrustManagerFactory to accept a stronger hash algorithm.
- Remove the constructors that still use SHA-1.
- Added a test for FingerprintTrustManagerFactory.
Result:
A user can now configure FingerprintTrustManagerFactory to use a
stronger hash algorithm.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
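A rough sketch of the intended usage, assuming the builder API added here and a caller-supplied SHA-256 fingerprint:
```
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.FingerprintTrustManagerFactory;

final class PinnedTrust {
    static SslContextBuilder clientContext(String sha256Fingerprint) {
        // The fingerprint of the trusted certificate is supplied by the caller.
        return SslContextBuilder.forClient()
                .trustManager(FingerprintTrustManagerFactory.builder("SHA-256")
                        .fingerprints(sha256Fingerprint)
                        .build());
    }
}
```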
Motivation:
HTTP is a plaintext protocol which means that someone may be able
to eavesdrop the data. To prevent this, HTTPS should be used whenever
possible. However, consistently using https:// in all URLs may be
difficult to maintain. The nohttp tool can help here. The tool scans all the files
in a repository and reports where http:// is used.
Modifications:
- Added nohttp (via checkstyle) into the build process.
- Suppressed findings for the websites
that don't support HTTPS or that are not reachable
Result:
- Prevent using HTTP in the future.
- Encourage users to use HTTPS when they follow the links they found in
the code.
Motivation:
In the master branch we fail fire* operations on the ChannelHandlerContext once the handler was removed. This is by design as it is "unspecified" what the semantics could be after the handler was removed and may lead to very hard to debug problems. Because of this we need to select the right ChannelHandlerContext for firing the event.
Modifications:
Choose a valid ChannelHandlerContext based on the state of the context of the handler
Result:
No more test failures
Motivation:
JUnit deprecated Assert.assertThat(...)
Modifications:
Use MatcherAssert.assertThat(...) as replacement for deprecated method
Result:
Fewer deprecation warnings
Motivation:
We can filter out `null` rules while initializing the `RuleBasedIpFilter` instance, so we don't have to keep checking for `null` rules while iterating through the `rules` array in the for loop, which is just a waste of CPU cycles.
Modification:
Added `null` rule check inside the constructor.
Result:
No more wasting CPU cycles on checking for a `null` rule each time in the for loop, which makes the overall operation faster.
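A minimal sketch of the constructor-time filtering (an illustrative class, not the actual RuleBasedIpFilter code):
```
import io.netty.handler.ipfilter.IpFilterRule;
import java.util.ArrayList;
import java.util.List;

final class NullFreeRules {
    private final List<IpFilterRule> rules = new ArrayList<>();

    // Drop null entries once, at construction time, instead of re-checking
    // for null on every accepted connection.
    NullFreeRules(IpFilterRule... rules) {
        for (IpFilterRule rule : rules) {
            if (rule != null) {
                this.rules.add(rule);
            }
        }
    }
}
```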
Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues, get rid of false alarms in the LGTM report.
Motivation:
Users may want to do special actions when onComplete(...) is called and depend on these once they receive the SniCompletionEvent
Modifications:
Switch the order so that onLookupComplete(...) is called before we fire the event
Result:
Fixes https://github.com/netty/netty/issues/10655