Motivation:
In MemoryRegionCache.Entry we use the Recycler to reduce GC pressure and churn. The problem is that these entries are also recycled when the PoolThreadCache is collected and finalize() is called. This can then lead to an attempt to load a class after the WebApp has already been stopped.
This produces a stack trace like the following on Tomcat:
```
19-Mar-2019 15:53:21.351 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at io.netty.util.Recycler$3.initialValue(Recycler.java:233)
at io.netty.util.Recycler$3.initialValue(Recycler.java:230)
at io.netty.util.concurrent.FastThreadLocal.initialize(FastThreadLocal.java:188)
at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:142)
at io.netty.util.Recycler$Stack.pushLater(Recycler.java:624)
at io.netty.util.Recycler$Stack.push(Recycler.java:597)
at io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:225)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry.recycle(PoolThreadCache.java:478)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:459)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:430)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:422)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:279)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:270)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:241)
at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:230)
at java.lang.System$2.invokeFinalize(System.java:1270)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```
Besides this, we also need to ensure we do not try to lazily load SizeClass when running from the finalizer, as it may no longer be present if the ClassLoader has already been destroyed.
This would produce an error like:
```
20-Mar-2019 11:26:35.254 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at io.netty.buffer.PoolArena.freeChunk(PoolArena.java:287)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:464)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:429)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:421)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:278)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:269)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:240)
at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:229)
at java.lang.System$2.invokeFinalize(System.java:1270)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```
Modifications:
- Only put the Entry back into the Recycler if the PoolThreadCache is not being destroyed by the finalizer (see the sketch below).
- Only access SizeClass if the free was not triggered by the finalizer.
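A minimal standalone sketch of the guard; names and structure are illustrative only, not Netty's actual code:
```
// Illustrative sketch: skip anything that may lazily load classes when the free was
// triggered by finalize(), because the WebApp ClassLoader may already be stopped.
import java.util.ArrayDeque;

final class CacheEntry {
    private static final ArrayDeque<CacheEntry> RECYCLER = new ArrayDeque<>();

    void free(boolean finalizer) {
        releaseMemory(finalizer);
        if (!finalizer) {
            // Only recycle when not running on the finalizer thread; recycling touches
            // thread-locals that may trigger class loading via the deployment ClassLoader.
            synchronized (RECYCLER) {
                RECYCLER.push(this);
            }
        }
    }

    private void releaseMemory(boolean finalizer) {
        // Give the cached memory region back to the arena. When called from the
        // finalizer, also avoid the lazy SizeClass lookup for the same reason.
    }
}
```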
Result:
No more IllegalStateException when a webapp that uses Netty and the PooledByteBufAllocator is reloaded in Tomcat.
Motivation:
The special case fixed in #8497 also requires that we keep a derived slice when trimming components in place, as done by the capacity(int) and discardReadBytes() methods.
Modifications:
Ensure that we keep a reference to trimmed components' original retained slice in the capacity(int) and discardReadBytes() methods, so that it is released properly when they are later freed. Add a unit test which fails prior to the fix.
Result:
Edge case leak is eliminated.
Motivation:
PooledByteBufAllocator uses a PoolThreadCache per Thread that allocates / deallocates to minimize the performance overhead. This PoolThreadCache is trimmed after X allocations to free up buffers that have not been allocated for a long time. This works out quite well when the app keeps allocating, but fails if the app stops allocating frequently (for whatever reason); in that case a lot of memory is wasted and not given back to the arena / freed.
Modifications:
- Add a ThreadExecutorMap that offers multiple methods to wrap Runnable / ThreadFactory / Executor and make it possible to call ThreadExecutorMap.currentEventExecutor() to get the EventExecutor currently executing on the calling Thread.
- Use these methods in the constructors of our EventExecutor implementations (which also covers the EventLoop implementations)
- Add the io.netty.allocator.cacheTrimIntervalMillis system property which can be used to specify a fixed rate / interval at which we should try to trim the PoolThreadCache for an EventExecutor that allocates.
- Add PooledByteBufAllocator.trimCurrentThreadCache() to allow the user to trim the cache of the calling thread manually (see the usage sketch after this list).
- Add testcases
- Introduce FastThreadLocal.getIfExists()
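A hypothetical usage sketch of the new knobs; the system property is passed as a JVM flag and trimCurrentThreadCache() is called from an allocating thread:
```
// Run with e.g. -Dio.netty.allocator.cacheTrimIntervalMillis=5000 to let Netty trim the
// caches of allocating EventExecutors on a fixed interval.
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class TrimExample {
    public static void main(String[] args) {
        PooledByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;

        ByteBuf buf = allocator.directBuffer(8192);
        buf.release(); // released buffers may end up in this thread's PoolThreadCache

        // Explicitly give cached memory of the calling thread back to the arena.
        boolean trimmed = allocator.trimCurrentThreadCache();
        System.out.println("cache trimmed: " + trimmed);
    }
}
```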
Result:
Allow the PoolThreadCache to be trimmed better / more frequently and so give memory back to the arena / system.
Motivation:
https://github.com/netty/netty/pull/8963 adds property files which contain a netty copyright header, but our old checkstyle regex did not correctly detect these.
Modifications:
Update to new netty-build which contains an updated regex.
Result:
Be able to correctly detect copyright headers in property files.
Motivation:
It appears this was an oversight, maybe was valid at some point in the past. Noticed while reviewing #8958.
Modifications:
Change AbstractChannelHandlerContext to not extend DefaultAttributeMap.
Result:
Simpler hierarchy; eliminates the unused attributes field from each context instance.
Motivation:
We don't use ObjectCleaner in our FastThreadLocal anymore, so we also no longer need to take special care to store it there.
Modifications:
Remove code that is not needed anymore.
Result:
Code cleanup.
Motivation:
df8b9d3fb9 added config files for docker-sync but missed adding a gitignore entry for .docker-sync
Modifications:
Add .docker-sync to gitignore
Result:
Ignore .docker-sync directory
Motivation:
GlobalEventExecutor already provides all the guarantees of OrderedEventExecutor, so it should implement it.
Modifications:
Let GlobalEventExecutor implement OrderedEventExecutor.
Result:
Make it more clear how execution order is handled in GlobalEventExecutor.
Motivation:
DomainSocketChannel is a DuplexChannel, which can shut down its input or output individually on demand, but the ALLOW_HALF_CLOSURE channel option was not supported yet.
This looked like a missing feature for Unix domain sockets, so this PR adds it.
Modifications:
1. Added the allowHalfClosure property to both EpollDomainSocketChannelConfig and KQueueDomainSocketChannelConfig (see the usage sketch below),
2. Enabled the isAllowHalfClosure method of the native channels to support the domain channel configs,
3. Created EpollDomainSocketShutdownOutputByPeerTest and KQueueDomainSocketShutdownOutputByPeerTest to verify the change.
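A hypothetical client-side sketch; the socket path and the no-op handler are placeholders, and the Epoll native transport is assumed to be available:
```
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOption;
import io.netty.channel.epoll.EpollDomainSocketChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.unix.DomainSocketAddress;

public final class HalfClosureClient {
    public static void main(String[] args) throws Exception {
        EpollEventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(EpollDomainSocketChannel.class)
                    // Previously rejected with "Unknown channel option 'ALLOW_HALF_CLOSURE'".
                    .option(ChannelOption.ALLOW_HALF_CLOSURE, true)
                    .handler(new ChannelInboundHandlerAdapter());
            bootstrap.connect(new DomainSocketAddress("/tmp/example.sock")).sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```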
Result:
The ALLOW_HALF_CLOSURE channel option can now be set on a DomainSocketChannel, and there is no more "Unknown channel option 'ALLOW_HALF_CLOSURE'" warning.
Motivation:
docker-sync.io helps speed up docker FS access on macOS and so makes builds there a lot faster. We should add some config to help users use it.
Modifications:
Add docker-sync configs for centos-6.18 which is what we use for releases.
Result:
Faster builds via docker are possible when using macOS.
Motivation:
This counter is very useful for monitoring Netty's direct memory usage without having to go through every ByteBufAllocator in the JVM.
Modification:
Expose the value of DIRECT_MEMORY_COUNTER, as we already do for DIRECT_MEMORY_LIMIT.
Return -1 in case DIRECT_MEMORY_COUNTER is not available.
Result:
Be able to get the amount of direct memory used.
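A hedged sketch of reading the counter, assuming it is exposed via PlatformDependent.usedDirectMemory() alongside the existing maxDirectMemory():
```
import io.netty.util.internal.PlatformDependent;

public final class DirectMemoryMetrics {
    public static void main(String[] args) {
        long used = PlatformDependent.usedDirectMemory();   // -1 if the counter is unavailable
        long limit = PlatformDependent.maxDirectMemory();
        if (used == -1) {
            System.out.println("direct memory counter not available");
        } else {
            System.out.println("direct memory used: " + used + " of " + limit + " bytes");
        }
    }
}
```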
Motivation:
When verification of the server certificate fails in the TrustManager used on the client side, we need to ensure we produce the correct alert and send it to the remote peer before closing the connection.
Modifications:
- Use the correct verification mode on the client-side by default.
- Update tests
Result:
Fixes https://github.com/netty/netty/issues/8942.
Motivation:
testsuite-osgi currently resolves its bundles from the local / remote maven repository, which means you need to run `mvn install` before it can pick up the bundles. Besides this, it also means that you may pick up old versions if you forgot to call `install` before running it.
Modifications:
Use alta-maven-plugin to be able to resolve bundles from the local build directory during the build.
Result:
No need to install jars before running the OSGI testsuite and ensure we always test with the latest jars.
Motivation:
We had a typo in a NativeLibraryLoader debug log message which could mislead the user.
Modifications:
Fix typo to correctly state java.library.path
Result:
Correct and less confusing log message
Motivation:
According to the specification, the "Connection" header's syntax is:
"
The Connection header field's value has the following grammar:
Connection = 1#connection-option
connection-option = token
Connection options are case-insensitive.
"
https://tools.ietf.org/html/rfc7230#section-6.1
This means that the Connection header's value can be a single connection option or a comma-separated list of options.
When calculating whether the connection can remain open,
HttpUtil.isKeepAlive(HttpMessage) should take this into account.
Modifications:
- Check for "close" and "keep-alive" in a comma separated list
- Add unit test
Result:
HttpUtil.isKeepAlive(HttpMessage) works correctly for headers such as "Connection: Upgrade, close".
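An illustration of the fixed behaviour using the public API:
```
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class KeepAliveCheck {
    public static void main(String[] args) {
        FullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
        request.headers().set(HttpHeaderNames.CONNECTION, "Upgrade, close");
        // "close" inside the comma-separated list must win: the connection is not keep-alive.
        System.out.println(HttpUtil.isKeepAlive(request)); // false after the fix
        request.release();
    }
}
```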
Motivation:
We want to make the experience as smooth as possible for our users when using Java9+ and so should ensure we do not produce any 'Illegal reflective access' errors when using netty.
Modifications:
Add jvmArgs when running our tests that deny illegal reflective access, so the build fails at the end if some classes cannot be accessed that way.
Result:
Ensure we do not produce any illegal reflective access errors when using Java 9+.
Motivation:
At some point we needed --add-opens=java.base/java.nio=ALL-UNNAMED to run our native tests but this is not true anymore.
Modifications:
Remove --add-opens=java.base/java.nio=ALL-UNNAMED when running native tests.
Result:
Remove obsolete JVM arg.
Motivation:
We did not have any unit tests that query for TXT records.
Modifications:
Add unit test to query TXT records.
Result:
More test-coverage.
Motivation:
The SSLSession that is returned by SSLEngine.getHandshakeSession() must be able to provide the local certificates when the TrustManager is invoked on the server-side.
Modifications:
- Correctly return the local certificates
- Add unit test
Result:
Be able to obtain local certificates from handshake SSLSession during verification on the server side.
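A sketch of a server-side trust manager that relies on this behaviour; verification logic is omitted and the class name is illustrative:
```
import java.net.Socket;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedTrustManager;

public final class LocalCertAwareTrustManager extends X509ExtendedTrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType, SSLEngine engine) {
        // While verifying the client, the handshake session must already expose the
        // certificates this (server) side presented to the peer.
        Certificate[] localCerts = engine.getHandshakeSession().getLocalCertificates();
        // ... inspect chain and localCerts as needed ...
    }

    // Remaining callbacks left as no-ops to keep the sketch short.
    @Override public void checkClientTrusted(X509Certificate[] chain, String authType) { }
    @Override public void checkClientTrusted(X509Certificate[] chain, String authType, Socket socket) { }
    @Override public void checkServerTrusted(X509Certificate[] chain, String authType) { }
    @Override public void checkServerTrusted(X509Certificate[] chain, String authType, Socket socket) { }
    @Override public void checkServerTrusted(X509Certificate[] chain, String authType, SSLEngine engine) { }
    @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
}
```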
Motivation:
When HttpContentDecoder (and so HttpContentDecompressor) does not produce any message, we need to make sure it calls ctx.read() if auto-read is false, so that it does not stall.
Modifications:
- Keep track of whether we need to call ctx.read() or not
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/8915.
Motivation:
In the past we found a lot of SSL related bugs because of the interop tests we have in place between different SSLEngine implementations. We should have as many of these interop tests as possible for this reason.
Modifications:
- Add interop tests between Conscrypt and OpenSSL SSLEngine implementations
Result:
More tests for SSL.
Motivation:
Simple rules:
* close the connection when sending any error
* specify "Connection: close" header when closing the connection
* successful responses should keep the connection intact unless the client requests otherwise
Modifications:
* "send response and cleanup the connection" logic moved to a helper
* for all successful responses set "Content-Lenght" header
* do not specify "Connection: Keep-Alive" header as far it's a default for HTTP/1.1
* set "Connection: close" header when necessary
Result:
Keep-alive connection management is in line with the RFCs.
Motivation:
The SSLEngine API has a notion of tasks that may be expensive and can be offloaded to another thread. We did not support this when using our native implementation, but now we can for various operations during the handshake. For reference, the standard delegated-task pattern is sketched below.
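A blocking sketch of that standard JSSE pattern; SslHandler itself resumes wrap / unwrap asynchronously instead of waiting:
```
import java.util.concurrent.ExecutorService;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

final class DelegatedTasks {
    static void runDelegatedTasks(SSLEngine engine, ExecutorService executor) throws Exception {
        if (engine.getHandshakeStatus() == HandshakeStatus.NEED_TASK) {
            Runnable task;
            while ((task = engine.getDelegatedTask()) != null) {
                // Offload the potentially long-running task off the I/O thread.
                executor.submit(task).get();
            }
        }
    }
}
```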
Modifications:
- Support offloading tasks during the handshake when using our native SSLEngine implementation
- Correctly handle the case when NEED_TASK is returned and nothing was consumed / produced yet
Result:
Be able to offload long running tasks from the EventLoop when using SslHandler with our native SSLEngine.
Motivation:
2bb9f64e16 introduced a change which made it possible to use different shaded versions of netty-tcnative on the classpath. This only partly worked, as we did not correctly handle the case where os / arch is part of the library name (which is the case when netty-tcnative-boringssl-static is used with the uber jar).
Modifications:
- If patching the ID failed, we retry with the os / arch stripped
- Add unit tests to verify that patching the ID now works with and without the os / arch suffix.
Result:
Using multiple shaded versions of netty-tcnative-boringssl-static on macOS works.
Motivation:
Netty is very widely used, which can lead to a lot of pain when we break API / ABI. We should make use of japicmp-maven-plugin during the build to verify we do not introduce breakage by mistake.
Modifications:
- Add japicmp-maven-plugin to the build process
- Fix a method signature change in HttpProxyHandler that was flagged as a possible problem.
Result:
Ensure no API / ABI breakage occurs between releases.
Motivation:
We must only remove ReferenceCountedOpenSslEngine from OpenSslEngineMap when the engine is destroyed, as the verifier / certificate callback may be called multiple times if the remote peer initiates a renegotiation.
If we fail to do so we will cause an NPE like this:
```
13:16:36.750 [testsuite-oio-worker-5-18] DEBUG i.n.h.s.ReferenceCountedOpenSslServerContext - Failed to set the server-side key material
java.lang.NullPointerException: null
at io.netty.handler.ssl.OpenSslKeyMaterialManager.setKeyMaterialServerSide(OpenSslKeyMaterialManager.java:69)
at io.netty.handler.ssl.ReferenceCountedOpenSslServerContext$OpenSslServerCertificateCallback.handle(ReferenceCountedOpenSslServerContext.java:212)
at io.netty.internal.tcnative.SSL.readFromSSL(Native Method)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:575)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1124)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1236)
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1279)
at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:217)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1330)
at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1237)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.netty.channel.oio.AbstractOioByteChannel.doRead(AbstractOioByteChannel.java:170)
at io.netty.channel.oio.AbstractOioChannel$1.run(AbstractOioChannel.java:40)
at io.netty.channel.ThreadPerChannelEventLoop.run(ThreadPerChannelEventLoop.java:69)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
```
While the exception is kind of harmless (as we will reject the renegotiation at the end anyway) it produces some noise in the logs.
Modifications:
Don't remove the engine from the map once the handshake is complete; instead wait until the engine is destroyed to remove it.
Result:
No more NPE and less noise in the logs.
Motivation:
To ensure Netty works on different JVMs we should also run tests on the CI with these.
Modifications:
Add docker-compose config to run build with OpenJ9 JVM
Result:
Ensure Netty works with different JVMs
Motivation:
We should run a CI job using J9 to ensure netty also works when using different JVMs.
Modifications:
- Adjust PooledByteBufAllocatorTest to be able to complete faster when using a JVM which takes longer when joining Threads (this seems to be the case with J9).
- Skip UDT tests on J9 as UDT is not supported there.
Result:
Be able to run CI against J9.
Motivation:
ChunkedWriteHandler needs to close both successful and failed ChunkedInputs. It used to never close successful ones.
Modifications:
* ChunkedWriteHandler always closes the ChunkedInput before completing the write promise.
* Ensure only ChunkedInput#close() is invoked on a failed input.
* Ensure no methods are invoked on a closed input.
Result:
Fixes https://github.com/netty/netty/issues/8875.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor (see the usage sketch after this list).
- Check that methods are called from within the EventExecutor's thread and fail if not
- Correctly dispatch on the right EventExecutor if the Future uses a different EventExecutor to eliminate concurrency issues.
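A hypothetical usage sketch of the new constructor; the executor, aggregate promise and futures are placeholders supplied by the caller:
```
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.Promise;
import io.netty.util.concurrent.PromiseCombiner;

final class CombineExample {
    static void combine(EventExecutor executor, Promise<Void> aggregate, Future<?>... futures) {
        // Must be invoked from executor's thread; the combiner now enforces this and
        // dispatches listeners of futures bound to other executors back onto it.
        PromiseCombiner combiner = new PromiseCombiner(executor);
        for (Future<?> future : futures) {
            combiner.add(future);
        }
        combiner.finish(aggregate);
    }
}
```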
Result:
Safer use of PromiseCombiner and enforcement of the correct usage / contract.
Motivation:
fa6a8cb09c introduced correct dispatching of delegated tasks for SSLEngine but did not correctly handle some cases of resuming wrap / unwrap after the task was executed. This could lead to stalls, which showed up during tests when running with Java 11 and BoringSSL.
Modifications:
- Correctly resume wrap / unwrap in all cases.
- Fix timeout value which was changed in previous commit by mistake.
Result:
No more stalls after task execution.
Motivation:
We use outdated EA releases when building and testing with JDK 12 and 13.
Modifications:
- Update versions.
- Add workaround for possible JDK12+ bug.
Result:
Use latest releases
Motivation:
https://github.com/netty/netty/pull/8866 added support for calling Iterator.remove() but did not add a testcase.
Modifications:
Add testcase to ensure removal works.
Result:
Better test-coverage.
Motivation:
As the ActiveMQ project uses Netty, we want to make use of this class; unfortunately the iterator returned by values() does not support the remove method, even though the delegate iterator does. Currently we have to clone and modify this class locally even though only a one-line change is needed. It would be ideal if Netty allowed remove, removing the need to maintain a clone.
Modifications:
* remove() no longer throws UnsupportedOperationException and instead calls the remove method on the delegate iterator
Result:
Be able to call Iterator.remove() for the values.
Motivation:
`DefaultFileRegion.transferTo` will return 0 all the time when we request more data than the actual file size. This may result in a busy spin while processing the FileRegion during writes.
Modifications:
- If we wrote 0 bytes, check whether the underlying file size is smaller than the requested count and, if so, throw an IOException
- Add DefaultFileRegionTest
- Add a test to the testsuite
Result:
Fixes https://github.com/netty/netty/issues/8868.
Motivation:
In https://github.com/netty/netty/pull/8665 we changed how we handle the registration of Channels to KQueue but missed removing some code which would deregister the Channel before it actually closed the underlying socket. This could lead to events still being triggered while we no longer had a mapping to the Channel.
Modifications:
Remove deregister call during socket closure.
Result:
Fixes https://github.com/netty/netty/issues/8849.
Motivation:
To make it easier to understand why a Channel was closed previously and so why the operation failed with a ClosedChannelException we should include the original Exception.
Modifications:
- Store the original exception that led to the closed Channel and include it in the ClosedChannelException that is used to fail the operation.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/8862.
Motivation:
We should not depend on the implementation detail that Unpooled.buffer(int) allocates a backing byte[] of exactly that size, as depending on the implementation it may return a buffer with a bigger backing array.
Modifications:
Explicitly allocate the byte[] and wrap it in a ByteBuf. This way we are sure that ByteBuf.array() returns a byte[] with the exact length and content we expect.
Result:
More correct and safe usage of ByteBuf.array()
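A small sketch contrasting the two approaches:
```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class ExactArrayExample {
    public static void main(String[] args) {
        byte[] bytes = new byte[16];

        // Wrapping an explicitly sized byte[] guarantees array() is exactly that array.
        ByteBuf wrapped = Unpooled.wrappedBuffer(bytes);
        System.out.println(wrapped.array().length); // always 16

        // Unpooled.buffer(16) makes no such promise: the backing array is an
        // implementation detail and may be larger than the requested capacity.
        ByteBuf allocated = Unpooled.buffer(16);
        System.out.println(allocated.array().length); // implementation dependent

        wrapped.release();
        allocated.release();
    }
}
```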
Motivation:
When users' /tmp is noexec, NativeLibraryLoader logs a message informing
them how to fix the problem by setting a system property. However, if
Netty has been shaded that message will tell them to set the un-shaded
system property name, which won't work.
Modifications:
Change the code to let shading tools rename the native.workdir property
name reference within user-visible log messages.
Notably, debug logs were _not_ changed, as there's many debug statements
including a variety of property names. Fixing them would be a much more
invasive change and have limited benefit.
Result:
The users will see the correctly-named system property to set if they
are using a noexec /tmp.
Motivation:
Gracefully respond to bad client requests.
We have a set of errors produced by Android 7.1.1/7.1.2 clients where both the `HttpHeaderNames.SEC_WEBSOCKET_VERSION` and `HttpHeaderNames.ORIGIN` headers are absent. The absence of the first header leads to WebSocketServerHandshaker00 being applied as the handshaker. However, the null second header then causes:
```
java.lang.NullPointerException: value
io.netty.util.internal.ObjectUtil.checkNotNull(ObjectUtil.java:33)
io.netty.handler.codec.DefaultHeaders.addObject(DefaultHeaders.java:327)
io.netty.handler.codec.http.DefaultHttpHeaders.add(DefaultHttpHeaders.java:123)
io.netty.handler.codec.http.websocketx.WebSocketServerHandshaker00.newHandshakeResponse(WebSocketServerHandshaker00.java:162)
```
This causes the connection to be closed for an unclear reason.
Modification:
Added a null check; in case of a null header, an appropriate WebSocketHandshakeException is thrown.
Result:
In case of a null `HttpHeaderNames.ORIGIN` header, the WebSocketHandshakeException is caught by WebSocketServerProtocolHandler, which sends a graceful `BAD_REQUEST` response.
Motivation:
When more than one Connection header is present in an h2c upgrade request, the upgrade fails. This fixes that.
Modification:
In HttpServerUpgradeHandler's upgrade() method, check whether any of the Connection header values is "upgrade", not just the first value, which might be something other than "upgrade".
Result:
Fixes #8846.
With this PR, the upgrade no longer fails when multiple Connection headers are sent with the upgrade request.
Motivation:
The SSLEngine provides a way to signal to the caller that it needs to execute a blocking / long-running task, which can then be offloaded to an Executor to ensure the I/O thread is not blocked. How we currently handle this in SslHandler is not optimal: while we offload to the Executor, we still block the I/O thread.
Modifications:
- Correctly support offloading the task to the Executor while suspending processing of SSL in the I/O Thread
- Add new methods to SslContext to specify the Executor when creating a SslHandler
- Remove @deprecated annotations from SslHandler constructor that takes an Executor
- Adjust tests to also run with the Executor to ensure all works as expected.
Result:
Be able to offload long-running tasks to an Executor when using SslHandler. Partly fixes https://github.com/netty/netty/issues/7862 and https://github.com/netty/netty/issues/7020.
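A hypothetical pipeline setup, assuming an SslContext overload that accepts the Executor used for delegated tasks as described above:
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.net.ssl.SSLException;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

public final class OffloadingSslInitializer extends ChannelInitializer<Channel> {
    private final SslContext sslContext;
    private final ExecutorService delegatedTaskExecutor = Executors.newCachedThreadPool();

    public OffloadingSslInitializer() throws SSLException {
        sslContext = SslContextBuilder.forClient().build();
    }

    @Override
    protected void initChannel(Channel ch) {
        // Long-running handshake tasks run on delegatedTaskExecutor instead of the EventLoop.
        ch.pipeline().addLast(sslContext.newHandler(ch.alloc(), delegatedTaskExecutor));
    }
}
```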
Motivation:
We missed extending a few tests from the testsuite and so did not run them with our native KQueue and Epoll transports.
Modifications:
Extend the tests so they also run with our native transports.
Result:
More tests.