Motivation:
RFC 6455 states that, in general, a WebSocket client should not close the TCP
connection, since the server is the one responsible for doing that. In practice,
though, it is not always possible to control the server, and a misbehaving
server (one that does not comply with the RFC) may cause connections to be
leaked.
RFC 6455 #7.1.1 says
> In abnormal cases (such as not having received a TCP Close from the server
after a reasonable amount of time) a client MAY initiate the TCP Close.
Modifications:
* Add an additional parameter `forceCloseAfterMillis` to the WebSocket client handshaker
* Use 10 seconds as the default
Result:
The WebSocket client handshaker complies with the RFC. Fixes #8883.
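For illustration, a minimal sketch (a hypothetical helper, not the handshaker's actual implementation) of the forced-close behaviour the new parameter enables:
```
import java.util.concurrent.TimeUnit;

import io.netty.channel.Channel;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;

/**
 * Hedged sketch: after sending CLOSE, give the server a bounded amount of time
 * to close the TCP connection, then close it from the client side as permitted
 * by RFC 6455 #7.1.1.
 */
final class ForcedClientClose {
    static void closeWithTimeout(Channel ch, long forceCloseAfterMillis) {
        ch.writeAndFlush(new CloseWebSocketFrame()).addListener(future ->
                ch.eventLoop().schedule(() -> {
                    if (ch.isActive()) {
                        ch.close(); // the server did not initiate the TCP close in time
                    }
                }, forceCloseAfterMillis, TimeUnit.MILLISECONDS));
    }
}
```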
Motivation:
When kevent(...) returns with EINTR we do not correctly decrement the timespec
structure contents to account for the elapsed time. This may lead to negative
values for tv_nsec, which will result in an EINVAL and raise an IOException in
the event loop's selection loop.
Modifications:
Correctly calculate new timeoutTs when EINTR is detected
Result:
Fixes #9013.
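The actual fix is in the native kqueue code; the following is only a hedged illustration, in Java, of the arithmetic it needs to get right:
```
/**
 * Illustration only (the real change is in C): after an EINTR, recompute the
 * remaining timeout from whole nanoseconds so that the nanosecond component
 * stays in [0, 1_000_000_000) and never goes negative (which kevent rejects
 * with EINVAL).
 */
final class TimeoutRecalculation {
    static long[] remainingTimespec(long tvSec, long tvNsec, long elapsedNanos) {
        long remainingNanos = tvSec * 1_000_000_000L + tvNsec - elapsedNanos;
        if (remainingNanos < 0) {
            remainingNanos = 0; // never hand a negative timeout to kevent(...)
        }
        return new long[] { remainingNanos / 1_000_000_000L, remainingNanos % 1_000_000_000L };
    }
}
```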
Motivation:
While investigating another bug I noticed that we log at warn level if we fail to notify the promise because it is already fulfilled. This is incorrect and misleading, as there is nothing wrong with that in general: a promise may already have been fulfilled because we issued multiple queries and one of them was successful.
Modifications:
- Change log level to trace
- Add a unit test which previously logged at warn level but now logs at trace level.
Result:
Less misleading noise in the log.
Motivation:
IdleStateHandler may trigger unexpected idle events when flushing large entries to slow clients.
Modification:
In Netty's design we check the identity hash code and the total pending write bytes of the current flush entry to determine whether there was a change in output. But if a large entry is flushing slowly (for example because the network is slow, or the client processes data so slowly that the TCP window drops to zero), the total pending write bytes and the identity hash code remain unchanged.
Avoid this issue by also checking the flush progress of the current entry.
Result:
Fixes #8912.
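A minimal sketch of the change-detection idea (assumed names, not the actual IdleStateHandler internals):
```
/**
 * Sketch: an idle event only fires if nothing about the output changed since the
 * last check, where "changed" now also covers partial progress of a single large,
 * slowly draining flush entry.
 */
final class OutputChangeTracker {
    private int lastMessageHashCode;
    private long lastPendingWriteBytes;
    private long lastFlushProgress;

    boolean hasOutputChanged(int messageHashCode, long pendingWriteBytes, long flushProgress) {
        boolean changed = messageHashCode != lastMessageHashCode
                || pendingWriteBytes != lastPendingWriteBytes
                || flushProgress != lastFlushProgress; // the additional check
        lastMessageHashCode = messageHashCode;
        lastPendingWriteBytes = pendingWriteBytes;
        lastFlushProgress = flushProgress;
        return changed;
    }
}
```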
Motivation
AbstractReferenceCounted and AbstractReferenceCountedByteBuf contain
duplicate logic for managing the volatile refcount in an optimized and
consistent manner, which increased in complexity in #8583. It's possible
to extract this into a common helper class now that all access is via an
AtomicIntegerFieldUpdater.
Modifications
- Move duplicate logic into a shared ReferenceCountUpdater class
- Incorporate some additional simplification for the most common single
increment/decrement cases (fewer checks/operations)
Result
Less code duplication, better encapsulation of the "non-trivial"
internal volatile refcount manipulation
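A minimal sketch of the shared-helper idea, assuming the simple getAndIncrement/getAndDecrement scheme (the real ReferenceCountUpdater is more involved and also covers the optimized single increment/decrement paths):
```
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

/**
 * Sketch: all refCnt access goes through an AtomicIntegerFieldUpdater, so the
 * retain/release bookkeeping can live in one place and be reused by both
 * abstract base classes.
 */
abstract class RefCountedBase {
    private static final AtomicIntegerFieldUpdater<RefCountedBase> UPDATER =
            AtomicIntegerFieldUpdater.newUpdater(RefCountedBase.class, "refCnt");

    private volatile int refCnt = 1;

    final int refCnt() {
        return refCnt;
    }

    final RefCountedBase retain() {
        int oldRef = UPDATER.getAndIncrement(this);
        if (oldRef <= 0) {
            UPDATER.getAndDecrement(this); // roll back and fail
            throw new IllegalStateException("refCnt: " + oldRef);
        }
        return this;
    }

    final boolean release() {
        int oldRef = UPDATER.getAndDecrement(this);
        if (oldRef == 1) {
            deallocate();
            return true;
        }
        if (oldRef <= 0) {
            UPDATER.getAndIncrement(this); // roll back and fail
            throw new IllegalStateException("refCnt: " + oldRef);
        }
        return false;
    }

    protected abstract void deallocate();
}
```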
Motivation:
DnsNameResolver#resolveAll(String) may return duplicate results in the event that the original hostname DNS response includes an IP address X and a CNAME that ends up resolving to the same IP address X. This behavior is inconsistent with the JDK's resolver, and it is unexpected for a resolveAll(..) call to return a List with duplicate entries.
Modifications:
- Filter out duplicates
- Add unit test
Result:
More consistent and less surprising behavior
Motivation:
Invoking ChannelHandlers is not free and can result in some overhead when the ChannelPipeline becomes very long. This is especially true if most handlers just forward the call to the next handler in the pipeline. When the user extends Channel*HandlerAdapter we can easily detect if we can just skip the handler and invoke the next handler in the pipeline directly. This reduces the dispatch overhead and also shortens the call stack in many cases.
This backports https://github.com/netty/netty/pull/8723 and https://github.com/netty/netty/pull/8987 to 4.1
Modifications:
Detect if we can skip the handler when walking the pipeline.
Result:
Reduce overhead for long pipelines.
Benchmark                                          (extraHandlers)  Mode  Cnt       Score       Error  Units
DefaultChannelPipelineBenchmark.propagateEventOld                4 thrpt   10  267313.031 ±  9131.140  ops/s
DefaultChannelPipelineBenchmark.propagateEvent                   4 thrpt   10  824825.673 ± 12727.594  ops/s
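One hedged way to detect such a pass-through handler (a sketch of the general technique, not necessarily how the backported change implements it):
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Sketch: if a subclass does not override channelRead(...), the adapter's default
 * implementation just forwards the event, so the pipeline walk may skip it.
 */
final class SkipDetection {
    static boolean overridesChannelRead(Class<? extends ChannelInboundHandlerAdapter> clazz)
            throws NoSuchMethodException {
        return clazz.getMethod("channelRead", ChannelHandlerContext.class, Object.class)
                .getDeclaringClass() != ChannelInboundHandlerAdapter.class;
    }
}
```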
Motivation:
4079189f6b introduced OpenSslPrivateKeyMethodTest, which is only run when BoringSSL is used. As the assumeTrue(...) also guards the initialization of the static fields, we need to ensure we only try to destroy these if BoringSSL is used, as otherwise it will produce an NPE.
Modifications:
Check if BoringSSL is used before trying to destroy the resources.
Result:
No more NPE when BoringSSL is not used.
Motivation:
32563bfcc1 introduced a regression in which we no longer discarded the remaining messages after handling an oversized message.
Modifications:
- Do not set aggregating to false after handleOversizedMessage is called
- Adjust unit tests to verify the behaviour is correct again.
Result:
Fixes https://github.com/netty/netty/issues/9007.
Motivation:
The CompositeByteBuf discardReadBytes / discardReadComponents methods are currently quite inefficient, including when there are no read components to discard. We would like to call the latter more frequently in ByteToMessageDecoder#COMPOSITE_CUMULATOR.
In the same context it would be beneficial to perform a "shallow copy" of a composite buffer (for example when it has a refcount > 1) to avoid having to allocate and copy the contained bytes just to obtain an "independent" cumulation.
Modifications:
- Optimize discardReadBytes() and discardReadComponents() implementations (start at first comp rather than performing a binary search for the readerIndex).
- New addFlattenedComponents(boolean,ByteBuf) method which performs a shallow copy if the provided buffer is also composite and avoids adding any empty buffers, plus unit test.
- Other minor optimizations to avoid unnecessary checks.
Results:
discardReadXX methods are faster, composite buffers can be easily appended without deepening the buffer "tree" or retaining unused components.
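A hedged usage sketch of the new method (signature as stated above):
```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;

/**
 * Usage sketch: append 'in' to an existing cumulation without deepening the
 * composite "tree". If 'in' is itself composite, its components are added
 * individually (a shallow copy); empty buffers are not added at all.
 */
final class FlattenedAppendExample {
    static CompositeByteBuf append(CompositeByteBuf cumulation, ByteBuf in) {
        return cumulation.addFlattenedComponents(true, in);
    }
}
```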
Motivation:
4079189f6b changed the dependency to netty-tcnative-boringssl-static but it should still be netty-tcnative.
Modifications:
Change back to netty-tcnative
Result:
Correct dependency is used
Motivation:
BoringSSL allows customizing how key signing is done and even offloading it from the IO thread. We should provide a way to plug in a custom implementation when BoringSSL is used.
Modifications:
- Introduce OpenSslPrivateKeyMethod that can be used by the user to implement custom signing by using ReferenceCountedOpenSslContext.setPrivateKeyMethod(...)
- Introduce static methods in OpenSslKeyManagerFactory which allow creating a KeyManagerFactory that supports keyless operations by letting the user handle everything in OpenSslPrivateKeyMethod.
- Add testcase which verifies that everything works as expected
Result:
A user is able to customize how keys are signed.
Motivation:
Provide epoll/native multicast to support high load multicast users (we are using it for a high load telecomm app at my day job).
Modification:
Added support for (IPv4 only) source-specific and any-source multicast for the epoll transport. Some caveats (beyond no IPv6 support initially - there's a bit of work to add in join and leave group, specifically around SSM, as IPv6 uses different data structures for this): no support for disabling loopback mode, retrieval of the interface, or the block operation, all of which tend to be less frequently used.
Result:
Provides epoll transport multicast for IPv4 for common use cases. I understand if you'd prefer to hold off until IPv6 is included, but I'm not sure when I'll be able to get to that.
Motivation:
During OpenSsl.java initialization, a SelfSignedCertificate is created
during the static initialization block to determine if OpenSsl
can be used.
The default key strength for SelfSignedCertificate was too low if FIPS
mode is used and BouncyCastle-FIPS is the only available provider
(necessary for compliance). A simple fix is to raise the key
strength to the minimum required by FIPS.
Modification:
Set the default key bit length to 2048, but also allow it to be set dynamically via a system property to future-proof against stricter security compliance requirements.
Result:
Fixes #9018
Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
Motivation:
Different SslProvider implementations support different types of keys and chains. We should fail fast if we cannot support the given type.
Related to https://github.com/netty/netty-tcnative/issues/455.
Modifications:
- Try to parse the key / chain first, and if this fails throw an SSLException
- Add tests.
Result:
Fail fast.
Motivation:
Deprecate ChannelOption.newInstance(...) as it is not used.
Modifications:
Deprecate ChannelOption.newInstance(...) as valueOf(...) should be used as a replacement.
Result:
Fixes https://github.com/netty/netty/issues/8983.
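A short usage sketch of the suggested replacement (the option name is just an example):
```
import io.netty.channel.ChannelOption;

/**
 * valueOf(...) returns the existing constant or creates it, so there is no need
 * for newInstance(...).
 */
final class OptionLookup {
    static final ChannelOption<Boolean> MY_CUSTOM_OPTION = ChannelOption.valueOf("MY_CUSTOM_OPTION");
}
```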
Motivation:
A callback may already have stored an initial handshake exception in ReferenceCountedOpenSslEngine, so we should include it when throwing an SSLHandshakeException to ensure the user has all the information when debugging.
Modifications:
Include initial handshake exception
Result:
Include all errors when throwing the SSLHandshakeException.
Motivation:
0a0da67f43 introduced a testcase which is flaky. We need to fix it and enable it again.
Modifications:
Mark flaky test as ignore.
Result:
No flaky build anymore.
Motivation:
"Connection: close" header should be specified each time we're going
to close an underlying TCP connection when sending HTTP/1.1 reply.
Modifications:
Introduces changes made in #8914 for the following examples:
* WebSocket index page and WebSocket server handler
* HelloWorld server
* SPDY server handler
* HTTP/1.1 server handler from HTTP/2 HelloWorld example
* HTTP/1.1 server handler from tiles example
Result:
Keep-Alive connection management conforms with the RFCs.
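The pattern applied across the examples, as a minimal sketch:
```
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;

/**
 * Advertise the close via the "Connection: close" header before performing it,
 * so the peer is not surprised by the TCP close.
 */
final class CloseAdvertisedResponse {
    static void writeAndClose(ChannelHandlerContext ctx, FullHttpResponse response) {
        response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}
```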
Motivation:
https://github.com/netty/netty/pull/8826 added @Deprecated to the exceptionCaught(...) method, but we did not add @SuppressWarnings(...) to its sub-types. Besides this, we can make the deprecation docs a bit clearer.
Modifications:
- Add @SuppressWarnings("deprecation")
- Clarify docs.
Result:
Fewer warnings and clearer deprecation docs.
Motivation:
We do not need to call SSL.setHostNameValidation(...) as it should be done as part of the TrustManager implementation. This is consistent with the JDK implementation of SSLEngine.
Modifications:
Remove call to SSL.setHostNameValidation(...)
Result:
More consistent behaviour between our SSLEngine implementation and the one that comes with the JDK.
Motivation:
We synchronize on chunk.arena when producing the String returned by PoolSubpage.toString(), which may raise an NPE when chunk == null. chunk is null for the head of the linked list, so an NPE may be raised by a debugger. This NPE can never happen in real code, though, as we never access toString() of the head.
Modifications:
Add null checks and so fix the possible NPE
Result:
No NPE when using a debugger to inspect the PooledByteBufAllocator.
Motivation:
We use SSL.setKeyMaterial(...) in our implementation when using the KeyManagerFactory so we should also use it to detect if we can support KeyManagerFactory.
Modifications:
Use SSL.setKeyMaterial(...) as replacement for SSL.setCertificateBio(...)
Result:
Use the same method call to detect if KeyManagerFactory can be supported as we use in the real implementation.
Motivation:
Systems depending on Netty may benefit (telemetry, alternative event loop scheduling algorithms) from knowing the number of channels assigned to each EventLoop.
Modification:
Expose the number of channels registered in the EventLoop via SingleThreadEventLoop.registeredChannels.
Result:
Fixes #8276.
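A hedged usage sketch (accessor name as exposed above; threading requirements, if any, are left to the caller):
```
import io.netty.channel.EventLoop;
import io.netty.channel.SingleThreadEventLoop;

/**
 * Reads the newly exposed counter for telemetry. Other EventLoop implementations
 * do not expose it, so -1 is returned for them here.
 */
final class EventLoopMetrics {
    static int registeredChannels(EventLoop loop) {
        return loop instanceof SingleThreadEventLoop
                ? ((SingleThreadEventLoop) loop).registeredChannels()
                : -1;
    }
}
```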
Motivation:
com.puppycrawl.tools checkstyle < 8.18 was reported to contain a possible security flaw. We should upgrade.
Modifications:
- Upgrade netty-build and checkstyle.
- Fix checkstyle errors
Result:
Fixes https://github.com/netty/netty/issues/8968.
Motivation:
We have multiple places where we store the exception that was produced by a callback in ReferenceCountedOpenSslEngine, and so have a lot of code-duplication.
Modifications:
- Consolidate code into a package-private method that is called from the callbacks if needed
Result:
Less code-duplication and cleaner code.
Motivation:
We can just use a local variable in HttpUploadServerHandler and so make the example code a bit cleaner.
Modifications:
Use local variable.
Result:
Fixes https://github.com/netty/netty/issues/8892.
Motivation:
BoringSSL supports offloading certificate validation to a different thread. This is useful as it may need to do blocking operations and so may block the EventLoop.
Modification:
- Adjust ReferenceCountedOpenSslEngine to correctly handle offloaded certificate validation (just as we already have code for certificate selection).
Result:
Be able to offload certificate validation when using BoringSSL.
Motivation:
We had a bug which could cause ExtendedSSLSession.getPeerSupportedSignatureAlgorithms() to return an empty array when using BoringSSL. This testcase verifies that we correctly return the algorithms after the fix in https://github.com/netty/netty-tcnative/pull/449.
Modifications:
Add testcase to verify behaviour.
Result:
Ensure we correctly return the algorithms.
Motivation:
We recently changed the docker config to use adoptjdk builds but forgot to update the docker-sync related files.
Modifications:
Use adoptjdk there as well.
Result:
More consistent usage of JDK versions.
Motivation:
e9ce5048df added a testcase to ensure we correctly send the alert in all cases, but it used message matching that was too strict and did not work for BoringSSL, which uses underscores instead of whitespace.
Modifications:
Make the message matching less strict.
Result:
Tests pass also when using BoringSSL.
Motivation:
Allow users to skip the evaluation of certain WebSocket extensions;
for example, we can skip the compression extension for messages that are already compressed or very small.
Modification:
This pull request is related to #5669.
Result:
Users can supply a filter to WebSocketClientExtensionHandshaker or WebSocketServerExtensionHandshaker to skip the evaluation of certain extensions.
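As an illustration of such a filter (hypothetical class and method names, not necessarily the final API):
```
import io.netty.handler.codec.http.websocketx.WebSocketFrame;

/**
 * Hypothetical filter: skip the compression extension for frames whose payload
 * is too small to benefit from it.
 */
final class SmallFrameSkipFilter {
    private static final int MIN_COMPRESSIBLE_BYTES = 128; // assumed threshold

    boolean mustSkip(WebSocketFrame frame) {
        return frame.content().readableBytes() < MIN_COMPRESSIBLE_BYTES;
    }
}
```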
Motivation:
In MemoryRegionCache.Entry we use the Recycler to reduce GC pressure and churn. The problem is that these entries will also be recycled when the PoolThreadCache is collected and finalize() is called. This can have the effect that we try to load a class after the WebApp has already been stopped.
This will produce a stack trace like this on Tomcat:
```
19-Mar-2019 15:53:21.351 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at io.netty.util.Recycler$3.initialValue(Recycler.java:233)
at io.netty.util.Recycler$3.initialValue(Recycler.java:230)
at io.netty.util.concurrent.FastThreadLocal.initialize(FastThreadLocal.java:188)
at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:142)
at io.netty.util.Recycler$Stack.pushLater(Recycler.java:624)
at io.netty.util.Recycler$Stack.push(Recycler.java:597)
at io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:225)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry.recycle(PoolThreadCache.java:478)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:459)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:430)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:422)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:279)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:270)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:241)
at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:230)
at java.lang.System$2.invokeFinalize(System.java:1270)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```
Besides this, we also need to ensure we do not try to lazily load SizeClass when called from the finalizer, as it may not be loadable anymore if the ClassLoader has already been destroyed.
This would produce an error like:
```
20-Mar-2019 11:26:35.254 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
at io.netty.buffer.PoolArena.freeChunk(PoolArena.java:287)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:464)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:429)
at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:421)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:278)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:269)
at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:240)
at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:229)
at java.lang.System$2.invokeFinalize(System.java:1270)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```
Modifications:
- Only try to put the Entry back into the Recycler if the PoolThreadCache is not being destroyed by the finalizer.
- Only try to access SizeClass if not triggered by the finalizer.
Result:
No IllegalStateException anymore when a webapp that uses Netty and the PooledByteBufAllocator is reloaded in Tomcat.
Motivation:
The special case fixed in #8497 also requires that we keep a derived slice when trimming components in place, as done by the capacity(int) and discardReadBytes() methods.
Modifications:
Ensure that we keep a reference to the trimmed components' original retained slice in the capacity(int) and discardReadBytes() methods, so that it is released properly when they are later freed. Add a unit test which fails prior to the fix.
Result:
Edge case leak is eliminated.
Motivation:
PooledByteBufAllocator uses a PoolThreadCache per thread that allocates / deallocates to minimize the performance overhead. This PoolThreadCache is trimmed after X allocations to free up buffers that have not been allocated for a long time. This works out quite well when the app continues to allocate, but fails if the app stops allocating frequently (for whatever reason), in which case a lot of memory is wasted and not given back to the arena / freed.
Modifications:
- Add a ThreadExecutorMap that offers multiple methods that wrap Runnable / ThreadFactory / Executor and allow to call ThreadExecutorMap.currentEventExecutor() to get the current executing EventExecutor for the calling Thread.
- Use these methods in the constructors of our EventExecutor implementations (which also covers the EventLoop implementations)
- Add an io.netty.allocator.cacheTrimIntervalMillis system property which can be used to specify a fixed rate / interval at which we should try to trim the PoolThreadCache for an EventExecutor that allocates.
- Add PooledByteBufAllocator.trimCurrentThreadCache() to allow the user to trim the cache of the calling thread manually.
- Add testcases
- Introduce FastThreadLocal.getIfExists()
Result:
Allow trimming the PoolThreadCache better / more frequently and so give memory back to the arena / system.
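A usage sketch of the two knobs named above:
```
import io.netty.buffer.PooledByteBufAllocator;

final class CacheTrimExample {
    static void trimNow() {
        // Periodic trimming can be enabled via the new system property, e.g.
        // -Dio.netty.allocator.cacheTrimIntervalMillis=5000 (read at allocator init).
        // In addition, the calling thread's PoolThreadCache can be trimmed manually:
        PooledByteBufAllocator.DEFAULT.trimCurrentThreadCache();
    }
}
```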
Motivation:
https://github.com/netty/netty/pull/8963 adds property files which contain a Netty copyright header, but our old checkstyle regex did not correctly detect these.
Modifications:
Update to new netty-build which contains an updated regex.
Result:
Be able to correctly detect copyright headers in property files.
Motivation:
It appears this was an oversight; maybe it was valid at some point in the past. Noticed while reviewing #8958.
Modifications:
Change AbstractChannelHandlerContext to not extend DefaultAttributeMap.
Result:
Simpler hierarchy, eliminate unused attributes field from each context instance.
Motivation:
We don't use ObjectCleaner in our FastThreadLocal anymore, so we also no longer need to take special care to store it there.
Modifications:
Remove code that is not needed anymore.
Result:
Code cleanup.
Motivation:
df8b9d3fb9 added config files for docker-sync but did not add a gitignore entry for .docker-sync.
Modifications:
Add .docker-sync to gitignore
Result:
Ignore .docker-sync directory
Motivation:
GlobalEventExecutor already provides all the guarantees of OrderedEventExecutor, so it should implement it.
Modifications:
Let GlobalEventExecutor implement OrderedEventExecutor.
Result:
Make it more clear how execution order is handled in GlobalEventExecutor.
Motivation:
DomainSocketChannel is a DuplexChannel, which is able to shut down input or output individually on demand, but the ALLOW_HALF_CLOSURE channel option has not been supported yet.
I thought this could be a missing feature for Unix domain sockets, so here is the PR for it.
Modifications:
1. Added the allowHalfClosure property to both EpollDomainSocketChannelConfig and KQueueDomainSocketChannelConfig,
2. Enabled the isAllowHalfClosure method of the native channel to support the domain channel config,
3. Created EpollDomainSocketShutdownOutputByPeerTest and KQueueDomainSocketShutdownOutputByPeerTest to verify the change.
Result:
The ALLOW_HALF_CLOSURE channel option can be set on a DomainSocketChannel, and there is no more "Unknown channel option 'ALLOW_HALF_CLOSURE'" warning.
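For example (a minimal sketch; the option is the standard ChannelOption.ALLOW_HALF_CLOSURE):
```
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;

/**
 * After this change the option is honoured by the domain-socket channel configs
 * instead of being logged as an unknown option.
 */
final class HalfClosureOption {
    static void configure(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.ALLOW_HALF_CLOSURE, true);
    }
}
```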
Motivation:
docker-sync.io helps to speed up Docker filesystem access on macOS and so makes builds there a lot faster. We should add some config to help users use it.
Modifications:
Add docker-sync configs for centos-6.18 which is what we use for releases.
Result:
Faster builds are possible via Docker when using macOS.
Motivation:
This counter is very useful for monitoring Netty's direct memory usage without having to track every ByteBufAllocator in the JVM.
Modification:
Expose the value of DIRECT_MEMORY_COUNTER as we already do for DIRECT_MEMORY_LIMIT.
We return -1 in case DIRECT_MEMORY_COUNTER is not available.
Result:
Be able to get the amount of direct memory used.
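A hedged usage sketch; the accessor name below is an assumption based on the description above (with -1 meaning the counter is not available):
```
import io.netty.util.internal.PlatformDependent;

/**
 * Reads the exposed direct-memory counter (assumed accessor name); a value of -1
 * means the counter is not available in this configuration.
 */
final class DirectMemoryMetrics {
    static long usedDirectMemory() {
        return PlatformDependent.usedDirectMemory();
    }
}
```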
Motivation:
When the verification of the server certificate fails because of the TrustManager used on the client side, we need to ensure we produce the correct alert and send it to the remote peer before closing the connection.
Modifications:
- Use the correct verification mode on the client-side by default.
- Update tests
Result:
Fixes https://github.com/netty/netty/issues/8942.
Motivation:
testsuite-osgi currently resolves its bundles from the local / remote Maven repository, which means you need to run `mvn install` before it can pick up the bundles. Besides this, it also means that you may pick up old versions if you forget to call `install` before running it.
Modifications:
Use alta-maven-plugin to be able to resolve bundles from the local build directory during the build.
Result:
No need to install jars before running the OSGi testsuite, and we always test with the latest jars.