Motivation:
The build script for the Netty Transport Native Epoll module can cause
an intermittent build break due to a broken pipe from /usr/bin/ldd. This issue
is likely to occur on a build environment with multiple processors.
Modifications:
The root cause is that the consumer head command finishes earlier than the
producer ldd command. Buffering the output of the ldd command with an
intermediate tail command avoids the broken pipe.
Result:
A build on multiple processors can finish successfully.
Signed-off-by: Tatsushi Inagaki <e29253@jp.ibm.com>
Motivation:
At the moment we use Unpooled.buffer(...) in InboundHttp2ToHttpAdapter when we need to make a copy of the message. We should rather use the configured ByteBufAllocator of the Channel.
Modifications:
Change the internal interface to also take the ByteBufAllocator as an argument and use it when we need to allocate a ByteBuf.
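Roughly the idea, with an illustrative helper name (the actual internal interface differs):
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

final class CopySketch {
    // Allocate the copy from the channel's configured allocator
    // instead of Unpooled.buffer(...).
    static ByteBuf copyContent(ByteBufAllocator alloc, ByteBuf content) {
        ByteBuf copy = alloc.buffer(content.readableBytes());
        copy.writeBytes(content, content.readerIndex(), content.readableBytes());
        return copy;
    }
}
```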
Result:
Use the "correct" ByteBufAllocator in InboundHttp2ToHttpAdapter in all cases
Motivation:
At the moment we set release to false before we call writeData(...). This could lead to the situation that we miss releasing the message if writeData(...) throws. We should set release to false after we called writeData(...) to ensure the ownership of the buffer is correctly transferred.
Modifications:
- Set release to false only after writeData(...) was successfully called
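A sketch of the ownership handling, assuming illustrative local names (encoder, ctx, content, etc.):
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http2.Http2ConnectionEncoder;

final class WriteDataSketch {
    static void write(Http2ConnectionEncoder encoder, ChannelHandlerContext ctx, int streamId,
                      ByteBuf content, int padding, boolean endOfStream, ChannelPromise promise) {
        boolean release = true;
        try {
            // Ownership of 'content' is transferred only once writeData(...) returns without throwing.
            encoder.writeData(ctx, streamId, content, padding, endOfStream, promise);
            release = false;
        } finally {
            if (release) {
                // writeData(...) threw before taking ownership, so release here to avoid a leak.
                content.release();
            }
        }
    }
}
```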
Result:
No possibility for a buffer leak
Motivation:
We missed taking Http2FrameCodecBuilder.isValidateHeaders() into account when an Http2FrameWriter was set on the builder and always assumed validation should be enabled.
Modifications:
Remove the hardcoded value and use the configured value
Result:
Http2FrameCodecBuilder.isValidateHeaders() is respected in all cases
Motivation:
Recycler$Stack.pop can throw an `ArrayIndexOutOfBoundsException` in some race cases; we should double-check `size` even after `scavenge` was called.
Modifications:
Double check `size` after `scavenge`
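A sketch of the double check (field and method names mirror the description above, not the exact Recycler code):
```java
Object pop() {
    int size = this.size;
    if (size == 0) {
        if (!scavenge()) {
            return null;
        }
        // scavenge() can race with concurrent transfers, so re-read size and
        // bail out if there is still nothing to pop instead of indexing elements[-1].
        size = this.size;
        if (size <= 0) {
            return null;
        }
    }
    size--;
    Object ret = elements[size];
    elements[size] = null;
    this.size = size;
    return ret;
}
```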
Result:
Avoid `ArrayIndexOutOfBoundsException` in `pop`
Motivation
Currently a static AtomicLong is used to allocate a unique id whenever a
task is scheduled to any event loop. This could be a source of
contention if delayed tasks are scheduled at a high frequency and can be
easily avoided by having a non-volatile id counter per queue.
Modifications
- Replace the static AtomicLong ScheduledFutureTask#nextTaskId with a long
field in AbstractScheduledExecutorService (see the sketch after this list)
- Set ScheduledFutureTask#id based on this when adding the task to the
queue (in event loop) instead of at construction time
- Add simple benchmark
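A sketch of the idea with hypothetical names: a plain long owned by the executor, only touched on the event loop thread, replaces the shared static AtomicLong:
```java
abstract class AbstractScheduledExecutorSketch {
    // Not volatile and not atomic: only read/written from the event loop thread.
    private long nextTaskId;

    // Called when the task is actually added to the scheduled-task queue
    // (on the event loop), instead of assigning the id at construction time.
    long newTaskId() {
        return nextTaskId++;
    }
}
```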
Result
Less contention / cache-miss possibility when scheduling future tasks
Before:
Benchmark (num) Mode Cnt Score Error Units
scheduleLots 100000 thrpt 20 346.008 ± 21.931 ops/s
After:
Benchmark (num) Mode Cnt Score Error Units
scheduleLots 100000 thrpt 20 654.824 ± 22.064 ops/s
Motivation:
At the moment we do not consistently (and also not correctly) free allocated native memory in all cases while loading the JNI library. This can lead to native memory leaks in the unlikely case of a failure while trying to load the library.
Besides this, we also do not always correctly handle the case when a new Java object can not be created in native code because of an out-of-memory condition.
Modification:
- Copy some macros from netty-tcnative to be able to handle errors in an easier fashion
- Correctly account for the New* functions possibly returning NULL
- Share code
Result:
More robust and clean JNI code
Motivation:
Users should be able to reuse the same FileChannel for different ChunkedNioFile
instances without worrying that FileChannel::position will be
changed concurrently by them.
In addition, FileChannel::read with an absolute position allows the use of
pread on *nix, which is more efficient than fread.
Modifications:
Always use absolute FileChannel::read ops
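Roughly the idea, assuming the ChunkedNioFile instance tracks its own offset (names illustrative):
```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

final class AbsoluteReadSketch {
    // Absolute-position read: it does not read or update the channel's own position,
    // so several ChunkedNioFile instances can safely share one FileChannel.
    static int readChunk(FileChannel in, long offset, int chunkSize) throws IOException {
        ByteBuffer dst = ByteBuffer.allocate(chunkSize);
        return in.read(dst, offset); // maps to pread(2) on *nix
    }
}
```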
Result:
Faster and more flexible uses of FileChannel for ChunkedNioFile
Motivation
This is another iteration of #9476.
Modifications
Instead of maintaining a count of all writes performed and then using
reads during shutdown to ensure all are accounted for, just set a flag
after each write and don't reset it until the corresponding event has
been returned from epoll_wait.
This requires that while a write is still pending we don't reset
wakenUp, i.e. continue to block writes from the wakeup() method.
Result
Race condition eliminated. Fixes #9362
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
Error: Class that is marked for delaying initialization to run time got initialized during image building: io.netty.handler.codec.http2.Http2CodecUtil. Try marking this class for build-time initialization with --initialize-at-build-time=io.netty.handler.codec.http2.Http2CodecUtil
Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception
Error: Image build request failed with exit status 1
Modification:
After debugging, it seems the culprit is io.netty.handler.codec.http2.Http2ClientUpgradeCodec, which also needs runtime initialisation.
Result:
Fixes #micronaut-projects/micronaut-grpc#8
Motivation:
It is not safe to cache a jclass without obtaining a global reference via NewGlobalRef.
Modifications:
Correctly use NewGlobalRef(...) before caching
Result:
Correctly cache jclass instance
Motivation:
We just released a new version of netty-tcnative.
Modifications:
Bump up to netty-tcnative 2.0.26.Final
Result:
Use latest netty-tcnative release
Motivation:
Due to some bug we ended up with ClassCastExceptions in some cases. Besides this, we also did not correctly handle the case when ReferenceCountedOpenSslEngineTest produced tasks to run in one test.
Modifications:
- Correctly unwrap the engine first to fix the ClassCastExceptions
- Run delegated tasks when needed.
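The delegated-task part follows the standard javax.net.ssl.SSLEngine pattern, roughly:
```java
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

final class DelegatedTaskSketch {
    // BoringSSL/OpenSSL engines may hand back tasks during the handshake;
    // the test has to run them before continuing to wrap/unwrap.
    static void runDelegatedTasks(SSLEngine engine) {
        while (engine.getHandshakeStatus() == HandshakeStatus.NEED_TASK) {
            Runnable task = engine.getDelegatedTask();
            if (task == null) {
                break;
            }
            task.run();
        }
    }
}
```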
Result:
All tests pass with different OpenSSL implementations (OpenSSL, BoringSSL etc)
Motivation:
Running tests with a `KQueueDomainSocketChannel` showed worse performance than an `NioSocketChannel`. It turns out that the default send buffer size for Nio sockets is 64k while for KQueue sockets it's 8k. I verified that manually setting the socket's send buffer size improved perf to expected levels.
Modification:
Plumb the `SO_SNDBUF` and `SO_RCVBUF` options into the `*DomainSocketChannelConfig`.
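A hypothetical usage sketch once the options are plumbed through (group and handler wiring omitted):
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.kqueue.KQueueDomainSocketChannel;
import io.netty.channel.kqueue.KQueueEventLoopGroup;

final class DomainSocketConfigSketch {
    static Bootstrap configure(KQueueEventLoopGroup group) {
        return new Bootstrap()
                .group(group)
                .channel(KQueueDomainSocketChannel.class)
                // Raise the buffers from the 8k kqueue default towards the 64k Nio default.
                .option(ChannelOption.SO_SNDBUF, 64 * 1024)
                .option(ChannelOption.SO_RCVBUF, 64 * 1024);
    }
}
```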
Result:
Can now configure send and receive buffer sizes for domain sockets.
Motivation:
Optimize the QueryStringEncoder for lower memory overhead and higher encode speed.
Modification:
Encode the space to '+' directly, and reuse the uriStringBuilder rather than creating a new one.
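Roughly the fast path for spaces (a simplified helper, not the actual encoder internals):
```java
final class QueryStringEncodeSketch {
    // The encoder keeps appending into the same, reused StringBuilder.
    static void appendEncodedChar(StringBuilder uriStringBuilder, char c) {
        if (c == ' ') {
            // Encode the space as '+' directly instead of going through
            // the general percent-encoding path.
            uriStringBuilder.append('+');
        } else {
            // ... percent-encode other unsafe characters into the same builder
        }
    }
}
```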
Result:
Improved performance
Motivation:
When parsing HTTP headers special care needs to be taken when whitespace is detected in the header name.
Modifications:
- Ignore whitespace when decoding responses (just like before)
- Throw an exception when whitespace is detected during parsing (see the sketch after this list)
- Add unit tests
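A simplified sketch of the stricter name handling (not the actual decoder code):
```java
final class HeaderNameValidationSketch {
    static void validateHeaderName(CharSequence name) {
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            if (c == ' ' || c == '\t') {
                // Whitespace inside a header name is rejected instead of being parsed leniently.
                throw new IllegalArgumentException("header name contains whitespace: " + name);
            }
        }
    }
}
```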
Result:
Fixes https://github.com/netty/netty/issues/9571
Motivation:
Socks5InitialRequestDecoder does not correctly handle fragmentation
Modifications:
- Delete detection of not enough bytes as ReplayingDecoder already handles all of this correctly.
- Add unit test
Result:
Fixes #9574.
Motivation
Currently an epoll_ctl syscall is made every time there is a change to
the event interest flags (EPOLLIN, EPOLLOUT, etc) of a channel. These
are only done in the event loop so can be aggregated into 0 or 1 such
calls per channel prior to the next call to epoll_wait.
Modifications
I think further streamlining/simplification is possible but for now I've
tried to minimize structural changes and added the aggregation beneath
the existing flag manipulation logic.
A new AbstractChannel#activeFlags field records the flags last set on
the epoll fd for that channel. Calls to setFlag/clearFlag update the
flags field as before but instead of calling epoll_ctl immediately, just
set or clear a bit for the channel in a new bitset in the associated
EpollEventLoop to reflect whether there's any change to the last set
value.
Prior to calling epoll_wait the event loop makes the appropriate
epoll_ctl(EPOLL_CTL_MOD) call once for each channel whose bit is set.
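An illustrative sketch of the aggregation (field and native-call names are hypothetical):
```java
import java.util.BitSet;

final class FlagAggregationSketch {
    static final class ChannelEntry {
        int fd;
        int flags;        // flags the channel currently wants
        int activeFlags;  // flags last applied to the epoll fd
    }

    private final BitSet pendingFlagChannels = new BitSet();
    private final ChannelEntry[] channels;   // indexed by a per-loop channel id
    private final int epollFd;

    FlagAggregationSketch(ChannelEntry[] channels, int epollFd) {
        this.channels = channels;
        this.epollFd = epollFd;
    }

    // Called from setFlag/clearFlag: just remember that this channel may have changed.
    void markPending(int channelId) {
        pendingFlagChannels.set(channelId);
    }

    // Called by the event loop right before epoll_wait: at most one
    // epoll_ctl(EPOLL_CTL_MOD) per channel whose flags actually changed.
    void processPendingFlagUpdates() {
        for (int id = pendingFlagChannels.nextSetBit(0); id >= 0;
             id = pendingFlagChannels.nextSetBit(id + 1)) {
            ChannelEntry ch = channels[id];
            if (ch != null && ch.flags != ch.activeFlags) {
                epollCtlMod(epollFd, ch.fd, ch.flags); // hypothetical wrapper around the syscall
                ch.activeFlags = ch.flags;
            }
        }
        pendingFlagChannels.clear();
    }

    private static native void epollCtlMod(int epollFd, int fd, int flags);
}
```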
Result
Fewer syscalls, particularly in some auto-read=false cases. Simplified
error handling from centralization of these calls.
Motivation:
peek() is implemented in a similar way to poll() for the MPSC queue, so it is more like a consumer call.
It is possible that multiple threads call peek() while one thread calls poll() at the same time.
This leads to a multiple-consumer scenario, which violates the multi-producer single-consumer contract and could end up spinning in an infinite loop in peek()
Modification:
Use isEmpty() instead of peek() to check if the task queue is empty
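Roughly the change (taskQueue stands for the event loop's MPSC task queue; the surrounding code is illustrative):
```java
// Before: peek() behaves like a consumer call on the MPSC queue and can race
// with the real consumer thread:
//     if (taskQueue.peek() != null) { ... }
//
// After: isEmpty() does not act as a consumer, so it avoids the multi-consumer violation.
if (!taskQueue.isEmpty()) {
    // ... there is pending work
}
```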
Result:
Don't violate the MPSC semantics.
Motivation:
We should correctly reset the cached local and remote address when a Channel.disconnect() is called and the channel has a notion of disconnect vs close (for example DatagramChannel implementations).
Modifications:
- Correctly reset the cached local and remote address
- Update testcase to cover it and so ensure all transports work in a consistent way
Result:
Correctly handle disconnect()
Motivation:
SystemPropertyUtil already uses the AccessController internally, so there is no need to wrap its usage with AccessController as well.
Modifications:
Remove explicit AccessController usage when SystemPropertyUtil is used.
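For illustration (the property name is made up):
```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import io.netty.util.internal.SystemPropertyUtil;

final class PropertyLookupSketch {
    // Before: redundant, since SystemPropertyUtil.get(...) already performs the
    // privileged lookup internally.
    static String before() {
        return AccessController.doPrivileged((PrivilegedAction<String>) () ->
                SystemPropertyUtil.get("io.netty.example.someProperty")); // property name is illustrative
    }

    // After: call SystemPropertyUtil directly.
    static String after() {
        return SystemPropertyUtil.get("io.netty.example.someProperty");
    }
}
```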
Result:
Code cleanup
Motivation:
We did not correctly handle task offloading when using BoringSSL / OpenSSL. This could lead to the situation that we did not write the SSL alert out for the remote peer before closing the connection.
Modifications:
- Correctly handle exceptions when we resume processing on the EventLoop after the task was offloaded
- Ensure we call SSL.doHandshake(...) to flush the alert out to the outbound buffer when a handshake exception was detected
- Correctly signal back the need to call WRAP again when a handshake exception is pending. This will ensure we flush out the alert in all cases.
Result:
No more failures when task offloading is used.
Motivation:
When using io.netty.handler.ssl.openssl.useTasks=true we may call ReferenceCountedOpenSslEngine.setKeyMaterial(...) from another thread and so need to synchronize and also check if the engine was destroyed in the meantime to eliminate the possibility of a native crash.
The same is true when trying to access the authentication methods.
Modification:
- Add synchronized and isDestroyed() checks where missing (see the sketch after this list)
- Add null checks for the case when a callback is executed by another thread after the engine was destroyed already
- Move code for master key extraction to ReferenceCountedOpenSslEngine to ensure there can be no races.
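The general shape of the added guards (illustrative only, not the exact engine code):
```java
void setKeyMaterialFromTask(/* key material parameters omitted */) {
    synchronized (this) {
        if (isDestroyed()) {
            // The engine was released by another thread in the meantime;
            // touching the freed native state could crash the JVM.
            return;
        }
        // ... call into the native engine while still holding the lock
    }
}
```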
Result:
No native crash possible anymore when using io.netty.handler.ssl.openssl.useTasks=true
Motivation:
calculateMaxBytesPerGatheringWrite() contains a duplicated calculation: getSendBufferSize() << 1
Modifications:
Remove the duplicated calculation
Result:
The method is clearer and easier to read
Motivation:
At the moment it is not possible to build netty on POWER8 systems.
Modifications:
- Improve detection of the possibility of using Conscrypt
- Skip testsuite-shading when not on x86_64, as this is the only platform for which we build tcnative atm
- Only include the classifier for the tcnative dependency when on x86_64, as this is the only platform for which we build tcnative atm
- Better detect if UDT test can be run
Result:
Fixes https://github.com/netty/netty/issues/9479
Motivation:
Changes that were done to the EpollEventLoop to optimize some things broke some testsuites and caused timeouts. We need to investigate why this is the case, but for
now we should just revert so we can do a release.
Modifications:
- Partly revert 1fa7a5e697 and a22d4ba859
Result:
Testsuites pass again.
Motivation:
291f80733a introduced a change to use a byte[] to construct the InetAddress when receiving datagram messages to reduce the overhead. Unfortunately it introduced a regression when handling IPv6-mapped IPv4 addresses and so produced an IndexOutOfBoundsException when trying to fill the byte[] in native code.
Modifications:
- Correctly use the offset on the pointer of the address.
- Add testcase
- Make tests more robust and include more details when the test fails
Result:
No more IndexOutOfBoundsException
Motivation:
At the moment HttpContentEncoder handles only the first value of multiple accept-encoding headers.
Modification:
Join multiple accept-encoding headers into one, separated by commas.
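Roughly the idea (request stands for the inbound HttpRequest; the encoder wiring is omitted):
```java
import java.util.List;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;

final class AcceptEncodingSketch {
    // Collapse all accept-encoding values into a single comma-separated header
    // before the encoder picks the target encoding.
    static void mergeAcceptEncoding(HttpRequest request) {
        List<String> values = request.headers().getAll(HttpHeaderNames.ACCEPT_ENCODING);
        if (values.size() > 1) {
            request.headers().set(HttpHeaderNames.ACCEPT_ENCODING, String.join(",", values));
        }
    }
}
```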
Result:
Fixes #9553
Motivation:
There appears to be a thread-safety issue in the way that `SocksAuthRequest` is using its `CharsetEncoder` instance. `CharsetUtil#encoder` returns a cached thread-local encoder instance, so it is not correct to store this instance in a static member variable and reuse it across multiple threads. The result is an occasional `IllegalStateException` as in the following example:
```
java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
at java.base/java.nio.charset.CharsetEncoder.throwIllegalStateException(CharsetEncoder.java:989)
at java.base/java.nio.charset.CharsetEncoder.flush(CharsetEncoder.java:672)
at java.base/java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:801)
at java.base/java.nio.charset.CharsetEncoder.canEncode(CharsetEncoder.java:907)
at java.base/java.nio.charset.CharsetEncoder.canEncode(CharsetEncoder.java:982)
at io.netty.handler.codec.socks.SocksAuthRequest.<init>(SocksAuthRequest.java:43)
```
Modification:
Instead of retrieving the thread-local encoder instance once and storing it as a static member instance, the encoder should be retrieved each time the constructor is invoked. This change prevents any potential concurrency issues where multiple threads may end up using the same encoder instance.
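Roughly the change (username/password stand for the constructor arguments; the message text is illustrative):
```java
import java.nio.charset.CharsetEncoder;
import io.netty.util.CharsetUtil;

// Before: a static field shared across threads, even though CharsetUtil.encoder(...)
// hands out a cached thread-local instance:
//     private static final CharsetEncoder asciiEncoder = CharsetUtil.encoder(CharsetUtil.US_ASCII);
//
// After (sketch): fetch the encoder each time it is needed, e.g. in the constructor.
CharsetEncoder asciiEncoder = CharsetUtil.encoder(CharsetUtil.US_ASCII);
if (!asciiEncoder.canEncode(username) || !asciiEncoder.canEncode(password)) {
    throw new IllegalArgumentException("username and password must be ASCII");
}
```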
Result:
Fixes #9556.
Motivation:
The Javadoc doesn't match the actual behaviour: the exception only happens when a write operation
cannot finish within a certain period of time, not when a write-idle event happens.
Modification:
Correct the Javadoc
Result:
The Javadoc matches the actual behaviour
Motivation
Currently every call to get() on a promise results in two reads of the
volatile result field when one would suffice. Maybe this is optimized
away but it seems sensible not to rely on that.
Modification
Reimplement get() and get(...) in DefaultPromise to reduce volatile access.
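Roughly the shape of the change (isDone0 and reportResult are placeholders for the promise's internal helpers):
```java
@Override
public V get() throws InterruptedException, ExecutionException {
    Object result = this.result;     // single volatile read for the common "already done" case
    if (!isDone0(result)) {
        await();
        result = this.result;        // re-read once after the wait completed
    }
    return reportResult(result);     // unwrap the value or rethrow the stored cause
}
```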
Result
Fewer volatile reads.
Motivation:
It is noticed that SimpleChannelPool's POOL_KEY attribute name 'channelPool' easily conflicts with user code and throws an exception that 'channelPool' is already in use. Being a generic framework, it would be great if we could give the attribute a unique name - maybe use a UUID for the name, since the name is not required later.
Modifications:
This change makes sure that the POOL_KEY used inside SimpleChannelPool is unique by appending the object's hashcode to the name.
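A sketch of the shape of the change as a per-instance field inside SimpleChannelPool (the exact name format is illustrative):
```java
private final AttributeKey<SimpleChannelPool> poolKey =
        AttributeKey.newInstance("io.netty.channel.pool.SimpleChannelPool#" + hashCode());
```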
Result:
No unwanted channel attribute name conflict with user code.
Motivation:
394a1b3485 added support for recvmmsg(...) for unconnected datagram channels; this change also allows recvmmsg(...) to be used with connected datagram channels.
Modifications:
- Always try to use recvmmsg(...) if configured to do so
- Adjust unit test to cover it
Result:
Less syscalls when reading datagram packets
Motivation:
We should also use sendmmsg on connected channels whenever possible to reduce the overhead of syscalls.
Modifications:
No matter whether the channel is connected or not, try to use sendmmsg when supported to reduce the overhead of syscalls
Result:
Better performance on connected UDP channels due to fewer syscalls
Motivation:
394a1b3485 introduced the possibility to use recvmmsg(...) but did not correctly handle IPv6-mapped IPv4 addresses in a way that is consistent with other transports.
Modifications:
- Correctly handle IPv6-mapped IPv4 addresses by only copying over the relevant bytes
- Small improvement in detecting IPv6-mapped IPv4 addresses by using memcmp instead of a byte-by-byte compare
- Adjust test to cover this bug
Result:
Correctly handle IPv6-mapped IPv4 addresses
Motivation
#9152 reverted some static exception reuse optimizations due to the
problem with Throwable#addSuppressed() raised in #9151. This introduced
a performance issue when promises are cancelled at a high frequency due
to the construction cost of CancellationException at the time that
DefaultPromise#cancel() is called.
Modifications
- Reinstate the prior static CANCELLATION_CAUSE_HOLDER but use it just
as a sentinel to indicate cancellation, constructing a new
CancellationException only if/when one needs to be explicitly
returned/thrown
- Subclass CancellationException, overriding fillInStackTrace() to
minimize the construction cost in these cases (see the sketch after this list)
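A sketch of the cheap-to-construct exception used only when one actually has to be returned or thrown (the static holder itself stays a pure sentinel):
```java
import java.util.concurrent.CancellationException;

final class StacklessCancellationException extends CancellationException {
    @Override
    public Throwable fillInStackTrace() {
        // Skip the expensive stack capture; it is rarely useful for a cancellation.
        return this;
    }
}
```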
Result
Promises are much cheaper to cancel. Fixes #9522.
Motivation:
Following up on a discussion with @normanmaurer, with a suggestion to improve code clarity.
Modification:
The method is synchronized, so there is no need for asserts or verbose sync blocks around calls.
Result:
Reduced verbosity and more idiomatic use of the keyword. Also renamed the method to better describe what it's for.
Motivation
This is another iteration of #9476.
Modifications
Instead of maintaining a count of all writes performed and then using
reads during shutdown to ensure all are accounted for, just set a flag
after each write and don't reset it until the corresponding event has
been returned from epoll_wait.
This requires that while a write is still pending we don't reset
wakenUp, i.e. continue to block writes from the wakeup() method.
Result
Race condition eliminated. Fixes #9362
Motivation:
When using datagram sockets which need to handle a lot of packets it makes sense to use recvmmsg to be able to read multiple datagram packets with one syscall.
Modifications:
- Add support for recvmmsg on linux
- Add a new EpollChannelOption.MAX_DATAGRAM_PACKET_SIZE (usage sketched after this list)
- Add tests
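A hypothetical configuration sketch using the option name as described above (handler wiring omitted; the size is illustrative):
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollDatagramChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;

final class RecvmmsgConfigSketch {
    static Bootstrap configure(EpollEventLoopGroup group) {
        return new Bootstrap()
                .group(group)
                .channel(EpollDatagramChannel.class)
                // Upper bound of a single datagram payload, used to size the buffers
                // that back the recvmmsg(...) batch.
                .option(EpollChannelOption.MAX_DATAGRAM_PACKET_SIZE, 2048);
    }
}
```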
Result:
Fixes https://github.com/netty/netty/issues/8446.
Motivation:
We need to update the doubly-linked list nodes while holding the lock via synchronized in all cases, as otherwise we may end up with a corrupted pipeline. We missed this when calling remove0(...) due to handlerAdded(...) throwing an exception.
Modifications:
- Correctly hold the lock while updating the node
- Add assert
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9528
Motivation
This is a "simpler" alternative to #9416 which fixes the same
CompositeByteBuf bugs described there, originally reported by @jingene
in #9398.
Modifications
- Add fields to Component class for the original buffer along with its
adjustment, which may be different to the already-stored unwrapped
buffer. Use it in appropriate places to ensure correctness and
equivalent behaviour to that prior to the earlier optimizations
- Add comments explaining purpose of each of the Component fields
- Unwrap more kinds of buffers in the newComponent method to extend the scope of
the existing indirection-reduction optimization
- De-duplicate common buffer consolidation logic
- Unit test for the original bug provided by @jingene
Result
- Correct behaviour / fixed bugs
- Some code deduplication / simplification
- Unwrapping optimization applied to more types of buffers
The downside is an increased memory footprint from the two new fields, and
additional allocations in some specific cases, though those should be
rare.
Co-authored-by: jingene <jingene0206@gmail.com>