Motivation:
`transport-native-epoll` doesn't have an ARM release package.
Modification:
This PR adds a cross-compile profile for epoll so that the aarch64 package can easily be built on an x86 machine.
Result:
Fixes #8279
Motivation:
netty_epoll_linuxsocket_JNI_OnLoad(...) may produce a deadlock with another thread that loads IOUtil in a static block. This seems to be a JDK bug which is not yet fixed. To work around this we force IOUtil to be loaded from within Java code before initializing the JNI code
Modifications:
Use Selector.open() as a workaround to load IOUtil
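For illustration, the workaround amounts to something like this (a minimal sketch, not the actual Netty code; the library name is hypothetical):

    import java.io.IOException;
    import java.nio.channels.Selector;

    final class NativeLoader {
        static {
            try {
                // Opening (and closing) a Selector forces sun.nio.ch.IOUtil's
                // static initializer to run on this thread first.
                Selector.open().close();
            } catch (IOException e) {
                throw new ExceptionInInitializerError(e);
            }
            // Only now load the JNI library, so JNI_OnLoad cannot race with
            // IOUtil's class initialization on another thread.
            System.loadLibrary("netty_transport_native_epoll");
        }
    }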
Result:
Fixes https://github.com/netty/netty/issues/10187
Motivation:
8dc6ad5 introduced IPV6-mapped-IPV4 address support but
copied the addresses incorrectly. It copied the first
4 bytes of the ipv6 address to the address byte array
at offset 12, instead of the other way around.
7a547aa implemented this correctly in netty_unix_socket.c
but it seems the change should've been applied to
netty_epoll_native.c as well.
The current behaviour will always set the address to
`0.0.0.0`.
Modifications:
Copy the correct bytes from the ipv6 mapped ipv4 address.
I.e. copy 4 bytes at offset 12 from the native address
to the byte array `addr` at offset 0.
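In Java terms the intended copy direction is (illustrative only; the actual fix is in the native C code):

    static byte[] extractIpv4(byte[] nativeAddr) {
        // nativeAddr is a 16-byte IPv6-mapped-IPv4 address: 10 zero bytes,
        // 0xff 0xff, and the 4 IPv4 bytes at offset 12.
        byte[] addr = new byte[4];
        // Correct: 4 bytes from source offset 12 to destination offset 0.
        // The bug used source offset 0 (the zero prefix) instead, which
        // always yielded 0.0.0.0.
        System.arraycopy(nativeAddr, 12, addr, 0, 4);
        return addr;
    }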
Result:
When using recvmmsg with IPV6-mapped-IPV4 addresses,
the address will be correctly copied to the byte array
`addr` in the NativeDatagramPacket instance.
Motivation:
https://github.com/netty/netty/pull/9797 changed the code for recvmmsg and sendmmsg to use the syscalls directly to remove the dependency on newer GLIBC versions. Unfortunately it made the assumption that the syscall numbers are the same for different architectures, which is not the case.
Thanks to @jayv for pointing it out
Modifications:
Add #if, #elif and #else declarations to ensure we pick the correct syscall number (or mark it as unsupported if the architecture is not supported at the moment).
Result:
Pick the correct syscall number depending on the architecture.
Motivation:
Due to a bug we did not correctly set the writerIndex of the ByteBuf when a
user specified EpollChannelOption.MAX_DATAGRAM_PAYLOAD_SIZE but we ended
up with a non scattering read.
Modifications:
- Set writerIndex to the correct value
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9788
Motivation
The event loop implementations had become somewhat tangled over time and
work was done recently to streamline EpollEventLoop. NioEventLoop would
benefit from the same treatment and it is more straightforward now that
we can follow the same structure as was done for epoll.
Modifications
Untangle NioEventLoop logic and mirror what's now done in EpollEventLoop
w.r.t. the volatile selector wake-up guard and scheduled task deadline
handling.
Some common refinements to EpollEventLoop have also been included - to
use constants for the "special" deadline/wakeup volatile values and to
avoid some unnecessary calls to System.nanoTime() on task-only
iterations.
Result
Hopefully cleaner, more efficient and less fragile NIO transport
implementation.
Motivation:
394a1b3485 introduced a hard dependency on GLIBC 2.12 which was not the case before. This had the effect of not being able to use the native epoll transports on platforms which ship with earlier versions of GLIBC.
To keep things as backward compatible as possible we should not introduce such changes in a bugfix release.
Special thanks to @weissi with all the help to fix this.
Modifications:
- Use syscalls directly to remove dependency on GLIBC 2.12
- Make code consistent that needs newer GLIBC versions
- Adjust scattering read test to only run if recvmmsg syscall is supported
- Clean up pom.xml as some parts are no longer needed after switching to syscalls.
Result:
Fixes https://github.com/netty/netty/issues/9758.
Motivation:
In most cases, we want to use a MultithreadEventLoopGroup such as NioEventLoopGroup without setting the thread count, only the thread name. Currently we need to use the following code:
NioEventLoopGroup boss = new NioEventLoopGroup(0, new DefaultThreadFactory("boss"));
The number 0 looks a bit confusing or strange given that we only want to set the thread name, so it would be better to add a new constructor for this case.
Modifications:
Add a new constructor to all event loop groups, for example: public NioEventLoopGroup(ThreadFactory threadFactory)
Result:
Users can now set the thread factory without having to set the thread count to 0:
NioEventLoopGroup boss = new NioEventLoopGroup(new DefaultThreadFactory("boss"));
Motivation
The recently-introduced event loop scheduling hooks can be exploited by
the epoll transport to avoid waking the event loop when scheduling
future tasks if there is a timer already set to wake up sooner.
There is also a "default" timeout which will wake the event
loop after 1 second if there are no pending future tasks. The
performance impact of these wakeups themselves is likely negligible but
there's significant overhead in having to re-arm the timer every time
the event loop goes to sleep (see #7816). It's not 100% clear why this
timeout was there originally but we're sure it's no longer needed.
Modification
Combine the existing volatile wakenUp and non-volatile prevDeadlineNanos
fields into a single AtomicLong that stores the next scheduled wakeup
time while the event loop is in epoll_wait, and is -1 while it is awake.
Use this as a guard to debounce wakeups from both immediate scheduled
tasks and future scheduled tasks, the latter using the new
before/afterScheduledTaskSubmitted overrides and based on whether the
new deadline occurs prior to an already-scheduled timer.
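A minimal sketch of the guard, with simplified names and a hypothetical writeToWakeupFd() standing in for whatever interrupts epoll_wait:

    import java.util.concurrent.atomic.AtomicLong;

    final class WakeupGuard {
        static final long AWAKE = -1L;
        // Deadline handed to epoll_wait while the loop sleeps; AWAKE otherwise.
        final AtomicLong nextWakeupNanos = new AtomicLong(AWAKE);

        void scheduleWakeup(long deadlineNanos) {
            for (;;) {
                long cur = nextWakeupNanos.get();
                // No syscall needed if the loop is awake or an earlier (or
                // equal) timer is already armed.
                if (cur == AWAKE || cur <= deadlineNanos) {
                    return;
                }
                if (nextWakeupNanos.compareAndSet(cur, deadlineNanos)) {
                    writeToWakeupFd(); // hypothetical: interrupts epoll_wait
                    return;
                }
            }
        }

        void writeToWakeupFd() {
            // placeholder for the eventfd write in the real transport
        }
    }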
A similar optimization was already added to NioEventLoop, but it still
uses two separate volatiles. We should consider similar streamlining of
that in a future update.
Result
Fewer event loop wakeups when scheduling future tasks, greatly reduced
overhead when no future tasks are scheduled.
Motivation:
There is a goto statement above the point where dynamicMethods is initialized, and dynamicMethods is used after the goto label, which might cause undefined behavior.
Modifications:
Initialize dynamicMethods at the top.
Result:
No more undefined behavior.
Motivation
This is a vestige that was removed in the original PR #9535 before it
was reverted, but we missed it when re-applying in #9586.
It means there is a possible race condition because a wakeup event could
be missed while shutting down, but the consequences aren't serious since
there's a 1 second safeguard timeout when waiting for it.
Modification
Remove call to epollWaitNow() in EpollEventLoop#closeAll()
Result
Cleanup redundant code, avoid shutdown delay race condition
Motivation
The current event loop shutdown logic is quite fragile and in the
epoll/NIO cases relies on the default 1 second wait/select timeout that
applies when there are no scheduled tasks. Without this default timeout
the shutdown would hang indefinitely.
The timeout only takes effect in this case because queued scheduled
tasks are first cancelled in
SingleThreadEventExecutor#confirmShutdown(), but I _think_ even this
isn't robust, since the main task queue is subsequently serviced which
could result in some new scheduled task being queued with much later
deadline.
It also means shutdowns are unnecessarily delayed by up to 1 second.
Modifications
- Add/extend unit tests to expose the issue
- Adjust SingleThreadEventExecutor shutdown and confirmShutdown methods
to explicitly add no-op tasks to the taskQueue so that the subsequent
event loop iteration doesn't enter a blocking wait (as appears to have
been originally intended); see the sketch below
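A sketch of the idea, with taskQueue standing in for the executor's real queue:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    final class ShutdownNudge {
        static final Runnable NOOP_TASK = () -> { };
        final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();

        void nudgeForShutdown() {
            // A non-empty task queue makes the next event loop iteration
            // poll instead of entering a blocking wait/select.
            taskQueue.offer(NOOP_TASK);
        }
    }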
Results
Faster and more robust shutdown of event loops, allows removal of the
default wait timeout
Motivation:
The build script for the module Netty Transport Native Epoll can cause
an intermittent build break due to a broken pipe from /usr/bin/ldd. This
issue is likely to occur on a build environment with multiple processors.
Modifications:
The root cause is that the consuming head command finishes earlier than
the producing ldd command. Buffering the output of the ldd command with
an intermediate tail command avoids the broken pipe.
Result:
A build on multiple processors can finish successfully.
Signed-off-by: Tatsushi Inagaki <e29253@jp.ibm.com>
Motivation:
At the moment we do not consistently (and also not always correctly) free allocated native memory during loading of the JNI library. This can lead to native memory leaks in the unlikely case of a failure while trying to load the library.
Besides this, we also do not always correctly handle the case when a new Java object cannot be created in native code because of an out-of-memory condition.
Modification:
- Copy some macros from netty-tcnative to be able to handle errors in an easier fashion
- Correctly account for the possibility that New* functions return NULL
- Share code
Result:
More robust and clean JNI code
Motivation
This is another iteration of #9476.
Modifications
Instead of maintaining a count of all writes performed and then using
reads during shutdown to ensure all are accounted for, just set a flag
after each write and don't reset it until the corresponding event has
been returned from epoll_wait.
This requires that while a write is still pending we don't reset
wakenUp, i.e. continue to block writes from the wakeup() method.
Result
Race condition eliminated. Fixes #9362
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
Running tests with a `KQueueDomainSocketChannel` showed worse performance than an `NioSocketChannel`. It turns out that the default send buffer size for Nio sockets is 64k while for KQueue sockets it's 8k. I verified that manually setting the socket's send buffer size improved perf to expected levels.
Modification:
Plumb the `SO_SNDBUF` and `SO_RCVBUF` options into the `*DomainSocketChannelConfig`.
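Usage could then look something like this (a sketch; assumes the standard ChannelOption constants are wired through to the new config methods):

    Bootstrap b = new Bootstrap()
            .group(group)
            .channel(KQueueDomainSocketChannel.class)
            // match the 64k default observed with NioSocketChannel:
            .option(ChannelOption.SO_SNDBUF, 64 * 1024)
            .option(ChannelOption.SO_RCVBUF, 64 * 1024);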
Result:
Can now configure send and receive buffer sizes for domain sockets.
Motivation
Currently an epoll_ctl syscall is made every time there is a change to
the event interest flags (EPOLLIN, EPOLLOUT, etc) of a channel. These
are only done in the event loop so can be aggregated into 0 or 1 such
calls per channel prior to the next call to epoll_wait.
Modifications
I think further streamlining/simplification is possible but for now I've
tried to minimize structural changes and added the aggregation beneath
the existing flag manipulation logic.
A new AbstractChannel#activeFlags field records the flags last set on
the epoll fd for that channel. Calls to setFlag/clearFlag update the
flags field as before but instead of calling epoll_ctl immediately, just
set or clear a bit for the channel in a new bitset in the associated
EpollEventLoop to reflect whether there's any change to the last set
value.
Prior to calling epoll_wait the event loop makes the appropriate
epoll_ctl(EPOLL_CTL_MOD) call once for each channel whose bit is set.
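Roughly, the aggregation works like this (a simplified sketch; the int-indexed channel table and all names are illustrative):

    import java.util.BitSet;

    final class FlagAggregation {
        final int[] flags = new int[64];       // desired epoll flags per channel
        final int[] activeFlags = new int[64]; // flags last passed to epoll_ctl
        final BitSet pendingFlagChannels = new BitSet();

        void setFlag(int channelId, int flag) {
            flags[channelId] |= flag;           // update desired state only
            pendingFlagChannels.set(channelId); // defer the syscall
        }

        // Run once per iteration, just before blocking in epoll_wait:
        void processPendingChannelFlags() {
            for (int id = pendingFlagChannels.nextSetBit(0); id >= 0;
                    id = pendingFlagChannels.nextSetBit(id + 1)) {
                if (flags[id] != activeFlags[id]) {
                    // the single epoll_ctl(EPOLL_CTL_MOD) call would go here
                    activeFlags[id] = flags[id];
                }
            }
            pendingFlagChannels.clear();
        }
    }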
Result
Fewer syscalls, particularly in some auto-read=false cases. Simplified
error handling from centralization of these calls.
Motivation:
We should correctly reset the cached local and remote address when a Channel.disconnect() is called and the channel has a notion of disconnect vs close (for example DatagramChannel implementations).
Modifications:
- Correctly reset cached local and remote address
- Update testcase to cover it and so ensure all transports work in a consistent way
Result:
Correctly handle disconnect()
Motivation:
Changes that were done to the EpollEventLoop to optimize some things broke some testsuites and caused timeouts. We need to investigate why this is the case, but for now we should just revert so we can do a release.
Modifications:
- Partly revert 1fa7a5e697 and a22d4ba859
Result:
Testsuites pass again.
Motivation:
291f80733a introduced a change to use a byte[] to construct the InetAddress when receiving datagram messages to reduce the overhead. Unfortunately it introduced a regression when handling IPv6-mapped-IPv4 addresses and so produced an IndexOutOfBoundsException when trying to fill the byte[] in native code.
Modifications:
- Correctly use the offset on the pointer of the address.
- Add testcase
- Make tests more robust and include more details when the test fails
Result:
No more IndexOutOfBoundsException
Motivation:
394a1b3485 added support for recvmmsg(...) for unconnected datagram channels; this change also allows recvmmsg(...) to be used with connected datagram channels.
Modifications:
- Always try to use recvmmsg(...) if configured to do so
- Adjust unit test to cover it
Result:
Fewer syscalls when reading datagram packets
Motivation:
We should also use sendmmsg on connected channels whenever possible to reduce the overhead of syscalls.
Modifications:
Whether or not the channel is connected, try to use sendmmsg when supported to reduce the overhead of syscalls
Result:
Better performance on connected UDP channels due to fewer syscalls
Motivation:
394a1b3485 introduced the possibility to use recvmmsg(...) but did not correctly handle IPv6-mapped-IPv4 addresses in a way consistent with the other transports.
Modifications:
- Correctly handle IPv6-mapped-IPv4 addresses by only copying over the relevant bytes
- Small improvement to how IPv6-mapped-IPv4 addresses are detected by using memcmp instead of a byte-by-byte comparison
- Adjust test to cover this bug
Result:
Correctly handle IPv6-mapped-IPv4 addresses
Motivation:
When using datagram sockets which need to handle a lot of packets it makes sense to use recvmmsg to be able to read multiple datagram packets with one syscall.
Modifications:
- Add support for recvmmsg on linux
- Add new EpollChannelOption.MAX_DATAGRAM_PACKET_SIZE
- Add tests
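Enabling it might look like this (a hedged sketch; the option name is as introduced here and may differ in later releases):

    Bootstrap b = new Bootstrap()
            .group(new EpollEventLoopGroup())
            .channel(EpollDatagramChannel.class)
            // a maximum expected datagram size lets the transport read
            // multiple packets per recvmmsg(...) syscall
            .option(EpollChannelOption.MAX_DATAGRAM_PACKET_SIZE, 2048);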
Result:
Fixes https://github.com/netty/netty/issues/8446.
Motivation:
There are some extra log level checks (logger.isWarnEnabled()).
Modification:
Remove log level checks (logger.isWarnEnabled()) from io.netty.channel.epoll.AbstractEpollStreamChannel, io.netty.channel.DefaultFileRegion, io.netty.channel.nio.AbstractNioChannel, io.netty.util.HashedWheelTimer, io.netty.handler.stream.ChunkedWriteHandler and io.netty.channel.udt.nio.NioUdtMessageConnectorChannel
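For illustration, the removed pattern looks like this; the two-argument warn(...) already performs the level check internally, so the guard adds nothing when the message is cheap to build:

    // Before:
    if (logger.isWarnEnabled()) {
        logger.warn("Failed to mark a promise as failure.", cause);
    }
    // After:
    logger.warn("Failed to mark a promise as failure.", cause);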
Result:
Fixes #9456