Motivation:
We need to respect RecvByteBufAllocator.Handle.continueReading() so that settings like MAX_MESSAGES_PER_READ are respected. This also ensures that AUTO_READ works correctly in all cases.
Modifications:
- Correctly respect continueReading()
- Fix IOUringRecvByteAllocatorHandle
- Cleanup
Result:
Reading is handled correctly
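The read-loop contract behind continueReading() can be sketched as follows. This is an illustrative model, not the actual Netty API; the class and method names (SimpleReadHandle, lastBytesRead) are stand-ins chosen for this sketch:

```java
// Sketch of the contract behind RecvByteBufAllocator.Handle.continueReading():
// keep reading only while the last read filled the whole buffer (more data may
// be pending) and the per-wakeup message budget (MAX_MESSAGES_PER_READ) is not
// exhausted. Illustrative only; not the real Netty implementation.
public final class SimpleReadHandle {
    private final int maxMessagesPerRead;
    private int messagesRead;
    private int attemptedBytes;
    private int lastBytesRead;

    public SimpleReadHandle(int maxMessagesPerRead) {
        this.maxMessagesPerRead = maxMessagesPerRead;
    }

    // Called at the start of each read loop.
    public void reset() {
        messagesRead = 0;
    }

    // Record the outcome of one read attempt.
    public void lastBytesRead(int attempted, int actual) {
        attemptedBytes = attempted;
        lastBytesRead = actual;
        if (actual > 0) {
            messagesRead++;
        }
    }

    // true only if the buffer was completely filled and the budget remains.
    public boolean continueReading() {
        return lastBytesRead == attemptedBytes && messagesRead < maxMessagesPerRead;
    }
}
```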
Motivation:
We did not correctly compute all fields when a POLL_REMOVE entry was created, which could lead to not finding the right operation.
Modifications:
- Correctly fill all fields
- Fix unit tests
Result:
Removal of IO_POLL operations works again as expected
Motivation:
Due to a bug we did not include the IOURING-based transport for clients in the testsuite. When enabling it, tests failed due to a bug related to when we register POLLRDHUP.
Modification:
- Include IOURING clients in testsuite
- Register for RDHUP at the right time
Result:
Correctly handle RDHUP and also test IOURING for clients
Motivation:
We should use kHead (read with an acquire memory barrier) instead of sqeHead, as submit() is called internally when the submission queue is full.
Modification:
- submit() is called internally when the submission queue is full
- Added a new test for the full submission queue case
Result:
Callers no longer need to check whether the submission queue is full when adding a new event
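The behavior can be sketched with a small model: the caller just adds entries, and when the fixed-size queue fills up, add() flushes it internally. This is an illustrative model only (AutoSubmitQueue is not a real class, and the real io_uring ring layout differs):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of "submit() is called internally when the submission queue is full":
// callers never check for a full queue themselves. Illustrative model only.
public final class AutoSubmitQueue {
    private final int capacity;
    private final ArrayDeque<Long> pending = new ArrayDeque<>();
    private final List<Long> submitted = new ArrayList<>();

    public AutoSubmitQueue(int capacity) {
        this.capacity = capacity;
    }

    public void add(long entry) {
        if (pending.size() == capacity) {
            submit(); // queue full: drain internally, no caller-side check needed
        }
        pending.add(entry);
    }

    // Hand all pending entries over (stands in for io_uring_enter submitting).
    public int submit() {
        int n = pending.size();
        submitted.addAll(pending);
        pending.clear();
        return n;
    }

    public int submittedCount() {
        return submitted.size();
    }

    public int pendingCount() {
        return pending.size();
    }
}
```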
Motivation:
A segmentation fault occurs in IovecArrayPool.release because iovecMemoryAddress defaults to 0.
Modification:
- Set the default to -1
- Some cleanups
- Added new testsuite tests
Result:
Fixed the segmentation fault
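The fix follows a common sentinel pattern, sketched below. The class and the free(...) call are illustrative stand-ins for the real pool and its native free:

```java
// Sketch of the sentinel fix: -1 (not 0) marks "no iovec memory allocated",
// so release() never frees address 0 by accident. Illustrative only; free()
// stands in for the native memory release in the real IovecArrayPool.
public final class IovecArraySketch {
    private long iovecMemoryAddress = -1; // -1 means "nothing allocated"
    private boolean freed;

    public void allocate(long address) {
        iovecMemoryAddress = address;
    }

    public void release() {
        // With a default of 0, this guard would not exist and address 0
        // would be freed, causing the segmentation fault.
        if (iovecMemoryAddress != -1) {
            free(iovecMemoryAddress);
            iovecMemoryAddress = -1;
        }
    }

    private void free(long address) {
        freed = true; // stand-in for the native free
    }

    public boolean wasFreed() {
        return freed;
    }
}
```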
Motivation:
We did not correctly handle errors and also had some problems with how POLL* was handled.
Modifications:
- Cleanup
- No need for links anymore
- Add error handling for most operations (poll still missing)
- Add better handling for RDHUP
- Correctly handle writeScheduled flag for writev
Result:
Cleaner and more correct code
Motivation:
Add support for writev, which allows writing data from multiple buffers in one call.
Modification:
-Added iovec array pool to manage iov memory
- Override flush to make sure that write is not called
Result:
Write performance is much better
Motivation:
We should remove the pollIn link, as we don't use pollIn linking anymore.
Modification:
- Some cleanups in the tests and in IOUring
- Removed pollIn linking
Result:
Cleaner code
Motivation:
We must correctly use the polling support of io_uring to reduce the number of events in flight and to only allocate buffers if really needed. For this we should respect the different poll masks and only perform the corresponding IO action once the fd becomes ready for it.
Modification:
- Correctly respect poll masks and so only schedule an IO event if the fd is ready for it
- Move some code for cleanup
Result:
More correct usage of io_uring and less memory usage
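The mask dispatch can be sketched as a pair of predicates over the completed poll's mask. The constants mirror Linux's poll.h values; the class and method names are illustrative, not the transport's actual code:

```java
// Sketch of "respect the poll mask": decode which IO actions a completed poll
// permits and only schedule those, instead of scheduling reads and writes
// unconditionally. Constant values match Linux poll.h; names are illustrative.
public final class PollMask {
    public static final int POLLIN = 0x001;
    public static final int POLLOUT = 0x004;
    public static final int POLLRDHUP = 0x2000;

    // A read is only worth scheduling once data (or a peer shutdown) is pending.
    public static boolean shouldScheduleRead(int mask) {
        return (mask & (POLLIN | POLLRDHUP)) != 0;
    }

    // A write is only worth scheduling once the fd is writable.
    public static boolean shouldScheduleWrite(int mask) {
        return (mask & POLLOUT) != 0;
    }
}
```

Scheduling the IO action only after the matching readiness event is what avoids allocating a receive buffer for an fd that has nothing to read yet.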
Motivation:
If connect returns EINPROGRESS we submit a POLLOUT and check
via socket.finishConnect whether the connection was successful.
Modifications:
- Added a new io_uring connect op
- Use a direct buffer for the socket address
Result:
It is now possible to connect to a peer
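The same EINPROGRESS pattern exists in plain Java NIO, which makes for a runnable illustration of the flow: a non-blocking connect() may return false, and finishConnect() is then polled once the socket becomes writable. This sketch uses only the JDK and does not touch io_uring; in the transport, the "writable" signal comes from the POLLOUT completion instead of busy-polling:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// JDK-only analogy for the EINPROGRESS -> POLLOUT -> finishConnect flow.
public final class NonBlockingConnect {
    public static boolean connectLoopback() {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            try (SocketChannel client = SocketChannel.open()) {
                client.configureBlocking(false);
                // Non-blocking connect: may complete immediately, or return
                // false (the EINPROGRESS case for a native socket).
                boolean connected = client.connect(server.getLocalAddress());
                while (!connected) {
                    // The transport waits for POLLOUT here instead of spinning.
                    connected = client.finishConnect();
                }
                return connected;
            }
        } catch (IOException e) {
            return false;
        }
    }
}
```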
Motivation:
When you submit a poll, io_uring still holds a reference to it even after close is called.
Source: io_uring mailing list (https://lore.kernel.org/io-uring/27657840-4E8E-422D-93BB-7F485F21341A@kernel.dk/T/#t)
Modification:
- Use POLL_REMOVE to delete the fd reference in io_uring and so avoid a server socket address error
- Added a POLL_REMOVE test
Result:
The server can be closed and restarted
Motivation:
We created a lot of objects related to the completion queue and submission queue, which produced a lot of GC pressure. Besides this, we also maintained an extra map which is not really needed, as we can encode everything we need in the user_data field.
Modification:
- Reduce complexity and GC pressure by storing the needed information in the user_data field
- Small refactoring of the code to move channel related logic to the channel
- Remove unused classes
- Use a callback to process entries from the completion queue and so remove all GC created by it
- Simplify by not storing channel and buffer in the event
Result:
Less GC pressure and no extra lookups for events needed
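The technique is bit-packing the identifying information into io_uring's 64-bit user_data field, so no map lookup or per-event object is needed. The layout below (fd in the high 32 bits, op type in 8 bits, extra payload in 24 bits) is one plausible split chosen for this sketch; the layout Netty actually uses may differ:

```java
// Sketch of encoding everything into io_uring's 64-bit user_data field
// instead of keeping an id-to-event map. Layout: fd (32 bits) | op (8 bits)
// | extra data (24 bits). Illustrative layout only.
public final class UserData {
    public static long encode(int fd, byte op, int data) {
        return ((long) fd << 32) | ((op & 0xffL) << 24) | (data & 0xffffffL);
    }

    public static int fd(long userData) {
        return (int) (userData >>> 32);
    }

    public static byte op(long userData) {
        return (byte) ((userData >>> 24) & 0xff);
    }

    public static int data(long userData) {
        return (int) (userData & 0xffffff);
    }
}
```

Because the kernel echoes user_data back unchanged in the completion queue entry, decoding it is enough to route the completion to the right channel logic with zero allocation.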
Motivation:
We use eventfd in our io_uring based transport to wake up the event loop. When doing so we need to be careful that we read any data previously written to it.
Modification:
- Correctly read the data that was written to the eventfd before submitting another event for it to the submission queue, as otherwise we will see another completion event for it right away
- Ensure we do not remove the wrong event from the stored event ids (we removed the wrong one before because we reused the Event object)
- Ensure we only use the submission queue from the EventLoop thread in all cases
- Add another unit test
Result:
Wakeups via eventfd work as expected
Motivation:
We need to use deadlineToDelayNanos(...) to calculate the timeout for io_uring as otherwise the timeout will be scheduled at the wrong time in the future
Modifications:
Make use of deadlineToDelayNanos(...)
Result:
Timeouts are scheduled correctly
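The underlying issue is a units mismatch: scheduled tasks store an absolute deadline, while io_uring's timeout operation expects a relative delay, so the deadline must be converted first. A minimal sketch of the conversion follows; unlike Netty's deadlineToDelayNanos(...), which reads the clock internally, this version takes the current time as an explicit parameter for testability:

```java
// Sketch of the deadline-to-delay conversion: io_uring timeouts are relative,
// so an absolute deadline passed directly would fire far in the future.
public final class Deadlines {
    // Clamp at 0 so deadlines already in the past fire immediately.
    public static long deadlineToDelayNanos(long deadlineNanos, long nowNanos) {
        return Math.max(0, deadlineNanos - nowNanos);
    }
}
```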
Motivation:
There was a bug in the implementation that caused us to miss submitting what was in the submission queue, which could lead to a deadlock. Besides this, we should also process all events that we can poll without blocking, and only after that process tasks. This ensures we drain the ring buffers in a timely manner.
Modifications:
- Add missing submit() call
- Rename peek() to poll(), as we consume the data and peek() is misleading
- Process all events that can be processed without blocking
Result:
Fixes a bug, clarifies naming, and improves performance
Motivation:
After each pass all channel sockets are closed. After the allocator is changed (4th iteration), beginRead won't be called on the server socket after its creation; however, both allocators work in the netty example.
Modification:
Increased the timeout; other tests were commented out.
Result:
The testsuite changes will be undone later
Motivation:
- At the moment we don't shut down when we get a read error
- Missing auto-read support
Modifications:
- Even if auto-read is disabled, we should check whether the read event is already submitted
- Added a new handle-exception method to shut down the channels
Result:
The event loop's read-event handling can deal with read errors
Motivation:
There are no checks for non-writable sockets.
Modifications:
- Added a linked write poll to make sure that the socket does not write if it is not writable
- Added a new boolean to avoid submitting a second write operation
Result:
The socket's writability is checked before writing
Motivation:
When the channel connection is lost, we don't get any notification (unless a write or read event was submitted).
Modifications:
Add RDHUP polling to be notified when the connection is lost
Result:
The event loop is notified on connection loss
Motivation:
To shut down child channels we should create a new abstract client class instead of using AbstractIOUringChannel.
Modifications:
- Added a new abstract child channel class
- Added shutdown methods to close a channel when the connection is lost
Result:
The channels can be closed when the connection is lost
Motivation:
Some TCP options (like TcpFastopen or TcpFastopenConnect) are required for the testsuite tests.
Modification:
- Copied the LinuxSocket class from the epoll transport along with the JNI code needed to load this module in the io_uring JNI
- Adjusted some configurations
Result:
More TCP options are available
Motivation:
An io_uring availability check is needed for each test case.
Modification:
Added an ioUringExit method to munmap the shared memory and close the ring file descriptor, which is required for the availability check.
Result:
Testsuite tests can now be integrated
Motivation:
There is no need to poll before the read operation, since IORING_FEAT_FAST_POLL polls anyway.
Modification:
Removed the poll before the read event
Result:
The netty echo prototype works on a custom kernel https://github.com/1Jo1/linux/tree/io_uring_off7 (merge of the linux-block/io_uring-5.9 branch into 5.8.0), and Linux 5.9-rc1 should work as well (not tested yet)
Motivation:
The problem is that io_uring accept/read does not return -EAGAIN for non-blocking sockets in general, which removes a way for the application to tell whether there is ever any data there.
There is a fix in kernel 5.8 (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v5.8&id=e697deed834de15d2322d0619d51893022c90ea2), which means we need to add a poll before the accept/read event (poll<link>read/accept) in netty as well.
Modification:
- Add a poll with the IOSQE_IO_LINK flag before the accept/read event
Result:
The netty prototype works on kernel 5.8
Motivation:
We need to wake up the blocking io_uring call when a new task is added from outside the event loop thread.
Modification:
- Added a timeout operation for scheduled tasks (not tested yet)
- Added a poll operation
- Added two tests to reproduce the polling signal issue
Result:
io_uring_enter does not get any polling signal from eventFdWrite if both functions are executed in different threads
Motivation:
Unnecessary use of the LinuxSocket class, missing CRLF, etc.
Modification:
- Add CRLF
- Remove IOUringChannelConfig and the LinuxSocket class
Result:
Less code and a cleaner structure