To perform writes in AioSocketChannel, we get a ByteBuffer view of the
outbound buffer and pass it to AsynchronousSocketChannel.write().
In most cases, the write() operation finishes immediately. Sometimes,
however, it is scheduled for later execution. In that case, a user's
handler has a chance to append more data to the outbound buffer before
the write actually runs.
When more data is appended to the outbound buffer, the outbound buffer
can expand its capacity by itself. Changing the capacity of a buffer
basically consists of the following steps:
1. Allocate a larger new internal memory region.
2. Copy the current content of the buffer to the new memory region.
3. Rewire the buffer so that it refers to the new region.
4. Deallocate the old memory region.
Because the old memory region is deallocated in step 4, the write
operation that was scheduled for later execution will access the
deallocated region, leading to all sorts of data corruption or even
segfaults.
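
For a direct buffer, that sequence looks roughly like the sketch below;
'buffer' and freeDirect() are placeholders for illustration, not the
actual implementation:

// Illustrative sketch only.  'buffer' is the internal direct ByteBuffer;
// freeDirect() stands for whatever releases a direct buffer's memory.
void capacity(int newCapacity) {
    ByteBuffer oldBuffer = buffer;
    ByteBuffer newBuffer = ByteBuffer.allocateDirect(newCapacity); // 1. allocate
    oldBuffer.clear();
    newBuffer.put(oldBuffer);                                      // 2. copy
    buffer = newBuffer;                                            // 3. rewire
    freeDirect(oldBuffer); // 4. deallocate the old region -- the very region
                           //    a pending write() may still refer to
}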
To prevent this situation, I added suspendIntermediaryDeallocations()
and resumeIntermediaryDeallocations() to UnsafeByteBuf.
AioSocketChannel.doFlushByteBuf() now calls
suspendIntermediaryDeallocations() to defer the deallocation of the old
memory regions until the completion handler is notified.
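
Roughly, the flush path now looks like this (a simplified sketch;
javaChannel() and WRITE_COMPLETION_HANDLER are placeholders, not the
exact AioSocketChannel members):

protected void doFlushByteBuf(ByteBuf buf) {
    UnsafeByteBuf unsafeBuf = (UnsafeByteBuf) buf;
    // Keep the old memory regions alive while the write is pending.
    unsafeBuf.suspendIntermediaryDeallocations();
    ByteBuffer nioBuf = unsafeBuf.internalNioBuffer();
    javaChannel().write(nioBuf, this, WRITE_COMPLETION_HANDLER);
}

// In the completion handler, once the write has finished or failed:
//   unsafeBuf.resumeIntermediaryDeallocations(); // old regions may be freed now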
An AssertionError is triggered by a ByteBuf when beginRead() attempts to
access a buffer that has already been freed. This commit ensures the
buffer has not been freed before performing an I/O operation.
To determine whether a buffer has been freed, UnsafeByteBuf.isFreed()
has been added.
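
The check is roughly the following (simplified; 'inboundBuffer' is just
an example):

ByteBuf byteBuf = inboundBuffer;
if (((UnsafeByteBuf) byteBuf).isFreed()) {
    // The channel was already closed and its buffers deallocated;
    // do not start an I/O operation on the freed buffer.
    return;
}
// ... proceed with the read ...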
After some debugging, I found that the JDK AIO implementation often
performs I/O immediately on the caller thread if the caller thread is
the I/O thread, and in that case also notifies the completion handler
immediately. This commit handles such a case correctly during reads and
writes.
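
For illustration, the shape of such a re-entrant notification looks
like this (simplified, not the actual channel code):

javaChannel().write(nioBuf, null, new CompletionHandler<Integer, Void>() {
    @Override
    public void completed(Integer result, Void attachment) {
        // If the caller is the I/O thread, this may run on the very same
        // stack frame, before the enclosing write() call has returned.
    }

    @Override
    public void failed(Throwable cause, Void attachment) {
        // Likewise, failure may be notified synchronously.
    }
});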
Additionally, this commit changes SingleThreadEventExecutor so that it
handles unexpected exceptions such as AssertionError in a robust
manner.
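
The general idea is sketched below (not the literal change; 'logger' is
a placeholder):

for (;;) {
    Runnable task = takeTask();
    try {
        task.run();
    } catch (Throwable t) {
        // A misbehaving task (or an AssertionError from a buffer) must
        // not kill the event loop thread.
        logger.warn("Unexpected exception from a task", t);
    }
}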
When a Netty application shuts down, a user often sees a REE
(RejectedExecutionException).
A REE is raised for various reasons we have no control over, such
as:
- A client connects to a server while the server is shutting down.
- An event is triggered for a closed Channel while its event loop is
also shutting down. Some of them are:
- channelDeregistered (triggered after a channel is closed)
- freeIn/OutboundBuffer (triggered after channelDeregistered)
- userEventTriggered (triggered anytime)
To address this issue, a new method called confirmShutdown() has been
added to SingleThreadEventExecutor. After a user calls shutdown(),
confirmShutdown() runs any remaining tasks in the task queue and ensures
no events are triggered for the last 2 seconds. If any tasks are added
to the task queue before the 2 seconds pass, confirmShutdown() prevents
the event loop from terminating by returning false.
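
A much simplified sketch of confirmShutdown() (runAllTasks() and
lastExecutionTime are illustrative helpers; the real method carries
more state):

protected boolean confirmShutdown() {
    if (runAllTasks()) {
        // Tasks were found and executed; the quiet period starts over.
        lastExecutionTime = System.nanoTime();
        return false;
    }
    if (System.nanoTime() - lastExecutionTime < TimeUnit.SECONDS.toNanos(2)) {
        // Something ran within the last 2 seconds -- do not terminate yet.
        return false;
    }
    // Quiet for 2 seconds; the event loop may terminate.
    return true;
}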
Now that SingleThreadEventExecutor needs to accept tasks even after
shutdown(), its execute() method rejects a task only after the event
loop has terminated (i.e. isTerminated() returns true). Other than
that, there is no change in semantics.
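
In other words, execute() now behaves roughly like this (sketch;
addTask() is a placeholder for the internal enqueue step):

public void execute(Runnable task) {
    if (isTerminated()) {
        throw new RejectedExecutionException("event executor terminated");
    }
    // Still accepted after shutdown() so that confirmShutdown() can run it.
    addTask(task);
}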
SingleThreadEventExecutor also checks whether its subclass called
confirmShutdown() in its run() implementation, so that Netty developers
can make sure they shut down their event loop implementations correctly.
It also fixes a bug in AioSocketChannel, revealed by delayed shutdown,
where an inboundBufferUpdated() event is triggered on a closed Channel
with deallocated buffers.
Caveats:
Because SingleThreadEventExecutor.takeTask() does not have a notion of
timeout, confirmShutdown() adds a dummy task (WAKEUP_TASK) to wake up
takeTask() immediately and then sleeps for a hard-coded 100 ms. I'll
address this issue later by modifying takeTask() so that it times out
dynamically.
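
The interim approach inside confirmShutdown() boils down to the
following sketch ('taskQueue' is illustrative):

taskQueue.add(WAKEUP_TASK);   // unblock a blocked takeTask() immediately
try {
    Thread.sleep(100);        // hard-coded 100 ms pause between checks
} catch (InterruptedException ignored) {
    // Ignore; the shutdown condition is re-checked on the next iteration.
}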
Miscellaneous changes:
SingleThreadEventExecutor.wakeup() now has a default implementation.
Instead of interrupting the current thread, it simply adds a dummy task
(WAKEUP_TASK) to the task queue, which is more elegant and efficient.
NioEventLoop is the only implementation that overrides it; the wakeup()
overrides in all other implementations were removed thanks to this
change.
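
A minimal sketch of the default wakeup() ('taskQueue' is illustrative):

private static final Runnable WAKEUP_TASK = new Runnable() {
    @Override
    public void run() {
        // Does nothing; its only purpose is to unblock takeTask().
    }
};

protected void wakeup() {
    taskQueue.add(WAKEUP_TASK);
}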
This commit introduces a new API for ByteBuf allocation which fixes
issue #643, along with a refactoring of ByteBuf for simplicity and
better performance (see #62).
A user can configure the ByteBufAllocator of a Channel via
ChannelOption.ALLOCATOR or ChannelConfig.get/setAllocator(). The
default allocator is currently UnpooledByteBufAllocator.HEAP_BY_DEFAULT.
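
For example (the allocator can be set either on a Bootstrap or via the
ChannelConfig; shown as a sketch):

// Configure the allocator when bootstrapping ...
bootstrap.option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.HEAP_BY_DEFAULT);

// ... or on an existing Channel's configuration.
channel.config().setAllocator(UnpooledByteBufAllocator.HEAP_BY_DEFAULT);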
To allocate a buffer, do not use Unpooled anymore; do the following
instead:
ctx.alloc().buffer(...); // allocator chooses the buffer type.
ctx.alloc().heapBuffer(...);
ctx.alloc().directBuffer(...);
To deallocate a buffer, use the unsafe free() operation:
((UnsafeByteBuf) buf).free();
The following is the list of the relevant changes:
- Add ChannelInboundHandler.freeInboundBuffer() and
ChannelOutboundHandler.freeOutboundBuffer() to let a user free the
buffer he or she allocated. ChannelHandler adapter classes implement
this already, so most users won't need to call free() by themselves.
freeIn/OutboundBuffer() methods are invoked when a Channel is closed
and deregistered.
- All ByteBuf implementations must, by contract, also implement
UnsafeByteBuf. To access an unsafe operation:
((UnsafeByteBuf) buf).internalNioBuffer()
- Replace WrappedByteBuf and ByteBuf.Unsafe with UnsafeByteBuf to
simplify the overall class hierarchy and to avoid unnecessary
instantiation of Unsafe instances on an unsafe operation.
- Remove buffer reference counting, which was confusing
- Instantiate SwappedByteBuf lazily to avoid instantiation cost
- Rename ChannelFutureFactory to ChannelPropertyAccess and move the
methods common to Channel and ChannelHandlerContext there. Also make
it package-private to hide it from users.
- Remove unused unsafe operations such as newBuffer()
- Add DetectionUtil.canFreeDirectBuffer() so that an allocator decides
which buffer type to use safely
- Add Bootstrap.attr() and ServerBootstrap.attr()/childAttr() so that a
user can initialize the attribute map from the beginning.
- Replace newBootstrap() with duplicate()