- Rename capacity variables to reqCapacity or normCapacity to distinguish whether a value is the requested capacity or the normalized capacity
- Do not reallocate on ByteBuf.capacity(int) if reallocation is unnecessary; just update the index range (see the sketch after this list).
- Revert the workaround in DefaultChannelHandlerContext
- Also fixed an incorrect port of SpdySessionHandler
- Previously, it closed the connection too early when sending a GOAWAY frame
- After this fix, SpdySessionHandlerTest passes again without the previous workaround
- Fixes #831
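A minimal sketch of the capacity(int) idea above, using the reqCapacity/normCapacity naming from the first item (this is a hypothetical class, not Netty's actual PooledByteBuf code): when the backing memory already covers the new capacity, skip reallocation and only adjust the visible length and the index range.

```java
// Hypothetical sketch, not Netty's actual PooledByteBuf code: capacity(int)
// skips reallocation when the backing memory already covers the new capacity.
final class SketchBuf {
    private byte[] memory;   // backing region, sized to the normalized capacity
    private int length;      // capacity currently visible to the user
    private int readerIndex;
    private int writerIndex;

    SketchBuf(int reqCapacity) {
        memory = new byte[normalize(reqCapacity)];
        length = reqCapacity;
    }

    int capacity() {
        return length;
    }

    void capacity(int newCapacity) {
        if (newCapacity <= memory.length) {
            // No reallocation needed: just update the visible length
            // and clamp the index range.
            length = newCapacity;
            if (writerIndex > newCapacity) {
                writerIndex = newCapacity;
            }
            if (readerIndex > newCapacity) {
                readerIndex = newCapacity;
            }
            return;
        }
        // Otherwise grow: copy into a larger normalized region.
        byte[] newMemory = new byte[normalize(newCapacity)];
        System.arraycopy(memory, 0, newMemory, 0, memory.length);
        memory = newMemory;
        length = newCapacity;
    }

    private static int normalize(int reqCapacity) {
        // Round the requested capacity up to a power of two, minimum 16.
        int normCapacity = 16;
        while (normCapacity < reqCapacity) {
            normCapacity <<= 1;
        }
        return normCapacity;
    }
}
```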
This commit ensures that the following events are never triggered by direct
invocation when they are fired via ChannelPipeline.fire*() (a sketch of the
dispatch pattern follows the list):
- channelInactive
- channelUnregistered
- exceptionCaught
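A minimal sketch of that hand-off pattern, with hypothetical class and method names (this is not the actual DefaultChannelPipeline code): the notification is always scheduled on the handler's executor instead of being invoked inline on the caller's stack.

```java
import java.util.concurrent.Executor;

// Hypothetical sketch: the event is always handed off to the handler's
// executor, never invoked directly by the code that fires it.
interface SketchHandler {
    void channelInactive(SketchContext ctx);
}

final class SketchContext {
    private final Executor executor;   // the handler's event executor
    private final SketchHandler handler;

    SketchContext(Executor executor, SketchHandler handler) {
        this.executor = executor;
        this.handler = handler;
    }

    void fireChannelInactive() {
        // Schedule the notification instead of calling handler.channelInactive()
        // directly; the same applies to channelUnregistered and exceptionCaught.
        executor.execute(() -> handler.channelInactive(this));
    }
}
```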
This commit also fixes the following issues surfaced by this fix:
- Embedded channel implementations run scheduled tasks too early
- SpdySessionHandlerTest tries to generate inbound data even after the
channel is closed.
- AioSocketChannel enters an infinite loop on an I/O error.
- Fixes #826
Unsafe.isFreed(), free(), and suspend/resumeIntermediaryAllocations() are not that dangerous. internalNioBuffer() and internalNioBuffers() are dangerous, but it seems nobody uses them, even inside Netty. Removing those two methods also removes the need to keep the Unsafe interface at all.
This commit also introduces a new interface, AioSocketChannelConfig, to expose AIO-only config options with the right visibility.
It also changes ChannelConfig.setAllocator(..) to return the ChannelConfig so that it is more consistent with the other methods.
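A short, hedged usage sketch of why the new return type helps: once setAllocator(..) returns the ChannelConfig like the other setters, configuration calls can be chained (the particular setters shown here are illustrative).

```java
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelConfig;

final class ConfigChainingSketch {
    // Illustrative only: because setAllocator(..) now returns the ChannelConfig,
    // it can be chained with the other configuration setters.
    static void configure(Channel channel) {
        ChannelConfig config = channel.config();
        config.setAllocator(new PooledByteBufAllocator())
              .setConnectTimeoutMillis(5000);
    }
}
```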
This pull request introduces the new default ByteBufAllocator implementation based on jemalloc, with some differences:
* Minimum possible buffer capacity is 16 (jemalloc: 2)
* Uses binary heap with random branching (jemalloc: red-black tree)
* No thread-local cache yet (jemalloc has thread-local cache)
* Default page size is 8 KiB (jemalloc: 4 KiB)
* Default chunk size is 16 MiB (jemalloc: 2 MiB); how the page and chunk defaults relate is illustrated in the sketch after this list
* Cannot allocate a buffer bigger than the chunk size (jemalloc: possible) because we don't have control over memory layout in Java. A user can work around this issue by creating a composite buffer (see the sketch after the 'Miscellaneous changes' list below), but it's not always a feasible option. Although 16 MiB is a fairly large default, a user's handler might still have to deal with bounded buffers when handling a large message.
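For orientation, a small arithmetic sketch of how the two size defaults relate, assuming a chunk is organized as a binary tree of pages whose depth is a configuration parameter (the name maxOrder is an assumption here):

```java
final class ChunkSizeSketch {
    // Illustrative arithmetic only; treating "maxOrder" as the depth of the
    // page tree is an assumption about the allocator's configuration.
    public static void main(String[] args) {
        int pageSize = 8 * 1024;               // default page size: 8 KiB
        int maxOrder = 11;                     // assumed tree depth
        int chunkSize = pageSize << maxOrder;  // 8 KiB * 2048 = 16 MiB
        System.out.println(chunkSize);         // prints 16777216
    }
}
```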
Also, to ensure the new allocator performs well enough, I wrote a microbenchmark for it and made it a dedicated Maven module. It uses Google's Caliper framework to run and publish the test results (example)
Miscellaneous changes:
* Made some ByteBuf implementations public so that those who implement a new allocator can make use of them.
* Added ByteBufAllocator.compositeBuffer() and its variants.
* ByteBufAllocator.ioBuffer() creates a buffer with 0 capacity.
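Below is a hedged sketch of the composite-buffer workaround mentioned in the list of differences, using the new compositeBuffer() variants. It assumes CompositeByteBuf sizes each component by its readable bytes (hence the writerIndex adjustment); the exact behavior may differ between versions.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

final class LargeBufferSketch {
    // Sketch only: stitch chunk-sized buffers together when more capacity than
    // a single chunk is needed. Marking each component fully readable before
    // adding it assumes the composite sizes components by their readable bytes.
    static ByteBuf allocateLarge(ByteBufAllocator alloc, int totalCapacity, int chunkSize) {
        int components = (totalCapacity + chunkSize - 1) / chunkSize;
        CompositeByteBuf composite = alloc.compositeBuffer(components);
        for (int remaining = totalCapacity; remaining > 0; remaining -= chunkSize) {
            ByteBuf chunk = alloc.buffer(Math.min(remaining, chunkSize));
            chunk.writerIndex(chunk.capacity()); // expose the whole region to the composite
            composite.addComponent(chunk);
        }
        return composite; // capacity() == totalCapacity; the composite's own indices start at 0
    }
}
```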
testConcurrentMessageBufferAccess() assumes the outbound/inbound byte buffers are unbounded. Because PooledByteBuf is bounded, the test did not pass.
The fix assumes that ctx.flush() or fireInboundBufferUpdated() will cause the next buffer to be consumed immediately, which is not the case in the real world. Under network congestion, a user will see an IndexOutOfBoundsException if the user's handler writes boundlessly into the inbound/outbound buffers.
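A hedged sketch of the kind of check a handler needs once buffers are bounded (the helper name and signature are made up for illustration): write only what still fits, and let the caller flush and retry with the remainder instead of assuming the buffer grows without limit.

```java
import io.netty.buffer.ByteBuf;

final class BoundedWriteSketch {
    // Made-up helper for illustration: a pooled buffer has a bounded
    // maxCapacity(), so write only what still fits instead of assuming the
    // buffer grows without limit.
    static int writeAsMuchAsFits(ByteBuf out, byte[] data, int offset) {
        int toWrite = Math.min(out.maxWritableBytes(), data.length - offset);
        out.writeBytes(data, offset, toWrite);
        return offset + toWrite; // the caller flushes and retries with the remainder later
    }
}
```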