Motivation:
At the moment a user cannot safely call slice().retain() or duplicate().retain() in the ByteToMessageDecoder.decode(...) implementation without the risk of seeing corruption, because we may call discardSomeReadBytes() to make room in the buffer once the handling is done.
Modifications:
Check the refCnt() before calling discardSomeReadBytes(), and also check before calling decode(...) so that a copy can be created if needed.
Result:
The user can safely call slice().retain() or duplicate().retain() in his/her ByteToMessageDecoder.decode(...) method.
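For illustration, a hypothetical decoder relying on the now-safe pattern (the 4-byte frame size is made up):

    public class SliceRetainingDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            if (in.readableBytes() < 4) {
                return; // wait until a whole frame has arrived
            }
            // Retaining a slice of the cumulation buffer is safe now because the
            // decoder checks refCnt() before it discards or compacts read bytes.
            out.add(in.readSlice(4).retain());
        }
    }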
Motivation:
Reduce memory usage in ProtobufVarint32LengthFieldPrepender.
Modifications:
Explicitly set the buffer size that is needed for the header (between 1 and 5 bytes).
Result:
Less memory usage in ProtobufVarint32LengthFieldPrepender.
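For reference, the varint32 header needs between 1 and 5 bytes depending on the value; a minimal sketch of how the exact size can be computed (not necessarily the code used by the prepender):

    static int varint32Size(int value) {
        // Each varint byte carries 7 bits of payload.
        if ((value & (0xffffffff <<  7)) == 0) return 1;
        if ((value & (0xffffffff << 14)) == 0) return 2;
        if ((value & (0xffffffff << 21)) == 0) return 3;
        if ((value & (0xffffffff << 28)) == 0) return 4;
        return 5;
    }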
Motivation:
Remove the synchronization bottleneck and so speed things up.
Modifications:
Introduce a ThreadLocal cache that holds mappings between classes of ChannelHandlerAdapter implementations and the result of checking if the @Sharable annotation is present.
This way we only need to do the real check one time and serve the other calls via the cache. A ThreadLocal and WeakHashMap combo is used to implement the cache,
as this way we can minimize contention while still being sure we do not leak class instances in containers.
Result:
Less contention when adding a ChannelHandlerAdapter to the ChannelPipeline.
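A minimal sketch of the described cache (the helper shape is illustrative, not the actual field in Netty's internals):

    private static final ThreadLocal<Map<Class<?>, Boolean>> SHARABLE_CACHE =
            new ThreadLocal<Map<Class<?>, Boolean>>() {
                @Override
                protected Map<Class<?>, Boolean> initialValue() {
                    // WeakHashMap keys do not pin handler classes, so classes
                    // can still be unloaded in containers.
                    return new WeakHashMap<Class<?>, Boolean>();
                }
            };

    static boolean isSharable(ChannelHandler handler) {
        Map<Class<?>, Boolean> cache = SHARABLE_CACHE.get();
        Class<?> clazz = handler.getClass();
        Boolean sharable = cache.get(clazz);
        if (sharable == null) {
            // Pay the reflective annotation lookup only once per class and thread.
            sharable = clazz.isAnnotationPresent(ChannelHandler.Sharable.class);
            cache.put(clazz, sharable);
        }
        return sharable;
    }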
- Related: #2163
- Add ResourceLeakHint to allow a user to provide meaningful information about the leak when touching it
- DefaultChannelHandlerContext now implements ResourceLeakHint to tell where the message is going.
- Cleaner resource leak report by excluding noisy stack trace elements
- Fixes #2014
- Add the tests that mix JDK ZLIB codec and JZlib codecs
- Fix a bug where JdkZlibEncoder does not encode the GZIP header when nothing was written to the channel
- Fix a bug where the encoders do not consider the overhead of the wrapper format when calculating the estimated compressed output size.
- Fix a bug where the decoders do not discard the received data after the compressed stream is finished
- Fixes #2003 properly
- Instead of using 'bundle' packaging, use 'jar' packaging. This is
more robust because some strict build tools fail to retrieve the
artifacts from a Maven repository unless their packaging is 'jar'.
- All artifacts now contain META-INF/io.netty.version.properties, which
provides the detailed information about the build and repository.
- Removed OSGi testsuite temporarily because it gives false errors
during split package test and examination.
- Add io.netty.util.Version for easy retrieval of version information
This fixes #1664 and also reverts the original commit which was meant to fix it, 3b1881b523. The problem with the original commit was that it could delay handlerRemoved(..) calls and so mess up the order or forward bytes too late.
- write() now accepts a ChannelPromise and returns ChannelFuture as most
users expected. It makes the user's life much easier because it is
now much easier to get notified when a specific message has been
written.
- flush() does not create a ChannelPromise nor return a ChannelFuture.
It is now similar to what read() looks like.
- Remove channelReadSuspended because it's actually the same as messageReceivedLast
- Rename messageReceived to channelRead
- Rename messageReceivedLast to channelReadComplete
We renamed messageReceivedLast to channelReadComplete because it
reflects what it really is for. Also, we renamed messageReceived to
channelRead for consistency in method names.
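With the renamed events, a typical handler now looks like this (a minimal echo-style sketch):

    public class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.write(msg);   // invoked once per message read
        }

        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();      // invoked once the current read batch is done
        }
    }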
I must admit MessageList was a pain in the ass. Instead of forcing a
handler to always loop over the list of messages, this commit splits
messageReceived(ctx, list) into two event handlers:
- messageReceived(ctx, msg)
- messageReceivedLast(ctx)
When Netty reads one or more messages, messageReceived(ctx, msg) event
is triggered for each message. Once the current read operation is
finished, messageReceivedLast() is triggered to tell the handler that
the last messageReceived() was the last message in the current batch.
Similarly, for outbound, write(ctx, list) has been split into two:
- write(ctx, msg)
- flush(ctx, promise)
Instead of writing a list of messages with a promise, a user is now
supposed to call write(msg) multiple times and then call flush() to
actually flush the buffered messages.
Please note that write() doesn't have a promise with it. You must call
flush() to get notified on completion. (or you can use writeAndFlush())
Other changes:
- Because MessageList is completely hidden, codec framework uses
List<Object> instead of MessageList as an output parameter.
- 5% improvement in throughput (HelloWorldServer example)
- Made CompositeByteBuf a concrete class (renamed from DefaultCompositeByteBuf) because there's no multiple inheritance in Java
Fixes #1536
- Related: #1378
- They now accept only one argument.
- A user who wants to use a buffer for more complex use cases can always access the buffer directly via memoryAddress() and array()
The API changes made so far turned out to increase the memory footprint
and consumption while our intention was actually decreasing them.
Memory consumption issue:
When there are many connections which do not exchange data frequently,
the old Netty 4 API spent a lot more memory than 3 because it always
allocates per-handler buffer for each connection unless otherwise
explicitly stated by a user. In a usual real world load, a client
doesn't always send requests without pausing, so the idea of having a
buffer whose life cycle is bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handle connections that last for a relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far such as better thread model and going back to the way how we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
DatagramChannelConfig.receivePacketSize are gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
* This could, for example, cause corrupt WebSocketFrames if they were written from the server
to the client directly after it sent the handshake response.
- Fixes #1308
freeInboundBuffer() and freeOutboundBuffer() were introduced in the early days of the new API when we did not have a reference counting mechanism in the buffer. A user who did not want Netty to free the handler buffers had to override these methods.
However, now that we have a reference counting mechanism built into the buffer, a user who wants to retain the buffers beyond the handler's life cycle can simply return a buffer whose reference count is greater than 1 in newInbound/OutboundBuffer().
- Added a test case that reproduces the problem in ReplayingDecoderTest
- Call newHandler.handlerAdded() *before* oldHandler.handlerRemoved() to ensure newHandler.handlerAdded() is called before forwarding the buffer content of the old handler in replace0().
This change also introduces a few other changes which were needed:
* ChannelHandler.beforeAdd(...) and ChannelHandler.beforeRemove(...) were removed
* ChannelHandler.afterAdd(...) -> handlerAdded(...)
* ChannelHandler.afterRemoved(...) -> handlerRemoved(...)
* SslHandler.handshake() -> SslHandler.handshakeFuture() as the handshake is triggered automatically after
the Channel becomes active
- Return early when the buffer is empty
- Keep only the number of byte buffers
- Remove unnecessary null check in the loop (because we know the buffer is not empty at a certain point)
- Fixes #1229
- Primarily written by @normanmaurer and revised by @trustin
This commit removes the notion of unfolding from the codec framework
completely. Unfolding was introduced in Netty 3.x to work around the
shortcoming of the codec framework where encode() and decode() did not
allow generating multiple messages.
Such a shortcoming can be fixed by changing the signature of encode()
and decode() instead of introducing an obscure workaround like
unfolding. Therefore, we changed the signature of them in 4.0.
The change is simple, but backward-incompatible. encode() and decode()
do not return anything. Instead, the codec framework will pass a
MessageBuf<Object> so encode() and decode() can add the generated
messages into the MessageBuf.
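For illustration, a hypothetical decoder with the new shape of the API, shown with the List<Object> output parameter the framework eventually settled on (this commit passed a MessageBuf<Object>):

    public class IntDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            // No return value and no unfolding: just add as many messages as were decoded.
            while (in.readableBytes() >= 4) {
                out.add(in.readInt());
            }
        }
    }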
- Add ChannelHandlerUtil and move the core logic of ChannelInbound/OutboundMessageHandler to ChannelHandlerUtil
- Add ChannelHandlerUtil.SingleInbound/OutboundMessageHandler and make ChannelInbound/OutboundMessageHandlerAdapter implement them. This is a backward-incompatible change because it forces all handler methods to be public (they were protected previously)
- Fixes: #1119
* Correct reading offset of 1-byte-offset copies
* Keep track of how much we've written so far in order to validate offsets
* Uncomment and reduce number of tests
- Rename ChannelHandlerAdapter to ChannelDuplexHandler
- Add ChannelHandlerAdapter that implements only ChannelHandler
- Rename CombinedChannelHandler to CombinedChannelDuplexHandler and
improve runtime validation
- Remove ChannelInbound/OutboundHandlerAdapter which are not useful
- Make ChannelOutboundByteHandlerAdapter similar to
ChannelInboundByteHandlerAdapter
- Make the tail and head handler of DefaultChannelPipeline accept both
bytes and messages. ChannelHandlerContext.hasNext*() were removed
because they always return true now.
- Removed various unnecessary null checks.
- Correct method/field names:
inboundBufferSuspended -> channelReadSuspended
- Checksum header was being incorrectly read due to incorrect order of
shift and masking operations.
- Length field of 1-byte copy was being incorrectly interpreted due to a
typo in the binary mask used to extract it.
- Use ByteBuf.readUnsignedByte() instead of readByte() & 0xff
- Use bitwise-OR wherever possible
- Use EmbeddedByteChannel to test
- Use ByteBuf comparison instead of array comparison
- Work done by @lw346 and then revised by @trustin
- Move common methods from ByteBuf to Buf
- Rename ensureWritableBytes() to ensureWritable()
- Rename readable() to isReadable()
- Rename writable() to isWritable()
- Add isReadable(int) and isWritable(int)
- Add AbstractMessageBuf
- Rewrite DefaultMessageBuf and QueueBackedMessageBuf
- based on Josh Bloch's public domain ArrayDeque impl
This changes the behavior of the ChannelPipeline.remove(..) and ChannelPipeline.replace(..) methods in such a way
that after invocation it is no longer possible to access any data in the inbound or outbound buffer. This is
because the buffer is now emptied to prevent side-effects. If a user wants to preserve the content and forward it to the
next handler in the pipeline, it is advised to use one of the new methods which were introduced:
- ChannelPipeline.removeAndForward(..)
- ChannelPipeline.replaceAndForward(..)
This pull request adds two new handler methods: discardInboundReadBytes(ctx) and discardOutboundReadBytes(ctx) to ChannelInboundByteHandler and ChannelOutboundByteHandler respectively. They are called between every inboundBufferUpdated() and flush() respectively. Their default implementation is to call discardSomeReadBytes() on their buffers and a user can override this behavior easily. For example, ReplayingDecoder.discardInboundReadBytes() looks like the following:
    @Override
    public void discardInboundReadBytes(ChannelHandlerContext ctx) throws Exception {
        ByteBuf in = ctx.inboundByteBuffer();
        final int oldReaderIndex = in.readerIndex();
        super.discardInboundReadBytes(ctx);
        final int newReaderIndex = in.readerIndex();
        checkpoint -= oldReaderIndex - newReaderIndex;
    }
If a handler, which has its own buffer index variable, extends ReplayingDecoder or ByteToMessageDecoder, the handler can also override discardInboundReadBytes() and adjust its index variable accordingly.
This pull request introduces a new operation called read() that replaces the existing inbound traffic control method. EventLoop now performs socket reads only when the read() operation has been issued. Once the requested read() operation is actually performed, EventLoop triggers an inboundBufferSuspended event that tells the handlers that the requested read() operation has been performed and the inbound traffic has been suspended again. A handler can decide to continue reading or not.
Unlike other outbound operations, read() does not use ChannelFuture at all to avoid GC cost. If there's a good reason to create a new future per read at the GC cost, I'll change this.
This pull request consequently removes the readable property in ChannelHandlerContext, which means the way traffic control works has changed significantly.
This pull request also adds a new configuration property ChannelOption.AUTO_READ whose default value is true. If true, Netty will call ctx.read() for you. If you need close control over when read() is called, you can set it to false.
Another interesting fact is that non-terminal handlers do not really need to call read() at all. Only the last inbound handler will have to call it, and that's just enough. Actually, you don't even need to call it at the last handler in most cases because of the ChannelOption.AUTO_READ mentioned above.
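A minimal sketch of the option in use, assuming a ServerBootstrap named b:

    b.childOption(ChannelOption.AUTO_READ, false); // Netty will not read until asked

    // In the last inbound handler, whenever the application is ready for more data:
    ctx.read();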
There's no serious backward compatibility issue. If the compiler complains your handler does not implement the read() method, add the following:
    public void read(ChannelHandlerContext ctx) throws Exception {
        ctx.read();
    }
Note that this pull request certainly makes bounded inbound buffer support very easy, but itself does not add the bounded inbound buffer support.
- Fixes #826
Unsafe.isFreed(), free(), suspend/resumeIntermediaryAllocations() are not that dangerous. internalNioBuffer() and internalNioBuffers() are dangerous, but it seems like nobody is using them even inside Netty. Removing those two methods also removes the necessity to keep the Unsafe interface at all.
* UnsafeByteBuf is gone. I added ByteBuf.unsafe() back.
* To avoid extra instantiation, all ByteBuf implementations implement the ByteBuf.Unsafe interface.
* To hide this implementation detail, all ByteBuf implementations are package-private.
* AbstractByteBuf and SwappedByteBuf are public and they do not implement ByteBuf.Unsafe because they don't need to.
* unwrap() is not an unsafe operation anymore.
* ChannelBuf also has unsafe() and Unsafe. ByteBuf.Unsafe extends ChannelBuf.Unsafe. ChannelBuf.unsafe() provides a free() operation so that a user does not need to down-cast the buffer in freeInbound/OutboundBuffer().
To perform writes in AioSocketChannel, we get a ByteBuffer view of the
outbound buffer and specify it as a parameter when we call
AsynchronousSocketChannel.write().
In most cases, the write() operation is finished immediately. However,
sometimes, it is scheduled for later execution. In such a case, there's
a chance for a user's handler to append more data to the outbound
buffer.
When more data is appended to the outbound buffer, the outbound buffer
can expand its capacity by itself. Changing the capacity of a buffer is
basically made of the following steps:
1. Allocate a larger new internal memory region.
2. Copy the current content of the buffer to the new memory region.
3. Rewire the buffer so that it refers to the new region.
4. Deallocate the old memory region.
Because the old memory region is deallocated at step 4, the write
operation scheduled later will access the deallocated region, leading to
all sorts of data corruption or even segfaults.
To prevent this situation, I added suspendIntermediaryDeallocations()
and resumeIntermediaryDeallocations() to UnsafeByteBuf.
AioSocketChannel.doFlushByteBuf() now calls suspendIntermediaryDealloc()
to defer the deallocation of the old memory regions until the completion
handler is notified.
An AssertionError is triggered by a ByteBuf when beginRead() attempts to
access the buffer which has been freed already. This commit ensures the
buffer is not freed before performing an I/O operation.
To determine if the buffer has been freed, UnsafeByteBuf.isFreed() has
been added.
(See #768)
Once a too-long object is received, CompatibleMarshallingDecoder has to
discard all input from now on, just like MarshallingDecoder does.
Otherwise, the decoder will raise more exceptions because the decoder
has no idea anymore where the object starts.
Before this fix, SerialThreadLocalCompatibleMarshallingDecoderTest
logged many additional exceptions raised by the decoder after the test
was finished.
Using DelimiterBasedFrameDecoder with Delimiters.lineDelimiter() has
quadratic performance in the size of the input buffer. Needless to
say, the performance degrades pretty quickly as the size of the buffer
increases. Larger MTUs or loopback connections can make it so bad that
it appears that the code is "busy waiting", when in fact it's spending
almost 100% of the CPU time in DelimiterBasedFrameDecoder.indexOf().
Add a new LineBasedFrameDecoder that decodes line-delimited frames
in O(n) instead of DelimiterBasedFrameDecoder's O(n^2) implementation.
In OpenTSDB's telnet-style protocol decoder this resulted in throughput
increases of an order of magnitude.
Change DelimiterBasedFrameDecoder to automatically detect when the
frames are delimited by line endings, and automatically switch to
using LineBasedFrameDecoder under the hood. This means that all Netty
applications out there that use the combination of DelimiterBasedFrameDecoder
with Delimiters.lineDelimiter() will automatically benefit from the
better performance of LineBasedFrameDecoder, without requiring a code
change.
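For example, a line-based pipeline can now be set up directly (the maxLength of 8192 is an arbitrary choice):

    pipeline.addLast(new LineBasedFrameDecoder(8192));
    pipeline.addLast(new StringDecoder());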
This commit introduces a new API for ByteBuf allocation which fixes
issue #643 along with refactoring of ByteBuf for simplicity and better
performance. (see #62)
A user can configure the ByteBufAllocator of a Channel via
ChannelOption.ALLOCATOR or ChannelConfig.get/setAllocator(). The
default allocator is currently UnpooledByteBufAllocator.HEAP_BY_DEFAULT.
To allocate a buffer, do not use Unpooled anymore; instead, do the following:
ctx.alloc().buffer(...); // allocator chooses the buffer type.
ctx.alloc().heapBuffer(...);
ctx.alloc().directBuffer(...);
To deallocate a buffer, use the unsafe free() operation:
((UnsafeByteBuf) buf).free();
The following is the list of the relevant changes:
- Add ChannelInboundHandler.freeInboundBuffer() and
ChannelOutboundHandler.freeOutboundBuffer() to let a user free the
buffer he or she allocated. ChannelHandler adapter classes implement
this already, so most users won't need to call free() by themselves.
freeIn/OutboundBuffer() methods are invoked when a Channel is closed
and deregistered.
- All ByteBuf implementations by contract must implement UnsafeByteBuf. To access an
unsafe operation: ((UnsafeByteBuf) buf).internalNioBuffer()
- Replace WrappedByteBuf and ByteBuf.Unsafe with UnsafeByteBuf to
simplify the overall class hierarchy and to avoid unnecessary instantiation
of Unsafe instances on an unsafe operation.
- Remove buffer reference counting which is confusing
- Instantiate SwappedByteBuf lazily to avoid instantiation cost
- Rename ChannelFutureFactory to ChannelPropertyAccess and move common
methods between Channel and ChannelHandlerContext there. Also made it
package-private to hide it from a user.
- Remove unused unsafe operations such as newBuffer()
- Add DetectionUtil.canFreeDirectBuffer() so that an allocator decides
which buffer type to use safely
* Add DecodeResult that represents the result of decoding a message
* Add HttpObject which HttpMessage and HttpChunk extend.
** HttpObject has a property 'decodeResult'
o Add ByteBuf.hasNioBuffers() method
o Promote CompositeByteBuf.nioBuffers() methods to ByteBuf
o Use ByteBuf.nioBuffers() methods from AioSocketChannel
- Replace ByteBufferBackedByteBuf with DirectByteBuf
- Make DirectByteBuf and HeapByteBuf dynamic
- Remove DynamicByteBuf
- Replace Unpooled.dynamicBuffer() with Unpooled.buffer() and
directBuffer()
- Remove ByteBufFactory (will be replaced with ByteBufPool later)
- Add ByteBuf.Unsafe (might change in the future)
- Removed VoidEnum because a user can now specify Void instead
- AIO: Prefer discardReadBytes to clear
- AIO: Fixed a potential bug where notifyFlushFutures() is not called
if flush() was requested with no outbound data
- Add Channel.metadata() and remove Channel.bufferType()
- DefaultPipeline automatically redirects disconnect() request to
close() if the channel has no disconnect operation
- Remove unnecessary disconnect() implementations
- Rename ZlibEncoder/Decoder to JZlibEncoder/Decoder
- Define a new ZlibEncoder/Decoder class
- Add JdkZlibEncoder
- All JZlib* and JdkZlib* extends ZlibEncoder/Decoder
- Add ZlibCodecFactory and use it everywhere
- Removed all methods that require ByteOrder as a parameter
from Unpooled (formerly ByteBufs/ChannelBuffers)
- Instead, a user calls order(ByteOrder) to get a little endian
version of the user's buffer (see the sketch below)
- This gives a less overwhelming number of methods in Unpooled.
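A small sketch of the replacement pattern:

    ByteBuf buf = Unpooled.buffer(16);                  // big-endian by default
    ByteBuf le  = buf.order(ByteOrder.LITTLE_ENDIAN);   // little-endian view of the same memory
    le.writeInt(1);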
- ChannelInboundHandler and ChannelOutboundHandler do not have a type
parameter anymore.
- Users should implement ChannelInboundMessageHandler or
ChannelOutboundMessageHandler.
- Add MessageBuf which replaces java.util.Queue
- Add ChannelBuf which is common type of ByteBuf and ChannelBuf
- ChannelBuffers was renamed to ByteBufs
- Add MessageBufs
- All these changes are going to replace ChannelBufferHolder.
- ChannelBuffer gives the perception that it's a buffer of a
channel, but a channel's buffer is now a byte buffer or a message
buffer. Therefore letting it stay as-is is going to be confusing.
- Also prohibited a user from overriding
ChannelInbound(Byte|Message)HandlerAdapter. If a user wants to do
that, he or she should extend ChannelInboundHandlerAdapter instead.
- In computing, 'stream' means both byte stream and message stream,
which is confusing.
- Also, we were already mixing stream and byte in some places and
it's better to use the terms consistently.
(e.g. inboundByteBuffer & inbound stream)
- Added EventExecutor.inEventLoop(Thread) and replaced executor identity
comparison in DefaultChannelPipeline with it - more elegant IMO
- Removed the test classes that need a rewrite or are of no use
- LoggingHandler now only logs state and operations
- StreamLoggingHandler and MessageLoggingHandler log the buffer content
- Added ChannelOperationHandlerAdapter
- Used by WriteTimeoutHandler
- Extracted some handler methods from ChannelInboundHandler into
ChannelStateHandler
- Extracted some handler methods from ChannelOutboundHandler into
ChannelOperationHandler
- Moved exceptionCaught and userEventTriggered into
ChannelHandler
- Channel(Inbound|Outbound)HandlerContext is merged into
ChannelHandlerContext
- ChannelHandlerContext adds direct access methods for inbound and
outbound buffers
- The use of ChannelBufferHolder is minimal now.
- Before: inbound().byteBuffer()
- After: inboundByteBuffer()
- Simpler and better performance
- Bypass buffer types were removed because they just do not work at all
with the thread model.
- All handlers that use a bypass buffer are broken. Will fix soon.
- CombinedHandlerAdapter does not make sense anymore either because
there are four handler interfaces to consider and often the two
handlers will implement the same handler interface such as
ChannelStateHandler. Thinking of better ways to provide this feature
... just like we do with byte arrays. toByteBuffer() and
toByteBuffers() had non-deterministic behavior and thus a caller could not
tell whether the returned NIO buffer was shared or not. nioBuffer() always
returns a view buffer of the Netty buffer. The only case where
hasNioBuffer() returns false and nioBuffer() fails is the
CompositeChannelBuffer, which is not very commonly used and *slow*.
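A small sketch of this contract, assuming the final API where the availability check is nioBufferCount() rather than hasNioBuffer():

    ByteBuf buf = Unpooled.buffer(16);
    buf.writeInt(42);
    if (buf.nioBufferCount() > 0) {
        // Always a view over the readable bytes, never a hidden copy.
        ByteBuffer view = buf.nioBuffer();
    }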
- Add EventExecutor and make EventLoop extend it
- Add SingleThreadEventExecutor and MultithreadEventExecutor
- Add EventExecutor's default implementation
- Fixed an API design problem where there is no way to get a non-bypass
buffer of the desired type
- Add ChannelHandlerContext.eventLoop() for convenience
- Bootstrap and ServerBootstrap handles channel initialization failure
better
- More strict checks for missing @Sharable annotation
- A handler without @Sharable annotation cannot be added more than
once now.
- An exception in this case makes things less confusing for a user
- To reduce the overhead of filling the stack trace,
NoSuchBufferException has a public pre-constructed instance.
- This is necessary because the codec framework sometimes needs to
support both types of outbound buffers.
- Fixed a bug where SpdyFrameEncoder did not handle ping messages
- Reduced memory copy in codec embedder (EmbeddedChannel)
- Renamed ChannelBootstrap to Bootstrap
- Renamed ServerChannelBootstrap to ServerBootstrap
- Moved bootstrap classes to io.netty.bootstrap as before
- Moved unfoldAndAdd() to a separate utility class
- Fixed a bug in unfoldAndAdd() where it did not handle ChannelBuffer
correctly
- The previous API did not support a pipeline which contains multiple
MessageToStreamEncoders because there was no way to find the closest
outbound byte buffer. Now you always get the correct buffer even if
the handler that provides the buffer is placed distantly.
For example:
Channel -> MsgAEncoder -> MsgBEncoder -> MsgCEncoder
Msg(A|B|C)Encoder will all have access to the channel's outbound
byte buffer. Previously, it was simply impossible.
- Improved ChannelBufferHolder.toString()
- Replaced pipeline factories with initializers
- Ported essential parts related with HTTP to the new API
- Replaced ChannelHandlerAdapter.combine() with CombinedChannelHandler
- Fixed a bug where ReplayingDecoder does not notify the next handler
- Fixed a bug where ReplayingDecoder calls wrong callDecode() method
- Added a destination buffer as an argument to AbstractChannel.doRead()
for easier implementation
- Fixed a bug where NioSocketChannel did not try to increase the inbound
buffer size (moved the logic to AbstractChannel)
- Replaced some IllegalArgumentExceptions with
UnsupportedMessageTypeException
- MessageToMessage(Encoder|Decoder) should continue polling the
inbound buffer if encode() or decode() returns null
- an aggregating codec can do that
- Added CodecException which is either EncoderException or
DecoderException
- Made all decoder exceptions a subtype of DecoderException
- Replaced CodecEmbedderException with CodecException
- All abstract handlers wraps an exception with a CodecException
- UniqueKey removes the duplication between ChannelOption and
AttributeKey
- UniqueName provides common name collision check for AttributeKey,
ChannelOption, and Signal.
- Replaced ReplayError with Signal
- Removed deprecated classes
- Changed type parameter of StreamToMessageDecoder and
MessageToMessageDecoder for more flexibility
- Made all tests in the codec package pass
- Replaced FrameDecoder and OneToOne(Encoder|Decoder) with:
- (Stream|Message)To(String|Message)(Encoder|Decoder)
- Moved the classes in 'codec.frame' up to 'codec'
- Fixed some bugs found while running unit tests
- Channel now creates a ChannelPipeline by itself
I find no reason to allow a user to use one's own pipeline
implementation since I saw nobody do so except for the cases where a
user wants to add a user attribute to a channel, which is now covered
by AttributeMap.
- Removed ChannelEvent and its subtypes because they are replaced by
direct method invocation.
- Replaced ChannelSink with Channel.unsafe()
- Various getter renaming (e.g. Channel.getId() -> Channel.id())
- Added ChannelHandlerInvoker interface
- Implemented AbstractChannel and AbstractServerChannel
- Some other changes I don't remember
Merge in fix for threading (related to #140 and #187). This also includes the new feature that allows submitting a Runnable that gets executed later in the I/O thread.
* Overall code cleanup on FrameDecoder and ReplayingDecoder
* FrameDecoder discards readableBytes only when it has to
* Replaced createCumulationDynamicBuffer with newCumulationBuffer with
an additional hint
* ReplayingDecoder does not perform memory copy if possible
Split the project into the following modules:
* common
* buffer
* codec
* codec-http
* transport
* transport-*
* handler
* example
* testsuite (integration tests that involve 2+ modules)
* all (does nothing yet, but will make it generate netty.jar)
This commit also fixes the compilation errors with transport-sctp on
non-Linux systems. It will at least compile without complaints.