/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */
package io.netty.handler.codec;

import static io.netty.util.internal.ObjectUtil.checkPositive;
import static java.util.Objects.requireNonNull;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelConfig;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelHandler;
import io.netty.channel.socket.ChannelInputShutdownEvent;
import io.netty.util.internal.StringUtil;

import java.util.List;

/**
 * {@link ChannelHandler} which decodes bytes in a stream-like fashion from one {@link ByteBuf} to
 * another Message type.
 *
 * For example here is an implementation which reads all readable bytes from
 * the input {@link ByteBuf} and creates a new {@link ByteBuf}.
 *
 * <pre>
 *     public class SquareDecoder extends {@link ByteToMessageDecoder} {
 *         {@code @Override}
 *         public void decode({@link ChannelHandlerContext} ctx, {@link ByteBuf} in, List&lt;Object&gt; out)
 *                 throws {@link Exception} {
 *             out.add(in.readBytes(in.readableBytes()));
 *         }
 *     }
 * </pre>
 *
 * <h3>Frame detection</h3>
 * <p>
 * Generally frame detection should be handled earlier in the pipeline by adding a
 * {@link DelimiterBasedFrameDecoder}, {@link FixedLengthFrameDecoder}, {@link LengthFieldBasedFrameDecoder},
 * or {@link LineBasedFrameDecoder}.
 * <p>
 * If a custom frame decoder is required, then one needs to be careful when implementing
 * one with {@link ByteToMessageDecoder}. Ensure there are enough bytes in the buffer for a
 * complete frame by checking {@link ByteBuf#readableBytes()}. If there are not enough bytes
 * for a complete frame, return without modifying the reader index to allow more bytes to arrive.
 * <p>
 * To check for complete frames without modifying the reader index, use methods like {@link ByteBuf#getInt(int)}.
 * One <strong>MUST</strong> use the reader index when using methods like {@link ByteBuf#getInt(int)}.
 * For example calling <tt>in.getInt(0)</tt> assumes the frame starts at the beginning of the buffer, which
 * is not always the case. Use <tt>in.getInt(in.readerIndex())</tt> instead.
 * <h3>Pitfalls</h3>
 * <p>
 * Be aware that sub-classes of {@link ByteToMessageDecoder} <strong>MUST NOT</strong>
 * be annotated with {@code @Sharable}.
 * <p>
 * Some methods such as {@link ByteBuf#readBytes(int)} will cause a memory leak if the returned buffer
 * is not released or added to the <tt>out</tt> {@link List}. Use derived buffers like {@link ByteBuf#readSlice(int)}
 * to avoid leaking memory.
 */
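The frame-detection rules above (ensure enough readable bytes before consuming, peek relative to the reader index, and leave the buffer untouched when the frame is incomplete) can be sketched with plain java.nio. This is an illustrative stand-in using {@code ByteBuffer} instead of Netty's ByteBuf, so the class and method names here are ours, not Netty API:

```java
import java.nio.ByteBuffer;

/** Illustrative stand-in for a length-prefixed frame check (not Netty API). */
public class FramePeek {
    /**
     * Returns the frame payload if a complete frame (4-byte length prefix
     * followed by that many bytes) is available; otherwise returns null and
     * leaves the buffer position untouched so more bytes can arrive.
     */
    public static byte[] tryReadFrame(ByteBuffer in) {
        if (in.remaining() < 4) {
            return null; // not even the length prefix yet
        }
        // Peek the length at the current position: the analogue of
        // in.getInt(in.readerIndex()) rather than in.getInt(0).
        int length = in.getInt(in.position());
        if (in.remaining() - 4 < length) {
            return null; // incomplete frame; wait for more bytes
        }
        in.getInt(); // now actually consume the prefix
        byte[] payload = new byte[length];
        in.get(payload);
        return payload;
    }
}
```

The same shape applies inside a `decode()` implementation: the early `return null` branches become plain `return` statements that leave `in` unmodified until the next `channelRead`.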
public abstract class ByteToMessageDecoder extends ChannelHandlerAdapter {

    /**
     * Cumulate {@link ByteBuf}s by merging them into one {@link ByteBuf}, using memory copies.
     */
    public static final Cumulator MERGE_CUMULATOR = (alloc, cumulation, in) -> {
        try {
            if (cumulation.writerIndex() > cumulation.maxCapacity() - in.readableBytes()
                    || cumulation.refCnt() > 1 || cumulation.isReadOnly()) {
                // Expand cumulation (by replacing it) when either there is no more room in the buffer,
                // the refCnt is greater than 1 (which may happen when the user uses slice().retain() or
                // duplicate().retain()), or the buffer is read-only.
                //
                // See:
                // - https://github.com/netty/netty/issues/2327
                // - https://github.com/netty/netty/issues/1764
                cumulation = expandCumulation(alloc, cumulation, in);
            } else {
                cumulation.writeBytes(in);
            }
            return cumulation;
        } finally {
            // We must release 'in' in all cases as otherwise it may produce a leak if writeBytes(...)
            // throws for whatever reason (for example because of OutOfMemoryError).
            in.release();
        }
    };
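The merge strategy above — write incoming bytes into a single cumulation buffer, falling back to replacing it with a larger copy when it has no more room — can be illustrated with plain byte arrays. A minimal sketch under our own names ({@code cumulate}, {@code written}), not the Netty implementation:

```java
import java.util.Arrays;

/** Illustrative merge-style cumulation over byte arrays (not Netty API). */
public class MergeCumulate {
    /**
     * Appends {@code in} to {@code cumulation} at offset {@code written},
     * growing the cumulation by copy when there is not enough room --
     * the analogue of expandCumulation(alloc, cumulation, in).
     */
    public static byte[] cumulate(byte[] cumulation, int written, byte[] in) {
        if (cumulation.length - written < in.length) {
            // Not enough room: replace the cumulation with a larger copy.
            cumulation = Arrays.copyOf(cumulation,
                    Math.max(cumulation.length * 2, written + in.length));
        }
        System.arraycopy(in, 0, cumulation, written, in.length);
        return cumulation;
    }
}
```

The cheap path (enough room, plain append) corresponds to `cumulation.writeBytes(in)`; the expensive path costs an extra full copy, which is why the composite cumulator below exists as an alternative.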

    /**
     * Cumulate {@link ByteBuf}s by adding them to a {@link CompositeByteBuf} and so do no memory copy whenever
     * possible. Be aware that {@link CompositeByteBuf} uses a more complex indexing implementation so depending on
     * your use-case and the decoder implementation this may be slower than just using the {@link #MERGE_CUMULATOR}.
     */
    public static final Cumulator COMPOSITE_CUMULATOR = (alloc, cumulation, in) -> {
        ByteBuf buffer;
        try {
            if (cumulation.refCnt() > 1) {
                // Expand cumulation (by replacing it) when the refCnt is greater than 1, which may happen when
                // the user uses slice().retain() or duplicate().retain().
                //
                // See:
                // - https://github.com/netty/netty/issues/2327
                // - https://github.com/netty/netty/issues/1764
                buffer = expandCumulation(alloc, cumulation, in);
            } else {
                CompositeByteBuf composite;
                if (cumulation instanceof CompositeByteBuf) {
                    composite = (CompositeByteBuf) cumulation;
                } else {
                    composite = alloc.compositeBuffer(Integer.MAX_VALUE);
                    composite.addComponent(true, cumulation);
                }
                composite.addComponent(true, in);
                in = null;
                buffer = composite;
            }
            return buffer;
        } finally {
            if (in != null) {
                // We must release 'in' if the ownership was not transferred as otherwise it may produce a leak
                // if addComponent(...) throws for whatever reason (for example because of OutOfMemoryError).
                in.release();
            }
        }
    };

    ByteBuf cumulation;
    private Cumulator cumulator = MERGE_CUMULATOR;
    private boolean singleDecode;
    private boolean first;

    // TODO: Improve this...
    private CodecOutputList out = CodecOutputList.newInstance();

    /**
     * This flag is used to determine if we need to call {@link ChannelHandlerContext#read()} to consume more data
     * when {@link ChannelConfig#isAutoRead()} is {@code false}.
     */
    private boolean firedChannelRead;

    private int discardAfterReads = 16;
    private int numReads;

    protected ByteToMessageDecoder() {
        ensureNotSharable();
    }

    /**
     * If set then only one message is decoded on each {@link #channelRead(ChannelHandlerContext, Object)}
     * call. This may be useful if you need to do some protocol upgrade and want to make sure nothing is mixed up.
     *
     * Default is {@code false} as this has performance impacts.
     */
    public void setSingleDecode(boolean singleDecode) {
        this.singleDecode = singleDecode;
    }

    /**
     * If {@code true} then only one message is decoded on each
     * {@link #channelRead(ChannelHandlerContext, Object)} call.
     *
     * Default is {@code false} as this has performance impacts.
     */
    public boolean isSingleDecode() {
        return singleDecode;
    }

    /**
     * Set the {@link Cumulator} to use to cumulate the received {@link ByteBuf}s.
     */
    public void setCumulator(Cumulator cumulator) {
        requireNonNull(cumulator, "cumulator");
        this.cumulator = cumulator;
    }

    /**
     * Set the number of reads after which {@link ByteBuf#discardSomeReadBytes()} is called to free up memory.
     * The default is {@code 16}.
     */
    public void setDiscardAfterReads(int discardAfterReads) {
        checkPositive(discardAfterReads, "discardAfterReads");
        this.discardAfterReads = discardAfterReads;
    }
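The setting above bounds how long already-consumed bytes linger in the cumulation before `discardSomeReadBytes()` reclaims them. The reclamation step is analogous to java.nio's `compact()`, sketched here with illustrative names (`reclaim` is ours, not a Netty method):

```java
import java.nio.ByteBuffer;

/** Illustrative analogue of periodically discarding read bytes (not Netty API). */
public class DiscardDemo {
    /**
     * Moves the unread bytes to the front of the buffer so the space occupied
     * by already-consumed bytes can be reused; returns how many unread bytes
     * were preserved.
     */
    public static int reclaim(ByteBuffer cumulation) {
        int unread = cumulation.remaining();
        cumulation.compact(); // copies unread bytes to index 0, position = unread
        return unread;
    }
}
```

Running this on every read would copy unread bytes constantly, which is why the decoder only does it every `discardAfterReads` reads.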

    /**
     * Returns the actual number of readable bytes in the internal cumulative
     * buffer of this decoder. You usually do not need to rely on this value
     * to write a decoder. Use it only when you must, at your own risk.
     * This method is a shortcut to {@link #internalBuffer() internalBuffer().readableBytes()}.
     */
    protected int actualReadableBytes() {
        return internalBuffer().readableBytes();
    }

    /**
     * Returns the internal cumulative buffer of this decoder. You usually
     * do not need to access the internal buffer directly to write a decoder.
     * Use it only when you must, at your own risk.
     */
    protected ByteBuf internalBuffer() {
        if (cumulation != null) {
            return cumulation;
        } else {
            return Unpooled.EMPTY_BUFFER;
        }
    }

    @Override
    public final void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        ByteBuf buf = cumulation;
        if (buf != null) {
            // Directly set this to null so we are sure we do not access it in any other method here anymore.
            cumulation = null;
            numReads = 0;
            int readable = buf.readableBytes();
            if (readable > 0) {
                ctx.fireChannelRead(buf);
                ctx.fireChannelReadComplete();
            } else {
                buf.release();
            }
|
Revamp the core API to reduce memory footprint and consumption
The API changes made so far turned out to increase the memory footprint
and consumption while our intention was actually decreasing them.
Memory consumption issue:
When there are many connections which does not exchange data frequently,
the old Netty 4 API spent a lot more memory than 3 because it always
allocates per-handler buffer for each connection unless otherwise
explicitly stated by a user. In a usual real world load, a client
doesn't always send requests without pausing, so the idea of having a
buffer whose life cycle if bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handles connections which lasts for relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far such as better thread model and going back to the way how we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanges() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
DatagramChannelConfig.receivePacketSize is gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
2013-05-28 13:40:19 +02:00
|
|
|
}
        handlerRemoved0(ctx);
    }

    /**
     * Gets called after the {@link ByteToMessageDecoder} was removed from the actual context and it doesn't handle
     * events anymore.
     */
    protected void handlerRemoved0(ChannelHandlerContext ctx) throws Exception { }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof ByteBuf) {
            try {
Remove MessageList from public API and change ChannelInbound/OutboundHandler accordingly
I must admit MessageList was a pain in the ass. Instead of forcing a
handler to always loop over the list of messages, this commit splits
messageReceived(ctx, list) into two event handlers:
- messageReceived(ctx, msg)
- messageReceivedLast(ctx)
When Netty reads one or more messages, the messageReceived(ctx, msg) event
is triggered for each message. Once the current read operation is
finished, messageReceivedLast() is triggered to tell the handler that
the last messageReceived() was the last message in the current batch.
Similarly, for outbound, write(ctx, list) has been split into two:
- write(ctx, msg)
- flush(ctx, promise)
Instead of writing a list of messages with a promise, a user is now
supposed to call write(msg) multiple times and then call flush() to
actually flush the buffered messages.
Please note that write() doesn't have a promise with it. You must call
flush() to get notified on completion (or you can use writeAndFlush()).
Other changes:
- Because MessageList is completely hidden, the codec framework uses
List<Object> instead of MessageList as an output parameter.
2013-07-08 12:03:40 +02:00
                ByteBuf data = (ByteBuf) msg;
                first = cumulation == null;
                if (first) {
                    cumulation = data;
                } else {
                    cumulation = cumulator.cumulate(ctx.alloc(), cumulation, data);
                }
                callDecode(ctx, cumulation, out);
            } catch (DecoderException e) {
                throw e;
            } catch (Exception e) {
                throw new DecoderException(e);
            } finally {
                if (cumulation != null && !cumulation.isReadable()) {
                    numReads = 0;
                    cumulation.release();
                    cumulation = null;
                } else if (++ numReads >= discardAfterReads) {
                    // We did enough reads already; try to discard some bytes so we do not risk an OOME.
                    // See https://github.com/netty/netty/issues/4275
                    numReads = 0;
                    discardSomeReadBytes();
                }

                int size = out.size();
                firedChannelRead |= size > 0;
                fireChannelRead(ctx, out, size);
                out.clear();
            }
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    /**
     * Get {@code numElements} out of the {@link List} and forward these through the pipeline.
     */
    static void fireChannelRead(ChannelHandlerContext ctx, List<Object> msgs, int numElements) {
        if (msgs instanceof CodecOutputList) {
            fireChannelRead(ctx, (CodecOutputList) msgs, numElements);
        } else {
            for (int i = 0; i < numElements; i++) {
                ctx.fireChannelRead(msgs.get(i));
            }
        }
    }

    /**
     * Get {@code numElements} out of the {@link CodecOutputList} and forward these through the pipeline.
     */
    static void fireChannelRead(ChannelHandlerContext ctx, CodecOutputList msgs, int numElements) {
        for (int i = 0; i < numElements; i++) {
            ctx.fireChannelRead(msgs.getUnsafe(i));
        }
    }
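Both helpers just replay each decoded message through the pipeline, one fireChannelRead call per message. As a standalone sketch (plain Java, not Netty code; a `Consumer` stands in for the `ChannelHandlerContext`):

```java
import java.util.List;
import java.util.function.Consumer;

// Standalone sketch (not Netty code): forward the first numElements of a
// decoded batch, one callback per message, as the helpers above do.
final class BatchForwarder {
    static void fireChannelRead(Consumer<Object> ctx, List<Object> msgs, int numElements) {
        for (int i = 0; i < numElements; i++) {
            ctx.accept(msgs.get(i));
        }
    }
}
```

Forwarding 2 elements of a 3-message batch delivers only the first two to the callback.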

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        numReads = 0;
        discardSomeReadBytes();
        if (!firedChannelRead && !ctx.channel().config().isAutoRead()) {
            ctx.read();
        }
        firedChannelRead = false;
        ctx.fireChannelReadComplete();
    }

    protected final void discardSomeReadBytes() {
        if (cumulation != null && !first && cumulation.refCnt() == 1) {
            // discard some bytes if possible to make more room in the
            // buffer but only if the refCnt == 1 as otherwise the user may have
            // used slice().retain() or duplicate().retain().
            //
            // See:
            // - https://github.com/netty/netty/issues/2327
            // - https://github.com/netty/netty/issues/1764
            cumulation.discardSomeReadBytes();
        }
    }
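The refCnt == 1 guard above can be illustrated standalone (hypothetical `RefCountedBuffer` type, not Netty's ByteBuf): compaction moves the unread bytes to the front of the buffer, which is only safe while the decoder holds the sole reference; a retained slice or duplicate elsewhere would otherwise see its data shift underneath it.

```java
// Standalone sketch (hypothetical type, not Netty's ByteBuf): compact only
// when refCnt == 1, mirroring the guard in discardSomeReadBytes() above.
final class RefCountedBuffer {
    byte[] data;
    int readerIndex;
    int refCnt = 1;

    RefCountedBuffer(byte[] data) {
        this.data = data;
    }

    void discardSomeReadBytes() {
        if (refCnt != 1) {
            return; // a retained slice/duplicate may exist; moving bytes would corrupt its view
        }
        // copy the unread tail to the front and reset the reader index
        byte[] compacted = new byte[data.length - readerIndex];
        System.arraycopy(data, readerIndex, compacted, 0, compacted.length);
        data = compacted;
        readerIndex = 0;
    }
}
```

With a single reference the buffer shrinks to the unread bytes; with refCnt > 1 it is left untouched.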

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        channelInputClosed(ctx, true);
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof ChannelInputShutdownEvent) {
            // The decodeLast method is invoked when a channelInactive event is encountered.
            // This method is responsible for ending requests in some situations and must be called
            // when the input has been shutdown.
            channelInputClosed(ctx, false);
        }
        ctx.fireUserEventTriggered(evt);
    }
|
|
|
|
|
|
|
|
private void channelInputClosed(ChannelHandlerContext ctx, boolean callChannelInactive) throws Exception {
|
2012-05-16 16:02:06 +02:00
|
|
|
try {
|
2016-04-11 09:15:30 +02:00
|
|
|
channelInputClosed(ctx, out);
|
Revamp the core API to reduce memory footprint and consumption
The API changes made so far turned out to increase the memory footprint
and consumption while our intention was actually decreasing them.
Memory consumption issue:
When there are many connections which does not exchange data frequently,
the old Netty 4 API spent a lot more memory than 3 because it always
allocates per-handler buffer for each connection unless otherwise
explicitly stated by a user. In a usual real world load, a client
doesn't always send requests without pausing, so the idea of having a
buffer whose life cycle if bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handles connections which lasts for relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far such as better thread model and going back to the way how we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanges() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
DatagramChannelConfig.receivePacketSize is gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
        } catch (DecoderException e) {
            throw e;
        } catch (Exception e) {
            throw new DecoderException(e);
        } finally {
            if (cumulation != null) {
                cumulation.release();
                cumulation = null;
            }
            int size = out.size();
            fireChannelRead(ctx, out, size);
            out.clear();
            if (size > 0) {
                // Something was read, call fireChannelReadComplete()
                ctx.fireChannelReadComplete();
            }
            if (callChannelInactive) {
                ctx.fireChannelInactive();
Remove MessageList from public API and change ChannelInbound/OutboundHandler accordingly
I must admit MessageList was a pain in the ass. Instead of forcing a
handler to always loop over the list of messages, this commit splits
messageReceived(ctx, list) into two event handlers:
- messageReceived(ctx, msg)
- messageReceivedLast(ctx)
When Netty reads one or more messages, the messageReceived(ctx, msg) event
is triggered for each message. Once the current read operation is
finished, messageReceivedLast() is triggered to tell the handler that
the last messageReceived() was the last message in the current batch.
Similarly, for outbound, write(ctx, list) has been split into two:
- write(ctx, msg)
- flush(ctx, promise)
Instead of writing a list of messages with a promise, a user is now
supposed to call write(msg) multiple times and then call flush() to
actually flush the buffered messages.
Please note that write() doesn't have a promise with it. You must call
flush() to get notified on completion. (Or you can use writeAndFlush().)
Other changes:
- Because MessageList is completely hidden, the codec framework uses
  List<Object> instead of MessageList as an output parameter.
            }
        }
    }

    /**
     * Called when the input of the channel was closed which may be because it changed to inactive or because of
     * {@link ChannelInputShutdownEvent}.
     */
    void channelInputClosed(ChannelHandlerContext ctx, List<Object> out) throws Exception {
        if (cumulation != null) {
            callDecode(ctx, cumulation, out);
            decodeLast(ctx, cumulation, out);
        } else {
            decodeLast(ctx, Unpooled.EMPTY_BUFFER, out);
        }
    }

    /**
     * Called once data should be decoded from the given {@link ByteBuf}. This method will call
     * {@link #decode(ChannelHandlerContext, ByteBuf, List)} as long as decoding should take place.
     *
     * @param ctx the {@link ChannelHandlerContext} which this {@link ByteToMessageDecoder} belongs to
     * @param in  the {@link ByteBuf} from which to read data
     * @param out the {@link List} to which decoded messages should be added
     */
    protected void callDecode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        try {
            while (in.isReadable()) {
                int outSize = out.size();

                if (outSize > 0) {
                    fireChannelRead(ctx, out, outSize);
                    out.clear();

                    // Check if this handler was removed before continuing with decoding.
                    // If it was removed, it is not safe to continue to operate on the buffer.
                    //
                    // See:
                    // - https://github.com/netty/netty/issues/4635
                    if (ctx.isRemoved()) {
                        break;
                    }
                    outSize = 0;
                }

                int oldInputLength = in.readableBytes();
                decode(ctx, in, out);

                // Check if this handler was removed before continuing the loop.
                // If it was removed, it is not safe to continue to operate on the buffer.
                //
                // See https://github.com/netty/netty/issues/1664
                if (ctx.isRemoved()) {
                    break;
                }

                if (outSize == out.size()) {
                    if (oldInputLength == in.readableBytes()) {
                        break;
                    } else {
                        continue;
                    }
                }

                if (oldInputLength == in.readableBytes()) {
                    throw new DecoderException(
                            StringUtil.simpleClassName(getClass()) +
                            ".decode() did not read anything but decoded a message.");
                }

                if (isSingleDecode()) {
                    break;
                }
            }
        } catch (DecoderException e) {
            throw e;
        } catch (Exception cause) {
            throw new DecoderException(cause);
        }
    }
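The loop above can be condensed into a Netty-free sketch that shows its progress rule: the decode step is retried while each pass either consumes bytes or produces messages, stops once a pass does neither, and fails if a message appears without any bytes being read. The class name and the ByteBuffer/BiConsumer modeling are illustrative assumptions, not Netty API.

```java
import java.nio.ByteBuffer;
import java.util.List;
import java.util.function.BiConsumer;

// Simplified model of callDecode(): keep invoking the decode step while it
// makes progress; stop when a pass reads nothing and produces nothing.
final class CallDecodeSketch {
    static void callDecode(ByteBuffer in, List<Object> out,
                           BiConsumer<ByteBuffer, List<Object>> decode) {
        while (in.hasRemaining()) {
            int outSize = out.size();
            int oldInputLength = in.remaining();
            decode.accept(in, out);
            if (outSize == out.size()) {
                if (oldInputLength == in.remaining()) {
                    break;      // no bytes read, no message produced: need more input
                }
                continue;       // bytes read but no message yet: try again
            }
            if (oldInputLength == in.remaining()) {
                throw new IllegalStateException(
                        "decode() did not read anything but decoded a message.");
            }
        }
    }
}
```

With a decode step that needs two bytes per message, a five-byte input yields two messages and leaves one unread byte for the next invocation.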

    /**
     * Decode from one {@link ByteBuf} to another. This method will be called until either the input
     * {@link ByteBuf} has nothing to read when this method returns, or until nothing was read from the input
     * {@link ByteBuf}.
     *
     * @param ctx the {@link ChannelHandlerContext} which this {@link ByteToMessageDecoder} belongs to
     * @param in  the {@link ByteBuf} from which to read data
     * @param out the {@link List} to which decoded messages should be added
     * @throws Exception is thrown if an error occurs
     */
    protected abstract void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception;
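To illustrate the decode() contract (consume bytes and add a decoded message to out, or read nothing to signal that more input is needed), here is a hedged, self-contained sketch of a fixed-length frame decoder. It models the input with java.nio.ByteBuffer so it runs without Netty, and FixedLengthSketch is an invented name; a real implementation would extend ByteToMessageDecoder and operate on ByteBuf.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical fixed-length frame decoder, following the decode() contract:
// either consume bytes and add a message to 'out', or read nothing so the
// driving loop knows more input is required.
final class FixedLengthSketch {
    static void decode(ByteBuffer in, List<Object> out, int frameLength) {
        // Not enough bytes for a whole frame: read nothing, wait for more input.
        if (in.remaining() < frameLength) {
            return;
        }
        byte[] frame = new byte[frameLength];
        in.get(frame);  // consume exactly one frame
        out.add(new String(frame, StandardCharsets.US_ASCII));
    }
}
```

Called repeatedly by a callDecode()-style loop, this emits one message per full frame and leaves any partial frame in the buffer for the next read.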

    /**
     * Is called one last time when the {@link ChannelHandlerContext} goes inactive, which means that
     * {@link #channelInactive(ChannelHandlerContext)} was triggered.
     *
     * By default this will just call {@link #decode(ChannelHandlerContext, ByteBuf, List)} but sub-classes may
     * override this for some special cleanup operation.
     */
    protected void decodeLast(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        if (in.isReadable()) {
            // Only call decode() if there is something left in the buffer to decode.
            // See https://github.com/netty/netty/issues/4386
            decode(ctx, in, out);
        }
    }

    static ByteBuf expandCumulation(ByteBufAllocator alloc, ByteBuf oldCumulation, ByteBuf in) {
        ByteBuf newCumulation = alloc.buffer(oldCumulation.readableBytes() + in.readableBytes());
        ByteBuf toRelease = newCumulation;
        try {
            newCumulation.writeBytes(oldCumulation);
            newCumulation.writeBytes(in);
            toRelease = oldCumulation;
            return newCumulation;
        } finally {
            toRelease.release();
        }
    }
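The toRelease swap in expandCumulation() guarantees exactly one buffer is released on every path: the half-built new buffer if a copy throws, otherwise the old cumulation once its bytes are copied. A toy model of that ownership-transfer pattern, where ToyBuf and ExpandSketch are invented stand-ins for ByteBuf since the real types need Netty on the classpath:

```java
// Toy stand-in for a reference-counted buffer.
final class ToyBuf {
    final StringBuilder data = new StringBuilder();
    boolean released;

    ToyBuf write(String s) {
        if (released) {
            throw new IllegalStateException("buffer already released");
        }
        data.append(s);
        return this;
    }

    void release() {
        released = true;
    }
}

final class ExpandSketch {
    // Mirrors expandCumulation(): copy both inputs into a larger buffer, then
    // transfer the release obligation from the new buffer to the old one. If a
    // copy throws, the finally block releases the half-built new buffer
    // instead, so neither path leaks.
    static ToyBuf expand(ToyBuf oldCumulation, ToyBuf in) {
        ToyBuf newCumulation = new ToyBuf();
        ToyBuf toRelease = newCumulation;
        try {
            newCumulation.write(oldCumulation.data.toString());
            newCumulation.write(in.data.toString());
            toRelease = oldCumulation;  // success: release the old buffer instead
            return newCumulation;
        } finally {
            toRelease.release();
        }
    }
}
```

On success the merged buffer survives and the old cumulation is released; the caller remains responsible for the input buffer, as in the real method.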

    /**
     * Cumulate {@link ByteBuf}s.
     */
    public interface Cumulator {
        /**
         * Cumulate the given {@link ByteBuf}s and return the {@link ByteBuf} that holds the cumulated bytes.
         * The implementation is responsible for correctly handling the life-cycle of the given {@link ByteBuf}s,
         * and so must call {@link ByteBuf#release()} once a {@link ByteBuf} is fully consumed.
         */
        ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in);
    }
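A minimal model of the Cumulator contract, analogous to a merging cumulator: append the new input to the cumulation and release the input once it is fully consumed. MergeSketch and its nested Buf are invented stand-ins; a real Cumulator operates on ByteBuf and ByteBufAllocator as declared above.

```java
final class MergeSketch {
    // Toy reference-counted buffer standing in for ByteBuf.
    static final class Buf {
        final StringBuilder data = new StringBuilder();
        boolean released;

        Buf append(String s) {
            data.append(s);
            return this;
        }

        void release() {
            released = true;
        }
    }

    // Analogous to a "merge" cumulator: copy 'in' into 'cumulation' and
    // release 'in', since the cumulator owns it and it is now fully consumed.
    static Buf cumulate(Buf cumulation, Buf in) {
        try {
            cumulation.data.append(in.data);
            return cumulation;
        } finally {
            in.release();
        }
    }
}
```

Releasing in even when the append throws keeps the ownership rule of the contract: whatever the cumulator fully consumed (or can no longer use) must be released.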
}