Revamp the core API to reduce memory footprint and consumption
The API changes made so far turned out to increase the memory footprint
and consumption, while our intention was actually to decrease them.
Memory consumption issue:
When there are many connections that do not exchange data frequently,
the old Netty 4 API used a lot more memory than Netty 3 because it always
allocated a per-handler buffer for each connection unless a user
explicitly stated otherwise. Under a typical real-world load, a client
doesn't send requests continuously without pausing, so the idea of having a
buffer whose life cycle is bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is small.
However, for a highly modular application with many handlers that
handles connections lasting for only a relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far, such as the better thread model, while going back to the way we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
  DatagramChannelConfig.receivePacketSize are gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
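
As a rough illustration of the writability API brought back by this change, here is a minimal sketch of a handler that keeps writing only while the channel's outbound buffer is below the high water mark and resumes once channelWritabilityChanged() fires. It is written against the Netty 4 API as it eventually shipped; the handler name and the "ping" payload are made up for illustration.

    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.util.CharsetUtil;

    public class WritabilityAwareHandler extends ChannelInboundHandlerAdapter {

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            // Write as long as the outbound buffer stays below the high water mark.
            writeWhileWritable(ctx);
        }

        @Override
        public void channelWritabilityChanged(ChannelHandlerContext ctx) {
            // Fired when the pending outbound bytes cross the configured water marks.
            if (ctx.channel().isWritable()) {
                writeWhileWritable(ctx);
            }
            ctx.fireChannelWritabilityChanged();
        }

        private static void writeWhileWritable(ChannelHandlerContext ctx) {
            while (ctx.channel().isWritable()) {
                ctx.writeAndFlush(Unpooled.copiedBuffer("ping\n", CharsetUtil.US_ASCII));
            }
        }
    }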
/*
 * Copyright 2013 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

package io.netty.channel;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.buffer.Unpooled;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
import io.netty.util.ReferenceCountUtil;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.internal.InternalThreadLocalMap;
import io.netty.util.internal.OneTimeTask;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import java.util.concurrent.atomic.AtomicLongFieldUpdater;
/**
 * (Transport implementors only) an internal data structure used by {@link AbstractChannel} to store its pending
 * outbound write requests.
 * <p>
 * All methods must be called by a transport implementation from an I/O thread, except the following ones:
 * <ul>
 * <li>{@link #size()} and {@link #isEmpty()}</li>
 * <li>{@link #isWritable()}</li>
 * <li>{@link #getUserDefinedWritability(int)} and {@link #setUserDefinedWritability(int, boolean)}</li>
 * </ul>
 * </p>
 */
public final class ChannelOutboundBuffer {
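
    // Not part of the original source: a rough sketch of how a transport typically drives this
    // buffer from its I/O thread. ChannelPipeline.write() ends up in addMessage(), flush() in
    // addFlush(), and the transport's doWrite() loop then drains the flushed entries roughly as:
    //
    //     for (;;) {
    //         Object msg = buffer.current();
    //         if (msg == null) {
    //             break;                       // nothing left to write
    //         }
    //         // ... write msg to the underlying socket, calling buffer.progress(n) as bytes go out ...
    //         buffer.remove();                 // notify the promise and move on to the next entry
    //     }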

    private static final InternalLogger logger = InternalLoggerFactory.getInstance(ChannelOutboundBuffer.class);

    private static final FastThreadLocal<ByteBuffer[]> NIO_BUFFERS = new FastThreadLocal<ByteBuffer[]>() {
        @Override
        protected ByteBuffer[] initialValue() throws Exception {
            return new ByteBuffer[1024];
        }
    };

    private final Channel channel;

    // Entry(flushedEntry) --> ... Entry(unflushedEntry) --> ... Entry(tailEntry)
    //
    // The Entry that is the first in the linked-list structure that was flushed
    private Entry flushedEntry;
    // The Entry which is the first unflushed in the linked-list structure
    private Entry unflushedEntry;
    // The Entry which represents the tail of the buffer
    private Entry tailEntry;
    // The number of flushed entries that are not written yet
    private int flushed;

    private int nioBufferCount;
    private long nioBufferSize;
    private boolean inFail;

    private static final AtomicLongFieldUpdater<ChannelOutboundBuffer> TOTAL_PENDING_SIZE_UPDATER;

    @SuppressWarnings("UnusedDeclaration")
    private volatile long totalPendingSize;

    private static final AtomicIntegerFieldUpdater<ChannelOutboundBuffer> UNWRITABLE_UPDATER;

    @SuppressWarnings("UnusedDeclaration")
    private volatile int unwritable;

    private final Runnable fireChannelWritabilityChangedTask;
    static {
        // Prefer the possibly unsafe-based updaters provided by PlatformDependent and fall back to
        // the plain reflection-based JDK updaters when they are not available.
        AtomicIntegerFieldUpdater<ChannelOutboundBuffer> unwritableUpdater =
                PlatformDependent.newAtomicIntegerFieldUpdater(ChannelOutboundBuffer.class, "unwritable");
        if (unwritableUpdater == null) {
            unwritableUpdater = AtomicIntegerFieldUpdater.newUpdater(ChannelOutboundBuffer.class, "unwritable");
        }
        UNWRITABLE_UPDATER = unwritableUpdater;

        AtomicLongFieldUpdater<ChannelOutboundBuffer> pendingSizeUpdater =
                PlatformDependent.newAtomicLongFieldUpdater(ChannelOutboundBuffer.class, "totalPendingSize");
        if (pendingSizeUpdater == null) {
            pendingSizeUpdater = AtomicLongFieldUpdater.newUpdater(ChannelOutboundBuffer.class, "totalPendingSize");
        }
        TOTAL_PENDING_SIZE_UPDATER = pendingSizeUpdater;
    }

    ChannelOutboundBuffer(final AbstractChannel channel) {
        this.channel = channel;
        fireChannelWritabilityChangedTask = new ChannelWritabilityChangedTask(channel);
    }

    /**
     * Add the given message to this {@link ChannelOutboundBuffer}. The given {@link ChannelPromise} will be notified
     * once the message has been written.
     */
    public void addMessage(Object msg, int size, ChannelPromise promise) {
        Entry entry = Entry.newInstance(msg, size, total(msg), promise);
        if (tailEntry == null) {
            flushedEntry = null;
            tailEntry = entry;
        } else {
            Entry tail = tailEntry;
            tail.next = entry;
            tailEntry = entry;
        }
        if (unflushedEntry == null) {
            unflushedEntry = entry;
        }

        // increment pending bytes after adding the message to the unflushed entries.
        // See https://github.com/netty/netty/issues/1619
        incrementPendingOutboundBytes(size, true);
    }

    /**
     * Add a flush to this {@link ChannelOutboundBuffer}. This means all previously added messages are marked as
     * flushed and so you will be able to handle them.
     */
    public void addFlush() {
        // There is no need to process all entries if there was already a flush before and no new messages
        // were added in the meantime.
        //
        // See https://github.com/netty/netty/issues/2577
        Entry entry = unflushedEntry;
        if (entry != null) {
            if (flushedEntry == null) {
                // there is no flushedEntry yet, so start with the entry
                flushedEntry = entry;
            }
            do {
                flushed ++;
                if (!entry.promise.setUncancellable()) {
                    // Was cancelled so make sure we free up memory and notify about the freed bytes
                    int pending = entry.cancel();
                    decrementPendingOutboundBytes(pending, true);
                }
                entry = entry.next;
            } while (entry != null);

            // All flushed so reset unflushedEntry
            unflushedEntry = null;
        }
    }
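
    // Not part of the original source: a small worked example of how the three pointers move.
    //
    //   addMessage(A); addMessage(B);   // unflushedEntry -> A -> B, tailEntry -> B, flushedEntry == null
    //   addFlush();                     // flushedEntry -> A, unflushedEntry == null, flushed == 2
    //   addMessage(C);                  // unflushedEntry -> C, tailEntry -> C
    //   remove(); remove();             // A and B are completed; C stays queued until the next addFlush()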

    /**
     * Increment the pending bytes which will be written at some point.
     * This method is thread-safe!
     */
    void incrementPendingOutboundBytes(long size, boolean notifyWritability) {
        if (size == 0) {
            return;
        }

        long newWriteBufferSize = TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, size);
        if (newWriteBufferSize >= channel.config().getWriteBufferHighWaterMark()) {
            setUnwritable(notifyWritability);
        }
    }

    /**
     * Decrement the pending bytes which will be written at some point.
     * This method is thread-safe!
     */
    void decrementPendingOutboundBytes(long size, boolean notifyWritability) {
        if (size == 0) {
            return;
        }

        long newWriteBufferSize = TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, -size);
        if (newWriteBufferSize == 0
                || newWriteBufferSize <= channel.config().getWriteBufferLowWaterMark()) {
            setWritable(notifyWritability);
        }
    }
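
    // Not part of the original source: the water marks compared against above are per-channel
    // configuration. A minimal sketch of tuning them through the standard ChannelOption keys
    // (the values here are made up):
    //
    //     Bootstrap b = new Bootstrap();
    //     b.option(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 64 * 1024);
    //     b.option(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 32 * 1024);
    //
    // Once more than the high water mark is pending, isWritable() starts returning false and a
    // channelWritabilityChanged() event is fired; it flips back only after the backlog drains
    // below the low water mark.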

    private static long total(Object msg) {
        if (msg instanceof ByteBuf) {
            return ((ByteBuf) msg).readableBytes();
        }
        if (msg instanceof FileRegion) {
            return ((FileRegion) msg).count();
        }
        if (msg instanceof ByteBufHolder) {
            return ((ByteBufHolder) msg).content().readableBytes();
        }
        return -1;
    }
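
    // Not part of the original source: total(msg) only feeds Entry.total, which progress() reports
    // through ChannelProgressivePromise. The separate 'size' argument passed to addMessage()
    // (estimated by the channel's MessageSizeEstimator in the surrounding framework) is what is
    // added to totalPendingSize and therefore what the writability water marks are compared against.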

    /**
     * Return the current message to write, or {@code null} if nothing was flushed before and so nothing is ready to
     * be written.
     */
    public Object current() {
        Entry entry = flushedEntry;
        if (entry == null) {
            return null;
        }

        return entry.msg;
    }

    /**
     * Notify the {@link ChannelPromise} of the current message about writing progress.
     */
    public void progress(long amount) {
        Entry e = flushedEntry;
        assert e != null;
        ChannelPromise p = e.promise;
        if (p instanceof ChannelProgressivePromise) {
            long progress = e.progress + amount;
            e.progress = progress;
            ((ChannelProgressivePromise) p).tryProgress(progress, e.total);
        }
    }

    /**
     * Will remove the current message, mark its {@link ChannelPromise} as success and return {@code true}. If no
     * flushed message exists at the time this method is called it will return {@code false} to signal that no more
     * messages are ready to be handled.
     */
    public boolean remove() {
        Entry e = flushedEntry;
        if (e == null) {
            clearNioBuffers();
            return false;
        }
        Object msg = e.msg;

        ChannelPromise promise = e.promise;
        int size = e.pendingSize;
Remove MessageList from public API and change ChannelInbound/OutboundHandler accordingly
I must admit MessageList was a pain in the ass. Instead of forcing a
handler to always loop over a list of messages, this commit splits
messageReceived(ctx, list) into two event handlers:
- messageReceived(ctx, msg)
- messageReceivedLast(ctx)
When Netty reads one or more messages, the messageReceived(ctx, msg) event
is triggered for each message. Once the current read operation is
finished, messageReceivedLast() is triggered to tell the handler that
the last messageReceived() was the last message in the current batch.
Similarly, for outbound, write(ctx, list) has been split into two:
- write(ctx, msg)
- flush(ctx, promise)
Instead of writing a list of messages with a promise, a user is now
supposed to call write(msg) multiple times and then call flush() to
actually flush the buffered messages (see the usage sketch below).
Please note that write() does not carry a promise with it. You must call
flush() to get notified of completion (or you can use writeAndFlush()).
Other changes:
- Because MessageList is completely hidden, the codec framework uses
List<Object> instead of MessageList as an output parameter.
2013-07-08 12:03:40 +02:00
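To make the write/flush split concrete, here is a minimal usage sketch against the released Netty 4 Channel API; the class and method names WriteFlushExample and sendBatch are illustrative, not part of the framework.

import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.util.CharsetUtil;

final class WriteFlushExample {
    // write() only enqueues the message into the ChannelOutboundBuffer; nothing reaches the socket yet.
    static ChannelFuture sendBatch(Channel channel) {
        channel.write(Unpooled.copiedBuffer("first", CharsetUtil.UTF_8));
        channel.write(Unpooled.copiedBuffer("second", CharsetUtil.UTF_8));
        // writeAndFlush() enqueues the last message and flushes everything buffered so far;
        // the returned future completes when that last write has been handled.
        return channel.writeAndFlush(Unpooled.copiedBuffer("last", CharsetUtil.UTF_8));
    }
}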
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
removeEntry(e);
|
2013-07-08 12:03:40 +02:00
|
|
|
|
2014-02-07 20:52:37 +01:00
|
|
|
if (!e.cancelled) {
|
|
|
|
// only release message, notify and decrement if it was not canceled before.
|
2014-08-05 14:24:49 +02:00
|
|
|
ReferenceCountUtil.safeRelease(msg);
|
2014-02-07 20:52:37 +01:00
|
|
|
safeSuccess(promise);
|
2016-04-01 11:45:43 +02:00
|
|
|
decrementPendingOutboundBytes(size, true);
|
2014-02-07 20:52:37 +01:00
|
|
|
}
|
2013-07-22 10:44:33 +02:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
// recycle the entry
|
|
|
|
e.recycle();
|
|
|
|
|
2013-07-18 13:59:14 +02:00
|
|
|
return true;
|
|
|
|
}
|
2013-05-28 13:40:19 +02:00
|
|
|
|
2014-02-18 10:08:20 +01:00
|
|
|
/**
|
2014-08-05 14:24:49 +02:00
|
|
|
* Will remove the current message, mark its {@link ChannelPromise} as failure using the given {@link Throwable}
|
|
|
|
* and return {@code true}. If no flushed message exists at the time this method is called it will return
|
|
|
|
* {@code false} to signal that no more messages are ready to be handled.
|
2014-02-18 10:08:20 +01:00
|
|
|
*/
|
2014-08-05 14:24:49 +02:00
|
|
|
public boolean remove(Throwable cause) {
|
2015-05-05 10:32:18 +02:00
|
|
|
return remove0(cause, true);
|
|
|
|
}
|
|
|
|
|
|
|
|
private boolean remove0(Throwable cause, boolean notifyWritability) {
|
2014-08-05 14:24:49 +02:00
|
|
|
Entry e = flushedEntry;
|
|
|
|
if (e == null) {
|
2015-05-29 08:04:34 +02:00
|
|
|
clearNioBuffers();
|
2013-08-13 21:39:28 +02:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
Object msg = e.msg;
|
2013-05-28 13:40:19 +02:00
|
|
|
|
2013-08-13 21:39:28 +02:00
|
|
|
ChannelPromise promise = e.promise;
|
|
|
|
int size = e.pendingSize;
|
2013-05-28 13:40:19 +02:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
removeEntry(e);
|
2013-07-18 13:59:14 +02:00
|
|
|
|
2014-02-07 20:52:37 +01:00
|
|
|
if (!e.cancelled) {
|
|
|
|
// only release message, fail and decrement if it was not canceled before.
|
2014-08-05 14:24:49 +02:00
|
|
|
ReferenceCountUtil.safeRelease(msg);
|
2013-07-22 10:44:33 +02:00
|
|
|
|
2014-02-07 20:52:37 +01:00
|
|
|
safeFail(promise, cause);
|
2016-04-01 11:45:43 +02:00
|
|
|
decrementPendingOutboundBytes(size, notifyWritability);
|
2014-02-07 20:52:37 +01:00
|
|
|
}
|
2013-07-24 04:26:03 +02:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
// recycle the entry
|
|
|
|
e.recycle();
|
|
|
|
|
2013-05-28 13:40:19 +02:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
private void removeEntry(Entry e) {
|
|
|
|
if (-- flushed == 0) {
|
|
|
|
// processed everything
|
|
|
|
flushedEntry = null;
|
|
|
|
if (e == tailEntry) {
|
|
|
|
tailEntry = null;
|
|
|
|
unflushedEntry = null;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
flushedEntry = e.next;
|
|
|
|
}
|
2013-05-28 13:40:19 +02:00
|
|
|
}
|
|
|
|
|
2014-02-18 10:08:20 +01:00
|
|
|
/**
|
2014-08-05 14:24:49 +02:00
|
|
|
* Removes the fully written entries and updates the reader index of the partially written entry.
|
|
|
|
* This operation assumes all messages in this buffer are {@link ByteBuf}s.
|
2014-02-18 10:08:20 +01:00
|
|
|
*/
|
2014-08-05 14:24:49 +02:00
|
|
|
public void removeBytes(long writtenBytes) {
|
|
|
|
for (;;) {
|
2014-08-15 18:54:32 +02:00
|
|
|
Object msg = current();
|
|
|
|
if (!(msg instanceof ByteBuf)) {
|
|
|
|
assert writtenBytes == 0;
|
2014-08-05 14:24:49 +02:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2014-08-15 18:54:32 +02:00
|
|
|
final ByteBuf buf = (ByteBuf) msg;
|
2014-08-05 14:24:49 +02:00
|
|
|
final int readerIndex = buf.readerIndex();
|
|
|
|
final int readableBytes = buf.writerIndex() - readerIndex;
|
|
|
|
|
|
|
|
if (readableBytes <= writtenBytes) {
|
|
|
|
if (writtenBytes != 0) {
|
|
|
|
progress(readableBytes);
|
|
|
|
writtenBytes -= readableBytes;
|
|
|
|
}
|
|
|
|
remove();
|
|
|
|
} else { // readableBytes > writtenBytes
|
|
|
|
if (writtenBytes != 0) {
|
|
|
|
buf.readerIndex(readerIndex + (int) writtenBytes);
|
|
|
|
progress(writtenBytes);
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2015-05-29 08:04:34 +02:00
|
|
|
clearNioBuffers();
|
|
|
|
}
|
|
|
|
|
|
|
|
// Clear all ByteBuffers from the array so they can be GC'ed.
|
|
|
|
// See https://github.com/netty/netty/issues/3837
|
|
|
|
private void clearNioBuffers() {
|
|
|
|
int count = nioBufferCount;
|
|
|
|
if (count > 0) {
|
|
|
|
nioBufferCount = 0;
|
|
|
|
Arrays.fill(NIO_BUFFERS.get(), 0, count, null);
|
|
|
|
}
|
2013-05-28 13:40:19 +02:00
|
|
|
}
|
|
|
|
|
2014-02-18 10:08:20 +01:00
|
|
|
/**
|
2014-08-05 14:24:49 +02:00
|
|
|
* Returns an array of direct NIO buffers if the currently pending messages are made of {@link ByteBuf} only.
|
|
|
|
* {@link #nioBufferCount()} and {@link #nioBufferSize()} will return the number of NIO buffers in the returned
|
|
|
|
* array and the total number of readable bytes of the NIO buffers respectively.
|
|
|
|
* <p>
|
|
|
|
* Note that the returned array is reused and thus should not escape
|
|
|
|
* {@link AbstractChannel#doWrite(ChannelOutboundBuffer)}.
|
|
|
|
* Refer to {@link NioSocketChannel#doWrite(ChannelOutboundBuffer)} for an example.
|
|
|
|
* </p>
|
2014-02-18 10:08:20 +01:00
|
|
|
*/
|
2014-08-05 14:24:49 +02:00
|
|
|
public ByteBuffer[] nioBuffers() {
|
|
|
|
long nioBufferSize = 0;
|
|
|
|
int nioBufferCount = 0;
|
|
|
|
final InternalThreadLocalMap threadLocalMap = InternalThreadLocalMap.get();
|
|
|
|
ByteBuffer[] nioBuffers = NIO_BUFFERS.get(threadLocalMap);
|
|
|
|
Entry entry = flushedEntry;
|
|
|
|
while (isFlushedEntry(entry) && entry.msg instanceof ByteBuf) {
|
|
|
|
if (!entry.cancelled) {
|
|
|
|
ByteBuf buf = (ByteBuf) entry.msg;
|
|
|
|
final int readerIndex = buf.readerIndex();
|
|
|
|
final int readableBytes = buf.writerIndex() - readerIndex;
|
|
|
|
|
|
|
|
if (readableBytes > 0) {
|
2015-05-12 14:04:32 +02:00
|
|
|
if (Integer.MAX_VALUE - readableBytes < nioBufferSize) {
|
|
|
|
// If nioBufferSize + readableBytes would overflow an Integer, we stop populating the
|
|
|
|
// ByteBuffer array. This is done because BSD/OS X do not allow writing more bytes than
|
|
|
|
// Integer.MAX_VALUE with one writev(...) call and so will return 'EINVAL', which will
|
|
|
|
// raise an IOException. On Linux it may work depending on the
|
|
|
|
// architecture and kernel but to be safe we also enforce the limit here.
|
|
|
|
// That said, writing more than Integer.MAX_VALUE bytes is not a good idea anyway.
|
|
|
|
//
|
|
|
|
// See also:
|
|
|
|
// - https://www.freebsd.org/cgi/man.cgi?query=write&sektion=2
|
|
|
|
// - http://linux.die.net/man/2/writev
|
|
|
|
break;
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
nioBufferSize += readableBytes;
|
2014-08-13 21:47:00 +02:00
|
|
|
int count = entry.count;
|
|
|
|
if (count == -1) {
|
|
|
|
//noinspection ConstantValueVariableUse
|
|
|
|
entry.count = count = buf.nioBufferCount();
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
int neededSpace = nioBufferCount + count;
|
|
|
|
if (neededSpace > nioBuffers.length) {
|
2014-08-13 16:40:34 +02:00
|
|
|
nioBuffers = expandNioBufferArray(nioBuffers, neededSpace, nioBufferCount);
|
|
|
|
NIO_BUFFERS.set(threadLocalMap, nioBuffers);
|
2014-08-05 14:24:49 +02:00
|
|
|
}
|
|
|
|
if (count == 1) {
|
2014-08-13 21:47:00 +02:00
|
|
|
ByteBuffer nioBuf = entry.buf;
|
|
|
|
if (nioBuf == null) {
|
|
|
|
// cache the ByteBuffer as it may need to create a new ByteBuffer instance if it's a
|
|
|
|
// derived buffer
|
|
|
|
entry.buf = nioBuf = buf.internalNioBuffer(readerIndex, readableBytes);
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
nioBuffers[nioBufferCount ++] = nioBuf;
|
|
|
|
} else {
|
2014-08-13 21:47:00 +02:00
|
|
|
ByteBuffer[] nioBufs = entry.bufs;
|
|
|
|
if (nioBufs == null) {
|
|
|
|
// cache the ByteBuffers as they may be expensive to create in terms
|
|
|
|
// of Object allocation
|
|
|
|
entry.bufs = nioBufs = buf.nioBuffers();
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
nioBufferCount = fillBufferArray(nioBufs, nioBuffers, nioBufferCount);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
entry = entry.next;
|
|
|
|
}
|
|
|
|
this.nioBufferCount = nioBufferCount;
|
|
|
|
this.nioBufferSize = nioBufferSize;
|
|
|
|
|
|
|
|
return nioBuffers;
|
|
|
|
}
|
|
|
|
|
|
|
|
private static int fillBufferArray(ByteBuffer[] nioBufs, ByteBuffer[] nioBuffers, int nioBufferCount) {
|
|
|
|
for (ByteBuffer nioBuf: nioBufs) {
|
|
|
|
if (nioBuf == null) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
nioBuffers[nioBufferCount ++] = nioBuf;
|
|
|
|
}
|
|
|
|
return nioBufferCount;
|
|
|
|
}
|
|
|
|
|
2014-08-13 16:40:34 +02:00
|
|
|
private static ByteBuffer[] expandNioBufferArray(ByteBuffer[] array, int neededSpace, int size) {
|
|
|
|
int newCapacity = array.length;
|
|
|
|
do {
|
|
|
|
// double capacity until it is big enough
|
|
|
|
// See https://github.com/netty/netty/issues/1890
|
|
|
|
newCapacity <<= 1;
|
|
|
|
|
|
|
|
if (newCapacity < 0) {
|
|
|
|
throw new IllegalStateException();
|
|
|
|
}
|
|
|
|
|
|
|
|
} while (neededSpace > newCapacity);
|
|
|
|
|
|
|
|
ByteBuffer[] newArray = new ByteBuffer[newCapacity];
|
|
|
|
System.arraycopy(array, 0, newArray, 0, size);
|
|
|
|
|
|
|
|
return newArray;
|
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
/**
|
|
|
|
* Returns the number of {@link ByteBuffer}s that can be written out of the {@link ByteBuffer} array that was
|
|
|
|
* obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
|
|
|
|
* was called.
|
|
|
|
*/
|
|
|
|
public int nioBufferCount() {
|
|
|
|
return nioBufferCount;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Returns the number of bytes that can be written out of the {@link ByteBuffer} array that was
|
|
|
|
* obtained via {@link #nioBuffers()}. This method <strong>MUST</strong> be called after {@link #nioBuffers()}
|
|
|
|
* was called.
|
|
|
|
*/
|
|
|
|
public long nioBufferSize() {
|
|
|
|
return nioBufferSize;
|
|
|
|
}
|
|
|
|
|
2014-10-22 10:45:28 +02:00
|
|
|
/**
|
|
|
|
* Returns {@code true} if and only if {@linkplain #totalPendingWriteBytes() the total number of pending bytes} did
|
|
|
|
* not exceed the write watermark of the {@link Channel} and
|
|
|
|
* no {@linkplain #setUserDefinedWritability(int, boolean) user-defined writability flag} has been set to
|
|
|
|
* {@code false}.
|
|
|
|
*/
|
|
|
|
public boolean isWritable() {
|
|
|
|
return unwritable == 0;
|
|
|
|
}
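The writability state maintained here is what ChannelInboundHandler.channelWritabilityChanged() reports to user handlers. Here is a minimal sketch of how a handler might react, assuming the usual auto-read based backpressure pattern; BackpressureHandler is an illustrative name.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Stops reading while the outbound buffer is above the high water mark and resumes once it drains
// below the low water mark, as signalled by channelWritabilityChanged().
final class BackpressureHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        ctx.channel().config().setAutoRead(ctx.channel().isWritable());
        ctx.fireChannelWritabilityChanged();
    }
}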
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Returns {@code true} if and only if the user-defined writability flag at the specified index is set to
|
|
|
|
* {@code true}.
|
|
|
|
*/
|
|
|
|
public boolean getUserDefinedWritability(int index) {
|
|
|
|
return (unwritable & writabilityMask(index)) == 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Sets a user-defined writability flag at the specified index.
|
|
|
|
*/
|
|
|
|
public void setUserDefinedWritability(int index, boolean writable) {
|
|
|
|
if (writable) {
|
|
|
|
setUserDefinedWritability(index);
|
|
|
|
} else {
|
|
|
|
clearUserDefinedWritability(index);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
private void setUserDefinedWritability(int index) {
|
|
|
|
final int mask = ~writabilityMask(index);
|
|
|
|
for (;;) {
|
|
|
|
final int oldValue = unwritable;
|
|
|
|
final int newValue = oldValue & mask;
|
|
|
|
if (UNWRITABLE_UPDATER.compareAndSet(this, oldValue, newValue)) {
|
|
|
|
if (oldValue != 0 && newValue == 0) {
|
2016-04-01 11:45:43 +02:00
|
|
|
fireChannelWritabilityChanged();
|
2014-10-22 10:45:28 +02:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
private void clearUserDefinedWritability(int index) {
|
|
|
|
final int mask = writabilityMask(index);
|
|
|
|
for (;;) {
|
|
|
|
final int oldValue = unwritable;
|
|
|
|
final int newValue = oldValue | mask;
|
|
|
|
if (UNWRITABLE_UPDATER.compareAndSet(this, oldValue, newValue)) {
|
|
|
|
if (oldValue == 0 && newValue != 0) {
|
2016-04-01 11:45:43 +02:00
|
|
|
fireChannelWritabilityChanged();
|
2014-10-22 10:45:28 +02:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
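// Bit 0 of 'unwritable' tracks the high/low water mark state (see setWritable/setUnwritable below),
// so only bits 1-31 are available for user-defined writability flags; hence the 1~31 range check.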
private static int writabilityMask(int index) {
|
|
|
|
if (index < 1 || index > 31) {
|
|
|
|
throw new IllegalArgumentException("index: " + index + " (expected: 1~31)");
|
|
|
|
}
|
|
|
|
return 1 << index;
|
|
|
|
}
|
|
|
|
|
2016-04-01 11:45:43 +02:00
|
|
|
private void setWritable(boolean notify) {
|
2014-10-22 10:45:28 +02:00
|
|
|
for (;;) {
|
|
|
|
final int oldValue = unwritable;
|
|
|
|
final int newValue = oldValue & ~1;
|
|
|
|
if (UNWRITABLE_UPDATER.compareAndSet(this, oldValue, newValue)) {
|
2016-04-01 11:45:43 +02:00
|
|
|
if (notify && oldValue != 0 && newValue == 0) {
|
|
|
|
fireChannelWritabilityChanged();
|
2014-10-22 10:45:28 +02:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2016-04-01 11:45:43 +02:00
|
|
|
private void setUnwritable(boolean notify) {
|
2014-10-22 10:45:28 +02:00
|
|
|
for (;;) {
|
|
|
|
final int oldValue = unwritable;
|
|
|
|
final int newValue = oldValue | 1;
|
|
|
|
if (UNWRITABLE_UPDATER.compareAndSet(this, oldValue, newValue)) {
|
2016-04-01 11:45:43 +02:00
|
|
|
if (notify && oldValue == 0 && newValue != 0) {
|
|
|
|
fireChannelWritabilityChanged();
|
2014-10-22 10:45:28 +02:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
}
|
|
|
|
|
2016-04-01 11:45:43 +02:00
|
|
|
private void fireChannelWritabilityChanged() {
|
|
|
|
// Always invoke it later to prevent a re-entrance bug.
|
|
|
|
// See https://github.com/netty/netty/issues/5028
|
|
|
|
channel.eventLoop().execute(fireChannelWritabilityChangedTask);
|
2014-12-10 10:36:53 +01:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
/**
|
|
|
|
* Returns the number of flushed messages in this {@link ChannelOutboundBuffer}.
|
|
|
|
*/
|
|
|
|
public int size() {
|
|
|
|
return flushed;
|
2013-05-28 13:40:19 +02:00
|
|
|
}
|
|
|
|
|
2014-02-18 10:08:20 +01:00
|
|
|
/**
|
2014-08-05 14:24:49 +02:00
|
|
|
* Returns {@code true} if there are flushed messages in this {@link ChannelOutboundBuffer} or {@code false}
|
|
|
|
* otherwise.
|
2014-02-18 10:08:20 +01:00
|
|
|
*/
|
2014-08-05 14:24:49 +02:00
|
|
|
public boolean isEmpty() {
|
|
|
|
return flushed == 0;
|
|
|
|
}
|
|
|
|
|
2015-05-05 10:32:18 +02:00
|
|
|
void failFlushed(Throwable cause, boolean notify) {
|
2013-07-23 07:33:37 +02:00
|
|
|
// Make sure that this method does not reenter. A listener added to the current promise can be notified by the
|
|
|
|
// current thread in the tryFailure() call of the loop below, and the listener can trigger another fail() call
|
|
|
|
// indirectly (usually by closing the channel.)
|
|
|
|
//
|
|
|
|
// See https://github.com/netty/netty/issues/1501
|
2013-07-18 03:23:26 +02:00
|
|
|
if (inFail) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2013-07-23 07:33:37 +02:00
|
|
|
try {
|
|
|
|
inFail = true;
|
|
|
|
for (;;) {
|
2015-05-05 10:32:18 +02:00
|
|
|
if (!remove0(cause, notify)) {
|
2013-07-23 07:33:37 +02:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} finally {
|
|
|
|
inFail = false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
void close(final ClosedChannelException cause) {
|
2013-07-23 07:33:37 +02:00
|
|
|
if (inFail) {
|
2015-11-20 06:09:23 +01:00
|
|
|
channel.eventLoop().execute(new OneTimeTask() {
|
2013-07-23 07:33:37 +02:00
|
|
|
@Override
|
|
|
|
public void run() {
|
|
|
|
close(cause);
|
|
|
|
}
|
|
|
|
});
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2013-07-18 03:23:26 +02:00
|
|
|
inFail = true;
|
|
|
|
|
2013-07-23 07:33:37 +02:00
|
|
|
if (channel.isOpen()) {
|
|
|
|
throw new IllegalStateException("close() must be invoked after the channel is closed.");
|
|
|
|
}
|
|
|
|
|
2013-08-13 21:39:28 +02:00
|
|
|
if (!isEmpty()) {
|
2013-07-23 07:33:37 +02:00
|
|
|
throw new IllegalStateException("close() must be invoked after all flushed writes are handled.");
|
|
|
|
}
|
|
|
|
|
2013-07-17 14:02:20 +02:00
|
|
|
// Release all unflushed messages.
|
|
|
|
try {
|
2014-08-05 14:24:49 +02:00
|
|
|
Entry e = unflushedEntry;
|
|
|
|
while (e != null) {
|
2013-07-23 07:33:37 +02:00
|
|
|
// Just decrease; do not trigger any events via decrementPendingOutboundBytes()
|
2013-08-13 21:39:28 +02:00
|
|
|
int size = e.pendingSize;
|
2014-06-17 18:02:41 +02:00
|
|
|
TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, -size);
|
2013-08-05 14:58:16 +02:00
|
|
|
|
2014-02-07 20:52:37 +01:00
|
|
|
if (!e.cancelled) {
|
2014-08-05 14:24:49 +02:00
|
|
|
ReferenceCountUtil.safeRelease(e.msg);
|
2014-02-07 20:52:37 +01:00
|
|
|
safeFail(e.promise, cause);
|
|
|
|
}
|
2014-08-05 14:24:49 +02:00
|
|
|
e = e.recycleAndGetNext();
|
2013-07-17 14:02:20 +02:00
|
|
|
}
|
|
|
|
} finally {
|
2013-07-18 03:23:26 +02:00
|
|
|
inFail = false;
|
2013-05-28 13:40:19 +02:00
|
|
|
}
|
2015-05-29 08:04:34 +02:00
|
|
|
clearNioBuffers();
|
2013-07-18 13:59:14 +02:00
|
|
|
}
|
2013-07-18 03:29:34 +02:00
|
|
|
|
2014-02-10 23:52:24 +01:00
|
|
|
private static void safeSuccess(ChannelPromise promise) {
|
2014-02-11 00:03:46 +01:00
|
|
|
if (!(promise instanceof VoidChannelPromise) && !promise.trySuccess()) {
|
2016-03-29 16:00:27 +02:00
|
|
|
Throwable err = promise.cause();
|
|
|
|
if (err == null) {
|
|
|
|
logger.warn("Failed to mark a promise as success because it has succeeded already: {}", promise);
|
|
|
|
} else {
|
|
|
|
logger.warn(
|
|
|
|
"Failed to mark a promise as success because it has failed already: {}, unnotified cause {}",
|
|
|
|
promise, stackTraceToString(err));
|
|
|
|
}
|
2014-02-10 23:52:24 +01:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-07-18 13:59:14 +02:00
|
|
|
private static void safeFail(ChannelPromise promise, Throwable cause) {
|
|
|
|
if (!(promise instanceof VoidChannelPromise) && !promise.tryFailure(cause)) {
|
2016-03-29 16:00:27 +02:00
|
|
|
Throwable err = promise.cause();
|
|
|
|
if (err == null) {
|
|
|
|
logger.warn("Failed to mark a promise as failure because it has succeeded already: {}", promise, cause);
|
|
|
|
} else {
|
|
|
|
logger.warn(
|
|
|
|
"Failed to mark a promise as failure because it hass failed already: {}, unnotified cause {}",
|
|
|
|
promise, stackTraceToString(err), cause);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
private static String stackTraceToString(Throwable cause) {
|
|
|
|
ByteArrayOutputStream out = new ByteArrayOutputStream();
|
|
|
|
PrintStream pout = new PrintStream(out);
|
|
|
|
cause.printStackTrace(pout);
|
|
|
|
pout.flush();
|
|
|
|
try {
|
|
|
|
return new String(out.toByteArray());
|
|
|
|
} finally {
|
|
|
|
try {
|
|
|
|
out.close();
|
|
|
|
} catch (IOException ignore) {
|
|
|
|
// ignore as should never happen
|
|
|
|
}
|
2013-07-18 03:29:34 +02:00
|
|
|
}
|
2013-07-18 03:23:26 +02:00
|
|
|
}
|
2013-08-13 21:39:28 +02:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
@Deprecated
|
2013-08-21 19:28:37 +02:00
|
|
|
public void recycle() {
|
2014-08-05 14:24:49 +02:00
|
|
|
// NOOP
|
2013-08-21 19:28:37 +02:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
public long totalPendingWriteBytes() {
|
2014-02-10 23:52:24 +01:00
|
|
|
return totalPendingSize;
|
2014-01-03 17:16:12 +01:00
|
|
|
}
|
|
|
|
|
2015-06-09 19:44:05 +02:00
|
|
|
/**
|
|
|
|
* Get how many bytes can be written until {@link #isWritable()} returns {@code false}.
|
|
|
|
* This quantity will always be non-negative. If {@link #isWritable()} is {@code false} then 0 is returned.
|
|
|
|
*/
|
2015-06-10 18:10:02 +02:00
|
|
|
public long bytesBeforeUnwritable() {
|
2015-06-09 19:44:05 +02:00
|
|
|
long bytes = channel.config().getWriteBufferHighWaterMark() - totalPendingSize;
|
|
|
|
// If bytes is negative we know we are not writable, but if bytes is non-negative we have to check writability.
|
|
|
|
// Note that totalPendingSize and isWritable() use different volatile variables that are not synchronized
|
|
|
|
// together. totalPendingSize will be updated before isWritable().
|
|
|
|
if (bytes > 0) {
|
|
|
|
return isWritable() ? bytes : 0;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Get how many bytes must be drained from the underlying buffer until {@link #isWritable()} returns {@code true}.
|
|
|
|
* This quantity will always be non-negative. If {@link #isWritable()} is {@code true} then 0 is returned.
|
|
|
|
*/
|
|
|
|
public long bytesBeforeWritable() {
|
|
|
|
long bytes = totalPendingSize - channel.config().getWriteBufferLowWaterMark();
|
|
|
|
// If bytes is negative we know we are writable, but if bytes is non-negative we have to check writability.
|
|
|
|
// Note that totalPendingSize and isWritable() use different volatile variables that are not synchronized
|
|
|
|
// together. totalPendingSize will be updated before isWritable().
|
|
|
|
if (bytes > 0) {
|
|
|
|
return isWritable() ? 0 : bytes;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2014-02-18 10:08:20 +01:00
|
|
|
/**
|
2014-08-05 14:24:49 +02:00
|
|
|
* Call {@link MessageProcessor#processMessage(Object)} for each flushed message
|
|
|
|
* in this {@link ChannelOutboundBuffer} until {@link MessageProcessor#processMessage(Object)}
|
|
|
|
* returns {@code false} or there are no more flushed messages to process.
|
2014-02-18 10:08:20 +01:00
|
|
|
*/
|
2014-08-05 14:24:49 +02:00
|
|
|
public void forEachFlushedMessage(MessageProcessor processor) throws Exception {
|
|
|
|
if (processor == null) {
|
|
|
|
throw new NullPointerException("processor");
|
|
|
|
}
|
2014-02-18 10:08:20 +01:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
Entry entry = flushedEntry;
|
|
|
|
if (entry == null) {
|
|
|
|
return;
|
|
|
|
}
|
2014-02-18 10:08:20 +01:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
do {
|
|
|
|
if (!entry.cancelled) {
|
|
|
|
if (!processor.processMessage(entry.msg)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
entry = entry.next;
|
|
|
|
} while (isFlushedEntry(entry));
|
2014-02-18 10:08:20 +01:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
private boolean isFlushedEntry(Entry e) {
|
|
|
|
return e != null && e != unflushedEntry;
|
2014-07-22 22:27:50 +02:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
public interface MessageProcessor {
|
|
|
|
/**
|
|
|
|
* Will be called for each flushed message until either there are no more flushed messages or this
|
|
|
|
* method returns {@code false}.
|
|
|
|
*/
|
|
|
|
boolean processMessage(Object msg) throws Exception;
|
2014-02-18 10:08:20 +01:00
|
|
|
}
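As an illustration of the MessageProcessor contract above, here is a minimal sketch that sums the readable bytes of the flushed ByteBuf messages; PendingByteCounter is an illustrative name, and forEachFlushedMessage(new PendingByteCounter()) would drive it over every flushed, non-cancelled message in order.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelOutboundBuffer;

// Sums the readable bytes of flushed ByteBuf messages; stops at the first non-ByteBuf message.
final class PendingByteCounter implements ChannelOutboundBuffer.MessageProcessor {
    long readableBytes;

    @Override
    public boolean processMessage(Object msg) {
        if (msg instanceof ByteBuf) {
            readableBytes += ((ByteBuf) msg).readableBytes();
            return true;  // keep iterating
        }
        return false;     // returning false stops forEachFlushedMessage()
    }
}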
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
static final class Entry {
|
|
|
|
private static final Recycler<Entry> RECYCLER = new Recycler<Entry>() {
|
|
|
|
@Override
|
|
|
|
protected Entry newObject(Handle handle) {
|
|
|
|
return new Entry(handle);
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
private final Handle handle;
|
|
|
|
Entry next;
|
2013-08-13 21:39:28 +02:00
|
|
|
Object msg;
|
2014-08-13 21:47:00 +02:00
|
|
|
ByteBuffer[] bufs;
|
|
|
|
ByteBuffer buf;
|
2013-08-13 21:39:28 +02:00
|
|
|
ChannelPromise promise;
|
|
|
|
long progress;
|
|
|
|
long total;
|
|
|
|
int pendingSize;
|
2014-08-13 21:47:00 +02:00
|
|
|
int count = -1;
|
2014-02-07 20:52:37 +01:00
|
|
|
boolean cancelled;
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
private Entry(Handle handle) {
|
|
|
|
this.handle = handle;
|
2014-02-18 10:08:20 +01:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
static Entry newInstance(Object msg, int size, long total, ChannelPromise promise) {
|
|
|
|
Entry entry = RECYCLER.get();
|
|
|
|
entry.msg = msg;
|
|
|
|
entry.pendingSize = size;
|
|
|
|
entry.total = total;
|
|
|
|
entry.promise = promise;
|
|
|
|
return entry;
|
2014-02-18 10:08:20 +01:00
|
|
|
}
|
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
int cancel() {
|
2014-02-07 20:52:37 +01:00
|
|
|
if (!cancelled) {
|
|
|
|
cancelled = true;
|
|
|
|
int pSize = pendingSize;
|
|
|
|
|
|
|
|
// release message and replace with an empty buffer
|
2014-08-05 14:24:49 +02:00
|
|
|
ReferenceCountUtil.safeRelease(msg);
|
2014-02-07 20:52:37 +01:00
|
|
|
msg = Unpooled.EMPTY_BUFFER;
|
|
|
|
|
|
|
|
pendingSize = 0;
|
|
|
|
total = 0;
|
|
|
|
progress = 0;
|
2014-08-13 21:47:00 +02:00
|
|
|
bufs = null;
|
|
|
|
buf = null;
|
2014-02-07 20:52:37 +01:00
|
|
|
return pSize;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
2013-08-13 21:39:28 +02:00
|
|
|
|
2014-08-05 14:24:49 +02:00
|
|
|
void recycle() {
|
|
|
|
next = null;
|
2014-08-13 21:47:00 +02:00
|
|
|
bufs = null;
|
|
|
|
buf = null;
|
2013-08-13 21:39:28 +02:00
|
|
|
msg = null;
|
|
|
|
promise = null;
|
|
|
|
progress = 0;
|
|
|
|
total = 0;
|
|
|
|
pendingSize = 0;
|
2014-08-13 21:47:00 +02:00
|
|
|
count = -1;
|
2014-02-07 20:52:37 +01:00
|
|
|
cancelled = false;
|
2014-08-05 14:24:49 +02:00
|
|
|
RECYCLER.recycle(this, handle);
|
|
|
|
}
|
|
|
|
|
|
|
|
Entry recycleAndGetNext() {
|
|
|
|
Entry next = this.next;
|
|
|
|
recycle();
|
|
|
|
return next;
|
2013-08-13 21:39:28 +02:00
|
|
|
}
|
|
|
|
}
|
2016-04-01 11:45:43 +02:00
|
|
|
|
|
|
|
private static final class ChannelWritabilityChangedTask implements Runnable {
|
|
|
|
private final Channel channel;
|
|
|
|
private boolean writable = true;
|
|
|
|
|
|
|
|
ChannelWritabilityChangedTask(Channel channel) {
|
|
|
|
this.channel = channel;
|
|
|
|
}
|
|
|
|
|
|
|
|
@Override
|
|
|
|
public void run() {
|
|
|
|
if (channel.isActive()) {
|
|
|
|
boolean newWritable = channel.isWritable();
|
|
|
|
|
|
|
|
if (writable != newWritable) {
|
|
|
|
writable = newWritable;
|
|
|
|
channel.pipeline().fireChannelWritabilityChanged();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2013-05-28 13:40:19 +02:00
|
|
|
}
|