Revamp the core API to reduce memory footprint and consumption

The API changes made so far turned out to increase the memory footprint
and consumption, while our intention was actually to decrease them.

Memory consumption issue:
When there are many connections that do not exchange data frequently,
the old Netty 4 API spent a lot more memory than Netty 3, because it
always allocated a per-handler buffer for each connection unless a user
explicitly stated otherwise. Under a typical real-world load, a client
does not keep sending requests without pausing, so the idea of having a
buffer whose life cycle is bound to the life cycle of a connection
did not work as expected.

Memory footprint issue:
The old Netty 4 API decreased the overall memory footprint by a great
deal in many cases. This was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as a pipeline contains only a few handlers. However, for a
highly modular application with many handlers that handle connections
lasting for a relatively short period, it actually makes the memory
footprint issue much worse.

Changes:
All in all, this is about retaining all the good changes we made in 4 so
far, such as the better thread model, while going back to the way we
dealt with message events in 3.

To fix the memory consumption/footprint issues mentioned above, we made
the hard decision to break backward compatibility again with the
following changes (an illustrative handler sketch follows this message):
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
  - Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
  - Similar changes were made to the adapter classes
- Introduce MessageList, which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
  - Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler, which is sometimes more useful than
  ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
  - Similar to ReceiveBufferSizePredictor in Netty 3
  - Some existing configuration properties such as
    DatagramChannelConfig.receivePacketSize are gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf

This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
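
To make the new inbound contract concrete, here is a minimal, illustrative sketch of a handler written
against the API described above. The class name and its counting logic are made up for illustration, the
method signature is taken from the list above, and later Netty 4 revisions changed this API again, so treat
it as a sketch of the contract rather than a drop-in example:

    public class CountingHandler extends ChannelInboundHandlerAdapter {
        private int received;

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageList<Object> msgs) {
            // One callback per read carrying a batch of messages, replacing inboundBufferUpdated(ctx)
            // and the per-handler, per-connection inbound buffer of the earlier Netty 4 API.
            received += msgs.size();
        }
    }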
/*
 * Copyright 2013 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

package io.netty.util;

import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

import java.lang.ref.WeakReference;
import java.util.Arrays;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import static io.netty.util.internal.MathUtil.safeFindNextPositivePowerOfTwo;
import static java.lang.Math.max;
import static java.lang.Math.min;

/**
 * Light-weight object pool based on a thread-local stack.
 *
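 * <p>
 * A minimal, illustrative usage sketch (the pooled class {@code MyPooledObject} and its factory method are
 * hypothetical; the pattern simply combines {@link #get()}, {@link #newObject(Handle)} and
 * {@link Handle#recycle(Object)} as defined below):
 * <pre>{@code
 * public final class MyPooledObject {
 *     private static final Recycler<MyPooledObject> RECYCLER = new Recycler<MyPooledObject>() {
 *         protected MyPooledObject newObject(Handle<MyPooledObject> handle) {
 *             return new MyPooledObject(handle);
 *         }
 *     };
 *
 *     public static MyPooledObject newInstance() {
 *         // Either pops a previously recycled instance or calls newObject(...) above.
 *         return RECYCLER.get();
 *     }
 *
 *     private final Handle<MyPooledObject> handle;
 *
 *     private MyPooledObject(Handle<MyPooledObject> handle) {
 *         this.handle = handle;
 *     }
 *
 *     public void recycle() {
 *         // Returns this instance to the calling thread's stack for later reuse.
 *         handle.recycle(this);
 *     }
 * }
 * }</pre>
 *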
 * @param <T> the type of the pooled object
 */
public abstract class Recycler<T> {

    private static final InternalLogger logger = InternalLoggerFactory.getInstance(Recycler.class);

    @SuppressWarnings("rawtypes")
    private static final Handle NOOP_HANDLE = new Handle() {
        @Override
        public void recycle(Object object) {
            // NOOP
        }
    };
    private static final AtomicInteger ID_GENERATOR = new AtomicInteger(Integer.MIN_VALUE);
    private static final int OWN_THREAD_ID = ID_GENERATOR.getAndIncrement();
    private static final int DEFAULT_INITIAL_MAX_CAPACITY_PER_THREAD = 4 * 1024; // Use 4k instances as default.
    private static final int DEFAULT_MAX_CAPACITY_PER_THREAD;
    private static final int INITIAL_CAPACITY;
    private static final int MAX_SHARED_CAPACITY_FACTOR;
    private static final int MAX_DELAYED_QUEUES_PER_THREAD;
    private static final int LINK_CAPACITY;
    private static final int RATIO;

    static {
        // In the future, we might have different maxCapacity for different object types.
        // e.g. io.netty.recycler.maxCapacity.writeTask
        //      io.netty.recycler.maxCapacity.outboundBuffer
        int maxCapacityPerThread = SystemPropertyUtil.getInt("io.netty.recycler.maxCapacityPerThread",
                SystemPropertyUtil.getInt("io.netty.recycler.maxCapacity", DEFAULT_INITIAL_MAX_CAPACITY_PER_THREAD));
        if (maxCapacityPerThread < 0) {
            maxCapacityPerThread = DEFAULT_INITIAL_MAX_CAPACITY_PER_THREAD;
        }

        DEFAULT_MAX_CAPACITY_PER_THREAD = maxCapacityPerThread;

        MAX_SHARED_CAPACITY_FACTOR = max(2,
                SystemPropertyUtil.getInt("io.netty.recycler.maxSharedCapacityFactor",
                        2));

        MAX_DELAYED_QUEUES_PER_THREAD = max(0,
                SystemPropertyUtil.getInt("io.netty.recycler.maxDelayedQueuesPerThread",
                        // We use the same value as the default EventLoop number
                        NettyRuntime.availableProcessors() * 2));

        LINK_CAPACITY = safeFindNextPositivePowerOfTwo(
                max(SystemPropertyUtil.getInt("io.netty.recycler.linkCapacity", 16), 16));

        // By default we allow one push to a Recycler for each 8th try on handles that were never recycled before.
        // This should help to slowly increase the capacity of the recycler while not being too sensitive to
        // allocation bursts.
        RATIO = safeFindNextPositivePowerOfTwo(SystemPropertyUtil.getInt("io.netty.recycler.ratio", 8));

        if (logger.isDebugEnabled()) {
            if (DEFAULT_MAX_CAPACITY_PER_THREAD == 0) {
                logger.debug("-Dio.netty.recycler.maxCapacityPerThread: disabled");
                logger.debug("-Dio.netty.recycler.maxSharedCapacityFactor: disabled");
                logger.debug("-Dio.netty.recycler.linkCapacity: disabled");
                logger.debug("-Dio.netty.recycler.ratio: disabled");
            } else {
                logger.debug("-Dio.netty.recycler.maxCapacityPerThread: {}", DEFAULT_MAX_CAPACITY_PER_THREAD);
                logger.debug("-Dio.netty.recycler.maxSharedCapacityFactor: {}", MAX_SHARED_CAPACITY_FACTOR);
                logger.debug("-Dio.netty.recycler.linkCapacity: {}", LINK_CAPACITY);
                logger.debug("-Dio.netty.recycler.ratio: {}", RATIO);
            }
        }

        INITIAL_CAPACITY = min(DEFAULT_MAX_CAPACITY_PER_THREAD, 256);
    }
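
    // Illustrative note (not part of the original source): because the defaults above are read from system
    // properties, a deployment can tune the pool at JVM startup without code changes, e.g. (hypothetical values):
    //
    //   java -Dio.netty.recycler.maxCapacityPerThread=8192 \
    //        -Dio.netty.recycler.maxSharedCapacityFactor=2 \
    //        -Dio.netty.recycler.linkCapacity=16 \
    //        -Dio.netty.recycler.ratio=8 \
    //        -Dio.netty.recycler.maxDelayedQueuesPerThread=16 ...
    //
    // Setting -Dio.netty.recycler.maxCapacityPerThread=0 disables pooling: get() then always allocates a fresh
    // object via newObject(...) with a no-op handle.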

    private final int maxCapacityPerThread;
    private final int maxSharedCapacityFactor;
    private final int ratioMask;
    private final int maxDelayedQueuesPerThread;

    private final FastThreadLocal<Stack<T>> threadLocal = new FastThreadLocal<Stack<T>>() {
        @Override
        protected Stack<T> initialValue() {
            return new Stack<T>(Recycler.this, Thread.currentThread(), maxCapacityPerThread, maxSharedCapacityFactor,
                    ratioMask, maxDelayedQueuesPerThread);
        }

        @Override
        protected void onRemoval(Stack<T> value) {
            // Let us remove the WeakOrderQueue from the WeakHashMap directly if it is safe, to remove some overhead.
            if (value.threadRef.get() == Thread.currentThread()) {
                if (DELAYED_RECYCLED.isSet()) {
                    DELAYED_RECYCLED.get().remove(value);
                }
            }
        }
    };

    protected Recycler() {
        this(DEFAULT_MAX_CAPACITY_PER_THREAD);
    }

    protected Recycler(int maxCapacityPerThread) {
        this(maxCapacityPerThread, MAX_SHARED_CAPACITY_FACTOR);
    }

    protected Recycler(int maxCapacityPerThread, int maxSharedCapacityFactor) {
        this(maxCapacityPerThread, maxSharedCapacityFactor, RATIO, MAX_DELAYED_QUEUES_PER_THREAD);
    }

    protected Recycler(int maxCapacityPerThread, int maxSharedCapacityFactor,
                       int ratio, int maxDelayedQueuesPerThread) {
        ratioMask = safeFindNextPositivePowerOfTwo(ratio) - 1;
        if (maxCapacityPerThread <= 0) {
            this.maxCapacityPerThread = 0;
            this.maxSharedCapacityFactor = 1;
            this.maxDelayedQueuesPerThread = 0;
        } else {
            this.maxCapacityPerThread = maxCapacityPerThread;
            this.maxSharedCapacityFactor = max(1, maxSharedCapacityFactor);
            this.maxDelayedQueuesPerThread = max(0, maxDelayedQueuesPerThread);
        }
    }
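
    // Illustrative note (not part of the original source): a subclass may also size its own pool explicitly via the
    // constructors above, e.g. a hypothetical pool capped at 256 instances per thread:
    //
    //   private static final Recycler<PooledThing> SMALL_POOL = new Recycler<PooledThing>(256) {
    //       @Override
    //       protected PooledThing newObject(Handle<PooledThing> handle) {
    //           return new PooledThing(handle);
    //       }
    //   };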

    @SuppressWarnings("unchecked")
    public final T get() {
        if (maxCapacityPerThread == 0) {
            return newObject((Handle<T>) NOOP_HANDLE);
        }
        Stack<T> stack = threadLocal.get();
        DefaultHandle<T> handle = stack.pop();
        if (handle == null) {
            handle = stack.newHandle();
            handle.value = newObject(handle);
        }
        return (T) handle.value;
    }

    /**
     * @deprecated use {@link Handle#recycle(Object)}.
     */
    @Deprecated
    public final boolean recycle(T o, Handle<T> handle) {
        if (handle == NOOP_HANDLE) {
            return false;
        }

        DefaultHandle<T> h = (DefaultHandle<T>) handle;
        if (h.stack.parent != this) {
            return false;
        }

        h.recycle(o);
        return true;
    }
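
    // Illustrative note (not part of the original source): new code should let the pooled object keep its Handle
    // and call handle.recycle(object) itself (see the class-level example above) rather than using the deprecated
    // recycle(T, Handle) method, which only accepts handles owned by this Recycler.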

    final int threadLocalCapacity() {
        return threadLocal.get().elements.length;
    }

    final int threadLocalSize() {
        return threadLocal.get().size;
    }

    protected abstract T newObject(Handle<T> handle);

    public interface Handle<T> {
        void recycle(T object);
    }

    static final class DefaultHandle<T> implements Handle<T> {
        private int lastRecycledId;
        private int recycleId;

        boolean hasBeenRecycled;

        private Stack<?> stack;
        private Object value;

        DefaultHandle(Stack<?> stack) {
            this.stack = stack;
        }

        @Override
        public void recycle(Object object) {
            if (object != value) {
                throw new IllegalArgumentException("object does not belong to handle");
            }

            Stack<?> stack = this.stack;
            if (lastRecycledId != recycleId || stack == null) {
                throw new IllegalStateException("recycled already");
            }

            stack.push(this);
        }
    }

    private static final FastThreadLocal<Map<Stack<?>, WeakOrderQueue>> DELAYED_RECYCLED =
            new FastThreadLocal<Map<Stack<?>, WeakOrderQueue>>() {
        @Override
        protected Map<Stack<?>, WeakOrderQueue> initialValue() {
            return new WeakHashMap<Stack<?>, WeakOrderQueue>();
        }
    };

    // a queue that makes only moderate guarantees about visibility: items are seen in the correct order,
    // but we aren't absolutely guaranteed to ever see anything at all, thereby keeping the queue cheap to maintain
    private static final class WeakOrderQueue {
|
|
|
|
|
2016-07-25 11:15:56 +02:00
|
|
|
static final WeakOrderQueue DUMMY = new WeakOrderQueue();
|
|
|
|
|
2014-06-13 11:56:35 +02:00
|
|
|
// Let Link extend AtomicInteger for intrinsics. The Link itself will be used as writerIndex.
|
|
|
|
@SuppressWarnings("serial")
|
2017-12-21 09:34:36 +01:00
|
|
|
static final class Link extends AtomicInteger {
|
2014-06-13 11:56:35 +02:00
|
|
|
private final DefaultHandle<?>[] elements = new DefaultHandle[LINK_CAPACITY];
|
|
|
|
|
|
|
|
private int readIndex;
|
2017-12-21 09:34:36 +01:00
|
|
|
Link next;
|
|
|
|
}

        // This acts as a placeholder for the head Link but will also reclaim space once finalized.
        // It's important this does not hold any reference to either Stack or WeakOrderQueue.
        static final class Head {
            private final AtomicInteger availableSharedCapacity;

            Link link;

            Head(AtomicInteger availableSharedCapacity) {
                this.availableSharedCapacity = availableSharedCapacity;
            }

            /// TODO: In the future when we move to Java9+ we should use java.lang.ref.Cleaner.
            @Override
            protected void finalize() throws Throwable {
                try {
                    super.finalize();
                } finally {
                    Link head = link;
                    link = null;
                    while (head != null) {
                        reclaimSpace(LINK_CAPACITY);
                        Link next = head.next;
                        // Unlink to help GC and guard against GC nepotism.
                        head.next = null;
                        head = next;
                    }
                }
            }

            void reclaimSpace(int space) {
                assert space >= 0;
                availableSharedCapacity.addAndGet(space);
            }

            boolean reserveSpace(int space) {
                return reserveSpace(availableSharedCapacity, space);
            }

            static boolean reserveSpace(AtomicInteger availableSharedCapacity, int space) {
                assert space >= 0;
                for (;;) {
                    int available = availableSharedCapacity.get();
                    if (available < space) {
                        return false;
                    }
                    if (availableSharedCapacity.compareAndSet(available, available - space)) {
                        return true;
                    }
                }
            }
        }
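
        // Note on the capacity accounting above: before a new Link is created, LINK_CAPACITY slots must be
        // reserved from the owning Stack's shared budget via the CAS loop in reserveSpace(); reclaimSpace()
        // returns the budget when a drained Link is dropped in transfer() or when the Head is finalized.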

        // chain of data items
        private final Head head;
        private Link tail;
        // pointer to another queue of delayed items for the same stack
        private WeakOrderQueue next;
        private final WeakReference<Thread> owner;
        private final int id = ID_GENERATOR.getAndIncrement();

        private WeakOrderQueue() {
            owner = null;
            head = new Head(null);
        }

        private WeakOrderQueue(Stack<?> stack, Thread thread) {
            tail = new Link();

            // It's important that we don't store the Stack itself in the WeakOrderQueue, as the Stack is also used
            // as the key in the WeakHashMap. So just store the enclosed AtomicInteger, which should allow the
            // Stack itself to be GCed.
            head = new Head(stack.availableSharedCapacity);
            head.link = tail;
            owner = new WeakReference<Thread>(thread);
        }

        static WeakOrderQueue newQueue(Stack<?> stack, Thread thread) {
            final WeakOrderQueue queue = new WeakOrderQueue(stack, thread);
            // Done outside of the constructor to ensure WeakOrderQueue.this does not escape the constructor and so
            // cannot be accessed while it is still being constructed.
            stack.setHead(queue);

            return queue;
        }

        private void setNext(WeakOrderQueue next) {
            assert next != this;
            this.next = next;
        }

        /**
         * Allocate a new {@link WeakOrderQueue} or return {@code null} if not possible.
         */
        static WeakOrderQueue allocate(Stack<?> stack, Thread thread) {
            // We allocate a Link, so reserve the space first.
            return Head.reserveSpace(stack.availableSharedCapacity, LINK_CAPACITY)
                    ? newQueue(stack, thread) : null;
        }

        void add(DefaultHandle<?> handle) {
            handle.lastRecycledId = id;

            Link tail = this.tail;
            int writeIndex;
            if ((writeIndex = tail.get()) == LINK_CAPACITY) {
                if (!head.reserveSpace(LINK_CAPACITY)) {
                    // Drop it.
                    return;
                }
                // We allocate a Link, so reserve the space.
                this.tail = tail = tail.next = new Link();

                writeIndex = tail.get();
            }
            tail.elements[writeIndex] = handle;
            handle.stack = null;
            // we lazy set to ensure that setting stack to null appears before we unnull it in the owning thread;
            // this also means we guarantee visibility of an element in the queue if we see the index updated
            tail.lazySet(writeIndex + 1);
        }
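
        // Note on add() above: the ordered store done by lazySet(writeIndex + 1) is what makes the element
        // visible to the owning thread - a reader that observes the new write index via get() in hasFinalData()
        // or transfer() also observes the element written before it.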

        boolean hasFinalData() {
            return tail.readIndex != tail.get();
        }
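
        // Note: transfer() below drains at most one Link per call. It copies the readable handles of the current
        // head Link into dst.elements (growing dst up to its maxCapacity and applying dst.dropHandle() sampling),
        // advances readIndex, and once a Link is exhausted unlinks it and returns its reserved capacity.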

        // transfer as many items as we can from this queue to the stack, returning true if any were transferred
        @SuppressWarnings("rawtypes")
        boolean transfer(Stack<?> dst) {
            Link head = this.head.link;
            if (head == null) {
                return false;
            }

            if (head.readIndex == LINK_CAPACITY) {
                if (head.next == null) {
                    return false;
                }
                this.head.link = head = head.next;
            }

            final int srcStart = head.readIndex;
            int srcEnd = head.get();
            final int srcSize = srcEnd - srcStart;
            if (srcSize == 0) {
                return false;
            }

            final int dstSize = dst.size;
            final int expectedCapacity = dstSize + srcSize;

            if (expectedCapacity > dst.elements.length) {
                final int actualCapacity = dst.increaseCapacity(expectedCapacity);
                srcEnd = min(srcStart + actualCapacity - dstSize, srcEnd);
            }

            if (srcStart != srcEnd) {
                final DefaultHandle[] srcElems = head.elements;
                final DefaultHandle[] dstElems = dst.elements;
                int newDstSize = dstSize;
                for (int i = srcStart; i < srcEnd; i++) {
                    DefaultHandle element = srcElems[i];
                    if (element.recycleId == 0) {
                        element.recycleId = element.lastRecycledId;
                    } else if (element.recycleId != element.lastRecycledId) {
                        throw new IllegalStateException("recycled already");
                    }
                    srcElems[i] = null;

                    if (dst.dropHandle(element)) {
                        // Drop the object.
                        continue;
                    }
                    element.stack = dst;
                    dstElems[newDstSize ++] = element;
                }

                if (srcEnd == LINK_CAPACITY && head.next != null) {
                    // Add capacity back as the Link is GCed.
                    this.head.reclaimSpace(LINK_CAPACITY);
                    this.head.link = head.next;
                }

                head.readIndex = srcEnd;
                if (dst.size == newDstSize) {
                    return false;
                }
                dst.size = newDstSize;
                return true;
            } else {
                // The destination stack is full already.
                return false;
            }
        }
    }

    static final class Stack<T> {

        // we keep a queue of per-thread queues, which is appended to once only, each time a new thread other
        // than the stack owner recycles: when we run out of items in our stack we iterate this collection
        // to scavenge those that can be reused. this permits us to incur minimal thread synchronisation whilst
        // still recycling all items.
        final Recycler<T> parent;

        // We store the Thread in a WeakReference as otherwise we may be the only ones that still hold a strong
        // reference to the Thread itself after it died, because DefaultHandle will hold a reference to the Stack.
        //
        // The biggest issue is that, if we do not use a WeakReference, the Thread may not be collectable at all if
        // the user stores a reference to the DefaultHandle somewhere and never clears this reference (or does not
        // clear it in a timely manner).
        final WeakReference<Thread> threadRef;
        final AtomicInteger availableSharedCapacity;
        final int maxDelayedQueues;

        private final int maxCapacity;
        private final int ratioMask;
        private DefaultHandle<?>[] elements;
        private int size;
        private int handleRecycleCount = -1; // Start with -1 so the first one will be recycled.
        private WeakOrderQueue cursor, prev;
        private volatile WeakOrderQueue head;

        Stack(Recycler<T> parent, Thread thread, int maxCapacity, int maxSharedCapacityFactor,
              int ratioMask, int maxDelayedQueues) {
            this.parent = parent;
            threadRef = new WeakReference<Thread>(thread);
            this.maxCapacity = maxCapacity;
            availableSharedCapacity = new AtomicInteger(max(maxCapacity / maxSharedCapacityFactor, LINK_CAPACITY));
            elements = new DefaultHandle[min(INITIAL_CAPACITY, maxCapacity)];
            this.ratioMask = ratioMask;
            this.maxDelayedQueues = maxDelayedQueues;
        }

        // Marked as synchronized to ensure this is serialized.
        synchronized void setHead(WeakOrderQueue queue) {
            queue.setNext(head);
            head = queue;
        }

        int increaseCapacity(int expectedCapacity) {
            int newCapacity = elements.length;
            int maxCapacity = this.maxCapacity;
            do {
                newCapacity <<= 1;
            } while (newCapacity < expectedCapacity && newCapacity < maxCapacity);

            newCapacity = min(newCapacity, maxCapacity);
            if (newCapacity != elements.length) {
                elements = Arrays.copyOf(elements, newCapacity);
            }

            return newCapacity;
        }
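
        // For illustration (the numbers are made up): with elements.length == 256 and expectedCapacity == 1000,
        // newCapacity doubles to 512 and then 1024, and is finally clamped to maxCapacity if that is smaller.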

        @SuppressWarnings({ "unchecked", "rawtypes" })
        DefaultHandle<T> pop() {
            int size = this.size;
            if (size == 0) {
                if (!scavenge()) {
                    return null;
                }
                size = this.size;
            }
            size --;
            DefaultHandle ret = elements[size];
            elements[size] = null;
            if (ret.lastRecycledId != ret.recycleId) {
                throw new IllegalStateException("recycled multiple times");
            }
            ret.recycleId = 0;
            ret.lastRecycledId = 0;
            this.size = size;
            return ret;
        }

        boolean scavenge() {
            // continue an existing scavenge, if any
            if (scavengeSome()) {
                return true;
            }

            // reset our scavenge cursor
            prev = null;
            cursor = head;
            return false;
        }
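
        // Note on scavenge() above: when scavengeSome() finds nothing, the cursor is reset to the head of the
        // queue chain so the next attempt rescans from the most recently registered queue. Returning false lets
        // pop() return null, so the caller falls back to creating a fresh object.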

        boolean scavengeSome() {
            WeakOrderQueue prev;
            WeakOrderQueue cursor = this.cursor;
            if (cursor == null) {
                prev = null;
                cursor = head;
                if (cursor == null) {
                    return false;
                }
            } else {
                prev = this.prev;
            }

            boolean success = false;
            do {
                if (cursor.transfer(this)) {
                    success = true;
                    break;
                }
                WeakOrderQueue next = cursor.next;
                if (cursor.owner.get() == null) {
                    // If the thread associated with the queue is gone, unlink it, after
                    // performing a volatile read to confirm there is no data left to collect.
                    // We never unlink the first queue, as we don't want to synchronize on updating the head.
                    if (cursor.hasFinalData()) {
                        for (;;) {
                            if (cursor.transfer(this)) {
                                success = true;
                            } else {
                                break;
                            }
                        }
                    }

                    if (prev != null) {
                        prev.setNext(next);
                    }
                } else {
                    prev = cursor;
                }

                cursor = next;

            } while (cursor != null && !success);

            this.prev = prev;
            this.cursor = cursor;
            return success;
        }

        void push(DefaultHandle<?> item) {
            Thread currentThread = Thread.currentThread();
            if (threadRef.get() == currentThread) {
                // The current Thread is the thread that belongs to the Stack, we can try to push the object now.
                pushNow(item);
            } else {
                // The current Thread is not the one that belongs to the Stack
                // (or the Thread that belonged to the Stack was collected already), we need to signal that the push
                // happens later.
                pushLater(item, currentThread);
            }
        }

        private void pushNow(DefaultHandle<?> item) {
            if ((item.recycleId | item.lastRecycledId) != 0) {
                throw new IllegalStateException("recycled already");
            }
            item.recycleId = item.lastRecycledId = OWN_THREAD_ID;

            int size = this.size;
            if (size >= maxCapacity || dropHandle(item)) {
                // Hit the maximum capacity or should drop - drop the possibly youngest object.
                return;
            }
            if (size == elements.length) {
                elements = Arrays.copyOf(elements, min(size << 1, maxCapacity));
            }

            elements[size] = item;
            this.size = size + 1;
        }

        private void pushLater(DefaultHandle<?> item, Thread thread) {
            // we don't want to have a ref to the queue as the value in our weak map
            // so we null it out; to ensure there are no races with restoring it later
            // we impose a memory ordering here (no-op on x86)
            Map<Stack<?>, WeakOrderQueue> delayedRecycled = DELAYED_RECYCLED.get();
            WeakOrderQueue queue = delayedRecycled.get(this);
            if (queue == null) {
                if (delayedRecycled.size() >= maxDelayedQueues) {
                    // Add a dummy queue so we know we should drop the object
                    delayedRecycled.put(this, WeakOrderQueue.DUMMY);
                    return;
                }
                // Check if we already reached the maximum number of delayed queues and if we can allocate at all.
                if ((queue = WeakOrderQueue.allocate(this, thread)) == null) {
                    // drop object
                    return;
                }
                delayedRecycled.put(this, queue);
            } else if (queue == WeakOrderQueue.DUMMY) {
                // drop object
                return;
            }

            queue.add(item);
        }

        boolean dropHandle(DefaultHandle<?> handle) {
            if (!handle.hasBeenRecycled) {
                if ((++handleRecycleCount & ratioMask) != 0) {
                    // Drop the object.
                    return true;
                }
                handle.hasBeenRecycled = true;
            }
            return false;
        }
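
        // Note on dropHandle() above: ratioMask is a power-of-two mask, so only every (ratioMask + 1)-th handle
        // that has never been recycled is accepted; a mask of 7, for example, keeps roughly one in eight, which
        // bounds how quickly the pool grows. Handles that were already recycled once are always accepted.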

        DefaultHandle<T> newHandle() {
            return new DefaultHandle<T>(this);
        }
    }
}