/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */
package io.netty.channel;
import io.netty.util.Attribute;
import io.netty.util.AttributeKey;
import io.netty.util.AttributeMap;
import io.netty.util.concurrent.EventExecutor;
/**
 * Enables a {@link ChannelHandler} to interact with its {@link ChannelPipeline}
 * and other handlers. A handler can notify the next {@link ChannelHandler} in the {@link ChannelPipeline}
 * and dynamically modify the {@link ChannelPipeline} it belongs to.
 *
 * <h3>Notify</h3>
 *
 * You can notify the closest handler in the same {@link ChannelPipeline} by calling one of the
 * various methods provided here.
 * Please refer to {@link ChannelPipeline} to understand how an event flows.
 *
 * <h3>Modifying a pipeline</h3>
 *
 * You can get the {@link ChannelPipeline} your handler belongs to by calling
 * {@link #pipeline()}. A non-trivial application could insert, remove, or
 * replace handlers in the pipeline dynamically at runtime.
 *
 * <h3>Retrieving for later use</h3>
 *
 * You can keep the {@link ChannelHandlerContext} for later use, such as
 * triggering an event outside the handler methods, even from a different thread.
 * <pre>
 * public class MyHandler extends {@link ChannelDuplexHandler} {
 *
 *     <b>private {@link ChannelHandlerContext} ctx;</b>
 *
 *     public void handlerAdded({@link ChannelHandlerContext} ctx) {
 *         <b>this.ctx = ctx;</b>
 *     }
 *
 *     public void login(String username, String password) {
 *         ctx.write(new LoginMessage(username, password));
 *     }
 *     ...
 * }
 * </pre>
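 * <p>
 * Because a handler's methods are invoked by the channel's {@link EventExecutor}, the stored
 * context can also be used to hand work over from a foreign thread via {@link #executor()}.
 * A minimal sketch, reusing the hypothetical {@code LoginMessage} type from the example above:
 * <pre>
 * // May be invoked from any thread:
 * public void loginLater(final String username, final String password) {
 *     ctx.executor().execute(new Runnable() {
 *         {@code @Override}
 *         public void run() {
 *             ctx.write(new LoginMessage(username, password));
 *         }
 *     });
 * }
 * </pre>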
 *
 * <h3>Storing stateful information</h3>
 *
 * {@link #attr(AttributeKey)} allows you to
 * store and access stateful information that is related to a handler and its
 * context. Please refer to {@link ChannelHandler} to learn various recommended
 * ways to manage stateful information.
 *
 * <h3>A handler can have more than one context</h3>
 *
 * Please note that a {@link ChannelHandler} instance can be added to more than
 * one {@link ChannelPipeline}. It means a single {@link ChannelHandler}
 * instance can have more than one {@link ChannelHandlerContext} and therefore
 * the single instance can be invoked with different
 * {@link ChannelHandlerContext}s if it is added to one or more
 * {@link ChannelPipeline}s more than once.
 * <p>
 * For example, the following handler will have as many independent attachments
 * as the number of times it is added to pipelines, regardless of whether it is added to the
 * same pipeline multiple times or added to different pipelines multiple times:
 * <pre>
 * public class FactorialHandler extends {@link ChannelInboundHandlerAdapter} {
 *
 *     private final {@link AttributeKey}<{@link Integer}> counter =
 *             new {@link AttributeKey}<{@link Integer}>("counter");
 *
 *     // This handler will receive a sequence of increasing integers starting
 *     // from 1.
 *     {@code @Override}
 *     public void channelRead({@link ChannelHandlerContext} ctx, Object msg) {
 *         {@link Attribute}<{@link Integer}> attr = ctx.attr(counter);
 *         Integer a = attr.get();
 *
 *         if (a == null) {
 *             a = 1;
 *         }
 *
 *         attr.set(a * (Integer) msg);
 *     }
 * }
 *
 * // Different context objects are given to "f1", "f2", "f3", and "f4" even if
 * // they refer to the same handler instance. Because the FactorialHandler
 * // stores its state in a context object (as an attachment), the factorial is
 * // calculated correctly 4 times once the two pipelines (p1 and p2) are active.
 * FactorialHandler fh = new FactorialHandler();
 *
 * {@link ChannelPipeline} p1 = channel1.pipeline();
 * p1.addLast("f1", fh);
 * p1.addLast("f2", fh);
 *
 * {@link ChannelPipeline} p2 = channel2.pipeline();
 * p2.addLast("f3", fh);
 * p2.addLast("f4", fh);
 * </pre>
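 * <p>
 * By contrast, storing the counter in a member field would make "f1" through "f4" share
 * (and corrupt) a single value, because all four contexts refer to the same handler
 * instance. A sketch of that anti-pattern, for illustration only:
 * <pre>
 * public class FactorialHandler extends {@link ChannelInboundHandlerAdapter} {
 *     private int counter; // one field shared by every context - incorrect
 *     ...
 * }
 * </pre>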
 *
 * <h3>Additional resources worth reading</h3>
 * <p>
 * Please refer to {@link ChannelHandler} and
 * {@link ChannelPipeline} to find out more about inbound and outbound operations,
 * what fundamental differences they have, how they flow in a pipeline, and how to handle
 * the operation in your application.
 */
public interface ChannelHandlerContext
        extends AttributeMap, ChannelPropertyAccess, ChannelInboundOps, ChannelOutboundOps {

    /**
     * Return the {@link Channel} which is bound to the {@link ChannelHandlerContext}.
     */
    Channel channel();

    /**
     * Returns the {@link EventExecutor} which is used to execute an arbitrary task.
     */
    EventExecutor executor();

    /**
     * The unique name of the {@link ChannelHandlerContext}. The name was used when the
     * {@link ChannelHandler} was added to the {@link ChannelPipeline}. This name can also be used
     * to access the registered {@link ChannelHandler} from the {@link ChannelPipeline}.
     */
    String name();

    /**
     * The {@link ChannelHandler} that is bound to this {@link ChannelHandlerContext}.
     */
    ChannelHandler handler();

    /**
     * Return {@code true} if the {@link ChannelHandler} which belongs to this
     * {@link ChannelHandlerContext} was removed from the {@link ChannelPipeline}.
     * Note that this method is only meant to be called from within the {@link EventExecutor}.
     */
    boolean isRemoved();

    @Override
    ChannelHandlerContext fireChannelRegistered();

    @Override
    ChannelHandlerContext fireChannelActive();

    @Override
    ChannelHandlerContext fireChannelInactive();

    @Override
    ChannelHandlerContext fireExceptionCaught(Throwable cause);

    @Override
    ChannelHandlerContext fireUserEventTriggered(Object event);

    @Override
    ChannelHandlerContext fireChannelRead(Object msg);

    @Override
    ChannelHandlerContext fireChannelReadComplete();

    @Override
    ChannelHandlerContext fireChannelWritabilityChanged();

    @Override
    ChannelHandlerContext flush();
}