// netty5/transport/src/main/java/io/netty/channel/ChannelPipeline.java

/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.channel;

import io.netty.buffer.ByteBuf;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.NoSuchElementException;

/**
* A list of {@link ChannelHandler}s which handles or intercepts inbound events and outbound operations of a
* {@link Channel}. {@link ChannelPipeline} implements an advanced form of the
* <a href="http://www.oracle.com/technetwork/java/interceptingfilter-142169.html">Intercepting Filter</a> pattern
* to give a user full control over how an event is handled and how the {@link ChannelHandler}s in a pipeline
* interact with each other.
*
* <h3>Creation of a pipeline</h3>
*
* Each channel has its own pipeline and it is created automatically when a new channel is created.
*
* <h3>How an event flows in a pipeline</h3>
*
 * The following diagram describes how I/O events are typically processed by {@link ChannelHandler}s in a
 * {@link ChannelPipeline}. An I/O event is handled by either a {@link ChannelInboundHandler} or a
 * {@link ChannelOutboundHandler} and is forwarded to its closest handler by calling the event propagation methods
 * defined in {@link ChannelHandlerContext}, such as {@link ChannelHandlerContext#fireChannelRead(Object)} and
 * {@link ChannelHandlerContext#write(Object)}.
*
* <pre>
 *                                                 I/O Request
 *                                            via {@link Channel} or
 *                                        {@link ChannelHandlerContext}
 *                                                      |
 *  +---------------------------------------------------+---------------+
 *  |                           ChannelPipeline         |               |
 *  |                                                   \|/             |
 *  |    +---------------------+            +-----------+----------+    |
 *  |    | Inbound Handler  N  |            | Outbound Handler  1  |    |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |              /|\                                  |               |
 *  |               |                                  \|/              |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |    | Inbound Handler N-1 |            | Outbound Handler  2  |    |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |              /|\                                  .               |
 *  |               .                                   .               |
 *  | ChannelHandlerContext.fireIN_EVT() ChannelHandlerContext.OUT_EVT()|
 *  |        [ method call]                       [method call]         |
 *  |               .                                   .               |
 *  |               .                                  \|/              |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |    | Inbound Handler  2  |            | Outbound Handler M-1 |    |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |              /|\                                  |               |
 *  |               |                                  \|/              |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |    | Inbound Handler  1  |            | Outbound Handler  M  |    |
 *  |    +----------+----------+            +-----------+----------+    |
 *  |              /|\                                  |               |
 *  +---------------+-----------------------------------+---------------+
 *                  |                                   \|/
 *  +---------------+-----------------------------------+---------------+
 *  |               |                                   |               |
 *  |       [ Socket.read() ]                    [ Socket.write() ]     |
 *  |                                                                   |
 *  |  Netty Internal I/O Threads (Transport Implementation)            |
 *  +-------------------------------------------------------------------+
* </pre>
* An inbound event is handled by the inbound handlers in the bottom-up direction as shown on the left side of the
* diagram. An inbound handler usually handles the inbound data generated by the I/O thread on the bottom of the
* diagram. The inbound data is often read from a remote peer via the actual input operation such as
* {@link SocketChannel#read(ByteBuffer)}. If an inbound event goes beyond the top inbound handler, it is discarded
* silently, or logged if it needs your attention.
* <p>
 * An outbound event is handled by the outbound handlers in the top-down direction as shown on the right side of the
* diagram. An outbound handler usually generates or transforms the outbound traffic such as write requests.
* If an outbound event goes beyond the bottom outbound handler, it is handled by an I/O thread associated with the
* {@link Channel}. The I/O thread often performs the actual output operation such as
* {@link SocketChannel#write(ByteBuffer)}.
* <p>
* For example, let us assume that we created the following pipeline:
* <pre>
* {@link ChannelPipeline} p = ...;
* p.addLast("1", new InboundHandlerA());
* p.addLast("2", new InboundHandlerB());
* p.addLast("3", new OutboundHandlerA());
* p.addLast("4", new OutboundHandlerB());
* p.addLast("5", new InboundOutboundHandlerX());
* </pre>
 * In the example above, a class whose name starts with {@code Inbound} is an inbound handler.
 * A class whose name starts with {@code Outbound} is an outbound handler.
* <p>
* In the given example configuration, the handler evaluation order is 1, 2, 3, 4, 5 when an event goes inbound.
* When an event goes outbound, the order is 5, 4, 3, 2, 1. On top of this principle, {@link ChannelPipeline} skips
* the evaluation of certain handlers to shorten the stack depth:
* <ul>
* <li>3 and 4 don't implement {@link ChannelInboundHandler}, and therefore the actual evaluation order of an inbound
* event will be: 1, 2, and 5.</li>
 * <li>1 and 2 don't implement {@link ChannelOutboundHandler}, and therefore the actual evaluation order of an
 * outbound event will be: 5, 4, and 3.</li>
 * <li>If 5 implements both {@link ChannelInboundHandler} and {@link ChannelOutboundHandler}, the evaluation order of
 * an inbound and an outbound event could be 125 and 543 respectively (see the sketch below).</li>
* </ul>
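 * <p>
 * As a minimal sketch (the class names here are illustrative, not part of Netty), the handlers used in the example
 * above could look like the following:
 * <pre>
 * public class InboundHandlerA extends {@link ChannelInboundHandlerAdapter} {
 *     {@code @Override}
 *     public void channelRead({@link ChannelHandlerContext} ctx, Object msg) {
 *         // ... handle the inbound message ...
 *         ctx.fireChannelRead(msg); // forward to the next inbound handler
 *     }
 * }
 *
 * public class OutboundHandlerA extends {@link ChannelOutboundHandlerAdapter} {
 *     {@code @Override}
 *     public void write({@link ChannelHandlerContext} ctx, Object msg, {@link ChannelPromise} promise) {
 *         // ... transform the outbound message ...
 *         ctx.write(msg, promise); // forward to the next outbound handler
 *     }
 * }
 *
 * // InboundOutboundHandlerX would implement both ChannelInboundHandler and ChannelOutboundHandler,
 * // for example by extending one adapter and implementing the other interface, so it sees both directions.
 * </pre>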
*
* <h3>Forwarding an event to the next handler</h3>
*
 * As you might have noticed in the diagram, a handler has to invoke the event propagation methods in
* {@link ChannelHandlerContext} to forward an event to its next handler. Those methods include:
* <ul>
* <li>Inbound event propagation methods:
* <ul>
* <li>{@link ChannelHandlerContext#fireChannelRegistered()}</li>
* <li>{@link ChannelHandlerContext#fireChannelActive()}</li>
* <li>{@link ChannelHandlerContext#fireChannelRead(Object)}</li>
* <li>{@link ChannelHandlerContext#fireChannelReadComplete()}</li>
* <li>{@link ChannelHandlerContext#fireExceptionCaught(Throwable)}</li>
* <li>{@link ChannelHandlerContext#fireUserEventTriggered(Object)}</li>
* <li>{@link ChannelHandlerContext#fireChannelWritabilityChanged()}</li>
* <li>{@link ChannelHandlerContext#fireChannelInactive()}</li>
* <li>{@link ChannelHandlerContext#fireChannelUnregistered()}</li>
* </ul>
* </li>
* <li>Outbound event propagation methods:
* <ul>
* <li>{@link ChannelHandlerContext#bind(SocketAddress, ChannelPromise)}</li>
* <li>{@link ChannelHandlerContext#connect(SocketAddress, SocketAddress, ChannelPromise)}</li>
* <li>{@link ChannelHandlerContext#write(Object, ChannelPromise)}</li>
* <li>{@link ChannelHandlerContext#flush()}</li>
* <li>{@link ChannelHandlerContext#read()}</li>
* <li>{@link ChannelHandlerContext#disconnect(ChannelPromise)}</li>
* <li>{@link ChannelHandlerContext#close(ChannelPromise)}</li>
* <li>{@link ChannelHandlerContext#deregister(ChannelPromise)}</li>
* </ul>
* </li>
* </ul>
*
* and the following example shows how the event propagation is usually done:
*
* <pre>
 * public class MyInboundHandler extends {@link ChannelInboundHandlerAdapter} {
 *     {@code @Override}
 *     public void channelActive({@link ChannelHandlerContext} ctx) {
 *         System.out.println("Connected!");
 *         ctx.fireChannelActive();
 *     }
 * }
*
 * public class MyOutboundHandler extends {@link ChannelOutboundHandlerAdapter} {
 *     {@code @Override}
 *     public void close({@link ChannelHandlerContext} ctx, {@link ChannelPromise} promise) {
 *         System.out.println("Closing ..");
 *         ctx.close(promise);
 *     }
 * }
* </pre>
*
* <h3>Building a pipeline</h3>
* <p>
* A user is supposed to have one or more {@link ChannelHandler}s in a pipeline to receive I/O events (e.g. read) and
* to request I/O operations (e.g. write and close). For example, a typical server will have the following handlers
* in each channel's pipeline, but your mileage may vary depending on the complexity and characteristics of the
* protocol and business logic:
*
* <ol>
* <li>Protocol Decoder - translates binary data (e.g. {@link ByteBuf}) into a Java object.</li>
* <li>Protocol Encoder - translates a Java object into binary data.</li>
* <li>Business Logic Handler - performs the actual business logic (e.g. database access).</li>
* </ol>
*
* and it could be represented as shown in the following example:
*
* <pre>
* static final {@link EventExecutorGroup} group = new {@link DefaultEventExecutorGroup}(16);
* ...
*
* {@link ChannelPipeline} pipeline = ch.pipeline();
*
* pipeline.addLast("decoder", new MyProtocolDecoder());
* pipeline.addLast("encoder", new MyProtocolEncoder());
*
* // Tell the pipeline to run MyBusinessLogicHandler's event handler methods
* // in a different thread than an I/O thread so that the I/O thread is not blocked by
* // a time-consuming task.
* // If your business logic is fully asynchronous or finished very quickly, you don't
* // need to specify a group.
* pipeline.addLast(group, "handler", new MyBusinessLogicHandler());
* </pre>
*
* <h3>Thread safety</h3>
* <p>
* A {@link ChannelHandler} can be added or removed at any time because a {@link ChannelPipeline} is thread safe.
* For example, you can insert an encryption handler when sensitive information is about to be exchanged, and remove it
* after the exchange.
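 * <p>
 * For example (a minimal sketch; {@code MyEncryptionHandler} and the handler name {@code "encryption"} are
 * illustrative, not part of Netty):
 * <pre>
 * {@link ChannelPipeline} p = ...;
 *
 * // Sensitive data is about to be exchanged - add the encryption handler at the head of the pipeline ...
 * p.addFirst("encryption", new MyEncryptionHandler());
 *
 * // ... exchange the sensitive data ...
 *
 * // ... and remove it again once the exchange has finished.
 * p.remove("encryption");
 * </pre>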
*/
public interface ChannelPipeline
extends ChannelInboundInvoker, ChannelOutboundInvoker, Iterable<Entry<String, ChannelHandler>> {
/**
* Inserts a {@link ChannelHandler} at the first position of this pipeline.
*
* @param name the name of the handler to insert first
* @param handler the handler to insert first
*
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified handler is {@code null}
*/
ChannelPipeline addFirst(String name, ChannelHandler handler);
/**
* Inserts a {@link ChannelHandler} at the first position of this pipeline.
*
* @param group the {@link EventExecutorGroup} which will be used to execute the {@link ChannelHandler}
* methods
* @param name the name of the handler to insert first
* @param handler the handler to insert first
*
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified handler is {@code null}
*/
ChannelPipeline addFirst(EventExecutorGroup group, String name, ChannelHandler handler);
/**
* Appends a {@link ChannelHandler} at the last position of this pipeline.
*
* @param name the name of the handler to append
* @param handler the handler to append
*
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified handler is {@code null}
*/
ChannelPipeline addLast(String name, ChannelHandler handler);
/**
* Appends a {@link ChannelHandler} at the last position of this pipeline.
*
* @param group the {@link EventExecutorGroup} which will be used to execute the {@link ChannelHandler}
* methods
* @param name the name of the handler to append
* @param handler the handler to append
*
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified handler is {@code null}
*/
ChannelPipeline addLast(EventExecutorGroup group, String name, ChannelHandler handler);
/**
* Inserts a {@link ChannelHandler} before an existing handler of this
* pipeline.
*
* @param baseName the name of the existing handler
* @param name the name of the handler to insert before
* @param handler the handler to insert before
*
* @throws NoSuchElementException
* if there's no such entry with the specified {@code baseName}
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified baseName or handler is {@code null}
*/
ChannelPipeline addBefore(String baseName, String name, ChannelHandler handler);
/**
* Inserts a {@link ChannelHandler} before an existing handler of this
* pipeline.
*
* @param group the {@link EventExecutorGroup} which will be used to execute the {@link ChannelHandler}
* methods
* @param baseName the name of the existing handler
* @param name the name of the handler to insert before
* @param handler the handler to insert before
*
* @throws NoSuchElementException
* if there's no such entry with the specified {@code baseName}
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified baseName or handler is {@code null}
*/
ChannelPipeline addBefore(EventExecutorGroup group, String baseName, String name, ChannelHandler handler);
/**
* Inserts a {@link ChannelHandler} after an existing handler of this
* pipeline.
*
* @param baseName the name of the existing handler
* @param name the name of the handler to insert after
* @param handler the handler to insert after
*
* @throws NoSuchElementException
* if there's no such entry with the specified {@code baseName}
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified baseName or handler is {@code null}
*/
ChannelPipeline addAfter(String baseName, String name, ChannelHandler handler);
/**
* Inserts a {@link ChannelHandler} after an existing handler of this
* pipeline.
*
* @param group the {@link EventExecutorGroup} which will be used to execute the {@link ChannelHandler}
* methods
* @param baseName the name of the existing handler
* @param name the name of the handler to insert after
* @param handler the handler to insert after
*
* @throws NoSuchElementException
* if there's no such entry with the specified {@code baseName}
* @throws IllegalArgumentException
* if there's an entry with the same name already in the pipeline
* @throws NullPointerException
* if the specified baseName or handler is {@code null}
*/
ChannelPipeline addAfter(EventExecutorGroup group, String baseName, String name, ChannelHandler handler);
/**
* Inserts {@link ChannelHandler}s at the first position of this pipeline.
*
* @param handlers the handlers to insert first
*
*/
ChannelPipeline addFirst(ChannelHandler... handlers);
/**
* Inserts {@link ChannelHandler}s at the first position of this pipeline.
*
 * @param group    the {@link EventExecutorGroup} which will be used to execute the methods of the
 *                 {@link ChannelHandler}s.
* @param handlers the handlers to insert first
*
*/
ChannelPipeline addFirst(EventExecutorGroup group, ChannelHandler... handlers);
/**
* Inserts {@link ChannelHandler}s at the last position of this pipeline.
*
* @param handlers the handlers to insert last
*
*/
ChannelPipeline addLast(ChannelHandler... handlers);
/**
* Inserts {@link ChannelHandler}s at the last position of this pipeline.
*
 * @param group    the {@link EventExecutorGroup} which will be used to execute the methods of the
 *                 {@link ChannelHandler}s.
* @param handlers the handlers to insert last
*
*/
ChannelPipeline addLast(EventExecutorGroup group, ChannelHandler... handlers);
/**
* Removes the specified {@link ChannelHandler} from this pipeline.
*
* @param handler the {@link ChannelHandler} to remove
*
* @throws NoSuchElementException
* if there's no such handler in this pipeline
* @throws NullPointerException
* if the specified handler is {@code null}
*/
ChannelPipeline remove(ChannelHandler handler);
/**
* Removes the {@link ChannelHandler} with the specified name from this pipeline.
*
* @param name the name under which the {@link ChannelHandler} was stored.
*
* @return the removed handler
*
* @throws NoSuchElementException
* if there's no such handler with the specified name in this pipeline
* @throws NullPointerException
* if the specified name is {@code null}
*/
ChannelHandler remove(String name);
/**
* Removes the {@link ChannelHandler} of the specified type from this pipeline.
*
* @param <T> the type of the handler
* @param handlerType the type of the handler
*
* @return the removed handler
*
* @throws NoSuchElementException
* if there's no such handler of the specified type in this pipeline
* @throws NullPointerException
* if the specified handler type is {@code null}
*/
<T extends ChannelHandler> T remove(Class<T> handlerType);
/**
* Removes the first {@link ChannelHandler} in this pipeline.
*
* @return the removed handler
*
* @throws NoSuchElementException
* if this pipeline is empty
*/
ChannelHandler removeFirst();
/**
* Removes the last {@link ChannelHandler} in this pipeline.
*
* @return the removed handler
*
* @throws NoSuchElementException
* if this pipeline is empty
*/
ChannelHandler removeLast();
/**
* Replaces the specified {@link ChannelHandler} with a new handler in this pipeline.
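 * <p>
 * For example, a handler could be hot-swapped like this (a minimal sketch; {@code oldDecoder} and
 * {@code NewDecoder} are illustrative names, not part of Netty):
 * <pre>
 * {@link ChannelPipeline} p = ...;
 * {@link ChannelHandler} oldDecoder = p.get("decoder");
 * p.replace(oldDecoder, "decoder", new NewDecoder());
 * </pre>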
*
* @param oldHandler the {@link ChannelHandler} to be replaced
* @param newName the name under which the replacement should be added
* @param newHandler the {@link ChannelHandler} which is used as replacement
*
* @return itself
* @throws NoSuchElementException
* if the specified old handler does not exist in this pipeline
* @throws IllegalArgumentException
* if a handler with the specified new name already exists in this
* pipeline, except for the handler to be replaced
* @throws NullPointerException
* if the specified old handler or new handler is
* {@code null}
*/
ChannelPipeline replace(ChannelHandler oldHandler, String newName, ChannelHandler newHandler);
/**
* Replaces the {@link ChannelHandler} of the specified name with a new handler in this pipeline.
*
* @param oldName the name of the {@link ChannelHandler} to be replaced
* @param newName the name under which the replacement should be added
* @param newHandler the {@link ChannelHandler} which is used as replacement
*
* @return the removed handler
*
* @throws NoSuchElementException
* if the handler with the specified old name does not exist in this pipeline
* @throws IllegalArgumentException
* if a handler with the specified new name already exists in this
* pipeline, except for the handler to be replaced
* @throws NullPointerException
* if the specified old handler or new handler is
* {@code null}
*/
ChannelHandler replace(String oldName, String newName, ChannelHandler newHandler);
/**
* Replaces the {@link ChannelHandler} of the specified type with a new handler in this pipeline.
*
* @param oldHandlerType the type of the handler to be removed
* @param newName the name under which the replacement should be added
* @param newHandler the {@link ChannelHandler} which is used as replacement
*
* @return the removed handler
*
* @throws NoSuchElementException
* if the handler of the specified old handler type does not exist
* in this pipeline
* @throws IllegalArgumentException
* if a handler with the specified new name already exists in this
* pipeline, except for the handler to be replaced
* @throws NullPointerException
* if the specified old handler or new handler is
* {@code null}
*/
<T extends ChannelHandler> T replace(Class<T> oldHandlerType, String newName,
ChannelHandler newHandler);
/**
* Returns the first {@link ChannelHandler} in this pipeline.
*
* @return the first handler. {@code null} if this pipeline is empty.
*/
ChannelHandler first();
/**
* Returns the context of the first {@link ChannelHandler} in this pipeline.
*
* @return the context of the first handler. {@code null} if this pipeline is empty.
*/
ChannelHandlerContext firstContext();
/**
* Returns the last {@link ChannelHandler} in this pipeline.
*
* @return the last handler. {@code null} if this pipeline is empty.
*/
ChannelHandler last();
/**
* Returns the context of the last {@link ChannelHandler} in this pipeline.
*
* @return the context of the last handler. {@code null} if this pipeline is empty.
*/
ChannelHandlerContext lastContext();
/**
* Returns the {@link ChannelHandler} with the specified name in this
* pipeline.
*
* @return the handler with the specified name.
* {@code null} if there's no such handler in this pipeline.
*/
ChannelHandler get(String name);
/**
* Returns the {@link ChannelHandler} of the specified type in this
* pipeline.
*
* @return the handler of the specified handler type.
* {@code null} if there's no such handler in this pipeline.
*/
<T extends ChannelHandler> T get(Class<T> handlerType);
/**
* Returns the context object of the specified {@link ChannelHandler} in
* this pipeline.
*
* @return the context object of the specified handler.
* {@code null} if there's no such handler in this pipeline.
*/
ChannelHandlerContext context(ChannelHandler handler);
/**
* Returns the context object of the {@link ChannelHandler} with the
* specified name in this pipeline.
*
* @return the context object of the handler with the specified name.
* {@code null} if there's no such handler in this pipeline.
*/
ChannelHandlerContext context(String name);
/**
* Returns the context object of the {@link ChannelHandler} of the
* specified type in this pipeline.
*
* @return the context object of the handler of the specified type.
* {@code null} if there's no such handler in this pipeline.
*/
ChannelHandlerContext context(Class<? extends ChannelHandler> handlerType);
/**
* Returns the {@link Channel} that this pipeline is attached to.
*
* @return the channel. {@code null} if this pipeline is not attached yet.
*/
Channel channel();
/**
* Returns the {@link List} of the handler names.
*/
List<String> names();
/**
* Converts this pipeline into an ordered {@link Map} whose keys are
* handler names and whose values are handlers.
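 * <p>
 * For example, the contents of a pipeline could be inspected like this (a minimal sketch):
 * <pre>
 * for ({@link Entry}&lt;String, {@link ChannelHandler}&gt; e : pipeline.toMap().entrySet()) {
 *     System.out.println(e.getKey() + ": " + e.getValue());
 * }
 * </pre>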
*/
Map<String, ChannelHandler> toMap();
@Override
ChannelPipeline fireChannelRegistered();
@Override
ChannelPipeline fireChannelUnregistered();
@Override
ChannelPipeline fireChannelActive();
@Override
ChannelPipeline fireChannelInactive();
@Override
ChannelPipeline fireExceptionCaught(Throwable cause);
@Override
ChannelPipeline fireUserEventTriggered(Object event);
@Override
ChannelPipeline fireChannelRead(Object msg);
@Override
ChannelPipeline fireChannelReadComplete();
@Override
ChannelPipeline fireChannelWritabilityChanged();
@Override
ChannelPipeline flush();
}