netty5/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentDecoder.java

/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.http;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.CodecException;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.util.ReferenceCountUtil;
import java.util.List;

/**
* Decodes the content of the received {@link HttpRequest} and {@link HttpContent}.
* The original content is replaced with the new content decoded by the
* {@link EmbeddedChannel}, which is created by {@link #newContentDecoder(String)}.
 * Once decoding is finished, the value of the <tt>'Content-Encoding'</tt>
 * header is set to the target content encoding, as returned by
 * {@link #getTargetContentEncoding(String)}. Also, the <tt>'Content-Length'</tt>
 * header is removed and <tt>'Transfer-Encoding: chunked'</tt> is set instead,
 * because the length of the decoded content is only known once it has been
 * fully processed. If the content encoding of the original is not supported
* by the decoder, {@link #newContentDecoder(String)} should return {@code null}
* so that no decoding occurs (i.e. pass-through).
* <p>
* Please note that this is an abstract class. You have to extend this class
* and implement {@link #newContentDecoder(String)} properly to make this class
* functional. For example, refer to the source code of {@link HttpContentDecompressor}.
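 * <p>
 * A minimal sketch of such a subclass (the class name is illustrative, and
 * {@code ZlibCodecFactory} / {@code ZlibWrapper} come from
 * {@code io.netty.handler.codec.compression}; this is not necessarily how
 * {@link HttpContentDecompressor} is implemented):
 * <pre>{@code
 * public class GzipOnlyContentDecoder extends HttpContentDecoder {
 *     protected EmbeddedChannel newContentDecoder(String contentEncoding) {
 *         if ("gzip".equalsIgnoreCase(contentEncoding)) {
 *             // Decode gzip content with Netty's built-in zlib decoder.
 *             return new EmbeddedChannel(
 *                     ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
 *         }
 *         return null; // unsupported encoding: pass through undecoded
 *     }
 * }
 * }</pre>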
* <p>
 * This handler must be placed after {@link HttpObjectDecoder} in the pipeline
 * so that this handler can intercept HTTP messages after {@link HttpObjectDecoder}
 * converts {@link ByteBuf}s into HTTP messages.
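 * <p>
 * A minimal client-side pipeline sketch (the handler names and
 * {@code MyResponseHandler} are illustrative, not part of Netty):
 * <pre>{@code
 * ChannelPipeline p = ch.pipeline();
 * p.addLast("codec", new HttpClientCodec());
 * p.addLast("decompressor", new HttpContentDecompressor());
 * p.addLast("handler", new MyResponseHandler()); // hypothetical application handler
 * }</pre>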
*/
public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObject> {

    static final String IDENTITY = HttpHeaderValues.IDENTITY.toString();

    protected ChannelHandlerContext ctx;
    private EmbeddedChannel decoder;
    private boolean continueResponse;

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception {
        if (msg instanceof HttpResponse && ((HttpResponse) msg).status().code() == 100) {
            if (!(msg instanceof LastHttpContent)) {
                continueResponse = true;
            }
            // 100-continue response must be passed through.
            out.add(ReferenceCountUtil.retain(msg));
            return;
        }

        if (continueResponse) {
            if (msg instanceof LastHttpContent) {
                continueResponse = false;
            }
            // 100-continue response must be passed through.
            out.add(ReferenceCountUtil.retain(msg));
            return;
        }

        if (msg instanceof HttpMessage) {
            cleanup();
            final HttpMessage message = (HttpMessage) msg;
            final HttpHeaders headers = message.headers();

            // Determine the content encoding.
            String contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
            if (contentEncoding != null) {
                contentEncoding = contentEncoding.trim();
            } else {
                contentEncoding = IDENTITY;
            }
            decoder = newContentDecoder(contentEncoding);

            if (decoder == null) {
                if (message instanceof HttpContent) {
                    ((HttpContent) message).retain();
                }
                out.add(message);
                return;
            }

            // Remove the 'Content-Length' header:
            // the correct value can be set only after all chunks are processed/decoded.
            // If buffering is not an issue, add HttpObjectAggregator down the chain; it will set the header.
            // Otherwise, rely on the LastHttpContent message.
            if (headers.contains(HttpHeaderNames.CONTENT_LENGTH)) {
                headers.remove(HttpHeaderNames.CONTENT_LENGTH);
                headers.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
            }
            // If there was no 'Content-Length' header, the message is either
            // already chunked or EOF-terminated.
            // See https://github.com/netty/netty/issues/5892

            // Set the new content encoding.
            CharSequence targetContentEncoding = getTargetContentEncoding(contentEncoding);
            if (HttpHeaderValues.IDENTITY.contentEquals(targetContentEncoding)) {
                // Do NOT set the 'Content-Encoding' header if the target encoding is 'identity'
                // as per: http://tools.ietf.org/html/rfc2616#section-14.11
                headers.remove(HttpHeaderNames.CONTENT_ENCODING);
            } else {
                headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
            }

            if (message instanceof HttpContent) {
                // If the message is a full request or response object (headers + data),
                // don't copy the data part into 'out'. Output headers only; the data
                // part will be decoded below.
                // Note: the "copy" object must not be an instance of LastHttpContent,
                // as that would (erroneously) signal the end of the HttpMessage to other handlers.
                HttpMessage copy;
                if (message instanceof HttpRequest) {
                    HttpRequest r = (HttpRequest) message; // HttpRequest or FullHttpRequest
                    copy = new DefaultHttpRequest(r.protocolVersion(), r.method(), r.uri());
                } else if (message instanceof HttpResponse) {
                    HttpResponse r = (HttpResponse) message; // HttpResponse or FullHttpResponse
                    copy = new DefaultHttpResponse(r.protocolVersion(), r.status());
                } else {
                    throw new CodecException("Object of class " + message.getClass().getName() +
                            " is not an HttpRequest or HttpResponse");
                }
                copy.headers().set(message.headers());
                copy.setDecoderResult(message.decoderResult());
                out.add(copy);
            } else {
                out.add(message);
            }
        }

        if (msg instanceof HttpContent) {
            final HttpContent c = (HttpContent) msg;
            if (decoder == null) {
                out.add(c.retain());
            } else {
                decodeContent(c, out);
            }
        }
    }

    private void decodeContent(HttpContent c, List<Object> out) {
        ByteBuf content = c.content();

        decode(content, out);

        if (c instanceof LastHttpContent) {
            finishDecode(out);

            LastHttpContent last = (LastHttpContent) c;
            // Generate an additional chunk if the decoder produced
            // its last product on closure.
            HttpHeaders headers = last.trailingHeaders();
            if (headers.isEmpty()) {
                out.add(LastHttpContent.EMPTY_LAST_CONTENT);
            } else {
                out.add(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
            }
        }
    }

    /**
     * Returns a new {@link EmbeddedChannel} that decodes the HTTP message
     * content encoded in the specified <tt>contentEncoding</tt>.
     *
     * @param contentEncoding the value of the {@code "Content-Encoding"} header
     * @return a new {@link EmbeddedChannel} if the specified encoding is supported,
     *         or {@code null} otherwise (alternatively, you can throw an exception
     *         to reject the unknown encoding)
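     *
     * <p>A sketch of the reject-unknown-encodings alternative mentioned above
     * (using {@code ZlibCodecFactory} and {@code ZlibWrapper} from
     * {@code io.netty.handler.codec.compression}):
     * <pre>{@code
     * protected EmbeddedChannel newContentDecoder(String contentEncoding) throws Exception {
     *     if ("gzip".equalsIgnoreCase(contentEncoding)) {
     *         return new EmbeddedChannel(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP));
     *     }
     *     throw new CodecException("unsupported Content-Encoding: " + contentEncoding);
     * }
     * }</pre>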
     */
    protected abstract EmbeddedChannel newContentDecoder(String contentEncoding) throws Exception;

    /**
     * Returns the expected content encoding of the decoded content.
     * This method returns {@code "identity"} by default, which is the case for
     * most decoders.
     *
     * @param contentEncoding the value of the {@code "Content-Encoding"} header
     * @return the expected content encoding of the new content
     */
    protected String getTargetContentEncoding(
            @SuppressWarnings("UnusedParameters") String contentEncoding) throws Exception {
        return IDENTITY;
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        cleanupSafely(ctx);
        super.handlerRemoved(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        cleanupSafely(ctx);
        super.channelInactive(ctx);
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        this.ctx = ctx;
        super.handlerAdded(ctx);
    }

    private void cleanup() {
        if (decoder != null) {
            // Clean up the previous decoder if it was not cleaned up correctly.
            decoder.finishAndReleaseAll();
            decoder = null;
        }
    }

    private void cleanupSafely(ChannelHandlerContext ctx) {
        try {
            cleanup();
        } catch (Throwable cause) {
            // If cleanup throws, propagate the error through the pipeline
            // so that we don't fail to propagate pipeline events.
            ctx.fireExceptionCaught(cause);
        }
    }

    private void decode(ByteBuf in, List<Object> out) {
        // Retain the buffer here because the embedded channel will release it
        // after it has been written.
        decoder.writeInbound(in.retain());
        fetchDecoderOutput(out);
    }

    private void finishDecode(List<Object> out) {
        if (decoder.finish()) {
            fetchDecoderOutput(out);
        }
        decoder = null;
    }

    private void fetchDecoderOutput(List<Object> out) {
        for (;;) {
            ByteBuf buf = decoder.readInbound();
            if (buf == null) {
                break;
            }
            if (!buf.isReadable()) {
                buf.release();
                continue;
            }
            out.add(new DefaultHttpContent(buf));
        }
    }
}