netty5/codec-http/src/main/java/io/netty/handler/codec/http/HttpContentEncoder.java

/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.http;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.MessageToMessageCodec;
import io.netty.util.ReferenceCountUtil;
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
/**
* Encodes the content of the outbound {@link HttpResponse} and {@link HttpContent}.
* The original content is replaced with the new content encoded by the
* {@link EmbeddedChannel}, which is created by {@link #beginEncode(HttpResponse, String)}.
* Once encoding is finished, the value of the <tt>'Content-Encoding'</tt> header
* is set to the target content encoding, as returned by
* {@link #beginEncode(HttpResponse, String)}.
* Also, the <tt>'Content-Length'</tt> header is updated to the length of the
* encoded content. If there is no supported or allowed encoding in the
* corresponding {@link HttpRequest}'s {@code "Accept-Encoding"} header,
* {@link #beginEncode(HttpResponse, String)} should return {@code null} so that
* no encoding occurs (i.e. pass-through).
* <p>
* Please note that this is an abstract class. You have to extend this class
* and implement {@link #beginEncode(HttpResponse, String)} properly to make
* this class functional. For example, refer to the source code of
* {@link HttpContentCompressor}.
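 * <p>
 * For illustration only, a minimal subclass that offers nothing but gzip
 * might look like the sketch below. The class name and the simplistic
 * {@code contains("gzip")} check are assumptions made for the example, not
 * part of this API; {@link HttpContentCompressor} performs proper
 * {@code "Accept-Encoding"} parsing instead:
 * <pre>{@code
 * public class GzipContentEncoder extends HttpContentEncoder {
 *     protected Result beginEncode(HttpResponse headers, String acceptEncoding) {
 *         if (!acceptEncoding.contains("gzip")) {
 *             return null; // gzip not accepted: pass the response through as-is
 *         }
 *         // Compress the content with a gzip encoder running in an EmbeddedChannel.
 *         return new Result("gzip", new EmbeddedChannel(
 *                 ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP)));
 *     }
 * }
 * }</pre>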
* <p>
* This handler must be placed after {@link HttpObjectEncoder} in the pipeline
* so that this handler can intercept HTTP responses before {@link HttpObjectEncoder}
* converts them into {@link ByteBuf}s.
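 * A typical server pipeline could therefore look like the following sketch,
 * where {@code MyRequestHandler} stands in for a hypothetical application handler:
 * <pre>{@code
 * ChannelPipeline p = channel.pipeline();
 * p.addLast("codec", new HttpServerCodec());            // includes an HttpObjectEncoder
 * p.addLast("compressor", new HttpContentCompressor()); // an HttpContentEncoder subclass
 * p.addLast("handler", new MyRequestHandler());
 * }</pre>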
*/
public abstract class HttpContentEncoder extends MessageToMessageCodec<HttpRequest, HttpObject> {
    private enum State {
        // The current response is not encoded; forward its contents untouched.
        PASS_THROUGH,
        // Waiting for the headers of the next response.
        AWAIT_HEADERS,
        // Headers were written; encode the HttpContent chunks that follow.
        AWAIT_CONTENT
    }
private static final CharSequence ZERO_LENGTH_HEAD = "HEAD";
private static final CharSequence ZERO_LENGTH_CONNECT = "CONNECT";
private static final int CONTINUE_CODE = HttpResponseStatus.CONTINUE.code();
private final Queue<CharSequence> acceptEncodingQueue = new ArrayDeque<CharSequence>();
private EmbeddedChannel encoder;
private State state = State.AWAIT_HEADERS;
@Override
public boolean acceptOutboundMessage(Object msg) throws Exception {
return msg instanceof HttpContent || msg instanceof HttpResponse;
}
@Override
protected void decode(ChannelHandlerContext ctx, HttpRequest msg, List<Object> out)
throws Exception {
CharSequence acceptedEncoding = msg.headers().get(HttpHeaderNames.ACCEPT_ENCODING);
if (acceptedEncoding == null) {
acceptedEncoding = HttpContentDecoder.IDENTITY;
}
HttpMethod method = msg.method();
if (HttpMethod.HEAD.equals(method)) {
acceptedEncoding = ZERO_LENGTH_HEAD;
} else if (HttpMethod.CONNECT.equals(method)) {
acceptedEncoding = ZERO_LENGTH_CONNECT;
}
acceptEncodingQueue.add(acceptedEncoding);
out.add(ReferenceCountUtil.retain(msg));
}
@Override
protected void encode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception {
final boolean isFull = msg instanceof HttpResponse && msg instanceof LastHttpContent;
switch (state) {
case AWAIT_HEADERS: {
ensureHeaders(msg);
assert encoder == null;
final HttpResponse res = (HttpResponse) msg;
final int code = res.status().code();
final CharSequence acceptEncoding;
if (code == CONTINUE_CODE) {
                    // Do not poll the accepted encoding for a CONTINUE response, because the
                    // final response for the same request will follow.
                    // See https://github.com/netty/netty/issues/4079
acceptEncoding = null;
} else {
// Get the list of encodings accepted by the peer.
acceptEncoding = acceptEncodingQueue.poll();
if (acceptEncoding == null) {
throw new IllegalStateException("cannot send more responses than requests");
}
}
/*
* per rfc2616 4.3 Message Body
* All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
* message-body. All other responses do include a message-body, although it MAY be of zero length.
*
* 9.4 HEAD
* The HEAD method is identical to GET except that the server MUST NOT return a message-body
* in the response.
*
             * Also, we should pass through HTTP/1.0 responses, as 'Transfer-Encoding: chunked' is not supported.
*
* See https://github.com/netty/netty/issues/5382
*/
if (isPassthru(res.protocolVersion(), code, acceptEncoding)) {
if (isFull) {
out.add(ReferenceCountUtil.retain(res));
} else {
out.add(res);
// Pass through all following contents.
state = State.PASS_THROUGH;
}
break;
}
if (isFull) {
                    // Pass through the full response with empty content and continue waiting for the next response.
if (!((ByteBufHolder) res).content().isReadable()) {
out.add(ReferenceCountUtil.retain(res));
break;
}
}
// Prepare to encode the content.
final Result result = beginEncode(res, acceptEncoding.toString());
// If unable to encode, pass through.
if (result == null) {
if (isFull) {
out.add(ReferenceCountUtil.retain(res));
} else {
out.add(res);
// Pass through all following contents.
state = State.PASS_THROUGH;
}
break;
}
encoder = result.contentEncoder();
// Encode the content and remove or replace the existing headers
// so that the message looks like a decoded message.
res.headers().set(HttpHeaderNames.CONTENT_ENCODING, result.targetContentEncoding());
// Output the rewritten response.
if (isFull) {
                    // Convert the full response into a non-full one, so that the headers
                    // and the encoded content are written separately.
HttpResponse newRes = new DefaultHttpResponse(res.protocolVersion(), res.status());
newRes.headers().set(res.headers());
out.add(newRes);
ensureContent(res);
encodeFullResponse(newRes, (HttpContent) res, out);
break;
} else {
// Make the response chunked to simplify content transformation.
res.headers().remove(HttpHeaderNames.CONTENT_LENGTH);
res.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
out.add(res);
state = State.AWAIT_CONTENT;
if (!(msg instanceof HttpContent)) {
                        // Only break out of the switch statement if there is no content to process.
// See https://github.com/netty/netty/issues/2006
break;
}
// Fall through to encode the content
}
}
case AWAIT_CONTENT: {
ensureContent(msg);
if (encodeContent((HttpContent) msg, out)) {
state = State.AWAIT_HEADERS;
}
break;
}
case PASS_THROUGH: {
ensureContent(msg);
out.add(ReferenceCountUtil.retain(msg));
// Passed through all following contents of the current response.
if (msg instanceof LastHttpContent) {
state = State.AWAIT_HEADERS;
}
break;
}
}
}
private void encodeFullResponse(HttpResponse newRes, HttpContent content, List<Object> out) {
int existingMessages = out.size();
encodeContent(content, out);
if (HttpUtil.isContentLengthSet(newRes)) {
// adjust the content-length header
int messageSize = 0;
for (int i = existingMessages; i < out.size(); i++) {
Object item = out.get(i);
if (item instanceof HttpContent) {
messageSize += ((HttpContent) item).content().readableBytes();
}
}
HttpUtil.setContentLength(newRes, messageSize);
} else {
newRes.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
}
}
private static boolean isPassthru(HttpVersion version, int code, CharSequence httpMethod) {
return code < 200 || code == 204 || code == 304 ||
(httpMethod == ZERO_LENGTH_HEAD || (httpMethod == ZERO_LENGTH_CONNECT && code == 200)) ||
version == HttpVersion.HTTP_1_0;
}
private static void ensureHeaders(HttpObject msg) {
if (!(msg instanceof HttpResponse)) {
throw new IllegalStateException(
"unexpected message type: " +
msg.getClass().getName() + " (expected: " + HttpResponse.class.getSimpleName() + ')');
}
}
private static void ensureContent(HttpObject msg) {
if (!(msg instanceof HttpContent)) {
throw new IllegalStateException(
"unexpected message type: " +
msg.getClass().getName() + " (expected: " + HttpContent.class.getSimpleName() + ')');
}
}
private boolean encodeContent(HttpContent c, List<Object> out) {
ByteBuf content = c.content();
encode(content, out);
if (c instanceof LastHttpContent) {
finishEncode(out);
LastHttpContent last = (LastHttpContent) c;
            // Generate an additional last chunk so the encoded response is properly
            // terminated, preserving any trailing headers.
HttpHeaders headers = last.trailingHeaders();
if (headers.isEmpty()) {
out.add(LastHttpContent.EMPTY_LAST_CONTENT);
} else {
out.add(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
}
return true;
}
return false;
}
/**
* Prepare to encode the HTTP message content.
*
     * @param headers
     *        the response whose content should be encoded
     * @param acceptEncoding
     *        the value of the {@code "Accept-Encoding"} request header
*
* @return the result of preparation, which is composed of the determined
* target content encoding and a new {@link EmbeddedChannel} that
* encodes the content into the target content encoding.
* {@code null} if {@code acceptEncoding} is unsupported or rejected
* and thus the content should be handled as-is (i.e. no encoding).
*/
protected abstract Result beginEncode(HttpResponse headers, String acceptEncoding) throws Exception;
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
cleanupSafely(ctx);
super.handlerRemoved(ctx);
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
cleanupSafely(ctx);
super.channelInactive(ctx);
}
private void cleanup() {
if (encoder != null) {
// Clean-up the previous encoder if not cleaned up correctly.
encoder.finishAndReleaseAll();
encoder = null;
}
}
private void cleanupSafely(ChannelHandlerContext ctx) {
try {
cleanup();
} catch (Throwable cause) {
// If cleanup throws any error we need to propagate it through the pipeline
// so we don't fail to propagate pipeline events.
ctx.fireExceptionCaught(cause);
}
}
private void encode(ByteBuf in, List<Object> out) {
        // Call retain() here, as release() will be called after the buffer is written to the embedded channel.
encoder.writeOutbound(in.retain());
fetchEncoderOutput(out);
}
private void finishEncode(List<Object> out) {
if (encoder.finish()) {
fetchEncoderOutput(out);
}
encoder = null;
}
private void fetchEncoderOutput(List<Object> out) {
for (;;) {
ByteBuf buf = encoder.readOutbound();
if (buf == null) {
break;
}
if (!buf.isReadable()) {
buf.release();
continue;
}
out.add(new DefaultHttpContent(buf));
}
}
public static final class Result {
private final String targetContentEncoding;
private final EmbeddedChannel contentEncoder;
public Result(String targetContentEncoding, EmbeddedChannel contentEncoder) {
if (targetContentEncoding == null) {
throw new NullPointerException("targetContentEncoding");
}
if (contentEncoder == null) {
throw new NullPointerException("contentEncoder");
}
this.targetContentEncoding = targetContentEncoding;
this.contentEncoder = contentEncoder;
}
public String targetContentEncoding() {
return targetContentEncoding;
}
public EmbeddedChannel contentEncoder() {
return contentEncoder;
}
}
}