/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

package io.netty.handler.codec.http;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.MessageToMessageCodec;
import io.netty.handler.codec.http.HttpHeaders.Names;
import io.netty.handler.codec.http.HttpHeaders.Values;
import io.netty.util.ReferenceCountUtil;

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

/**
 * Encodes the content of the outbound {@link HttpResponse} and {@link HttpContent}.
 * The original content is replaced with the new content encoded by the
 * {@link EmbeddedChannel}, which is created by {@link #beginEncode(HttpResponse, String)}.
 * Once encoding is finished, the value of the <tt>'Content-Encoding'</tt> header
 * is set to the target content encoding, as returned by
 * {@link #beginEncode(HttpResponse, String)}.
 * Also, the <tt>'Content-Length'</tt> header is updated to the length of the
 * encoded content. If there is no supported or allowed encoding in the
 * corresponding {@link HttpRequest}'s {@code "Accept-Encoding"} header,
 * {@link #beginEncode(HttpResponse, String)} should return {@code null} so that
 * no encoding occurs (i.e. pass-through).
 * <p>
 * Please note that this is an abstract class. You have to extend this class
 * and implement {@link #beginEncode(HttpResponse, String)} properly to make
 * this class functional. For example, refer to the source code of
 * {@link HttpContentCompressor}.
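 * <p>
 * As a rough sketch (this is not the {@link HttpContentCompressor} implementation
 * itself; the {@code GzipOnlyEncoder} name and its gzip-only policy are
 * illustrative assumptions), a subclass could look like:
 * <pre>{@code
 * public class GzipOnlyEncoder extends HttpContentEncoder {
 *     @Override
 *     protected Result beginEncode(HttpResponse headers, String acceptEncoding) {
 *         // Encode only when the client advertises gzip support; otherwise pass through.
 *         if (acceptEncoding == null || !acceptEncoding.contains("gzip")) {
 *             return null;
 *         }
 *         return new Result("gzip", new EmbeddedChannel(
 *                 ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP)));
 *     }
 * }
 * }</pre>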
 * <p>
 * This handler must be placed after {@link HttpObjectEncoder} in the pipeline
 * so that this handler can intercept HTTP responses before {@link HttpObjectEncoder}
 * converts them into {@link ByteBuf}s.
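 * A typical setup (the {@code p} and {@code ch} names are illustrative) would be:
 * <pre>{@code
 * ChannelPipeline p = ch.pipeline();
 * p.addLast("decoder", new HttpRequestDecoder());
 * p.addLast("encoder", new HttpResponseEncoder());
 * p.addLast("compressor", new HttpContentCompressor());
 * }</pre>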
 */
public abstract class HttpContentEncoder extends MessageToMessageCodec<HttpRequest, HttpObject> {

    private enum State {
        PASS_THROUGH,
        AWAIT_HEADERS,
        AWAIT_CONTENT
    }

    private final Queue<String> acceptEncodingQueue = new ArrayDeque<String>();
    private String acceptEncoding;
    private EmbeddedChannel encoder;
    private State state = State.AWAIT_HEADERS;

    @Override
    public boolean acceptOutboundMessage(Object msg) throws Exception {
        return msg instanceof HttpContent || msg instanceof HttpResponse;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, HttpRequest msg, List<Object> out)
            throws Exception {
        String acceptedEncoding = msg.headers().get(HttpHeaders.Names.ACCEPT_ENCODING);
        if (acceptedEncoding == null) {
            acceptedEncoding = HttpHeaders.Values.IDENTITY;
        }
        acceptEncodingQueue.add(acceptedEncoding);
        out.add(ReferenceCountUtil.retain(msg));
    }

    @Override
    protected void encode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception {
        final boolean isFull = msg instanceof HttpResponse && msg instanceof LastHttpContent;
        switch (state) {
            case AWAIT_HEADERS: {
                ensureHeaders(msg);
                assert encoder == null;

                final HttpResponse res = (HttpResponse) msg;

                if (res.status().code() == 100) {
                    if (isFull) {
                        out.add(ReferenceCountUtil.retain(res));
                    } else {
                        out.add(res);
                        // Pass through all following contents.
                        state = State.PASS_THROUGH;
                    }
                    break;
                }

                // Get the list of encodings accepted by the peer.
                acceptEncoding = acceptEncodingQueue.poll();
                if (acceptEncoding == null) {
                    throw new IllegalStateException("cannot send more responses than requests");
                }

                if (isFull) {
                    // Pass through the full response with empty content and continue waiting for the next response.
                    if (!((ByteBufHolder) res).content().isReadable()) {
                        out.add(ReferenceCountUtil.retain(res));
                        break;
                    }
                }

                // Prepare to encode the content.
                final Result result = beginEncode(res, acceptEncoding);

                // If unable to encode, pass through.
                if (result == null) {
                    if (isFull) {
                        out.add(ReferenceCountUtil.retain(res));
                    } else {
                        out.add(res);
                        // Pass through all following contents.
                        state = State.PASS_THROUGH;
                    }
                    break;
                }

                encoder = result.contentEncoder();

                // Encode the content and remove or replace the existing headers
                // so that the message looks like a decoded message.
                res.headers().set(Names.CONTENT_ENCODING, result.targetContentEncoding());

                // Make the response chunked to simplify content transformation.
                res.headers().remove(Names.CONTENT_LENGTH);
                res.headers().set(Names.TRANSFER_ENCODING, Values.CHUNKED);

                // Output the rewritten response.
                if (isFull) {
                    // Turn the full response into a content-less one; its content is encoded below.
                    HttpResponse newRes = new DefaultHttpResponse(res.protocolVersion(), res.status());
                    newRes.headers().set(res.headers());
                    out.add(newRes);
                    // Fall through to encode the content of the full response.
                } else {
                    out.add(res);
                    state = State.AWAIT_CONTENT;
                    if (!(msg instanceof HttpContent)) {
                        // Only break out of the switch statement if there is no content to process.
                        // See https://github.com/netty/netty/issues/2006
                        break;
                    }
                    // Fall through to encode the content.
                }
            }
            case AWAIT_CONTENT: {
                ensureContent(msg);
                if (encodeContent((HttpContent) msg, out)) {
                    state = State.AWAIT_HEADERS;
                }
                break;
            }
            case PASS_THROUGH: {
                ensureContent(msg);
                out.add(ReferenceCountUtil.retain(msg));
                // Passed through all following contents of the current response.
                if (msg instanceof LastHttpContent) {
                    state = State.AWAIT_HEADERS;
                }
                break;
            }
        }
    }

    private static void ensureHeaders(HttpObject msg) {
        if (!(msg instanceof HttpResponse)) {
            throw new IllegalStateException(
                    "unexpected message type: " +
                    msg.getClass().getName() + " (expected: " + HttpResponse.class.getSimpleName() + ')');
        }
    }

    private static void ensureContent(HttpObject msg) {
        if (!(msg instanceof HttpContent)) {
            throw new IllegalStateException(
                    "unexpected message type: " +
                    msg.getClass().getName() + " (expected: " + HttpContent.class.getSimpleName() + ')');
        }
    }

    private boolean encodeContent(HttpContent c, List<Object> out) {
        ByteBuf content = c.content();

        encode(content, out);

        if (c instanceof LastHttpContent) {
            finishEncode(out);
            LastHttpContent last = (LastHttpContent) c;

            // Generate an additional chunk if the encoder produced
            // its last output on closure.
            HttpHeaders headers = last.trailingHeaders();
            if (headers.isEmpty()) {
                out.add(LastHttpContent.EMPTY_LAST_CONTENT);
            } else {
                out.add(new ComposedLastHttpContent(headers));
            }
            return true;
        }
        return false;
    }

    /**
     * Prepare to encode the HTTP message content.
     *
     * @param headers
     *        the response whose headers and content are to be encoded
     * @param acceptEncoding
     *        the value of the {@code "Accept-Encoding"} header
     *
     * @return the result of preparation, which is composed of the determined
     *         target content encoding and a new {@link EmbeddedChannel} that
     *         encodes the content into the target content encoding.
     *         {@code null} if {@code acceptEncoding} is unsupported or rejected
     *         and thus the content should be handled as-is (i.e. no encoding).
     */
    protected abstract Result beginEncode(HttpResponse headers, String acceptEncoding) throws Exception;

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
        cleanup();
        super.handlerRemoved(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        cleanup();
        super.channelInactive(ctx);
    }

    private void cleanup() {
        if (encoder != null) {
            // Clean up the previous encoder if it was not cleaned up correctly.
            if (encoder.finish()) {
                for (;;) {
                    ByteBuf buf = encoder.readOutbound();
                    if (buf == null) {
                        break;
                    }
                    // Release the buffer.
                    // https://github.com/netty/netty/issues/1524
                    buf.release();
                }
            }
            encoder = null;
        }
    }

    private void encode(ByteBuf in, List<Object> out) {
        // Call retain() here because the embedded channel releases the buffer once it has been written.
        encoder.writeOutbound(in.retain());
        fetchEncoderOutput(out);
    }

    private void finishEncode(List<Object> out) {
        if (encoder.finish()) {
            fetchEncoderOutput(out);
        }
        encoder = null;
    }
|
|
|
|
|
2013-07-08 12:03:40 +02:00
|
|
|
private void fetchEncoderOutput(List<Object> out) {
|
2012-06-07 14:06:56 +02:00
|
|
|
for (;;) {
|
2013-12-16 14:22:47 +01:00
|
|
|
ByteBuf buf = encoder.readOutbound();
|
2012-06-07 14:06:56 +02:00
|
|
|
if (buf == null) {
|
|
|
|
break;
|
|
|
|
}
|
2013-07-10 13:00:42 +02:00
|
|
|
if (!buf.isReadable()) {
|
2013-07-10 18:10:52 +02:00
|
|
|
buf.release();
|
2013-07-10 13:00:42 +02:00
|
|
|
continue;
|
|
|
|
}
|
2013-07-05 06:38:25 +02:00
|
|
|
out.add(new DefaultHttpContent(buf));
|
2012-06-07 14:06:56 +02:00
|
|
|
}
|
2009-11-03 07:47:30 +01:00
|
|
|
}
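The drain loop in fetchEncoderOutput() above follows a common pattern: poll the encoder's outbound queue until it yields null, dropping empty buffers and collecting readable ones. A minimal stand-alone sketch of that pattern, using a plain `Queue<byte[]>` as a hypothetical stand-in for `EmbeddedChannel.readOutbound()`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the drain-until-null loop. The real code also
// release()s the empty buffers it skips; byte[] needs no such step.
public class OutputDrainer {
    public static List<byte[]> drain(Queue<byte[]> outbound) {
        List<byte[]> out = new ArrayList<>();
        for (;;) {
            byte[] buf = outbound.poll(); // null once the queue is empty
            if (buf == null) {
                break;
            }
            if (buf.length == 0) {
                continue; // skip non-readable (empty) output
            }
            out.add(buf);
        }
        return out;
    }
}
```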
|
2011-10-23 07:34:03 +02:00
|
|
|
|
|
|
|
public static final class Result {
|
|
|
|
private final String targetContentEncoding;
|
Revamp the core API to reduce memory footprint and consumption
The API changes made so far turned out to increase the memory footprint
and consumption while our intention was actually decreasing them.
Memory consumption issue:
When there are many connections which do not exchange data frequently,
the old Netty 4 API spent a lot more memory than 3 because it always
allocated a per-handler buffer for each connection unless otherwise
explicitly stated by a user. In a usual real-world load, a client
doesn't always send requests without pausing, so the idea of having a
buffer whose life cycle is bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handle connections that last for a relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far such as better thread model and going back to the way how we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
DatagramChannelConfig.receivePacketSize is gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
2013-05-28 13:40:19 +02:00
|
|
|
private final EmbeddedChannel contentEncoder;
|
2011-10-23 07:34:03 +02:00
|
|
|
|
2013-05-28 13:40:19 +02:00
|
|
|
public Result(String targetContentEncoding, EmbeddedChannel contentEncoder) {
|
2011-10-23 07:34:03 +02:00
|
|
|
if (targetContentEncoding == null) {
|
|
|
|
throw new NullPointerException("targetContentEncoding");
|
|
|
|
}
|
|
|
|
if (contentEncoder == null) {
|
|
|
|
throw new NullPointerException("contentEncoder");
|
|
|
|
}
|
|
|
|
|
|
|
|
this.targetContentEncoding = targetContentEncoding;
|
|
|
|
this.contentEncoder = contentEncoder;
|
|
|
|
}
|
|
|
|
|
2013-01-17 06:48:03 +01:00
|
|
|
public String targetContentEncoding() {
|
2011-10-23 07:34:03 +02:00
|
|
|
return targetContentEncoding;
|
|
|
|
}
|
|
|
|
|
2013-05-28 13:40:19 +02:00
|
|
|
public EmbeddedChannel contentEncoder() {
|
2011-10-23 07:34:03 +02:00
|
|
|
return contentEncoder;
|
|
|
|
}
|
|
|
|
}
|
2009-11-03 07:47:30 +01:00
|
|
|
}
|