netty5/codec-http/src/test/java/io/netty/handler/codec/http/HttpObjectAggregatorTest.java

/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.http;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DecoderResultProvider;
import io.netty.handler.codec.TooLongFrameException;
import io.netty.handler.codec.http.HttpHeaders.Names;
import io.netty.util.CharsetUtil;
import org.easymock.EasyMock;
import org.junit.Test;
import java.nio.channels.ClosedChannelException;
import java.util.List;
import static io.netty.util.ReferenceCountUtil.*;
import static org.hamcrest.CoreMatchers.*;
import static org.junit.Assert.*;
public class HttpObjectAggregatorTest {
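
/**
 * A request followed by two content chunks and a terminating
 * {@link LastHttpContent} should be aggregated into a single
 * {@link FullHttpRequest} whose Content-Length equals the total
 * size of the chunks.
 */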
@Test
public void testAggregate() {
HttpObjectAggregator aggr = new HttpObjectAggregator(1024 * 1024);
EmbeddedChannel embedder = new EmbeddedChannel(aggr);
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "http://localhost");
HttpHeaders.setHeader(message, "X-Test", true);
HttpContent chunk1 = new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII));
HttpContent chunk2 = new DefaultHttpContent(Unpooled.copiedBuffer("test2", CharsetUtil.US_ASCII));
HttpContent chunk3 = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER);
assertFalse(embedder.writeInbound(message));
assertFalse(embedder.writeInbound(chunk1));
assertFalse(embedder.writeInbound(chunk2));
// The terminating content should trigger a channelRead event, so writeInbound() returns true.
assertTrue(embedder.writeInbound(chunk3));
assertTrue(embedder.finish());
FullHttpRequest aggregatedMessage = embedder.readInbound();
assertNotNull(aggregatedMessage);
assertEquals(chunk1.content().readableBytes() + chunk2.content().readableBytes(),
HttpHeaders.getContentLength(aggregatedMessage));
assertEquals(Boolean.TRUE.toString(), aggregatedMessage.headers().get("X-Test"));
checkContentBuffer(aggregatedMessage);
assertNull(embedder.readInbound());
}
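
/**
 * Asserts that the aggregated content is a flat {@link CompositeByteBuf},
 * i.e. decomposing it yields no nested composite buffers, then releases
 * the message.
 */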
private static void checkContentBuffer(FullHttpRequest aggregatedMessage) {
CompositeByteBuf buffer = (CompositeByteBuf) aggregatedMessage.content();
assertEquals(2, buffer.numComponents());
List<ByteBuf> buffers = buffer.decompose(0, buffer.capacity());
assertEquals(2, buffers.size());
for (ByteBuf buf: buffers) {
// None of these should be composite; the buffer is decomposed first to avoid a deep hierarchy.
assertFalse(buf instanceof CompositeByteBuf);
}
aggregatedMessage.release();
}
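
/**
 * Same as {@link #testAggregate()}, but chunked and with a trailer whose
 * trailing headers should be merged into the aggregated message's headers.
 */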
@Test
public void testAggregateWithTrailer() {
HttpObjectAggregator aggr = new HttpObjectAggregator(1024 * 1024);
EmbeddedChannel embedder = new EmbeddedChannel(aggr);
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "http://localhost");
HttpHeaders.setHeader(message, "X-Test", true);
HttpHeaders.setTransferEncodingChunked(message);
HttpContent chunk1 = new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII));
HttpContent chunk2 = new DefaultHttpContent(Unpooled.copiedBuffer("test2", CharsetUtil.US_ASCII));
LastHttpContent trailer = new DefaultLastHttpContent();
trailer.trailingHeaders().set("X-Trailer", true);
assertFalse(embedder.writeInbound(message));
assertFalse(embedder.writeInbound(chunk1));
assertFalse(embedder.writeInbound(chunk2));
// The trailer should trigger a channelRead event, so writeInbound() returns true.
assertTrue(embedder.writeInbound(trailer));
assertTrue(embedder.finish());
FullHttpRequest aggregatedMessage = embedder.readInbound();
assertNotNull(aggregatedMessage);
assertEquals(chunk1.content().readableBytes() + chunk2.content().readableBytes(),
HttpHeaders.getContentLength(aggregatedMessage));
assertEquals(Boolean.TRUE.toString(), aggregatedMessage.headers().get("X-Test"));
assertEquals(Boolean.TRUE.toString(), aggregatedMessage.headers().get("X-Trailer"));
checkContentBuffer(aggregatedMessage);
assertNull(embedder.readInbound());
}
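
/**
 * Content exceeding the configured maximum should make the aggregator
 * reply with '413 Request Entity Too Large' and close the connection;
 * further writes must fail with a {@link ClosedChannelException}.
 */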
@Test
public void testOversizedRequest() {
EmbeddedChannel embedder = new EmbeddedChannel(new HttpObjectAggregator(4));
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "http://localhost");
HttpContent chunk1 = new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII));
HttpContent chunk2 = new DefaultHttpContent(Unpooled.copiedBuffer("test2", CharsetUtil.US_ASCII));
HttpContent chunk3 = LastHttpContent.EMPTY_LAST_CONTENT;
assertFalse(embedder.writeInbound(message));
assertFalse(embedder.writeInbound(chunk1));
assertFalse(embedder.writeInbound(chunk2));
FullHttpResponse response = embedder.readOutbound();
assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status());
assertEquals("0", response.headers().get(Names.CONTENT_LENGTH));
assertFalse(embedder.isOpen());
try {
assertFalse(embedder.writeInbound(chunk3));
fail();
} catch (Exception e) {
assertTrue(e instanceof ClosedChannelException);
}
assertFalse(embedder.finish());
}
@Test
public void testOversizedRequestWithoutKeepAlive() {
// Send an HTTP/1.0 request with no keep-alive header.
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.PUT, "http://localhost");
HttpHeaders.setContentLength(message, 5);
checkOversizedRequest(message);
}
@Test
public void testOversizedRequestWithContentLength() {
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "http://localhost");
HttpHeaders.setContentLength(message, 5);
checkOversizedRequest(message);
}
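
/**
 * An oversized 'Expect: 100-continue' request should be rejected with
 * '413 Request Entity Too Large', but the connection must stay open so
 * that a subsequent valid request can still be aggregated.
 */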
@Test
public void testOversizedRequestWith100Continue() {
EmbeddedChannel embedder = new EmbeddedChannel(new HttpObjectAggregator(8));
// Send an oversized request with 'Expect: 100-continue'.
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "http://localhost");
HttpHeaders.set100ContinueExpected(message);
HttpHeaders.setContentLength(message, 16);
HttpContent chunk1 = releaseLater(new DefaultHttpContent(Unpooled.copiedBuffer("some", CharsetUtil.US_ASCII)));
HttpContent chunk2 = releaseLater(new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII)));
HttpContent chunk3 = LastHttpContent.EMPTY_LAST_CONTENT;
// Send a request with 100-continue + large Content-Length header value.
assertFalse(embedder.writeInbound(message));
// The aggregator should respond with '413 Request Entity Too Large'.
FullHttpResponse response = embedder.readOutbound();
assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status());
assertEquals("0", response.headers().get(Names.CONTENT_LENGTH));
// An ill-behaved client could keep sending data regardless; such data should be discarded.
assertFalse(embedder.writeInbound(chunk1));
// The aggregator should not close the connection because keep-alive is on.
assertTrue(embedder.isOpen());
// Now send a valid request.
HttpRequest message2 = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "http://localhost");
assertFalse(embedder.writeInbound(message2));
assertFalse(embedder.writeInbound(chunk2));
assertTrue(embedder.writeInbound(chunk3));
FullHttpRequest fullMsg = embedder.readInbound();
assertNotNull(fullMsg);
assertEquals(
chunk2.content().readableBytes() + chunk3.content().readableBytes(),
HttpHeaders.getContentLength(fullMsg));
assertEquals(HttpHeaders.getContentLength(fullMsg), fullMsg.content().readableBytes());
fullMsg.release();
assertFalse(embedder.finish());
}
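
/**
 * Like {@link #testOversizedRequestWith100Continue()}, but with an
 * {@link HttpRequestDecoder} in front of the aggregator: after the
 * oversized request is rejected, the decoder must be reset so the next
 * request on the same connection decodes cleanly.
 */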
@Test
public void testOversizedRequestWith100ContinueAndDecoder() {
EmbeddedChannel embedder = new EmbeddedChannel(new HttpRequestDecoder(), new HttpObjectAggregator(4));
embedder.writeInbound(Unpooled.copiedBuffer(
"PUT /upload HTTP/1.1\r\n" +
"Expect: 100-continue\r\n" +
"Content-Length: 100\r\n\r\n", CharsetUtil.US_ASCII));
assertNull(embedder.readInbound());
FullHttpResponse response = embedder.readOutbound();
assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status());
assertEquals("0", response.headers().get(Names.CONTENT_LENGTH));
// Keep-alive is on by default in HTTP/1.1, so the connection should be still alive.
assertTrue(embedder.isOpen());
// The decoder should be reset by the aggregator at this point and be able to decode the next request.
embedder.writeInbound(Unpooled.copiedBuffer("GET /max-upload-size HTTP/1.1\r\n\r\n", CharsetUtil.US_ASCII));
FullHttpRequest request = embedder.readInbound();
assertThat(request.method(), is(HttpMethod.GET));
assertThat(request.uri(), is("/max-upload-size"));
assertThat(request.content().readableBytes(), is(0));
request.release();
assertFalse(embedder.finish());
}
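
/**
 * Writes the given request into an aggregator with a 4-byte limit and
 * expects a '413 Request Entity Too Large' response followed by the
 * channel being closed.
 */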
private static void checkOversizedRequest(HttpRequest message) {
EmbeddedChannel embedder = new EmbeddedChannel(new HttpObjectAggregator(4));
assertFalse(embedder.writeInbound(message));
HttpResponse response = embedder.readOutbound();
assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, response.status());
assertEquals("0", response.headers().get(Names.CONTENT_LENGTH));
assertFalse(embedder.isOpen());
assertFalse(embedder.finish());
}
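
/**
 * An oversized response cannot be answered with a status code, so the
 * aggregator should throw a {@link TooLongFrameException} and close the
 * connection instead.
 */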
@Test
public void testOversizedResponse() {
EmbeddedChannel embedder = new EmbeddedChannel(new HttpObjectAggregator(4));
HttpResponse message = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
HttpContent chunk1 = new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII));
HttpContent chunk2 = new DefaultHttpContent(Unpooled.copiedBuffer("test2", CharsetUtil.US_ASCII));
assertFalse(embedder.writeInbound(message));
assertFalse(embedder.writeInbound(chunk1));
try {
embedder.writeInbound(chunk2);
fail();
} catch (TooLongFrameException expected) {
// Expected
}
assertFalse(embedder.isOpen());
assertFalse(embedder.finish());
}
@Test(expected = IllegalArgumentException.class)
public void testInvalidConstructorUsage() {
new HttpObjectAggregator(0);
}
@Test(expected = IllegalArgumentException.class)
public void testInvalidMaxCumulationBufferComponents() {
HttpObjectAggregator aggr = new HttpObjectAggregator(Integer.MAX_VALUE);
aggr.setMaxCumulationBufferComponents(1);
}
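
/**
 * Once the aggregator has been added to a pipeline (handlerAdded has been
 * called), setMaxCumulationBufferComponents() must be rejected.
 */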
@Test(expected = IllegalStateException.class)
public void testSetMaxCumulationBufferComponentsAfterInit() throws Exception {
HttpObjectAggregator aggr = new HttpObjectAggregator(Integer.MAX_VALUE);
ChannelHandlerContext ctx = EasyMock.createMock(ChannelHandlerContext.class);
EasyMock.replay(ctx);
aggr.handlerAdded(ctx);
aggr.setMaxCumulationBufferComponents(10);
}
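
/**
 * Aggregation should also work when the Transfer-Encoding header is set
 * explicitly, even with a non-lowercase 'Chunked' token.
 */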
@Test
public void testAggregateTransferEncodingChunked() {
HttpObjectAggregator aggr = new HttpObjectAggregator(1024 * 1024);
EmbeddedChannel embedder = new EmbeddedChannel(aggr);
HttpRequest message = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "http://localhost");
HttpHeaders.setHeader(message, "X-Test", true);
HttpHeaders.setHeader(message, "Transfer-Encoding", "Chunked");
HttpContent chunk1 = new DefaultHttpContent(Unpooled.copiedBuffer("test", CharsetUtil.US_ASCII));
HttpContent chunk2 = new DefaultHttpContent(Unpooled.copiedBuffer("test2", CharsetUtil.US_ASCII));
HttpContent chunk3 = LastHttpContent.EMPTY_LAST_CONTENT;
assertFalse(embedder.writeInbound(message));
assertFalse(embedder.writeInbound(chunk1));
assertFalse(embedder.writeInbound(chunk2));
// The terminating content should trigger a channelRead event, so writeInbound() returns true.
assertTrue(embedder.writeInbound(chunk3));
assertTrue(embedder.finish());
FullHttpRequest aggregatedMessage = embedder.readInbound();
assertNotNull(aggregatedMessage);
assertEquals(chunk1.content().readableBytes() + chunk2.content().readableBytes(),
HttpHeaders.getContentLength(aggregatedMessage));
assertEquals(Boolean.TRUE.toString(), aggregatedMessage.headers().get("X-Test"));
checkContentBuffer(aggregatedMessage);
assertNull(embedder.readInbound());
}
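
/**
 * A malformed request line should still surface as a {@link FullHttpRequest}
 * whose decoder result is marked as failed.
 */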
@Test
public void testBadRequest() {
EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestDecoder(), new HttpObjectAggregator(1024 * 1024));
ch.writeInbound(Unpooled.copiedBuffer("GET / HTTP/1.0 with extra\r\n", CharsetUtil.UTF_8));
Object inbound = ch.readInbound();
assertThat(inbound, is(instanceOf(FullHttpRequest.class)));
assertTrue(((DecoderResultProvider) inbound).decoderResult().isFailure());
assertNull(ch.readInbound());
ch.finish();
}
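
/**
 * A malformed status line should still surface as a {@link FullHttpResponse}
 * whose decoder result is marked as failed.
 */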
@Test
public void testBadResponse() throws Exception {
EmbeddedChannel ch = new EmbeddedChannel(new HttpResponseDecoder(), new HttpObjectAggregator(1024 * 1024));
ch.writeInbound(Unpooled.copiedBuffer("HTTP/1.0 BAD_CODE Bad Server\r\n", CharsetUtil.UTF_8));
Object inbound = ch.readInbound();
assertThat(inbound, is(instanceOf(FullHttpResponse.class)));
assertTrue(((DecoderResultProvider) inbound).decoderResult().isFailure());
assertNull(ch.readInbound());
ch.finish();
}
}