netty5/codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocket00FrameDecoder.java

/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */
package io.netty.handler.codec.http.websocketx;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.MessageList;
import io.netty.handler.codec.ReplayingDecoder;
import io.netty.handler.codec.TooLongFrameException;

/**
 * Decodes {@link ByteBuf}s into {@link WebSocketFrame}s.
 * <p>
 * For detailed instructions on adding WebSocket support to your HTTP server, take a look at the
 * <tt>WebSocketServer</tt> example located in the {@code io.netty.example.http.websocket} package.
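 * <p>
 * A minimal sketch of installing this decoder together with {@link WebSocket00FrameEncoder} in a pipeline,
 * assuming the opening handshake has already completed (the handler names below are illustrative only):
 * <pre>{@code
 * ChannelPipeline p = channel.pipeline();
 * p.addLast("wsdecoder", new WebSocket00FrameDecoder());
 * p.addLast("wsencoder", new WebSocket00FrameEncoder());
 * }</pre>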
 */
public class WebSocket00FrameDecoder extends ReplayingDecoder<Void> implements WebSocketFrameDecoder {

    static final int DEFAULT_MAX_FRAME_SIZE = 16384;

    private final long maxFrameSize;
    private boolean receivedClosingHandshake;

    public WebSocket00FrameDecoder() {
        this(DEFAULT_MAX_FRAME_SIZE);
    }

    /**
     * Creates a new instance of {@code WebSocket00FrameDecoder} with the specified {@code maxFrameSize}. If the
     * client sends a frame size larger than {@code maxFrameSize}, the channel will be closed.
     *
     * @param maxFrameSize
     *            the maximum frame size to decode
     */
    public WebSocket00FrameDecoder(int maxFrameSize) {
        this.maxFrameSize = maxFrameSize;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, MessageList<Object> out) throws Exception {
        // Discard all data received if closing handshake was received before.
        if (receivedClosingHandshake) {
            in.skipBytes(actualReadableBytes());
            return;
        }

        // Decode a frame otherwise.
        byte type = in.readByte();
        WebSocketFrame frame;
        if ((type & 0x80) == 0x80) {
            // If the MSB on type is set, decode the frame length
            frame = decodeBinaryFrame(ctx, type, in);
        } else {
            // Decode a 0xff terminated UTF-8 string
            frame = decodeTextFrame(ctx, in);
        }

        // decodeTextFrame() returns null when the 0xFF delimiter has not arrived yet; in that case
        // add nothing and wait for more data instead of putting a null message into the output list.
        if (frame != null) {
            out.add(frame);
        }
    }

    private WebSocketFrame decodeBinaryFrame(ChannelHandlerContext ctx, byte type, ByteBuf buffer) {
        long frameSize = 0;
        int lengthFieldSize = 0;
        byte b;
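        // The frame length is encoded as a sequence of 7-bit groups: each byte contributes its low
        // 7 bits to the length, and its high bit (0x80) signals that another length byte follows.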
        do {
            b = buffer.readByte();
            frameSize <<= 7;
            frameSize |= b & 0x7f;
            if (frameSize > maxFrameSize) {
                throw new TooLongFrameException();
            }
            lengthFieldSize++;
            if (lengthFieldSize > 8) {
                // Perhaps a malicious peer?
                throw new TooLongFrameException();
            }
        } while ((b & 0x80) == 0x80);

        if (type == (byte) 0xFF && frameSize == 0) {
            receivedClosingHandshake = true;
            return new CloseWebSocketFrame();
        }

        ByteBuf payload = ctx.alloc().buffer((int) frameSize);
        buffer.readBytes(payload);
        return new BinaryWebSocketFrame(payload);
    }

    private WebSocketFrame decodeTextFrame(ChannelHandlerContext ctx, ByteBuf buffer) {
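        // A text frame is a UTF-8 string terminated by a single 0xFF byte. Search only the bytes that
        // have actually been received so far, since the delimiter may not have arrived yet.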
        int ridx = buffer.readerIndex();
        int rbytes = actualReadableBytes();
        int delimPos = buffer.indexOf(ridx, ridx + rbytes, (byte) 0xFF);
        if (delimPos == -1) {
            // Frame delimiter (0xFF) not found
            if (rbytes > maxFrameSize) {
                // Frame length exceeded the maximum
                throw new TooLongFrameException();
            } else {
                // Wait until more data is received
                return null;
            }
        }

        int frameSize = delimPos - ridx;
        if (frameSize > maxFrameSize) {
            throw new TooLongFrameException();
        }

        ByteBuf binaryData = ctx.alloc().buffer(frameSize);
        buffer.readBytes(binaryData);
        buffer.skipBytes(1);

        int ffDelimPos = binaryData.indexOf(binaryData.readerIndex(), binaryData.writerIndex(), (byte) 0xFF);
        if (ffDelimPos >= 0) {
            throw new IllegalArgumentException("a text frame should not contain 0xFF.");
        }

        return new TextWebSocketFrame(binaryData);
    }
}