/*
 * Copyright 2012 The Netty Project
 *
 * The Netty Project licenses this file to you under the Apache License,
 * version 2.0 (the "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at:
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */
package io.netty.handler.codec.http.websocketx;

import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandler;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.util.AttributeKey;

import java.util.List;

import static io.netty.handler.codec.http.HttpVersion.*;
import static io.netty.util.internal.ObjectUtil.*;

/**
 * This handler does all the heavy lifting for you to run a websocket server.
 *
 * It takes care of websocket handshaking as well as processing of control frames (Close, Ping, Pong). Text and Binary
 * data frames are passed to the next handler in the pipeline (implemented by you) for processing.
 *
 * See <tt>io.netty.example.http.websocketx.html5.WebSocketServer</tt> for usage.
 *
 * The implementation of this handler assumes that you just want to run a websocket server and not process other types
 * of HTTP requests (like GET and POST). If you wish to support both HTTP requests and websockets in the one server,
 * refer to the <tt>io.netty.example.http.websocketx.server.WebSocketServer</tt> example.
 *
 * To know when a handshake was done you can intercept
 * {@link ChannelInboundHandler#userEventTriggered(ChannelHandlerContext, Object)} and check if the event is an
 * instance of {@link HandshakeComplete}; the event will contain extra information about the handshake such as the
 * request and the selected subprotocol.
 */
public class WebSocketServerProtocolHandler extends WebSocketProtocolHandler {

    /**
     * Events that are fired to notify about handshake status
     */
    public enum ServerHandshakeStateEvent {
        /**
         * The handshake was completed successfully and the channel was upgraded to websockets.
         *
         * @deprecated in favor of the {@link HandshakeComplete} class,
         * which provides extra information about the handshake
         */
        @Deprecated
        HANDSHAKE_COMPLETE,

        /**
         * The handshake timed out
         */
        HANDSHAKE_TIMEOUT
    }

    /**
     * The handshake was completed successfully and the channel was upgraded to websockets.
     */
    public static final class HandshakeComplete {
        private final String requestUri;
        private final HttpHeaders requestHeaders;
        private final String selectedSubprotocol;

        HandshakeComplete(String requestUri, HttpHeaders requestHeaders, String selectedSubprotocol) {
            this.requestUri = requestUri;
            this.requestHeaders = requestHeaders;
            this.selectedSubprotocol = selectedSubprotocol;
        }

        public String requestUri() {
            return requestUri;
        }

        public HttpHeaders requestHeaders() {
            return requestHeaders;
        }

        public String selectedSubprotocol() {
            return selectedSubprotocol;
        }
    }

    private static final AttributeKey<WebSocketServerHandshaker> HANDSHAKER_ATTR_KEY =
            AttributeKey.valueOf(WebSocketServerHandshaker.class, "HANDSHAKER");

    private static final long DEFAULT_HANDSHAKE_TIMEOUT_MS = 10000L;

    private final String websocketPath;
    private final String subprotocols;
    private final boolean allowExtensions;
    private final int maxFramePayloadLength;
    private final boolean allowMaskMismatch;
    private final boolean checkStartsWith;
    private final long handshakeTimeoutMillis;

    public WebSocketServerProtocolHandler(String websocketPath) {
        this(websocketPath, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, long handshakeTimeoutMillis) {
        // Forward the timeout instead of silently dropping it.
        this(websocketPath, null, false, handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, boolean checkStartsWith) {
        this(websocketPath, checkStartsWith, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, boolean checkStartsWith, long handshakeTimeoutMillis) {
        this(websocketPath, null, false, 65536, false, checkStartsWith, handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols) {
        this(websocketPath, subprotocols, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols, long handshakeTimeoutMillis) {
        this(websocketPath, subprotocols, false, handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols, boolean allowExtensions) {
        this(websocketPath, subprotocols, allowExtensions, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols, boolean allowExtensions,
                                          long handshakeTimeoutMillis) {
        this(websocketPath, subprotocols, allowExtensions, 65536, handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize, long handshakeTimeoutMillis) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, false, handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize, boolean allowMaskMismatch) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, allowMaskMismatch,
             DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols, boolean allowExtensions,
                                          int maxFrameSize, boolean allowMaskMismatch, long handshakeTimeoutMillis) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, allowMaskMismatch, false,
             handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize, boolean allowMaskMismatch,
                                          boolean checkStartsWith) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, allowMaskMismatch, checkStartsWith,
             DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize, boolean allowMaskMismatch,
                                          boolean checkStartsWith, long handshakeTimeoutMillis) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, allowMaskMismatch, checkStartsWith, true,
             handshakeTimeoutMillis);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols,
                                          boolean allowExtensions, int maxFrameSize, boolean allowMaskMismatch,
                                          boolean checkStartsWith, boolean dropPongFrames) {
        this(websocketPath, subprotocols, allowExtensions, maxFrameSize, allowMaskMismatch, checkStartsWith,
             dropPongFrames, DEFAULT_HANDSHAKE_TIMEOUT_MS);
    }

    public WebSocketServerProtocolHandler(String websocketPath, String subprotocols, boolean allowExtensions,
                                          int maxFrameSize, boolean allowMaskMismatch, boolean checkStartsWith,
                                          boolean dropPongFrames, long handshakeTimeoutMillis) {
        super(dropPongFrames);
        this.websocketPath = websocketPath;
        this.subprotocols = subprotocols;
        this.allowExtensions = allowExtensions;
        maxFramePayloadLength = maxFrameSize;
        this.allowMaskMismatch = allowMaskMismatch;
        this.checkStartsWith = checkStartsWith;
        this.handshakeTimeoutMillis = checkPositive(handshakeTimeoutMillis, "handshakeTimeoutMillis");
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        ChannelPipeline cp = ctx.pipeline();
        if (cp.get(WebSocketServerProtocolHandshakeHandler.class) == null) {
            // Add the WebSocketHandshakeHandler before this one.
            ctx.pipeline().addBefore(ctx.name(), WebSocketServerProtocolHandshakeHandler.class.getName(),
                    new WebSocketServerProtocolHandshakeHandler(websocketPath, subprotocols,
                            allowExtensions, maxFramePayloadLength, allowMaskMismatch, checkStartsWith,
                            handshakeTimeoutMillis));
        }
        if (cp.get(Utf8FrameValidator.class) == null) {
            // Add the UTF-8 checking before this one.
            ctx.pipeline().addBefore(ctx.name(), Utf8FrameValidator.class.getName(),
                    new Utf8FrameValidator());
        }
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, WebSocketFrame frame, List<Object> out) throws Exception {
        if (frame instanceof CloseWebSocketFrame) {
            WebSocketServerHandshaker handshaker = getHandshaker(ctx.channel());
            if (handshaker != null) {
                frame.retain();
                handshaker.close(ctx.channel(), (CloseWebSocketFrame) frame);
            } else {
                ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
            }
            return;
        }
        super.decode(ctx, frame, out);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        if (cause instanceof WebSocketHandshakeException) {
            FullHttpResponse response = new DefaultFullHttpResponse(
                    HTTP_1_1, HttpResponseStatus.BAD_REQUEST, Unpooled.wrappedBuffer(cause.getMessage().getBytes()));
            ctx.channel().writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        } else {
            ctx.fireExceptionCaught(cause);
            ctx.close();
        }
    }

    static WebSocketServerHandshaker getHandshaker(Channel channel) {
        return channel.attr(HANDSHAKER_ATTR_KEY).get();
    }

    static void setHandshaker(Channel channel, WebSocketServerHandshaker handshaker) {
        channel.attr(HANDSHAKER_ATTR_KEY).set(handshaker);
    }

    static ChannelHandler forbiddenHttpRequestResponder() {
        return new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                if (msg instanceof FullHttpRequest) {
                    ((FullHttpRequest) msg).release();
                    FullHttpResponse response =
                            new DefaultFullHttpResponse(HTTP_1_1, HttpResponseStatus.FORBIDDEN);
                    ctx.channel().writeAndFlush(response);
                } else {
                    ctx.fireChannelRead(msg);
                }
            }
        };
    }
}
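The class Javadoc above suggests intercepting userEventTriggered to learn when the handshake finished. Below is a minimal sketch of how a server might wire this handler into a pipeline and react to the HandshakeComplete event; it is a configuration illustration, not part of this class. The initializer and the echo behavior are assumptions, and bootstrap/error handling is elided.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler.HandshakeComplete;

// Illustrative initializer; the class name and "/ws" path are assumptions.
public class WebSocketServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new HttpServerCodec(),
                new HttpObjectAggregator(65536),           // handshake needs a FullHttpRequest
                new WebSocketServerProtocolHandler("/ws"), // handshaking + Close/Ping/Pong handling
                new SimpleChannelInboundHandler<TextWebSocketFrame>() {
                    @Override
                    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                        if (evt instanceof HandshakeComplete) {
                            HandshakeComplete handshake = (HandshakeComplete) evt;
                            System.out.println("Handshake done: " + handshake.requestUri()
                                    + ", subprotocol=" + handshake.selectedSubprotocol());
                        } else {
                            super.userEventTriggered(ctx, evt);
                        }
                    }

                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
                        // Text/Binary frames reach this handler; echo the text back.
                        ctx.writeAndFlush(new TextWebSocketFrame("echo: " + frame.text()));
                    }
                });
    }
}
```

Note that the handler must sit after an HTTP codec and aggregator, since the opening handshake is an HTTP upgrade request.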