Motivation:
Websocket clients can request to speak a specific subprotocol. The list of
subprotocols the client understands is sent to the server. The server
should select one of these protocols and reply with it in the websocket
handshake response. The added code verifies that the subprotocol in the
response is valid.
Modifications:
Added verification of the subprotocol received from the server against the
subprotocol(s) that the user requested. If the user requests a subprotocol
but the server responds with none or with a subprotocol that was not
requested, this is an error and the handshake fails with an exception. If
the user requests no subprotocol but the server responds with one, this is
also treated as an error.
Additionally, a getter for the WebSocketClientHandshaker was added to
WebSocketClientProtocolHandler so that users of a
WebSocketClientProtocolHandler can extract the negotiated subprotocol.
Result:
The subprotocol field received from a websocket server is now properly
verified on the client side, and websocket connection attempts will only
succeed if both parties can agree on a subprotocol.
If the client sends a list of multiple possible subprotocols it can
extract the negotiated subprotocol through the added handshaker getter
(WebSocketClientProtocolHandler.handshaker().actualSubprotocol()).
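A minimal usage sketch of this change; the URI, the offered subprotocol list
and the pipeline wiring are illustrative, not part of the patch itself:

    // Offer "chat" and "superchat" to the server; the handshaker getter added
    // here exposes the subprotocol the server actually selected.
    WebSocketClientProtocolHandler wsHandler = new WebSocketClientProtocolHandler(
            WebSocketClientHandshakerFactory.newHandshaker(
                    URI.create("ws://localhost:8080/ws"), WebSocketVersion.V13,
                    "chat,superchat", false, new DefaultHttpHeaders()));
    // ... add wsHandler (plus HTTP codec and aggregator) to the pipeline,
    // connect, and wait for the handshake to complete, then:
    String negotiated = wsHandler.handshaker().actualSubprotocol();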
Motivation:
http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html#connack
In MQTT 3.1, an MQTT server must send a CONNACK with a return code if the
CONNECT request contains an invalid client identifier or an unacceptable
protocol version. The return code is one of MqttConnectReturnCode.
However, MqttDecoder throws a DecoderException when the CONNECT request
contains an invalid value, without distinguishing between these cases.
This makes it difficult for codec-mqtt users to send a response with the
proper return code to clients.
Modifications:
Added dedicated exceptions for a rejected client identifier and an
unacceptable protocol version. MqttDecoder now throws these exceptions
instead of a generic DecoderException.
Result:
Users of codec-mqtt can distinguish whether the client identifier or the
protocol version was invalid when a CONNECT is rejected, and can send a
CONNACK with the matching return code to clients.
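A hedged sketch of how a codec-mqtt user might map the new exceptions to a
return code; the handler class and its wiring are illustrative, and the
CONNACK construction itself is omitted:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.handler.codec.mqtt.MqttConnectReturnCode;
    import io.netty.handler.codec.mqtt.MqttIdentifierRejectedException;
    import io.netty.handler.codec.mqtt.MqttUnacceptableProtocolVersionException;

    public class ConnectErrorHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            final MqttConnectReturnCode code;
            if (cause instanceof MqttIdentifierRejectedException) {
                code = MqttConnectReturnCode.CONNECTION_REFUSED_IDENTIFIER_REJECTED;
            } else if (cause instanceof MqttUnacceptableProtocolVersionException) {
                code = MqttConnectReturnCode.CONNECTION_REFUSED_UNACCEPTABLE_PROTOCOL_VERSION;
            } else {
                ctx.close();
                return;
            }
            // Build a CONNACK carrying 'code' and write it back before closing
            // the connection (message construction omitted in this sketch).
            ctx.close();
        }
    }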
Motivation:
I was not fully reassured whether everything works correctly when a websocket client receives the websocket handshake HTTP response and a websocket frame in a single ByteBuf (which can happen when the server sends a frame directly or shortly after the handshake response). In this case some parts of the ByteBuf must be processed by the HTTP decoder and the remainder by the websocket decoder.
Modification:
Added a test that verifies that in this scenario the handshake and the message are correctly interpreted and delivered by Netty.
Result:
One more test for Netty.
The test succeeds - no problems found.
Motivation:
The MQTT 3.1 specification states: "The Client Identifier (Client ID) is
between 1 and 23 characters long, and uniquely identifies the client to
the server". However, the current client id validation accepts lengths of
0 to 23; it must be 1 to 23. The empty string is an invalid client id in
MQTT 3.1.
Modifications:
Changed the isValidClientId method and added MIN_CLIENT_ID_LENGTH.
Result:
The validation check now requires the client id length to be between 1 and 23.
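A minimal sketch of the tightened check; the constant and method names follow
the description above, and the surrounding decoder class is omitted:

    private static final int MIN_CLIENT_ID_LENGTH = 1;
    private static final int MAX_CLIENT_ID_LENGTH = 23;

    private static boolean isValidClientId(String clientId) {
        // The empty string is now rejected; only 1..23 characters are accepted.
        return clientId != null
                && clientId.length() >= MIN_CLIENT_ID_LENGTH
                && clientId.length() <= MAX_CLIENT_ID_LENGTH;
    }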
Motivation:
It is often helpful to measure the performance of connections, e.g. the
latency and the throughput. This can be performed through benchmarks.
Modification:
This adds a simple but configurable benchmark for websockets into the
example directory. The Netty WebSocket server will echo all received
websocket frames and will provide an HTML/JS page which serves as the
client for the benchmark.
The benchmark also provides a verification mode that checks the sent
against the received data. This can be used to verify the websocket frame
encoding and decoding functionality.
Result:
A benchmark is added in the form of a further Netty websocket example.
With this benchmark it is easy to measure the websocket performance between Netty and a browser.
Motivation:
The WebSocketClientProtocolHandshakeHandler never releases the received handshake response.
Modification:
Release the message in a finally block.
Result:
No more leak
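A hedged sketch of the pattern; the 'handshaker' field and the rest of the
handler logic are illustrative, not the exact handler code:

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        FullHttpResponse response = (FullHttpResponse) msg;
        try {
            // 'handshaker' is the WebSocketClientHandshaker held by this handler.
            handshaker.finishHandshake(ctx.channel(), response);
            // ... fire the handshake-complete event, remove this handler, etc.
        } finally {
            response.release(); // the response is now always released, even on failure
        }
    }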
Motivation:
The WebSocket08FrameEncoder contains an optimization path for small messages which copies the message content into the header buffer to avoid vectored writes. However, in the current implementation this path is never taken because the target buffer is preallocated to exactly the size of the header.
Modification:
For messages below a certain threshold, allocate the buffer large enough that the message can be copied into it directly. Thereby the optimized path is taken.
Result:
A speedup of about 25% for 100-byte messages, declining with bigger message sizes. I have currently set the threshold to 1kB, which is a point where I could still see a few percent speedup, but we should also avoid burning too many CPU cycles on copying.
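A hedged sketch of the allocation strategy; the threshold, the writeHeader
helper and the surrounding encoder state are illustrative, not the exact
WebSocket08FrameEncoder implementation:

    if (payloadLength <= COPY_THRESHOLD) { // e.g. 1024 bytes
        // Small frame: allocate header + payload in one buffer and copy the
        // payload in, so a single contiguous buffer is written out.
        ByteBuf buf = ctx.alloc().buffer(headerLength + payloadLength);
        writeHeader(buf);          // hypothetical helper writing the frame header
        buf.writeBytes(payload);
        out.add(buf);
    } else {
        // Large frame: keep header and payload separate (vectored write).
        ByteBuf buf = ctx.alloc().buffer(headerLength);
        writeHeader(buf);
        out.add(buf);
        out.add(payload.retain());
    }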
Motivation:
Websocket performance is to a large extent determined by the masking and
unmasking of frames. Netty's current implementation of this can be
improved.
Modifications:
Perform the XOR operation not bytewise but in int-sized blocks as long as
possible. This reduces the number of necessary operations by a factor of 4.
Also, don't read the writerIndex in each iteration.
Added a unit test for websocket decoding and encoding for verification.
Result:
A large performance gain (up to 50%) in websocket throughput.
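A hedged sketch of the idea, not the exact decoder code; it relies on ByteBuf's
getInt/setInt being big-endian, matching a mask built from bytes 0..3:

    static void unmask(ByteBuf frame, byte[] mask) {
        final int start = frame.readerIndex();
        final int end = frame.writerIndex(); // read once, not in every iteration
        final int intMask = (mask[0] & 0xFF) << 24 | (mask[1] & 0xFF) << 16
                          | (mask[2] & 0xFF) << 8  |  mask[3] & 0xFF;
        int i = start;
        for (; i + 3 < end; i += 4) {        // XOR 4 bytes per iteration instead of 1
            frame.setInt(i, frame.getInt(i) ^ intMask);
        }
        for (; i < end; i++) {               // 0-3 trailing bytes
            frame.setByte(i, frame.getByte(i) ^ mask[(i - start) % 4]);
        }
    }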
Motivation:
There is an NPE due to the order of builder initialization in the class.
Modifications:
-Correct the ordering of initialization and building to avoid NPE.
Result:
No more NPE in construction.
Motivation:
This was lost in recent changes, just adding it back in.
Modifications:
Added listener() accessor to Http2ConnectionDecoder and the default
impl.
Result:
The Http2FrameListener can be obtained from the decoder.
Motivation:
Currently, Http2LifecycleManager implements the exception handling logic
which makes it difficult to extend or modify the exception handling
behavior. Simply overriding exceptionCaught() will only affect one of
the many possible exception paths. We need to reorganize the exception
handling code to centralize the exception handling logic into a single
place that can easily be extended by subclasses of
Http2ConnectionHandler.
Modifications:
Made Http2LifecycleManager an interface, implemented directly by
Http2ConnectionHandler. This adds a circular dependency between the
handler and the encoder/decoder, so I added builders for them that allow
the constructor of Http2ConnectionHandler to set itself as the lifecycle
manager and build them.
Changed Http2LifecycleManager.onHttpException to just
onException(Throwable) to simplify the interface. This method is now the
central control point for all exceptions. Subclasses now only need to
override onException() to intercept any exception encountered by the
handler.
Result:
HTTP/2 has more extensible exception handling that is less likely to let
exceptions vanish into the ether.
Motivation:
Some tests occasionally appear unstable, throwing a
org.mockito.exceptions.misusing.UnfinishedStubbingException. Mockito
stubbing does not work properly in multi-threaded environments, so any
stubbing has to be done before the threads are started.
Modifications:
Modified tests to perform any custom stubbing before the client/server
bootstrap logic executes.
Result:
HTTP/2 tests should be more stable.
Motivation:
The HTTP/2 example can timeout at the client waiting for a response due
to the server not flushing after writing the response.
Modifications:
Updated the server's HelloWorldHttp2Handler to flush after writing the
response.
Result:
The HTTP/2 example runs successfully.
Motivation:
Some tests do not properly assert that all requests have been
sent/received, so the failure messages may be misleading.
Modifications:
Adding missing asserts to HTTP/2 tests for awaiting requests and
responses.
Result:
HTTP/2 tests properly assert message counts.
Motivation:
PR https://github.com/netty/netty/pull/2948 missed a collection to synchronize in the HTTP/2 unit tests.
Modifications:
Synchronize the collection that was missed.
Result:
The missed collection is synchronized and its initial size is corrected.
Motivation:
The HTTP/2 tests have been unstable, in particular the
Http2ConnectionRoundtripTest.
Modifications:
Modified fields in Http2TestUtil to be volatile.
Result:
Tests should (hopefully) be more stable.
Motivation:
Currently the last read/write throughput is calculated by performing the division first. This yields 0 if the last read/write byte count is smaller than the interval; changing the order of operations gives the correct result.
Modifications:
Changed the order of operations so that the multiplication is performed before the division.
Result:
The correct result is returned instead of 0 when the byte count is smaller than the interval.
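A small sketch of the integer-arithmetic issue; the variable names and values
are illustrative:

    long bytes = 512;      // bytes read during the last check interval
    long interval = 1000;  // check interval in milliseconds

    long wrong = bytes / interval * 1000; // 512 / 1000 == 0 -> throughput reported as 0
    long right = bytes * 1000 / interval; // 512 * 1000 / 1000 == 512 bytes/s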
Motivation:
HTTP/2 codec does not properly test the exception passed to
exceptionCaught() for instanceof Http2Exception (since the exception
will always be wrapped in a PipelineException), so it will never
properly handle Http2Exceptions in the pipeline.
Also, if any streams are present, the connection close logic will execute
twice when a pipeline exception occurs. This is because the exception
logic calls ctx.close(), which then triggers the handleInActive() logic to
execute. This clears all of the remaining streams and then attempts to
run the closeListener logic (which has already been run).
Modifications:
Changed the exceptionCaught logic to properly extract the Http2Exception
from the PipelineException. Also added logic to the closeListener so that
it is only run once.
Changed Http2CodecUtil.toHttp2Exception() to avoid NPE when creating
an exception with cause.getMessage().
Refactored Http2ConnectionHandler to more cleanly separate inbound and
outbound flows (Http2ConnectionDecoder/Http2ConnectionEncoder).
Added a test for verifying that a pipeline exception closes the
connection.
Result:
Exception handling logic is tidied up.
Motivation:
The HTTP/2 unit tests are collecting responses read events which are happening in a multithreaded environment.
These collections are currently not synchronized or thread safe and are resulting in verification failures.
Modifications:
-Modify unit tests that use collections to store results for verification so that they are thread safe
Result:
Tests should not fail because of synchronization issues while verifying expected results.
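For example (field name illustrative), wrapping a result collection so that
writes from the I/O threads are safe before the test thread verifies it:

    // Collected from Netty I/O threads, verified later on the test thread.
    final List<String> receivedData =
            Collections.synchronizedList(new ArrayList<String>());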
Motivation:
According to the websocket specification, peers may send a close frame
(with status code 1002) when they detect a protocol violation. The current
implementation simply closes the connection. This update adds that
functionality. The functionality is optional, but it might help other
implementations with debugging when they receive such a frame.
Modification:
When a protocol violation is detected in the decoder and a close was not
already initiated by the remote peer, a close frame is sent.
Result:
Remote peers which send an invalid frame will now receive a close frame
that indicates the protocol violation instead of only seeing a closed
connection.
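A hedged sketch of the behavior; the field and method names are illustrative,
and 1002 is the protocol-error status code from the websocket specification:

    private void protocolViolation(ChannelHandlerContext ctx, String reason) {
        if (!receivedClosingHandshake) {
            // Tell the peer why the connection is going away instead of
            // silently closing it.
            ctx.writeAndFlush(new CloseWebSocketFrame(1002, reason))
               .addListener(ChannelFutureListener.CLOSE);
        }
        throw new CorruptedFrameException(reason);
    }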
Motivation:
The HTTP/2 codec has some duplication and the read/write interfaces are not cleanly exposed to users of the codec.
Modifications:
-Restructure the AbstractHttp2ConnectionHandler class to be able to extend write behavior before the outbound flow control gets the data
-Add Http2InboundConnectionHandler and Http2OutboundConnectionHandler interfaces and restructure external codec interface around these concepts
Result:
HTTP/2 codec provides a cleaner external interface which is easy to extend for read/write events.
Motivation:
We incorrectly used SslContext.newServerContext() in some places where we needed a client context.
Modifications:
Use SslContext.newClientContext() when using SSL on the client side.
Result:
Working SSL client examples.
Motivation:
The HTTP translation layer uses a FullHttpMessage object after it has been fired up the pipeline.
Although the content ByteBuf is not used by default it is still not ideal to use a releasable object
after it has potentially been released.
Modifications:
-InboundHttp2ToHttpAdapter ordering issues will be corrected
Result:
Safer access to releasable objects in the HTTP/2 to HTTP translation layer.
Motivation:
To rule out the tests themselves as a cause of leaks, the automatic
retaining of ByteBufs in Http2TestUtil is removed.
Modifications:
Each test that relied on retaining buffers for validation has been
modified to copy the buffer into a list of Strings that are manually
validated after the message is received.
Result:
The HTTP/2 tests should (hopefully) no longer be reporting erroneous
leaks due to the testing code, itself.
Motivation:
The current implementation of the HTTP/2 decompression does not integrate with flow control properly.
The decompression code is giving the post-decompression size to the flow control algorithm which
results in flow control errors at incorrect times.
Modifications:
-DecompressorHttp2FrameReader.java will need to change where it hooks into the HTTP/2 codec
-Enhance unit tests to test this condition
Result:
No more flow control errors caused by the decompression design flaw.
Motivation:
The Java implementation of Inet6Address.getHostName() does not follow RFC 5952 (http://tools.ietf.org/html/rfc5952#section-4) for the recommended string representation. This introduces inconsistencies when integrating with other technologies that do follow the RFC.
Modifications:
-NetUtil.java gets another public static method to convert an InetAddress to a string. Inet4Address will use the Java InetAddress.getHostAddress() implementation and there will be new code to implement the RFC 5952 IPv6 string conversion.
-New unit tests to test the new method
Result:
Netty provides an RFC 5952 compliant string conversion method for IPv6 addresses
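A hedged usage sketch of the new conversion; the example address is taken from
RFC 5952 section 4, and the helper method is illustrative:

    static String canonical(String ip) throws UnknownHostException {
        // NetUtil applies the RFC 5952 rules (lowercase, longest zero run compressed).
        return NetUtil.toAddressString(InetAddress.getByName(ip));
    }
    // canonical("2001:db8:0:0:0:0:2:1") -> "2001:db8::2:1"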
Motivation:
We use malloc(1) in the JNI_OnLoad method but never free the allocated memory. This means we have a tiny memory leak of 1 byte.
Modifications:
Call free(...) on the previously allocated memory.
Result:
Fix memory leak
Motivation:
We introduced a PoolThreadCache which is used in our PooledByteBufAllocator to reduce the synchronization overhead on PoolArenas when allocating / deallocating PooledByteBuf instances. This cache is used on both the allocation path and the deallocation path by:
- Looking for cached memory in the PoolThreadCache of the Thread that tries to allocate a new PooledByteBuf and, if some is found, returning it.
- Adding the memory that is used by a PooledByteBuf to the PoolThreadCache of the Thread that releases the PooledByteBuf.
This works out very well when all allocation / deallocation is done in the EventLoop, as the EventLoop will be used for read and write. On the other hand this can lead to surprising side effects if the user allocates from outside the EventLoop and then passes the ByteBuf over for writing. The problem here is that the memory will be added to the PoolThreadCache of the Thread that did the actual write on the underlying transport and not to the PoolThreadCache of the Thread that previously allocated the buffer.
Modifications:
Don't cache if different Threads are used for allocating/deallocating
Result:
Less confusing behavior for users that allocate PooledByteBufs from outside the EventLoop.
Motivation:
When MemoryRegionCache.trim() is called, some unused cache entries are freed (starting from the head). However, in MemoryRegionCache.trim() the head is not updated, which makes the entry list's head point to an entry whose chunk is now null, so subsequent allocations from the MemoryRegionCache return false immediately.
In other words, the cache is no longer usable once a trim happens.
Modifications:
Update the head to the correct index after freeing entries in trim().
Result:
MemoryRegionCache behaves correctly even after calling trim().
Motivation:
The current build is showing potential leaks in the HTTP/2 tests that
use Http2TestUtil.FrameCountDown, which copies the buffers when it
receives them from the decoder. The leak detector sees this copy as the
source of a leak. It would be better all around to just retain, rather
than copying the buffer. This should help to lower the overall memory
footprint of the tests as well as potentially getting rid of the
reported "leaks".
Modifications:
Modified Http2TestUtil to use ByteBuf.retain() everywhere that was
previously calling ByteBuf.copy().
Result:
Smaller memory footprint for tests and hopefully getting rid of reported
leaks.
Motivation:
When an assertTrue(condition) statement fails we usually don't know
why, as the parameters of the condition are not logged.
Modifications:
Include relevant parameters in the assertion error message.
Result:
Easier to debug and understand test failures.
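For example (the latch and timeout are illustrative), instead of a bare
assertTrue(condition):

    assertTrue("expected all responses within 5s, still missing: " + responseLatch.getCount(),
            responseLatch.await(5, TimeUnit.SECONDS));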
Motivation:
handlerAdded and handlerRemoved were overridden but super was never
called, although it should be.
Also, the toString() method was missing one piece of information.
Modifications:
Add the corresponding super calls, and add checkInterval to the
toString() method.
Result:
Super method calls are now correctly passed on to the super
implementation.
Motivation:
A typo was discovered in the LzmaFrameEncoder constructor where we check `lc + lp` for better compatibility.
Modifications:
Changed `lc + pb` to `lc + lp`.
Result:
Correct check of `lc + lp` value.
Motivation:
Sometimes it is useful to be able to access the uri that was used to initialize the QueryStringDecoder.
Modifications:
Add a method which allows retrieving the uri.
Result:
The uri that was used to create the QueryStringDecoder can now be retrieved.
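A short usage sketch of the new accessor (the example uri is illustrative):

    QueryStringDecoder decoder = new QueryStringDecoder("/index.html?user=netty");
    String original = decoder.uri(); // "/index.html?user=netty"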
Motivation:
When constructing a FingerprintTrustManagerFactory from an Iterable of Strings, the fingerprints were correctly parsed but never added to the result array. The constructed FingerprintTrustManagerFactory consequently fails to validate any certificate.
Modifications:
I added a line to add each converted SHA-1 certificate fingerprint to the result array which then gets passed on to the next constructor.
Result:
Certificate fingerprints passed to the constructor are now correctly added to the array of valid fingerprints. The resulting FingerprintTrustManagerFactory object correctly validates certificates against the list of specified fingerprints.
Motivation:
The HTTP/2 spec does not restrict headers to being String. The current
implementation of the HTTP/2 codec uses Strings as header keys and
values. We should change this so that header keys and values allow
binary values.
Modifications:
Making Http2Headers based on AsciiString, which is a wrapper around a
byte[].
Various changes throughout the HTTP/2 codec to use the new interface.
Result:
HTTP/2 codec no longer requires string headers.
Motivation:
If sendmmsg is already defined then the native epoll module fails to build because of conflicting definitions.
The mmsghdr type was also redefined on systems that already support this structure.
Modifications:
Provide a way so that systems which already define sendmmsg and mmsghdr can build
Provide a way so that systems which don't define sendmmsg and mmsghdr can build
Result:
The native epoll module can be built in more environments
Motivation:
The HTTP/2 unit tests are suffering from OOME on the master branch.
These unit tests allocate a large number of threads (~706 peak live), which
may be related to this memory pressure.
Modifications:
Each EventLoopGroup shutdown operation will have a `sync()` call.
Result:
Lower peak live thread count and less associated memory pressure.
Motivation:
The LZMA compression algorithm has a very good compression ratio.
Modifications:
- Added `lzma-java` library which implements LZMA algorithm.
- Implemented LzmaFrameEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzmaFrameEncoder and that its compressed data can be decompressed with the original library.
Result:
An LZMA encoder which can compress data using the LZMA algorithm.
Motivation:
ExtensionRegistry is a subclass of ExtensionRegistryLite. The ProtobufDecoder
doesn't use the registry directly, it simply passes it through to the Protobuf
API. The Protobuf calls in question are themselves written in terms of
ExtensionRegistryLite, not ExtensionRegistry.
Modifications:
Require ExtensionRegistryLite instead of ExtensionRegistry in ProtobufDecoder.
Result:
Consumers can use ExtensionRegistryLite with ProtobufDecoder.
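A hedged usage sketch; "MyMessage" stands in for a generated protobuf-lite
message type and "pipeline" for the channel pipeline being configured:

    ExtensionRegistryLite registry = ExtensionRegistryLite.newInstance();
    // register lite extensions on 'registry' as needed ...
    pipeline.addLast(new ProtobufDecoder(MyMessage.getDefaultInstance(), registry));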
Motivation:
The HTTP/2 tests do not always clean up ByteBuf resources reliably. There are issues with the refCnt, over allocating buffers, and potentially not waiting long enough to reclaim resources for stress tests.
Modifications:
Scrub the HTTP/2 unit tests for ByteBuf leaks.
Result:
Fewer leaks (hopefully none) in the HTTP/2 unit tests. No OOME from the HTTP/2 unit tests.
Motivation:
The HTTP/2 codec does not provide a way to decompress data. This functionality is supported by the HTTP codec and is expected to be a commonly used feature.
Modifications:
-The Http2FrameReader will be modified to allow hooks for decompression
-New classes which detect the decompression from HTTP/2 header frames and uses that decompression when HTTP/2 data frames come in
-New unit tests
Result:
The HTTP/2 codec will provide a means to support data decompression
Motivation:
The HttpContentDecoder.getTargetContentEncoding has a SuppressWarnings(unused) on its parameter.
This should be SuppressWarnings(UnusedParameters).
Modifications:
SuppressWarnings(unused) -> SuppressWarnings(UnusedParameters)
Result:
Correctly suppressing warnings due to HttpContentDecoder.getTargetContentEncoding
Motivation:
Currently the Executor created by (Nio|Epoll)EventLoopGroup is not correctly shut down.
This might lead to resource shortages, due to resources not being freed as soon as possible.
Modifications:
If a (Nio|Epoll)EventLoopGroup creates its internal Executor via a constructor-provided
`ExecutorServiceFactory` object or via
MultithreadEventLoopGroup.newDefaultExecutorService(...), the ExecutorService.shutdown()
method will be called after the (Nio|Epoll)EventLoopGroup is shut down.
ExecutorService.shutdown() will not be called if the Executor object was passed
to the (Nio|Epoll)EventLoopGroup (that is, it was instantiated outside of Netty).
Result:
Correctly release resources on (Nio|Epoll)EventLoopGroup shutdown.
Motivation:
The ServerBootstrap's child group would not be shut down.
Modification:
Add missing shutdownGracefully() call.
Result:
The child group is shut down correctly.
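A minimal sketch of the usual shutdown sequence for both groups; the group
names and setup are illustrative:

    EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        // ... configure the ServerBootstrap with both groups, bind and run ...
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully(); // the child group must be shut down too
    }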
Motivation:
The HTTP/2 specification places restrictions on the cipher suites that can be used. There is no central place to pull the ciphers that are allowed by the specification, supported by different java versions, and recommended by the community.
Modifications:
-HTTP/2 will have a security utility class to define supported ciphers
-netty-handler will be modified to support filtering the supplied list of ciphers to the supported ciphers for the current SSLEngine
Result:
-Netty provides unified support for HTTP/2 cipher lists, and supplied cipher lists can be pruned to the ciphers supported by the current SSLEngine
Motivation:
The HTTP content decoder's cleanup method is not cleaning up the decoder correctly.
The cleanup method is currently doing a readOutbound on the EmbeddedChannel but
for decoding the call should be readInbound.
Modifications:
-Change readOutbound to readInbound in the cleanup method
Result:
The cleanup method now correctly releases unused resources.
Motivation:
On Linux it is possible to write more than one buffer with one syscall when sending datagram messages.
Modifications:
Don't copy a CompositeByteBuf if it only contains direct buffers.
Result:
Better performance due to less copying overhead.
Motivation:
Due to incorrect usage of CompositeByteBuf a buffer leak was introduced.
Modifications:
Correctly handle tests with CompositeByteBuf.
Result:
No more buffer leaks