Motivation:
Currently when an exception occurs during a listener.onDataRead
callback, we return all bytes as processed. However, the listener may
choose to return bytes via the InboundFlowState object rather than
returning the integer. If the listener returns a few bytes and then
throws, we will attempt to return too many bytes.
Modifications:
Added InboundFlowState.unProcessedBytes() to indicate how many
unprocessed bytes are outstanding.
Updated DefaultHttp2ConnectionDecoder to compare the unprocessed bytes
before and after the listener.onDataRead callback when an exception was
encountered. If there is a difference, it is subtracted off the total
processed bytes to be returned to the flow controller.
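Conceptually, the change amounts to something like the following sketch;
the local names and accessors (inboundFlowState(), returnProcessedBytes())
are illustrative, not the decoder's actual members:

int unprocessedBefore = stream.inboundFlowState().unProcessedBytes();
try {
    listener.onDataRead(ctx, streamId, data, padding, endOfStream);
} catch (Http2Exception e) {
    // Bytes the listener already returned via the flow state must not
    // be returned to the flow controller a second time.
    int alreadyReturned =
            unprocessedBefore - stream.inboundFlowState().unProcessedBytes();
    flowController.returnProcessedBytes(ctx, stream, totalBytes - alreadyReturned);
    throw e;
}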
Result:
HTTP/2 data frame delivery properly accounts for processed bytes through
an exception.
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Motivation:
MQTT 3.1.1 became an OASIS Standard on 13 Nov 2014.
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.html
MQTT 3.1.1 is a minor update to 3.1, but codec-mqtt previously supported only MQTT 3.1.
Modifications:
- Add protocol name `MQTT` alongside the previous `MQIsdp` for `CONNECT`'s variable header.
- Update client identifier validation to cover both 3.1 and 3.1.1.
- Add the new `SUBACK` error code `FAILURE (0x80)`.
- Add a test for encoding/decoding a 3.1.1 `CONNECT` frame.
Result:
MqttEncoder/MqttDecoder can encode/decode frames of 3.1 or 3.1.1.
Motivation:
The current priority algorithm uses two different mechanisms to iterate the priority tree and send the results of the allocation. It also works in two phases: the priority tree is traversed and allocation amounts are calculated, and then all active streams are traversed to send data for any streams that may have been allocated bytes.
Modifications:
- DefaultHttp2OutboundFlowController will allocate and send (when possible) in the same looping structure.
- The recursive method will send only for the children instead of itself and its children, which should simplify the recursion.
Result:
A simpler recursive algorithm in which the tree iteration itself determines which streams need to send, with less iteration after the recursive calls complete.
Motivation:
Currently the DefaultHttp2InboundFlowController only supports the
ability to turn on and off "window maintenance" for a stream. This is
insufficient for true application-level flow control that may only want
to return a few bytes to flow control at a time.
Modifications:
Removing "window maintenance" interface from
DefaultHttp2InboundFlowController in favor of the new interface.
Created the Http2InboundFlowState interface which extends Http2FlowState
to add the ability to return bytes for a specific stream.
Changed the onDataRead method to return an integer number of bytes that
will be immediately returned to flow control, to support use cases that
want to opt-out of application-level inbound flow control.
Updated DefaultHttp2InboundFlowController to use 2 windows per stream.
The first, "window", is the actual flow control window that is
decremented as soon as data is received. The second "processedWindow"
is a delayed view of "window" that is only decremented after the
application returns the processed bytes. It is processedWindow that is
used when determining when to send a WINDOW_UPDATE to restore part of
the inbound flow control window for the stream/connection.
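A rough sketch of the two-window bookkeeping described above, assuming an
illustrative initial window of 65535 and a hypothetical update ratio; none
of these names are the controller's actual fields:

final class InboundWindows {
    private static final int INITIAL_WINDOW_SIZE = 65535;
    private static final double WINDOW_UPDATE_RATIO = 0.5;
    private int window = INITIAL_WINDOW_SIZE;          // decremented on receipt
    private int processedWindow = INITIAL_WINDOW_SIZE; // decremented on consumption

    void dataReceived(int numBytes) {
        window -= numBytes;
    }

    // Returns the WINDOW_UPDATE delta to send, or 0 if none is needed yet.
    int bytesProcessed(int numBytes) {
        processedWindow -= numBytes;
        if (processedWindow <= INITIAL_WINDOW_SIZE * WINDOW_UPDATE_RATIO) {
            int delta = INITIAL_WINDOW_SIZE - processedWindow;
            processedWindow += delta;
            window += delta;
            return delta;
        }
        return 0;
    }
}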
Result:
The HTTP/2 inbound flow control interfaces support application-level
flow control.
Motivation:
Too many warnings from IntelliJ IDEA code inspector, PMD and FindBugs.
Modifications:
- Removed unnecessary casts, braces, modifiers, imports, throws on methods, etc.
- Added static modifiers where it is possible.
- Fixed incorrect links in javadoc.
Result:
Better code.
Motivation:
The outbound flow controller logic does not properly reset the allocated
bytes between successive invocations of the priority algorithm.
Modifications:
Updated the priority algorithm to reset the allocated bytes for each
stream.
Result:
Each call to the priority algorithm now starts with zero allocated bytes
for each stream.
Motivation:
RFC 2616, 4.3 Message Body states that:
All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
message-body. All other responses do include a message-body, although it MAY be of zero length.
Modifications:
HttpContentEncoder was previously modified to cater for HTTP 100 responses. This check is enhanced to
include HTTP 204 and 304 responses.
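A minimal sketch of the enhanced check; the helper name is hypothetical
and the real guard may be shaped differently:

// 1xx, 204 and 304 responses must not include a message body, so the
// encoder leaves them untouched.
private static boolean skipEncoding(HttpResponseStatus status) {
    int code = status.code();
    return code < 200 || code == 204 || code == 304;
}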
Result:
Empty response bodies will not be modified to include the compression footer. This footer messed with Chrome's
response parsing leading to "hanging" requests.
Motivation:
HttpObjectDecoder extended ReplayingDecoder, which is slightly slower than ByteToMessageDecoder.
Modifications:
- Changed the superclass of HttpObjectDecoder from ReplayingDecoder to ByteToMessageDecoder.
- Rewrote the decode() method of HttpObjectDecoder to use a proper state machine (see the sketch after this list).
- Changed the private methods HeaderParser.parse(ByteBuf), readHeaders(ByteBuf), readTrailingHeaders(ByteBuf) and skipControlCharacters(ByteBuf) to consider the available bytes.
- Made HeaderParser and LineParser static inner classes.
- Replaced the unsafe actualReadableBytes() with buffer.readableBytes().
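A toy sketch of the state-machine style this enables; the states and
parsing are illustrative, not HttpObjectDecoder's actual logic:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

public final class LineDecoderSketch extends ByteToMessageDecoder {
    private enum State { READ_INITIAL, READ_HEADERS }
    private State state = State.READ_INITIAL;

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        switch (state) {
        case READ_INITIAL: {
            int eol = in.bytesBefore((byte) '\n');
            if (eol < 0) {
                return; // not enough data yet: keep state, wait for more bytes
            }
            out.add(in.readSlice(eol + 1).retain()); // consume one full line
            state = State.READ_HEADERS;
            break;
        }
        case READ_HEADERS:
            // ... parse headers incrementally, honoring in.readableBytes() ...
            break;
        }
    }
}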
Result:
Improved performance of HttpObjectDecoder by approximately 177%.
Motivation:
PausableChannelEventExecutor.shutdown() used to call unwrap().terminationFuture() instead of unwrap().shutdown().
This error was reported by @xfrag in a comment to a commit message [1]. Tyvm.
Modifications:
PausableChannelEventExecutor.shutdown() now correctly invokes unwrap().shutdown() instead of unwrap().terminationFuture().
Result:
Correct code.
[1] 220660e351 (commitcomment-8489643)
Motivation:
NetUtil.isValidIpV6Address() handles the interface name in IPv6 address
incorrectly. For example, it returns false for the following addresses:
- ::1%lo
- ::1%_%_in_name_
Modifications:
- Strip the square brackets before validation for simplicity
- Strip the part after the percent sign completely before validation for
simplicity (see the sketch after this list)
- Simplify and reformat NetUtilTest
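A small sketch of the stripping described above; the method name is
hypothetical:

private static String sanitizeIpV6(String value) {
    // Strip surrounding square brackets, e.g. "[::1]" -> "::1".
    if (value.startsWith("[") && value.endsWith("]")) {
        value = value.substring(1, value.length() - 1);
    }
    // Strip the zone/interface part, e.g. "::1%lo" -> "::1".
    int percentPos = value.indexOf('%');
    if (percentPos >= 0) {
        value = value.substring(0, percentPos);
    }
    return value;
}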
Result:
- The interface names in IPv6 addresses are handled correctly.
- NetUtilTest is cleaner
Motivation:
I came across an issue when I was adding/setting headers and mistakenly
used an upper case header name. When using the http2 example that ships
with Netty this was not an issue. But when working with a browser that
supports http2, in my case I was using Firefox Nightly, I'm guessing
that it interprets the response as invalid in accordance with the
specification
https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-8.1.2
"However, header field names MUST be converted to lowercase prior
to their encoding in HTTP/2. A request or response containing
uppercase header field names MUST be treated as malformed"
This PR suggests making conversion to lowercase the default.
Modifications:
Added a no-args constructor that defaults to forcing the key/name to
lowercase, and providing a second constructor to override this behaviour
if desired.
Result:
It is now possible to specify a header like this:
Http2Headers headers = new DefaultHttp2Headers(true)
.status(new AsciiString("200"))
.set(new AsciiString("Testing-Uppercase"), new AsciiString("some value"));
And the header written to the client will then become:
testing-uppercase:"some value"
Motivation:
When running the http2 example no SslProvider is specified when calling
SslContext.newServerContext. This may lead to the provider being
determined depending on the availability of OpenSsl. But as far as I can
tell the OpenSslServerContext does not support ALPN, which is the
protocol configured in the example.
This produces the following error when running the example:
Exception in thread "main" java.lang.UnsupportedOperationException:
OpenSSL provider does not support ALPN protocol
io.netty.handler.ssl.OpenSslServerContext.toNegotiator(OpenSslServerContext.java:391)
io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:117)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:238)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:184)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:124)
io.netty.example.http2.server.Http2Server.main(Http2Server.java:51)
Modifications:
Force SslProvider.JDK when creating the SslContext since the
example is using ALPN.
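For illustration, one way to pin the provider; the certificate file
names are hypothetical, and the example itself also passes its
ApplicationProtocolConfig via a longer overload:

SslContext sslCtx = SslContext.newServerContext(
        SslProvider.JDK,          // force the JDK provider, which supports ALPN
        new File("server.crt"),   // certificate chain (hypothetical path)
        new File("server.key"));  // private key (hypothetical path)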
Result:
There is no longer an error if OpenSsl is supported on the platform in
use.
Motivation:
There are a few very minor issues in the Http2 examples' javadoc. Since
I don't think these javadocs are published, this fix is very much
optional to include.
Modifications:
Updated the @see according to [1] to avoid warning when generating
javadocs.
Result:
No warning when generating javadocs.
[1] http://docs.oracle.com/javase/1.5.0/docs/tooldocs/windows/javadoc.html#@see
Motivation:
ChannelPromiseAggregator and ChannelPromiseNotifiers only allow
consumers to work with Channels as the result type. Generic versions
of these classes allow consumers to aggregate or broadcast the results
of an asynchronous execution with other result types.
Modifications:
Add PromiseAggregator and PromiseNotifier. Add unit tests for both.
Remove code in ChannelPromiseAggregator and ChannelPromiseNotifier and
modify them to extend the new base classes.
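A hypothetical usage sketch, broadcasting one future's outcome to
several promises of a non-Channel result type:

Future<String> future = ...;   // some asynchronous String-producing operation
Promise<String> p1 = ...;
Promise<String> p2 = ...;
// Completes p1 and p2 with future's result (or failure) when it finishes.
future.addListener(new PromiseNotifier<String, Future<String>>(p1, p2));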
Result:
Consumers can now aggregate or broadcast the results of an asynchronous
execution with results types other than Channel.
Motivation:
The HTTP/2 codec currently does not provide an interface to compress data. There is an analogous case to this in the HTTP codec and it is expected to be used commonly enough that it will be beneficial to have the feature in the http2-codec.
Modifications:
- Add a class which extends DefaultHttp2ConnectionEncoder and provides hooks to an EmbeddedChannel (see the sketch after this list)
- Add a compressor element to the Http2Stream interface
- Update unit tests to utilize the new feature
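A rough sketch of the EmbeddedChannel hook idea using a gzip encoder;
the wiring and the dataFrameContent variable are illustrative:

EmbeddedChannel compressor = new EmbeddedChannel(
        ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
compressor.writeOutbound(dataFrameContent);               // plain ByteBuf in
ByteBuf compressed = (ByteBuf) compressor.readOutbound(); // compressed bytes out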
Result:
HTTP/2 codec supports data compression.
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input. This is useful in situations where the client is unable to determine if the current content chunk is the last content chunk (i.e. a proxy forwarding content when the transfer encoding is chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
Motivation:
The current logic in DefaultHttp2OutboundFlowController for handling the
case of a stream shutdown results in an Http2Exception (not an
Http2StreamException). This results in a GO_AWAY being sent for what
really could just be a stream-specific error.
Modifications:
Modified DefaultHttp2OutboundFlowController to set a stream exception
rather than a connection-wide exception. Also using the error code of
INTERNAL_ERROR rather than STREAM_CLOSED, since it's more appropriate
for this case.
Result:
GO_AWAY is no longer triggered when a stream closes prematurely.
Motivation:
Currently, due to flow control, HEADERS frames can be written out of
order with respect to DATA frames.
Modifications:
When data is written, we preserve the future as the lastWriteFuture in
the outbound flow controller. The encoder then uses the lastWriteFuture
such that headers are only written after the lastWriteFuture completes.
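A sketch of that ordering logic; the field and helper names are
illustrative:

ChannelFuture lastWriteFuture = outboundFlow.lastWriteFuture();
if (lastWriteFuture == null || lastWriteFuture.isDone()) {
    writeHeaders(ctx, stream, headers, promise); // no DATA pending: write now
} else {
    // Defer the HEADERS write until every pending DATA write completes.
    lastWriteFuture.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            writeHeaders(ctx, stream, headers, promise);
        }
    });
}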
Result:
HEADERS/DATA write order is correctly preserved.
Motivation:
The HTTP/2 specification indicates that, when converting from HTTP/2 to HTTP/1.x, an error should be thrown if non-ASCII characters are detected in a header.
Modifications:
- The ASCII validation is already done but the exception that is raised is not properly converted to a RST_STREAM error.
Result:
- If the HTTP/2 to HTTP/1.x translation layer is in use and a non-ASCII header is received, a RST_STREAM frame is sent in response.
Motivation:
The DefaultOutboundFlowController was attempting to write frames with a negative length. This resulted in attempting to allocate a buffer of negative size and thus an exception.
Modifications:
- Don't allow DefaultOutboundFlowController to write negative length buffers.
Result:
No more negative-length writes, which previously resulted in IllegalArgumentExceptions.
Motivation:
CollectionUtils has only one method and it is used only in DefaultHeaders.
Modifications:
Move CollectionUtils.equals() to DefaultHeaders and make it private
Result:
One less class to expose in our public API
Motivation:
x-gzip and x-deflate are not standard header values, and thus should be
removed from HttpHeaderValues, which is meant to provide the standard
values only.
Modifications:
- Remove X_DEFLATE and X_GZIP from HttpHeaderValues
- Move X_DEFLATE and X_GZIP to HttpContentDecompressor and
DelegatingDecompressorFrameListener
- This introduces slight code duplication, but it does less harm than
having non-standard constants.
Result:
HttpHeaderValues contains only standard header values.
Related: 4ce994dd4f
Motivation:
In 4.1, we were not able to change the type of the HTTP header name and
value constants from String to AsciiString due to backward compatibility
reasons.
Instead of breaking backward compatibility in 4.1, we introduced new
types called HttpHeaderNames and HttpHeaderValues which provides the
AsciiString version of the constants, and then deprecated
HttpHeaders.Names/Values.
We should make the same changes here while actively deleting the
deprecated classes.
Modifications:
- Remove HttpHeaders.Names/Values and RtspHeaders
- Add HttpHeaderNames/Values and RtspHeaderNames/Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
A user who upgraded from 4.0 to 4.1 first and removed the references to
the deprecated classes and methods can easily upgrade from 4.1 to 5.0.
Motivation:
When ALPN/NPN is disabled, a user has to instantiate a new
ApplicationProtocolConfig with meaningless parameters.
Modifications:
- Add ApplicationProtocolConfig.DISABLED, the singleton instance
- Reject constructor calls with Protocol.NONE, which don't make
much sense because a user should use DISABLED instead.
Result:
More user-friendly API when ALPN/NPN is not needed by a user.
Motivation:
Netty currently does not support creating SslContext objects that support mutual authentication.
Modifications:
- Modify the SslContext interface to support mutual authentication for JDK and OpenSSL
- Provide an implementation of mutual authentication for JDK
- Add unit tests to support the new feature
Result:
Netty SslContext interface supports mutual authentication and JDK providers have an implementation.
Motivation:
If there are no common protocols in the ALPN protocol exchange, we still complete the handshake successfully. This handshake should fail according to http://tools.ietf.org/html/rfc7301#section-3.2 with a status of no_application_protocol. The specification also allows the server to "play dumb" and not advertise that it supports ALPN in this case (see the MAY clauses in http://tools.ietf.org/html/rfc7301#section-3.1).
Modifications:
- The upstream project used for ALPN (alpn-boot) does not support this, so a PR (https://github.com/jetty-project/jetty-alpn/pull/3) was submitted.
- The Netty code using alpn-boot should support the new interface (return null on the existing method).
- The version number of alpn-boot must be updated in the pom.xml files.
Result:
- Netty fails the SSL handshake if ALPN is used and there are no common protocols.
Motivation:
The SslHandler currently forces the use of a direct buffer for the input to the SSLEngine.wrap(..) operation. This allocation may not always be desired and should be done conditionally.
Modifications:
- Use the pre-existing wantsDirectBuffer variable as the condition to do the conversion.
Result:
- An allocation of a direct byte buffer and a copy of data is now not required for every SslHandler wrap operation.
Motivation:
The SslHandler wrap method requires that a direct buffer be passed to the SSLEngine.wrap() call. If the ByteBuf parameter does not have an underlying direct buffer then one is allocated in this method, but it is not released.
Modifications:
- Release the direct ByteBuffer that is only accessible within the scope of SslHandler.wrap (see the sketch below)
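A sketch of the shape of the fix with illustrative variable names; the
actual method differs:

ByteBuf newDirectIn = null;
try {
    if (!in.isDirect()) {
        // SSLEngine.wrap() wants direct memory: copy into a temporary buffer.
        newDirectIn = ctx.alloc().directBuffer(in.readableBytes());
        newDirectIn.writeBytes(in, in.readerIndex(), in.readableBytes());
        // ... call SSLEngine.wrap(...) against newDirectIn's backing memory ...
    }
} finally {
    if (newDirectIn != null) {
        newDirectIn.release(); // previously this buffer leaked
    }
}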
Result:
Memory leak in SslHandler.wrap is fixed.
Motivation:
If the http2 encoder has exhausted all available stream IDs, a GOAWAY frame is not sent. Once the encoder detects that a new stream ID has rolled over past the last stream ID, a GOAWAY should be sent, as recommended in section 5.1.1 (https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.1.1).
Modifications:
- This condition is already detected; it just needs to result in a GOAWAY being sent.
- Add a subclass of Http2Exception so the encoder can detect this special case (see the sketch after this list).
- Add a unit test which checks that the GOAWAY is sent/received.
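An illustrative sketch of the detection; the exception subclass name and
the surrounding code are assumptions:

private int nextStreamId() throws Http2Exception {
    if (nextStreamId <= 0) {
        // Stream IDs have rolled over: a dedicated subclass lets the
        // caller translate this condition into a GOAWAY.
        throw new Http2NoMoreStreamIdsException();
    }
    return nextStreamId;
}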
Result:
Encoder attempting to use the first 'rolled over' stream id results in a GOAWAY being sent.
Motivation:
The requirement for the masking of frames and for checks of correct
masking in the websocket specification has a large impact on performance.
While it is mandatory for browsers to use masking, there are other
applications (like IPC protocols) that want to use websocket framing and
its proxy-traversal characteristics without the overhead of masking. The
websocket standard also mentions that the requirement for mask
verification on the server side might be dropped in the future.
Modifications:
Added an optional parameter allowMaskMismatch for the websocket decoder
that allows a server to also accept unmasked frames (and clients to accept
masked frames).
Allowed this option to be set through the websocket handshaker
constructors as well as the websocket client and server handlers.
The public API for existing components doesn't change; calls are
forwarded to functions which implicitly set masking as required by the
specification.
For websocket clients an additional parameter is added that allows
disabling the masking of frames sent by the client.
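One plausible wiring of the new option on the server side; the path,
frame size and flag values are illustrative:

ch.pipeline().addLast(new WebSocketServerProtocolHandler(
        "/ws",     // websocket path
        null,      // subprotocols
        true,      // allowExtensions
        65536,     // maxFrameSize
        true));    // allowMaskMismatch: also accept unmasked client frames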
Result:
This update gives netty users the ability to create and use completely
unmasked websocket connections in addition to the normal masked channels
that the standard describes.
Motivation:
DnsNameResolver.testResolveA() tests the cache as well as the usual DNS protocol path. To ensure the result from the cache is identical to the result without the cache, it compares the two Maps which contain the results of cached/uncached resolution. The comparison of the two Maps yields the expected behavior, but its output on failure is often unreadable due to its length.
Modifications:
Compare entry-by-entry for more comprehensible test failure output
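A sketch of such a comparison, assuming the results are held in
Map<String, InetAddress> instances (types illustrative):

for (Map.Entry<String, InetAddress> e : expected.entrySet()) {
    // On failure, the assertion message names the offending domain.
    assertEquals("unexpected address for " + e.getKey(),
                 e.getValue(), actual.get(e.getKey()));
}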
Result:
When failure occurs, it's easier to see which domain was the cause of the problem.
Motivation:
At the moment the whole HTTP header must be parsed at once, which can lead to parsing the same bytes multiple times. We can do better here and allow parsing in multiple steps.
Modifications:
- Not parse headers multiple times
- Simplify the code
- Eliminate unnecessary String[] creations
- Use readSlice(...).retain() when possible.
Result:
Performance improvements as shown in the included benchmark below.
Before change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.55ms 15.10ms 245.02ms 90.26%
Req/Sec 196.33k 30.17k 297.29k 76.03%
373954750 requests in 2.00m, 50.15GB read
Requests/sec: 3116466.08
Transfer/sec: 427.98MB
After change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 20.91ms 36.79ms 1.26s 98.24%
Req/Sec 206.67k 21.69k 243.62k 94.96%
393071191 requests in 2.00m, 52.71GB read
Requests/sec: 3275971.50
Transfer/sec: 449.89MB
Motivation:
As reported in #2953, the websocket server example contained a bug and therefore did not work with Chrome:
a websocket extension is added to the pipeline, but extensions were disallowed in the handshaker and decoder,
which led the decoder to close the connection after receiving an extension frame.
Modifications:
Allow websocket extensions in the handshaker to correctly enable the extension.
Result:
Working websocket server example
Fixes #2953
Related: #2945
Motivation:
Some special handlers such as TrafficShapingHandler need to override the
writability of a Channel to throttle the outbound traffic.
Modifications:
Add a new indexed property called 'user-defined writability flag' to
ChannelOutboundBuffer so that a handler can override the writability of
a Channel easily.
Result:
A handler can override the writability of a Channel using an unsafe API.
For example:
Channel ch = ...;
ch.unsafe().outboundBuffer().setUserDefinedWritability(1, false);
Motivation:
The 4.1.0-Beta3 implementation of HttpObjectAggregator.handleOversizedMessage closes the
connection if the client sent oversized chunked data with no Expect:
100-continue header. This causes a broken pipe or "connection reset by
peer" error in some clients (tested on Firefox 31 OS X 10.9.5,
async-http-client 1.8.14).
This part of the HTTP 1.1 spec (below) seems to say that in this scenario the connection
should not be closed (unless the intention is to be very strict about
how data should be sent).
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
"If an origin server receives a request that does not include an
Expect request-header field with the "100-continue" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations."
Modifications:
Change HttpObjectAggregator.handleOversizedMessage to close the
connection only if keep-alive is off and Expect: 100-continue is
missing. Update test to reflect the change.
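A sketch of the resulting close condition; the response variable is
illustrative:

// Close only when keep-alive is off and no "Expect: 100-continue" was sent.
if (!HttpHeaders.isKeepAlive(request) && !HttpHeaders.is100ContinueExpected(request)) {
    ctx.writeAndFlush(tooLargeResponse).addListener(ChannelFutureListener.CLOSE);
} else {
    ctx.writeAndFlush(tooLargeResponse);
}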
Result:
Broken pipe and connection reset errors on the client are avoided when
oversized data is sent.
Motivation:
Some of the contains method signatures did not use the parameter types implied by the method names. These methods were also using the generic object conversion methods instead of the type-specific ones.
Modifications:
- Correct the parameter types of the contains methods to match the method names
- Correct the implementation of these methods to use the correct conversion methods
Result:
The contains signatures in Headers now match their names, and the correct conversion methods are used.
Motivation:
- There are still various inspector warnings to fix.
- ValueConverter.convert() methods need to end with the type name like
other methods in Headers, such as setInt() and addInt(), for more
consistency
Modifications:
- Fix all inspector warnings
- Rename ValueConverter.convert() to convert<type>()
Result:
- Cleaner code
- Consistency
Related: #2983
Motivation:
It is a well-known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:
ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
f.addListener(ChannelFutureListener.CLOSE);
When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.
Modifications:
- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
writing an empty buffer does not fail even if the pipeline contains an
HttpObjectEncoder (see the sketch after this list)
- Add a test
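A sketch of the special case inside the encoder's write path; the
surrounding structure is illustrative:

if (msg instanceof ByteBuf && !((ByteBuf) msg).isReadable()) {
    // Pass an empty ByteBuf through untouched instead of tripping the
    // "unexpected message type" state check.
    out.add(Unpooled.EMPTY_BUFFER);
    return;
}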
Result:
An exception is not triggered anymore by HttpObjectEncoder, when a user
attempts to write an empty buffer.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.
Modifications:
Releasing the HttpRequest when done processing a preflight request or
responding with a 403.
Result:
Using the CorsHandler will not cause buffer leaks.
Motivation:
Headers within netty do not cleanly share a common class hierarchy. As a result some header types support some operations
and don't support others. Consolidating the class hierarchy will allow for maintenance and scalability for new codecs.
The existing hierarchy also has a few shortcomings, such as it not being clear when data conversions happen. This
could result in unintentionally getting back a collection or iterator where a conversion on each entry must happen.
The current headers algorithm also prepends all elements, which means that finding the first element or returning a
collection in insertion order often requires a complete traversal followed by a Collections.reverse() call.
Modifications:
- Provide a generic base class which provides all the implementation for headers in netty
- Provide an extension to this class which allows for name type conversions to happen (to accommodate legacy CharSequence to String conversions)
- Update the headers interface to clarify when conversions will happen
- Update the headers data structure so that appends are done to avoid unnecessary iteration or collection reversal
Result:
- A more unified class hierarchy for headers in netty
- Improved headers data structure and algorithms
- The headers API more clearly identifies when conversions are required
Related: #2952
Motivation:
META-INF/io.netty.versions.properties in netty-all-*.jar does not
contain the version information about the netty-transport-epoll module.
Modifications:
Fix a bug in the regular expression in pom.xml, so that the artifacts
with a classifier are also included in the version properties file.
Result:
The version information of all modules is included in the version
properties file, and Version.identify() does not miss
netty-transport-epoll.
Motivation:
When a thread-local random is used first time, it tries to get the
initial seed uniquifier from SecureRandom, and it sometimes takes time
for the system to have enough entropy.
In ChannelDeregistrationTest, the retrieval of the initial seed
uniquifier takes place lazily when a new channel is created, and it
blocks the channel creation for more than 2 seconds, resulting in test
timeouts.
Modifications:
Increase the timeout of the tests to 5 seconds, because the timeout of
the initial seed uniquifier retrieval is 3 seconds.
Result:
Less flaky tests