Motivation:
The current logic in DefaultHttp2OutboundFlowController for handling the
case of a stream shutdown results in an Http2Exception (not an
Http2StreamException). This causes a GO_AWAY to be sent for what
really could just be a stream-specific error.
Modifications:
Modified DefaultHttp2OutboundFlowController to set a stream exception
rather than a connection-wide exception. Also using the error code of
INTERNAL_ERROR rather than STREAM_CLOSED, since it's more appropriate
for this case.
Result:
A GO_AWAY is no longer triggered when a stream closes prematurely.
Motivation:
Currently, due to flow control, HEADERS frames can be written
out of order with respect to DATA frames.
Modifications:
When data is written, we preserve the future as the lastWriteFuture in
the outbound flow controller. The encoder then uses the lastWriteFuture
such that headers are only written after the lastWriteFuture completes.
Result:
HEADERS/DATA write order is correctly preserved.
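For illustration, below is a minimal sketch of the deferral pattern using plain Netty futures. The standalone helper and the names lastDataWrite/headersFrame are hypothetical and not the encoder's actual code.

import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

final class HeadersAfterDataSketch {
    // Write the HEADERS frame only after the last DATA write recorded by the
    // flow controller has completed, so the frames reach the wire in order.
    static void writeHeadersAfterData(final ChannelHandlerContext ctx,
                                      final ChannelFuture lastDataWrite,
                                      final Object headersFrame,
                                      final ChannelPromise promise) {
        if (lastDataWrite == null || lastDataWrite.isDone()) {
            ctx.write(headersFrame, promise); // no DATA in flight, write immediately
            return;
        }
        lastDataWrite.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                ctx.write(headersFrame, promise); // DATA has been written, HEADERS may follow
            }
        });
    }
}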
Motivation:
The HTTP/2 specification indicates that an error should be thrown when non-ASCII characters are detected while converting headers from HTTP/2 to HTTP/1.x.
Modifications:
- The ASCII validation is already in place; the exception it raises is now properly converted into a RST_STREAM error.
Result:
- If the HTTP/2 to HTTP/1.x translation layer is in use and a non-ASCII header is received, a RST_STREAM frame is sent in response.
Motivation:
The DefaultOutboundFlowController was attempting to write frames with a negative length. This resulted in attempting to allocate a buffer of negative size and thus an exception.
Modifications:
- Don't allow DefaultOutboundFlowController to write negative length buffers.
Result:
No more negative-length writes, which previously resulted in IllegalArgumentExceptions.
Motivation:
CollectionUtils has only one method and it is used only in DefaultHeaders.
Modification:
Move CollectionUtils.equals() to DefaultHeaders and make it private
Result:
One less class to expose in our public API
Motivation:
x-gzip and x-deflate are not standard header values, and thus should be
removed from HttpHeaderValues, which is meant to provide the standard
values only.
Modifications:
- Remove X_DEFLATE and X_GZIP from HttpHeaderValues
- Move X_DEFLATE and X_GZIP to HttpContentDecompressor and
DelegatingDecompressorFrameListener
- We have slight code duplication here, but it does less harm than
having non-standard constants.
Result:
HttpHeaderValues contains only standard header values.
Related: 4ce994dd4f
Motivation:
In 4.1, we were not able to change the type of the HTTP header name and
value constants from String to AsciiString due to backward compatibility
reasons.
Instead of breaking backward compatibility in 4.1, we introduced new
types called HttpHeaderNames and HttpHeaderValues which provide the
AsciiString version of the constants, and then deprecated
HttpHeaders.Names/Values.
We should now make the same change and actively delete the deprecated
classes.
Modifications:
- Remove HttpHeaders.Names/Values and RtspHeaders
- Add HttpHeaderNames/Values and RtspHeaderNames/Values
- Make HttpHeaderValues.WEBSOCKET lowercase because it is actually
lowercase in all WebSocket versions but the oldest one
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
A user who upgraded from 4.0 to 4.1 first and removed the references to
the deprecated classes and methods can easily upgrade from 4.1 to 5.0.
Motivation:
When ALPN/NPN is disabled, a user has to instantiate a new
ApplicationProtocolConfig with meaningless parameters.
Modifications:
- Add ApplicationProtocolConfig.DISABLED, the singleton instance
- Reject the constructor calls with Protocol.NONE, which doesn't make
much sense because a user should use DISABLED instead.
Result:
More user-friendly API when ALPN/NPN is not needed by a user.
Motivation:
Netty currently does not support creating SslContext objects that support mutual authentication.
Modifications:
- Modify the SslContext interface to support mutual authentication for JDK and OpenSSL
- Provide an implementation of mutual authentication for JDK
- Add unit tests for the new feature
Result:
Netty SslContext interface supports mutual authentication and JDK providers have an implementation.
Motivation:
If there are no common protocols in the ALPN protocol exchange, we still complete the handshake successfully. This handshake should instead fail, according to http://tools.ietf.org/html/rfc7301#section-3.2, with a status of no_application_protocol. The specification also allows the server to "play dumb" and not advertise that it supports ALPN in this case (see the MAY clauses in http://tools.ietf.org/html/rfc7301#section-3.1).
Modifications:
- The upstream project used for ALPN (alpn-boot) does not support this, so PR https://github.com/jetty-project/jetty-alpn/pull/3 was submitted.
- The netty code using alpn-boot now supports the new interface (returning null from the existing method).
- The alpn-boot version number was updated in the pom.xml files.
Result:
- Netty fails the SSL handshake if ALPN is used and there are no common protocols.
Motivation:
The SslHandler currently forces the use of a direct buffer for the input to the SSLEngine.wrap(..) operation. This allocation is not always needed and should be done conditionally.
Modifications:
- Use the pre-existing wantsDirectBuffer variable as the condition to do the conversion.
Result:
- A direct byte buffer allocation and data copy are no longer required for every SslHandler wrap operation.
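For illustration only (not the actual SslHandler code; wrapInput is a hypothetical helper), the conditional conversion amounts to something like:

import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;

final class WrapInputSketch {
    // Pick the cheapest ByteBuffer view of 'in' for SSLEngine.wrap(...):
    // only allocate and copy into direct memory when the engine requires it.
    static ByteBuffer wrapInput(ByteBuf in, boolean wantsDirectBuffer) {
        if (!wantsDirectBuffer || in.isDirect()) {
            return in.nioBuffer(); // no allocation, no copy
        }
        ByteBuffer direct = ByteBuffer.allocateDirect(in.readableBytes());
        in.getBytes(in.readerIndex(), direct);
        direct.flip();
        return direct;
    }
}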
Motivation:
The SslHandler wrap method requires that a direct buffer be passed to the SSLEngine.wrap() call. If the ByteBuf parameter does not have an underlying direct buffer then one is allocated in this method, but it is not released.
Modifications:
- Release the direct ByteBuffer that is only accessible within the scope of SslHandler.wrap
Result:
Memory leak in SslHandler.wrap is fixed.
Motivation:
If the HTTP/2 encoder has exhausted all available stream IDs, a GOAWAY frame is not sent. Once the encoder detects that a new stream ID has rolled over past the last valid stream ID, a GOAWAY should be sent, as recommended in section [5.1.1](https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.1.1).
Modifications:
- This condition is already detected; it now results in a GOAWAY being sent.
- Add a subclass of Http2Exception so the encoder can detect this special case.
- Add a unit test which checks that the GOAWAY is sent/received.
Result:
When the encoder attempts to use the first 'rolled over' stream ID, a GOAWAY is sent.
Motivation:
The requirements for the masking of frames and for checks of correct
masking in the WebSocket specification have a large impact on performance.
While it is mandatory for browsers to use masking, there are other
applications (like IPC protocols) that want to use WebSocket framing and its
proxy-traversal characteristics without the overhead of masking. The WebSocket
standard also mentions that the requirement for mask verification on the
server side might be dropped in the future.
Modifications:
Added an optional parameter, allowMaskMismatch, for the WebSocket decoder
that allows a server to also accept unmasked frames (and clients to accept
masked frames).
The option can be set through the WebSocket handshaker
constructors as well as the WebSocket client and server handlers.
The public API for existing components does not change; calls are
forwarded to functions which implicitly set masking as required by the
specification.
For WebSocket clients, an additional parameter is added that allows
disabling the masking of frames sent by the client.
Result:
This update gives netty users the ability to create and use completely
unmasked WebSocket connections in addition to the normal masked channels
that the standard describes.
Motivation:
DnsNameResolver.testResolveA() tests whether the cache works, in addition to the usual DNS protocol test. To ensure the result from the cache is identical to the result without the cache, it compares the two Maps which contain the results of cached/uncached resolution. Comparing the two Maps yields the expected behavior, but the output of the comparison on failure is often unreadable because of its length.
Modifications:
Compare entry-by-entry for more comprehensible test failure output
Result:
When failure occurs, it's easier to see which domain was the cause of the problem.
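A sketch of the comparison style, assuming the results are Maps from domain name to resolved address (the helper and its parameter names are illustrative):

import static org.junit.Assert.assertEquals;

import java.net.InetAddress;
import java.util.Map;

final class ResolutionComparison {
    // Compare entry-by-entry so a failure names the offending domain
    // instead of printing both Maps in full.
    static void assertSameResolution(Map<String, InetAddress> uncached, Map<String, InetAddress> cached) {
        assertEquals("different number of resolved domains", uncached.size(), cached.size());
        for (Map.Entry<String, InetAddress> e : uncached.entrySet()) {
            assertEquals("unexpected cached address for " + e.getKey(), e.getValue(), cached.get(e.getKey()));
        }
    }
}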
Motivation:
At the moment the whole HTTP header must be parsed at once, which can lead to parsing the same bytes multiple times. We can do better here and allow parsing in multiple steps.
Modifications:
- Do not parse headers multiple times
- Simplify the code
- Eliminate unnecessary String[] creations
- Use readSlice(...).retain() when possible.
Result:
Performance improvements as shown in the included benchmark below.
Before change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.55ms 15.10ms 245.02ms 90.26%
Req/Sec 196.33k 30.17k 297.29k 76.03%
373954750 requests in 2.00m, 50.15GB read
Requests/sec: 3116466.08
Transfer/sec: 427.98MB
After change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 20.91ms 36.79ms 1.26s 98.24%
Req/Sec 206.67k 21.69k 243.62k 94.96%
393071191 requests in 2.00m, 52.71GB read
Requests/sec: 3275971.50
Transfer/sec: 449.89MB
Motivation:
As reported in #2953, the WebSocket server example contained a bug and therefore did not work with Chrome:
a WebSocket extension is added to the pipeline but extensions were disallowed in the handshaker and decoder,
which led the decoder to close the connection after receiving an extension frame.
Modifications:
Allow WebSocket extensions in the handshaker so that the extension is correctly enabled.
Result:
Working WebSocket server example
Fixes #2953
Related: #2945
Motivation:
Some special handlers such as TrafficShapingHandler need to override the
writability of a Channel to throttle the outbound traffic.
Modifications:
Add a new indexed property called 'user-defined writability flag' to
ChannelOutboundBuffer so that a handler can override the writability of
a Channel easily.
Result:
A handler can override the writability of a Channel using an unsafe API.
For example:
Channel ch = ...;
ch.unsafe().outboundBuffer().setUserDefinedWritability(1, false);
Motivation:
The 4.1.0-Beta3 implementation of HttpObjectAggregator.handleOversizedMessage closes the
connection if the client sent oversized chunked data with no Expect:
100-continue header. This causes a broken pipe or "connection reset by
peer" error in some clients (tested on Firefox 31 OS X 10.9.5,
async-http-client 1.8.14).
This part of the HTTP 1.1 spec (below) seems to say that in this scenario the connection
should not be closed (unless the intention is to be very strict about
how data should be sent).
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
"If an origin server receives a request that does not include an
Expect request-header field with the "100-continue" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations."
Modifications:
Change HttpObjectAggregator.handleOversizedMessage to close the
connection only if keep-alive is off and Expect: 100-continue is
missing. Update test to reflect the change.
Result:
Broken pipe and connection reset errors on the client are avoided when
oversized data is sent.
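The new close decision boils down to a check like the following (an illustrative stand-in, not the actual handleOversizedMessage code; the boolean parameters stand for the corresponding header checks):

final class OversizedMessageSketch {
    // Close only when the client neither expects a 100-continue (so it may already
    // be streaming the body) nor asked for keep-alive.
    static boolean shouldClose(boolean expectContinue, boolean keepAlive) {
        return !expectContinue && !keepAlive;
    }
}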
Motivation:
Some of the contains method signatures did not use parameter types matching their method names. These same methods were also using the generic object conversion methods instead of the correct type-specific conversion methods.
Modifications:
- Correct the parameter types of the contains methods to match the method names
- Correct the implementation of these methods to use the correct conversion methods
Result:
The contains signatures in Headers now match their names, and the correct conversion methods are used.
Motivation:
- There are still various inspector warnings to fix.
- ValueConverter.convert() methods need to end with the type name like
other methods in Headers, such as setInt() and addInt(), for more
consistency
Modifications:
- Fix all inspector warnings
- Rename ValueConverter.convert() to convert<type>()
Result:
- Cleaner code
- Consistency
Related: #2983
Motivation:
It is a well known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:
ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
f.addListener(ChannelFutureListener.CLOSE);
When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.
Modifications:
- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
writing an empty buffer does not fail even if the pipeline contains an
HttpObjectEncoder
- Add a test
Result:
An exception is not triggered anymore by HttpObjectEncoder, when a user
attempts to write an empty buffer.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.
Modifications:
Release the HttpRequest when done processing a preflight request or
responding with a 403.
Result:
Using the CorsHandler will not cause buffer leaks.
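A minimal sketch of the fix (respondAndRelease is a hypothetical helper, not the CorsHandler's actual method): a request that is answered in place must be released because it never reaches another handler.

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.util.ReferenceCountUtil;

final class CorsReleaseSketch {
    static void respondAndRelease(ChannelHandlerContext ctx, HttpRequest request, HttpResponse response) {
        try {
            ctx.writeAndFlush(response); // answer the preflight or short-circuited request ourselves
        } finally {
            ReferenceCountUtil.release(request); // nothing downstream will release it for us
        }
    }
}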
Motivation:
Headers within netty do not cleanly share a common class hierarchy. As a result, some header types support some operations
and not others. Consolidating the class hierarchy will allow for easier maintenance and scalability for new codecs.
The existing hierarchy also has a few shortcomings, such as it not being clear when data conversions happen. This
could result in unintentionally getting back a collection or iterator where a conversion on each entry must happen.
The current headers algorithm also prepends all elements, which means that finding the first element or returning a collection
in insertion order often requires a complete traversal followed by a Collections.reverse() call.
Modifications:
- Provide a generic base class which provides all the implementation for headers in netty
- Provide an extension to this class which allows name type conversions to happen (to accommodate legacy CharSequence to String conversions)
- Update the headers interface to clarify when conversions will happen
- Update the headers data structure so that elements are appended, avoiding unnecessary iteration or collection reversal
Result:
- More unified class hierarchy for headers in netty
- Improved headers data structure and algorithms
- The headers API more clearly identifies when conversions are required
Related: #2952
Motivation:
META-INF/io.netty.versions.properties in netty-all-*.jar does not
contain the version information about the netty-transport-epoll module.
Modifications:
Fix a bug in the regular expression in pom.xml, so that artifacts
with a classifier are also included in the version properties file.
Result:
The version information of all modules is included in the version
properties file, and Version.identify() no longer misses
netty-transport-epoll.
Motivation:
When a thread-local random is used for the first time, it tries to get the
initial seed uniquifier from SecureRandom, and it sometimes takes time
for the system to gather enough entropy.
In ChannelDeregistrationTest, the retrieval of the initial seed
uniquifier takes place lazily when a new channel is created, and it
blocks the channel creation for more than 2 seconds, resulting in test
timeouts.
Modifications:
Increase the timeout of the tests to 5 seconds, because the timeout of
the initial seed uniquifier retrieval is 3 seconds.
Result:
Less flaky tests
Motivation:
There should be a unit test for when the stream ID wraps around and is 'too large' or negative.
The lack of a unit test masked an issue where the expected exception was not being thrown.
Modifications:
Add a unit test covering the creation of a remote and a local stream whose stream ID is 'too large'
Result:
Unit test scope increases.
Motivation:
A needless local copy of keys.length was being used.
Modifications:
Use keys.length directly everywhere.
Result:
Slight performance improvement of hashIndex.
Motivation:
The hashIndex method currently uses a conditional to handle negative
keys. This could be done without a conditional to slightly improve
performance.
Modifications:
Modified hashIndex() to avoid using a conditional.
Result:
Slight performance improvement to hashIndex().
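A sketch of the idea (illustrative, not necessarily the exact IntObjectHashMap code): a second modulus folds a negative remainder back into range without branching.

final class BranchFreeHashIndexSketch {
    private final int[] keys = new int[16];

    // For a negative key, key % keys.length lies in (-keys.length, 0], so adding
    // keys.length and taking the modulus again lands in [0, keys.length).
    int hashIndex(int key) {
        return (key % keys.length + keys.length) % keys.length;
    }
}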
Motivation:
IntObjectHashMap throws an exception when using negative values for
keys.
Modifications:
Changed hashIndex() to normalize the index if the mod operation returns
a negative number.
Result:
IntObjectHashMap supports negative key values.
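An illustrative sketch of the conditional normalization (not necessarily the map's exact code):

final class NormalizedHashIndexSketch {
    private final int[] keys = new int[16];

    // Java's % can return a negative remainder for a negative key;
    // shift such a result back into the valid bucket range.
    int hashIndex(int key) {
        int index = key % keys.length;
        return index < 0 ? index + keys.length : index;
    }
}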
Motivation:
Make the code much more readable.
Modifications:
- Added states of decompression.
- Refactored the decode(...) method to use these states.
Result:
Much more readable decoder which looks like other compression decoders.
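A generic sketch of the state-based pattern (the state names and the 2-byte header are made up; the real decoder defines its own states and framing):

import io.netty.buffer.ByteBuf;

final class StatefulDecodeSketch {
    private enum State { READ_HEADER, READ_DATA }
    private State state = State.READ_HEADER;

    // Each call consumes only what the current state needs and then advances,
    // so partial input never forces re-parsing from the beginning.
    void decode(ByteBuf in) {
        if (state == State.READ_HEADER) {
            if (in.readableBytes() < 2) {
                return;              // wait for more bytes
            }
            in.skipBytes(2);         // stand-in for parsing the stream header
            state = State.READ_DATA;
        }
        if (state == State.READ_DATA) {
            in.skipBytes(in.readableBytes()); // stand-in for inflating the payload
        }
    }
}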
Motivation:
Issue #3004 shows that the "=" character was not supported as it should be
by HttpPostRequestDecoder in a form-data boundary.
Modifications:
Add 2 methods to StringUtil:
split with maxParts: splits a String into at most maxParts parts (to prevent
extra '=' characters from causing unwanted extra splits)
substringAfter: returns the part of a String after a delimiter (since the first part is not needed)
Use those methods in HttpPostRequestDecoder, and change
HttpPostRequestDecoderTest to check using a boundary beginning with "=".
Result:
The fix brings more stability and resolves the issue.
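A self-contained illustration (plain JDK calls standing in for the new StringUtil helpers; the boundary value is made up) of why a bounded split and a substring-after extraction matter when the boundary itself contains '=':

public final class BoundarySplitDemo {
    public static void main(String[] args) {
        String parameter = "boundary==_NextPart_A"; // hypothetical boundary starting with '='

        // An unbounded split breaks the value apart at every '='.
        String[] unbounded = parameter.split("=");   // ["boundary", "", "_NextPart_A"]

        // A split capped at two parts keeps everything after the first '=' intact,
        // which is what the new maxParts variant provides.
        String[] bounded = parameter.split("=", 2);  // ["boundary", "=_NextPart_A"]

        // substringAfter-style extraction of just the value.
        String value = parameter.substring(parameter.indexOf('=') + 1); // "=_NextPart_A"

        System.out.println(unbounded.length + " vs " + bounded.length + ": " + value);
    }
}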
Motivation:
Currently, when receiving DATA/HEADERS frames, we throw Http2Exception (a
connection error) instead of Http2StreamException (a stream error). This
is incorrect according to the HTTP/2 spec.
Modifications:
Updated various places in the encoder and decoder that were out of spec
with respect to connection/state checking.
Result:
Stream state verification is properly handled.
Related: #2034
Motivation:
Some users want to mock Bootstrap (or ServerBootstrap), and thus they
should not be final but be fully overridable and extensible.
Modifications:
Remove finals wherever possible
Result:
@daschl is happy.
Related: #2964
Motivation:
Writing a zero-length FileRegion to an NIO channel will lead to an
infinite loop.
Modification:
- Do not write a zero-length FileRegion, guarding the write with a proper 'if' check.
- Update the testsuite
Result:
Another bug fixed
Motivation:
We see occasional failures in the datagram tests saying 'address already
in use' when we attempt to bind to a port returned by
TestUtils.getFreePort().
It turns out that TestUtils.getFreePort() only checks whether a TCP port is
available.
Modifications:
Also check if the UDP port is available, so that the datagram tests do not
fail because of the 'address already in use' error during a bind
attempt.
Result:
Less chance of datagram test failures
Motivation:
Twitter's hpack has been upgraded to 0.9.1; we should upgrade to the latest release.
Modifications:
Updated the parent pom to specify the dependency version. Updated the
http2 pom to use the version specified by the parent.
Result:
HTTP/2 updated to the latest hpack release.
Motivation:
The default name resolver attempts to resolve the bad host name (destination.com) and actually succeeds, making the ProxyHandlerTest fail.
Modification:
Use NoopNameResolverGroup instead.
Result:
ProxyHandlerTest passes again.
Motivation:
So far, we relied on the domain name resolution mechanism provided by
JDK. It served its purpose very well, but had the following
shortcomings:
- Domain name resolution is performed in a blocking manner.
This becomes a problem when a user has to connect to thousands of
different hosts. e.g. web crawlers
- It is impossible to employ an alternative cache/retry policy.
e.g. lower/upper bound in TTL, round-robin
- It is impossible to employ an alternative name resolution mechanism.
e.g. Zookeeper-based name resolver
Modification:
- Add the resolver API in the new module: netty-resolver
- Implement the DNS-based resolver: netty-resolver-dns
.. which uses netty-codec-dns
- Make ChannelFactory reusable because it's now used by
io.netty.bootstrap, io.netty.resolver.dns, and potentially by other
modules in the future
- Move ChannelFactory from io.netty.bootstrap to io.netty.channel
- Deprecate the old ChannelFactory
- Add ReflectiveChannelFactory
Result:
It is trivial to resolve a large number of domain names asynchronously.
Motivation:
DnsQueryEncoder does not encode the 'additional resources' section at all, which contains the pseudo-RR as defined in RFC 2671.
Modifications:
- Modify DnsQueryEncoder to encode the additional resources
- Fix a bug in DnsQueryEncoder where an empty name is encoded incorrectly
Result:
A user can send an EDNS query.
Motivation:
When a datagram packet is sent to a destination where nobody is actually listening,
the destination's O/S will respond with an ICMP Port Unreachable packet.
The ICMP Port Unreachable packet is translated into PortUnreachableException by JDK.
PortUnreachableException is not a harmful exception that prevents a user from sending a datagram.
Therefore, we should not close a datagram channel when PortUnreachableException is caught.
Modifications:
- Do not close a channel when the caught exception is PortUnreachableException.
Result:
A datagram channel is not closed unexpectedly anymore.
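An illustrative sketch of the distinction (not the transport's actual code; shouldCloseOnReadError is a hypothetical helper):

import java.net.PortUnreachableException;

final class DatagramErrorSketch {
    // An ICMP Port Unreachable surfaces as PortUnreachableException; it only means
    // the last datagram had no receiver, so the channel itself stays usable.
    static boolean shouldCloseOnReadError(Throwable cause) {
        return !(cause instanceof PortUnreachableException);
    }
}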