Motivation:
Although the new IntObjectMap.values(), which returns a Collection, is
useful, the removed values(Class<V>), which returns an array, is also
useful. Keeping it also preserves backward compatibility.
Modifications:
- Add IntObjectMap.values(Class<V>) back
- Miscellaneous improvements
  - Cache the collection returned by IntObjectHashMap.values()
  - Fix inspector warnings
- Update the IntObjectHashMapTest to test both values() methods
Result:
- Backward compatibility
- Potential performance improvement of values()
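For illustration, both variants can now be used side by side (sketch; the value type is arbitrary):
IntObjectMap<String> map = ...;
Collection<String> asCollection = map.values();  // new Collection-returning variant
String[] asArray = map.values(String.class);     // restored array-returning variant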
Related: #3122
Motivation:
The HttpStaticFileServer example writes the LastHttpContent twice at the
end of the transfer. HttpChunkedInput already produces a
LastHttpContent at the end of the stream, so there's no reason to write
another.
Modifications:
Do not write LastHttpContent in HttpStaticFileServerHandler when
HttpChunkedInput is used to transfer a file.
Result:
HttpStaticFileServer no longer violates the protocol.
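For illustration, the corrected write sequence looks roughly like this (sketch, not the exact example code):
ChannelHandlerContext ctx = ...;
HttpResponse response = ...;
RandomAccessFile raf = ...;
ctx.write(response);
ChannelFuture f = ctx.writeAndFlush(new HttpChunkedInput(new ChunkedFile(raf)));
// No extra LastHttpContent.EMPTY_LAST_CONTENT write here:
// HttpChunkedInput already emits a LastHttpContent at the end of the stream.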
Motivation:
The mentioned commit contains a bug fix and an improvement in
IntObjectHashMap that require backporting.
Modifications:
Update IntObjectMap, IntObjectHashMap, and IntObjectHashMapTest
Result:
Easier to backport HTTP/2 and other changes in master in the future
Related: #3157
Motivation:
It would be convenient to have an easy way to classify an
HttpResponseStatus based on the first digit of the HTTP status code, as
defined in RFC 2616:
- Informational 1xx
- Success 2xx
- Redirection 3xx
- Client Error 4xx
- Server Error 5xx
Modification:
- Add HttpStatusClass
- Add HttpResponseStatus.codeClass() that returns the class of the HTTP
status code
Result:
It's easier to determine the class of an HTTP status
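For example (sketch; constant names as introduced by HttpStatusClass):
HttpResponseStatus status = HttpResponseStatus.NOT_FOUND;
if (status.codeClass() == HttpStatusClass.CLIENT_ERROR) {
    // handle all 4xx responses in one place
}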
Motivation:
When running the examples using the provided run-examples.sh script, the
log level is 'info'. It can be handy to be able to configure a
different level, for example 'debug', while learning and trying out
the examples.
Modifications:
Added a dependency on logback-classic to the examples pom.xml, and also
added a logback configuration file. The log level can be configured by
setting the 'logLevel' system property; if that property is not set,
the default is 'info'.
The run-examples.sh script was updated to show an example of using the
system property to set the log level to 'debug'.
Result:
It is now possible to turn on debug logging by setting a system property
on the command line.
Motivation:
When authenticating with a proxy server, HttpProxyHandler should use the
'Proxy-Authorization' header rather than the 'Authorization' header.
Modifications:
- Use 'Proxy-Authorization' header
Result:
Can connect to an HTTP proxy server that requires authentication.
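Illustrative client setup (sketch; the proxy address and credentials are placeholders):
ChannelPipeline p = ...;
p.addLast(new HttpProxyHandler(new InetSocketAddress("proxy.example.com", 8080), "user", "secret"));
// The handler now sends these credentials in the 'Proxy-Authorization' header.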
Motivation:
I found myself writing AsciiString constants in my code for
response statuses and thought that perhaps it might be nice to have
them defined by Netty instead.
Modifications:
Added codeAsText() to HttpResponseStatus, which returns the status code as
an AsciiString.
In addition, added the 421 Misdirected Request response code from
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#section-9.1.2
This response status was renamed in draft 15:
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#appendix-A.1
But the code itself was not changed, and I thought using the latest name
would be better.
Result:
It is now possible to specify a status like this:
new DefaultHttp2Headers().status(HttpResponseStatus.OK.codeAsText());
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive was created from a string just to extract the unboxed primitive value.
- Added a static modifier to the DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes instances of the class larger and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid recompiling it each time part of an authority needs to be replaced.
- Improved the use of StringBuilder instances.
Result:
Performance improvements.
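For illustration, the boxing fix amounts to something like this (sketch, not the actual code):
CharSequence value = ...;
// Before: creates a boxed Long just to extract the primitive value
long slow = Long.valueOf(value.toString());
// After: parses directly into a primitive
long fast = Long.parseLong(value.toString());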
Motivation:
Currently, we only test our ZlibEncoders against our ZlibDecoders. It is
convenient to write such tests, but it does not necessarily guarantee
their correctness. For example, both encoder and decoder might be faulty
even if the tests pass.
Modifications:
Add another test that makes sure that our GZIP encoder generates the
GZIP trailer, using the fact that GZIPInputStream raises an EOFException
when the GZIP trailer is missing.
Result:
More coverage for GZIP compression
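A sketch of the approach taken by the new test (illustrative, not the exact test code):
EmbeddedChannel ch = new EmbeddedChannel(ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
ch.writeOutbound(Unpooled.copiedBuffer("netty", CharsetUtil.UTF_8));
ch.finish();
ByteArrayOutputStream compressed = new ByteArrayOutputStream();
for (ByteBuf buf = ch.readOutbound(); buf != null; buf = ch.readOutbound()) {
    buf.readBytes(compressed, buf.readableBytes());
    buf.release();
}
// GZIPInputStream throws an EOFException while reading if the GZIP trailer is missing.
GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed.toByteArray()));
while (in.read() != -1) {
    // draining to -1 without an EOFException proves the trailer is present
}
in.close();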
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Motivation:
MQTT 3.1.1 became an OASIS Standard on 13 Nov 2014.
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.html
MQTT 3.1.1 is a minor update to 3.1, but the previous codec-mqtt supported only MQTT 3.1.
Modifications:
- Accept the protocol name `MQTT` in addition to the previous `MQIsdp` in the `CONNECT` variable header.
- Update client identifier validation to cover both 3.1 and 3.1.1.
- Add the new `SUBACK` error code `FAILURE (0x80)`.
- Add a test for encoding/decoding a 3.1.1 `CONNECT`.
Result:
MqttEncoder/MqttDecoder can encode/decode frames of 3.1 or 3.1.1.
Motivation:
RFC 2616, 4.3 Message Body states that:
All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
message-body. All other responses do include a message-body, although it MAY be of zero length.
Modifications:
HttpContentEncoder was previously modified to cater for HTTP 100 responses. This check is enhanced to
include HTTP 204 and 304 responses.
Result:
Empty response bodies will not be modified to include the compression footer. This footer interfered with Chrome's
response parsing, leading to "hanging" requests.
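Conceptually, the enhanced check looks like this (sketch with assumed names, not the actual code):
HttpResponse response = ...;
int code = response.status().code();
if (code >= 100 && code < 200 || code == 204 || code == 304) {
    // pass the response through untouched; these responses must not carry a message body
}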
Motivation:
HttpObjectDecoder extended ReplayingDecoder, which is slightly slower than ByteToMessageDecoder.
Modifications:
- Changed the superclass of HttpObjectDecoder from ReplayingDecoder to ByteToMessageDecoder.
- Rewrote the decode() method of HttpObjectDecoder to use a proper state machine.
- Changed the private methods HeaderParser.parse(ByteBuf), readHeaders(ByteBuf), readTrailingHeaders(ByteBuf) and skipControlCharacters(ByteBuf) to consider the number of available bytes.
- Made HeaderParser and LineParser static inner classes.
- Replaced the unsafe actualReadableBytes() with buffer.readableBytes().
Result:
Improved performance of HttpObjectDecoder by approximately 177%.
Motivation:
NetUtil.isValidIpV6Address() handles the interface name in IPv6 address
incorrectly. For example, it returns false for the following addresses:
- ::1%lo
- ::1%_%_in_name_
Modifications:
- Strip the square brackets before validation for simplicity
- Strip the part after the percent sign completely before validation for
simplicity
- Simplify and reformat NetUtilTest
Result:
- The interface names in IPv6 addresses are handled correctly.
- NetUtilTest is cleaner
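For illustration, the addresses from the motivation now validate (expected behaviour after this change):
NetUtil.isValidIpV6Address("::1%lo");          // now true
NetUtil.isValidIpV6Address("::1%_%_in_name_"); // now true; the part after '%' is ignored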
Motivation:
ChannelPromiseAggregator and ChannelPromiseNotifiers only allow
consumers to work with Channels as the result type. Generic versions
of these classes allow consumers to aggregate or broadcast the results
of an asynchronous execution with other result types.
Modifications:
Add PromiseAggregator and PromiseNotifier. Add unit tests for both.
Remove code in ChannelPromiseAggregator and ChannelPromiseNotifier and
modify them to extend the new base classes.
Result:
Consumers can now aggregate or broadcast the results of an asynchronous
execution with result types other than Channel.
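For example, the result of a write can now be forwarded to a plain Promise (sketch; the generic signatures are assumed to mirror the Channel-specific variants):
Channel ch = ...;
Promise<Void> userPromise = ch.eventLoop().newPromise();
ch.writeAndFlush(Unpooled.EMPTY_BUFFER)
  .addListener(new PromiseNotifier<Void, ChannelFuture>(userPromise));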
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input. This is useful in situations where the client is unable to determine if the current content chunk is the last content chunk (i.e. a proxy forwarding content when transfer encoding is chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
Motivation:
Headers has getTimeMillis(), not getDate()
Modification:
- Replace HttpHeaders.getDate() with getTimeMillis() so that migration
is smoother
Result:
User code which accesses a date header is easier to migrate
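For example (sketch; the exact return type of the new accessor is assumed):
HttpHeaders headers = ...;
Long date = headers.getTimeMillis(HttpHeaderNames.DATE);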
Motivation:
CollectionUtils has only one method and it is used only in DefaultHeaders.
Modification:
Move CollectionUtils.equals() to DefaultHeaders and make it private
Result:
One less class to expose in our public API
Motivation:
The commit 50e06442c3 changed the type of
the constants in HttpHeaders.Names and HttpHeaders.Values, making 4.1
backward-incompatible with 4.0.
It also introduced newer utility classes such as HttpHeaderUtil, which
deprecate most static methods in HttpHeaders. To ease the migration
between 4.1 and 5.0, we should deprecate all static methods that are
non-existent in 5.0 and provide proper counterparts.
Modification:
- Revert the changes in HttpHeaders.Names and Values
- Deprecate all static methods in HttpHeaders in favor of:
- HttpHeaderUtil
- the member methods of HttpHeaders
- AsciiString
- Add integer and date access methods to HttpHeaders for easier future
migration to 5.0
- Add HttpHeaderNames and HttpHeaderValues which provide standard HTTP
constants in AsciiString
- Deprecate HttpHeaders.Names and Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Add RtspHeaderNames and RtspHeaderValues which provide standard RTSP
constants in AsciiString
- Deprecate RtspHeaders.*
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
Backward compatibility is recovered. New classes and methods will make
the migration to 5.0 easier, once (Http|Rtsp)Header(Names|Values) are
ported to master.
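Illustrative usage of the new constants and utility class (sketch; method names assumed to mirror the deprecated HttpHeaders statics):
HttpRequest req = ...;
HttpResponse res = ...;
res.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN);
boolean keepAlive = HttpHeaderUtil.isKeepAlive(req);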
Motivation:
The header class hierarchy and algorithm were improved on the master branch for versions 5.x. These improvements should be backported to the 4.1 baseline.
Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258
Result:
Header improvements in master branch are available in 4.1 branch.
Motivation:
When ALPN/NPN is disabled, a user has to instantiate a new
ApplicationProtocolConfig with meaningless parameters.
Modifications:
- Add ApplicationProtocolConfig.DISABLED, the singleton instance
- Reject the constructor calls with Protocol.NONE, which doesn't make
much sense because a user should use DISABLED instead.
Result:
More user-friendly API when ALPN/NPN is not needed by a user.
Motivation:
The previous backport removed the old methods and constructors. They should
not be removed in 4.x, but just deprecated in favor of the new methods
and constructors.
Modifications:
Add back the removed methods and constructors in SslContext and its
subtypes for backward compatibility.
Result:
Backward compatibility issues fixed.
Motivation:
Improvements were made on the main line to support ALPN and mutual
authentication for TLS. These should be backported.
Modifications:
- Backport commits from the master branch
- f8af84d599
- e74c8edba3
Result:
Support for ALPN and mutual authentication.
Motivation:
The SslHandler currently forces the use of a direct buffer for the input to the SSLEngine.wrap(..) operation. This allocation may not always be desired and should be done conditionally.
Modifications:
- Use the pre-existing wantsDirectBuffer variable as the condition to do the conversion.
Result:
- An allocation of a direct byte buffer and a copy of data are no longer required for every SslHandler wrap operation.
Motivation:
The Java implementations of Inet6Address.getHostName() do not follow RFC 5952 (http://tools.ietf.org/html/rfc5952#section-4) for the recommended string representation. This introduces inconsistencies when integrating with other technologies that do follow the RFC.
Modifications:
- NetUtil.java gains another public static method to convert an InetAddress to a string. Inet4Address uses the existing Java InetAddress.getHostAddress() implementation, and new code implements the RFC 5952 IPv6 string conversion.
- New unit tests to exercise the new method
Result:
Netty provides an RFC 5952 compliant string conversion method for IPv6 addresses
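For illustration (assuming the new method is NetUtil.toAddressString(InetAddress)):
InetAddress addr = InetAddress.getByName("2001:db8:0:0:0:0:0:1");
String s = NetUtil.toAddressString(addr); // "2001:db8::1" per RFC 5952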
Motivation:
The SslHandler wrap method requires that a direct buffer be passed to the SSLEngine.wrap() call. If the ByteBuf parameter does not have an underlying direct buffer then one is allocated in this method, but it is not released.
Modifications:
- Release the direct ByteBuffer only accessible in the scope of SslHandler.wrap
Result:
Memory leak in SslHandler.wrap is fixed.
Motivation:
The requirements for the masking of frames and for checks of correct
masking in the websocket specification have a large impact on performance.
While it is mandatory for browsers to use masking, there are other
applications (like IPC protocols) that want to use websocket framing and its proxy-traversing
characteristics without the overhead of masking. The websocket standard
also mentions that the requirement for mask verification on the server side
might be dropped in the future.
Modifications:
Added an optional parameter allowMaskMismatch for the websocket decoder
that allows a server to also accept unmasked frames (and clients to accept
masked frames).
This option can also be set through the websocket handshaker
constructors as well as the websocket client and server handlers.
The public API for existing components doesn't change; calls are
forwarded to functions which implicitly set masking as required by the
specification.
For websocket clients, an additional parameter allows disabling
the masking of frames that are sent by the client.
Result:
This update gives netty users the ability to create and use completely
unmasked websocket connections in addition to the normal masked channels
that the standard describes.
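For example, a server pipeline that tolerates unmasked frames could be built like this (sketch; the parameter order of the new decoder constructor is assumed):
ChannelPipeline p = ...;
// expectMaskedFrames, allowExtensions, maxFramePayloadLength, allowMaskMismatch
p.addLast(new WebSocket13FrameDecoder(true, true, 65536, true));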
Motivation:
DnsNameResolver.testResolveA() tests whether the cache works, in addition to the usual DNS protocol test. To ensure the result from the cache is identical to the result without the cache, it compares the two Maps which contain the results of cached/uncached resolution. Comparing the two Maps yields the expected behavior, but the output of the comparison on failure is often unreadable due to its length.
Modifications:
Compare entry-by-entry for more comprehensible test failure output
Result:
When failure occurs, it's easier to see which domain was the cause of the problem.
Motivation:
At the moment the whole HTTP header must be parsed at once, which can lead to parsing the same bytes multiple times. We can do better here and allow parsing in multiple steps.
Modifications:
- Do not parse headers multiple times
- Simplify the code
- Eliminate unnecessary String[] creations
- Use readSlice(...).retain() when possible.
Result:
Performance improvements as shown in the included benchmark below.
Before change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.55ms   15.10ms 245.02ms   90.26%
    Req/Sec   196.33k    30.17k  297.29k    76.03%
  373954750 requests in 2.00m, 50.15GB read
Requests/sec: 3116466.08
Transfer/sec:    427.98MB
After change:
[nmaurer@xxx]~% ./wrk-benchmark
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    20.91ms   36.79ms    1.26s   98.24%
    Req/Sec   206.67k    21.69k  243.62k    94.96%
  393071191 requests in 2.00m, 52.71GB read
Requests/sec: 3275971.50
Transfer/sec:    449.89MB
Motivation:
As reported in #2953, the websocket server example contained a bug and therefore did not work with Chrome:
a websocket extension is added to the pipeline, but extensions were disallowed in the handshaker and decoder,
which led the decoder to close the connection after receiving an extension frame.
Modifications:
Allow websocket extensions in the handshaker to correctly enable the extension.
Result:
Working websocket server example
Fixes #2953
Related: #2945
Motivation:
Some special handlers such as TrafficShapingHandler need to override the
writability of a Channel to throttle the outbound traffic.
Modifications:
Add a new indexed property called 'user-defined writability flag' to
ChannelOutboundBuffer so that a handler can override the writability of
a Channel easily.
Result:
A handler can override the writability of a Channel using an unsafe API.
For example:
Channel ch = ...;
ch.unsafe().outboundBuffer().setUserDefinedWritability(1, false);
Motivation:
The 4.1.0-Beta3 implementation of HttpObjectAggregator.handleOversizedMessage closes the
connection if the client sent oversized chunked data with no Expect:
100-continue header. This causes a broken pipe or "connection reset by
peer" error in some clients (tested on Firefox 31 OS X 10.9.5,
async-http-client 1.8.14).
This part of the HTTP 1.1 spec (below) seems to say that in this scenario the connection
should not be closed (unless the intention is to be very strict about
how data should be sent).
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
"If an origin server receives a request that does not include an
Expect request-header field with the "100-continue" expectation,
the request includes a request body, and the server responds
with a final status code before reading the entire request body
from the transport connection, then the server SHOULD NOT close
the transport connection until it has read the entire request,
or until the client closes the connection. Otherwise, the client
might not reliably receive the response message. However, this
requirement is not be construed as preventing a server from
defending itself against denial-of-service attacks, or from
badly broken client implementations."
Modifications:
Change HttpObjectAggregator.handleOversizedMessage to close the
connection only if keep-alive is off and Expect: 100-continue is
missing. Update test to reflect the change.
Result:
Broken pipe and connection reset errors on the client are avoided when
oversized data is sent.
Related: #2983
Motivation:
It is a well known idiom to write an empty buffer and add a listener to
its future to close a channel when the last byte has been written out:
ChannelFuture f = channel.writeAndFlush(Unpooled.EMPTY_BUFFER);
f.addListener(ChannelFutureListener.CLOSE);
When HttpObjectEncoder is in the pipeline, this still works, but it
silently raises an IllegalStateException, because HttpObjectEncoder does
not allow writing a ByteBuf when it is expecting an HttpMessage.
Modifications:
- Handle an empty ByteBuf specially in HttpObjectEncoder, so that
writing an empty buffer does not fail even if the pipeline contains an
HttpObjectEncoder
- Add a test
Result:
An exception is not triggered anymore by HttpObjectEncoder, when a user
attempts to write an empty buffer.
Motivation:
Currently, when the CorsHandler processes a preflight request, or
responds with a 403 Forbidden using the short-circuit option, the
HttpRequest is not released, which leads to a buffer leak.
Modifications:
Release the HttpRequest when done processing a preflight request or
responding with a 403.
Result:
Using the CorsHandler will not cause buffer leaks.
Related: #2952
Motivation:
META-INF/io.netty.versions.properties in netty-all-*.jar does not
contain the version information about the netty-transport-epoll module.
Modifications:
Fix a bug in the regular expression in pom.xml, so that the artifacts
with a classifier are also included in the version properties file.
Result:
The version information of all modules is included in the version
properties file, and Version.identify() no longer misses
netty-transport-epoll.
Motivation:
Issue #3004 shows that the "=" character was not supported as it should
be by the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Add 2 methods to StringUtil:
- split with a maxParts argument: splits a String into at most maxParts parts (to prevent multiple '='
characters from causing extra, unneeded splits)
- substringAfter: returns the part of a String after the delimiter (since the first part is not
needed)
Use those methods in HttpPostRequestDecoder.
Change the HttpPostRequestDecoderTest to check using a boundary
beginning with "=".
Result:
The decoder is more robust and the reported issue is fixed.
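Illustrative usage of the two new StringUtil helpers (signatures assumed from the description above):
String[] parts = StringUtil.split("boundary==--=XYZ", '=', 2); // ["boundary", "=--=XYZ"]
String value = StringUtil.substringAfter("name=value", '=');   // "value"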
Motivation:
Using a needless local copy of keys.length.
Modifications:
Using keys.length explicitly everywhere.
Result:
Slight performance improvement of hashIndex.
Motivation:
The hashIndex method currently uses a conditional to handle negative
keys. This could be done without a conditional to slightly improve
performance.
Modifications:
Modified hashIndex() to avoid using a conditional.
Result:
Slight performance improvement to hashIndex().
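One branch-free formulation of the idea (sketch; the actual implementation may differ):
static int hashIndex(int key, int length) {
    // (key % length) can be negative for negative keys; adding length and taking
    // the remainder again maps the result into [0, length) without a branch.
    return (key % length + length) % length;
}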
Motivation:
IntObjectHashMap throws an exception when using negative values for
keys.
Modifications:
Changed hashIndex() to normalize the index if the mod operation returns
a negative number.
Result:
IntObjectHashMap supports negative key values.
Motivation:
Make the code much more readable.
Modifications:
- Added decompression states.
- Refactored the decode(...) method to use these states.
Result:
A much more readable decoder that looks like the other compression decoders.
Related: #2034
Motivation:
Some users want to mock Bootstrap (or ServerBootstrap), and thus they
should not be final but be fully overridable and extensible.
Modifications:
Remove finals wherever possible
Result:
@daschl is happy.