Motivation:
Although the new IntObjectMap.values(), which returns a Collection, is
useful, the removed values(Class<V>), which returns an array, is also
useful. Restoring it also preserves backward compatibility.
Modifications:
- Add IntObjectMap.values(Class<V>) back
- Miscellaneous improvements:
  - Cache the collection returned by IntObjectHashMap.values()
  - Fix inspector warnings
- Update IntObjectHashMapTest to test both values() variants
Result:
- Backward compatibility
- Potential performance improvement of values()
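For example, both accessors can now be used like this (a minimal sketch; the map contents are illustrative):
IntObjectMap<String> map = new IntObjectHashMap<String>();
map.put(1, "one");
map.put(2, "two");
String[] asArray = map.values(String.class);    // restored array-returning accessor
Collection<String> asCollection = map.values(); // collection view, now cached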
Motivation:
The DefaultHttp2InboundFlowController uses processedBytes to determine
when to send the WINDOW_UPDATE, but uses window to determine the delta
to send in the request. This is incorrect since we shouldn't be
requesting bytes that haven't been processed.
Modifications:
Changed DefaultHttp2InboundFlowController to use processedBytes in the
calculation of the delta to send in the WINDOW_UPDATE request.
Result:
Inbound flow control only asks for bytes that have been processed in
WINDOW_UPDATE.
Motivation:
AbstractChannel.close() completes the closeFuture and then proceeds to
schedule a task to fireChannelInactive and unregister the channel. If
the user shuts down the EventLoop as a result of the closeFuture
completing this can result in a RejectedExecutionException when the
unregister task is scheduled. This is mostly noise, but it does
somewhat pollute the logs.
Modifications:
Changed AbstractChannel.close() to not complete the closeFuture and the
close promise until after the unregistration task has been scheduled.
Result:
Noisy error log should no longer be an issue.
Related: #3122
Motivation:
The HttpStaticFileServer example writes the LastHttpContent twice at the
end of the transfer. HttpChunkedInput already produces a
LastHttpContent at the end of the stream, so there's no reason to write
another.
Modifications:
Do not write LastHttpContent in HttpStaticFileServerHandler when
HttpChunkedInput is used to transfer a file.
Result:
HttpStaticFileServer no longer violates the protocol.
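For reference, the corrected write looks roughly like this (a sketch; variable names are illustrative):
// HttpChunkedInput emits LastHttpContent when the underlying input ends,
// so no extra LastHttpContent write is needed afterwards.
ctx.write(new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)), ctx.newProgressivePromise());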
Related: #3157
Motivation:
It should be convenient to have an easy way to classify an
HttpResponseStatus based on the first digit of the HTTP status code, as
defined in RFC 2616:
- Informational 1xx
- Success 2xx
- Redirection 3xx
- Client Error 4xx
- Server Error 5xx
Modifications:
- Add HttpStatusClass
- Add HttpResponseStatus.codeClass() that returns the class of the HTTP
status code
- Remove HttpResponseStatus.isInformational()
Result:
It's easier to determine the class of an HTTP status
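For example (a sketch; the CLIENT_ERROR constant name is assumed from the classes listed above):
HttpResponseStatus status = HttpResponseStatus.NOT_FOUND;
if (status.codeClass() == HttpStatusClass.CLIENT_ERROR) {
    // handle 4xx responses
}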
Motivation:
When running the examples using the provided run-examples.sh script the
log level is 'info'. It can be handy to be able to configure a
different level, for example 'debug', while learning and trying out the
examples.
Modifications:
Added a dependency on logback-classic to the examples pom.xml, and also
added a logback configuration file. The log level can be configured by
setting the 'logLevel' system property, and if that property is not set
the default will be 'info' level.
The run-examples.sh script was updated to show an example of using the
system property to set the log level to 'debug'.
Result:
It is now possible to turn on debug logging by setting a system property
on the command line.
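For example (the example name here is only illustrative):
./run-examples.sh -DlogLevel=debug http-helloworld-server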
Motivation:
The HTTP/2 compressor does not release the input buffer when compression is done. This results in buffer leaks.
Modifications:
- Release the buffer in the HTTP/2 compressor
- Update tests to reflect the correct state
Result:
1 less buffer leak.
Motivation:
The interface for HTTP/2 onDataRead states that buffers will be released by the codec. The decompressor and compressor methods are not releasing buffers created during the decompression/compression process.
Modifications:
After the onDataRead callback, the decompressor and compressor classes will release the data buffer.
Result:
HTTP/2 compressor/decompressors are consistent with onDataRead interface assumptions.
Motivation:
Some of the comments in HTTP/2 Frame Listener interface are misleading.
Modifications:
Clarify comments in Http2FrameListener.
Result:
Http2FrameListener onDataRead comments are clarified.
Motivation:
When DefaultHttp2FrameReader has read a settings frame, the settings
will be passed along the pipeline. This allows a client to hold off
sending data until it has received a settings frame. But a server will
always have received a settings frame, so forwarding the settings is
less useful for it. This also causes a debug message
to be logged on the server side if there is no channel handler to handle
the settings:
[nioEventLoopGroup-1-1] DEBUG io.netty.channel.DefaultChannelPipeline -
Discarded inbound message {INITIAL_WINDOW_SIZE=131072,
MAX_FRAME_SIZE=16384} that reached at the tail of the pipeline. Please
check your pipeline configuration.
Modifications:
Added a builder for the InboundHttp2ToHttpAdapter and
InboundHttp2PriorityAdapter and a new parameter named 'propagateSettings'
to their constructors.
Result:
It is now possible to control whether settings should be passed along
the pipeline or not.
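A rough sketch of the builder usage (the builder method names are assumed from the parameters above):
InboundHttp2ToHttpAdapter listener = new InboundHttp2ToHttpAdapter.Builder(connection)
        .maxContentLength(maxContentLength)
        .propagateSettings(false) // drop settings instead of forwarding them down the pipeline
        .build();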
Motivation:
When authenticating with a proxy server, HttpProxyHandler should use the
'Proxy-Authorization' header rather than the 'Authorization' header.
Modifications:
- Use 'Proxy-Authorization' header
Result:
It is now possible to connect to an HTTP proxy server that requires authentication.
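With basic authentication the request to the proxy now carries, for example (credentials illustrative):
Proxy-Authorization: Basic dGVzdDp0ZXN0   (base64 of "test:test")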
Motivation:
NameResolverGroup uses the EventExecutor specified in getResolver() as
the key of its internal map. The EventExecutor is often a wrapper, such
as PausableChannelEventExecutor, around the same underlying executor, so
multiple NameResolvers end up being instantiated where only one should
be.
Modifications:
Unwrap the EventExecutor before using it as the key of the internal map
Result:
Memory leak is gone.
Motivation:
I found myself writing AsciiString constants in my code for
response statuses and thought that perhaps it might be nice to have
them defined by Netty instead.
Modifications:
Added codeAsText() to HttpResponseStatus, which returns the status code
as an AsciiString.
In addition, added the 421 Misdirected Request response code from
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#section-9.1.2
This response code was renamed in draft 15:
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#appendix-A.1
But the code itself was not changed, and I thought using the latest would
be better.
Result:
It is now possible to specify a status like this:
new DefaultHttp2Headers().status(HttpResponseStatus.OK.codeAsText());
Related: #3113
Motivation:
Because ChannelHandlerContext is most often instantiated by an I/O
thread, we can rely on thread-local variables to keep the skipFlags
cache, which should be faster than a partitioned synchronized map.
Modifications:
Use FastThreadLocal for skipFlagsCache instead of partitioned
synchronized map.
Result:
Less contention
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid recompiling it every time part of the authority needs to be replaced.
- Improved usage of StringBuilder.
Result:
Performance improvements.
Motivation:
The HTTP/2 codec currently does not expose the boundaries of the initial settings. This could be useful for applications to define their own initial settings.
Modifications:
Add new static final variables to Http2CodecUtil and make Http2Settings use these in the bounds checks.
Result:
Applications can use the max (or min) variables to initialize their settings.
Motivation:
When the inbound flow controller recognizes that the flow control window
has been violated on a stream (not connection-wide), it throws a
connection error.
Modifications:
Changed the DefaultHttp2InboundFlowController to properly throw
connection error if the connection window is violated and stream error
if a stream window is violated.
Result:
Inbound flow control throws the correct error for window violations.
Motivation:
The current name of the class which converts from HTTP objects to HTTP/2 frames contains the text Http2ToHttp. This is misleading and the opposite of what is actually being done.
Modifications:
Rename this class name to be HttpToHttp2.
Result:
Class names that more clearly identify what they do.
Motivation:
The current decompression frame listener opts out of application-level flow control. The application should still be able to exercise flow control even when decompression is in use.
Modifications:
- DecompressorFrameListener will track the number of compressed bytes, decompressed bytes, and bytes processed by the listener. A ratio will be used to translate these values into application-level flow control amounts.
Result:
HTTP/2 decompressor delegates the application level flow control to the listener processing the decompressed data.
Motivation:
Currently, we only test our ZlibEncoders against our ZlibDecoders. It is
convenient to write such tests, but it does not necessarily guarantee
their correctness. For example, both encoder and decoder might be faulty
even if the tests pass.
Modifications:
Add another test that makes sure that our GZIP encoder generates the
GZIP trailer, using the fact that GZIPInputStream raises an EOFException
when the GZIP trailer is missing.
Result:
More coverage for GZIP compression
Motivation:
Currently when an exception occurs during a listener.onDataRead
callback, we return all bytes as processed. However, the listener may
choose to return bytes via the InboundFlowState object rather than
returning the integer. If the listener returns a few bytes and then
throws, we will attempt to return too many bytes.
Modifications:
Added InboundFlowState.unProcessedBytes() to indicate how many
unprocessed bytes are outstanding.
Updated DefaultHttp2ConnectionDecoder to compare the unprocessed bytes
before and after the listener.onDataRead callback when an exception was
encountered. If there is a difference, it is subtracted from the total
processed bytes to be returned to the flow controller.
Result:
HTTP/2 data frame delivery properly accounts for processed bytes through
an exception.
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Motivation:
MQTT 3.1.1 became an OASIS Standard on 13 Nov 2014.
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.html
MQTT 3.1.1 is a minor update to 3.1, but the previous codec-mqtt supported only MQTT 3.1.
Modifications:
- Accept the protocol name `MQTT` in addition to the previous `MQIsdp` in the `CONNECT` variable header.
- Update client identifier validation to cover both 3.1 and 3.1.1.
- Add the new `FAILURE (0x80)` return code for `SUBACK`.
- Add a test for encoding/decoding a 3.1.1 `CONNECT`.
Result:
MqttEncoder/MqttDecoder can encode/decode both 3.1 and 3.1.1 frames.
Motivation:
The current priority algorithm uses two different mechanisms to iterate the priority tree and to send the results of the allocation. It also works in two steps: the priority tree is traversed and allocation amounts are calculated, and then all active streams are traversed to write for any streams that may or may not have been allocated bytes.
Modifications:
- DefaultHttp2OutboundFlowController will allocate and send (when possible) in the same looping structure.
- The recursive method will send only for the children instead of itself and its children which should simplify the recursion.
Result:
A simpler recursive algorithm in which the tree iteration determines which streams need to write, with less iteration after the recursive calls complete.
Motivation:
Currently the DefaultHttp2InboundFlowController only supports the
ability to turn on and off "window maintenance" for a stream. This is
insufficient for true application-level flow control that may only want
to return a few bytes to flow control at a time.
Modifications:
Removing "window maintenance" interface from
DefaultHttp2InboundFlowController in favor of the new interface.
Created the Http2InboundFlowState interface which extends Http2FlowState
to add the ability to return bytes for a specific stream.
Changed the onDataRead method to return an integer number of bytes that
will be immediately returned to flow control, to support use cases that
want to opt out of application-level inbound flow control.
Updated DefaultHttp2InboundFlowController to use 2 windows per stream.
The first, "window", is the actual flow control window that is
decremented as soon as data is received. The second "processedWindow"
is a delayed view of "window" that is only decremented after the
application returns the processed bytes. It is processedWindow that is
used when determining when to send a WINDOW_UPDATE to restore part of
the inbound flow control window for the stream/connection.
Result:
The HTTP/2 inbound flow control interfaces support application-level
flow control.
Motivation:
Too many warnings from IntelliJ IDEA code inspector, PMD and FindBugs.
Modifications:
- Removed unnecessary casts, braces, modifiers, imports, throws on methods, etc.
- Added static modifiers where it is possible.
- Fixed incorrect links in javadoc.
Result:
Better code.
Motivation:
The outbound flow controller logic does not properly reset the allocated
bytes between successive invocations of the priority algorithm.
Modifications:
Updated the priority algorithm to reset the allocated bytes for each
stream.
Result:
Each call to the priority algorithm now starts with zero allocated bytes
for each stream.
Motivation:
RFC 2616, 4.3 Message Body states that:
All 1xx (informational), 204 (no content), and 304 (not modified) responses MUST NOT include a
message-body. All other responses do include a message-body, although it MAY be of zero length.
Modifications:
HttpContentEncoder was previously modified to cater for HTTP 100 responses. This check is enhanced to
include HTTP 204 and 304 responses.
Result:
Empty response bodies will not be modified to include the compression footer. This footer interfered with Chrome's
response parsing, leading to "hanging" requests.
Motivation:
HttpObjectDecoder extended ReplayingDecoder, which is slightly slower than ByteToMessageDecoder.
Modifications:
- Changed the superclass of HttpObjectDecoder from ReplayingDecoder to ByteToMessageDecoder.
- Rewrote the decode() method of HttpObjectDecoder to use a proper state machine.
- Changed the private methods HeaderParser.parse(ByteBuf), readHeaders(ByteBuf), readTrailingHeaders(ByteBuf) and skipControlCharacters(ByteBuf) to take the number of available bytes into account.
- Made HeaderParser and LineParser static inner classes.
- Replaced the unsafe actualReadableBytes() with buffer.readableBytes().
Result:
Improved performance of HttpObjectDecoder by approximately 177%.
Motivation:
PausableChannelEventExecutor.shutdown() used to call unwrap().terminationFuture() instead of unwrap().shutdown().
This error was reported by @xfrag in a comment to a commit message [1]. Tyvm.
Modifications:
PausableChannelEventExecutor.shutdown() now correctly invokes unwrap().shutdown() instead of unwrap().terminationFuture().
Result:
Correct code.
[1] 220660e351 (commitcomment-8489643)
Motivation:
NetUtil.isValidIpV6Address() handles the interface name in IPv6 address
incorrectly. For example, it returns false for the following addresses:
- ::1%lo
- ::1%_%_in_name_
Modifications:
- Strip the square brackets before validation for simplicity
- Strip the part after the percent sign completely before validation for
simplicity
- Simplify and reformat NetUtilTest
Result:
- The interface names in IPv6 addresses are handled correctly.
- NetUtilTest is cleaner
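For example, both of the addresses above now pass validation:
NetUtil.isValidIpV6Address("::1%lo")            // true
NetUtil.isValidIpV6Address("::1%_%_in_name_")   // true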
Motivation:
I came across an issue when I was adding/setting headers and mistakenly
used an upper case header name. When using the http2 example that ships
with Netty this was not an issue. But when working with a browser that
supports http2 (in my case Firefox Nightly), I'm guessing
that it interprets the response as invalid in accordance with the
specification
https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-8.1.2
"However, header field names MUST be converted to lowercase prior
to their encoding in HTTP/2. A request or response containing
uppercase header field names MUST be treated as malformed"
This PR makes converting to lowercase the default.
Modifications:
Added a no-args constructor that defaults to forcing the key/name to
lowercase, and provided a second constructor to override this behaviour
if desired.
Result:
It is now possible to specify a header like this:
Http2Headers headers = new DefaultHttp2Headers(true)
.status(new AsciiString("200"))
.set(new AsciiString("Testing-Uppercase"), new AsciiString("some value"));
And the header written to the client will then become:
testing-uppercase:"some value"
Motivation:
When running the http2 example no SslProvider is specified when calling
SslContext.newServerContext. This may lead to the provider being
determined depending on the availabilty of OpenSsl. But as far as I can
tell the OpenSslServerContext does not support APLN, which is the
protocol configured in the example.
This produces the following error when running the example:
Exception in thread "main" java.lang.UnsupportedOperationException:
OpenSSL provider does not support ALPN protocol
io.netty.handler.ssl.OpenSslServerContext.toNegotiator(OpenSslServerContext.java:391)
io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:117)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:238)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:184)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:124)
io.netty.example.http2.server.Http2Server.main(Http2Server.java:51)
Modifications:
Force SslProvider.JDK when creating the SslContext since the
example is using ALPN.
Result:
There is no longer an error if OpenSsl is supported on the platform in
use.
Motivation:
There are a few very minor issues in the Http2 examples' javadoc. Since
these javadocs are probably not published, this change is very much
optional to include.
Modifications:
Updated the @see tags according to [1] to avoid warnings when generating
javadocs.
Result:
No warnings when generating javadocs.
[1] http://docs.oracle.com/javase/1.5.0/docs/tooldocs/windows/javadoc.html#@see
Motivation:
ChannelPromiseAggregator and ChannelPromiseNotifiers only allow
consumers to work with Channels as the result type. Generic versions
of these classes allow consumers to aggregate or broadcast the results
of an asynchronous execution with other result types.
Modifications:
Add PromiseAggregator and PromiseNotifier. Add unit tests for both.
Remove code in ChannelPromiseAggregator and ChannelPromiseNotifier and
modify them to extend the new base classes.
Result:
Consumers can now aggregate or broadcast the results of an asynchronous
execution with result types other than Channel.
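A rough sketch of the notifier with a non-Channel result type (assuming an EventExecutor 'executor' and that the generic shape mirrors ChannelPromiseNotifier):
Promise<String> p1 = executor.newPromise();
Promise<String> p2 = executor.newPromise();
Promise<String> source = executor.newPromise();
source.addListener(new PromiseNotifier<String, Future<String>>(p1, p2));
source.setSuccess("done"); // p1 and p2 now complete with "done" as well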
Motivation:
The HTTP/2 codec currently does not provide an interface to compress data. There is an analogous case to this in the HTTP codec and it is expected to be used commonly enough that it will be beneficial to have the feature in the http2-codec.
Modifications:
- Add a class which extends DefaultHttp2ConnectionEncoder and provides hooks to an EmbeddedChannel
- Add a compressor element to the Http2Stream interface
- Update unit tests to utilize the new feature
Result:
HTTP/2 codec supports data compression.
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent being provided as input. This is useful in situations where the client is unable to determine if the current content chunk is the last content chunk (i.e. a proxy forwarding content when the transfer encoding is chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent objects
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the response.
Motivation:
The current logic in DefaultHttp2OutboundFlowController for handling the
case of a stream shutdown results in an Http2Exception (not an
Http2StreamException). This results in a GO_AWAY being sent for what
really could just be a stream-specific error.
Modifications:
Modified DefaultHttp2OutboundFlowController to set a stream exception
rather than a connection-wide exception. Also using the error code of
INTERNAL_ERROR rather than STREAM_CLOSED, since it's more appropriate
for this case.
Result:
A GO_AWAY is no longer triggered when a stream closes prematurely.
Motivation:
Currently, due to flow control, HEADERS frames can be written
out of order with respect to DATA frames.
Modifications:
When data is written, we preserve the future as the lastWriteFuture in
the outbound flow controller. The encoder then uses the lastWriteFuture
such that headers are only written after the lastWriteFuture completes.
Result:
HEADERS/DATA write order is correctly preserved.
Motivation:
The HTTP/2 specification indicates that an error should be thrown when non-ASCII characters are detected while converting from HTTP/2 to HTTP/1.x.
Modifications:
- The ASCII validation was already done, but the exception it raises is now properly converted to a RST_STREAM error.
Result:
- If the HTTP/2 to HTTP/1.x translation layer is in use and a non-ASCII header is received, a RST_STREAM frame is now sent in response.
Motivation:
The DefaultOutboundFlowController was attempting to write frames with a negative length. This resulted in attempting to allocate a buffer of negative size and thus an exception.
Modifications:
- Don't allow DefaultOutboundFlowController to write negative length buffers.
Result:
No more negative-length writes, which previously resulted in IllegalArgumentExceptions.
Motivation:
CollectionUtils has only one method and it is used only in DefaultHeaders.
Modification:
Move CollectionUtils.equals() to DefaultHeaders and make it private
Result:
One less class to expose in our public API
Motivation:
x-gzip and x-deflate are not standard header values, and thus should be
removed from HttpHeaderValues, which is meant to provide the standard
values only.
Modifications:
- Remove X_DEFLATE and X_GZIP from HttpHeaderValues
- Move X_DEFLATE and X_GZIP to HttpContentDecompressor and
DelegatingDecompressorFrameListener
- We have slight code duplication here, but it does less harm than
having non-standard constants.
Result:
HttpHeaderValues contains only standard header values.
Related: 4ce994dd4f
Motivation:
In 4.1, we were not able to change the type of the HTTP header name and
value constants from String to AsciiString due to backward compatibility
reasons.
Instead of breaking backward compatibility in 4.1, we introduced new
types called HttpHeaderNames and HttpHeaderValues which provide the
AsciiString version of the constants, and then deprecated
HttpHeaders.Names/Values.
We should make the same changes here while actually deleting the
deprecated classes.
Modifications:
- Remove HttpHeaders.Names/Values and RtspHeaders
- Add HttpHeaderNames/Values and RtspHeaderNames/Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
A user who upgraded from 4.0 to 4.1 first and removed the references to
the deprecated classes and methods can easily upgrade from 4.1 to 5.0.
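For example, given an HttpResponse 'response' (sketch):
response.headers().set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN);
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);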
Motivation:
When ALPN/NPN is disabled, a user has to instantiate a new
ApplicationProtocolConfig with meaningless parameters.
Modifications:
- Add ApplicationProtocolConfig.DISABLED, the singleton instance
- Reject the constructor calls with Protocol.NONE, which doesn't make
much sense because a user should use DISABLED instead.
Result:
More user-friendly API when ALPN/NPN is not needed by a user.
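For example (sketch):
ApplicationProtocolConfig apn = ApplicationProtocolConfig.DISABLED; // no dummy parameters needed when ALPN/NPN is off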