Motivation:
We deviate from the AbstractChannel implementation on deregistration by
failing the provided promise if the channel is already deregistered. In
contrast, AbstractChannel will always set the promise to successfully
done.
Modification:
Change Http2MultiplexCodec.DefaultHttp2StreamChannel.Http2ChannelUnsafe to
always mark the promise provided to deregister as done, as is the
case in AbstractChannel.
Motivation:
There is an inconsistency between the order of events in the
StreamChannel implementation in Http2MultiplexCodec and other Channel
implementations that extend AbstractChannel where channelInactive and
channelUnregistered events are not performed 'later'. This can cause an
unexpected order of events for ChannelHandler implementations that call
Channel.close() in response to some event.
Modification:
The Http2MultiplexCodec.DefaultHttp2StreamChannel.Http2ChannelUnsafe was
modified to bounce the deregistration and channelInactive events through
the parent channel's EventLoop.
Result:
Stream events are now in the proper order.
Fixes #8018.
Motivation:
Http2MultiplexCodec doesn't currently have an API for using the response
of a h2c upgrade request.
Modifications:
Add a new API to the Http2MultiplexCodecBuilder which allows for setting
an upgrade handler and wire it into the Http2MultiplexCodec
implementation.
Result:
When using the Http2MultiplexCodec with h2c upgrades the upgrade handler
will get added to the Http2StreamChannel which represents the
half-closed (local) response of stream 1. It is then up to the user to
manage the transition from the IO channel pipeline configuration
necessary for making the h2c upgrade request to a form where it can read
the response from the new stream channel.
Fixes #7947.
Motivation:
The `ByteBuffer emptyPingBuf()` method of Http2CodecUtils has been dead
code since DefaultHttp2PingFrame switched from using a ByteBuf to represent
the 8 octets to a long.
Modifications:
Remove the method and the unused static ByteBuf.
Result:
Less dead code.
Fixes #8002
Motivation:
This is a followup for #7860. In the fix for #7860 we only partly fixed the problem, as Http2UnknownFrame did not correctly extend Http2StreamFrame and so only worked when using the Http2FrameCodec. It needs to extend Http2StreamFrame, as otherwise Http2MultiplexCodec will refuse to handle it correctly.
Modifications:
- Let Http2UnknownFrame extend Http2StreamFrame
- Add unit tests for writing and reading Http2UnknownFrame instances when the Http2MultiplexCodec is used.
Result:
Fixes https://github.com/netty/netty/issues/7969.
Motivation:
When a sender sends too large of headers it should not unnecessarily
kill the connection, as killing the connection is a heavy-handed
solution while SETTINGS_MAX_HEADER_LIST_SIZE is advisory and may be
ignored.
The maxHeaderListSizeGoAway limit in HpackDecoder is unnecessary because
any headers causing the list to exceed the max size can simply be
thrown away. In addition, DefaultHttp2FrameReader.HeadersBlockBuilder
limits the entire block to maxHeaderListSizeGoAway. Thus individual
literals are limited to maxHeaderListSizeGoAway.
(Technically, literals are limited to 1.6x maxHeaderListSizeGoAway,
since the canonical Huffman code has a maximum compression ratio of
.625. However, the "unnecessary" limit in HpackDecoder was also being
applied to compressed sizes.)
Modifications:
Remove maxHeaderListSizeGoAway checking in HpackDecoder and instead
eagerly throw away any headers causing the list to exceed
maxHeaderListSize.
Result:
Fewer large header cases will trigger connection-killing.
DefaultHttp2FrameReader.HeadersBlockBuilder will still kill the
connection when maxHeaderListSizeGoAway is exceeded, however.
Fixes #7887
Motivation:
Integer autoboxing in this class (and possibly also the varargs arrays)
showed non-negligible CPU and garbage contribution when profiling a gRPC
service. grpc-java currently hardcodes use of Http2FrameLogger, set at
DEBUG level.
Modifications:
Wrap offending log statements in conditional blocks.
Result:
Garbage won't be produced by Http2FrameLogger when set to a disabled
logging level.
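For illustration, a minimal sketch of the guard pattern, assuming field and method names similar to (but not necessarily identical with) the actual Http2FrameLogger code:

```java
// Only build the log arguments (autoboxed ints, the varargs array, hex dumps) when
// the configured level is actually enabled.
void logData(Direction direction, ChannelHandlerContext ctx, int streamId,
             ByteBuf data, int padding, boolean endStream) {
    if (logger.isEnabled(level)) {
        logger.log(level, "{} {} DATA: streamId={} padding={} endStream={} length={}",
                ctx.channel(), direction.name(), streamId, padding, endStream,
                data.readableBytes());
    }
}
```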
Motivation:
Streams can be deregistered so we can't assume their existence in the stream map.
Modifications:
Add a null-check in case a stream has been deregistered.
Result:
Fixes #7898.
Motivation:
We incorrectly called frame.release() in onHttp2GoAwayFrame which could lead to IllegalReferenceCountExceptions. The call of release() is inappropriate because the fireChannelRead() in onHttp2Frame() will handle it.
Modifications:
- Do not call frame.release()
- Add a unit test
Result:
Fixes https://github.com/netty/netty/issues/7892.
Motivation:
It is possible to create streams in the half-closed state where the
stream state doesn't reflect that the request headers have been sent by
the client or the server hasn't received the request headers. This
state isn't possible in the H2 spec as a half closed stream must have
either received a full request or have received the headers from a
pushed stream. In the current implementation, this can cause the stream
created as part of an h2c upgrade request to be in this invalid state
and result in the omission of RST frames as the client doesn't believe
it has sent the request to begin with.
Modification:
The `DefaultHttp2Connection.activate` method checks the state and
modifies the status of the request headers as appropriate.
Result:
Fixes #7847.
Motivation:
When connecting to an HTTP/2 server that did not set any value for the
SETTINGS_MAX_HEADER_LIST_SIZE in the settings frame, the netty client was
imposing an arbitrary maximum header list size of 8kB. There should be no need
for the client to enforce such a limit if the server has not specified any
limit. This caused an issue for a grpc-java client that needed to send a large
header to a server via an Envoy proxy server. The error condition is
demonstrated here: https://github.com/JLofgren/demo-grpc-java-bug-4284
Fixes grpc-java issue #4284 - https://github.com/grpc/grpc-java/issues/4284
and netty issue #7825 - https://github.com/netty/netty/issues/7825
Modifications:
In HpackEncoder use MAX_HEADER_LIST_SIZE as default maxHeader list size.
Result:
HpackEncoder will only enforce a max header list size if the server has
specified a limit in its settings frame.
Motivation:
We should allow writing Http2UnknownFrame instances to allow custom extensions.
Modifications:
Allow writing Http2UnknownFrame
Add unit test
Result:
Fixes https://github.com/netty/netty/issues/7860.
Motivation:
There is a race between both flushing the upgrade response and receiving
more data before the flush ChannelPromise can fire and reshape the
pipeline. Since we have already committed to an upgrade by writing the
upgrade response, we need to be immediately prepared for handling the
next protocol.
Modifications:
The pipeline reshaping logic in HttpServerUpgradeHandler has been moved
out of the ChannelFutureListener attached to the write of the upgrade
response and happens immediately after the writeAndFlush call, but
before the method returns.
Result:
The pipeline is no longer subject to receiving more data before the
pipeline has been reformed.
Motivation:
We did not correctly set the stream id in the headers of HttpMessage when converting an Http2HeadersFrame. This is based on https://github.com/netty/netty/pull/7778 so thanks to @jprante.
Modifications:
- Correctly set the id when possible in the header.
- Add test case
Result:
Correctly include stream id.
Motivation:
Allow the observation of SETTINGS frames by other handlers in the pipeline. For my particular use case this allows me to observe the value of MAX_CONCURRENT_STREAMS for a ChannelPool abstraction that supports HTTP/2 multiplexing. Besides this, also forward GOAWAY frames.
Modification:
Always forward SETTINGS and GOAWAY frames
Result:
Settings / Goaway can now be observed in the parent channel. Previously it was not possible (to my knowledge) to capture the settings when using Http2MultiplexCodec.
Motivation:
At the moment we use a ByteBuf as the payload for a http2 frame. This complicates life-time management a lot with no real gain and also may produce more objects than needed. We should just use a long, as it is required to be 8 bytes anyway.
Modifications:
Use long for ping payloads.
Result:
Fixes [#7629].
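For illustration, a usage sketch assuming the long-based API described above:

```java
// No ByteBuf to retain or release; the 8 octets travel as a primitive long.
Http2PingFrame ping = new DefaultHttp2PingFrame(0x0102030405060708L);
long payload = ping.content();
ctx.writeAndFlush(ping);
```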
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using Java 9+, while Netty still works. That said, users may feel uncomfortable with these warnings, so we should not attempt this by default when using Java 9+.
Modifications:
Add the io.netty.tryReflectionSetAccessible system property which controls if setAccessible(...) will be used. By default it will be set to false when using Java 9+.
Result:
Fixes [#7254].
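For illustration, how the property can be used to opt back in on Java 9+ (it must be set before Netty's classes are initialized):

```java
// Either pass it on the command line:
//   java -Dio.netty.tryReflectionSetAccessible=true -jar app.jar
// or set it programmatically, very early at startup:
System.setProperty("io.netty.tryReflectionSetAccessible", "true");
```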
Motivation:
According to the spec:
All pseudo-header fields MUST appear in the header block before regular
header fields. Any request or response that contains a pseudo-header
field that appears in a header block after
a regular header field MUST be treated as malformed (Section 8.1.2.6).
Pseudo-header fields are only valid in the context in which they are defined.
Pseudo-header fields defined for requests MUST NOT appear in responses;
pseudo-header fields defined for responses MUST NOT appear in requests.
Pseudo-header fields MUST NOT appear in trailers.
Endpoints MUST treat a request or response that contains undefined or
invalid pseudo-header fields as malformed (Section 8.1.2.6).
Clients MUST NOT accept a malformed response. Note that these requirements
are intended to protect against several types of common attacks against HTTP;
they are deliberately strict because being permissive can expose
implementations to these vulnerabilities.
Modifications:
- Introduce validation in HPackDecoder
Result:
- Requests with unknown pseudo-header fields are rejected
- Requests containing response-specific pseudo-headers are rejected
- Requests where a pseudo-header appears after a regular header are rejected
- h2spec 8.1.2.1 passes
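For illustration, a minimal sketch of the ordering rule the decoder now enforces (this is not the actual HpackDecoder code, and the error type used here is an assumption):

```java
// Pseudo-header fields (names starting with ':') must all precede regular fields.
static void validatePseudoHeaderOrdering(Http2Headers headers) throws Http2Exception {
    boolean regularSeen = false;
    for (Map.Entry<CharSequence, CharSequence> entry : headers) {
        boolean pseudo = entry.getKey().length() > 0 && entry.getKey().charAt(0) == ':';
        if (pseudo && regularSeen) {
            throw Http2Exception.connectionError(Http2Error.PROTOCOL_ERROR,
                    "Pseudo-header field found after regular header field");
        }
        regularSeen |= !pseudo;
    }
}
```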
Motivation:
We should convert Http2Exceptions that are produced because of STREAM_CLOSED to ClosedChannelException when handing them over to the child channel, to make it more consistent with other transports.
Modifications:
- Check if STREAM_CLOSED is used and if so create a new ClosedChannelException (while preserving the original exception as the cause) and use it in the child channel
- Ensure STREAM_CLOSED is used in DefaultHttp2RemoteFlowController when writes are failed because of a closed stream.
- Add testcase
Result:
More consistent and correct exception usage.
Motivation:
When checking if a value is present, ReadOnlyHttp2Headers always ignores
case for values.
RFC 7540 says: https://tools.ietf.org/html/rfc7540#section-8.1.2
"header field names are strings of ASCII characters that are compared in a case-insensitive fashion"
But there is no such constraint on header values
Modifications:
Updated ReadOnlyHttp2Headers.contains to compare header value in a
case-sensitive way.
Result:
ReadOnlyHttp2Headers compares header names in a case-insensitive way,
values in a case-sensitive way.
Motivation:
If you test a header value providing a String, contains() returns false.
This is due to the implementation inherited from DefaultHeaders using
the JAVA_HASHER.
JAVA_HASHER.equals returns false because a is a String and b an
AsciiString.
Modifications:
DefaultHttp2Headers overrides contains and uses CASE_SENSITIVE_HASHER.
Result:
You can test a header value with any CharSequence implementation.
Motivation:
The completion order of promises in Http2MultiplexChannel#close should be consistent with that of AbstractChannel. Otherwise this may result in Future listeners seeing incorrect channel state.
Modifications:
Add test cases.
Result:
Ensure consistent behavior between Http2MultiplexChannel and AbstractChannel.
Motivation:
Calling DefaultHttp2StreamChannel.Unsafe.close(...) multiple times should not fail.
Modification:
- Correctly handle multiple calls to DefaultHttp2StreamChannel.Unsafe.close(...)
- Complete closePromise and promise that is given to close(...) in the correct order.
- Add unit test
Result:
Fixes [#7628] and [#7641]
Motivation:
When DefaultHttp2ConnectionEncoder processed an outbound operation it sometimes failed to call Http2LifecycleManager.onError(...) if the operation was executed asynchronously.
Modifications:
Make a best effort to update flags but still ensure failures are propagated to Http2LifecycleManager.onError(...) in all cases.
Result:
More consistent handling of errors.
Motivation:
Pseudo headers are checked less frequently than normal headers, so
it is more efficient to check the latter first.
Modifications:
Swap the order of the check, and fix minor formatting
Result:
Possibly more efficient header checks
Motivation:
We can just implement the interfaces directly and so reduce object creation in DefaultHttp2RemoteFlowController.
Modifications:
Directly implement the interfaces.
Result:
Less object creation.
Motivation:
When part of an HTTP/2 stream channel, Http2StreamChannel.isOpen() / isActive() should report false within a call to a ChannelInboundHandler's channelInactive() method.
Modifications:
Fulfill the promise before calling fireChannelInactive().
Result:
Correctly update the state / promise before notifying handlers. Fixes [#7638]
Motivation:
With HTTP1, it's very easy to check if a header is present and has a
given value: you can simply invoke
io.netty.handler.codec.http.HttpHeaders#contains(java.lang.CharSequence, java.lang.CharSequence, boolean)
It is not possible to do the same with HTTP2. You have to get the list
of all headers (returned as String) and then iterate over it invoking
String#equals or String#equalsIgnoreCase
Modifications:
I've added io.netty.handler.codec.http2.Http2Headers#contains and
implemented it in DefaultHttp2Headers, EmptyHttp2Headers and ReadOnlyHttp2Headers.
Result:
You can use AsciiString constants to check if a header is present in a
concise and efficient manner.
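For illustration, a usage sketch of the added API (the header value shown is made up):

```java
Http2Headers headers = new DefaultHttp2Headers()
        .add(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.APPLICATION_JSON);

// the boolean selects case-insensitive comparison of the value
boolean relaxed = headers.contains(HttpHeaderNames.CONTENT_TYPE, "APPLICATION/JSON", true);  // true
boolean strict  = headers.contains(HttpHeaderNames.CONTENT_TYPE, "APPLICATION/JSON", false); // false
```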
Motivation:
Usually when using netty exceptions which happen for outbound operations should not be fired through the pipeline but only the ChannelPromise should be failed.
Modifications:
- Change Http2LifecycleManager.onError(...) to also take a boolean that indicates whether the error was caused by an outbound operation
- Change Http2ConnectionHandler.on*Error(...) methods to also take this boolean
- Change Http2FrameCodec to only fire exceptions through the pipeline if these are not outbound operations related
- Add unit test.
Result:
More consistent error handling when using Http2FrameCodec and Http2MultiplexCodec.
Motivation:
Http2MultiplexCodec swallows Http2PingFrames without releasing the payload, resulting in a memory leak.
Modification:
Send unhandled frames down the pipeline for consumption/disposal by another InboundChannelHandler.
Result:
Fixes #7607.
Motivation:
Usages of HttpResponseStatus may result in more object allocation than necessary due to not looking for cached objects and the AsciiString parsing method not being used because the CharSequence method is used instead.
Modifications:
- HttpResponseDecoder should attempt to get the HttpResponseStatus from cache instead of allocating a new object
- HttpResponseStatus#parseLine(CharSequence) should check if the type is AsciiString and redirect to the AsciiString parsing method which may not require an additional toString call
- HttpResponseStatus#parseLine(AsciiString) can be optimized and may not require object allocation
Result:
Less allocations when dealing with HttpResponseStatus.
Motivation:
When we cancel the flow-controlled writes we create a new StreamException for each write that was enqueued. Creating exceptions is very expensive due to filling in the stack trace.
Modifications:
Only create the StreamException once and reuse it for all the flow-controlled writes (per stream).
Result:
Less expensive to cancel flow-controlled writes.
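For illustration, a sketch of the idea; the variable names are assumptions and this is not the actual flow-controller code:

```java
// Create the exception lazily, once per stream, and reuse it for every queued write.
Http2Exception cause = null;
for (Http2RemoteFlowController.FlowControlled frame : queuedWrites) {
    if (cause == null) {
        cause = Http2Exception.streamError(stream.id(), Http2Error.INTERNAL_ERROR,
                "Stream closed before write could take place");
    }
    frame.error(ctx, cause);
}
```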
Motivation:
Http2FrameCodec increases the initialWindowSize when the user attempts to increase the connection flow control window. The initialWindowSize should only be touched as a result of a SETTINGS frame, and otherwise may result in flow control getting out of sync with our peer.
Modifications:
- Http2FrameCodec shouldn't update the initialWindowSize when a WindowUpdateFrame is written on the connection channel
Result:
More correct WindowUpdate processing.
Motivation:
HpackDecoder works on an entire header block, so we shouldn't encounter
incomplete header fields. If we do, we should treat it as
a decoding error, and according to the specification:
A decoding error in a header block MUST be treated as
a connection error (Section 5.4.1) of type COMPRESSION_ERROR.
Modifications:
* Check final state in HpackDecoder once we've decoded all the data.
Result:
* Throw a connection error if we receive incomplete header fields
* H2spec 4.3 tests all pass
Motivation:
Http2FrameStream#CONNECTION_STREAM is required to identify the
connection stream. However this leads to inconsistent usage from a user
perspective. When a user creates a Http2Frame for a non-connection
stream, the Http2MultiplexCodec automatically sets the stream, and the
user is never exposed to the Http2FrameStream object. However when the
user writes a Http2Frame for a connection stream they are required to
set the Http2FrameStream object. We can remove the Http2FrameStream#CONNECTION_STREAM
and keep the Http2FrameStream object internal, and therefore consistent
between the connection and non-connection use cases.
Modifications:
- Remove Http2FrameStream#CONNECTION_STREAM
- Update Http2FrameCodec to handle Http2Frame#stream() which returns
null
Result:
More consistent usage on http2 parent channel and http2 child channel.
Motivation:
Http2FrameCodecTest#newOutboundStream has a timeout of 1 second and has been observed to timeout on CI servers.
Modifications:
- Increase the timeout to 5 seconds
Result:
Less false positive test failures on CI servers.
Motivation:
HpackDecoder#addHeader takes in the streamId as a parameter but no longer uses it.
Modifications:
- Remove the streamId parameter from HpackDecoder#addHeader
Result:
Less unused parameters in HpackDecoder.
Motivation:
According to the HTTP/2 Spec:
SETTINGS_MAX_HEADER_LIST_SIZE (0x6): This advisory setting informs a
peer of the maximum size of header list that the sender is
prepared to accept, in octets. The value is based on the
uncompressed size of header fields, including the length of the
name and value in octets plus an overhead of 32 octets for each
header field.
We were accounting for the 32 bytes when encoding in HpackEncoder,
but not when decoding in HPackDecoder.
Modifications:
- Add 32 bytes to the header list length for each entry when decoding
with HPackDecoder.
Result:
- We account for the 32 bytes overhead by header entry in HPackDecoder
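For illustration, how a header list size is counted with the overhead applied (the header fields are made up):

```java
// Each header field contributes name length + value length + 32 octets:
//   ":method" (7)     + "GET" (3)          + 32 = 42
//   "user-agent" (10) + "curl/7.58.0" (11) + 32 = 53
// -> 95 octets are compared against SETTINGS_MAX_HEADER_LIST_SIZE
long size = 0;
size += ":method".length() + "GET".length() + 32;
size += "user-agent".length() + "curl/7.58.0".length() + 32;
```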
Motivation:
In our replace(...) methods we always used validation for the newly created headers while the original headers may not use validation at all.
Modifications:
- Only use validation if the original headers used validation as well.
- Ensure we create a copy of the headers in replace(...).
Result:
Fixes [#5226]
Motivation:
Currently the remote flow controller limits the maximum amount of pending data to Integer.MAX_VALUE. The overflow handling is also not very graceful in that it may lead to infinite loops, or otherwise no progress being made.
Modifications:
- StreamByteDistributor and RemoteFlowController should support pending bytes of type long.
Result:
Fixes https://github.com/netty/netty/issues/4283
Motivation:
http/2 counts header sizes somewhat inconsistently. Sometimes, headers
which are substantively less than the header list size will be measured
as longer than the header list size.
Modifications:
Keep better track of the nameLength of a given name, so that we don't
accidentally end up reusing a nameLength.
Result:
More consistent measurement of header list size.
Fixes #7511.
H2C upgrades should be ineligible for flow control
Motivation:
When the h2c upgrade request is too big, the Http2FrameCodec complains
it's too big for flow control reasons, even though it's ineligible for
flow control.
Modifications:
Specially mark upgrade streams and make Http2FrameCodec know not to try
to flow control on those streams.
Result:
Servers won't barf when they receive an upgrade request with a fat
payload.
[Fixes #7280]
Motivation:
Http2ConnectionHandler uses ctx.fireUserEvent to propagate the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent through the pipeline. This will propagate the event to the next inbound handler in the pipeline. If the user extends Http2ConnectionHandler the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent may be missed and initialization dependent upon this event will not be run.
Modifications:
- Http2ConnectionHandler should use userEventTriggered instead of ctx.fireUserEvent
Result:
Classes that extend Http2ConnectionHandler will see the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent user event.
Motivation:
Automatic-Module-Name entry provides a stable JDK9 module name, when Netty is used in modular JDK9 applications. More info: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
When Netty migrates to JDK9 in the future, the entry can be replaced by actual module-info descriptor.
Modification:
The POM-s are configured to put the correct module names to the manifest.
Result:
Fixes #7218.
Motivation:
HttpConversionUtil#toHttp2Headers has special code to filter the TE header name. However this filtering code may result in adding the <TE, TRAILERS> tuple in scenarios that are not appropriate. For example if a value containing 'trailers' is seen it will be added, even though the value may not actually be equal to 'trailers'. Also CSV values are not supported.
Modifications:
- Account for CSV header values
- Account for the value containing 'trailers' but not actually being equal to 'trailers'
Result:
More robust parsing of the TE header.
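For illustration, a simplified sketch of the stricter TE handling (not the exact HttpConversionUtil code):

```java
// Split the possibly comma-separated TE value and keep only the "trailers" token.
CharSequence te = inHeaders.get(HttpHeaderNames.TE);
if (te != null) {
    for (String token : te.toString().split(",")) {
        if (HttpHeaderValues.TRAILERS.contentEqualsIgnoreCase(token.trim())) {
            outHeaders.set(HttpHeaderNames.TE, HttpHeaderValues.TRAILERS);
            break;
        }
    }
}
```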
Motivation:
HttpConversionUtil#toLowercaseMap requires an intermediate List to be allocated. This can be avoided with the recently added value iterator methods.
Modifications:
- Use HttpHeaders#valueCharSequenceIterator instead of getAll
Result:
Less intermediate object allocation and copying.
Motivation:
DefaultHttp2FrameWriter#writeData allocates a DataFrameHeader for each write operation. DataFrameHeader maintains internal state and allocates multiple slices of a buffer which is a maximum of 30 bytes. This 30 byte buffer may not always be necessary and the additional slice operations can utilize retainedSlice to take advantage of pooled objects. We can also save computation and object allocations if there is no padding which is a common case in practice.
Modifications:
- Remove DataFrameHeader
- Add a fast path for padding == 0
Result:
Less object allocation in DefaultHttp2FrameWriter
Motivation:
AbstractByteBuf#readSlice relied upon the bounds checking of the slice operation in order to detect index out of bounds conditions. However the slice bounds checking operation allows for the slice to go beyond the writer index, and this is out of bounds for a read operation.
Modifications:
- AbstractByteBuf#readSlice and AbstractByteBuf#readRetainedSlice should ensure the desired amount of bytes are readable before taking a slice
Result:
No reading of undefined data in AbstractByteBuf#readSlice and AbstractByteBuf#readRetainedSlice.
Motivation:
Netty could handle "connection" or "te" headers more gently when
converting from http/1.1 to http/2 headers. Http/2 headers don't
support single-hop headers, so when we convert from http/1.1 to http/2,
we should drop all single-hop headers. This includes headers like
"transfer-encoding" and "connection", but also the headers that
"connection" points to, since "connection" can be used to designate
other headers as single-hop headers. For the "te" header, we can more
permissively convert it by just dropping non-conforming headers (ie
non-"trailers" headers) which is what we do for all other headers when
we convert.
Modifications:
Add a new blacklist to the http/1.1 to http/2 conversion, which is
constructed from the values of the "connection" header, and stop
throwing an exception when a "te" header is passed with a non-"trailers"
value. Instead, drop all values except for "trailers". Add unit tests
for "connection" and "te" headers when converting from http/1.1 to http/2.
Result:
This will improve the h2c upgrade request, and also conversions from
http/1.1 to http/2. This will simplify implementing spec-compliant
http/2 servers that want to share code between their http/1.1 and http/2
implementations.
[Fixes #7355]
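For illustration, a sketch of the described conversion behaviour (the header values are made up):

```java
HttpRequest request = new DefaultFullHttpRequest(
        HttpVersion.HTTP_1_1, HttpMethod.GET, "http://example.com/");
request.headers()
       .set(HttpHeaderNames.HOST, "example.com")
       .set(HttpHeaderNames.CONNECTION, "keep-alive, x-trace")
       .set("x-trace", "abc")                       // nominated by Connection -> dropped
       .set(HttpHeaderNames.TE, "gzip, trailers");  // only "trailers" survives

Http2Headers h2Headers = HttpConversionUtil.toHttp2Headers(request, true);
// h2Headers now carries :method/:scheme/:authority/:path plus "te: trailers";
// connection and x-trace are not copied.
```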
Motivation:
HTTP/2 allows writes of 0 length data frames. However in some cases EMPTY_BUFFER is used instead of the actual buffer that was written. This may mask writes of released buffers or otherwise invalid buffer objects. It is also possible that if the buffer is invalid AbstractCoalescingBufferQueue will not release the aggregated buffer nor fail the associated promise.
Modifications:
- DefaultHttp2FrameCodec should take care to fail the promise, even if releasing the data throws
- AbstractCoalescingBufferQueue should release any aggregated data and fail the associated promise if something goes wrong during aggregation
Result:
More correct handling of invalid buffers in HTTP/2 code.
Motivation:
If a child channel's read is triggered outside the parent channel's read
loop then it is possible a WINDOW_UPDATE will be written, but not
flushed.
If a child channel's beginRead processes data from the inboundBuffer,
readPending is then set to false. This will result in data not being
delivered if we are in the parent's read loop and more data is attempted
to be delivered to that child channel.
Modifications:
- The child channel must force a flush if a frame is written as a result
of reading a frame, and this is not in the parent channel's read loop
- The child channel must allow a transition from dequeueing from
beginRead into the parent channel's read loop to deliver more data
Result:
The child channel flushes data when reading outside the parent's read
loop, and has frames delivered more reliably.
Motivation:
Http2StreamFrameToHttpObjectCodec was not properly encoding and
decoding 100-Continue HttpResponse/Http2HeadersFrame. It was
encoding a 100-Continue FullHttpResponse as an Http2HeadersFrame with
endStream=true, causing the child channel to terminate. It was not
decoding a 100-Continue Http2HeadersFrame (endStream=false) as a
FullHttpResponse. This should be fixed as it would cause the http2 child
stream to prematurely close, and could cause HttpObjectAggregator to
fail if it's in the pipeline.
Modification:
- Fixed encode() to properly encode a 100-Continue FullHttpResponse as an
Http2HeadersFrame with endStream=false
- Reject 100-Continue HttpResponses that are NOT FullHttpResponses
- Fixed decode() to properly decode a 100-Continue Http2HeadersFrame
(endStream=false) as a FullHttpResponse
- Made Http2StreamFrameToHttpObjectCodec sharable so that it can be used
among child streams within the same Http2MultiplexCodec
Result:
Now Http2StreamFrameToHttpObjectCodec should be properly handling
100-Continue responses.
Motivation:
Previously the client Http2ConnectionHandler triggered a user event
immediately when the HTTP/2 connection preface was sent. Any attempt to
immediately send a new request could cause the server to terminate the
connection, as it might not have received the SETTINGS frame from the
client. Per RFC7540 Section 3.5, the preface "MUST be followed by a
SETTINGS frame (Section 6.5), which MAY be empty."
(https://tools.ietf.org/html/rfc7540#section-3.5)
This event could be made more meaningful if it also indicates that the
initial client SETTINGS frame has been sent to signal that the channel
is ready to send new requests.
Modification:
- Renamed event to Http2ConnectionPrefaceAndSettingsFrameWrittenEvent.
- Modified Http2ConnectionHandler to trigger the user event only if it
is a client and it has sent both the preface and SETTINGS frame.
Result:
It is now safe to use the event as an indicator that the HTTP/2
connection is ready to send new requests.
Motivation:
A regression was introduced in 86e653e which had the effect that the writability was not updated for a Channel while queueing data in the SslHandler.
Modifications:
- Factor out code that will increment / decrement pending bytes and use it in AbstractCoalescingBufferQueue and PendingWriteQueue
- Add test-case
Result:
Channel writability changes are triggered again.
Motivation:
Objects of java.util.TreeMap or java.util.TreeSet will become
non-Serializable if instantiated with Comparators, which are not also
Serializable. This can result in unexpected and difficult-to-diagnose
bugs.
Modifications:
Implement Serializable for all classes that implement Comparator.
Result:
Proper Comparators which will not force collections to
non-Serializable mode.
Motivation:
Even if it's a super micro-optimization (most JVMs could optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows you to
access such methods in the test scope directly, without an instance of
the class.
Modifications:
Add the 'static' modifier to all methods where possible, mostly in the
test scope.
Result:
Cleaner code with proper 'static' modifiers.
Motivation:
Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.
Modifications:
Add the missing 'serialVersionUID' field to all Serializable
classes.
Result:
Proper deserialization of previously serialized objects.
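For illustration, the typical shape of the change; the class and the id value are made up:

```java
import java.io.Serializable;
import java.util.Comparator;

final class CaseInsensitiveComparator implements Comparator<String>, Serializable {
    // Explicit id so previously serialized collections that captured this comparator
    // remain readable after the class evolves.
    private static final long serialVersionUID = 1L;

    @Override
    public int compare(String a, String b) {
        return a.compareToIgnoreCase(b);
    }
}
```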
Motivation:
During code read of the Netty codebase I noticed that the Netty
HttpServerUpgradeHandler unconditionally sets a Content-Length: 0
header on 101 Switching Protocols responses. This explicitly
contravenes RFC 7230 Section 3.3.2 (Content-Length), which notes
that:
A server MUST NOT send a Content-Length header field in any
response with a status code of 1xx (Informational) or 204
(No Content).
While it is unlikely that any client will ever be confused by
this behaviour, there is no reason to contravene this part of the
specification.
Modifications:
Removed the line of code setting the header field and changed the
only test that expected it to be there.
Result:
When performing the server portion of HTTP upgrade, the 101
Switching Protocols response will no longer contain a
Content-Length: 0 header field.
Motivation:
Http2ServerDowngrader is specifically built for server channels where
inbound Http2StreamFrames are converted into HttpRequests, and outbound
HttpResponses are converted into Http2StreamFrames. It can be easily
made to be more generic to work with client channels where inbound
Http2StreamFrames are converted into HttpResponses, and outbound
HttpRequests are converted into Http2StreamFrames.
Modification:
- Renamed Http2ServerDowngrader to a more general
Http2StreamFrameToHttpObjectCodec
- Made it take in an "isServer" parameter to determine whether encoding
inbound Http2StreamFrames should create HttpRequests (for server) or
HttpResponses (for client)
- Norman fixed a leak in the unit test. Thanks! :-)
Result:
Now Http2StreamFrameToHttpObjectCodec can be used to translate
Http2StreamFrame to HttpObject for both server and client.
Motivation:
We changed Http2ConnectionHandler to expect the upgrade method to be
called *after* we send the preface (ie add the handler to the pipeline)
but we forgot to change the Http2ClientUpgradeCodec to match the new
policy. This meant that client-side h2c upgrades failed.
Modifications:
Reverse sending the preface and calling the upgrade method to match the
new policy.
Result:
Clients can initiate h2c upgrades successfully.
Motivation:
At the moment an NPE is thrown if someone tries to use the InboundHttp2ToHttpAdapter.
Modifications:
- Ensure the status was null in "InboundHttp2ToHttpAdapter::onPushPromiseRead" before calling "HttpConversionUtil.parseStatus" methods.
- Fix setting status to OK in "InboundHttp2ToHttpAdapter::onPushPromiseRead".
Result:
Fixes [#7214].
Motivation:
The Headers interface provides a method to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.
Modifications:
- Add a method which returns an iterator over all the values for a specific name
Result:
Ability to iterate over values for a specific name with no intermediate collection.
Motivation:
We missed marking the Http2StreamChannel as writable in some cases, which could lead to the situation that a Channel never becomes writable. Also, when an Http2StreamChannel was created we always marked it non-writable at the beginning, which means that if the user only starts writing once the Channel becomes writable this will never happen, as it only becomes writable after the first header is written.
Modifications:
- Correctly handle updates for writability in all cases
- Change unit tests to cover this.
Result:
Fixes [#7179].
Motivation:
At the moment an NPE is thrown if someone tries to use the Http2ServerUpgradeCodec with Http2FrameCodec and Http2MultiplexCodec.
Modifications:
- Ensure the handler was added to the pipeline before calling on*Upgrade(...) methods.
- Add tests
- Fix adding of handlers after upgrade.
Result:
Fixes [#7173].
Motivation:
In `Http2StreamChannelId`, `hashCode()` is not consistent with `equals()`.
Modifications:
Make `Http2StreamChannelId.hashCode()` consistent with `equals()`.
Result:
Faster hash map operations when Http2StreamChannelId is used as a key.
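For illustration, a sketch of the consistency requirement; the field names are assumptions:

```java
// Two Http2StreamChannelId instances that are equal (same parent channel id and same
// stream number) must produce the same hash code.
@Override
public int hashCode() {
    return id * 31 + parentId.hashCode();
}

@Override
public boolean equals(Object obj) {
    if (!(obj instanceof Http2StreamChannelId)) {
        return false;
    }
    Http2StreamChannelId other = (Http2StreamChannelId) obj;
    return id == other.id && parentId.equals(other.parentId);
}
```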
Motivation:
In `DefaultHttp2GoAwayFrame.equals()` the content is compared twice: explicitly and in the `super` method.
Modifications:
Remove the explicit content comparison.
Make `hashCode()` consistent with `equals()`.
Result:
`DefaultHttp2GoAwayFrame.equals()` works faster.
Motivation:
We must not add the inboundStreamHandler for outbound streams created by Http2MultiplexCodec, as the user will specify a handler via Http2StreamChannelBootstrap.
Modifications:
- Check if the stream is outbound and if so do not add the inboundStreamHandler to the pipeline
- Update tests so this is covered.
Result:
Fixes [#7178]
Motivation:
379ac890f4 introduced the usage of the inline mock maker. Unfortunately this does not work on Java 7 and Java 9.
Modifications:
Just use reflection to create the event for now.
Result:
Netty tests pass again on Java 7 and Java 9 as well.
Motivation:
When using Http2FrameCodec / Http2MultiplexCodec with HttpServerUpgradeHandler, a reference count exception will be triggered.
Modifications:
- Correctly retain before calling InboundHttpToHttp2Adapter.handle
- Add unit test
Result:
Fixes [#7172].
Motivation:
The construction `new AsciiString(string.substring(...))` can be replaced with the `new AsciiString(string, start, length)` to avoid extra allocation.
Modifications:
Apply the described replacement in `HttpConversionUtil#setHttp2Authority`.
Result:
Less allocations.
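For illustration, the pattern being applied (the values are made up):

```java
String authority = "user@example.com:8080";
int idx = authority.indexOf('@') + 1;

// before: allocates an intermediate String, then wraps it
AsciiString before = new AsciiString(authority.substring(idx));
// after: wraps the existing String directly, no intermediate allocation
AsciiString after = new AsciiString(authority, idx, authority.length() - idx);
```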
Motivation:
The `AsciiString#toString` method calculates the string value and caches it in a field. If an `AsciiString` is created from a `String` value, we can avoid rebuilding strings by caching them immediately when creating the `AsciiString`. This is useful for constant strings, which are already stored in the JVM's string table, or in cases where an unavoidable `#toString` method call is expected.
Modifications:
- Add a new static method `AsciiString#cache(String)` which saves the string value into the cache field.
- Apply a "benign" data race in the `#hashCode` and `#toString` methods.
Result:
Less memory usage in some `AsciiString` use cases.
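For illustration, a usage sketch; in current Netty the equivalent public factory is exposed as `AsciiString.cached(String)`:

```java
// The String form is cached at construction time, so toString() never rebuilds it.
AsciiString accept = AsciiString.cached("accept");
String s1 = accept.toString();  // returns the cached String
String s2 = accept.toString();  // same instance, no new String is built
```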
Motivation:
We should call promise.setUncancellable() in DefaultHttp2StreamChannel.Unsafe impl to detect if the operation was cancelled.
Modifications:
Add promise.setUncancellable() calls
Result:
More correct handling of cancelled promises
Motivation:
Our http2 child channel implementation was not 100 % complete and had a few bugs. Besides this, the performance overhead was non-trivial.
Modifications:
There are a lot of modifications, the most important....
* Http2FrameCodec extends Http2ConnectionHandler and Http2MultiplexCodec extends Http2FrameCodec to reduce performance overhead and inter-dependencies on handlers in the pipeline
* Correctly handle outbound flow control for child channels
* Support unknown frame types in Http2FrameCodec and Http2MultiplexCodec
* Use a consistent way how to create Http2ConnectionHandler, Http2FrameCodec and Http2MultiplexCodec (via a builder)
* Remove Http2Codec and Http2CodecBuilder as the user should just use Http2MultiplexCodec and Http2MultiplexCodecBuilder now
* Smart handling of flushes from child channels to reduce overhead
* Reduce object allocations
* child channels always use the same EventLoop as the parent Channel to reduce overhead and simplify implementation.
* Not extend AbstractChannel for the child channel implementation to reduce overhead in terms of performance and memory usage
* Remove Http2FrameStream.managedState(...) as the user of the child channel api should just use Channel.attr(...)
Result:
Http2MultiplexCodec (and so child channels) and Http2FrameCodec are more correct, faster and more feature complete.
Motivation:
This PR (unfortunately) does 4 things:
1) Add outbound flow control to the Http2MultiplexCodec:
The HTTP/2 child channel API should interact with HTTP/2 outbound/remote flow control. That is,
if a H2 stream used up all its flow control window, the corresponding child channel should be
marked unwritable and a writability-changed event should be fired. Similarly, an unwritable
child channel should be marked writable and a writability-event should be fired, once a
WINDOW_UPDATE frame has been received. The changes are (mostly) contained in ChannelOutboundBuffer,
AbstractHttp2StreamChannel and Http2MultiplexCodec.
2) Introduce a Http2Stream2 object, that is used instead of stream identifiers on stream frames. A
Http2Stream2 object allows an application to attach state to it, so an application handler
no longer needs to maintain stream state (i.e. in a map(id -> state)) itself.
3) Remove stream state events, which are no longer necessary due to the introduction of Http2Stream2.
Also those stream state events have been found hard and complex to work with, when porting gRPC
to the Http2FrameCodec.
4) Add support for HTTP/2 frames that have not yet been implemented, like PING and SETTINGS. Also add
a Http2FrameCodecBuilder that exposes options from the Http2ConnectionHandler API that couldn't else
be used with the frame codec, like buffering outbound streams, window update ratio, frame logger, etc.
Modifications:
1) A child channel's writability and a H2 stream's outbound flow control window interact, as described
in the motivation. A channel handler is free to ignore the channel's writability, in which case the
parent channel is responsible for buffering writes until a WINDOW_UPDATE is received.
The connection-level flow control window is ignored for now. That is, a child channel's writability
is only affected by the stream-level flow control window. So a child channel could be marked writable,
even though the connection-level flow control window is zero.
2) Modify Http2StreamFrame and the Http2FrameCodec to take an Http2Stream2 object instead of a primitive
integer. Introduce a special Http2ChannelDuplexHandler that has newStream() and forEachActiveStream()
methods. It's recommended for a user to extend from this handler, to use those advanced features.
3) As explained in the documentation, a new inbound stream active can be detected by checking if the
Http2Stream2.managedState() of a Http2HeadersFrame is null. An outbound stream active can be detected
by adding a listener to the ChannelPromise of the write of the first Http2HeadersFrame. A stream
closed event can be listened to by adding a listener to the Http2Stream2.closeFuture().
4) Add a simple Http2FrameCodecBuilder and implement the missing frame types.
Result:
1) The Http2MultiplexCodec supports outbound flow control.
2) The Http2FrameCodec API makes it easy for a user to manage custom stream specific state and to create
new outbound streams.
3) The Http2FrameCodec API is much cleaner and easier to work with. Hacks like the ChannelCarryingHeadersFrame
are no longer necessary.
4) The Http2FrameCodec now also supports PING and SETTINGS frames. The Http2FrameCodecBuilder allows the Http2FrameCodec
to use some of the rich features of the Http2ConnectionHandler API.
Motivation:
The javadocs of Http2RemoteFlowController.isWritable(...) are incorrect.
Modifications:
Update javadocs to reflect reality.
Result:
Correct javadocs.
Motivation:
The labels identifying the frame types in Http2FrameLogger are not always correct.
Modification:
- Correct the string labels to indicate the right frame type in Http2FrameLogger
Result:
Logs are more correct.
Motivation:
In Http2ConnectionHandler we call flush(...) in channelReadComplete(...) to ensure we update the flow-controller and write stuff to the remote peer. We should better flip the order and so may be able to pick up more bytes.
Modifications:
Change order of calls.
Result:
Better performance
Motivation:
When we use pipeline.replace while there is still ongoing inbound traffic,
inbound messages may be delivered to the wrong
handlers. So we add the new handler first, and remove the old one after the
add, so that the next handler will be the correct one.
Modifications:
Use remove after addAfter instead of replace.
Result:
Fixed #6881
Motivation:
Http2FrameCodec should use Http2Settings.defaultSettings() when no Http2Settings were specified by the user.
Modifications:
Replace new Http2Settings() with Http2Settings.defaultSettings()
Result:
Use correct Http2Settings by default when using Http2FrameCodec in all cases.
Motivation:
Currently the logger creates a hex dump even if the log write will not apply.
This is unnecessary GC overhead.
Modifications:
Restore optimization from #3492
Result:
Fixes #7025
Motivation:
There should be a way to allow graceful shutdown to wait for all open streams to close without a timeout. Using gracefulShutdownTimeoutMillis with a large value is a bit of a hack, and has a gotcha that sufficiently large values will overflow the long, resulting in a ClosingChannelFutureListener that executes immediately.
Modification:
Allow using gracefulShutdownTimeoutMillis(-1) to express waiting until all streams are closed.
Result:
We can now shutdown the connection without a forced timeout.
Motivation:
codec-http2 currently does not strictly enforce the HTTP/1.x semantics with respect to the number of headers defined in RFC 7540 Section 8.1 [1]. We currently don't validate the number of headers nor do we validate that the trailing headers should indicate EOS.
[1] https://tools.ietf.org/html/rfc7540#section-8.1
Modifications:
- DefaultHttp2ConnectionDecoder should only allow decoding of a single headers and a single trailers
- DefaultHttp2ConnectionEncoder should only allow encoding of a single headers and optionally a single trailers
Result:
Constraints of RFC 7540 restricting the number of headers/trailers are enforced.
Motivation:
Http2ServerUpgradeCodec should support Http2FrameCodec.
Modifications:
- Add support for Http2FrameCodec
- Add example that uses Http2FrameCodec
Result:
More flexible use of Http2ServerUpgradeCodec
Motivation:
DefaultHttp2ConnectionEncoder#writeHeaders attempts to find a stream object, and if one doesn't exist it tries to create one. However in the event that the local endpoint has received a RST_STREAM frame before writing the response headers we attempt to create a stream. Since this stream ID is for the incorrect endpoint we then generate a GO_AWAY for what appears to be a protocol error, but can instead be failed locally.
Modifications:
- Just fail the local promise in the above situation instead of sending a GO_AWAY
Result:
Less severe consequences if the server asynchronously sends headers after a RST_STREAM has been received.
Fixes https://github.com/netty/netty/issues/6906.
Motivation:
PR #6811 introduced public utility methods to decode a hex dump and its parts, but they are not visible from netty-common.
Modifications:
1. Move the `decodeHexByte`, `decodeHexDump` and `decodeHexNibble` methods into `StringUtils`.
2. Apply these methods where applicable.
3. Remove similar methods from other locations (e.g. `HpackHex` test class).
Result:
Less code duplication.
Motivation:
The Http2FrameLogger uses a custom format when logging events. We should use the more familiar format of 'channel event type: details' and single line logging for more consistent debugging.
Modifications:
- Http2FrameLogger should not use a StringBuilder and instead should directly use the Logger
- Http2FrameLogger should use the more consistent format defined above
Result:
Http2FrameLogger's logging format is more consistent with other log events.
Motivation:
In a `HttpConversionUtil#toHttp2Headers` a status code conversion can be replaced with using `HttpResponseStatus#codeAsText` method.
Modifications:
Apply `HttpResponseStatus#codeAsText` method.
Result:
Less allocations.
Motivation:
Before we always expanded the buffer by the initialCapacity, which by default is 32 bytes. This may lead to many expansions of the buffer until we finally reach the point that the buffer can fit everything.
Modifications:
Double the buffer size until the threshold of >= 1024 is hit. After this, grow it by the initialCapacity.
Result:
Less expansion of the buffer (and so fewer allocations / copies) when the initialCapacity is not big enough. Fixes [#6864].
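For illustration, a minimal sketch of the growth policy (names are assumptions, not the actual code):

```java
// Double while below the 1024-byte threshold, then grow linearly by initialCapacity.
static int nextCapacity(int current, int initialCapacity) {
    return current < 1024 ? current << 1 : current + initialCapacity;
}
```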
Motivation:
HpackHuffmanDecoder.Decoder did not do any bounds checking but just caught IndexOutOfBoundsException to detect whether the array needs to grow. This can be very expensive because of fillInStackTrace().
Modifications:
Add proper bounds checking and grow the array if needed without catching IndexOutOfBoundsException.
Result:
Less overhead if the array needs to grow.
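For illustration, a simplified sketch of the change (not the actual decoder code):

```java
// Grow proactively instead of letting the write throw IndexOutOfBoundsException.
private void append(byte b) {
    if (index == bytes.length) {
        bytes = java.util.Arrays.copyOf(bytes, bytes.length << 1);
    }
    bytes[index++] = b;
}
```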
Motivation:
Depending on the use case it may make sense to increase or decrease the initial size of the buffer used during the HPACK huffman decode process. This is currently not exposed through the AbstractHttp2ConnectionHandlerBuilder.
Modifications:
- Add a method to AbstractHttp2ConnectionHandlerBuilder which allows the initial size of the buffer used during the HPACK huffman decode process to be configured.
Result:
AbstractHttp2ConnectionHandlerBuilder provides more control of codec-http2 knobs.
Motivation:
Some JUnit assert calls can be replaced by simpler ones.
Modifications:
Replace them with more suitable methods.
Result:
More informative JUnit reports.
Motivation:
DefaultHttp2FrameWriter has constructors that it would be a hassle to
expose as configuration parameters on Http2Codec. We should instead
make a builder for Http2Codec.
Modifications:
Get rid of the public constructors on Http2Codec and instead make sure
you can always use the builder where you would have used the constructor
before.
Result:
Http2Codec can be configured more flexibly, and the SensitivityDetector
can be configured.
Motivation:
Each StreamByteDistributor may allow for priority in different ways, but there are certain characteristics which are invalid regardless of the distribution algorithm. We should validate these invalid characteristics at the flow controller level.
Modifications:
- Disallow negative stream IDs from being used. These streams may be accepted by the WeightedFairQueueByteDistributor and cause state for other valid streams to be evicted.
- Improve unit tests to verify limits are enforced.
Result:
Boundary conditions related to the priority parameters are validated more strictly.
Motivation:
Chrome was randomly getting stuck loading the tiles examples.
Investigation showed that the Netty flow controller thought it had
nothing to send for the connection even though some streams had queued
data and window available.
Modifications:
Fixed an accounting error where an implicitly created parent was not
being added to the dependency tree, thus it and all of its children were
orphaned from the connection's tree and would never have data written.
Result:
Fixes #6621
Motivation:
Although effectively unused, the toString() of
WeightedFairQueueByteDistributor.State is useful for debugging. It
accidentally had an infinite loop, as it would recurse infinitely
between a parent and its child, which makes it less useful for
debugging.
Modifications:
Prune the infinite loop by using the parent's streamId instead of the
parent's toString().
Result:
Faster, less stack-overflowing toString()
Motivation:
The CI servers have reported leaks while building the HTTP/2 unit tests. The unit tests attempt to wait for the channels to be closed before exiting the test, but we should wait in case there are any tasks pending on the EventLoopGroup's task queues.
Modifications:
- Change the Future.sync() operations to Future.syncUninterruptibly()
- HTTP/2 unit tests which use local channel should wait for 5 seconds before shutting down the EventLoopGroups
Result:
More likely that any cleanup-related tasks will execute before the unit tests are shut down.
Motivation:
It is generally useful to override DefaultHttp2HeadersDecoder's creation of a new Http2Headers object so more optimized versions can be substituted if the use case allows for it.
Modifications:
- DefaultHttp2HeadersDecoder should support an overridable method to generate the new Http2Headers object for each decode operation
Result:
DefaultHttp2HeadersDecoder is more extensible.
Fixes https://github.com/netty/netty/issues/6591.
Motivation:
codec-http2 has some helper methods to write to ByteBuf in a big endian fashion. This is the default memory structure for ByteBuf so these helper methods are not necessary.
Modifications:
- remove writeUnsignedInt and writeUnsignedShort
Result:
codec-http2 has less ByteBuf helper methods which are not necessary.
Motivation:
HTTP/2 supports two ways to start on a non-TLS TCP connection:
the http/1.1 upgrade and the prior-knowledge approach.
Currently, the http2-server example only supports
starting via upgrade. We can do a simple dispatch by peeking at the first
bytes of the inbound data to see whether they match the prior-knowledge
preface, and determine which handlers to set into the pipeline.
Modifications:
Add ClearTextHttp2ServerUpgradeHandler to support starting HTTP/2 via clear
text with both approaches. And update example/http2-server to support
this functionality.
Result:
Netty HTTP/2 and the example http2-server accept both ways to start
HTTP/2 over clear text.
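For illustration, a simplified sketch of the dispatch idea (not the actual CleartextHttp2ServerUpgradeHandler code; a real handler would wait until enough bytes are buffered before deciding):

```java
private static final ByteBuf PREFACE = Http2CodecUtil.connectionPrefaceBuf();

// Peek at the buffered bytes without consuming them; a match on the HTTP/2 connection
// preface ("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n") selects the prior-knowledge handlers,
// otherwise fall back to the HTTP/1.1 upgrade path.
private static boolean isPriorKnowledge(ByteBuf in) {
    int len = Math.min(in.readableBytes(), PREFACE.readableBytes());
    return ByteBufUtil.equals(PREFACE, PREFACE.readerIndex(), in, in.readerIndex(), len);
}
```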
Fixed memory leak problem
Update fields to final
Rename ClearText to cleartext
Addressed comments for code improvement
- Always prefer static, final, and private if possible
- Add UnstableApi annotation
- Used EmbeddedChannel.readInbound instead of unhandled inbound handler
- More assertion
Update javadoc for CleartextHttp2ServerUpgradeHandler
Rename ClearTextHttp2ServerUpgradeHandler to CleartextHttp2ServerUpgradeHandler
Removed redundant code about configure pipeline
nit: PriorKnowledgeHandler
Removed Mockito.spy, investigate conn state instead
Add Http2UpgradeEvent
Check null of the constructor arguments
Rename Http2UpgradeEvent to PriorKnowledgeUpgradeEvent
Update unit test
Motivation:
gRPC (and potentially other libraries) has an optimized header processor that requires direct access to the HpackDecoder.
Modifications:
Make the HpackDecoder and its constructors public.
Result:
Fixes #6579
Motivation:
Http2StreamChannel ignores the options of its parent channel when being created. That leads to surprising results when, for example, an unpooled allocator is silently replaced with a pooled allocator (the default setting).
Modification:
Copy the parent channel's options over to the Http2StreamChannel.
Result:
Channel options are now consistent between the Http2StreamChannel and its parent channel. The newly added test passes on this branch and fails on master. Fixes #6551.
Motivation:
We used some deprecated Mockito methods.
Modifications:
- Replace deprecated method usage
- Some cleanup
Result:
No more usage of deprecated Mockito methods. Fixes [#6482].
Motivation:
DefaultHttp2ConnectionDecoder#onSettingsRead processes the settings, and then sends a SETTINGS ACK to the remote peer. Processing the settings may result in frames which violate the previous settings being send to the remote peer. The remote peer will not apply the new settings until it has received the SETTINGS ACK, and therefore we may violate the settings from the remote peer's perspective and the connection will be shutdown.
Modifications:
- We should send the SETTINGS ACK before we process the settings to ensure the peer receives the SETTINGS ACK before other frames which assume the settings have already been applied
Result:
Fixes https://github.com/netty/netty/issues/6520.
Motivation:
Http2SecurityUtil currently lists HTTP/2 ciphers as documented by
JSSE docs [1] and the IANA [2] using the TLS_ prefix.
In some IBM J9 implementations the SSL_ prefix is used, which is also
covered by the JSSE.
[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html
[2] http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
Modifications:
Add both variants of the cipher names (prefixed with SSL_ in addition to TLS_)
Result:
HTTP/2 connections can now be created using the SslProvider.JDK on IBM J9
and potentially other JVMs which use the SSL_ prefix.