Motivation:
If you test a header value providing a String, contains() returns false.
This is due to the implementation inherited from DefaultHeaders using
the JAVA_HASHER.
JAVA_HASHER.equals returns false because a is a String and b an
AsciiString.
Modifications:
DefaultHttp2Headers overrides contains and uses CASE_SENSITIVE_HASHER.
Result:
You can test a header value with any CharSequence implementation.
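For illustration, a minimal sketch of the underlying equality mismatch (class and variable names here are made up, not taken from the patch):
```java
import io.netty.util.AsciiString;

public final class HasherMismatchExample {
    public static void main(String[] args) {
        String javaString = "gzip";
        AsciiString asciiString = AsciiString.of("gzip");

        // String#equals fails on an AsciiString of equal content because the types differ...
        System.out.println(javaString.equals(asciiString)); // false
        // ...while a content-based comparison, as used by CASE_SENSITIVE_HASHER, succeeds.
        System.out.println(AsciiString.contentEquals(javaString, asciiString)); // true
    }
}
```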
Motivation:
The completion order of promises in Http2MultiplexChannel#close should be consistent with that of AbstractChannel. Otherwise this may result in Future listeners seeing incorrect channel state.
Modifications:
Add tests cases.
Result:
Ensure consistent behavior between Http2MultiplexChannel and AbstractChannel.
Motivation:
Calling DefaultHttp2StreamChannel.Unsafe.close(...) multiple times should not fail.
Modification:
- Correctly handle multiple calls to DefaultHttp2StreamChannel.Unsafe.close(...)
- Complete closePromise and promise that is given to close(...) in the correct order.
- Add unit test
Result:
Fixes [#7628] and [#7641]
Motivation:
If DefaultHttp2ConnectionEncoder processed an outbound operation it sometimes missed calling Http2LifecycleManager.onError(...) when the operation was executed asynchronously.
Modifications:
Make a best effort to update flags but still ensure failures are propagated to Http2LifecycleManager.onError(...) in all cases.
Result:
More consistent handling of errors.
Motivation:
Pseudo headers are checked less frequently than normal headers, so
it is more efficient to check the latter first.
Modifications:
Swap the order of the check, and fix minor formatting
Result:
Possibly more efficient header checks
Motivation:
We can just implement the interfaces directly and so reduce object creation in DefaultHttp2RemoteFlowController.
Modifications:
Directly implement the interfaces.
Result:
Less object creation.
Motivation:
When part of an HTTP/2 stream, Http2StreamChannel.isOpen() / isActive() should report false within a call to a ChannelInboundHandler's channelInactive() method.
Modifications:
Fulfill the promise before calling fireChannelInactive()
Result:
Correctly update the state / promise before notifying handlers. Fixes [#7638]
Motivation:
With HTTP1, it's very easy to check if a header is present and has a
given value: you can simply invoke
io.netty.handler.codec.http.HttpHeaders#contains(java.lang.CharSequence, java.lang.CharSequence, boolean).
It is not possible to do the same with HTTP/2. You have to get the list
of all values for the header (returned as Strings) and then iterate over it invoking
String#equals or String#equalsIgnoreCase.
Modifications:
I've added io.netty.handler.codec.http2.Http2Headers#contains and
implemented it in DefaultHttp2Headers, EmptyHttp2Headers and ReadOnlyHttp2Headers.
Result:
You can use AsciiString constants to check if a header is present in a
concise and efficient manner.
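A usage sketch of the new lookup, assuming the added Http2Headers#contains mirrors the three-argument HTTP/1 signature mentioned above; the header name and values are examples only:
```java
import io.netty.handler.codec.http2.DefaultHttp2Headers;
import io.netty.handler.codec.http2.Http2Headers;
import io.netty.util.AsciiString;

public final class Http2HeadersContainsExample {
    private static final AsciiString ACCEPT_ENCODING = AsciiString.of("accept-encoding");
    private static final AsciiString GZIP = AsciiString.of("gzip");

    public static void main(String[] args) {
        Http2Headers headers = new DefaultHttp2Headers();
        headers.add(ACCEPT_ENCODING, AsciiString.of("GZIP"));

        // The third argument selects case-insensitive value comparison.
        System.out.println(headers.contains(ACCEPT_ENCODING, GZIP, true));  // true
        System.out.println(headers.contains(ACCEPT_ENCODING, GZIP, false)); // false
    }
}
```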
Motivation:
Usually when using Netty, exceptions which happen for outbound operations should not be fired through the pipeline; only the ChannelPromise should be failed.
Modifications:
- Change Http2LifecycleManager.onError(...) to also take a boolean that indicates whether the error was caused by an outbound operation
- Change the Http2ConnectionHandler.on*Error(...) methods to also take this boolean
- Change Http2FrameCodec to only fire exceptions through the pipeline if they are not related to outbound operations
- Add unit test.
Result:
More consistent error handling when using Http2FrameCodec and Http2MultiplexCodec.
Motivation:
Http2MultiplexCodec swallows Http2PingFrames without releasing the payload, resulting in a memory leak.
Modification:
Send unhandled frames down the pipeline for consumption/disposal by another ChannelInboundHandler.
Result:
Fixes #7607.
Motivation:
Usages of HttpResponseStatus may result in more object allocation than necessary because cached objects are not looked up and because the CharSequence parsing method is used where the AsciiString parsing method could be used instead.
Modifications:
- HttpResponseDecoder should attempt to get the HttpResponseStatus from cache instead of allocating a new object
- HttpResponseStatus#parseLine(CharSequence) should check if the type is AsciiString and redirect to the AsciiString parsing method which may not require an additional toString call
- HttpResponseStatus#parseLine(AsciiString) can be optimized so that it may not require object allocation
Result:
Less allocations when dealing with HttpResponseStatus.
Motivation:
When we cancel the flow-controlled writes we created a new StreamException for each write that was enqueued. Creating exceptions is very expensive because of filling in the stacktrace.
Modifications:
Only create the StreamException once and reuse it for all the flow-controlled writes (per stream).
Result:
Less expensive to cancel flow-controlled writes.
Motivation:
Http2FrameCodec increases the initialWindowSize when the user attempts to increase the connection flow control window. The initialWindowSize should only be touched as a result of a SETTINGS frame, and otherwise may result in flow control getting out of sync with our peer.
Modifications:
- Http2FrameCodec shouldn't update the initialWindowSize when a WindowUpdateFrame is written on the connection channel
Result:
More correct WindowUpdate processing.
Motivation:
HPackDecoder works on an entire header block, so we shouldn't encounter
incomplete header fields. If we do, we should treat it as
a decoding error and, according to the specification:
A decoding error in a header block MUST be treated as
a connection error (Section 5.4.1) of type COMPRESSION_ERROR.
Modifications:
* Check final state in HpackDecoder once we've decoded all the data.
Result:
* Throw a connection error if we receive incomplete header fields
* H2spec 4.3 tests all pass
Motivation:
Http2FrameStream#CONNECTION_STREAM is required to identify the
connection stream. However this leads to inconsistent usage from a user
perspective. When a user creates a Http2Frame for a non-connection
stream, the Http2MultiplexCodec automatically sets the stream, and the
user is never exposed to the Http2FrameStream object. However when the
user writes a Http2Frame for a connection stream they are required to
set the Http2FrameStream object. We can remove the Http2FrameStream#CONNECTION_STREAM
and keep the Http2FrameStream object internal, and therefore consistent
between the connection and non-connection use cases.
Modifications:
- Remove Http2FrameStream#CONNECTION_STREAM
- Update Http2FrameCodec to handle Http2Frame#stream() which returns
null
Result:
More consistent usage on http2 parent channel and http2 child channel.
Motivation:
Http2FrameCodecTest#newOutboundStream has a timeout of 1 second and has been observed to timeout on CI servers.
Modifications:
- Increase the timeout to 5 seconds
Result:
Less false positive test failures on CI servers.
Motivation:
HpackDecoder#addHeader takes in the streamId as a parameter but no longer uses it.
Modifications:
- Remove the streamId parameter from HpackDecoder#addHeader
Result:
Less unused parameters in HpackDecoder.
Motivation:
According to the HTTP/2 Spec:
SETTINGS_MAX_HEADER_LIST_SIZE (0x6): This advisory setting informs a
peer of the maximum size of header list that the sender is
prepared to accept, in octets. The value is based on the
uncompressed size of header fields, including the length of the
name and value in octets plus an overhead of 32 octets for each
header field.
We were accounting for the 32 bytes when encoding in HpackEncoder,
but not when decoding in HPackDecoder.
Modifications:
- Add 32 bytes to the header list length for each entry when decoding
with HPackDecoder.
Result:
- We account for the 32 bytes of overhead per header entry in HPackDecoder
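As a small worked example of the accounting rule (an illustrative helper only, not the HpackDecoder implementation), each header field contributes name length + value length + 32 octets:
```java
public final class HeaderListSizeExample {
    static final int HEADER_ENTRY_OVERHEAD = 32;

    // Sums name length + value length + 32 for every header field.
    static long headerListSize(CharSequence[][] headers) {
        long size = 0;
        for (CharSequence[] header : headers) {
            size += header[0].length() + header[1].length() + HEADER_ENTRY_OVERHEAD;
        }
        return size;
    }

    public static void main(String[] args) {
        CharSequence[][] headers = {
            {":method", "GET"},     // 7 + 3 + 32 = 42
            {"user-agent", "netty"} // 10 + 5 + 32 = 47
        };
        System.out.println(headerListSize(headers)); // 89
    }
}
```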
Motivation:
In our replace(...) methods we always used validation for the newly created headers while the original headers may not use validation at all.
Modifications:
- Only use validation if the original headers used validation as well.
- Ensure we create a copy of the headers in replace(...).
Result:
Fixes [#5226]
Motivation:
Currently the remote flow controller limits the maximum amount of pending data to Integer.MAX_VALUE. The overflow handling is also not very graceful in that it may lead to infinite loops, or otherwise no progress being made.
Modifications:
- StreamByteDistributor and RemoteFlowController should support pending bytes of type long.
Result:
Fixes https://github.com/netty/netty/issues/4283
Motivation:
http/2 counts header sizes somewhat inconsistently. Sometimes, headers
which are substantially smaller than the header list size will be measured
as longer than the header list size.
Modifications:
Keep better track of the nameLength of a given name, so that we don't
accidentally end up reusing a nameLength.
Result:
More consistent measurement of header list size.
Fixes #7511.
H2C upgrades should be ineligible for flow control
Motivation:
When the h2c upgrade request is too big, the Http2FrameCodec complains
it's too big for flow control reasons, even though it's ineligible for
flow control.
Modifications:
Specially mark upgrade streams and make Http2FrameCodec know not to try
to flow control on those streams.
Result:
Servers won't barf when they receive an upgrade request with a fat
payload.
[Fixes #7280]
Motivation:
Http2ConnectionHandler uses ctx.fireUserEvent to propagate the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent through the pipeline. This will propagate the event to the next inbound handler in the pipeline. If the user extends Http2ConnectionHandler the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent may be missed and initialization dependent upon this event will not be run.
Modifications:
- Http2ConnectionHandler should use userEventTriggered instead of ctx.fireUserEvent
Result:
Classes that extend Http2ConnectionHandler will see the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent user event.
Motivation:
HttpConversionUtil#toHttp2Headers has special code to filter the TE header name. However this filtering code may result in adding the <TE, TRAILERS> tuple in scenarios that are not appropriate. For example if a value containing 'trailers' is seen it will be added, but the value may not actually be equal to 'trailers'. Also CSV values are not supported.
Modifications:
- Account for CSV header values
- Account for the value containing 'trailers' but not actually being equal to 'trailers'
Result:
More robust parsing of the TE header.
Motivation:
HttpConversionUtil#toLowercaseMap requires an intermediate List to be allocated. This can be avoided with the recently added value iterator methods.
Modifications:
- Use HttpHeaders#valueCharSequenceIterator instead of getAll
Result:
Less intermediate object allocation and copying.
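A usage sketch of the iterator-based lookup referenced above (the header name and values are examples only):
```java
import java.util.Iterator;

import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.HttpHeaders;

public final class ValueIteratorExample {
    public static void main(String[] args) {
        HttpHeaders headers = new DefaultHttpHeaders();
        headers.add("accept-encoding", "gzip");
        headers.add("accept-encoding", "br");

        // Iterates the values without materializing the intermediate List that getAll(...) returns.
        Iterator<? extends CharSequence> values = headers.valueCharSequenceIterator("accept-encoding");
        while (values.hasNext()) {
            System.out.println(values.next());
        }
    }
}
```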
Motivation:
DefaultHttp2FrameWriter#writeData allocates a DataFrameHeader for each write operation. DataFrameHeader maintains internal state and allocates multiple slices of a buffer which is a maximum of 30 bytes. This 30 byte buffer may not always be necessary and the additional slice operations can utilize retainedSlice to take advantage of pooled objects. We can also save computation and object allocations if there is no padding which is a common case in practice.
Modifications:
- Remove DataFrameHeader
- Add a fast path for padding == 0
Result:
Less object allocation in DefaultHttp2FrameWriter
Motivation:
AbstractByteBuf#readSlice relied upon the bounds checking of the slice operation in order to detect index out of bounds conditions. However the slice bounds checking operation allows for the slice to go beyond the writer index, and this is out of bounds for a read operation.
Modifications:
- AbstractByteBuf#readSlice and AbstractByteBuf#readRetainedSlice should ensure the desired amount of bytes are readable before taking a slice
Result:
No reading of undefined data in AbstractByteBuf#readSlice and AbstractByteBuf#readRetainedSlice.
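A minimal sketch of the behaviour described above: readSlice(...) may only hand out bytes that are actually readable, i.e. it must respect the writer index:
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class ReadSliceExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16).writeBytes(new byte[] {1, 2, 3, 4});

        ByteBuf ok = buf.readSlice(2);   // fine: 4 readable bytes, 2 requested
        System.out.println(ok.readableBytes());

        try {
            buf.readSlice(8);            // only 2 readable bytes remain, capacity is irrelevant
        } catch (IndexOutOfBoundsException expected) {
            System.out.println("rejected read past writerIndex");
        }
        buf.release();
    }
}
```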
Motivation:
Netty could handle "connection" or "te" headers more gently when
converting from http/1.1 to http/2 headers. Http/2 headers don't
support single-hop headers, so when we convert from http/1.1 to http/2,
we should drop all single-hop headers. This includes headers like
"transfer-encoding" and "connection", but also the headers that
"connection" points to, since "connection" can be used to designate
other headers as single-hop headers. For the "te" header, we can more
permissively convert it by just dropping non-conforming headers (ie
non-"trailers" headers) which is what we do for all other headers when
we convert.
Modifications:
Add a new blacklist to the http/1.1 to http/2 conversion, which is
constructed from the values of the "connection" header, and stop
throwing an exception when a "te" header is passed with a non-"trailers"
value. Instead, drop all values except for "trailers". Add unit tests
for "connection" and "te" headers when converting from http/1.1 to http/2.
Result:
This will improve the h2c upgrade request, and also conversions from
http/1.1 to http/2. This will simplify implementing spec-compliant
http/2 servers that want to share code between their http/1.1 and http/2
implementations.
[Fixes #7355]
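A usage sketch of the conversion behaviour described above. The request and header values are made up for the example; the expected outcome follows the rules in this change ("connection" and the headers it nominates are dropped, and only the "trailers" value of "te" survives):
```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http2.Http2Headers;
import io.netty.handler.codec.http2.HttpConversionUtil;

public final class HeaderConversionExample {
    public static void main(String[] args) {
        FullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "http://example.com/hello", Unpooled.EMPTY_BUFFER);
        request.headers()
               .set("connection", "keep-alive, x-hop")
               .set("x-hop", "should-be-dropped")
               .set("te", "gzip, trailers");

        Http2Headers http2Headers = HttpConversionUtil.toHttp2Headers(request, true);
        // Expected: no "connection", no "x-hop", and "te" reduced to "trailers".
        System.out.println(http2Headers);
    }
}
```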
Motivation:
HTTP/2 allows writes of 0 length data frames. However in some cases EMPTY_BUFFER is used instead of the actual buffer that was written. This may mask writes of released buffers or otherwise invalid buffer objects. It is also possible that if the buffer is invalid AbstractCoalescingBufferQueue will not release the aggregated buffer nor fail the associated promise.
Modifications:
- DefaultHttp2FrameCodec should take care to fail the promise, even if releasing the data throws
- AbstractCoalescingBufferQueue should release any aggregated data and fail the associated promise if something goes wrong during aggregation
Result:
More correct handling of invalid buffers in HTTP/2 code.
Motivation:
If a child channel's read is triggered outside the parent channel's read
loop then it is possible a WINDOW_UPDATE will be written, but not
flushed.
If a child channel's beginRead processes data from the inboundBuffer,
readPending is then set to false, which will result in data not being
delivered if we are in the parent's read loop and more data is attempted to be
delivered to that child channel.
Modifications:
- The child channel must force a flush if a frame is written as a result
of reading a frame, and this is not in the parent channel's read loop
- The child channel must allow a transition from dequeueing from
beginRead into the parent channel's read loop to deliver more data
Result:
The child channel flushes data when reading outside the parent's read
loop, and has frames delivered more reliably.
Motivation:
Http2StreamFrameToHttpObjectCodec was not properly encoding and
decoding 100-Continue HttpResponse/Http2HeadersFrame. It was
encoding a 100-Continue FullHttpResponse as an Http2HeadersFrame with
endStream=true, causing the child channel to terminate. It was not
decoding a 100-Continue Http2HeadersFrame (endStream=false) as a
FullHttpResponse. This should be fixed as it would cause the http2 child
stream to prematurely close, and could cause HttpObjectAggregator to
fail if it's in the pipeline.
Modification:
- Fixed encode() to properly encode a 100-Continue FullHttpResponse as an
Http2HeadersFrame with endStream=false
- Reject 100-Continue HttpResponses that are NOT FullHttpResponses
- Fixed decode() to properly decode a 100-Continue Http2HeadersFrame
(endStream=false) as a FullHttpResponse
- Made Http2StreamFrameToHttpObjectCodec sharable so that it can be used
among child streams within the same Http2MultiplexCodec
Result:
Now Http2StreamFrameToHttpObjectCodec should be properly handling
100-Continue responses.
Motivation:
Previously the client Http2ConnectionHandler triggered a user event
immediately when the HTTP/2 connection preface was sent. Any attempt to
immediately send a new request could cause the server to terminate the
connection, as it might not have received the SETTINGS frame from the
client. Per RFC7540 Section 3.5, the preface "MUST be followed by a
SETTINGS frame (Section 6.5), which MAY be empty."
(https://tools.ietf.org/html/rfc7540#section-3.5)
This event could be made more meaningful if it also indicates that the
initial client SETTINGS frame has been sent to signal that the channel
is ready to send new requests.
Modification:
- Renamed event to Http2ConnectionPrefaceAndSettingsFrameWrittenEvent.
- Modified Http2ConnectionHandler to trigger the user event only if it
is a client and it has sent both the preface and SETTINGS frame.
Result:
It is now safe to use the event as an indicator that the HTTP/2
connection is ready to send new requests.
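A sketch of how a client handler might consume the event (this assumes the event class is visible to user code; sendFirstRequest() is a placeholder for application logic):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http2.Http2ConnectionPrefaceAndSettingsFrameWrittenEvent;

public class PrefaceAwareHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof Http2ConnectionPrefaceAndSettingsFrameWrittenEvent) {
            // Preface and initial SETTINGS frame were written: safe to start sending requests.
            sendFirstRequest(ctx);
        }
        super.userEventTriggered(ctx, evt);
    }

    private void sendFirstRequest(ChannelHandlerContext ctx) {
        // Application-specific: write the first request / HEADERS frame here.
    }
}
```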
Motivation:
A regression was introduced in 86e653e which had the effect that the writability was not updated for a Channel while queueing data in the SslHandler.
Modifications:
- Factor out code that will increment / decrement pending bytes and use it in AbstractCoalescingBufferQueue and PendingWriteQueue
- Add test-case
Result:
Channel writability changes are triggered again.
Motivation:
Objects of java.util.TreeMap or java.util.TreeSet will become
non-Serializable if instantiated with Comparators that are not also
Serializable. This can result in unexpected and difficult-to-diagnose
bugs.
Modifications:
Implement Serializable for all classes that implement Comparator.
Result:
Proper Comparators which will not force collections into
non-Serializable mode.
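An illustrative sketch of the pattern (not one of the actual Netty comparators): because the comparator is Serializable and declares a serialVersionUID, a TreeMap constructed with it stays serializable:
```java
import java.io.Serializable;
import java.util.Comparator;
import java.util.TreeMap;

public final class CaseInsensitiveComparator implements Comparator<String>, Serializable {
    private static final long serialVersionUID = 1L;

    @Override
    public int compare(String a, String b) {
        return a.compareToIgnoreCase(b);
    }

    public static void main(String[] args) {
        // The map remains serializable because its comparator is serializable too.
        TreeMap<String, String> map = new TreeMap<>(new CaseInsensitiveComparator());
        map.put("Content-Length", "0");
        System.out.println(map.firstKey());
    }
}
```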
Motivation:
Even if it's a super micro-optimization (most JVMs could optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows you to
access such methods in the test scope directly, without an instance of
the class.
Modifications:
Add the 'static' modifier to all methods where possible. Mostly in
the test scope.
Result:
Cleaner code with proper 'static' modifiers.
Motivation:
Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.
Modifications:
Add the missing 'serialVersionUID' field to all Serializable
classes.
Result:
Proper deserialization of previously serialized objects.
Motivation:
During code read of the Netty codebase I noticed that the Netty
HttpServerUpgradeHandler unconditionally sets a Content-Length: 0
header on 101 Switching Protocols responses. This explicitly
contravenes RFC 7230 Section 3.3.2 (Content-Length), which notes
that:
A server MUST NOT send a Content-Length header field in any
response with a status code of 1xx (Informational) or 204
(No Content).
While it is unlikely that any client will ever be confused by
this behaviour, there is no reason to contravene this part of the
specification.
Modifications:
Removed the line of code setting the header field and changed the
only test that expected it to be there.
Result:
When performing the server portion of HTTP upgrade, the 101
Switching Protocols response will no longer contain a
Content-Length: 0 header field.
Motivation:
Http2ServerDowngrader is specifically built for server channels where
inbound Http2StreamFrames are converted into HttpRequests, and outbound
HttpResponses are converted into Http2StreamFrames. It can be easily
made to be more generic to work with client channels where inbound
Http2StreamFrames are converted into HttpResponses, and outbound
HttpRequests are converted into Http2StreamFrames.
Modification:
- Renamed Http2ServerDowngrader to a more general
Http2StreamFrameToHttpObjectCodec
- Made it take in an "isServer" parameter to determine whether decoding
inbound Http2StreamFrames should create HttpRequests (for server) or
HttpResponses (for client)
- Norman fixed a leak in the unit test. Thanks! :-)
Result:
Now Http2StreamFrameToHttpObjectCodec can be used to translate
Http2StreamFrame to HttpObject for both server and client.
Motivation:
We changed Http2ConnectionHandler to expect the upgrade method to be
called *after* we send the preface (ie add the handler to the pipeline)
but we forgot to change the Http2ClientUpgradeCodec to match the new
policy. This meant that client-side h2c upgrades failed.
Modifications:
Reverse sending the preface and calling the upgrade method to match the
new policy.
Result:
Clients can initiate h2c upgrades successfully.
Motivation:
At the moment an NPE is thrown if someone tries to use the InboundHttp2ToHttpAdapter.
Modifications:
- Ensure the status was null in "InboundHttp2ToHttpAdapter::onPushPromiseRead" before calling "HttpConversionUtil.parseStatus" methods.
- Fix setting status to OK in "InboundHttp2ToHttpAdapter::onPushPromiseRead".
Result:
Fixes [#7214].
Motivation:
The Headers interface supports a method to get all the header values corresponding to a particular name. This API returns a List which requires intermediate storage and increases GC pressure.
Modifications:
- Add a method which returns an iterator over all the values for a specific name
Result:
Ability to iterate over values for a specific name with no intermediate collection.
Motivation:
We failed to mark the Http2StreamChannel as writable in some cases, which could lead to the situation that a Channel never becomes writable. Also, when a Http2StreamChannel was created we always marked it non-writable at the beginning, which means that if the user only starts writing once the Channel becomes writable, this will never happen, as it only became writable after the first header was written.
Modifications:
- Correctly handle updates for writability in all cases
- Change unit tests to cover this.
Result:
Fixes [#7179].
Motiviation:
At the moment an NPE is thrown if someone tries to use the Http2ServerUpgradeCodec with Http2FrameCodec and Http2MultiplexCodec.
Modifications:
- Ensure the handler was added to the pipeline before calling on*Upgrade(...) methods.
- Add tests
- Fix adding of handlers after upgrade.
Result:
Fixes [#7173].
Motivation:
In `Http2StreamChannelId`, `hashCode()` is not consistent with `equals()`.
Modifications:
Make `Http2StreamChannelId.hashCode()` consistent with `equals()`.
Result:
Faster hash map operations when Http2StreamChannelId is used as a key.
Motivation:
In `DefaultHttp2GoAwayFrame.equals()` the content is compared twice: explicitly and in the `super` method.
Modifications:
Remove the explicit content comparison.
Make `hashCode()` consistent with `equals()`.
Result:
`DefaultHttp2GoAwayFrame.equals()` works faster.
Motivation:
We must not add the inboundStreamHandler for outbound streams created by Http2MultiplexCodec as the user will specify a handler via Http2StreamChannelBootstrap.
Modifications:
- Check if the stream is outbound and if so do not add the inboundStreamHandler to the pipeline
- Update tests so this is covered.
Result:
Fixes [#7178]
Motivation:
379ac890f4 introduced the usage of the inline mock maker. This unfortunately does not work on java7 and java9.
Modifications:
Just use reflection to create the event for now.
Result:
Netty tests pass again on java7 and java9 as well.
Motivation:
When using Http2FrameCodec / Http2MultiplexCodec with HttpServerUpgradeHandler a reference count exception will be triggered.
Modifications:
- Correctly retain before calling InboundHttpToHttp2Adapter.handle
- Add unit test
Result:
Fixes [#7172].
Motivation:
The construction `new AsciiString(string.substring(...))` can be replaced with `new AsciiString(string, start, length)` to avoid an extra allocation.
Modifications:
Apply the described replacement in `HttpConversionUtil#setHttp2Authority`.
Result:
Less allocations.
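A small sketch of the replacement (the authority string is an example, not taken from the patch):
```java
import io.netty.util.AsciiString;

public final class AsciiStringSliceExample {
    public static void main(String[] args) {
        String authority = "user@example.com:8080";
        int start = authority.indexOf('@') + 1;

        // Before: allocates an intermediate String via substring(...).
        AsciiString viaSubstring = new AsciiString(authority.substring(start));

        // After: reads the characters directly from the original CharSequence.
        AsciiString direct = new AsciiString(authority, start, authority.length() - start);

        System.out.println(viaSubstring.contentEquals(direct)); // true
    }
}
```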
Motivation:
The `AsciiString#toString` method calculates the string value and caches it in a field. If an `AsciiString` is created from a `String` value, we can avoid rebuilding strings if we cache them immediately when creating the `AsciiString`. This would be useful for constant strings, which are already stored in the JVM's string table, or in cases where an unavoidable `#toString` method call is expected.
Modifications:
- Add a new static method `AsciiString#cache(String)` which saves the string value into the cache field.
- Apply a "benign" data race in the `#hashCode` and `#toString` methods.
Result:
Less memory usage in some `AsciiString` use cases.
Motivation:
We should call promise.setUncancellable() in DefaultHttp2StreamChannel.Unsafe impl to detect if the operation was cancelled.
Modifications:
Add promise.setUncancellable() calls
Result:
More correct handling of cancelled promises
Motivation:
Our http2 child channel implementation was not 100% complete and had a few bugs. Besides this, the performance overhead was non-trivial.
Modifications:
There are a lot of modifications, the most important....
* Http2FrameCodec extends Http2ConnectionHandler and Http2MultiplexCodec extends Http2FrameCodec to reduce performance overhead and inter-dependencies on handlers in the pipeline
* Correctly handle outbound flow control for child channels
* Support unknown frame types in Http2FrameCodec and Http2MultiplexCodec
* Use a consistent way how to create Http2ConnectionHandler, Http2FrameCodec and Http2MultiplexCodec (via a builder)
* Remove Http2Codec and Http2CodecBuilder as the user should just use Http2MultiplexCodec and Http2MultiplexCodecBuilder now
* Smart handling of flushes from child channels to reduce overhead
* Reduce object allocations
* child channels always use the same EventLoop as the parent Channel to reduce overhead and simplify implementation.
* Not extend AbstractChannel for the child channel implementation to reduce overhead in terms of performance and memory usage
* Remove Http2FrameStream.managedState(...) as the user of the child channel api should just use Channel.attr(...)
Result:
Http2MultiplexCodec (and so child channels) and Http2FrameCodec are more correct, faster and more feature complete.
Motivation:
This PR (unfortunately) does 4 things:
1) Add outbound flow control to the Http2MultiplexCodec:
The HTTP/2 child channel API should interact with HTTP/2 outbound/remote flow control. That is,
if a H2 stream used up all its flow control window, the corresponding child channel should be
marked unwritable and a writability-changed event should be fired. Similarly, a unwritable
child channel should be marked writable and a writability-event should be fired, once a
WINDOW_UPDATE frame has been received. The changes are (mostly) contained in ChannelOutboundBuffer,
AbstractHttp2StreamChannel and Http2MultiplexCodec.
2) Introduce a Http2Stream2 object, that is used instead of stream identifiers on stream frames. A
Http2Stream2 object allows an application to attach state to it, and so an application handler
no longer needs to maintain stream state (i.e. in a map(id -> state)) itself.
3) Remove stream state events, which are no longer necessary due to the introduction of Http2Stream2.
Also those stream state events have been found hard and complex to work with, when porting gRPC
to the Http2FrameCodec.
4) Add support for HTTP/2 frames that have not yet been implemented, like PING and SETTINGS. Also add
a Http2FrameCodecBuilder that exposes options from the Http2ConnectionHandler API that couldn't else
be used with the frame codec, like buffering outbound streams, window update ratio, frame logger, etc.
Modifications:
1) A child channel's writability and a H2 stream's outbound flow control window interact, as described
in the motivation. A channel handler is free to ignore the channel's writability, in which case the
parent channel is responsible for buffering writes until a WINDOW_UPDATE is received.
The connection-level flow control window is ignored for now. That is, a child channel's writability
is only affected by the stream-level flow control window. So a child channel could be marked writable,
even though the connection-level flow control window is zero.
2) Modify Http2StreamFrame and the Http2FrameCodec to take a Http2Stream2 object instead of a primitive
integer. Introduce a special Http2ChannelDuplexHandler that has newStream() and forEachActiveStream()
methods. It's recommended for a user to extend from this handler, to use those advanced features.
3) As explained in the documentation, a new inbound stream active can be detected by checking if the
Http2Stream2.managedState() of a Http2HeadersFrame is null. An outbound stream active can be detected
by adding a listener to the ChannelPromise of the write of the first Http2HeadersFrame. A stream
closed event can be listened to by adding a listener to the Http2Stream2.closeFuture().
4) Add a simple Http2FrameCodecBuilder and implement the missing frame types.
Result:
1) The Http2MultiplexCodec supports outbound flow control.
2) The Http2FrameCodec API makes it easy for a user to manage custom stream specific state and to create
new outbound streams.
3) The Http2FrameCodec API is much cleaner and easier to work with. Hacks like the ChannelCarryingHeadersFrame
are no longer necessary.
4) The Http2FrameCodec now also supports PING and SETTINGS frames. The Http2FrameCodecBuilder allows the Http2FrameCodec
to use some of the rich features of the Http2ConnectionHandler API.
Motivation:
The javadocs of Http2RemoteFlowController.isWritable(...) are incorrect.
Modifications:
Update javadocs to reflect reality.
Result:
Correct javadocs.
Motivation:
The labels identifying the frame types in Http2FrameLogger are not always correct.
Modification:
- Correct the string labels to indicate the right frame type in Http2FrameLogger
Result:
Logs are more correct.
Motivation:
In Http2ConnectionHandler we call flush(...) in channelReadComplete(...) to ensure we update the flow-controller and write stuff to the remote peer. We should flip the order, and so may be able to pick up more bytes.
Modifications:
Change order of calls.
Result:
Better performance
Motivation:
When we use pipeline.replace while there is still ongoing inbound traffic,
there is a problem that inbound messages may go to the wrong
handlers. So we add the new handler first and remove the old one after the add, so that
the next handler will be the correct one.
Modifications:
Use remove after addAfter instead of replace.
Result:
Fixed #6881
Motivation:
Http2FrameCodec should use Http2Settings.defaultSettings() when no Http2Settings were specified by the user.
Modifications:
Replace new Http2Settings() with Http2Settings.defaultSettings()
Result:
Use correct Http2Settings by default when using Http2FrameCodec in all cases.
Motivation:
Currently the logger creates a hex dump even if the log will not be written.
This is unnecessary GC overhead.
Modifications:
Restore optimization from #3492
Result:
Fixes #7025
Motivation:
There should be a way to allow graceful shutdown to wait for all open streams to close without a timeout. Using gracefulShutdownTimeoutMillis with a large value is a bit of a hack, and has a gotcha that sufficiently large values will overflow the long, resulting in a ClosingChannelFutureListener that executes immediately.
Modification:
Allow gracefulShutdownTimeoutMillis(-1) to be used to express waiting until all streams are closed.
Result:
We can now shutdown the connection without a forced timeout.
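A usage sketch of the new special value (the builder/handler setup is trimmed to the minimum and server(true) is just an example configuration):
```java
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2ConnectionHandlerBuilder;

public final class GracefulShutdownExample {
    public static void main(String[] args) {
        Http2ConnectionHandler handler = new Http2ConnectionHandlerBuilder()
                .server(true)
                .build();
        // -1 now means: wait until all open streams are closed, with no timeout.
        handler.gracefulShutdownTimeoutMillis(-1);
        System.out.println("graceful shutdown without timeout configured");
    }
}
```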
Motivation:
codec-http2 currently does not strictly enforce the HTTP/1.x semantics with respect to the number of headers defined in RFC 7540 Section 8.1 [1]. We currently don't validate the number of headers nor do we validate that the trailing headers should indicate EOS.
[1] https://tools.ietf.org/html/rfc7540#section-8.1
Modifications:
- DefaultHttp2ConnectionDecoder should only allow decoding of a single headers and a single trailers
- DefaultHttp2ConnectionEncoder should only allow encoding of a single headers and optionally a single trailers
Result:
Constraints of RFC 7540 restricting the number of headers/trailers is enforced.
Motivation:
Http2ServerUpgradeCodec should support Http2FrameCodec.
Modifications:
- Add support for Http2FrameCodec
- Add example that uses Http2FrameCodec
Result:
More flexible use of Http2ServerUpgradeCodec
Motivation:
DefaultHttp2ConnectionEncoder#writeHeaders attempts to find a stream object, and if one doesn't exist it tries to create one. However in the event that the local endpoint has received a RST_STREAM frame before writing the response headers we attempt to create a stream. Since this stream ID is for the incorrect endpoint we then generate a GO_AWAY for what appears to be a protocol error, but can instead be failed locally.
Modifications:
- Just fail the local promise in the above situation instead of sending a GO_AWAY
Result:
Less severe consequences if the server asynchronously sends headers after a RST_STREAM has been received.
Fixes https://github.com/netty/netty/issues/6906.
Motivation:
PR #6811 introduced public utility methods to decode a hex dump and its parts, but they are not visible from netty-common.
Modifications:
1. Move the `decodeHexByte`, `decodeHexDump` and `decodeHexNibble` methods into `StringUtils`.
2. Apply these methods where applicable.
3. Remove similar methods from other locations (e.g. `HpackHex` test class).
Result:
Less code duplication.
Motivation:
The Http2FrameLogger uses a custom format when logging events. We should use the more familiar format of 'channel event type: details' and single line logging for more consistent debugging.
Modifications:
- Http2FrameLogger should not use a StringBuilder and instead should directly use the Logger
- Http2FrameLogger should use the more consistent format defined above
Result:
Http2FrameLogger's logging format is more consistent with other log events.
Motivation:
In `HttpConversionUtil#toHttp2Headers` the status code conversion can be replaced with the `HttpResponseStatus#codeAsText` method.
Modifications:
Apply the `HttpResponseStatus#codeAsText` method.
Result:
Less allocations.
Motivation:
Before, we always expanded the buffer by the initialCapacity, which by default is 32 bytes. This may lead to many expansions of the buffer until we finally reach the point that the buffer can fit everything.
Modifications:
Double the buffer size until the threshold of >= 1024 is hit. After this, grow it by the initialCapacity.
Result:
Fewer expansions of the buffer (and so fewer allocations / copies) when the initialCapacity is not big enough. Fixes [#6864].
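An illustrative sketch of the growth strategy (not the actual Netty code): double the capacity while it is below 1024, then grow linearly by the initialCapacity:
```java
public final class BufferGrowthExample {
    // Doubles below the 1024 threshold, grows by initialCapacity afterwards.
    static int nextCapacity(int current, int initialCapacity) {
        return current < 1024 ? current << 1 : current + initialCapacity;
    }

    public static void main(String[] args) {
        int capacity = 32; // the default initialCapacity
        while (capacity < 2048) {
            System.out.println(capacity);
            capacity = nextCapacity(capacity, 32);
        }
        // Prints 32, 64, 128, ..., 1024, 1056, 1088, ... instead of 32, 64, 96, 128, ...
    }
}
```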
Motivation:
HpackHuffmanDecoder.Decoder did not do any bounds checking but just caught IndexOutOfBoundsException to detect if the array needs to grow. This can be very expensive because of fillInStackTrace().
Modifications:
Add proper bounds checking and grow the array if needed without catching IndexOutOfBoundsException.
Result:
Less overhead if the array needs to grow.
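A general sketch of the pattern described above (not the HpackHuffmanDecoder itself): check the capacity explicitly and grow, instead of letting an IndexOutOfBoundsException drive the resize:
```java
import java.util.Arrays;

public final class GrowableByteArray {
    private byte[] bytes = new byte[8];
    private int size;

    public void append(byte b) {
        if (size == bytes.length) {
            // Explicit bounds check: grow before writing, no exception (and no fillInStackTrace) involved.
            bytes = Arrays.copyOf(bytes, bytes.length << 1);
        }
        bytes[size++] = b;
    }

    public static void main(String[] args) {
        GrowableByteArray array = new GrowableByteArray();
        for (int i = 0; i < 100; i++) {
            array.append((byte) i);
        }
        System.out.println(array.size);
    }
}
```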
Motivation:
Depending on the use case it may make sense to increase or decrease the initial size of the buffer used during the HPACK huffman decode process. This is currently not exposed through the AbstractHttp2ConnectionHandlerBuilder.
Modifications:
- Add a method to AbstractHttp2ConnectionHandlerBuilder which allows the initial size of the buffer used during the HPACK huffman decode process to be configured.
Result:
AbstractHttp2ConnectionHandlerBuilder provides more control of codec-http2 knobs.
Motivation:
Some JUnit assert calls can be replaced by simpler ones.
Modifications:
Replace them with more suitable methods.
Result:
More informative JUnit reports.
Motivation:
DefaultHttp2FrameWriter has constructors that it would be a hassle to
expose as configuration parameters on Http2Codec. We should instead
make a builder for Http2Codec.
Modifications:
Get rid of the public constructors on Http2Codec and instead make sure
you can always use the builder where you would have used the constructor
before.
Result:
Http2Codec can be configured more flexibly, and the SensitivityDetector
can be configured.
Motivation:
Each StreamByteDistributor may allow for priority in different ways, but there are certain characteristics which are invalid regardless of the distribution algorithm. We should validate these invalid characteristics at the flow controller level.
Modifications:
- Disallow negative stream IDs from being used. These streams may be accepted by the WeightedFairQueueByteDistributor and cause state for other valid streams to be evicted.
- Improve unit tests to verify limits are enforced.
Result:
Boundary conditions related to the priority parameters are validated more strictly.
Motivation:
Chrome was randomly getting stuck loading the tiles examples.
Investigation showed that the Netty flow controller thought it had
nothing to send for the connection even though some streams had queued
data and window available.
Modifications:
Fixed an accounting error where an implicitly created parent was not
being added to the dependency tree, thus it and all of its children were
orphaned from the connection's tree and would never have data written.
Result:
Fixes#6621
Motivation:
Although effectively unused, the toString() of
WeightedFairQueueByteDistributor.State is useful for debugging. It
accidentally had an infinite loop, as it would recurse infinitely
between a parent and its child, which makes it less useful for
debugging.
Modifications:
Prune the infinite loop by using the parent's streamId instead of the
parent's toString().
Result:
Faster, less stack-overflowing toString()
Motivation:
The CI servers have reported leaks while building the HTTP/2 unit tests. The unit tests attempt to wait for the channels to be closed before exiting the test, but we should wait in case there are any tasks pending on the EventLoopGroup's task queues.
Modifications:
- Change the Future.sync() operations to Future.syncUninterruptibly()
- HTTP/2 unit tests which use local channel should wait for 5 seconds before shutting down the EventLoopGroups
Result:
More likely that any cleanup related tasks will execute before the unit tests are shutdown.
Motivation:
It is generally useful to override DefaultHttp2HeadersDecoder's creation of a new Http2Headers object so more optimized versions can be substituted if the use case allows for it.
Modifications:
- DefaultHttp2HeadersDecoder should support an overridable method to generate the new Http2Headers object for each decode operation
Result:
DefaultHttp2HeadersDecoder is more extensible.
Fixes https://github.com/netty/netty/issues/6591.
Motivation:
codec-http2 has some helper methods to write to ByteBuf in a big endian fashion. This is the default memory structure for ByteBuf so these helper methods are not necessary.
Modifications:
- remove writeUnsignedInt and writeUnsignedShort
Result:
codec-http2 has less ByteBuf helper methods which are not necessary.
Motivation:
HTTP/2 supports two ways to start on a non-TLS TCP connection:
the http/1.1 upgrade and the prior-knowledge approach.
Currently, the http2-server example only supports
starting via upgrade. We can do a simple dispatch by peeking at the first
bytes of the inbound data to see whether they match the prior-knowledge preface or not, and
determine which handlers to set up in the pipeline.
Modifications:
Add ClearTextHttp2ServerUpgradeHandler to support starting HTTP/2 over clear
text with both approaches, and update example/http2-server to support
this functionality.
Result:
Netty HTTP/2 and the example http2-server accept two ways to start
HTTP/2 over clear text.
Fixed memory leak problem
Update fields to final
Rename ClearText to cleartext
Addressed comments for code improvement
- Always prefer static, final, and private if possible
- Add UnstableApi annotation
- Used EmbeddedChannel.readInbound instead of unhandled inbound handler
- More assertion
Update javadoc for CleartextHttp2ServerUpgradeHandler
Rename ClearTextHttp2ServerUpgradeHandler to CleartextHttp2ServerUpgradeHandler
Removed redundant code about configure pipeline
nit: PriorKnowledgeHandler
Removed Mockito.spy, investigate conn state instead
Add Http2UpgradeEvent
Check null of the constructor arguments
Rename Http2UpgradeEvent to PriorKnowledgeUpgradeEvent
Update unit test
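A simplified sketch of the dispatch idea (the real CleartextHttp2ServerUpgradeHandler is more involved): peek at the first inbound bytes and choose between the prior-knowledge path and the HTTP/1.1 upgrade path; configurePriorKnowledge/configureUpgrade are placeholders for the actual pipeline setup:
```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.http2.Http2CodecUtil;

public class PriorKnowledgeDispatcher extends ByteToMessageDecoder {
    private static final ByteBuf PREFACE = Http2CodecUtil.connectionPrefaceBuf();

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int prefaceLength = PREFACE.readableBytes();
        int bytesRead = Math.min(in.readableBytes(), prefaceLength);

        if (!ByteBufUtil.equals(PREFACE, PREFACE.readerIndex(), in, in.readerIndex(), bytesRead)) {
            configureUpgrade(ctx);          // does not look like the preface: assume HTTP/1.1 upgrade
        } else if (bytesRead == prefaceLength) {
            configurePriorKnowledge(ctx);   // full prior-knowledge preface received
        } else {
            return;                         // not enough bytes yet to decide
        }
        ctx.pipeline().remove(this);        // hand the buffered bytes to the chosen handlers
    }

    private void configurePriorKnowledge(ChannelHandlerContext ctx) { /* install HTTP/2 handlers */ }

    private void configureUpgrade(ChannelHandlerContext ctx) { /* install HTTP/1.1 + upgrade handlers */ }
}
```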
Motivation:
gRPC (and potentially other libraries) has an optimized header processor that requires direct access to the HpackDecoder.
Modifications:
Make the HpackDecoder and its constructors public.
Result:
Fixes#6579
Motivation
Http2StreamChannel ignores the options of its parent channel when being created. That leads to surprising results when, for example, an unpooled allocator is silently replaced with the pooled allocator (the default setting).
Modification
Copy parent channel's options over to the Http2StreamChannel.
Result
Channel options are now consistent between Http2StreamChannel and its parent channel. The newly added test passes on this branch and fails on master. Fixes #6551.
Motivation:
We used some deprecated Mockito methods.
Modifications:
- Replace deprecated method usage
- Some cleanup
Result:
No more usage of deprecated Mockito methods. Fixes [#6482].
Motivation:
DefaultHttp2ConnectionDecoder#onSettingsRead processes the settings, and then sends a SETTINGS ACK to the remote peer. Processing the settings may result in frames which violate the previous settings being sent to the remote peer. The remote peer will not apply the new settings until it has received the SETTINGS ACK, and therefore we may violate the settings from the remote peer's perspective and the connection will be shutdown.
Modifications:
- We should send the SETTINGS ACK before we process the settings to ensure the peer receives the SETTINGS ACK before other frames which assume the settings have already been applied
Result:
Fixes https://github.com/netty/netty/issues/6520.
Motivation:
Http2SecurityUtil currently lists HTTP/2 ciphers as documented by
JSSE docs [1] and the IANA [2] using the TLS_ prefix.
In some IBM J9 implementations the SSL_ prefix is used, which is also
covered by the JSSE.
[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html
[2] http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
Modifications:
Add both variants of the cipher names (prefixed with SSL_ in addition to TLS_)
Result:
HTTP/2 connections can now be created using the SslProvider.JDK on IBM J9
and potentially other JVMs which use the SSL_ prefix.
Motivation:
A previous commit added methods to AbstractHttp2ConnectionHandlerBuilder but forgot to expose them in Http2ConnectionHandlerBuilder.
Modifications:
- expose the new methods in Http2ConnectionHandlerBuilder
Result:
Http2ConnectionHandlerBuilder supports the new configuration options.
Motivation:
The internal.hpack classes are no longer exposed in our public APIs and can be made package private in the http2 package.
Modifications:
- Make the hpack classes package private in the http2 package
Result:
Less APIs exposed as public.
Motivation:
The timeouts used in the Http2ConnectionRoundtripTest seem to be too low when leak-detection is enabled, so we sometimes get failed tests due to timeouts.
Modifications:
Increase timeouts.
Result:
Fixes [#6442].
Motivation:
DefaultHttp2FrameWriter contains important logic for how http2
frames are encoded into binary. Currently, Netty has no unit test
just for DefaultHttp2FrameWriter.
Modifications:
Write DefaultHttp2FrameWriterTest to test DefaultHttp2FrameWriter
Result:
The coverage of DefaultHttp2FrameWriter is increased, and the
class is more reliable.
Motivation:
In a previous PR about handling unknown frames in the middle of a header
block, I didn't notice and reuse the existing check for whether a header
is being processed, and I added a redundant method with the same functionality.
The redundant method could lead to some misleading
situations.
Modifications:
Removed redundant code on DefaultHttp2FrameReader
Result:
The code is more readable
Motivation:
Some HTTP/2 unit tests use LocalChannel. LocalChannel's doClose method will ensure any pending items in the queue will be released, but it may execute a Runnable on the peer's EventLoop to ensure the peer's queue is also cleaned up. The HTTP/2 unit tests close the event loop groups with no wait time so that unit tests will execute quickly, but if the doClose Runnable is in the EventLoop's queue it will not run and thus the items in the queue will not be released.
Modifications:
- Ensure all HTTP/2 unit tests which use LocalChannel wait for both client and server channels to be closed before closing the EventLoop.
Result:
Related to https://github.com/netty/netty/issues/5850.
Motivation:
DefaultHttp2FrameReader contains the logic for parsing the network
traffic from an http2 client,
and it also validates whether the content is legal.
So keeping a high coverage rate on it will increase the stability of the api.
Modifications:
Add unit test on DefaultHttp2FrameReader
Result:
Coverage rate increased
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
codec-http2 couples the dependency tree state with the remainder of the stream state (Http2Stream). This makes implementing constraints where stream state and dependency tree state diverge in the RFC challenging. For example the RFC recommends retaining dependency tree state after a stream transitions to closed [1]. Dependency tree state can be exchanged on streams in IDLE. In practice clients may use stream IDs for the purpose of establishing QoS classes and therefore retaining this dependency tree state can be important to client perceived performance. It is difficult to limit the total amount of state we retain when stream state and dependency tree state is combined.
Modifications:
- Remove dependency tree, priority, and weight related items from public facing Http2Connection and Http2Stream APIs. This information is optional to track and depends on the flow controller implementation.
- Move all dependency tree, priority, and weight related code from DefaultHttp2Connection to WeightedFairQueueByteDistributor. This is currently the only place which cares about priority. We can pull out the dependency tree related code in the future if it is generally useful to expose for other implementations.
- DefaultHttp2Connection should explicitly limit the number of reserved streams now that IDLE streams are no longer created.
Result:
More compliant with the HTTP/2 RFC.
Fixes https://github.com/netty/netty/issues/6206.
[1] https://tools.ietf.org/html/rfc7540#section-5.3.4
Motivation:
If an HTTP/2 client writes data before the connection preface the peer will shutdown the socket. Depending on what is in the pipeline (e.g. SslHandler), different evaluation criteria may be required to infer when codec-http2 has written the connection preface on behalf of the client. This can lead to unnecessary complexity and error prone/racy application code.
Modifications:
- Introduce a user event that is fired up the pipeline when codec-http2 writes the connection preface
Result:
Reliable mechanism for applications to use to know when connection preface has been written (related to https://github.com/netty/netty/issues/6272).
Motivation:
When an HTTP server is listening in plaintext mode, it doesn't have
a chance to negotiate "h2" in the TLS handshake. HTTP/1 clients
that are not expecting an HTTP/2 server will accidentally send a request
that isn't an upgrade, which the HTTP/2 decoder will not
understand. The decoder treats the bytes as hex and adds them to
the error message.
These error messages are hard to understand by humans, and result
in extra, manual work to decode.
Modification:
If the first bytes of the request are not the preface, the decoder
will now see if they are an HTTP/1 request first. If so, the error
message will include the method and path of the original request in
the error message.
In case the path is long, the decoder will check up to the first
1024 bytes to see if it matches. This could be a DoS vector if
tons of bad requests or other garbage come in. A future optimization
would be to treat the first few bytes as an AsciiString and not do
any Charset decoding. ByteBuf.toCharSequence alludes to such an
optimization.
The code has been left simple for the time being.
Result:
Faster identification of errant HTTP requests.
Motivation:
2fd42cfc6b fixed a bug related to encoding headers but it also introduced a throws statement onto the Http2FrameWriter methods which write headers. This throws statement makes the API more verbose and is not necessary because we can communicate the failure in the ChannelFuture that is returned by these methods.
Modifications:
- Remove throws from all Http2FrameWriter methods.
Result:
Http2FrameWriter APIs do not propagate checked exceptions.
Motivation:
RFC 7540 Section 5.5 says that it is not permitted to send an extension
frame in the middle of a header block, and that this must be treated as a
protocol error.
Modifications:
When an extension frame (called an unknown frame in Netty) is received,
verify whether a headersContinuation exists.
Result:
When an extension frame is received in the middle of a header block,
a connection error is thrown and the connection is closed.
This prevents the first request, which hasn't ack'ed the settings, from causing a GOAWAY when we would
be under the maxHeaderListSizeGoAway limit that would have been set after the settings ack.
Motivation:
If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated it aborts immediately and sends a RST_STREAM frame for what ever stream caused the issue. Because HPACK is stateful this means that the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to keep processing to keep the HPACK state in sync with our peer, or completely close the connection.
If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE it checks the limit and encodes at the same time. This may result in modifying the HPACK local state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.
Modifications:
- HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as a limit. We should honor the specifications default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.
Result:
Fixes https://github.com/netty/netty/issues/6209.
Motivation:
- Decoder#decodeULE128 has a bounds bug and cannot decode Integer.MAX_VALUE
- Decoder#decodeULE128 doesn't support values greater than can be represented with Java's int data type. This is a problem because there are cases that require at least unsigned 32 bits (max header table size).
- Decoder#decodeULE128 treats overflowing the data type and invalid input the same. This can be misleading when inspecting the error that is thrown.
- Encoder#encodeInteger doesn't support values greater than can be represented with Java's int data type. This is a problem because there are cases that require at least unsigned 32 bits (max header table size).
Modifications:
- Correct the above issues and add unit tests.
Result:
Fixes https://github.com/netty/netty/issues/6210.
Motivation:
Build failures have been observed with 2 second timeouts on the CI servers. We should make the timeouts longer to reduce false positive test failures due to tests timing out prematurely.
Modifications:
- Increase timeouts from 2 and 3 seconds to 5 seconds.
Result:
Less false positive test failures.
Motivation:
When DefaultHttp2Connection removes a stream it iterates over all children and adds them as children to the parent of the stream being removed. This process may remove elements from the child map while iterating without using the iterator's remove() method. This is generally unsafe and may result in an undefined iteration.
Modifications:
- We should use the Iterator's remove() method while iterating over the child map
Result:
Fixes https://github.com/netty/netty/issues/6163
Motivation:
Currently clients attempting to send headers that are too large receive
a RST frame. This makes it harder than needed for implementations on top
of netty to handle this in a graceful way.
Modifications:
When the Decoder throws a StreamError of type FRAME_SIZE_ERROR, the
Http2ConnectionHandler will now attempt to send an Http2Header with
status 431 and endOfStream=true
Result:
Implementations now do not have to subclass parts of netty to handle
431s
Motivation:
The 2 second timeout to bootstrap the test can time out on the build servers. We should increase the timeout so that underpowered or overworked machines are less likely to generate false failures.
Modifications:
- HttpToHttp2ConnectionHandlerTest setup timeout changed from 2 to 5 seconds
Result:
Less false build failures.
Motivation:
IntelliJ issues several warnings.
Modifications:
* `ClientCookieDecoder` and `ServerCookieDecoder`:
* `nameEnd`, `valueBegin` and `valueEnd` don't need to be initialized
* `keyValLoop` loop doesn't need to be labelled, as it's the innermost one (same thing for labelled breaks)
* Remove `if (i != headerLen)` as condition is always true
* `ClientCookieEncoder` javadoc still mentions old logic
* `DefaultCookie`, `ServerCookieEncoder` and `DefaultHttpHeaders` use ternary ops that can be turned into simple boolean ones
* `DefaultHeaders` uses a for(int) loop over an array. It can be turned into a foreach one as javac doesn't allocate an iterator to iterate over arrays
* `DefaultHttp2Headers` and `AbstractByteBuf` `equal` can be turned into a single boolean statement
Result:
Cleaner code
Motivation:
LastInboundHandler maintains 2 queues which may contain the same data and tries to match these up when you read elements out of it. Because of this it can happen that you remove an element only out of one queue and so double free stuff later.
Modifications:
Just use one "queue" to store things.
Result:
Not possible to only remove things from one queue and so get into trouble later when releasing everything that sits in the handler.
Motivation:
When a LocalChannel is closed it is responsible to ensure all queued objects are released. When a LocalChannel is closed it will also close its peer channel. However in HTTP/2 unit tests we may not wait until all channels have completed the shutdown process before destroying the threads and exiting the test. This may mean buffers are GCed before they are released and be reported as a leak.
Modifications:
- In HTTP/2 tests when we use LocalChannel we should wait for all channels to close before exiting the test and cleaning up the associated EventLoopGroups.
Result:
More correct usage of LocalChannel in HTTP/2 tests.
Motivation:
We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies the releasing of ReferenceCounted objects a bit. The problem with this is that while it simplifies things it increases memory usage a lot, as memory may not be freed up in a timely manner.
Modifications:
- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.
Result:
Less memory needed to build netty while running the tests.
Motivation:
If a stream is not able to send any data (flow control window for the stream is exhausted) but has descendants who can send data then WeightedFairQueueByteDistributor may incorrectly modify the pseudo time and also double add the associated state to the parent's priority queue. The pseudo time should only be modified if a node is moved in the priority tree, and not if there happens to be no active streams in its descendent tree and a descendent is moved (e.g. removed from the tree because it wrote all data and the last data frame was EOS). Also the state objects for WeightedFairQueueByteDistributor should only appear once in any queue. If this condition is violated the pseudo time accounting would be biased and assumptions in WeightedFairQueueByteDistributor would be invalidated.
Modifications:
- WeightedFairQueueByteDistributor#isActiveCountChangeForTree should not allow re-adding to the priority queue if we are currently processing a node in the distribution algorithm. The distribution algorithm will re-evaluate if the node should be re-added on the tail end of the recursion.
Result:
Fixes https://github.com/netty/netty/issues/5980
Motivation:
Currently it is not possible to have an HTTP/2 server send non-default
initial settings to clients when doing the initial connection handshake
Modifications:
Add additional constructors to Http2Codec allowing users to specify the
initial settings to send to the client and apply locally
Result:
You can now specify non-default initial settings.
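As a rough illustration, non-default settings can be built with the Http2Settings fluent setters; exactly how they are handed to Http2Codec's new constructors is not shown here, and the values are made up:

```java
import io.netty.handler.codec.http2.Http2Settings;

// Build the initial SETTINGS a server could send to the client and apply locally.
final class InitialSettingsExample {
    static Http2Settings serverSettings() {
        return new Http2Settings()
                .maxConcurrentStreams(128)
                .initialWindowSize(1024 * 1024)
                .maxHeaderListSize(16 * 1024);
    }
}
```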
Motivation:
The SETTINGS_MAX_HEADER_LIST_SIZE limit, as enforced by the HPACK Encoder, should be a stream error and not apply to the whole connection.
Modifications:
Made the necessary changes for the exception to be of type StreamException.
Result:
A HEADERS frame exceeding the limit only affects that specific stream.
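A hedged sketch of the difference in scope, using Http2Exception's static factories; the wrapper class and message text are illustrative:

```java
import io.netty.handler.codec.http2.Http2Error;
import io.netty.handler.codec.http2.Http2Exception;

final class HeaderListSizeErrors {
    static Http2Exception streamScoped(int streamId) {
        // A stream error only resets the offending stream ...
        return Http2Exception.streamError(streamId, Http2Error.PROTOCOL_ERROR,
                "Header list size exceeded SETTINGS_MAX_HEADER_LIST_SIZE");
    }

    static Http2Exception connectionScoped() {
        // ... while a connection error takes down the whole connection.
        return Http2Exception.connectionError(Http2Error.PROTOCOL_ERROR,
                "Header list size exceeded SETTINGS_MAX_HEADER_LIST_SIZE");
    }
}
```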
Motivation:
Commit 908464f161 also introduced a change to guard against re-entrance but failed to correctly handle the debugData and promise.
Modifications:
Release debugData and correctly notify the promise.
Result:
No more buffer leak and promise is always notified.
Motivation:
The weight header with the default value is not set, but it should be (RFC 7540, Section 5.3.5: "Pushed streams initially depend on their associated stream … are assigned a default weight of 16").
Modifications:
Add STREAM_WEIGHT header.
Result:
Correctly add headers.
Motivation:
The responsibility for retaining the settings values and enforcing the settings constraints is spread out in different areas of the code, and the settings may be initialized with different values than the defaults specified in the RFC. This should not be allowed by default, and interfaces which are responsible for maintaining/enforcing settings state should clearly indicate the restriction that the settings may only be set by the codec upon receipt of a SETTINGS ACK frame.
Modifications:
- Encoder, Decoder, and the Headers Encoder/Decoder no longer expose public constructors that allow the default settings to be changed.
- Http2HeadersDecoder#maxHeaderSize() exists to provide some bound when headers/continuation frames are being aggregated. However this is roughly the same as SETTINGS_MAX_HEADER_LIST_SIZE (besides the 32 octets of overhead for each header field) and can be used instead of attempting to keep two independent values in sync.
- Encoding headers now enforces SETTINGS_MAX_HEADER_LIST_SIZE at the octet level. Previously the header encoder compared the number of header key/value pairs against SETTINGS_MAX_HEADER_LIST_SIZE instead of the number of octets (plus 32 octets of overhead per field).
- DefaultHttp2ConnectionDecoder#onData calls shouldIgnoreHeadersOrDataFrame but may swallow exceptions from this method. This means a STREAM_RST frame may not be sent when it should be for an unknown stream, which violates the RFC. The exception is no longer swallowed.
Result:
Default settings state is enforced and interfaces related to settings state are clarified.
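A small sketch of the octet-level accounting from RFC 7541 Section 4.1 (name length plus value length plus 32 octets of overhead per field); it assumes single-byte characters and is not Netty's actual implementation:

```java
// Each header field contributes name + value + 32 octets to the header list size.
final class HeaderListSize {
    static long sizeOf(CharSequence[][] headers) {
        long size = 0;
        for (CharSequence[] header : headers) {
            // header[0] is the name, header[1] is the value; lengths taken as octets
            // under the single-byte-character assumption.
            size += header[0].length() + header[1].length() + 32;
        }
        return size;
    }
}
```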
Motivation:
The HTTP/2 child channel API does not allow creating local/outbound HTTP/2 streams.
Modifications:
Add an Http2StreamChannelBootstrap that allows creating outbound streams.
Result:
The HTTP/2 child channel API now supports outbound streams.
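A minimal usage sketch; this follows the bootstrap API as it exists today, which may differ in detail from the shape introduced by this change:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http2.Http2StreamChannel;
import io.netty.handler.codec.http2.Http2StreamChannelBootstrap;
import io.netty.util.concurrent.Future;

// Open a local/outbound stream on an existing HTTP/2 connection channel.
final class OutboundStreamExample {
    static Future<Http2StreamChannel> open(Channel http2ConnectionChannel) {
        return new Http2StreamChannelBootstrap(http2ConnectionChannel)
                .handler(new ChannelInboundHandlerAdapter()) // stream-level handler
                .open();
    }
}
```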
Motivation:
HttpToHttp2ConnectionHandler.write(ctx, msg, promise) ignores the HttpConversionUtil.ExtensionHeaderNames.STREAM_DEPENDENCY_ID header in outbound messages. Besides this, InboundHttp2ToHttpAdapter also does not add the STREAM_DEPENDENCY_ID and STREAM_WEIGHT headers.
Modifications:
Respect STREAM_DEPENDENCY_ID and STREAM_WEIGHT
Result:
Correctly respect STREAM_DEPENDENCY_ID and STREAM_WEIGHT.
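A hedged sketch of attaching the two extension headers to an outbound HTTP/1 request before HttpToHttp2ConnectionHandler translates it; the URI, stream id and weight values are made up:

```java
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http2.HttpConversionUtil;

final class DependencyHeadersExample {
    static FullHttpRequest request() {
        FullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "/resource");
        // Declare the stream this request should depend on and its weight.
        request.headers().setInt(
                HttpConversionUtil.ExtensionHeaderNames.STREAM_DEPENDENCY_ID.text(), 3);
        request.headers().setShort(
                HttpConversionUtil.ExtensionHeaderNames.STREAM_WEIGHT.text(), (short) 16);
        return request;
    }
}
```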
Motivation:
The build doesn't seem to enforce this, so unused imports piled up.
Modifications:
Removed unused import lines.
Result:
Fewer unused imports.
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
Motivation:
Http2EventAdapter implements the Http2FrameListener interface but declares the #onUnknownFrame(...) method without the interface's throws Http2Exception clause.
Modifications:
Add throws Http2Exception.
Result:
More correct method signature.
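For reference, a subclass can now itself declare the checked exception, since the adapter's signature matches Http2FrameListener; the subclass below is purely illustrative:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2EventAdapter;
import io.netty.handler.codec.http2.Http2Exception;
import io.netty.handler.codec.http2.Http2Flags;

final class LoggingEventAdapter extends Http2EventAdapter {
    @Override
    public void onUnknownFrame(ChannelHandlerContext ctx, byte frameType, int streamId,
                               Http2Flags flags, ByteBuf payload) throws Http2Exception {
        // inspect or ignore the unknown frame; may throw Http2Exception if it is invalid
    }
}
```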
Motivation:
The StreamBufferingEncoder is missing documentation of what happens
to buffered frames when it's closed.
Modifications:
Added this missing piece of documentation.
Result:
Improved documentation.
Motivation:
codec-http2 is really loud!
Modification:
Allow users to select how to log in the Http2Codec.
Result:
We can run Http2Codec and log however we like.
Motivation:
HTTP/2 Decoder#decodeULE128 currently tolerates more bytes than necessary when attempting to detect overflow. The usage of this method also currently requires an additional overflow conditional at the call site.
Modifications:
- Integrate the first byte into Decoder#decodeULE128 which allows us to detect overflow reliably and avoid overflow checks outside of this method.
Result:
Fewer conditionals and earlier overflow detection in Decoder#decodeULE128.
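A generic sketch of ULE128 continuation decoding with early overflow detection (RFC 7541 Section 5.1); this is not Netty's actual Decoder#decodeULE128, just an illustration of the idea of folding the prefix into the accumulation so callers need no extra overflow check:

```java
import io.netty.buffer.ByteBuf;

final class Ule128 {
    /** @param prefix the value of the N-bit prefix that preceded the continuation bytes */
    static int decode(ByteBuf in, int prefix) {
        long result = prefix;
        // An int value needs at most 5 continuation bytes (7 bits each).
        for (int shift = 0; shift < 35 && in.isReadable(); shift += 7) {
            byte b = in.readByte();
            result += (long) (b & 0x7F) << shift;
            if (result > Integer.MAX_VALUE) {
                throw new IllegalArgumentException("ULE128 value overflows int");
            }
            if ((b & 0x80) == 0) { // no continuation bit: value is complete
                return (int) result;
            }
        }
        throw new IllegalArgumentException("ULE128 value incomplete or too long");
    }
}
```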
Motivation:
Http2ServerDowngrader doesn't mark chunked requests as chunked, even
though the natural conversion from HTTP/2 requests to HTTP/1.1 requests
is to chunked ones.
Modifications:
Mark requests that aren't already complete as chunked.
Result:
Requests will be chunked, and can later be aggregated if necessary.
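A rough sketch of the idea; the helper below is hypothetical, not Http2ServerDowngrader's code:

```java
import io.netty.handler.codec.http.FullHttpMessage;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpUtil;

// A converted request that is not already complete (i.e. not a FullHttpMessage)
// gets Transfer-Encoding: chunked, so an HttpObjectAggregator further down the
// pipeline can aggregate it if necessary.
final class MarkChunked {
    static void markIfIncomplete(HttpMessage converted) {
        if (!(converted instanceof FullHttpMessage)) {
            HttpUtil.setTransferEncodingChunked(converted, true);
        }
    }
}
```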
Motivation:
Void promises need special treatment, as they don't behave like normal promises. One
cannot add a listener to a void promise for example.
Modifications:
Inspected the code for addListener() calls, and added extra logic for void promises
where necessary. Essentially, when writing a frame with a void promise, any errors
are reported via the channel pipeline's exceptionCaught event.
Result:
The HTTP/2 codec has better support for void promises.
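A minimal sketch of the void-promise branch described above; the helper is illustrative, not the codec's actual code:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

// A void promise cannot carry listeners or a failure, so the error is surfaced
// through the pipeline's exceptionCaught event instead.
final class PromiseErrors {
    static void notifyFailure(ChannelHandlerContext ctx, ChannelPromise promise, Throwable cause) {
        if (promise.isVoid()) {
            ctx.fireExceptionCaught(cause);
        } else {
            promise.tryFailure(cause);
        }
    }
}
```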
Motivation:
We need to ensure we only call debugData.release() if we called debugData.retain(); otherwise we may see an IllegalReferenceCountException.
Modifications:
Only call release() if we called retain().
Result:
No more IllegalReferenceCountException possible.
Motivation:
We need to call debugData.retain() before we forward the frame to the pipeline as ByteToMessageDecoder will call release() on the buffer.
Modifications:
Correctly retain debugData and fix the unit test to test for it.
Result:
No more IllegalReferenceCountException when using the Http2FrameCodec.
Motivation:
The HTTP/2 spec demands that the maximum value for SETTINGS_HEADER_TABLE_SIZE is an unsigned 32-bit integer.
Modifications:
Change the limit to unsigned 32-bit integer and add tests.
Result:
Compliant with the RFC.
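A small sketch of the bound check; the constant and method names are illustrative, not Netty's:

```java
// SETTINGS_HEADER_TABLE_SIZE may use the full unsigned 32-bit range, so the
// value is validated against 0xFFFFFFFFL rather than Integer.MAX_VALUE.
final class HeaderTableSizeBound {
    private static final long MAX_UNSIGNED_INT = 0xFFFFFFFFL;

    static void validate(long headerTableSize) {
        if (headerTableSize < 0 || headerTableSize > MAX_UNSIGNED_INT) {
            throw new IllegalArgumentException(
                    "headerTableSize must be an unsigned 32-bit integer: " + headerTableSize);
        }
    }
}
```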
Motivation:
DefaultHttp2ConnectionEncoder.FlowControlledData.write() does not
complete the write promise of empty HTTP/2 DATA frames until either a
non-DATA frame, a non-empty DATA frame or a DATA frame with endOfStream
set is written. This means the write promise of the empty DATA frame is
notified much later than it could be.
Modifications:
- Notify the write promise of the empty DATA frames immediately if the
queue contains only empty DATA frames
Result:
The write promise of an empty DATA frame is notified sooner.
Motivation:
The HTTP/2 HPACK Encoder class has some code which is only used for test purposes. This code can be removed to reduce complexity and member variable count.
Modifications:
- Remove test code and update unit tests
- Other minor cleanup
Result:
Test code is removed from operational code.
Motivation:
In DefaultHttp2ConnectionEncoder we fail the promise in the FlowControlledData.error(...) method but also add it to the CoalescingBufferQueue. This can lead to the promise being failed by error(...) before it can be failed by the CoalescingBufferQueue.
This can lead to confusing and misleading errors in the log like:
2016-08-12 09:47:43,716 nettyIoExecutorGroup-1-9 [WARN ] PromiseNotifier - Failed to mark a promise as failure because it's done already: DefaultChannelPromise@374225e0(failure: javax.net.ssl.SSLException: SSLEngine closed already)
javax.net.ssl.SSLException: SSLEngine closed already
at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source) ~[netty-all-4.1.5.Final-SNAPSHOT.jar:?]
Modifications:
Ensure we only fail the queue (which will also fail the promise).
Result:
No more misleading logs.
Motivation:
As per the HTTP/2 spec, exceeding the MAX_CONCURRENT_STREAMS should be treated as a stream error as opposed to a connection error.
"An endpoint that receives a HEADERS frame that causes its advertised concurrent stream limit to be exceeded MUST treat this as a stream error (Section 5.4.2) of type PROTOCOL_ERROR or REFUSED_STREAM." http://httpwg.org/specs/rfc7540.html#rfc.section.5.1.2
Modifications:
Make the error a stream error.
Result:
It's a stream error.
Motivation:
21e8d84b79 changed the way bounds checking was done, but a bounds check in the READ_LITERAL_HEADER_NAME_LENGTH_PREFIX case was still using an old value. This would delay when the bounds check would actually be done and potentially allow more allocation than necessary.
Modifications:
- Use the new length (index) in the bounds check instead of an old length (nameLength) which had not yet been assigned to the new value.
Result:
More correct bounds checking.