Motivation:
Previously we always expanded the buffer by the initialCapacity, which by default is 32 bytes. This may lead to many expansions of the buffer until it finally reaches the point where it can fit everything.
Modifications:
Double the buffer size until the threshold of 1024 bytes is reached; after that, grow it by the initialCapacity.
Result:
Fewer expansions of the buffer (and so fewer allocations / copies) when the initialCapacity is not big enough. Fixes [#6864].
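A minimal sketch of the growth policy described above (the method name and threshold constant are illustrative, not the actual Netty internals):

```java
// Illustrative only: double below the threshold, then grow linearly.
static int nextCapacity(int currentCapacity, int initialCapacity) {
    if (currentCapacity < 1024) {
        return currentCapacity << 1;            // double until >= 1024
    }
    return currentCapacity + initialCapacity;   // then grow by initialCapacity
}
```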
Motivation:
HpackHuffmanDecoder.Decoder did not do any bounds checking but just caught IndexOutOfBoundsException to detect whether the array needed to grow. This can be very expensive because of fillInStackTrace().
Modifications:
Add proper bounds checking and grow the array if needed without catching IndexOutOfBoundsException.
Result:
Less overhead if the array needs to grow.
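A sketch of the bounds-check-and-grow pattern (field and method names are illustrative):

```java
private byte[] dest = new byte[32];

private void append(int index, byte b) {
    if (index >= dest.length) {
        // Grow before writing instead of catching IndexOutOfBoundsException,
        // which would pay for fillInStackTrace() on every resize.
        dest = java.util.Arrays.copyOf(dest, dest.length << 1);
    }
    dest[index] = b;
}
```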
Motivation:
Depending on the use case it may make sense to increase or decrease the initial size of the buffer used during the HPACK huffman decode process. This is currently not exposed through the AbstractHttp2ConnectionHandlerBuilder.
Modifications:
- Add a method to AbstractHttp2ConnectionHandlerBuilder which allows the initial size of the buffer used during the HPACK huffman decode process to be configured.
Result:
AbstractHttp2ConnectionHandlerBuilder provides more control of codec-http2 knobs.
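A hedged usage sketch; the builder method name below is assumed for illustration and may not match the final API exactly:

```java
// Hypothetical method name for the new knob.
Http2ConnectionHandler handler = new Http2ConnectionHandlerBuilder()
        .initialHuffmanDecodedDataBufferSize(64) // assumed name, size in bytes
        .build();
```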
Motivation:
Some JUnit assert calls can be replaced by simpler, more suitable ones.
Modifications:
Replace them with more suitable methods.
Result:
More informative JUnit reports.
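For example, a typical replacement of this kind (illustrative, not necessarily taken from the diff):

```java
// Before: a failure carries no information about the values involved.
assertTrue(expected.equals(actual));
// After: a failure reports the expected and actual values.
assertEquals(expected, actual);
```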
Motivation:
DefaultHttp2FrameWriter has constructors that it would be a hassle to
expose as configuration parameters on Http2Codec. We should instead
make a builder for Http2Codec.
Modifications:
Get rid of the public constructors on Http2Codec and instead make sure
you can always use the builder where you would have used the constructor
before.
Result:
Http2Codec can be configured more flexibly, and the SensitivityDetector
can be configured.
Motivation:
Each StreamByteDistributor may allow for priority in different ways, but there are certain characteristics which are invalid regardless of the distribution algorithm. We should validate these invalid characteristics at the flow controller level.
Modifications:
- Disallow negative stream IDs from being used. These streams may be accepted by the WeightedFairQueueByteDistributor and cause state for other valid streams to be evicted.
- Improve unit tests to verify limits are enforced.
Result:
Boundary conditions related to the priority parameters are validated more strictly.
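A minimal sketch of the kind of guard added at the flow-controller level (the method name is illustrative):

```java
// Stream IDs on a connection are never negative; reject them up front so a
// distributor cannot be asked to track bogus state.
static void checkValidStreamId(int streamId) {
    if (streamId < 0) {
        throw new IllegalArgumentException("streamId must be >= 0: " + streamId);
    }
}
```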
Motivation:
Chrome was randomly getting stuck loading the tiles examples.
Investigation showed that the Netty flow controller thought it had
nothing to send for the connection even though some streams had queued
data and window available.
Modifications:
Fixed an accounting error where an implicitly created parent was not
being added to the dependency tree, thus it and all of its children were
orphaned from the connection's tree and would never have data written.
Result:
Fixes #6621
Motivation:
Although effectively unused, the toString() of
WeightedFairQueueByteDistributor.State is useful for debugging. It
accidentally had an infinite loop, as it would recurse infinitely
between a parent and its child, which makes it less useful for
debugging.
Modifications:
Prune the infinite loop by using the parent's streamId instead of the
parent's toString().
Result:
Faster, less stack-overflowing toString()
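The shape of the fix, as a self-contained sketch (not the actual State class):

```java
// Illustrative: print the parent's streamId rather than calling parent.toString(),
// which would recurse back into this node via the parent's child list.
final class State {
    final int streamId;
    State parent;

    State(int streamId) {
        this.streamId = streamId;
    }

    @Override
    public String toString() {
        return "State{streamId=" + streamId
                + ", parentStreamId=" + (parent == null ? 0 : parent.streamId) + '}';
    }
}
```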
Motivation:
The CI servers have reported leaks while building the HTTP/2 unit tests. The unit tests attempt to wait for the channels to be closed before exiting the test, but we should wait in case there are any tasks pending on the EventLoopGroup's task queues.
Modifications:
- Change the Future.sync() operations to Future.syncUninterruptibly()
- HTTP/2 unit tests which use local channel should wait for 5 seconds before shutting down the EventLoopGroups
Result:
More likely that any cleanup-related tasks will execute before the unit tests shut down.
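A sketch of the resulting shutdown sequence in a test (the channel and group variables are assumed to be set up by the test):

```java
// Close uninterruptibly, then give pending tasks up to 5 seconds to run before
// the EventLoopGroup terminates.
clientChannel.close().syncUninterruptibly();
serverChannel.close().syncUninterruptibly();
group.shutdownGracefully(0, 5, TimeUnit.SECONDS).syncUninterruptibly();
```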
Motivation:
It is generally useful to override DefaultHttp2HeadersDecoder's creation of a new Http2Headers object so more optimized versions can be substituted if the use case allows for it.
Modifications:
- DefaultHttp2HeadersDecoder should support an overridable method to generate the new Http2Headers object for each decode operation
Result:
DefaultHttp2HeadersDecoder is more extensible.
Fixes https://github.com/netty/netty/issues/6591.
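A hedged sketch of how such an extension point can be used; the protected factory method name newHeaders() is assumed here:

```java
// Assumes an overridable factory method; the real method name may differ.
final class CustomHeadersDecoder extends DefaultHttp2HeadersDecoder {
    @Override
    protected Http2Headers newHeaders() {
        // Substitute a specialized Http2Headers implementation, e.g. without validation.
        return new DefaultHttp2Headers(false);
    }
}
```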
Motivation:
codec-http2 has some helper methods to write to a ByteBuf in big-endian fashion. This is the default byte order for ByteBuf, so these helper methods are not necessary.
Modifications:
- remove writeUnsignedInt and writeUnsignedShort
Result:
codec-http2 has fewer unnecessary ByteBuf helper methods.
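ByteBuf writes multi-byte values in big-endian (network) order by default, so plain writeInt / writeShort already cover the unsigned cases, e.g.:

```java
ByteBuf buf = Unpooled.buffer();
long unsignedInt = 0xFFFFFFFFL;   // fits in 32 bits unsigned
int unsignedShort = 0xFFFF;       // fits in 16 bits unsigned
buf.writeInt((int) unsignedInt);  // big endian by default
buf.writeShort(unsignedShort);    // big endian by default
```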
Motivation:
HTTP/2 supports two ways to start on a non-TLS TCP connection:
the HTTP/1.1 upgrade and the prior-knowledge approach.
Currently, the http2-server example only supports
starting via upgrade. We can do a simple dispatch by peeking at the first
inbound bytes, checking whether they match the prior-knowledge preface, and
determining which handlers to set in the pipeline.
Modifications:
Add ClearTextHttp2ServerUpgradeHandler to support starting HTTP/2 over
cleartext with both approaches, and update example/http2-server to support
this functionality.
Result:
Netty HTTP/2 and the example http2-server accept both ways to start
HTTP/2 over cleartext.
Fixed memory leak problem
Update fields to final
Rename ClearText to cleartext
Addressed comments for code improvement
- Always prefer static, final, and private if possible
- Add UnstableApi annotation
- Used EmbeddedChannel.readInbound instead of unhandled inbound handler
- More assertions
Update javadoc for CleartextHttp2ServerUpgradeHandler
Rename ClearTextHttp2ServerUpgradeHandler to CleartextHttp2ServerUpgradeHandler
Removed redundant code for configuring the pipeline
nit: PriorKnowledgeHandler
Removed Mockito.spy, investigate conn state instead
Add Http2UpgradeEvent
Check null of the constructor arguments
Rename Http2UpgradeEvent to PriorKnowledgeUpgradeEvent
Update unit test
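A rough, illustrative sketch of the dispatch idea (peek at the first inbound bytes and compare them with the connection preface); everything except Http2CodecUtil.connectionPrefaceBuf() is made up for the example:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.http2.Http2CodecUtil;

import java.util.List;

// Illustrative dispatcher, not the actual CleartextHttp2ServerUpgradeHandler.
final class PriorKnowledgeOrUpgradeDispatcher extends ByteToMessageDecoder {
    private static final ByteBuf PREFACE = Http2CodecUtil.connectionPrefaceBuf();

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        int prefaceLength = PREFACE.readableBytes();
        int bytesRead = Math.min(in.readableBytes(), prefaceLength);
        if (!ByteBufUtil.equals(PREFACE, PREFACE.readerIndex(), in, in.readerIndex(), bytesRead)) {
            // Not the preface: leave the HTTP/1.1 upgrade handlers in place (not shown).
            ctx.pipeline().remove(this);
        } else if (bytesRead == prefaceLength) {
            // Full preface seen: install the prior-knowledge HTTP/2 handlers (not shown).
            ctx.pipeline().remove(this);
        }
        // Otherwise wait for more bytes before deciding.
    }
}
```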
Motivation:
gRPC (and potentially other libraries) has an optimized header processor that requires direct access to the HpackDecoder.
Modifications:
Make the HpackDecoder and its constructors public.
Result:
Fixes #6579
Motivation
Http2StreamChannel ignores options of its parent channel when being created. That leads to surprising results when, for example, an unpooled allocator is silently replaced with the pooled allocator (the default setting).
Modification
Copy parent channel's options over to the Http2StreamChannel.
Result
Channel options are now consistent between Http2StreamChannel and its parent channel. The newly added test passes on this branch and fails on master. Fixes #6551.
Motivation:
We used some deprecated Mockito methods.
Modifications:
- Replace deprecated method usage
- Some cleanup
Result:
No more usage of deprecated Mockito methods. Fixes [#6482].
Motivation:
DefaultHttp2ConnectionDecoder#onSettingsRead processes the settings, and then sends a SETTINGS ACK to the remote peer. Processing the settings may result in frames which violate the previous settings being sent to the remote peer. The remote peer will not apply the new settings until it has received the SETTINGS ACK, and therefore we may violate the settings from the remote peer's perspective and the connection will be shut down.
Modifications:
- We should send the SETTINGS ACK before we process the settings to ensure the peer receives the SETTINGS ACK before other frames which assume the settings have already been applied
Result:
Fixes https://github.com/netty/netty/issues/6520.
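A sketch of the reordering; writeSettingsAck is part of Http2FrameWriter, while applyRemoteSettings stands in for the local processing step:

```java
// Before: settings were applied first, so subsequent frames could violate what the
// peer still expects (it applies the new settings only after receiving the ACK).
//
// After: acknowledge first, then process.
frameWriter.writeSettingsAck(ctx, ctx.newPromise());
applyRemoteSettings(settings); // illustrative name for the local processing step
```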
Motivation:
Http2SecurityUtil currently lists HTTP/2 ciphers as documented by
JSSE docs [1] and the IANA [2] using the TLS_ prefix.
In some IBM J9 implementations the SSL_ prefix is used, which is also
covered by the JSSE.
[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html
[2] http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
Modifications:
Add both variants of the cipher names (prefixed with SSL_ in addition to TLS_).
Result:
HTTP/2 connections can now be created using the SslProvider.JDK on IBM J9
and potentially other JVMs which use the SSL_ prefix.
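For example, both spellings of the same suites are listed (an illustrative subset, not the full list):

```java
// Accepted under both the standard "TLS_" prefix and the IBM J9 "SSL_" prefix.
private static final List<String> CIPHERS = Collections.unmodifiableList(Arrays.asList(
        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
        "SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256"));
```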
Motivation:
A previous commit added methods to AbstractHttp2ConnectionHandlerBuilder but forgot to expose them in Http2ConnectionHandlerBuilder.
Modifications:
- expose the new methods in Http2ConnectionHandlerBuilder
Result:
Http2ConnectionHandlerBuilder supports the new configuration options.
Motivation:
The internal.hpack classes are no longer exposed in our public APIs and can be made package private in the http2 package.
Modifications:
- Make the hpack classes package private in the http2 package
Result:
Fewer APIs exposed as public.
Motivation:
The timeouts used in Http2ConnectionRoundtripTest seem to be too low when leak detection is enabled, so we sometimes get failed tests due to timeouts.
Modifications:
Increase timeouts.
Result:
Fixes [#6442].
Motivation:
DefaultHttp2FrameWriter contains important logic for how HTTP/2
frames are encoded into binary, but currently Netty has no unit test
dedicated to DefaultHttp2FrameWriter.
Modifications:
Write DefaultHttp2FrameWriterTest to test DefaultHttp2FrameWriter
Result:
The coverage of DefaultHttp2FrameWriter is increased, and the
class is more reliable.
Motivation:
In a previous PR about handling an unknown frame in the middle of a header
block, I didn't notice and reuse the existing check for whether headers are
being processed, and added a redundant method with the same functionality.
The redundant method could lead to misleading situations.
Modifications:
Removed redundant code on DefaultHttp2FrameReader
Result:
The code is more readable
Motivation:
Some HTTP/2 unit tests use LocalChannel. LocalChannel's doClose method will ensure any pending items in the queue are released, but it may execute a Runnable on the peer's EventLoop to ensure the peer's queue is also cleaned up. The HTTP/2 unit tests close the event loop groups with no wait time so that unit tests execute quickly, but if the doClose Runnable is still in the EventLoop's queue it will not run and thus the items in the queue will not be released.
Modifications:
- Ensure all HTTP/2 unit tests which use LocalChannel wait for both client and server channels to be closed before closing the EventLoop.
Result:
Related to https://github.com/netty/netty/issues/5850.
Motivation:
DefaultHttp2FrameReader contains the logic for parsing the network
traffic from an HTTP/2 client,
and it also validates whether the content is legal.
Keeping a high coverage rate on it will increase the stability of the API.
Modifications:
Add unit test on DefaultHttp2FrameReader
Result:
Coverage rate increased
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
codec-http2 couples the dependency tree state with the remainder of the stream state (Http2Stream). This makes implementing constraints where stream state and dependency tree state diverge in the RFC challenging. For example, the RFC recommends retaining dependency tree state after a stream transitions to closed [1]. Dependency tree state can be exchanged on streams in IDLE. In practice clients may use stream IDs for the purpose of establishing QoS classes and therefore retaining this dependency tree state can be important to client-perceived performance. It is difficult to limit the total amount of state we retain when stream state and dependency tree state are combined.
Modifications:
- Remove dependency tree, priority, and weight related items from public facing Http2Connection and Http2Stream APIs. This information is optional to track and depends on the flow controller implementation.
- Move all dependency tree, priority, and weight related code from DefaultHttp2Connection to WeightedFairQueueByteDistributor. This is currently the only place which cares about priority. We can pull out the dependency tree related code in the future if it is generally useful to expose for other implementations.
- DefaultHttp2Connection should explicitly limit the number of reserved streams now that IDLE streams are no longer created.
Result:
More compliant with the HTTP/2 RFC.
Fixes https://github.com/netty/netty/issues/6206.
[1] https://tools.ietf.org/html/rfc7540#section-5.3.4
Motivation:
If an HTTP/2 client writes data before the connection preface, the peer will shut down the socket. Depending on what is in the pipeline (e.g. SslHandler), different evaluation criteria may be required to infer when codec-http2 has written the connection preface on behalf of the client. This can lead to unnecessary complexity and error-prone/racy application code.
Modifications:
- Introduce a user event that is fired up the pipeline when codec-http2 writes the connection preface
Result:
A reliable mechanism for applications to know when the connection preface has been written (related to https://github.com/netty/netty/issues/6272).
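A hedged sketch of consuming such an event in a handler; the event class name below is assumed for illustration, so check the codec-http2 javadoc for the actual type:

```java
// The event type name is assumed here.
final class PrefaceWrittenListener extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof Http2ConnectionPrefaceAndSettingsFrameWrittenEvent) {
            // Safe for application code to start writing HTTP/2 frames now.
        }
        super.userEventTriggered(ctx, evt);
    }
}
```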
Motivation:
When an HTTP server is listening in plaintext mode, it doesn't have
a chance to negotiate "h2" in the TLS handshake. HTTP/1 clients
that are not expecting an HTTP/2 server will accidentally send a request
that isn't an upgrade, which the HTTP/2 decoder will not
understand. The decoder treats the bytes as hex and adds them to
the error message.
These error messages are hard for humans to understand, and result
in extra, manual work to decode.
Modification:
If the first bytes of the request are not the preface, the decoder
will now check whether they are an HTTP/1 request first. If so, the error
message will include the method and path of the original request.
In case the path is long, the decoder will check up to the first
1024 bytes to see if it matches. This could be a DoS vector if
tons of bad requests or other garbage come in. A future optimization
would be to treat the first few bytes as an AsciiString and not do
any Charset decoding. ByteBuf.toCharSequence alludes to such an
optimization.
The code has been left simple for the time being.
Result:
Faster identification of errant HTTP requests.
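A rough sketch of the detection idea (not the decoder's actual code):

```java
// If the preface check failed, look at up to the first 1024 bytes and see whether
// they form an HTTP/1.x request line; if so, surface the method and path.
static String describeBadPreface(ByteBuf in) {
    int len = Math.min(in.readableBytes(), 1024);
    String start = in.toString(in.readerIndex(), len, CharsetUtil.US_ASCII);
    int lineEnd = start.indexOf("\r\n");
    if (lineEnd > 0) {
        String[] parts = start.substring(0, lineEnd).split(" ");
        if (parts.length == 3 && parts[2].startsWith("HTTP/1.")) {
            return "Unexpected HTTP/1.x request: " + parts[0] + ' ' + parts[1];
        }
    }
    return "HTTP/2 client preface missing or corrupt";
}
```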
Motivation:
2fd42cfc6b fixed a bug related to encoding headers, but it also introduced a throws clause on the Http2FrameWriter methods which write headers. This throws clause makes the API more verbose and is not necessary because we can communicate the failure in the ChannelFuture that is returned by these methods.
Modifications:
- Remove throws from all Http2FrameWriter methods.
Result:
Http2FrameWriter APIs do not propagate checked exceptions.
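A fragment sketching the resulting pattern inside a writer implementation (the encoding body is elided):

```java
// No "throws Http2Exception" on the signature; encode failures fail the promise instead.
@Override
public ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
        int padding, boolean endStream, ChannelPromise promise) {
    try {
        // ... encode and write the HEADERS frame ...
        return promise;
    } catch (Throwable t) {
        return promise.setFailure(t);
    }
}
```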
Motivation:
RFC 7540 section 5.5 says that it is not permitted to send an extension
frame in the middle of a header block, and that this must be treated as a
protocol error.
Modifications:
When an extension frame (called an unknown frame in Netty) is received,
verify whether a headersContinuation exists.
Result:
When an extension frame is received in the middle of a header block,
a connection error is thrown and the connection is closed.
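The check has roughly this shape (field and method names are approximate):

```java
// An unknown (extension) frame while a header block is still being continued is a
// connection error per RFC 7540 sections 4.3 and 5.5.
private void verifyNotProcessingHeaders() throws Http2Exception {
    if (headersContinuation != null) {
        throw Http2Exception.connectionError(Http2Error.PROTOCOL_ERROR,
                "Received frame of type %s while processing a header block on stream %d",
                frameType, headersContinuation.getStreamId());
    }
}
```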
This prevents the first request, sent before the settings have been ack'ed, from causing a GOAWAY when it would
have been under the maxHeaderListSizeGoAway that would be set after the settings ack.