Motivation:
The current name of the class which converts from HTTP objects to HTTP/2 frames contains the text Http2ToHttp. This is misleading and the opposite of what the class actually does.
Modifications:
Rename the class to HttpToHttp2.
Result:
Class names that more clearly identify what they do.
Motivation:
The decompression frame listener currently opts out of application-level flow control. The application should still be able to exercise flow control even when decompression is in use.
Modifications:
- DecompressorFrameListener will track the number of compressed bytes received, decompressed bytes produced, and bytes processed by the listener. A ratio of these values will be used to translate the bytes processed by the listener into the application-level flow control amount (a sketch follows this entry).
Result:
HTTP/2 decompressor delegates the application level flow control to the listener processing the decompressed data.
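A minimal sketch of the ratio translation described above, assuming hypothetical names (ConsumedBytesConverter and its methods are illustrative, not the actual DecompressorFrameListener API):
    final class ConsumedBytesConverter {
        private int compressed;    // flow-controlled (wire) bytes received for the stream
        private int decompressed;  // bytes produced by the decompressor and handed to the listener

        void onCompressedData(int bytes)   { compressed += bytes; }
        void onDecompressedData(int bytes) { decompressed += bytes; }

        /**
         * Translates the decompressed bytes reported as processed by the listener
         * into the equivalent number of compressed bytes to return to flow control.
         */
        int toCompressedBytes(int processedDecompressed) {
            if (decompressed == 0) {
                return processedDecompressed; // stream is not compressed
            }
            double ratio = (double) compressed / decompressed;
            return Math.min(compressed, (int) Math.ceil(processedDecompressed * ratio));
        }
    }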
Motivation:
Currently when an exception occurs during a listener.onDataRead
callback, we return all bytes as processed. However, the listener may
choose to return bytes via the InboundFlowState object rather than
returning the integer. If the listener returns a few bytes and then
throws, we will attempt to return too many bytes.
Modifications:
Added InboundFlowState.unProcessedBytes() to indicate how many
unprocessed bytes are outstanding.
Updated DefaultHttp2ConnectionDecoder to compare the unprocessed bytes
before and after the listener.onDataRead callback when an exception was
encountered. If there is a difference, it is subtracted from the total
processed bytes to be returned to the flow controller.
Result:
HTTP/2 data frame delivery properly accounts for processed bytes even
when an exception is thrown.
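A hedged sketch of the exception-path accounting described above; the surrounding decoder wiring and the returnProcessedBytes() helper are assumptions, while unProcessedBytes() and the int-returning onDataRead() follow this entry's description:
    int readDataWithAccounting(Http2FrameListener listener, InboundFlowState flowState,
                               ChannelHandlerContext ctx, int streamId, ByteBuf data,
                               int padding, boolean endOfStream) throws Http2Exception {
        int totalBytes = data.readableBytes() + padding;
        int unprocessedBefore = flowState.unProcessedBytes();
        try {
            // On success, the listener's return value is handed back to flow control as usual.
            return listener.onDataRead(ctx, streamId, data, padding, endOfStream);
        } catch (Http2Exception e) {
            // The listener may already have returned some bytes via the flow state before
            // throwing; only return the remainder so bytes are not returned twice.
            int returnedDuringCallback = unprocessedBefore - flowState.unProcessedBytes();
            flowState.returnProcessedBytes(totalBytes - returnedDuringCallback);
            throw e;
        }
    }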
Motivation:
The current priority algorithm uses two different mechanisms to iterate the priority tree and send the results of the allocation. It is also a two-step process: the priority tree is traversed and allocation amounts are calculated, and then all active streams are traversed to send data for any streams that may or may not have been allocated bytes.
Modifications:
- DefaultHttp2OutboundFlowController will allocate and send (when possible) in the same looping structure.
- The recursive method will send only for its children instead of for itself and its children, which should simplify the recursion.
Result:
A simpler recursive algorithm in which the tree traversal determines which streams need to send, with less iteration after the recursive calls complete.
Motivation:
Currently the DefaultHttp2InboundFlowController only supports the
ability to turn on and off "window maintenance" for a stream. This is
insufficient for true application-level flow control that may only want
to return a few bytes to flow control at a time.
Modifications:
Removing "window maintenance" interface from
DefaultHttp2InboundFlowController in favor of the new interface.
Created the Http2InboundFlowState interface which extends Http2FlowState
to add the ability to return bytes for a specific stream.
Changed the onDataRead method to return an integer number of bytes that
will be immediately returned to flow control, to support use cases that
want to opt-out of application-level inbound flow control.
Updated DefaultHttp2InboundFlowController to use 2 windows per stream.
The first, "window", is the actual flow control window that is
decremented as soon as data is received. The second "processedWindow"
is a delayed view of "window" that is only decremented after the
application returns the processed bytes. It is processedWindow that is
used when determining when to send a WINDOW_UPDATE to restore part of
the inbound flow control window for the stream/connection.
Result:
The HTTP/2 inbound flow control interfaces support application-level
flow control.
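A minimal sketch of the two-window scheme described above, assuming simplified names and a half-window threshold for sending WINDOW_UPDATE (the actual controller's threshold and field names may differ):
    final class StreamWindows {
        private final int initialWindowSize;
        private int window;           // decremented as soon as DATA is received
        private int processedWindow;  // decremented only when the application returns bytes

        StreamWindows(int initialWindowSize) {
            this.initialWindowSize = initialWindowSize;
            window = processedWindow = initialWindowSize;
        }

        void dataReceived(int bytes) {
            window -= bytes;
        }

        /** Returns the WINDOW_UPDATE increment to send, or 0 if none is needed yet. */
        int bytesProcessed(int bytes) {
            processedWindow -= bytes;
            if (processedWindow <= initialWindowSize / 2) {
                int delta = initialWindowSize - processedWindow;
                processedWindow += delta;
                window += delta;
                return delta;
            }
            return 0;
        }
    }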
Motivation:
Too many warnings from IntelliJ IDEA code inspector, PMD and FindBugs.
Modifications:
- Removed unnecessary casts, braces, modifiers, imports, throws on methods, etc.
- Added static modifiers where it is possible.
- Fixed incorrect links in javadoc.
Result:
Better code.
Motivation:
The outbound flow controller logic does not properly reset the allocated
bytes between successive invocations of the priority algorithm.
Modifications:
Updated the priority algorithm to reset the allocated bytes for each
stream.
Result:
Each call to the priority algorithm now starts with zero allocated bytes
for each stream.
Motivation:
I came across an issue when adding/setting headers with a mistakenly
upper-case header name. When using the HTTP/2 example that ships with
Netty this was not an issue. But when working with a browser that
supports HTTP/2, in my case Firefox Nightly, the response appears to be
treated as invalid, in accordance with the specification
https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-8.1.2:
"However, header field names MUST be converted to lowercase prior
to their encoding in HTTP/2. A request or response containing
uppercase header field names MUST be treated as malformed"
This PR suggests making lowercase conversion the default.
Modifications:
Added a no-args constructor that defaults to forcing the key/name to
lowercase, and a second constructor that can override this behaviour
if desired.
Result:
It is now possible to specify a header like this:
    Http2Headers headers = new DefaultHttp2Headers(true)
        .status(new AsciiString("200"))
        .set(new AsciiString("Testing-Uppercase"), new AsciiString("some value"));
And the header written to the client will then become:
    testing-uppercase:"some value"
Motivation:
The HTTP/2 codec currently does not provide an interface to compress data. The HTTP codec provides an analogous feature, and it is expected to be used commonly enough that it will be beneficial to have it in the HTTP/2 codec as well.
Modifications:
- Add a class which extends DefaultHttp2ConnectionEncoder and provides hooks to an EmbeddedChannel
- Add a compressor element to the Http2Stream interface
- Update unit tests to utilize the new feature
Result:
HTTP/2 codec supports data compression.
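A hedged sketch of the EmbeddedChannel hook described above: outbound DATA is pushed through an embedded channel containing a compressing handler, and whatever it emits is written instead of the original payload (names and buffer-ownership handling are simplified):
    ByteBuf compress(EmbeddedChannel compressor, ByteBuf data) {
        // The embedded channel holds the compressing codec (e.g. a gzip encoder).
        compressor.writeOutbound(data);
        ByteBuf compressed = compressor.alloc().buffer();
        ByteBuf part;
        while ((part = compressor.readOutbound()) != null) {
            compressed.writeBytes(part); // aggregate everything the codec produced
            part.release();
        }
        return compressed;
    }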
Motivation:
The current logic in DefaultHttp2OutboundFlowController for handling the
case of a stream shutdown results in an Http2Exception (not an
Http2StreamException). This results in a GO_AWAY being sent for what
really could just be a stream-specific error.
Modifications:
Modified DefaultHttp2OutboundFlowController to set a stream exception
rather than a connection-wide exception. Also using the error code of
INTERNAL_ERROR rather than STREAM_CLOSED, since it's more appropriate
for this case.
Result:
GO_AWAY is no longer triggered when a stream closes prematurely.
Motivation:
Currently, due to flow control, HEADERS frames can be written out of
order with respect to DATA frames.
Modifications:
When data is written, we preserve the future as the lastWriteFuture in
the outbound flow controller. The encoder then uses the lastWriteFuture
such that headers are only written after the lastWriteFuture completes.
Result:
HEADERS/DATA write order is correctly preserved.
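A hedged sketch of the ordering logic described above; outboundFlow.lastWriteFuture() and writeHeaders0() are illustrative stand-ins for the encoder/flow-controller methods involved:
    ChannelFuture writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers,
                               boolean endOfStream, ChannelPromise promise) {
        ChannelFuture lastDataWrite = outboundFlow.lastWriteFuture(streamId);
        if (lastDataWrite == null || lastDataWrite.isDone()) {
            // No DATA pending for this stream: write the HEADERS immediately.
            return writeHeaders0(ctx, streamId, headers, endOfStream, promise);
        }
        // Defer the HEADERS until the pending DATA write has completed.
        lastDataWrite.addListener((ChannelFutureListener) future ->
                writeHeaders0(ctx, streamId, headers, endOfStream, promise));
        return promise;
    }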
Motivation:
The HTTP/2 specification indicates that an error should be raised when non-ASCII characters are detected while converting from HTTP/2 to HTTP/1.x.
Modifications:
- The ASCII validation is already performed, but the exception that is raised is not properly converted into a RST_STREAM error.
Result:
- If the HTTP/2 to HTTP/1.x translation layer is in use and a non-ASCII header is received, a RST_STREAM frame is now sent in response.
Motivation:
The DefaultOutboundFlowController was attempting to write frames with a negative length. This resulted in attempting to allocate a buffer of negative size and thus an exception.
Modifications:
- Don't allow DefaultOutboundFlowController to write negative length buffers.
Result:
No more negative-length writes and the IllegalArgumentExceptions they caused.
Motivation:
x-gzip and x-deflate are not standard header values, and thus should be
removed from HttpHeaderValues, which is meant to provide the standard
values only.
Modifications:
- Remove X_DEFLATE and X_GZIP from HttpHeaderValues
- Move X_DEFLATE and X_GZIP to HttpContentDecompressor and
DelegatingDecompressorFrameListener
- We have slight code duplication here, but it does less harm than
having non-standard constants.
Result:
HttpHeaderValues contains only standard header values.
Related: 4ce994dd4f
Motivation:
In 4.1, we were not able to change the type of the HTTP header name and
value constants from String to AsciiString due to backward compatibility
reasons.
Instead of breaking backward compatibility in 4.1, we introduced new
types called HttpHeaderNames and HttpHeaderValues which provides the
AsciiString version of the constants, and then deprecated
HttpHeaders.Names/Values.
We should now make the same changes and actively delete the deprecated
classes.
Modifications:
- Remove HttpHeaders.Names/Values and RtspHeaders
- Add HttpHeaderNames/Values and RtspHeaderNames/Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
A user who upgraded from 4.0 to 4.1 first and removed the references to
the deprecated classes and methods can easily upgrade from 4.1 to 5.0.
Motivation:
If the HTTP/2 encoder has exhausted all available stream IDs, a GOAWAY frame is not sent. Once the encoder detects that a new stream ID has rolled over past the last stream ID, a GOAWAY should be sent as recommended in section [5.1.1](https://tools.ietf.org/html/draft-ietf-httpbis-http2-14#section-5.1.1).
Modifications:
-This condition is already detected but it just needs to result in a GOAWAY being sent.
-Add a subclass of Http2Exception so the encoder can detect this special case.
-Add a unit test which checks that the GOAWAY is sent/received.
Result:
Encoder attempting to use the first 'rolled over' stream id results in a GOAWAY being sent.
Motivation:
- There are still various inspector warnings to fix.
- ValueConverter.convert() methods need to end with the type name like
other methods in Headers, such as setInt() and addInt(), for more
consistency
Modifications:
- Fix all inspector warnings
- Rename ValueConverter.convert() to convert<type>()
Result:
- Cleaner code
- Consistency
Motivation:
Headers within Netty do not cleanly share a common class hierarchy. As a result, some header types support certain operations
while others do not. Consolidating the class hierarchy will allow for easier maintenance and scalability to new codecs.
The existing hierarchy also has a few shortcomings; for example, it is not clear when data conversions are happening, which
can result in unintentionally getting back a collection or iterator where a conversion on each entry must happen.
The current headers algorithm also prepends all elements, which means that finding the first element or returning a collection
in insertion order often requires a complete traversal followed by a Collections.reverse() call.
Modifications:
-Provide a generic base class which provides all the implementation for headers in netty
-Provide an extension to this class which allows for name type conversions to happen (to accommodate legacy CharSequence to String conversions)
-Update the headers interface to clarify when conversions will happen.
-Update the headers data structure so that appends are done to avoid unnecessary iteration or collection reversal.
Result:
-More unified class hierarchy for headers in netty
-Improved headers data structure and algorithms
-Headers API more clearly identifies when conversions are required.
Motivation:
There should be a unit test for when the stream ID wraps around and is 'too large' or negative.
The lack of such a unit test masked an issue where the expected exception was not being thrown.
Modifications:
Add a unit test to cover creating remote and local streams when the stream ID is 'too large'.
Result:
Unit test scope increases.
Motivation:
Currently when receiving DATA/HEADERS frames, we throw Http2Exception (a
connection error) instead of Http2StreamException (stream error). This
is incorrect according to the HTTP/2 spec.
Modifications:
Updated various places in the encoder and decoder that were out of spec
with respect to connection/state checking.
Result:
Stream state verification is properly handled.
Motivation:
Twitter hpack has been upgraded to 0.9.1; we should upgrade to the latest release.
Modifications:
Updated the parent pom to specify the dependency version. Updated the
http2 pom to use the version specified by the parent.
Result:
HTTP/2 updated to the latest hpack release.
Motivation:
The HTTP/2 server example is not using the outbound flow control; it is instead using a FrameWriter directly.
This can lead to flow control errors and other communication-related errors.
Modifications:
-Force server example to use outbound flow controller
Result:
-The server example now follows flow control rules.
Motivation:
InboundHttp2ToHttpAdapterTest swaps non-volatile CountDownLatches in
handlers, which seems to cause a race condition that can lead to missing
messages.
Modifications:
Make CountDownLatch variables in InboundHttp2ToHttpAdapterTest volatile.
Result:
InboundHttp2ToHttpAdapterTest should be more stable.
Motivation:
The current GOAWAY methods live in each endpoint and are a little
confusing since they are called as connection.<endpoint>.goAwayReceived().
Modifications:
Moving GOAWAY methods to connection with more clear names
goAwaySent/goAwayReceived.
Result:
The GOAWAY method names are more clear.
Motivation:
There is an NPE due to the order of builder initialization in the class.
Modifications:
-Correct the ordering of initialization and building to avoid NPE.
Result:
No more NPE in construction.
Motivation:
This was lost in recent changes, so this change just adds it back.
Modifications:
Added listener() accessor to Http2ConnectionDecoder and the default
impl.
Result:
The Http2FrameListener can be obtained from the decoder.
Motivation:
Currently, Http2LifecycleManager implements the exception handling logic
which makes it difficult to extend or modify the exception handling
behavior. Simply overriding exceptionCaught() will only affect one of
the many possible exception paths. We need to reorganize the exception
handling code to centralize the exception handling logic into a single
place that can easily be extended by subclasses of
Http2ConnectionHandler.
Modifications:
Made Http2LifecycleManager an interface, implemented directly by
Http2ConnectionHandler. This adds a circular dependency between the
handler and the encoder/decoder, so I added builders for them that allow
the constructor of Http2ConnectionHandler to set itself as the lifecycle
manager and build them.
Changed Http2LifecycleManager.onHttpException to just
onException(Throwable) to simplify the interface. This method is now the
central control point for all exceptions. Subclasses now only need to
override onException() to intercept any exception encountered by the
handler.
Result:
HTTP/2 has more extensible exception handling that is less likely to
let exceptions vanish into the ether.
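A hedged sketch of the reshaped interface described above (only the method discussed here is shown; the real interface carries additional lifecycle callbacks):
    public interface Http2LifecycleManager {
        /**
         * Central control point for every exception the handler encounters.
         * Http2ConnectionHandler implements this directly, so a subclass only
         * needs to override onException(Throwable) to intercept all failure paths.
         */
        void onException(Throwable cause);
    }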
Motivation:
Some tests occasionally appear unstable, throwing a
org.mockito.exceptions.misusing.UnfinishedStubbingException. Mockito
stubbing does not work properly in multi-threaded environments, so any
stubbing has to be done before the threads are started.
Modifications:
Modified tests to perform any custom stubbing before the client/server
bootstrap logic executes.
Result:
HTTP/2 tests should be more stable.
Motivation:
Some tests do not properly assert that all requests have been
sent/received, so the failure messages may be misleading.
Modifications:
Adding missing asserts to HTTP/2 tests for awaiting requests and
responses.
Result:
HTTP/2 tests properly assert message counts.
Motivation:
PR https://github.com/netty/netty/pull/2948 missed synchronizing a collection in the HTTP/2 unit tests.
Modifications:
Synchronize the collection that was missed.
Result:
The missed collection is synchronized and its initial size is corrected.
Motivation:
The HTTP/2 tests have been unstable, in particular the
Http2ConnectionRoundtripTest.
Modifications:
Modified fields in Http2TestUtil to be volatile.
Result:
Tests should (hopefully) be more stable.
Motivation:
The HTTP/2 codec does not properly check whether the exception passed to
exceptionCaught() is an instance of Http2Exception (since the exception
will always be wrapped in a PipelineException), so it will never
properly handle Http2Exceptions in the pipeline.
Also, if any streams are present, the connection close logic will
execute twice when a pipeline exception occurs. This is because the
exception logic calls ctx.close(), which then triggers the
handleInActive() logic to execute. This clears all of the remaining
streams and then attempts to run the closeListener logic (which has
already been run).
Modifications:
Changed exceptionCaught logic to properly extract Http2Exception from
the PipelineException. Also added logic to the closeListener so that it
is only run once.
Changed Http2CodecUtil.toHttp2Exception() to avoid NPE when creating
an exception with cause.getMessage().
Refactored Http2ConnectionHandler to more cleanly separate inbound and
outbound flows (Http2ConnectionDecoder/Http2ConnectionEncoder).
Added a test for verifying that a pipeline exception closes the
connection.
Result:
Exception handling logic is tidied up.
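A hedged sketch of the unwrapping step described above; the helper and class names are illustrative. exceptionCaught() uses such a helper to decide whether to route the failure through the HTTP/2 error handling instead of rethrowing:
    final class Http2ExceptionUtil {
        /** Walks the cause chain looking for an Http2Exception wrapped by the pipeline. */
        static Http2Exception findHttp2Exception(Throwable cause) {
            while (cause != null) {
                if (cause instanceof Http2Exception) {
                    return (Http2Exception) cause;
                }
                cause = cause.getCause();
            }
            return null;
        }

        private Http2ExceptionUtil() { }
    }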
Motivation:
The HTTP/2 unit tests collect response read events, which happen in a multithreaded environment.
These collections are currently neither synchronized nor thread safe, which results in verification failures.
Modifications:
-Modify unit tests that use collections to store results for verification so that those collections are thread safe
Result:
Tests should not fail because of synchronization issues while verifying expected results.
Motivation:
The HTTP/2 codec has some duplication and the read/write interfaces are not cleanly exposed to users of the codec.
Modifications:
-Restructure the AbstractHttp2ConnectionHandler class so that write behavior can be extended before the outbound flow controller receives the data
-Add Http2InboundConnectionHandler and Http2OutboundConnectionHandler interfaces and restructure external codec interface around these concepts
Result:
HTTP/2 codec provides a cleaner external interface which is easy to extend for read/write events.
Motivation:
The HTTP translation layer uses a FullHttpMessage object after it has been fired up the pipeline.
Although the content ByteBuf is not used by default, it is still not ideal to use a releasable object
after it has potentially been released.
Modifications:
-InboundHttp2ToHttpAdapter ordering issues will be corrected
Result:
Safer access to releasable objects in the HTTP/2 to HTTP translation layer.
Motivation:
To rule out the tests as a cause of leaks, remove the automatic
retaining of ByteBufs in Http2TestUtil.
Modifications:
Each test that relied on retaining buffers for validation has been
modified to copy the buffer into a list of Strings that are manually
validated after the message is received.
Result:
The HTTP/2 tests should (hopefully) no longer be reporting erroneous
leaks due to the testing code, itself.
Motivation:
The current implementation of the HTTP/2 decompression does not integrate with flow control properly.
The decompression code is giving the post-decompression size to the flow control algorithm which
results in flow control errors at incorrect times.
Modifications:
-DecompressorHttp2FrameReader.java will need to change where it hooks into the HTTP/2 codec
-Enhance unit tests to test this condition
Result:
No more flow control errors caused by the decompression design flaw.
Motivation:
The current build is showing potential leaks in the HTTP/2 tests that
use Http2TestUtil.FrameCountDown, which copies the buffers when it
receives them from the decoder. The leak detector sees this copy as the
source of a leak. It would be better all around to just retain, rather
than copying the buffer. This should help to lower the overall memory
footprint of the tests as well as potentially getting rid of the
reported "leaks".
Modifications:
Modified Http2TestUtil to use ByteBuf.retain() everywhere that was
previously calling ByteBuf.copy().
Result:
Smaller memory footprint for tests and hopefully getting rid of reported
leaks.
Motivation:
The HTTP/2 spec does not restrict headers to being String. The current
implementation of the HTTP/2 codec uses Strings as header keys and
values. We should change this so that header keys and values allow
binary values.
Modifications:
Making Http2Headers based on AsciiString, which is a wrapper around a
byte[].
Various changes throughout the HTTP/2 codec to use the new interface.
Result:
HTTP/2 codec no longer requires string headers.
Motivation:
The HTTP/2 unit tests are suffering from OOME on the master branch.
These unit tests allocate a large number of threads (~706 peak live), which may
be related to this memory pressure.
Modifications:
Each EventLoopGroup shutdown operation will have a `sync()` call.
Result:
Lower peak live thread count and less associated memory pressure.
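A hedged example of the change in a test teardown; the group fields and timeout values are illustrative:
    @After
    public void teardown() throws Exception {
        // Wait for each loop group to fully terminate before the next test starts,
        // keeping the live thread count and the associated memory pressure down.
        serverGroup.shutdownGracefully(0, 5, TimeUnit.SECONDS).sync();
        clientGroup.shutdownGracefully(0, 5, TimeUnit.SECONDS).sync();
    }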
Motivation:
The HTTP/2 tests do not always clean up ByteBuf resources reliably. There are issues with the refCnt, over allocating buffers, and potentially not waiting long enough to reclaim resources for stress tests.
Modifications:
Scrub the HTTP/2 unit tests for ByteBuf leaks.
Result:
Less leaks (hopefully none) in the HTTP/2 unit tests. No OOME from HTTP/2 unit tests.
Motivation:
The HTTP/2 codec does not provide a way to decompress data. This functionality is supported by the HTTP codec and is expected to be a commonly used feature.
Modifications:
-The Http2FrameReader will be modified to allow hooks for decompression
-New classes which detect the content encoding from HTTP/2 HEADERS frames and apply the corresponding decompression when HTTP/2 DATA frames arrive
-New unit tests
Result:
The HTTP/2 codec will provide a means to support data decompression
Motivation:
The ServerBootstrap's child group would not be shut down.
Modification:
Add missing shutdownGracefully() call.
Result:
The child group is now shut down correctly.
Motivation:
The HTTP/2 specification places restrictions on the cipher suites that can be used. There is no central place to obtain the ciphers that are allowed by the specification, supported by different Java versions, and recommended by the community.
Modifications:
-HTTP/2 will have a security utility class to define supported ciphers
-netty-handler will be modified to support filtering the supplied list of ciphers to the supported ciphers for the current SSLEngine
Result:
-Netty provides unified support for HTTP/2 cipher lists, and the supplied ciphers can be pruned to those supported by the current SSLEngine
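A hedged usage sketch of the utility/filter described above; SslContextBuilder, Http2SecurityUtil and SupportedCipherSuiteFilter are assumed names and the exact factory methods may differ:
    SslContext serverSslContext(File certChain, File privateKey) throws SSLException {
        return SslContextBuilder.forServer(certChain, privateKey)
                // Http2SecurityUtil.CIPHERS: the spec-allowed cipher list from the utility class;
                // SupportedCipherSuiteFilter: prunes it to what the current SSLEngine supports.
                .ciphers(Http2SecurityUtil.CIPHERS, SupportedCipherSuiteFilter.INSTANCE)
                .build();
    }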
Motivation:
Outbound flow control does not properly remove the head of queue after
it's written. This will cause streams with multiple frames to get stuck
and not send all of the data.
Modifications:
Modified the DefaultHttp2OutboundFlowController to properly remove the
head of the pending write queue once a queued frame has been written.
Added an integration test that sends a large message to verify that all
DATA frames are properly collected at the other end.
Result:
Outbound flow control properly handles several queued messages.
Motivation:
We failed to release buffers on protocol errors, which caused buffer leaks when using HTTP/2.
Modifications:
Release buffers on protocol errors
Result:
No more buffer leaks
Motivation:
Netty only supports a Java NPN implementation provided by npn-api and npn-boot.
There is no Java implementation for ALPN.
ALPN is required to be compliant with the HTTP/2 spec.
Modifications:
-SslContext and JdkSslContext to support ALPN
-Restructure the JettyNpn* classes to share the aspects common to NPN and ALPN
-Pull in alpn-api and alpn-boot optional dependencies for ALPN java implementation
Result:
-Netty provides access to a Java implementation of ALPN
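A hedged sketch of requesting ALPN through SslContext; the ApplicationProtocolConfig-style parameters shown here are assumptions, and the exact constructors added by this change may differ:
    SslContext alpnServerContext(File certChain, File privateKey) throws SSLException {
        return SslContextBuilder.forServer(certChain, privateKey)
                .applicationProtocolConfig(new ApplicationProtocolConfig(
                        ApplicationProtocolConfig.Protocol.ALPN,
                        // The JDK provider expects NO_ADVERTISE / ACCEPT failure behaviour.
                        ApplicationProtocolConfig.SelectorFailureBehavior.NO_ADVERTISE,
                        ApplicationProtocolConfig.SelectedListenerFailureBehavior.ACCEPT,
                        ApplicationProtocolNames.HTTP_2,
                        ApplicationProtocolNames.HTTP_1_1))
                .build();
    }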
Motivation:
A recent refactoring of the outbound flow controller interface
introduced a bug when writing data. We're no longer properly handling
the completion of the write (i.e. updating stream state/handling error).
Modifications:
Updated AbstractHttp2ConnectionHandler.writeData to properly handle the
completion of the write future.
Result:
DATA writes now perform post-write cleanup.
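A hedged sketch of the post-write handling described above; frameWriter, onException() and closeLocalSide() are illustrative stand-ins for the handler's actual collaborators:
    ChannelFuture writeData(ChannelHandlerContext ctx, Http2Stream stream, ByteBuf data,
                            int padding, boolean endOfStream, ChannelPromise promise) {
        ChannelFuture future =
                frameWriter.writeData(ctx, stream.id(), data, padding, endOfStream, promise);
        future.addListener((ChannelFutureListener) f -> {
            if (!f.isSuccess()) {
                onException(f.cause());    // surface the failure to the central error handling
            } else if (endOfStream) {
                closeLocalSide(stream, f); // last frame written: close our side of the stream
            }
        });
        return future;
    }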