Motivation:
We noticed that Netty's HTTP/2 headers implementation uses quite a lot of memory
and that, at the very least, randomly accessing a header is quite slow. The main
concern, however, was memory usage: profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot because the underlying hash maps
may have to be resized several times as new headers are inserted.
This is tracked as issue #3600.
Modifications:
We redesigned DefaultHeaders to simply take a Map in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand, and it allows each concrete headers
implementation to provide its own map, or even to use different headers implementations
for processing requests and for writing responses: incoming headers need fast random
access, while outgoing headers need fast insertion and fast iteration. The new
implementation supports this with hardly any code changes. It also has the advantage
that, should the Netty project ever decide to add a third-party collections library
as a dependency, one of those very fast and memory-efficient map implementations can
simply be plugged in to get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
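Purely to illustrate the design (this is not the actual Netty class), a headers type built only on Map primitives might look like the following, with the concrete implementation choosing the backing map:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical illustration of the design: the concrete headers type decides which
// Map backs it; the base implementation only uses Map primitives.
final class SimpleHeaders {
    private final Map<String, String> map;

    SimpleHeaders(Map<String, String> map) { // caller supplies the backing map
        this.map = map;
    }

    SimpleHeaders set(String name, String value) {
        map.put(name, value);
        return this;
    }

    String get(String name) {
        return map.get(name);
    }

    public static void main(String[] args) {
        // Case-insensitive TreeMap, mirroring the HTTP/HTTP/2 default mentioned above.
        SimpleHeaders headers =
                new SimpleHeaders(new TreeMap<>(String.CASE_INSENSITIVE_ORDER));
        headers.set("Content-Type", "text/plain");
        System.out.println(headers.get("content-type")); // text/plain
    }
}
```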
Result:
- Significantly fewer lines of code in the implementation. While the commit as a whole
only removes roughly 400 lines, the actual implementation is much smaller; most of the
added lines are new tests and microbenchmarks.
- Overall performance is up. The new implementation is significantly faster for insertion
and retrieval, but slower for iteration: a TreeMap simply cannot match the iteration
performance of the linked list used by the existing headers implementation. That is fine,
though, because as @ejona86 pointed out when looking at the benchmark results, headers
performance is completely dominated by insertion; insertion is so much faster in the new
implementation that it more than makes up for the slower iteration. You can't iterate what
you haven't inserted. This is demonstrated in the spreadsheet [1]. (Iteration performance
is actually only down for HTTP; it is significantly improved for HTTP/2.)
- Memory usage is down. The TreeMap-based implementation uses on average ~30% less memory
and produces no garbage from resizing. In load tests for gRPC we have seen a memory reduction
of up to 1.2KB per RPC. The memory improvements are summarized in the same spreadsheet [1];
the data was generated by [2] using JOL.
- While my original intent was only to improve memory usage for HTTP/2, HTTP, SPDY and STOMP
should benefit similarly, as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
Motivation:
The HTTP/2 hello world example server should expect a FullHttpRequest when falling back to HTTP/1.x mode.
Modifications:
- HelloWorldHttp1Handler should process FullHttpRequest objects
- Http2ServerInitializer should insert an HttpObjectAggregator into the pipeline if no upgrade was attempted
Result:
Responses from the HelloWorldHttp1Handler should only come after full HTTP requests are received.
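A condensed sketch of the fallback pipeline this implies (Netty 4.x API names; content-length and keep-alive handling omitted, handler inlined for brevity), not the actual example code:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.util.CharsetUtil;

import static io.netty.handler.codec.http.HttpResponseStatus.OK;
import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;

public class Http1FallbackInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new HttpServerCodec(),
                // Aggregates HttpRequest + HttpContent chunks into one FullHttpRequest
                new HttpObjectAggregator(16 * 1024),
                new SimpleChannelInboundHandler<FullHttpRequest>() {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
                        // Only reached once the full request has been aggregated.
                        FullHttpResponse resp = new DefaultFullHttpResponse(
                                HTTP_1_1, OK,
                                Unpooled.copiedBuffer("Hello World", CharsetUtil.UTF_8));
                        ctx.writeAndFlush(resp);
                    }
                });
    }
}
```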
Proposal to fix issue #3636
Motivation:
Currently, while adding further buffers to the decoder (`decoder.offer()`), there is
no way to access the HTTP object currently being decoded; it only becomes available
once it has been fully decoded, via `decoder.hasNext()`.
One might want to know the progress of the overall transfer, but also of each HTTP
object. While overall progress can be computed from the global Content-Length of the
request (if available) and the size of each HttpContent, the progress of an individual
HttpData object is unknown.
Modifications:
1) For HTTP objects, `AbstractHttpData` has two protected properties named
`definedSize` and `size`: respectively the expected final size and the size decoded
so far. This change adds a new method, `definedSize()`, to get the current value of
`definedSize`. The `size` attribute is already reachable through the `length()` method.
Note, however, that `definedSize` is currently managed in two different ways:
a) `Attribute`: it is reset whenever the value becomes less than the actual size
(the value grows as buffers are added), since the final length is not known
(no Content-Length);
b) `FileUpload`: it is set at the start from the length provided.
These differences could lead to a wrong perception:
a) `Attribute`: definedSize == size, always;
b) `FileUpload`: definedSize >= size, always.
The comment therefore tries to explain the different behaviors clearly.
2) In InterfaceHttpPostRequestDecoder (and the derived classes), I add a new method,
`decoder.currentPartialHttpData()`, which returns the current `Attribute` or
`FileUpload` (the two concrete types) as an `InterfaceHttpData`, if any. The
programmer can then check the real type (via instanceof) and call the two methods
`definedSize()` and `length()`.
The method checks whether currentFileUpload or currentAttribute is null and returns
whichever one is not (at most one can be non-null).
Note that if this method returns null, it can mean two things:
a) the last `HttpData` (whether attribute or file upload) is already finished and
therefore accessible through `next()`;
b) no `HttpData` is being decoded yet (for instance, the body has not been parsed yet).
Result:
The developer has more access to, and therefore more control over, the current
upload.
Code on the developer side could look like the example in HttpUploadServerHandler;
a rough sketch is also shown below.
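A rough sketch of how a handler might report per-upload progress while offering chunks; the method names follow the description above (`definedSize()` in particular may be named differently in the final API):

```java
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.multipart.FileUpload;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.multipart.InterfaceHttpData;

final class UploadProgressSketch {
    // Called for each HttpContent chunk that arrives for the request being decoded.
    static void onChunk(HttpPostRequestDecoder decoder, HttpContent chunk) {
        decoder.offer(chunk);
        InterfaceHttpData partial = decoder.currentPartialHttpData();
        if (partial instanceof FileUpload) {
            FileUpload upload = (FileUpload) partial;
            // length() = bytes decoded so far; definedSize() = expected total
            // (>= length() for FileUpload, == length() for Attribute, see above)
            System.out.println("Upload progress: "
                    + upload.length() + " / " + upload.definedSize());
        }
        // partial == null: either the last HttpData is finished (see next())
        // or no HttpData is being decoded yet.
    }
}
```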
Related: #3814
Motivation:
To implement the support for an upgrade from cleartext HTTP/1.1
connection to cleartext HTTP/2 (h2c) connection, a user usually uses
HttpServerUpgradeHandler.
It does its job, but it requires a user to instantiate the UpgradeCodecs for all
supported protocols upfront, which is wasteful for connections that are never
upgraded.
Modifications:
- Change the constructor of HttpServerUpgradeHandler
- Accept an UpgradeCodecFactory instead of UpgradeCodecs
- The default constructor of HttpServerUpgradeHandler now sets maxContentLength
to 0, which shouldn't be a problem because a usual upgrade request is a GET.
- Update the examples accordingly
Result:
A user can instantiate Http2ServerUpgradeCodec and its related objects
(Http2Connection, Http2FrameReader/Writer, Http2FrameListener, etc) only
when necessary.
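A sketch of the factory-based setup; `buildHttp2Handler()` is a hypothetical placeholder for however the server constructs its Http2ConnectionHandler:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http.HttpServerUpgradeHandler.UpgradeCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler.UpgradeCodecFactory;
import io.netty.handler.codec.http2.Http2CodecUtil;
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2ServerUpgradeCodec;
import io.netty.util.AsciiString;

public class H2cUpgradeInitializer extends ChannelInitializer<SocketChannel> {

    // The UpgradeCodec (and all the HTTP/2 objects behind it) is only created
    // once an actual h2c upgrade request arrives.
    private static final UpgradeCodecFactory UPGRADE_CODEC_FACTORY = new UpgradeCodecFactory() {
        @Override
        public UpgradeCodec newUpgradeCodec(CharSequence protocol) {
            if (AsciiString.contentEquals(Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME, protocol)) {
                return new Http2ServerUpgradeCodec(buildHttp2Handler());
            }
            return null; // not h2c: no upgrade
        }
    };

    private static Http2ConnectionHandler buildHttp2Handler() {
        // Hypothetical helper: construct the application's Http2ConnectionHandler here.
        throw new UnsupportedOperationException("placeholder");
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        HttpServerCodec sourceCodec = new HttpServerCodec();
        ch.pipeline().addLast(sourceCodec,
                new HttpServerUpgradeHandler(sourceCodec, UPGRADE_CODEC_FACTORY));
    }
}
```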
Motivation:
Our HTTP/2 implementation sometimes uses hard-coded handler names when
adding/removing a handler to/from a pipeline. It's not really a good
idea because it can easily result in name clashes. Unless there is a
good reason, we should use references to the handlers instead.
Modifications:
- Allow null as a handler name for Http2Client/ServerUpgradeCodec
- Use null as the default upgrade handler name
- Do not use handler name strings in some test cases and examples
Result:
Fixes #3815
Motivation:
SpdyOrHttpChooser and Http2OrHttpChooser duplicate a fair amount of code.
Modification:
- Replace SpdyOrHttpChooser and Http2OrHttpChooser with ApplicationProtocolNegotiationHandler
- Add ApplicationProtocolNames to define the known application-level protocol names
Result:
- Less code duplication
- A user can perform dynamic pipeline configuration that follows ALPN/NPN for any protocol.
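A sketch of how a user might extend the new handler; `configureHttp2()`/`configureHttp1()` are hypothetical stand-ins for the application's pipeline setup:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.ssl.ApplicationProtocolNames;
import io.netty.handler.ssl.ApplicationProtocolNegotiationHandler;

public class MyProtocolNegotiationHandler extends ApplicationProtocolNegotiationHandler {

    public MyProtocolNegotiationHandler() {
        super(ApplicationProtocolNames.HTTP_1_1); // fallback when nothing was negotiated
    }

    @Override
    protected void configurePipeline(ChannelHandlerContext ctx, String protocol) {
        if (ApplicationProtocolNames.HTTP_2.equals(protocol)) {
            configureHttp2(ctx);
        } else if (ApplicationProtocolNames.HTTP_1_1.equals(protocol)) {
            configureHttp1(ctx);
        } else {
            throw new IllegalStateException("Unknown protocol: " + protocol);
        }
    }

    private void configureHttp2(ChannelHandlerContext ctx) { /* add HTTP/2 handlers */ }

    private void configureHttp1(ChannelHandlerContext ctx) { /* add HTTP/1.1 handlers */ }
}
```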
Related: #3641 and #3813
Motivation:
When setting up an HTTP/1 or HTTP/2 (or SPDY) pipeline, a user usually
ends up adding an arbitrary set of handlers.
Http2OrHttpChooser and SpdyOrHttpChooser have two abstract methods
(create*Handler()) that expect a user to return a single handler, and
also have add*Handlers() methods that add the handler returned by
create*Handler() to the pipeline as well as the pre-defined set of
handlers.
The problem is that some users (read: I) don't need all of them, or want
to add more than one handler. For example, take a look at
io.netty.example.http2.tiles.Http2OrHttpHandler, which works around
this issue by overriding addHttp2Handlers() and making
createHttp2RequestHandler() a no-op.
Modifications:
- Replace add*Handlers() and create*Handler() with configure*()
- Rename getProtocol() to selectProtocol() to make what it does clear
- Provide the default implementation of selectProtocol()
- Remove SelectedProtocol.UNKNOWN and use null instead, because
'UNKNOWN' is not a protocol
- Proper exception handling in the *OrHttpChooser so that the
exception is logged and the connection is closed when protocol
selection fails
- Make SpdyClient example always use SSL. It was always using SSL
anyway.
- Implement SslHandshakeCompletionEvent.toString() for debuggability
- Remove an orphaned class: JettyNpnSslSession
- Add SslHandler.applicationProtocol() to get the name of the
application protocol
- SSLSession.getProtocol() now returns the transport-layer protocol name
only, so that it conforms to its contract.
Result:
- *OrHttpChooser have better API.
- *OrHttpChooser handle protocol selection failure properly.
- SSLSession.getProtocol() now conforms to its contract.
- SpdyClient example works with SpdyServer example out of the box
Motivation:
The logic in the current websocket example is confusing and misleading
Modifications:
Remove occurrences of "http" and "https" and replace them with "ws" and "wss"
Result:
The example code is now coherent and is easier to understand for a new user.
Motivation:
There are no Netty SCTP examples on multi-homing.
Modifications:
- Added new example classes based on echo client/server example
Result:
Better documentation
Motivation:
Commit f45d754 from 4.1 was directly merged but interfaces were not updated.
Modifications:
- Fix areas where interfaces have changed between 4.1 and master
Result:
No more compile errors.
Motivation:
Adding an example that showcases Netty’s HTTP/2 codec and that is
slightly more complex than the existing hello-world example. It is
based on the Gopher tiles example available here:
https://http2.golang.org/gophertiles?latency=0
Modifications:
Moved current http2 example to http2/helloworld.
Added http2 tiles example under http2/tiles.
Result:
A Netty tiles example is available.
Motivation:
The HTTP/2 server example just hangs when a client is using only HTTP with no ALPN or upgrade attempts. We should still send some kind of response.
Modifications:
The HTTP/2 server example has a special handler to detect HTTP clients that do not attempt an upgrade and to generate a response for them.
Result:
Clients that just use HTTP with no upgrade will no longer appear to hang when interacting with the HTTP/2 server example.
Motivation:
RFC6265 specifies which characters are allowed in a cookie name and value.
Netty is currently too lax, which can be used for HttpOnly escaping.
Modification:
In ServerCookieDecoder: discard cookie key-value pairs that contain invalid characters.
In ClientCookieEncoder: throw an exception when trying to encode cookies with invalid characters.
Drop old Cookie encoders and decoders that were deprecated in 4.1.
Result:
The problem described in the motivation section is fixed.
Motivation:
Examples that are using ALPN/NPN are using a failure mode which is not supported by the JDK SslProvider. The examples fail to run and throw an exception if the JDK SslProvider is used.
Modifications:
- Use SelectorFailureBehavior.NO_ADVERTISE
- Use SelectedListenerFailureBehavior.ACCEPT
Result:
Examples can be run with both OpenSsl and JDK SslProviders.
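For reference, a sketch of an ALPN configuration using these behaviors (protocol strings shown are illustrative):

```java
import io.netty.handler.ssl.ApplicationProtocolConfig;
import io.netty.handler.ssl.ApplicationProtocolConfig.Protocol;
import io.netty.handler.ssl.ApplicationProtocolConfig.SelectedListenerFailureBehavior;
import io.netty.handler.ssl.ApplicationProtocolConfig.SelectorFailureBehavior;

final class AlpnConfig {
    // NO_ADVERTISE / ACCEPT are the failure behaviors the JDK provider can handle,
    // so this configuration runs with both the OpenSsl and JDK SslProviders.
    static ApplicationProtocolConfig jdkCompatibleAlpn() {
        return new ApplicationProtocolConfig(
                Protocol.ALPN,
                SelectorFailureBehavior.NO_ADVERTISE,
                SelectedListenerFailureBehavior.ACCEPT,
                "h2", "http/1.1");
    }
}
```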
Motivation:
Using factory methods of SslContext is deprecated. Code should be using
SslContextBuilder instead. This would have been done when the old
methods were deprecated, but memcache and http2 examples didn't exist in
the 4.0 branch which the PR was against.
Modifications:
Swap to the new construction pattern.
Result:
No more deprecation warnings during the build of the examples. Users are
instructed to use the new pattern.
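A minimal sketch of the new construction pattern (the self-signed certificate and insecure trust manager are for illustration only, as in the examples):

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
import io.netty.handler.ssl.util.SelfSignedCertificate;

final class SslContextExamples {
    // Server context built from a (self-signed, example-only) certificate and key.
    static SslContext server() throws Exception {
        SelfSignedCertificate ssc = new SelfSignedCertificate();
        return SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
    }

    // Client context that, as in the examples, trusts any server certificate.
    static SslContext client() throws Exception {
        return SslContextBuilder.forClient()
                .trustManager(InsecureTrustManagerFactory.INSTANCE)
                .build();
    }
}
```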
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.
Results:
The AsciiString interface will be clarified. Users of AsciiString can now be clear about the limitations the class imposes, while users of the ByteString class don't have to live with those limitations.
Motivation:
SslContext factory methods have gotten out of control; it's past time to
swap to a builder.
Modifications:
New Builder class. The existing factory methods must be left as-is for
backward compatibility.
Result:
Fixes #3531
Motivation:
We missed flushing the channel when using HttpChunkedInput (the flush happens implicitly when SSL is used). This results in a stalled transfer.
Modifications:
Replace ctx.write(...) with ctx.writeAndFlush(...)
Result:
Correctly working example.
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the ssl pointer is freed
- Correctly support the different failure behaviours
- add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
The DefaultHttp2ConnectionDecoder class is calling verifyPrefaceReceived() for almost every frame event at all times.
The Http2ConnectionHandler class is calling readClientPrefaceString() on every decode event.
Modifications:
- DefaultHttp2ConnectionDecoder should not have to continuously call verifyPrefaceReceived() because the corresponding boolean state transitions only once per connection.
- Http2ConnectionHandler should not have to continuously call readClientPrefaceString() because the corresponding boolean state transitions only once per connection.
Result:
- Fewer conditional checks in the mainstream usage of the connection.
Motivation:
The Http2FrameLogger is currently using the internal logging classes. We should change this so that it's using the public classes and then converts internally.
Modifications:
Modified Http2FrameLogger and the examples to use the public LogLevel class.
Result:
Fixes #2512
While implementing netty-handler-proxy, I realized various issues in our
current socksx package. Here's the list of the modifications and their
background:
- Split message types into interfaces and default implementations
- so that a user can implement alternative message implementations
- Use classes instead of enums when a user might want to define a new
constant
- so that a user can extend SOCKS5 protocol, such as:
- defining a new error code
- defining a new address type
- Rename the message classes
- to avoid abbreviated class names. e.g:
- Cmd -> Command
- Init -> Initial
- so that the class names align better with the protocol
specifications. e.g:
- AuthRequest -> PasswordAuthRequest
- AuthScheme -> AuthMethod
- Rename the property names of the messages
- so that the property names align better with the field names in the
protocol specifications
- Improve the decoder implementations
- Give a user more control over when a decoder has to be removed
- Use DecoderResult and DecoderResultProvider to handle decode failure
gracefully. i.e. no more Unknown* message classes
- Add SocksPortUnificationServerHandler since it's useful to the users
who write a SOCKS server
- Cleaned up and moved from the socksproxy example
Motivation:
The terminology used with inbound/outbound is a little confusing since
it's not discussed in the spec. We should switch to using local/remote
instead. Also there is some asymmetry between the inbound/outbound
interfaces which could probably be cleaned up.
Modifications:
Changing the interface names and making a common Http2FlowController
interface for most of the methods.
Result:
The HTTP/2 flow control interfaces should be more clear.
Motivation:
The example MemcacheClient set command doesn't work.
Modifications:
Fill the extras field buffer with zeros so that it gets written to the
request payload.
Result:
The example MemcacheClient set command works.
Related: #3122
Motivation:
The HttpStaticFileServer example writes the LastHttpContent twice at the
end of the transfer. HttpChunkedInput already produces a
LastHttpContent at the end of the stream, so there's no reason to write
another.
Modifications:
Do not write LastHttpContent in HttpStaticFileServerHandler when
HttpChunkedInput is used to transfer a file.
Result:
HttpStaticFileServer does not violate the protocol anymore.
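A sketch of the resulting write sequence (assuming a ChunkedWriteHandler is in the pipeline, as in the example):

```java
import java.io.RandomAccessFile;

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.HttpChunkedInput;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.stream.ChunkedFile;

final class FileTransferSketch {
    // Write the headers, then the chunked body. HttpChunkedInput emits
    // LastHttpContent itself when the file ends, so no extra write is needed.
    static void sendFile(ChannelHandlerContext ctx, HttpResponse response,
                         RandomAccessFile raf, long fileLength) throws Exception {
        ctx.write(response);
        ctx.writeAndFlush(new HttpChunkedInput(new ChunkedFile(raf, 0, fileLength, 8192)));
    }
}
```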
Motivation:
When running the examples using the provided run-examples.sh script the
log level is 'info'. It can be handy to be able to configure a
different level, for example 'debug', while learning and trying out
the examples.
Modifications:
Added a dependency to logback-classic to the examples pom.xml, and also
added a logback configuration file. The log level can be configured by
setting the 'logLevel' system property, and if that property is not set
the default will be 'info' level.
The run-examples.sh script was updated to show an example of using the system
property to set the log level to 'debug'.
Result:
It is now possible to turn on debug logging by setting a system property
on the command line.
Motivation:
When DefaultHttp2FrameReader has read a settings frame, the settings
will be passed along the pipeline. This allows a client to hold off
sending data until it has received a settings frame. But a server will
always have received a settings frame, so forwarding the settings is less
useful there. This also causes a debug message
to be logged on the server side if there is no channel handler to handle
the settings:
[nioEventLoopGroup-1-1] DEBUG io.netty.channel.DefaultChannelPipeline -
Discarded inbound message {INITIAL_WINDOW_SIZE=131072,
MAX_FRAME_SIZE=16384} that reached at the tail of the pipeline. Please
check your pipeline configuration.
Modifications:
Added a builder for the InboundHttp2ToHttpAdapter and
InboundHttp2ToHttpPriorityAdapter and a new parameter named 'propagateSettings'
to their constructors.
Result:
It is now possible to control whether settings should be passed along
the pipeline or not.
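A sketch of constructing the adapter with settings propagation enabled; the builder shown here (InboundHttp2ToHttpAdapterBuilder) is the one found in current Netty versions and may differ from the builder introduced by this change:

```java
import io.netty.handler.codec.http2.DefaultHttp2Connection;
import io.netty.handler.codec.http2.Http2Connection;
import io.netty.handler.codec.http2.InboundHttp2ToHttpAdapter;
import io.netty.handler.codec.http2.InboundHttp2ToHttpAdapterBuilder;

final class AdapterSetupSketch {
    static InboundHttp2ToHttpAdapter adapter() {
        Http2Connection connection = new DefaultHttp2Connection(false); // client side
        return new InboundHttp2ToHttpAdapterBuilder(connection)
                .maxContentLength(64 * 1024)
                .propagateSettings(true) // forward the settings frame down the pipeline
                .build();
    }
}
```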
Motivation:
I found myself writing AsciiString constants in my code for
response statuses and thought that perhaps it might be nice to have
them defined by Netty instead.
Modifications:
Adding codeAsText() to HttpResponseStatus, which returns the status code as an
AsciiString.
In addition, added the 421 Misdirected Request response code from
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#section-9.1.2
This response code was renamed in draft 15:
https://tools.ietf.org/html/draft-ietf-httpbis-http2-15#appendix-A.1
But the code itself was not changed, and I thought using the latest would
be better.
Result:
It is now possible to specify a status like this:
new DefaultHttp2Headers().status(HttpResponseStatus.OK.codeAsText());
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of an authority.
- Improved usage of StringBuilder.
Result:
Performance improvements.
Motivation:
The current name of the class which converts from HTTP objects to HTTP/2 frames contains the text Http2ToHttp. This is misleading and opposite of what is being done.
Modifications:
Rename this class name to be HttpToHttp2.
Result:
Class names that more clearly identify what they do.
Motivation:
Currently the DefaultHttp2InboundFlowController only supports the
ability to turn on and off "window maintenance" for a stream. This is
insufficient for true application-level flow control that may only want
to return a few bytes to flow control at a time.
Modifications:
Removing "window maintenance" interface from
DefaultHttp2InboundFlowController in favor of the new interface.
Created the Http2InboundFlowState interface which extends Http2FlowState
to add the ability to return bytes for a specific stream.
Changed the onDataRead method to return an integer number of bytes that
will be immediately returned to flow control, to support use cases that
want to opt-out of application-level inbound flow control.
Updated DefaultHttp2InboundFlowController to use 2 windows per stream.
The first, "window", is the actual flow control window that is
decremented as soon as data is received. The second "processedWindow"
is a delayed view of "window" that is only decremented after the
application returns the processed bytes. It is processedWindow that is
used when determining when to send a WINDOW_UPDATE to restore part of
the inbound flow control window for the stream/connection.
Result:
The HTTP/2 inbound flow control interfaces support application-level
flow control.
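A hypothetical, much simplified illustration of the two-window bookkeeping described above (not Netty code; the threshold is arbitrary and the connection-level window is ignored):

```java
// Simplified sketch: just the bookkeeping idea, not the real controller.
final class InboundWindowSketch {
    private static final int INITIAL = 65535;
    private int window = INITIAL;          // shrinks as soon as data arrives
    private int processedWindow = INITIAL; // shrinks only when the app returns the bytes

    void onDataRead(int bytes) {
        window -= bytes;
    }

    /** Returns the WINDOW_UPDATE increment to send, or 0 if none is needed yet. */
    int consumeBytes(int bytes) {
        processedWindow -= bytes;
        if (processedWindow <= INITIAL / 2) { // refill once half the window is consumed
            int delta = INITIAL - processedWindow;
            processedWindow += delta;
            window += delta;
            return delta;                     // send WINDOW_UPDATE(delta)
        }
        return 0;
    }
}
```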
Motivation:
Too many warnings from IntelliJ IDEA code inspector, PMD and FindBugs.
Modifications:
- Removed unnecessary casts, braces, modifiers, imports, throws on methods, etc.
- Added static modifiers where it is possible.
- Fixed incorrect links in javadoc.
Result:
Better code.
Motivation:
When running the http2 example no SslProvider is specified when calling
SslContext.newServerContext. This may lead to the provider being
determined depending on the availability of OpenSsl. But as far as I can
tell the OpenSslServerContext does not support ALPN, which is the
protocol configured in the example.
This produces the following error when running the example:
Exception in thread "main" java.lang.UnsupportedOperationException:
OpenSSL provider does not support ALPN protocol
io.netty.handler.ssl.OpenSslServerContext.toNegotiator(OpenSslServerContext.java:391)
io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:117)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:238)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:184)
io.netty.handler.ssl.SslContext.newServerContext(SslContext.java:124)
io.netty.example.http2.server.Http2Server.main(Http2Server.java:51)
Modifications:
Force SslProvider.JDK when creating the SslContext since the
example is using ALPN.
Result:
There is no longer an error if OpenSsl is supported on the platform in
use.
Motivation:
There are a few very minor issues in the Http2 examples' javadoc. Since
I don't think these javadocs are published, this change is very much
optional to include.
Modifications:
Updated the @see tags according to [1] to avoid warnings when generating
javadocs.
Result:
No warning when generating javadocs.
[1] http://docs.oracle.com/javase/1.5.0/docs/tooldocs/windows/javadoc.html#@see
Related: 4ce994dd4f
Motivation:
In 4.1, we were not able to change the type of the HTTP header name and
value constants from String to AsciiString due to backward compatibility
reasons.
Instead of breaking backward compatibility in 4.1, we introduced new
types called HttpHeaderNames and HttpHeaderValues which provides the
AsciiString version of the constants, and then deprecated
HttpHeaders.Names/Values.
We should make the same changes here while actively removing the
deprecated classes.
Modifications:
- Remove HttpHeaders.Names/Values and RtspHeaders
- Add HttpHeaderNames/Values and RtspHeaderNames/Values
- Make HttpHeaderValues.WEBSOCKET lowercase because it's actually
lowercase in all WebSocket versions but the oldest one
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
A user who upgraded from 4.0 to 4.1 first and removed the references to
the deprecated classes and methods can easily upgrade from 4.1 to 5.0.
Motivation:
If there are no common protocols in the ALPN protocol exchange we still complete the handshake successfully. According to http://tools.ietf.org/html/rfc7301#section-3.2, this handshake should fail with a status of no_application_protocol. The specification also allows the server to "play dumb" and not advertise that it supports ALPN in this case (see the MAY clauses in http://tools.ietf.org/html/rfc7301#section-3.1).
Modifications:
- The upstream project used for ALPN (alpn-boot) does not support this, so a PR was submitted: https://github.com/jetty-project/jetty-alpn/pull/3
- The Netty code using alpn-boot should support the new interface (return null on the existing method).
- The alpn-boot version number must be updated in the pom.xml files.
Result:
- Netty fails the SSL handshake if ALPN is used and there are no common protocols.
Motivation:
As reported in #2953, the websocket server example contained a bug and therefore did not work with Chrome:
a websocket extension is added to the pipeline, but extensions were disallowed in the handshaker and decoder,
which led the decoder to close the connection after receiving an extension frame.
Modifications:
Allow websocket extensions in the handshaker to correctly enable the extension.
Result:
Working websocket server example
Fixes #2953
Motivation:
Headers within Netty do not cleanly share a common class hierarchy. As a result some header types support some operations
and don't support others. Consolidating the class hierarchy will ease maintenance and allow it to scale to new codecs.
The existing hierarchy also has a few shortcomings, such as it not being clear when data conversions happen. This
could result in unintentionally getting back a collection or iterator where a conversion on each entry must happen.
The current headers algorithm also prepends all elements, which means that finding the first element or returning a collection
in insertion order often requires a complete traversal followed by a Collections.reverse call.
Modifications:
- Provide a generic base class which provides the whole implementation for headers in Netty
- Provide an extension to this class which allows name type conversions to happen (to accommodate legacy CharSequence to String conversions)
- Update the headers interface to clarify when conversions will happen
- Update the headers data structure so that elements are appended, avoiding unnecessary iteration or collection reversal
Result:
- More unified class hierarchy for headers in Netty
- Improved headers data structure and algorithms
- The headers API more clearly identifies when conversions are required
Related issue: #1133
Motivation:
There is no support for client socket connections via a proxy server in
Netty.
Modifications:
- Add a new module 'handler-proxy'
- Add ProxyHandler and its subclasses to support SOCKS 4a/5 and HTTP(S)
proxy connections
- Add a full parameterized test for most scenarios
- Clean up pom.xml
Result:
A user can make an outgoing connection via proxy servers with only
trivial effort.
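A sketch of an outgoing connection through a SOCKS5 proxy using the new handler (addresses are placeholders):

```java
import java.net.InetSocketAddress;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.proxy.Socks5ProxyHandler;

final class ProxyClientSketch {
    static void connect() throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // The proxy handler goes first; handlers added after it
                            // see a plain connection to the destination.
                            ch.pipeline().addLast(
                                    new Socks5ProxyHandler(new InetSocketAddress("127.0.0.1", 1080)));
                            // ... application handlers ...
                        }
                    });
            b.connect("example.com", 80).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```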
Motivation:
The HTTP/2 server example is not using outbound flow control; it is instead using a FrameWriter directly.
This can lead to flow control errors and other communication-related errors.
Modifications:
- Force the server example to use the outbound flow controller
Result:
- The server example now follows the flow control rules.
Motivation:
It is often helpful to measure the performance of connections, e.g. the
latency and the throughput. This can be performed through benchmarks.
Modification:
This adds a simple but configurable benchmark for websockets into the
example directory. The Netty WebSocket server will echo all received
websocket frames and will provide an HTML/JS page which serves as the
client for the benchmark.
The benchmark also provides a verification mode that verifies the sent
against the received data. This can be used to verify the websocket
frame encoding and decoding functionality.
Result:
A benchmark is added in the form of a further Netty websocket example.
With this benchmark it is easy to measure the performance between Netty and a browser.
Motivation:
The HTTP/2 example can timeout at the client waiting for a response due
to the server not flushing after writing the response.
Modifications:
Updated the server's HelloWorldHttp2Handler to flush after writing the
response.
Result:
The HTTP/2 example runs successfully.
Motivation:
The HTTP/2 codec does not properly test the exception passed to
exceptionCaught() for instanceof Http2Exception (since the exception
will always be wrapped in a PipelineException), so it will never
properly handle Http2Exceptions in the pipeline.
Also, if any streams are present, the connection close logic will execute
twice when a pipeline exception occurs. This is because the exception logic
calls ctx.close() which then triggers the handleInActive() logic to
execute. This clears all of the remaining streams and then attempts to
run the closeListener logic (which has already been run).
Modifications:
Changed exceptionCaught logic to properly extract Http2Exception from
the PipelineException. Also added logic to the closeListener so that it is
only run once.
Changed Http2CodecUtil.toHttp2Exception() to avoid NPE when creating
an exception with cause.getMessage().
Refactored Http2ConnectionHandler to more cleanly separate inbound and
outbound flows (Http2ConnectionDecoder/Http2ConnectionEncoder).
Added a test for verifying that a pipeline exception closes the
connection.
Result:
Exception handling logic is tidied up.
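A hypothetical sketch of the unwrapping idea; the helper shown is illustrative, not the exact code added by this change:

```java
import io.netty.handler.codec.http2.Http2Exception;

final class Http2ExceptionUtilSketch {
    // Walk the cause chain to find an Http2Exception that may be wrapped
    // (e.g. by a pipeline-level exception) before deciding how to handle it.
    static Http2Exception embeddedHttp2Exception(Throwable cause) {
        for (Throwable t = cause; t != null; t = t.getCause()) {
            if (t instanceof Http2Exception) {
                return (Http2Exception) t;
            }
        }
        return null; // not an HTTP/2 error
    }
}
```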
Motivation:
The HTTP/2 codec has some duplication and the read/write interfaces are not cleanly exposed to users of the codec.
Modifications:
- Restructure the AbstractHttp2ConnectionHandler class to be able to extend write behavior before the outbound flow control gets the data
- Add Http2InboundConnectionHandler and Http2OutboundConnectionHandler interfaces and restructure external codec interface around these concepts
Result:
HTTP/2 codec provides a cleaner external interface which is easy to extend for read/write events.
Motivation:
We incorrectly used SslContext.newServerContext() in some places where we needed a client context.
Modifications:
Use SslContext.newClientContext() when using ssl on the client side.
Result:
Working SSL client examples.
Motivation:
The current implementation of the HTTP/2 decompression does not integrate with flow control properly.
The decompression code is giving the post-decompression size to the flow control algorithm which
results in flow control errors at incorrect times.
Modifications:
- DecompressorHttp2FrameReader.java will need to change where it hooks into the HTTP/2 codec
- Enhance unit tests to test this condition
Result:
No more flow control errors because of the decompression design flaw.
Motivation:
The HTTP/2 spec does not restrict headers to being String. The current
implementation of the HTTP/2 codec uses Strings as header keys and
values. We should change this so that header keys and values allow
binary values.
Modifications:
Making Http2Headers based on AsciiString, which is a wrapper around a
byte[].
Various changes throughout the HTTP/2 codec to use the new interface.
Result:
HTTP/2 codec no longer requires string headers.
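A small illustration of the point (not the codec's actual API): AsciiString wraps bytes directly, so header names and values need not round-trip through java.lang.String:

```java
import java.nio.charset.StandardCharsets;

import io.netty.util.AsciiString;

public final class AsciiStringDemo {
    public static void main(String[] args) {
        // Built from a CharSequence, stored internally as bytes.
        AsciiString name = AsciiString.of("content-type");
        // Built directly from a byte[], no String involved.
        AsciiString value = new AsciiString("text/plain".getBytes(StandardCharsets.US_ASCII));
        System.out.println(name + ": " + value);
    }
}
```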