Motivation:
The HTTP/2 codec has some duplication and the read/write interfaces are not cleanly exposed to users of the codec.
Modifications:
-Restructure the AbstractHttp2ConnectionHandler class so that write behavior can be extended before the outbound flow controller gets the data
-Add Http2InboundConnectionHandler and Http2OutboundConnectionHandler interfaces and restructure external codec interface around these concepts
Result:
HTTP/2 codec provides a cleaner external interface which is easy to extend for read/write events.
Motivation:
We incorrectly used SslContext.newServerContext() in some places where we needed a client context.
Modifications:
Use SslContext.newClientContext() when using ssl on the client side.
Result:
Working ssl client examples.
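A minimal sketch of the client-side setup after the fix (assumed to run inside a ChannelInitializer with ch, host and port in scope; InsecureTrustManagerFactory and exception handling are for illustration only):

    // Client side: create a *client* context (not a server one) and add the
    // SSL handler first in the pipeline.
    final SslContext sslCtx =
            SslContext.newClientContext(InsecureTrustManagerFactory.INSTANCE);
    ch.pipeline().addFirst(sslCtx.newHandler(ch.alloc(), host, port));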
Motivation:
The current implementation of the HTTP/2 decompression does not integrate with flow control properly.
The decompression code is giving the post-decompression size to the flow control algorithm which
results in flow control errors at incorrect times.
Modifications:
-DecompressorHttp2FrameReader.java will need to change where it hooks into the HTTP/2 codec
-Enhance unit tests to test this condition
Result:
No more flow control errors because of decompression design flaw
Motivation:
The HTTP/2 spec does not restrict headers to being String. The current
implementation of the HTTP/2 codec uses Strings as header keys and
values. We should change this so that header keys and values allow
binary values.
Modifications:
Making Http2Headers based on AsciiString, which is a wrapper around a
byte[].
Various changes throughout the HTTP/2 codec to use the new interface.
Result:
HTTP/2 codec no longer requires string headers.
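As a rough illustration of what this enables (the AsciiString constructor forms and the Http2Headers mutator are assumptions about the new interface):

    // Header names/values are now AsciiString, a byte[]-backed CharSequence,
    // so binary values survive without a lossy String round trip.
    AsciiString name  = new AsciiString("x-custom-bin");
    AsciiString value = new AsciiString(new byte[] { 0x01, 0x02, 0x03 });
    // Hypothetical use with an Http2Headers instance:
    // headers.add(name, value);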
Motivation:
The HTTP/2 codec does not provide a way to decompress data. This functionality is supported by the HTTP codec and is expected to be a commonly used feature.
Modifications:
-The Http2FrameReader will be modified to allow hooks for decompression
-New classes which detect the compression from HTTP/2 header frames and use the corresponding decompression when HTTP/2 data frames come in
-New unit tests
Result:
The HTTP/2 codec will provide a means to support data decompression
Motivation:
Currently the Executor created by (Nio|Epoll)EventLoopGroup is not correctly shut down.
This might lead to resource shortages, because resources are not freed as soon as possible.
Modifications:
If a (Nio|Epoll)EventLoopGroup creates its internal Executor via a constructor-provided
`ExecutorServiceFactory` object or via
MultithreadEventLoopGroup.newDefaultExecutorService(...), the ExecutorService.shutdown()
method will be called after the (Nio|Epoll)EventLoopGroup is shut down.
ExecutorService.shutdown() will not be called if the Executor object was passed
to the (Nio|Epoll)EventLoopGroup (that is, it was instantiated outside of Netty).
Result:
Correctly release resources on (Nio|Epoll)EventLoopGroup shutdown.
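A sketch of the ownership rule described above (the constructor overloads shown are assumptions about the EventLoopGroup API):

    NioEventLoopGroup owned = new NioEventLoopGroup();               // creates its own executor
    ExecutorService external = Executors.newFixedThreadPool(4);
    NioEventLoopGroup borrowed = new NioEventLoopGroup(4, external); // wraps a caller-supplied executor

    owned.shutdownGracefully();     // also shuts the internally created executor down
    borrowed.shutdownGracefully();  // leaves 'external' untouched
    external.shutdown();            // still the caller's responsibility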
Motivation:
The HTTP/2 specification places restrictions on the cipher suites that can be used. There is no central place to pull the ciphers that are allowed by the specification, supported by different java versions, and recommended by the community.
Modifications:
-HTTP/2 will have a security utility class to define supported ciphers
-netty-handler will be modified to support filtering the supplied list of ciphers to the supported ciphers for the current SSLEngine
Result:
-Netty provides unified support for HTTP/2 cipher lists and ciphers can be pruned by currently supported ciphers
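For illustration, the filtering roughly amounts to the following (the CipherSuiteFilter contract is from netty-handler; the SSLEngine setup and exception handling are simplified):

    // Ciphers allowed by the HTTP/2 spec, kept in one central place:
    List<String> allowed = Http2SecurityUtil.CIPHERS;

    // Prune that list down to what the current SSLEngine actually supports:
    SSLEngine engine = SSLContext.getDefault().createSSLEngine();
    Set<String> supported =
            new HashSet<String>(Arrays.asList(engine.getSupportedCipherSuites()));
    String[] usable = SupportedCipherSuiteFilter.INSTANCE.filterCipherSuites(
            allowed, Arrays.asList(engine.getEnabledCipherSuites()), supported);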
Motivation:
Netty only supports a java NPN implementation provided by npn-api and npn-boot.
There is no java implementation for ALPN.
ALPN is needed to be compliant with the HTTP/2 spec.
Modifications:
-SslContext and JdkSslContext to support ALPN
-JettyNpn* class restructure for NPN and ALPN common aspects
-Pull in alpn-api and alpn-boot optional dependencies for ALPN java implementation
Result:
-Netty provides access to a java implementation of ALPN
Motivation:
The priority information reported by the HTTP/2 to HTTP translation layer is not correct in all situations.
The HTTP translation layer is not using the Http2Connection.Listener interface to track tree restructures.
This incorrect information is being sent up to clients and is misleading.
Modifications:
-Restructure InboundHttp2ToHttpAdapter to allow a default data/header mode
-Extend this interface to provide an optional priority translation layer
Result:
-Priority information being correctly reported in HTTP/2 to HTTP translation layer
-Cleaner code with separation of concerns (optional priority conversion).
Motivation:
HTTP/2 draft 14 came out a couple of weeks ago and we need to keep up
with the spec.
Modifications:
-Revert back to dispatching FullHttpMessage objects instead of individual HttpObjects
-Corrections to HttpObject comparators to support test cases
-New test cases to support sending headers immediately
-Bug fixes cleaned up to ensure the message flow is terminated properly
Result:
Netty HTTP/2 to HTTP/1.x translation layer will support the HTTP/2 draft message flow.
Motivation:
This is just some general cleanup to get rid of the FrameWriter inner
interface within Http2InboundFlowController. It's not necessary since
the flow controller can just use the Http2FrameWriter to send
WINDOW_UPDATE frames.
Modifications:
Updated DefaultHttp2InboundFlowController to use Http2FrameWriter.
Result:
The inbound flow control code is somewhat less smelly :).
Motivation:
This is addressing a TODO in the outbound flow controller. We currently
have a separate writer interface passed into the outbound flow
controller. This is confusing and limiting as to how the flow controller
can perform its writes (e.g. no control over flushing). Instead it would
be better to just let the flow controller use the Http2FrameWriter
directly.
Modifications:
- Added a new Http2DataWriter interface, which is extended by
Http2FrameWriter and Http2OutboundFlowController.
- Removed automatic flushing from Http2DataWriter in order to facilitate
optimizing the case where there are multiple writes.
- Updated DefaultHttp2OutboundFlowController to properly optimize
flushing of the ChannelHandlerContext when multiple writes occur.
Result:
Code is greatly simplified WRT outbound flow control and flushes are
optimized for flow-controlled DATA frames.
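The shared write surface now looks roughly like this (a sketch; the exact parameter lists are assumptions):

    interface Http2DataWriter {
        // No implicit flush: callers (e.g. the flow controller) decide when
        // to flush, so multiple DATA writes can be coalesced.
        ChannelFuture writeData(ChannelHandlerContext ctx, int streamId,
                                ByteBuf data, int padding, boolean endOfStream,
                                ChannelPromise promise);
    }

    interface Http2FrameWriter extends Http2DataWriter { /* other frame types */ }
    interface Http2OutboundFlowController extends Http2DataWriter { /* window bookkeeping */ }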
Motivation:
HTTP/2 draft 14 came out a couple of weeks ago and we need to keep up
with the spec.
Modifications:
- Removed use of segment throughout.
- Added new setting for MAX_FRAME_SIZE. Used by the frame reader/writer
rather than a constant.
- Added new setting for MAX_HEADER_LIST_SIZE. This is currently unused.
- Expanded the header size to 9 bytes. The frame length field is now 3
bytes and added logic for checking that it falls within the valid range.
Result:
Netty will support HTTP/2 draft 14 framing. There will still be some
work to do to be compliant with the HTTP adaptation layer.
Motivation:
The example mishandles two elements:
1) The last message is a LastHttpContent and is not taken into account by
the server handler.
2) The client syncs on the last (chunked) write, but there is no flush
before it, so the sync waits forever.
Modifications:
1) Take the LastHttpContent message into account in the simple GET handler.
2) Remove the sync but add a flush for each POST and multipart POST part.
Result:
The example no longer blocks after the GET test.
Similar changes should also be made in 4.0 and master.
- SocksV[45] -> Socks[45]
- Make encodeAsByteBuf package private with some hassle
- Split SocksMessageEncoder into Socks4MessageEncoder and
Socks5MessageEncoder, and remove the original
- Remove lazy singleton instantiation; we don't need it.
- Remove the deprecated methods
- Fix Javadoc errors
Motivation:
SOCKS 4 and 5 are very different protocols although they share the same
name. It is not possible to incorporate the two protocol versions into
a single package.
Modifications:
- Add a new package called 'socksx' to supersede the 'socks' package.
- Add SOCKS 4/4a support to the 'socksx' package
Result:
codec-socks now supports all SOCKS versions
Motivation:
The HTTP/2 codec currently provides direct callbacks to access stream events/data. The HTTP/2 codec provides the protocol support for HTTP/2 but it does not pass messages up the context pipeline. It would be nice to have a decoder which could collect the data framed by HTTP/2 and translate this into traditional HTTP type objects. This would allow the traditional Netty context pipeline to be used to separate processing concerns (e.g. HttpContentDecompressor). It would also be good to have a layer which can translate FullHttp[Request|Response] objects into HTTP/2 frame outbound events.
Modifications:
Introduce a new InboundHttp2ToHttpAdapter and supporting classes which will translate HTTP/2 stream events/data into HttpObject objects. Introduce a new DelegatingHttp2HttpConnectionHandler which will translate FullHttp[Request|Response] objects to HTTP/2 frame events.
Result:
Introduced HTTP/2 frame events to HttpObject layer.
Introduced FullHttp[Request|Response] to HTTP/2 frame events.
Introduced new unit tests to support new code.
Updated HTTP/2 client example to use new code.
Miscellaneous updates and bug fixes made to support new code.
Related issue: #2250
Motivation:
Prior to this commit, Netty's non blocking EventLoops
were each assigned a fixed thread by which all of the
EventLoop's I/O and handler logic would be performed.
While this is a fine approach for most users of Netty,
some advanced users require more flexibility in
scheduling the EventLoops.
Modifications:
Remove all direct usages of threads in MultithreadEventExecutorGroup,
SingleThreadEventExecutor et al., and introduce an Executor
abstraction instead.
The way to think about this change is that each
iteration of an event loop is now a task that gets scheduled
in a ForkJoinPool.
While the ForkJoinPool is the default, one also has the
ability to plug his/her own Executor (aka thread pool)
into an EventLoop(Group).
Result:
Netty hands off thread management to a ForkJoinPool by default.
Users can also provide their own thread pool implementation and
get some control over scheduling Netty's EventLoops
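In practice a group can then be handed an arbitrary Executor (the constructor overload shown is an assumption; by default a ForkJoinPool-backed executor is created internally):

    // Default: the group builds its own ForkJoinPool-backed executor.
    EventLoopGroup defaults = new NioEventLoopGroup();

    // Advanced: supply your own thread pool and keep control of scheduling.
    Executor myPool = Executors.newCachedThreadPool();
    EventLoopGroup custom = new NioEventLoopGroup(4, myPool);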
Motivation:
There are still a few places in the HTTP/2 code that have the compressed
flag (from pre-draft 13). Need to remove this flag since it's no longer
used.
Modifications:
Various changes to remove the flag from the writing path.
Result:
No references to the compressed flag.
Add permessage-deflate and deflate-frame WebSocket extension
implementations.
Motivation:
Need to compress the payload of HTTP WebSocket frames.
Modifications:
- Move UTF8 checking of WebSocketFrames from the frame decoder to an
external handler.
- Change ZlibCodecFactory to use the ZLib implementation instead of the
JDK one if windowSize or memLevel is different from the default.
- Add WebSocketServerExtensionHandler and
WebSocketClientExtensionHandler to handle WebSocket Extension headers.
- Add DeflateFrame and PermessageDeflate extension implementations.
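Server-side usage ends up looking roughly like this (a sketch assumed to run inside a ChannelInitializer; the handshaker class names are assumptions about the new extension classes):

    ch.pipeline().addLast(new HttpServerCodec());
    ch.pipeline().addLast(new HttpObjectAggregator(65536));
    // Negotiates permessage-deflate / deflate-frame via the extension headers:
    ch.pipeline().addLast(new WebSocketServerExtensionHandler(
            new PerMessageDeflateServerExtensionHandshaker(),
            new DeflateFrameServerExtensionHandshaker()));
    ch.pipeline().addLast(new WebSocketServerProtocolHandler("/ws"));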
Motivation:
When trying out the Http2Client example I noticed that adding a request
payload would not cause a data frame to be written. This seems to be
because writeHeaders completes the promise, and then the writeData
call ends up in FlowControlWriter.writeFrame, where isDone is true, so
the data is released and the call is aborted.
Modifications:
Adding a new promise for the writeData method allows a data frame to be
written.
Result:
A body/payload can now be sent to the server. The example was updated to
simply echo the payload received back to the calling client.
Motivation:
Need to upgrade HTTP/2 implementation to latest draft.
Modifications:
Various changes to support draft 13.
Result:
Support for HTTP/2 draft 13.
Motivation:
HttpOrSpdyChooser can be simplified so the user does not need to implement the getProtocol(...) method.
Modification:
Add implementation for the method. The user can override it if necessary.
Result:
Easier usage of HttpOrSpdyChooser.
Motivation:
OkResponseHandler is the last handler in the pipeline of the HTTP CORS
example. It is responsible for releasing all messages it handled.
Modification:
Extend SimpleChannelInboundHandler instead of ChannelHandlerAdapter
Result:
Fixed a leak
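A sketch of the fix (the callback is channelRead0 on 4.x; on the master branch it may be named messageReceived, and the response details here are illustrative):

    public class OkResponseHandler extends SimpleChannelInboundHandler<Object> {
        @Override
        protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
            // 'msg' is released automatically once this method returns,
            // so the last handler in the pipeline no longer leaks it.
            ctx.writeAndFlush(new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.OK));
        }
    }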
Motivation:
When running the Http2Client, the data returned from the server (the
"Hello World" string) is supposed to be printed, but instead the following
is displayed:
Received message:
Modifications:
Use the aggregated buffer to print out the received content.
Result:
The example now logs the correct data sent from the server:
Received message: Hello World
Motivation:
Pursuit of consistency in method naming
Modifications:
- Remove the 'get' prefix from all HTTP/SPDY message classes
- Fix some inspector warnings
Result:
Consistency
Motivation:
We have quite a bit of code duplication between HTTP/1, HTTP/2, SPDY,
and STOMP codecs, because they all have a notion of 'headers', which is a
multimap of string names and values.
Modifications:
- Add TextHeaders and its default implementation
- Add AsciiString to replace HttpHeaderEntity
- Borrowed some portion from Apache Harmony's java.lang.String.
- Reimplement HttpHeaders, SpdyHeaders, and StompHeaders using
TextHeaders
- Add AsciiHeadersEncoder so that the encoding of a TextHeaders can be reused
  - A dedicated encoder is still used for HTTP headers for better
    performance, though
- Remove shortcut methods in SpdyHeaders
- Remove shortcut methods in SpdyHttpHeaders
- Replace SpdyHeaders.getStatus() with HttpResponseStatus.parseLine()
Result:
- Removed quite a bit of code duplication in the header implementations.
- Slightly better performance thanks to improved header validation and
hash code calculation
Motivation:
Subclasses of AbstractHttp2ConnectionHandler have to implement all frame
handler methods, many of which can be ignored in many cases. Also there
is no easy way to access the connection object.
Modifications:
Added default implementations for frame handler methods to
AbstractHttp2ConnectionHandler, and added an accessor for the
connection.
Also fixed example test for HTTP/2 with cleartext upgrade. It must have
been broken by recent commits.
Result:
AbstractHttp2ConnectionHandler is more subclass-friendly.
Motivation:
The connection, priority tree, and inbound/outbound flow controllers
each maintain a separate map for stream information. This is wasteful
and complicates the design since as streams are added/removed, multiple
structures have to be updated.
Modifications:
- Merging the priority tree into Http2Connection. Then we can use
Http2Connection as the central stream repository.
- Adding observer pattern to Http2Connection so flow controllers can be
told when a new stream is created, closed, etc.
- Adding properties for inboundFlow/outboundFlow state to Http2Stream.
This allows the controller to access flow control state directly from
the stream without requiring additional structures.
- Separating out the StreamRemovalPolicy and creating a "default"
implementation that runs periodic garbage collection. This used to be
internal to the outbound flow controller, but I think it is more general
than that.
Result:
HTTP/2 classes will require less storage for new streams.
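The observer hook described above looks roughly like this (the listener/adapter names and callback signatures are assumptions about the new API):

    Http2Connection connection = new DefaultHttp2Connection(true); // server-side
    connection.addListener(new Http2ConnectionAdapter() {
        @Override
        public void onStreamAdded(Http2Stream stream) {
            // e.g. a flow controller attaches its per-stream window state here
        }
        @Override
        public void onStreamClosed(Http2Stream stream) {
            // and releases it here
        }
    });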
Motivation:
We have different message aggregator implementations for different
protocols, but they are very similar to each other. They all stem
from HttpObjectAggregator. If we provide an abstract class that provides
generic message aggregation functionality, we will remove their code
duplication.
Modifications:
- Add MessageAggregator which provides generic message aggregation
- Reimplement all existing aggregators using MessageAggregator
- Add DecoderResultProvider interface and extend it wherever possible so
that MessageAggregator respects the state of the decoded message
Result:
Less code duplication
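Usage of the existing aggregators is unchanged; e.g. the HTTP one is now just a thin subclass of MessageAggregator (sketch, assumed to run inside a ChannelInitializer):

    ChannelPipeline p = ch.pipeline();
    p.addLast(new HttpServerCodec());
    // Aggregates HttpObjects (message + content chunks) into one FullHttpMessage:
    p.addLast(new HttpObjectAggregator(1048576)); // 1 MiB max content length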
Motivation:
Currently OkResponseHandler returns a DefaultHttpResponse, which is not
correct; it should return a complete HTTP response.
Modifications:
Updated OkResponseHandler to return an instance of
DefaultFullHttpResponse.
Result:
It is now possible to add compression to the example without getting any
errors.
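Roughly, the change is the following (the header handling shown is illustrative):

    // Before: only the initial line + headers; no content and no LastHttpContent,
    // which breaks handlers (e.g. compression) expecting a complete message.
    HttpResponse partial = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);

    // After: a complete response carrying its (empty) content.
    FullHttpResponse full = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
    full.headers().set(HttpHeaders.Names.CONTENT_LENGTH, full.content().readableBytes());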
Motivation:
maven-antrun-plugin does not redirect stdin, and thus it's impossible to
run interactive examples such as securechat-client and telnet-client.
org.codehaus.mojo:exec-maven-plugin redirects stdin, but it buffers
stdout and stderr, and thus an application output is not flushed timely.
Modifications:
Deploy a forked version of exec-maven-plugin which flushes output
buffers in a timely manner.
Result:
Interactive examples work. Launches faster than maven-antrun-plugin.
Motivation:
The examples have not been updated for a long time and show various
issues, which are fixed in this commit.
Modifications:
- Overall simplification to reduce LoC
- Use system properties to get options instead of parsing args.
- Minimize option validation
- Just use System.out/err instead of Logger
- Do not pass config as parameters - just access it directly
- Move the main logic to main(String[]) instead of creating a new
instance meaninglessly
- Update netty-build-21 to make checkstyle not complain
- Remove 'throws Exception' clause if possible
- Line wrap at 120 (previously at 80)
- Add an option to enable SSL for most examples
- Use ChannelFuture.sync() instead of await()
- Use System.out for the actual result. Use System.err otherwise.
- Delete examples that are not very useful:
- applet
- websocket/html5
- websocketx/sslserver
- localecho/multithreaded
- Add run-example.sh which simplifies launching an example from command
line
- Rewrite FileServer example
Result:
Shorter and simpler examples. A user can focus more on what it actually
does than miscellaneous stuff. A user can launch an example very
easily.
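For example, options are now read from system properties like this (property names are illustrative):

    static final boolean SSL  = System.getProperty("ssl") != null;
    static final int     PORT = Integer.parseInt(System.getProperty("port", "8080"));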
Motivation:
exec-maven-plugin does not flush stdout and stderr, making the console
output from the examples invisible to users
Modification:
Use maven-antrun-plugin instead
Result:
A user sees the output from the examples immediately.
Motivation:
According to TLS ALPN draft-05, a client sends the list of the supported
protocols and a server responds with the selected protocol, which is
different from NPN. Therefore, ApplicationProtocolSelector won't work
with ALPN
Modifications:
- Use Iterable<String> to list the supported protocols on the client
side, rather than using ApplicationProtocolSelector
- Remove ApplicationProtocolSelector
Result:
Future compatibility with TLS ALPN
Motivation:
- OpenSslEngine and JDK SSLEngine (+ Jetty NPN) have different APIs to
support NextProtoNego extension.
- It is impossible to configure NPN with SslContext when the provider
type is JDK.
Modification:
- Implement NextProtoNego extension by overriding the behavior of
SSLSession.getProtocol() for both OpenSSLEngine and JDK SSLEngine.
- SSLSession.getProtocol() now returns a string delimited by a colon (':')
where the first component is the transport protocol (e.g. TLSv1.2)
and the second component is the name of the application protocol
- Remove the direct reference of Jetty NPN classes from the examples
- Add SslContext.newApplicationProtocolSelector
Result:
- A user can now use both JDK SSLEngine and OpenSslEngine for NPN-based
protocols such as HTTP2 and SPDY
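A minimal sketch of how a user can read the negotiated application protocol under this scheme (assuming the colon-delimited format described above):

    static String applicationProtocol(SSLEngine engine) {
        String protocol = engine.getSession().getProtocol(); // e.g. "TLSv1.2:spdy/3.1"
        int idx = protocol.indexOf(':');
        return idx < 0 ? null : protocol.substring(idx + 1); // null when nothing was negotiated
    }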
Motivation:
- There's no way to pass an argument to an example.
- Assigning a Maven profile for each example is overkill.
It makes the pom.xml crowded.
Modifications:
- Remove example profiles from example/pom.xml
- Keep the list of examples in run-example.sh
- run-example.sh passes all options to exec-maven-plugin.
For example, we can now do this:
./run-example.sh -Dssl -Dport=443 http-server
Result:
- It's much easier to add a new example and provide an easy way to
launch it.
- We can still pass an arbitrary argument to the example being launched.
(I'll update all examples to make them get their options from system
properties rather than from args[].)
Motivation:
Build fails with JDK 8 because npn-boot does not work with JDK 8
Modifications:
Do not specify bootclasspath when on JDK 8
Result:
Build is green again.
Motivation:
- example/pom.xml has quite a bit of duplication.
- We expect that we depend on npn-boot in more than one module in the
near future. (e.g. handler, codec-http, and codec-http2)
Modification:
- Deduplicate the profiles in example/pom.xml
- Move the build configuration related with npn-boot to the parent pom.
- Add run-example.sh that helps a user launch an example easily
Result:
- Cleaner build files
- Easier to add a new example
- Easier to launch an example
- Easier to run the tests that rely on npn-boot in the future
Motivation:
It's useful to have the netty-tcnative dependency in netty-example because
we can play with OpenSslEngine from our IDE.
Modifications:
Add netty-tcnative to example/pom.xml