Motivation:
DefaultHttp2FrameWriter has constructors that it would be a hassle to
expose as configuration parameters on Http2Codec. We should instead
make a builder for Http2Codec.
Modifications:
Get rid of the public constructors on Http2Codec and instead make sure
you can always use the builder where you would have used the constructor
before.
Result:
Http2Codec can be configured more flexibly, and the SensitivityDetector
can be configured.
Motivation:
The Uptime example lacks a server.
UptimeClient's code style is a little different from the other examples, which may confuse readers.
We don't need to create a new Bootstrap instance each time the client reconnects to the server.
Modification:
Add UptimeServer and UptimeServerHandler which simply accept all connections and discard all messages.
Change UptimeClient's code style.
Share a single Bootstrap instance.
Result:
Uptime server support.
Consistent code style.
Single Bootstrap for all reconnections.
Motivation:
It is generally useful to have origin HTTP servers respond to
"Expect: 100-continue" as soon as possible, but applications without an
HttpObjectAggregator in their pipelines must use boilerplate to do so.
Modifications:
Introduce the HttpServerExpectContinueHandler handler to make it easier.
Result:
Less boilerplate for HTTP application authors.
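A minimal pipeline sketch of how the new handler is intended to be used; the surrounding initializer and the placement right after HttpServerCodec are illustrative assumptions, not taken from the text above.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerExpectContinueHandler;

public class ExpectContinueServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new HttpServerCodec())
          // Answers 'Expect: 100-continue' requests right away, without requiring
          // an HttpObjectAggregator or hand-written boilerplate in the application.
          .addLast(new HttpServerExpectContinueHandler());
        // application handlers follow here
    }
}
```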
Motivation:
We did not correctly manage the life-cycle of the buffers / frames in our http2 multiplex example, which led to a memory leak.
Modifications:
- Correctly release the frame if it is not echoed back to the remote peer.
- Do not retain the content before echoing it back to the remote peer.
Result:
No more leak in the example, fixes [#6636].
https://github.com/netty/netty-tcnative/pull/215
Motivation
OCSP stapling (formally known as the TLS Certificate Status Request extension) is an alternative approach for checking the revocation status of X.509 certificates. Servers can preemptively fetch the OCSP response from the CA's responder, cache it for some period of time, and pass it along (a.k.a. staple it) during the TLS handshake. The client no longer has to reach out on its own to the CA to check the validity of a certificate. Some of the key benefits are:
1) Speed. The client doesn't have to crosscheck the certificate.
2) Efficiency. The Internet is no longer DDoS'ing the CA's OCSP responder servers.
3) Safety. Less operational dependence on the CA. Certificate owners can sustain short CA outages.
4) Privacy. The CA can no longer track the users of a certificate.
https://en.wikipedia.org/wiki/OCSP_stapling
https://letsencrypt.org/2016/10/24/squarespace-ocsp-impl.html
Modifications
https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html
Result
High-level API to enable OCSP stapling
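A hedged sketch of what enabling stapling through the high-level API might look like; the enableOcsp(...) builder option and the OpenSSL provider requirement are assumptions here, and attaching the cached OCSP response to each handshake is left out.

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

import java.io.File;

public final class OcspStaplingSketch {
    public static SslContext build(File certChain, File privateKey) throws Exception {
        return SslContextBuilder.forServer(certChain, privateKey)
                .sslProvider(SslProvider.OPENSSL) // stapling relies on netty-tcnative/OpenSSL
                .enableOcsp(true)                 // assumption: name of the high-level switch
                .build();
        // The pre-fetched OCSP response would then be handed to the engine by the
        // application before the handshake completes (details omitted).
    }
}
```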
Motivation:
HTTP/2 supports two ways to start on a non-TLS TCP connection:
the HTTP/1.1 upgrade and the prior-knowledge approach.
Currently, the http2-server example only supports starting via upgrade.
We can do a simple dispatch by peeking at the first bytes of the inbound
data to see whether they match the prior-knowledge preface or not, and
determine which handlers to set up in the pipeline.
Modifications:
Add ClearTextHttp2ServerUpgradeHandler to support starting HTTP/2 over
cleartext with both approaches, and update example/http2-server to support
this functionality.
Result:
Netty HTTP/2 and the example http2-server accept both ways of starting
HTTP/2 over cleartext.
Fixed memory leak problem
Update fields to final
Rename ClearText to cleartext
Addressed comments for code improvement
- Always prefer static, final, and private if possible
- Add UnstableApi annotation
- Used EmbeddedChannel.readInbound instead of unhandled inbound handler
- More assertions
Update javadoc for CleartextHttp2ServerUpgradeHandler
Rename ClearTextHttp2ServerUpgradeHandler to CleartextHttp2ServerUpgradeHandler
Removed redundant pipeline-configuration code
nit: PriorKnowledgeHandler
Removed Mockito.spy, investigate conn state instead
Add Http2UpgradeEvent
Check null of the constructor arguments
Rename Http2UpgradeEvent to PriorKnowledgeUpgradeEvent
Update unit test
Motivation:
Conscrypt is a Java Security provider that wraps OpenSSL (specifically BoringSSL). It's a possible alternative to Netty-tcnative that we should explore. So this commit is just to enable us to further investigate its use.
Modifications:
Modifying the SslContext creation path to support the Conscrypt provider.
Result:
Netty will support OpenSSL via Conscrypt.
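A hedged sketch of plugging Conscrypt into the SslContext creation path; the sslContextProvider(...) hook and the Conscrypt.newProvider() entry point are assumptions about the exact wiring.

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;
import org.conscrypt.Conscrypt;

public final class ConscryptClientContext {
    public static SslContext build() throws Exception {
        return SslContextBuilder.forClient()
                .sslProvider(SslProvider.JDK)                 // keep the JDK code path...
                .sslContextProvider(Conscrypt.newProvider())  // ...backed by Conscrypt (BoringSSL)
                .build();
    }
}
```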
Motivation:
Calling a static method is faster than a virtual method call.
Modifications:
Add the 'static' keyword to methods where it was missing.
Result:
Slightly faster method calls.
Motivation:
2fd42cfc6b fixed a bug related to encoding headers but it also introduced a throws statement onto the Http2FrameWriter methods which write headers. This throws statement makes the API more verbose and is not necessary because we can communicate the failure in the ChannelFuture that is returned by these methods.
Modifications:
- Remove throws from all Http2FrameWriter methods.
Result:
Http2FrameWriter APIs do not propagate checked exceptions.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a DNS
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
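A minimal sketch of the doPrivileged wrapping pattern described above; this is a generic illustration, and the real SocketUtils API in Netty may use different method names and signatures.

```java
import java.io.IOException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

final class PrivilegedSocketOps {
    private PrivilegedSocketOps() { }

    static boolean connect(final SocketChannel channel, final SocketAddress remote) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<Boolean>() {
                @Override
                public Boolean run() throws IOException {
                    // The SocketPermission check happens inside this privileged block, so only
                    // the code source containing this class needs to be granted the permission.
                    return channel.connect(remote);
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```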
Motivation:
If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated it aborts immediately and sends a RST_STREAM frame for whatever stream caused the issue. Because HPACK is stateful this means that the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to continue processing, to keep the HPACK state in sync with our peer, or completely close the connection.
If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE it checks the limit and encodes at the same time. This may result in modifying the HPACK local state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.
Modifications:
- HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as a limit. We should honor the specifications default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.
Result:
Fixes https://github.com/netty/netty/issues/6209.
Motivation:
The HTTP/2 helloworld client example has 2 bugs:
1. HttpResponseHandler has a map which is accessed from multiple threads, but the map is not thread safe.
2. Requests are flushed and may be completely written, and the responses may be received/processed by Netty, before an element is inserted into the HttpResponseHandler map. This may result in an 'unexpected message' error even though the message has actually been sent.
Modifications:
- HttpResponseHandler should use a thread safe map
- Http2Client shouldn't flush until entries are added to the HttpResponseHandler map
Result:
Fixes https://github.com/netty/netty/issues/6165.
Motivation:
The responsibility for retaining the settings values and enforcing the settings constraints is spread out in different areas of the code and may be initialized with different values than the default specified in the RFC. This should not be allowed by default and interfaces which are responsible for maintaining/enforcing settings state should clearly indicate the restrictions that they should only be set by the codec upon receipt of a SETTINGS ACK frame.
Modifications:
- Encoder, Decoder, and the Headers Encoder/Decoder no longer expose public constructors that allow the default settings to be changed.
- Http2HeadersDecoder#maxHeaderSize() exists to provide some bound when headers/continuation frames are being aggregated. However this is roughly the same as SETTINGS_MAX_HEADER_LIST_SIZE (besides the 32 byte octet for each header field) and can be used instead of attempting to keep the two independent values in sync.
- Encoding headers now enforces SETTINGS_MAX_HEADER_LIST_SIZE at the octet level. Previously the header encoder compared the number of header key/value pairs against SETTINGS_MAX_HEADER_LIST_SIZE instead of the number of octets (plus 32 bytes overhead).
- DefaultHttp2ConnectionDecoder#onData calls shouldIgnoreHeadersOrDataFrame but may swallow exceptions from this method. This means a STREAM_RST frame may not be sent when it should for an unknown stream and thus violate the RFC. The exception is no longer swallowed.
Result:
Default settings state is enforced and interfaces related to settings state are clarified.
Motivation:
We need to duplicate the buffer before passing it to writeBytes(...) as it will increase the readerIndex().
Modifications:
Call duplicate().
Result:
No more IndexOutOfBoundsException when running the multiplex example.
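A small illustration of why the duplicate is needed: writeBytes(ByteBuf) advances the reader index of its argument, while a duplicate has independent indices. The buffers used here are for demonstration only.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class DuplicateBeforeWriteExample {
    public static void main(String[] args) {
        ByteBuf src = Unpooled.copiedBuffer(new byte[] { 1, 2, 3, 4 });
        ByteBuf dst = Unpooled.buffer(4);

        // Writing from a duplicate leaves src.readerIndex() untouched,
        // so src can still be read (or echoed) afterwards.
        dst.writeBytes(src.duplicate());

        System.out.println(src.readerIndex());   // 0
        System.out.println(dst.readableBytes()); // 4

        src.release();
        dst.release();
    }
}
```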
Motivation:
We called ctx.flush() which is not correct as it will not call flowController().writePendingBytes().
Modifications:
Call flush(ChannelHandlerContext) and so also call flowController().writePendingBytes().
Result:
Correct HTTP/2 example.
Motivation:
Http2ServerInitializer uses a SimpleChannelInboundHandler in an attempt to ease putting an HttpObjectAggregator in the pipeline when no upgrade is attempted. However the message is double released because it is fired up the pipeline (where it will be released) and also released by SimpleChannelInboundHandler in a finally block.
Modifications:
- Retain the message if we fire it up the pipeline
Result:
HTTP/2 examples don't encounter a reference count error if no upgrade was attempted.
Motivation:
Since Netty shaded JCTools, the OSGi manifest is no longer correct. It claims to
have an optional import "org.jctools.queues;resolution:=optional,org.jctools.queues.atomic;resolution:=optional,org.jctools.util;resolution:=optional".
However, since it is shaded, this is no longer true.
This was noticed when making JCTools a real bundle; Netty still resolved it as an
optional import.
Modifications:
Modify the generated manifest by no longer analyzing org.jctools for imports.
A manual setting of sun.misc as optional was required.
Result:
Netty OSGi bundle will no longer interfere with a JCTools bundle.
Motivation:
As we use compression in the websocketx example, we need to allow extensions, as otherwise the example does not work.
Modifications:
Allow extensions.
Result:
The websocketx example works.
Motivation:
The HTTP Static File Server seems to ignore filenames that don't contain only Latin characters, but these days people wish to serve files in other languages, or even include some emojis in the filename. Although these files are not displayed in the directory listing, they are accessible via HTTP requests. This fix will make such files more visible.
Modifications:
I've changed the ALLOWED_FILE_NAME pattern to disallow only files that start with an underscore, a dash, or a dot (such as .htaccess), and to hide other "unsafe" filenames that may be used to trigger security issues. Other filenames, including ones containing the space character, are allowed.
I've also added charset encoding to the directory listing, because the browser default MAY be configured for ISO-8859-1 instead of UTF-8.
Result:
Directory listing will work for files whose names contain the space character, as well as other Unicode characters.
Motivation:
When the Netty HTTP Static File Server does directory listing, it exposes the user.dir system property to the user. Although this isn't a security issue, it is bad practice to show it, and the user expects to see the server's virtual root instead, which is the absolute path as mentioned in the RFC.
Modifications:
The sendListing method receives a third argument, which is the requested URI; this is what should be displayed on the page instead of the filesystem path.
Result:
The directory listing pages will show the virtual path as described in the URI and not the real filesystem path.
Removed fallback method
Motivation:
A SOCKS5 proxy supports resolving domain names on the server side. When testing
with curl, the SocksServer in example package only works for proxy
request with IP, not with domain name (`--socks5` vs
`--socks5-hostname`). As curl is widely used, it should work with
the example provided.
Modifications:
Pass the address and port to the Socks5CommandResponse so that it
works for curl.
Result:
`curl --socks5-hostname` works as expected.
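A hedged sketch of the fix described above: return the bound address and port in the SUCCESS response instead of leaving them empty. The exact DefaultSocks5CommandResponse constructor used here is an assumption.

```java
import io.netty.handler.codec.socksx.v5.DefaultSocks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5AddressType;
import io.netty.handler.codec.socksx.v5.Socks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5CommandStatus;

public final class Socks5ResponseSketch {
    public static Socks5CommandResponse success(Socks5AddressType addressType,
                                                String boundAddress, int boundPort) {
        // Echoing the (possibly domain) address and port back lets clients such as
        // `curl --socks5-hostname` complete the handshake.
        return new DefaultSocks5CommandResponse(
                Socks5CommandStatus.SUCCESS, addressType, boundAddress, boundPort);
    }
}
```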
Motivation:
We do not need to mark the field as volatile, and doing so may confuse people.
Modifications:
Remove volatile and add a comment to explain why it's not needed.
Result:
More correct example.
Motivation:
Quote from issue 4914:
"Http2MultiplexCodec currently does two things: mapping the existing h2 API to frames and managing the child channels.
It would be better if the two parts were separated. This would allow less-coupled development of the HTTP/2 handlers (flow control could be its own handler, for instance) and allow applications to insert themselves between all streams and the codec, which permits custom logic and could be used, in part, to implement custom frame types.
It would also greatly ease testing, as the child channel could be tested by itself without dealing with how frames are encoded on the wire."
Modifications:
- Split the Http2MultiplexCodec into Http2FrameCodec and Http2MultiplexCodec. The Http2FrameCodec interacts with the existing HTTP/2 callback-based API, while the Http2MultiplexCodec is completely independent of it and simply multiplexes Http2StreamFrames to the child channels. Additionally, the Http2Codec handler is introduced, which is a convenience class that simply sets up the Http2FrameCodec and Http2MultiplexCodec in the channel pipeline and removes itself.
- Improved test coverage quite a bit.
Result:
- The original Http2MultiplexCodec is split into Http2FrameCodec and Http2MultiplexCodec.
- More tests for higher confidence in the code.
Motivation:
It seems like IntelliJ IDEA is confused by the shading of JCTools.
Modifications:
Add jctools as a dependency with runtime scope to the examples as a workaround.
Result:
It's possible again to run the examples in the IDE.
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods, which create a recyclable derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it so that its callers don't need to add dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
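A minimal usage sketch of the retained derived-buffer methods added above; the allocator and buffer contents are illustrative only.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class RetainedSliceExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16);
        buf.writeLong(42L);

        // retainedSlice() behaves like slice().retain(), but the returned buffer
        // carries its own reference count and can itself be recycled.
        ByteBuf slice = buf.retainedSlice(0, 8);
        try {
            System.out.println(slice.readLong());
        } finally {
            slice.release(); // releases the derived buffer
            buf.release();   // releases the original buffer
        }
    }
}
```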
Motivation:
Currently the way a 'null' origin (a request that most often indicates
that the request is coming from a file on the local file system) is
handled is incorrect. We are currently returning a wildcard origin '*'
but should be returning 'null' for the 'Access-Control-Allow-Origin'
header, which is valid according to the specification [1].
Modifications:
Updated CorsHandler to add a 'null' origin instead of the '*' origin in
the case the request origin is 'null'.
Result:
All tests pass, and the CORS example works, as does the cors.html example if
you serve it by opening the file directly in a web browser.
[1]
https://www.w3.org/TR/cors/#access-control-allow-origin-response-header
Motivation:
- Add an example Redis client using codec-redis.
Modifications:
- Add an example Redis client that reads input from STDIN and writes output to STDOUT.
Result:
- Added an example Redis client using codec-redis.
Motivation:
We lately added ByteBuf.isReadOnly(), which allows detecting whether a buffer is read-only or not. We should add ByteBuf.asReadOnly() to allow easy access to a read-only version of a buffer.
Modifications:
- Add ByteBuf.asReadOnly()
- Deprecate Unpooled.unmodifiableBuffer(ByteBuf)
Result:
More consistent api.
Motivation:
People need to set all length fields manually when creating a memcache message, which is error prone. See #2736 for more discussion.
Modifications:
This patch adds the logic to update the keyLength, extrasLength and totalBodyLength when key, extras or content is set.
Result:
The length fields of memcache messages will be updated automatically.
Motivation:
This allows using handlers for Streams in normal Netty-style. Frames are
read/written to the channel as messages, not directly as a
callback/method call. Handlers allow mixing and can ease HTTP/1 and
HTTP/2 interoperability by eventually supporting HTTP/1 handlers in
HTTP/2 and vice versa.
Modifications:
New handler Http2MultiplexCodec that converts from the current HTTP/2
API to a message-based API and child channels for streams.
Result:
The basics are done for the server side: new streams trigger creation of new
channels in much the same way that new connections trigger new
channel creation. The basic frames HEADERS and DATA are handled, as are
GOAWAY and RST_STREAM.
Inbound flow control is implemented, but outbound is not. That will be
done later, along with not completing write promises on the child
channel until the write actually completes on the parent.
There is not yet support for outbound priority/weight, push promises,
and many other features.
There is a generic Object that may be set on stream frames. This also
paves the way for client-side support which needs a way to refer to
yet-to-be-created streams (due to how HEADERS allocates a stream id, and
the allocation order must be the same as transmission order).
Motivation:
It is easier to support WebSockets in a server application by using the WebSocketServerProtocolHandshakeHandler class rather than reinventing its functionality. But currently it handles all HTTP requests as if they were WebSocket handshake requests.
Modifications:
Check whether the HTTP request path equals the configured WebSocket path.
Fixed the example websocket server implementation.
Result:
WebSocketServerProtocolHandshakeHandler handles only websocket handshake requests.
Motivation:
The key can be a ByteBuf to avoid converting between ByteBuf and String. See #3689.
Modifications:
Replace the type of key with ByteBuf.
Result:
The type of key becomes ByteBuf.
Motivation:
As we can now easily build statically linked versions of tcnative, it makes sense to run our Netty build against all of them.
This helps to ensure our code works with libressl, openssl and boringssl.
Modifications:
Allow specifying -Dtcnative.artifactId= and -Dtcnative.version=
Result:
Easy to run netty build against different tcnative flavors.
Motivation:
WebSocketClientCompressionHandler is stateless so it should be @Sharable.
Modifications:
Add @Sharable annotation to WebSocketClientCompressionHandler, make constructor private and add static field to get the instance.
Result:
Less object creation.
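A sketch of reusing the shared instance across channels; the surrounding client pipeline setup is an illustrative assumption.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.HttpClientCodec;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.websocketx.extensions.compression.WebSocketClientCompressionHandler;

public final class WsClientPipelineSketch {
    public static void configure(ChannelPipeline p) {
        p.addLast(new HttpClientCodec());
        p.addLast(new HttpObjectAggregator(8192));
        // Stateless and @Sharable, so the single INSTANCE can be added to every channel.
        p.addLast(WebSocketClientCompressionHandler.INSTANCE);
    }
}
```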
Motivation:
Boxing/unboxing can be avoided.
Modifications:
Use parseInt/parseLong to avoid unnecessary boxing/unboxing.
Result:
Remove unnecessary boxing/unboxing.
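A small illustration of the difference: parseInt/parseLong return primitives directly, while valueOf goes through a wrapper object that is immediately unboxed again.

```java
public final class ParsePrimitiveExample {
    public static void main(String[] args) {
        String s = "12345";
        int viaValueOf = Integer.valueOf(s);   // boxes to Integer, then auto-unboxes
        int viaParse = Integer.parseInt(s);    // no wrapper object involved
        long viaParseLong = Long.parseLong(s); // same for long values
        System.out.println(viaValueOf + viaParse + viaParseLong);
    }
}
```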
Motivation:
We had to add a new profile for each OpenJDK/OracleJDK release to make
Maven choose the correct alpn-boot.jar and npn-boot.jar. As a result,
our pom.xml has a large number of `<profile/>` sections.
Modifications:
- Use jetty-alpn-agent, which chooses the correct alpn-boot.jar and
npn-boot.jar automatically to remove all the nasty profile sections
from pom.xml
- Visit https://github.com/trustin/jetty-alpn-agent for more info
Result:
Cleaner pom.xml
Motivation:
Warnings in IDE, unclean code, negligible performance impact.
Modification:
Deletion of unused imports
Result:
No more warnings in IDE, cleaner code, negligible performance improvement.
Motivation:
Javadoc reports errors about invalid docs.
Modifications:
Fix some errors reported by javadoc.
Result:
A lot of javadoc errors are fixed by this patch.
Motivation:
We have websocket extension support (with compression) in old master. We should port this to 4.1
Modifications:
Backport relevant code.
Result:
websocket extension support (with compression) is now in 4.1.
Related: #4572 #4574
Motivation:
Consistency in our builder API design
Modifications:
- Add AbstractInboundHttp2ToHttpAdapterBuilder
- Replace the old 'Builder's with InboundHttp2ToHttpAdapterBuilder and
InboundHttp2ToHttpPriorityAdapterBuilder
Result:
Builder API consistency
Motivation:
Consistency in API design
Modifications:
- Deprecate CorsConfig.Builder and its factory methods
- Deprecate CorsConfig.DateValueGenerator
- Add CorsConfigBuilder and its factory methods
- Fix typo (curcuit -> circuit)
Result:
Consistency with other builder APIs such as SslContextBuilder and
Http2ConnectionHandlerBuilder
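A hedged sketch of the new top-level builder; the particular options shown (allowedRequestMethods, allowCredentials) are illustrative assumptions rather than a required configuration.

```java
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.cors.CorsConfig;
import io.netty.handler.codec.http.cors.CorsConfigBuilder;

public final class CorsConfigExample {
    public static CorsConfig build() {
        return CorsConfigBuilder.forOrigin("https://example.com") // hypothetical origin
                .allowedRequestMethods(HttpMethod.GET, HttpMethod.POST)
                .allowCredentials()
                .build();
    }
}
```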
Related: #4572
Motivation:
- A user might want to extend Http2ConnectionHandler and define his/her
own static inner Builder class that extends
Http2ConnectionHandler.BuilderBase. This introduces potential
confusion because there's already Http2ConnectionHandler.Builder. Your
IDE will warn about this name duplication as well.
- BuilderBase exposes all setters with the public modifier. A user's Builder
might not want to expose them, to enforce a certain configuration.
There's no way to hide them because they are already public and
final.
- BuilderBase.build(Http2ConnectionDecoder, Http2ConnectionEncoder)
ignores most properties exposed by BuilderBase, such as
validateHeaders, frameLogger and encoderEnforceMaxConcurrentStreams.
If any build() method ignores the properties exposed by the builder,
there's something wrong.
- A user's Builder that extends BuilderBase might want to require more
parameters in build(). There's no way to do that cleanly because
build() is public and final already.
Modifications:
- Make BuilderBase and Builder top-level so that there's no duplicate
name issue anymore.
- Add AbstractHttp2ConnectionHandlerBuilder
- Add Http2ConnectionHandlerBuilder
- Add HttpToHttp2ConnectionHandlerBuilder
- Make all builder methods in AbstractHttp2ConnectionHandlerBuilder
protected so that a subclass can choose which methods to expose
- Provide only a single build() method
- Add connection() and codec() so that a user can still specify
Http2Connection or Http2Connection(En|De)coder explicitly
- Implement a proper state-validation mechanism so that invoking
conflicting setters is prevented
Result:
Less confusing yet flexible builder API
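A hedged sketch of the resulting top-level builder; the exact set of setters it exposes publicly (server(), frameLogger(), ...) is an assumption here.

```java
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2ConnectionHandlerBuilder;
import io.netty.handler.codec.http2.Http2FrameLogger;
import io.netty.handler.logging.LogLevel;

public final class ConnectionHandlerBuilderExample {
    public static Http2ConnectionHandler build() {
        // A real server would also configure a Http2FrameListener, either through the
        // builder or later on the handler's decoder.
        return new Http2ConnectionHandlerBuilder()
                .server(true)
                .frameLogger(new Http2FrameLogger(LogLevel.INFO, "http2"))
                .build();
    }
}
```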
Motivation:
The HTTP/2 client example is not validating the results of ALPN if TLS is enabled.
Modifications:
- Use ApplicationProtocolNegotiationHandler to validate ALPN results.
Result:
Client example validates ALPN results.
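A sketch of validating the ALPN result along the lines described above; the handlers configured for each protocol are left as placeholders.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.ssl.ApplicationProtocolNames;
import io.netty.handler.ssl.ApplicationProtocolNegotiationHandler;

public class AlpnValidationHandler extends ApplicationProtocolNegotiationHandler {
    public AlpnValidationHandler() {
        super(ApplicationProtocolNames.HTTP_1_1); // fallback if no protocol was negotiated
    }

    @Override
    protected void configurePipeline(ChannelHandlerContext ctx, String protocol) {
        if (ApplicationProtocolNames.HTTP_2.equals(protocol)) {
            // configure the HTTP/2 handlers here
            return;
        }
        if (ApplicationProtocolNames.HTTP_1_1.equals(protocol)) {
            // configure the HTTP/1.1 handlers here
            return;
        }
        throw new IllegalStateException("unknown protocol: " + protocol);
    }
}
```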
Motivation:
The proxy example contains some code that is not needed. This can confuse the reader.
Modifications:
Remove the not needed ctx.write(...).
Result:
Less confusing code.
Motivation:
Using the builder pattern for Http2ConnectionHandler (and subclasses) would be advantageous for the following reasons:
1. Provides the consistent construction afforded by the builder pattern for 'optional' arguments. Users can specify these options 1 time in the builder and then re-use the builder after this.
2. Enforces that the Http2ConnectionHandler's internals (the decoder's Http2FrameListener) are initialized after construction.
Modifications:
- Add an extensible builder which can be used to build Http2ConnectionHandler objects
- Update classes which inherit from Http2ConnectionHandler
Result:
It is easier to specify options and construct Http2ConnectionHandler objects.
Motivation:
It is often the case that implementations of Http2FrameListener will want to send responses when data is read. The Http2FrameListener needs access to the Http2ConnectionHandler (or the encoder contained within) to be able to send responses. However the Http2ConnectionHandler requires a Http2FrameListener instance to be passed in during construction time. This creates a cyclic dependency which can make it difficult to cleanly accomplish this relationship.
Modifications:
- Add Http2ConnectionDecoder.frameListener(..) method to set the frame listener. This will allow the listener to be set after construction.
Result:
Classes which inherit from Http2ConnectionHandler can more cleanly set the Http2FrameListener.
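A hedged sketch of the late-wiring pattern this enables: the handler (and its encoder) exist first, and the frame listener is attached afterwards. The listener shown is an empty placeholder.

```java
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2EventAdapter;

public final class LateFrameListenerWiring {
    public static void wire(Http2ConnectionHandler handler) {
        // The listener can capture handler.encoder() for writing responses, even though
        // the handler itself was constructed without a listener.
        handler.decoder().frameListener(new Http2EventAdapter() {
            // override onHeadersRead/onDataRead here and respond via the captured encoder
        });
    }
}
```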
Motivation:
The latest netty-tcnative fixes a bug in determining the version of the runtime openssl lib. It also publishes an artifact with the classifier linux-<arch>-fedora for fedora-based systems.
Modifications:
Modified the build files to use the "-fedora" classifier when appropriate for tcnative. Care is taken, however, to not change the classifier for the native epoll transport.
Result:
Netty is updated to the new shiny netty-tcnative.
Motivation:
Http2CodecUtils has some static variables which are defined as Strings instead of CharSequences. One of these constants is used as a header name and should be an AsciiString.
Modifications:
- Change the String defines in Http2CodecUtils to CharSequence
Result:
Types are more consistently using CharSequence and adding the upgrade header will require less work.
Motivation:
The DefaultHttp2Headers code is throwing an IllegalArgumentException if an invalid character is detected. This is being ignored by the HTTP/2 codec instead of generating a GOAWAY.
Modifications:
- Throw an Http2Exception of type PROTOCOL_ERROR in accordance with https://tools.ietf.org/html/rfc7540#section-8.1.2.6
- Update examples which were building invalid headers
Result:
More compliant with https://tools.ietf.org/html/rfc7540#section-8.1.2.6
Motivation:
There currently exist http.HttpUtil, http2.HttpUtil, and http.HttpHeaderUtil. Having 2 HttpUtil classes can be confusing, and the utility methods in the http package could be consolidated.
Modifications:
- Rename http2.HttpUtil to http2.HttpConversionUtil
- Move http.HttpHeaderUtil methods into http.HttpUtil
Result:
Consolidated utilities whose names don't overlap.
Fixes https://github.com/netty/netty/issues/4120
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.
Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications that are causing hot spots.
Result:
Performance is now on par with 4.0.
Motivation:
We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that the performance of randomly accessing a header is also quite poor. The main
concern however was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory but also wastes a lot due to the underlying hashmaps having
to be resized potentially several times as new headers are being inserted.
This is tracked as issue #3600.
Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
Result:
- Significantly fewer lines of code in the implementation. While the total commit is still
roughly 400 lines smaller, the actual implementation is a lot smaller. I just added some more
tests and microbenchmarks.
- Overall performance is up. The current implementation should be significantly faster
for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
no way a TreeMap can have the same iteration performance as a linked list (as used in the
current headers implementation). That's totally fine though, because when looking at the
benchmark results @ejona86 pointed out that the performance of the headers is completely
dominated by insertion, that is insertion is so significantly faster in the new implementation
that it does make up for several times the iteration speed. You can't iterate what you haven't
inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
only down for HTTP, it's significantly improved for HTTP/2).
- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
was generated by [2] using JOL.
- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
improved for HTTP, SPDY and STOMP as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
Motivation:
The HTTP/2 hello world example server should be expecting a FullHttpRequest when falling back to HTTP/1.x mode.
Modifications:
- HelloWorldHttp1Handler should process FullHttpRequest objects
- Http2ServerInitializer should insert an HttpObjectAggregator into the pipeline if no upgrade was attempted
Result:
Responses from the HelloWorldHttp1Handler should only come after full HTTP requests are received.
Proposal to fix issue #3636
Motivations:
Currently, while adding the next buffers to the decoder
(`decoder.offer()`), there is no way to access the current HTTP
object being decoded, since it only becomes available once fully
decoded, through `decoder.hasNext()`.
Some could want to know the progression of the overall transfer but also
per HTTP object.
While overall progression could be computed using (if available) the global
Content-Length of the request and taking into account each HttpContent
size, the per-HttpData object progression is unknown.
Modifications:
1) For HTTP objects, `AbstractHttpData` has 2 protected properties named
`definedSize` and `size`, respectively the supposedly final size and the
current (decoded until now) size.
This provides a new method `definedSize()` to get the current value for
`definedSize`. The `size` attribute is reachable by the `length()`
method.
Note however that there are 2 different ways the `definedSize` is
currently managed:
a) `Attribute`: it is reset each time the value is less than the actual one
(when a buffer is added, the value is increased) since the final length
is not known (no Content-Length)
b) `FileUpload`: it is set at startup from the length provided
So these differences could lead to a wrong perception:
a) `Attribute`: definedSize = size always
b) `FileUpload`: definedSize >= size always
Therefore the comment tries to explain clearly the different behaviors.
2) In the InterfaceHttpPostRequestDecoder (and the derived classes), I
add a new method: `decoder.currentPartialHttpData()` which will return an
`InterfaceHttpData` (if any) as the current `Attribute` or `FileUpload`
(the 2 generic types), which will then allow the programmer to check,
according to the real type (instanceof), the 2 methods `definedSize()`
and `length()`.
This method checks whether currentFileUpload or currentAttribute is null and
returns the one (only one can be non-null) that is not null.
Note that if this method returns null, it might mean 2 situations:
a) the last `HttpData` (whatever attribute or file upload) is already
finished and therefore accessible through `next()`
b) there is not yet any `HttpData` in decoding (body not yet parsed for
instance)
Result:
The developer has more access to, and therefore control over, the current
upload.
The code on the developer side could look like the example in
HttpUploadServerHandler.
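A hedged sketch of per-part progress reporting using the new accessor described above; the definedSize() naming follows the text, and the progress output itself is illustrative only.

```java
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.multipart.Attribute;
import io.netty.handler.codec.http.multipart.FileUpload;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.multipart.InterfaceHttpData;

public final class UploadProgressSketch {
    public static void onChunk(HttpPostRequestDecoder decoder, HttpContent chunk) {
        decoder.offer(chunk);
        InterfaceHttpData partial = decoder.currentPartialHttpData();
        if (partial instanceof FileUpload) {
            // For file uploads definedSize >= size, because the length is provided upfront.
            FileUpload upload = (FileUpload) partial;
            System.out.println(upload.getName() + ": " + upload.length() + " bytes decoded so far");
        } else if (partial instanceof Attribute) {
            // For attributes the defined size simply tracks the decoded size.
            System.out.println("attribute bytes so far: " + ((Attribute) partial).length());
        }
        // partial == null means the previous HttpData is already complete (see hasNext())
        // or no body data has been decoded yet.
    }
}
```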
Related: #3814
Motivation:
To implement support for an upgrade from a cleartext HTTP/1.1
connection to a cleartext HTTP/2 (h2c) connection, a user usually uses
HttpServerUpgradeHandler.
It does its job, but it requires the user to instantiate the UpgradeCodecs
for all supported protocols upfront. This means redundancy for the
connections that are not upgraded.
Modifications:
- Change the constructor of HttpServerUpgradeHandler
- Accept an UpgradeCodecFactory instead of UpgradeCodecs
- The default constructor of HttpServerUpgradeHandler sets the
maxContentLength to 0 now, which shouldn't be a problem because a
usual upgrade request is a GET.
- Update the examples accordingly
Result:
A user can instantiate Http2ServerUpgradeCodec and its related objects
(Http2Connection, Http2FrameReader/Writer, Http2FrameListener, etc) only
when necessary.
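A hedged sketch of the factory-based wiring described above; building the HTTP/2 handler through Http2ConnectionHandlerBuilder is an illustrative assumption, and a real server would also configure a frame listener.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler;
import io.netty.handler.codec.http.HttpServerUpgradeHandler.UpgradeCodec;
import io.netty.handler.codec.http.HttpServerUpgradeHandler.UpgradeCodecFactory;
import io.netty.handler.codec.http2.Http2CodecUtil;
import io.netty.handler.codec.http2.Http2ConnectionHandler;
import io.netty.handler.codec.http2.Http2ConnectionHandlerBuilder;
import io.netty.handler.codec.http2.Http2ServerUpgradeCodec;
import io.netty.util.AsciiString;

public final class H2cUpgradeSetup {
    public static void configure(ChannelPipeline p) {
        final HttpServerCodec sourceCodec = new HttpServerCodec();
        UpgradeCodecFactory factory = new UpgradeCodecFactory() {
            @Override
            public UpgradeCodec newUpgradeCodec(CharSequence protocol) {
                if (AsciiString.contentEquals(Http2CodecUtil.HTTP_UPGRADE_PROTOCOL_NAME, protocol)) {
                    // Created lazily: only connections that actually request the
                    // upgrade pay for the HTTP/2 objects.
                    Http2ConnectionHandler http2Handler =
                            new Http2ConnectionHandlerBuilder().server(true).build();
                    return new Http2ServerUpgradeCodec(http2Handler);
                }
                return null; // unsupported protocol: no upgrade
            }
        };
        p.addLast(sourceCodec);
        p.addLast(new HttpServerUpgradeHandler(sourceCodec, factory));
    }
}
```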
Motivation:
Our HTTP/2 implementation sometimes uses hard-coded handler names when
adding/removing a handler to/from a pipeline. It's not really a good
idea because it can easily result in name clashes. Unless there is a
good reason, we should use the references to the handlers instead.
Modifications:
- Allow null as a handler name for Http2Client/ServerUpgradeCodec
- Use null as the default upgrade handler name
- Do not use handler name strings in some test cases and examples
Result:
Fixes #3815
Motivation:
SpdyOrHttpChooser and Http2OrHttpChooser duplicate a fair amount of code with each other.
Modification:
- Replace SpdyOrHttpChooser and Http2OrHttpChooser with ApplicationProtocolNegotiationHandler
- Add ApplicationProtocolNames to define the known application-level protocol names
Result:
- Less code duplication
- A user can perform dynamic pipeline configuration that follows ALPN/NPN for any protocols.
Related: #3641 and #3813
Motivation:
When setting up an HTTP/1 or HTTP/2 (or SPDY) pipeline, a user usually
ends up adding an arbitrary set of handlers.
Http2OrHttpChooser and SpdyOrHttpChooser have two abstract methods
(create*Handler()) that expect a user to return a single handler, and
also have add*Handlers() methods that add the handler returned by
create*Handler() to the pipeline as well as the pre-defined set of
handlers.
The problem is, some users (read: I) don't need all of them or the
user wants to add more than one handler. For example, take a look at
io.netty.example.http2.tiles.Http2OrHttpHandler, which works around
this issue by overriding addHttp2Handlers() and making
createHttp2RequestHandler() a no-op.
Modifications:
- Replace add*Handlers() and create*Handler() with configure*()
- Rename getProtocol() to selectProtocol() to make what it does clear
- Provide the default implementation of selectProtocol()
- Remove SelectedProtocol.UNKNOWN and use null instead, because
'UNKNOWN' is not a protocol
- Proper exception handling in the *OrHttpChooser so that the
exception is logged and the connection is closed when failed to
select a protocol
- Make SpdyClient example always use SSL. It was always using SSL
anyway.
- Implement SslHandshakeCompletionEvent.toString() for debuggability
- Remove an orphaned class: JettyNpnSslSession
- Add SslHandler.applicationProtocol() to get the name of the
application protocol
- SSLSession.getProtocol() now returns transport-layer protocol name
only, so that it conforms to its contract.
Result:
- *OrHttpChooser have better API.
- *OrHttpChooser handle protocol selection failure properly.
- SSLSession.getProtocol() now conforms to its contract.
- SpdyClient example works with SpdyServer example out of the box
Motivation:
The logic in the current websocket example is confusing and misleading
Modifications:
Remove occurrences of "http" and "https" and replace them with "ws" and "wss"
Result:
The example code is now coherent and is easier to understand for a new user.
Motivation:
There are no Netty SCTP examples on multi-homing.
Modifications:
- Added new example classes based on echo client/server example
Result:
Better documentation
Motivation:
Adding an example that showcases Netty’s HTTP/2 codec and that is
slightly more complex than the existing hello-world example. It is
based on the Gopher tiles example available here:
https://http2.golang.org/gophertiles?latency=0
Modifications:
Moved current http2 example to http2/helloworld.
Added http2 tiles example under http2/tiles.
Result:
A Netty tiles example is available.
Motivation:
Interface changes between master and 4.1 branch resulted in a compile failure.
Modifications:
- change messageReceived to channelRead0
Result:
No more compile error.
Motivation:
The HTTP/2 server example just hangs when a client is using only HTTP with no ALPN or upgrade attempts. We should still send some kind of response.
Modifications:
The HTTP/2 server example has a special handler to detect no upgrade HTTP clients and generate a response.
Result:
Clients that just use HTTP with no upgrade will not appear hung when interacting with the HTTP/2 server example.
Motivation:
Examples that are using ALPN/NPN are using a failure mode which is not supported by the JDK SslProvider. The examples fail to run and throw an exception if the JDK SslProvider is used.
Modifications:
- Use SelectorFailureBehavior.NO_ADVERTISE
- Use SelectedListenerFailureBehavior.ACCEPT
Result:
Examples can be run with both OpenSsl and JDK SslProviders.
Motivation:
Using factory methods of SslContext is deprecated. Code should be using
SslContextBuilder instead. This would have been done when the old
methods were deprecated, but memcache and http2 examples didn't exist in
the 4.0 branch which the PR was against.
Modifications:
Swap to the new construction pattern.
Result:
No more deprecated warnings during build of examples. Users are
instructed to use the new pattern.
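A minimal sketch of the builder-based construction pattern referred to above, using a self-signed certificate purely for illustration.

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.SelfSignedCertificate;

public final class ServerSslContextExample {
    public static SslContext build() throws Exception {
        SelfSignedCertificate cert = new SelfSignedCertificate();
        return SslContextBuilder
                .forServer(cert.certificate(), cert.privateKey())
                .build();
    }
}
```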
Motivation:
RFC 6265 specifies which characters are allowed in a cookie name and value.
Netty is currently too lax, which can be used for HttpOnly escaping.
Modification:
In ServerCookieDecoder: discard cookie key-value pairs that contain invalid characters.
In ClientCookieEncoder: throw an exception when trying to encode cookies with invalid characters.
Result:
The problem described in the motivation section is fixed.
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.
Results:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
Motivation:
SslContext factory methods have gotten out of control; it's past time to
swap to a builder.
Modifications:
New Builder class. The existing factory methods must be left as-is for
backward compatibility.
Result:
Fixes #3531
Motivation:
We missed flushing the channel when using HttpChunkedInput (this is done automatically when using SSL). This will result in a stall.
Modifications:
Replace ctx.write(...) with ctx.writeAndFlush(...)
Result:
Correctly working example.
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the ssl pointer is freed
- Correctly support different failure behaviours
- add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
The DefaultHttp2ConnectionDecoder class is calling verifyPrefaceReceived() for almost every frame event at all times.
The Http2ConnectionHandler class is calling readClientPrefaceString() on every decode event.
Modifications:
- DefaultHttp2ConnectionDecoder should not have to continuously call verifyPrefaceReceived() because it transitions boolean state 1 time for each connection.
- Http2ConnectionHandler should not have to continuously call readClientPrefaceString() because it transitions boolean state 1 time for each connection.
Result:
- Less conditional checks for the mainstream usage of the connection.
Motivation:
The Http2FrameLogger is currently using the internal logging classes. We should change this so that it's using the public classes and then converts internally.
Modifications:
Modified Http2FrameLogger and the examples to use the public LogLevel class.
Result:
Fixes #2512
While implementing netty-handler-proxy, I realized various issues in our
current socksx package. Here's the list of the modifications and their
background:
- Split message types into interfaces and default implementations
- so that a user can implement an alternative message implementations
- Use classes instead of enums when a user might want to define a new
constant
- so that a user can extend SOCKS5 protocol, such as:
- defining a new error code
- defining a new address type
- Rename the message classes
- to avoid abbreviated class names. e.g:
- Cmd -> Command
- Init -> Initial
- so that the class names align better with the protocol
specifications. e.g:
- AuthRequest -> PasswordAuthRequest
- AuthScheme -> AuthMethod
- Rename the property names of the messages
- so that the property names align better with the field names in the
protocol specifications
- Improve the decoder implementations
- Give a user more control over when a decoder has to be removed
- Use DecoderResult and DecoderResultProvider to handle decode failure
gracefully. i.e. no more Unknown* message classes
- Add SocksPortUnificationServerHandler since it's useful to users
who write a SOCKS server
- Cleaned up and moved from the socksproxy example
Motivation:
HTTP/2 codec was implemented in master branch.
Since master is not yet stable and it will be some time before it gets released, backporting it to 4.1 enables people to use the codec with a stable Netty version.
Modification:
The code has been copied from master branch as is, with minor modifications to suit the `ChannelHandler` API in 4.x.
Apart from that change, there are two backward incompatible API changes included, namely,
- Added an abstract method:
`public abstract Map.Entry<CharSequence, CharSequence> forEachEntry(EntryVisitor<CharSequence> visitor)
throws Exception;`
to `HttpHeaders` and implemented the same in `DefaultHttpHeaders` as a delegate to the internal `TextHeader` instance.
- Added a method:
`FullHttpMessage copy(ByteBuf newContent);`
in `FullHttpMessage` with the implementations copied from relevant places in the master branch.
- Added missing abstract method related to setting/adding short values to `HttpHeaders`
Result:
HTTP/2 codec can be used with netty 4.1
Motivation:
The example MemcacheClient set command doesn't work.
Modifications:
Fill the extras field buffer with zeros so that it gets written to the
request payload.
Result:
The example MemcacheClient set command works.
Related: #3122
Motivation:
The HttpStaticFileServer example writes the LastHttpContent twice at the
end of the transfer. HttpChunkedInput already produces a
LastHttpContent at the end of the stream, so there's no reason to write
another.
Modifications:
Do not write LastHttpContent in HttpStaticFileServerHandler when
HttpChunkedInput is used to transfer a file.
Result:
HttpStaticFileServer does not violate the protocol anymore.
Motivation:
When running the examples using the provided run-examples.sh script the
log level is 'info' level. It can be handy to be able to configure a
different level, for example 'debug', while learning and trying out
the examples.
Modifications:
Added a dependency to logback-classic to the examples pom.xml, and also
added a logback configuration file. The log level can be configured by
setting the 'logLevel' system property, and if that property is not set
the default will be 'info' level.
The run-examples.sh was updated to show an example of using the system
property to set the log level to 'debug'
Result:
It is now possible to turn on debug logging by setting a system property
on the command line.
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of an authority.
- Improved the use of StringBuilders.
Result:
Performance improvements.
Motivation:
The commit 50e06442c3 changed the type of
the constants in HttpHeaders.Names and HttpHeaders.Values, making 4.1
backward-incompatible with 4.0.
It also introduces newer utility classes such as HttpHeaderUtil, which
deprecates most static methods in HttpHeaders. To ease the migration
between 4.1 and 5.0, we should deprecate all static methods that are
non-existent in 5.0, and provide proper counterparts.
Modification:
- Revert the changes in HttpHeaders.Names and Values
- Deprecate all static methods in HttpHeaders in favor of:
- HttpHeaderUtil
- the member methods of HttpHeaders
- AsciiString
- Add integer and date access methods to HttpHeaders for easier future
migration to 5.0
- Add HttpHeaderNames and HttpHeaderValues which provide standard HTTP
constants in AsciiString
- Deprecate HttpHeaders.Names and Values
- Make HttpHeaderValues.WEBSOCKET lowercased because it's actually
lowercased in all WebSocket versions but the oldest one
- Add RtspHeaderNames and RtspHeaderValues which provide standard RTSP
constants in AsciiString
- Deprecate RtspHeaders.*
- Do not use AsciiString.equalsIgnoreCase(CharSeq, CharSeq) if one of
the parameters is an AsciiString
- Avoid using AsciiString.toString() repetitively
- Change the parameter type of some methods from String to
CharSequence
Result:
Backward compatibility is recovered. New classes and methods will make
the migration to 5.0 easier, once (Http|Rtsp)Header(Names|Values) are
ported to master.
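A small sketch of the AsciiString-based constants and the new integer access methods mentioned above; the response itself is just an illustration.

```java
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public final class HeaderConstantsExample {
    public static FullHttpResponse helloResponse() {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));
        response.headers()
                .set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN)
                .setInt(HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
        return response;
    }
}
```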
Motivation:
The header class hierarchy and algorithm were improved on the master branch for version 5.x. These improvements should be backported to the 4.1 baseline.
Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258
Result:
Header improvements in master branch are available in 4.1 branch.
Motivation:
Improvements were made on the main line to support ALPN and mutual
authentication for TLS. These should be backported.
Modifications:
- Backport commits from the master branch
- f8af84d599
- e74c8edba3
Result:
Support for ALPN and mutual authentication.
Motivation:
As reported in #2953, the websocket server example contained a bug and therefore did not work with Chrome:
a websocket extension is added to the pipeline but extensions were disallowed in the handshaker and decoder,
which led the decoder to close the connection after receiving an extension frame.
Modifications:
Allow websocket extensions in the handshaker to correctly enable the extension.
Result:
Working websocket server example
Fixes #2953
Related issue: #1133
Motivation:
There is no support for client socket connections via a proxy server in
Netty.
Modifications:
- Add a new module 'handler-proxy'
- Add ProxyHandler and its subclasses to support SOCKS 4a/5 and HTTP(S)
proxy connections
- Add a full parameterized test for most scenarios
- Clean up pom.xml
Result:
A user can make an outgoing connection via proxy servers with only
trivial effort.
Motivation:
It is often helpful to measure the performance of connections, e.g. the
latency and the throughput. This can be performed through benchmarks.
Modification:
This adds a simple but configurable benchmark for websockets into the
example directory. The Netty WebSocket server will echo all received
websocket frames and will provide an HTML/JS page which serves as the
client for the benchmark.
The benchmark also provides a verification mode that verifies the sent
against the received data. This can be used for the verification of
websocket frame encoding and decoding functionality.
Result:
A benchmark is added in the form of a further Netty websocket example.
With this benchmark it is easily possible to measure the performance between Netty and a browser
Motivation:
We incorrectly used SslContext.newServerContext() in some places where we needed a client context.
Modifications:
Use SslContext.newClientContext() when using ssl on the client side.
Result:
Working ssl client examples.
Motivation:
The example mishandles two elements:
1) The last message is a LastHttpContent and is not taken into account by
the server handler
2) The client makes a sync on the last (chunked) write, but there is no flush
before it, therefore the sync waits forever.
Modifications:
1) Take the LastHttpContent message into account in the simple GET.
2) Remove the sync but add a flush for each POST and multipart part
Results:
The example is no longer blocked after the GET test.
Should be done also in 4.0 and Master (similar changes)
- SocksV[45] -> Socks[45]
- Make encodeAsByteBuf package private with some hassle
- Split SocksMessageEncoder into Socks4MessageEncoder and
Socks5MessageEncoder, and remove the original
- Remove lazy singleton instantiation; we don't need it.
- Remove the deprecated methods
- Fix Javadoc errors
Motivation:
SOCKS 4 and 5 are very different protocols although they share the same
name. It is not possible to incorporate the two protocol versions into
a single package.
Modifications:
- Add a new package called 'socksx' to supercede 'socks' package.
- Add SOCKS 4/4a support to the 'socksx' package
Result:
codec-socks now supports all SOCKS versions
Motivation:
HttpOrSpdyChooser can be simplified so the user does not need to implement the getProtocol(...) method.
Modification:
Add an implementation for the method. The user can override it if necessary.
Result:
Easier usage of HttpOrSpdyChooser.