Motivation:
Conscrypt is a Java Security provider that wraps OpenSSL (specifically BoringSSL). It's a possible alternative to netty-tcnative that we should explore, so this commit simply enables us to investigate its use further.
Modifications:
Modifying the SslContext creation path to support the Conscrypt provider.
Result:
Netty will support OpenSSL via the Conscrypt provider.
Motivation:
Calling a static method is faster than a dynamic (virtual) call.
Modifications:
Add the 'static' keyword to methods where it was missing.
Result:
Slightly faster method calls.
Motivation:
2fd42cfc6b fixed a bug related to encoding headers, but it also introduced a throws clause on the Http2FrameWriter methods which write headers. This throws clause makes the API more verbose and is unnecessary because the failure can be communicated via the ChannelFuture that is returned by these methods.
Modifications:
- Remove throws from all Http2FrameWriter methods.
Result:
Http2FrameWriter APIs do not propagate checked exceptions.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a DNS
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
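A rough sketch of the approach, using the standard java.security APIs (the class and method shapes below are illustrative, not the exact SocketUtils API):

```java
// Illustrative sketch: wrap privileged socket operations so a SecurityManager
// policy can grant SocketPermission to the Netty jar alone.
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.net.SocketAddress;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

final class SocketUtilsSketch {
    private SocketUtilsSketch() { }

    static void connect(final Socket socket, final SocketAddress remoteAddress, final int timeout)
            throws IOException {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                    socket.connect(remoteAddress, timeout); // requires SocketPermission "connect"
                    return null;
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }

    static InetAddress addressByName(final String hostname) throws IOException {
        try {
            // A DNS lookup requires SocketPermission "resolve".
            return AccessController.doPrivileged(new PrivilegedExceptionAction<InetAddress>() {
                @Override
                public InetAddress run() throws IOException {
                    return InetAddress.getByName(hostname);
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```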
Motivation:
If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated it aborts immediately and sends a RST_STREAM frame for whatever stream caused the issue. Because HPACK is stateful, this means that the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to keep processing, so that the HPACK state stays in sync with our peer, or else completely close the connection.
If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE it checks the limit and encodes at the same time. This may result in modifying the HPACK local state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.
Modifications:
- HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as a limit. We should honor the specification's default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.
Result:
Fixes https://github.com/netty/netty/issues/6209.
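An illustrative sketch of the two-level limit described above (class, field, and threshold values are assumptions, not Netty's actual decoder code):

```java
// Soft limit: keep decoding so the HPACK dynamic table stays in sync, then reset
// only the offending stream. Hard limit: escalate to a connection error (GO_AWAY).
final class HeaderListSizeSketch {
    private final long maxHeaderListSize;        // soft limit -> RST_STREAM after decoding
    private final long maxHeaderListSizeGoAway;  // hard limit -> GO_AWAY, close connection
    private boolean softLimitExceeded;

    HeaderListSizeSketch(long maxHeaderListSize, long maxHeaderListSizeGoAway) {
        this.maxHeaderListSize = maxHeaderListSize;
        this.maxHeaderListSizeGoAway = maxHeaderListSizeGoAway;
    }

    void onHeadersSize(long decodedOctets) {
        if (decodedOctets > maxHeaderListSizeGoAway) {
            // HPACK state can no longer be kept in sync; tear down the connection.
            throw new IllegalStateException(
                    "connection error: SETTINGS_MAX_HEADER_LIST_SIZE hard limit exceeded");
        }
        if (decodedOctets > maxHeaderListSize) {
            // Remember the violation but keep decoding, so the dynamic table stays
            // consistent with the peer; the stream is reset once the block ends.
            softLimitExceeded = true;
        }
    }

    boolean shouldResetStream() {
        return softLimitExceeded;
    }
}
```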
Motivation:
The HTTP/2 helloworld client example has 2 bugs:
1. HttpResponseHandler has a map which is accessed from multiple threads, but the map is not thread safe.
2. Requests are flushed and may be completely written, and the responses may be received/processed by Netty, before an element is inserted into the HttpResponseHandler map. This may result in an 'unexpected message' error even though the message has actually been sent.
Modifications:
- HttpResponseHandler should use a thread safe map
- Http2Client shouldn't flush until entries are added to the HttpResponseHandler map
Result:
Fixes https://github.com/netty/netty/issues/6165.
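A minimal sketch of the intended ordering (class and method names here are illustrative, not the exact example code; Channel#write and Channel#flush are the real Netty calls):

```java
// Register the stream in a thread-safe map first, then flush, so a fast response
// can always find its entry.
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.handler.codec.http.FullHttpRequest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class RequestTracker {
    private final Map<Integer, ChannelFuture> streamIdToWriteFuture = new ConcurrentHashMap<>();

    void send(Channel channel, int streamId, FullHttpRequest request) {
        // write(...) queues the request but does not hand it to the socket yet.
        ChannelFuture writeFuture = channel.write(request);
        // Insert the entry before flushing so the response handler cannot race past it.
        streamIdToWriteFuture.put(streamId, writeFuture);
        channel.flush();
    }

    ChannelFuture remove(int streamId) {
        return streamIdToWriteFuture.remove(streamId);
    }
}
```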
Motivation:
The responsibility for retaining the settings values and enforcing the settings constraints is spread across different areas of the code, and these components may be initialized with values different from the defaults specified in the RFC. This should not be allowed by default, and the interfaces responsible for maintaining/enforcing settings state should clearly indicate the restriction that the settings should only be set by the codec upon receipt of a SETTINGS ACK frame.
Modifications:
- Encoder, Decoder, and the Headers Encoder/Decoder no longer expose public constructors that allow the default settings to be changed.
- Http2HeadersDecoder#maxHeaderSize() exists to provide some bound when headers/continuation frames are being aggregated. However this is roughly the same as SETTINGS_MAX_HEADER_LIST_SIZE (besides the 32 octets of overhead for each header field) and can be used instead of attempting to keep the two independent values in sync.
- Encoding headers now enforces SETTINGS_MAX_HEADER_LIST_SIZE at the octet level. Previously the header encoder compared the number of header key/value pairs against SETTINGS_MAX_HEADER_LIST_SIZE instead of the number of octets (plus 32 octets of overhead per field).
- DefaultHttp2ConnectionDecoder#onData calls shouldIgnoreHeadersOrDataFrame but may swallow exceptions from this method. This means a RST_STREAM frame may not be sent when it should be for an unknown stream, which violates the RFC. The exception is no longer swallowed.
Result:
Default settings state is enforced and interfaces related to settings state are clarified.
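A small sketch of measuring a header list in octets the way SETTINGS_MAX_HEADER_LIST_SIZE is defined (an assumed helper, not the Netty implementation): the sum of name length plus value length plus 32 for every header field.

```java
import java.util.Map;

final class HeaderListSize {
    static final int HEADER_ENTRY_OVERHEAD = 32; // per-entry overhead defined by the spec

    static long sizeOf(Iterable<Map.Entry<CharSequence, CharSequence>> headers) {
        long size = 0;
        for (Map.Entry<CharSequence, CharSequence> header : headers) {
            // length() is used as the octet count here, assuming ASCII names/values.
            size += header.getKey().length() + header.getValue().length() + HEADER_ENTRY_OVERHEAD;
        }
        return size;
    }

    static void check(long size, long maxHeaderListSize) {
        if (size > maxHeaderListSize) {
            // Fail before touching the HPACK dynamic table so local and remote state stay in sync.
            throw new IllegalArgumentException("header list size " + size
                    + " exceeds SETTINGS_MAX_HEADER_LIST_SIZE " + maxHeaderListSize);
        }
    }
}
```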
Motivation:
We need to duplicate the buffer before passing it to writeBytes(...) as it will increase the readerIndex().
Modifications:
Call duplicate().
Result:
No more IndexOutOfBoundsException when running the multiplex example.
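A minimal sketch of the fix (buffer contents are illustrative): writeBytes(ByteBuf) advances the source buffer's readerIndex, so pass a duplicate when the original buffer will be read again later.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

final class DuplicateBeforeWrite {
    public static void main(String[] args) {
        ByteBuf shared = Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8);
        ByteBuf target = Unpooled.buffer();

        // duplicate() shares content but keeps independent reader/writer indices,
        // so the readerIndex of 'shared' is left untouched by the write below.
        target.writeBytes(shared.duplicate());

        System.out.println(shared.readerIndex()); // still 0, 'shared' can be read again
        shared.release();
        target.release();
    }
}
```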
Motivation:
We called ctx.flush() which is not correct as it will not call flowController().writePendingBytes().
Modifications:
Call flush(ChannelHandlerContext), which will also call flowController().writePendingBytes().
Result:
Correct HTTP/2 example.
Motivation:
Http2ServerInitializer uses a SimpleChannelInboundHandler in an attempt to ease putting an HttpObjectAggregator in the pipeline when no upgrade is attempted. However, the message is double-released because it is fired up the pipeline (where it will be released) and also released by SimpleChannelInboundHandler in a finally block.
Modifications:
- Retain the message if we fire it up the pipeline
Result:
HTTP/2 examples don't encounter a reference count error if no upgrade was attempted.
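A sketch of the retain-before-forward pattern (the handler name is illustrative, not the example's class): SimpleChannelInboundHandler releases the message after channelRead0 returns, so a message forwarded up the pipeline must be retained first.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.util.ReferenceCountUtil;

final class ForwardingHandler extends SimpleChannelInboundHandler<HttpMessage> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpMessage msg) {
        // Retain, because the base class releases 'msg' once this method returns,
        // while the next handler in the pipeline expects to own (and release) it.
        ctx.fireChannelRead(ReferenceCountUtil.retain(msg));
    }
}
```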
Motivation:
Since Netty shaded JCTools, the OSGi manifest is no longer correct. It claims to have an optional import "org.jctools.queues;resolution:=optional,org.jctools.queues.atomic;resolution:=optional,org.jctools.util;resolution:=optional".
However, since it is shaded, this is no longer true.
This was noticed when making JCTools a real bundle: Netty resolved it as an optional import.
Modifications:
Modify the generated manifest by no longer analyzing org.jctools for imports.
A manual setting of sun.misc as optional was required.
Result:
Netty OSGi bundle will no longer interfere with a JCTools bundle.
Motivation:
As we use compression in the websocketx example, we need to allow extensions, as otherwise the example does not work.
Modifications:
Allow extensions.
Result:
The websocketx example works.
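A sketch of the relevant pipeline setup (the handler arrangement is assumed from the description above, not copied from the example): permessage-deflate compression requires the protocol handler to be created with allowExtensions = true, otherwise the negotiated extension header is rejected.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.handler.codec.http.websocketx.extensions.compression.WebSocketServerCompressionHandler;

final class WebSocketServerInitializerSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new HttpServerCodec(),
                new HttpObjectAggregator(65536),
                new WebSocketServerCompressionHandler(),
                // allowExtensions = true so the compression extension can be negotiated
                new WebSocketServerProtocolHandler("/websocket", null, true));
    }
}
```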
Motivation:
The HTTP Static File Server example seems to ignore filenames that do not consist only of Latin characters, but these days people wish to serve files in other languages, or even include emojis in the filename. Although these files are not displayed in the directory listing, they are accessible via HTTP requests. This fix makes such files visible in the listing.
Modifications:
I've changed the ALLOWED_FILE_NAME pattern to disallow only files that start with an underscore, a minus sign, or a dot (such as .htaccess), and to hide other "unsafe" filenames that may be used to trigger security issues. Other filenames, including those containing the space character, are allowed.
I've also added charset encoding to the directory listing, because the browser default MAY be configured for ISO-8859-1 instead of UTF-8.
Result:
Directory listing will work for files that contain the space character, as well as other Unicode characters.
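A sketch of the kind of pattern change described above (the exact regex used by the example may differ): instead of whitelisting Latin characters only, reject just the names that start with '.', '_' or '-' or contain markup/quote characters.

```java
import java.util.regex.Pattern;

final class FileNameFilterSketch {
    // Disallow hidden/"unsafe" names; allow spaces and non-Latin (Unicode) characters.
    private static final Pattern ALLOWED_FILE_NAME = Pattern.compile("[^-._]?[^<>&\"]*");

    static boolean isAllowed(String name) {
        return ALLOWED_FILE_NAME.matcher(name).matches();
    }
}
```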
Motivation:
When the Netty HTTP Static File Server does a directory listing, it exposes the user.dir system property to the user. Although this isn't a security issue, it is bad practice to show it, and the user expects to see the server's virtual root instead, which is the absolute path as mentioned in the RFC.
Modifications:
The sendListing method now receives a third argument, the requested URI, which is what is displayed on the page instead of the filesystem path.
Result:
The directory listing pages will show the virtual path as described in the URI and not the real filesystem path.
Removed fallback method
Motivation:
A SOCKS5 proxy supports resolving domains on the server side. When testing
with curl, the SocksServer in the example package only works for proxy
requests with an IP, not with a domain name (`--socks5` vs
`--socks5-hostname`). As curl is widely used, it should work with
the example provided.
Modifications:
Pass the address and port to the Socks5CommandResponse so that it
works with curl.
Result:
`curl --socks5-hostname` works as expected.
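A sketch of the fix's shape (the helper class is illustrative; the SOCKS5 codec types and accessors are assumed from Netty's socksx package): include the requested destination address and port in the SUCCESS response instead of leaving them empty, since clients such as `curl --socks5-hostname` expect them to be populated.

```java
import io.netty.handler.codec.socksx.v5.DefaultSocks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5CommandRequest;
import io.netty.handler.codec.socksx.v5.Socks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5CommandStatus;

final class Socks5Responses {
    static Socks5CommandResponse success(Socks5CommandRequest request) {
        // Echo the requested destination address/port back in the response.
        return new DefaultSocks5CommandResponse(
                Socks5CommandStatus.SUCCESS,
                request.dstAddrType(),
                request.dstAddr(),
                request.dstPort());
    }
}
```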
Motivation:
We do not need to mark the field as volatile, and doing so may confuse people.
Modifications:
Remove volatile and add a comment to explain why it's not needed.
Result:
More correct example.
Motivation:
Quote from issue 4914:
"Http2MultiplexCodec currently does two things: mapping the existing h2 API to frames and managing the child channels.
It would be better if the two parts were separated. This would allow less-coupled development of the HTTP/2 handlers (flow control could be its own handler, for instance) and allow applications to insert themselves between all streams and the codec, which permits custom logic and could be used, in part, to implement custom frame types.
It would also greatly ease testing, as the child channel could be tested by itself without dealing with how frames are encoded on the wire."
Modifications:
- Split the Http2MultiplexCodec into Http2FrameCodec and Http2MultiplexCodec. The Http2FrameCodec interacts with the existing HTTP/2 callback-based API, while the Http2MultiplexCodec is completely independent of it and simply multiplexes Http2StreamFrames to the child channels. Additionally, the Http2Codec handler is introduced, which is a convenience class that simply sets up the Http2FrameCodec and Http2MultiplexCodec in the channel pipeline and removes itself.
- Improved test coverage quite a bit.
Result:
- The original Http2MultiplexCodec is split into Http2FrameCodec and Http2MultiplexCodec.
- More tests for higher confidence in the code.
Motivation:
It seems like IntelliJ IDEA gets confused because of the shading of JCTools.
Modifications:
Add JCTools as a dependency with scope runtime to the examples as a workaround.
Result:
It is possible again to run the examples in the IDE.
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods, which create a recyclable derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, each of which
has its own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it, so that its callers don't need to add a dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
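A short usage sketch of the new API (buffer contents are illustrative): retainedSlice() yields the same result as slice().retain(), but the derived instance carries its own reference count and can be recycled.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

final class RetainedSliceExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16);
        buf.writeBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8});

        // Before: two calls, the derived instance is not recyclable.
        ByteBuf oldStyle = buf.slice(0, 4).retain();
        // After: one call, the derived instance has its own reference count.
        ByteBuf newStyle = buf.retainedSlice(4, 4);

        oldStyle.release();
        newStyle.release();
        buf.release();
    }
}
```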
Motivation:
Currently the way a 'null' origin is handled is incorrect. A 'null' origin
most often indicates that the request is coming from a file on the local
file system. We currently return a wildcard origin '*' but should return
'null' for the 'Access-Control-Allow-Origin' header, which is valid
according to the specification [1].
Modifications:
Updated CorsHandler to add a 'null' origin instead of the '*' origin in
the case the request origin is 'null'.
Result:
All tests pass, and the CORS example works, as does the cors.html example
if you serve it by opening the file directly in a web browser.
[1]
https://www.w3.org/TR/cors/#access-control-allow-origin-response-header
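A minimal sketch of the response-header logic described above (the method and parameter names are illustrative, not CorsHandler's internals): echo back 'null' for a 'null' Origin instead of the wildcard '*'.

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpResponse;

final class CorsOriginSketch {
    static void setAllowOrigin(HttpResponse response, String requestOrigin, boolean anyOriginSupported) {
        if ("null".equals(requestOrigin)) {
            // Requests from a local file carry "Origin: null"; answer with the literal 'null'.
            response.headers().set(HttpHeaderNames.ACCESS_CONTROL_ALLOW_ORIGIN, "null");
        } else if (anyOriginSupported) {
            response.headers().set(HttpHeaderNames.ACCESS_CONTROL_ALLOW_ORIGIN, "*");
        } else {
            response.headers().set(HttpHeaderNames.ACCESS_CONTROL_ALLOW_ORIGIN, requestOrigin);
        }
    }
}
```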
Motivation:
- Add an example Redis client using codec-redis.
Modifications:
- Add an example Redis client that reads input from STDIN and writes output to STDOUT.
Result:
- Added an example Redis client using codec-redis.
Motivation:
We recently added ByteBuf.isReadOnly(), which allows detecting whether a buffer is read-only. We should add ByteBuf.asReadOnly() to allow easy access to a read-only version of a buffer.
Modifications:
- Add ByteBuf.asReadOnly()
- Deprecate Unpooled.unmodifiableBuffer(ByteBuf)
Result:
A more consistent API.
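A brief usage sketch of the new method (buffer contents are illustrative): asReadOnly() returns a read-only view that shares content with the original buffer, and isReadOnly() reports it.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

final class ReadOnlyViewExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("data", CharsetUtil.UTF_8);
        ByteBuf readOnly = buf.asReadOnly();

        System.out.println(buf.isReadOnly());      // false
        System.out.println(readOnly.isReadOnly()); // true
        // Any write to 'readOnly' would throw ReadOnlyBufferException.

        buf.release(); // the view shares the underlying content and reference count
    }
}
```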
Motivation:
People need to set all length fields manually when creating a memcache message, which is error prone. See #2736 for more discussion.
Modifications:
This patch adds the logic to update the keyLength, extrasLength and totalBodyLength when the key, extras or content is set.
Result:
The length fields of memcache messages will be updated automatically.
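An illustrative sketch of the automatic bookkeeping described above (this is not the codec's actual class; field and method names are assumptions): setting the key, extras, or content recomputes keyLength, extrasLength, and totalBodyLength so callers no longer do it by hand.

```java
import io.netty.buffer.ByteBuf;

final class MemcacheMessageSketch {
    private ByteBuf content;
    private int keyLength;
    private int extrasLength;
    private int totalBodyLength;

    void setKey(ByteBuf key) {
        keyLength = key == null ? 0 : key.readableBytes();
        recalculateTotalBodyLength();
    }

    void setExtras(ByteBuf extras) {
        extrasLength = extras == null ? 0 : extras.readableBytes();
        recalculateTotalBodyLength();
    }

    void setContent(ByteBuf content) {
        this.content = content;
        recalculateTotalBodyLength();
    }

    private void recalculateTotalBodyLength() {
        // totalBodyLength = key + extras + value, per the binary memcache framing.
        totalBodyLength = keyLength + extrasLength
                + (content == null ? 0 : content.readableBytes());
    }
}
```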