Motivation:
If the content-length header does not parse as a number, leniency causes it
to instead be parsed as the default value. This leads to bodies being
silently ignored on requests, which can be incredibly dangerous. Instead,
if the content-length header is invalid, an exception should be thrown
for upstream handling.
Modifications:
This commit removes the leniency in parsing the content-length header by
allowing a number format exception, if thrown, to escape from the method
rather than falling back to the default value.
Result:
An invalid content-length header will no longer be silently ignored.
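A minimal sketch of the strict behavior (hypothetical helper, not Netty's actual code): a malformed value now surfaces as a NumberFormatException instead of a silent default.

    import java.util.Map;

    final class StrictContentLength {
        // Sketch only: an absent header still yields the default, but a
        // malformed value lets the NumberFormatException escape to the caller.
        static long contentLength(Map<String, String> headers, long defaultValue) {
            String value = headers.get("Content-Length");
            if (value == null) {
                return defaultValue;
            }
            return Long.parseLong(value.trim()); // may throw NumberFormatException
        }
    }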
Motivation:
The DNS resolver supports search domains. However the ndots are not correctly enforced. The search domain should only be appended under the following scenario [1]:
> Resolver queries having fewer than ndots dots (default is 1) in them will be attempted using each component of the search path in turn until a match is found.
The DNS resolver currently appends the search domains even if ndots is 0, although in that case they should never be appended (no name can have fewer than 0 dots).
[1] https://linux.die.net/man/5/resolv.conf
Modifications:
- Parse /etc/resolv.conf to get the default value for ndots on Unix platforms
- The search domain shouldn't be used if ndots is 0
- Avoid failing a promise to trigger the search domain queries in DnsNameResolverContext#resolve
Result:
More correct usage of search domains in the DNS resolver.
Fixes https://github.com/netty/netty/issues/6844.
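A sketch of the enforced rule (helper name assumed, not the resolver's actual code): search domains only apply when the name has fewer dots than ndots, so ndots == 0 disables them entirely.

    final class NdotsRule {
        // Search domains apply only to names with fewer than ndots dots;
        // with ndots == 0 no name can qualify, so they are never appended.
        static boolean applySearchDomains(String hostname, int ndots) {
            if (ndots == 0) {
                return false;
            }
            int dots = 0;
            for (int i = 0; i < hostname.length(); i++) {
                if (hostname.charAt(i) == '.') {
                    dots++;
                }
            }
            return dots < ndots;
        }
    }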
Motivation:
The Http2FrameLogger uses a custom format when logging events. We should use the more familiar format of 'channel event type: details' and single line logging for more consistent debugging.
Modifications:
- Http2FrameLogger should not use a StringBuilder and instead should directly use the Logger
- Http2FrameLogger should use the more consistent format defined above
Result:
Http2FrameLogger's logging format is more consistent with other log events.
Motivation:
In `HttpConversionUtil#toHttp2Headers` the status code conversion can be replaced with the `HttpResponseStatus#codeAsText` method.
Modifications:
Apply the `HttpResponseStatus#codeAsText` method.
Result:
Less allocations.
Motivation:
HelloWorldHttp2Handler throws an NPE when converting from HTTP/1.x headers to HTTP/2 headers because there is no Host header.
Modifications:
- HelloWorldHttp2Handler should check if the Host header is present before setting it in the HTTP/2 headers
Result:
No more NPE in HelloWorldHttp2Handler.
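A sketch of the guard (simplified, assumed shape of the conversion):

    import io.netty.handler.codec.http.FullHttpRequest;
    import io.netty.handler.codec.http.HttpHeaderNames;
    import io.netty.handler.codec.http2.DefaultHttp2Headers;
    import io.netty.handler.codec.http2.Http2Headers;

    final class HostGuardSketch {
        // Only copy Host into :authority when the header is actually present;
        // setting a null authority is what triggered the NPE.
        static Http2Headers toHttp2Headers(FullHttpRequest request) {
            Http2Headers headers = new DefaultHttp2Headers()
                    .method(request.method().asciiName())
                    .path(request.uri());
            String host = request.headers().get(HttpHeaderNames.HOST);
            if (host != null) {
                headers.authority(host);
            }
            return headers;
        }
    }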
Motivation:
To be able to easily build only one of the native sub-modules it's needed that all the dependencies can be fetched from Maven. At the moment we don't deploy transport-native-unix-common and so an attempt to just build, for example, the native epoll transport fails with:
[ERROR] Failed to execute goal on project netty-transport-native-epoll: Could not resolve dependencies for project io.netty:netty-transport-native-epoll:jar:4.1.13.Final-SNAPSHOT: Could not find artifact io.netty:netty-transport-native-unix-common-tests:jar:4.1.13.Final-SNAPSHOT in sonatype-nexus-snapshots (https://oss.sonatype.org/content/repositories/snapshots) -> [Help 1]
Modifications:
Deploy the transport-native-unix-common jar.
Result:
All dependencies are available in the Maven repository.
Motivation:
For historical reasons OpenSSL's internal naming convention for CHACHA20 based cipher suites does not include the HMAC algorithm in the cipher name. This will prevent the CHACHA20 cipher suites from being used if the RFC cipher names are specified.
Modifications:
- Add a special case for CHACHA20 cipher name conversions in CipherSuiteConverter
- Update OPENSSL_CIPHERSUITE_PATTERN to accommodate the new naming scheme for CHACHA20 cipher suites
Result:
CipherSuiteConverter now works with CHACHA20 cipher suites.
Motivation:
For multi-line headers HttpObjectDecoder uses the StringBuilder.append(a).append(b) pattern that could easily be replaced with a regular a + b. Also, operations with a and b were moved out of the concat operation to make it friendly for the JIT's OptimizeStringConcat optimization and thus faster.
Modification:
StringBuilder.append(a).append(b) replaced with a + b. Operations with a and b moved out of the concat operation.
Result:
Code simpler to read and faster.
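A sketch of the pattern change (simplified from the header-folding code):

    final class FoldedHeaderConcat {
        // Computing the operands first leaves a plain a + b + c concatenation,
        // which the JIT's OptimizeStringConcat handles well.
        static String foldHeader(CharSequence value, CharSequence continuation) {
            String trimmed = continuation.toString().trim();
            return value.toString() + ' ' + trimmed;
        }
    }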
Motivation:
Lz4FrameEncoder maintains internal state, but the life cycle of its buffer is not consistently managed: the buffer is allocated in handlerAdded but freed in close, even though it can still be used until handlerRemoved is called.
Modifications:
- Move the cleanup of the buffer from close to handlerRemoved
- Explicitly throw an EncoderException from Lz4FrameEncoder if the encode operation has finished and there isn't enough space to write data
Result:
No more NPE in Lz4FrameEncoder caused by using the already-freed buffer.
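A simplified sketch of the life-cycle change (not the actual encoder code):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;

    class BufferOwningEncoderSketch extends ChannelOutboundHandlerAdapter {
        private ByteBuf buffer;

        @Override
        public void handlerAdded(ChannelHandlerContext ctx) {
            buffer = ctx.alloc().buffer();
        }

        @Override
        public void handlerRemoved(ChannelHandlerContext ctx) {
            // Previously released in close(...), which is too early because
            // encode(...) may still run until the handler is removed.
            if (buffer != null) {
                buffer.release();
                buffer = null;
            }
        }
    }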
Motivation:
It's cleaner to add listeners to returned Futures rather than provided Promises because the latter can have strange side effects in terms of listeners firing and called methods returning. Adding listeners preemptively may also lead to more operations than necessary when the called method throws an Exception.
Modifications:
Add listener to returned ChannelFuture rather than given ChannelPromise
Result:
Cleaner completion and exception handling
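A sketch of the pattern (illustrative only):

    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;

    final class ListenerPlacementSketch {
        static void write(ChannelHandlerContext ctx, Object msg) {
            // Listen on the future the method returns ...
            ChannelFuture f = ctx.writeAndFlush(msg);
            f.addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
            // ... rather than pre-registering on a promise that is then passed
            // in, which may fire listeners before the called method returns.
        }
    }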
Motivation:
Before, we always expanded the buffer by the initialCapacity, which by default is 32 bytes. This may lead to many expansions of the buffer until we finally reach the point that the buffer can fit everything.
Modifications:
Double the buffer size until the threshold of >= 1024 is hit. After this, grow it by the initialCapacity.
Result:
Less expansion of the buffer (and so fewer allocations / copies) when the initialCapacity is not big enough. Fixes [#6864].
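A sketch of the growth policy (method name assumed):

    final class GrowthPolicySketch {
        // Double while small to cut down on repeated expansions, then fall
        // back to linear growth by initialCapacity once >= 1024 is reached.
        static int newCapacity(int current, int initialCapacity) {
            if (current < 1024) {
                return current << 1;
            }
            return current + initialCapacity;
        }
    }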
Motivation:
We had some useless synchronized (ReferenceCountedOpenSslContext.class) blocks in our code which could slow down concurrent collecting and creating of ReferenceCountedOpenSslContext instances. Besides this, we missed a few guards.
Modifications:
Use a ReadWriteLock to correctly guard. A ReadWriteLock was chosen as SSL.newSSL(...) will be called from multiple threads all the time, so using synchronized would be worse and there would be no way for the JIT to optimize it away.
Result:
Faster concurrent creating and collecting of ReferenceCountedOpenSslContext instances and correctly guard in all cases.
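A sketch of the guard pattern (field and method names assumed):

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    final class ContextGuardSketch {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        long newSsl() {
            // Creation happens from many threads concurrently, so it only
            // takes the read lock.
            lock.readLock().lock();
            try {
                return 0L; // placeholder for SSL.newSSL(ctx, ...)
            } finally {
                lock.readLock().unlock();
            }
        }

        void destroy() {
            // Freeing the native context must be exclusive.
            lock.writeLock().lock();
            try {
                // placeholder for releasing the native context
            } finally {
                lock.writeLock().unlock();
            }
        }
    }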
Motivation:
The bom does not provide entries for a number of netty modules, in
particular those that are deployed with classifiers. As a result, they
can't be used without defining a version.
Modifications:
Provide dependency management for the missing modules.
Result:
Fixes [#6852]
Motivation:
The class `HttpPostRequestEncoder` has minor issues:
- The `encodeNextChunkMultipart()` method contains two identical blocks of code with a difference only in the cast interfaces: `Attribute` vs `HttpData`. Because `Attribute` extends `HttpData`, the block with the `Attribute` cast can be safely deleted.
- The `getNewMultipartDelimiter()` method contains a redundant `toLowerCase()`.
- The `addBodyFileUploads()` method throws `NPE` instead of `IllegalArgumentException`.
Modifications:
- Remove duplicated code block from `encodeNextChunkMultipart()`.
- Remove redundant `toLowerCase()` from `getNewMultipartDelimiter()`.
- Replace `NPE` with `IllegalArgumentException` in `addBodyFileUploads()`.
- Use `ObjectUtil#checkNotNull` where possible.
Result:
More correct and clean code.
Motivation:
`StringUtil#escapeCsv` creates a new `StringBuilder` for each value even if the same string is returned in the end.
Modifications:
Create a new `StringBuilder` only if it is really needed. Otherwise, return the original string (or just a trimmed substring).
Result:
Less GC load. Up to 4x faster work for not changed strings.
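A simplified sketch of the lazy allocation (only quoting commas, quotes and newlines; the real method handles more cases):

    final class LazyEscapeCsvSketch {
        static CharSequence escapeCsv(CharSequence value) {
            // Scan first; allocate a StringBuilder only if escaping is needed.
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c == ',' || c == '"' || c == '\n') {
                    return doEscape(value);
                }
            }
            return value; // unchanged input: no allocation at all
        }

        private static CharSequence doEscape(CharSequence value) {
            StringBuilder sb = new StringBuilder(value.length() + 8).append('"');
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c == '"') {
                    sb.append('"'); // CSV doubles embedded quotes
                }
                sb.append(c);
            }
            return sb.append('"');
        }
    }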
Motivation:
We should not fail the promise when a closed Channel is offered back to the ChannelPool, as we explicitly mention that the Channel must always be returned.
Modifications:
- Do not fail the promise
- Add test-case
Result:
Fixes [#6831]
Motivation:
Update the MQTT codec based on the MQTT spec (section 3.1.3.5).
Modification:
Changes made to the file MqttConnectPayload.java.
Subsequent changes have been made to files MqttDecoder.java, MqttEncoder.java, MqttMessageBuilders.java.
Test cases have been updated.
Result:
Fixes #6750.
Motivation:
The entry for the netty-transport-native-unix-common module in the bom
was using the wrong artifact ID and version.
Modifications:
Correct the artifact ID for the netty-transport-native-unix-common
module in the bom.
Result:
Fixes [#6849]
Motivation:
It’s currently complicated to extend `DnsNameResolver` as the default
value for `searchDomain` is package private.
Modifications:
* let `DnsNameResolver` accept a null `searchDomains` and then default
to `DEFAULT_SEARCH_DOMAINS`, just like it’s being done with
`resolvedAddressTypes`.
* set default `DnsNameResolverBuilder#searchDomains` value to null to
avoid cloning internal `DnsNameResolver.DEFAULT_SEARCH_DOMAINS` in
`DnsNameResolver` constructor.
Result:
More versatile `DnsNameResolver` constructor.
No array copy when using default search domains.
Motivation:
HpackHuffmanDecoder.Decoder did not do any bounds checking but just caught IndexOutOfBoundsException to detect if the array needs to grow. This can be very expensive because of fillInStackTrace().
Modifications:
Add proper bounds checking and grow the array if needed without catching IndexOutOfBoundsException.
Result:
Less overhead if the array needs to grow.
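A sketch of the check-then-grow pattern that replaces the catch (names assumed):

    import java.util.Arrays;

    final class GrowBeforeWriteSketch {
        private byte[] bytes = new byte[64];
        private int index;

        void append(byte b) {
            // Explicit bounds check instead of relying on
            // IndexOutOfBoundsException (and its fillInStackTrace() cost).
            if (index == bytes.length) {
                bytes = Arrays.copyOf(bytes, bytes.length << 1);
            }
            bytes[index++] = b;
        }
    }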
Motivation:
Depending on the use case it may make sense to increase or decrease the initial size of the buffer used during the HPACK huffman decode process. This is currently not exposed through the AbstractHttp2ConnectionHandlerBuilder.
Modifications:
- Add a method to AbstractHttp2ConnectionHandlerBuilder which allows the initial size of the buffer used during the HPACK huffman decode process to be configured.
Result:
AbstractHttp2ConnectionHandlerBuilder provides more control of codec-http2 knobs.
Motivation:
AppendableCharSequence depends upon IndexOutOfBoundsException to trigger a resize operation under the assumption that the resize operation will be rare if the initial size guess is good. However if the initial size guess is not good then the performance will be more unpredictable and likely suffer.
Modifications:
- Check the position in AppendableCharSequence#append to determine if a resize is necessary
Result:
More predictable performance in AppendableCharSequence#append.
Motivation:
This allows netty to operate in 'transparent proxy' mode, intercepting connections
to other addresses by means of Linux firewalling rules, as per
https://www.kernel.org/doc/Documentation/networking/tproxy.txt
The original destination address can be obtained by referencing
ch.localAddress().
Modification:
Add methods similar to those for ipFreeBind, to set the IP_TRANSPARENT option.
Result:
Allows setting and getting of the IP_TRANSPARENT option, which allows retrieval of the ultimate socket address originally requested.
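A usage sketch, assuming the option is exposed as EpollChannelOption.IP_TRANSPARENT:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.epoll.EpollChannelOption;
    import io.netty.channel.epoll.EpollEventLoopGroup;
    import io.netty.channel.epoll.EpollServerSocketChannel;

    final class TproxyBootstrapSketch {
        static ServerBootstrap bootstrap() {
            // The channel can then accept connections redirected to it by
            // TPROXY firewall rules; ch.localAddress() holds the original
            // destination.
            return new ServerBootstrap()
                    .group(new EpollEventLoopGroup())
                    .channel(EpollServerSocketChannel.class)
                    .option(EpollChannelOption.IP_TRANSPARENT, true);
        }
    }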
Motivation:
We should not force autoconf and compile as this will result in multiple executions and so slow down the build.
Modifications:
Remove force declarations
Result:
Faster build of native modules
Motivation:
When I run Netty micro benchmarks I get many warnings like:
WARNING: -Dio.netty.noResourceLeakDetection is deprecated. Use '-Dio.netty.leakDetection.level=simple' instead.
Modification:
-Dio.netty.noResourceLeakDetection replaced with -Dio.netty.leakDetection.level=disabled.
Result:
No warnings.
Motivation:
We need to use FQCNs to prevent classloader issues for classes that are > Java 6. This is a cleanup of ed5fcbb773.
Modifications:
Just remove the imports and use FQCN.
Result:
No classloader issues with Java 6.
Motivation:
Commit 3c4dfed08a introduced a regression in handling buffers that have no memoryAddress.
Modifications:
Fix regression and also add unit tests.
Result:
It's possible again to write buffers without memory address.
Motivation:
We should allow access to the memoryAddress of the wrapped ByteBuf when using ReadOnlyByteBuf for performance reasons. If a user acts on a memoryAddress it is their responsibility anyway not to do anything "stupid".
Modifications:
Delegate to wrapped ByteBuf.
Result:
Less performance overhead for various operations and also when writing to a native transport (which needs the memoryAddress).
Motivations:
1. There are duplicated implementations of decoding hex strings. #6797
2. ByteBufUtil.HexUtil.decodeHexDump does not handle the substring start
index properly and does not decode hex bytes rigorously.
Modifications:
1. Function decodeHexByte is moved from QueryStringDecoder into ByteBufUtil.
2. ByteBufUtil.HexUtil.decodeHexDump is changed to use decodeHexByte.
3. Tests are updated accordingly.
Result:
Fixed #6797 and made the hex decoding functions more robust.
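A sketch of a rigorous per-byte decoder (assumed shape of the shared decodeHexByte):

    final class DecodeHexByteSketch {
        static byte decodeHexByte(CharSequence s, int pos) {
            int hi = Character.digit(s.charAt(pos), 16);
            int lo = Character.digit(s.charAt(pos + 1), 16);
            if (hi == -1 || lo == -1) {
                // Reject anything that is not a hex digit instead of
                // silently producing garbage.
                throw new IllegalArgumentException(
                        "invalid hex byte at index " + pos + ": " + s.subSequence(pos, pos + 2));
            }
            return (byte) ((hi << 4) + lo);
        }
    }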
Motivation:
Docker's `--tmpfs` flag mounts the temp volume with `noexec` by default,
resulting in an UnsatisfiedLinkError. While this is good security
practice, it is a surprising failure from a seemingly innocuous flag.
Modifications:
Add a best-effort attempt in `NativeLibraryLoader` to detect when temp
files being loaded cannot be executed even when execution permissions
are set, often because the `noexec` flag is set on the volume.
Requires numerous additional exclusions to the Animal Sniffer config
for Java7 POSIX permissions manipulation.
Result:
Fixes [#6678].
Motivation:
The current PROXY protocol implementation does not have support for optional Type-Length-Value fields. This pull requests adds the TLV values as specified in the PROXY protocol specification (http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) and adds support for arbitrary TLVs.
Modifications:
The existing HAProxyMessage implements an additional TLV reading operation. A small bug in the AF_UNIX reader which didn’t set the reader index correctly was also fixed.
Result:
The PROXY protocol implementation now supports TLVs.
Motivation:
A `SeekAheadNoBackArrayException` is used as a check for `ByteBuf#hasArray`. Catching exceptions carries a large overhead due to stack trace filling, and this should be avoided.
Modifications:
- Remove the class `SeekAheadNoBackArrayException` and replace its usage with `if` statements.
- Use methods from `ObjectUtils` for better readability.
- Make private methods static where it makes sense.
- Remove unused private methods.
Result:
Less exception handling logic, better performance.
Motivation:
JdkZlibDecoder will allocate a new buffer when the previous buffer is filled with inflated data, but JZlibDecoder will attempt to reuse the same buffer by resizing it. This leads to inconsistent results between these two decoders, which are intended to be functionally equivalent.
Modifications:
- JdkZlibDecoder should attempt to resize and reuse the existing buffer instead of creating multiple buffers
Result:
Fixes https://github.com/netty/netty/issues/6804
Motivation:
If a full HttpResponse with a Content-Length header is encoded by the HttpContentEncoder subtypes, the Content-Length header is removed and the message is set to Transfer-Encoding: chunked. This is an unnecessary loss of information about the message content.
Modifications:
- If a full HttpResponse has a Content-Length header, the header is adjusted after encoding.
Result:
Complete messages continue to have the Content-Length header after encoding.
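A sketch of the adjustment (assumed flow; the real change lives in HttpContentEncoder):

    import io.netty.handler.codec.http.FullHttpResponse;
    import io.netty.handler.codec.http.HttpHeaderNames;

    final class AdjustContentLengthSketch {
        // For a full (complete) response the encoded body size is known, so
        // Content-Length can simply be updated instead of being dropped in
        // favour of Transfer-Encoding: chunked.
        static void adjustContentLength(FullHttpResponse response) {
            if (response.headers().contains(HttpHeaderNames.CONTENT_LENGTH)) {
                response.headers().setInt(
                        HttpHeaderNames.CONTENT_LENGTH, response.content().readableBytes());
            }
        }
    }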
Motivation:
If the destination buffer is completely filled during a call to OpenSslEngine#wrap(..) we may return NEED_UNWRAP because there is no data pending in the SSL buffers. However during a handshake, if the SSL buffers were just drained and filled up the destination buffer, it is possible OpenSSL may produce more data on the next call to SSL_write. This means we should keep trying to call SSL_write as long as the destination buffer is filled, and only return NEED_UNWRAP when the destination buffer is not full and there is no data pending in OpenSSL's buffers.
Modifications:
- If the handshake produces data in OpenSslEngine#wrap(..) we should return NEED_WRAP if the destination buffer is completely filled
Result:
OpenSslEngine returns the correct handshake status from wrap().
Fixes https://github.com/netty/netty/issues/6796.
Motivation:
QueryStringDecoder has several problems:
- doesn't correctly decode a path part with a `+` (plus) sign in it,
- doesn't cut the `fragment` (after `#`) from the query string (see RFC 3986),
- doesn't work correctly with encoding,
- treats `%%` as an escape for the percent character (which is not described in the RFC).
Modifications:
- leave `+` chars in the `path` part of the URI string,
- ignore the `fragment` part (after `#`),
- work correctly with encoding,
- don't treat `%%` as an escape for `%`.
Result:
Fixed issues from #6745.
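A usage sketch of the fixed behavior:

    import io.netty.handler.codec.http.QueryStringDecoder;

    final class DecoderBehaviorSketch {
        static void demo() {
            QueryStringDecoder d = new QueryStringDecoder("/a+b/c?x=1#frag");
            String path = d.path();                    // "/a+b/c": '+' kept in the path
            String x = d.parameters().get("x").get(0); // "1": the fragment is ignored
        }
    }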
Motivation:
ChannelPipeline will happily add a handler to a closed Channel's pipeline and will call handlerAdded(...) but will not call handlerRemoved(...).
Modifications:
Check if the pipeline was destroyed and, if so, do not add the handler at all but propagate an exception.
Result:
Fixes [#6768]
Motivation:
Allow subclasses of HttpObjectEncoder other than HttpServerCodec to override the isContentAlwaysEmpty method
Modification:
Change the method visibility from package private to protected
Result:
Fixes #6761.
Motivation:
We need to exclude the netty-tcnative classes from the all jar when building it as otherwise there may be classloader issues.
Modifications:
Add io/netty/internal/tcnative to the exclude list.
Result:
Fixes [#6791]
Motivation:
ByteBufUtil provides a hexDump method. For debugging purposes it is often useful to decode that hex dump to get the original content, but no such method exists.
Modifications:
- Add ByteBufUtil#decodeHexDump
Result:
ByteBufUtil#decodeHexDump is available to make debugging easier.
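A round-trip usage sketch:

    import io.netty.buffer.ByteBufUtil;

    final class HexDumpRoundTrip {
        static void demo() {
            byte[] original = {0x0a, 0x1b, (byte) 0xff};
            String dump = ByteBufUtil.hexDump(original);      // "0a1bff"
            byte[] decoded = ByteBufUtil.decodeHexDump(dump); // back to the bytes
        }
    }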
Motivation:
RedisDecoder can get into an infinite loop while decoding bulk strings if the final \r and \n to indicate the end of content are split on ByteBuf boundaries.
Modifications:
- We should break out of the decode loop if remainingBulkLength is 0 and we don't have enough data to read EOL
Result:
No more infinite loop in RedisDecoder#decodeBulkStringContent.
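A sketch of the loop guard (names assumed, simplified from the decoder):

    import io.netty.buffer.ByteBuf;

    final class BulkStringGuardSketch {
        // With all bulk content consumed we still need the trailing CRLF;
        // if it is not fully readable yet, stop and wait for more data
        // instead of spinning in the decode loop.
        static boolean canDecodeEol(int remainingBulkLength, ByteBuf in) {
            return remainingBulkLength > 0 || in.readableBytes() >= 2;
        }
    }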
Motivation:
We used strstr to find the path to the library, which fails if the library is contained in a directory that also matches the library name.
Modifications:
- Introduce netty_unix_util_strstr_last, which returns a pointer to the last occurrence and so does not fail if the directory also matches the library name.
Result:
Be able to load the library in all cases.
Motivation:
Fix the recently introduced regression that causes already-encoded responses to be encoded again as gzip.
Modification:
- Instead of just looking for IDENTITY, anything set for Content-Encoding should be respected and left as-is.
- Added unit tests to capture this use case.
Result:
Fixes #6784.