Motivation:
At the moment, resolving addresses during connect() is done by setting an AddressResolverGroup on the Bootstrap. While this works as expected most of the time, sometimes the user wants to trigger connect() from the Channel itself and not via the Bootstrap. For these cases we should provide a ChannelHandler that the user can use to do the resolution.
Modifications:
Add ResolveAddressHandler and tests
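A minimal usage sketch, assuming the new handler lives in io.netty.handler.address and takes an AddressResolverGroup in its constructor:
```
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.address.ResolveAddressHandler;
import io.netty.resolver.DefaultAddressResolverGroup;

public class ResolveOnConnectInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // With this handler in the pipeline, a connect() triggered on the Channel itself
        // (rather than via a Bootstrap) resolves an unresolved remote address first.
        ch.pipeline().addLast(new ResolveAddressHandler(DefaultAddressResolverGroup.INSTANCE));
    }
}
```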
Result:
Be able to resolve addresses without a Bootstrap
Motivation:
I'm performing some security research against various Java-based projects as a part of the new [GitHub Security Lab](https://securitylab.github.com/) Bug Bounty program.
Currently, LGTM.com can't create a CodeQL database for this project because of missing dependencies. This fixes the build so it works correctly.
The maintainers may also want to consider enabling the free [LGTM Integration](https://github.com/marketplace/lgtm) once this PR is merged.
Modification:
Adds a `.lgtm.yml` with a tested configuration.
A successful build can be found here:
https://lgtm.com/logs/6695df28f6b2b1d3fd4d03e968b600a3e9c9aecf/lang:java
Result:
LGTM.com will be able to process this repository to create a CodeQL database.
Motivation:
Use the latest JCTools to pick up bug fixes etc.
Modification:
Change the pom to depend on version 3.0.0 and ignore API changes in the shaded jar
Result:
Using newer JCTools
Motivation:
We should remove WebSocketServerExtensionHandler from the pipeline after a successful WebSocket upgrade even if the client has not selected any extensions.
Modification:
Remove the handler once the upgrade is complete and no extensions are in use.
Result:
Fixes #9939.
Motivation:
The resolver API and implementations should be considered stable by now, so we should not mark them with @UnstableApi
Modifications:
Remove the @UnstableApi annotation from the resolver API and implementations
Result:
Make it explicit that the API is considered stable
Motivation:
We should close the internal encoder when `LzfEncoder` is removed from the pipeline.
Modification:
Call `encoder.close()` when `handlerRemoved(...)` is triggered.
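A sketch of the change, assuming the handler keeps its LZF ChunkEncoder in a field named `encoder`:
```
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
    // Release the chunk encoder's internal buffer when the handler leaves the pipeline.
    encoder.close();
    super.handlerRemoved(ctx);
}
```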
Result:
Close the encoder to release its internal buffer.
Motivation
Recent optimization #9765 introduced a bug where the native indices of
the internal reused duplicate nio buffer are not properly reset prior to
using it to copy data during a reallocation operation. This can result
in BufferOverflowExceptions thrown during ByteBuf capacity changes.
The code path in question applies only to pooled direct buffers when
Unsafe is disabled or not available.
Modification
Ensure ByteBuffer#clear() is always called on the reused internal nio
buffer prior to returning it from PooledByteBuf#internalNioBuffer()
(protected method); add unit test that exposes the bug.
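A minimal sketch of the guard, assuming the cached duplicate is kept in a `tmpNioBuf` field (names are illustrative, not the exact Netty code):
```
protected final ByteBuffer internalNioBuffer() {
    ByteBuffer tmpNioBuf = this.tmpNioBuf;
    if (tmpNioBuf == null) {
        this.tmpNioBuf = tmpNioBuf = newInternalNioBuffer(memory);
    } else {
        // Reset position/limit so stale indices from a previous use cannot cause a
        // BufferOverflowException when the buffer is reused to copy data.
        tmpNioBuf.clear();
    }
    return tmpNioBuf;
}
```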
Result
Fixes #9911
Motivation:
Since "Http2ClientInitializer" creates a new SSLContext Handler without specifying Host, Netty does not add SNI Extension in TLS Client Hello request and the request fails if the server uses SNI to establish TLS Connection.
Modification:
Specify the peer host when creating the SslHandler in `Http2ClientInitializer`.
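A sketch of the fix inside the initializer; the `sslCtx`, `remoteHost` and `remotePort` names are illustrative:
```
@Override
public void initChannel(SocketChannel ch) {
    // Creating the SslHandler with the peer host/port makes the TLS ClientHello carry SNI.
    ch.pipeline().addLast(sslCtx.newHandler(ch.alloc(), remoteHost, remotePort));
    // ... HTTP/2 handlers are added after the SslHandler ...
}
```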
Result:
Netty adds an SNI extension for the specified host and sends it with the TLS ClientHello.
Fixes #9815.
Motivation:
41c47b4 introduced a change in an existing test case which caused the build to fail when jdkCompatibilityMode is false.
Modifications:
Fix unit tests
Result:
Build passes when jdkCompatibilityMode is false as well
Motivation
A bug was introduced in #9806 which looks likely to be the cause of
#9919. SniHandler will enter an infinite loop if an SSL record is
received with SSL major version byte != 3 (i.e. something other than TLS
or SSL3.0)
Modifications
- Follow the default path as intended for majorVersion != 3 in
AbstractSniHandler#decode(...) (see the sketch below)
- Add unit test to reproduce the hang
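A minimal sketch of the intended control flow for the first point above (the `useDefaultContext` helper is illustrative, not the exact Netty code):
```
// SSL/TLS record header: content type (1 byte), major version (1 byte), minor version (1 byte), length (2 bytes)
final int majorVersion = in.getUnsignedByte(in.readerIndex() + 1);
if (majorVersion != 3) {
    // Not SSLv3/TLS: stop looking for an SNI extension and proceed with the default setup
    // instead of waiting for more bytes that will never form a valid handshake record.
    useDefaultContext(ctx);
    return;
}
```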
Result
Fixes #9919
Motivation:
We can extend `ResourceLeakDetector` through `ResourceLeakDetectorFactory` and then report the leaked information by overriding `reportTracedLeak` and `reportUntracedLeak`. However, whether `reportTracedLeak` and `reportUntracedLeak` are invoked is controlled by `logger.isErrorEnabled()`, which is too restrictive. When extending `ResourceLeakDetector` we sometimes need `needReport` to always return true instead of relying on `logger.isErrorEnabled()`.
Modification:
Introduce a protected `needReport()` method
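A sketch of how an extension might use the new hook, assuming `needReport()` takes no arguments and the two-argument detector constructor:
```
public final class AlwaysReportingLeakDetector<T> extends ResourceLeakDetector<T> {
    public AlwaysReportingLeakDetector(Class<?> resourceType, int samplingInterval) {
        super(resourceType, samplingInterval);
    }

    @Override
    protected boolean needReport() {
        // Report leaks even when error logging is disabled.
        return true;
    }
}
```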
Result:
We can control the leak-reporting behavior.
Motivation:
"io.netty.leakDetection.maxSampledRecords" was removed in 16b1dbdf92
Modification:
Replace it with io.netty.leakDetection.targetRecords
Result:
Use correct flag during test execution
Motivation:
When `consolidateWhenNoReadInProgress` is true, `channel.writeAndFlush(data).addListener(f -> channel.writeAndFlush(data2))` will cause `data2` to never be flushed.
The flush operation synchronously executes `channel.writeAndFlush(data2)` in the listener, and because the executing thread is still the event loop (`executor.inEventLoop()` is true), all handlers run synchronously. Since `nextScheduledFlush` is still non-null at that point, the `flush` of `data2` is ignored in `FlushConsolidationHandler#scheduleFlush`.
Modification:
- Reset `nextScheduledFlush` before calling `ctx.flush()` (see the sketch below)
- Use `ObjectUtil` to polish the code
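A sketch of the corrected flush path (method and field names are illustrative, not the exact handler code):
```
private void flushNow(ChannelHandlerContext ctx) {
    // Clear the marker *before* flushing so that writes issued by listeners running
    // inside this flush can schedule (or perform) their own flush instead of being skipped.
    nextScheduledFlush = null;
    ctx.flush();
}
```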
Result:
Fixes https://github.com/netty/netty/issues/9923
Motivation:
Utf8FrameValidator must release the input frame if validation fails to ensure no memory is leaked
Modifications:
- Catch the exception, release the frame and rethrow (see the sketch below)
- Adjust unit test
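A sketch of the catch-release-rethrow pattern, using CorruptedFrameException purely for illustration (not necessarily the exact exception type involved):
```
private void validateUtf8(WebSocketFrame frame) {
    try {
        // ... run the UTF-8 validator over frame.content() ...
    } catch (CorruptedFrameException e) {
        // Release the input frame before propagating the error so its buffer is not leaked.
        frame.release();
        throw e;
    }
}
```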
Result:
Fixes https://github.com/netty/netty/issues/9906
Motivation:
decodeHexNibble can be a lot faster using a lookup table
Modifications:
decodeHexNibble is made faster by using a lookup table
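A minimal sketch of the lookup-table approach (not the exact Netty implementation):
```
final class HexDecode {
    private static final byte[] HEX2B = new byte[128];
    static {
        java.util.Arrays.fill(HEX2B, (byte) -1);
        for (char c = '0'; c <= '9'; c++) { HEX2B[c] = (byte) (c - '0'); }
        for (char c = 'a'; c <= 'f'; c++) { HEX2B[c] = (byte) (c - 'a' + 10); }
        for (char c = 'A'; c <= 'F'; c++) { HEX2B[c] = (byte) (c - 'A' + 10); }
    }

    static int decodeHexNibble(char c) {
        // A single bounds check plus an array load replaces the per-character range comparisons.
        return c < 128 ? HEX2B[c] : -1;
    }
}
```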
Result:
decodeHexNibble is faster
Motivation
This PR is a reduced-scope replacement for #8931. It doesn't include the
changes related to how/when discarding read bytes is done, which we plan
to address in subsequent updates.
Modifications
- Avoid copying bytes in COMPOSITE_CUMULATOR in all cases, performing a
shallow copy where necessary; also guard against (unusual) case where
input buffer is composite with writer index != capacity
- Ensure we don't pass a non-contiguous buffer when MERGE_CUMULATOR is
used
- Manually inline some calls to ByteBuf#writeBytes(...) to eliminate
redundant checks and reduce stack depth
Also includes prior minor review comments from @trustin
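A rough sketch of the copy-avoiding composite cumulation described in the first point above (not the exact Netty code; addFlattenedComponents attaches the input without copying its bytes):
```
static ByteBuf cumulateComposite(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
    if (!cumulation.isReadable()) {
        cumulation.release();
        return in;
    }
    CompositeByteBuf composite;
    if (cumulation instanceof CompositeByteBuf && cumulation.refCnt() == 1) {
        composite = (CompositeByteBuf) cumulation;
        // Guard against a composite whose writerIndex != capacity: trim it so the
        // appended component becomes visible at the right offset.
        if (composite.writerIndex() != composite.capacity()) {
            composite.capacity(composite.writerIndex());
        }
    } else {
        composite = alloc.compositeBuffer(Integer.MAX_VALUE).addFlattenedComponents(true, cumulation);
    }
    // Shallow copy: the input buffer's memory is referenced, not copied.
    return composite.addFlattenedComponents(true, in);
}
```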
Result
More correct handling of merge/composite cases and
more efficient handling of composite case.
Motivation:
Currently, characters are appended to the encoded string char by char even when no encoding is needed. We can instead separate out a code path that appends the entire string in one go, which is better for `StringBuilder` allocation performance.
Modification:
Only fall back to the char-by-char loop when a character that requires encoding is found.
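A sketch of the idea; the helper names `dontNeedEncoding` and `encodeSlowPath` are illustrative:
```
private void appendEncoded(StringBuilder uriBuilder, String s) {
    int i = 0;
    // Scan for the first character that actually needs encoding.
    while (i < s.length() && dontNeedEncoding(s.charAt(i))) {
        i++;
    }
    // Bulk-append the unencoded prefix in one go.
    uriBuilder.append(s, 0, i);
    if (i < s.length()) {
        // Only fall back to per-character encoding for the remainder.
        encodeSlowPath(uriBuilder, s, i);
    }
}
```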
Result:
The results are somewhat noisy on my laptop, but the biggest impact is on long strings: fewer resizes of the buffer and a simpler loop. There does not appear to be a significant downside for the cases that hit the slow path.
After
```
Benchmark Mode Cnt Score Error Units
QueryStringEncoderBenchmark.longAscii thrpt 6 1.406 ± 0.069 ops/us
QueryStringEncoderBenchmark.longAsciiFirst thrpt 6 0.046 ± 0.001 ops/us
QueryStringEncoderBenchmark.longUtf8 thrpt 6 0.046 ± 0.001 ops/us
QueryStringEncoderBenchmark.shortAscii thrpt 6 15.781 ± 0.949 ops/us
QueryStringEncoderBenchmark.shortAsciiFirst thrpt 6 3.171 ± 0.232 ops/us
QueryStringEncoderBenchmark.shortUtf8 thrpt 6 3.900 ± 0.667 ops/us
```
Before
```
Benchmark Mode Cnt Score Error Units
QueryStringEncoderBenchmark.longAscii thrpt 6 0.444 ± 0.072 ops/us
QueryStringEncoderBenchmark.longAsciiFirst thrpt 6 0.043 ± 0.002 ops/us
QueryStringEncoderBenchmark.longUtf8 thrpt 6 0.047 ± 0.001 ops/us
QueryStringEncoderBenchmark.shortAscii thrpt 6 16.503 ± 1.015 ops/us
QueryStringEncoderBenchmark.shortAsciiFirst thrpt 6 3.316 ± 0.154 ops/us
QueryStringEncoderBenchmark.shortUtf8 thrpt 6 3.776 ± 0.956 ops/us
```
Motivation:
https://github.com/netty/netty/pull/9883 added a bug-fix for the Comparator in ClientCookieEncoder but did not add a testcase.
Modifications:
- Add testcase
- Simplify code
Result:
Include a test to ensure we do not regress.
Motivation:
The current implementation causes IllegalArgumentExceptions to be thrown on Java 11.
The current implementation violates the comparison contract for two cookies C1 and C2 with the same path length, since it reports both C1 < C2 and C2 < C1. Returning 0 (equality) does not, since then C1 == C2 and C2 == C1. See #9881
Modification:
Return equality instead of "less than" when the path lengths are the same.
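A minimal illustration of a contract-respecting comparator (not the exact Netty code): longer paths sort first, and equal path lengths compare as equal, so sgn(compare(c1, c2)) == -sgn(compare(c2, c1)) holds.
```
Comparator<Cookie> byPathLengthDescending = (c1, c2) -> {
    int len1 = c1.path() == null ? Integer.MAX_VALUE : c1.path().length();
    int len2 = c2.path() == null ? Integer.MAX_VALUE : c2.path().length();
    // Returns 0 for equal lengths instead of always claiming "less than".
    return Integer.compare(len2, len1);
};
```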
Result:
Fixes #9881.
Motivation:
In Java, using `ByteBuffer` is almost always slower than `byte[]` unless pooling or I/O is involved. `QueryStringDecoder` can use `byte[]` with arguably simpler code.
Modification:
Replace `ByteBuffer` / `CharsetDecoder` with `byte[]` and `new String`
Result:
After
```
Benchmark Mode Cnt Score Error Units
QueryStringDecoderBenchmark.noDecoding thrpt 6 5.612 ± 2.639 ops/us
QueryStringDecoderBenchmark.onlyDecoding thrpt 6 1.393 ± 0.067 ops/us
QueryStringDecoderBenchmark.mixedDecoding thrpt 6 1.223 ± 0.048 ops/us
```
Before
```
Benchmark Mode Cnt Score Error Units
QueryStringDecoderBenchmark.noDecoding thrpt 6 6.123 ± 0.250 ops/us
QueryStringDecoderBenchmark.onlyDecoding thrpt 6 0.922 ± 0.159 ops/us
QueryStringDecoderBenchmark.mixedDecoding thrpt 6 1.032 ± 0.178 ops/us
```
I notice #6781 switched from an array to `ByteBuffer` but I can't find any motivation for that in the PR. Unit tests pass fine with an array and we get a reasonable speed bump.
Motivation:
The resolv.conf file may contain inline comments, which should be ignored
Modifications:
- Detect if there is a comment after the IP address and, if so, strip it (see the sketch after this list)
- Add unit test
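A minimal sketch of the comment-stripping step (method name illustrative):
```
private static String stripInlineComment(String value) {
    // resolv.conf allows inline comments starting with '#' or ';' after a value.
    for (int i = 0; i < value.length(); i++) {
        char c = value.charAt(i);
        if (c == '#' || c == ';') {
            return value.substring(0, i).trim();
        }
    }
    return value;
}
```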
Result:
Fixes https://github.com/netty/netty/issues/9889
Motivation:
ByteToMessageDecoder requires using an intermediate List to put results into. This intermediate list adds overhead (memory/CPU) which grows as the number of objects increases. This overhead can be avoided by directly propagating events through the ChannelPipeline via ctx.fireChannelRead(...). This also makes the semantics more clear and allows us to keep track if we need to call ctx.read() in all cases.
Modifications:
- Remove the List from the method signatures of ByteToMessageDecoder.decode(...) and decodeLast(...) (see the sketch after this list)
- Adjust all sub-classes
- Adjust unit tests
- Fix javadocs.
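A sketch of what a decoder subclass looks like against the adjusted API; the exact new method signature is an assumption based on the description above:
```
public class EightByteFrameDecoder extends ByteToMessageDecoder {
    private static final int FRAME_LENGTH = 8;

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in) {
        while (in.readableBytes() >= FRAME_LENGTH) {
            // Propagate each decoded message directly instead of adding it to an out List.
            ctx.fireChannelRead(in.readRetainedSlice(FRAME_LENGTH));
        }
    }
}
```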
Result:
Adjust ByteToMessageDecoder as noted in https://github.com/netty/netty/issues/8525.
Motivation:
ByteToMessageDecoder's default MERGE_CUMULATOR will allocate a new buffer and
copy if the refCnt() of the cumulation is > 1. However this is overly
conservative because we may be able to avoid the allocation/copy if the current
cumulation can accommodate the input buffer without a reallocation. Also, when the
reallocation and copy does occur, the new buffer is sized just large enough to
accommodate the current amount of data. If some data remains in the
cumulation after decode this will require a new allocation/copy when more data
arrives.
Modifications:
- Use maxFastWritableBytes to avoid the allocation/copy if the current buffer can
accommodate the input data without a reallocation operation (see the sketch after this list).
- Use ByteBufAllocator#calculateNewCapacity(..) to get the size of the buffer
when a reallocation/copy operation is necessary.
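A rough sketch of the new decision logic (not the exact Netty code):
```
static ByteBuf mergeCumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
    if (cumulation.refCnt() == 1 && cumulation.maxFastWritableBytes() >= in.readableBytes()) {
        // Enough space is already available without a reallocation: just append.
        cumulation.writeBytes(in);
        in.release();
        return cumulation;
    }
    // Otherwise grow using the allocator's growth policy so future data is less likely
    // to force yet another reallocation/copy.
    int totalBytes = cumulation.readableBytes() + in.readableBytes();
    ByteBuf newCumulation = alloc.buffer(alloc.calculateNewCapacity(totalBytes, Integer.MAX_VALUE));
    newCumulation.writeBytes(cumulation).writeBytes(in);
    cumulation.release();
    in.release();
    return newCumulation;
}
```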
Result:
ByteToMessageDecoder MERGE_CUMULATOR won't allocate/copy if the cumulation
buffer can accommodate data without a reallocation, and when a reallocation
occurs we are more likely to leave additional space for future data in an effort
to reduce overall reallocations.
Motivation:
RFC7230 states that we should not accept multiple content-length headers and also should not accept a content-length header in combination with transfer-encoding: chunked
Modifications:
- Check for multiple content-length headers and, if found, mark the message as invalid
- Check whether both a content-length header and transfer-encoding: chunked are present and, if so, mark the message as invalid (see the sketch after this list)
- Add unit test
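A sketch of the two checks (illustrative, not the exact decoder code):
```
private static void validateContentLength(HttpMessage message) {
    List<String> contentLengths = message.headers().getAll(HttpHeaderNames.CONTENT_LENGTH);
    boolean chunked = HttpUtil.isTransferEncodingChunked(message);
    if (contentLengths.size() > 1 || (!contentLengths.isEmpty() && chunked)) {
        // Multiple Content-Length values, or Content-Length combined with
        // Transfer-Encoding: chunked, make the message invalid per RFC 7230.
        message.setDecoderResult(DecoderResult.failure(
                new IllegalArgumentException("invalid Content-Length / Transfer-Encoding")));
    }
}
```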
Result:
Fixes https://github.com/netty/netty/issues/9861
Motivation:
Technically speaking it is valid to have HTTP headers with no values, so we should support them. That said, we need to detect whether these were "generated" because of an "invalid" fold.
Modifications:
- Detect if a colon is missing when parsing headers.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9866
Motivation:
A client can split data into a varying number of fragments, and sometimes the last frame may contain trailing garbage data that does not affect the decompression process.
Modification:
Add a check: if the last frame is a `ContinuationWebSocketFrame` and the decompressed data is empty,
then do not throw an exception.
Result:
Fixes #9770
Motivation:
In #9603 the executor hung on shutdown because of an abandoned task on
another executor that the first one was waiting for.
Modifications:
This commit modifies the executor shutdown sequence to include
switching to SHUTDOWN state and then running all remaining tasks.
This ensures that no more tasks are scheduled after SHUTDOWN and that
the last pass of running remaining tasks will pick them all up.
Any tasks scheduled after SHUTDOWN will be rejected.
This change preserves the functionality of graceful shutdown with
quiet period and only adds one more pass of task execution after
the default shutdown process has finished and the executor is
ready for termination.
Result:
After this change, tasks that are successfully added to the executor will
always be executed. Tasks which come too late will be rejected instead of
abandoned.
Motivation:
Multiple cookie values can be present in a single header.
Modification:
Add `decodeAll` overload which returns all cookies
Result:
Fixes #7210
Note:
This change is not as prescriptive as the ideas brought up in the linked issue. Changing the Set implementation or the equals/compareTo definition is likely a breaking change, so those options are not practical.
# Motivation:
`DefaultByteBufHolder.equals()` considers another object equal if it is an instance of `ByteBufHolder` and the contents of the two objects are equal. However, the behavior of the `equals` method is not part of the `ByteBufHolder` contract, so `DefaultByteBufHolder`'s version may cause a violation of the symmetric property if other classes use different logic.
There are already a few classes that are affected by this: `DefaultHttp2GoAwayFrame`, `DefaultHttp2UnknownFrame`, and `SctpMessage` are all overriding `equals` method breaking the symmetric property.
Another effect of this behavior is that all instances with empty data are considered equal. That may not be desirable in situations where instances are created as predefined constants, e.g. `FullBulkStringRedisMessage.NULL_INSTANCE` and `FullBulkStringRedisMessage.EMPTY_INSTANCE` in `codec-redis`.
# Modification:
Make `DefaultByteBufHolder.equals()` implementation only work for the objects of the same class.
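A sketch of the stricter check; the `data` field name is illustrative:
```
@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    // Only instances of exactly the same class can be equal, which keeps equals symmetric
    // for subclasses that define their own equality.
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    return data.equals(((DefaultByteBufHolder) o).content());
}
```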
# Result:
- The symmetric property of the `equals` method is restored for the classes in question.
- Instances of different classes are not considered equal even if the content of the data they hold are the same.
Motivation:
We should use Objects.requireNonNull(...) as we require Java 8
Modifications:
Replace ObjectUtil.checkNotNull(...) with Objects.requireNonNull(...)
Result:
Code cleanup
Motivation:
The current implementation delegates to writeHeaders(ChannelHandlerContext ctx, int streamId, Http2Headers headers, int streamDependency, short weight, boolean exclusive, int padding, boolean endStream, ChannelPromise promise), which sends a HEADERS frame with the priority flag set and the default priority values even if the user did not want that.
Modifications:
- Change DefaultHttp2ConnectionEncoder to call the correct Http2FrameWriter method depending on whether the user wants to use priorities or not
- Adjust tests
Result:
Fixes https://github.com/netty/netty/issues/9842
Motivation:
SnappyFrameDecoderTest has a few tests which fail to close the EmbeddedChannel
and therefore may leak ByteBuf objects.
Modifications:
- Make sure EmbeddedChannel#finishAndReleaseAll() is called in all tests
Result:
No more leaks from SnappyFrameDecoderTest.
Motivation:
We did not correctly close the `EmbeddedChannel`, which would lead to `handlerRemoved(...)` not being called. This can lead to leaks. Besides this, we also did not correctly consume the produced data, which could also show up as a leak.
Modifications:
- Always call `EmbeddedChannel.finish()`
- Ensure we consume all produced data and release it
Result:
No more leaks in test. This showed up in https://github.com/netty/netty/pull/9850#issuecomment-562504863.
Motivation:
97361fa2c8 replaced synchronized with ConcurrentHashMap in the *Bootstrap classes but missed doing the same for the Http2 variant.
Modifications:
- Use ConcurrentHashMap
- Simplify code in *Bootstrap classes
Result:
Less contention
Motivation
While working on other changes I noticed some opportunities to
streamline a few things in AbstractByteBuf.
Modifications
- Avoid duplicate ensureAccessible() checks in discard(Some)ReadBytes()
and ensureWritable0(int) methods
- Simplify ensureWritable0(int) logic
- Make some conditional checks more concise
Result
Cleaner, possibly faster code
Motivation:
We should include the shaded sources for JCTools in our sources jar to make it easier to debug.
Modifications:
- Adjust plugin configuration to execute plugins in correct order
- Update source plugin
- Add configuration for shade plugin to generate source jar content
Result:
Fixes https://github.com/netty/netty/issues/6640.
Motivation:
https://github.com/netty/netty/pull/9797 changed the code for recvmmsg and sendmmsg to use the syscalls directly to remove the dependency on newer GLIBC versions. Unfortunately it made the assumption that the syscall numbers are the same for different architectures, which is not the case.
Thanks to @jayv for pointing it out
Modifications:
Add #if, #elif and #else declarations to ensure we pick the correct syscall number (or do not support it if the architecture is not supported at the moment).
Result:
Pick the correct syscall number depending on the architecture.
Motivation:
Netty has used epoll in edge-triggered mode by default forever and there is really no reason why anyone should use level-triggered mode (it is considered an implementation detail).
Modifications:
- Remove code that was related to level-triggered mode
- Adjust test class names
Result:
Fixes https://github.com/netty/netty/issues/9349
Motivation:
HttpPostStandardRequestDecoder may throw multiple different exceptions in the constructor, which could lead to memory leaks. We need to guard against this by explicitly catching all of them and rethrowing after we have released any allocated memory.
Modifications:
- Catch, destroy and rethrow in any case
- Ensure we correctly wrap IllegalArgumentExceptions
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9829
Motivation:
Replace Map with Set: `reportedLeaks` has better semantics as a Set, since when it is used as a Map the values carry no meaning for us.
Modifications:
Use Set.
Result:
Cleaner code
Motivation:
At some point we added spliceTo(...) support, which was never really used, so we should take the chance and remove it again now as part of the next major release
Modifications:
Remove spliceTo(...) related code
Result:
Less code to maintain