Motivations
-----------
DnsNameResolverBuilder and DnsNameResolver do not auto-configure
themselves using default options defined in /etc/resolv.conf.
In particular, rotate, timeout and attempts options are ignored.
Modifications
-------------
- Modified UnixResolverDnsServerAddressStreamProvider to parse ndots,
attempts and timeout options all at once and use these defaults to
configure DnsNameResolver when values are not provided by the
DnsNameResolverBuilder.
- When rotate option is specified, the DnsServerAddresses returned by
UnixResolverDnsServerAddressStreamProvider is rotational.
- Amend resolv.conf options with the RES_OPTIONS environment variable
when present.
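For illustration, a hedged example of a resolv.conf exercising these options (addresses and values are made up):
```
options ndots:2 timeout:3 attempts:2 rotate
nameserver 10.0.0.1
nameserver 10.0.0.2
```
Setting e.g. `RES_OPTIONS="attempts:5"` in the environment amends the options above.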
Result:
Fixes https://github.com/netty/netty/issues/10202
Motivation:
Currently the calculateSize method in AbstractTrafficShapingHandler calculates the size only for objects of type ByteBuf or ByteBufHolder. Adding a check for FileRegion makes it possible to do traffic shaping for FileRegion objects as well.
Modification:
Check if the object to be sent is of type FileRegion; if so, calculate the size using its count() method.
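A hedged sketch of the size calculation (the ByteBuf/ByteBufHolder branches mirror the existing behavior; this is not the verbatim patch):
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.channel.FileRegion;

long calculateSize(Object msg) {
    if (msg instanceof ByteBuf) {
        return ((ByteBuf) msg).readableBytes();
    }
    if (msg instanceof ByteBufHolder) {
        return ((ByteBufHolder) msg).content().readableBytes();
    }
    if (msg instanceof FileRegion) {
        // FileRegion exposes the number of bytes it will transfer via count().
        return ((FileRegion) msg).count();
    }
    return -1; // unknown type: not accounted for by traffic shaping
}
```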
Co-authored-by: Dinesh Joshi <dinesh.joshi@apple.com>
Motivation:
An unexpected IOException may be thrown from `FileChannel.force`. If this happens, `FileChannel.close` may not be invoked.
Modification:
Place the `FileChannel.close` in a finally block.
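A minimal sketch of the pattern, assuming a channel that is ready to be synced and closed:
```java
import java.io.IOException;
import java.nio.channels.FileChannel;

static void syncAndClose(FileChannel channel) throws IOException {
    try {
        channel.force(true); // may throw an unexpected IOException
    } finally {
        channel.close();     // now runs even if force(...) failed
    }
}
```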
Result:
Avoid fd leak.
Motivation:
We need to make some slight changes to be able to build on Java 15, as some previously deprecated methods now throw UnsupportedOperationException
Modifications:
- Add code to handle UnsupportedOperationException
- Revert previous applied workaround for bug that was fixed in Java15
- Add maven profile
Result:
Be able to build with latest Java15 EA release
Motivation:
We can make use of our optimized implementations for UTF-8 and US-ASCII if the user requests a copy of a sequence in these charsets
Modifications:
- Add fastpath implementation
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10205
Motivation:
Http2ConnectionHandler may call ctx.close(...) with the same promise instance multiple times if the timeout for gracefulShutdown elapses and the listener itself is notified. This can cause problems as other handlers in the pipeline may queue these promises and try to notify them later via setSuccess() or setFailure(...), which will then throw an IllegalStateException if the promise was already notified
Modification:
- Add a boolean flag to ensure doClose() will only try to call ctx.close(...) once (sketched below)
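A hedged sketch of the guard (field and method shape are illustrative, not the exact Http2ConnectionHandler internals):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

private boolean closeInvoked;

private void doCloseOnce(ChannelHandlerContext ctx, ChannelPromise promise) {
    if (closeInvoked) {
        return; // ctx.close(...) was already called with this promise
    }
    closeInvoked = true;
    ctx.close(promise);
}
```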
Result:
Don't call ctx.close(...) with the same promise multiple times when the gracefulShutdown timeout elapses.
Motivation:
We should provide more information when ALPN is not supported and a user tries to use it.
Modifications:
- Use UnsupportedOperationException
Result:
Easier to debug ALPN problems
Motivation:
ALPN support was recently backported to Java 8. Ensure we support it if the user uses the latest Java 8 release
Modifications:
- Update logic to be able to detect if ALPN is supported out of the box when using Java8
- Update jetty alpn version
Result:
Be able to use ALPN out of the box when using Java 8u251
Motivation:
We need to ensure we do not overflow when adding buffers to a CompositeByteBuf
Modifications:
- Correctly validate overflow before adding to the internal storage (see the sketch after this list)
- Add testcase
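A hedged sketch of the validation from the first bullet (names are illustrative; readableBytes is assumed non-negative):
```java
static void checkForOverflow(int capacity, int readableBytes) {
    if (capacity + readableBytes < capacity) { // int addition wrapped around
        throw new IllegalArgumentException(
                "Can't increase by " + readableBytes + " as capacity(" + capacity
                        + ") would overflow " + Integer.MAX_VALUE);
    }
}
```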
Result:
Fixes https://github.com/netty/netty/issues/10194
Motivation:
`RandomAccessFile.setLength` may throw an IOException. We must handle this in case an I/O error occurs.
Modification:
Place the `RandomAccessFile.setLength` method call in the `try-finally` block.
Result:
Avoid fd leak.
Motivation:
Add support for HAProxyMessageEncoder.
This should help Java-based HAProxy server implementations propagate proxy information.
Modification:
Add public constructors for `HAProxyMessage`, `HAProxyTLV`, `HAProxySSLTLV`.
Add additional argument checks for `HAProxyMessage` and modify exceptions thrown when creating via public constructors directly.
Introduce a `@Sharable` `HAProxyMessageEncoder` which encodes a `HAProxyMessage` into a byte array.
Add an example `HAProxyServer` and `HAProxyClient` to `io.netty.example`
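A hedged usage sketch of the new encoder and public constructor (`ch` is an assumed Channel; addresses and ports are made up):
```java
import io.netty.handler.codec.haproxy.*;

// HAProxyMessageEncoder is @Sharable, so the single INSTANCE can be reused.
ch.pipeline().addLast(HAProxyMessageEncoder.INSTANCE);
ch.writeAndFlush(new HAProxyMessage(
        HAProxyProtocolVersion.V2, HAProxyCommand.PROXY, HAProxyProxiedProtocol.TCP4,
        "192.168.0.1", "192.168.0.11", 56324, 443));
```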
Result:
Fixes #10164
Motivation:
In DefaultChannelHandlerContext we had some code where we tried to guard against endless loops caused by exceptions thrown by exceptionCaught(...) that would trigger exceptionCaught again. This code was problematic for two reasons:
- It is quite expensive as we need to compare all the stacks
- We may end up not notifying another handler's exceptionCaught(...) if our execution stack triggered actions that cause an exceptionCaught somewhere else in the pipeline
Modifications:
- Just remove the detection code as we already handle everything correctly when we invoke exceptionCaught(...)
- Add unit tests
Result:
Ensure we always notify correctly and also fixes performance issue reported as https://github.com/netty/netty/issues/10165
Motivation:
netty_epoll_linuxsocket_JNI_OnLoad(...) may produce a deadlock with another thread that loads IOUtil in a static block. This seems to be a JDK bug which is not yet fixed. To work around this we force IOUtil to be loaded from Java code before the JNI code is initialized
Modifications:
Use Selector.open() as a workaround to load IOUtil
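A minimal sketch of the workaround:
```java
import java.io.IOException;
import java.nio.channels.Selector;

static void preloadIoUtil() throws IOException {
    // Opening (and closing) a Selector from Java forces IOUtil's static
    // initializer to run before the native library is loaded.
    Selector.open().close();
}
```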
Result:
Fixes https://github.com/netty/netty/issues/10187
Motivation:
We have found out that ByteBufUtil.indexOf can be inefficient for substring search on
ByteBuf, both in terms of algorithm complexity (worst case O(needle.readableBytes *
haystack.readableBytes)), and in constant factor (esp. on Composite buffers).
With the implementation of more performant search algorithms we have seen improvements of
an order of magnitude.
Modifications:
This change introduces three search algorithms:
1. Knuth–Morris–Pratt - classical textbook algorithm, a good default choice (a sketch follows below).
2. Bit mask based algorithm - stable performance on any input, but limited to maximum
search substring (the needle) length of 64 bytes.
3. Aho–Corasick - worse performance and higher memory consumption than [1] and [2], but
it supports searching for multiple substrings (needles) simultaneously, by inspecting every
byte of the haystack only once.
Each algorithm processes every byte of the underlying buffer only once; they are implemented
as ByteProcessors.
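A hedged sketch of a Knuth–Morris–Pratt search implemented as a ByteProcessor (illustrative; not the exact classes added by this change). `haystack.forEachByte(processor)` returns the index of the last byte of the first match, or -1:
```java
import io.netty.util.ByteProcessor;

// Assumes a non-empty needle.
final class KmpSearchProcessor implements ByteProcessor {
    private final byte[] needle;
    private final int[] lps; // longest proper prefix that is also a suffix
    private int matched;

    KmpSearchProcessor(byte[] needle) {
        this.needle = needle.clone();
        this.lps = new int[needle.length];
        for (int i = 1, len = 0; i < needle.length;) {
            if (needle[i] == needle[len]) {
                lps[i++] = ++len;
            } else if (len > 0) {
                len = lps[len - 1];
            } else {
                lps[i++] = 0;
            }
        }
    }

    @Override
    public boolean process(byte value) {
        // Fall back through the prefix table; the haystack byte is never re-read.
        while (matched > 0 && value != needle[matched]) {
            matched = lps[matched - 1];
        }
        if (value == needle[matched] && ++matched == needle.length) {
            return false; // full match: forEachByte(...) returns this index
        }
        return true; // keep scanning
    }
}
```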
Result:
Efficient search algorithms with linear time complexity available in Netty (I will share
benchmark results in a comment on a PR).
Motivation:
`FileChannel.force` may throw an IOException. An fd leak may happen here.
Modification:
Close the fileChannel in a finally block.
Result:
Avoid fd leak.
Motivation:
To ensure we always recycle the CodecOutputList we should do it in a finally block
Modifications:
Call CodecOutputList.recycle() in finally
Result:
Less chances of non-recycled lists. Related to https://github.com/netty/netty/issues/10183
Motivation:
The `AbstractTrafficShapingHandler` caches the `ReopenReadTimerTask` instance in the channel attribute. However, if this handler is removed from the channel pipeline, this `ReopenReadTimerTask` instance may not be released.
Modification:
Release the channel attribute `REOPEN_TASK` in `handlerRemoved` method.
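A hedged sketch of the cleanup (the attribute key follows the commit text; the exact task type is illustrative):
```java
import io.netty.channel.ChannelHandlerContext;

@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
    // Drop the cached ReopenReadTimerTask so it cannot outlive the handler.
    ctx.channel().attr(REOPEN_TASK).set(null);
}
```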
Result:
Avoid a channel attribute leak.
Motivation:
The maxCount parameter needs a unit test.
Modifications:
1. Change the conditional statement to avoid an ineffective maxCount (merely enhances the robustness of the code).
2. Add the unit test for maxCount.
Result:
Enable this parameter to be tested.
Motivation:
We should update our optional bouncycastle dependency to ensure we use the latest version, which has all the security fixes
Modifications:
Update bouncycastle version
Result:
Fixes https://github.com/netty/netty/issues/10184
Motivation:
Only the cipher suite is logged during the handshake. The negotiated protocol is interesting too.
Modification:
Log protocol as well.
Result:
More interesting information when debugging TLS handshakes.
Motivation:
Often people want to use `stomp-codec` with a WebSocket (or another) transport but cannot figure out how to do this with Netty.
Modification:
Create an example demonstrating the integration between STOMP and WebSocket.
Inspired by https://github.com/jmesnil/stomp-websocket
Result:
Fixes #9383
Motivation:
A user might want to cancel DNS resolution when it takes too long.
Currently, there's no way to cancel the internal DNS queries especially when there's a lot of search domains.
Modification:
- Stop sending a DNS query if the original `Promise`, which was passed when calling `resolve()`, is cancelled (usage sketched below).
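A hedged usage sketch (the host name is made up):
```java
import java.net.InetAddress;
import io.netty.util.concurrent.Future;

Future<InetAddress> future = resolver.resolve("foo.example.com");
// If resolution takes too long, cancelling the future also stops the
// remaining internal queries, e.g. across the configured search domains.
future.cancel(false);
```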
Result:
- You can now stop sending DNS queries by cancelling the `Promise`.
Motivation:
Related https://github.com/line/armeria/issues/2463
Here is an example where a NIC has only a link-local address for IPv6.
```
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if18692: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1460 qdisc noqueue
link/ether 1a:5e:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 10.xxx.xxx.xxx/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link
valid_lft forever preferred_lft forever
```
If the NICs have only local or link-local addresses, we should not send IPv6 DNS queries.
Modification:
- Ignore link-local IPv6 addresses which may exist even on a machine without IPv6 network.
Result:
- `DnsNameResolver` does not send DNS queries for AAAA when IPv6 is not available.
Motivation:
An `IOException` may be thrown from `FileChannel.write` or `FileChannel.force`, and cause an fd leak.
Modification:
Close the file in a finally block.
Result:
Avoid fd leak.
Motivation:
PoolChunk requires a link to a PoolThreadCache to init ByteBuf. Currently the link is retrieved from a thread local: arena.parent.threadCache().
It has some performance cost. At the beginning of the allocation call the PoolThreadCache is already retrieved from the thread local. The reference can be propagated through the calls and used.
Modifications:
Replace the second lookup of PoolThreadCache during ByteBuf init by explicitly propagating a reference to the PoolThreadCache down the allocation stack
Result:
Improve performance of ByteBuf allocation
--Before--
Benchmark (size) (tokens) (useThreadCache) Mode Cnt Score Error Units
SimpleByteBufPooledAllocatorBenchmark.getAndRelease 123 0 true avgt 20 57.112 ± 1.004 ns/op
SimpleByteBufPooledAllocatorBenchmark.getAndRelease 123 100 true avgt 20 222.827 ± 1.307 ns/op
--After--
Benchmark (size) (tokens) (useThreadCache) Mode Cnt Score Error Units
SimpleByteBufPooledAllocatorBenchmark.getAndRelease 123 0 true avgt 20 50.732 ± 1.321 ns/op
SimpleByteBufPooledAllocatorBenchmark.getAndRelease 123 100 true avgt 20 216.892 ± 3.806 ns/op
Motivation:
Different versions of the JDK use different TLS versions by default. We should define the versions explicitly
Modifications:
Explicitly specify TLSv1.2
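For illustration, a hedged sketch of pinning the protocol on a client context via SslContextBuilder (the exact test wiring may differ):
```java
import javax.net.ssl.SSLException;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

static SslContext clientContext() throws SSLException {
    return SslContextBuilder.forClient()
            .protocols("TLSv1.2") // don't rely on the JDK's default TLS versions
            .build();
}
```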
Result:
Blockhound tests pass on JDK14 as well
Motivation:
If the SslHandler is removed from the pipeline we also need to ensure we fail the handshake / close promise if it was not notified before as otherwise we may never do so.
Modifications:
- Correctly fail promise and notify pipeline if handshake was not done yet when the SslHandler is removed
- Add unit test
Result:
Fix https://github.com/netty/netty/issues/10158
Motivation:
An IOException may be thrown from FileChannel.read, and cause an fd leak.
Modification:
Close the file when IOException occurs.
Result:
Avoid fd leak.
Motivation:
The PoolChunk.usage() method has non-trivial computations. It is currently used in hot-path methods invoked when an allocation or de-allocation happens.
The idea is to replace the usage() output comparison against percent thresholds with a plain comparison of PoolChunk.freeBytes against absolute thresholds. This way the majority of computations from the threshold conditions are moved to the init logic.
Modifications:
Replace PoolChunk.usage() conditions in PoolChunkList with equivalent conditions for PoolChunk.freeBytes()
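A hedged sketch of the equivalence (rounding details omitted; names are illustrative):
```java
// PoolChunk.usage() >= maxUsage  is equivalent to
// PoolChunk.freeBytes() <= chunkSize * (100 - maxUsage) / 100,
// so the byte threshold can be computed once at init time:
final int freeMaxThreshold = chunkSize * (100 - maxUsage) / 100;

// Hot path: a single int comparison instead of computing a percentage.
boolean reachedMaxUsage(PoolChunk<?> chunk) {
    return chunk.freeBytes() <= freeMaxThreshold;
}
```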
Result:
Improve performance of allocation and de-allocation of ByteBuf from normal size cache pool
Motivation:
The way we did wildcard matching was not correct according to RFC 6125. Besides this, our implementation was quite CPU-heavy.
Modifications:
- Add new DomainWildcardMappingBuilder which correctly does wildcard matching (usage sketched below). See https://tools.ietf.org/search/rfc6125#section-6.4
- Add unit tests
- Deprecate old implementations
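A hedged usage sketch of the new builder (the SslContext values are assumed to exist):
```java
import io.netty.handler.ssl.SslContext;
import io.netty.util.DomainWildcardMappingBuilder;
import io.netty.util.Mapping;

Mapping<String, SslContext> mapping =
        new DomainWildcardMappingBuilder<>(defaultSslContext) // fallback value
                .add("*.netty.io", nettyIoSslContext) // matches foo.netty.io,
                                                      // but not netty.io itself
                .add("netty.io", nettyIoSslContext)   // exact match
                .build();
```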
Result:
Correctly implement wildcard matching and improve performance
Motivation:
STOMP frames do not always contain a payload; sometimes there are just headers. In that case we don't want to allocate an additional buffer holding only the NULL terminator.
Modification:
Modify StompSubframeEncoder to check whether the content is readable. If the content is not readable, just add the NULL byte to the encoded header buffer.
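A hedged sketch of the branch (variable names are illustrative; 0 is the STOMP NULL terminator):
```java
if (!frame.content().isReadable()) {
    // Header-only frame: append the NULL terminator directly to the header
    // buffer instead of allocating a separate content buffer for it.
    headerBuf.writeByte(0);
    out.add(headerBuf);
} else {
    out.add(headerBuf);
    out.add(frame.content().retain());
}
```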
Result:
Less allocations
Motivation:
JDK 14 was released and needs some special settings to be able to build with it. Also there seems to be one regression that we need to work around for now.
Modifications:
- Add maven profile for JDK 14
- Update blockhound version to be able to work on JDK 14
- Add workaround for possible JDK 14 regression
Result:
Be able to build on JDK 14
Motivation:
It is more efficient to avoid allocating objects when we don't need to.
Modification:
Don't allocate a `LinkedList` for returning an empty list of header values when the header doesn't exist at all.
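A hedged sketch of the idea (not the actual header class):
```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

static List<String> valuesOf(Map<String, List<String>> headers, String name) {
    List<String> values = headers.get(name);
    // Return the shared immutable empty list instead of a fresh LinkedList.
    return values != null ? values : Collections.emptyList();
}
```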
Result:
Less allocations
Motivation:
The MQTT protocol defines packetId instead of messageId; replace the deprecated method to match the protocol definition
Modification:
Replace messageId() with packetId() in MqttEncoder
Result:
Match the MQTT protocol definition and improve readability.
Motivation:
Currently, (HTTP) disk-based attributes and uploads are globally configured to use a single directory, and whether they are deleted on exit can also only be set globally. This does not fit multiple use cases well, in particular the case where you have multiple servers in the same JVM.
Modification:
Make it configurable per attribute/file upload.
Result:
This PR duplicates the Disk* constructors to add baseDir and deleteOnExit parameters and wires them into the default HTTP data factory.
Motivation:
An IOException may be thrown from InputStream.read or the checkSize method, and cause a ByteBuf leak.
Modification:
Release the ByteBuf when IOException occurs.
Result:
Avoid ByteBuf leak.
Motivation:
JDK 14 was released so we should include it in our build matrix. Besides this, there was also a JDK 13 update
Modifications:
- Add docker-compose files for JDK 14
- Update JDK 13 version
Result:
Build with JDK 14 as well and use latest JDK 13 release
Motivation:
Before release 4.1.23, there was only ONE call to the exceptionCaught method when an SSL handshake failure occurred; now we have two when using the JDK provider.
Modifications:
- Ensure we only propagate one exception if we already failed the handshake and channelInactive(...) produces an exception again
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10119
Motivation:
The current implementation of log2 in PoolThreadCache uses a loop and is less efficient than a version based on Integer.numberOfLeadingZeros (an intrinsic).
Modifications:
Replace the current log2 implementation in PoolThreadCache with a version based on Integer.numberOfLeadingZeros
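A sketch of the replacement, the standard formulation for positive ints:
```java
// Floor of log2; numberOfLeadingZeros is a JIT intrinsic, so this compiles
// down to a couple of instructions instead of a loop.
static int log2(int val) {
    return 31 - Integer.numberOfLeadingZeros(val);
}
```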
Result:
It can improve performance slightly during allocation and de-allocation of ByteBuf using pooled allocator.
Motivation:
At the moment HttpObjectDecoder does not limit the number of control chars. These should be counted with the initial line, and so a limit should be exposed
Modifications:
- Change LineParser to also be used when skipping control chars and so enforce a limit
- Add various tests to ensure that limit is enforced
Result:
Fixes https://github.com/netty/netty/issues/10111
Motivation:
HttpProxyHandler itself will add an HttpClientCodec into the ChannelPipeline and so confuse the WebSocket*Handshaker when it tries to modify the pipeline, as it will replace the wrong HttpClientCodec.
Modifications:
Wrap the internal HttpClientCodec that is added by HttpProxyHandler so it will not be replaced by HttpProxyHandler.
Result:
Fixes https://github.com/netty/netty/issues/5201 and https://github.com/netty/netty/issues/5070
Motivation:
Http2StreamChannelId is Serializable. A test case is needed to ensure that it works.
Modification:
Add a test case about serialization.
Result:
Improve test coverage slightly.
Motivation:
In our WebSocketClientHandshaker* implementations we "hardcode" the version number to use. This is error-prone; we should use the WebSocketVersion instead so we don't need to maintain the value in multiple places. Besides this, we can also use an AsciiString to improve performance
Modifications:
- Use WebSocketVersion.toAsciiString
Result:
Less stuff to maintain and small performance win
Motivation:
The user may need to provide a specific HOST header. We should not override it when specified during handshake.
Modifications:
Check if a custom HOST header is already provided by the user and, if so, don't override it
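A hedged sketch of the check (the method shape is illustrative):
```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaders;

static void setHostHeader(HttpHeaders headers, CharSequence defaultHost) {
    if (!headers.contains(HttpHeaderNames.HOST)) {
        headers.set(HttpHeaderNames.HOST, defaultHost); // only if not user-provided
    }
}
```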
Result:
Fixes https://github.com/netty/netty/issues/10101
Motivation:
For whatever reason IntelliJ produces some test failures when using mock(ChannelHandler.class). In this case it does not "correctly" detect that the @Skip annotation should not be taken into effect.
Modifications:
Use spy(...) to ensure we correctly invoke methods
Result:
No more test failures when running via IntelliJ.
Motivation:
Netty currently does not support the SameSite attribute for response cookies (see issue #8161 for discussion).
Modifications:
The attribute has been added to the DefaultCookie class as a quick fix since adding new methods to the Cookie interface would be backwards-incompatible.
ServerCookieEncoder and ClientCookieDecoder have been updated accordingly to process this value. No validation for allowed values (Lax, None, Strict) has been implemented.
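A hedged usage sketch of the new attribute (cookie name and value are made up):
```java
import io.netty.handler.codec.http.cookie.CookieHeaderNames.SameSite;
import io.netty.handler.codec.http.cookie.DefaultCookie;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;

DefaultCookie cookie = new DefaultCookie("session", "abc123");
cookie.setSameSite(SameSite.Strict); // not validated, per the note above
String setCookieHeader = ServerCookieEncoder.STRICT.encode(cookie);
```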
Result:
Response cookies with the SameSite attribute set can be read or written by Netty.
Co-authored-by: David Latorre <a-dlatorre@hotels.com>
Motivation:
Since LZF supports both non-compressed and compressed formats, we can make LzfEncoder length-aware. This lets the user control compression.
Modification:
When the data length exceeds compressThreshold, LzfEncoder uses the compressed format. Otherwise, it uses the non-compressed format. Whichever format the encoder uses, LzfDecoder can decompress the data.
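A hedged usage sketch; whether the threshold is passed to the constructor like this is an assumption here, not confirmed API:
```java
// Hypothetical: frames up to 512 bytes stay in LZF's non-compressed format,
// larger ones use the compressed format; LzfDecoder handles both.
ch.pipeline().addLast(new LzfEncoder(512));
ch.pipeline().addLast(new LzfDecoder());
```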
Result:
Gives users control over compression capabilities