Motivation:
When using Netty on Android or with, for example, an IBM JVM, it may not be able to build an SslContext because we hardcoded the use of JKS and SunX509 (both of which may not be present).
Modifications:
- Use the default algorithm / keystore type, which can be overridden via a system property (see the sketch below)
- Remove the system property check, as it is redundant with KeyManagerFactory.getDefaultAlgorithm()
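A minimal sketch of the portable lookup (the helper shape is hypothetical): ask the JDK for its defaults instead of hardcoding "JKS" and "SunX509", which may be absent on Android or on non-Oracle JVMs such as the IBM JVM.
```
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;

static KeyStore newDefaultKeyStore() throws Exception {
    // "JKS" becomes whatever the platform declares as its default type
    return KeyStore.getInstance(KeyStore.getDefaultType());
}

static KeyManagerFactory newDefaultKeyManagerFactory() throws Exception {
    // "SunX509" becomes the platform default algorithm; the JDK already lets
    // this default be overridden by configuration, so no extra check is needed
    return KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
}
```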
Result:
More portable code. Fixes [#7546].
Motivation:
DnsNameResolverTest has not been updated in a while.
Modifications:
- Update the DOMAINS definition in DnsNameResolverTest
Result:
More current domain names.
Motivation:
SSL.setState() has gone from OpenSSL 1.1. Calling it is, and probably
always has been, incorrect. Doing renegotiation in this manner is
potentially insecure. There have been at least two insecure
renegotiation vulnerabilities in users of the OpenSSL library.
Renegotiation is not necessary for correct operation of the TLS protocol.
BoringSSL has already eliminated this functionality, and the tests
(now deleted) were not running on BoringSSL.
Modifications:
If the connection setup has completed, always return that
renegotiation is not supported. Previously this was done only if we were
the client.
Remove the tests for this functionality.
Fixes #6320.
Motivation:
DefaultHttpDataFactory uses HttpRequest objects as map keys.
Because of the implementation of "hashCode" and "equals" in DefaultHttpRequest,
if we use normal maps, the HttpDatas of different requests may end up in the same map entry,
causing a cleanup bug.
Consider this example:
- Suppose that request1 is equal to request2, causing their HttpDatas to be stored in a single map entry.
- request1 is cleaned up first, while request2 is still being decoded.
- Consequently, request2's HttpDatas are suddenly gone, causing an NPE or, worse, loss of data.
This bug can be reproduced by starting the HttpUploadServer example,
then running this command:
ab -T 'application/x-www-form-urlencoded' -n 100 -c 5 -p post.txt http://localhost:8080/form
post.txt file content:
a=1&b=2
There will be errors like this:
java.lang.NullPointerException
at io.netty.handler.codec.http.multipart.MemoryAttribute.getValue(MemoryAttribute.java:64)
at io.netty.handler.codec.http.multipart.MixedAttribute.getValue(MixedAttribute.java:243)
at io.netty.example.http.upload.HttpUploadServerHandler.writeHttpData(HttpUploadServerHandler.java:271)
at io.netty.example.http.upload.HttpUploadServerHandler.readHttpDataChunkByChunk(HttpUploadServerHandler.java:230)
at io.netty.example.http.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:193)
at io.netty.example.http.upload.HttpUploadServerHandler.channelRead0(HttpUploadServerHandler.java:66)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1412)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:943)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Modifications:
Keep the identity of requests by using an IdentityHashMap (see the sketch below)
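A minimal sketch of the fix (the field name is illustrative): an IdentityHashMap keys strictly by reference, so equal-but-distinct requests can never collide, and wrapping it with Collections.synchronizedMap replaces the ConcurrentHashMap.
```
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.multipart.HttpData;

// Keys are compared with ==, never equals()/hashCode(), so HttpDatas of
// different request instances always land in different entries.
private final Map<HttpRequest, List<HttpData>> requestFileDeleteMap =
        Collections.synchronizedMap(new IdentityHashMap<HttpRequest, List<HttpData>>());
```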
Result:
DefaultHttpDataFactory is fixed.
The ConcurrentHashMap is replaced with a synchronized map, but I think the performance won't be affected much in real web apps.
Motivation:
At the moment we use netty.io as BAD_HOST with a port that we know times out. This may change in the future, so we should instead use 198.51.100.254, which is reserved "for documentation only".
Modifications:
Replace netty.io with 198.51.100.254 in tests that depend on BAD_HOST.
Result:
More future-proof code.
Motivation:
Http2FrameCodecTest#newOutboundStream has a timeout of 1 second and has been observed to timeout on CI servers.
Modifications:
- Increase the timeout to 5 seconds
Result:
Less false positive test failures on CI servers.
Motivation:
Our tests are often asynchronous and have timeouts to avoid hanging indefinitely. However, sometimes the timeouts may be set too low for the CI servers. It would be helpful to confirm whether the application was busy with GC and whether that was a contributing factor to the test timing out.
Modifications:
- Unit tests should run with -XX:+PrintGCDetails by default
Result:
More visibility into GC behavior in unit tests.
Motivation:
FastThreadLocal#set calls isIndexedVariableSet to determine whether we need to register with the cleaner, but the set(InternalThreadLocalMap, V) method also does this check internally, so we can share code and do the check only once.
Modifications:
- Extract the code from set(InternalThreadLocalMap, V) so it can be called externally to determine if a new item was created (see the sketch below)
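A sketch of the extracted helper, assuming InternalThreadLocalMap.setIndexedVariable(...) reports whether a new slot was populated (the helper name is a guess at the shape of the change):
```
// Sets the value and reports whether a brand-new indexed variable was
// created, so the caller can decide exactly once whether to register
// cleanup for this thread.
private boolean setKnownNotUnset(InternalThreadLocalMap threadLocalMap, V value) {
    if (threadLocalMap.setIndexedVariable(index, value)) {
        addToVariablesToRemove(threadLocalMap, this);
        return true;
    }
    return false;
}
```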
Result:
Less code duplication in FastThreadLocal#set.
Motivation:
HpackDecoder#addHeader takes in the streamId as a parameter but no longer uses it.
Modifications:
- Remove the streamId parameter from HpackDecoder#addHeader
Result:
Fewer unused parameters in HpackDecoder.
Motivation:
e329ca1 introduced the use of ObjectCleaner in FastThreadLocal, but we missed registering our cleaner task when only FastThreadLocal.set was called.
Modifications:
- Use ObjectCleaner also when FastThreadLocal.set is used.
- Add test case.
Result:
ObjectCleaner is always used.
Motivation:
According to the HTTP/2 Spec:
SETTINGS_MAX_HEADER_LIST_SIZE (0x6): This advisory setting informs a
peer of the maximum size of header list that the sender is
prepared to accept, in octets. The value is based on the
uncompressed size of header fields, including the length of the
name and value in octets plus an overhead of 32 octets for each
header field.
We were accounting for the 32 octets when encoding in HpackEncoder,
but not when decoding in HpackDecoder.
Modifications:
- Add 32 octets to the header list length for each entry when decoding
with HpackDecoder (see the sketch below).
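A minimal sketch of the accounting rule from RFC 7541, section 4.1, as it now applies on the decode path too (the names here are illustrative):
```
static final long HEADER_ENTRY_OVERHEAD = 32;

// The size of an entry is the sum of its name's length in octets, its
// value's length in octets, and 32 octets of per-entry overhead.
static long headerEntrySize(CharSequence name, CharSequence value) {
    return name.length() + value.length() + HEADER_ENTRY_OVERHEAD;
}
```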
Result:
- We account for the 32-octet overhead per header entry in HpackDecoder
Motivation:
Allow the compiler to pre-compute constant expressions where it can.
A similar fix was made in OpenJDK: [1].
Modifications:
- Use parentheses so constant subexpressions can be folded (see the sketch below).
- Simplify static initialization of `BYTE2HEX_*` arrays in `StringUtil`.
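An illustration of the effect (the example values are hypothetical): Java evaluates left to right, so without parentheses nothing can be folded, while with them the constant subexpression collapses to a single literal at compile time.
```
static long toMicros(long seconds) {
    // seconds * 1000 * 1000   -> two multiplications at runtime
    // seconds * (1000 * 1000) -> folded to seconds * 1000000, one multiplication
    return seconds * (1000 * 1000);
}
```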
Result:
Less bytecode, possibly faster calculations at runtime.
[1] https://bugs.openjdk.java.net/browse/JDK-4477961
Motivation:
There is no guarantee that FastThreadLocal.onRemoval(...) is called if the FastThreadLocal is used by non-FastThreadLocalThreads. This can lead to all sorts of problems, for example memory leaks because direct memory is not correctly cleaned up.
Besides this, we use ThreadDeathWatcher to check if we need to release buffers back to the pool when thread-local caches are collected. ThreadDeathWatcher needs to "wake up" every second to check if the registered Threads are still alive. If we can ensure FastThreadLocal.onRemoval(...) is called, we do not need this anymore.
Modifications:
- Introduce ObjectCleaner and use it to ensure FastThreadLocal.onRemoval(...) is always called when a Thread is collected (see the usage sketch after this list).
- Deprecate ThreadDeathWatcher.
- Add unit tests.
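A usage sketch, assuming a register(Object, Runnable) style API on the new ObjectCleaner (the cleanup helper named below is hypothetical): the task runs once the thread becomes unreachable, replacing ThreadDeathWatcher's once-per-second liveness polling.
```
import io.netty.util.internal.ObjectCleaner;

ObjectCleaner.register(currentThread, new Runnable() {
    @Override
    public void run() {
        // the equivalent of FastThreadLocal.onRemoval(...) for this thread;
        // freeThreadLocalResources() is a placeholder name
        freeThreadLocalResources();
    }
});
```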
Result:
A consistent way of cleaning up FastThreadLocals when a Thread is collected.
Motivation:
In our replace(...) methods we always used validation for the newly created headers, while the original headers may not have used validation at all.
Modifications:
- Only use validation if the original headers used validation as well.
- Ensure we create a copy of the headers in replace(...).
Result:
Fixes [#5226]
Motivation:
Currently the remote flow controller limits the maximum amount of pending data to Integer.MAX_VALUE. The overflow handling is also not very graceful: it may lead to infinite loops or otherwise to no progress being made.
Modifications:
- StreamByteDistributor and RemoteFlowController should support pending bytes of type long.
Result:
Fixes https://github.com/netty/netty/issues/4283
Motivation:
As shown in issues, it is sometimes hard to understand why a leak was reported when the user just calls EmbeddedChannel.readInbound() / EmbeddedChannel.readOutbound() and drops the message on the floor.
Modifications:
Add a hint before handing the message over to the user and transferring ownership (see the sketch below).
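A sketch of the hint, assuming the existing ReferenceCountUtil.touch(Object, Object) API (the hint wording and helper shape are illustrative): the leak record is annotated right before ownership moves to the caller.
```
import java.util.Queue;
import io.netty.util.ReferenceCountUtil;

@SuppressWarnings("unchecked")
private static <T> T poll(Queue<Object> queue) {
    Object msg = queue.poll();
    if (msg != null) {
        ReferenceCountUtil.touch(msg,
                "Caller of readInbound()/readOutbound() will handle the message from this point");
    }
    return (T) msg;
}
```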
Result:
Easier debugging of leaks caused by EmbeddedChannel.read*().
Motivation:
The JMH documentation suggests using Blackholes to avoid dead-code elimination, hence it would be better to follow this best practice.
Modifications:
Each benchmark method now returns the ByteBuf/ByteBuffer, preventing the JVM from performing any dead-code elimination (see the example below).
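A minimal example of the pattern (the benchmark body is hypothetical): returning the buffer hands it to JMH's implicit Blackhole, so the JIT cannot treat the write as dead code and remove it.
```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class SetLongBenchmark {
    private final ByteBuf buffer = Unpooled.directBuffer(8);

    @Benchmark
    public ByteBuf setLong() {
        // the returned value is consumed by JMH, keeping the write alive
        return buffer.setLong(0, 1L);
    }
}
```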
Result:
The results are more reliable and comparable to those of other ByteBuf benchmarks (e.g. HeapByteBufBenchmark).
Motivation:
HTTP/2 counts header sizes somewhat inconsistently. Sometimes, headers
that are substantially smaller than the maximum header list size will be
measured as exceeding it.
Modifications:
Keep better track of the nameLength of a given name, so that we don't
accidentally end up reusing a previously computed nameLength.
Result:
More consistent measurement of header list size.
Fixes #7511.
Motivation:
Avoid unnecessary array allocation when calling the varargs methods of the DefaultChannelPipeline class.
Modifications:
Added addLast and addFirst overloads that take a single handler instead of varargs (see the sketch below).
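A sketch of the single-handler overloads (the exact delegation inside DefaultChannelPipeline is assumed): the common one-handler call no longer allocates a varargs array.
```
@Override
public final ChannelPipeline addLast(ChannelHandler handler) {
    return addLast(null, null, handler); // (group, name, handler)
}

@Override
public final ChannelPipeline addFirst(ChannelHandler handler) {
    return addFirst(null, null, handler);
}
```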
Result:
No array allocation for a simple call like pipeline.addLast(new Handler());
Motivation:
H2Spec is a conformance testing tool for HTTP/2 implementations.
To help us fix failing tests and avoid future regressions, we
should run h2spec as part of the build.
Modifications:
- Add testsuite-http2 module to the project
Result:
- Run h2spec as part of the build
- 22 tests are currently ignored; we should remove the ignores as we fix them
Motivation:
We should remove the WeakOrderedQueue from the WeakHashMap directly if possible, and only depend on the semantics of the WeakHashMap if there is no other way to clean it up.
Modifications:
Override onRemoval(...) to remove the WeakOrderedQueue if possible.
Result:
Less overhead and quicker collection of WeakOrderedQueue for some cases.
Motivation:
There is currently no way to enforce the position of a handler in a ChannelPipeline. Assume you wanted to write something like a custom Channel type that acts as a proxy between two other Channels:
ProxyChannel(Channel client, Channel server) {
client calls write(msg) -> server.write(msg)
client calls flush() -> server.flush()
server calls fireChannelRead(msg) -> client.write(msg)
server calls fireChannelReadComplete() -> client.flush()
}
In order to make this work reliably, one needs to be able to scoop up the various events at the head and tail of the pipeline. The head side of the pipeline is covered by Unsafe, and it's also relatively safe to count on the user not to use the addFirst() method to manipulate the pipeline. The tail side is always at risk of getting broken because addLast() is the go-to method for adding handlers.
Modifications:
Adding a few extra methods to DefaultChannelPipeline that expose some of the events that reach the pipeline's TailContext (see the sketch below).
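A sketch of how the new hooks could be used (the hook names are assumed): a custom pipeline that forwards whatever reaches its tail to a peer Channel, as in the ProxyChannel example above.
```
import io.netty.channel.Channel;
import io.netty.channel.DefaultChannelPipeline;

final class ProxyChannelPipeline extends DefaultChannelPipeline {
    private final Channel peer;

    ProxyChannelPipeline(Channel channel, Channel peer) {
        super(channel);
        this.peer = peer;
    }

    @Override
    protected void onUnhandledInboundMessage(Object msg) {
        peer.write(msg); // forward instead of discarding at the tail
    }

    @Override
    protected void onUnhandledInboundChannelReadComplete() {
        peer.flush();
    }
}
```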
Result:
Fixes #7484
Motivation:
af2f343648 introduced a test case which was flaky due to multiple problems:
- we called writeAndFlush(...) in channelRead(...) and assumed it would only be called once. This is true most of the time, but it may be called multiple times if the data is fragmented.
- we didn't guard against the possibility that channelRead(...) is called with an empty buffer
Modifications:
- Call writeAndFlush(...) in channelActive(...) so we are sure it is only called once, and close the channel once we wrote the data
- Only compare the data after we received a close, so we are sure there isn't anything extra received
- Check for exceptions and, if we caught one, fail the test.
Result:
No more flaky test, and easier debugging of issues that occur because of a caught exception.
Motivation:
FileDescriptor#writev calls JNI code, and that JNI code dereferences a NULL pointer which crashes the application. This occurs when writing a single CompositeByteBuf object with more than one component.
Modifications:
- Initialize the iovec iterator properly to avoid the core dump
- Fix the array length calculation if we aren't able to fit all the ByteBuffer objects in the iovec array
Result:
No more core dump.
Motivation:
When doStartThread throws an exception, e.g. due to the actual executor being depleted of threads and throwing in its rejected execution handler, the SingleThreadEventExecutor (STEE) ends up in the started state anyway. If we try to execute another task in this executor, it will be queued, but the thread won't be started anymore and the task will linger forever.
Modifications:
- Ensure we do not update the internal state if the startThread() method throws.
- Add a test case
Result:
Fixes [#7483]
Motivation:
In our Recycler implementation we store a reference to the current Thread in the Stack, which is itself stored in a FastThreadLocal. The Stack is also referenced by each DefaultHandle. A problem can arise if a user stores a reference to an object that holds a reference to a DefaultHandle somewhere and either never removes that reference or removes it very late. In this case the Thread itself cannot be collected, as it is still referenced by the Stack, which is referenced by the DefaultHandle.
Modifications:
- Use a WeakReference to store the reference to the Thread in the Stack (see the sketch after this list)
- Add a test case
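A sketch of the shape of the change (the class is heavily simplified): the Stack holds its owning Thread only weakly, so a long-lived DefaultHandle can no longer pin the Thread in memory.
```
import java.lang.ref.WeakReference;

final class Stack<T> {
    final WeakReference<Thread> threadRef;

    Stack(Thread thread) {
        threadRef = new WeakReference<Thread>(thread);
    }

    // true once the owning thread has been garbage collected
    boolean ownerCollected() {
        return threadRef.get() == null;
    }
}
```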
Result:
Ensure a Thread can be collected in a timely manner in all cases even if it used the Recycler.
Motivation:
We used subList in CompositeByteBuf to remove ranges of elements from the internal storage. Besides this, we also used a foreach loop in a few cases, which creates an Iterator.
Modifications:
- Use our own subclass of ArrayList which exposes removeRange(...). This allows removing a range of elements without an extra allocation (see the sketch after this list).
- Use an old-style for loop to iterate over the elements to reduce object allocations.
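A sketch of the helper (the names are assumed): ArrayList already implements bulk range removal, but only exposes it as a protected method, so a trivial subclass makes it callable without allocating a subList view.
```
import java.util.ArrayList;

final class ComponentList<T> extends ArrayList<T> {
    // widen visibility: remove [fromIndex, toIndex) in one bulk operation
    @Override
    public void removeRange(int fromIndex, int toIndex) {
        super.removeRange(fromIndex, toIndex);
    }
}
```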
Result:
Less allocations.
Motivation:
ThreadDeathWatcher and GlobalEventExecutor may create and start a new thread from various other threads, and so the new thread inherits the caller's context classloader. We need to ensure we do not inherit it, to allow the classloader to be recycled.
Modifications:
Use Thread.setContextClassLoader(null) to ensure we do not hold a strong reference to the classloader and so do not leak it (see the sketch below).
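A sketch of the fix (the surrounding factory code is assumed): the inherited context classloader is cleared on the freshly created thread, inside doPrivileged in case a SecurityManager is installed.
```
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.concurrent.ThreadFactory;

static Thread newThreadWithoutClassLoader(ThreadFactory factory, Runnable task) {
    final Thread t = factory.newThread(task);
    AccessController.doPrivileged(new PrivilegedAction<Void>() {
        @Override
        public Void run() {
            // drop the inherited classloader so it can be collected
            t.setContextClassLoader(null);
            return null;
        }
    });
    return t;
}
```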
Result:
Fixes [#7290].
Motivation:
We tried to call `select` after we closed the channel (and so removed all the handlers from the pipeline) when we detected a non-SSL record. This would cause an exception like this:
```
Caused by: java.util.NoSuchElementException: io.netty.handler.ssl.SniHandler
at io.netty.channel.DefaultChannelPipeline.getContextOrDie(DefaultChannelPipeline.java:1098)
at io.netty.channel.DefaultChannelPipeline.replace(DefaultChannelPipeline.java:506)
at io.netty.handler.ssl.SniHandler.replaceHandler(SniHandler.java:133)
at io.netty.handler.ssl.SniHandler.onLookupComplete(SniHandler.java:113)
at io.netty.handler.ssl.AbstractSniHandler.select(AbstractSniHandler.java:225)
at io.netty.handler.ssl.AbstractSniHandler.decode(AbstractSniHandler.java:218)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
... 40 more
```
Modifications:
- Ensure we rethrow the NotSslRecordException when detecting it (and closing the channel). This also ensures we do not call `select(...)`
- Catch only `Exception` instead of `Throwable`
- Add a test case.
Result:
Correctly handle the case of a non-SSL record.
* FIX: force a read operation for peer instead of self
Motivation:
When A is in `writeInProgress` and calls close on itself, A should
call `finishPeerRead` for B (A's peer).
Modifications:
Call `finishPeerRead` with the peer in `LocalChannel#doClose`
Result:
Clearer code logic.
* FIX: preserves order of close after write in same event loop
Motivation:
If the client and the server (the client's peer channel) are in the same
event loop and the client writes data to the server in `channelActive`,
the server receives the data and writes it back. The client's read can't
be triggered because the client's `channelActive` is not finished at this
point and its `readInProgress` is false. Then the server closes itself,
which also closes the client's channel, and the client has no chance to
receive the data.
Modifications:
1. Add a test case to demonstrate the problem.
2. When we `doClose` the peer, always call
`peer.eventLoop().execute()`; `registerInProgress` is not needed anymore.
3. Remove the test case
`testClosePeerInWritePromiseCompleteSameEventLoopPreservesOrder`. This
test case can't pass because of this commit. IMHO, that is OK,
because it is reasonable that the client flushes the data to the socket
and the server then closes the channel without having received the data.
4. For the mismatch test in SniClientTest, the client should receive the server's alert before being closed (caused by the server's close)
Result:
The problem is gone.
H2C upgrades should be ineligible for flow control
Motivation:
When the h2c upgrade request is too big, the Http2FrameCodec complains
that it's too big for flow-control reasons, even though it's ineligible
for flow control.
Modifications:
Specially mark upgrade streams and make Http2FrameCodec aware that it
should not apply flow control to those streams.
Result:
Servers won't barf when they receive an upgrade request with a fat
payload.
[Fixes #7280]
Motivation:
The writeSpinCount currently loops over the same gathering write, file
write, or other write operation multiple times, and will continue
writing until there is nothing left or the OS doesn't accept any data
for that specific write. However, if the OS keeps accepting writes there
is no way to limit how much time we spend on a specific socket. This can
lead to unfair consumption of resources dedicated to a single socket.
We currently don't limit the number of bytes we attempt to write per
gathering write. If there are many more bytes pending relative to the
SO_SNDBUF size, we will end up building iov arrays with more elements
than can be written, which results in extra iteration, conditionals,
and bookkeeping.
Modifications:
- writeSpinCount should limit the number of system calls we make to
write data, instead of applying to individual write operations (see
the sketch after this list)
- IovArray should support a maximum number of bytes
- IovArray should support composite buffers of greater than size 1024
- We should auto-scale the amount of data that we attempt to write per
gathering write operation relative to SO_SNDBUF and how much data is
successfully written
- The non-unsafe path should also support a maximum number of bytes,
and respect the IOV_MAX limit
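A simplified sketch of the revised write loop (the helpers are Netty-internal and their semantics are abbreviated here): writeSpinCount now bounds write system calls, regardless of how the pending data is split across buffers.
```
int spinCount = config().getWriteSpinCount();
do {
    Object msg = in.current();
    if (msg == null) {
        clearOpWrite();      // everything flushed; stop watching OP_WRITE
        return;
    }
    // assumed to return the number of system calls consumed, or a large
    // value when the socket buffer is full so the loop exits immediately
    spinCount -= doWriteInternal(in, msg);
} while (spinCount > 0);
incompleteWrite(spinCount < 0);
```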
Result:
Write resource consumption can be bounded and gathering writes have
a limit relative to the amount of data which can actually be accepted
by the socket.
Motivation:
We used NetUtil.isIpV4StackPreferred() when loading JNI code, which tries to load NetworkInterface in its static initializer. Unfortunately, a lock on the NetworkInterface class initialization may already be held somewhere else, which may cause a classloader deadlock.
Modifications:
Add a new Socket.initialize() method that will be called when initializing the library, and pass everything needed down to the JNI level so we do not need to call back into Java.
Result:
Fixes [#7458].
Motivation:
At the moment it's a bit "hacky" to retrieve the hostname that was used during SNI, as you need to hold a reference to the SniHandler and then call hostname() once the selection is done. It would be better to fire an event to let the user know we did the selection.
Modifications:
Add a SniCompletionEvent that can be used to get the hostname that was included in the SNI extension and used to do the selection (see the example below).
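A consumption example, assuming the event exposes the selected name via a hostname() accessor: a handler placed after the SniHandler can now observe the selection without holding a reference to the SniHandler itself.
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.ssl.SniCompletionEvent;

pipeline.addLast(new ChannelInboundHandlerAdapter() {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (evt instanceof SniCompletionEvent) {
            System.out.println("SNI selected hostname: "
                    + ((SniCompletionEvent) evt).hostname());
        }
        ctx.fireUserEventTriggered(evt);
    }
});
```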
Result:
Easier usage of SNI.
Motivation:
We only want to log in the particular case when debug logging is enabled, so we do not need to try to match the message when it is not.
Modifications:
Guard with logger.isDebugEnabled() (see the sketch below)
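A minimal sketch of the guard (the message-matching logic shown is hypothetical): the string inspection only runs when debug output can actually be emitted.
```
if (logger.isDebugEnabled() && message.contains("expected error")) {
    logger.debug("Swallowing an expected exception", cause);
}
```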
Result:
Less overhead when debug logging is not enabled.
Motivation:
AbstractChannel attempts to "filter" messages which are written [1]. A goal of this process is to copy from heap to direct memory if necessary. However, implementations of this method [2][3] may translate a buffer with 0 readable bytes to EMPTY_BUFFER. This may mask a user error where an empty buffer is written but already released.
Modifications:
Replace safeRelease(...) with release(...) to ensure we propagate reference count issues.
Result:
Fixes [#7383]
Motivation:
If large amounts of data are being transferred, it is difficult to correlate the amount we attempt to read with the maximum amount that the OS will actually buffer and deliver to the application. For example, some OSes may dynamically update the SO_RCVBUF size or otherwise dynamically adjust how much data is delivered to the application. In these circumstances it can reduce latency to just call read() on the socket another time to see if there is really any data remaining, instead of giving up the maxMessagesPerRead quantum and going back to the selector to read later.
Modifications:
- Add DefaultMaxMessagesRecvByteBufAllocator#respectMaybeMoreData, which provides a way to ignore the maybeMoreData function, which may not account for the data currently pending and, even if it does, may be racy.
Result:
Option to always use the full maxMessagesPerRead quantum before going back to the selector.
Motivation:
SslHandler will do aggregation of writes by default in an attempt to improve goodput and reduce the number of discrete buffers which must be accumulated. However if aggregation is not possible then a CompositeByteBuf is used to accumulate multiple buffers. Using a CompositeByteBuf doesn't provide any of the benefits of better goodput and in the case of small + large writes (e.g. http/2 frame header + data) this can reduce the amount of data that can be passed to writev by about half. This has the impact of increasing latency as well as reducing goodput.
Modifications:
- SslHandler should prefer copying instead of using a CompositeByteBuf
Result:
Better goodput (and potentially improved latency) at the cost of copy operations.
Motivation:
AdaptiveRecvByteBufAllocator currently adjusts the ByteBuf allocation size guess when readComplete is called. However, the default configuration for the number of reads before readComplete is called is 16. This means that there will be 16 reads done before any adjustment happens. If there is a large amount of data pending, AdaptiveRecvByteBufAllocator will be slow to adjust the allocation size guess. In addition, only updating the guess in readComplete means that we must go back to the selector and wait to be woken up again when data is ready to read. Going back to the selector is an expensive operation and can add significant latency if there is a large amount of data pending to read.
Modifications:
- AdaptiveRecvByteBufAllocator should check on each read if a step up is necessary. The step down process is left unchanged and can be more gradual at the cost of potentially over allocating.
Result:
AdaptiveRecvByteBufAllocator increases the guess size during the read loop to reduce latency when large amounts of data are being read.
Motivation:
Http2ConnectionHandler uses ctx.fireUserEventTriggered to propagate the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent through the pipeline. This propagates the event to the next inbound handler in the pipeline. If the user extends Http2ConnectionHandler, the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent may be missed and initialization dependent upon this event will not run.
Modifications:
- Http2ConnectionHandler should use userEventTriggered instead of ctx.fireUserEventTriggered
Result:
Classes that extend Http2ConnectionHandler will see the Http2ConnectionPrefaceAndSettingsFrameWrittenEvent user event.
Motivation:
When the ECS source prefix length is not a multiple of 8, the last byte of
the address inside the ECS OPT record is not padded properly.
Modifications:
DefaultDnsRecordEncoder.padWithZeros(...) was modified to add the zero padding
at the least significant bits (see the sketch below).
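A sketch of the corrected masking (the method shape is assumed): the most-significant prefix bits of the final address byte are kept, and the least-significant remainder is zeroed.
```
private static byte padWithZeros(byte b, int prefixBits) {
    switch (prefixBits) {
    case 0:
        return 0;       // no bits of this byte belong to the prefix
    case 8:
        return b;       // the whole byte belongs to the prefix
    default:
        // keep the high-order prefixBits bits, zero the rest
        return (byte) (b & (0xFF << (8 - prefixBits)));
    }
}
```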
Result:
ECS encoding bug fixed.