Motivation:
Currently the ByteBuf created as a result of retained[Slice|Duplicate] maintains its own reference count, and when this reference count is depleted it will release the ByteBuf returned from unwrap(). The unwrap() buffer is designed to be the 'root parent' and will skip all intermediate layers of buffers. If the intermediate layers of buffers contain a retained[Slice|Duplicate] then these reference counts will be ignored during deallocation. This may lead to deallocating the 'root parent' before all derived pooled buffers are actually released. The same issue holds if a retained[Slice|Duplicate] is in the hierarchy and a 'regular' slice() or duplicate() buffer is created.
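A minimal sketch of the kind of hierarchy involved (allocator and variable names are illustrative):

```java
ByteBuf root = PooledByteBufAllocator.DEFAULT.buffer(16);
ByteBuf retained = root.retainedSlice(); // keeps its own reference count
ByteBuf derived = retained.slice();      // no own count; must delegate somewhere

// Before this fix, reference count operations on 'derived' (and the final
// release of 'retained') acted on unwrap() == 'root' directly, skipping the
// intermediate 'retained' count, so 'root' could be deallocated while a
// derived buffer was still in use. After the fix they act on the immediate
// parent ('retained') instead.
```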
Modifications:
- AbstractPooledDerivedByteBuf must maintain a reference to the direct parent (the buffer which retained[Slice|Duplicate] was called on) and release on this buffer instead of the 'root parent' returned by unwrap()
- slice() and duplicate() buffers created from AbstractPooledDerivedByteBuf must also delegate reference count operations to their immediate parent (or first ancestor which maintains an independent reference count).
Result:
Fixes https://github.com/netty/netty/issues/5999
Motivation:
2c78902ebc ensured buffers were released in the general case but didn't clean up an extra release in LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData, which led to a double release.
Modifications:
LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData should not explicitly release the buffer because decompress will release it
Result:
No more reference count exceptions or failed tests.
Motivation:
The SniHandlerTest.testServerNameParsing test failed when SslProvider.JDK was used because the JDK SSLEngineImpl does not send an alert.
Modifications:
Ensure tests pass with JDK and OPENSSL ssl implementations.
Result:
SniHandlerTest will run with all SslProvider implementations and not fail when SslProvider.JDK is used.
Motivation:
c1932a8537 made the assumption that LzmaInputStream, which wraps a ByteBufInputStream, would delegate the close operation to the wrapped stream. This assumption is not true and thus we still had a leak. An issue has been logged with our LZMA dependency: https://github.com/jponge/lzma-java/issues/14.
Modifications:
- Force a close on the wrapped stream
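A sketch of the workaround; the LzmaInputStream constructor shape is assumed from the lzma-java library:

```java
// Keep a handle on the wrapped stream so it can be closed explicitly,
// since LzmaInputStream does not propagate close() to it.
ByteBufInputStream wrapped = new ByteBufInputStream(in);
InputStream lzma = new LzmaInputStream(wrapped, decoder);
try {
    // ... read decompressed bytes from 'lzma' ...
} finally {
    lzma.close();
    wrapped.close(); // force-close the wrapped stream to release the ByteBuf
}
```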
Result:
No more leak.
Motivation:
We need to ensure we only consume as much data as can maximally be put into one SSL record, so that we do not produce a BUFFER_OVERFLOW when calling wrap(...).
Modifications:
- Limit the amount of data that we consume based on the maximum plaintext size that can be put into one SSL record
- Add a test case to verify the fix
- Tighten up test cases to ensure the amount of produced and consumed data in SSLEngineResult matches the buffers; if not, the tests will now fail.
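A minimal sketch of the consumption cap, assuming the common TLS limit of 16384 plaintext bytes per record (RFC 5246, section 6.2.1); names are illustrative:

```java
// Never feed wrap(...) more plaintext than fits into a single SSL record.
final int maxPlaintextLength = 16 * 1024; // 2^14 bytes per TLS record
int bytesToConsume = Math.min(src.remaining(), maxPlaintextLength);
```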
Result:
Correct and conformant behavior of OpenSslEngine.wrap(...) and better test coverage of handshaking in general.
Motivation:
If a stream is not able to send any data (the flow control window for the stream is exhausted) but has descendants that can send data, then WeightedFairQueueByteDistributor may incorrectly modify the pseudo time and also double-add the associated state to the parent's priority queue. The pseudo time should only be modified if a node is moved in the priority tree, and not if there happen to be no active streams in its descendant tree and a descendant is moved (e.g. removed from the tree because it wrote all data and the last data frame was EOS). Also the state objects for WeightedFairQueueByteDistributor should only appear once in any queue. If this condition is violated, the pseudo time accounting would be biased and assumptions in WeightedFairQueueByteDistributor would be invalidated.
Modifications:
- WeightedFairQueueByteDistributor#isActiveCountChangeForTree should not allow re-adding to the priority queue if we are currently processing a node in the distribution algorithm. The distribution algorithm will re-evaluate if the node should be re-added on the tail end of the recursion.
Result:
Fixes https://github.com/netty/netty/issues/5980
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf, and is controlled by whatever accepts the JDK stream. However, Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. Existing classes which only accept the InputStream interface may have no way to communicate when they are done with the stream other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as ownership of the underlying ByteBuf resource may be transferred to the Java stream.
Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf
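A sketch of the intended usage, assuming a constructor flag that transfers ownership:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;
import io.netty.buffer.Unpooled;
import java.io.InputStream;

ByteBuf buf = Unpooled.copiedBuffer(new byte[] { 1, 2, 3 });
InputStream in = new ByteBufInputStream(buf, true); // true: release 'buf' on close()
try {
    // hand 'in' to code that only understands InputStream
} finally {
    in.close(); // also releases 'buf' because ownership was transferred
}
```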
Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
Motivation:
The unit tests for the compression encoders/decoders may write buffers to an EmbeddedChannel but then may not release the buffers or close the channel after the test. This may result in buffer leaks.
Modifications:
- Call channel.finishAndReleaseAll() after each test
Result:
Fixes https://github.com/netty/netty/issues/6007
Motivation:
HashedWheelTimerTest has busy-wait and sleep statements which are not necessary. We also depend upon com.google.common.base.Supplier, which isn't necessary.
Modifications:
- Remove busy-wait loops and timeouts where possible
Result:
HashedWheelTimerTest is more explicit in verifying conditions and less reliant on wait times.
Motivation:
If the wsURL contains an encoded query, it will be decoded when generating the raw path. For example if the wsURL is http://test.org/path?a=1%3A5, the returned raw path would be /path?a=1:5
Modifications:
Use wsURL.getRawQuery() rather than wsURL.getQuery()
Result:
rawPath will now return /path?a=1%3A5
Motivation:
Some common values are missing from the HttpHeader values constants.
Modifications:
- Add constants for "application/json" Content-Type
- Add constants for "gzip,deflate" Content-Encoding
Result:
More HttpHeader values constants available, both in
`HttpHeaders.Values` and `HttpHeaderValues`.
Motivation:
OpenSslEngine.wrap(...) and OpenSslEngine.unwrap(...) may consume bytes even if a BUFFER_OVERFLOW / BUFFER_UNDERFLOW is detected. This is not correct, as they should only consume bytes if they can process them without storing data between unwrap(...) / wrap(...) calls. Besides this, they should only process one record at a time.
Modifications:
- Correctly detect BUFFER_OVERFLOW / BUFFER_UNDERFLOW and only consume bytes if neither is detected.
- Only process one record per call.
Result:
OpenSslEngine behaves like stated in the javadocs of SSLEngine.
Motivation:
CompatibleObjectEncoder uses a Channel Attribute to cache an ObjectOutputStream which is backed by a ByteBuf that may be released after an object is encoded and the underlying buffer is written to the channel. On subsequent encode operations the cached ObjectOutputStream will be invalid and lead to a reference count exception.
Modifications:
- CompatibleObjectEncoder should not cache an ObjectOutputStream.
Result:
CompatibleObjectEncoder doesn't use a cached object backed by a released ByteBuf.
Motivation:
We should not use the InternalThreadLocalMap where access may happen from outside the EventLoop, as this may create a lot of memory usage while never being reused.
Modifications:
Do not use InternalThreadLocalMap in places where the code path will likely be executed from outside the EventLoop.
Result:
Less memory bloat.
Motivation:
If the rate at which new timeouts are created is very high and the created timeouts are not cancelled, the JVM can crash because it runs out of heap space. There should be a guard in the implementation to prevent this.
Modifications:
The constructor of HashedWheelTimer now takes an optional max pending timeouts parameter beyond which it will reject new timeouts by throwing RejectedExecutionException.
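A sketch of the new guard in use; the exact constructor shape is assumed from the description above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import io.netty.util.HashedWheelTimer;

// Last argument: maximum number of pending (not yet expired or cancelled) timeouts.
HashedWheelTimer timer = new HashedWheelTimer(
        Executors.defaultThreadFactory(), 100, TimeUnit.MILLISECONDS,
        512, true, 10000);

// Throws RejectedExecutionException once 10000 timeouts are pending.
timer.newTimeout(timeout -> { /* work */ }, 5, TimeUnit.SECONDS);
```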
Result:
After this change, if the max pending timeouts parameter is passed as a constructor argument to HashedWheelTimer, it keeps track of pending timeouts that aren't yet expired or cancelled. When a new timeout is created, it checks the current pending timeout count and, if it is equal to or greater than the provided maximum, throws RejectedExecutionException.
Motivation:
Currently it is not possible to have an HTTP/2 server send non-default
initial settings to clients when doing the initial connection handshake.
Modifications:
Add additional constructors to Http2Codec allowing users to specify the
initial settings to send to the client and apply locally
Result:
You can now specify non-default initial settings.
Motivation:
PlatformDependent0 should not be referenced directly when sun.misc.Unsafe is unavailable.
Modifications:
Guard byteArrayBaseOffset with hasUnsafe check.
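A sketch of the guard; the field and method names are partly assumed from PlatformDependent's existing conventions:

```java
// Inside PlatformDependent's static initializer: never touch PlatformDependent0
// unless sun.misc.Unsafe is actually available.
BYTE_ARRAY_BASE_OFFSET = hasUnsafe() ? PlatformDependent0.byteArrayBaseOffset() : -1;
```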
Result:
PlatformDependent can be initialized when sun.misc.Unsafe is unavailable.
Motivation:
The HttpObjectAggregator never appends a 'Connection: close' header to
the response of oversized messages even though in the majority of cases
it's going to close the connection.
Modification:
This PR addresses that by ensuring the requisite header is present when
the connection is going to be closed.
Result:
Gracefully signal that we are about to close the connection.
Motivation:
Currently the ```resolveAll``` method of RoundRobinInetAddressResolver returns results without any rotation or shuffling. As a result, it doesn't enforce round-robin for clients that take the result of ```resolveAll``` and try the addresses one by one until a connection succeeds. This commit implements round-robin in RoundRobinInetAddressResolver#resolveAll. These improvements were inspired by the discussion here: https://github.com/AsyncHttpClient/async-http-client/issues/1285
Modifications:
Rotate the collection returned by the internal ```resolveAll``` call by an index which is incremented on every call to RoundRobinInetAddressResolver#resolveAll.
Random was replaced by an incrementing counter, which makes the code cheaper and guarantees a predictable address order in tests.
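A minimal sketch of index-based rotation (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

final class RotationSketch {
    private final AtomicInteger index = new AtomicInteger();

    // Return 'resolved' rotated by an ever-incrementing offset, so successive
    // callers see the addresses starting from a different element each time.
    <T> List<T> rotateByIndex(List<T> resolved) {
        int size = resolved.size();
        int offset = Math.abs(index.getAndIncrement() % size);
        List<T> rotated = new ArrayList<T>(size);
        for (int i = 0; i < size; i++) {
            rotated.add(resolved.get((offset + i) % size));
        }
        return rotated;
    }
}
```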
Result:
The improved ```RoundRobinInetAddressResolver``` is compatible with clients that use the ```resolveAll``` result.
Motivation:
I had a need to know the user credentials of a connected unix domain socket.
Modifications:
Added a class to encapsulate user credentials (UID, GID, and PID).
Augmented the Socket class to provide the JNI native interface to return this new class.
Augmented the C code to call getsockopt passing SO_PEERCRED (http://man7.org/linux/man-pages/man7/socket.7.html).
Then surfaced the ability to get user credentials in the EpollDomainSocketChannel.
Result:
The EpollDomainSocketChannel now has the following function signature:
public PeerCredentials peerCredentials() throws IOException, allowing a caller to get the UID, GID, and PID of the Linux process
connected to the unix domain socket.
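A usage sketch; the PeerCredentials accessor names are assumed for illustration:

```java
// Given an accepted EpollDomainSocketChannel 'ch':
PeerCredentials creds = ch.peerCredentials();
System.out.printf("pid=%d uid=%d gid=%d%n", creds.pid(), creds.uid(), creds.gid());
```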
Motivation:
To guard against the case that a user will enqueue a lot of empty or small buffers and so raise an OOME, we need to also take the overhead of the ChannelOutboundBuffer / PendingWriteQueue into account when detecting if a Channel is writable or not. This is related to #5856.
Modifications:
When calculating the memory for a message that is enqueued, also add some extra bytes depending on the implementation.
Result:
Better guard against OOME.
Motivation:
ResourceLeakDetector shows two main problems: racy access and heavy lock contention.
Modifications:
This PR fixes this by doing two things:
1. Replace the sampling counter with a ThreadLocalRandom. This has two benefits.
First, it makes the sampling ratio no longer have to be a power of two. Second,
it de-noises the continuous races that fight over this single value. Instead,
this change uses slightly more CPU to decide if it should sample by using TLR
(see the sketch after this list).
2. DefaultResourceLeaks need to be kept alive in order to catch leaks. The means
by which this happens is a single, doubly-linked list. This creates a
large amount of contention when allocating quickly. This is noticeable when
running on a multi core machine.
Instead, this uses a concurrent hash map to keep track of active resources
which has much better contention characteristics.
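A minimal sketch of the ThreadLocalRandom-based sampling decision from item 1; the method name and parameter are illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;

// Decide per-allocation whether to track a resource, with no shared counter
// for concurrent threads to contend on. Samples roughly 1 in 'samplingInterval'.
static boolean shouldSample(int samplingInterval) {
    return ThreadLocalRandom.current().nextInt(samplingInterval) == 0;
}
```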
Result:
Better concurrent hygiene. Running the gRPC QPS benchmark showed RLD taking about
3 CPU seconds for every 1 wall second when running with 12 threads.
There are some minor perks to this as well. DefaultResourceLeak accounting is
moved to a central place which probably has better caching behavior.
Motivation:
Make a small refactoring of the recently merged PR #5867 to make the code more flexible and expose aggressive round robin as a NameResolver too, with proper code reuse.
Modifications:
Round robin is a method of hostname resolution, so the round-robin related code is fully moved to RoundRobinInetAddressResolver, which implements NameResolver<InetAddress>. RoundRobinInetSocketAddressResolver is deleted as a separate class; an instance with the same functionality can be created by calling #asAddressResolver.
Result:
New forced round robin code is exposed not only as an AddressResolver but as a NameResolver too, with more proper reuse of the InetNameResolver and InetSocketAddressResolver classes and their semantics.
Motivation:
We need to ensure we do not add the Transfer-Encoding header if the HttpMessage is EOF terminated.
Modifications:
Only add the Transfer-Encoding header if a Content-Length header is present.
Result:
Correctly handle HttpMessage that is EOF terminated.
Motivation:
The previously generated manifest causes a parse exception when loaded into an Apache Felix OSGI container.
Modifications:
Fix the parameter delimiter and unbalanced quotes in the manifest entry. Suffixed with an asterisk so the bundle is resolved on other architectures as well, even if native libs won't be loaded.
Result:
Bundle will load properly in OSGI containers.
Motivation:
A new version of CentOS was released; we should verify against it when releasing.
Modifications:
Bump up version.
Result:
Releases are verified against the latest CentOS version.
Motivation:
We want to reject the upgrade as quickly as possible, so that we can
support streamed responses.
Modifications:
Reject the upgrade as soon as we inspect the headers if they're wrong,
instead of waiting for the entire response body.
Result:
If a remote server doesn't know how to use the HTTP upgrade and tries to
respond with a streaming response that never ends, the client doesn't
buffer forever, but can instead pass it along. Fixes #5954
Motivation:
It's possible to extend LocalChannel as well as LocalServerChannel, but LocalServerChannel's serve(peer) method is hardcoded to create only instances of LocalChannel.
Modifications:
Add a protected factory method that by default returns new LocalChannel(...), but which users may override to customize the instance, as sketched below.
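A sketch of overriding such a factory method; the method name and the LocalChannel subclass are assumed for illustration:

```java
public class MyLocalServerChannel extends LocalServerChannel {
    @Override // factory method name assumed; creates the child channel for serve(peer)
    protected LocalChannel newLocalChannel(LocalChannel peer) {
        return new MyLocalChannel(this, peer); // hypothetical LocalChannel subclass
    }
}
```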
Result:
It's possible to customize the LocalChannel instance on either end of the virtual connection.
Motivation:
When auto-read is disabled and no reads are issued by a user, ProxyHandler will stall the connection in the proxy handshake phase, waiting for the first response from a server that is never read.
Modifications:
Issue a read if needed when the very first handshake message is sent by ProxyHandler.
Result:
The proxy handshake now succeeds whether auto-read is disabled or enabled. Without the fix, the new test fails on master.
Motivation:
PlatformDependent has a hash code algorithm which utilizes UNSAFE for performance reasons. This hash code algorithm must also be consistent for CharSequence objects that represent a collection of ASCII characters. In order to keep the UNSAFE versions and CharSequence versions consistent, the endianness must be taken into account. However the big endian code was not correct in a few places.
Modifications:
- Correct bugs in PlatformDependent class related to big endian ASCII hash code computation
Result:
Fixes https://github.com/netty/netty/issues/5925
Motivation:
8cf90f0512 switched a duplicate operation to a slice operation. Typically this would be fine, but DNS supports compression (https://www.ietf.org/rfc/rfc1035 4.1.4. Message compression) where the payload contains absolute indexes which refer back to previously referenced content. Using a slice breaks the ability of the indexes in the payload to correctly reference positions in the original payload, and thus decoding may fail.
Modifications:
- Use duplicate instead of slice so DNS message compression and index references are preserved.
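A sketch of why the index base matters here (buffer contents illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

ByteBuf payload = Unpooled.buffer(64);
// duplicate() shares the original buffer's index space, so an absolute DNS
// compression pointer (e.g. offset 12 into the message) still lands on the
// right bytes.
ByteBuf dup = payload.duplicate();
// slice(start, len) re-bases index 0 at 'start', so the same absolute
// pointer now points at the wrong bytes.
ByteBuf sli = payload.slice(12, 32);
```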
Result:
Fixes DefaultDnsRecordDecoder regression
Motivation:
The SETTINGS_MAX_HEADER_LIST_SIZE limit, as enforced by the HPACK Encoder, should be a stream error and not apply to the whole connection.
Modifications:
Made the necessary changes for the exception to be of type StreamException.
Result:
A HEADERS frame exceeding the limit only affects a specific stream.
Motivation:
Commit 908464f161 also introduced a change to guard against re-entrance but failed to correctly handle the debugData and promise.
Modifications:
Release debugData and correctly notify the promise.
Result:
No more buffer leak and promise is always notified.
Motivation:
In some ByteBuf implementations we do not correctly implement getBytes(index, ByteBuffer).
Modifications:
Correct the code to do what is defined in the javadocs and add a test.
Result:
Implementation works as described.
Motivation:
Since Java 7, X509TrustManager implementation is wrapped by a JDK class
called AbstractTrustManagerWrapper, which performs an additional
certificate validation for Socket or SSLEngine-backed connections.
This makes the TrustManager implementations provided by
InsecureTrustManagerFactory and FingerprintTrustManagerFactory not
insecure enough, where their certificate validation fails even when it
should pass.
Modifications:
- Add X509TrustManagerWrapper which adapts an X509TrustManager into an
X509ExtendedTrustManager
- Make SimpleTrustManagerFactory wrap an X509TrustManager with
X509TrustManagerWrapper if the provided TrustManager does not extend
X509ExtendedTrustManager (see the sketch below)
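A minimal sketch of the adapter idea; Netty's actual X509TrustManagerWrapper may differ in detail:

```java
import java.net.Socket;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedTrustManager;
import javax.net.ssl.X509TrustManager;

final class X509TrustManagerWrapperSketch extends X509ExtendedTrustManager {
    private final X509TrustManager delegate;

    X509TrustManagerWrapperSketch(X509TrustManager delegate) {
        this.delegate = delegate;
    }

    // The Socket/SSLEngine overloads defer to the plain checks, bypassing the
    // extra validation the JDK's AbstractTrustManagerWrapper would add.
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType, Socket socket)
            throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType, Socket socket)
            throws CertificateException {
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType, SSLEngine engine)
            throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType, SSLEngine engine)
            throws CertificateException {
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkClientTrusted(chain, authType);
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}
```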
Result:
- InsecureTrustManagerFactory and FingerprintTrustManagerFactory are now
insecure as expected.
- Fixes #5910
Motivation:
8ba5b5f740 removed some ciphers from the default list, and SocketSslEchoTest had one of these ciphers hard coded in the test. The test will fail if the cipher is not supported by default.
Modifications:
SocketSslEchoTest should ensure a cipher is used which will be supported by the peer
Result:
Test result no longer depends upon default cipher list.
Motivation:
Our default cipher list has not been updated in a while. We currently support some older ciphers not commonly in use, and we don't support some newer ciphers which are more commonly used.
Modifications:
- Update the default list of ciphers for JDK and OpenSSL.
Result:
Default cipher list is more likely to connect to peers.
Fixes https://github.com/netty/netty/issues/5859
Motivation:
Some unit tests in SingleThreadEventLoopTest rely upon Thread.sleep for sequencing events between threads. This can be unreliable and result in spurious test failures if thread scheduling does not occur in a fair, predictable manner.
Modifications:
- Reduce the reliance on Thread.sleep in SingleThreadEventLoopTest
Result:
Fixes https://github.com/netty/netty/issues/5851
Motivation:
The local transport is used to communicate in the same JVM so we should use heap buffers.
Modifications:
Use heap buffers by default unless requested otherwise.
Result:
No allocation of direct buffers by default when using the local transport.
Motivation:
Suppose the domain `foo.example.com` resolves to the following IP
addresses `10.0.0.1`, `10.0.0.2`, `10.0.0.3`. Round robin DNS works by
having each client probabilistically get a different ordering of
the set of target IPs, so connections from different clients (across
the world) are split up across each of the addresses. Example: in
a `ChannelPool` that manages connections to `foo.example.com`, it may be
desirable for high QPS applications to spread the requests across all
available network addresses. Currently, Netty's resolver would return
only the first address (`10.0.0.1`) to use. Let's say we are making
dozens of connections. The name would be resolved to a single IP and
all of the connections would be made to `10.0.0.1`. The other two
addresses would not see any connections (they may see some later if new
connections are made and `10.0.0.2` is the first in the list at the
time of a subsequent resolution). In these changes, I add support to
select a random one of the resolved addresses to use on each resolve
call, all while leveraging the existing caching and in-flight request
detection. This way, in my example, the connections would be made to
random selections of the resolved IP addresses.
Modifications:
I added another method `newAddressResolver` to
`DnsAddressResolverGroup` which can be overridden much like
`newNameResolver`. The current functionality which creates
`InetSocketAddressResolver` is still used. I added
`RoundRobinDnsAddressResolverGroup` which extends
`DnsAddressResolverGroup` and overrides the `newAddressResolver` method
to return a subclass of the `InetSocketAddressResolver`. This subclass
is called `RoundRobinInetSocketAddressResolver` and it contains logic
that takes a `resolve` request, does a `resolveAll` under the hood, and
returns a single element at random from the result of the `resolveAll`.
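A usage sketch; the constructor arguments are assumed to mirror `DnsAddressResolverGroup`:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.resolver.dns.DnsServerAddresses;
import io.netty.resolver.dns.RoundRobinDnsAddressResolverGroup;

NioEventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
        .group(group)
        .channel(NioSocketChannel.class)
        // Each resolve() now picks a random element of the resolveAll() result.
        .resolver(new RoundRobinDnsAddressResolverGroup(
                NioDatagramChannel.class, DnsServerAddresses.defaultAddresses()));
```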
Result:
The existing functionality of `DnsAddressResolverGroup` is left
unchanged. All new functionality is in the
`RoundRobinInetSocketAddressResolver` which users will now have the
option to use.
Motivation:
If the user removes the SslHandler while we are still in the processing loop, we will produce an IllegalReferenceCountException. We should stop looping when the handler was removed.
Modifications:
Ensure we stop looping when the handler is removed.
Result:
No more IllegalReferenceCountException.
Motivation:
The weight header with the default value is not set but it should be (rfc7540#5.3.5: "…Pushed streams initially depend on their associated stream … are assigned a default weight of 16").
Modifications:
Add STREAM_WEIGHT header.
Result:
Correctly add headers.
Motivation:
When using java.nio.DatagramChannel we should not close the channel when a SocketException is thrown, as we can still use the channel.
Modifications:
Do not close the Channel when a SocketException is thrown.
Result:
More robust and correct handling of exceptions when using NioDatagramChannel.