Motivation:
The transport-native-epoll code was missing some static keywords, @deprecated annotations and other minor things.
Modifications:
- Add missing @deprecated annotations
- Do not use FQCNs in javadocs
- Add static keyword where possible
- Use final fields when possible
- Remove throws IOException from methods where it is not needed.
Result:
Cleaner code.
Motivation:
DefaultStompFrame.toString() implementations returned a String that contained DefaultFullStompFrame.
Modifications:
Replace DefaultFullStompFrame with DefaultStompFrame.
Result:
Less confusing and more correct return value of toString()
Motivation:
DefaultStompFrame.retain(increment) failed to pass on the increment parameter.
Modifications:
Correctly pass on the increment parameter.
Result:
retain(...) is handled correctly when an increment value is given.
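A minimal sketch of the pattern behind this fix, using a hypothetical DefaultByteBufHolder subclass rather than the actual DefaultStompFrame code: the increment taken by retain(int) has to be forwarded to the wrapped, reference-counted content.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.DefaultByteBufHolder;

// Hypothetical holder, for illustration only.
class MyFrame extends DefaultByteBufHolder {
    MyFrame(ByteBuf content) {
        super(content);
    }

    @Override
    public MyFrame retain(int increment) {
        // Forward the increment; calling retain() here would silently
        // bump the reference count by 1 regardless of the argument.
        super.retain(increment);
        return this;
    }
}
```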
Motivation:
There were some warnings in the resolver-dns code base.
Modifications:
- Fix javadocs
- Use the base class to call the static method.
Result:
Cleaner code.
Motivation:
SSLContext.buildTrustManagerFactory(...) builds a KeyStore to
initialize the TrustManagerFactory from an array of X509Certificates,
assuming that array is a chain and that each certificate will have a
unique Subject Distinguished Name.
However, the collection of certificates used as trust anchors is generally
not a chain (it is an unordered collection), and it is legitimate for it
to contain multiple certificates with the same Subject DN.
The existing code uses the Subject DN as the alias name when filling in
the `KeyStore`, thereby overwriting other certificates with the same
Subject DN in this collection, so some certificates may be discarded.
In addition, the code related to building trust managers can take an array of
X509Certificate instances to use as trust anchors. The variable name is
usually trustCertChain, and the documentation refers to them as a "chain".
However, while it makes sense to talk about a "chain" from a keymanager
point of view, these certificates are just an unordered collection in a
trust manager. (There is no chaining requirement, having the Subject DN
matching its predecessor's Issuer DN.)
This can create confusion for users not familiar with PKI concepts.
Modifications:
SSLContext.buildTrustManagerFactory(...) now uses a distinct alias for each
certificate (simply using a counter, since this name is never used for reference
later). This patch also includes a unit test with CA certificates using the
same Subject DN.
Also renamed trustCertChain to trustCertCollection, and changed the
references to "chain" in the Javadoc.
Result:
Each certificate now gets a unique alias when loaded, so it is
now possible to use multiple certificates with the same Subject DN as
trust anchors.
Hopefully, renaming the parameter should also reduce confusion around PKI
concepts.
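A rough sketch of the aliasing approach (not the exact SslContext code; method and variable names are illustrative): each trust certificate is stored under a counter-based alias, so two CA certificates sharing a Subject DN no longer overwrite each other.

```java
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManagerFactory;

final class TrustStoreBuilder {
    static TrustManagerFactory build(X509Certificate... trustCertCollection)
            throws GeneralSecurityException, IOException {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        int i = 1;
        for (X509Certificate cert : trustCertCollection) {
            // The alias is never looked up again, so a simple counter is
            // enough and avoids collisions on identical Subject DNs.
            ks.setCertificateEntry(Integer.toString(i++), cert);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        return tmf;
    }
}
```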
Motivation:
DefaultCookie constructor performs a name validation that doesn’t match
RFC6265. Moreover, such validation is already performed in strict
encoders and decoders.
Modifications:
Drop DefaultCookie name validation, rely on encoders and decoders.
Result:
No more duplicate, broken validation.
Motivation:
We need to handle the trailing dot in the correct manner when creating DNS questions and responses.
Modifications:
- Add a trailing dot to the hostname, if not already present, when constructing an AbstractDnsRecord (this is what dig does as well).
Result:
Correctly handle trailing dots.
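A minimal sketch of the idea (hypothetical helper, not the actual AbstractDnsRecord code): the record name is made absolute by appending the root-label dot when it is missing.

```java
final class DnsNames {
    // Illustrative only: mirror dig's behaviour and treat names as absolute.
    static String appendTrailingDot(String hostname) {
        return hostname.endsWith(".") ? hostname : hostname + '.';
    }
}
```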
Motivation:
The setBytes method creates a temporary heap buffer when the source buffer is read-only,
but this temporary buffer is not used correctly and may lead to data corruption.
The problem occurs when the target buffer is pooled and the temporary buffer's
arrayOffset() is not zero.
Modifications:
Use the correct arrayOffset when calling PlatformDependent.copyMemory.
A unit test was added to cover this case.
Result:
Setting buffer content works correctly when the target is a pooled buffer and the
source is a read-only ByteBuffer.
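A simplified illustration of the underlying pitfall using plain JDK buffers (assumed helper, not the actual setBytes/PlatformDependent code): when the temporary heap buffer is a slice of a larger pooled array, the backing-array index has to include arrayOffset().

```java
import java.nio.ByteBuffer;

final class HeapCopy {
    // Copies 'length' bytes from a heap ByteBuffer into dst.
    static void copy(ByteBuffer heapSrc, byte[] dst, int dstIndex, int length) {
        // Using heapSrc.position() alone would read from the wrong region
        // whenever arrayOffset() is non-zero (e.g. for pooled/sliced buffers).
        int srcIndex = heapSrc.arrayOffset() + heapSrc.position();
        System.arraycopy(heapSrc.array(), srcIndex, dst, dstIndex, length);
    }
}
```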
Motivation:
We need to ensure we call ctx.flush() before closing the actual channel when a handshake failure takes place. If we fail to do so we may not send all pending data, including SSL alerts, to the remote peer.
Modifications:
Ensure we call ctx.flush() before ctx.close() on a handshake error.
Result:
All pending data (including SSL alerts) is written to the remote peer on a handshake error.
Motivation:
It should be possible to disable the Recycler with -Dio.netty.recycler.maxCapacity=0, but because of a typo this is not the case.
Modifications:
Replace <= with < to make it possible to disable the Recycler.
Result:
Correct behaviour when using -Dio.netty.recycler.maxCapacity=0
Motivation:
Fix a typo in the log message of the static initializer of Recycler.
Modifications:
Fix typo.
Result:
Correctly log system property io.netty.recycler.maxCapacity.
Motivation:
We currently do not support using a KeyManagerFactory with OpenSslClientContext and so should throw an exception if the user tries to do so. This at least avoids surprising and hard to debug problems later.
Modifications:
Throw an exception if a user tries to construct an OpenSslClientContext with a KeyManagerFactory.
Result:
Fail fast if the user tries to use something that is not supported.
Motivation:
We need to ensure we do all checks inside the try / catch block so that native memory allocated in the constructor of the super class is freed in a timely manner.
Modifications:
Move all checks inside of the try block.
Result:
Correctly release native memory (and not depend on the finalizer) when a check in the constructor fails.
Motivation:
We failed to pass the decrement value to the wrapped FullHttpRequest and so did not decrement the reference count correctly.
Modifications:
Correctly pass the decrement value to the wrapped request.
Result:
UpgradeEvent.release(decrement) works as expected.
Motivation:
If an error occurs during a write operation then DefaultHttp2ConnectionEncoder.FlowControlledData will clear the CoalescingBufferQueue which will reset the queue's readable bytes to 0. To recover from an error the DefaultHttp2RemoteFlowController will attempt to return bytes to the flow control window, but since the frame has reset its own size this will lead to invalid flow control accounting.
Modifications:
- DefaultHttp2ConnectionEncoder.FlowControlledData should not reset its size if an error occurs
Result:
No more flow controller errors due to DefaultHttp2ConnectionEncoder.FlowControlledData setting its size to 0 if an error occurs.
Motivation:
HttpServerUpgradeHandler.UpgradeCodec.prepareUpgradeResponse should allow aborting the upgrade and just continuing with plain HTTP. Besides this, we should only pass in the response HttpHeaders, as this is in line with the docs.
Modifications:
- UpgradeCodec.prepareUpgradeResponse now returns a boolean and so allows specifying whether the upgrade should take place.
- Change the param from FullHttpResponse to HttpHeaders to be in line with the javadocs.
Result:
More flexible and correct handling of upgrades.
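The adjusted callback presumably looks roughly like the following sketch (assumed shape based on the description above, not a verbatim copy of the interface): returning false vetoes the upgrade so the connection continues with plain HTTP, and only the response headers are exposed.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaders;

interface UpgradeCodecSketch {
    // Return true to proceed with the upgrade, false to abort it and keep
    // using HTTP; the headers are added to the 101 Switching Protocols response.
    boolean prepareUpgradeResponse(ChannelHandlerContext ctx,
                                   FullHttpRequest upgradeRequest,
                                   HttpHeaders upgradeHeaders);
}
```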
Motivation:
In commit acbca192bd we changed our native operations that call either getsockopt or setsockopt to throw IOExceptions (to be more specific, we throw a ClosedChannelException in some cases). Unfortunately I failed to do the same for getSoError() and to add throws IOException to the native methods.
Modifications:
- Correctly throw IOException from getSoError()
- Add throws IOException to native methods where it was missed.
Result:
Correct declaration of getSoError() and other native methods.
Motivation:
upgradeTo(...) takes the response as a parameter, but the response itself has already been written to the Channel. This gives the user the impression that the response can be changed or acted on, which may no longer be safe once it has been written and released.
Modifications:
Remove the response param from the method.
Result:
Less confusion and safer usage.
Motivation:
A user may use a private key which is encrypted with an empty password. Because of this we should only handle a null password in a special way.
Modifications:
- Correctly handle private keys that are encrypted with an empty password.
- Make the OpenSsl*Context implementations consistent in terms of initialization in the constructor.
Result:
Correctly support private keys that are encrypted with an empty password.
Motivation:
We also need to add synchronization when accessing fields to ensure we see the latest updates.
Modifications:
Add synchronization when reading fields that are written concurrently.
Result:
Ensure correct visibility of updates.
Motivation:
If SO_LINGER is set to 0 the EPOLL transport will send a FIN followed by a RST. This is not consistent with the behavior of the NIO transport. This variation in behavior can cause protocol violations in streaming protocols (e.g. HTTP) where a FIN may be interpreted as a valid end to a data stream, but RST may be treated as the data is corrupted and should be discarded.
https://github.com/netty/netty/issues/4170 claims that NIO always issues a shutdown when close occurs. I could not find any evidence of this in Netty's NIO transport nor in the JDK's SocketChannel.close() implementation.
Modifications:
- AbstractEpollChannel should be consistent with the NIO transport and not force a shutdown on every close
- FileDescriptor to keep state in a consistent manner with the JDK and not allow a shutdown after a close
- Unit tests for NIO and EPOLL to ensure consistent behavior
Result:
EPOLL is capable of sending just a RST to terminate a connection.
Motivation:
See #1811
Modifications:
Add LineEncoder and LineSeparator
Result:
The user can use LineEncoder to write a String with a line separator automatically
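A hedged usage sketch (the pipeline setup is illustrative): the encoder appends the configured separator to every outbound String, so handlers can simply write bare lines.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.string.LineEncoder;
import io.netty.handler.codec.string.LineSeparator;
import io.netty.util.CharsetUtil;

final class LineEncoderSetup {
    static void configure(ChannelPipeline pipeline) {
        // Every String written downstream gets "\n" appended and is encoded as UTF-8.
        pipeline.addLast(new LineEncoder(LineSeparator.UNIX, CharsetUtil.UTF_8));
    }
}
```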
Motivation:
According to the MQTT 3.1.1 Protocol Specification: The Server MUST validate that the reserved flag in the CONNECT Control Packet is set to zero and disconnect the Client if it is not zero. (http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc385349230)
Resolves #4182
Modifications:
Check the CONNECT reserved flag for MQTT 3.1.1. If it's not 0, throw an exception.
Result:
If the CONNECT reserved flag is not zero, a decode failure will be emitted.
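A hedged sketch of the check (the variable name is assumed, and the decoder may use its own exception type): bit 0 of the CONNECT variable-header flags byte is reserved and must be zero in MQTT 3.1.1.

```java
import io.netty.handler.codec.DecoderException;

final class MqttConnectFlags {
    static void validateReservedFlag(int connectFlags) {
        if ((connectFlags & 0x01) != 0) {
            // MQTT 3.1.1, section 3.1.2.3: the reserved flag MUST be zero.
            throw new DecoderException("non-zero reserved flag in CONNECT");
        }
    }
}
```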
Motivation:
Domain name labels must be converted to ASCII and must not be longer than 63 chars.
Modifications:
Correctly convert to ASCII, which will also enforce the 63 char limit.
Result:
Correctly guard against invalid input.
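A hedged sketch of one way to get both properties at once, assuming java.net.IDN is acceptable here: IDN.toASCII performs the ToASCII conversion and rejects over-long labels with an IllegalArgumentException.

```java
import java.net.IDN;

final class DnsLabels {
    // Converts an internationalized name to its ASCII (Punycode) form;
    // labels that violate RFC 3490 (including the length limit) are rejected.
    static String toAscii(String hostname) {
        return IDN.toASCII(hostname);
    }
}
```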
Motivation:
We had some double spacing in the methods which should be removed to keep things consistent.
Modifications:
Remove redundant spaces.
Result:
Cleaner / consistent coding style.
Motivation:
My previous commit b88a980482 introduced a flawed unit test
that executes an assertion in a different thread than the test thread.
If this assertion fails, the test doesn't fail.
Modifications:
Replace the assertion by a proper workaround.
Result:
More correct unit test
Motivation:
To be consistent with the JDK we should ensure our native methods throw a ClosedChannelException if the Channel was previously closed. This will then be wrapped in a ChannelException as usual. For all other errors we continue to just throw a ChannelException directly.
Modifications:
Ensure getsockopt and setsockopt will throw a ClosedChannelException if the channel was closed before; on other errors we throw a ChannelException directly, as before.
Result:
Consistent with the NIO Channel implementations.
Motivation:
Circular assignment of arenas to thread caches can lead to less than optimal
mappings in cases where threads are (frequently) shutdown and started.
Example Scenario:
There are a total of 2 arenas. The first two threads performing an allocation
would lead to the following mapping:
Thread 0 -> Arena 0
Thread 1 -> Arena 1
Now, assume Thread 1 is shut down and another Thread 2 is started. The current
circular assignment algorithm would lead to the following mapping:
Thread 0 -> Arena 0
Thread 2 -> Arena 0
Ideally, we want Thread 2 to use Arena 1 though.
Presumably, this is not much of an issue for most Netty applications that do all
the allocations inside the eventloop, as eventloop threads are seldom shut down
and restarted. However, applications that only use the netty-buffer package
or implement their own threading model outside the eventloop might suffer from
increased contention. For example, gRPC Java when using the blocking stub
performs some allocations outside the eventloop and within its own thread pool
that is dynamically sized depending on system load.
Modifications:
Implement a linear scan algorithm that assigns a new thread cache to the arena
that currently backs the fewest thread caches.
Result:
Closer to ideal mappings between thread caches and arenas. In order to always
get an ideal mapping, we would have to re-balance the mapping whenever a thread
dies. However, that's difficult because of deallocation.
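A simplified sketch of the linear-scan idea (the array and counters are illustrative stand-ins for PoolArena's actual fields): a new thread cache is bound to the arena that currently backs the fewest caches.

```java
final class ArenaPicker {
    // numThreadCaches[i] tracks how many thread caches currently use arenas[i].
    static <T> T leastUsedArena(T[] arenas, int[] numThreadCaches) {
        int minIdx = 0;
        for (int i = 1; i < arenas.length; i++) {
            if (numThreadCaches[i] < numThreadCaches[minIdx]) {
                minIdx = i;
            }
        }
        return arenas[minIdx];
    }
}
```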
Motivation:
We want to allow the use of an uber jar that contains shared dynamic libraries for all platforms (including fedora).
Modifications:
Modified OpenSsl to try to load the Fedora library, if the OS is Linux and the platform-specific library fails to load, before falling back to the default lib.
Result:
True uber support.
Motivation:
Http2Codec.SimpleChannelPromiseAggregator currently fails fast as soon as a tryFailure or setFailure method is called. This can lead to write operations, which pass the result of SimpleChannelPromiseAggregator.newPromise to multiple channel.write calls, throwing exceptions because the promise is already done. This behavior is not expected by most of the Netty codecs (SslHandler) and can also create unexpected leaks in the http2 codec (DefaultHttp2FrameWriter).
Modifications:
- Http2Codec.SimpleChannelPromiseAggregator shouldn't complete the promise until doneAllocatingPromises is called
- Usages of Http2Codec.SimpleChannelPromiseAggregator should be adjusted to handle the change in behavior
- What were leaks in DefaultHttp2FrameWriter should be fixed to catch any other cases where ctx.write may throw
Result:
SimpleChannelPromiseAggregator won't generate promises which are done when newPromise is called.
Motivation:
PromiseAggregator's API allows for the aggregate promise to complete before the user is done adding promises. In order to support this use case the API structure would need to change in a breaking manner.
Modifications:
- Deprecate PromiseAggregator and subclasses
- Introduce PromiseCombiner which corrects these issues
Result:
PromiseCombiner corrects the deficiencies in PromiseAggregator.
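Usage at the time of this change presumably looks roughly like the following sketch (channel and messages are placeholders): promises can keep being added until finish() is called, so the aggregate cannot complete before the caller is done adding.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseCombiner;

final class CombinedWrite {
    static void writeBoth(Channel ch, Object msg1, Object msg2, ChannelPromise aggregate) {
        ChannelPromise p1 = ch.newPromise();
        ChannelPromise p2 = ch.newPromise();
        ch.write(msg1, p1);
        ch.write(msg2, p2);

        PromiseCombiner combiner = new PromiseCombiner();
        combiner.add(p1);
        combiner.add(p2);
        combiner.finish(aggregate); // aggregate completes once p1 and p2 both do
        ch.flush();
    }
}
```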
Motivation:
We should always notify the promise before triggering an event through the pipeline, to be consistent.
Modifications:
Ensure we notify the promise before firing the event.
Result:
Consistent behavior
Motivation:
UDP-oriented codec reusing the existing encoders and decoders would be helpful. See #1350
Modifications:
Add DatagramPacketEncoder and DatagramPacketDecoder to reuse the existing encoders and decoders.
Result:
People can use DatagramPacketEncoder and DatagramPacketDecoder to wrap existing encoders and decoders to create UDP-oriented codec.
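A hedged usage sketch with the String codecs as the wrapped handlers (any MessageToMessage codec should work similarly): the wrappers adapt existing encoders and decoders to DatagramPacket / AddressedEnvelope messages.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.DatagramPacketDecoder;
import io.netty.handler.codec.DatagramPacketEncoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

final class UdpStringCodec {
    static void configure(ChannelPipeline pipeline) {
        // Inbound DatagramPackets are unwrapped and their content decoded to Strings.
        pipeline.addLast(new DatagramPacketDecoder(new StringDecoder()));
        // Outbound addressed CharSequences are encoded and re-wrapped as DatagramPackets.
        pipeline.addLast(new DatagramPacketEncoder<CharSequence>(new StringEncoder()));
    }
}
```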
Motivation:
EpollServerSocketConfig.isFreebind() throws an exception when called.
Modifications:
Use the correct getsockopt arguments.
Result:
No more exception when calling EpollServerSocketConfig.isFreebind().
Motivation:
TCP_MD5 is only supported by SocketChannels so remove it from EpollServerChannelConfig which is generic.
Modifications:
Remove invalid code.
Result:
Remove invalid / dead code.
Motivation:
A custom SecurityManager may prevent calling File.exists() and so throw a SecurityException in the static init block of NetUtil.
Modifications:
Correctly catch the exception and so allow the static initialization of NetUtil to complete.
Result:
Allow the static initializer of NetUtil to work even with a custom SecurityManager.
Motivation:
The statistic counters PoolArena.(allocationsTiny|allocationsSmall) are
not protected by a per arena lock, but by a per size class lock. Thus,
two concurrent allocations of different size (class) could lead to a
race and ultimately to wrong statistics.
Modifications:
Use a thread-safe LongCounter instead of a plain long data type.
Result:
Fewer data races.
Motivation:
PendingWriteQueue should guard against re-entrant writes once
removeAndFailAll() is run.
Modifications:
removeAndFailAll() should repeat until the queue is finally empty.
Result:
assertEmpty() will always hold.
Motivation:
691bc1690e made writeUtf8 consistent with String.getBytes() so that it never throws.
94f27be59b provided a writeUtf8 method which takes a ByteBufAllocator to do an appropriately sized buffer allocation.
Modifications:
- Assume writeUtf8 will not throw in HTTP/2 codec
- Use the new writeUtf8 method
Result:
Cleaner code in codec-http2.
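A hedged sketch of the allocator-based overload in use (the helper is illustrative): the buffer is sized from the UTF-8 length of the sequence instead of a caller-supplied guess.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufUtil;

final class Utf8Write {
    static ByteBuf encode(ByteBufAllocator alloc, CharSequence text) {
        // Allocates an appropriately sized buffer and writes the UTF-8 bytes into it.
        return ByteBufUtil.writeUtf8(alloc, text);
    }
}
```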
Motivation:
EPOLL does not support autoread when in ET mode.
Modifications:
- EpollRecvByteAllocatorHandle should not unconditionally force reading just because ET is enabled
- AbstractEpollChannel and all derived classes which implement epollInReady must support a variable which indicates
there may be more data to read. The variable is used when read is called to simulate an EPOLL wakeup and call epollInReady if necessary. This ensures that if we don't read until EAGAIN we will try to read again and not rely on EPOLL to notify us.
Result:
EPOLL ET supports auto read.
Motivation:
If a close operation occurs while iterating the active streams, it will be queued and processed after the iteration has completed to avoid a concurrent modification exception. However, it is possible that during the iteration the stream which was closed has been removed from the priority tree and its parent set to null. After the iteration completes, the close operation will then attempt to dereference the parent and result in an NPE.
Modifications:
- pending close operations should verify the stream's parent is not null before processing the event
Result:
No more NPE.
Motivation:
The current HttpPostMultipartRequestDecoder can decode multipart/form-data parts with a Content-Type that specifies a charset. When this charset is invalid, Charset.forName() throws an unchecked UnsupportedCharsetException. This exception is not caught by the decoder. It should be rethrown as an ErrorDataDecoderException, because the developer using the API would expect this validation failure to be reported as such.
Modifications:
Add a catch block for UnsupportedCharsetException and rethrow it as an ErrorDataDecoderException.
Result:
UnsupportedCharsetException is now rethrown as an ErrorDataDecoderException.
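A minimal sketch of the fix (the surrounding decoder code is omitted): the unchecked UnsupportedCharsetException is translated into the decoder's documented ErrorDataDecoderException.

```java
import java.nio.charset.Charset;
import java.nio.charset.UnsupportedCharsetException;

import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.ErrorDataDecoderException;

final class PartCharset {
    static Charset parse(String charsetName) {
        try {
            return Charset.forName(charsetName);
        } catch (UnsupportedCharsetException e) {
            // Report the invalid charset as a decode error instead of leaking
            // the unchecked JDK exception to the caller.
            throw new ErrorDataDecoderException(e);
        }
    }
}
```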
Motivation:
The current implementation will provide a DnsRawRecord, which, while
containing the host name it resolves to, would require the user to
decode the name using the decode method currently private to
DefaultDnsRecordDecoder, which in fact means copying it.
Modifications:
Introduce DnsPtrRecord, a specialization of DnsRecord which
provides a decoded host name.
Result:
PTR Records are much easier to work with, as the name is decoded already.
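A hedged usage sketch (the handler context is omitted): a PTR answer can now be consumed directly via hostname() without manually decoding the RDATA.

```java
import io.netty.handler.codec.dns.DnsPtrRecord;
import io.netty.handler.codec.dns.DnsRecord;

final class PtrRecords {
    // Returns the decoded host name for PTR records, or null otherwise.
    static String hostnameOf(DnsRecord record) {
        if (record instanceof DnsPtrRecord) {
            return ((DnsPtrRecord) record).hostname();
        }
        return null;
    }
}
```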