* Decouple EventLoop details from the IO handling for each transport to allow easy re-use of code and customization
Motivation:
As of today, extending EventLoop implementations to add custom logic / metrics / instrumentation is only possible in a very limited way, if at all. This is due to the fact that most implementations are final or even package-private. That said, even if they were public, the ability to do something useful with them would be very limited, as the IO processing and task processing are very tightly coupled. All of the mentioned things are a big pain point in Netty 4.x and need improvement.
Modifications:
This changeset decouples the IO processing logic from the task processing logic for the main transports (NIO, Epoll, KQueue) by introducing the concept of an IoHandler. The IoHandler is responsible for waiting for IO readiness and processing the resulting IO events. The execution of the IoHandler itself is done by the SingleThreadEventLoop as part of its EventLoop processing. This allows using the same EventLoopGroup (MultithreadEventLoopGroup) for all the mentioned transports by just specifying a different IoHandlerFactory during construction.
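To make this concrete, a minimal sketch of what construction might look like; the handler factory names (NioHandler / EpollHandler / KQueueHandler and their newFactory() methods) are assumptions for illustration, not taken from this changeset:

```java
// The same generic MultithreadEventLoopGroup serves every transport, only the
// IoHandlerFactory passed at construction differs (factory names assumed here).
EventLoopGroup nioGroup    = new MultithreadEventLoopGroup(NioHandler.newFactory());
EventLoopGroup epollGroup  = new MultithreadEventLoopGroup(EpollHandler.newFactory());
EventLoopGroup kqueueGroup = new MultithreadEventLoopGroup(KQueueHandler.newFactory());
```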
Besides this core API change, this changeset also makes it easy to extend SingleThreadEventExecutor / SingleThreadEventLoop to add custom logic which can then be reused by all the transports. The ideas are very similar to what ScheduledThreadPoolExecutor (which is part of the JDK) provides. This allows, for example, things like (see the sketch after this list):
* Adding instrumentation / metrics:
* how many Channels are registered on a SingleThreadEventLoop
* how many Channels were handled during the IO processing in an EventLoop run
* how many tasks were handled during the last EventLoop / EventExecutor run
* how many outstanding tasks we have
...
...
* Implementing custom strategies for choosing the next EventExecutor / EventLoop to use based on these metrics.
* Use different Promise / Future / ScheduledFuture implementations
* Decorate Runnables / Callables when submitted to the EventExecutor / EventLoop
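As an illustration of the last point, a rough sketch of the kind of subclass this enables; the constructor arguments passed to super(...) are assumptions for this sketch and may not match the final API, only execute(Runnable) is a standard executor method:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicLong;

// A SingleThreadEventLoop subclass that decorates submitted Runnables to count them.
public class CountingEventLoop extends SingleThreadEventLoop {
    private final AtomicLong submittedTasks = new AtomicLong();

    public CountingEventLoop(EventLoopGroup parent, Executor executor, IoHandler ioHandler) {
        super(parent, executor, ioHandler); // constructor shape assumed for illustration
    }

    @Override
    public void execute(Runnable task) {
        submittedTasks.incrementAndGet();   // metric: tasks handed to this loop
        super.execute(task);
    }

    public long submittedTasks() {
        return submittedTasks.get();
    }
}
```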
As a lot of functionality is folded into MultithreadEventLoopGroup and SingleThreadEventLoop, this changeset also removes:
* AbstractEventLoop
* AbstractEventLoopGroup
* EventExecutorChooser
* EventExecutorChooserFactory
* DefaultEventLoopGroup
* DefaultEventExecutor
* DefaultEventExecutorGroup
Result:
Fixes https://github.com/netty/netty/issues/8514.
Motivation:
Custom Netty ThreadLocalRandom and ThreadLocalRandomProvider classes are no longer needed and can be removed.
Modification:
Remove our own ThreadLocalRandom.
Result:
Less code to maintain
Motivation:
PlatformDependent.newConcurrentHashMap() is no longer needed, so it can be removed and new ConcurrentHashMap<>() inlined instead.
Modification:
Use ConcurrentHashMap provided by the JDK directly.
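For illustration (the variable name and value type are arbitrary):

```java
// before: Map<String, Object> map = PlatformDependent.newConcurrentHashMap();
// after:
Map<String, Object> map = new ConcurrentHashMap<>();
```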
Result:
Less code to maintain.
Motivation:
We can use the diamond operator these days.
Modification:
Use diamond operator whenever possible.
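For example (types chosen arbitrarily):

```java
// before: Map<String, List<String>> headers = new HashMap<String, List<String>>();
// after:
Map<String, List<String>> headers = new HashMap<>();
```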
Result:
More modern code and less boiler-plate.
Motivation:
When using multiple nameservers and a nameserver responds with NXDOMAIN, we should only fail the query if the nameserver in question is authoritative or no nameservers are left to try.
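A minimal sketch of the intended decision; the helper and parameter names are illustrative, not the actual resolver code:

```java
// Only give up on NXDOMAIN when the answer is authoritative or nothing is left to try.
boolean shouldFailOnNxDomain(boolean nameServerIsAuthoritative, boolean hasMoreNameServers) {
    return nameServerIsAuthoritative || !hasMoreNameServers;
}
```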
Modifications:
- Try the next nameserver if NXDOMAIN was returned but the nameserver is not authoritative
- Adjust testcase to respect correct behaviour.
Result:
Fixes https://github.com/netty/netty/issues/8261
Motivation:
Because of how we implemented the registration / deregistration of an EventLoop, it was not possible to wrap an EventLoop implementation and use it with a Channel.
Modification:
- Introduce EventLoop.Unsafe which is responsible for the actual registration.
- Move validation of EventLoop / Channel combo to the EventLoop
- Add unit test that verifies that wrapping works
Result:
Be able to wrap an EventLoop and so add some extra functionality.
Motivation:
At the moment it is possible to have a Channel in Netty that is not registered / assigned to an EventLoop until register(...) is called. This is suboptimal, as if the Channel is not registered it is also not possible to do anything useful with a ChannelFuture that belongs to the Channel. We should consider having the EventLoop as a constructor argument of a Channel and having the register / deregister methods only have the effect of adding a Channel to KQueue/Epoll/... It is also currently possible to deregister a Channel from one EventLoop and register it with another EventLoop. This operation defeats the threading model assumptions that are widespread in Netty and requires careful user-level coordination to pull off without any concurrency issues. It is not a commonly used feature in practice, may be better handled by other means (e.g. client side load balancing), and therefore we propose removing this feature.
Modifications:
- Change all Channel implementations to require an EventLoop for construction (+ an EventLoopGroup for all ServerChannel implementations); see the sketch after this list
- Remove all register(...) methods from EventLoopGroup
- Add ChannelOutboundInvoker.register(...) which now basically means we want to register on the EventLoop for IO.
- Change ChannelUnsafe.register(...) to not take an EventLoop as parameter (as the EventLoop is supplied on construction).
- Change ChannelFactory to take an EventLoop to create new Channels and introduce ServerChannelFactory which takes an EventLoop and one EventLoopGroup to create new ServerChannel instances.
- Add ServerChannel.childEventLoopGroup()
- Ensure all operations on the accepted Channel are done in the EventLoop of the Channel in ServerBootstrap
- Change unit tests for new behaviour
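A rough sketch of construction after this change; the exact constructor shapes (including NioHandler.newFactory() and the NioServerSocketChannel arguments) are assumptions based on the list above, not verified signatures:

```java
EventLoopGroup group = new MultithreadEventLoopGroup(NioHandler.newFactory());
EventLoop loop = group.next();

// Every Channel now receives its EventLoop up front and keeps it for its whole life-time.
Channel child = new NioSocketChannel(loop);

// ServerChannels additionally take the EventLoopGroup used for the accepted children.
ServerChannel server = new NioServerSocketChannel(loop, group);
```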
Result:
A Channel always has an EventLoop assigned which will never change during its life-time. This ensures we are always able to call any operation on the Channel once it is constructed (until the EventLoop is shut down). This also simplifies the logic in DefaultChannelPipeline a lot, as we can always call handlerAdded / handlerRemoved directly without the need to wait for register() to happen.
Also note that it is still possible to deregister a Channel and register it again. It's just no longer possible to move from one EventLoop to another (which was not really safe anyway).
Fixes https://github.com/netty/netty/issues/8513.
Motivation:
We do not correctly detect loops when following CNAMEs and so may keep following them without any success.
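A generic sketch of the loop detection; cnameOf(...) is a hypothetical lookup returning the CNAME target or null, not the resolver's actual code:

```java
// Follow the CNAME chain and fail as soon as a name repeats.
String followCnames(String hostname) {
    Set<String> visited = new HashSet<>();
    visited.add(hostname);
    String name = hostname;
    String cname;
    while ((cname = cnameOf(name)) != null) {
        if (!visited.add(cname)) {
            throw new IllegalStateException("CNAME loop detected for " + hostname);
        }
        name = cname;
    }
    return name;
}
```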
Modifications:
- Correctly detect CNAME loops
- Do not cache CNAME entries which point to themselves
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8687.
Motivation:
Android does not contain javax.naming.*, so we should not try to use it, in order to prevent a NoClassDefFoundError on init.
Modifications:
Only try to use javax.naming.* to retrieve nameservers when not using Android.
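A sketch of the kind of guard this implies; PlatformDependent.isAndroid() exists in netty-common, the rest is illustrative:

```java
// Only touch javax.naming.* when not running on Android, so loading the directory
// context classes cannot trigger a NoClassDefFoundError there.
if (!PlatformDependent.isAndroid()) {
    // e.g. query javax.naming.directory.InitialDirContext for the configured nameservers
}
```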
Result:
Fixes https://github.com/netty/netty/issues/8654.
Motivation:
ByteBuf supports “marker indexes”. The intended use case for these is that, if a speculative operation (e.g. decode) is in progress, the user can “mark” an index and return to it later if the operation isn't successful (e.g. not enough data). However, this is rarely used in practice, requires extra memory to maintain, and introduces complexity in the state management for derived/pooled buffer initialization, resizing, and other operations which may modify reader/writer indexes.
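For reference, the typical mark/reset pattern and its manual replacement; the ByteBuf methods shown are the long-standing 4.x API, the decode logic is an arbitrary example:

```java
// Speculative decode without marker indexes: remember the reader index in a local
// variable and restore it when there is not enough data yet.
void tryDecode(ByteBuf buf) {
    if (buf.readableBytes() < 4) {
        return;                        // not even a length header yet
    }
    int saved = buf.readerIndex();     // replaces buf.markReaderIndex()
    int frameLength = buf.readInt();   // speculative read of the length header
    if (buf.readableBytes() < frameLength) {
        buf.readerIndex(saved);        // replaces buf.resetReaderIndex()
        return;                        // wait for more data and retry later
    }
    // ... decode frameLength bytes ...
}
```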
Modifications:
Remove support for marking and adjust testcases / code.
Result:
Fixes https://github.com/netty/netty/issues/8535.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
Some transports support gathering writes when using datagrams; for example, this is the case for EpollDatagramChannel. We should minimize the calls to flush() to allow efficient usage of sendmmsg in this case.
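In Channel API terms, the pattern being aimed for looks roughly like this; write()/flush() are the standard Channel methods, the helper and packet names are illustrative:

```java
// Queue several datagrams and flush once, so a transport like EpollDatagramChannel
// can hand them to the kernel with a single sendmmsg call.
void sendQueries(Channel channel, DatagramPacket aQuery, DatagramPacket aaaaQuery) {
    channel.write(aQuery);       // queued only
    channel.write(aaaaQuery);    // still queued
    channel.flush();             // both datagrams are written out together
}
```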
Modifications:
- Minimize flush() operations when we query for multiple address types.
- Reduce GC by always directly scheduling doResolveAll0(...) on the EventLoop.
Result:
Be able to use sendmmsg internally in the DnsNameResolver.
Motivation:
We should refresh the DNS configuration every 5 minutes to be able to detect changes made by the user. This is in line with what OpenJDK is doing.
Modifications:
Refresh config every 5 minutes.
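One possible shape of such a refresh, purely illustrative and not the committed implementation (reload() is a hypothetical helper):

```java
// Lazily re-read the platform DNS configuration once the last snapshot is older than 5 minutes.
private static final long REFRESH_INTERVAL_NANOS = TimeUnit.MINUTES.toNanos(5);
private volatile long lastRefreshNanos;
private volatile DnsServerAddressStreamProvider current;

DnsServerAddressStreamProvider provider() {
    long now = System.nanoTime();
    if (current == null || now - lastRefreshNanos >= REFRESH_INTERVAL_NANOS) {
        current = reload();          // hypothetical: re-parse the system configuration
        lastRefreshNanos = now;
    }
    return current;
}
```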
Result:
Be able to consume changes made by the user.
Motivation:
It should be possible to build a DnsNameResolver with a null resolvedAddressTypes, defaulting then to DEFAULT_RESOLVE_ADDRESS_TYPES (see line 309).
Sadly, `preferredAddressType` is then called on line 377 with the original parameter instead of the instance attribute, causing an NPE when it's null.
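A simplified illustration of the bug and the fix, pieced together from the description above (constructor body abridged):

```java
// Constructor defaults the field when the parameter is null (around line 309):
this.resolvedAddressTypes = resolvedAddressTypes != null
        ? resolvedAddressTypes
        : DEFAULT_RESOLVE_ADDRESS_TYPES;

// Later call (around line 377):
// before: preferredAddressType(resolvedAddressTypes);        // raw parameter, NPE when null
// after:  preferredAddressType(this.resolvedAddressTypes);   // defaulted instance attribute
```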
Modification:
Call preferredAddressType with the instance attribute instead of the constructor parameter.
Result:
No more NPE
Motivation:
ba594bcf4a added a utility to parse searchdomains defined in /etc/resolv.conf but did not correctly handle the case where multiple entries are defined, separated by either whitespace or tabs.
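A minimal sketch of the corrected parsing; the helper name is illustrative and the real utility lives in the resolv.conf parser referenced above:

```java
// Splits e.g. "search foo.example.com  bar.example.com\tbaz.example.com" into its
// individual searchdomain entries, whatever mix of spaces and tabs separates them.
List<String> parseSearchDomains(String line) {
    String value = line.substring("search".length()).trim();
    return Arrays.asList(value.split("\\s+"));
}
```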
Modifications:
- Correctly parse multiple entries
- Add unit test.
Result:
Correctly parse multiple searchdomain entries.
* Use AuthoritativeDnsServerCache for creating the new redirect stream.
Motivation:
At the moment, if a user wants to provide custom sorting of the nameservers used for redirects, it needs to be implemented in two places. This is more complicated than it needs to be.
Modifications:
- Always delegate to the AuthoritativeDnsServerCache, as we fill it before we call newRedirectDnsServerStream anyway.
Result:
Easier way for the user to implement custom sorting.
* Add cache for CNAME mappings resolved during lookup of DNS entries.
Motivation:
If the CNAMEd hostname is backed by a load balancing component, the final A or AAAA DNS records typically have a small TTL. However, the CNAME record itself is set up with a longer TTL.
For example:
* x.netty.io could be CNAMEd to y.netty.io with a TTL of 5 min
* A / AAAA records for y.netty.io have a TTL of 0.5 min
In the current Netty implementation, the original hostname is saved in the resolve cache with the TTL of the final A / AAAA records. When that cache entry expires, the Netty recursive resolver sends at least two queries: the 1st one to be resolved as a CNAME record and the 2nd one to resolve the hostname in the CNAME record.
If the CNAME record were cached, only the 2nd query would be needed most of the time; the 1st query would be needed less frequently.
Modifications:
Add a new CnameCache that will be used to cache CNAMEs and so may reduce queries.
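A hedged sketch of what such a cache surface might look like; the method names and signatures below are assumptions for illustration, not the committed interface:

```java
public interface CnameCache {
    /** Returns the cached CNAME target for the given hostname, or null if none is cached. */
    String get(String hostname);

    /** Caches that {@code hostname} is a CNAME for {@code cname}, for at most the given TTL. */
    void cache(String hostname, String cname, long originalTtlSeconds, EventLoop loop);

    /** Removes all cached entries. */
    void clear();
}
```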
Result:
Fewer queries needed when CNAMEs are used.
Motivation
Applications should not depend on internal packages with Java 9 and later. This causes a warning now, but will break in future versions of Java.
Modification
This change adds methods to UnixResolverDnsServerAddressStreamProvider (following after #6844) that parse /etc/resolv.conf for domain and search entries. Then DnsNameResolver does not need to rely on sun.net.dns.ResolverConfiguration to do this.
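For reference, the kind of /etc/resolv.conf entries this covers (domain and search are shown together only to illustrate both entry types; in a real file the last one takes precedence):

```
nameserver 127.0.0.53
domain example.com
search corp.example.com lab.example.com
```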
Result
Fixes #8318. Furthermore, at least in my testing with Java 11, this also makes multiple search entries work properly (previously I was only getting the first entry).
Motivation:
We should not try to cast the Channel to a DatagramChannel as this will cause a ClassCastException.
Modifications:
- Do not cast
- Rethrow from the constructor if we detect that the registration failed.
- Add unit test.
Result:
Propagate correct exception.
Motivation:
We incorrectly ignored NS servers during a redirect if they had no ADDITIONAL record. At worst this could have the effect that we failed the query completely, because none of the NS servers had an ADDITIONAL record. Besides this, using a DnsCache to cache authoritative nameservers does not work in practice, as we need different features and semantics when caching these servers (for example, we also want to cache unresolved nameservers and resolve these on the fly when needed).
Modifications:
- Correctly take NS records into account that have no matching ADDITIONAL record
- Correctly handle multiple ADDITIONAL records for the same NS record
- Introduce AuthoritativeDnsServerCache as a replacement for the DnsCache when caching authoritative nameservers + add a default implementation
- Add an adapter layer to reduce API breakage as much as possible
- Replace DnsNameResolver.uncachedRedirectDnsServerStream(...) with newRedirectDnsServerStream(...)
- Add unit tests
Result:
Our DnsResolver now correctly handles redirects in all cases.
Motivation:
We currently remove all entries from the cache for a hostname once the lowest TTL is reached, but we schedule a removal task for each of the cached entries. This is wasteful.
Modifications:
- Reimplement the TTL scheduling logic to only schedule a new removal task if the requested TTL is actually lower than the one for the already scheduled task (see the sketch after this list).
- Ensure we only remove from the internal map if we did not replace the Entries in the meantime.
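The scheduling rule itself, as a tiny illustrative helper (not the committed code):

```java
// A new removal task is only worth scheduling if it would fire earlier than the
// task already pending for this hostname's entries.
boolean shouldReschedule(long requestedTtlSeconds, long scheduledTtlSeconds) {
    return requestedTtlSeconds < scheduledTtlSeconds;
}
```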
Result:
Less overhead in terms of scheduled tasks for the DefaultDnsCache
Motivation:
We should ensure we return the same cached entries for a hostname with and without a trailing dot. Besides this, we should also use the cache for the searchdomains.
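The normalization boils down to something like this (helper name illustrative):

```java
// "netty.io" and "netty.io." must map to the same cache entry.
static String cacheKey(String hostname) {
    return hostname.endsWith(".") ? hostname : hostname + '.';
}
```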
Modifications:
- Internally always use the hostname with a trailing dot as the key, to ensure we handle it correctly in the cache.
- Also query the cache for each searchdomain
- Add unit tests
Result:
Use the same cached entries for a hostname with and without a trailing dot. Query the cache for each searchdomain query as well.
Motivation:
55fec94592 fixed a bug where we did not correctly clear all caches when the resolver was closed but did not add a testcase.
Modifications:
Add testcase.
Result:
More tests.
Motivation:
DnsNameResolver manages search domains and will retry the request with the different search domains provided to it. However if the query results in an invalid hostname, the Future corresponding to the resolve request will never be completed.
Modifications:
- If a resolve attempt results in an invalid hostname and the query isn't issued we should fail the associated promise
Result:
No more hang from DnsNameResolver if search domain results in invalid hostname.
Motivation:
At the moment we only clear the resolveCache when the Channel is closed. We should also do the same for the authoritativeDnsServerCache.
Modifications:
Add authoritativeDnsServerCache.clear() to the Channel closeFuture.
Result:
Correctly clear all caches.
Motivation:
We did not handle the case where the query was cancelled, which could lead to an exhausted id space. Besides this, we did not cancel the timeout on failed promises.
Modifications:
- Do the removal of the id from the manager in a FutureListener so it is handled in all cases (see the sketch after this list).
- Cancel the timeout whenever the original promise was fulfilled.
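A hedged sketch of the listener-based cleanup; the field and method names (queryContextManager, timeoutFuture, etc.) are assumed from the surrounding description, not the actual code:

```java
// Run the cleanup on the promise itself so it happens on success, failure and
// cancellation alike: release the query id and stop the timeout.
promise.addListener(future -> {
    queryContextManager.remove(nameServerAddr, id);   // hypothetical removal call
    if (timeoutFuture != null) {
        timeoutFuture.cancel(false);
    }
});
```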
Result:
Fixes https://github.com/netty/netty/issues/8013.
Motivation:
Whenever we fail the query we should also remove the id from the DnsQueryContextManager.
Modifications:
Remove the id from the DnsQueryContextManager if we fail the query because the channel failed to become active.
Result:
More correct code.
Motivation:
At the moment, if you do a resolveAll and at least one A / AAAA record is present, we will not follow any CNAMEs that are also present. This is different from how the JDK behaves.
Modifications:
- Also follow CNAMEs that are present.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/7915.
Motivation:
a598c3b69b added an upper limit for ttl but missed doing the same for minTtl.
Modifications:
- Add upper limit for minTtl
- Add testcase.
Result:
No more IllegalArgumentException possible.
Motivation:
Due to a bug we never stored more than one address per hostname in the DefaultDnsCache.
Modifications:
- Correctly store multiple entries per hostname
- Add tests
Result:
DefaultDnsCache correctly stores more than one entry. Also fixes https://github.com/netty/netty/issues/7882.