Motivation:
In the SslHandler we schedule a timeout at which we close the Channel if a timeout was detected during close_notify. Because this can race with notifying the flushFuture, we can see an IllegalStateException when the Channel is closed.
Modifications:
- Use trySuccess() and tryFailure(...) to guard against the race.
Result:
No more race.
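A minimal sketch of the difference, assuming a plain Netty Promise (the class and exception below are illustrative, not the actual SslHandler code): setSuccess()/setFailure() throw an IllegalStateException if the promise is already done, while trySuccess()/tryFailure() simply report whether the completion was applied.

    import io.netty.util.concurrent.GlobalEventExecutor;
    import io.netty.util.concurrent.Promise;
    import javax.net.ssl.SSLException;

    public final class TrySuccessExample {
        public static void main(String[] args) {
            Promise<Void> flushFuture = GlobalEventExecutor.INSTANCE.newPromise();

            // One code path completes the promise first.
            flushFuture.trySuccess(null);

            // A racing close_notify timeout path: setFailure(...) would throw an
            // IllegalStateException here; tryFailure(...) just returns false.
            boolean applied = flushFuture.tryFailure(new SSLException("close_notify timed out"));
            System.out.println("failure applied: " + applied); // prints "failure applied: false"
        }
    }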
Motivation:
All read operations should be safe to execute from multiple threads, which was not the case and so could produce a livelock.
Modifications:
Modify the methods so they are safe to call from multiple threads.
Result:
No more livelock.
Motivation:
Many projects need some kind of Channel/Connection pool implementation. While the protocols differ, many things can be shared, so we should provide a generic API and implementation.
Modifications:
Add ChannelPool / ChannelPoolMap API and implementations.
Result:
A reusable, generic pool implementation that users can build upon.
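A usage sketch of the new API, assuming the io.netty.channel.pool package with SimpleChannelPool and AbstractChannelPoolHandler (names as introduced by this change; the handler callbacks and target address below are illustrative):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.pool.AbstractChannelPoolHandler;
    import io.netty.channel.pool.SimpleChannelPool;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.util.concurrent.Future;
    import io.netty.util.concurrent.FutureListener;

    public final class PoolExample {
        public static void main(String[] args) {
            Bootstrap bootstrap = new Bootstrap()
                    .group(new NioEventLoopGroup())
                    .channel(NioSocketChannel.class)
                    .remoteAddress("example.com", 80); // hypothetical target

            final SimpleChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
                @Override
                public void channelCreated(Channel ch) {
                    // add protocol-specific handlers to ch.pipeline() here
                }
            });

            pool.acquire().addListener(new FutureListener<Channel>() {
                @Override
                public void operationComplete(Future<Channel> future) {
                    if (future.isSuccess()) {
                        Channel ch = future.getNow();
                        // ... use the channel, then hand it back to the pool
                        pool.release(ch);
                    }
                }
            });
        }
    }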
Motivation:
When a SecurityManager is in place that prevents reading the somaxconn file, trying to bootstrap a channel later will result in a ClassNotFoundError.
Modifications:
- Read the file in a privileged block.
Result:
No more ClassNotFoundError when a SecurityManager is in place.
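A minimal sketch of the privileged-block idea (hypothetical helper, not the actual NetUtil code): the read runs with Netty's own permissions, so a restrictive SecurityManager applied to the caller no longer breaks static initialization.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.security.AccessController;
    import java.security.PrivilegedAction;

    final class SomaxconnReader {
        static int somaxconn() {
            return AccessController.doPrivileged(new PrivilegedAction<Integer>() {
                @Override
                public Integer run() {
                    try {
                        BufferedReader in = new BufferedReader(new FileReader("/proc/sys/net/core/somaxconn"));
                        try {
                            return Integer.parseInt(in.readLine().trim());
                        } finally {
                            in.close();
                        }
                    } catch (Exception e) {
                        return 128; // fallback when the file cannot be read
                    }
                }
            });
        }
    }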
Motivation:
static Package getPackage(Class<?> c) uses a synchronized block internally.
Thanks to @jingene for the hint and initial report of the issue.
Modifications:
- Use a simple lastIndexOf(...) and substring(...) for a faster implementation
Result:
No more lock contention.
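A minimal sketch of the replacement (hypothetical helper): deriving the package name directly from the class name avoids the synchronized Class.getPackage() call.

    // Equivalent of clazz.getPackage().getName(), without synchronization:
    static String packageName(Class<?> clazz) {
        String className = clazz.getName();
        int idx = className.lastIndexOf('.');
        return idx > 0 ? className.substring(0, idx) : "";
    }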
Motivation:
Attribute.getAndRemove() will return the value but also remove the AttributeKey itself from the AttributeMap. This may not be
what you want, as you may want to keep an instance of it and just set it again later. Document the contract so the user knows what to expect.
Modifications:
- Make it clear when to use AttributeKey.getAndRemove() / AttributeKey.remove() and when AttributeKey.getAndSet(null) / AttributeKey.set(null).
Result:
Less surprising behaviour.
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the ssl pointer is freed
- Correctly support the different failure behaviours
- Add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
On a system where both IPv4 and IPv6 are supported, a user may want to use -Djava.net.preferIPv4Stack=true to restrict it to IPv4 only.
This is currently ignored with the epoll transport.
Modifications:
Respect java.net.preferIPv4Stack system property.
Result:
-Djava.net.preferIPv4Stack=true will have the effect the user is looking for.
Motivation:
When remove0() is called multiple times for a DefaultAttribute it can corrupt the internal linked-list structure.
Modifications:
- Ensure remove0() cannot cause corruption by nulling out the prev and next references.
Result:
No more corruption possible
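A simplified sketch of the idea (hypothetical node class, not the actual DefaultAttribute code): once the node has been unlinked, its own references are cleared so a second remove0() call cannot re-link stale nodes.

    final class Node {
        Node prev;
        Node next;

        void remove0() {
            if (prev != null) {
                prev.next = next;
            }
            if (next != null) {
                next.prev = prev;
            }
            // Null out the references so calling remove0() again is a harmless no-op
            // instead of corrupting the list via the stale prev/next pointers.
            prev = null;
            next = null;
        }
    }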
Motivation:
At the moment, when EmbeddedChannel is used and a ChannelHandler tries to schedule a task, it will throw an UnsupportedOperationException. This makes it impossible to test these handlers or even reuse them with EmbeddedChannel.
Modifications:
- Factor out reusable scheduling code into AbstractSchedulingEventExecutor
- Let EmbeddedEventLoop and SingleThreadEventExecutor extend AbstractSchedulingEventExecutor
- Add EmbeddedChannel.runScheduledPendingTasks(), which allows running all scheduled tasks that are ready
Result:
EmbeddedChannel is now usable even with ChannelHandlers that try to schedule tasks.
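A usage sketch, assuming EmbeddedChannel.runScheduledPendingTasks() as introduced here (the handler below is purely illustrative):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.embedded.EmbeddedChannel;
    import java.util.concurrent.TimeUnit;

    public final class ScheduledHandlerTest {
        public static void main(String[] args) throws InterruptedException {
            EmbeddedChannel ch = new EmbeddedChannel(new ChannelInboundHandlerAdapter() {
                @Override
                public void channelActive(final ChannelHandlerContext ctx) {
                    // scheduling no longer throws UnsupportedOperationException
                    ctx.executor().schedule(new Runnable() {
                        @Override
                        public void run() {
                            ctx.close();
                        }
                    }, 100, TimeUnit.MILLISECONDS);
                }
            });

            Thread.sleep(200);              // let the scheduled deadline pass
            ch.runScheduledPendingTasks();  // run all scheduled tasks that are ready
        }
    }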
Motivation:
We should allow getting a ChannelOption/AttributeKey from a String. This will make it a lot easier to use configuration files in applications.
Modifications:
- Add exists(...) and newInstance(...) methods to ChannelOption and AttributeKey and alter valueOf(...) to return an existing instance for a String or create a new one.
- Add unit tests.
Result:
Much more flexible usage of ChannelOption and AttributeKey.
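A small sketch of how a configuration-file entry could be mapped to an option (the string keys and printed values are illustrative):

    import io.netty.channel.ChannelOption;
    import io.netty.util.AttributeKey;

    public final class OptionLookupExample {
        public static void main(String[] args) {
            // valueOf(...) returns the existing instance for the name or creates a new one.
            ChannelOption<Boolean> keepAlive = ChannelOption.valueOf("SO_KEEPALIVE");
            System.out.println(ChannelOption.exists("SO_KEEPALIVE")); // true

            AttributeKey<String> userId = AttributeKey.valueOf("userId");
            System.out.println(userId.name());                        // userId
        }
    }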
Motivation:
isRoot() is an expensive operation. We should avoid calling it if
possible.
Modifications:
Move the isRoot() checks to the end of the 'if' block, so that isRoot()
is evaluated only when really necessary.
Result:
isRoot() is evaluated only when SO_BROADCAST is set and the bind address
is the any-local address.
Motivation:
When a user sees an error message, he or she sometimes does not know
exactly what to do to fix the problem.
Modifications:
Log the URL of the wiki pages that might help the user troubleshoot.
Result:
We are more friendly.
Related: #3166
Motivation:
When a recyclable object created in one thread is returned from another
thread, it is stored in a WeakOrderedQueue.
The objects stored in the WeakOrderedQueue are added back to the stack by
WeakOrderedQueue.transfer() when the owner thread runs out of recyclable
objects.
However, WeakOrderedQueue.transfer() does not have any mechanism that
prevents the stack from growing beyond its maximum capacity.
Modifications:
- Make WeakOrderedQueue.transfer() increase the capacity of the stack
only up to its maximum
- Add tests for the cases where the recyclable object is returned at the
non-owner thread
- Fix a bug where Stack.scavengeSome() does not scavenge the objects
when it's the first time it ran out of objects and thus its cursor is
null.
- Overall clean-up of scavengeSome() and transfer()
Result:
The capacity of Stack never increases beyond its maximum.
Motivation:
io.netty.util.internal.PlatformDependent.isRoot() depends on the IS_ROOT field which is filled in during class initialization. This spawns processes and consumes resources, which are not generally necessary to the complete functioning of that class.
Modifications:
This switches the class to lazily initialize this field inside the isRoot() method using double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Result:
The first call to isRoot() will be slightly slower; in exchange, class loading is faster, uses fewer resources, and platform errors are avoided unless necessary.
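A minimal double-checked-locking sketch of the same idea (hypothetical class, not the actual PlatformDependent code):

    public final class LazyRoot {
        private static volatile Boolean isRoot; // null until the first isRoot() call

        public static boolean isRoot() {
            Boolean root = isRoot;
            if (root == null) {
                synchronized (LazyRoot.class) {
                    root = isRoot;
                    if (root == null) {
                        root = detectRoot(); // expensive: may spawn external processes
                        isRoot = root;
                    }
                }
            }
            return root;
        }

        private static boolean detectRoot() {
            // platform-specific detection would go here
            return false;
        }

        private LazyRoot() { }
    }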
- Parameterize DomainNameMapping to make it useful for other use cases
than just mapping to SslContext
- Move DomainNameMapping to io.netty.util
- Clean-up the API documentation
- Make SniHandler.hostname and sslContext volatile because they can be
accessed by non-I/O threads
Motivation:
When we need to host multiple server names with a single IP, the server
must support the Server Name Indication extension to serve clients
with the proper certificate. So the SniHandler will host multiple
SslContext(s) and append an SslHandler for the requested hostname.
Modification:
* Added SniHandler to host multiple certificates in a single server
* Added a test case
Result:
Users can use SniHandler to host multiple certificates at a time.
It's server-side only.
Motivation:
8fbc513 introduced stray warnings in callsites of
PromiseAggregator#add and PromiseNotifier#(...).
Modifications:
This commit adds the @SafeVarargs annotation to PromiseAggregator#add
and PromiseNotifier#(...). As Netty is built with JDK7, this is a
recognized annotation and should not affect runtime VM versions 1.5 and
1.6.
Result:
Building Netty with JDK7 will no longer produce warnings in the
callsites mentioned above.
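A generic sketch of the pattern (the class below is hypothetical, not the actual PromiseAggregator): annotating a final generic varargs method with @SafeVarargs suppresses the "unchecked generic array creation" warning at call sites on JDK7+.

    import io.netty.util.concurrent.Promise;

    public final class Aggregator<V> {
        @SafeVarargs // allowed because the method is final
        public final Aggregator<V> add(Promise<V>... promises) {
            for (Promise<V> p : promises) {
                // track each promise so the aggregate can be completed later
            }
            return this;
        }
    }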
Motivation:
Although the new IntObjectMap.values() that returns Collection is
useful, the removed values(Class<V>) that returns an array is also
useful. It's also good for backward compatibility.
Modifications:
- Add IntObjectMap.values(Class<V>) back
- Miscellaneous improvements
- Cache the collection returned by IntObjectHashMap.values()
- Fix inspector warnings
- Update the IntObjectHashMapTest to test both values()
Result:
- Backward compatibility
- Potential performance improvement of values()
Motivation:
The mentioned commit contains a bug fix and an improvement in
IntObjectHashMap that requires backporting.
Modifications:
Update IntObjectMap, IntObjectHashMap, and IntObjectHashMapTest
Result:
Easier to backport HTTP/2 and other changes in master in the future
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of the authority.
- Improved the use of StringBuilders.
Result:
Performance improvements.
Motivation:
NetUtil.isValidIpV6Address() handles the interface name in IPv6 address
incorrectly. For example, it returns false for the following addresses:
- ::1%lo
- ::1%_%_in_name_
Modifications:
- Strip the square brackets before validation for simplicity
- Strip the part after the percent sign completely before validation for
simplicity
- Simplify and reformat NetUtilTest
Result:
- The interface names in IPv6 addresses are handled correctly.
- NetUtilTest is cleaner
Motivation:
ChannelPromiseAggregator and ChannelPromiseNotifiers only allow
consumers to work with Channels as the result type. Generic versions
of these classes allow consumers to aggregate or broadcast the results
of an asynchronous execution with other result types.
Modifications:
Add PromiseAggregator and PromiseNotifier. Add unit tests for both.
Remove code in ChannelPromiseAggregator and ChannelPromiseNotifier and
modify them to extend the new base classes.
Result:
Consumers can now aggregate or broadcast the results of an asynchronous
execution with results types other than Channel.
Motivation:
Improvements were made on the main line to support ALPN and mutual
authentication for TLS. These should be backported.
Modifications:
- Backport commits from the master branch
- f8af84d599
- e74c8edba3
Result:
Support for ALPN and mutual authentication.
Motivation:
4.0 was not modified at the same time as 4.1, although the difference was
limited.
Include the fix for the "=" character in the boundary.
Issue #3004 shows that the "=" character was not supported as it should be
by the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Backport from 4.1 to 4.0 while respecting interfaces.
Add 2 methods to StringUtil:
- split with a maxParm argument: splits a String into at most that many parts
(to prevent multiple '=' characters from causing extra, unneeded splits)
- substringAfter: returns the String part after the delimiter (since the first
part is not needed)
Use those methods in HttpPostRequestDecoder.
Change the HttpPostRequestDecoderTest to check using a boundary
beginning with "=".
Results:
Backport done (fixes issue #2886).
Issue #3004 is fixed too.
The fix improves stability and resolves the related issues.
Motivation:
Using a needless local copy of keys.length.
Modifications:
Using keys.length explicitly everywhere.
Result:
Slight performance improvement of hashIndex.
Motivation:
The hashIndex method currently uses a conditional to handle negative
keys. This could be done without a conditional to slightly improve
performance.
Modifications:
Modified hashIndex() to avoid using a conditional.
Result:
Slight performance improvement to hashIndex().
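A branch-free sketch of the idea (hypothetical helper): in Java the % operator can return a negative value, so adding the length and taking the remainder again folds the result into [0, length) without a conditional.

    static int hashIndex(int key, int length) {
        return (key % length + length) % length;
    }

    // hashIndex(-3, 8) == 5, hashIndex(3, 8) == 3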
Motivation:
IntObjectHashMap throws an exception when using negative values for
keys.
Modifications:
Changed hashIndex() to normalize the index if the mod operation returns
a negative number.
Result:
IntObjectHashMap supports negative key values.
Motivation:
An IPv6 string can have a zone index which is followed by the '%' sign.
When a user passes an IPv6 string with a zone index,
NetUtil.createByteArrayFromIpAddressString() returns an incorrect value.
Modification:
- Strip the zone index before conversion
Result:
An IPv6 string with a zone index is decoded correctly.
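A simplified sketch (hypothetical helper, not the actual NetUtil code): strip everything from the '%' sign onward before converting the address into bytes.

    // "fe80::1%eth2" -> "fe80::1"
    static String stripZoneIndex(String ipv6) {
        int percentPos = ipv6.indexOf('%');
        return percentPos == -1 ? ipv6 : ipv6.substring(0, percentPos);
    }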
Motivation:
This fixes bug #2848, which caused the Recycler to become unbounded and cache an infinite number of objects when maxCapacity is not a power of two. This can result in general sluggishness of the application and an OutOfMemoryError.
Modifications:
The maxCapacity check has been moved out of the test for whether the buffer has filled. The buffer is now also capped at maxCapacity and cannot grow past it as it jumps from one power of two to the next.
Additionally, a unit test was added to verify maxCapacity is honored even when it's not a power of two.
Result:
With these changes the user is able to use a custom maxCapacity value and not have it ignored. The unit test ensures this bug will not reappear.
Motivation:
There is no need to do redundant reads of head in peekNode() as we can just spin on next() until it becomes visible.
Modifications:
Remove redundant reads of head in peekNode(). This is based on @nitsanw's patch for Akka.
See https://github.com/akka/akka/pull/15596
Result:
Less volatile access.
Motivation:
Recently we changed the default value of SOMAXCONN that is used when we cannot determine it by reading /proc/sys/net/core/somaxconn. While doing this we forgot to update the javadocs to reflect the new default value.
Modifications:
List correct default value in the javadocs of SOMAXCONN.
Result:
Correct javadocs.
Related issue: #2407
Motivation:
The current fallback SOMAXCONN value is 3072. It is way too large
compared to the default SOMAXCONN value of popular OSes.
Modifications:
Decrease the fallback SOMAXCONN value to 128 or 200 depending on the
current OS
Result:
Saner fallback value
Motivation:
The calculation of the max wait time for HashedWheelTimerTest.testExecutionOnTime() was wrong and so the test sometimes failed.
Modifications:
Fix the max wait time.
Result:
No more test-failures
Motivation:
We forgot to do a null check on the cause parameter of
ChannelFuture.setFailure(cause)
Modifications:
Add a null check
Result:
Fixed issue: #2728
Motivation:
While trying to merge the ChannelOutboundBuffer changes we made last
week, I realized that we have quite a few conflicting changes between 4.1
and master. It was primarily because we added
ChannelOutboundBuffer.beforeAdd() and moved some logic there, such as
direct buffer conversion.
However, this is not possible with the changes we made for 4.0; we
made ChannelOutboundBuffer final, for example.
Maintaining multiple branches is already getting painful, and having
different cores will make it even worse, so I think we should keep the
differences between 4.0 and the other branches minimal.
Modifications:
- Move ChannelOutboundBuffer.safeRelease() to ReferenceCountUtil
- Add ByteBufUtil.threadLocalBuffer()
- Backported from ThreadLocalPooledDirectByteBuf
- Make most methods in AbstractUnsafe final
- Add AbstractChannel.filterOutboundMessage() so that a transport can
convert a message to another (e.g. heap -> off-heap), and also
reject unsupported messages
- Move all direct buffer conversions to filterOutboundMessage()
- Move all type checks to filterOutboundMessage()
- Move AbstractChannel.checkEOF() to OioByteStreamChannel, because it's
the only place it is used at all
- Remove ChannelOutboundBuffer.current(Object), because it's not used
anymore
- Add protected direct buffer conversion methods to AbstractNioChannel
and AbstractEpollChannel so that they can be used by their subtypes
- Update all transport implementations according to the changes above
Result:
- The missing extension point in 4.0 has been added.
- AbstractChannel.filterOutboundMessage()
- Thanks to the new extension point, we moved all transport-specific
logic from ChannelOutboundBuffer to each transport implementation
- We can copy most of the transport implementations in 4.0 to 4.1 and
master now, so that we have much less merge conflict when we modify
the core.
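A standalone illustration of what a filterOutboundMessage(...) override typically does (simplified sketch; the real logic lives in the concrete transport classes):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.FileRegion;

    final class OutboundFilterSketch {
        // Convert heap buffers to direct buffers and reject unsupported message types.
        static Object filterOutboundMessage(Object msg) {
            if (msg instanceof ByteBuf) {
                ByteBuf buf = (ByteBuf) msg;
                if (buf.isDirect()) {
                    return buf;
                }
                ByteBuf direct = Unpooled.directBuffer(buf.readableBytes());
                direct.writeBytes(buf);
                buf.release();
                return direct;
            }
            if (msg instanceof FileRegion) {
                return msg;
            }
            throw new UnsupportedOperationException(
                    "unsupported message type: " + msg.getClass().getSimpleName());
        }
    }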
Motivation:
As /proc/sys/net/core/somaxconn does not exist on non-Linux platforms, you see a noisy stack trace when the debug level is enabled while NetUtil's static initializer is executed.
Modifications:
Check if the file exists before trying to parse it.
Result:
Less noisy logging on non-Linux platforms.
Motivation:
While benchmarking the native transport, I noticed that gathering writes
are not as fast as expected. This was due to the fact that we have to do a
lot of array copies to put the buffer addresses into the iovec struct
array.
Modifications:
Introduce a new class called IovArray, which allows filling buffers
directly into an off-heap array of iovec structs, so that it can be
passed over to JNI without any extra array copies.
Result:
Big performance improvement when doing gathering writes:
Before:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 23.44ms 16.37ms 259.57ms 91.77%
Req/Sec 181.99k 31.69k 304.60k 78.12%
346544071 requests in 2.00m, 46.48GB read
Requests/sec: 2887885.09
Transfer/sec: 396.59MB
After:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.93ms 16.33ms 305.73ms 92.34%
Req/Sec 194.56k 33.75k 309.33k 77.04%
369617503 requests in 2.00m, 49.57GB read
Requests/sec: 3080169.65
Transfer/sec: 423.00MB
Motivation:
Due to a race condition while handling cancellation of TimerTasks, it was possible to corrupt the linked-list structure represented by HashedWheelBucket and so produce an NPE.
Modification:
Fix the problem by adding another MpscLinkedQueue which holds the cancellation tasks and processes them on each tick. This allows using no synchronization / locking at all, while introducing a latency of at most 1 tick before the TimerTask can be GC'ed.
Result:
No more NPE
- Rewrite with linear probing, no state array, compaction at cleanup
- Optimize keys() and values() to not use reflection
- Optimize hashCode() and equals() for efficient iteration
- Fixed equals() to not return true for equals(null)
- Optimize iterator to not allocate new Entry at each next()
- Added toString()
- Added some new unit tests
Motivation:
Now Netty has a few problems with null values.
Modifications:
- Check File in DiskFileUpload.toString().
If File is null we will get NPE when calling toString() method.
- Check Result<String> in MqttDecoder.decodeConnectionPayload(...).
- Check Unsafe before calling unsafe.getClass() in PlatformDependent0 static block.
- Removed unnecessary null check in WebSocket08FrameEncoder.encode(...).
Because msg.content() can not return null.
- Removed unnecessary null checks in ConcurrentHashMapV8.removeTreeNode(TreeNode<K,V>).
- Removed unnecessary null check in OioDatagramChannel.doReadMessages(List<Object>).
Because tmpPacket.getSocketAddress() always returns new SocketAddress instance.
- Removed unnecessary null check in OioServerSocketChannel.doReadMessages(List<Object>).
Because socket.accept() always returns new Socket instance.
- Pass Unpooled.buffer(0) instead of null inside CloseWebSocketFrame(boolean, int) constructor.
If we will pass null we will get NPE in super class constructor.
- Added throwing an IllegalStateException in GlobalEventExecutor.awaitInactivity(long, TimeUnit) if it is called before GlobalEventExecutor.execute(Runnable),
because currently we get an NPE. An IllegalStateException is better in this case.
- Fixed the null check in OpenSslServerContext.setTicketKeys(byte[]).
Now we throw an NPE if the byte[] is null.
Result:
Added new null checks where necessary, removed unnecessary null checks, and fixed some NPE problems.
Motivation:
Fix some typos in Netty.
Modifications:
- Fix potentially dangerous use of non-short-circuit logic in Recycler.transfer(Stack<?>).
- Removed double 'the the' in javadoc of EmbeddedChannel.
- Log an exception message if we cannot get SOMAXCONN in NetUtil's static block.
Modifications:
- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
A boxed primitive is created from a String, just to extract the unboxed primitive value.
- Removed unnecessary checks if file exists before call mkdirs() in NativeLibraryLoader and PlatformDependent.
Because the method mkdirs() has this check inside.
Conflicts:
codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskAttribute.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeAggregator.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeDecoder.java
Motivations:
In our new version of the HWT we used a kind of lazy cancellation of timeouts by putting them back into the queue and letting them be picked up on the next tick. This has multiple problems:
- we may corrupt the MpscLinkedQueue if the task is used as a tombstone
- this sometimes leads to an unnecessary delay, especially when someone executed some "heavy" logic in the TimerTask
Modifications:
Use a Lock per HashedWheelBucket for safe and fast removal.
Result:
Cancellation of tasks can be done quickly, so cancelled tasks can be GC'ed, and no infinite loop is possible anymore.
Motivation:
We use the nanoTime of the scheduled tasks to calculate the milliseconds to wait for a select operation to select something. Once these have elapsed we check if something was selected or some task is ready for processing. Unfortunately we do not take scheduled tasks into account here, so the selection loop will continue if only scheduled tasks are ready for processing. This will delay the execution of these tasks.
Modification:
- Check if a scheduled task is ready after selecting
- also make a tiny change in NioEventLoop to not trigger a rebuild if nothing was selected because the timeout was reached a few times in a row.
Result:
Execute scheduled tasks on time.
Motivation:
When a user tries to use Netty on Android it currently fails with "Could not find class 'sun.misc.Cleaner'".
Modification:
Encapsulate the sun.misc.Cleaner usage in an extra class to work around this issue.
Result:
Netty can be used on Android again.
Motivation:
During some refactoring we changed PlatformDependent0 to use sun.nio.ch.DirectBuffer for releasing direct buffers. This broke support for Android, as the class does not exist there and so an exception is thrown.
Modification:
Use the field offset again to get access to the Cleaner for releasing direct buffers.
Result:
Netty can be used on Android again.
Motivation:
Recycler is used in many places to reduce GC pressure but is still not as fast as possible because of the internal data structures used.
Modification:
- Rewrite Recycler to use a WeakOrderQueue which makes minimal guarantees about order and visibility for maximum performance.
- Recycling the same object multiple times without acquiring it will fail.
- Introduce a RecyclableMpscLinkedQueueNode which can be used for MpscLinkedQueueNodes that use the Recycler
These changes are based on @belliottsmith's work that was part of #2504.
Result:
Huge increase in performance.
4.0 branch without this commit:
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00000 thrpt 20 116026994.130 2763381.305 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00256 thrpt 20 110823170.627 3007221.464 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 01024 thrpt 20 118290272.413 7143962.304 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 04096 thrpt 20 120560396.523 6483323.228 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 16384 thrpt 20 114726607.428 2960013.108 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 65536 thrpt 20 119385917.899 3172913.684 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.617 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
4.0 branch with this commit:
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00000 thrpt 20 204158855.315 5031432.145 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00256 thrpt 20 205179685.861 1934137.841 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 01024 thrpt 20 209906801.437 8007811.254 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 04096 thrpt 20 214288320.053 6413126.689 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 16384 thrpt 20 215940902.649 7837706.133 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 65536 thrpt 20 211141994.206 5017868.542 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.648 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
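A minimal usage sketch of the Recycler (hypothetical pooled class; the method shapes follow the 4.0-era API and may differ in later versions):

    import io.netty.util.Recycler;

    public final class PooledObject {
        private static final Recycler<PooledObject> RECYCLER = new Recycler<PooledObject>() {
            @Override
            protected PooledObject newObject(Handle handle) {
                return new PooledObject(handle);
            }
        };

        private final Recycler.Handle handle;
        private Object value;

        private PooledObject(Recycler.Handle handle) {
            this.handle = handle;
        }

        public static PooledObject newInstance(Object value) {
            PooledObject obj = RECYCLER.get(); // reuses a previously recycled instance if available
            obj.value = value;
            return obj;
        }

        public void recycle() {
            value = null;
            // After this change, recycling the same instance twice without acquiring it fails fast.
            RECYCLER.recycle(this, handle);
        }
    }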
Motivation:
The tail node reference writes (by producer threads) are very likely to
invalidate the cache line holding the headRef which is read by the
consumer threads in order to access the padded reference to the head
node. This is because the resulting layout for the object is:
- header
- Object AtomicReference.value -> Tail node
- Object MpscLinkedQueue.headRef -> PaddedRef -> Head node
This is 'passive' false sharing where one thread reads and the other
writes. The current implementation suffers from further passive false
sharing potential from any and all neighbours to the queue object as no
pre/post padding is provided for the class fields.
Modifications:
Fix the memory layout by adding pre-post padding for the head node and
putting the tail node reference in the same object.
Result:
Fixed false sharing
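A hypothetical sketch of the layout trick: padding fields before and after the hot references keep them on their own cache lines, so producer writes to the tail reference no longer invalidate the line the consumer reads the head reference from.

    class PrePad                      { long p00, p01, p02, p03, p04, p05, p06, p07; }
    class HeadRef extends PrePad      { protected volatile Object head; }
    class HeadPostPad extends HeadRef { long p10, p11, p12, p13, p14, p15, p16, p17; }
    class TailRef extends HeadPostPad { protected volatile Object tail; }
    class FullyPadded extends TailRef { long p20, p21, p22, p23, p24, p25, p26, p27; }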
Motivation:
Maps with integer keys are used in several places (HTTP/2 code, for
example). To reduce the memory footprint of these structures, we need a
specialized map class that uses ints as keys.
Modifications:
Added IntObjectHashMap, which uses open addressing and double hashing
for collision resolution.
Result:
A new int-based map class that can be shared across Netty.
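Basic usage, assuming the io.netty.util.collection package: int keys avoid the Integer boxing a regular HashMap would require.

    import io.netty.util.collection.IntObjectHashMap;
    import io.netty.util.collection.IntObjectMap;

    public final class IntMapExample {
        public static void main(String[] args) {
            IntObjectMap<String> streams = new IntObjectHashMap<String>();
            streams.put(3, "headers");
            streams.put(5, "data");

            System.out.println(streams.get(3)); // headers
            streams.remove(5);
        }
    }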
Motivation:
When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.
FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.
Modifications:
- Moved FastThreadLocal and FastThreadLocalThread out of the internal
package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
initialized, and calling FastThreadLocal.removeAll() will remove all
thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduce InternalThreadLocalMap which is a mixture of hard-wired
thread local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
tells it to stop watching when its thread-local cache has been freed
by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9
Result:
- A user can remove all thread-local variables Netty created, as long as
he or she did not exit from the current thread. (Note that there's no
way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
we always implement a thread local variable via InternalThreadLocalMap
instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
performance of FastThreadLocal even more.
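A usage sketch of the public API (the value type is illustrative): the fast path is taken when the code runs on a FastThreadLocalThread, and removeAll() drops every thread-local variable Netty created for the calling thread.

    import io.netty.util.concurrent.FastThreadLocal;
    import io.netty.util.concurrent.FastThreadLocalThread;

    public final class FastThreadLocalExample {
        private static final FastThreadLocal<StringBuilder> BUILDER =
                new FastThreadLocal<StringBuilder>() {
                    @Override
                    protected StringBuilder initialValue() {
                        return new StringBuilder(64);
                    }
                };

        public static void main(String[] args) throws InterruptedException {
            Thread t = new FastThreadLocalThread(new Runnable() {
                @Override
                public void run() {
                    BUILDER.get().append("hello");
                    // In a managed environment, clean up before the thread is reused
                    // or retired to avoid class loader leaks.
                    FastThreadLocal.removeAll();
                }
            });
            t.start();
            t.join();
        }
    }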
Motivation:
Allow making use of our new FastThreadLocal wherever possible.
Modification:
Make use of an array to store FastThreadLocals and so also allow using it in the PooledByteBufAllocator that is instantiated by users.
The maximum size of the array is configurable via a system property to allow tuning it if needed. By default we use 64 entries, which should be good enough.
Result:
More flexible usage of FastThreadLocal
Motivation:
Provide a faster ThreadLocal implementation
Modification:
Add a "FastThreadLocal" which uses an EnumMap and a predefined fixed set of possible thread locals (all of the static instances created by netty) that is around 10-20% faster than standard ThreadLocal in my benchmarks (and can be seen having an effect in the direct PooledByteBufAllocator benchmark that uses the DEFAULT ByteBufAllocator which uses this FastThreadLocal, as opposed to normal instantiations that do not, and in the new RecyclableArrayList benchmark);
Result:
Improved performance
Motivation:
At the moment the HashedWheelTimer will only remove cancelled Timeouts once the HashedWheelBucket is processed again. Until then the instance cannot be GC'ed, as there are still strong references to it even if the user no longer references it. This can waste a lot of memory even if the Timeout was cancelled before.
Modification:
Add a new queue which holds CancelTasks that will be processed on each tick to remove cancelled Timeouts. Because all of this is done only by the worker thread, there is no need for synchronization and only one extra object creation is needed when cancel() is executed. For addTimeout(...) no new overhead is introduced.
Result:
Less memory usage for cancelled Timeouts.
Motivation:
MpscLinkedQueue has various issues:
- It does not work without sun.misc.Unsafe.
- Some field names are confusing.
- Node.tail does not refer to the tail node really.
- The tail node is the starting point of iteration. I think the tail
node should be the head node and vice versa to reduce confusion.
- Some important methods are not implemented (e.g. iterator())
- Not serializable
- Potential false cache sharing problem due to lack of padding
- MpscLinkedQueue extends AtomicReference and thus exposes various
operations that mutates the internal state of the queue directly.
Modifications:
- Use AtomicReferenceFieldUpdater wherever possible so that we do not
use Unsafe directly. (e.g. use lazySet() instead of putOrderedObject)
- Extend AbstractQueue to implement most operations
- Implement serialization and iterator()
- Rename tail to head and head to tail to reduce confusion.
- Rename Node.tail to Node.next.
- Fix a leak where the references in the removed head are not cleared
properly.
- Add Node.clearMaybe() method so that the value of the new head node
is cleared if possible.
- Add some comments for my own educational purposes
- Add padding to the head node
- Add FullyPaddedReference and RightPaddedReference for future reuse
- Make MpscLinkedQueue package-local so that a user cannot access the
dangerous yet public operations exposed by the superclass.
- MpscLinkedQueue.Node becomes MpscLinkedQueueNode, a top level class
Result:
- It's more like a drop-in replacement of ConcurrentLinkedQueue for the
MPSC case.
- Works without sun.misc.Unsafe
- Code potentially easier to understand
- Fixed leak (related: #2372)
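A hypothetical node sketch of the Unsafe-free approach: an AtomicReferenceFieldUpdater over a plain volatile field provides the same ordered write (lazySet) that putOrderedObject gave.

    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    final class Node<T> {
        @SuppressWarnings("rawtypes")
        private static final AtomicReferenceFieldUpdater<Node, Node> NEXT_UPDATER =
                AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "next");

        private volatile Node<T> next;
        T value;

        void lazySetNext(Node<T> newNext) {
            NEXT_UPDATER.lazySet(this, newNext); // ordered store without a full volatile write
        }

        Node<T> next() {
            return next;
        }
    }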
Motivation:
When running Netty on a container environment, the container will often
complain about the lingering threads such as the worker threads of
ThreadDeathWatcher and GlobalEventExecutor. We should provide an
operation that allows a user to wait until such threads are terminated.
Modifications:
- Add awaitInactivity()
- (misc) Fix typo in GlobalEventExecutorTest
- (misc) Port ThreadDeathWatcher's CAS-based thread life cycle management
to GlobalEventExecutor
Result:
- Fixes #2084
- Less overhead on task submission of GlobalEventExecutor
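A usage sketch for container shutdown (timeout values are arbitrary): wait for the lingering background threads to terminate before the application's class loader is released.

    import io.netty.util.ThreadDeathWatcher;
    import io.netty.util.concurrent.GlobalEventExecutor;
    import java.util.concurrent.TimeUnit;

    public final class ShutdownExample {
        public static void main(String[] args) throws InterruptedException {
            // ... after shutting down all event loop groups:
            GlobalEventExecutor.INSTANCE.awaitInactivity(5, TimeUnit.SECONDS);
            ThreadDeathWatcher.awaitInactivity(5, TimeUnit.SECONDS);
        }
    }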
Motivation:
PooledByteBufAllocator's thread local cache and
ReferenceCountUtil.releaseLater() are in need of a way to run an
arbitrary logic when a certain thread is terminated.
Modifications:
- Add ThreadDeathWatcher, which spawns a low-priority daemon thread
that watches a list of threads periodically (every second) and
invokes the specified tasks when the associated threads are not alive
anymore
- Start-stop logic based on CAS operation proposed by @tea-dragon
- Add debug-level log messages to see if ThreadDeathWatcher works
Result:
- Fixes #2519 because we don't use GlobalEventExecutor anymore
- Cleaner code
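A usage sketch of the watcher (the cleanup task is illustrative): the low-priority watcher thread polls roughly every second and runs the task once the watched thread is no longer alive.

    import io.netty.util.ThreadDeathWatcher;

    public final class WatchExample {
        public static void main(String[] args) {
            Thread worker = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(1000); // simulate per-thread work
                    } catch (InterruptedException ignored) {
                        // ignore
                    }
                }
            });
            worker.start();

            // Runs once 'worker' has terminated.
            ThreadDeathWatcher.watch(worker, new Runnable() {
                @Override
                public void run() {
                    System.out.println("worker terminated, freeing its thread-local caches");
                }
            });
        }
    }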
Motivation:
The current DefaultAttributeMap causes an infinite loop when the user removes an attribute and creates the same attribute again. This regression was introduced by c3bd7a8ff1.
Modification:
Correctly break out of the loop.
Result:
No more infinite loop.
Motivation:
When (listeners == null && lateListeners == null) and (stackDepth >= MAX_LISTENER_STACK_DEPTH), the listener is not notified at all. The discard client does not work.
Modification:
Make sure to submit the notification task.
Result:
The discard client works again and all listeners are notified.
Motivation:
During a large memory copy, safepoint polling is disabled, hindering
accurate profiling.
Modifications:
Only copy up to 1 MiB per Unsafe.copyMemory()
Result:
Potentially more reliable performance
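A sketch of the chunking idea (hypothetical helper; the Unsafe instance must be obtained separately): splitting one huge copy into bounded chunks lets the JVM reach a safepoint between chunks.

    final class ChunkedCopy {
        private static final long MAX_CHUNK = 1024 * 1024; // 1 MiB per copyMemory() call

        static void copyMemory(sun.misc.Unsafe unsafe, long srcAddr, long dstAddr, long length) {
            while (length > 0) {
                long chunk = Math.min(length, MAX_CHUNK);
                unsafe.copyMemory(srcAddr, dstAddr, chunk);
                srcAddr += chunk;
                dstAddr += chunk;
                length -= chunk;
            }
        }
    }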