Motivation:
Sometimes it is useful to disable recycling completely if memory constraints are very tight.
Modifications:
Allow -Dio.netty.recycler.maxCapacity=0 to be used to disable recycling completely.
Result:
It's possible to disable recycling now.
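For illustration, a minimal sketch of how the property could be supplied (the class below is hypothetical; the property has to be set before the Recycler class is initialized, since the value is read during class initialization):

    public final class DisableRecyclingExample {
        public static void main(String[] args) {
            // Equivalent to launching with: java -Dio.netty.recycler.maxCapacity=0 ...
            System.setProperty("io.netty.recycler.maxCapacity", "0"); // 0 disables recycling
            // ... bootstrap the Netty application afterwards ...
        }
    }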
Motivation:
The javadocs are incorrect and so give a false impression of the supported usage patterns.
Modifications:
- Fix javadocs of which operations are allowed from multiple threads concurrently.
- Let isEmpty() work concurrently.
Result:
Usage patterns are now documented correctly.
Motivation:
The StringUtil class creates a Formatter object but does not close it. There are also two utility methods which would be generally useful.
Modifications:
- Close the Formatter
- Add length and isNullOrEmpty
Result:
No more resource leaks. Additional utility methods.
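As a rough sketch of the two helpers (the exact signatures in StringUtil are assumptions):

    static int length(CharSequence s) {
        return s == null ? 0 : s.length();   // null-safe length
    }

    static boolean isNullOrEmpty(CharSequence s) {
        return s == null || s.length() == 0; // true for null or ""
    }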
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.
Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications to code that was causing hot spots.
Result:
Performance is now on par with 4.0.
Motivation:
We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that the performance of randomly accessing a header is also quite poor. The main
concern, however, was memory usage: profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot because the underlying hash maps may
have to be resized several times as new headers are inserted.
This is tracked as issue #3600.
Modifications:
We redesigned DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand, and it allows each concrete headers
implementation to provide its own map, or even to use different headers implementations
for processing requests and writing responses; i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third-party collections library
as a dependency, one can simply plug in one of those very fast and memory-efficient map
implementations and get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
Result:
- Significantly fewer lines of code in the implementation. While the total commit is still
roughly 400 lines smaller, the actual implementation shrank by a lot more; I just added some more
tests and microbenchmarks.
- Overall performance is up. The new implementation should be significantly faster
for insertion and retrieval, but slower for iteration. There is simply
no way a TreeMap can match the iteration performance of a linked list (as used in the
old headers implementation). That's totally fine though, because when looking at the
benchmark results @ejona86 pointed out that the performance of the headers is completely
dominated by insertion; insertion is so much faster in the new implementation
that it makes up for the slower iteration several times over. You can't iterate what you haven't
inserted. I demonstrate that in this spreadsheet [1]. (Actually, iteration performance is
only down for HTTP; it's significantly improved for HTTP/2.)
- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
was generated by [2] using JOL.
- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
improved for HTTP, SPDY and STOMP as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
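For illustration only, a sketch of the map-injected design (class and method names are assumptions, not the actual Netty API):

    import java.util.Map;

    class MapBackedHeaders {
        private final Map<CharSequence, CharSequence> map;

        MapBackedHeaders(Map<CharSequence, CharSequence> map) {
            this.map = map;  // e.g. a TreeMap with a case-insensitive comparator
        }

        CharSequence get(CharSequence name) {
            return map.get(name);            // only Map primitives are used
        }

        MapBackedHeaders set(CharSequence name, CharSequence value) {
            map.put(name, value);
            return this;
        }
    }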
Motivation:
The HttpObjectDecoder is on the hot code path for the http codec. There are a few hot methods which can be modified to improve performance.
Modifications:
- Modify AppendableCharSequence to provide unsafe methods which don't need to re-check bounds for every call.
- Update HttpObjectDecoder methods to take advantage of new AppendableCharSequence methods.
Result:
Performance boost for decoding HTTP objects.
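A sketch of the checked vs. unchecked append idea (class and field names are assumptions):

    final class UncheckedAppendBuffer {
        private final char[] chars;
        private int pos;

        UncheckedAppendBuffer(int capacity) {
            chars = new char[capacity];
        }

        void append(char c) {                 // checked variant: validates on every call
            if (pos == chars.length) {
                throw new IllegalStateException("buffer full");
            }
            chars[pos++] = c;
        }

        void unsafeAppend(char c) {           // unchecked variant: caller reserved capacity up front
            chars[pos++] = c;
        }
    }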
Motivation:
We should support XXXCollections methods for all primitive map types.
Modifications:
Removed PrimitiveCollections and added a template for XXXCollections.
Result:
Fixes #4001
Motivation:
It would be useful to support the Java `Map` interface in our primitive maps.
Modifications:
Renamed current methods to "pXXX", where p is short for "primitive". Made the template for all primitive maps extend the appropriate Map interface.
Result:
Fixes #3970
Motivation:
Previously we used a purge task that would remove cancelled scheduled tasks from the internal queue. This could introduce some delay and so use a lot of memory even though the task itself was already cancelled.
Modifications:
Schedule the removal of the task from the queue via the EventLoop if the cancel operation is not performed in the EventLoop thread, or remove it directly if the thread that cancels the scheduled task is the EventLoop thread.
Result:
A cancelled ScheduledFutureTask can now be GC'ed sooner.
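Roughly, the cancellation path looks like this (a simplified sketch; the real queue and task types differ):

    import java.util.ArrayDeque;
    import java.util.Queue;
    import io.netty.channel.EventLoop;

    final class ScheduledTaskCanceller {
        private final EventLoop eventLoop;
        // Only ever touched from the EventLoop thread, which is why off-thread
        // cancellations must be rerouted via execute(...).
        private final Queue<Runnable> scheduledTaskQueue = new ArrayDeque<Runnable>();

        ScheduledTaskCanceller(EventLoop eventLoop) {
            this.eventLoop = eventLoop;
        }

        void removeCancelled(final Runnable task) {
            if (eventLoop.inEventLoop()) {
                scheduledTaskQueue.remove(task);      // we own the queue here
            } else {
                eventLoop.execute(new Runnable() {    // hand the removal over to the EventLoop
                    @Override
                    public void run() {
                        scheduledTaskQueue.remove(task);
                    }
                });
            }
        }
    }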
Motivation:
PoolThreadCache only cached allocations if the allocation and deallocation thread were the same. This is not optimal, as people often write from a different thread than the actual EventLoop thread.
Modification:
- Add MpscArrayQueue which was forked from jctools and lightly modified.
- Use MpscArrayQueue for caches and always add buffer back to the cache that belongs to the allocation thread.
Result:
PoolThreadCache is now also usable, and so gives performance improvements, when the allocation and deallocation threads are different.
Performance when using the same thread for allocation and deallocation is noticeably worse than before.
Motivation:
The PooledByteBufAllocator is more or less a black box at the moment. We need to expose some metrics to allow the user to get a better idea of how to tune it.
Modifications:
- Expose different metrics via PooledByteBufAllocator
- Add *Metrics interfaces
Result:
It is now easy to gather metrics and details about the PooledByteBufAllocator and so get a better understanding of resource usage etc.
Motivation:
There is an error in the ByteString test logic which is resulting in test failures.
Modifications:
- Fix the loop iteration to use the loop iteration variable instead of a fixed index.
Result:
The tests now exercise the intended indices.
Motivation:
In the SslHandler we schedule a timeout at which we close the Channel if a timeout is detected during close_notify. Because this can race with the notification of the flushFuture, we could see an IllegalStateException when the Channel was closed.
Modifications:
- Use trySuccess() and tryFailure(...) to guard against the race.
Result:
No more race.
Motivation:
All read operations should be safe to execute from multiple threads. This was not the case, which could produce a livelock.
Modifications:
Modify the methods so they are safe to call from multiple threads.
Result:
No more livelock.
Motivation:
There are various known issues in netty-codec-dns:
- Message types are not interfaces, which can make it difficult for a
user to implement his/her own message implementation.
- Some class names and field names do not match with the terms in the
RFC.
- The support for decoding a DNS record was limited. A user had to
encode and decode by him/herself.
- The separation of DnsHeader from DnsMessage was unnecessary, although
it is fine conceptually.
- Buffer leak caused by DnsMessage was difficult to analyze, because the
leak detector tracks down the underlying ByteBuf rather than the
DnsMessage itself.
- DnsMessage assumes DNS-over-UDP.
- To send an EDNS message, a user had to create a new DNS record class
instance unnecessarily.
Modifications:
- Make all message types interfaces and add default implementations
- Rename some classes, properties, and constants to match the RFCs
- DnsResource -> DnsRecord
- DnsType -> DnsRecordType
- and many more
- Remove DnsClass and use an integer to support EDNS better
- Add DnsRecordEncoder/DnsRecordDecoder and their default
implementations
- DnsRecord does not require RDATA to be ByteBuf anymore.
- Add DnsRawRecord as the catch-all record type
- Merge DnsHeader into DnsMessage
- Make ResourceLeakDetector track AbstractDnsMessage
- Remove DnsMessage.sender/recipient properties
- Wrap DnsMessage with AddressedEnvelope
- Add DatagramDnsQuery and DatagramDnsResponse for ease of use
- Rename DnsQueryEncoder to DatagramDnsQueryEncoder
- Rename DnsResponseDecoder to DatagramDnsResponseDecoder
- Miscellaneous changes
- Add StringUtil.TAB
Result:
- Cleaner API
- Can support DNS-over-TCP more easily in the future
- Reduced memory footprint in the default DnsQuery/Response
implementations
- Better leak tracking for DnsMessages
- Possibility to introduce new DnsRecord types in the future and provide
full record encoder/decoder implementation.
- No unnecessary instantiation for an EDNS pseudo resource record
Motivation:
Many projects need some kind of Channel/Connection pool implementation. While the protocols differ, many things can be shared, so we should provide a generic API and implementation.
Modifications:
Add ChannelPool / ChannelPoolMap API and implementations.
Result:
Reusable / Generic pool implementation that users can use.
Motivation:
'length2 == 0' is not reachable because length1 and length2 are the same at this point.
Modifications:
Removed 'length2 == 0'.
Result:
Cleaner code.
Motivation:
Currently, valueOf() and newInstance() use almost the same code to create a new constant.
For maintainability, it's better to share the duplicated code between them.
Modifications:
Added new private functions.
- checkNotNullAndNotEmpty() is for checking whether the name of a constant is null or empty.
- newConstant0() is for creating a new constant.
Result:
- Compact source code
- Improved maintainability
Motivation:
When a SecurityManager is in place that prevents reading the somaxconn file, trying to bootstrap a channel later will result in a ClassNotFoundError.
Modifications:
- Read the file in a privileged block.
Result:
No more ClassNotFoundError when a SecurityManager is in place.
Motivation:
The ByteString class currently assumes the underlying array will be a complete representation of data. This is limiting as it does not allow a subsection of another array to be used. This forces copy operations to take place to compensate for the lack of API support.
Modifications:
- add arrayOffset method to ByteString
- modify all ByteString and AsciiString methods that loop over or index into the underlying array to use this offset
- update all code that uses ByteString.array to ensure it accounts for the offset
- add unit tests to test the implementation respects the offset
Result:
ByteString and AsciiString can represent a sub region of a byte[].
Motivation:
static Package getPackage(Class<?> c) uses a synchronized block internally.
Thanks to @jingene for the hint and initial report of the issue.
Modifications:
- Use a simple lastIndexOf(...) and substring(...) for a faster implementation
Result:
No more lock contention.
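A sketch of the lock-free replacement (the helper name is hypothetical):

    static String packageName(Class<?> clazz) {
        String name = clazz.getName();
        int lastDot = name.lastIndexOf('.');
        return lastDot < 0 ? "" : name.substring(0, lastDot); // no java.lang.Package lookup involved
    }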
Motivation:
Each different *ChannelOption extended ChannelOption in 4.0, which we changed in 4.1. This is a breaking change in terms of the API, so we need to ensure we keep the old hierarchy.
Modifications:
- Let all *ChannelOption extend ChannelOption
- Add back constructor and mark it as @deprecated
Result:
No API breakage between 4.0 and 4.1
Motivation:
The current implementation does byte by byte comparison, which we have seen
can be a performance bottleneck when the AsciiString is used as the key in
a Map.
Modifications:
Use sun.misc.Unsafe (on supporting platforms) to compare up to eight bytes at a time
and get closer to the performance of String.equals(Object).
Result:
Significant improvement (2x - 6x) in performance over the current implementation.
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10 thrpt 10 118843477.518 2347259.347 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 50 thrpt 10 43910319.773 198376.996 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100 thrpt 10 26339969.001 159599.252 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 1000 thrpt 10 2873119.030 20779.056 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10000 thrpt 10 306370.450 1933.303 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100000 thrpt 10 25750.415 108.391 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10 thrpt 10 248077563.510 635320.093 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 50 thrpt 10 128198943.138 614827.548 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100 thrpt 10 86195621.349 1063959.307 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 1000 thrpt 10 16920264.598 61615.365 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10000 thrpt 10 1687454.747 6367.602 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100000 thrpt 10 153717.851 586.916 ops/s
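A simplified sketch of the long-at-a-time comparison (assuming a sun.misc.Unsafe instance named UNSAFE has been obtained; the real code also handles platforms without Unsafe):

    static boolean bytesEqual(byte[] a, int aOff, byte[] b, int bOff, int length) {
        final long base = UNSAFE.arrayBaseOffset(byte[].class);
        int i = 0;
        for (; i <= length - 8; i += 8) {      // compare eight bytes per step
            if (UNSAFE.getLong(a, base + aOff + i) != UNSAFE.getLong(b, base + bOff + i)) {
                return false;
            }
        }
        for (; i < length; i++) {              // compare the remaining tail byte by byte
            if (a[aOff + i] != b[bOff + i]) {
                return false;
            }
        }
        return true;
    }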
Motivation:
ByteString#hashCode() trashes its own hash code if it's being accessed concurrently
Modifications:
Pull the ByteString#hash into a local variable and calculate it locally.
Result:
ByteString#hashCode() is no longer returning a junk value.
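The race-safe shape of such a hashCode() is roughly the following (computeHash() is a hypothetical pure helper, hash an assumed cache field):

    @Override
    public int hashCode() {
        int h = hash;          // read the cached field exactly once
        if (h == 0) {
            h = computeHash(); // pure function of the bytes
            hash = h;          // benign race: concurrent callers may recompute, never corrupt
        }
        return h;
    }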
Motivation:
The IntObjectHashMap benchmarks show the Agrona collections to be faster on put, lookup, and remove. One major difference is that we're using 2 modulus operations each time we increment the position index while iterating. Agrona uses a mask instead.
Modifications:
Modified the KObjectHashMap to use masking rather than modulus when wrapping the position index. This requires that the capacity be a power of 2.
Result:
Improved performance of IntObjectHashMap.
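The wrapping change in a nutshell (a sketch; the mask form requires the capacity to be a power of two):

    static int nextIndexModulus(int index, int capacity) {
        return (index + 1) % capacity;        // old: one division per probe step
    }

    static int nextIndexMask(int index, int mask) {
        return (index + 1) & mask;            // new: a single AND, mask == capacity - 1
    }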
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.
Results:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
Motivation:
Attribute.getAndRemove() will return the value but also remove the AttributeKey itself from the AttributeMap. This may not be
what you want, as you may want to keep an instance of it and just set it again later. Document the contract so the user knows what to expect.
Modifications:
- Make it clear when to use AttributeKey.getAndRemove() / AttributeKey.remove() and when AttributeKey.getAndSet(null) / AttributeKey.set(null).
Result:
Less surprising behaviour.
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the ssl pointer is freed
- Correctly support different failure behaviours
- add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
Currently we have IntObjectMap/HashMap, but it will be useful to support other primitive-based maps.
Modifications:
Moved the code from the current maps into template files and run Groovy code from common/pom.xml to apply the templates.
Result:
Autogeneration of int and char-based hash maps.
Motivation:
We are allocating a hash map for every HTTP/2 stream to store its children.
Most streams are leaves in the priority tree and don't have children.
Modification:
- Only allocate children when we actually use them.
- Make EmptyIntObjectMap not throw an UnsupportedOperationException on remove, but return null instead (as is stated in its javadoc).
Result:
Fewer unnecessary allocations.
Motivation:
PrimitiveCollections is not in the 4.1 branch. It is needed by HTTP/2.
Modifications:
Backport this class.
Result:
PrimitiveCollections is in 4.1.
Motivation:
The Http2Settings class currently disallows setting non-standard settings, which violates the spec.
Modifications:
Updated Http2Settings to permit arbitrary settings. Also adjusting the default initial capacity to allow setting all of the standard settings without reallocation.
Result:
Fixes #3560
Motivation:
On a system where IPv4 and IPv6 are supported, a user may want to use -Djava.net.preferIPv4Stack=true to restrict it to use IPv4 only.
This is currently ignored with the epoll transport.
Modifications:
Respect java.net.preferIPv4Stack system property.
Result:
-Djava.net.preferIPv4Stack=true will have the effect the user is looking for.
Motivation:
When remove0() is called multiple times for a DefaultAttribute it can cause corruption of the internal linked-list structure.
Modifications:
- Ensure remove0() cannot cause corruption by nulling out the prev and next references.
Result:
No more corruption possible
Motivation:
At the moment when EmbeddedChannel is used and a ChannelHandler tries to schedule a task, it will throw an UnsupportedOperationException. This makes it impossible to test these handlers or even reuse them with EmbeddedChannel.
Modifications:
- Factor out reusable scheduling code into AbstractSchedulingEventExecutor
- Let EmbeddedEventLoop and SingleThreadEventExecutor extend AbstractSchedulingEventExecutor
- Add EmbeddedChannel.runScheduledPendingTasks(), which allows running all scheduled tasks that are ready
Result:
EmbeddedChannel is now usable even with ChannelHandlers that try to schedule tasks.
Motivation:
At the moment if you want to return an HTTP header containing multiple
values you have to set/add that header once with the values wanted. If
you use set/add with an array/iterable, multiple HTTP header fields will
be returned in the response.
Note that this is indeed a suggestion and additional work and tests
should be added. This is mainly to bring up a discussion.
Modifications:
Added a flag to specify that when multiple values exist for a single
HTTP header, they are added as a comma-separated string.
In addition, added a method to StringUtil to help escape comma-separated
value CharSequences.
Result:
Allows for responses to be smaller.
Motivation:
We should allow getting a ChannelOption/AttributeKey from a String. This will make it a lot easier to make use of configuration files in applications.
Modifications:
- Add exists(...) and newInstance(...) methods to ChannelOption and AttributeKey and alter valueOf(...) to return an existing instance for a String or create a new one.
- Add unit tests.
Result:
Much more flexible usage of ChannelOption and AttributeKey.
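For illustration, configuration-driven code could look roughly like this (the bootstrap variable and the chosen option name are assumptions):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.ChannelOption;

    static void applyFromConfig(Bootstrap bootstrap) {
        String name = "SO_KEEPALIVE";                      // e.g. read from a config file
        ChannelOption<Object> option = ChannelOption.exists(name)
                ? ChannelOption.<Object>valueOf(name)      // reuse the existing constant
                : ChannelOption.<Object>newInstance(name); // or register a new one
        bootstrap.option(option, Boolean.TRUE);
    }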
While implementing netty-handler-proxy, I realized various issues in our
current socksx package. Here's the list of the modifications and their
background:
- Split message types into interfaces and default implementations
- so that a user can implement an alternative message implementations
- Use classes instead of enums when a user might want to define a new
constant
- so that a user can extend SOCKS5 protocol, such as:
- defining a new error code
- defining a new address type
- Rename the message classes
- to avoid abbreviated class names. e.g:
- Cmd -> Command
- Init -> Initial
- so that the class names align better with the protocol
specifications. e.g:
- AuthRequest -> PasswordAuthRequest
- AuthScheme -> AuthMethod
- Rename the property names of the messages
- so that the property names align better with the field names in the
protocol specifications
- Improve the decoder implementations
- Give a user more control over when a decoder has to be removed
- Use DecoderResult and DecoderResultProvider to handle decode failure
gracefully. i.e. no more Unknown* message classes
- Add SocksPortUnificationServerHandler since it's useful to the users
who write a SOCKS server
- Cleaned up and moved from the socksproxy example
Motivation:
isRoot() is an expensive operation. We should avoid calling it if
possible.
Modifications:
Move the isRoot() checks to the end of the 'if' block, so that isRoot()
is evaluated only when really necessary.
Result:
isRoot() is evaluated only when SO_BROADCAST is set and the bind address
is an any-local address.
Motivation:
When a user sees an error message, sometimes he or she does not know
what exactly he or she has to do to fix the problem.
Modifications:
Log the URL of the wiki pages that might help the user troubleshoot.
Result:
We are more friendly.
Related: #3166
Motivation:
When the recyclable object created at one thread is returned at the
other thread, it is stored in a WeakOrderedQueue.
The objects stored in the WeakOrderedQueue are added back to the stack by
WeakOrderedQueue.transfer() when the owner thread runs out of recyclable
objects.
However, WeakOrderedQueue.transfer() does not have any mechanism that
prevents the stack from growing beyond its maximum capacity.
Modifications:
- Make WeakOrderedQueue.transfer() increase the capacity of the stack
only up to its maximum
- Add tests for the cases where the recyclable object is returned at the
non-owner thread
- Fix a bug where Stack.scavengeSome() does not scavenge the objects
when it's the first time it ran out of objects and thus its cursor is
null.
- Overall clean-up of scavengeSome() and transfer()
Result:
The capacity of Stack never increases beyond its maximum.
Motivation:
io.netty.util.internal.PlatformDependent.isRoot() depends on the IS_ROOT field which is filled in during class initialization. This spawns processes and consumes resources, which are not generally necessary to the complete functioning of that class.
Modifications:
This switches the class to lazily initialize this field inside the isRoot() method using double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Result:
The first call to isRoot() will be slightly slower; in exchange, class loading is faster, uses fewer resources, and platform errors are avoided unless necessary.
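The double-checked locking shape described above, as a sketch (class, field and helper names are assumptions, not the actual Netty code):

    final class LazyRoot {
        private static volatile Boolean isRoot;          // null until the first call

        static boolean isRoot() {
            Boolean root = isRoot;
            if (root == null) {
                synchronized (LazyRoot.class) {
                    root = isRoot;
                    if (root == null) {
                        root = determineIfRoot();        // the expensive, process-spawning check
                        isRoot = root;
                    }
                }
            }
            return root;
        }

        private static boolean determineIfRoot() {
            return false;                                // placeholder for the real platform check
        }
    }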
- Parameterize DomainNameMapping to make it useful for other use cases
than just mapping to SslContext
- Move DomainNameMapping to io.netty.util
- Clean-up the API documentation
- Make SniHandler.hostname and sslContext volatile because they can be
accessed by non-I/O threads
Motivation:
When we need to host multiple server names with a single IP, the server
must support the Server Name Indication extension to serve clients
with the proper certificate. So the SniHandler will host multiple
SslContext(s) and append an SslHandler for the requested hostname.
Modification:
* Added SniHandler to host multiple certificates in a single server
* Test case
Result:
Users can use the SniHandler to host multiple certificates at a time.
It's server-side only.
Motivation:
8fbc513 introduced stray warnings in callsites of
PromiseAggregator#add and PromiseNotifier#(...).
Modifications:
This commit adds the @SafeVarargs annotation to PromiseAggregator#add
and PromiseNotifier#(...). As Netty is built with JDK7, this is a
recognized annotation and should not affect runtime VM versions 1.5 and
1.6.
Result:
Building Netty with JDK7 will no longer produce warnings in the
callsites mentioned above.
Motivation:
Although the new IntObjectMap.values() that returns Collection is
useful, the removed values(Class<V>) that returns an array is also
useful. It's also good for backward compatibility.
Modifications:
- Add IntObjectMap.values(Class<V>) back
- Miscellaneous improvements
- Cache the collection returned by IntObjectHashMap.values()
- Fix inspector warnings
- Update the IntObjectHashMapTest to test both values()
Result:
- Backward compatibility
- Potential performance improvement of values()
Motivation:
The mentioned commit contains a bug fix and an improvement in
IntObjectHashMap that requires backporting.
Modifications:
Update IntObjectMap, IntObjectHashMap, and IntObjectHashMapTest
Result:
Easier to backport HTTP/2 and other changes in master in the future
Motivation:
Found performance issues via FindBugs and PMD.
Modifications:
- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of the authority.
- Improved usage of StringBuilders.
Result:
Performance improvements.
Motivation:
NetUtil.isValidIpV6Address() handles the interface name in IPv6 address
incorrectly. For example, it returns false for the following addresses:
- ::1%lo
- ::1%_%_in_name_
Modifications:
- Strip the square brackets before validation for simplicity
- Strip the part after the percent sign completely before validation for
simplicity
- Simplify and reformat NetUtilTest
Result:
- The interface names in IPv6 addresses are handled correctly.
- NetUtilTest is cleaner
Motivation:
ChannelPromiseAggregator and ChannelPromiseNotifiers only allow
consumers to work with Channels as the result type. Generic versions
of these classes allow consumers to aggregate or broadcast the results
of an asynchronous execution with other result types.
Modifications:
Add PromiseAggregator and PromiseNotifier. Add unit tests for both.
Remove code in ChannelPromiseAggregator and ChannelPromiseNotifier and
modify them to extend the new base classes.
Result:
Consumers can now aggregate or broadcast the results of an asynchronous
execution with results types other than Channel.
Motivation:
CollectionUtils has only one method and it is used only in DefaultHeaders.
Modification:
Move CollectionUtils.equals() to DefaultHeaders and make it private
Result:
One less class to expose in our public API
Motivation:
The header class hierarchy and algorithm were improved on the master branch for version 5.x. These improvements should be backported to the 4.1 baseline.
Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258
Result:
Header improvements in master branch are available in 4.1 branch.
Motivation:
Improvements were made on the main line to support ALPN and mutual
authentication for TLS. These should be backported.
Modifications:
- Backport commits from the master branch
- f8af84d599
- e74c8edba3
Result:
Support for ALPN and mutual authentication.
Motivation:
The Java implementations of Inet6Address.getHostName() do not follow RFC 5952 (http://tools.ietf.org/html/rfc5952#section-4) for the recommended string representation. This introduces inconsistencies when integrating with other technologies that do follow the RFC.
Modifications:
- NetUtil.java to have another public static method to convert an InetAddress to a string. Inet4Address will use the Java InetAddress.getHostAddress() implementation and there will be new code to implement the RFC 5952 IPv6 string conversion.
- New unit tests to test the new method
Result:
Netty provides an RFC 5952 compliant string conversion method for IPv6 addresses.
Motivation:
Issue #3004 shows that the "=" character was not supported as it should be in
the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Add 2 methods in StringUtil
- split with maxPart argument: String split with max parts only (to prevent multiple '='
from being the source of extra splits when not needed)
- substringAfter: String part after delimiter (since first part is not
needed)
Use those methods in HttpPostRequestDecoder.
Changed the HttpPostRequestDecoderTest to check using a boundary
beginning with "=".
Results:
The fix improves stability and fixes the issue.
Motivation:
Using a needless local copy of keys.length.
Modifications:
Using keys.length explicitly everywhere.
Result:
Slight performance improvement of hashIndex.
Motivation:
The hashIndex method currently uses a conditional to handle negative
keys. This could be done without a conditional to slightly improve
performance.
Modifications:
Modified hashIndex() to avoid using a conditional.
Result:
Slight performance improvement to hashIndex().
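One branch-free way to express the index (a sketch; the exact expression used in the commit may differ):

    static int hashIndex(int key, int length) {
        // Fold a possibly negative remainder back into [0, length) without an 'if'.
        return (key % length + length) % length;
    }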
Motivation:
IntObjectHashMap throws an exception when using negative values for
keys.
Modifications:
Changed hashIndex() to normalize the index if the mod operation returns
a negative number.
Result:
IntObjectHashMap supports negative key values.
Motivation:
An IPv6 string can have a zone index which is followed by the '%' sign.
When a user passes an IPv6 string with a zone index,
NetUtil.createByteArrayFromIpAddressString() returns an incorrect value.
Modification:
- Strip the zone index before conversion
Result:
An IPv6 string with a zone index is decoded correctly.
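A sketch of the pre-processing step (the helper name is hypothetical):

    static String stripZoneIndex(String ipv6) {
        int percent = ipv6.indexOf('%');
        return percent < 0 ? ipv6 : ipv6.substring(0, percent); // drop "%eth0", "%1", ...
    }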
Motivation:
This fixes bug #2848 which caused Recycler to become unbounded and cache an infinite number of objects when maxCapacity is not a power of two. This can result in general sluggishness of the application and an OutOfMemoryError.
Modifications:
The test for maxCapacity has been moved out of the check for whether the buffer has filled. The buffer is now also capped at maxCapacity and cannot grow over it as it jumps from one power of two to the next.
Additionally, a unit test was added to verify maxCapacity is honored even when it's not a power of two.
Result:
With these changes the user is able to use a custom maxCapacity number and not have it ignored. The unit test assures this bug will not repeat itself.
Motivation:
There is no need to do redundant reads of head in peekNode() as we can just spin on next() until it becomes visible.
Modifications:
Remove redundant reads of head in peekNode(). This is based on @nitsanw's patch for Akka.
See https://github.com/akka/akka/pull/15596
Result:
Less volatile access.
Motivation:
Recently we changed the default value of SOMAXCONN that is used when we cannot determine it by reading /proc/sys/net/core/somaxconn. While doing this we forgot to update the javadocs to reflect the new default value that is used.
Modifications:
List correct default value in the javadocs of SOMAXCONN.
Result:
Correct javadocs.
Related issue: #2407
Motivation:
The current fallback SOMAXCONN value is 3072. It is way too large
compared to the default SOMAXCONN value of popular OSes.
Modifications:
Decrease the fallback SOMAXCONN value to 128 or 200 depending on the
current OS
Result:
Saner fallback value
Motivation:
The calculation of the max wait time for HashedWheelTimerTest.testExecutionOnTime() was wrong and so the test sometimes failed.
Modifications:
Fix the max wait time.
Result:
No more test failures.
Motivation:
We forgot to do a null check on the cause parameter of
ChannelFuture.setFailure(cause)
Modifications:
Add a null check
Result:
Fixed issue: #2728
Motivation:
We made various changes related to the ChannelOutboundBuffer in the 4.0 branch. This commit ports all of them over and so makes sure our branches are in sync in terms of these changes.
Related to [#2734], [#2709], [#2729], [#2710] and [#2693] .
Modification:
Port all changes that were made to the ChannelOutboundBuffer.
This includes the port of the following commits:
- 73dfd7c01b
- 997d8c32d2
- e282e504f1
- 5e5d1a58fd
- 8ee3575e72
- d6f0d12a86
- 16e50765d1
- 3f3e66c31a
Result:
- Less memory usage by ChannelOutboundBuffer
- Same code as in 4.0 branch
- Make it possible to use ChannelOutboundBuffer with Channel implementations that do not extend AbstractChannel
Motivation:
As /proc/sys/net/core/somaxconn does not exist on non-Linux platforms, you see a noisy stacktrace when debug level is enabled while the static method of NetUtil is executed.
Modifications:
Check if the file exists before trying to parse it.
Result:
Less noisy logging on non-Linux platforms.
Related issue: #2354
Motivation:
AbstractConstant.compareTo() can return 0 even if the specified constant
object is not the same instance with 'this'.
Modifications:
- Compare the identityHashCode of constant first. If that fails,
allocate a small direct buffer and use its memory address as a unique
value. If the platform does not provide a way to get the memory
address of a direct buffer, use a thread-local random value.
- Signal cannot extend AbstractConstant. Use delegation.
Result:
It is practically impossible for AbstractConstant.compareTo() to return
0 for different constant objects.
Motivation:
While benchmarking the native transport with gathering writes I noticed that it is quite slow. This is due to the fact that we need to do a lot of array copies to get the buffers into the iov array.
Modification:
Introduce a new class called IovArray which allows filling buffers directly into an iov array that can be passed over to JNI without any array copies. This gives a nice optimization in terms of speed when doing gathering writes.
Result:
Big performance improvement when doing gathering writes. See the included benchmark...
Before:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 23.44ms 16.37ms 259.57ms 91.77%
Req/Sec 181.99k 31.69k 304.60k 78.12%
346544071 requests in 2.00m, 46.48GB read
Requests/sec: 2887885.09
Transfer/sec: 396.59MB
With this change:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.93ms 16.33ms 305.73ms 92.34%
Req/Sec 194.56k 33.75k 309.33k 77.04%
369617503 requests in 2.00m, 49.57GB read
Requests/sec: 3080169.65
Transfer/sec: 423.00MB
Motivation:
Due to a race condition while handling cancellation of TimerTasks it was possible to corrupt the linked-list structure that is represented by HashedWheelBucket and so produce an NPE.
Modification:
Fix the problem by adding another MpscLinkedQueue which holds the cancellation tasks and processes them on each tick. This allows using no synchronization / locking at all, while introducing a latency of at most 1 tick before the TimerTask can be GC'ed.
Result:
No more NPE
- Rewrite with linear probing, no state array, compaction at cleanup
- Optimize keys() and values() to not use reflection
- Optimize hashCode() and equals() for efficient iteration
- Fixed equals() to not return true for equals(null)
- Optimize iterator to not allocate new Entry at each next()
- Added toString()
- Added some new unit tests
Motivation:
Now Netty has a few problems with null values.
Modifications:
- Check HAProxyProxiedProtocol in HAProxyMessage constructor and throw NPE if it is null.
If HAProxyProxiedProtocol is null we will set AddressFamily as null. So we will get NPE inside checkAddress(String, AddressFamily) and it won't be easy to understand why addrFamily is null.
- Check File in DiskFileUpload.toString().
If File is null we will get NPE when calling toString() method.
- Check Result<String> in MqttDecoder.decodeConnectionPayload(...).
If !mqttConnectVariableHeader.isWillFlag() || !mqttConnectVariableHeader.hasUserName() || !mqttConnectVariableHeader.hasPassword() we will get NPE when we will try to create new instance of MqttConnectPayload.
- Check Unsafe before calling unsafe.getClass() in PlatformDependent0 static block.
- Removed unnecessary null check in WebSocket08FrameEncoder.encode(...).
Because msg.content() can not return null.
- Removed unnecessary null check in DefaultStompFrame(StompCommand) constructor.
Because we have this check in the super class.
- Removed unnecessary null checks in ConcurrentHashMapV8.removeTreeNode(TreeNode<K,V>).
- Removed unnecessary null check in OioDatagramChannel.doReadMessages(List<Object>).
Because tmpPacket.getSocketAddress() always returns new SocketAddress instance.
- Removed unnecessary null check in OioServerSocketChannel.doReadMessages(List<Object>).
Because socket.accept() always returns new Socket instance.
- Pass Unpooled.buffer(0) instead of null inside CloseWebSocketFrame(boolean, int) constructor.
If we will pass null we will get NPE in super class constructor.
- Added throwing a new IllegalStateException in GlobalEventExecutor.awaitInactivity(long, TimeUnit) if it is called before GlobalEventExecutor.execute(Runnable).
Because currently we get an NPE; an IllegalStateException is better in this case.
- Fixed the null check in OpenSslServerContext.setTicketKeys(byte[]).
Now we throw an NPE if the byte[] is null.
Result:
Added new null checks when it is necessary, removed unnecessary null checks and fixed some NPE problems.
Motivation:
Fix some typos in Netty.
Modifications:
- Fix potentially dangerous use of non-short-circuit logic in Recycler.transfer(Stack<?>).
- Removed double 'the the' in javadoc of EmbeddedChannel.
- Write an exception message to the log if we cannot get SOMAXCONN in NetUtil's static block.
Modifications:
- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Removed unnecessary boxing/unboxing operations in HttpResponseDecoder, RtspResponseDecoder, PerMessageDeflateClientExtensionHandshaker and PerMessageDeflateServerExtensionHandshaker
A boxed primitive is created from a String, just to extract the unboxed primitive value.
- Removed an unnecessary calculation that was performed 3 times in DiskAttribute.addContent(...).
- Removed unnecessary checks for whether a file exists before calling mkdirs() in NativeLibraryLoader and PlatformDependent,
because the method mkdirs() performs this check internally.
- Removed unnecessary `instanceof AsciiString` check in StompSubframeAggregator.contentLength(StompHeadersSubframe) and StompSubframeDecoder.getContentLength(StompHeaders, long).
Because StompHeaders.get(CharSequence) always returns java.lang.String.
Motivations:
In our new version of the HWT we used a kind of lazy cancellation of timeouts by putting them back in the queue and letting them be picked up on the next tick. This has multiple problems:
- we may corrupt the MpscLinkedQueue if the task is used as a tombstone
- this sometimes leads to an unnecessary delay, especially when someone executed some "heavy" logic in the TimerTask
Modifications:
Use a Lock per HashedWheelBucket for safe and fast removal.
Result:
Cancellation of tasks can be done quickly, so objects can be GC'ed, and no infinite loop is possible anymore.
Motivation:
When the system is short of entropy, the initialization of
ThreadLocalRandom can take at most 3 seconds. The initialization occurs
when ThreadLocalRandom.current() is invoked for the first time, which might be
much later than the moment when the application has started. If we
start the initialization of ThreadLocalRandom as early as possible, we
can reduce the perceived time taken for the retrieval.
Modification:
Begin the initialization of ThreadLocalRandom in InternalLoggerFactory,
potentially one of the first classes initialized in a Netty application.
Make DefaultChannelId retrieve the current process ID before retrieving
the current machine ID, because retrieval of a machine ID is more likely
to use ThreadLocalRandom.current().
Use a dummy channel ID for EmbeddedChannel, which prevents many unit
tests from creating a ThreadLocalRandom instance.
Result:
We gain an extra 100ms at minimum for initialSeedUniquifier generation. If
an application has its own initialization that takes long enough
and generates a good amount of entropy, it is very likely that we will
gain a lot more.
Motivation:
We use the nanoTime of the scheduledTasks to calculate the number of milliseconds to wait for a select operation to select something. Once these have elapsed we check if something was selected or some task is ready for processing. Unfortunately we do not take scheduled tasks into account here, so the selection loop will continue if only scheduled tasks are ready for processing. This will delay the execution of these tasks.
Modification:
- Check if a scheduled task is ready after selecting
- also make a tiny change in NioEventLoop to not trigger a rebuild if nothing was selected because the timeout was reached a few times in a row.
Result:
Execute scheduled tasks on time.
Motivation:
When a user tries to use Netty on Android it currently fails with "Could not find class 'sun.misc.Cleaner'".
Modification:
Encapsulate the sun.misc.Cleaner usage in an extra class to work around this issue.
Result:
Netty can be used on Android again.
Motivation:
During some refactoring we changed PlatformDependent0 to use sun.nio.ch.DirectBuffer for releasing direct buffers. This broke support for Android as the class does not exist there, and so an exception is thrown.
Modification:
Use the field offset again to get access to the Cleaner for releasing direct buffers.
Result:
Netty can be used on Android again.
Motivation:
Recycler is used in many places to reduce GC pressure but is still not as fast as possible because of the internal data structures used.
Modification:
- Rewrite Recycler to use a WeakOrderQueue which makes minimal guarantees about order and visibility for maximum performance.
- Recycling the same object multiple times without acquiring it will fail.
- Introduce a RecyclableMpscLinkedQueueNode which can be used for MpscLinkedQueueNodes that use Recycler
These changes are based on @belliottsmith 's work that was part of #2504.
Result:
Huge increase in performance.
4.0 branch without this commit:
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00000 thrpt 20 116026994.130 2763381.305 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00256 thrpt 20 110823170.627 3007221.464 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 01024 thrpt 20 118290272.413 7143962.304 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 04096 thrpt 20 120560396.523 6483323.228 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 16384 thrpt 20 114726607.428 2960013.108 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 65536 thrpt 20 119385917.899 3172913.684 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.617 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
4.0 branch with this commit:
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00000 thrpt 20 204158855.315 5031432.145 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 00256 thrpt 20 205179685.861 1934137.841 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 01024 thrpt 20 209906801.437 8007811.254 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 04096 thrpt 20 214288320.053 6413126.689 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 16384 thrpt 20 215940902.649 7837706.133 ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread 65536 thrpt 20 211141994.206 5017868.542 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.648 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
Motivation:
The tail node reference writes (by producer threads) are very likely to
invalidate the cache line holding the headRef which is read by the
consumer threads in order to access the padded reference to the head
node. This is because the resulting layout for the object is:
- header
- Object AtomicReference.value -> Tail node
- Object MpscLinkedQueue.headRef -> PaddedRef -> Head node
This is 'passive' false sharing where one thread reads and the other
writes. The current implementation suffers from further passive false
sharing potential from any and all neighbours to the queue object as no
pre/post padding is provided for the class fields.
Modifications:
Fix the memory layout by adding pre-post padding for the head node and
putting the tail node reference in the same object.
Result:
Fixed false sharing
Motivation:
Maps with integer keys are used in several places (HTTP/2 code, for
example). To reduce the memory footprint of these structures, we need a
specialized map class that uses ints as keys.
Modifications:
Added IntObjectHashMap, which uses open addressing and double hashing
for collision resolution.
Result:
A new int-based map class that can be shared across Netty.
Motivation:
When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.
FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.
Modifications:
- Moved FastThreadLocal and FastThreadLocalThread out of the internal
package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
initialized, and calling FastThreadLocal.removeAll() will remove all
thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduce InternalThreadLocalMap which is a mixture of hard-wired
thread local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
tells it to stop watching when its thread-local cache has been freed
by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9
Result:
- A user can remove all thread-local variables Netty created, as long as
he or she did not exit from the current thread. (Note that there's no
way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
we always implement a thread local variable via InternalThreadLocalMap
instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
performance of FastThreadLocal even more.
Motivation:
We have quite a bit of code duplication between HTTP/1, HTTP/2, SPDY,
and STOMP codec, because they all have a notion of 'headers', which is a
multimap of string names and values.
Modifications:
- Add TextHeaders and its default implementation
- Add AsciiString to replace HttpHeaderEntity
- Borrowed some portion from Apache Harmony's java.lang.String.
- Reimplement HttpHeaders, SpdyHeaders, and StompHeaders using
TextHeaders
- Add AsciiHeadersEncoder to reuse the encoding of a TextHeaders
- Used a dedicated encoder for HTTP headers for better performance
though
- Remove shortcut methods in SpdyHeaders
- Replace SpdyHeaders.getStatus() with HttpResponseStatus.parseLine()
Result:
- Removed quite a bit of code duplication in the header implementations.
- Slightly better performance thanks to improved header validation and
hash code calculation
Motivation:
Provide a faster ThreadLocal implementation
Modification:
Add a "FastThreadLocal" which uses an EnumMap and a predefined fixed set of possible thread locals (all of the static instances created by netty) that is around 10-20% faster than standard ThreadLocal in my benchmarks (and can be seen having an effect in the direct PooledByteBufAllocator benchmark that uses the DEFAULT ByteBufAllocator which uses this FastThreadLocal, as opposed to normal instantiations that do not, and in the new RecyclableArrayList benchmark);
Result:
Improved performance
Motivation:
At the moment the HashedWheelTimer will only remove cancelled Timeouts once the HashedWheelBucket is processed again. Until then the instance cannot be GC'ed, as there are still strong references to it even if the user no longer references it. This can waste a lot of memory even though the Timeout was cancelled before.
Modification:
Add a new queue which holds CancelTasks that will be processed on each tick to remove cancelled Timeouts. Because all of this is done only by the worker thread there is no need for synchronization, and only one extra object creation is needed when cancel() is executed. For addTimeout(...) no new overhead is introduced.
Result:
Less memory usage for cancelled Timeouts.
Motivation:
As part of GSoC 2013 we had @mbakkar working on a DNS codec but did not integrate it yet as it needed some cleanup. This commit is based on @mbakkar's work and provides the codec for DNS.
Modifications:
Add DNS codec
Result:
Reusable DNS codec will be included in netty.
This PR also includes an AsynchronousDnsResolver which allows resolving DNS entries in a non-blocking way by making use
of the DNS codec and the Netty transport itself.
Motivation:
MpscLinkedQueue has various issues:
- It does not work without sun.misc.Unsafe.
- Some field names are confusing.
- Node.tail does not refer to the tail node really.
- The tail node is the starting point of iteration. I think the tail
node should be the head node and vice versa to reduce confusion.
- Some important methods are not implemented (e.g. iterator())
- Not serializable
- Potential false cache sharing problem due to lack of padding
- MpscLinkedQueue extends AtomicReference and thus exposes various
operations that mutates the internal state of the queue directly.
Modifications:
- Use AtomicReferenceFieldUpdater wherever possible so that we do not
use Unsafe directly. (e.g. use lazySet() instead of putOrderedObject)
- Extend AbstractQueue to implement most operations
- Implement serialization and iterator()
- Rename tail to head and head to tail to reduce confusion.
- Rename Node.tail to Node.next.
- Fix a leak where the references in the removed head are not cleared
properly.
- Add Node.clearMaybe() method so that the value of the new head node
is cleared if possible.
- Add some comments for my own educational purposes
- Add padding to the head node
- Add FullyPaddedReference and RightPaddedReference for future reuse
- Make MpscLinkedQueue package-local so that a user cannot access the
dangerous yet public operations exposed by the superclass.
- MpscLinkedQueue.Node becomes MpscLinkedQueueNode, a top level class
Result:
- It's more like a drop-in replacement of ConcurrentLinkedQueue for the
MPSC case.
- Works without sun.misc.Unsafe
- Code potentially easier to understand
- Fixed leak (related: #2372)
Motivation:
When running Netty on a container environment, the container will often
complain about the lingering threads such as the worker threads of
ThreadDeathWatcher and GlobalEventExecutor. We should provide an
operation that allows a user to wait until such threads are terminated.
Modifications:
- Add awaitInactivity()
- (misc) Fix typo in GlobalEventExecutorTest
- (misc) Port ThreadDeathWatch's CAS-based thread life cycle management
to GlobalEventExecutor
Result:
- Fixes #2084
- Less overhead on task submission of GlobalEventExecutor
Motivation:
PooledByteBufAllocator's thread local cache and
ReferenceCountUtil.releaseLater() are in need of a way to run an
arbitrary logic when a certain thread is terminated.
Modifications:
- Add ThreadDeathWatcher, which spawns a low-priority daemon thread
that watches a list of threads periodically (every second) and
invokes the specified tasks when the associated threads are not alive
anymore
- Start-stop logic based on CAS operation proposed by @tea-dragon
- Add debug-level log messages to see if ThreadDeathWatcher works
Result:
- Fixes #2519 because we don't use GlobalEventExecutor anymore
- Cleaner code
Motivation:
The current DefaultAttributeMap causes an infinite loop when the user removes an attribute and creates the same attribute again. This regression was introduced by c3bd7a8ff1.
Modification:
Correctly break out of the loop.
Result:
No infinite-loop anymore.
Motivation:
When (listeners == null && lateListeners == null) and (stackDepth >= MAX_LISTENER_STACK_DEPTH), the listener is not notified at all. The discard client does not work.
Modification:
Make sure to submit the notification task.
Result:
The discard client works again and all listeners are notified.
Motivation:
During a large memory copy, safepoint polling is disabled, hindering
accurate profiling.
Modifications:
Only copy up to 1 MiB per Unsafe.copyMemory()
Result:
Potentially more reliable performance
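The chunking idea in sketch form (UNSAFE is assumed to have been obtained elsewhere; the addresses refer to off-heap memory):

    private static final long CHUNK = 1024 * 1024;       // 1 MiB per copyMemory() call

    static void copyMemoryChunked(long srcAddr, long dstAddr, long length) {
        while (length > 0) {
            long size = Math.min(length, CHUNK);
            UNSAFE.copyMemory(srcAddr, dstAddr, size);   // short copies keep safepoints reachable
            srcAddr += size;
            dstAddr += size;
            length -= size;
        }
    }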
Motivation:
Some users already use an SSLEngine implementation in finagle-native. It
wraps OpenSSL to get higher SSL performance. However, to take advantage
of it, finagle-native must be compiled manually, and it means we cannot
pull it in as a dependency and thus we cannot test our SslHandler
against the OpenSSL-based SSLEngine. For instance, we had #2216.
Because the construction procedures of JDK SSLEngine and OpenSslEngine
are very different from each other, we also need to provide a universal
way to enable SSL in a Netty application.
Modifications:
- Pull netty-tcnative in as an optional dependency.
http://netty.io/wiki/forked-tomcat-native.html
- Backport NativeLibraryLoader from 4.0
- Move OpenSSL-based SSLEngine implementation into our code base.
- Copied from finagle-native; originally written by @jpinner et al.
- Overall cleanup by @trustin.
- Run all SslHandler tests with both default SSLEngine and OpenSslEngine
- Add a unified API for creating an SSL context
- SslContext allows you to create a new SSLEngine or a new SslHandler
with your PKCS#8 key and X.509 certificate chain.
- Add JdkSslContext and its subclasses
- Add OpenSslServerContext
- Add ApplicationProtocolSelector to ensure the future support for NPN
(NextProtoNego) and ALPN (Application Layer Protocol Negotiation) on
the client-side.
- Add SimpleTrustManagerFactory to help a user write a
TrustManagerFactory easily, which should be useful for those who need
to write an alternative verification mechanism. For example, we can
use it to implement an unsafe TrustManagerFactory that accepts
self-signed certificates for testing purposes.
- Add InsecureTrustManagerFactory and FingerprintTrustManager for quick
and dirty testing
- Add SelfSignedCertificate class which generates a self-signed X.509
certificate very easily.
- Update all our examples to use SslContext.newClient/ServerContext()
- SslHandler now logs the chosen cipher suite when handshake is
finished.
Result:
- Cleaner unified API for configuring an SSL client and an SSL server
regardless of its internal implementation.
- When native libraries are available, OpenSSL-based SSLEngine
implementation is selected automatically to take advantage of its
performance benefit.
- Examples take advantage of this modification and thus are cleaner.
Motivation:
The old DefaultAttributeMap impl did more synchronization than needed and also did not expose an efficient way to check if an attribute exists for a specific key.
Modifications:
* Rewrite DefaultAttributeMap to not use IdentityHashMap and synchronization on the map directly. The new impl uses a combination of AtomicReferenceArray and synchronization per chain (linked list). Also, access to the first Attribute per bucket can be done without any synchronization at all, using just atomic operations. This should fit most use cases pretty well.
* Add hasAttr(...) implementation
Result:
It's now possible to check for the existence of an attribute without creating one. Synchronization is per linked list, and the first entry can even be added via an atomic operation.
Motivation:
At the moment there are two issues with HashedWheelTimer:
* the memory footprint of it is pretty heavy (250 KB for an empty instance)
* the way added Timeouts are handled is inefficient in terms of how locks etc. are used, so a lot of context switching / contention can happen.
Modification:
Rewrite HashedWheelTimer to use an optimized bucket implementation to store the submitted Timeouts and an MPSC queue to hand over the timeouts. Volatile writes are reduced to a minimum, and the memory footprint of the buckets themselves is reduced a lot as each bucket uses a doubly-linked list. Besides this, we use Atomic*FieldUpdater wherever possible to improve the memory footprint and performance.
Result:
Lower memory footprint and better performance
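For illustration, a minimal sketch of submitting a Timeout to a HashedWheelTimer (standard io.netty.util API, not code from this change):
HashedWheelTimer timer = new HashedWheelTimer();
timer.newTimeout(new TimerTask() {
    @Override
    public void run(Timeout timeout) {
        System.out.println("timeout fired"); // runs roughly 5 seconds after submission
    }
}, 5, TimeUnit.SECONDS);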
Motivation:
Some JDK versions on Mac OS X generate a JNI dynamic library with the '.jnilib' extension rather than the '.dylib' extension. However, System.mapLibraryName() always returns 'lib<name>.dylib'. As a result, NativeLibraryLoader fails to load a native library whose extension is .jnilib.
Modification:
Try to find both '.jnilib' and '.dylib' resources on OS X.
Result:
Dynamic libraries are loaded correctly on Mac OS X, and thus we can continue the OpenSslEngine work.
Motivation:
So far, we used a very simple platform string such as linux64 and
linux32. However, this is far from perfect because it does not
include anything about the CPU architecture.
Also, the current build tries to put multiple versions of .so files into
a single JAR. This doesn't work very well when we have to ship for many
different platforms. Think about shipping .so/.dylib files for both
Linux and Mac OS X.
Modification:
- Use os-maven-plugin as an extension to determine the current OS and
CPU architecture reliably at build time
- Use Maven classifier instead of trying to put all shared libraries
into a single JAR
- NativeLibraryLoader does not guess the OS and bit mode anymore and it
always looks for the same location regardless of platform, because the
Maven classifier does the job instead.
Result:
More scalable native library deployment and retrieval
Motivation:
It is less confusing not to spread Thread.interrupt() calls.
Modification:
- Comments
- Move generatorThread.interrupt() to where currentThread.interrupt() is
triggered
Result:
Code that is easier to read
Motivation:
4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master. We must make sure 4.1 and master are not very
different now.
Modification:
Remove ChannelHandlerInvoker.writeAndFlush(...) and the related
implementations.
Result:
4.1 and master got closer.
Motivation:
4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master. We must make sure 4.1 and master are not very
different now.
Modification:
Fix found differences
Result:
4.1 and master got closer.
Motivation:
ThreadLocalRandomTest reveals that ThreadLocalRandom's initial seed generation loop becomes tight if the thread is interrupted.
We currently interrupt ourselves inside the wait loop, which will raise an InterruptedException again in the next iteration, resulting in continuous exception construction and thread interruption for up to 3 seconds.
Modification:
- When the initial seed generator thread is interrupted, break out of the wait loop immediately.
- Log properly when the initial seed generation failed due to interruption.
- When the initial seed generation fails, interrupt the generator thread, in case the SecureRandom implementation handles the interruption properly.
- Make the initial seed generator thread daemon and handle potential exceptions raised due to the interruption.
Result:
No more tight loop on interruption. More robust generator thread termination. Fixes #2412
Motivation:
Using System.getProperty(...) and various methods to get a ClassLoader will fail when a SecurityManager is in place.
Modifications:
Use a privileged block if needed. This work is based on PR #2353 done by @anilsaldhana.
Result:
Code also works when a SecurityManager is present
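For illustration, a minimal sketch of wrapping a property lookup in a privileged block (standard java.security API; the property name is hypothetical and this is not the exact Netty helper):
String value = AccessController.doPrivileged(new PrivilegedAction<String>() {
    @Override
    public String run() {
        // executed with the privileges of this code base, not the caller's
        return System.getProperty("io.netty.someProperty"); // hypothetical property name
    }
});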
Motivation:
EventExecutor.iterator() is fixed to return Iterator<EventExecutor> and there's no way to change that as long as we don't extend Iterable. However, a user should have a way to cast the returned set of executors painlessly. Currently, it is only possible with an explicit cast like (Iterator<NioEventLoop>).
Modifications:
Instead, I added a new method called 'children()' which returns an immutable collection of child executors whose method signature looks like the following:
<E extends EventExecutor> Set<E> children();
Result:
A user can now do this:
Set<NioEventLoop> loops = group.children();
for (NioEventLoop l: loops) { ... }
Unfortunately, this is not possible:
for (NioEventLoop l: group.children()) { ... }
However, it's still a gain that a user doesn't need to down-cast explicitly or add a @SuppressWarnings annotation.
Motivation:
There's no reason to keep our users from using DefaultEventExecutor directly. It should be actually very useful to them.
Modifications:
Make DefaultEventExecutor public and add useful public constructors.
Result:
DefaultEventExecutor is now usable by anyone, yielding more value as a generic library.
Motivation:
AbstractEventExecutor and AbstractEventExecutorGroup have hard-coded magic timeout numbers. They should have the same timeout numbers, but it's easy to break that rule because they are hard-coded in each place.
Modifications:
Add package private constants to AbstractEventExecutor and let AbstractEventExecutorGroup use them.
Result:
A single timeout change now affects both classes.
Motivation:
EventExecutor.parent() and EventLoop.parent() almost always return a constant parent executor. There's not much reason to have it implemented in subclasses.
Modifications:
- Implement AbstractEventExecutor.parent() with an additional constructor
- Add AbstractEventLoop so that subclasses extend AbstractEventLoop, which implements parent() appropriately
- Remove redundant parent() implementations in the subclasses
- Fix inspector warnings
Result:
Less duplication.
Motivation:
Previously, we used SecureRandom.nextLong() to generate the initialSeedUniquifier. This required more entropy than necessary because it had to 1) generate the seed of SecureRandom first and then 2) generate a random long integer. Instead, we can use generateSeed() to skip step (2).
Modifications:
Use generateSeed() instead of nextLong()
Result:
ThreadLocalRandom requires less entropy to start up
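For illustration, a minimal sketch of deriving a 64-bit seed from generateSeed() instead of nextLong() (standard java.security/java.nio APIs; not the exact Netty code):
// Ask SecureRandom for 8 raw seed bytes and pack them into a long,
// skipping the extra step of seeding the PRNG just to call nextLong().
byte[] seedBytes = new SecureRandom().generateSeed(8);
long initialSeedUniquifier = ByteBuffer.wrap(seedBytes).getLong();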
Motivation:
As reported in #2331, some query operations in NetworkInterface take much longer than expected. For example, specifying the -Djava.net.preferIPv4Stack=true option on Windows increases the execution time by more than 4 times. Some Windows systems have more than 20 network interfaces, and this problem gets bigger as the number of unused (virtual) NICs increases.
Modification:
Use NetworkInterface.getInetAddresses() wherever possible.
Before iterating over all NICs reported by NetworkInterface, filter the NICs without proper InetAddresses. This reduces the number of candidates quite a lot.
NetUtil does not query the hardware address of a NIC in the first place but uses InetAddress.isLoopbackAddress().
Do not call unnecessary query operations on NetworkInterface. Just get the hardware address and compare.
Result:
Significantly reduced class initialization time, which should fix #2331.
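For illustration, a minimal sketch of pre-filtering NICs that expose no InetAddress before doing any further (slow) queries (standard java.net API, error handling omitted; not the exact NetUtil code):
List<NetworkInterface> candidates = new ArrayList<NetworkInterface>();
Enumeration<NetworkInterface> ifaces = NetworkInterface.getNetworkInterfaces();
while (ifaces.hasMoreElements()) {
    NetworkInterface iface = ifaces.nextElement();
    // Skip NICs without any InetAddress; this avoids expensive queries later.
    if (iface.getInetAddresses().hasMoreElements()) {
        candidates.add(iface);
    }
}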
Motivation:
6e8ba291cf introduced a regression in Android because Android does not have sun.nio.ch.DirectBuffer (see #2330). I also found that PlatformDependent0.freeDirectBuffer() and freeDirectBufferUnsafe() are pretty much the same after the commit, so the unsafe version should be removed.
Modifications:
- Do not use the pooled allocator on Android because it's too resource-hungry for Android devices.
- Merge PlatformDependent0.freeDirectBuffer() and freeDirectBufferUnsafe() into one method.
- Make the Unsafe unavailable when sun.nio.ch.DirectBuffer is unavailable. We could keep the Unsafe available and handle the sun.nio.ch.DirectBuffer case separately, but I don't want to complicate our code just because of that. All supported JDK versions have sun.nio.ch.DirectBuffer if the Unsafe is available.
Result:
Simpler code. Fixes Android support (#2330)
Motivation:
'io.netty.recycler.maxCapacity.default' is the only property for recycler's default maximum capacity, so having the 'default' suffix only increases the length of the property name.
Modifications:
Rename "io.netty.recycler.maxCapacity.default" to "io.netty.recycler.maxCapacity"
Result:
Shorter system property name. Future additions of system properties, such as io.netty.recycler.maxCapacity.outboundBuffer, will not be confusing either.
Motivation:
Remove the synchronization bottleneck in startThread() which is called by each execute(..) call from outside the EventLoop.
Modifications:
Replace the synchronized block with the use of AtomicInteger and compareAndSet loops.
Result:
Less contention during SingleThreadEventExecutor.execute(...)
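For illustration, a minimal sketch (not the actual Netty code) of replacing a synchronized startThread() with an AtomicInteger state and a compareAndSet transition; the state constants and the 'thread' field are hypothetical:
private static final int ST_NOT_STARTED = 1;
private static final int ST_STARTED = 2;
private final AtomicInteger state = new AtomicInteger(ST_NOT_STARTED);

private void startThread() {
    if (state.get() == ST_NOT_STARTED
            && state.compareAndSet(ST_NOT_STARTED, ST_STARTED)) {
        thread.start(); // only the caller that wins the CAS starts the thread
    }
}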
Motivation:
- As reported recently [1], Recycler's thread-local object pool has unbounded capacity which is a potential problem.
- It accesses a hash table on each push and pop for debugging purposes. We don't really need it besides debugging Netty itself.
Modifications:
- Introduced the maxCapacity constructor parameter to Recycler. The default maxCapacity is retrieved from a system property and defaults to 256K, which should be plenty for most cases.
- Recycler.Stack.map is now created and accessed only when assertions are enabled for Recycler.
Result:
- Recycler does not grow infinitely anymore.
- If assertions are disabled, Recycler should be much faster.
[1] https://github.com/netty/netty/issues/1841
Previously ConcurrentHashMapV8 evaluated ((x | 1) == 0), an expression
that always returns false because (x | 1) always has its lowest bit set.
This commit brings Netty closer to the Java 8 implementation.
This transport uses JNI (C) to make direct use of epoll in edge-triggered mode for maximal performance on Linux. Besides this, it also supports TCP_CORK and produces less GC than the NIO transport based on JDK NIO.
It only builds on Linux, and the build is skipped on other platforms. The transport produces a jar which contains all needed .so files for 32-bit and 64-bit. The user only needs to include the jar as a dependency as usual and use the correct classes to make use of it.
This also includes some cleanup by @trustin.
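For illustration, a minimal sketch of selecting the native transport (the Epoll* class names are the ones introduced by this transport; the channel initializer is hypothetical):
EventLoopGroup bossGroup = new EpollEventLoopGroup();
EventLoopGroup workerGroup = new EpollEventLoopGroup();
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(EpollServerSocketChannel.class) // instead of NioServerSocketChannel
 .childHandler(new MyServerInitializer()); // hypothetical initializer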
- Related: #2163
- Add ResourceLeakHint to allow a user to provide meaningful information about the leak when touching it
- DefaultChannelHandlerContext now implements ResourceLeakHint to tell where the message is going.
- Cleaner resource leak report by excluding noisy stack trace elements
- Proposed fix for #1824
UniqueName and its subtypes do not allow getting the previously registered instance. For example, let's assume that a user is running his/her application in an OSGi container with Netty bundles and his/her server bundle. Whenever the server bundle is reloaded, the server will try to create a new AttributeKey instance with the same name. However, the Netty bundles were not reloaded at all, so AttributeKey will complain that the name is already taken (by the previously loaded bundle).
To fix this problem:
- Replaced UniqueName with Constant, AbstractConstant, and ConstantPool. Better name and better design.
- Sctp/Udt/RxtxChannelOption is not a ChannelOption anymore. They are just constant providers, and ChannelOption is final now. This is because caching anything from outside of netty-transport would lead to a ClassCastException on reload, because ChannelOption's constant pool keeps all option objects for reuse.
- Signal implements Constant because we can't ensure its uniqueness anymore by relying on the exception raised by UniqueName's constructor.
- Fixes #2220
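For illustration, a minimal sketch of the behavior the constant pool enables, i.e. asking for the same name again returns the previously registered instance instead of failing (the option name is hypothetical):
ChannelOption<Integer> first = ChannelOption.valueOf("MY_OPTION"); // registers the constant
ChannelOption<Integer> second = ChannelOption.valueOf("MY_OPTION"); // returns the same instance
assert first == second; // no "name already taken" error on reload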
- Its Javadoc says it returns true when the promise is done (but not cancelled) or the promise is uncancellable, but it returns false when the promise is done.
.. which occurs when a user adds a listener from a different thread after the promise is done, while the notification of the listeners that were added before the promise was done is still in progress. For instance:
Thread-1: p.addListener(listenerA);
Thread-1: p.setSuccess(null);
Thread-2: p.addListener(listenerB);
Thread-2: p.executor.execute(taskNotifyListenerB);
Thread-1: p.executor.execute(taskNotifyListenerA);
taskNotifyListenerB should not really notify listenerB until taskNotifyListenerA is finished.
To fix this issue:
- Change the semantic of (listeners == null) to determine if the early
listeners [1] were notified
- If a late listener is added before the early listeners are notified,
the notification of the late listener is deferred until the early
listeners are notified (i.e. until listeners == null)
- The late listeners with deferred notifications are stored in a lazily
instantiated queue to preserve ordering, and then are notified once
the early listeners are notified.
[1] the listeners that were added before the promise is done
[2] the listeners that were added after the promise is done
- Remove the reference to ResourceLeak from the buffer implementations
and use wrappers instead:
- SimpleLeakAwareByteBuf and AdvancedLeakAwareByteBuf
- It is now allocator's responsibility to create a leak-aware buffer.
- Added AbstractByteBufAllocator.toLeakAwareBuffer() for easier
implementation
- Add WrappedByteBuf to reduce duplication between *LeakAwareByteBuf and
UnreleasableByteBuf
- Raise the level of leak reports to ERROR - because it will break the
app eventually
- Replace enabled/disabled property with the leak detection level
- Only print stack trace when level is ADVANCED or above to avoid user
confusion
- Add the 'leak' build profile, which enables highly detailed leak
reporting during the build
- Remove ResourceLeakException, which is not used anymore
- Fixes #2003 properly
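For illustration, a minimal sketch of selecting the leak detection level programmatically; this reflects the later public ResourceLeakDetector API, so treat the exact method as an assumption for this point in history:
// ADVANCED (and above) includes the detailed stack traces mentioned above.
ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);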
- Instead of using 'bundle' packaging, use 'jar' packaging. This is
more robust because some strict build tools fail to retrieve the
artifacts from a Maven repository unless their packaging is 'jar'.
- All artifacts now contain META-INF/io.netty.version.properties, which
provides the detailed information about the build and repository.
- Removed OSGi testsuite temporarily because it gives false errors
during split package test and examination.
- Add io.netty.util.Version for easy retrieval of version information
- Fixes #1765
Java 6 did a poor job of generating the seedUniquifier, unlike Java 7, so I implemented a platform-independent seedUniquifier generator with configurability
The problem was that with OioSocketChannel there was always a read task in the taskQueue, and with the old logic scheduled tasks were never executed as long as there was at least one task in the taskQueue.
- Fix a bug in DefaultProgressivePromise.tryProgress() where the notification is dropped
- Fix a bug in AbstractChannel.calculateMessageSize() where FileRegion is not counted
- HttpStaticFileServer example now uses zero copy file transfer if possible.
- Fixes #1445
- Add PlatformDependent.maxDirectMemory()
- Ensure the default number of arenas is decreased if the max memory of the VM is not large enough.
- Related issue: #1432
- Add Future.isCancellable()
- Add Promise.setUncancellable(), which is meant to be used by the party that runs the task so that it becomes uncancellable once started (see the sketch after this list)
- Implement Future.isCancelled() and Promise.cancel(boolean) properly
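For illustration, a minimal sketch (with a hypothetical unit of work) of the intended setUncancellable() usage pattern:
if (promise.setUncancellable()) { // from now on cancel(...) will not succeed
    try {
        doWork(); // hypothetical unit of work that must not be aborted
        promise.trySuccess(null);
    } catch (Throwable t) {
        promise.tryFailure(t);
    }
} else {
    // the promise was cancelled before the work started
}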
The API changes made so far turned out to increase the memory footprint
and consumption while our intention was actually decreasing them.
Memory consumption issue:
When there are many connections which do not exchange data frequently,
the old Netty 4 API spent a lot more memory than 3 because it always
allocates per-handler buffer for each connection unless otherwise
explicitly stated by a user. In a usual real world load, a client
doesn't always send requests without pausing, so the idea of having a
buffer whose life cycle is bound to the life cycle of a connection
didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handle connections that last for a relatively short period, it actually
makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4 so
far such as better thread model and going back to the way how we dealt
with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList which is similar to `MessageEvent` in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties such as
DatagramChannelConfig.receivePacketSize are gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
- Fixes #1282 (not perfectly, but to the extent it's possible with the current API)
- Add AddressedEnvelope and DefaultAddressedEnvelope
- Make DatagramPacket extend DefaultAddressedEnvelope<ByteBuf, InetSocketAddress>
- Rename ByteBufHolder.data() to content() so that a message can implement both AddressedEnvelope and ByteBufHolder (DatagramPacket does) without introducing two getter methods for the content
- Datagram channel implementations now understand ByteBuf and ByteBufHolder as a message with unspecified remote address.
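For illustration, a minimal sketch of the new DatagramPacket as an AddressedEnvelope ('ctx' is a hypothetical handler context and the recipient address is hypothetical):
ByteBuf data = ctx.alloc().buffer().writeBytes("hello".getBytes(CharsetUtil.UTF_8));
DatagramPacket packet = new DatagramPacket(data, new InetSocketAddress("127.0.0.1", 8080));
ByteBuf content = packet.content(); // was data() before the rename
InetSocketAddress recipient = packet.recipient();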
shutdownGracefully() provides two optional parameters that give more
control over when an executor has to be shut down (see the sketch after this list).
- Related issue: #1307
- Add shutdownGracefully(..) and isShuttingDown()
- Deprecate shutdown() / shutdownNow()
- Replace lastAccessTime with lastExecutionTime and update it after task
execution for accurate quiet period check
- runAllTasks() and runShutdownTasks() update it automatically.
- Add updateLastExecutionTime() so that subclasses can update it
- Add a constructor parameter that tells not to add an unnecessary wakeup
task in execute() if addTask() wakes up the executor thread
automatically. Previously, execute() always called wakeup() after
addTask(), which often caused an extra dummy task in the task queue.
- Use shutdownGracefully() wherever possible / Deprecation javadoc
- Reduce the running time of SingleThreadEventLoopTest from 40s to 15s
using custom graceful shutdown parameters
- Other changes made along with this commit:
- takeTask() does not throw InterruptedException anymore.
- Returns null on interruption or wakeup
- Make sure runShutdownTasks() returns true even if an exception was
raised while running the shutdown tasks
- Remove unnecessary isShutdown() checks
- Consistent use of SingleThreadEventExecutor.nanoTime()
Replace isWakeupOverridden with a constructor parameter
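For illustration, a minimal sketch of the graceful shutdown described above, with an explicit quiet period and timeout (the values are arbitrary):
EventLoopGroup group = new NioEventLoopGroup();
// ... use the group ...
// quiet period of 2 seconds, hard timeout of 15 seconds
group.shutdownGracefully(2, 15, TimeUnit.SECONDS);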
- Do not attempt to validate localhost by binding a socket because it can fail when SecurityManager is in use
- Find loopback interface first and get address from there instead of getting loopback address from InetAddress.getLocalHost() (because it's more reliable)
- Instead of throwing an Error, just log and fall back to 127.0.0.1 while determining localhost address
- Now works without the transport package
- Renamed TransferFuture to ProgressiveFuture and ChannelProgressiveFuture / same for promises
- ProgressiveFutureListener now extends GenericProgressiveFutureListener and GenericFutureListener (add/removeTransferListener*() were removed)
- Renamed DefaultEventListeners to DefaultFutureListeners and only accept GenericFutureListeners
- Various clean-up
- Count the number of select() calls made while waiting to reach the expected deadline, and rebuild selectors if too many select() calls were made.
- Rename directbyDefault to preferDirect
- Add a system property 'io.netty.preferDirect' to allow a user to change the preference at launch time
- Merge UnpooledByteBufAllocator.DEFAULT_BY_* to DEFAULT
- Borrow SLF4J API which is the best of the best
- InternalLoggerFactory now automatically detects the logging framework
using static class loading. It tries SLF4J, Log4J, and then falls back
to java.util.logging.
- Remove OsgiLogger because it is very likely that OSGi container
already provides a bridge for existing logging frameworks
- Remove JBossLogger because the latest JBossLogger implementation seems
to implement SLF4J binding
- Upgrade SLF4J to 1.7.2
- Remove tests for the untestable logging frameworks
- Remove TestAny
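For illustration, a minimal sketch of obtaining a logger through the auto-detecting factory (the class passed in is hypothetical):
InternalLogger logger = InternalLoggerFactory.getInstance(MyHandler.class);
logger.debug("goes to SLF4J, Log4J, or java.util.logging, whichever was detected");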
Now that we are going to use buffer pooling by default, it is obvious
that a user will forget to call .free() and report a memory leak. In this
case, we should have a tool to determine if it is a bug in our allocator
implementation or in the user's code.
This pull request adds a system property flag called
'io.netty.resourceLeakDetection'. If set, when a user forgets to call
.free(), the ResourceLeakDetector will detect it and log a message with
detailed stack trace to tell where the leaked buffer has been allocated.
Because obtaining a stack trace is an expensive operation, I used a sampling
technique. Only every 113th allocation is recorded. I
chose 113 because it's a prime number.
In production, a user might not want to enable this option due to
potential performance impact. If a user does not specify the
'-Dio.netty.resourceLeakDetection' option, leak detection is disabled.
Even if the leak detection is enabled, the overhead should be less than
5% because only ~1% of allocations are monitored.
I also replaced SharedResourceMisuseDetector with ResourceLeakDetector.
- Add PooledUnsafeDirectByteBuf, a variant of PooledDirectByteBuf, which
accesses its underlying direct ByteBuffer using sun.misc.Unsafe.
- To decouple Netty from sun.misc.*, sun.misc.Unsafe is accessed via
PlatformDependent.
- This change alone introduces about an 8+% improvement in direct memory
access according to the tests conducted as described in #918
use single static initialization of available metrics monitor registries
* This changes the original implementation to work in a similar way to
how slf4j selects and loads an implementation.
* Uses a single static instance so initialization is done only once.
* Doesn't throw IllegalStateException if multiple implementations are
found on the classpath. It instead selects and uses the first
implementation returned by iterator()
* Class left as an iterable to keep the API the same
add yammer metrics to examples to allow them to publish metrics
publish the number of threads used in an EventLoopGroup (see issue #718)
* seems like the better place to put this because it sets the default
thread count if the MultithreadEventLoopGroup uses super(0,...)
* It also happens to be the common parent class amongst all the
MultiThreadedEventLoopGroup implementations
* Count is reported for
io.netty.channel.{*,.local,.socket.aio,.socket.nio}
fix cosmetic issues pointed out in pull request and updated notice.txt
see https://github.com/netty/netty/pull/780
count # of channels registered in single threaded event loop
measure how many times Selector.select returns before SELECT_TIME
This commit introduces a new API for ByteBuf allocation which fixes
issue #643 along with refactoring of ByteBuf for simplicity and better
performance. (see #62)
A user can configure the ByteBufAllocator of a Channel via
ChannelOption.ALLOCATOR or ChannelConfig.get/setAllocator(). The
default allocator is currently UnpooledByteBufAllocator.HEAP_BY_DEFAULT.
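For illustration, a minimal sketch of this configuration (the bootstrap wiring is an assumption; the allocator constant is the one named above):
Bootstrap b = new Bootstrap();
b.option(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.HEAP_BY_DEFAULT);
// ... or on an existing channel:
channel.config().setAllocator(UnpooledByteBufAllocator.HEAP_BY_DEFAULT);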
To allocate a buffer, do not use Unpooled anymore. Do the following instead:
ctx.alloc().buffer(...); // allocator chooses the buffer type.
ctx.alloc().heapBuffer(...);
ctx.alloc().directBuffer(...);
To deallocate a buffer, use the unsafe free() operation:
((UnsafeByteBuf) buf).free();
The following is the list of the relevant changes:
- Add ChannelInboundHandler.freeInboundBuffer() and
ChannelOutboundHandler.freeOutboundBuffer() to let a user free the
buffer he or she allocated. ChannelHandler adapter classes implement
it already, so most users won't need to call free() by themselves.
freeIn/OutboundBuffer() methods are invoked when a Channel is closed
and deregistered.
- All ByteBuf implementations must, by contract, implement UnsafeByteBuf. To access an
unsafe operation: ((UnsafeByteBuf) buf).internalNioBuffer()
- Replace WrappedByteBuf and ByteBuf.Unsafe with UnsafeByteBuf to
simplify overall class hierarchy and to avoid unnecessary instantiation
of Unsafe instances on an unsafe operation.
- Remove buffer reference counting which is confusing
- Instantiate SwappedByteBuf lazily to avoid instantiation cost
- Rename ChannelFutureFactory to ChannelPropertyAccess and move common
methods between Channel and ChannelHandlerContext there. Also made it
package-private to hide it from a user.
- Remove unused unsafe operations such as newBuffer()
- Add DetectionUtil.canFreeDirectBuffer() so that an allocator decides
which buffer type to use safely
First cut at implementing a generic abstraction layer for pluggable
metrics providers. This first cut is closely modeled after Yammer
Metrics. It remains to be seen if it is indeed flexible enough to
support other providers.
Provide a default implementation of this new abstraction layer
based on Yammer Metrics.
Support pluggable Monitoring Providers using Java 6's ServiceLoader.
Use this new abstraction layer to provide stats on (a) number of
Timeouts executed per second and (b) distribution of absolute
deviation between scheduled and actual Timeout execution time in
HashedWheelTimer.
* Interface ValueDistributionMonitor, a monitor for histograms.
* Interface EventRateMonitor, a monitor for measuring the rate per time
unit of specific events.
* Interface ValueMonitor, a monitor for tracking an arbitrary datum's
current value
* Interface CounterMonitor, a monitor for incrementing/decrementing a
long value
* Interface MonitorRegistry, a registry for monitors that serves as the
interface between Netty and concrete metrics providers as e.g. Yammer
Metrics.
* Interface MonitorRegistryFactory, to be implemented by metrics
providers.
* Document how to use Netty's new monitoring support in javadocs for
package io.netty.monitor.
This tests the following classes more:
1: InternalLoggerFactoryTest
Tests InternalLoggerFactory.getInstance(Class)
2: UniqueName
Paired with #543, this achieves 100% code coverage with tests
Signed-off-by: Cruz Julian Bishop <cruzjbishop@gmail.com>
- Removed VoidEnum because a user can now specify Void instead
- AIO: Prefer discardReadBytes to clear
- AIO: Fixed a potential bug where notifyFlushFutures() is not called
if flush() was requested with no outbound data
- Really attempt to create a queue to determine whether LTQ can be initialized
at runtime, and cache the result
- Remove unnecessary Class<T> parameter in createQueue()
- Remove unused createQueue(Collection)
- AbstractChannel now handles flushing a message buffer
- Cleaned up DatagramChannel interface
- Removed ProtocolFamily because a user can create an NIO
DatagramChannel and specify it as a constructor parameter
- UniqueName and UniqueKey constructors became public so that
I don't need to create a subclass every time.
- UniqueKey removes the duplication between ChannelOption and
AttributeKey
- UniqueName provides common name collision check for AttributeKey,
ChannelOption, and Signal.
- Replaced ReplayError with Signal
- SingleThreadEventLoop now implements ScheduledExecutorService
- Scheduled tasks are automatically fetched into taskQueue by
pollTask() and takeTask()
- Removed MapBackedSet because Java 6 provides it