Motivation:
Writes to the tail node reference (by producer threads) are very likely to
invalidate the cache line holding headRef, which is read by the consumer
thread in order to access the padded reference to the head node. This is
because the resulting object layout is:
- header
- Object AtomicReference.value -> Tail node
- Object MpscLinkedQueue.headRef -> PaddedRef -> Head node
This is 'passive' false sharing, where one thread reads and the other
writes. The current implementation also suffers from potential passive
false sharing with any and all neighbours of the queue object, as no
pre/post padding is provided for the class fields.
Modifications:
Fix the memory layout by adding pre/post padding around the head node
reference and putting the tail node reference in the same object (a layout
sketch follows below).
Result:
Fixed false sharing
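A minimal layout sketch of the idea, assuming plain long fields as padding
(class and field names are illustrative, not Netty's actual code):
    final class PaddedMpscQueueLayout<T> {
        // pre-padding: separates 'head' from the object header and any neighbouring object
        long p00, p01, p02, p03, p04, p05, p06, p07;
        volatile T head;   // read by the single consumer thread
        // post-padding: keeps the frequently written 'tail' off the head's cache line
        long p10, p11, p12, p13, p14, p15, p16, p17;
        volatile T tail;   // written/CAS'd by producer threads
    }
Note that the JVM may reorder fields, which is why real padded classes
often split the padding across a class hierarchy instead of relying on
declaration order.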
- Use simple string concatenation instead of String.format()
- Rewrite exception messages so that they follow our style
- Merge MqttCommonUtil and MqttValidationUtil into MqttCodecUtil
- Hide MqttCodecUtil from users
- Rename MqttConnectReturnCode.value to byteValue
- Rename MqttMessageFactory.create*() to new*()
- Rename QoS to MqttQoS
- Make MqttSubAckPayload.grantedQoSLevels immutable and add a more useful
constructor
MQTT is an open protocol on top of TCP which is widely used today in
mobile communication and for IoT (the Internet of Things). This commit adds
an open source implementation of MQTT so that it becomes easier for Netty
users to implement an MQTT application.
For more information about the MQTT protocol, read this:
http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html
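A minimal sketch of wiring the codec into a pipeline (class names as found
in the codec-mqtt module of current Netty releases; application handlers are
left out):
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.mqtt.MqttDecoder;
    import io.netty.handler.codec.mqtt.MqttEncoder;

    public class MqttPipelineInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast("decoder", new MqttDecoder());
            ch.pipeline().addLast("encoder", MqttEncoder.INSTANCE); // sharable encoder instance
            // application-specific handlers for CONNECT/SUBSCRIBE/PUBLISH messages would follow here
        }
    }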
- Convert constant classes to enum
- Rename HAProxyProtocolMessage to HAProxyMessage for simplicity
- Rename HAProxyProtocolDecoder to HAProxyMessageDecoder
- Rename HAProxyProtocolCommand to HAProxyCommand
- Merge ProxiedProtocolAndFamily, ProxiedAddressFamily, and
ProxiedTransportProtocol into HAProxyProxiedProtocol and its inner
enums
- Overall clean-up
Motivation:
The proxy protocol provides client connection information for proxied
network services. Several implementations exist (e.g. HAProxy, Stunnel,
Stud, Postfix), but the primary motivation for this implementation is to
support the proxy protocol feature of Amazon Web Services Elastic Load
Balancing.
Modifications:
This commit adds a proxy protocol decoder for proxy protocol version 1
as specified at:
http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
The foundation for version 2 support is also in this commit but it is
explicitly NOT supported due to a lack of external implementations to
test against.
Result:
The proxy protocol decoder can be used to send client connection
information to inbound handlers in a channel pipeline from services
which support the proxy protocol.
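A minimal usage sketch (the printed output is illustrative); the decoder must
be the first handler so that the proxy protocol header is consumed before any
other handler reads from the channel:
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.haproxy.HAProxyMessage;
    import io.netty.handler.codec.haproxy.HAProxyMessageDecoder;

    public class ProxyProtocolInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(new HAProxyMessageDecoder());
            ch.pipeline().addLast(new SimpleChannelInboundHandler<HAProxyMessage>() {
                @Override
                protected void channelRead0(ChannelHandlerContext ctx, HAProxyMessage msg) {
                    // the original client address as reported by the load balancer / proxy
                    System.out.println("proxied client: " + msg.sourceAddress() + ':' + msg.sourcePort());
                }
            });
        }
    }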
Motivation:
Maps with integer keys are used in several places (HTTP/2 code, for
example). To reduce the memory footprint of these structures, we need a
specialized map class that uses ints as keys.
Modifications:
Added IntObjectHashMap, which uses open addressing and double hashing
for collision resolution.
Result:
A new int-based map class that can be shared across Netty.
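A minimal usage sketch (the class lives in io.netty.util.collection in
current releases):
    import io.netty.util.collection.IntObjectHashMap;

    public class IntKeyedMapExample {
        public static void main(String[] args) {
            // int keys are stored directly, so there is no Integer boxing as with HashMap<Integer, V>
            IntObjectHashMap<String> streams = new IntObjectHashMap<String>();
            streams.put(3, "headers");
            streams.put(5, "data");
            System.out.println(streams.get(3)); // prints "headers"
        }
    }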
Motivation:
The default*() tests perform their checks in a different way from the other tests; they should be consistent with the others.
Modification:
Make the default*() tests consistent with the others.
Result:
Easier to compare default and non-default allocators
Motivation:
Depth-first search is not always efficient for buddy allocation.
Modification:
Employ a new faster search algorithm with different memoryMap layout.
Result:
With the thread-local cache disabled, we see a large performance
improvement, especially when the allocation size is as small as the page
size, which previously had the largest search space:
-- master head --
Benchmark                 (size)   Mode     Score   Error   Units
pooledDirectAllocAndFree    8192  thrpt   215.392   1.565  ops/ms
pooledDirectAllocAndFree   16384  thrpt   594.625   2.154  ops/ms
pooledDirectAllocAndFree   65536  thrpt  1221.520  18.965  ops/ms
pooledHeapAllocAndFree      8192  thrpt   217.175   1.653  ops/ms
pooledHeapAllocAndFree     16384  thrpt   587.250  14.827  ops/ms
pooledHeapAllocAndFree     65536  thrpt  1217.023  44.963  ops/ms
-- changes --
Benchmark                 (size)   Mode     Score   Error   Units
pooledDirectAllocAndFree    8192  thrpt  3656.744  94.093  ops/ms
pooledDirectAllocAndFree   16384  thrpt  4087.152  22.921  ops/ms
pooledDirectAllocAndFree   65536  thrpt  4058.814  29.276  ops/ms
pooledHeapAllocAndFree      8192  thrpt  3640.355  44.418  ops/ms
pooledHeapAllocAndFree     16384  thrpt  4030.206  24.365  ops/ms
pooledHeapAllocAndFree     65536  thrpt  4103.991  70.991  ops/ms
Motivation:
LocalServerChannel.doClose() calls LocalChannelRegistry.unregister(localAddress) without checking whether localAddress is null, which produces an NPE when null is passed to the underlying ConcurrentHashMapV8.
Modification:
Check localAddress != null before trying to remove it from the map. Also added a unit test which reproduced the stack trace of the error.
Result:
No more NPE during doClose().
Motivation:
To improve the speed of a ByteBuf with order LITTLE_ENDIAN where the native order is also LITTLE_ENDIAN (intel), we introduced a new special SwappedByteBuf in commit 4ad3984c8b. Unfortunately that commit has a flaw: it does not correctly handle the case where a ByteBuf expands, because the memoryAddress was cached and never updated even when the underlying buffer expanded. This can lead to corrupted data or even a SEGFAULT of the JVM if you are lucky enough.
Modification:
Always lookup the actual memoryAddress of the wrapped ByteBuf.
Result:
No more data-corruption for ByteBuf with order LITTLE_ENDIAN and no JVM crashes.
Motivation:
At the moment AbstractBootstrap.bind(...) always uses the GlobalEventExecutor to notify the returned ChannelFuture if the registration is not done yet. This should only be done if the registration fails later. If it completes successfully, we should notify with the EventLoop of the Channel instead.
Modification:
Use the EventLoop of the Channel if possible so that the notification happens on the correct Thread, guaranteeing the right order of events.
Result:
Use the correct EventLoop for notification
Motivation:
When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.
FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.
Modifications:
- Moved FastThreadLocal and FastThreadLocalThread out of the internal
package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
initialized, and calling FastThreadLocal.removeAll() will remove all
thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduce InternalThreadLocalMap which is a mixture of hard-wired
thread local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
tells it to stop watching when its thread-local cache has been freed
by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9
Result:
- A user can remove all thread-local variables Netty created, as long as
he or she did not exit from the current thread. (Note that there's no
way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
we always implement a thread local variable via InternalThreadLocalMap
instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
performance of FastThreadLocal even more.
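A minimal usage sketch of the new API (the scratch-buffer thread local is
hypothetical; removeAll() is the cleanup hook a managed environment would
call when it is done with the current thread):
    import io.netty.util.concurrent.FastThreadLocal;

    public class ThreadLocalCleanupExample {
        private static final FastThreadLocal<StringBuilder> BUILDER = new FastThreadLocal<StringBuilder>() {
            @Override
            protected StringBuilder initialValue() {
                return new StringBuilder(512);
            }
        };

        public static void main(String[] args) {
            BUILDER.get().append("per-thread scratch space");
            // e.g. at the end of a container-managed request on this thread:
            FastThreadLocal.removeAll();   // drops every FastThreadLocal value set on the calling thread
        }
    }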
Motivation:
The code in ChannelOutboundBuffer can be simplified by using AtomicLongFieldUpdater.addAndGet(...)
Modification:
Replace our manual looping with AtomicLongFieldUpdater.addAndGet(...)
Result:
Cleaner code
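A minimal sketch of the pattern, using an illustrative counter class rather
than ChannelOutboundBuffer itself:
    import java.util.concurrent.atomic.AtomicLongFieldUpdater;

    public class PendingBytesCounter {
        private static final AtomicLongFieldUpdater<PendingBytesCounter> PENDING_UPDATER =
                AtomicLongFieldUpdater.newUpdater(PendingBytesCounter.class, "pending");

        private volatile long pending;

        long add(long delta) {
            // before: a manual compareAndSet loop that reads 'pending', adds delta and retries on contention
            // after: a single call performs the same update atomically
            return PENDING_UPDATER.addAndGet(this, delta);
        }

        public static void main(String[] args) {
            PendingBytesCounter counter = new PendingBytesCounter();
            System.out.println(counter.add(1024)); // 1024
        }
    }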
Motivation:
Currently the algorithm that calculates the new capacity of a ByteBuf is implemented in AbstractByteBuf. The problem is that a user cannot change it if it does not fit his/her use case. We should move it to ByteBufAllocator and let the user provide his/her own, either by writing a custom ByteBufAllocator or by overriding the default implementation in one of our provided ByteBufAllocators.
Modifications:
Move calculateNewCapacity(...) to ByteBufAllocator and move the implementation (which was part of AbstractByteBuf) to AbstractByteBufAllocator.
Result:
The user can now override the default calculation algorithm when needed.
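A hypothetical sketch of such an override (the power-of-two growth policy is
purely illustrative and not Netty's default):
    import io.netty.buffer.PooledByteBufAllocator;

    public class PowerOfTwoAllocator extends PooledByteBufAllocator {
        @Override
        public int calculateNewCapacity(int minNewCapacity, int maxCapacity) {
            // illustrative policy: grow straight to the next power of two (at least 64 bytes)
            int newCapacity = Integer.highestOneBit(Math.max(minNewCapacity, 64));
            if (newCapacity < minNewCapacity) {
                newCapacity <<= 1;
            }
            return Math.min(newCapacity, maxCapacity);
        }
    }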
Motivation:
If ChannelOutboundBuffer.addFlush() is called multiple times and flushed != unflushed, it still loops through all entries that are not flushed yet, even though this is no longer needed because these entries were already marked uncancellable before.
Modifications:
Check whether new messages were added since addFlush() was last called, and only loop through all entries and try to mark them uncancellable if this was the case.
Result:
Less overhead when ChannelOutboundBuffer.addFlush() is called multiple times without new messages having been added.
Motivation:
UnpooledUnsafeDirectByteBuf.setBytes(int,ByteBuf,int,int) fails to use the fast path when src is backed by an array. This is because the if/else checks the wrong ByteBuf.
Modifications:
- Use the correct ByteBuf when checking for an array as backing storage
- Also eliminate an unnecessary check in UnpooledDirectByteBuf which always fails anyway
Result:
Faster setBytes(...) when src ByteBuf is backed by an array.
Motivation:
JdkZlibDecoder fails to decode because the length of the output buffer is not calculated correctly.
This can cause an IndexOutOfBoundsException or data-corruption when the PooledByteBufAllocator is used.
Modifications:
Correctly calculate the length
Result:
No more IndexOutOfBoundsException or data-corruption.
Motivation:
We have quite a bit of code duplication between the HTTP/1, HTTP/2, SPDY,
and STOMP codecs, because they all have a notion of 'headers', which is a
multimap of string names and values.
Modifications:
- Add TextHeaders and its default implementation
- Add AsciiString to replace HttpHeaderEntity (a usage sketch follows below)
- Borrowed some portions from Apache Harmony's java.lang.String.
- Reimplement HttpHeaders, SpdyHeaders, and StompHeaders using
TextHeaders
- Add AsciiHeadersEncoder so that the encoding of a TextHeaders can be reused
- Used a dedicated encoder for HTTP headers for better performance
though
- Remove shortcut methods in SpdyHeaders
- Replace SpdyHeaders.getStatus() with HttpResponseStatus.parseLine()
Result:
- Removed quite a bit of code duplication in the header implementations.
- Slightly better performance thanks to improved header validation and
hash code calculation
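A small usage sketch of AsciiString (found in io.netty.util in current
releases; the example itself is illustrative):
    import io.netty.util.AsciiString;

    public class AsciiStringExample {
        public static void main(String[] args) {
            AsciiString name = AsciiString.of("Content-Length");
            // case-insensitive comparison without allocating lower-cased copies
            System.out.println(name.contentEqualsIgnoreCase("content-length")); // true
            // backed by a byte[] of ASCII characters, so hashing and equality are cheap
            System.out.println(name.length());                                  // 14
        }
    }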
Motivation:
Provide a faster ThreadLocal implementation
Modification:
Add a "FastThreadLocal" which uses an EnumMap and a predefined fixed set of possible thread locals (all of the static instances created by netty) that is around 10-20% faster than standard ThreadLocal in my benchmarks (and can be seen having an effect in the direct PooledByteBufAllocator benchmark that uses the DEFAULT ByteBufAllocator which uses this FastThreadLocal, as opposed to normal instantiations that do not, and in the new RecyclableArrayList benchmark);
Result:
Improved performance
Motivation:
At the moment the HashedWheelTimer only removes cancelled Timeouts once the HashedWheelBucket is processed again. Until then the instance cannot be GC'ed, as there are still strong references to it even if the user no longer references it. This can waste a lot of memory even though the Timeout was cancelled before.
Modification:
Add a new queue which holds CancelTasks that are processed on each tick to remove cancelled Timeouts. Because all of this is done only by the WorkerThread there is no need for synchronization, and only one extra object creation is needed when cancel() is executed. For addTimeout(...) no new overhead is introduced.
Result:
Less memory usage for cancelled Timeouts.
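For reference, the user-facing scenario this helps (API from io.netty.util;
the delay is arbitrary):
    import io.netty.util.HashedWheelTimer;
    import io.netty.util.Timeout;
    import io.netty.util.TimerTask;
    import java.util.concurrent.TimeUnit;

    public class CancelledTimeoutExample {
        public static void main(String[] args) {
            HashedWheelTimer timer = new HashedWheelTimer();
            Timeout timeout = timer.newTimeout(new TimerTask() {
                @Override
                public void run(Timeout t) {
                    System.out.println("fired");
                }
            }, 30, TimeUnit.SECONDS);
            // with this change the cancelled Timeout is unlinked on the next tick,
            // instead of staying strongly referenced until its bucket is processed
            timeout.cancel();
            timer.stop();
        }
    }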
Motivation:
Each DefaultChannelPipeline instance creates a head and a tail context that wrap a handler. These are used to chain together the other DefaultChannelHandlerContexts that are created once a new ChannelHandler is added. There are a few things here that can be improved in terms of memory usage and initialization time.
Modification:
- Only generate the names for the tail and head once, as they never change anyway
- Rename DefaultChannelHandlerContext to AbstractChannelHandlerContext and make it abstract
- Create a new DefaultChannelHandlerContext that is used when a ChannelHandler is added to the DefaultChannelPipeline
- Rename TailHandler to TailContext and HeadHandler to HeadContext and let them extend AbstractChannelHandlerContext. This way we can save 2 object creations per DefaultChannelPipeline
Result:
- Less memory usage because we have 2 fewer objects per DefaultChannelPipeline
- Faster creation of a DefaultChannelPipeline as we no longer need to generate the names for the head and tail
Motivation:
As part of GSoC 2013 we had @mbakkar working on a DNS codec, but we did not integrate it yet as it needed some cleanup. This commit is based on @mbakkar's work and provides the DNS codec.
Modifications:
Add DNS codec
Result:
A reusable DNS codec will be included in Netty.
This PR also includes an AsynchronousDnsResolver which allows resolving DNS
entries in a non-blocking way by making use of the DNS codec and the Netty
transport itself.
Motivation:
According to RFC 2616 section 19, the boundary string may be quoted, but
currently the PostRequestDecoder does not support this while it should.
Modifications:
Once the boundary is found, a check is made to verify whether the boundary
is "quoted", and if so, it is "unquoted".
Note: in subsequent uses of this boundary (as a delimiter), quotes no longer
seem to be allowed according to the same RFC, which is why only the boundary
definition is corrected.
Result:
The boundary may now be quoted or unquoted. A JUnit test case checks this.
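A minimal sketch of the unquoting check, assuming a helper of this shape
rather than the decoder's actual code:
    public class BoundaryExample {
        // strip optional quotes around the boundary parameter of a multipart Content-Type
        static String unquoteBoundary(String boundary) {
            if (boundary.length() >= 2
                    && boundary.charAt(0) == '"'
                    && boundary.charAt(boundary.length() - 1) == '"') {
                return boundary.substring(1, boundary.length() - 1);
            }
            return boundary;
        }

        public static void main(String[] args) {
            System.out.println(unquoteBoundary("\"----WebKitBoundaryAbc123\"")); // ----WebKitBoundaryAbc123
            System.out.println(unquoteBoundary("simpleBoundary"));               // unchanged
        }
    }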
Motivation:
When an attribute ends with an odd number of CR (0x0D) bytes, the decoder
adds an extra CR to the decoded attribute when it should not.
Modifications:
Each time a CR is detected, the next byte is tested to see whether it is LF.
If it is not, in a number of places the CR byte was lost while it should not
be. When a CR is detected and the next byte is not LF, the CR byte is now
kept, since the position points to the next byte (not an LF). When a CR is
detected and no other bytes are available yet, the position is reset to the
position of the CR (since an LF could still follow).
A new JUnit test case is added, using the DECODER and a variable number of
CRs in the final attribute (testMultipartCodecWithCRasEndOfAttribute).
Result:
The attribute is now correctly decoded with the right number of CR
ending bytes.
Motivation:
Our Unsafe*ByteBuf implementations always invert bytes when the native ByteOrder is LITTLE_ENDIAN (as it is on intel), even when the user calls order(ByteOrder.LITTLE_ENDIAN). This is not optimal for performance, as the user should be able to set the ByteOrder to LITTLE_ENDIAN and write bytes without the extra inverting.
Modification:
- Introduce a new special SwappedByteBuf (called UnsafeDirectSwappedByteBuf) that is used by all the Unsafe*ByteBuf implementations and allows writing without inverting the bytes.
- Add benchmark
- Upgrade jmh to 0.8
Result:
The user is able to get the maximum performance even on servers that have ByteOrder.LITTLE_ENDIAN as their native ByteOrder.
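A small usage sketch of the fast path from the user's point of view (buffer
size and value are arbitrary):
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import java.nio.ByteOrder;

    public class LittleEndianWriteExample {
        public static void main(String[] args) {
            ByteBuf buf = Unpooled.directBuffer(8).order(ByteOrder.LITTLE_ENDIAN);
            // on a little-endian host with the unsafe direct buffers, this write can now
            // skip the byte swap entirely
            buf.writeInt(0x0A0B0C0D);
            System.out.println(buf.getByte(0)); // 13 (0x0D): the lowest byte is stored first
            buf.release();
        }
    }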
Motivation:
StompSubframeEncoderTest fails because StompHeaders does not preserve the order in which the headers were set.
Modifications:
Use LinkedHashMap instead of HashMap
Result:
Fixes test failures
Motivation:
We have different message aggregator implementations for different
protocols, but they are very similar to each other. They all stem
from HttpObjectAggregator. If we provide an abstract class that offers
generic message aggregation functionality, we can remove this code
duplication.
Modifications:
- Add MessageAggregator which provides generic message aggregation
- Reimplement all existing aggregators using MessageAggregator
- Add DecoderResultProvider interface and extend it wherever possible so
that MessageAggregator respects the state of the decoded message
Result:
Less code duplication
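Aggregator usage itself does not change; for reference, a typical HTTP
pipeline looks like this (the 1 MiB limit is illustrative):
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.http.HttpObjectAggregator;
    import io.netty.handler.codec.http.HttpServerCodec;

    public class AggregatingHttpInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline().addLast(new HttpServerCodec());
            // aggregates an HttpMessage and its HttpContent chunks into a single FullHttpMessage
            ch.pipeline().addLast(new HttpObjectAggregator(1024 * 1024));
            // handlers added after this point receive FullHttpRequest instances
        }
    }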
Motivation:
MpscLinkedQueue has various issues:
- It does not work without sun.misc.Unsafe.
- Some field names are confusing.
- Node.tail does not really refer to the tail node.
- The tail node is the starting point of iteration. I think the tail
node should be the head node and vice versa to reduce confusion.
- Some important methods are not implemented (e.g. iterator())
- Not serializable
- Potential false cache sharing problem due to lack of padding
- MpscLinkedQueue extends AtomicReference and thus exposes various
operations that mutate the internal state of the queue directly.
Modifications:
- Use AtomicReferenceFieldUpdater wherever possible so that we do not
use Unsafe directly, e.g. use lazySet() instead of putOrderedObject
(see the sketch below).
- Extend AbstractQueue to implement most operations
- Implement serialization and iterator()
- Rename tail to head and head to tail to reduce confusion.
- Rename Node.tail to Node.next.
- Fix a leak where the references in the removed head are not cleared
properly.
- Add Node.clearMaybe() method so that the value of the new head node
is cleared if possible.
- Add some comments for my own educational purposes
- Add padding to the head node
- Add FullyPaddedReference and RightPaddedReference for future reuse
- Make MpscLinkedQueue package-local so that a user cannot access the
dangerous yet public operations exposed by the superclass.
- MpscLinkedQueue.Node becomes MpscLinkedQueueNode, a top level class
Result:
- It's more like a drop-in replacement of ConcurrentLinkedQueue for the
MPSC case.
- Works without sun.misc.Unsafe
- Code potentially easier to understand
- Fixed leak (related: #2372)
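A minimal sketch of the Unsafe-free ordered write mentioned above (class and
field names are illustrative):
    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    public class LazySetExample {
        private static final AtomicReferenceFieldUpdater<LazySetExample, String> NEXT_UPDATER =
                AtomicReferenceFieldUpdater.newUpdater(LazySetExample.class, String.class, "next");

        private volatile String next;

        void publish(String value) {
            // ordered write: same effect as Unsafe.putOrderedObject, but via a public JDK API
            NEXT_UPDATER.lazySet(this, value);
        }

        public static void main(String[] args) {
            LazySetExample node = new LazySetExample();
            node.publish("payload");
            System.out.println(node.next);
        }
    }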
Motivation:
At the moment ChannelFlushPromiseNotifier.add(...) takes an int value for pendingDataSize, which may be too small as a user may need to use a long, for example when writing a FileRegion. Besides this, the notify* method names are somewhat misleading as they should not contain *Future*, because it is about ChannelPromises.
Modification:
Add a new add(...) method that takes a long for pendingDataSize and @deprecated the old method. Besides this, also @deprecated all *Future* methods and add methods that have *Promise* in the method name to better reflect their usage.
Result:
ChannelFlushPromiseNotifier can be used with bigger data.
Motivation:
Currently OkResponseHandler returns a DefaultHttpResponse, which is not
correct; it should return a complete HTTP response.
Modifications:
Updated OkResponseHandler to return an instance of
DefaultFullHttpResponse.
Result:
It is now possible to add compression to the example without getting any
errors.
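A sketch of the kind of response the handler now returns (status, body, and
header handling here are illustrative, not the example's exact code):
    import io.netty.buffer.Unpooled;
    import io.netty.handler.codec.http.DefaultFullHttpResponse;
    import io.netty.handler.codec.http.FullHttpResponse;
    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.HttpVersion;
    import io.netty.util.CharsetUtil;

    public class OkResponseExample {
        static FullHttpResponse okResponse() {
            // a complete response (headers + content in one object), which handlers such as
            // HttpContentCompressor can process as a single unit
            FullHttpResponse response = new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                    Unpooled.copiedBuffer("OK", CharsetUtil.US_ASCII));
            response.headers().set("Content-Length", response.content().readableBytes());
            return response;
        }
    }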
Motivation:
ChannelTrafficShapingHandler may corrupt the inbound data stream by
scheduling the fireChannelRead event.
Modification:
Always call fireChannelRead(...) and only suspend reads after it
Result:
No more data corruption