Commit Graph

7 Commits

Author SHA1 Message Date
Norman Maurer
030bcaae81 Improve performance of Recycler
Motivation:

Recycler is used in many places to reduce GC pressure, but it is still not as fast as it could be because of the internal data structures it uses.

Modification:

 - Rewrite Recycler to use a WeakOrderQueue, which makes minimal guarantees about order and visibility for maximum performance.
 - Recycling the same object multiple times without acquiring it in between now fails (see the usage sketch below).
 - Introduce a RecyclableMpscLinkedQueueNode, which can be used for MpscLinkedQueueNode implementations that use the Recycler.

These changes are based on @belliottsmith's work that was part of #2504.
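
For context, the usage pattern that this rewrite has to keep fast looks roughly like the sketch below. This is a hedged sketch, not code from this commit: PooledObject is a hypothetical class, and the get()/recycle(Object, Handle)/newObject(Handle) signatures are assumed to match the 4.0-era Recycler API.

    import io.netty.util.Recycler;

    // Hypothetical pooled type following the usual Recycler pattern: acquire via get(),
    // return via recycle(...). After this change, recycling the same instance twice
    // without acquiring it again in between fails instead of silently corrupting the pool.
    public final class PooledObject {

        private static final Recycler<PooledObject> RECYCLER = new Recycler<PooledObject>() {
            @Override
            protected PooledObject newObject(Handle handle) {
                return new PooledObject(handle);
            }
        };

        private final Recycler.Handle handle;
        private Object payload;

        private PooledObject(Recycler.Handle handle) {
            this.handle = handle;
        }

        public static PooledObject newInstance(Object payload) {
            PooledObject obj = RECYCLER.get();   // reuses a previously recycled instance if one is available
            obj.payload = payload;
            return obj;
        }

        public boolean recycle() {
            payload = null;                      // drop references before returning the object to the pool
            return RECYCLER.recycle(this, handle);
        }
    }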

Result:

Huge increase in performance: roughly 1.8x higher throughput in the RecyclableArrayListBenchmark shown below.

4.0 branch without this commit:

Benchmark                                                (size)   Mode   Samples        Score  Score error    Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00000  thrpt        20 116026994.130  2763381.305    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00256  thrpt        20 110823170.627  3007221.464    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    01024  thrpt        20 118290272.413  7143962.304    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    04096  thrpt        20 120560396.523  6483323.228    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    16384  thrpt        20 114726607.428  2960013.108    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    65536  thrpt        20 119385917.899  3172913.684    ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.617 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark

4.0 branch with this commit:

Benchmark                                                (size)   Mode   Samples        Score  Score error    Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00000  thrpt        20 204158855.315  5031432.145    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00256  thrpt        20 205179685.861  1934137.841    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    01024  thrpt        20 209906801.437  8007811.254    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    04096  thrpt        20 214288320.053  6413126.689    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    16384  thrpt        20 215940902.649  7837706.133    ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    65536  thrpt        20 211141994.206  5017868.542    ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.648 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
2014-06-24 10:47:38 +02:00
Norman Maurer
8180f7922f Optimize DefaultChannelPipeline in terms of memory usage and initialization time
Motivation:
Each DefaultChannelPipeline instance creates a head and a tail that wrap a handler. These are used to chain together the other DefaultChannelHandlerContext instances that are created whenever a new ChannelHandler is added. A few things here can be improved in terms of memory usage and initialization time.

Modification:
- Generate the names for the head and tail only once, as they never change anyway
- Rename DefaultChannelHandlerContext to AbstractChannelHandlerContext and make it abstract
- Create a new DefaultChannelHandlerContext that is used when a ChannelHandler is added to the DefaultChannelPipeline
- Rename TailHandler to TailContext and HeadHandler to HeadContext and let them extend AbstractChannelHandlerContext. This way we can save 2 object creations per DefaultChannelPipeline

Result:
- Less memory usage, because we create 2 fewer objects per DefaultChannelPipeline
- Faster creation of DefaultChannelPipeline, as we no longer need to generate the names for the head and tail
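
A standalone sketch of the head/tail change (hypothetical names, not the actual Netty classes): instead of allocating a separate handler object and wrapping it in a context, the end-of-pipeline contexts implement the handler interface themselves and reuse a constant name.

    // Before: each pipeline did roughly "new Context(name, new TailHandler())" -> two objects per end.
    // After: one object plays both roles, and the name is generated only once.
    interface Handler {
        void onEvent(Object evt);
    }

    abstract class AbstractContext {
        final String name;

        AbstractContext(String name) {
            this.name = name;
        }

        abstract Handler handler();
    }

    final class TailContext extends AbstractContext implements Handler {
        static final String TAIL_NAME = "tail";  // computed once, never changes

        TailContext() {
            super(TAIL_NAME);
        }

        @Override
        Handler handler() {
            return this;                         // the context is its own handler
        }

        @Override
        public void onEvent(Object evt) {
            // terminal context: discard events that reached the end of the pipeline
        }
    }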
2014-06-10 12:18:38 +02:00
Norman Maurer
8c3eaf3b56 Synchronized between 4.1 and master again
Motivation:
4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master.  We must make sure 4.1 and master are not very
different now.

Modification:
Remove ChannelHandlerInvoker.writeAndFlush(...) and the related implementations.

Result:
4.1 and master got closer.
2014-04-24 21:28:24 +02:00
Trustin Lee
db3709e652 Synchronized between 4.1 and master
Motivation:

4 and 5 diverged a long time ago, and we recently reverted some of the
early commits in master.  We must make sure 4.1 and master are not very
different now.

Modification:

Fix the differences that were found

Result:

4.1 and master got closer.
2014-04-25 00:38:02 +09:00
Norman Maurer
9e02a9bbee Move validatePromise(...) to ChannelHandlerInvokerUtil. Related to [#2398]
Motivation:
Once a user implements a custom ChannelHandlerInvoker, they need to validate the ChannelPromise themselves. We should expose a utility method for this.

Modifications:
Move validatePromise(...) from DefaultChannelHandlerInvoker to ChannelHandlerInvokerUtil and make it public.

Result:
Users are able to reuse the validation code.
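
A hedged sketch of the intended use from a custom invoker against the master branch of that time. The parameter list and boolean return are assumed to match what DefaultChannelHandlerInvoker used internally, roughly validatePromise(ctx, promise, allowVoidPromise); CallerThreadInvoker is a hypothetical class, and only the write path is shown.

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelHandlerInvoker;
    import io.netty.channel.ChannelHandlerInvokerUtil;
    import io.netty.channel.ChannelPromise;

    // Custom invoker that runs the write on the calling thread but reuses the shared
    // promise validation instead of copying it. Declared abstract so the remaining
    // ChannelHandlerInvoker methods can be omitted from the sketch.
    public abstract class CallerThreadInvoker implements ChannelHandlerInvoker {

        @Override
        public void invokeWrite(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
            if (!ChannelHandlerInvokerUtil.validatePromise(ctx, promise, false)) {
                return; // cancelled or otherwise invalid promise: do not invoke the handler
            }
            // ... invoke the handler's write(...) on the calling thread ...
        }
    }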
2014-04-17 16:01:26 +02:00
Norman Maurer
772a9d2610 [#2349] Correctly handle cancelled ChannelPromise in DefaultChannelHandlerContext
Motivation:
At the moment an IllegalArgumentException is thrown if a ChannelPromise is cancelled while being propagated through the ChannelPipeline. This is not correct; we should just stop propagating it, as it is valid to cancel at any time.

Modifications:
Stop propagating the operation through the ChannelPipeline once its ChannelPromise is cancelled.

Result:
No more IllegalArgumentException when a ChannelPromise is cancelled while moving through the ChannelPipeline.
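
In code terms, the new behavior is roughly the following conceptual sketch (not the actual DefaultChannelHandlerContext source):

    import io.netty.channel.ChannelPromise;
    import io.netty.util.ReferenceCountUtil;

    // Before handing the operation to the next context, check whether the promise has
    // already been cancelled and, if so, stop instead of throwing IllegalArgumentException.
    final class CancelAwareWriter {

        void write(Object msg, ChannelPromise promise) {
            if (promise.isDone() && promise.isCancelled()) {
                ReferenceCountUtil.release(msg); // the write will never happen, so free the message
                return;                          // stop propagating through the pipeline
            }
            // ... propagate the write to the next ChannelHandlerContext as before ...
        }
    }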
2014-03-31 07:53:22 +02:00
Trustin Lee
844362a947 User-definable thread model via ChannelHandlerInvoker
Motivation:

While the default thread model provided by Netty is reasonable enough for most applications, some users might have a special requirement for the thread model.  Here are a few examples:

- A user might want to invoke handlers from the caller thread directly, assuming that his or her application is completely asynchronous and does not make any invocation from a non-I/O thread.  In this case, the default invoker implementation only adds the overhead of checking whether the current thread is an I/O thread (see the sketch at the end of this message).
- A user might want the flexibility to invoke handlers from different threads depending on the type of event.

Modifications:

- Backport 132af3a485 which is a fix for #1912
  - Add a new interface called 'ChannelHandlerInvoker' that performs the invocation of event handler methods.
  - Add pipeline manipulation methods that accept ChannelHandlerInvoker
- The differences from the original commit:
  - Separated the irrelevant changes out
  - Channel.eventLoop is null until the registration is complete in this branch, so Channel.Unsafe.invoker() doesn't work before registration.
  - Deregistration is not gone in this branch, so the methods related to deregistration were added to ChannelHandlerInvoker
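
To illustrate the first motivation point above, here is a standalone sketch (hypothetical types, not the actual ChannelHandlerInvoker interface) of the difference between the default "run on the I/O thread" behavior and an invoker that always runs on the caller thread:

    import java.util.concurrent.Executor;

    // Hypothetical invoker abstraction: decides on which thread a handler callback runs.
    interface Invoker {
        void invoke(Runnable handlerCallback);
    }

    // Default-style behavior: run inline if we are already on the I/O thread,
    // otherwise hand the callback off to the event loop.
    final class EventLoopInvoker implements Invoker {
        private final Executor eventLoop;
        private final Thread eventLoopThread;

        EventLoopInvoker(Executor eventLoop, Thread eventLoopThread) {
            this.eventLoop = eventLoop;
            this.eventLoopThread = eventLoopThread;
        }

        @Override
        public void invoke(Runnable handlerCallback) {
            if (Thread.currentThread() == eventLoopThread) {
                handlerCallback.run();              // already on the I/O thread
            } else {
                eventLoop.execute(handlerCallback); // hop over to the I/O thread
            }
        }
    }

    // For a fully asynchronous application that only ever calls from the I/O thread,
    // the check above is pure overhead; this invoker simply runs the callback inline.
    final class ImmediateInvoker implements Invoker {
        @Override
        public void invoke(Runnable handlerCallback) {
            handlerCallback.run();
        }
    }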
2014-03-24 18:09:27 +09:00