Commit Graph

535 Commits

Author SHA1 Message Date
Norman Maurer
f18990a8a5 [#3654] Synchronize on PoolSubpage head when allocate / free PoolSubpages
Motivation:

Currently we hold a lock on the PoolArena when we allocate / free PoolSubpages, which is wasteful as this also affects "normal" allocations. The same is true vice versa.

Modifications:

Ensure we synchronize on the head of the PoolSubpage pool. There is one head per size, so it is possible to concurrently allocate / deallocate PoolSubpages of different sizes, as well as normal allocations.

Result:

Less contention and so faster allocation/deallocation.
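
A rough sketch of the locking change (class and method names are invented for illustration and do not match the actual PoolArena code):

    // One lock object ("head") per tiny/small size class instead of one lock per arena.
    final class ArenaSketch {
        private final Object[] subpageHeads = new Object[32];

        ArenaSketch() {
            for (int i = 0; i < subpageHeads.length; i++) {
                subpageHeads[i] = new Object();
            }
        }

        void allocateSubpage(int sizeClassIdx) {
            // Before: synchronized (this) serialized every allocation in the arena.
            // After: only allocations of the same size class contend with each other,
            // and normal (page-level) allocations are not blocked at all.
            synchronized (subpageHeads[sizeClassIdx]) {
                // ... allocate or free a PoolSubpage of this size class ...
            }
        }
    }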

Before this commit:
xxx:~/wrk $ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua  http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    17.61ms   29.52ms 689.73ms   97.27%
    Req/Sec   278.93k    41.97k  351.04k    84.83%
  530527460 requests in 2.00m, 71.64GB read
Requests/sec: 4422226.13
Transfer/sec:    611.52MB

After this commit:
xxx:~/wrk $ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua  http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    15.85ms   24.50ms 681.61ms   97.42%
    Req/Sec   287.14k    38.39k  360.33k    85.88%
  547902773 requests in 2.00m, 73.99GB read
Requests/sec: 4567066.11
Transfer/sec:    631.55MB

This is reproducible every time.
2015-05-27 10:32:52 +02:00
Norman Maurer
7e80e1bf97 [#3654] No need to hold lock while destroy a chunk
Motivation:

At the moment we sometimes hold the lock on the PoolArena while destroying a PoolChunk. This is not needed.

Modification:

- Ensure we do not hold the lock while destroying a PoolChunk
- Move all synchronized usage into PoolArena
- Cleanup

Result:

Less contention.
2015-05-27 09:47:34 +02:00
Norman Maurer
8fed94eee3 Expose metrics for PooledByteBufAllocator
Motivation:

The PooledByteBufAllocator is more or less a black box at the moment. We need to expose some metrics to allow the user to get a better idea of how to tune it.

Modifications:

- Expose different metrics via PooledByteBufAllocator
- Add *Metrics interfaces

Result:

It is now easy to gather metrics and details about the PooledByteBufAllocator and so get a better understanding of resource usage etc.
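
A minimal usage sketch of the exposed metrics; the accessor names below are assumptions based on the commit description and may not match the actual API exactly:

    import io.netty.buffer.PoolArenaMetric;
    import io.netty.buffer.PooledByteBufAllocator;

    public class AllocatorMetricsExample {
        public static void main(String[] args) {
            PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
            // Assumed accessors: numHeapArenas(), numDirectArenas(), directArenas().
            System.out.println("heap arenas:   " + alloc.numHeapArenas());
            System.out.println("direct arenas: " + alloc.numDirectArenas());
            for (PoolArenaMetric arena : alloc.directArenas()) {
                System.out.println("allocations=" + arena.numAllocations()
                        + ", deallocations=" + arena.numDeallocations());
            }
        }
    }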
2015-05-20 21:01:37 +02:00
Norman Maurer
8a1a0b2c1e Add PooledSlicedByteBuf and PooledDuplicatedByteBuf
Motivation:

At the moment, calling slice(...) or duplicate(...) on a Pooled*ByteBuf creates a new SlicedByteBuf or DuplicatedByteBuf. This can create a lot of GC.

Modifications:

Add PooledSlicedByteBuf and PooledDuplicatedByteBuf, which will be used when the buffer being sliced or duplicated is a PooledByteBuf.

Result:

Less GC.
2015-05-20 11:37:41 +02:00
Norman Maurer
981292ffc0 Clarify ByteBuf.duplicate() semantics.
Motivation:

From the javadocs of ByteBuf.duplicate() it is not clear if the reader and writer marks will be duplicated.

Modifications:

Add a sentence to clarify that the marks will not be duplicated.

Result:

Clear semantics.
2015-05-20 08:32:43 +02:00
Norman Maurer
979fdfd3d3 Reset markers when obtain PooledByteBuf.
Motivation:

When allocating a PooledByteBuf we need to ensure that the markers for the readerIndex and writerIndex are also reset.

Modifications:

- Correctly reset the markers
- Add a test case for it

Result:

Correctly reset markers.
2015-05-20 07:29:21 +02:00
Norman Maurer
dc73c15df3 No need to release lock and acquire again when allocate normal size.
Motivation:

When we try to allocate tiny- or small-sized buffers and fail to serve them out of the PoolSubpage, we exit the synchronized
block just to enter it again when calling allocateNormal(...).

Modification:

Do not exit the synchronized block until allocateNormal(...) is done.

Result:

Better performance.
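
A simplified sketch of the control-flow change (invented names, not the actual PoolArena code):

    final class AllocateSketch {
        void allocate(int normCapacity) {
            synchronized (this) {
                if (isTinyOrSmall(normCapacity) && allocateFromSubpage(normCapacity)) {
                    return;                  // served from a PoolSubpage
                }
                // Before this change the synchronized block ended here and
                // allocateNormal(...) re-acquired the same lock internally.
                allocateNormal(normCapacity);
            }
        }

        private boolean isTinyOrSmall(int normCapacity) { return normCapacity < 8192; }
        private boolean allocateFromSubpage(int normCapacity) { return false; }
        private void allocateNormal(int normCapacity) { /* allocate a run from a chunk */ }
    }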
2015-05-18 10:23:29 +02:00
Norman Maurer
d1c46ca987 [maven-release-plugin] prepare for next development iteration 2015-05-07 11:33:47 -04:00
Norman Maurer
005d4a42fc [maven-release-plugin] prepare release netty-4.0.28.Final 2015-05-07 11:33:09 -04:00
Norman Maurer
0f4d3a981e Revert "[maven-release-plugin] prepare for next development iteration"
This reverts commit 3c10ffab5e.
2015-05-07 11:02:03 -04:00
Norman Maurer
3c10ffab5e [maven-release-plugin] prepare for next development iteration 2015-05-07 09:09:23 -04:00
Alwayswithme
c727c16707 ByteBufUtil use IndexOfProcessor to find occurrence.
Motivation:
firstIndexOf and lastIndexOf iterate over the ByteBuf in a way similar to forEachByte and forEachByteDesc, but perform many range checks.
Modifications:
Use forEachByte and an IndexOfProcessor to find the occurrence.
Result:
Eliminated range checks.
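A sketch of the pattern, assuming the Netty 4.0 ByteBufProcessor interface (the actual IndexOfProcessor in ByteBufUtil may differ in detail):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufProcessor;
    import io.netty.buffer.Unpooled;

    public class IndexOfExample {
        // Return false to stop iteration at the first matching byte.
        static final class IndexOfProcessor implements ByteBufProcessor {
            private final byte wanted;
            IndexOfProcessor(byte wanted) { this.wanted = wanted; }
            @Override
            public boolean process(byte value) {
                return value != wanted;
            }
        }

        public static void main(String[] args) {
            ByteBuf buf = Unpooled.copiedBuffer(new byte[] { 1, 2, 3, 4 });
            // forEachByte checks the range once and returns the index where the
            // processor returned false, or -1 if it never did.
            int index = buf.forEachByte(new IndexOfProcessor((byte) 3));
            System.out.println(index); // 2
            buf.release();
        }
    }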
2015-05-07 09:50:07 +02:00
Norman Maurer
f242bc5a1a [#3623] CompositeByteBuf.iterator() should return optimized Iterable
Motivation:

CompositeByteBuf.iterator() currently creates a new ArrayList and fills it with the ByteBufs, which is more expensive than it needs to be.

Modifications:

- Use special Iterator implementation

Result:

Less overhead when calling iterator()
2015-04-20 10:45:28 +02:00
garywu
8843d2b1a1 [#2925] Bug fix for NormalMemoryRegionCache overbooked for PoolThreadCache
Motivation:

When creating the NormalMemoryRegionCache for the PoolThreadCache, we overbooked
the cache array size. This means unnecessary overhead for the thread-local cache,
as we will create multiple cache entries for each element in the cache array.

Modifications:

change:
int arraySize = Math.max(1, max / area.pageSize);
to:
int arraySize = Math.max(1, log2(max / area.pageSize) + 1);

Result:

Now arraySize won't introduce unnecessary overhead.
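
As a worked example (assuming the default pageSize of 8192 and a maximum cached buffer capacity of 32768): the normal caches only need one slot per power-of-two size between one page and the maximum, i.e. 8K, 16K and 32K, which is log2(32768 / 8192) + 1 = 3 slots. The old formula allocated 32768 / 8192 = 4, and the overbooking grows quickly with larger maximums (e.g. a 1 MiB maximum would get 128 slots instead of 8).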

 Changes to be committed:
	modified:   buffer/src/main/java/io/netty/buffer/PoolThreadCache.java
2015-04-13 09:00:51 +02:00
Norman Maurer
bfb6189f77 Let CompositeByteBuf implement Iterable
Motivation:

CompositeByteBuf has an iterator() method but fails to implement Iterable

Modifications:

Let CompositeByteBuf implement Iterable<ByteBuf>

Result:

Easier usage
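
A small usage example enabled by this change (standard Netty API):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    public class ComponentIterationExample {
        public static void main(String[] args) {
            CompositeByteBuf composite = Unpooled.compositeBuffer();
            composite.addComponents(
                    Unpooled.copiedBuffer("foo", CharsetUtil.UTF_8),
                    Unpooled.copiedBuffer("bar", CharsetUtil.UTF_8));
            // Possible now that CompositeByteBuf implements Iterable<ByteBuf>:
            for (ByteBuf component : composite) {
                System.out.println(component.toString(CharsetUtil.UTF_8));
            }
            composite.release();
        }
    }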
2015-04-12 13:38:39 +02:00
Norman Maurer
e4900cbdd3 Revert "Dereference when calling PooledByteBuf.deallocate()"
This reverts commit ddd19f4f21.
2015-04-11 06:44:10 +02:00
Norman Maurer
ddd19f4f21 Dereference when calling PooledByteBuf.deallocate()
Motivation:

We failed to dereference the chunk and tmpNioBuf when calling deallocate(). This means the GC cannot collect these, as we still hold a reference while the PooledByteBuf sits in the recycler stack.

Modifications:

Dereference chunk and tmpNioBuf.

Result:

GC can collect things.
2015-04-10 21:46:43 +02:00
Norman Maurer
30f60ac68a Change PoolThreadCache to use LIFO for better cache performance
Motivation:

At the moment we use FIFO for the PoolThreadCache, which is sub-optimal as this may reduce the chances of the cached memory actually still being in the CPU cache.

Modification:

- Change to LIFO as this increases the chance of being able to serve buffers from the CPU cache

Results:

Faster allocation out of the ThreadLocal cache.
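
The ordering change is equivalent to switching a queue from FIFO to LIFO; a purely illustrative sketch (the real cache structure differs):

    import java.util.ArrayDeque;

    final class CacheOrderSketch {
        private final ArrayDeque<byte[]> cache = new ArrayDeque<byte[]>();

        void release(byte[] memory) {
            cache.push(memory);        // most recently freed memory goes on top
        }

        byte[] allocate() {
            // FIFO would be cache.pollLast(); LIFO returns the memory that was
            // touched most recently and is therefore more likely still in the CPU cache.
            return cache.poll();
        }
    }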

Before the commit:
[xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua  http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.69ms   10.06ms 131.43ms   80.10%
    Req/Sec   283.89k    40.37k  433.69k    66.81%
  533859742 requests in 2.00m, 72.09GB read
Requests/sec: 4449510.51
Transfer/sec:    615.29MB

After the commit:
[xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua  http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.38ms   26.32ms 734.06ms   97.38%
    Req/Sec   283.86k    39.31k  361.69k    83.38%
  540836511 requests in 2.00m, 73.04GB read
Requests/sec: 4508150.18
Transfer/sec:    623.40MB
2015-04-10 20:57:31 +02:00
Norman Maurer
f2fedbcdef [maven-release-plugin] prepare for next development iteration 2015-03-31 22:06:30 -04:00
Norman Maurer
054e7c5d17 [maven-release-plugin] prepare release netty-4.0.27.Final 2015-03-31 22:05:43 -04:00
Leo Gomes
2cc7466266 Updates the javadoc of Unpooled to remove mention to methods it does not provide
Motivation:

The `Unpooled` javadoc mentioned the generation of hex dumps and swapping an integer's byte order,
which are actually provided by `ByteBufUtil`.

Modifications:

 Sentence moved to `ByteBufUtil` javadoc.

Result:

`Unpooled` javadoc is correct.
2015-03-04 12:03:50 +09:00
Norman Maurer
37264bb72b [maven-release-plugin] prepare for next development iteration 2015-03-02 01:31:30 -05:00
Norman Maurer
0dbc96cffd [maven-release-plugin] prepare release netty-4.0.26.Final 2015-03-02 01:30:58 -05:00
Norman Maurer
e99d89c04d [maven-release-plugin] rollback the release of netty-4.0.26.Final 2015-02-28 21:28:06 +01:00
Norman Maurer
b86e2e6ac0 [maven-release-plugin] prepare release netty-4.0.26.Final 2015-02-28 13:55:01 -05:00
Norman Maurer
5f71b6dc54 Ensure CompositeByteBuf.addComponent* handles buffer in consistent way and not causes leaks
Motivation:

At the moment we have two problems:
 - CompositeByteBuf.addComponent(...) will not add the supplied buffer to the CompositeByteBuf if it is empty, which means it will not be released when CompositeByteBuf.release() is called. This is a problem as a user will expect everything that was added to be released (the user does not know we did not add it).
 - CompositeByteBuf.addComponents(...) will either add no buffers if none are readable, and so has the same problem as addComponent(...), or directly release the ByteBuf if at least one ByteBuf is readable. Again this gives inconsistent handling and may lead to memory leaks.

Modifications:

 - Always add the buffer to the CompositeByteBuf and so release it when release() is called.

Result:

Consistent handling and no buffer leaks.
2015-02-12 16:09:24 +01:00
Trustin Lee
0e61aeb849 [maven-release-plugin] prepare for next development iteration 2014-12-31 20:58:44 +09:00
Trustin Lee
087db82e78 [maven-release-plugin] prepare release netty-4.0.25.Final 2014-12-31 20:58:33 +09:00
Trustin Lee
0b8f47da04 Implement internal memory access methods of CompositeByteBuf correctly
Motivation:

When a CompositeByteBuf is empty (i.e. has no component), its internal
memory access operations do not always behave as expected.

Modifications:

Check if the number of components is zero. If so, return an empty
array or an empty NIO buffer, etc.

Result:

More robustness
2014-12-30 15:52:57 +09:00
Trustin Lee
38fae09f23 Add more tests to EmptyByteBufTest
- Ensure an EmptyByteBuf has an array, an NIO buffer, and a memory
  address at the same time
- Add an assertion that checks if EMPTY_BUFFER is an EmptyByteBuf,
  just in case we make a mistake in the future
2014-12-30 15:50:49 +09:00
Norman Maurer
61a5e60513 Provide helper methods in ByteBufUtil to write UTF-8/ASCII CharSequences. Related to [#909]
Motivation:

We expose no methods in ByteBuf to directly write a CharSequence into it. This forces the user to either convert the CharSequence to a byte array first or use a CharsetEncoder. Both cases have some overhead, and we can do a lot better for well-known charsets like UTF-8 and ASCII.

Modifications:

Add ByteBufUtil.writeAscii(...) and ByteBufUtil.writeUtf8(...) which can do the task in an optimized way. This is especially true if the passed-in ByteBuf extends AbstractByteBuf, which is the case for all of our implementations that do not wrap another ByteBuf.

Result:

Writing an ASCII or UTF-8 CharSequence into an AbstractByteBuf is a lot faster than what the user could do by hand, as we can make use of some package-private methods and so eliminate reference and range checks. When the CharSequence is not ASCII or UTF-8 we can still do a very good job and are on par in most cases with what the user would do.

The following benchmark shows the improvements:

Result: 2456866.966 ±(99.9%) 59066.370 ops/s [Average]
  Statistics: (min, avg, max) = (2297025.189, 2456866.966, 2586003.225), stdev = 78851.914
  Confidence interval (99.9%): [2397800.596, 2515933.336]

Benchmark                                                        Mode   Samples        Score  Score error    Units
i.n.m.b.ByteBufUtilBenchmark.writeAscii                         thrpt        50  9398165.238   131503.098    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiString                   thrpt        50  9695177.968   176684.821    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringViaArray           thrpt        50  4788597.415    83181.549    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringViaArrayWrapped    thrpt        50  4722297.435    98984.491    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringWrapped            thrpt        50  4028689.762    66192.505    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiViaArray                 thrpt        50  3234841.565    91308.009    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiViaArrayWrapped          thrpt        50  3311387.474    39018.933    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiWrapped                  thrpt        50  3379764.250    66735.415    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8                          thrpt        50  5671116.821   101760.081    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8String                    thrpt        50  5682733.440   111874.084    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringViaArray            thrpt        50  3564548.995    55709.512    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringViaArrayWrapped     thrpt        50  3621053.671    47632.820    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringWrapped             thrpt        50  2634029.071    52304.876    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8ViaArray                  thrpt        50  3397049.332    57784.119    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8ViaArrayWrapped           thrpt        50  3318685.262    35869.562    ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8Wrapped                   thrpt        50  2473791.249    46423.114    ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,387.417 sec - in io.netty.microbench.buffer.ByteBufUtilBenchmark

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

The *ViaArray* benchmarks are basically doing a toString().getBytes(Charset) while the others are using ByteBufUtil.write*(...).
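
A minimal usage example of the new helpers described above:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufUtil;
    import io.netty.buffer.PooledByteBufAllocator;
    import io.netty.util.CharsetUtil;

    public class WriteCharSequenceExample {
        public static void main(String[] args) {
            ByteBuf buf = PooledByteBufAllocator.DEFAULT.heapBuffer(64);
            // Encodes the CharSequence straight into the buffer, no intermediate byte[].
            ByteBufUtil.writeUtf8(buf, "héllo wörld");
            System.out.println(buf.toString(CharsetUtil.UTF_8));
            buf.release();
        }
    }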
2014-12-26 15:57:59 +09:00
Norman Maurer
93237d435f CompositeByteBuf.nioBuffers(...) must not return an empty ByteBuffer array
Motivation:

CompositeByteBuf.nioBuffers(...) returns an empty ByteBuffer array if the specified length is 0. This is not consistent with the other ByteBuf implementations, which return a ByteBuffer array of size 1 containing an empty ByteBuffer.

Modifications:

Make CompositeByteBuf.nioBuffers(...) consistent with other ByteBuf implementations.

Result:

Consistent and correct behaviour of nioBuffers(...)
2014-12-22 11:17:53 +01:00
Norman Maurer
fa079baf29 Always return SlicedByteBuf on slice(...) to eliminate possible leak
Motivation:

When calling slice(...) on a ByteBuf, the returned ByteBuf should be a slice of the ByteBuf and share its reference count. This is important as it is perfectly legal to use buf.slice(...).release() and have both the slice and the original ByteBuf released. At the moment this is only the case if the requested slice size is > 0. This makes the behavior inconsistent and so may lead to a memory leak.

Modifications:

- Never return Unpooled.EMPTY_BUFFER when calling slice(...).
- Add test cases for buffer.slice(...).release() and buffer.duplicate(...).release()

Result:

Consistent behaviour and so no more leaks possible.
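
A small example of the now-consistent reference-count sharing:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;

    public class SliceReleaseExample {
        public static void main(String[] args) {
            ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16);
            ByteBuf slice = buf.slice(0, 0);    // even a zero-length slice shares the refCnt now
            System.out.println(buf.refCnt());   // 1
            slice.release();                    // releases the underlying buffer as well
            System.out.println(buf.refCnt());   // 0
        }
    }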
2014-12-22 11:15:28 +01:00
Norman Maurer
d62932c7de Ensure buffer is not released when call array() / memoryAddress()
Motivation:

Previously we did not check whether a buffer was released before returning the backing byte array or memory address. This could lead to JVM crashes when someone tried various bulk operations on the Unsafe*ByteBuf implementations.

Modifications:

Always check whether the buffer was released before returning the byte array or memory address.

Result:

No more JVM crashes because of released buffers when doing bulk operations on Unsafe*ByteBuf implementations.
2014-12-11 11:30:10 +01:00
Idel Pivnitskiy
3d200085a4 Small performance improvements
Motivation:

Found performance issues via FindBugs and PMD.

Modifications:

- Removed unnecessary boxing/unboxing operations in DefaultTextHeaders.convertToInt(CharSequence) and DefaultTextHeaders.convertToLong(CharSequence). A boxed primitive is created from a string, just to extract the unboxed primitive value.
- Added a static modifier for DefaultHttp2Connection.ParentChangedEvent class. This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Added a statically compiled Pattern to avoid compiling it each time it is used when we need to replace some part of the authority.
- Improved usage of StringBuilders.

Result:

Performance improvements.
2014-11-20 00:58:35 -05:00
Norman Maurer
1914b77c71 [maven-release-plugin] prepare for next development iteration 2014-10-29 11:48:40 +01:00
Norman Maurer
c170e7df3f [maven-release-plugin] prepare release netty-4.0.24.Final 2014-10-29 11:47:19 +01:00
Norman Maurer
cb85ed9d66 Disable caching of PooledByteBuf for different threads.
Motivation:

We introduced a PoolThreadCache which is used in our PooledByteBufAllocator to reduce the synchronization overhead on PoolArenas when allocating / deallocating PooledByteBuf instances. This cache is used on both the allocation path and the deallocation path by:
  - Looking for cached memory in the PoolThreadCache of the Thread that tries to allocate a new PooledByteBuf and, if some is found, returning it.
  - Adding the memory that is used by a PooledByteBuf to the PoolThreadCache of the Thread that released the PooledByteBuf.

This works out very well when all allocation / deallocation is done in the EventLoop, as the EventLoop is used for both reads and writes. On the other hand, this can lead to surprising side-effects if the user allocates from outside the EventLoop and then passes the ByteBuf over for writing. The problem here is that the memory will be added to the PoolThreadCache of the Thread that did the actual write on the underlying transport, not to that of the Thread that previously allocated the buffer.

Modifications:

Don't cache if different Threads are used for allocating/deallocating

Result:

Less confusing behavior for users that allocate PooledByteBufs from outside the EventLoop.
2014-09-22 13:38:35 +02:00
Norman Maurer
687d3d3b5c [#2924] Correctly update head in MemoryRegionCache.trim()
Motivation:
When MemoryRegionCache.trim() is called, some unused cache entries will be freed (starting from the head). However, in MemoryRegionCache.trim() the head is not updated, which makes the entry list's head point to an entry whose chunk is now null, and subsequent allocations from the MemoryRegionCache will return false immediately.

In other words, the cache is no longer usable once a trim happens.

Modifications:

Update head to the correct index after freeing entries in trim().

Result:

MemoryRegionCache behaves correctly even after calling trim().
2014-09-22 10:56:17 +02:00
Norman Maurer
0bab6116f3 [#2843] Add test-case to show correct behavior of ByteBuf.refCnt() and ByteBuf.release(...)
Motivation:

We received a bug report that ByteBuf.refCnt() sometimes does not show the correct value when release() and refCnt() are called from different Threads.

Modifications:

Add a test case which shows that everything works as expected

Result:

Test-case added which shows everything is ok.
2014-09-01 08:48:15 +02:00
Trustin Lee
7710e7da44 [maven-release-plugin] prepare for next development iteration 2014-08-16 03:02:02 +09:00
Trustin Lee
208198c0cb [maven-release-plugin] prepare release netty-4.0.23.Final 2014-08-16 03:01:57 +09:00
Trustin Lee
a2d508711d [maven-release-plugin] prepare for next development iteration 2014-08-14 09:41:33 +09:00
Trustin Lee
3051db9d59 [maven-release-plugin] prepare release netty-4.0.22.Final 2014-08-14 09:41:28 +09:00
Trustin Lee
2b5aa716ba Use heap buffers for Unpooled.copiedBuffer()
Related issue: #2028

Motivation:

Some copiedBuffer() methods in Unpooled allocated a direct buffer.  An
allocation of a direct buffer is an expensive operation, and thus should
be avoided for unpooled buffers.

Modifications:

- Use heap buffers in all copiedBuffer() methods

Result:

Unpooled.copiedBuffers() are less expensive now.
2014-08-13 15:10:00 -07:00
Trustin Lee
fb5583d788 Refactoring in preparation to unify I/O logic for all branches
Motivation:

While trying to merge our ChannelOutboundBuffer changes we've made last
week, I realized that we have quite a bit of conflicting changes at 4.1
and master. It was primarily because we added
ChannelOutboundBuffer.beforeAdd() and moved some logic there, such as
direct buffer conversion.

However, this is not possible with the changes we've made for 4.0. We
made ChannelOutboundBuffer final for example.

Maintaining multiple branches is already getting painful, and having
different cores will make it even worse, so I think we should keep the
differences between 4.0 and the other branches minimal.

Modifications:

- Move ChannelOutboundBuffer.safeRelease() to ReferenceCountUtil
- Add ByteBufUtil.threadLocalBuffer()
  - Backported from ThreadLocalPooledDirectByteBuf
- Make most methods in AbstractUnsafe final
  - Add AbstractChannel.filterOutboundMessage() so that a transport can
    convert a message to another (e.g. heap -> off-heap), and also
    reject unsupported messages
  - Move all direct buffer conversions to filterOutboundMessage()
  - Move all type checks to filterOutboundMessage()
- Move AbstractChannel.checkEOF() to OioByteStreamChannel, because it's
  the only place it is used at all
- Remove ChannelOutboundBuffer.current(Object), because it's not used
  anymore
- Add protected direct buffer conversion methods to AbstractNioChannel
  and AbstractEpollChannel so that they can be used by their subtypes
- Update all transport implementations according to the changes above

Result:

- The missing extension point in 4.0 has been added.
  - AbstractChannel.filterOutboundMessage()
  - Thanks to the new extension point, we moved all transport-specific
    logic from ChannelOutboundBuffer to each transport implementation
- We can copy most of the transport implementations in 4.0 to 4.1 and
  master now, so that we have much less merge conflict when we modify
  the core.
2014-08-05 08:04:23 +02:00
Trustin Lee
07801d7b38 Remove duplicate range check in AbstractByteBuf.skipBytes() 2014-07-29 15:58:38 -07:00
Idel Pivnitskiy
01b11ca2cb Small performance improvements
Modifications:

- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
A boxed primitive is created from a String, just to extract the unboxed primitive value.
- Removed unnecessary checks of whether a file exists before calling mkdirs() in NativeLibraryLoader and PlatformDependent,
because the method mkdirs() performs this check itself.

Conflicts:
	codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskAttribute.java
	codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeAggregator.java
	codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeDecoder.java
2014-07-20 09:29:33 +02:00
Norman Maurer
c4d0a87e19 [#2653] Remove unnecessary ensureAccessible() calls
Motivation:

I introduced ensureAccessible() calls as part of 6c47cc9711 in some places. Unfortunately I also added some where they are not needed and so caused a performance regression.

Modification:

Remove calls where not needed.

Result:

Fixed performance regression.
2014-07-14 21:02:07 +02:00
Norman Maurer
ccd88596f3 [#2653] Remove unnecessary range checks for performance reasons
Motivation:

I introduced range checks as part of 6c47cc9711 in some places. Unfortunately I also added some where they are not needed and so caused a performance regression.

Modification:

Remove range checks where not needed

Result:

Fixed performance regression.
2014-07-14 11:40:27 +02:00
Brendt Lucas
5061be1b03 [#2642] CompositeByteBuf.deallocate memory/GC improvement
Motivation:

CompositeByteBuf.deallocate generates unnecessary GC pressure when using the 'foreach' loop, as a 'foreach' loop creates an iterator when looping.

Modification:

Convert 'foreach' loop into regular 'for' loop.

Result:

Less GC pressure (and possibly more throughput) as the 'for' loop does not create an iterator
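
The change boils down to the following pattern (illustrative, not the actual CompositeByteBuf code):

    import java.util.List;

    final class LoopSketch {
        // 'foreach' allocates an Iterator on every call.
        static void releaseAllForEach(List<ByteBufLike> components) {
            for (ByteBufLike c : components) {
                c.release();
            }
        }

        // An indexed loop touches the same elements without allocating an Iterator.
        static void releaseAllIndexed(List<ByteBufLike> components) {
            for (int i = 0; i < components.size(); i++) {
                components.get(i).release();
            }
        }

        interface ByteBufLike { void release(); }
    }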
2014-07-08 21:08:34 +02:00
Trustin Lee
3d8d843198 Fix the build timeout when 'leak' profile is active
Motivation:

AbstractByteBufTest.testInternalBuffer() uses writeByte() operations to
populate the sample data.  Usually, this isn't a problem, but it starts
to take a lot of time when the resource leak detection level gets
higher.

In our CI machine, testInternalBuffer() takes more than 30 minutes,
causing the build timeout when the 'leak' profile is active (paranoid
level resource detection.)

Modification:

Populate the sample data using ThreadLocalRandom.nextBytes() instead of
using millions of writeByte() operations.

Result:

Test runs much faster when leak detection level is high.
2014-07-03 17:55:18 +09:00
Trustin Lee
0a8ff3b52d Fix most inspector warnings
Motivation:

It's good to minimize potentially broken windows.

Modifications:

Fix most inspector warnings from our profile
Update IntObjectHashMap

Result:

Cleaner code
2014-07-02 20:21:30 +09:00
Norman Maurer
e8f4def2a3 [maven-release-plugin] prepare for next development iteration 2014-06-30 14:31:08 +02:00
Norman Maurer
25e3c8ce3d [maven-release-plugin] prepare release netty-4.0.21.Final 2014-06-30 14:29:15 +02:00
Norman Maurer
6c47cc9711 [#2622] Correctly check reference count before try to work on the underlying memory
Motivation:

Because of how we use reference counting we need to check for the reference count before each operation that touches the underlying memory. This is especially true as we use sun.misc.Cleaner.clean() to release the memory ASAP when possible. Because of this the user may cause a SEGFAULT if an operation is called that tries to access the backing memory after it was released.

Modification:

Correctly check the reference count on all methods that access the underlying memory or expose it via a ByteBuffer.

Result:

Safer usage of ByteBuf
2014-06-30 07:10:12 +02:00
Trustin Lee
d8d0bbfc26 Optimize PoolChunk
- Using short[] for memoryMap did not improve performance. Reverting
  back to the original dual-byte[] structure in favor of simplicity.
- Optimize allocateRun(), which yields a small performance improvement
- Use local variables when member fields are accessed more than once
2014-06-26 17:06:29 +09:00
Trustin Lee
c41538050c Fix inspector warnings 2014-06-26 17:06:29 +09:00
Pavan Kumar
d2e36a49c7 Improve the allocation algorithm in PoolChunk
Motivation:

Depth-first search is not always efficient for buddy allocation.

Modification:

Employ a new faster search algorithm with different memoryMap layout.

Result:

With thread-local cache disabled, we see a lot of performance
improvement, especially when the size of the allocation is as small as
the page size, which had the largest search space previously.
2014-06-26 17:06:29 +09:00
Trustin Lee
4a13f66e13 Remove 'get' prefix from all HTTP/SPDY messages
Motivation:

Pursuit of consistency in method naming

Modifications:

- Remove the 'get' prefix from all HTTP/SPDY message classes
- Fix some inspector warnings

Result:

Consistency
Fixes #2594
2014-06-24 18:33:30 +09:00
Norman Maurer
22f16e52bf MessageToByteEncoder always starts with a ByteBuf that uses initialCapacity == 0
Motivation:

MessageToByteEncoder always starts with a ByteBuf that uses initialCapacity == 0 when preferDirect is used. This is really wasteful in terms of performance as the very first write into the buffer will cause the buffer to expand.

Modifications:

 - Change ByteBufAllocator.ioBuffer() to use the same default initialCapacity as heapBuffer() and directBuffer()
 - Add a new allocateBuffer method to MessageToByteEncoder that allows the user to do some smarter allocation based on the message that will be encoded.

Result:

Less buffer expansion and more flexibility when allocating the buffer for encoding.
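
A hedged sketch of how an encoder could use the new hook; the allocateBuffer(...) signature is assumed from the commit description and may differ slightly:

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.MessageToByteEncoder;

    public class PayloadEncoder extends MessageToByteEncoder<byte[]> {
        @Override
        protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, byte[] msg,
                                         boolean preferDirect) throws Exception {
            // Size the buffer from the message instead of starting from an empty buffer.
            return preferDirect ? ctx.alloc().ioBuffer(msg.length)
                                : ctx.alloc().heapBuffer(msg.length);
        }

        @Override
        protected void encode(ChannelHandlerContext ctx, byte[] msg, ByteBuf out) {
            out.writeBytes(msg);
        }
    }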
2014-06-24 13:55:02 +09:00
Trustin Lee
7162d96ed5 Revert "Improve the allocation algorithm in PoolChunk"
This reverts commit 36305d7dce, which
seems to cause an assertion failure on our CI machine.
2014-06-21 19:19:49 +09:00
Pavan Kumar
004ffbad90 Improve the allocation algorithm in PoolChunk
Motivation:

Depth-first search is not always efficient for buddy allocation.

Modification:

Employ a new faster search algorithm with different memoryMap layout.

Result:

With thread-local cache disabled, we see a lot of performance
improvement, especially when the size of the allocation is as small as
the page size, which had the largest search space previously:

-- master head --
Benchmark                (size) Mode    Score  Error Units
pooledDirectAllocAndFree  8192 thrpt  215.392  1.565 ops/ms
pooledDirectAllocAndFree 16384 thrpt  594.625  2.154 ops/ms
pooledDirectAllocAndFree 65536 thrpt 1221.520 18.965 ops/ms
pooledHeapAllocAndFree    8192 thrpt  217.175  1.653 ops/ms
pooledHeapAllocAndFree   16384 thrpt  587.250 14.827 ops/ms
pooledHeapAllocAndFree   65536 thrpt 1217.023 44.963 ops/ms

-- changes --
Benchmark                (size) Mode    Score  Error Units
pooledDirectAllocAndFree  8192 thrpt 3656.744 94.093 ops/ms
pooledDirectAllocAndFree 16384 thrpt 4087.152 22.921 ops/ms
pooledDirectAllocAndFree 65536 thrpt 4058.814 29.276 ops/ms
pooledHeapAllocAndFree    8192 thrpt 3640.355 44.418 ops/ms
pooledHeapAllocAndFree   16384 thrpt 4030.206 24.365 ops/ms
pooledHeapAllocAndFree   65536 thrpt 4103.991 70.991 ops/ms
2014-06-21 13:20:56 +09:00
Norman Maurer
19a1b603d0 Remove System.out.println(...) debug messages 2014-06-20 19:42:08 +02:00
Norman Maurer
d2b8560a76 [#2580] [#2587] Fix buffer corruption regression when ByteBuf.order(LITTLE_ENDIAN) is used
Motivation:

To improve the speed of a ByteBuf with order LITTLE_ENDIAN where the native order is also LITTLE_ENDIAN (Intel), we introduced a new special SwappedByteBuf in commit 4ad3984c8b. Unfortunately that commit has a flaw: it does not correctly handle the case where a ByteBuf expands. This was caused because the memoryAddress was cached and never changed again even if the underlying buffer expanded. This can lead to corrupt data or even SEGFAULT the JVM if you are unlucky enough.

Modification:

Always lookup the actual memoryAddress of the wrapped ByteBuf.

Result:

No more data-corruption for ByteBuf with order LITTLE_ENDIAN and no JVM crashes.
2014-06-20 18:25:54 +02:00
Trustin Lee
fb538ea532 Refactor FastThreadLocal to simplify TLV management
Motivation:

When Netty runs in a managed environment such as web application server,
Netty needs to provide an explicit way to remove the thread-local
variables it created to prevent class loader leaks.

FastThreadLocal uses different execution paths for storing a
thread-local variable depending on the type of the current thread.
It increases the complexity of thread-local removal.

Modifications:

- Moved FastThreadLocal and FastThreadLocalThread out of the internal
  package so that a user can use it.
- FastThreadLocal now keeps track of all thread local variables it has
  initialized, and calling FastThreadLocal.removeAll() will remove all
  thread-local variables of the caller thread.
- Added FastThreadLocal.size() for diagnostics and tests
- Introduce InternalThreadLocalMap which is a mixture of hard-wired
  thread local variable fields and extensible indexed variables
- FastThreadLocal now uses InternalThreadLocalMap to implement a
  thread-local variable.
- Added ThreadDeathWatcher.unwatch() so that PooledByteBufAllocator
  tells it to stop watching when its thread-local cache has been freed
  by FastThreadLocal.removeAll().
- Added FastThreadLocalTest to ensure that removeAll() works
- Added microbenchmark for FastThreadLocal and JDK ThreadLocal
- Upgraded to JMH 0.9

Result:

- A user can remove all thread-local variables Netty created, as long as
  he or she did not exit from the current thread. (Note that there's no
  way to remove a thread-local variable from outside of the thread.)
- FastThreadLocal exposes more useful operations such as isSet() because
  we always implement a thread local variable via InternalThreadLocalMap
  instead of falling back to JDK ThreadLocal.
- FastThreadLocalBenchmark shows that this change improves the
  performance of FastThreadLocal even more.
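
A minimal usage sketch of the public FastThreadLocal API described above:

    import io.netty.util.concurrent.FastThreadLocal;
    import io.netty.util.concurrent.FastThreadLocalThread;

    public class FastThreadLocalExample {
        private static final FastThreadLocal<StringBuilder> BUILDER =
                new FastThreadLocal<StringBuilder>() {
                    @Override
                    protected StringBuilder initialValue() {
                        return new StringBuilder(128);
                    }
                };

        public static void main(String[] args) throws InterruptedException {
            // The fast, index-based path is used on a FastThreadLocalThread;
            // other threads fall back to a regular JDK ThreadLocal-backed map.
            Thread t = new FastThreadLocalThread(new Runnable() {
                @Override
                public void run() {
                    System.out.println(BUILDER.get().append("hello"));
                    FastThreadLocal.removeAll(); // e.g. before returning a container thread
                }
            });
            t.start();
            t.join();
        }
    }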
2014-06-19 21:08:16 +09:00
Norman Maurer
6fdf1138ca [#2573] UnpooledUnsafeDirectByteBuf.setBytes(int,ByteBuf,int,int) fails to use fast-path when src has array
Motivation:

UnpooledUnsafeDirectByteBuf.setBytes(int,ByteBuf,int,int) fails to use the fast path when src uses an array as backing storage. This is because the if/else uses the wrong ByteBuf for its check.

Modifications:

- Use the correct ByteBuf when checking for an array as backing storage
- Also eliminate an unnecessary check in UnpooledDirectByteBuf which always fails anyway

Result:

Faster setBytes(...) when src ByteBuf is backed by an array.

No more IndexOutOfBoundsException or data-corruption.
2014-06-16 11:11:54 +02:00
Norman Maurer
b737d631f1 [maven-release-plugin] prepare for next development iteration 2014-06-12 16:20:52 +02:00
Norman Maurer
1709113a1f [maven-release-plugin] prepare release netty-4.0.20.Final 2014-06-12 16:14:48 +02:00
Norman Maurer
76043bc8c8 Make use of an array to store FastThreadLocals and so allow to also use it in PooledByteBufAllocator that is instanced by users.
Motivation:
Allow making use of our new FastThreadLocal wherever possible

Modification:
Make use of an array to store FastThreadLocals and so allow them to also be used in a PooledByteBufAllocator that is instantiated by users.
The maximal size of the array is configurable via a system property to allow tuning it if needed. By default we use 64 entries, which should be good enough.

Result:
More flexible usage of FastThreadLocal
2014-06-12 15:43:20 +02:00
belliottsmith
1ac2ff8d7b Introduce FastThreadLocal which uses an EnumMap and a predefined fixed set of possible thread locals
Motivation:
Provide a faster ThreadLocal implementation

Modification:
Add a "FastThreadLocal" which uses an EnumMap and a predefined fixed set of possible thread locals (all of the static instances created by netty) that is around 10-20% faster than standard ThreadLocal in my benchmarks (and can be seen having an effect in the direct PooledByteBufAllocator benchmark that uses the DEFAULT ByteBufAllocator which uses this FastThreadLocal, as opposed to normal instantiations that do not, and in the new RecyclableArrayList benchmark);

Result:
Improved performance
2014-06-12 15:43:20 +02:00
Norman Maurer
4ad3984c8b [#2436] Unsafe*ByteBuf implementation should only invert bytes if ByteOrder differ from native ByteOrder
Motivation:
Our Unsafe*ByteBuf implementations always invert bytes when the native ByteOrder is LITTLE_ENDIAN (as is the case on Intel), even when the user calls order(ByteOrder.LITTLE_ENDIAN). This is not optimal for performance reasons, as the user should be able to set the ByteOrder to LITTLE_ENDIAN and so write bytes without the extra inversion.

Modification:
- Introduce a new special SwappedByteBuf (called UnsafeDirectSwappedByteBuf) that is used by all the Unsafe*ByteBuf implementations and allows writing without inverting the bytes.
- Add benchmark
- Upgrade jmh to 0.8

Result:
The user is able to get maximum performance even on servers that have ByteOrder.LITTLE_ENDIAN as their native ByteOrder.
2014-06-05 11:09:58 +02:00
Trustin Lee
0928e28385 Use Java 5 foreach for arrays for brevity at no cost 2014-06-02 18:25:42 +09:00
Trustin Lee
ed6df98653 Introduce ThreadDeathWatcher
Motivation:

PooledByteBufAllocator's thread local cache and
ReferenceCountUtil.releaseLater() are in need of a way to run an
arbitrary logic when a certain thread is terminated.

Modifications:

- Add ThreadDeathWatcher, which spawns a low-priority daemon thread
  that watches a list of threads periodically (every second) and
  invokes the specified tasks when the associated threads are not alive
  anymore
  - Start-stop logic based on CAS operation proposed by @tea-dragon
- Add debug-level log messages to see if ThreadDeathWatcher works

Result:

- Fixes #2519 because we don't use GlobalEventExecutor anymore
- Cleaner code
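
A minimal usage sketch of ThreadDeathWatcher as described above:

    import io.netty.util.ThreadDeathWatcher;

    public class ThreadDeathWatcherExample {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(new Runnable() {
                @Override
                public void run() {
                    // ... do some work and exit ...
                }
            });
            worker.start();
            // The task runs once the watcher (a low-priority daemon thread polling
            // about once per second) notices that 'worker' is no longer alive.
            ThreadDeathWatcher.watch(worker, new Runnable() {
                @Override
                public void run() {
                    System.out.println("worker terminated, cleaning up its resources");
                }
            });
            worker.join();
            Thread.sleep(2000); // give the watcher a chance to fire before the JVM exits
        }
    }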
2014-06-02 18:23:48 +09:00
Trustin Lee
350ac9787e Do not use a pseudo random for tree traversal
Motivation:

If we make allocateRunSimple()/allocateSubpageSimple() always try the left node first and make allocateRun()/allocateSubpage() always try the right node first, it is more likely that allocateRun()/allocateSubpage() will find a node with ST_UNUSED sooner.

Modifications:

- Make allocateRunSimple() and allocateSubpageSimple() always try the left node first.
- Make allocateRun() and allocateSubpage() always try the right node first.
- Remove the random

Result:

We get the same performance without using random numbers.
2014-05-30 11:24:22 +09:00
Trustin Lee
3e7dbe072e Optimize PooledByteBufAllocator
Motivation:

We still have room for improvement in PoolChunk.allocateRun() and
Subpage.allocate().

Modifications:

- Unroll the recursion in PoolChunk.allocateRun()
- Subpage.allocate() makes use of the 'nextAvail' value set by previous
  free().

Result:

- PoolChunk.allocateRun() optimization yields 10%+ improvements in
  allocation throughput for non-subpage allocations.
- Subpage.allocate() optimization makes the subpage allocations for
  tiny buffers as fast as non-tiny buffers even when the pageSize is
  huge (e.g. 1048576) because it doesn't need to perform a linear search
  in most cases.
2014-05-30 10:51:27 +09:00
Jake Luciani
795507aa7b Fix capacity check bug affecting offheap buffers 2014-05-13 07:24:56 +02:00
Norman Maurer
a597087a9f [maven-release-plugin] prepare for next development iteration 2014-04-30 15:40:54 +02:00
Norman Maurer
b562148e2d [maven-release-plugin] prepare release netty-4.0.19.Final 2014-04-30 15:40:31 +02:00
ian
fde13d96f9 Fix error that causes (up to) double memory usage
Motivation:

PoolArena's 'normalizeCapacity' function was micro-optimized some
time ago to remove a while loop. However, there was a change of
behavior in the function as a result. Capacities passed into it
that are already powers of 2 (and >= 512) are doubled in size. So
if I ask for a buffer with a capacity of 1024, I will get back one
that actually uses 2048 bytes (stored in maxLength).

Aligning to powers of two for book keeping ease is reasonable,
and if someone tries to expand a buffer, you might as well use some
of the previously wasted space. However, since this distinction
between 'easily expanded' and 'costly to expand' space is not
supported at all by the APIs, I cannot imagine this change to
doubling is desirable or intentional.

This is especially costly when using composite buffers. They
frequently allocate components with a capacity that is a power of
2, and they never attempt to expand components themselves. The end
result is that heavy use of pool-backed composite buffers wastes
almost half of the memory pool (the smaller / initial components are
<512 and so are not affected by the off-by-one bug).

Modifications:

Although I find it difficult to believe that such an optimization
is really helpful, I left it in and fixed the off-by-one issue by
decrementing the value at the start.

I also added a simple test to both attempt to verify that the
decrement fixes the issue without introducing any other change, and
to make it easy for a reviewer to test the existing behavior. PoolArena
does not seem to have much testing or testability support though so
the test is kind of a hack and will break for unrelated changes. I
suggest either removing it or factoring out the single non-static
portion of normalizeCapacity so that the fragile dummy PoolArena is
not required.

Result:

Pooled allocators will allocate less resources to the highly
inefficient and undocumented buffer section between length and
maxLength.

Composite buffers of non-trivial size that are backed by pooled
allocators will use about half as much memory.
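
The fix is essentially the standard round-up-to-power-of-two with a leading decrement, simplified here from PoolArena.normalizeCapacity for illustration:

    // Without the decrement, an exact power of two such as 1024 is doubled to 2048;
    // with it, 1024 stays 1024 while 1025 still rounds up to 2048.
    static int normalize(int reqCapacity) {
        int n = reqCapacity - 1;   // the off-by-one fix: decrement first
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return n + 1;
    }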
2014-04-15 07:02:49 +02:00
Norman Maurer
9c934116f2 [#2370] Periodically check for not alive Threads and free up their ThreadPoolCache
Motivation:
At the moment we create a new PoolThreadCache whenever a Thread tries to either allocate or release something on the PooledByteBufAllocator. When something is released we then put it in its PoolThreadCache. The problem is that we never check whether a Thread is no longer alive, and so we may end up with memory that is never freed again if a user creates many short-lived Threads that use the PooledByteBufAllocator.

Modifications:
Periodically check whether the Thread that has a PoolThreadCache assigned is still alive and, if not, free its cache.

Result:
Memory is freed up correctly even for short-lived Threads.
2014-04-09 11:44:51 +02:00
Norman Maurer
816165c96a [maven-release-plugin] prepare for next development iteration 2014-04-01 07:21:40 +02:00
Norman Maurer
1512a4dcca [maven-release-plugin] prepare release netty-4.0.18.Final 2014-04-01 07:20:16 +02:00
Norman Maurer
13fd69e871 Implement Thread caches for pooled buffers to minimize conditions. This fixes [#2264] and [#808].
Motivation:
Remove the synchronization bottleneck in PoolArena and so speed things up

Modifications:

This implementation uses roughly the same techniques as outlined in the jemalloc paper and the jemalloc
blog post https://www.facebook.com/notes/facebook-engineering/scalable-memory-allocation-using-jemalloc/480222803919.

At the moment we only cache for "known" Threads (those that power EventExecutors) and not for others, to keep the overhead
minimal when we need to free up unused buffers in the cache and free up cached buffers once the Thread completes. Here
we use multi-level caches for tiny, small and normal allocations. Huge allocations are not cached at all to keep
memory usage at a sane level. All the different cache configurations can be adjusted via system properties or directly
via the constructor where it makes sense.

Result:
Less contention, as most allocations can be served by the cache itself
2014-03-20 09:18:04 -07:00
Jakob Buchgraber
17ba35b6d0 Bit tricks to check for and calculate power of two.
Motivation:
I was studying the code and thought this was simpler and easier to
understand.

Modifications:
Replaced the for loop and if conditions with a simpler implementation.

Result:
Code is easier to understand.
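
For reference, the two bit tricks in question (illustrative helpers, not the exact Netty code):

    static boolean isPowerOfTwo(int value) {
        return value > 0 && (value & (value - 1)) == 0;   // exactly one bit set
    }

    static int log2(int value) {
        return 31 - Integer.numberOfLeadingZeros(value);  // e.g. log2(8192) == 13
    }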
2014-03-18 15:59:58 +09:00
Bourne, Geoff
1c074eabe5 Fix limit computation of NIO ByteBuffers obtained via ReadOnlyByteBufferBuf.nioBuffer
Motivation:

When starting with a read-only NIO buffer, wrapping it in a ByteBuf,
and then later retrieving a re-wrapped NIO buffer the limit was getting
too short.

Modifications:

Changed ReadOnlyByteBufferBuf.nioBuffer(int,int) to compute the
limit in the same manner as the internalNioBuffer method.

Result:

Round-trip conversion from NIO to ByteBuf to NIO will work reliably.
2014-03-14 08:07:19 +01:00
Norman Maurer
ccd135df01 [maven-release-plugin] prepare for next development iteration 2014-02-24 15:39:26 +01:00
Norman Maurer
33587eb183 [maven-release-plugin] prepare release netty-4.0.17.Final 2014-02-24 15:37:31 +01:00
Trustin Lee
0fc66a411f The default buffer must be unpooled for backward compatibility
Mistakenly set to pooled while merging the changes from 4.1 and master.
2014-02-21 14:43:07 -08:00
Norman Maurer
66e2bb1e75 [maven-release-plugin] prepare for next development iteration 2014-02-19 03:41:24 +01:00
Norman Maurer
c466bb803d [maven-release-plugin] prepare release netty-4.0.16.Final 2014-02-19 03:36:54 +01:00
Trustin Lee
b18c8fe688 Determine the default allocator from system property
- Add ByteBufAllocator.DEFAULT
- The default allocator is 'unpooled'
2014-02-14 13:05:57 -08:00
Norman Maurer
f23d68b42f [#2187] Always do a volatile read on the refCnt 2014-02-07 09:23:16 +01:00
Norman Maurer
9bee78f91c Provide an optimized AtomicIntegerFieldUpdater, AtomicLongFieldUpdater and AtomicReferenceFieldUpdater 2014-02-06 20:08:45 +01:00
Norman Maurer
d67184b488 [maven-release-plugin] prepare for next development iteration 2014-01-21 08:18:32 +01:00
Norman Maurer
287515210d [maven-release-plugin] prepare release netty-4.0.15.Final 2014-01-21 08:18:26 +01:00
Trustin Lee
e83d2e0b4e [maven-release-plugin] prepare for next development iteration 2013-12-22 21:57:48 +09:00
Trustin Lee
cdb700c7a4 [maven-release-plugin] prepare release netty-4.0.14.Final 2013-12-22 21:57:40 +09:00
Trustin Lee
0b7aedb13b [maven-release-plugin] rollback the release of netty-4.0.14.Final 2013-12-22 21:53:24 +09:00
Trustin Lee
4bf6ec7171 [maven-release-plugin] prepare release netty-4.0.14.Final 2013-12-22 21:52:56 +09:00