Related pull request: #2481 written by @nmittler
Motivation:
Some tests were still sending requests from the test thread rather than
from the event loop.
Modifications:
- Modified the roundtrip tests to use a new utility Http2TestUtil to
perform the write operations in the event loop thread.
- Modified the Http2Client under examples to write all requests in the
event loop thread.
- Added the previously missing HttpPrefaceHandler and its test.
- Fixed various inspector warnings
Result:
Tests and examples now send requests in the correct thread context.
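Below is a hedged sketch of the general pattern (not the actual Http2TestUtil code): the write is handed off to the channel's event loop instead of being issued from the test thread, and a promise lets the test thread observe the result.

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelPromise;

    final class EventLoopWrites {
        // Performs writeAndFlush(...) inside the channel's event loop and returns
        // a future the calling (test) thread can await.
        static ChannelFuture writeInEventLoop(Channel ch, Object msg) {
            ChannelPromise promise = ch.newPromise();
            ch.eventLoop().execute(() -> ch.writeAndFlush(msg, promise));
            return promise;
        }

        private EventLoopWrites() { }
    }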
Motivation:
Currently it is impossible to build Netty on Linux systems that do not define SO_REUSEPORT, even if it is actually supported.
Modification:
Define SO_REUSEPORT if not defined.
Result:
Netty can be built on more Linux distributions.
Motivation:
We use the nanoTime of the scheduled tasks to calculate how many milliseconds a select operation should wait for something to be selected. Once this time has elapsed, we check whether something was selected or a task is ready for processing. Unfortunately we do not take scheduled tasks into account at this point, so the selection loop continues when only scheduled tasks are ready for processing, which delays their execution.
Modification:
- Check if a scheduled task is ready after selecting
- Also make a small change in NioEventLoop so that a selector rebuild is not triggered when nothing was selected simply because the timeout was reached a few times in a row.
Result:
Execute scheduled tasks on time.
Motivation:
When we call (*env)->GetObjectArrayElement(...) we may create many local references, which are only cleaned up once we exit the native method. Thus a lot of memory can be used and a StackOverflow may be triggered. Besides this, the JNI specification only says that an implementation must cope with 16 local references.
Modification:
Call (*env)->DeleteLocalRef(...) to release each local reference as soon as it is no longer needed.
Result:
Less memory usage and protection against StackOverflow
Motivation:
Because of how we use reference counting, we need to check the reference count before each operation that touches the underlying memory. This is especially true as we use sun.misc.Cleaner.clean() to release the memory as soon as possible. Because of this, the user may cause a SEGFAULT if an operation is called that tries to access the backing memory after it was released.
Modification:
Correctly check the reference count on all methods that access the underlying memory or expose it via a ByteBuffer.
Result:
Safer usage of ByteBuf
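A minimal sketch of the kind of guard this adds (illustrative only, not the actual buffer implementation code): every accessor that touches the backing memory first checks the reference count and fails fast instead of risking a SEGFAULT.

    import io.netty.buffer.ByteBuf;
    import io.netty.util.IllegalReferenceCountException;

    final class RefCntGuard {
        // Throws instead of letting the caller touch memory that may already be freed.
        static void ensureAccessible(ByteBuf buf) {
            if (buf.refCnt() == 0) {
                throw new IllegalReferenceCountException(0);
            }
        }

        private RefCntGuard() { }
    }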
Motivation:
The DecoderResult is dropped when aggregating HTTP messages.
Modification:
Make sure we do not drop the DecoderResult while aggregating HTTP messages.
Result:
Correctly include the DecoderResult for later processing.
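A hedged sketch of the idea (the accessor names follow the current DecoderResultProvider API and may differ from the aggregator's actual code): the aggregated message takes over the decoder result of the message that started the sequence instead of silently dropping it.

    import io.netty.handler.codec.DecoderResult;
    import io.netty.handler.codec.http.FullHttpMessage;
    import io.netty.handler.codec.http.HttpMessage;

    final class DecoderResults {
        // Copies the decoder result from the original message onto the aggregate.
        static void carryOver(HttpMessage start, FullHttpMessage aggregated) {
            DecoderResult result = start.decoderResult();
            aggregated.setDecoderResult(result);
        }

        private DecoderResults() { }
    }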
Motivation:
When a selector rebuild is triggered, the reference to the SelectionKey is not updated in AbstractNioChannel. This causes a CancelledKeyException later.
Modification:
Correctly update the SelectionKey reference after a rebuild.
Result:
The CancelledKeyException is no longer thrown.
Motivation:
When a user tries to use Netty on Android it currently fails with "Could not find class 'sun.misc.Cleaner'".
Modification:
Encapsulate the sun.misc.Cleaner usage in an extra class to work around this issue.
Result:
Netty can be used on Android again
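A hedged sketch of the encapsulation idea (not the actual Netty helper class): the only place that names sun.misc.Cleaner is a small helper whose availability is probed reflectively, so platforms without the class, like Android, never hit the class-loading failure.

    final class CleanerSupport {
        private static final boolean CLEANER_AVAILABLE = detect();

        private static boolean detect() {
            try {
                // Only probe for the class; never touch it on platforms that lack it.
                Class.forName("sun.misc.Cleaner", false, CleanerSupport.class.getClassLoader());
                return true;
            } catch (Throwable ignored) {
                return false;
            }
        }

        static boolean hasCleaner() {
            return CLEANER_AVAILABLE;
        }

        private CleanerSupport() { }
    }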
Motivation:
At the moment there is no simple way for a user to check whether the native epoll transport can be used on the running platform. The user can only try to instantiate it, catch any exception, and fall back to the NIO transport.
Modification:
Add Epoll.isAvailable(), which allows checking whether epoll can be used.
Result:
Users can easily check whether the epoll transport can be used or not
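Usage sketch of the new check: pick the native transport when it is available and fall back to NIO otherwise, without the instantiate-and-catch dance.

    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.epoll.Epoll;
    import io.netty.channel.epoll.EpollEventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;

    final class Transports {
        static EventLoopGroup newEventLoopGroup() {
            // Epoll.isAvailable() returns false on platforms where the native
            // transport cannot be loaded.
            return Epoll.isAvailable() ? new EpollEventLoopGroup() : new NioEventLoopGroup();
        }

        private Transports() { }
    }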
Motivation:
When using OpenJDK's or Oracle JDK's NIO (i.e. the NIO transport), the ServerSocketChannel uses SO_REUSEADDR by default. Our native transport should do the same to make it easier to switch between the different implementations and get the expected result.
Modification:
Change EpollServerSocketChannelConfig to set SO_REUSEADDR on the created socket.
Result:
SO_REUSEADDR is used by default on servers.
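A short bootstrap sketch for context: with this change the option below is already the default for the native server channel, so setting it explicitly is only needed to override the default.

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.epoll.EpollEventLoopGroup;
    import io.netty.channel.epoll.EpollServerSocketChannel;

    final class NativeServer {
        static ServerBootstrap newBootstrap() {
            return new ServerBootstrap()
                    .group(new EpollEventLoopGroup())
                    .channel(EpollServerSocketChannel.class)
                    .option(ChannelOption.SO_REUSEADDR, true); // now the default anyway
        }

        private NativeServer() { }
    }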
Motivation:
bytesBefore(length, ...), bytesBefore(index, length, ...), and
indexOf(fromIndex, toIndex,...) in ReplayingDecoderBuffer are buggy.
They trigger a REPLAY even when they don't need to.
Modification:
Implement the buggy methods properly so that REPLAYs are not triggered
unnecessarily.
Result:
Correct behavior
Motivation:
At the moment we do a lot of unnecessary memory copies in JdkZlibEncoder. This is caused by either allocating a too-small ByteBuf and expanding it later, or by using a temporary byte array.
Besides this, the memory footprint of JdkZlibEncoder is pretty high because of the byte[] used for compressing.
Modification:
- Override allocateBuffer(...) and calculate the estimated size there, which reduces later expansion of the ByteBuf
- Do not hold a byte[] in the encoder instance itself; instead allocate a heap ByteBuf and write directly into its backing byte array
Result:
Fewer memory copies and a smaller memory footprint
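For illustration, a hedged sketch of an output-size estimate (the constants follow the classic zlib compressBound rule of thumb and are an assumption, not necessarily what JdkZlibEncoder's allocateBuffer(...) actually uses): allocating roughly this much up front avoids growing the buffer while compressing.

    final class DeflateEstimates {
        // Upper bound for deflate output: input + 0.1% + 12 bytes, plus any
        // gzip/zlib wrapper overhead the encoder will write.
        static int estimateCompressedSize(int uncompressedLength, int wrapperOverhead) {
            return (int) Math.ceil(uncompressedLength * 1.001) + 12 + wrapperOverhead;
        }

        private DeflateEstimates() { }
    }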
If decompression fails, the buffer that contains the decompressed data
is not released. Bzip2DecoderTest.testStreamCrcError() also does not
release the partial output Bzip2Decoder produces.
- Using short[] for memoryMap did not improve performance. Reverting to
the original dual-byte[] structure in favor of simplicity.
- Optimize allocateRun(), which yields a small performance improvement
- Use local variables when member fields are accessed more than once
Motivation:
Depth-first search is not always efficient for buddy allocation.
Modification:
Employ a new, faster search algorithm with a different memoryMap layout.
Result:
With thread-local cache disabled, we see a lot of performance
improvement, especially when the size of the allocation is as small as
the page size, which had the largest search space previously.
Motivation:
We need to map from int to AbstractEpollChannel in EpollEventLoop, but there is no need to box the int keys into Integers.
Modification:
Replace Map with IntObjectMap.
Result:
No more auto-boxing needed.
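A small sketch of the replacement (the value type is simplified to Object here): the primitive-keyed map from netty-common avoids boxing every file descriptor into an Integer.

    import io.netty.util.collection.IntObjectHashMap;
    import io.netty.util.collection.IntObjectMap;

    final class FdMapExample {
        static IntObjectMap<Object> newChannelMap() {
            IntObjectMap<Object> channels = new IntObjectHashMap<Object>(4096);
            channels.put(42, new Object()); // int key, no Integer allocation
            channels.remove(42);
            return channels;
        }

        private FdMapExample() { }
    }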
Motivation:
During some refactoring we changed PlatformDependent0 to use sun.nio.ch.DirectBuffer for releasing direct buffers. This broke Android support, as the class does not exist there and an exception is thrown.
Modification:
Use the field offset again to access the Cleaner when releasing direct buffers.
Result:
Netty can be used on Android again
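A hedged sketch of the field-offset approach (JDK-dependent and reflection-heavy; the details below are assumptions, not the actual PlatformDependent0 code): read the buffer's 'cleaner' field via Unsafe.objectFieldOffset(...) instead of casting to sun.nio.ch.DirectBuffer, which does not exist on Android.

    import java.lang.reflect.Field;
    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;
    import sun.misc.Unsafe;

    final class DirectBufferCleaner {
        static void free(ByteBuffer direct) throws Exception {
            Field unsafeField = Unsafe.class.getDeclaredField("theUnsafe");
            unsafeField.setAccessible(true);
            Unsafe unsafe = (Unsafe) unsafeField.get(null);

            // Locate the 'cleaner' field on the concrete DirectByteBuffer class.
            Field cleanerField = direct.getClass().getDeclaredField("cleaner");
            long cleanerOffset = unsafe.objectFieldOffset(cleanerField);

            Object cleaner = unsafe.getObject(direct, cleanerOffset);
            if (cleaner != null) {
                Method clean = cleaner.getClass().getDeclaredMethod("clean");
                clean.setAccessible(true);
                clean.invoke(cleaner);
            }
        }

        private DirectBufferCleaner() { }
    }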
Motivation:
Due to an integer overflow bug, writes of FileRegions with a length greater than Integer.MAX_VALUE to an HTTP server pipeline (e.g. the one from the HttpStaticFileServer example) are ignored in half of the cases (i.e. no data gets sent to the client).
Modification:
Correctly handle chunk sizes > Integer.MAX_VALUE.
Result:
FileRegions larger than Integer.MAX_VALUE can be used with chunked encoding.
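A tiny illustrative sketch of the overflow trap (not the actual codec code): chunk sizes must be computed with long arithmetic, because truncating a length above Integer.MAX_VALUE straight to int can produce a zero or negative chunk that ends up being skipped.

    final class ChunkSizes {
        // Safe: compare in long space first, then narrow.
        static int nextChunkSize(long remainingBytes, int maxChunkSize) {
            return (int) Math.min(remainingBytes, maxChunkSize);
        }

        private ChunkSizes() { }
    }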
Motivation:
Pursuit of consistency in method naming
Modifications:
- Remove the 'get' prefix from all HTTP/SPDY message classes
- Fix some inspector warnings
Result:
Consistency
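A before/after example of the rename (assuming the post-rename accessors): the 'get' prefix disappears from the message classes, so getStatus() becomes status(), getMethod() becomes method(), and so on.

    import io.netty.handler.codec.http.HttpResponse;
    import io.netty.handler.codec.http.HttpResponseStatus;

    final class RenameExample {
        static HttpResponseStatus statusOf(HttpResponse response) {
            return response.status(); // previously response.getStatus()
        }

        private RenameExample() { }
    }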
Motivation:
Recycler is used in many places to reduce GC pressure, but it is still not as fast as possible because of the internal data structures used.
Modification:
- Rewrite Recycler to use a WeakOrderQueue, which makes minimal guarantees about order and visibility for maximum performance.
- Recycling the same object multiple times without acquiring it will fail.
- Introduce a RecyclableMpscLinkedQueueNode which can be used for MpscLinkedQueueNodes that use the Recycler
These changes are based on @belliottsmith's work that was part of #2504.
Result:
Huge increase in performance.
4.0 branch without this commit:
Benchmark                                                (size)   Mode  Samples          Score  Score error  Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00000  thrpt       20  116026994.130  2763381.305  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00256  thrpt       20  110823170.627  3007221.464  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    01024  thrpt       20  118290272.413  7143962.304  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    04096  thrpt       20  120560396.523  6483323.228  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    16384  thrpt       20  114726607.428  2960013.108  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    65536  thrpt       20  119385917.899  3172913.684  ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.617 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
4.0 branch with this commit:
Benchmark                                                (size)   Mode  Samples          Score  Score error  Units
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00000  thrpt       20  204158855.315  5031432.145  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    00256  thrpt       20  205179685.861  1934137.841  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    01024  thrpt       20  209906801.437  8007811.254  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    04096  thrpt       20  214288320.053  6413126.689  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    16384  thrpt       20  215940902.649  7837706.133  ops/s
i.n.m.i.RecyclableArrayListBenchmark.recycleSameThread    65536  thrpt       20  211141994.206  5017868.542  ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 297.648 sec - in io.netty.microbench.internal.RecyclableArrayListBenchmark
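For context, a hedged usage sketch of the Recycler pattern (the exact Handle API differs between Netty versions; this sketch assumes the generic Handle<T> form): pooled objects carry their handle and return themselves to the pool instead of being garbage collected.

    import io.netty.util.Recycler;

    final class PooledCounter {
        private static final Recycler<PooledCounter> RECYCLER = new Recycler<PooledCounter>() {
            @Override
            protected PooledCounter newObject(Handle<PooledCounter> handle) {
                return new PooledCounter(handle);
            }
        };

        private final Recycler.Handle<PooledCounter> handle;
        int value;

        private PooledCounter(Recycler.Handle<PooledCounter> handle) {
            this.handle = handle;
        }

        static PooledCounter newInstance() {
            return RECYCLER.get(); // reuses a recycled instance when one is available
        }

        void recycle() {
            value = 0;
            handle.recycle(this); // recycling the same instance twice without get() fails
        }
    }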
If a connection is closed unexpectedly while
AbstractBinaryMemcacheDecoder decodes a message, the half-constructed
message's content might not be released.
Motivation:
Pursuit of consistent method naming across all classes
Modifications:
Remove 'get' prefix for the getter methods in codec-memcache
Result:
More consistent method naming
Motivation:
DefaultFullBinaryMemcacheRequest/Response overrides release(), retain(),
and touch() methods without calling its super, resulting in a leak of
the extras.
Modifications:
When overriding release(), retain(), and touch(), make sure to call super.
Result:
Fixes #2533 by fixing the buffer leak
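A hypothetical illustration of the pattern (class and field names are made up, not the actual memcache types): reference-counting overrides have to delegate to super, otherwise buffers owned by the parent class, such as the extras, leak.

    import io.netty.buffer.ByteBuf;

    class WithExtras {
        final ByteBuf extras;

        WithExtras(ByteBuf extras) {
            this.extras = extras;
        }

        public WithExtras retain() {
            extras.retain();
            return this;
        }

        public boolean release() {
            return extras.release();
        }
    }

    class FullMessage extends WithExtras {
        final ByteBuf content;

        FullMessage(ByteBuf extras, ByteBuf content) {
            super(extras);
            this.content = content;
        }

        @Override
        public FullMessage retain() {
            super.retain();   // without this call the extras would never be retained
            content.retain();
            return this;
        }

        @Override
        public boolean release() {
            super.release();  // release the extras as well as the content
            return content.release();
        }
    }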
Motivation:
MessageToByteEncoder always starts with a ByteBuf that uses initialCapacity == 0 when preferDirect is used. This is really wasteful in terms of performance, as the first write into the buffer will always cause it to expand.
Modifications:
- Change ByteBufAllocator.ioBuffer() to use the same default initialCapacity as heapBuffer() and directBuffer()
- Add a new allocateBuffer method to MessageToByteEncoder that allows the user to do smarter allocation based on the message that will be encoded.
Result:
Less buffer expansion and more flexibility when allocating the buffer for encoding.
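A hedged sketch of the new hook (assuming the allocateBuffer(ChannelHandlerContext, I, boolean) signature added here): an encoder can size its output buffer from the message up front instead of starting at zero capacity and growing on the first write.

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.MessageToByteEncoder;
    import io.netty.util.CharsetUtil;

    class Utf8LineEncoder extends MessageToByteEncoder<String> {
        @Override
        protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, String msg, boolean preferDirect) {
            int estimated = msg.length() * 4; // worst case for UTF-8
            return preferDirect ? ctx.alloc().ioBuffer(estimated) : ctx.alloc().heapBuffer(estimated);
        }

        @Override
        protected void encode(ChannelHandlerContext ctx, String msg, ByteBuf out) {
            out.writeBytes(msg.getBytes(CharsetUtil.UTF_8));
        }
    }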