Motivation:
Currently it is not possible to load an encrypted private key when
creating a JDK based SSL server context.
Modifications:
- Added a static method to JdkSslServerContext which handles key spec generation for (encrypted) private keys, and made use of it.
- Added tests for creating an SSL server context based on an (encrypted)
private key.
Result:
It is now possible to create a JDK based SSL server context with an
encrypted (password protected) private key.
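For illustration, decrypting a PKCS#8 encoded key follows the standard JCA pattern; this sketch uses an illustrative method name and structure:

    import java.security.spec.PKCS8EncodedKeySpec;
    import javax.crypto.Cipher;
    import javax.crypto.EncryptedPrivateKeyInfo;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Turns a PKCS#8 encoded (and possibly encrypted) private key into a KeySpec.
    private static PKCS8EncodedKeySpec generateKeySpec(char[] password, byte[] key)
            throws Exception {
        if (password == null) {
            // Key is not encrypted: use the encoded bytes as-is.
            return new PKCS8EncodedKeySpec(key);
        }
        EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = new EncryptedPrivateKeyInfo(key);
        SecretKeyFactory keyFactory =
                SecretKeyFactory.getInstance(encryptedPrivateKeyInfo.getAlgName());
        PBEKeySpec pbeKeySpec = new PBEKeySpec(password);
        SecretKey pbeKey = keyFactory.generateSecret(pbeKeySpec);

        Cipher cipher = Cipher.getInstance(encryptedPrivateKeyInfo.getAlgName());
        cipher.init(Cipher.DECRYPT_MODE, pbeKey, encryptedPrivateKeyInfo.getAlgParameters());

        return encryptedPrivateKeyInfo.getKeySpec(cipher);
    }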
Motivation:
Our ChannelOutboundBuffer implementation is not based on ArrayDeque anymore, so we can remove the license notice for it.
Modifications:
Remove license of deque and entry in NOTICE.
Result:
Cleaned up licenses
Motivation:
When a ChannelOutboundBuffer contains a series of entries whose messages
are all empty buffers, EpollSocketChannel sometimes fails to remove
them. As a result, the future returned by write(EmptyByteBuf) is never
notified, making the user application hang.
Modifications:
- Add ChannelOutboundBuffer.removeBytes(long) method that updates the
progress of the entries and removes them up to the specified
number of written bytes. It also updates the readerIndex of a
partially flushed buffer.
- Make both NioSocketChannel and EpollSocketChannel use it to reduce
code duplication
- Replace EpollSocketChannel.updateOutboundBuffer()
- Refactor EpollSocketChannel.doWrite() for simplicity
- Split doWrite() into doWriteSingle() and doWriteMultiple()
- Do not add a zero-length buffer to IovArray
- Do not perform any real I/O when the size of IovArray is 0
Result:
Another regression is gone.
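A rough sketch of the removeBytes(long) idea, where current(), progress(long) and remove() are the existing ChannelOutboundBuffer helpers; note that an empty buffer (readableBytes == 0) is removed even when nothing was written for it, which is exactly the case that used to hang:

    import io.netty.buffer.ByteBuf;

    public void removeBytes(long writtenBytes) {
        for (;;) {
            Object msg = current();                  // message of the first flushed entry
            if (!(msg instanceof ByteBuf)) {
                break;
            }
            ByteBuf buf = (ByteBuf) msg;
            int readerIndex = buf.readerIndex();
            int readableBytes = buf.writerIndex() - readerIndex;
            if (readableBytes <= writtenBytes) {     // fully written, including empty buffers
                if (writtenBytes != 0) {
                    progress(readableBytes);         // notify full progress for this entry
                    writtenBytes -= readableBytes;
                }
                remove();                            // recycle the entry and fulfill its promise
            } else {                                 // partially written
                if (writtenBytes != 0) {
                    buf.readerIndex(readerIndex + (int) writtenBytes);
                    progress(writtenBytes);
                }
                break;
            }
        }
    }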
Motivation:
As /proc/sys/net/core/somaxconn does not exist on non-Linux platforms, you see a noisy stack trace when debug level is enabled while the static block of NetUtil is executed.
Modifications:
Check if the file exists before trying to parse it.
Result:
Less noisy logging on non-Linux platforms.
Related issue: #2717, #2710, #2704, #2693
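A sketch of the fix; the fallback value here is illustrative:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;

    static int somaxconn() throws Exception {
        int somaxconn = 128; // illustrative fallback for platforms without the file
        File file = new File("/proc/sys/net/core/somaxconn");
        if (file.exists()) { // only parse it when it is actually there
            BufferedReader in = new BufferedReader(new FileReader(file));
            try {
                somaxconn = Integer.parseInt(in.readLine());
            } finally {
                in.close();
            }
        }
        return somaxconn;
    }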
Motivation:
When ChannelOutboundBuffer.nioBuffers() iterates over the linked list of
entries, it is not supposed to visit unflushed entries, but it does.
Modifications:
- Make sure ChannelOutboundBuffer.nioBuffers() stops the iteration before
it visits an unflushed entry
- Add isFlushedEntry() to reduce the chance of similar mistakes
Result:
Another regression is gone.
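A sketch of the guard; the names follow the ChannelOutboundBuffer internals, where flushedEntry..unflushedEntry delimit the flushed region of the entry list:

    private boolean isFlushedEntry(Entry e) {
        return e != null && e != unflushedEntry;
    }

    // usage inside nioBuffers():
    Entry entry = flushedEntry;
    while (isFlushedEntry(entry) && entry.msg instanceof ByteBuf) {
        // ... collect this entry's ByteBuffers ...
        entry = entry.next;
    }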
Motivation:
ChannelOutboundBuffer.forEachFlushedMessage() visits even unflushed
messages.
Modifications:
Stop the loop when the entry being visited is the unflushedEntry.
Result:
forEachFlushedMessage() behaves correctly.
- ChannelOutboundBuffer.Entry.buffers -> bufs for consistency
- Make Native.IOV_MAX final because it's a constant
- Naming changes
- FlushedMessageProcessor -> MessageProcessor just in case we can
reuse it for unflushed messages in the future
- Add ChannelOutboundBuffer.Entry.recycle() that does not return the
next entry, and use it wherever possible
- Javadoc clean-up
Motivation:
While benchmarking the native transport, I noticed that gathering write
is not as fast as expected. It was due to the fact that we have to do a
lot of array copies to put the buffer addresses into the iovec struct
array.
Modifications:
Introduce a new class called IovArray, which allows filling buffers
directly into an off-heap array of iovec structs, so that it can be
passed over to JNI without any extra array copies.
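For illustration, a sketch of the layout, assuming a 64-bit platform; names and constants are illustrative:

    import io.netty.buffer.ByteBuf;
    import io.netty.util.internal.PlatformDependent;

    final class IovArray {
        private static final int ADDRESS_SIZE = 8;            // sizeof(void*) on 64-bit
        private static final int IOV_SIZE = 2 * ADDRESS_SIZE; // sizeof(struct iovec)
        private static final int MAX_COUNT = 1024;            // IOV_MAX on Linux

        private final long memoryAddress =
                PlatformDependent.allocateMemory(MAX_COUNT * IOV_SIZE);
        private int count;

        boolean add(ByteBuf buf) {
            int len = buf.readableBytes();
            if (len == 0) {
                return true;  // skip zero-length buffers entirely
            }
            if (count == MAX_COUNT) {
                return false; // full: the caller must flush what was collected so far
            }
            long base = memoryAddress + count * IOV_SIZE;
            PlatformDependent.putLong(base, buf.memoryAddress() + buf.readerIndex()); // iov_base
            PlatformDependent.putLong(base + ADDRESS_SIZE, len);                      // iov_len
            count++;
            return true;
        }

        long memoryAddress() { return memoryAddress; }
        int count()          { return count; }
        void release()       { PlatformDependent.freeMemory(memoryAddress); }
    }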
Result:
Big performance improvement when doing gathering writes:
Before:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.44ms   16.37ms  259.57ms   91.77%
    Req/Sec    181.99k   31.69k   304.60k    78.12%
  346544071 requests in 2.00m, 46.48GB read
Requests/sec: 2887885.09
Transfer/sec: 396.59MB
After:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.93ms   16.33ms  305.73ms   92.34%
    Req/Sec    194.56k   33.75k   309.33k    77.04%
  369617503 requests in 2.00m, 49.57GB read
Requests/sec: 3080169.65
Transfer/sec: 423.00MB
Motivation:
73dfd7c01b introduced various test
failures because:
- EpollSocketChannel.doWrite() raised a NullPointerException when
notifying the write progress.
- ChannelOutboundBuffer.nioBuffers() did not expand the internal array
when the pending entries contained more than 1024 buffers, dropping
the remainder.
Modifications:
- Fix the NPE in EpollSocketChannel by removing an unnecessary progress
update
- Expand the thread-local buffer array if there is not enough room,
which was the original behavior dropped by the offending commit
Result:
Regression is gone.
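The restored expansion logic looks roughly like this:

    import java.nio.ByteBuffer;

    private static ByteBuffer[] expandNioBufferArray(ByteBuffer[] array, int neededSpace, int size) {
        int newCapacity = array.length;
        do {
            newCapacity <<= 1; // double until the needed space fits
            if (newCapacity < 0) {
                throw new IllegalStateException();
            }
        } while (neededSpace > newCapacity);
        ByteBuffer[] newArray = new ByteBuffer[newCapacity];
        System.arraycopy(array, 0, newArray, 0, size); // keep the already collected buffers
        return newArray;
    }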
Motivation:
ChannelOutboundBuffer often uses too much memory. This is especially a problem if you want to serve a lot of connections. It is due to the fact that it uses 2 arrays internally: one is used as a circular buffer to store the Entries, which are never released (ChannelOutboundBuffer is pooled), and one is used to hold the ByteBuffers that are used for gathering writes.
Modifications:
Rewrite ChannelOutboundBuffer to remove these two arrays by:
- Make Entry recyclable and use it as a linked node
- Remove the circular buffer which was used for the Entries, as we use a linked-list-like structure now
- Remove the array that held the ByteBuffers and replace it with a ByteBuffer array held by a FastThreadLocal. We use a fixed capacity of 1024 here, which is fine as we share these anyway.
- ChannelOutboundBuffer is not recyclable anymore, as it is now a "light-weight" object. We recycle the internally used Entries instead.
Result:
Less memory footprint and resource usage. Performance seems to be a bit better, most likely because we no longer need to expand any arrays.
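A sketch of the two structural changes; field names are illustrative:

    import java.nio.ByteBuffer;
    import io.netty.util.concurrent.FastThreadLocal;

    // One fixed-size ByteBuffer array per thread, shared by every
    // ChannelOutboundBuffer running on that thread.
    static final FastThreadLocal<ByteBuffer[]> NIO_BUFFERS =
            new FastThreadLocal<ByteBuffer[]>() {
                @Override
                protected ByteBuffer[] initialValue() {
                    return new ByteBuffer[1024];
                }
            };

    // Entries form a recyclable singly linked list instead of living in a
    // pre-sized circular array.
    static final class Entry {
        Entry next;   // link to the next pending entry
        Object msg;   // the pending message
        long total;   // total size of the message
        // recycled through a Recycler once the write completes
    }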
Benchmark before change:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    26.88ms   67.47ms  1.26s      97.97%
    Req/Sec    191.81k   28.22k   255.63k    83.86%
  364806639 requests in 2.00m, 48.92GB read
Requests/sec: 3040101.23
Transfer/sec: 417.49MB
Benchmark after change:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    22.22ms   17.22ms  301.77ms   90.13%
    Req/Sec    194.98k   41.98k   328.38k    70.50%
  371816023 requests in 2.00m, 49.86GB read
Requests/sec: 3098461.44
Transfer/sec: 425.51MB
Motivation:
We sometimes do not use the correct exception message when throwing it from the native code.
Modifications:
Fixed the message.
Result:
Correct message in exception
Motivation:
We have some inconsistency when handling writes. Sometimes we call ChannelOutboundBuffer.progress(...) for complete writes and sometimes not. We should always call it.
Modifications:
Correctly call ChannelOutboundBuffer.progress(...) for complete and incomplete writes.
Result:
Consistent behavior
Motivation:
Due to a race condition while handling cancellation of TimerTasks, it was possible to corrupt the linked-list structure represented by HashedWheelBucket and so produce an NPE.
Modification:
Fix the problem by adding another MpscLinkedQueue which holds the cancellation tasks and processes them on each tick. This allows using no synchronization / locking at all, while introducing a latency of at most 1 tick before the TimerTask can be GC'ed.
Result:
No more NPE
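A sketch of the scheme; the real code uses an MpscLinkedQueue, for which ConcurrentLinkedQueue stands in here:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // cancel() only enqueues; the worker thread unlinks cancelled timeouts from
    // their buckets on each tick, so a bucket's links are touched by one thread only.
    private final Queue<HashedWheelTimeout> cancelledTimeouts =
            new ConcurrentLinkedQueue<HashedWheelTimeout>();

    private void processCancelledTasks() {
        for (;;) {
            HashedWheelTimeout timeout = cancelledTimeouts.poll();
            if (timeout == null) {
                return; // nothing (more) was cancelled since the last tick
            }
            timeout.remove(); // unlink from its HashedWheelBucket
        }
    }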
Motivation:
In the ReplayingDecoder / ByteToMessageDecoder channelInactive(...) method we try to decode one last time and fire all decoded messages through the pipeline before calling ctx.fireChannelInactive(...). To keep the correct order of events we also need to call ctx.fireChannelReadComplete() if we read anything.
Modifications:
- Change channelInactive(...) to call ctx.fireChannelReadComplete() if something was decoded
- Move out.recycle() to finally block
Result:
Correct order of events.
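Roughly, simplified from ByteToMessageDecoder:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.util.internal.RecyclableArrayList;

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        RecyclableArrayList out = RecyclableArrayList.newInstance();
        try {
            // ... decode one last time into out ...
        } finally {
            try {
                int size = out.size();
                for (int i = 0; i < size; i++) {
                    ctx.fireChannelRead(out.get(i));
                }
                if (size > 0) {
                    // something was decoded: fire readComplete before inactive
                    ctx.fireChannelReadComplete();
                }
                ctx.fireChannelInactive();
            } finally {
                out.recycle(); // in a finally block so it always runs
            }
        }
    }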
Motivation:
ChannelOutboundBuffer is basically a circular array queue of its entry
objects. Once an entry is created in the array, it is never nulled out
to reduce the allocation cost.
However, because it is a circular queue, the array almost always ends up
with as many entry instances as the size of the array, regardless of the
number of pending writes.
In the worst case, a channel might have only 1 pending write at maximum
while creating 32 entry objects, where 32 is the initial capacity of the
array.
Modifications:
- Reduce the initial capacity of the circular array queue to 4.
- Make the initial capacity of the circular array queue configurable
Result:
We spend 4 times less memory for entry objects under certain
circumstances.
- Rewrite with linear probing, no state array, compaction at cleanup
- Optimize keys() and values() to not use reflection
- Optimize hashCode() and equals() for efficient iteration
- Fixed equals() to not return true for equals(null)
- Optimize iterator to not allocate new Entry at each next()
- Added toString()
- Added some new unit tests
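A sketch of the linear-probing lookup with illustrative field names; an empty value slot terminates the probe chain, which is why no separate state array is needed:

    private int indexOf(int key) {
        int startIndex = hashIndex(key);
        int index = startIndex;
        for (;;) {
            if (values[index] == null) {
                return -1; // empty slot: the key cannot be further along the chain
            }
            if (keys[index] == key) {
                return index;
            }
            index = index == keys.length - 1 ? 0 : index + 1; // wrap-around probe
            if (index == startIndex) {
                return -1; // searched the whole table
            }
        }
    }

    private int hashIndex(int key) {
        return (key % keys.length + keys.length) % keys.length; // keep it non-negative
    }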
Motivation:
Message from FindBugs:
This method performs synchronization on an object that is an instance of a class from the java.util.concurrent package (or its subclasses). Instances of these classes have their own concurrency control mechanisms that are orthogonal to the synchronization provided by the Java keyword synchronized. For example, synchronizing on an AtomicBoolean will not prevent other threads from modifying the AtomicBoolean.
Such code may be correct, but should be carefully reviewed and documented, and may confuse people who have to maintain the code at a later date.
Modification:
Use synchronized(this)
Result:
Less confusing code
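For illustration, the pattern FindBugs complains about and the fix; the names are made up:

    import java.util.concurrent.atomic.AtomicLong;

    class Example {
        private final AtomicLong counter = new AtomicLong();

        // Flagged: locks the AtomicLong's monitor, which is unrelated to its
        // own atomic operations.
        void updateBefore() {
            synchronized (counter) {
                counter.incrementAndGet();
            }
        }

        // Fixed: synchronize on this instead.
        void updateAfter() {
            synchronized (this) {
                counter.incrementAndGet();
            }
        }
    }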
Motivation:
At the moment we use Get*ArrayElements all the time in the epoll transport, which may be wasteful as the JVM may do a memory copy for this. For code paths that execute fast (without blocking) we should rather make use of GetPrimitiveArrayCritical and ReleasePrimitiveArrayCritical, as this signals the JVM that we do not want any memory copy unless really needed. It is important to only do this on non-blocking code paths, as this may even suspend the GC so that the JVM does not move the arrays around.
See also http://docs.oracle.com/javase/7/docs/technotes/guides/jni/spec/functions.html#GetPrimitiveArrayCritical
Modification:
Make use of GetPrimitiveArrayCritical / ReleasePrimitiveArrayCritical as a replacement for Get*ArrayElements / Release*ArrayElements where possible.
Result:
Better performance due to fewer memory copies.
Motivation:
In EpollSocketChannel.writeBytesMultiple(...) we loop over all buffers to see if we need to adjust the readerIndex for incomplete writes. We can skip this if we know that everything was written (i.e. a complete write).
Modification:
Use a fast path if all bytes were written, so there is no need to loop over the buffers
Result:
Fast write path for the average case.
Motivation:
At the moment ChannelOutboundBuffer.nioBuffers() returns null if something is contained in the ChannelOutboundBuffer which is not a ByteBuf. This is a problem for two reasons:
1 - In the javadocs we state that it will never return null
2 - We may do a suboptimal write, as there may be things that could be written via gathering writes
Modifications:
Change ChannelOutboundBuffer.nioBuffers() to never return null, but have it contain all ByteBuffers that were found before the first non-ByteBuf message. This way we can do a gathering write and also conform to the javadocs.
Result:
Better speed and also a correct implementation in terms of the API.
Motivation:
Now Netty has a few problems with null values.
Modifications:
- Check File in DiskFileUpload.toString().
If File is null we will get an NPE when calling the toString() method.
- Check Result<String> in MqttDecoder.decodeConnectionPayload(...).
- Check Unsafe before calling unsafe.getClass() in PlatformDependent0 static block.
- Removed unnecessary null check in WebSocket08FrameEncoder.encode(...).
Because msg.content() can not return null.
- Removed unnecessary null checks in ConcurrentHashMapV8.removeTreeNode(TreeNode<K,V>).
- Removed unnecessary null check in OioDatagramChannel.doReadMessages(List<Object>).
Because tmpPacket.getSocketAddress() always returns a new SocketAddress instance.
- Removed unnecessary null check in OioServerSocketChannel.doReadMessages(List<Object>).
Because socket.accept() always returns a new Socket instance.
- Pass Unpooled.buffer(0) instead of null inside CloseWebSocketFrame(boolean, int) constructor.
If we will pass null we will get NPE in super class constructor.
- Added throw new IllegalStateException in GlobalEventExecutor.awaitInactivity(long, TimeUnit) if it is called before GlobalEventExecutor.execute(Runnable).
Because currently we get an NPE; an IllegalStateException is better in this case.
- Fixed null check in OpenSslServerContext.setTicketKeys(byte[]).
Now we throw an NPE if the byte[] is null.
Result:
Added new null checks when it is necessary, removed unnecessary null checks and fixed some NPE problems.
Motivation:
Fix some typos in Netty.
Modifications:
- Fix potentially dangerous use of non-short-circuit logic in Recycler.transfer(Stack<?>).
- Removed double 'the the' in javadoc of EmbeddedChannel.
- Log the exception message if we cannot get SOMAXCONN in NetUtil's static block.
Motivation:
Fixed mistakes found in compression codecs.
Modifications:
- Changed return type of ZlibUtil.inflaterException() from CompressionException to DecompressionException
- Updated @throws in javadoc of JZlibDecoder to throw DecompressionException instead of CompressionException
- Fixed JdkZlibDecoder to throw DecompressionException instead of CompressionException
- Removed unnecessary empty lines in JdkZlibEncoder and JZlibEncoder
- Removed public modifier from Snappy class
- Added MAX_UNCOMPRESSED_DATA_SIZE constant in SnappyFramedDecoder
- Used in.readableBytes() instead of (in.writerIndex() - in.readerIndex()) in SnappyFramedDecoder
- Added private modifier for enum ChunkType in SnappyFramedDecoder
Result:
Fixed sum overflow in Bzip2HuffmanAllocator, improved exceptions in ZlibDecoder implementations, hid Snappy class
Modifications:
- Added a static modifier for CompositeByteBuf.Component.
This class is an inner class, but does not use its embedded reference to the object which created it. This reference makes the instances of the class larger, and may keep the reference to the creator object alive longer than necessary.
- Fixed a FindBugs warning where a boxed primitive was created from a String just to extract the unboxed primitive value.
- Removed unnecessary checks if a file exists before calling mkdirs() in NativeLibraryLoader and PlatformDependent.
Because the method mkdirs() has this check inside.
Conflicts:
codec-http/src/main/java/io/netty/handler/codec/http/multipart/DiskAttribute.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeAggregator.java
codec-stomp/src/main/java/io/netty/handler/codec/stomp/StompSubframeDecoder.java
Motivation:
We create a new CompactObjectInputStream wrapping a ByteBufInputStream in the ObjectDecoder.decode(...) method and don't close these InputStreams before the return statement.
Modifications:
Keep a reference to the ObjectInputStream and close it before the return statement.
Result:
Close the InputStreams and clean up unused resources. It will be better for GC.
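Rough shape of the fix; CompactObjectInputStream and classResolver are ObjectDecoder internals:

    import java.io.ObjectInputStream;
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufInputStream;
    import io.netty.channel.ChannelHandlerContext;

    @Override
    protected Object decode(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
        ByteBuf frame = (ByteBuf) super.decode(ctx, in);
        if (frame == null) {
            return null;
        }
        ObjectInputStream ois =
                new CompactObjectInputStream(new ByteBufInputStream(frame), classResolver);
        try {
            return ois.readObject();
        } finally {
            ois.close(); // the kept reference lets us close the stream
        }
    }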
Motivation:
In the previous fix for #2667 I introduced a bit of overhead by calling setEpollOut() too often.
Modification:
Only call setEpollOut() if really needed and remove unused code.
Result:
Less overhead when the network is saturated.
Motivation:
As a DatagramChannel supports writing to multiple remote peers, we must not close the Channel once an IOException occurs, as this error may only be valid for one remote peer.
Modification:
Continue writing on IOException.
Result:
DatagramChannel can be used even after an IOException occurs during writing.
Motivation:
We need to continue writing until we hit EAGAIN to make sure we do not see starvation
Modification:
Write until EAGAIN is returned
Result:
No starvation when using native transport with ET.
Motivation:
Because of a missing return statement we may produce an NPE when trying to fulfill the connect ChannelPromise after it was already fulfilled.
Modification:
Add missing return statement.
Result:
No more NPE.
Motivation:
The handling of IOV_MAX was done in the JNI code base, which makes it really complicated to maintain.
Modifications:
Move the handling of IOV_MAX to Java code to simplify things
Result:
Cleaner code.
Motivation:
In our nio implementation we use write-spinning to maximize throughput, but in the native implementation this is not used.
Modification:
Respect writeSpinCount in native transport.
Result:
Better throughput
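A sketch of the write-spinning loop; doWriteBytes(...) is illustrative and assumed to return 0 on EAGAIN:

    import io.netty.buffer.ByteBuf;

    private void writeWithSpinning(ByteBuf buf) throws Exception {
        boolean done = false;
        for (int i = config().getWriteSpinCount() - 1; i >= 0; i--) {
            int localFlushedAmount = doWriteBytes(buf);
            if (localFlushedAmount == 0) {
                break; // EAGAIN: stop spinning and wait for writability
            }
            if (!buf.isReadable()) {
                done = true; // everything was written
                break;
            }
        }
        if (!done) {
            setEpollOut(); // let epoll tell us when the socket is writable again
        }
    }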
Motivation:
When we receive an incomplete WebSocketFrame we need to make sure to wait for more data. Because we did not do this, we could produce an NPE.
Modification:
Make sure we do not try to add null to the RecyclableArrayList
Result:
No more NPE on incomplete frames.
Motivation:
Currently we do 4 ByteBuf.writeBytes(...) calls per header line. This can be improved.
Modification:
Introduce two new HttpHeaders methods to allow creating an HttpHeaderEntity which contains the separator. With this we can reduce it to 2 ByteBuf.writeBytes(...) calls per header line
Result:
Performance improvement.
Motivation:
I introduced ensureAccessible() calls as part of 6c47cc9711 in some places. Unfortunately I also added some where they are not needed and so caused a performance regression.
Modification:
Remove calls where not needed.
Result:
Fixed performance regression.
Motivation:
I introduced range checks as part of 6c47cc9711 in some places. Unfortunately I also added some where they are not needed and so caused a performance regression.
Modification:
Remove range checks where not needed
Result:
Fixed performance regression.
Motivation:
In our new version of HWT we used a kind of lazy cancellation of timeouts, by putting them back in the queue and letting them be picked up on the next tick. This caused multiple problems:
- we may corrupt the MpscLinkedQueue if the task is used as a tombstone
- it sometimes led to an unnecessary delay, especially when someone executed some "heavy" logic in the TimerTask
Modifications:
Use a Lock per HashedWheelBucket for safe and fast removal.
Result:
Cancellation of tasks can be done fast, things can be GC'ed, and no infinite loop is possible anymore
Motivation:
HTTP header validation can be expensive, so we should allow disabling it like we do in HttpObjectDecoder.
Modification:
Add constructor argument to disable validation.
Result:
Performance improvement
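Usage sketch, following the flag that HttpObjectDecoder already exposes (classes from io.netty.handler.codec.http):

    HttpRequestDecoder decoder = new HttpRequestDecoder(4096, 8192, 8192, false);
    HttpHeaders headers = new DefaultHttpHeaders(false); // false = skip validation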
Motivation:
HttpObjectAggregator currently creates a new FullHttpResponse / FullHttpRequest for each message it needs to aggregate. While doing so it also creates 2 DefaultHttpHeaders instances (one for the headers and one for the trailing headers). This is bad for two reasons:
- More objects are created than needed, and populating the headers is not free
- Headers may get validated even if the validation was disabled in the decoder
Modification:
- Wrap the previously created HttpResponse / HttpRequest and so reuse the original HttpHeaders
- Reuse the previously created trailing HttpHeaders.
- Fix a bug where the trailing HttpHeaders were incorrectly mixed into the headers.
Result:
- Less GC
- Faster HttpObjectAggregator implementation
Motivation:
It's not always the case that there is another handler in the pipeline that will intercept the exceptionCaught event, because sometimes users just sub-class. In this case the exception will just hit the end of the pipeline.
Modification:
Throw the TooLongFrameException so that sub-classes can handle it in the exceptionCaught(...) method directly.
Result:
Sub-classes can correctly handle the exception.
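Usage sketch of handling it directly in a sub-class:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.TooLongFrameException;

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        if (cause instanceof TooLongFrameException) {
            // e.g. answer with an error and close, instead of letting the event
            // fall off the end of the pipeline
            ctx.close();
        } else {
            super.exceptionCaught(ctx, cause);
        }
    }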
Motivation:
The epoll transport fails on a gathering write of more than 1024 buffers. As Linux supports at most 1024 iov entries when calling writev(...), the epoll transport throws an exception.
Thanks again to @blucas for providing me with a reproducer, which helped me understand what the issue is.
Modifications:
Make sure we break down the writes if too many buffers are used for gathering writes.
Result:
Gathering writes work with any number of buffers
Motivation:
CompositeByteBuf.deallocate generates unnecessary GC pressure when using the 'foreach' loop, as a 'foreach' loop creates an iterator when looping.
Modification:
Convert 'foreach' loop into regular 'for' loop.
Result:
Less GC pressure (and possibly more throughput) as the 'for' loop does not create an iterator
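For illustration; components is the existing List of Components:

    // before: allocates an Iterator per deallocate() call
    for (Component c : components) {
        c.freeIfNecessary();
    }

    // after: index-based loop, no allocation
    int size = components.size();
    for (int i = 0; i < size; i++) {
        components.get(i).freeIfNecessary();
    }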
Motivation:
When an exception is thrown while trying to send a DatagramPacket or SctpMessage, a buffer may leak.
Modification:
Correctly handle allocated buffers in case of exception
Result:
No more leaks