Motivation:
Currently, when there are bytes left in the cumulation buffer, we do a byte copy to produce the input buffer for the decode method. This can add considerable overhead to the implementation.
Modification:
- Use a CompositeByteBuf to eliminate the byte copy.
- Allow the user to specify whether a CompositeByteBuf should be used, as some handlers can only act on one ByteBuffer in an efficient way (like SslHandler :( ). See the sketch below.
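A minimal sketch of how a decoder might opt out of the composite cumulation when it truly needs one contiguous buffer; the setCumulator/MERGE_CUMULATOR names reflect the configuration hook this change adds and should be treated as illustrative here:

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.ByteToMessageDecoder;

    import java.util.List;

    public class ContiguousFrameDecoder extends ByteToMessageDecoder {
        public ContiguousFrameDecoder() {
            // Opt out of the zero-copy CompositeByteBuf cumulation and fall back
            // to the copying cumulator when the decoder can only work efficiently
            // on a single contiguous ByteBuf.
            setCumulator(MERGE_CUMULATOR);
        }

        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            // Frame decoding as usual; with the default composite cumulator,
            // left-over bytes are appended instead of being copied.
        }
    }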
Result:
Performance improvement as shown in the following benchmark.
Without this patch:
[xxx@xxx ~]$ ./wrk-benchmark
Running 5m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 20.19ms 38.34ms 1.02s 98.70%
Req/Sec 241.10k 26.50k 303.45k 93.46%
1153994119 requests in 5.00m, 155.84GB read
Requests/sec: 3846702.44
Transfer/sec: 531.93MB
With the patch:
[xxx@xxx ~]$ ./wrk-benchmark
Running 5m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 17.34ms 27.14ms 877.62ms 98.26%
Req/Sec 252.55k 23.77k 329.50k 87.71%
1209772221 requests in 5.00m, 163.37GB read
Requests/sec: 4032584.22
Transfer/sec: 557.64MB
Motivation:
When a user sees an error message, sometimes he or she does not know
what exactly he or she has to do to fix the problem.
Modifications:
Log the URL of the wiki pages that might help the user troubleshoot.
Result:
We are more friendly.
Motivation:
When a user deliberately omits netty-tcnative from the classpath, he or
she will see an ugly ClassNotFoundException stack trace.
Modifications:
Log more briefly when netty-tcnative is not in classpath.
Result:
Better-looking log at DEBUG level
Motivation:
It seems that slicing a buffer and writing that slice to the
ChannelHandlerContext decreases the initial refCnt to 0, while the
original buffer is not yet fully used (not empty).
Modifications:
As suggested in the ticket and verified by testing, the currentBuffer is
retained when it is sliced, since it will still be used later on (see the
sketch below).
A test case for this issue is added.
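Illustrative only (the context, indices and method name are placeholders): retaining before the write keeps currentBuffer alive for the later writes.

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;

    final class SliceWriteSketch {
        // The slice shares currentBuffer's refCnt, and writing it hands that
        // reference to the outbound path; retain first so the parent is not
        // released before its remaining bytes have been written.
        static void writeChunk(ChannelHandlerContext ctx, ByteBuf currentBuffer,
                               int index, int length) {
            ctx.write(currentBuffer.slice(index, length).retain());
        }
    }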
Result:
The currentBuffer still has its correct refCnt of 1 when the last
(non-sliced) write is reached and is therefore released correctly.
The exception no longer occurs.
This fix should be applied to all branches >= 4.0.
Motivation:
The latest stable RHEL version of 6.x is now 6.6.
Modification:
Update pom.xml's validation configuration
Result:
Can release on the latest stable RHEL version in 6.x
Motivation:
- There's no point in pre-population.
- It wastes memory and time because the entries are going to be cached lazily anyway.
- Some pre-populated cipher suites are ancient and will never be used.
Modification:
- Remove cache pre-population
Result:
Sanity restored
Motivation
----------
The UTF-8 performance tests also used getBytes with US-ASCII, which is
incorrect and also produces different performance numbers.
Modifications
-------------
Use CharsetUtil.UTF_8 instead of US_ASCII for the getBytes calls.
Result
------
Accurate and semantically correct benchmarking results on utf8
comparisons.
When handling an oversized message, HttpObjectAggregator does not wait
until the last chunk is received to produce the failed message, making
AggregatedFullHttpMessage.trailingHeaders() return null.
Related: #3019
Motivation:
We have multiple (Full)HttpRequest/Response implementations and only
some of them implement toString() properly.
Modifications:
- Add the reusable string converter for HttpMessages to HttpMessageUtil
- Implement toString() of (Full)HttpRequest/Response implementations
properly using HttpMessageUtil
Result:
A prettier string representation is returned by the HttpMessage
implementations.
Related: #3274
Motivation:
The channelReadComplete() event is not triggered after a successful read
in EpollDatagramChannel.
Modifications:
- Trigger exceptionCaught() event for read failure only once for less
noise
- Trigger channelReadComplete() event at the end of the read.
Result:
Fixes #3274
Motivation:
Calling JNI methods is pretty expensive, so we should only do so when needed.
Modifications:
Call JNI methods lazily, only when actually needed.
Result:
Better performance.
Motivation:
SSL_set_cipher_list() in OpenSSL does not fail as long as at least one
cipher suite is available. It is different from the semantics of
SSLEngine.setEnabledCipherSuites(), which raises an exception when the
list contains an unavailable cipher suite.
Modifications:
- Add OpenSsl.isCipherSuiteAvailable(String) which checks the
availability of a cipher suite
- Raise an IllegalArgumentException when the specified cipher suite is
not available
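A rough sketch of the validation this adds on top of SSL_set_cipher_list(), using the new OpenSsl.isCipherSuiteAvailable(String) helper named above (the surrounding class is illustrative):

    import io.netty.handler.ssl.OpenSsl;

    final class CipherSuiteValidation {
        // Mirror SSLEngine.setEnabledCipherSuites() semantics: reject the whole
        // list if any requested suite is not available to OpenSSL.
        static void validate(Iterable<String> cipherSuites) {
            for (String c : cipherSuites) {
                if (!OpenSsl.isCipherSuiteAvailable(c)) {
                    throw new IllegalArgumentException("unsupported cipher suite: " + c);
                }
            }
        }
    }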
Result:
Fixed compatibility
Motivation:
To make OpenSslEngine a full drop-in replacement, we need to implement
getSupportedCipherSuites() and get/setEnabledCipherSuites().
Modifications:
- Retrieve the list of the available cipher suites when initializing
OpenSsl.
- Improve CipherSuiteConverter to understand SRP
- Add more test data to CipherSuiteConverterTest
- Add bulk-conversion method to CipherSuiteConverter
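Since OpenSslEngine now implements the standard accessors named in the motivation, ordinary SSLEngine code applies unchanged; a minimal usage sketch (the engine setup itself is assumed, and the suite name is just an example):

    import javax.net.ssl.SSLEngine;
    import java.util.Arrays;

    final class CipherSuiteExample {
        // Works the same whether the engine is the JDK SSLEngineImpl or Netty's
        // OpenSslEngine: inspect what is supported, then narrow the enabled set.
        // An unavailable suite raises an IllegalArgumentException in both cases.
        static void restrict(SSLEngine engine) {
            System.out.println(Arrays.toString(engine.getSupportedCipherSuites()));
            engine.setEnabledCipherSuites(new String[] {
                    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
            });
        }
    }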
Result:
OpenSslEngine should now be a drop-in replacement for JDK SSLEngineImpl
for most cases.
Related: #3285
Motivation:
When a user attempts to switch from JdkSslContext to OpenSslContext, he
or she will see the initialization failure if he or she specified custom
cipher suites.
Modifications:
- Provide a utility class that converts between Java cipher suite string
and OpenSSL cipher suite string
- Attempt to convert the cipher suite so that a user can use the cipher
suite string format of Java regardless of the chosen SslContext impl
Result:
- It is possible to convert all known cipher suite strings.
- It is possible to switch from JdkSslContext to OpenSslContext and
vice versa without any configuration changes.
Motivation:
When a CompositeByteBuf is empty (i.e. has no component), its internal
memory access operations do not always behave as expected.
Modifications:
Check if the number of components is zero. If so, return an empty
array or an empty NIO buffer, etc.
Result:
More robustness
- Ensure an EmptyByteBuf has an array, an NIO buffer, and a memory
address at the same time
- Add an assertion that checks if EMPTY_BUFFER is an EmptyByteBuf,
just in case we make a mistake in the future
Rebased and cleaned up based on the work by @normanmaurer
Motivation:
Currently, IOExceptions and ClosedChannelExceptions are thrown from
inside the JNI methods. Instantiation of Java objects inside JNI code is
an expensive operation, not to mention filling in the stack trace for
every exception that is instantiated.
Modifications:
Change most JNI methods to return a negative value on failure so that
the exceptions are instantiated outside the native code.
Also, pre-instantiate some commonly-thrown exceptions for better
performance.
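Illustrative only: a hypothetical sketch of the resulting calling convention on the Java side (the native method and helper names are made up for this example).

    import java.io.IOException;
    import java.nio.ByteBuffer;

    // The native method signals failure with a negative return value, and the
    // exception (possibly a pre-instantiated one) is thrown on the Java side
    // rather than inside JNI code.
    final class NativeCallSketch {
        private static final IOException WRITE_FAILED = new IOException("write failed");

        static int write(int fd, ByteBuffer buf, int pos, int limit) throws IOException {
            int res = nativeWrite(fd, buf, pos, limit); // hypothetical JNI method
            if (res < 0) {
                throw WRITE_FAILED; // no per-call exception instantiation in native code
            }
            return res;
        }

        private static native int nativeWrite(int fd, ByteBuffer buf, int pos, int limit);
    }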
Result:
Performance gain
Motivation:
Several issues were revealed by various tickets (#2900, #2956).
Also use the improvement on user writability management from #3036.
And finally add a mixed handler, for both Global and Channel traffic
shaping, with the advantages of being created only once and using less
memory and less shaping overhead.
Issue #2900
When a huge amount of data is written, the current behavior of the
TrafficShaping handler is to limit the delay to 15s, whatever the delay
of the previous write. This is wrong, and when a huge number of writes
are done in a short time, the traffic is not correctly shaped.
Moreover, there is a high risk of OOM if the user does not handle the
write buffering in his/her own handler, for instance via
ChannelFuture.addListener(), when the TrafficShapingHandler is in place.
This fix uses the "user-defined writability flags" from #3036 to let
the TrafficShapingHandlers manage writability directly, as is done for
reading, thus using the default isWritable() and
channelWritabilityChanged().
This allows, for instance, HttpChunkedInput to be fully compatible.
Also, the "bandwidth" computed on write is currently based only on
"acquired" write orders, not on "real" write orders, which is wrong from
a statistics point of view.
Issue #2956
When using GlobalTrafficShaping, every write (and read) is synchronized,
thus leading to a drop in performance.
ChannelTrafficShaping is not affected by this issue, since synchronizing
is correct there (the handler is per channel, so the synchronization is too).
Modifications:
The write delay computation now takes into account the previous write
delay and time to check whether the 15s delay (maxTime) is really
exceeded or not (using the last scheduled write time). The algorithm is
simplified and at the same time more accurate.
This proposal uses the #3036 improvement on user-defined writability
flags.
When the real write occurs, the statistics are updated accordingly on a
new attribute (getRealWriteThroughput()).
To limit the synchronization, all synchronized blocks in
GlobalTrafficShapingHandler's submitWrite were removed. They are
replaced with a lock per channel (since synchronization is still needed
to prevent unordered writes per channel), as in the sendAllValid method
for the very same reason.
Also, all synchronized blocks on TrafficCounter in read/writeTimeToWait()
are removed, as they are unnecessary: the caller already holds the lock.
Still, the creation and removal operations on the per-channel lock
(PerChannel object) are synchronized to prevent concurrency issues on
this critical part, but their scope is limited.
Additional changes:
1) Use System.nanoTime() instead of System.currentTimeMillis() and
minimize calls
2) Remove the "/ 10 * 10" rounding since sleep is no longer used
3) Use nanoTime instead of currentTime so that elapsed time is computed,
not wall-clock time. Therefore the "now" relative time (nanoTime based)
is passed to all sub-methods.
4) Take care of the removal of the handler to force-write all pending
writes and release reads too
5) Review the Javadoc to make explicit:
- the recommendation to take isWritable into account
- the recommendation to provide a reasonable message size according to
the traffic shaping limit
- the "best effort" traffic shaping behavior when changing the
configuration dynamically
Add a MixteGlobalChannelTrafficShapingHandler which allows using only one
handler to mix Global and Channel TSH. It saves memory and tries to
optimize the traffic among the various channels. A sketch of how an
application handler can cooperate with the shaper-driven writability
follows below.
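As recommended in the updated Javadoc, application handlers should honour the channel writability that the shaper now drives through the user-defined flags; a minimal sketch (the handler itself is illustrative, not part of this change):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class BackPressureAwareHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelWritabilityChanged(ChannelHandlerContext ctx) {
            Channel ch = ctx.channel();
            if (ch.isWritable()) {
                // The shaper cleared its user-defined writability flag:
                // resume producing/writing data here.
            } else {
                // The shaper set the flag: stop writing to avoid buffering
                // an unbounded amount of data (the OOM risk described above).
            }
            ctx.fireChannelWritabilityChanged();
        }
    }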
Result:
The traffic shaping is more stable, even with a huge number of writes in
a short time, by taking the last scheduled write time into consideration.
The implementation of TrafficShapingHandler using user-defined
writability flags and the default isWritable() and
fireChannelWritabilityChanged() works as expected.
The statistics are more valuable (requested writes vs. real writes).
The Global TrafficShapingHandler now needs less "global" synchronization,
hopefully down to the minimum, while still synchronizing per channel as
needed.
The GlobalChannel TrafficShapingHandler allows using only one handler for
all channels while still offering per-channel traffic shaping in addition
to global traffic shaping.
Backward compatibility is maintained.
Motivation:
Even if it's against the HTTP RFC, there are situations where it may be useful to use characters other than US-ASCII in the headers. We should make this possible by allowing the user to override how headers are encoded.
Modifications:
- Add an encodeHeaders(...) method so it can be overridden (see the sketch below).
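A rough sketch of such an override, assuming encodeHeaders(...) receives the headers and the output buffer (the exact hook signature may differ):

    import io.netty.buffer.ByteBuf;
    import io.netty.handler.codec.http.HttpHeaders;
    import io.netty.handler.codec.http.HttpRequestEncoder;
    import io.netty.util.CharsetUtil;

    import java.util.Map;

    public class Utf8HeaderRequestEncoder extends HttpRequestEncoder {
        // Illustrative only: encode header names and values as UTF-8
        // instead of US-ASCII.
        @Override
        protected void encodeHeaders(HttpHeaders headers, ByteBuf buf) {
            for (Map.Entry<String, String> header : headers) {
                buf.writeBytes(header.getKey().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(": ".getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(header.getValue().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes("\r\n".getBytes(CharsetUtil.UTF_8));
            }
        }
    }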
Result:
It's now possible to encode headers with a charset other than US-ASCII by simply extending the encoder and overriding the encodeHeaders(...) method.
Motivation:
OpenSSL supports the SSL_CTX_set_session_id_context function to limit the contexts in which a session can be used. We should support this.
Modifications:
Add an OpenSslServerSessionContext that exposes a setSessionIdContext(...) method.
Result:
It's now possible to use SSL_CTX_set_session_id_context.
Motivation:
It is sometimes useful to enable / disable the session cache.
Modifications:
* Add OpenSslSessionContext.setSessionCacheEnabled(...) and isSessionCacheEnabled()
Result:
It is now possible to enable / disable the session cache on the fly.
Motivation:
To be compatible with SSLEngine, we need to support enabling / disabling protocols on the OpenSslEngine.
Modifications:
Implement OpenSslEngine.getSupportedProtocols(), getEnabledProtocols() and setEnabledProtocols(...).
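These are the standard SSLEngine accessors, so existing code keeps working; a minimal usage sketch (the engine setup is assumed and the protocol names are just common examples):

    import javax.net.ssl.SSLEngine;

    final class ProtocolExample {
        // The same call now works for the JDK engine and for OpenSslEngine.
        static void restrictToTls(SSLEngine engine) {
            engine.setEnabledProtocols(new String[] { "TLSv1", "TLSv1.1", "TLSv1.2" });
        }
    }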
Result:
Better compatibility with SSLEngine
Motivation:
The current implementation does not return the real session as a byte[] representation.
Modifications:
Create a proper Openssl.SSLSession.get() implementation which returns the real session as a byte[].
Result:
More correct implementation
Motivation:
At the moment it is not possible to make use of the session cache when OpenSsl is used. This should be possible when server mode is used.
Modifications:
- Add OpenSslSessionContext (implements SSLSessionContext) which exposes all the methods to modify the session cache.
- Add various extra methods to OpenSslSessionContext for extra functionality
- Return OpenSslSessionContext when OpenSslEngine.getSession().getContext() is called.
- Add sessionContext() to SslContext
- Move OpenSsl specific session operations to OpenSslSessionContext and mark the old methods @deprecated
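With sessionContext() exposed on SslContext, the standard SSLSessionContext knobs become reachable; an illustrative sketch (the sizes and timeout are arbitrary examples):

    import io.netty.handler.ssl.SslContext;
    import javax.net.ssl.SSLSessionContext;

    final class SessionCacheExample {
        // Illustrative only: tune the server-side session cache through the
        // newly exposed context.
        static void configure(SslContext sslCtx) {
            SSLSessionContext sessions = sslCtx.sessionContext();
            sessions.setSessionCacheSize(20480);
            sessions.setSessionTimeout(300); // seconds
        }
    }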
Result:
It's now possible to use the session cache with OpenSsl.
Motivation:
We expose no methods in ByteBuf to directly write a CharSequence into it. This forces the user to either convert the CharSequence to a byte array first or use a CharsetEncoder. Both cases have some overhead, and we can do a lot better for well-known charsets like UTF-8 and ASCII.
Modifications:
Add ByteBufUtil.writeAscii(...) and ByteBufUtil.writeUtf8(...) which can do the task in an optimized way. This is especially true if the passed-in ByteBuf extends AbstractByteBuf, which is the case for all of our implementations that do not wrap another ByteBuf.
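Typical usage of the new helpers (the allocator choice shown is just an example):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufUtil;
    import io.netty.buffer.Unpooled;

    final class WriteCharSequenceExample {
        static ByteBuf encode(CharSequence asciiLine, CharSequence utf8Line) {
            ByteBuf buf = Unpooled.buffer();
            // Fast paths that avoid String.getBytes(...) and the intermediate byte[]:
            ByteBufUtil.writeAscii(buf, asciiLine);
            ByteBufUtil.writeUtf8(buf, utf8Line);
            return buf;
        }
    }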
Result:
Writing an ASCII or UTF-8 CharSequence into an AbstractByteBuf is a lot faster than what the user could do by themselves, as we can make use of some package-private methods and so eliminate reference and range checks. When the charset is not ASCII or UTF-8, we still do a very good job and are on par in most cases with what the user would do.
The following benchmark shows the improvements:
Result: 2456866.966 ±(99.9%) 59066.370 ops/s [Average]
Statistics: (min, avg, max) = (2297025.189, 2456866.966, 2586003.225), stdev = 78851.914
Confidence interval (99.9%): [2397800.596, 2515933.336]
Benchmark Mode Samples Score Score error Units
i.n.m.b.ByteBufUtilBenchmark.writeAscii thrpt 50 9398165.238 131503.098 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiString thrpt 50 9695177.968 176684.821 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringViaArray thrpt 50 4788597.415 83181.549 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringViaArrayWrapped thrpt 50 4722297.435 98984.491 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiStringWrapped thrpt 50 4028689.762 66192.505 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiViaArray thrpt 50 3234841.565 91308.009 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiViaArrayWrapped thrpt 50 3311387.474 39018.933 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeAsciiWrapped thrpt 50 3379764.250 66735.415 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8 thrpt 50 5671116.821 101760.081 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8String thrpt 50 5682733.440 111874.084 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringViaArray thrpt 50 3564548.995 55709.512 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringViaArrayWrapped thrpt 50 3621053.671 47632.820 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8StringWrapped thrpt 50 2634029.071 52304.876 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8ViaArray thrpt 50 3397049.332 57784.119 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8ViaArrayWrapped thrpt 50 3318685.262 35869.562 ops/s
i.n.m.b.ByteBufUtilBenchmark.writeUtf8Wrapped thrpt 50 2473791.249 46423.114 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,387.417 sec - in io.netty.microbench.buffer.ByteBufUtilBenchmark
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
The *ViaArray* benchmarks are basically doing toString().getBytes(Charset), while the others are using ByteBufUtil.write*(...).
Motivation:
ProxyHandlerTest fails with NoClassDefFoundError raised by
SslContext.newClientContext().
Modifications:
Fix a missing 'return' statement that makes the switch-case block fall
through unnecessarily.
Result:
- ProxyHandlerTest does not fail anymore.
- SslContext.newClientContext() does not raise NoClassDefFoundError
anymore.
Motivation:
Responses to HEAD requests will have a Content-Length set that doesn't
match the actual body length. So we only want to set the Content-Length
header if it isn't already set.
Modifications:
Add an 'if' check around setting the Content-Length (see the sketch below).
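Roughly, the guard looks like this (illustrative only; the method and class names in the actual handler differ):

    import io.netty.handler.codec.http.FullHttpResponse;
    import io.netty.handler.codec.http.HttpHeaders;

    final class ContentLengthGuard {
        // Only fill in Content-Length when the user has not set it already,
        // so a HEAD response can advertise the real entity length while
        // carrying an empty body.
        static void setContentLengthIfAbsent(FullHttpResponse res) {
            if (!res.headers().contains(HttpHeaders.Names.CONTENT_LENGTH)) {
                HttpHeaders.setContentLength(res, res.content().readableBytes());
            }
        }
    }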
Result:
A HEAD request will now correctly return the specified Content-Length
instead of the body length.
Motivation:
At the moment we use SSL.getLastError() in unwrap(...) to check for errors. This is very inefficient, as it creates a new String for each check, and we also use String.startsWith(...) to detect whether there was an error we need to handle.
Modifications:
Use SSL.getLastErrorNumber() to detect if we need to handle an error, as this only returns a long, so no String creation happens. The detection is also much cheaper, as we now only compare longs. Once an error is detected, SSL.getErrorString(long) is used to convert the error number to a String and include it in the log and exception messages.
Result:
Performance improvements in OpenSslEngine.unwrap(...) due to less object allocation and faster comparisons.
Motivation:
As we now support OpenSslEngine for the client side, we should use it whenever available.
Modifications:
Use SslProvider.OPENSSL when openssl can be found
Result:
OpenSslEngine is used whenever possible
Motivation:
When using client auth, it is sometimes necessary to use a custom TrustManagerFactory.
Modifications:
Allow passing in a TrustManagerFactory.
Result:
It's now possible to use custom TrustManagerFactories for JdkSslServerContext and OpenSslServerContext
Motivation:
To make OpenSsl*Context a drop-in replacement for JdkSsl*Context, we need to use the TrustManager.
Modifications:
Correctly hook in the TrustManager
Result:
Better compatibility
Motivation:
At the moment there is no way to enable client authentication when using OpenSslEngine. This limits the uses of OpenSslEngine.
Modifications:
Add support for different authentication modes.
Result:
OpenSslEngine can now also be used when client authentication is needed.
Motivation:
The current SSLSession implementation used by OpenSslEngine does not support various operations and so may not be a good replacement for the SSLEngine provided by the JDK implementation.
Modifications:
- Add SSLSession.getCreationTime()
- Add SSLSession.getLastAccessedTime()
- Add SSLSession.putValue(...), getValue(...), removeValue(...), getValueNames()
- Add correct SSLSession.getProtocol()
- Ensure OpenSSLEngine.getSession() is thread-safe
- Use optimized AtomicIntegerFieldUpdater when possible
Result:
More complete OpenSslEngine SSLSession implementation
Motivation:
We only support OpenSSL on the server side at the moment, but it would also be useful on the client side.
Modification:
* Upgrade to a new netty-tcnative snapshot to support client-side OpenSSL
* Add OpenSslClientContext which can be used to create an SslEngine for client-side usage
* Factor out the common logic between OpenSslClientContext and OpenSslServerContext into a new abstract base class called OpenSslContext
* Correctly detect handshake failures as soon as possible
* Guard against a segfault caused by multiple calls to destroyPools(). This can happen if OpenSslContext throws an exception in the constructor and the finalize() method is called later during GC
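A minimal client-side sketch under stated assumptions: the newClientContext(SslProvider) overload is assumed here, and netty-tcnative must be on the classpath for the OpenSSL path.

    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.ssl.OpenSsl;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslProvider;

    import javax.net.ssl.SSLException;

    final class OpenSslClientExample {
        // Illustrative only: pick OpenSSL when available, otherwise the JDK provider.
        static SslContext newClientContext() throws SSLException {
            SslProvider provider = OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
            return SslContext.newClientContext(provider);
        }

        static void addToPipeline(SocketChannel ch, SslContext sslCtx) {
            ch.pipeline().addFirst("ssl", sslCtx.newHandler(ch.alloc()));
        }
    }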
Result:
OpenSSL can now be used for both clients and servers.
Motivation:
SslHandler.wrap(...) does a poor job when handling a CompositeByteBuf, as it always calls ByteBuf.nioBuffer(), which does a memory copy when the CompositeByteBuf is backed by multiple ByteBufs.
Modifications:
- Use SSLEngine.wrap(ByteBuffer[], ...) to allow wrapping a CompositeByteBuf in an efficient manner
- Reduce object allocation in unwrapNonAppData(...)
Result:
Performance improvement when a CompositeByteBuf is written and the SslHandler is in the ChannelPipeline.
Motivation:
CompositeByteBuf.nioBuffers(...) returns an empty ByteBuffer array if the specified length is 0. This is not consistent with other ByteBuf implementations, which return a ByteBuffer array of size 1 containing an empty ByteBuffer.
Modifications:
Make CompositeByteBuf.nioBuffers(...) consistent with other ByteBuf implementations.
Result:
Consistent and correct behaviour of nioBuffers(...)
Motivation:
When calling slice(...) on a ByteBuf, the returned ByteBuf should be a slice of that ByteBuf and share its reference count. This is important as it is perfectly legal to use buf.slice(...).release() and have both the slice and the original ByteBuf released. At the moment this is only the case if the requested slice size is > 0. This makes the behavior inconsistent and may lead to a memory leak.
Modifications:
- Never return Unpooled.EMPTY_BUFFER when calling slice(...).
- Add test cases for buffer.slice(...).release() and buffer.duplicate(...).release()
Result:
Consistent behaviour, so no more leaks are possible.
Motivation:
When a remote peer opened a connection, only did the handshake without sending any data and then directly closed the connection, we did not call shutdown() on the OpenSslEngine. This leads to a native memory leak. Besides this, the native memory was also not freed when an OpenSslEngine was created but never used.
Modifications:
- Make sure shutdown() is called in all cases when closeInbound() is called
- Call shutdown() also in the finalize() method to ensure we release native memory when the OpenSslEngine is GC'ed
Result:
No more memory leak when using OpenSslEngine
Motivation:
TrafficShapingHandlerTest uses Logback API directly, which is
discouraged. Also, it overrides the global default log level, which
silences the DEBUG messages from other tests.
Modifications:
Remove the direct use of Logback API
Result:
The tests executed after TrafficShapingHandlerTest log their DEBUG
messages correctly.
Motivation:
ALPN version updates revealed an inconsistency: we defaulted to NPN when ALPN was expected.
Modifications:
Default to ALPN.
Result:
Build and unit tests should pass.
Motivation:
There was a bug in the Java ALPN library we are using. A new version was released to fix this bug and we should update our pom.xml to use the new version.
Modifications:
Update pom.xml to use new ALPN library.
Result:
The bug is fixed when running on newer JDK versions (1.7_u71, 1.7_u72, 1.8_u25).