Motivation:
On Linux, you can gather various metrics using getsockopt(..., TCP_INFO,
...).
Modifications:
Add EpollSocketChannel.tcpInfo(), which returns an EpollTcpInfo exposing all
metrics available via getsockopt(..., TCP_INFO, ...).
Result:
TCP_INFO support implemented
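A minimal usage sketch of the new API; the accessor names on EpollTcpInfo
(e.g. rtt(), sndCwnd()) mirror fields of the kernel's tcp_info struct and are
assumptions here, not confirmed by this entry.

    import io.netty.channel.epoll.EpollSocketChannel;
    import io.netty.channel.epoll.EpollTcpInfo;

    final class TcpInfoSketch {
        // Log a couple of TCP metrics for a connected epoll channel.
        static void logTcpMetrics(EpollSocketChannel ch) {
            EpollTcpInfo info = ch.tcpInfo();
            // rtt() and sndCwnd() are assumed accessor names mirroring tcp_info fields.
            System.out.println("rtt=" + info.rtt() + " cwnd=" + info.sndCwnd());
        }
    }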
Motivation:
The JdkZlibDecoder and JZlibDecoder call isReadable and readableBytes in the same method. There is an opportunity to reduce the number of method calls by just using readableBytes. JdkZlibDecoder also reads from the ByteBuf with an absolute index instead of one relative to readerIndex().
Modifications:
- Use readableBytes where isReadable was used
- Correct absolute ByteBuf index to be relative to readerIndex()
Result:
Fewer method calls that duplicate work, and an index out of bounds exception is prevented (a small sketch follows).
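A small sketch of the simplification, assuming a decode method that receives the
input ByteBuf: a single readableBytes() call replaces the separate isReadable()
check, and offsets are taken relative to readerIndex().

    import io.netty.buffer.ByteBuf;

    final class ReadableBytesSketch {
        static void decode(ByteBuf in) {
            int readableBytes = in.readableBytes();
            if (readableBytes == 0) {      // previously a separate isReadable() call
                return;
            }
            // read using indices relative to in.readerIndex() from here on
        }
    }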
Motivation:
In the native transport we use getpeername to obtain the remote address from the file descriptor. This may fail for various reasons in which case NULL is returned.
Modifications:
- Check for null when trying to obtain the remote / local address
Result:
No more NPE
Motivation:
Internet Explorer doesn't honor Set-Cookie header Max-Age attribute. It only honors the Expires one.
Modification:
Always generate an Expires attribute alongside the Max-Age one.
Result:
Internet Explorer compatible expiring cookies. Closes #1466.
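For example (illustrative values only), a cookie expiring in one hour would now
carry both attributes:

    Set-Cookie: sid=abc123; Max-Age=3600; Expires=Thu, 05 Feb 2015 12:00:00 GMT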
Motivation:
HttpContentDecoder had the following issues:
- For chunked content, the decoder set invalid "Content-Length" header
with length of the first decoded chunk.
- Decoding of FullHttpRequests put both the original content and the decoded
content into the output. As a result, using HttpObjectAggregator before the
decoder led to errors.
- Requests with an "Expect: 100-continue" header were not acknowledged:
the decoder didn't pass the header message down the handler chain
until content was received. If the client expected a "100 Continue" response,
a deadlock occurred.
Modification:
- Invalid "Content-Length" header is removed; handlers down the chain can either
rely on LastHttpContent message or ask HttpObjectAggregator to add the header.
- FullHttpRequest is split into HttpRequest and HttpContent (decoded) parts.
- Header (HttpRequest) part of request is sent down the chain as soon as it's received.
Result:
The issues are fixed and a unit test is added.
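A minimal pipeline sketch matching the behaviour described above;
HttpContentDecompressor is the usual HttpContentDecoder subclass, and an
aggregator placed after it restores a Content-Length header on the aggregated
message when one is needed. The 1 MiB limit is illustrative.

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.codec.http.HttpContentDecompressor;
    import io.netty.handler.codec.http.HttpObjectAggregator;
    import io.netty.handler.codec.http.HttpRequestDecoder;
    import io.netty.handler.codec.http.HttpResponseEncoder;

    final class DecompressionPipelineSketch {
        static void configure(ChannelPipeline p) {
            p.addLast(new HttpRequestDecoder());
            p.addLast(new HttpResponseEncoder());
            p.addLast(new HttpContentDecompressor());      // an HttpContentDecoder
            p.addLast(new HttpObjectAggregator(1048576));  // re-adds Content-Length on the aggregated message
        }
    }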
Motivation:
The pull request for RFC 6265 support left an unused 'first' flag in ClientCookieDecoder.
Modification:
Remove the unused 'first' flag.
Result:
Cleaner code.
Motivation:
A downstream consumer of Netty failed as emitting zero-length http2 data frames in a unit test resulted in assertion errors in Http2LocalFlowController. Since zero-length frames are valid, an assertion that http2 data frame length must be positive is invalid.
Modifications:
Assertions of data length in Http2LocalFlowController now permit zero.
Result:
Those running netty with assertions on can now emit zero length http2 data frames.
Motivation:
There are two member variables (addAllVisitor, setAllVisitor) which are likely not to be used in the majority of use cases.
Modifications:
Remove these member variables and rely on a method to return a new object when needed.
Result:
Two fewer member variables for each DefaultHeaders instance.
Motivation:
The Headers interface had two member variables (addAllVisitor, setAllVisitor) which are not necessarily always needed but are always instantiated. This may result in excess memory being used.
Modifications:
- addAllVisitor will be accessed via an addAllVisitor() method which uses lazy initialization (see the sketch after this entry).
- setAllVisitor will be accessed via a setAllVisitor() method which uses lazy initialization.
Result:
Potential memory savings by using lazy initialization.
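A minimal sketch of the lazy-initialization pattern described above; the names
are illustrative and do not reproduce the actual DefaultHeaders internals.

    final class LazyVisitorSketch {
        // Placeholder for the visitor type used when adding all headers (illustrative).
        static final class Visitor { }

        private Visitor addAllVisitor;  // no longer eagerly instantiated

        Visitor addAllVisitor() {
            Visitor v = addAllVisitor;
            if (v == null) {
                // Created only on first use, saving memory when it is never needed.
                addAllVisitor = v = new Visitor();
            }
            return v;
        }
    }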
Motivation:
Rfc6265Client/ServerCookieEncoder is a better replacement for the old
Client/ServerCookieEncoder, so there's no point in keeping both.
Modifications:
- Remove the old Client/ServerCookieEncoder
- Remove the 'Rfc6265' prefix from the new cookie encoder/decoder
classes
- Deprecate CookieDecoder
Result:
We have a much better cookie encoder/decoder implementation now (a usage sketch follows).
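A minimal usage sketch of the renamed classes; the
io.netty.handler.codec.http.cookie package and the STRICT instances reflect the
eventual API and are assumptions here.

    import io.netty.handler.codec.http.cookie.ClientCookieDecoder;
    import io.netty.handler.codec.http.cookie.Cookie;
    import io.netty.handler.codec.http.cookie.DefaultCookie;
    import io.netty.handler.codec.http.cookie.ServerCookieEncoder;

    final class CookieCodecSketch {
        static void roundTrip() {
            // STRICT is assumed to be the RFC 6265 compliant instance.
            Cookie cookie = new DefaultCookie("id", "a3fWa");
            String setCookie = ServerCookieEncoder.STRICT.encode(cookie);   // for the Set-Cookie header
            Cookie decoded = ClientCookieDecoder.STRICT.decode(setCookie);  // client-side decoding
        }
    }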
Motivation:
Currently Netty supports a weird implementation of RFC 2965.
First, this RFC has been deprecated by RFC 6265, and nobody on the
internet uses this format anymore.
Then, there's a confusion between client side and server side encoding
and decoding.
Typically, clients should only send name=value pairs.
This PR introduces RFC 6265 support, but keeps on supporting RFC 2965 in
the sense that old unused fields are simply ignored, and Cookie fields
won't be populated. Deprecated fields are comment, commentUrl, version,
discard and ports.
It also provides a mechanism for safe server-client-server roundtrip, as
User-Agents are not supposed to interpret cookie values but return them
as-is (e.g. if Set-Cookie contained a quoted value, it should be sent
back in the Cookie header in quoted form too).
Also, there are performance gains to be obtained by not allocating the
attribute name Strings, as we only want to match them to find which POJO
field to populate.
Modifications:
- New RFC6265ClientCookieEncoder/Decoder and
RFC6265ServerCookieEncoder/Decoder pairs that live alongside old
CookieEncoder/Decoder pair to not break backward compatibility.
- New Cookie.rawValue field, used for lossless server-client-server
roundtrip.
Result:
RFC 6265 support.
Clean separation of client and server side.
Decoder performance gain:
Benchmark                   Mode   Samples  Score        Error         Units
parseOldClientDecoder       thrpt  20       2070169,228  ± 105044,970  ops/s
parseRFC6265ClientDecoder   thrpt  20       2954015,476  ± 126670,633  ops/s
This commit closes #3221 and #1406.
Motivation:
HttpPostMultipartRequestDecoder threw an ArrayIndexOutOfBoundsException
when trying to decode a Content-Disposition header with a filename
containing ';' or escaped double quotes (\").
See issues #3326 and #3327.
Modifications:
Added a splitMultipartHeaderValues method which takes quotes into account,
and use it in the splitMultipartHeader method instead of StringUtils.split.
Result:
Filenames can contain semicolons and escaped double quotes (\").
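An illustrative header value of the kind that previously broke the split:

    Content-Disposition: form-data; name="file"; filename="report;2015 \"final\".txt"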
Motivation:
When SslHandler.unwrap() copies SSL records into a heap buffer, it does
not update the start offset, causing IndexOutOfBoundsException.
Modifications:
- Copy to a heap buffer before calling unwrap() for simplicity
- Do not copy an empty buffer to a heap buffer.
- unwrap(... EMPTY_BUFFER ...) never involves copying now.
- Use better parameter names for unwrap()
- Clean-up log messages
Result:
- Bugs fixed
- Cleaner code
Motivation:
When using OpenSslEngine with the SslHandler it is possible to reduce memory copies by unwrapping multiple ByteBuffers at the same time. This way we can eliminate the memory copy that is otherwise needed to cumulate partially received data.
Modifications:
- Add an OpenSslEngine.unwrap(ByteBuffer[], ...) method that can be used to unwrap multiple source ByteBuffers at the same time
- Use a CompositeByteBuf in SslHandler for inbound data so no memory copy is needed
- Use OpenSslEngine.unwrap(ByteBuffer[], ...) in SslHandler if OpenSslEngine is used and the inbound ByteBuf is backed by more than one ByteBuffer
- Reduce object allocation
Result:
SslHandler is faster when using OpenSslEngine and produces less garbage.
Motivation:
Decompression handlers contain heavy use of switch-case statements. We
use compact indentation style for 'case' so that we utilize our screen
real-estate more efficiently.
Also, the following decompression handlers do not need to run a loop,
because ByteToMessageDecoder already runs a loop for them:
- FastLzFrameDecoder
- Lz4FrameDecoder
- LzfDecoder
Modifications:
- Fix indentations
- Do not wrap the decoding logic with a for loop when unnecessary
- Properly handle the case where a FastLz/Lzf frame contains no data, so
that the buffer does not leak and less garbage is produced.
Result:
- Efficiency
- Compact source code
- No buffer leak
Motivation:
HttpResponseStatus, HttpMethod and HttpVersion have methods that return
AsciiString, so there's no need for object-to-string conversion.
Modifications:
Use codeAsText(), name(), text() instead of setInt() and setObject()
Result:
Efficiency
Motivation:
The SpdyHttpDecoder was modified to support pushed resources that are
divided into multiple frames. The decoder accepts a pushed
SpdySynStreamFrame containing the request headers, followed by a
SpdyHeadersFrame containing the response headers.
Modifications:
This commit modifies the SpdyHttpEncoder so that it encodes pushed
resources in a format that the SpdyHttpDecoder can decode. The encoder
will accept an HttpRequest object containing the request headers,
followed by an HttpResponse object containing the response headers.
Result:
The SpdyHttpEncoder will create a SpdySynStreamFrame followed by a
SpdyHeadersFrame when sending pushed resources.
Motivation:
NioUdtMessageRendezvousChannelTest.basicEcho() is flaky on Linux and
failing on Windows.
Modifications:
Disable the problematic test until it's fixed.
Result:
Less annoyance
Motivation:
Currently, when there are bytes left in the cumulation buffer, we do a byte copy to produce the input buffer for the decode method. This can add quite some overhead to the implementation.
Modification:
- Use a CompositeByteBuf to eliminate the byte copy.
- Allow specifying whether a CompositeByteBuf should be used or not, as some handlers can only act on a single ByteBuffer in an efficient way (like SslHandler :( ). A sketch follows the benchmark below.
Result:
Performance improvement as shown in the following benchmark.
Without this patch:
[xxx@xxx ~]$ ./wrk-benchmark
Running 5m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg       Stdev      Max       +/- Stdev
    Latency      20.19ms   38.34ms    1.02s     98.70%
    Req/Sec     241.10k    26.50k   303.45k     93.46%
  1153994119 requests in 5.00m, 155.84GB read
Requests/sec: 3846702.44
Transfer/sec: 531.93MB
With the patch:
[xxx@xxx ~]$ ./wrk-benchmark
Running 5m test @ http://xxx:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg       Stdev      Max       +/- Stdev
    Latency      17.34ms   27.14ms  877.62ms    98.26%
    Req/Sec     252.55k    23.77k   329.50k     87.71%
  1209772221 requests in 5.00m, 163.37GB read
Requests/sec: 4032584.22
Transfer/sec: 557.64MB
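A hedged sketch of opting into the composite cumulation described above; it
assumes the option is exposed through a setCumulator(...) method and a
COMPOSITE_CUMULATOR constant on ByteToMessageDecoder.

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.ByteToMessageDecoder;
    import java.util.List;

    public class CompositeCumulationDecoder extends ByteToMessageDecoder {
        public CompositeCumulationDecoder() {
            // Assumed API: cumulate into a CompositeByteBuf instead of copying into one buffer.
            setCumulator(COMPOSITE_CUMULATOR);
        }

        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
            // decode frames from 'in' as usual
        }
    }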
Motivation:
When a user sees an error message, sometimes he or she does not know
what exactly he or she has to do to fix the problem.
Modifications:
Log the URL of the wiki pages that might help the user troubleshoot.
Result:
We are more friendly.
Motivation:
When a user deliberately omits netty-tcnative from the classpath, he or
she will see an ugly ClassNotFoundException stack trace.
Modifications:
Log more briefly when netty-tcnative is not in classpath.
Result:
Better-looking log at DEBUG level
Motivations:
It seems that slicing a buffer and writing this slice to the
ChannelHandlerContext decreases the initial refCnt to 0, while the original
buffer is not yet fully used (not empty).
Modifications:
As suggested in the ticket and verified by testing, the currentBuffer is
retained when it is sliced, since it will still be used later on; a sketch of
the pattern follows this entry.
A test case for this issue is added.
Result:
The currentBuffer still has its correct refCnt of 1 when reaching the last
(un-sliced) write and will therefore be released correctly.
The exception no longer occurs.
This fix should be applied to all branches >= 4.0.
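A minimal sketch of the retain-on-slice pattern described above; the method and
sizes are illustrative, not the actual encoder code.

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;

    final class SliceWriteSketch {
        static void writeChunk(ChannelHandlerContext ctx, ByteBuf current, int chunkSize) {
            ByteBuf chunk = current.readSlice(chunkSize);
            // A slice shares the parent's reference count and the write below will
            // release one reference, so retain the parent while it is still needed
            // for later writes.
            current.retain();
            ctx.write(chunk);
        }
    }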
Motivation:
The latest stable RHEL version of 6.x is now 6.6.
Modification:
Update pom.xml's validation configuration
Result:
Can release on the latest stable RHEL version in 6.x
Motivation:
- There's no point in pre-population.
- It wastes memory and time because the entries are cached lazily anyway.
- Some pre-populated cipher suites are ancient and will never be used.
Modification:
- Remove cache pre-population
Result:
Sanity restored
Motivation
----------
The performance tests for utf8 also used getBytes with the ASCII charset,
which is incorrect and also produces different performance numbers.
Modifications
-------------
Use CharsetUtil.UTF_8 instead of US_ASCII for the getBytes calls.
Result
------
Accurate and semantically correct benchmarking results on utf8
comparisons.
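The corrected call, sketched: encode with UTF-8 rather than US-ASCII so the
measured work matches the UTF-8 comparison being benchmarked.

    import io.netty.util.CharsetUtil;

    final class Utf8BenchSketch {
        static byte[] encode(String s) {
            return s.getBytes(CharsetUtil.UTF_8);   // not CharsetUtil.US_ASCII
        }
    }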
When handling an oversized message, HttpObjectAggregator does not wait
until the last chunk is received to produce the failed message, making
AggregatedFullHttpMessage.trailingHeaders() return null.
Related: #3019
Motivation:
We have multiple (Full)HttpRequest/Response implementations and only
some of them implement toString() properly.
Modifications:
- Add the reusable string converter for HttpMessages to HttpMessageUtil
- Implement toString() of (Full)HttpRequest/Response implementations
properly using HttpMessageUtil
Result:
Prettier string representation is returned by HttpMessage
implementations.
Related: #3274
Motivation:
The channelReadComplete() event is not triggered after a successful read
in EpollDatagramChannel.
Modifications:
- Trigger exceptionCaught() event for read failure only once for less
noise
- Trigger channelReadComplete() event at the end of the read.
Result:
Fixes #3274
Motivation:
Calling JNI methods is pretty expensive, so we should only do so when needed.
Modifications:
Lazily call JNI methods only when needed.
Result:
Better performance.
Motivation:
SSL_set_cipher_list() in OpenSSL does not fail as long as at least one
cipher suite is available. It is different from the semantics of
SSLEngine.setEnabledCipherSuites(), which raises an exception when the
list contains an unavailable cipher suite.
Modifications:
- Add OpenSsl.isCipherSuiteAvailable(String) which checks the
availability of a cipher suite
- Raise an IllegalArgumentException when the specified cipher suite is
not available
Result:
Fixed compatibility
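A hedged sketch of using the new availability check before enabling suites,
assuming OpenSsl.isCipherSuiteAvailable(String) is publicly accessible as
described above.

    import io.netty.handler.ssl.OpenSsl;
    import java.util.ArrayList;
    import java.util.List;

    final class CipherFilterSketch {
        // Keep only the cipher suites the underlying OpenSSL build actually supports.
        static List<String> available(Iterable<String> requested) {
            List<String> result = new ArrayList<String>();
            for (String suite : requested) {
                if (OpenSsl.isCipherSuiteAvailable(suite)) {  // assumed public accessibility
                    result.add(suite);
                }
            }
            return result;
        }
    }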
Motivation:
To make OpenSslEngine a full drop-in replacement, we need to implement
getSupportedCipherSuites() and get/setEnabledCipherSuites().
Modifications:
- Retrieve the list of the available cipher suites when initializing
OpenSsl.
- Improve CipherSuiteConverter to understand SRP
- Add more test data to CipherSuiteConverterTest
- Add bulk-conversion method to CipherSuiteConverter
Result:
OpenSslEngine should now be a drop-in replacement for JDK SSLEngineImpl
for most cases.
Related: #3285
Motivation:
When a user attempts to switch from JdkSslContext to OpenSslContext, he
or she will see an initialization failure if custom cipher suites were
specified.
Modifications:
- Provide a utility class that converts between Java cipher suite string
and OpenSSL cipher suite string
- Attempt to convert the cipher suite so that a user can use the cipher
suite string format of Java regardless of the chosen SslContext impl
Result:
- It is possible to convert all known cipher suite strings.
- It is possible to switch between JdkSslContext and OpenSslContext and
vice versa without any configuration changes
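For example (illustrative), the Java cipher suite name TLS_RSA_WITH_AES_128_CBC_SHA
corresponds to the OpenSSL name AES128-SHA, and the converter maps between the two
formats in both directions.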
Motivation:
When a CompositeByteBuf is empty (i.e. has no component), its internal
memory access operations do not always behave as expected.
Modifications:
Check if the number of components is zero. If so, return an empty
array or an empty NIO buffer, etc.
Result:
More robustness
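A short sketch of the empty-composite case described above; with zero components
the accessors now return empty results instead of misbehaving.

    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;
    import java.nio.ByteBuffer;

    final class EmptyCompositeSketch {
        static void touch() {
            CompositeByteBuf empty = Unpooled.compositeBuffer();
            byte[] array = empty.array();                  // empty array, no exception
            ByteBuffer[] nioBuffers = empty.nioBuffers();  // safe even with no components
        }
    }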
- Ensure an EmptyByteBuf has an array, an NIO buffer, and a memory
address at the same time
- Add an assertion that checks if EMPTY_BUFFER is an EmptyByteBuf,
just in case we make a mistake in the future
Rebased and cleaned-up based on the work by @normanmaurer
Motivation:
Currently, IOExceptions and ClosedChannelExceptions are thrown from
inside the JNI methods. Instantiation of Java objects inside JNI code is
an expensive operation, to say nothing of filling in the stack trace for
every instantiated exception.
Modifications:
Change most JNI methods to return a negative value on failure so that
the exceptions are instantiated outside the native code.
Also, pre-instantiate some commonly-thrown exceptions for better
performance.
Result:
Performance gain
Motivation:
Several issues were shown by various tickets (#2900, #2956).
Also use the improvement on user-defined writability management from #3036.
Finally, add a mixed handler, both global and per-channel, with the
advantages of being created only once, using less memory and requiring
less shaping overhead.
Issue #2900
When a huge amount of data is written, the current behavior of the
TrafficShapingHandler is to limit the delay to 15s, whatever delay the
previous write had. This is wrong, and when a huge number of writes are
done in a short time, the traffic is not correctly shaped.
Moreover, there is a high risk of OOM if one does not use, for instance,
ChannelFuture.addListener() in his/her own handler to deal with write
buffering in the TrafficShapingHandler.
This fix uses the "user-defined writability flags" from #3036 to let the
TrafficShapingHandlers manage writability directly, as for reading, thus
relying on the default isWritable() and channelWritabilityChanged().
This allows, for instance, HttpChunkedInput to be fully compatible.
Also, the "bandwidth" computed on write was based only on "acquired" write
orders, not on "real" writes, which is wrong from a statistics point of view.
Issue #2956
When using the GlobalTrafficShapingHandler, every write (and read) is
synchronized, leading to a drop in performance.
The ChannelTrafficShapingHandler is not affected by this issue since the
synchronization is then correct (the handler is per channel).
Modifications:
The write delay computation now takes into account the previous write
delay and checks whether the 15s limit (maxTime) is really exceeded or not
(using the last scheduled write time). The algorithm is simplified and at
the same time more accurate.
This proposal uses the #3036 improvement on user-defined writability
flags.
When the real write occurs, the statistics are updated accordingly on a
new attribute (getRealWriteThroughput()).
To limit synchronization, all synchronized blocks on
GlobalTrafficShapingHandler.submitWrite were removed. They are replaced
with a per-channel lock (since synchronization is still needed to prevent
unordered writes per channel), as in the sendAllValid method for the very
same reason.
Also, all synchronized blocks on TrafficCounter's read/writeTimeToWait()
are removed, since they are unnecessary because the caller already holds
a lock.
The creation and removal of the per-channel lock (PerChannel object) are
still synchronized to prevent concurrency issues on this critical part,
but the scope is now limited.
Additional changes:
1) Use System.nanoTime() instead of System.currentTimeMillis() and
minimize calls
2) Remove the "/ 10 * 10" rounding since sleep is no longer used
3) Use nanoTime instead of currentTime so that elapsed time is computed,
not real clock time. Therefore the "now" relative time (nanoTime based)
is passed to all sub-methods.
4) On removal of the handler, force-write all pending writes and release
reads too
5) Review the Javadoc to make explicit (see the sketch after this entry):
- the recommendation to take isWritable into account
- the recommendation to provide a reasonable message size according to
the traffic shaping limit
- the "best effort" traffic shaping behavior when the configuration is
changed dynamically
Add a mixed GlobalChannelTrafficShapingHandler which allows using only one
handler to combine global and per-channel traffic shaping. It saves memory
and tries to optimize the traffic among the various channels.
Result:
Traffic shaping is more stable, even with a huge number of writes in a
short time, by taking the last scheduled write time into consideration.
The implementation of TrafficShapingHandler using user-defined
writability flags and the default isWritable() and
fireChannelWritabilityChanged() works as expected.
The statistics are more valuable (requested writes vs real writes).
The GlobalTrafficShapingHandler now has less "global" synchronization,
hopefully the minimum, but still per channel as needed.
The GlobalChannelTrafficShapingHandler allows having only one handler for
all channels while still offering per-channel shaping in addition to
global traffic shaping.
Backward compatibility is maintained.
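A minimal sketch of the Javadoc recommendation referenced above: when a traffic
shaping handler is in the pipeline, honour isWritable() and resume writing from
channelWritabilityChanged() instead of queueing unbounded writes. The handler
and its writeNextChunk() helper are illustrative.

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class WritabilityAwareHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelWritabilityChanged(ChannelHandlerContext ctx) {
            if (ctx.channel().isWritable()) {
                // The shaping handler released back-pressure; continue writing pending data.
                writeNextChunk(ctx);
            }
            ctx.fireChannelWritabilityChanged();
        }

        private void writeNextChunk(ChannelHandlerContext ctx) {
            // Application-specific (illustrative): write the next reasonably sized message, then flush.
            ctx.flush();
        }
    }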
Motivation:
Even if it is against the HTTP RFC, there are situations where it may be useful to use characters other than US-ASCII in the headers. We should make this possible by allowing the user to override how headers are encoded.
Modifications:
- Add an encodeHeaders(...) method so that it can be overridden.
Result:
It's now possible to encode headers with a charset other than US-ASCII by simply extending the encoder and overriding the encodeHeaders(...) method (a sketch follows).
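A hedged sketch of overriding the new hook; the exact signature is assumed to be
encodeHeaders(HttpHeaders, ByteBuf) and may differ slightly.

    import io.netty.buffer.ByteBuf;
    import io.netty.handler.codec.http.HttpHeaders;
    import io.netty.handler.codec.http.HttpRequestEncoder;
    import io.netty.util.CharsetUtil;
    import java.util.Map;

    public class Utf8HeadersEncoder extends HttpRequestEncoder {
        @Override
        protected void encodeHeaders(HttpHeaders headers, ByteBuf buf) {  // assumed signature
            for (Map.Entry<String, String> e : headers) {
                // Write header names and values as UTF-8 instead of US-ASCII.
                buf.writeBytes(e.getKey().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(": ".getBytes(CharsetUtil.UTF_8));
                buf.writeBytes(e.getValue().getBytes(CharsetUtil.UTF_8));
                buf.writeBytes("\r\n".getBytes(CharsetUtil.UTF_8));
            }
        }
    }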
Motivation:
OpenSSL supports the SSL_CTX_set_session_id_context function to limit the context within which a session can be used. We should support this.
Modifications:
Add OpenSslServerSessionContext, which exposes a setSessionIdContext(...) method (a usage sketch follows this entry).
Result:
It's now possible to use SSL_CTX_set_session_id_context.
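A hedged usage sketch; the cast assumes an OpenSSL-based server SslContext whose
session context is the new OpenSslServerSessionContext, and the
session-id-context bytes are illustrative.

    import io.netty.handler.ssl.OpenSslServerSessionContext;
    import io.netty.handler.ssl.SslContext;
    import io.netty.util.CharsetUtil;

    final class SessionIdContextSketch {
        static void configure(SslContext serverSslContext) {
            // Only OpenSSL-based server contexts expose the new session context type (assumption).
            if (serverSslContext.sessionContext() instanceof OpenSslServerSessionContext) {
                OpenSslServerSessionContext sessionContext =
                        (OpenSslServerSessionContext) serverSslContext.sessionContext();
                sessionContext.setSessionIdContext("my-app".getBytes(CharsetUtil.US_ASCII));
            }
        }
    }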