Related: #3107
Motivation:
ZlibEn/Decoder and JdkZlibEncoder in 3.9 do not have any unit tests.
Before applying any patches, we should backport the tests in 4.x so that
we can make sure we do not break anything.
Modification:
- Backport ZlibTest and its subtypes
- Remove the test for automatic GZIP header detection because the
ZlibDecoders in 3.9 do not have that feature
- Initialize JdkZlibEncoder.out and crc only when necessary for reduced
memory footprint
- Fix the bugs in the ZlibEncoders where they fail to compress correctly
when there is not enough room in the output buffer
Result:
We are more confident when we make changes in ZlibEncoder/Decoder
Bugs have been squashed
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Related: #3131
Motivation:
To prevent users from accidentally enabling SSLv3 and making their
services vulnerable to POODLE, disable SSLv3 when SSLEngine is
instantiated via SslContext.
Modification:
- Disable SSLv3 for JdkSslContext and OpenSslServerContext
Result:
Saner default set of protocols
Related:
- 37a6f5ed5dd56284ec37e14e0554e912f84426a1
Motivation:
Minimize the backport cost by synchronizing NetUtil between 3.9 and 4.x
Modifications:
- Backport the bug fixes in NetUtil
- Backport the new IP address methods in NetUtil
Result:
- New useful methods in NetUtil
- Easier to backport the future bug fixes
Motivation:
Since JDK 1.8, javadoc has enabled a new feature called 'doclint', which
fails the build when javadoc has markup problems and more.
Modifications:
Do not fail the build until we fix our API documentation.
Result:
No more build failure because of malformed Javadoc
Motivation:
Sonar uses the project name in the pom.xml as the project name. (no pun
intended) 4.x and master use 'Netty' as the project name, so we should
be consistent.
Modifications:
Rename the project from 'The Netty project' to 'Netty'
Result:
Prettier SonarQube result
Related: #3076
Motivation:
When a user writes a chunked HTTP response with a Content-Length header,
the HttpContentEncoder should remove the Content-Length header because
the length of the encoded content is supposed to be different from the
original one.
Actually, HttpContentEncoder currently never touches the Content-Length
header when the response is chunked.
Modifications:
- Remove the Content-Length header when an HTTP response being encoded
is chunked
- Add a test case
Result:
HttpContentEncoder sanitizes the Content-Length header properly.
Motivation:
When the surefire plugin launches a new JVM, it does not specify any
limit on its maximum heap size. On a machine with less RAM, this is a
problem because it often tries to consume too much RAM.
Modifications:
- Specify -Xmx256m option when running a test
- Fix a build failure due to the outdated APIviz
Result:
Higher build stability
Motivation
Issue #3004 shows that the "=" character was not supported as it should
be by the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Add 2 methods in StringUtil:
- split with a max-parts argument: splits a String into at most the
given number of parts (to prevent extra '=' characters from causing
unwanted splits)
- substringAfter: returns the part of a String after a delimiter (since
the first part is not needed)
Use those methods in HttpPostRequestDecoder. Change
HttpPostRequestDecoderTest to check using a boundary beginning with "=".
Results:
The fix improves stability and resolves the issue.
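The idea behind the limited split can be sketched with the JDK's own String.split (illustrative only; the actual commit adds equivalent helpers to Netty's StringUtil):

```java
public final class SplitDemo {
    // Limit the number of parts so '=' characters inside the value do
    // not cause extra splits. This mirrors the idea of the commit using
    // only plain JDK APIs; the real fix lives in Netty's StringUtil.
    static String valueAfterFirstEquals(String param) {
        return param.split("=", 2)[1];
    }

    public static void main(String[] args) {
        // A boundary parameter whose value itself begins with '='.
        System.out.println(valueAfterFirstEquals("boundary==_Part_5")); // =_Part_5
    }
}
```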
Motivation:
We must only cancel the SelectionKey if the connection is not pending
while trying to work around the epoll bug; otherwise we may fail to
notify the future later.
Modifications:
Check if the connection is pending before cancelling the SelectionKey.
Result:
Only the correct SelectionKeys are cancelled, which also makes sure the
futures are notified.
Motivation:
Currently the last read/write throughput is calculated with the division
performed first, which yields 0 whenever the last read/write byte count
is smaller than the interval. Changing the order of operations gives the
correct result.
Modifications:
Perform the multiplication before the division.
Result:
Get the correct result instead of 0 when the byte count is smaller than
the interval.
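A minimal sketch of the integer-arithmetic issue (hypothetical method names, not the actual TrafficCounter code):

```java
public final class ThroughputMath {
    // Buggy order: integer division first truncates to 0 whenever
    // bytes < intervalMillis.
    static long throughputDivideFirst(long bytes, long intervalMillis) {
        return bytes / intervalMillis * 1000;
    }

    // Fixed order: multiply first so small byte counts still yield a
    // non-zero bytes-per-second rate.
    static long throughputMultiplyFirst(long bytes, long intervalMillis) {
        return bytes * 1000 / intervalMillis;
    }

    public static void main(String[] args) {
        long bytes = 512;      // fewer bytes than the interval in millis
        long interval = 1000;  // a 1-second check interval
        System.out.println(throughputDivideFirst(bytes, interval));   // 0
        System.out.println(throughputMultiplyFirst(bytes, interval)); // 512
    }
}
```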
Motivation:
channelConnected was overridden but super was never called, while it
should be. This prevents the next handlers from being informed of the
connection. Also one piece of information was missing from the
toString() method.
Modifications:
Add the super corresponding call, and add checkInterval to the
toString() method
Result:
The channelConnected event is now correctly passed to the next handler.
Motivation:
Because Thread.currentThread().interrupt() will unblock Selector.select(), we need to take special care when checking whether we need to rebuild the Selector. If the unblock was caused by interrupt(), we clear the interrupt status and move on, as this is most likely a bug in a custom ChannelHandler or a library the user makes use of.
Modification:
Clear the interrupt status of the Thread if the Selector was unblocked because of an interrupt and the number of selected keys was 0.
Result:
No more busy loop caused by Thread.currentThread().interrupt()
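The key JDK behavior relied on here is that Thread.interrupted() both reports and clears the current thread's interrupt status, which is what stops every subsequent select() from returning immediately. A small demonstration:

```java
public final class InterruptDemo {
    // Thread.interrupted() returns the interrupt status AND clears it,
    // so a stray interrupt() observed after a zero-key select() can be
    // swallowed instead of unblocking every following select() call.
    static boolean[] interruptThenCheckTwice() {
        Thread.currentThread().interrupt();
        boolean first = Thread.interrupted();  // true: flag was set
        boolean second = Thread.interrupted(); // false: flag was cleared
        return new boolean[] { first, second };
    }

    public static void main(String[] args) {
        boolean[] r = interruptThenCheckTwice();
        System.out.println(r[0] + " " + r[1]); // prints "true false"
    }
}
```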
Related issue: #2767
Motivation:
CIDR.contains(InetAddress) implementations should always return true
when the CIDR's prefix length is 0.
Modifications:
- Make CIDR.contains(InetAddress) return true if the current cidrMask is
0
- Add tests
Result:
Fixed the issue #2767
Related issue: #2821
Motivation:
There's no way for a user to change the default ZlibEncoder
implementation.
Modifications:
Add a new system property 'io.netty.noJdkZlibEncoder'.
Use JZlib-based encoder if windowBits or memoryLevel is different from
the JDK default.
Result:
A user can tell HttpContentCompressor not to use JDK ZlibEncoder even if
the current Java version is 7+.
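A usage sketch of the new opt-out (the property name comes from the commit; the surrounding class is illustrative only). The property can equally be passed on the command line as -Dio.netty.noJdkZlibEncoder=true:

```java
public final class NoJdkZlibEncoderDemo {
    // Setting the property before any pipeline is built tells
    // HttpContentCompressor to avoid the JDK ZlibEncoder even on
    // Java 7+ and fall back to the JZlib-based encoder.
    static String enableNoJdkZlibEncoder() {
        System.setProperty("io.netty.noJdkZlibEncoder", "true");
        return System.getProperty("io.netty.noJdkZlibEncoder");
    }

    public static void main(String[] args) {
        System.out.println(enableNoJdkZlibEncoder()); // prints "true"
    }
}
```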
Motivation:
Possibly due to very small intervals (< 100ms), the traffic shaping wait time might be slightly too high in rare conditions.
Modification:
Remove extra time (only stepms is kept, not minimalms).
Result:
The computed wait time is now correct.
Motivation:
The test procedure is unstable because the timestamping during the check
is not precise enough.
Modifications:
Reducing the test cases and targeting "stable" ("timestamp-able") tests
bring more stability to the tests.
Result:
Tests for TrafficShapingHandler seem more stable (whether using JVM 6,
7 or 8).
Renaming to:
src/test/java/org/jboss/netty/handler/traffic/TrafficShapingHandlerTest.java
Fix for issue #2765 related to the unstable traffic shaping test procedure
Motivation:
The test procedure is unstable because the timestamping during the check is not precise enough.
Modifications:
Reducing the test cases and targeting "stable" ("timestamp-able") tests bring more stability to the tests.
Result:
Tests for TrafficShapingHandler seem more stable (whether using JVM 6, 7 or 8).
Same version as in 4.0, 4.1 and Master.
Motivation:
The test procedure is unstable when testing short durations (factor less than or equal to 1). Defaulting to 10ms in this case forces the time to be correct, and the time is only checked when the factor is >= 2.
Modifications:
When the factor is <= 1, minimalWaitBetween is 10ms
Result:
Hopefully this version is finally stable.
Motivation:
Currently Traffic Shaping uses only one timer, which can lead to a
"partially" wrong bandwidth computation when a "short" time elapses
between adding used bytes, the TrafficCounter updating itself, and the
traffic finally being computed.
Indeed, the TrafficCounter is updated every x delay, and at the same
time it is saved into "lastXxxxBytes" and set to 0. Therefore, when one
requests the counter, it first updates the TrafficCounter with the added
used bytes. If this value is set just before the TrafficCounter is
updated, the bandwidth computation will use the TrafficCounter with a
"0" value (this value being reset once the delay occurs). Therefore, the
traffic shaping computation is wrong in rare cases.
Secondly, traffic shaping should avoid the "Timeout" effect if possible
by not stopping reading or writing for more than a maxTime, this maxTime
being less than the TimeOut limit.
Use the same algorithm as in V4 and V5.
Modifications:
The TrafficCounter has 2 new methods that compute the time to wait
(according to read or write), using in priority the currentXxxxBytes (as
before), but which can use the lastXxxxBytes if the current value is 0,
and therefore have a better chance of taking the real traffic into
account.
Moreover the handler can change the default "max time to wait", which is
by default set to half of the "standard" Time Out (30s / 2 = 15s).
Include a test as in V4 but limited in the example to Nio.
Result:
Traffic Shaping is now taken into account more accurately (no "0" value
when there should not be one) and it tries not to block traffic for
longer than the Time Out event.
This version is for V3.9 but can simply be ported to V4.X and Master.
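A rough sketch of capping the computed wait at a maximum (all names and the formula are illustrative assumptions, not the actual TrafficCounter code):

```java
public final class WaitTimeSketch {
    // Illustrative only: wait = time needed to send 'bytes' at
    // 'limitBytesPerSecond', minus the time already elapsed, capped at
    // maxTimeMillis so shaping never blocks longer than the timeout.
    static long computeWait(long bytes, long limitBytesPerSecond,
                            long elapsedMillis, long maxTimeMillis) {
        long needed = bytes * 1000 / limitBytesPerSecond - elapsedMillis;
        if (needed <= 0) {
            return 0;
        }
        return Math.min(needed, maxTimeMillis);
    }

    public static void main(String[] args) {
        // 1 MB at 100 KB/s needs 10s; under the 15s cap -> 10000 ms
        System.out.println(computeWait(1_000_000, 100_000, 0, 15_000));
        // 10 MB at 100 KB/s would need 100s; capped -> 15000 ms
        System.out.println(computeWait(10_000_000, 100_000, 0, 15_000));
    }
}
```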
Related issue: #2179
Motivation:
Previous fix e71cbb9308bf8788e1e0fb8db99766d89156386d was not enough.
Modifications:
- Add more test cases for WebSocket handshake
- Fix a bug in HttpMessageDecoder where it does not always enter
UPGRADED state
- Fix incorrect decoder replacement logic in WebSocketClientHandshaker
implementations
- Add WebSocketClientHandshaker.replaceDecoder() as a helper
Result:
We never lose the first WebSocket frame for all WebSocket protocol
versions.
Related issue: #2735
Motivation:
When an application is under load and experiencing a lot of failures,
the instantiation of DefaultExceptionEvent spends a noticeable amount of
time because of StackTraceSimplifier.
Also, StackTraceSimplifier makes people sometimes confused because it
hides the execution path partially.
Modifications:
Remove the use of StackTraceSimplifier
Result:
JVM spends much less time at Throwable.getStackTrace()
Related issue: #2093
Motivation:
When a channel with SslHandler is closed with pending writes (either
unencrypted or encrypted), SslHandler.channelClosed() attempts to
acquire a lock and fail the futures of the pending writes.
If a user added a listener to any of the failed futures and the listener
attempts to write something, it will make SslHandler.handleDownstream()
invoked, which also attempts to acquire the same lock.
Because the lock is non-reentrant, even if these two lock acquisitions
take place in the same thread, the second attempt will block forever.
Modification:
Do not fail the futures while holding a lock. Notify them after
releasing the lock.
Result:
One less deadlock
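The fix pattern can be sketched generically (illustrative code using plain synchronized, not the actual SslHandler; with SslHandler's non-reentrant lock, a listener re-entering the handler while the lock is held would block forever):

```java
import java.util.ArrayList;
import java.util.List;

public final class NotifyOutsideLockDemo {
    private final Object lock = new Object();
    private final List<Runnable> pending = new ArrayList<>();

    void addPending(Runnable listener) {
        synchronized (lock) {
            pending.add(listener);
        }
    }

    int failPending() {
        List<Runnable> toNotify;
        synchronized (lock) {
            // Drain the queue while holding the lock...
            toNotify = new ArrayList<>(pending);
            pending.clear();
        }
        // ...but invoke the listeners after releasing it, so a listener
        // that triggers another write (and so re-acquires the lock)
        // cannot deadlock on a non-reentrant lock.
        for (Runnable r : toNotify) {
            r.run();
        }
        return toNotify.size();
    }

    public static void main(String[] args) {
        NotifyOutsideLockDemo demo = new NotifyOutsideLockDemo();
        // A listener that re-enters the handler, as in the report.
        demo.addPending(() -> demo.addPending(() -> { }));
        System.out.println(demo.failPending()); // prints 1, no deadlock
    }
}
```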
Related issue: #2742
Motivation:
When there is more than one stream with the same priority, the set
returned by SpdySession.getActiveStreams() will not include all of them,
because it uses a TreeSet and only compares the priority of streams. If
two different streams have the same priority, one of them will be
discarded by the TreeSet.
Modification:
- Rename getActiveStreams() to activeStreams()
- Replace PriorityComparator with StreamComparator
Result:
Two different streams with the same priority are compared correctly.
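The TreeSet behavior at the root of the bug can be demonstrated with a priority-only comparator (streams modeled here as {streamId, priority} pairs purely for illustration):

```java
import java.util.Comparator;
import java.util.TreeSet;

public final class TreeSetDedupeDemo {
    // A comparator that looks only at priority makes two distinct
    // stream IDs with equal priority compare as "equal", so TreeSet
    // silently keeps only one of them.
    static int distinctStreamsKept() {
        TreeSet<int[]> streams =
                new TreeSet<>(Comparator.comparingInt((int[] s) -> s[1]));
        streams.add(new int[] { 1, 0 }); // stream 1, priority 0
        streams.add(new int[] { 3, 0 }); // stream 3, same priority:
                                         // treated as duplicate, dropped
        return streams.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctStreamsKept()); // prints 1, not 2
    }
}
```

This is why the fix replaces the priority-only comparator with one that also distinguishes streams, so the set's ordering stays consistent with element identity.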
Motivation:
When Content-Encoding is deflate or gzip and Content-Length is 0,
there's no need to generate an empty stream (e.g. 20-byte gzip stream).
We can just produce nothing. At least, it works fine with most modern
browsers. Also, we can skip the instantiation of an encoder entirely by
instantiating it lazily.
Modifications:
Backport HttpContentEncoderTest from 4.0
Add similar tests for HttpContentCompressor to ensure the same tests
work with compression.
Result:
Fixes issue #2321
Motivation:
We sometimes get the following exception:
javax.net.ssl.SSLException: SSLEngine is closing/closed
at sun.security.ssl.SSLEngineImpl.kickstartHandshake(SSLEngineImpl.java:692)
at sun.security.ssl.SSLEngineImpl.beginHandshake(SSLEngineImpl.java:734)
at org.jboss.netty.handler.ssl.SslHandler.handshake(SslHandler.java:433)
at org.jboss.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1233)
at org.jboss.netty.handler.ssl.SslHandler.channelDisconnected(SslHandler.java:668)
.. which is triggered when we attempt to issue a handshake even if the
SSLEngine is closed (or being closed).
We don't really need to initiate handshake if:
- SSLEngine is closed already.
- Connection is closed already
Modifications:
Add a boolean parameter to unwrap() and unwrapNonApp() to suppress the
invocation of handshake() when SSLEngine or connection is closed
already.
Motivation:
We forgot to do a null check on the cause parameter of
ChannelFuture.setFailure(cause)
Modifications:
Add a null check
Result:
Fixed issue: #2728
Motivation:
If the request contains URI parameters but no path, the HttpRequestEncoder produces an invalid URI while trying to add the missing path.
Modifications:
Correctly handle the case of a URI with parameters but no path.
Result:
HttpRequestEncoder produces a correct URI in all cases.
Backported: d9cccccbb3344997e016e6a3603126ce65705c4d
Motivation:
Currently it is not possible to load an encrypted private key when
creating a JDK based SSL server context.
Modifications:
- Added a static method to JdkSslServerContext which handles key spec
generation for (encrypted) private keys and made use of it.
- Added tests for creating an SSL server context based on an (encrypted)
private key.
Result:
It is now possible to create a JDK based SSL server context with an
encrypted (password protected) private key.
Motivation:
Because we cached the ConnectTimeoutException, we sometimes ended up using the wrong remote address in the exception message.
Modifications:
Always create a new ConnectTimeoutException to make sure we use the correct remote address. This has a bit more overhead because the stack trace is filled in every time, but as this only happens on connection timeouts it should be acceptable.
Result:
Always include the correct remote address in ConnectTimeoutException.
Motivation:
We forgot to set the flag 'changed' to 'true' after updating
interestOps.
Modifications:
Add a missing 'changed = true;'
Result:
ChannelStateEvent(INTEREST_OPS) is always triggered correctly.
Related issue: #2179 and #2173
Motivation:
When a WebSocket server responds to a WebSocket handshake request with
its first WebSocket frame, it is sometimes swallowed by
HttpMessageDecoder. This issue has been fixed by
4f6ccbbb78c88c82aa02c77fe7592709cfbf7922 in 4 and 5, but was not
backported to 3.9.
Modification:
Backport the UPGRADED state and the fix for #2173 to HttpMessageDecoder
Result:
The data sent by the server following an HTTP upgrade response is not
lost anymore.
Motivation:
At the moment the HashedWheelTimer only removes cancelled Timeouts once the HashedWheelBucket is processed again. Until then the instance cannot be GC'ed, as there are still strong references to it even if the user no longer references it. This can waste a lot of memory even though the Timeout was cancelled before.
Modification:
Add a new queue which holds CancelTasks that will be processed on each tick to remove cancelled Timeouts. Because all of this is done only by the WorkerThread, there is no need for synchronization, and only one extra object creation is needed when cancel() is executed. For addTimeout(...) no new overhead is introduced.
Result:
Less memory usage for cancelled Timeouts.
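The cancellation pattern above can be sketched as follows (illustrative names, not the actual HashedWheelTimer code):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public final class CancelQueueSketch {
    // Illustrative sketch: cancel() only enqueues a small task object;
    // the single worker thread drains the queue on every tick.
    private final Queue<Runnable> cancelledTasks =
            new ConcurrentLinkedQueue<>();

    // Callable from any thread: one extra object, no locks.
    void cancel(Runnable removeFromBucket) {
        cancelledTasks.add(removeFromBucket);
    }

    // Run by the single worker thread each tick: dropping the strong
    // references to cancelled timeouts right away lets them be GC'ed.
    int processCancelledTasks() {
        int processed = 0;
        Runnable task;
        while ((task = cancelledTasks.poll()) != null) {
            task.run();
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        CancelQueueSketch timer = new CancelQueueSketch();
        timer.cancel(() -> System.out.println("removed from bucket"));
        System.out.println(timer.processCancelledTasks()); // prints 1
    }
}
```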
Motivation:
When an SSLv2Hello message is received, SSLEngine expects the application buffer size to be more than 30KB, which is larger than what SslBufferPool can provide. SSLEngine will always return a BUFFER_OVERFLOW status, blocking the SSL session from continuing the handshake.
Modifications:
When SSLEngine.getSession().getApplicationBufferSize() returns a value larger than what SslBufferPool provides, allocate a temporary heap buffer.
Result:
SSLv2Hello is handled correctly.
Motivation:
When an attribute ends with an odd number of CR (0x0D) bytes, the
decoder adds an extra CR to the decoded attribute when it should not.
Modifications:
Each time a CR is detected, the next byte is tested for being LF or not.
If it is not, in a number of places the CR byte was lost when it should
not have been. Now, when a CR is detected and the next byte is not LF,
the CR byte is saved, since the position points to the next byte (not
LF). When a CR is detected and no further bytes are available yet, the
position is reset to the position of the CR (since an LF could follow).
A new JUnit test case is added, using the DECODER and a variable number
of CRs in the final attribute (testMultipartCodecWithCRasEndOfAttribute).
Result:
The attribute is now correctly decoded with the right number of trailing
CR bytes.
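A simplified sketch of the corrected scan (illustrative only, not the actual decoder code): a CR is kept as data unless it is immediately followed by LF:

```java
public final class CrLfScanDemo {
    // Illustrative scan: a CR followed by LF terminates the attribute
    // value; a lone CR is part of the value and must not be dropped.
    static String decodeValue(String raw) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < raw.length(); i++) {
            char c = raw.charAt(i);
            if (c == '\r') {
                if (i + 1 < raw.length() && raw.charAt(i + 1) == '\n') {
                    break; // CRLF: end of the attribute value
                }
                out.append('\r'); // keep the lone CR — this is the fix
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Three CRs before the terminating CRLF must all survive.
        String decoded = decodeValue("abc\r\r\r\r\n");
        System.out.println(decoded.length()); // 6 = "abc" + 3 CRs
    }
}
```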