Motivations:
When using HttpPostRequestEncoder and trying to set an attribute, if a
charset is defined, the implicit Charset.toString() is currently used,
giving the wrong format.
For instance, on Android, UTF-16 yields "com.ibm.icu4jni.charset.CharsetICU[UTF-16]".
Modifications:
Each time a charset needs to be printed as its name, charset.name() is
used to get the canonical name.
Result:
Now get "UTF-16" instead.
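A minimal sketch of the difference (class name is illustrative): name() returns the canonical charset name on every platform, while toString() is free to include the implementation class, as ICU does on Android.

```java
import java.nio.charset.Charset;

public class CharsetNameExample {
    // name() is the canonical name; toString() may embed the implementation
    // class (e.g. "com.ibm.icu4jni.charset.CharsetICU[UTF-16]" on Android).
    static String canonicalName(String charset) {
        return Charset.forName(charset).name();
    }

    public static void main(String[] args) {
        System.out.println(canonicalName("UTF-16")); // prints "UTF-16"
    }
}
```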
(3.10 version)
Motivation:
RFC6265 specifies which characters are allowed in a cookie name and value.
Netty is currently too lax, which can be exploited for HttpOnly escaping.
Modification:
Backport new RFC6265 compliant Cookie parsers in cookie subpackage.
Deprecate old Cookie encoders and decoders that will be dropped in 5.0.
Result:
The problem described in the motivation section is fixed.
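For illustration, a sketch of the RFC 6265 cookie-octet rule (this is not Netty's actual validator): a cookie value may only contain printable US-ASCII, excluding DQUOTE, comma, semicolon and backslash.

```java
public class CookieOctets {
    // RFC 6265 cookie-octet: %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E,
    // i.e. printable ASCII minus '"', ',', ';' and '\'.
    static boolean isValidCookieValue(String value) {
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (c < 0x21 || c > 0x7E || c == '"' || c == ',' || c == ';' || c == '\\') {
                return false;
            }
        }
        return true;
    }
}
```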
Motivation:
This AE was seen in the wild at a non-negligible rate among AeroFS
clients (JDK 8, TLS 1.2, mutual auth with RSA certs).
Upon examination of SslHandler's code a few things became apparent:
- the AE is unnecessary given the contract of decode()
- the AE was introduced between 3.8 and 3.9
- the AE is no longer present in 4.x and master
- branches that do not have the AE skip all the bytes being fed to
unwrap()
It is not entirely clear what sequence of SSL records can trip the
assert but it seems to happen before the handshake is completed. The
little detailed data we've been able to gather shows the assert being
triggered when
- SSLEngine.unwrap returns NEED_WRAP
- the remaining buffer is a TLS heartbeat record
Likewise, it is not entirely clear if skipping the remaining bytes is
the right thing to do or if they should be fed back to unwrap.
Modifications:
Mirror behavior in newer versions by removing the assert and skipping
bytes fed to unwrap()
Add logging in an effort to get a better understanding of this corner
case.
Result:
Avoid crashes
Motivation:
While reading the source code of Netty 3.9.5.Final, I suspected a lurking concurrency bug in the AbstractNioBossPool#init method: a single volatile variable cannot guarantee that the initialization sequence happens exactly once under concurrent access. Since there is already a more elegant, concurrency-safe way to do this kind of work, I decided to make this PR. (Please refer to the discussion in https://github.com/netty/netty/issues/3249.)
Modifications:
Change the type of the variable that controls the initialization from "volatile boolean" to "final AtomicBoolean".
Result:
The potential concurrency hazard during initialization in AbstractNioBoss(Worker)Pool is eliminated.
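A minimal sketch of the fix (class and field names are illustrative): compareAndSet atomically flips false to true for exactly one caller, whereas a plain volatile check-then-set lets two threads both observe "false" and run the initialization twice.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InitOnce {
    private final AtomicBoolean initialized = new AtomicBoolean();
    int initCount; // counts how many times the init body actually ran

    void init() {
        // Exactly one thread wins the compareAndSet; everyone else returns.
        if (!initialized.compareAndSet(false, true)) {
            return;
        }
        initCount++; // one-time initialization work goes here
    }
}
```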
Motivations:
The chunkSize might overflow after the comparison (the size exceeding int capacity) when the file size is bigger than Integer.MAX_VALUE.
Modifications:
Changing the type to long fixes the issue.
Result:
The int value no longer overflows.
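An illustrative demonstration of the overflow (values are ours, not from the actual code): once the file size exceeds Integer.MAX_VALUE, int arithmetic truncates it, while keeping the computation in long stays correct.

```java
public class ChunkSizeOverflow {
    public static void main(String[] args) {
        long fileSize = 4L * 1024 * 1024 * 1024; // 4 GiB, > Integer.MAX_VALUE
        int truncated = (int) fileSize;          // 0: the high 32 bits are lost
        long chunkSize = Math.min(fileSize, 8192L); // long math keeps the comparison safe
        System.out.println(truncated + " " + chunkSize);
    }
}
```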
Related:
- d6c3b3063f
- Original author: @grahamedgecombe
Motivation:
JdkSslContext used SSL_RSA_WITH_DES_CBC_SHA in its cipher suite list.
OpenSslServerContext used DES-CBC3-SHA in the same place in its cipher suite
list, which is equivalent to SSL_RSA_WITH_3DES_EDE_CBC_SHA.
This means the lists were out of sync. Furthermore, using
SSL_RSA_WITH_DES_CBC_SHA is not desirable as it uses DES, a weak cipher. Triple
DES should be used instead.
Modifications:
Replace SSL_RSA_WITH_DES_CBC_SHA with SSL_RSA_WITH_3DES_EDE_CBC_SHA in
JdkSslContext.
Result:
The JdkSslContext and OpenSslServerContext cipher suite lists are now in sync.
Triple DES, which is stronger, is used instead of DES.
Motivation:
RC4 is no longer a recommended cipher suite, as recent research
reveals, for example:
- http://www.isg.rhul.ac.uk/tls/
Modifications:
- Remove most RC4 cipher suites from the default cipher suites
- For backward compatibility, leave RC4-SHA, while de-prioritizing it
Result:
Potentially safer default
Related: #3107, originally written by @Scottmitch
Motivation:
The HttpContentEncoder does not account for an EmptyLastHttpContent
being provided as input. This is useful in situations where the client
is unable to determine if the current content chunk is the last content
chunk (i.e. a proxy forwarding content when the transfer encoding is
chunked).
Modifications:
- HttpContentEncoder should not attempt to compress empty HttpContent
objects.
Result:
HttpContentEncoder supports an EmptyLastHttpContent to terminate the
response.
Related: #3107
Motivation:
ZlibEn/Decoder and JdkZlibEncoder in 3.9 do not have any unit tests.
Before applying any patches, we should backport the tests in 4.x so that
we can make sure we do not break anything.
Modification:
- Backport ZlibTest and its subtypes
- Remove the test for automatic GZIP header detection because the
ZlibDecoders in 3.9 do not have that feature
- Initialize JdkZlibEncoder.out and crc only when necessary for reduced
memory footprint
- Fix the bugs in the ZlibEncoders where they fail to compress correctly
  when there is not enough room in the output buffer
Result:
We are more confident when we make changes in ZlibEncoder/Decoder.
Bugs have been squashed.
Motivation:
The SPDY/3.1 spec does not adequately describe how to push resources
from the server. This was solidified in the HTTP/2 drafts by dividing
the push into two frames, a PushPromise containing the request,
followed by a Headers frame containing the response.
Modifications:
This commit modifies the SpdyHttpDecoder to support pushed resources
that are divided into multiple frames. The decoder will accept a
pushed SpdySynStreamFrame containing the request headers, followed by
a SpdyHeadersFrame containing the response headers.
Result:
The SpdyHttpDecoder will create an HttpRequest object followed by an
HttpResponse object when receiving pushed resources.
Related: #3131
Motivation:
To prevent users from accidentally enabling SSLv3 and making their
services vulnerable to POODLE, disable SSLv3 when SSLEngine is
instantiated via SslContext.
Modification:
- Disable SSLv3 for JdkSslContext and OpenSslServerContext
Result:
Saner default set of protocols
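A sketch of the idea, not Netty's exact code: filter SSLv3 out of the engine's enabled protocols so a POODLE-vulnerable connection can never be negotiated.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.util.ArrayList;
import java.util.List;

public class DisableSslv3 {
    // Re-enable every protocol except SSLv3 on the given engine.
    static SSLEngine withoutSslv3(SSLEngine engine) {
        List<String> protocols = new ArrayList<String>();
        for (String p : engine.getEnabledProtocols()) {
            if (!"SSLv3".equals(p)) {
                protocols.add(p);
            }
        }
        engine.setEnabledProtocols(protocols.toArray(new String[0]));
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = withoutSslv3(SSLContext.getDefault().createSSLEngine());
        for (String p : engine.getEnabledProtocols()) {
            System.out.println(p); // no SSLv3 in this list
        }
    }
}
```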
Related:
- 37a6f5ed5d
Motivation:
Minimize the backport cost by synchronizing NetUtil between 3.9 and 4.x
Modifications:
- Backport the bug fixes in NetUtil
- Backport the new IP address methods in NetUtil
Result:
- New useful methods in NetUtil
- Easier to backport the future bug fixes
Motivation:
Since JDK 1.8, javadoc has enabled a new feature called 'doclint', which
fails the build when javadoc has markup problems and more.
Modifications:
Do not fail the build until we fix our API documentation.
Result:
No more build failure because of malformed Javadoc
Motivation:
Sonar uses the project name in the pom.xml as the project name. (no pun
intended) 4.x and master uses 'Netty' as the project name, so we should
be consistent.
Modifications:
Rename the project from 'The Netty project' to 'Netty'
Result:
Prettier SonarQube result
Related: #3076
Motivation:
When a user writes a chunked HTTP response with a Content-Length header,
the HttpContentEncoder should remove the Content-Length header because
the length of the encoded content is supposed to be different from the
original one.
Actually, HttpContentEncoder currently never touches the Content-Length
header when the response is chunked.
Modifications:
- Remove the Content-Length header when an HTTP response being encoded
is chunked
- Add a test case
Result:
HttpContentEncoder sanitizes the Content-Length header properly.
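An illustrative sketch of the rule using a plain Map instead of Netty's header types: a chunked response must not carry Content-Length, because the encoded body length will differ from the original.

```java
import java.util.Map;

public class SanitizeHeaders {
    // Drop a stale Content-Length whenever the response is chunked.
    static void sanitize(Map<String, String> headers, boolean chunked) {
        if (chunked) {
            headers.remove("Content-Length");
        }
    }
}
```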
Motivation:
When the surefire plugin launches a new JVM, it does not specify any
limit on its maximum heap size. In a machine with less RAM, this is a
problem because it often tries to consume too much RAM.
Modifications:
- Specify -Xmx256m option when running a test
- Fix a build failure due to the outdated APIviz
Result:
Higher build stability
Motivation:
Issue #3004 shows that the "=" character was not supported as it should
be by the HttpPostRequestDecoder in the form-data boundary.
Modifications:
Add 2 methods in StringUtil:
- split with a maxParm argument: splits a String into at most maxParm
  parts (to prevent multiple '=' characters from causing extra,
  unneeded splits)
- substringAfter: returns the String part after the delimiter (since the
  first part is not needed)
Use those methods in HttpPostRequestDecoder, and change the
HttpPostRequestDecoderTest to check using a boundary beginning with "=".
Result:
The fix brings more stability and resolves the issue.
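An illustrative analogue using the JDK's bounded split (the boundary value is made up): limiting the number of parts stops extra '=' characters inside the boundary from producing spurious splits, and "substringAfter" is everything past the first delimiter.

```java
public class BoundarySplit {
    public static void main(String[] args) {
        String attribute = "boundary==_NextPart_1"; // boundary starts with '='
        // At most 2 parts: the '=' inside the value no longer splits it.
        String[] parts = attribute.split("=", 2);
        // "substringAfter": the part after the first delimiter.
        String value = attribute.substring(attribute.indexOf('=') + 1);
        System.out.println(parts[1]); // "=_NextPart_1"
        System.out.println(value);    // "=_NextPart_1"
    }
}
```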
Motivation:
We must only cancel the SelectionKey if the connection is not pending while trying to work around the epoll bug; otherwise we may fail to notify the future later.
Modifications:
Check if the connection is pending before cancelling the SelectionKey.
Result:
Only cancel the correct SelectionKeys, which also makes sure the futures are notified.
Motivation:
Currently the last read/write throughput is calculated by dividing first; this yields 0 whenever the last read/write byte count is smaller than the interval. Changing the operation order gives the correct result.
Modifications:
Change the operation order so the multiplication is done before the division.
Result:
Get the correct result instead of 0 when the byte count is smaller than the interval.
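The truncation is easy to demonstrate with made-up numbers: in integer arithmetic, dividing first discards everything below the interval.

```java
public class ThroughputOrder {
    public static void main(String[] args) {
        long bytes = 512;      // bytes transferred in the last interval
        long intervalMs = 1000;
        // Division first truncates to 0 whenever bytes < intervalMs ...
        long wrong = bytes / intervalMs * 1000;
        // ... multiplying first keeps the precision.
        long right = bytes * 1000 / intervalMs;
        System.out.println(wrong + " vs " + right); // 0 vs 512
    }
}
```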
Motivation:
channelConnected was overridden but super was never called, though it
should be. This prevents the next handlers from being informed of the
connection.
Also, one piece of information was missing from the toString() method.
Modifications:
Add the corresponding super call, and add checkInterval to the
toString() method.
Result:
Now the channelConnected event is correctly passed to the next
handler.
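A minimal stdlib sketch of the propagation pattern (the classes are illustrative, not Netty's): an overriding event method must call super, otherwise the event stops there and never reaches the next handler.

```java
import java.util.List;

public class PropagateEvent {
    static class Base {
        // Stands in for forwarding the event to the next handler.
        void channelConnected(List<String> log) { log.add("next-handler"); }
    }

    static class Handler extends Base {
        @Override
        void channelConnected(List<String> log) {
            log.add("this-handler");     // handler-specific work
            super.channelConnected(log); // without this call the event stops here
        }
    }
}
```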
Motivation:
Because Thread.currentThread().interrupt() will unblock Selector.select(), we need to take special care when checking whether we need to rebuild the Selector. If the unblocking was caused by interrupt(), we clear the interrupt state and move on, as this is most likely a bug in a custom ChannelHandler or in a library the user makes use of.
Modification:
Clear the interrupt state of the Thread if the Selector was unblocked because of an interrupt and the number of selected keys was 0.
Result:
No more busy loop caused by Thread.currentThread().interrupt()
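The clearing relies on a standard JDK behavior: Thread.interrupted() both tests and clears the flag, which stops Selector.select() from returning immediately on the next call. A small sketch:

```java
public class ClearInterrupt {
    // Thread.interrupted() tests AND clears the current thread's
    // interrupt status in one call.
    static boolean testAndClear() {
        return Thread.interrupted();
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();
        boolean was = testAndClear();                           // true, and flag cleared
        boolean still = Thread.currentThread().isInterrupted(); // false afterwards
        System.out.println(was + " " + still);                  // true false
    }
}
```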
Related issue: #2767
Motivation:
CIDR.contains(InetAddress) implementations should always return true
when the CIDR's prefix length is 0.
Modifications:
- Make CIDR.contains(InetAddress) return true if the current cidrMask is
0
- Add tests
Result:
Fixed the issue #2767
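An IPv4-only sketch of why the explicit guard is needed (addresses as ints; not the actual CIDR code): Java shifts are taken modulo 32, so "-1 << 32" evaluates to -1 rather than 0, and without the prefixLen == 0 special case a /0 block would wrongly match only the network address itself.

```java
public class CidrContains {
    static boolean contains(int address, int network, int prefixLen) {
        if (prefixLen == 0) {
            return true; // 0.0.0.0/0 matches every address
        }
        // For 1..32 this builds the usual network mask.
        int mask = -1 << (32 - prefixLen);
        return (address & mask) == (network & mask);
    }
}
```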
Related issue: #2821
Motivation:
There's no way for a user to change the default ZlibEncoder
implementation.
Modifications:
Add a new system property 'io.netty.noJdkZlibEncoder'.
Use JZlib-based encoder if windowBits or memoryLevel is different from
the JDK default.
Result:
A user can tell HttpContentCompressor not to use JDK ZlibEncoder even if
the current Java version is 7+.
Motivation:
Possibly due to a very small elapsed time (< 100ms), the traffic shaping wait time might be a bit too high in rare conditions.
Modification:
Remove extra time (only stepms is kept, not minimalms).
Result:
The computed wait time should now be correct.
Motivation:
The test procedure is unstable because the timestamping during the
check is not precise enough.
Modifications:
Reducing the test cases and targeting "stable" (reliably
"timestamp-able") tests brings more stability to the tests.
Result:
Tests for TrafficShapingHandler seem more stable (whether using JVM 6,
7 or 8).
Renaming to:
src/test/java/org/jboss/netty/handler/traffic/TrafficShapingHandlerTest.java
Fix for issue #2765 relative to unstable trafficshaping test procedure
Motivation:
The test procedure is unstable because the timestamping during the check is not precise enough.
Modifications:
Reducing the test cases and targeting "stable" (reliably "timestamp-able") tests brings more stability to the tests.
Result:
Tests for TrafficShapingHandler seem more stable (whether using JVM 6, 7 or 8).
Same version as in 4.0, 4.1 and Master.
Motivation:
The test procedure is unstable when testing short durations (factor less than or equal to 1). Changing to a default of 10ms in this case forces the time to be correct, and the time is checked only when the factor is >= 2.
Modifications:
When the factor is <= 1, minimalWaitBetween is 10ms.
Result:
Hoping this version is finally stable.
Motivation:
Currently Traffic Shaping is using 1 timer only and could lead to
"partial" wrong bandwidth computation when "short" time occurs between
adding used bytes and when the TrafficCounter updates itself and finally
when the traffic is computed.
Indeed, the TrafficCounter is updated every x delay, and at the same
time it is saved into "lastXxxxBytes" and reset to 0. Therefore, when
one requests the counter, it first updates the TrafficCounter with the
added used bytes. If this value is set just before the TrafficCounter
is updated, then the bandwidth computation will use the TrafficCounter
with a "0" value (this value having just been reset when the delay
occurs). Therefore, the traffic shaping computation is wrong in rare
cases.
Secondly, the traffic shaping should avoid, if possible, the "Timeout"
effect by not stopping reading or writing for more than a maxTime, this
maxTime being less than the TimeOut limit.
The same algorithm as in V4 and V5 is used.
Modifications:
The TrafficCounter has 2 new methods that compute the time to wait
(according to read or write), using in priority the currentXxxxBytes
(as before), but which can use (if current is at 0) the lastXxxxBytes,
thereby having a better chance of taking the real traffic into account.
Moreover, the Handler can change the default "max time to wait", which
is by default set to half of the "standard" Time Out (30s / 2 = 15s).
Include a test as in V4 but limited in the example to Nio.
Result:
The Traffic Shaping is better taken into account (no 0 value when there
shouldn't be one), and it tries not to block traffic for longer than the
Time Out.
This version is for V3.9 but could simply be ported to V4.x and master.
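An illustrative sketch of the fallback idea (names, signature and constants are ours, not Netty's): prefer the current interval's byte count, but fall back to the last interval's count when the counter was just reset to 0, so the computed bandwidth does not spuriously drop to zero.

```java
public class ShapingWait {
    // Time (ms) to delay I/O so the observed rate stays under the limit.
    static long waitTime(long currentBytes, long lastBytes,
                         long limitBytesPerSec, long intervalMs) {
        // Fall back to the previous interval when current was just reset.
        long bytes = currentBytes != 0 ? currentBytes : lastBytes;
        if (limitBytesPerSec == 0 || bytes == 0) {
            return 0;
        }
        long wait = bytes * 1000 / limitBytesPerSec - intervalMs;
        return wait > 10 ? wait : 0; // ignore sub-10ms corrections
    }
}
```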
Related issue: #2179
Motivation:
Previous fix e71cbb9308 was not enough.
Modifications:
- Add more test cases for WebSocket handshake
- Fix a bug in HttpMessageDecoder where it does not always enter
UPGRADED state
- Fix incorrect decoder replacement logic in WebSocketClientHandshaker
implementations
- Add WebSocketClientHandshaker.replaceDecoder() as a helper
Result:
We never lose the first WebSocket frame for all WebSocket protocol
versions.