Motivation:
WebSocketClientHandshaker#upgradeUrl doesn't correctly compute the relative URL when the path is empty and produces a URL such as `?access_token=foo` instead of `/?access_token=foo`.
Modifications:
* fix WebSocketClientHandshaker#upgradeUrl
* add tests for URLs without a path, with and without a query
Result:
WebSocketClientHandshaker now properly connects to URLs without a path.
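A minimal sketch of the corrected computation (hypothetical helper, not the literal method body of WebSocketClientHandshaker#upgradeUrl): fall back to "/" when the raw path is empty so the query is appended to a valid relative URL.
```
import java.net.URI;

// Hypothetical sketch only; the real fix lives in WebSocketClientHandshaker#upgradeUrl.
final class UpgradeUrlSketch {
    static String upgradeUrl(URI wsUrl) {
        String path = wsUrl.getRawPath();
        path = path == null || path.isEmpty() ? "/" : path;   // an empty path must become "/"
        String query = wsUrl.getRawQuery();
        return query == null || query.isEmpty() ? path : path + '?' + query;
    }
}
```
With this, `new URI("ws://localhost:8080?access_token=foo")` yields `/?access_token=foo` instead of `?access_token=foo`.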
Motivation:
I am receiving a multipart/form-data upload from Postman. The filename contains Chinese characters, and therefore some chars that are invalid in the header. We should ensure all of these are removed before trying to decode.
Modification:
Ensure all invalid characters are removed
Result:
Fixes #10087
Co-authored-by: liming.zhang <liming.zhang@luckincoffee.com>
Motivation:
At the moment the next / prev references are not set to "null" in the DefaultChannelHandlerContext once the ChannelHandler is removed. This is bad, as it basically lets users still use the ChannelHandlerContext of a ChannelHandler after it is removed, which may produce very surprising behaviour.
Modifications:
- Fail if someone tries to use the ChannelHandlerContext once the ChannelHandler was removed (for outbound operations fail the promise, for inbound fire the error through the ChannelPipeline)
- Fix some handlers to ensure we do not use the ChannelHandlerContext after the handler was removed
- Adjust DefaultChannelPipeline / DefaultChannelHandlerContext to fix races with removal / replacement of handlers
Result:
Cleanup behaviour and make it more predictable for pipeline modifications
Motivation:
Our parsing of the initial line / HTTP headers treated some characters as separators which should instead trigger an exception during parsing.
Modifications:
- Tighten up parsing of the initial line by following the recommendations of RFC 7230
- Restrict separators to OWS for HTTP headers
- Add unit test
Result:
Stricter parsing of HTTP1
Motivation:
We had a System.out.println(...) call in a test which seems to be a left-over from debugging.
Modifications:
Remove System.out.println(...)
Result:
Code cleanup
Motivation
As part of a recent commit for issue
https://github.com/netty/netty/issues/9861 the HttpObjectDecoder was
changed to throw an IllegalArgumentException (and produce a failed
decoder result) when decoding a message with both "Transfer-Encoding:
chunked" and "Content-Length".
While it seems correct for Netty to try to sanitize these types of
messages, the spec explicitly mentions that the Content-Length header
should be *removed* in this scenario.
Both Nginx 1.15.9 and Tomcat 9.0.31 also opt to remove the header:
b693d7c198/java/org/apache/coyote/http11/Http11Processor.java (L747-L755)
0ad4393e30/src/http/ngx_http_request.c (L1946-L1953)
Modifications
* Change the default behavior from throwing an IllegalArgumentException
to removing the "Content-Length" header
* Extract the behavior to a new protected method,
handleChunkedEncodingWithContentLength(), that can be overridden to
change this behavior (or capture metrics)
Result
Messages of this nature will now be successfully decoded and have their
"Content-Length" header removed, rather than creating invalid messages
(decoder result failures). Users will be allowed to override and
configure this behavior.
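A hedged example of the new hook in use; the method name is taken from the description above, while its exact signature and the chosen subclass are assumptions:
```
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequestDecoder;
import java.util.concurrent.atomic.AtomicLong;

// Assumed override shape: count how often a message combines "Transfer-Encoding: chunked"
// with "Content-Length", then keep the new default of removing the Content-Length header.
public class CountingHttpRequestDecoder extends HttpRequestDecoder {
    private final AtomicLong chunkedWithContentLength = new AtomicLong();

    @Override
    protected void handleChunkedEncodingWithContentLength(HttpMessage message) {
        chunkedWithContentLength.incrementAndGet();            // capture the metric
        super.handleChunkedEncodingWithContentLength(message); // keep the default behaviour
    }
}
```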
Motivation:
Need tests to ensure that CVE-2020-7238 is fixed.
Modifications:
Added two test cases into HttpRequestDecoderTest which check that
no whitespace is allowed before the Transfer-Encoding header.
Result:
Improved test coverage for #9861
Motivation:
Avoid allocation of default static `WebSocketServerProtocolConfig` and `WebSocketClientProtocolConfig` configs. Prefer compile time constants instead.
Modification:
The static field holding the default config object was replaced with constructors that use default field values.
Result:
No more default config allocation and no static field for it; compile-time constants are used instead.
Motivation:
At the moment we add a handler which will respond with 403 Forbidden if a websocket handshake is in progress (and after). This makes little sense, as it is unexpected for a remote peer to send another HTTP request once the handshake has started. In this case it is much better to let the websocket decoder bail out.
Modifications:
Remove usage of forbiddenHttpRequestResponder
Result:
Fixes https://github.com/netty/netty/issues/9913
Motivation:
In the next major version of Netty users should use ChannelHandler everywhere. We should ensure we do the same.
Modifications:
Replace usage of deprecated classes / interfaces with ChannelHandler
Result:
Use non-deprecated code
Motivation:
We should remove WebSocketServerExtensionHandler from pipeline after successful WebSocket upgrade even if the client has not selected any extensions.
Modification:
Remove handler once upgrade is complete and no extensions are used.
Result:
Fixes #9939.
Motivation:
Utf8FrameValidator must release the input buffer if the validation fails, to ensure no memory leak happens.
Modifications:
- Catch exception, release frame and rethrow
- Adjust unit test
Result:
Fixes https://github.com/netty/netty/issues/9906
Motivation:
Currently, characters are appended to the encoded string char-by-char even when no encoding is needed. We can instead separate out a code path that appends the entire string in one go, for better `StringBuilder` allocation performance.
Modification:
Only go into the char-by-char loop after finding a character that requires encoding.
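A simplified, ASCII-only sketch of the idea (not the actual QueryStringEncoder code; `appendComponent` and `needsEncoding` are made-up names): append the whole component in one call when nothing needs escaping, and only enter the per-character loop after the first character that does.
```
// Simplified illustration; real query-string encoding also has to emit UTF-8 bytes.
final class FastPathEncodeSketch {
    static void appendComponent(StringBuilder out, String s) {
        for (int i = 0; i < s.length(); i++) {
            if (needsEncoding(s.charAt(i))) {
                out.append(s, 0, i);                  // bulk-append the safe prefix
                for (int j = i; j < s.length(); j++) {
                    char c = s.charAt(j);
                    if (!needsEncoding(c)) {
                        out.append(c);
                    } else if (c == ' ') {
                        out.append('+');              // space maps to '+' in query strings
                    } else {
                        out.append('%')
                           .append(Character.toUpperCase(Character.forDigit((c >> 4) & 0xF, 16)))
                           .append(Character.toUpperCase(Character.forDigit(c & 0xF, 16)));
                    }
                }
                return;
            }
        }
        out.append(s);                                // fast path: nothing to encode
    }

    private static boolean needsEncoding(char c) {
        // Deliberately simplified "unreserved" check for the sketch.
        return !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9')
                 || c == '-' || c == '_' || c == '.' || c == '~');
    }
}
```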
Result:
The results aren't so clear with noise on my hot laptop - the biggest impact is on long strings, both to reduce resizes of the buffer and also to reduce complexity of the loop. I don't think there's a significant downside though for the cases that hit the slow path.
After
```
Benchmark                                     Mode  Cnt   Score   Error  Units
QueryStringEncoderBenchmark.longAscii        thrpt    6   1.406 ± 0.069  ops/us
QueryStringEncoderBenchmark.longAsciiFirst   thrpt    6   0.046 ± 0.001  ops/us
QueryStringEncoderBenchmark.longUtf8         thrpt    6   0.046 ± 0.001  ops/us
QueryStringEncoderBenchmark.shortAscii       thrpt    6  15.781 ± 0.949  ops/us
QueryStringEncoderBenchmark.shortAsciiFirst  thrpt    6   3.171 ± 0.232  ops/us
QueryStringEncoderBenchmark.shortUtf8        thrpt    6   3.900 ± 0.667  ops/us
```
Before
```
Benchmark                                     Mode  Cnt   Score   Error  Units
QueryStringEncoderBenchmark.longAscii        thrpt    6   0.444 ± 0.072  ops/us
QueryStringEncoderBenchmark.longAsciiFirst   thrpt    6   0.043 ± 0.002  ops/us
QueryStringEncoderBenchmark.longUtf8         thrpt    6   0.047 ± 0.001  ops/us
QueryStringEncoderBenchmark.shortAscii       thrpt    6  16.503 ± 1.015  ops/us
QueryStringEncoderBenchmark.shortAsciiFirst  thrpt    6   3.316 ± 0.154  ops/us
QueryStringEncoderBenchmark.shortUtf8        thrpt    6   3.776 ± 0.956  ops/us
```
Motivation:
https://github.com/netty/netty/pull/9883 added a bug-fix for the Comparator in ClientCookieEncoder but did not add a testcase.
Modifications:
- Add testcase
- Simplify code
Result:
Include a test to ensure we do not regress.
Motivation:
The current implementation causes IllegalArgumentExceptions to be thrown on Java 11.
The current implementation would violate the comparison contract for two cookies C1 and C2 with the same path length, since both C1 < C2 and C2 < C1 would hold. Returning 0 (equality) does not, since then C1 == C2 and C2 == C1. See #9881
Modification:
Return equality instead of less than on same path length.
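An illustrative comparator with the corrected behaviour (not the exact ClientCookieEncoder code; the handling of null paths is an assumption):
```
import io.netty.handler.codec.http.cookie.Cookie;
import java.util.Comparator;

final class CookiePathComparatorSketch {
    // Longer paths sort first; equal path lengths now compare as equal (0), so the
    // Comparator contract holds and Java 11's sort no longer throws.
    static final Comparator<Cookie> BY_PATH_LENGTH_DESC = (c1, c2) -> {
        int len1 = c1.path() == null ? Integer.MAX_VALUE : c1.path().length();
        int len2 = c2.path() == null ? Integer.MAX_VALUE : c2.path().length();
        return Integer.compare(len2, len1);
    };
}
```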
Result:
Fixes #9881.
Motivation:
In Java, using `ByteBuffer` is almost always slower than `byte[]` unless pooling or I/O is involved. `QueryStringDecoder` can use `byte[]` with arguably simpler code.
Modification:
Replace `ByteBuffer` / `CharsetDecoder` with `byte[]` and `new String`
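A rough, self-contained sketch of the approach (not the literal patch): collect percent-decoded bytes into a plain byte[] and build the result with new String(bytes, charset) instead of going through a CharsetDecoder.
```
import java.nio.charset.Charset;

final class ByteArrayDecodeSketch {
    // Simplified: assumes well-formed "%XX" escapes and treats other characters as single
    // bytes, which is enough to illustrate the ByteBuffer -> byte[] swap.
    static String decodeComponent(String s, Charset charset) {
        byte[] buf = new byte[s.length()];
        int pos = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '+') {
                buf[pos++] = (byte) ' ';
            } else if (c == '%' && i + 2 < s.length()) {
                int hi = Character.digit(s.charAt(i + 1), 16);
                int lo = Character.digit(s.charAt(i + 2), 16);
                buf[pos++] = (byte) ((hi << 4) + lo);
                i += 2;
            } else {
                buf[pos++] = (byte) c;
            }
        }
        return new String(buf, 0, pos, charset);
    }
}
```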
Result:
After
```
Benchmark                                  Mode  Cnt  Score   Error  Units
QueryStringDecoderBenchmark.noDecoding     thrpt    6  5.612 ± 2.639  ops/us
QueryStringDecoderBenchmark.onlyDecoding   thrpt    6  1.393 ± 0.067  ops/us
QueryStringDecoderBenchmark.mixedDecoding  thrpt    6  1.223 ± 0.048  ops/us
```
Before
```
Benchmark                                  Mode  Cnt  Score   Error  Units
QueryStringDecoderBenchmark.noDecoding     thrpt    6  6.123 ± 0.250  ops/us
QueryStringDecoderBenchmark.onlyDecoding   thrpt    6  0.922 ± 0.159  ops/us
QueryStringDecoderBenchmark.mixedDecoding  thrpt    6  1.032 ± 0.178  ops/us
```
I notice #6781 switched from an array to `ByteBuffer` but I can't find any motivation for that in the PR. Unit tests pass fine with an array and we get a reasonable speed bump.
Motivation:
ByteToMessageDecoder requires using an intermediate List to put results into. This intermediate list adds overhead (memory/CPU) which grows as the number of objects increases. This overhead can be avoided by directly propagating events through the ChannelPipeline via ctx.fireChannelRead(...). This also makes the semantics clearer and allows us to keep track of whether we need to call ctx.read() in all cases.
Modifications:
- Remove List from the method signature of ByteToMessageDecoder.decode(...) and decodeLast(...)
- Adjust all sub-classes
- Adjust unit tests
- Fix javadocs.
Result:
Adjust ByteToMessageDecoder as noted in https://github.com/netty/netty/issues/8525.
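A hedged sketch of what a decoder looks like under the List-free contract; the exact method signature is assumed from the description above:
```
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

// Each decoded message is forwarded directly via ctx.fireChannelRead(...) instead of being
// added to an intermediate List.
public class IntDecoderSketch extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
        while (in.readableBytes() >= 4) {
            ctx.fireChannelRead(in.readInt());
        }
    }
}
```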
Motivation:
RFC7230 states that we should not accept multiple content-length headers and also should not accept a content-length header in combination with transfer-encoding: chunked
Modifications:
- Check for multiple content-length headers and if found mark message as invalid
- Check if we found a content-length header and also a transfer-encoding: chunked and if so mark the message as invalid
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9861
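For the change above, an illustrative version of the check (not the HttpObjectDecoder internals):
```
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaders;
import java.util.List;
import java.util.Locale;

final class ContentLengthCheckSketch {
    // A message is treated as invalid when it carries more than one Content-Length value,
    // or a Content-Length together with "Transfer-Encoding: chunked".
    static boolean isInvalid(HttpHeaders headers) {
        List<String> contentLengths = headers.getAll(HttpHeaderNames.CONTENT_LENGTH);
        boolean chunked = headers.getAll(HttpHeaderNames.TRANSFER_ENCODING).stream()
                .anyMatch(v -> v.toLowerCase(Locale.ROOT).contains("chunked"));
        return contentLengths.size() > 1 || (!contentLengths.isEmpty() && chunked);
    }
}
```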
Motivation:
Technically speaking it's valid to have HTTP headers with no values, so we should support it. That said, we need to detect if these are "generated" because of an "invalid" fold.
Modifications:
- Detect if a colon is missing when parsing headers.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9866
Motivation:
A client can split data into different numbers of fragments, and sometimes the last frame may contain trash data that doesn't affect the decompression process.
Modification:
Added a check: if the last frame is a `ContinuationWebSocketFrame` and the decompressed data is empty, then don't throw an exception.
Result:
Fixes #9770
Motivation:
Multiple cookie values can be present in a single header.
Modification:
Add `decodeAll` overload which returns all cookies
Result:
Fixes #7210
Note:
This change is not as prescriptive as the ideas brought up in the linked issue. Changing the Set implementation or the equals/compareTo definition is likely a breaking change, so those are not practical.
Motivation:
We should use Objects.requireNonNull(...) as we require Java 8
Modifications:
Replace ObjectUtil.checkNotNull(...) with Objects.requireNonNull(...)
Result:
Code cleanup
Motivation:
HttpPostStandardRequestDecoder may throw multiple different exceptions in the constructor, which could lead to memory leaks. We need to guard against this by explicitly catching all of them and rethrowing after we have released any allocated memory.
Modifications:
- Catch, destroy and rethrow in any case
- Ensure we correctly wrap IllegalArgumentExceptions
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9829
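A minimal sketch of the guard pattern described above (helper names and structure are assumptions, not the literal constructor body):
```
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.ErrorDataDecoderException;

final class GuardedInitSketch {
    void initBody() {
        try {
            parseBody();
        } catch (Throwable t) {
            destroy();                                  // release allocated buffers first
            if (t instanceof ErrorDataDecoderException) {
                throw (ErrorDataDecoderException) t;
            }
            throw new ErrorDataDecoderException(t);     // e.g. wrap IllegalArgumentException
        }
    }

    void parseBody() { /* hypothetical: parse the body and allocate per-part buffers */ }

    void destroy()   { /* hypothetical: release anything allocated so far */ }
}
```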
Those who need 'Origin' or 'Sec-WebSocket-Origin' headers should provide them explicitly, as stated in the WebSocket specs.
E.g. through custom headers:
HttpHeaders customHeaders = new DefaultHttpHeaders()
    .add(HttpHeaderNames.ORIGIN, "http://localhost:8080");
new WebSocketClientProtocolHandler(
    new URI("ws://localhost:1234/test"), WebSocketVersion.V13, subprotocol,
    allowExtensions, customHeaders, maxFramePayloadLength, handshakeTimeoutMillis);
* Remove enforced origin headers.
* Update tests
Fixes #9673: Origin header is always sent from WebSocket client
Motivation:
7ff8cde66f introduced some tests which need some small adjustments for master.
Modifications:
Add explicit casts
Result:
master builds again without test failures
Motivation:
By default CloseWebSocketFrames are handled automatically.
However I need to manually manage their sending on both the client and server sides.
Modification:
Automatically send a close frame on channel close when it was not explicitly sent before.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
Motivation
Per javadoc in 4.1.x SimpleChannelInboundHandler:
"Please keep in mind that channelRead0(ChannelHandlerContext, I) will be
renamed to messageReceived(ChannelHandlerContext, I) in 5.0."
Modifications
Rename aforementioned method and all references/overrides.
Result
Method is renamed.
### Motivation:
Introduction of `WebSocketDecoderConfig` made our server-side code more elegant and easier to support.
However there are still some problems with maintenance and new feature development for the WebSocket codecs (`WebSocketClientProtocolHandler`, `WebSocketServerProtocolHandler`).
Particularly, it makes me ~~cry with blood~~ extremely sad to add a new parameter and yet another constructor to these handlers whenever I want to contribute a new feature.
### Modification:
I've extracted all parameters for the client and server WebSocket handlers into config/builder structures, as was done for the decoders in PR #9116.
### Result:
* Fixes #9698: Simplify WebSocket handlers constructor arguments hell
* Unblock further development in this module (configurable close frame handling on server-side; automatic close-frame sending, when missed; memory leaks on protocol violations; etc...)
Bonuses:
* All defaults are gathered in one place and could be easily found/reused.
* New API greatly simplifies usage, but does NOT allow inheritance or modification.
* New API would simplify long-term maintenance of WebSockets module.
### Example
WebSocketClientProtocolConfig config = WebSocketClientProtocolConfig.newBuilder()
    .webSocketUri("wss://localhost:8443/fx-spot")
    .subprotocol("trading")
    .handshakeTimeoutMillis(15000L)
    .build();
ctx.pipeline().addLast(new WebSocketClientProtocolHandler(config));
Motivation:
At the moment we fail to poll the method queue when we see an informational response code. This can lead to out-of-sync request / response pairs when we later try to compare them.
Modifications:
Always poll the queue correctly
Result:
Always compare the correct request / response pairs
Motivation:
HTTP 102 (WebDAV) is not correctly treated as an informational response
Modification:
Delegate all `1XX` status codes to superclass, not just `100` and `101`.
Result:
Supports WebDAV response.
Removes a huge maintenance [headache](https://github.com/line/armeria/pull/2210) in Armeria which has forked the class for these features
Motivation:
HttpPostRequestDecoder.splitHeaderContentType() throws a StringIndexOutOfBoundsException when it parses a Content-Type header that starts with a semicolon (`;`). We should skip processing for such incorrect multipart form data.
Modification:
Avoid invocation of HttpPostRequestDecoder#splitHeaderContentType(...) for incorrect multipart form data content-type.
Result:
Fixes #8554
### Motivation:
I've now found two libraries that use Netty to be vulnerable to [CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')](https://cwe.mitre.org/data/definitions/113.html) due to using `new DefaultHttpHeaders(false)`.
Some part of me hopes that this warning will help dissuade library authors from disabling this important security check.
### Modification:
Add documentation to `DefaultHttpHeaders(boolean)` to warn about the implications of `false`.
### Result:
This improves the documentation on `DefaultHttpHeaders`.
Motivation:
Optimize the QueryStringEncoder for lower memory overhead and higher encode speed.
Modification:
Encode the space character to '+' directly, and reuse the uriStringBuilder rather than creating a new one.
Result:
Improved performance
Motivation:
When parsing HTTP headers, special care needs to be taken when whitespace is detected in the header name.
Modifications:
- Ignore whitespace when decoding response (just like before)
- Throw exception when whitespace is detected during parsing
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9571
Motivation:
At the moment HttpContentEncoder handles only the first of multiple accept-encoding headers.
Modification:
Join multiple accept-encoding headers into one, separated by commas.
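An illustrative helper for the idea (not the exact HttpContentEncoder code; String.join stands in for whatever joining the handler actually uses):
```
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpRequest;

final class AcceptEncodingSketch {
    // Collapse repeated Accept-Encoding headers into one comma-separated value before
    // selecting a content encoder.
    static String joinedAcceptEncoding(HttpRequest request) {
        return String.join(",", request.headers().getAll(HttpHeaderNames.ACCEPT_ENCODING));
    }
}
```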
Result:
Fixes #9553
Motivation:
At the moment it is quite easy to hit reentrancy issues when you have multiple handlers in the pipeline and the handlers do not correctly protect against these. To make it easier for the user we should try to protect against them. The issue usually arises when an inbound event triggers an outbound event and this outbound event in turn triggers an inbound event again. This may result in a ChannelHandler re-entering one of its methods, so state can be corrupted or messages re-ordered.
Modifications:
- Keep track of inbound / outbound operations in DefaultChannelHandlerContext and if reentrancy is detected break it by scheduling the action on the EventLoop. This will then be picked up once the method returns and so the reentrancy is broken up.
- Adjust tests which made strange assumptions about execution order
Result:
No more reentrancy of handlers possible.
Motivation:
Currently when HttpPostStandardRequestDecoder throws an ErrorDataDecoderException during construction we leak memory. We need to ensure everything is released correctly.
Modifications:
- Call destroy() if parseBody() throws and rethrow the ErrorDataDecoderException
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9513.
Motivation:
We did not correctly pass all supplied parameters to the called constructor and so did not apply the timeout.
Modification:
Correctly pass on the parameters.
Result:
Use timeout
Motivation:
Some of the links in javadoc point to the obsolete drafts of HTTP/2
specifications. We should point them to the latest RFC 7540 or 7541.
Modifications:
Update links from `draft-ietf-httpbis-*` to `rfc7540` and `rfc7541`.
Result:
Correct links in javadoc.
Motivation:
`HttpObjectDecoder` pre-checks that it doesn't request characters
outside of the `AppendableCharSequence`'s length. `0` is always allowed
because the minimal length of `AppendableCharSequence` is `1`. We can
legally skip the index check by using
`AppendableCharSequence.charAtUnsafe(int)` in all existing cases in
`HttpObjectDecoder`.
Modifications:
- Use `AppendableCharSequence.charAtUnsafe(int)` instead of
`AppendableCharSequence.charAt(int)` in `HttpObjectDecoder`.
Result:
No unnecessary index checks in `HttpObjectDecoder`.
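A small usage illustration of the reasoning (example only, not part of the change):
```
import io.netty.util.internal.AppendableCharSequence;

final class CharAtUnsafeSketch {
    static char firstChar(AppendableCharSequence seq) {
        // Index 0 is always valid because the minimal length of AppendableCharSequence is 1,
        // so charAtUnsafe(0) returns the same result as charAt(0) without the range check.
        return seq.charAtUnsafe(0);
    }
}
```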