Motivation:
When the HttpContentCompressor is put on an alternate EventExecutor, the order of events should be
preserved so that the compressed content is correctly created.
Modifications:
- Check that the executor in the ChannelHandlerContext is the same as the current executor when evaluating whether the handler should be skipped.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10067
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
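For reference, a minimal sketch of the scenario that triggered the issue, assuming a server pipeline where the compressor is offloaded to its own executor group (the setup is illustrative, not taken from the patch):
```
// HttpContentCompressor on a separate EventExecutorGroup: its events may run on a
// thread other than the channel's event loop, so skip-detection must compare executors.
EventExecutorGroup offloadGroup = new DefaultEventExecutorGroup(4);
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast(new HttpServerCodec());
pipeline.addLast(offloadGroup, "compressor", new HttpContentCompressor());
```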
Motivation:
Our parsing of the initial line / HTTP headers treated some characters as separators that should instead trigger an exception during parsing.
Modifications:
- Tighten up parsing of the initial line by following the recommendations of RFC 7230
- Restrict separators to OWS for HTTP headers
- Add unit test
Result:
Stricter parsing of HTTP1
Motivation:
We had a System.out.println(...) call in a test which seems to be a left-over from debugging.
Modifications:
Remove System.out.println(...)
Result:
Code cleanup
Motivation
As part of a recent commit for issue
https://github.com/netty/netty/issues/9861 the HttpObjectDecoder was
changed to throw an IllegalArgumentException (and produce a failed
decoder result) when decoding a message with both "Transfer-Encoding:
chunked" and "Content-Length".
While it seems correct for Netty to try to sanitize these types of
messages, the spec explicitly mentions that the Content-Length header
should be *removed* in this scenario.
Both Nginx 1.15.9 and Tomcat 9.0.31 also opt to remove the header:
b693d7c198/java/org/apache/coyote/http11/Http11Processor.java (L747-L755)
0ad4393e30/src/http/ngx_http_request.c (L1946-L1953)
Modifications
* Change the default behavior from throwing an IllegalArgumentException
to removing the "Content-Length" header
* Extract the behavior to a new protected method,
handleChunkedEncodingWithContentLength(), that can be overridden to
change this behavior (or capture metrics)
Result
Messages of this nature will now be successfully decoded and have their
"Content-Length" header removed, rather than creating invalid messages
(decoder result failures). Users will be allowed to override and
configure this behavior.
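As an illustrative sketch (not the actual patch), the sanitizing behavior described above boils down to:
```
// If a message carries both "Transfer-Encoding: chunked" and "Content-Length",
// drop Content-Length and let chunked encoding win, as RFC 7230 suggests.
if (HttpUtil.isTransferEncodingChunked(message)
        && message.headers().contains(HttpHeaderNames.CONTENT_LENGTH)) {
    message.headers().remove(HttpHeaderNames.CONTENT_LENGTH);
}
```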
Motivation:
Need tests to ensure that CVE-2020-7238 is fixed.
Modifications:
Added two test cases into HttpRequestDecoderTest which check that
no whitespace is allowed before the Transfer-Encoding header.
Result:
Improved test coverage for #9861
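A rough sketch of the shape of such a test (the exact assertion is an assumption here; the real cases live in HttpRequestDecoderTest):
```
// A smuggling attempt with whitespace before the Transfer-Encoding header name
// must not be decoded as a valid chunked request.
EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
String request = "POST / HTTP/1.1\r\n"
        + "Host: example.com\r\n"
        + " Transfer-Encoding: chunked\r\n\r\n";
channel.writeInbound(Unpooled.copiedBuffer(request, CharsetUtil.US_ASCII));
HttpRequest decoded = channel.readInbound();
assertTrue(decoded.decoderResult().isFailure());
channel.finishAndReleaseAll();
```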
Motivation:
Avoid allocation of the default static `WebSocketServerProtocolConfig` and `WebSocketClientProtocolConfig` configs. Prefer compile-time constants instead.
Modification:
The static fields holding default config objects were replaced by constructors that initialize the default field values directly.
Result:
No more default config allocation and no static field for it; compile-time constants are used instead.
…in order to minimize the pipeline
Motivation:
Handling of `WebSocketCloseFrame` is part of the WebSocket protocol, so it is logical to put it within `WebSocketProtocolHandler`. Also, removing `WebSocketCloseFrameHandler` shortens the channel pipeline.
Modification:
- `WebSocketCloseFrameHandler` code merged into `WebSocketProtocolHandler`; `WebSocketCloseFrameHandler` is no longer added to the pipeline
- Added additional constructor to `WebSocketProtocolHandler`
- `WebSocketProtocolHandler` now implements `ChannelOutboundHandler` and implements basic methods from it
Result:
`WebSocketCloseFrameHandler` is no longer used.
Fixes https://github.com/netty/netty/issues/9944
Motivation:
At the moment we add a handler which will respond with 403 Forbidden while a websocket handshake is in progress (and after). This makes little sense, as a remote peer is not expected to send another HTTP request once the handshake has started. In this case it is much better to let the websocket decoder bail out.
Modifications:
Remove usage of forbiddenHttpRequestResponder
Result:
Fixes https://github.com/netty/netty/issues/9913
Motivation:
We should remove WebSocketServerExtensionHandler from the pipeline after a successful WebSocket upgrade, even if the client has not selected any extensions.
Modification:
Remove handler once upgrade is complete and no extensions are used.
Result:
Fixes #9939.
Motivation:
Utf8FrameValidator must release the input buffer if validation fails, to ensure no memory leak happens.
Modifications:
- Catch exception, release frame and rethrow
- Adjust unit test
Result:
Fixes https://github.com/netty/netty/issues/9906
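The catch-release-rethrow pattern described above, as a minimal hedged sketch (the validation call is a stand-in, not the validator's real method):
```
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    WebSocketFrame frame = (WebSocketFrame) msg;
    try {
        validateUtf8(frame);          // hypothetical stand-in for the UTF-8 check
    } catch (CorruptedWebSocketFrameException e) {
        frame.release();              // release the input buffer so it does not leak
        throw e;
    }
    ctx.fireChannelRead(frame);
}
```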
Motivation:
Currently, characters are appended to the encoded string char-by-char even when no encoding is needed. We can instead separate out the code path that appends the entire string in one go, for better `StringBuilder` allocation performance.
Modification:
Only go into char-by-char loop when finding a character that requires encoding.
Result:
The results aren't so clear with noise on my hot laptop - the biggest impact is on long strings, both by reducing resizes of the buffer and by reducing the complexity of the loop. I don't think there's a significant downside for the cases that hit the slow path.
After
```
Benchmark                                     Mode  Cnt   Score   Error   Units
QueryStringEncoderBenchmark.longAscii        thrpt    6   1.406 ± 0.069  ops/us
QueryStringEncoderBenchmark.longAsciiFirst   thrpt    6   0.046 ± 0.001  ops/us
QueryStringEncoderBenchmark.longUtf8         thrpt    6   0.046 ± 0.001  ops/us
QueryStringEncoderBenchmark.shortAscii       thrpt    6  15.781 ± 0.949  ops/us
QueryStringEncoderBenchmark.shortAsciiFirst  thrpt    6   3.171 ± 0.232  ops/us
QueryStringEncoderBenchmark.shortUtf8        thrpt    6   3.900 ± 0.667  ops/us
```
Before
```
Benchmark                                     Mode  Cnt   Score   Error   Units
QueryStringEncoderBenchmark.longAscii        thrpt    6   0.444 ± 0.072  ops/us
QueryStringEncoderBenchmark.longAsciiFirst   thrpt    6   0.043 ± 0.002  ops/us
QueryStringEncoderBenchmark.longUtf8         thrpt    6   0.047 ± 0.001  ops/us
QueryStringEncoderBenchmark.shortAscii       thrpt    6  16.503 ± 1.015  ops/us
QueryStringEncoderBenchmark.shortAsciiFirst  thrpt    6   3.316 ± 0.154  ops/us
QueryStringEncoderBenchmark.shortUtf8        thrpt    6   3.776 ± 0.956  ops/us
```
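A simplified sketch of the fast-path idea (not the actual implementation): scan for the first character that needs encoding, bulk-append everything before it, and only then fall into the per-character loop. `needsEncoding` and `encodeSlow` are hypothetical helpers standing in for the existing escaping logic.
```
void appendEncoded(StringBuilder sb, String s) {
    for (int i = 0; i < s.length(); i++) {
        if (needsEncoding(s.charAt(i))) {
            sb.append(s, 0, i);        // one bulk append for the safe prefix
            encodeSlow(sb, s, i);      // char-by-char from the first unsafe character
            return;
        }
    }
    sb.append(s);                      // nothing to encode: a single append, no per-char work
}
```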
Motivation:
https://github.com/netty/netty/pull/9883 added a bug-fix for the Comparator in ClientCookieEncoder but did not add a testcase.
Modifications:
- Add testcase
- Simplify code
Result:
Include a test to ensure we do not regress.
Motivation:
The current implementation causes IllegalArgumentExceptions to be thrown on Java 11.
The current implementation violates the comparison contract for two cookies C1 and C2 with the same path length, since it reports both C1 < C2 and C2 < C1. Returning 0 (equality) does not, since C1 == C2 and C2 == C1. See #9881
Modification:
Return equality instead of less than on same path length.
Result:
Fixes #9881.
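An illustrative sketch of a contract-compliant comparison (longer paths sort first; equal path lengths now compare as equal instead of each being "less than" the other):
```
// Hedged sketch, not the exact Netty code: Integer.compare() returns 0 for equal
// lengths, which satisfies the Comparator contract that TimSort enforces on Java 11.
Comparator<Cookie> byPathLength = (c1, c2) -> {
    String p1 = c1.path() == null ? "/" : c1.path();
    String p2 = c2.path() == null ? "/" : c2.path();
    return Integer.compare(p2.length(), p1.length());
};
```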
Motivation:
In Java, using `ByteBuffer` is almost always at least as slow as `byte[]` when no pooling or I/O is involved. `QueryStringDecoder` can use `byte[]` with arguably simpler code.
Modification:
Replace `ByteBuffer` / `CharsetDecoder` with `byte[]` and `new String`
Result:
After
```
Benchmark                                   Mode  Cnt  Score   Error   Units
QueryStringDecoderBenchmark.noDecoding     thrpt    6  5.612 ± 2.639  ops/us
QueryStringDecoderBenchmark.onlyDecoding   thrpt    6  1.393 ± 0.067  ops/us
QueryStringDecoderBenchmark.mixedDecoding  thrpt    6  1.223 ± 0.048  ops/us
```
Before
```
Benchmark                                   Mode  Cnt  Score   Error   Units
QueryStringDecoderBenchmark.noDecoding     thrpt    6  6.123 ± 0.250  ops/us
QueryStringDecoderBenchmark.onlyDecoding   thrpt    6  0.922 ± 0.159  ops/us
QueryStringDecoderBenchmark.mixedDecoding  thrpt    6  1.032 ± 0.178  ops/us
```
I notice #6781 switched from an array to `ByteBuffer` but I can't find any motivation for that in the PR. Unit tests pass fine with an array and we get a reasonable speed bump.
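A minimal sketch of the approach after the change (an illustrative helper, not the exact Netty code): percent-decode into a plain `byte[]` and build the result with `new String`.
```
// Decodes '+' and "%xx" sequences into a byte[] buffer, then constructs the String once.
private static String decodeComponent(String s, Charset charset) {
    byte[] buf = new byte[s.length()];
    int pos = 0;
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '+') {
            buf[pos++] = ' ';
        } else if (c == '%' && i + 2 < s.length()) {
            buf[pos++] = (byte) ((Character.digit(s.charAt(i + 1), 16) << 4)
                    + Character.digit(s.charAt(i + 2), 16));
            i += 2;
        } else {
            buf[pos++] = (byte) c;
        }
    }
    return new String(buf, 0, pos, charset);
}
```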
Motivation:
RFC 7230 states that we should not accept multiple Content-Length headers, and also should not accept a Content-Length header in combination with Transfer-Encoding: chunked.
Modifications:
- Check for multiple Content-Length headers and, if found, mark the message as invalid
- Check whether we found both a Content-Length header and Transfer-Encoding: chunked, and if so mark the message as invalid
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9861
Motivation:
Technically speaking it's valid to have HTTP headers with no values, so we should support them. That said, we need to detect whether these are "generated" because of an "invalid" fold.
Modifications:
- Detect if a colon is missing when parsing headers.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9866
Motivation:
A client can split data into different numbers of fragments, and sometimes the last frame may contain trash data that does not affect the decompression process.
Modification:
Added a check: if the last frame is a `ContinuationWebSocketFrame` and the decompressed data is empty, do not throw an exception.
Result:
Fixes #9770
Motivation:
Multiple cookie values can be present in a single header.
Modification:
Add `decodeAll` overload which returns all cookies
Result:
Fixes #7210
Note:
This change is not as prescriptive as the ideas brought up in the linked issue. Changing the Set implementation or the equals/compareTo definition is likely a breaking change, so those options are not practical.
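Example usage of the new overload (hedged: assuming it is exposed on `ServerCookieDecoder` next to the existing `decode`):
```
// Returns every cookie from the header, including repeated names, instead of
// collapsing them into a Set.
List<Cookie> cookies = ServerCookieDecoder.STRICT.decodeAll("name=value1; name=value2; other=value3");
```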
Motivation:
HttpPostStandardRequestDecoder may throw several different exceptions in its constructor, which could lead to memory leaks. We need to guard against this by explicitly catching all of them and rethrowing after releasing any allocated memory.
Modifications:
- Catch, destroy and rethrow in any case
- Ensure we correctly wrap IllegalArgumentExceptions
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9829
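A hedged sketch of the constructor guard described above (the parse/cleanup calls are stand-ins for the decoder's real steps):
```
// Any failure during construction releases already-allocated buffers before rethrowing,
// wrapping unexpected exceptions in ErrorDataDecoderException.
try {
    parseBody();                          // hypothetical stand-in for the constructor's parsing work
} catch (ErrorDataDecoderException e) {
    destroy();                            // release any buffers allocated so far
    throw e;
} catch (IllegalArgumentException e) {
    destroy();
    throw new ErrorDataDecoderException(e);
}
```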
### Motivation:
Those who need 'Origin' or 'Sec-WebSocket-Origin' headers should provide them explicitly, as stated in the WebSocket specs.
E.g. through custom headers:
```
HttpHeaders customHeaders = new DefaultHttpHeaders()
    .add(HttpHeaderNames.ORIGIN, "http://localhost:8080");
new WebSocketClientProtocolHandler(
    new URI("ws://localhost:1234/test"), WebSocketVersion.V13, subprotocol,
    allowExtensions, customHeaders, maxFramePayloadLength, handshakeTimeoutMillis);
```
### Modification:
* Remove enforced origin headers.
* Update tests
### Result:
Fixes #9673: Origin header is always sent from WebSocket client
Motivation:
By default, CloseWebSocketFrames are handled automatically.
However, I need to manually manage their sending on both the client and server sides.
Modification:
Automatically send a close frame on channel close when one was not explicitly sent before.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
### Motivation:
Introduction of `WebSocketDecoderConfig` made our server-side code more elegant and simpler for support.
However there is still some problem with maintenance and new-feature development for the WebSocket codecs (`WebSocketClientProtocolHandler`, `WebSocketServerProtocolHandler`).
Particularly, it makes me ~~crying with blood~~ extremely sad to add a new parameter and yet another constructor to these handlers when I want to contribute a new feature.
### Modification:
I've extracted all parameters for client and server WebSocket handlers into config/builder structures, like it was made for decoders in PR #9116.
### Result:
* Fixes #9698: Simplify WebSocket handlers constructor arguments hell
* Unblock further development in this module (configurable close frame handling on server-side; automatic close-frame sending, when missed; memory leaks on protocol violations; etc...)
Bonuses:
* All defaults are gathered in one place and could be easily found/reused.
* New API greatly simplifies usage, but does NOT allow inheritance or modification.
* New API would simplify long-term maintenance of WebSockets module.
### Example
```
WebSocketClientProtocolConfig config = WebSocketClientProtocolConfig.newBuilder()
    .webSocketUri("wss://localhost:8443/fx-spot")
    .subprotocol("trading")
    .handshakeTimeoutMillis(15000L)
    .build();
ctx.pipeline().addLast(new WebSocketClientProtocolHandler(config));
```
Motivation:
At the moment we fail to poll the method queue when we see an informational response code. This can lead to out-of-sync request / response pairs when we later try to compare them.
Modifications:
Always poll the queue correctly
Result:
Always compare the correct request / response pairs
Motivation:
HTTP 102 (WebDAV) is not correctly treated as an informational response
Modification:
Delegate all `1XX` status codes to superclass, not just `100` and `101`.
Result:
Supports WebDAV response.
Removes a huge maintenance [headache](https://github.com/line/armeria/pull/2210) in Armeria, which has forked the class for these features.
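As a hedged illustration of the 1XX check, given some HttpResponse `response` (the actual patch simply delegates to the superclass as described):
```
// All informational responses (100-199) are treated uniformly, so 102 Processing
// (WebDAV) is no longer special-cased away.
boolean informational = response.status().codeClass() == HttpStatusClass.INFORMATIONAL;
```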
Motivation:
HttpPostRequestDecoder.splitHeaderContentType() throws a StringIndexOutOfBoundsException when it parses a Content-Type header that starts with a semicolon (;). We should skip this parsing for incorrect multipart form data.
Modification:
Avoid invocation of HttpPostRequestDecoder#splitHeaderContentType(...) for incorrect multipart form data content-type.
Result:
Fixes #8554
Motivation:
We can use the `@SuppressJava6Requirement` annotation to be more precise about when we use Java6+ APIs. This helps us to ensure we always protect these places.
Modifications:
Make use of `@SuppressJava6Requirement` explicit
Result:
Fixes https://github.com/netty/netty/issues/2509.
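A hedged usage sketch, assuming the usual Netty pattern of annotating a method whose newer-JDK call is guarded elsewhere by a runtime version check (the method shown is illustrative):
```
// The annotation documents exactly which method relies on a post-Java-6 API.
@SuppressJava6Requirement(reason = "Guarded by version check")
private static void addSuppressedIfPossible(Throwable target, Throwable suppressed) {
    target.addSuppressed(suppressed);     // Throwable.addSuppressed(...) exists since Java 7
}
```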
### Motivation:
I've now found two libraries that use Netty to be vulnerable to [CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')](https://cwe.mitre.org/data/definitions/113.html) due to using `new DefaultHttpHeaders(false)`.
Some part of me hopes that this warning will help dissuade library authors from disabling this important security check.
### Modification:
Add documentation to `DefaultHttpHeaders(boolean)` to warn about the implications of `false`.
### Result:
This improves the documentation on `DefaultHttpHeaders`.
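For context, the pattern the new javadoc warns about, next to the safe default:
```
// Validation disabled: header values are not checked for CR/LF, which enables
// response splitting if attacker-controlled data ends up in a header value.
HttpHeaders unsafe = new DefaultHttpHeaders(false);

// The default constructor validates names/values and should be preferred.
HttpHeaders safe = new DefaultHttpHeaders();
```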
Motivation:
Optimize the QueryStringEncoder for lower memory overhead and higher encode speed.
Modification:
Encode the space character to '+' directly, and reuse the uriStringBuilder rather than creating a new one.
Result:
Improved performance
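An illustrative fragment of the two optimizations (hedged sketch; `appendEncoded` is a hypothetical stand-in for the %-escaping path):
```
// ' ' is written as '+' directly, and the same uriStringBuilder instance is reused
// instead of allocating a new builder per component.
if (c == ' ') {
    uriStringBuilder.append('+');
} else {
    appendEncoded(uriStringBuilder, c);
}
```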
Motivation:
When parsing HTTP headers, special care needs to be taken when whitespace is detected in the header name.
Modifications:
- Ignore whitespace when decoding response (just like before)
- Throw exception when whitespace is detected during parsing
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9571