Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues and got rid of false alarms in the LGTM report.
Motivation:
We have a few classes in which we store and reuse static instances of various exceptions. When doing so it is important to also override fillInStackTrace() to prevent leaking the ClassLoader via the internal backtrace field.
Modifications:
- Add overrides of fillInStackTrace() where needed (see the sketch below)
- Move ThrowableUtil usage into the static methods
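A minimal sketch of such an override, assuming a hypothetical cached exception instance (not Netty's exact code): with fillInStackTrace() returning this, no backtrace is captured, so the defining ClassLoader cannot be pinned by the shared instance.
```
public final class StaticExceptionExample {
    // Shared instance reused for every failure of the same kind.
    static final IllegalStateException CLOSED = new IllegalStateException("closed") {
        @Override
        public synchronized Throwable fillInStackTrace() {
            // Skip capturing the stack; the internal backtrace field stays null.
            return this;
        }
    };

    private StaticExceptionExample() { }
}
```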
Result:
Fixes https://github.com/netty/netty/pull/10686
Motivation:
The link to the OWASP HttpOnly cookie page in the Cookie class is broken and needs to be fixed.
Modification:
Updated the broken link.
Result:
Broken link fixed for better documentation.
Motivation:
We should use ObjectUtil for checking if Compression parameters are in range. This will reduce LOC and make code more readable.
Modification:
Used ObjectUtil
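A hedged before/after sketch of the idea; checkInRange is assumed to be available in the ObjectUtil version in use, and the class and field names are illustrative only.
```
import io.netty.util.internal.ObjectUtil;

public final class CompressionOptionsExample {
    private final int compressionLevel;

    CompressionOptionsExample(int compressionLevel) {
        // Before: if (compressionLevel < 0 || compressionLevel > 9) { throw new IllegalArgumentException(...); }
        // After: a single, readable range check.
        this.compressionLevel = ObjectUtil.checkInRange(compressionLevel, 0, 9, "compressionLevel");
    }
}
```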
Result:
More readable code
Motivation:
Avoid implicit conversions to narrower types in
AbstractMemoryHttpData and Bzip2HuffmanStageEncoder classes
reported by LGTM.
Modifications:
Updated the classes to avoid implicit casting to narrower types.
It does not look like an integer overflow is possible there,
therefore no overflow checks were added.
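Purely illustrative (not the actual Netty code), this is the kind of change LGTM's implicit-narrowing check asks for: make the long-to-int conversion explicit and bounded.
```
public final class NarrowingExample {
    static int toChunkSize(long totalLength) {
        // Explicit, bounded cast instead of an implicit long -> int conversion.
        return (int) Math.min(totalLength, Integer.MAX_VALUE);
    }
}
```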
Result:
No warnings about implicit conversions to narrower types.
Motivation:
LGTM reported that WebSocketUtil uses MD5 and SHA-1
that are considered weak. Although those algorithms
are insecure, they are required by draft-ietf-hybi-thewebsocketprotocol-00
specification that is implemented in the corresponding WebSocket
handshakers. Once the handshakers are removed, WebSocketUtil can be
updated to stop using those weak hash functions.
Modifications:
Added SuppressWarnings annotations.
Result:
Suppressed warnings.
Add a validation check for the WebSocket path
Motivation:
I added a WebSocket handler to a custom server built with Netty,
adding WebSocketServerProtocolHandler to my channel pipeline.
It works, but I found that it also accepts "/websocketabc" (websocketPath is "/websocket").
Modification:
`isWebSocketPath()` method of `WebSocketServerProtocolHandshakeHandler` now checks that "startsWith" applies to the first URL path component, rather than the URL as a string.
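A hedged sketch of the stricter check (names and shape are illustrative, not Netty's exact code): accept "/websocket" and "/websocket/...", but reject "/websocketabc".
```
final class WebSocketPathCheckExample {
    static boolean isWebSocketPath(String websocketPath, String requestPath) {
        // requestPath is assumed to already have its query string stripped.
        return requestPath.equals(websocketPath) || requestPath.startsWith(websocketPath + "/");
    }
}
```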
Result:
Requests to "/websocketabc" are no longer passed to handlers for requests that start with "/websocket".
Motivation:
Consider a scenario where the client initiates a WebSocket handshake but, before the handshake is complete,
the channel is closed for some reason. In such a scenario, the handshake timeout scheduled on the executor
is not cleared. The reason it is not cleared is that in such cases the handshakePromise is never completed.
Modifications:
This change completes the handshakePromise exceptionally in the channelInactive callback, if it has not been
completed so far. This triggers the callback on completion of the promise which clears the timeout scheduled
on the executor.
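A hedged, simplified sketch of the idea (the field wiring is hypothetical, not the exact Netty handler code): failing the promise triggers its listener, which cancels the scheduled timeout task.
```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.websocketx.WebSocketHandshakeException;

final class HandshakeTimeoutCleanupExample extends ChannelInboundHandlerAdapter {
    private final ChannelPromise handshakePromise;

    HandshakeTimeoutCleanupExample(ChannelPromise handshakePromise) {
        this.handshakePromise = handshakePromise;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (!handshakePromise.isDone()) {
            // Completing the promise (exceptionally) lets its listener cancel the timeout.
            handshakePromise.tryFailure(
                    new WebSocketHandshakeException("channel closed before the handshake completed"));
        }
        super.channelInactive(ctx);
    }
}
```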
This PR also adds a test case which reproduces the scenario described above. The test case fails before the
fix is added and succeeds when the fix is applied.
Result:
After this change, the timeout scheduled on the executor will be cleared, thus freeing up thread resources.
Motivation:
If delete-on-exit is enabled and the program runs for a long time, the list of files tracked by DeleteOnExitHook keeps growing.
This results in a memory leak.
Modification:
A custom DeleteOnExitHook is introduced. If delete-on-exit is turned on, the temporary file is registered with this hook when it is created. After the request ends and the corresponding resources are released, the file is removed from the hook again, so the tracked set no longer grows indefinitely.
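An illustrative sketch of such a removable hook (standard JDK APIs only; not Netty's actual implementation): unlike File.deleteOnExit(), whose internal list only ever grows, entries can be removed again once the file has been cleaned up.
```
import java.io.File;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class RemovableDeleteOnExitHook {
    private static final Set<String> FILES = ConcurrentHashMap.newKeySet();

    static {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            for (String path : FILES) {
                new File(path).delete(); // best-effort cleanup at JVM shutdown
            }
        }));
    }

    static void add(String path) { FILES.add(path); }

    static void remove(String path) { FILES.remove(path); } // called once the file is released
}
```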
Result:
Fixes https://github.com/netty/netty/issues/10351
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private); usage sketched below
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
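A hedged usage sketch of the new static helpers (exact signatures assumed from this change): accessibility is checked through ByteBufUtil instead of reading the volatile refCnt() directly.
```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;

final class AccessibilityCheckExample {
    static ByteBuf contentOf(ByteBuf content) {
        // Throws once the buffer has been released.
        ByteBufUtil.ensureAccessible(content);
        return content;
    }

    static boolean isUsable(ByteBuf buf) {
        return ByteBufUtil.isAccessible(buf);
    }
}
```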
Result
- More efficient accessibility checks in more places
Motivation:
These are small fixes for PR #10453, following the conversation between @njhill and @normanmaurer.
Modification:
Simple refactoring that takes the remainder into account when calculating the size.
Result:
Behavior is correct
Motivation:
AbstractDiskHttpData#getChunk opens and closes the fileChannel every time it is invoked;
as a result, the uploaded file is corrupted. This is a regression caused by #10270.
Modifications:
- Close the fileChannel only if everything was read or an exception is thrown
- Add unit test
Result:
AbstractDiskHttpData#getChunk closes fileChannel only if everything was read or an exception is thrown
Motivation:
A regression appeared after the changes in fix #10360.
The main problem is that `buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize)`
does not advance the channel position after writing.
Modification:
Manually advance the position according to the number of bytes written.
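A hedged sketch of the workaround (the helper and names are illustrative): getBytes(index, FileChannel, position, length) writes at an absolute position without moving the channel, so the position is advanced manually by the number of bytes actually written.
```
import io.netty.buffer.ByteBuf;
import java.io.IOException;
import java.nio.channels.FileChannel;

final class ChannelPositionExample {
    static void writeAndAdvance(ByteBuf buffer, FileChannel fileChannel, int localsize) throws IOException {
        int written = buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize);
        fileChannel.position(fileChannel.position() + written);
    }
}
```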
Result:
Fixes #10449.
Motivation:
Since https://github.com/netty/netty/pull/9865 (Netty 4.1.44) the
default behavior of the HttpObjectDecoder has been to reject any HTTP
message that is found to have multiple Content-Length headers when
decoding. This behavior is well-justified as per the risks outlined in
https://github.com/netty/netty/issues/9861, however, we can see from the
cited RFC section that there are multiple possible options offered for
responding to this scenario:
> If a message is received that has multiple Content-Length header
> fields with field-values consisting of the same decimal value, or a
> single Content-Length header field with a field value containing a
> list of identical decimal values (e.g., "Content-Length: 42, 42"),
> indicating that duplicate Content-Length header fields have been
> generated or combined by an upstream message processor, then the
> recipient MUST either reject the message as invalid or replace the
> duplicated field-values with a single valid Content-Length field
> containing that decimal value prior to determining the message body
> length or forwarding the message.
https://tools.ietf.org/html/rfc7230#section-3.3.2
Netty opted for the first option (rejecting as invalid), which seems
like the safest, but the second option (replacing duplicate values with
a single value) is also valid behavior.
Modifications:
* Introduce "allowDuplicateContentLengths" parameter to
HttpObjectDecoder (defaulting to false); usage sketched after this list.
* When set to true, will allow multiple Content-Length headers only if
they are all the same value. The duplicated field-values will be
replaced with a single valid Content-Length field.
* Add new parameterized test class for testing different variations of
multiple Content-Length headers.
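A hedged usage sketch: the trailing boolean is the new allowDuplicateContentLengths flag. The exact constructor overload is assumed from this change and may differ between versions; the other arguments are typical default values.
```
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.HttpRequestDecoder;

final class AllowDuplicateContentLengthsExample {
    static void configure(ChannelPipeline pipeline) {
        // maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize, allowDuplicateContentLengths
        pipeline.addLast(new HttpRequestDecoder(4096, 8192, 8192, true, 128, true));
    }
}
```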
Result:
This is a backwards-compatible change with no functional change to the
existing behavior.
Note that the existing logic would result in NumberFormatExceptions
for header values like "Content-Length: 42, 42". The new logic correctly
reports these as IllegalArgumentException with the proper error message.
Additionally note that this behavior is only applied to HTTP/1.1, but I
suspect that we may want to expand that to include HTTP/1.0 as well...
That behavior is not modified here to minimize the scope of this change.
Motivation:
AbstractDiskHttpData may cause a memory leak when a CompositeByteBuf is used. This happened because we may call copy() but actually never release the newly created ByteBuf.
Modifications:
- Remove copy() call and just use ByteBuf.getBytes(...) which will internally handle the writing to the FileChannel without any extra copies that need to be released later on.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10354
Motivation
HttpObjectDecoder and its associated classes make frequent use of
default values for maxInitialLineLength, maxHeaderSize, maxChunkSize,
etc. Today, these defaults are defined in-line in constructors and
duplicated across many classes. This repetition is more prone to error
and inconsistencies.
Furthermore, due to the current lack of builder support, if a user wants
to change just one of these values (e.g., maxHeaderSize), they are also
required to know and repeat the other default values (e.g.,
maxInitialLineLength and maxChunkSize).
The primary motivation for this change is that, as we are considering adding
another constructor parameter (for multiple Content-Length behavior),
appending this parameter may require some users to have prior knowledge
of the default initialBufferSize, and it would be cleaner to allow them
to reference the default constant.
Modifications
* Consolidate the HttpObjectDecoder default values into public constants
* Reference these constants where possible
Result
No functional change. Additional telescoping constructors will be easier
and safer to write. Users may have an easier experience changing single
parameters.
Motivation:
`containsValue()` will check whether there are multiple values defined for the specific header name. We need to use this method instead of `contains()` for the `Transfer-Encoding` header to cover the case where multiple values are defined, e.g. `Transfer-Encoding: gzip, chunked`.
Modification:
Change from `contains()` to `containsValue()` in `HttpUtil.isTransferEncodingChunked()` method.
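A hedged sketch of what the change amounts to (the wrapper class is illustrative): containsValue() also matches "chunked" when it appears inside a comma-separated list such as "Transfer-Encoding: gzip, chunked", whereas contains() only matches a header whose whole value equals "chunked".
```
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpMessage;

final class TransferEncodingCheckExample {
    static boolean isTransferEncodingChunked(HttpMessage message) {
        // Matches "chunked" as a whole value or as one element of a comma-separated list.
        return message.headers().containsValue(
                HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED, true);
    }
}
```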
Result:
Fixes #10320
Motivation:
Currently we pass custom WebSocket handshake response headers to a `WebSocketServerHandshaker`, but they can contain reserved headers (e.g. Connection, Upgrade, Sec-WebSocket-Accept), which leads to duplication because we use response.headers().add(..) instead of response.headers().set(..).
Modification:
In each implementation from `WebSocketServerHandshaker00`, ..., `WebSocketServerHandshaker13`, replace add(..) with set(..) for reserved response headers.
Result:
Less error-prone
Motivation:
`FileChannel.read()` may throw an IOException. We must handle this in case an `I/O` error occurs.
Modification:
Place the `FileChannel.read()` method call in the `try-finally` block.
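A hedged, self-contained sketch of the pattern applied in this and the related fd-leak fixes below (the file handling is illustrative, not the actual Netty code): closing in a finally block means an IOException from read() (or force(), write(), setLength()) can no longer leak the descriptor.
```
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

final class FdLeakFixExample {
    static ByteBuffer readFully(File file) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        try {
            FileChannel channel = raf.getChannel();
            ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
            while (buffer.hasRemaining() && channel.read(buffer) >= 0) {
                // keep reading until EOF or the buffer is full
            }
            buffer.flip();
            return buffer;
        } finally {
            raf.close(); // also closes the channel, even if read() threw
        }
    }
}
```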
Result:
Avoid fd leak.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivations
-----------
HttpPostStandardRequestDecoder was changed in 4.1.50 to provide its own
ByteBuf UrlDecoder. Prior to this change, it was using the decodeComponent
method from QueryStringDecoder which decoded + characters to
spaces. This behavior needs to be preserved to maintain backward
compatibility.
Modifications
-------------
Changed HttpPostStandardRequestDecoder to detect + bytes and decode them
to spaces. Added a test.
Results
-------
Addresses issue #10284
Motivation:
We can't reuse the ChannelPromise, as trying to fulfill it multiple times will cause an error.
Modifications:
- Use a new promise and chain it with the old one (sketched below)
- Add unit test
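A hedged sketch of the chaining idea (the helper is illustrative): each attempt gets a fresh promise whose outcome is forwarded to the caller's original promise, so no promise is ever completed twice.
```
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelPromise;

final class ChainedPromiseExample {
    static ChannelPromise chained(Channel channel, ChannelPromise original) {
        ChannelPromise fresh = channel.newPromise();
        fresh.addListener((ChannelFutureListener) f -> {
            if (f.isSuccess()) {
                original.trySuccess();
            } else {
                original.tryFailure(f.cause());
            }
        });
        return fresh;
    }
}
```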
Result:
Fixes https://github.com/netty/netty/issues/10240
Motivation:
We should not use Unpooled to allocate buffers if possible to ensure we can make use of pooling etc.
Modifications:
- Only allocate a buffer if really needed
- Use the ByteBufAllocator of the offered ByteBuf
- Ensure we do not use buffer.copy() but explicitly allocate a buffer and then copy into it, to not hit the limit of maxCapacity()
Result:
Improve allocations
Motivation:
We need to release all ByteBufs that we allocate to prevent leaks. We missed releasing the ByteBufs that are used for aggregation in two cases.
Modifications:
Add release() calls
Result:
No more memory leak in HttpPostMultipartRequestDecoder
Motivations
-----------
There is no need to copy the "offered" ByteBuf in HttpPostRequestDecoder
when the first HttpContent ByteBuf is also the last (LastHttpContent) as
the full content can immediately be decoded. No extra bookkeeping needed.
Modifications
-------------
HttpPostMultipartRequestDecoder
- Retain the first ByteBuf when it is both the first HttpContent offered
to the decoder and is also LastHttpContent.
- Retain slices of the final buffers values
Results
-------
ByteBufs of FullHttpMessage decoded by HttpPostRequestDecoder are no longer
unnecessarily copied. Attributes are extracted as retained slices when
the content is multi-part. Non-multi-part content continues to return
Unpooled buffers.
Partially addresses issue #10200
Motivation:
`AbstractHttpData.checkSize` may throw an IOException if we set the max size limit via `AbstractHttpData.setMaxSize`. However, if this exception happens, the `AbstractDiskHttpData.file` and the `AbstractHttpData.size` are still modified. In other words, it may break the failure atomicity here.
Modification:
Just move up the size check.
Result:
Keep the failure atomicity even if `AbstractHttpData.checkSize` fails.
Motivation:
An unexpected IOException may be thrown from `FileChannel.force`. If it happens, the `FileChannel.close` may not be invoked.
Modification:
Place the `FileChannel.close` in a finally block.
Result:
Avoid fd leak.
Motivation:
`RandomAccessFile.setLength` may throw an IOException. We must handle this in case an `I/O` error occurs.
Modification:
Place the `RandomAccessFile.setLength` method call in the `try-finally` block.
Result:
Avoid fd leak.
Motivation:
`FileChannel.force` may throw an IOException. A fd leak may happen here.
Modification:
Close the fileChannel in a finally block.
Result:
Avoid fd leak.
Motivation:
An `IOException` may be thrown from `FileChannel.write` or `FileChannel.force` and cause an fd leak.
Modification:
Close the file in a finally block.
Result:
Avoid fd leak.
Motivation:
An exception may occur between ByteBuf's allocation and release. So I think it's a good idea to place the release operation in a finally block.
Modification:
Release the temporary ByteBuf in finally blocks.
Result:
Avoid ByteBuf leak.
Motivation:
An IOException may be thrown from FileChannel.read and cause an fd leak.
Modification:
Close the file when IOException occurs.
Result:
Avoid fd leak.
Motivation:
Currently, (HTTP) disk-based attributes or uploads are globally configured with a single directory and a single global delete-on-exit flag. This does not fit many use cases well, in particular the case where you have multiple servers in the same JVM.
Modification:
Make it configurable per attribute/file upload.
Result:
This PR duplicates the Disk* constructors to add baseDir and deleteOnExit parameters and wires them into the default HTTP data factory.
Motivation:
An IOException may be thrown from InputStream.read or the checkSize method and cause a ByteBuf leak.
Modification:
Release the ByteBuf when IOException occurs.
Result:
Avoid ByteBuf leak.
Motivation:
At the moment HttpObjectDecoder does not limit the number of control chars. These should be counted with the initial line and so a limit should be exposed.
Modifications:
- Change LineParser to also be used when skipping control chars and so enforce a limit
- Add various tests to ensure that limit is enforced
Result:
Fixes https://github.com/netty/netty/issues/10111
Motivation:
In our WebSocketClientHandshaker* implementations we "hardcode" the version number to use. This is error-prone; we should use the WebSocketVersion instead, so we don't need to maintain the value in multiple places. Besides this, we can also use an AsciiString to improve performance.
Modifications:
- Use WebSocketVersion.toAsciiString
Result:
Less stuff to maintain and small performance win
Motivation:
The user may need to provide a specific HOST header. We should not override it when specified during handshake.
Modifications:
Check if a custom HOST header is already provided by the user and, if so, don't override it.
Result:
Fixes https://github.com/netty/netty/issues/10101
Motivation:
Netty currently does not support the SameSite attribute for response cookies (see issue #8161 for discussion).
Modifications:
The attribute has been added to the DefaultCookie class as a quick fix since adding new methods to the Cookie interface would be backwards-incompatible.
ServerCookieEncoder and ClientCookieDecoder have been updated accordingly to process this value. No validation for allowed values (Lax, None, Strict) has been implemented.
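A hedged usage sketch of the new attribute; the setSameSite accessor and the location of the SameSite enum are assumed from this change and may differ between versions.
```
import io.netty.handler.codec.http.cookie.CookieHeaderNames;
import io.netty.handler.codec.http.cookie.DefaultCookie;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;

final class SameSiteCookieExample {
    static String encodeSessionCookie() {
        DefaultCookie cookie = new DefaultCookie("session", "abc123");
        cookie.setSameSite(CookieHeaderNames.SameSite.Strict); // no value validation is performed
        return ServerCookieEncoder.STRICT.encode(cookie);
    }
}
```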
Result:
Response cookies with the SameSite attribute set can be read or written by Netty.
Co-authored-by: David Latorre <a-dlatorre@hotels.com>
Motivation:
WebSocketClientHandshaker#upgradeUrl doesn't correctly compute the relative URL when the path is empty and produces URLs such as `?access_token=foo` instead of `/?access_token=foo`.
Modifications:
* fix WebSocketClientHandshaker#upgradeUrl
* add tests for urls without path, with and without query
Result:
WebSocketClientHandshaker properly connects to URLs without a path.
Motivation:
I am receiving a multipart/form-data upload from Postman. The filename contains Chinese characters and therefore some invalid chars. We should ensure all of these are removed before trying to decode.
Modification:
Ensure all invalid characters are removed
Result:
Fixes #10087
Co-authored-by: liming.zhang <liming.zhang@luckincoffee.com>
Motivation:
When the HttpContentCompressor is put on an alternate EventExecutor, the order of events should be
preserved so that the compressed content is correctly created.
Modifications:
- checking that the executor in the ChannelHandlerContext is the same executor as the current executor when evaluating if the handler should be skipped.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10067
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
Our parsing of the initial line / HTTP headers treated some characters as separators that should instead trigger an exception during parsing.
Modifications:
- Tighten up parsing of the initial line by following the recommendations of RFC 7230
- Restrict separators to OWS for http headers
- Add unit test
Result:
Stricter parsing of HTTP1
Motivation:
We had some System.out.println(...) calls in a test which seem to be left over from debugging.
Modifications:
Remove System.out.println(...)
Result:
Code cleanup
Motivation
As part of a recent commit for issue
https://github.com/netty/netty/issues/9861 the HttpObjectDecoder was
changed to throw an IllegalArgumentException (and produce a failed
decoder result) when decoding a message with both "Transfer-Encoding:
chunked" and "Content-Length".
While it seems correct for Netty to try to sanitize these types of
messages, the spec explicitly mentions that the Content-Length header
should be *removed* in this scenario.
Both Nginx 1.15.9 and Tomcat 9.0.31 also opt to remove the header:
Tomcat: b693d7c198/java/org/apache/coyote/http11/Http11Processor.java (L747-L755)
Nginx: 0ad4393e30/src/http/ngx_http_request.c (L1946-L1953)
Modifications
* Change the default behavior from throwing an IllegalArgumentException
to removing the "Content-Length" header
* Extract the behavior to a new protected method,
handleChunkedEncodingWithContentLength(), that can be overridden to
change this behavior (or capture metrics)
Result
Messages of this nature will now be successfully decoded and have their
"Content-Length" header removed, rather than creating invalid messages
(decoder result failures). Users will be allowed to override and
configure this behavior.
Motivation:
Need tests to ensure that CVE-2020-7238 is fixed.
Modifications:
Added two test cases into HttpRequestDecoderTest which check that
no whitespace is allowed before the Transfer-Encoding header.
Result:
Improved test coverage for #9861
Motivation:
Avoid allocation of default static `WebSocketServerProtocolConfig` and `WebSocketClientProtocolConfig` configs. Prefer compile time constants instead.
Modification:
The static field holding the default config object is replaced with a constructor that uses default field values.
Result:
No more default config allocation or static field for it; compile-time constants are used instead.
…in order to minimize pipeline
Motivation:
Handling of `WebSocketCloseFrame` is part of the websocket protocol, so it's logical to put it within the `WebSocketProtocolHandler`. Also, removing `WebSocketCloseFrameHandler` shortens the channel pipeline.
Modification:
- `WebSocketCloseFrameHandler` code merged into `WebSocketProtocolHandler`. `WebSocketCloseFrameHandler` not added to the pipeline anymore
- Added additional constructor to `WebSocketProtocolHandler`
- `WebSocketProtocolHandler` now implements `ChannelOutboundHandler` and implements basic methods from it
Result:
`WebSocketCloseFrameHandler` is no longer used.
Fixes https://github.com/netty/netty/issues/9944
Motivation:
At the moment we add a handler which will respond with 403 Forbidden if a websocket handshake is in progress (and after). This does not make much sense, as it is unexpected for a remote peer to send another HTTP request once the handshake has started. In this case it is much better to let the websocket decoder bail out.
Modifications:
Remove usage of forbiddenHttpRequestResponder
Result:
Fixes https://github.com/netty/netty/issues/9913
Motivation:
We should remove WebSocketServerExtensionHandler from pipeline after successful WebSocket upgrade even if the client has not selected any extensions.
Modification:
Remove handler once upgrade is complete and no extensions are used.
Result:
Fixes #9939.
Motivation:
Utf8FrameValidator must release the input buffer if the validation fails to ensure no memory leak happens
Modifications:
- Catch exception, release frame and rethrow
- Adjust unit test
Result:
Fixes https://github.com/netty/netty/issues/9906
Motivation:
Currently, characters are appended to the encoded string char-by-char even when no encoding is needed. We can instead separate out a codepath that appends the entire string in one go, for better `StringBuilder` allocation performance.
Modification:
Only go into char-by-char loop when finding a character that requires encoding.
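A hedged, self-contained sketch of the fast path described above (not Netty's actual code, and surrogate pairs are ignored for brevity): scan for the first character that needs encoding and, if none is found, append the whole string in one call.
```
import java.nio.charset.StandardCharsets;

final class FastPathAppendExample {
    static void appendComponent(StringBuilder out, String s) {
        for (int i = 0; i < s.length(); i++) {
            if (needsEncoding(s.charAt(i))) {
                out.append(s, 0, i);              // bulk-append the already-safe prefix
                appendEncodedSlowPath(out, s, i); // per-character loop only from here on
                return;
            }
        }
        out.append(s); // common case: nothing to encode, single bulk append
    }

    private static boolean needsEncoding(char c) {
        // Unreserved characters per RFC 3986; everything else gets percent-encoded.
        return !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                || (c >= '0' && c <= '9') || c == '-' || c == '.' || c == '_' || c == '~');
    }

    private static void appendEncodedSlowPath(StringBuilder out, String s, int from) {
        for (int i = from; i < s.length(); i++) {
            char c = s.charAt(i);
            if (!needsEncoding(c)) {
                out.append(c);
            } else {
                for (byte b : String.valueOf(c).getBytes(StandardCharsets.UTF_8)) {
                    out.append('%').append(String.format("%02X", b));
                }
            }
        }
    }
}
```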
Result:
The results aren't so clear with noise on my hot laptop - the biggest impact is on long strings, both to reduce resizes of the buffer and also to reduce complexity of the loop. I don't think there's a significant downside though for the cases that hit the slow path.
After
```
Benchmark Mode Cnt Score Error Units
QueryStringEncoderBenchmark.longAscii thrpt 6 1.406 ± 0.069 ops/us
QueryStringEncoderBenchmark.longAsciiFirst thrpt 6 0.046 ± 0.001 ops/us
QueryStringEncoderBenchmark.longUtf8 thrpt 6 0.046 ± 0.001 ops/us
QueryStringEncoderBenchmark.shortAscii thrpt 6 15.781 ± 0.949 ops/us
QueryStringEncoderBenchmark.shortAsciiFirst thrpt 6 3.171 ± 0.232 ops/us
QueryStringEncoderBenchmark.shortUtf8 thrpt 6 3.900 ± 0.667 ops/us
```
Before
```
Benchmark Mode Cnt Score Error Units
QueryStringEncoderBenchmark.longAscii thrpt 6 0.444 ± 0.072 ops/us
QueryStringEncoderBenchmark.longAsciiFirst thrpt 6 0.043 ± 0.002 ops/us
QueryStringEncoderBenchmark.longUtf8 thrpt 6 0.047 ± 0.001 ops/us
QueryStringEncoderBenchmark.shortAscii thrpt 6 16.503 ± 1.015 ops/us
QueryStringEncoderBenchmark.shortAsciiFirst thrpt 6 3.316 ± 0.154 ops/us
QueryStringEncoderBenchmark.shortUtf8 thrpt 6 3.776 ± 0.956 ops/us
```
Motivation:
https://github.com/netty/netty/pull/9883 added a bug-fix for the Comparator in ClientCookieEncoder but did not add a testcase.
Modifications:
- Add testcase
- Simplify code
Result:
Include a test to ensure we do not regress.
Motivation:
The current implementation causes IllegalArgumentExceptions to be thrown on Java 11.
The current implementation violates the comparison contract for two cookies C1 and C2 with the same path length, since it reports both C1 < C2 and C2 < C1. Returning 0 (equality) does not, since C1 == C2 and C2 == C1. See #9881
Modification:
Return equality instead of less than on same path length.
Result:
Fixes #9881.
Motivation:
In Java, it is almost always at least slower to use `ByteBuffer` than `byte[]` without pooling or I/O. `QueryStringDecoder` can use `byte[]` with arguably simpler code.
Modification:
Replace `ByteBuffer` / `CharsetDecoder` with `byte[]` and `new String`
Result:
After
```
Benchmark Mode Cnt Score Error Units
QueryStringDecoderBenchmark.noDecoding thrpt 6 5.612 ± 2.639 ops/us
QueryStringDecoderBenchmark.onlyDecoding thrpt 6 1.393 ± 0.067 ops/us
QueryStringDecoderBenchmark.mixedDecoding thrpt 6 1.223 ± 0.048 ops/us
```
Before
```
Benchmark Mode Cnt Score Error Units
QueryStringDecoderBenchmark.noDecoding thrpt 6 6.123 ± 0.250 ops/us
QueryStringDecoderBenchmark.onlyDecoding thrpt 6 0.922 ± 0.159 ops/us
QueryStringDecoderBenchmark.mixedDecoding thrpt 6 1.032 ± 0.178 ops/us
```
I notice #6781 switched from an array to `ByteBuffer` but I can't find any motivation for that in the PR. Unit tests pass fine with an array and we get a reasonable speed bump.
Motivation:
RFC7230 states that we should not accept multiple content-length headers and also should not accept a content-length header in combination with transfer-encoding: chunked
Modifications:
- Check for multiple content-length headers and if found mark message as invalid
- Check if we found a content-length header and also a transfer-encoding: chunked and if so mark the message as invalid
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9861
Motivation:
Technically speaking it's valid to have HTTP headers with no values, so we should support them. That said, we need to detect if these are "generated" because of an "invalid" fold.
Modifications:
- Detect if a colon is missing when parsing headers.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9866
Motivation:
A client can split data into different numbers of fragments, and sometimes the last frame may contain trailing garbage data that doesn't affect the decompression process.
Modification:
Added a check: if the last frame is a `ContinuationWebSocketFrame` and the decompressed data is empty,
don't throw an exception.
Result:
Fixes #9770
Motivation:
Multiple cookie values can be present in a single header.
Modification:
Add `decodeAll` overload which returns all cookies
Result:
Fixes #7210
Note:
This change is not as prescriptive as the ideas brought up in the linked issue. Changing the Set implementation or the equals/compareTo definition is likely a breaking change, so those are not practical.
Motivation:
HttpPostStandardRequestDecoder may throw multiple different exceptions in the constructor, which could lead to memory leaks. We need to guard against this by explicitly catching all of them and rethrowing after we have released any allocated memory.
Modifications:
- Catch, destroy and rethrow in any case
- Ensure we correctly wrap IllegalArgumentExceptions
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/9829
### Motivation:
Those who need 'Origin' or 'Sec-WebSocket-Origin' headers should provide them explicitly, as stated in the WebSocket specs.
E.g. through custom headers:
```
HttpHeaders customHeaders = new DefaultHttpHeaders()
        .add(HttpHeaderNames.ORIGIN, "http://localhost:8080");
new WebSocketClientProtocolHandler(
        new URI("ws://localhost:1234/test"), WebSocketVersion.V13, subprotocol,
        allowExtensions, customHeaders, maxFramePayloadLength, handshakeTimeoutMillis);
```
### Modification:
* Remove enforced origin headers.
* Update tests
### Result:
Fixes #9673: Origin header is always sent from WebSocket client
Motivation:
By default CloseWebSocketFrames are handled automatically.
However, I need to manually manage their sending on both the client and server sides.
Modification:
Automatically send a close frame on channel close when one has not been explicitly sent before.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
### Motivation:
Introduction of `WebSocketDecoderConfig` made our server-side code more elegant and simpler to support.
However, there is still some problem with maintenance and new-feature development for the WebSocket codecs (`WebSocketServerProtocolHandler`, `WebSocketClientProtocolHandler`).
Particularly, it makes me ~~crying with blood~~ extremely sad to add a new parameter and yet another constructor to these handlers when I want to contribute a new feature.
### Modification:
I've extracted all parameters for client and server WebSocket handlers into config/builder structures, like it was made for decoders in PR #9116.
### Result:
* Fixes #9698: Simplify WebSocket handlers constructor arguments hell
* Unblock further development in this module (configurable close frame handling on server-side; automatic close-frame sending, when missed; memory leaks on protocol violations; etc...)
Bonuses:
* All defaults are gathered in one place and could be easily found/reused.
* New API greatly simplifies usage, but does NOT allow inheritance or modification.
* New API would simplify long-term maintenance of WebSockets module.
### Example
```
WebSocketClientProtocolConfig config = WebSocketClientProtocolConfig.newBuilder()
    .webSocketUri("wss://localhost:8443/fx-spot")
    .subprotocol("trading")
    .handshakeTimeoutMillis(15000L)
    .build();
ctx.pipeline().addLast(new WebSocketClientProtocolHandler(config));
```
Motivation:
At the moment we fail to poll the method queue when we see an informational response code. This can lead to out-of-sync request / response pairs when we later try to compare them.
Modifications:
Always poll the queue correctly
Result:
Always compare the correct request / response pairs
Motivation:
HTTP 102 (WebDAV) is not correctly treated as an informational response
Modification:
Delegate all `1XX` status codes to superclass, not just `100` and `101`.
Result:
Supports WebDAV response.
Removes a huge maintenance [headache](https://github.com/line/armeria/pull/2210) in Armeria which has forked the class for these features
Motivation:
HttpPostRequestDecoder.splitHeaderContentType() throws a StringIndexOutOfBoundsException when it parses a Content-Type header that starts with a semicolon ;. We should skip the execution for incorrect multipart form data.
Modification:
Avoid invocation of HttpPostRequestDecoder#splitHeaderContentType(...) for incorrect multipart form data content-type.
Result:
Fixes#8554
Motivation:
We can use the `@SuppressJava6Requirement` annotation to be more precise about when we use Java6+ APIs. This helps us to ensure we always protect these places.
Modifications:
Make use of `@SuppressJava6Requirement` explicit
Result:
Fixes https://github.com/netty/netty/issues/2509.
### Motivation:
I've now found two libraries that use Netty to be vulnerable to [CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')](https://cwe.mitre.org/data/definitions/113.html) due to using `new DefaultHttpHeaders(false)`.
Some part of me hopes that this warning will help dissuade library authors from disabling this important security check.
### Modification:
Add documentation to `DefaultHttpHeaders(boolean)` to warn about the implications of `false`.
### Result:
This improves the documentation on `DefaultHttpHeaders`.