Motivation:
We should use ObjectUtil to check that compression parameters are in range. This will reduce LOC and make the code more readable.
Modification:
Used ObjectUtil for the range checks.
Result:
More readable code
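A hedged sketch of the pattern (assuming `ObjectUtil.checkInRange` is the helper used; the exact helper may differ):

```java
// before: manual range check
if (compressionLevel < 0 || compressionLevel > 9) {
    throw new IllegalArgumentException(
            "compressionLevel: " + compressionLevel + " (expected: 0-9)");
}
this.compressionLevel = compressionLevel;

// after: ObjectUtil validates and returns the value in one line
this.compressionLevel = ObjectUtil.checkInRange(compressionLevel, 0, 9, "compressionLevel");
```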
Motivation:
Avoid implicit conversions to narrower types in the
AbstractMemoryHttpData and Bzip2HuffmanStageEncoder classes,
as reported by LGTM.
Modifications:
Updated the classes to avoid implicit casting to narrower types.
It doesn't look like an integer overflow is possible there,
therefore no overflow checks were added.
Result:
No warnings about implicit conversions to narrower types.
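For illustration only (this is not the exact Netty code), the kind of pattern LGTM flags and the explicit fix:

```java
short code = 0b101;
int bits = 3;
// before (flagged): code <<= bits;  // compound assignment hides an int -> short narrowing
code = (short) (code << bits);       // the narrowing is now explicit and intentional
```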
Motivation:
LGTM reported that WebSocketUtil uses MD5 and SHA-1,
which are considered weak. Although those algorithms
are insecure, they are required by draft-ietf-hybi-thewebsocketprotocol-00
specification that is implemented in the corresponding WebSocket
handshakers. Once the handshakers are removed, WebSocketUtil can be
updated to stop using those weak hash functions.
Modifications:
Added SuppressWarnings annotations.
Result:
Suppressed warnings.
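A sketch of the suppression (the exact annotation value may differ; LGTM recognizes alert names such as `java/weak-cryptographic-algorithm`):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

@SuppressWarnings("lgtm[java/weak-cryptographic-algorithm]")
static byte[] md5(byte[] data) {
    try {
        // MD5 is mandated by draft-ietf-hybi-thewebsocketprotocol-00;
        // it is not used for security here.
        return MessageDigest.getInstance("MD5").digest(data);
    } catch (NoSuchAlgorithmException e) {
        throw new InternalError("MD5 not supported on this platform", e);
    }
}
```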
Add a validation check for the WebSocket path
Motivation:
I added a WebSocket handler to a custom server built with Netty,
putting WebSocketServerProtocolHandler first in my channel pipeline.
It does work! But I found that it also passes "/websocketabc" (the configured websocketPath is "/websocket").
Modification:
`isWebSocketPath()` method of `WebSocketServerProtocolHandshakeHandler` now requires that the "startsWith" match end at a URL path-component boundary, rather than matching the URL as a plain string.
Result:
Requests to "/websocketabc" are no longer passed to handlers for requests that starts-with "/websocket".
Motivation:
Consider a scenario when the client initiates a WebSocket handshake but, before the handshake is complete,
the channel is closed for some reason. In such a scenario, the handshake timeout scheduled on the executor
is not cleared. The reason it is not cleared is that in such cases the handshakePromise is not completed.
Modifications:
This change completes the handshakePromise exceptionally in the channelInactive callback, if it has not been
completed so far. This triggers the promise-completion callback, which clears the timeout scheduled
on the executor.
This PR also adds a test case which reproduces the scenario described above. The test case fails before the
fix is added and succeeds when the fix is applied.
Result:
After this change, the timeout scheduled on the executor will be cleared, thus freeing up thread resources.
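A sketch of the fix (simplified; failing the promise fires its completion listener, which cancels the scheduled timeout):

```java
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    if (!handshakePromise.isDone()) {
        // channel closed before the handshake finished
        handshakePromise.tryFailure(new ClosedChannelException());
    }
    super.channelInactive(ctx);
}
```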
Motivation:
If delete-on-exit is enabled and the program runs for a long time, the set of files registered with DeleteOnExitHook keeps growing.
This results in a memory leak.
Modification:
Added a custom delete-on-exit hook. If delete-on-exit is enabled, the temporary file is registered with this hook when it is created. Once the request ends and the corresponding resources are released, the file is removed from the hook again, so the registry no longer grows for the lifetime of the JVM.
Result:
Fixes https://github.com/netty/netty/issues/10351
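A minimal sketch of such a removable hook (class and method names are illustrative, not necessarily Netty's):

```java
import java.io.File;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class DeleteFileOnExitHook {
    private static final Set<String> FILES = ConcurrentHashMap.newKeySet();
    static {
        Runtime.getRuntime().addShutdownHook(new Thread(DeleteFileOnExitHook::runHook));
    }
    static void add(String path) { FILES.add(path); }
    // called once the request ends and resources are released,
    // so the registry no longer grows for the lifetime of the JVM
    static void remove(String path) { FILES.remove(path); }
    private static void runHook() {
        for (String f : FILES) {
            new File(f).delete();
        }
    }
    private DeleteFileOnExitHook() { }
}
```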
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private)
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
Result
- More efficient accessibility checks in more places
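The new static helpers, roughly as described above (sketch):

```java
public static boolean isAccessible(ByteBuf buffer) {
    // delegates to the package-private ByteBuf#isAccessible()
    return buffer.isAccessible();
}

public static ByteBuf ensureAccessible(ByteBuf buffer) {
    if (!buffer.isAccessible()) {
        throw new IllegalReferenceCountException(buffer.refCnt());
    }
    return buffer;
}
```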
Motivation:
Small fixes for PR #10453, following the conversation between @njhill and @normanmaurer.
Modification:
Simple refactor that takes the remainder into account when calculating the size.
Result:
Behavior is correct
Motivation:
AbstractDiskHttpData#getChunk opens and closes the fileChannel every time it is invoked;
as a result the uploaded file is corrupted. This is a regression caused by #10270.
Modifications:
- Close the fileChannel only if everything was read or an exception is thrown
- Add unit test
Result:
AbstractDiskHttpData#getChunk closes fileChannel only if everything was read or an exception is thrown
Motivation:
A regression appeared after the changes in the fix for #10360.
The main problem is that `buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize)`
doesn't change the channel position after writes.
Modification:
Manually set position according to the written bytes.
Result:
Fixes #10449.
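Sketch of the fix: `getBytes(..., FileChannel, position, length)` writes at an absolute position and leaves the channel position untouched, so the position is advanced by the number of bytes actually written:

```java
int written = buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize);
fileChannel.position(fileChannel.position() + written);
```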
Motivation:
Since https://github.com/netty/netty/pull/9865 (Netty 4.1.44) the
default behavior of the HttpObjectDecoder has been to reject any HTTP
message that is found to have multiple Content-Length headers when
decoding. This behavior is well-justified as per the risks outlined in
https://github.com/netty/netty/issues/9861, however, we can see from the
cited RFC section that there are multiple possible options offered for
responding to this scenario:
> If a message is received that has multiple Content-Length header
> fields with field-values consisting of the same decimal value, or a
> single Content-Length header field with a field value containing a
> list of identical decimal values (e.g., "Content-Length: 42, 42"),
> indicating that duplicate Content-Length header fields have been
> generated or combined by an upstream message processor, then the
> recipient MUST either reject the message as invalid or replace the
> duplicated field-values with a single valid Content-Length field
> containing that decimal value prior to determining the message body
> length or forwarding the message.
https://tools.ietf.org/html/rfc7230#section-3.3.2
Netty opted for the first option (rejecting as invalid), which seems
like the safest, but the second option (replacing duplicate values with
a single value) is also valid behavior.
Modifications:
* Introduce "allowDuplicateContentLengths" parameter to
HttpObjectDecoder (defaulting to false).
* When set to true, will allow multiple Content-Length headers only if
they are all the same value. The duplicated field-values will be
replaced with a single valid Content-Length field.
* Add new parameterized test class for testing different variations of
multiple Content-Length headers.
Result:
This is a backwards-compatible change with no functional change to the
existing behavior.
Note that the existing logic would result in NumberFormatExceptions
for header values like "Content-Length: 42, 42". The new logic correctly
reports these as IllegalArgumentException with the proper error message.
Additionally note that this behavior is only applied to HTTP/1.1, but I
suspect that we may want to expand that to include HTTP/1.0 as well...
That behavior is not modified here to minimize the scope of this change.
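A simplified sketch of the tolerant path (not the exact decoder code): multiple Content-Length values are accepted only if they are all identical, and are then folded into a single header:

```java
String[] values = contentLengthValue.split(",");
long contentLength = -1;
for (String v : values) {
    long parsed = Long.parseLong(v.trim()); // malformed values throw here
    if (contentLength == -1) {
        contentLength = parsed;
    } else if (parsed != contentLength) {
        throw new IllegalArgumentException(
                "Multiple Content-Length values found: " + contentLengthValue);
    }
}
// replace the duplicated field-values with a single valid field
headers.set(HttpHeaderNames.CONTENT_LENGTH, contentLength);
```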
Motivation:
AbstractDiskHttpData may cause a memory leak when a CompositeByteBuf is used. This happened because we may call copy() but actually never release the newly created ByteBuf.
Modifications:
- Remove copy() call and just use ByteBuf.getBytes(...) which will internally handle the writing to the FileChannel without any extra copies that need to be released later on.
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10354
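Sketch of the change (the "before" shape is approximate):

```java
// before: a copy was made but never released, leaking when the
// source was a CompositeByteBuf
//   fileChannel.write(buffer.copy().nioBuffer());

// after: write straight from the (possibly composite) buffer; no copy needed
buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), buffer.readableBytes());
```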
Motivation
HttpObjectDecoder and its associated classes make frequent use of
default values for maxInitialLineLength, maxHeaderSize, maxChunkSize,
etc. Today, these defaults are defined in-line in constructors and
duplicated across many classes. This repetition is more prone to error
and inconsistencies.
Furthermore, due to the current lack of builder support, if a user wants
to change just one of these values (e.g., maxHeaderSize), they are also
required to know and repeat the other default values (e.g.,
maxInitialLineLength and maxChunkSize).
The primary motivation for this change is as we are considering adding
another constructor parameter (for multiple content length behavior),
appending this parameter may require some users to have prior knowledge
of the default initialBufferSize, and it would be cleaner to allow them
to reference the default constant.
Modifications
* Consolidate the HttpObjectDecoder default values into public constants
* Reference these constants where possible
Result
No functional change. Additional telescoping constructors will be easier
and safer to write. Users may have an easier experience changing single
parameters.
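Sketch of the consolidated constants (the values shown match the long-standing in-line defaults; the exact constant names are assumed):

```java
public static final int DEFAULT_MAX_INITIAL_LINE_LENGTH = 4096;
public static final int DEFAULT_MAX_HEADER_SIZE = 8192;
public static final int DEFAULT_MAX_CHUNK_SIZE = 8192;
public static final int DEFAULT_INITIAL_BUFFER_SIZE = 128;
```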
Motivation:
`containsValue()` checks whether a given value appears among the values of a header, including comma-separated lists. We need to use this method instead of `contains()` for the `Transfer-Encoding` header to cover the case where multiple values are defined, e.g. `Transfer-Encoding: gzip, chunked`.
Modification:
Change from `contains()` to `containsValue()` in `HttpUtil.isTransferEncodingChunked()` method.
Result:
Fixes #10320
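Sketch of the change in `HttpUtil.isTransferEncodingChunked()`; `containsValue` matches "chunked" inside a comma-separated list, which `contains` does not:

```java
return message.headers().containsValue(
        HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED, /* ignoreCase */ true);
```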
Motivation:
Currently we pass custom WebSocket handshake response headers to a `WebSocketServerHandshaker`, but they can contain reserved headers (e.g. Connection, Upgrade, Sec-WebSocket-Accept), which leads to duplication because we use response.headers().add(..) instead of response.headers().set(..).
Modification:
In each implementation from `WebSocketServerHandshaker00` to `WebSocketServerHandshaker13`, replace add(..) with set(..) for reserved response headers.
Result:
Less error-prone
Motivation:
`FileChannel.read()` may throw an IOException. We must handle this in case an I/O error occurs.
Modification:
Place the `FileChannel.read()` call in a `try-finally` block.
Result:
Avoid fd leak.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
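A minimal sketch of the pattern (names illustrative); the same shape applies to the similar fd-leak fixes further down:

```java
RandomAccessFile accessFile = new RandomAccessFile(file, "r");
try {
    accessFile.getChannel().read(byteBuffer); // may throw IOException
} finally {
    accessFile.close(); // always give the fd back
}
```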
Motivations
-----------
HttpPostStandardRequestDecoder was changed in 4.1.50 to provide its own
ByteBuf UrlDecoder. Prior to this change, it was using the decodeComponent
method from QueryStringDecoder, which decoded + characters to
spaces. This behavior needs to be preserved to maintain backward
compatibility.
Modifications
-------------
Changed HttpPostStandardRequestDecoder to detect + bytes and decode them
to spaces. Added a test.
Results
-------
Addresses issue #10284
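An illustrative helper (not Netty's actual method, and simplified to single-byte escapes) showing the preserved behavior: `+` decodes to a space, `%xx` to the encoded byte:

```java
static String decodeComponentLike(String s) {
    StringBuilder out = new StringBuilder(s.length());
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '+') {
            out.append(' '); // the behavior restored by this change
        } else if (c == '%' && i + 2 < s.length()) {
            out.append((char) Integer.parseInt(s.substring(i + 1, i + 3), 16));
            i += 2;
        } else {
            out.append(c);
        }
    }
    return out.toString();
}
```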
Motivation:
We can't reuse the ChannelPromise as it will cause an error when trying to fulfill it multiple times.
Modifications:
- Use a new promise and chain it with the old one
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10240
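Sketch of the chaining (PromiseNotifier relays success or failure to the original promise; the exact call-site differs):

```java
ChannelPromise fresh = ctx.newPromise();
fresh.addListener(new PromiseNotifier<>(originalPromise));
ctx.writeAndFlush(msg, fresh); // fresh is fulfilled exactly once
```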
Motivation:
Where possible, we should avoid Unpooled when allocating buffers, so that pooling etc. can be used.
Modifications:
- Only allocate a buffer if really needed
- Use the ByteBufAllocator of the offered ByteBuf
- Ensure we do not use buffer.copy(); instead explicitly allocate a buffer and copy into it, so we do not hit the maxCapacity() limit
Result:
Improve allocations
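Sketch of the allocation pattern described above:

```java
// allocate from the offered buffer's allocator rather than Unpooled,
// and copy explicitly instead of using buffer.copy()
ByteBuf aggregated = buffer.alloc().buffer(buffer.readableBytes());
aggregated.writeBytes(buffer, buffer.readerIndex(), buffer.readableBytes());
```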
Motivation:
We need to release all ByteBufs that we allocate to prevent leaks. In two cases we failed to release the ByteBufs used for aggregation.
Modifications:
Add release() calls
Result:
No more memory leak in HttpPostMultipartRequestDecoder
Motivations
-----------
There is no need to copy the "offered" ByteBuf in HttpPostRequestDecoder
when the first HttpContent ByteBuf is also the last (LastHttpContent) as
the full content can immediately be decoded. No extra bookkeeping needed.
Modifications
-------------
HttpPostMultipartRequestDecoder
- Retain the first ByteBuf when it is both the first HttpContent offered
to the decoder and the LastHttpContent.
- Retain slices of the final buffer values
Results
-------
ByteBufs of FullHttpMessage decoded by HttpPostRequestDecoder are no longer
unnecessarily copied. Attributes are extracted as retained slices when
the content is multi-part. Non-multi-part content continues to return
Unpooled buffers.
Partially addresses issue #10200
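Sketch of the retained-slice approach described above (names illustrative):

```java
// shares the underlying memory instead of copying it; the consumer
// releases the slice, which drops its ref on the source buffer
ByteBuf attributeValue = undecodedChunk.retainedSlice(startPos, length);
```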
Motivation:
`AbstractHttpData.checkSize` may throw an IOException if we set the max size limit via `AbstractHttpData.setMaxSize`. However, if this exception happens, the `AbstractDiskHttpData.file` and the `AbstractHttpData.size` are still modified. In other words, it may break failure atomicity here.
Modification:
Just move up the size check.
Result:
Keep the failure atomicity even if `AbstractHttpData.checkSize` fails.
Motivation:
An unexpected IOException may be thrown from `FileChannel.force`. If it happens, the `FileChannel.close` may not be invoked.
Modification:
Place the `FileChannel.close` in a finally block.
Result:
Avoid fd leak.
Motivation:
`RandomAccessFile.setLength` may throw an IOException. We must handle this in case an I/O error occurs.
Modification:
Place the `RandomAccessFile.setLength` call in a `try-finally` block.
Result:
Avoid fd leak.
Motivation:
`FileChannel.force` may throw an IOException. An fd leak may happen here.
Modification:
Close the fileChannel in a finally block.
Result:
Avoid fd leak.
Motivation:
An `IOException` may be thrown from `FileChannel.write` or `FileChannel.force`, causing an fd leak.
Modification:
Close the file in a finally block.
Result:
Avoid fd leak.
Motivation:
An exception may occur between a ByteBuf's allocation and its release, so it's a good idea to place the release operation in a finally block.
Modification:
Release the temporary ByteBuf in finally blocks.
Result:
Avoid ByteBuf leak.
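The applied pattern, as a sketch (`fill` stands in for whatever intermediate work may throw):

```java
ByteBuf buf = ctx.alloc().buffer(length);
try {
    fill(buf); // any step in here may throw
} finally {
    buf.release(); // the temporary buffer is always released
}
```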
Motivation:
An IOException may be thrown from FileChannel.read, causing an fd leak.
Modification:
Close the file when IOException occurs.
Result:
Avoid fd leak.
Motivation:
Currently, (HTTP) disk-based attributes and uploads are globally configured to use a single directory, and delete-on-exit can likewise only be toggled globally. This does not fit many cases well, in particular when multiple servers run in the same JVM.
Modification:
Make it configurable per attribute/file upload.
Result:
This PR duplicates the Disk* constructors to add baseDir and deleteOnExit parameters and wires them into the default HTTP data factory.
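Usage sketch (the constructor shape is assumed from the description above; actual signatures may differ):

```java
DiskFileUpload upload = new DiskFileUpload(
        "file", "upload.bin", "application/octet-stream", "binary", null, 0,
        "/var/tmp/server-a", // baseDir: per instance instead of global
        true);               // deleteOnExit: per instance instead of global
```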
Motivation:
An IOException may be thrown from InputStream.read or the checkSize method, causing a ByteBuf leak.
Modification:
Release the ByteBuf when IOException occurs.
Result:
Avoid ByteBuf leak.
Motivation:
At the moment HttpObjectDecoder does not limit the number of control chars. These should be counted toward the initial line, and so a limit should be exposed.
Modifications:
- Change LineParser to also be used when skipping control chars and so enforce a limit
- Add various tests to ensure that limit is enforced
Result:
Fixes https://github.com/netty/netty/issues/10111
Motivation:
In our WebSocketClientHandshaker* implementations we "hardcode" the version number to use. This is error-prone; we should use the WebSocketVersion instead so we don't need to maintain the value in multiple places. Besides this, we can also use an AsciiString to improve performance.
Modifications:
- Use WebSocketVersion.toAsciiString
Result:
Less stuff to maintain and small performance win
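Sketch (assuming WebSocketVersion exposes its wire value as an AsciiString):

```java
// the header value comes from the enum instead of a hardcoded "13"
headers.set(HttpHeaderNames.SEC_WEBSOCKET_VERSION, version.toAsciiString());
```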
Motivation:
The user may need to provide a specific HOST header. We should not override it when specified during handshake.
Modifications:
Check if a custom HOST header is already provided by the user and, if so, don't override it.
Result:
Fixes https://github.com/netty/netty/issues/10101
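Sketch of the check (`websocketHostValue` is an illustrative stand-in for "derive the Host value from the URL"):

```java
if (customHeaders == null || !customHeaders.contains(HttpHeaderNames.HOST)) {
    headers.set(HttpHeaderNames.HOST, websocketHostValue(wsURL));
}
```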
Motivation:
Netty currently does not support the SameSite attribute for response cookies (see issue #8161 for discussion).
Modifications:
The attribute has been added to the DefaultCookie class as a quick fix since adding new methods to the Cookie interface would be backwards-incompatible.
ServerCookieEncoder and ClientCookieDecoder have been updated accordingly to process this value. No validation for allowed values (Lax, None, Strict) has been implemented.
Result:
Response cookies with the SameSite attribute set can be read or written by Netty.
Co-authored-by: David Latorre <a-dlatorre@hotels.com>
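Usage sketch (the SameSite enum is assumed to live under CookieHeaderNames):

```java
DefaultCookie cookie = new DefaultCookie("session", "abc123");
cookie.setSameSite(CookieHeaderNames.SameSite.Lax);
String header = ServerCookieEncoder.STRICT.encode(cookie);
// e.g. "session=abc123; SameSite=Lax"
```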
Motivation:
WebSocketClientHandshaker#upgradeUrl doesn't correctly compute the relative URL when the path is empty, and produces URLs such as `?access_token=foo` instead of `/?access_token=foo`.
Modifications:
* Fix WebSocketClientHandshaker#upgradeUrl
* Add tests for URLs without a path, with and without a query
Result:
WebSocketClientHandshaker properly connects to URLs without a path.
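Sketch of the fixed computation: default to "/" when the raw path is empty:

```java
static String upgradeUrl(URI wsURL) {
    String path = wsURL.getRawPath();
    path = path == null || path.isEmpty() ? "/" : path;
    String query = wsURL.getRawQuery();
    return query != null && !query.isEmpty() ? path + '?' + query : path;
}
```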