Motivation:
Some of the HttpPostMultiPartRequestDecoder specific tests were included in HttpPostRequestDecoderTest. We should move these into the correct test class.
Modifications:
Move specific tests
Result:
Cleanup
Motivation:
Two years ago a change removed the default clearing of all HttpData, whether
disk-based or memory-based.
A lot of users were probably releasing HttpData directly, so there was no issue.
But now it seems, as the Javadoc says, that `decoder.destroy()` shall also clean up
Memory-based HttpData, and not only Disk-based HttpData as it currently does.
Change:
- Add to the `destroy()` method the code necessary to release, if necessary,
the underlying Memory-based HttpData.
- Change one JUnit test (using Mixed, Memory and Disk based factories)
in order to check the correctness of this behavior and to really act
as a handler (releasing buffers or requests).
- Modify one JUnit test to check validity when a delimiter is present in the chunk
but without CRLF/LF (false delimiter), to ensure correctness.
Result:
No more memory leak issues.
Note that the List and the Map are still not cleaned, since they were not
before. No change is done on this, since it could produce backward compatibility issues.
Fix issues #11175 and #11184
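For illustration, a minimal sketch of the lifecycle this change supports (assuming `factory` and `request` are in scope; not the exact test code):
```java
HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(factory, request);
try {
    while (decoder.hasNext()) {
        InterfaceHttpData data = decoder.next();
        // ... consume the attribute or file upload ...
    }
} finally {
    decoder.destroy(); // now also releases Memory-based HttpData, not only Disk-based
}
```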
Motivation:
We need to call destroy() if the constructor of HttpPostMultipartRequestDecoder throws as otherwise we may leak memory.
Modifications:
- Call destroy() if we throw
- Add unit test
Result:
No more leaks when constructor throws
Co-authored-by: Frederic Bregier <frederic.bregier@waarp.fr>
Motivation:
We didn't correctly handle the case where no content-type header was found or where the charset was illegal, and just threw a NPE or ICE. We should in both cases throw an ErrorDataDecoderException to reflect what is documented in the javadocs.
Modifications:
- Throw correct exception
- Merge private method into the constructor as it is only used there
- Add unit tests
Result:
Throw expected exceptions on decoding errors
Motivation:
When creating a WebSocketServerProtocolConfig, the check that the URI path starts
with '/' only lets '/' or '//subPath' pass, but '/subPath' should pass as well.
Modifications:
In `WebSocketServerProtocolHandshakeHandler.isWebSocketPath()`, treat '/' as a special case.
Result:
'/subPath' now passes the check.
Motivation:
Null checks resulting in a NullPointerException or IllegalArgumentException, numeric range checks (>0, >=0), and not-empty checks for strings/arrays must never be anonymous but must name the parameter or variable that is checked. They must be specific and should not be done with OR logic (if (a == null || b == null) throw new NullPointerException()).
Modifications:
* import static relevant checks
* Replace manual checks with ObjectUtil methods
Result:
All checks needed are done with ObjectUtil, some exception texts are improved.
Fixes #11170
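A small before/after sketch of the pattern (the field names here are hypothetical):
```java
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositive;

// Before: anonymous, combined check
// if (name == null || timeout <= 0) { throw new NullPointerException(); }

// After: specific checks that name the offending parameter
this.name = checkNotNull(name, "name");
this.timeout = checkPositive(timeout, "timeout");
```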
Motivation:
At the moment we only expose close(...) methods that take a Channel as parameter. This can be problematic as the write will start at the end of the pipeline, which may contain ChannelOutboundHandler implementations that do not expect WebSocketFrame objects. We should also support passing in a ChannelHandlerContext as the starting point for the write, which ensures that the WebSocketFrame objects will be handled correctly from that position of the pipeline.
Modifications:
- Add new close(...) methods that take a ChannelHandlerContext
- Add javadoc sentence to point users to the new methods.
Result:
Be able to "start" the close at the right position in the pipeline.
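A sketch of the intended usage (assuming a `handshaker` field on the handler; not verbatim from the change):
```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // handshaker.close(ctx.channel(), frame) would start the write at the
    // pipeline tail; the new overload starts from this handler's position:
    handshaker.close(ctx, new CloseWebSocketFrame());
}
```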
Motivation:
When the Memory-based Factory is used, if the first chunk starts with a line break, the HttpData
is not filled with the currently available buffer when the delimiter has not been found yet, while it could
add some.
Fix the JavaDoc to note the potential wrong usage of content() or getByteBuf() if HttpData has
a huge content, with the risk of an Out Of Memory Exception.
Fix the JavaDoc to explain how to properly release the Factory, whether it is in Memory,
Disk or Mixed mode.
Fix issue #11143
Modifications:
First, when the delimiter is not found, instead of searching for a line break from readerIndex(), we should search
from readerIndex() + readableBytes() - delimiter size, since this is the only part where a useful
line break could still be found, except if readableBytes is less than the delimiter size (then we search from
readerIndex()).
Second, when a Memory HttpData is created, it should be assigned an empty buffer to be
consistent with the other implementations (Disk or Mixed mode).
We cannot change the default behavior of the content() or getByteBuf() of the Memory based HttpData
since the ByteBuf is supposed to be null when released, but not empty.
When a new ByteBuf is added, one more check verifies if the current ByteBuf is empty, and if so, it
is released and replaced by the new one, without creating a new CompositeByteBuf.
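A minimal sketch of that reduced search window (the variable names are hypothetical):
```java
int readable = undecodedChunk.readableBytes();
// only the tail of the buffer can still contain the start of an unseen delimiter
int from = readable < delimiterLength
        ? undecodedChunk.readerIndex()
        : undecodedChunk.readerIndex() + readable - delimiterLength;
```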
Result:
In the test testBIgFileUploadDelimiterInMiddleChunkDecoderMemoryFactory and its counterparts for the other modes,
the buffers start with a CRLF.
When we offer only the prefix part of the multipart (no data at all), the current partial HttpData has
an empty buffer.
The first time we offer data starting with CRLF to the decoder, it now
has a correct current partial HttpData with a non-empty buffer.
The Benchmark was re-run against this new version.
Old Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 4,037 ± 0,358 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 4,226 ± 0,471 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,875 ± 0,029 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 4,346 ± 0,275 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,044 ± 0,020 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 2,278 ± 0,159 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,174 ± 0,004 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,370 ± 0,065 ops/ms
New Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 5,604 ± 0,415 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 6,058 ± 0,111 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,914 ± 0,031 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 6,053 ± 0,051 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,636 ± 0,141 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 3,033 ± 0,181 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,178 ± 0,006 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,859 ± 0,189 ops/ms
So a +20% to +40% improvement, due to not searching for CRLF/LF in the full buffer when no delimiter is found,
but only in the tail, over delimiter size + 2 (CRLF) bytes.
Motivation:
There are some redundant checks and so these can be removed
Modifications:
- The first check frameOpcode != OPCODE_PING is removed because the code executes in the branch where frameOpcode <= 7, while OPCODE_PING is 9.
- The second check frameOpcode != OPCODE_PING is removed because it is checked before.
Result:
Code cleanup
Motivation:
If compression is enabled and the HttpResponse also
implements HttpContent (but not LastHttpContent), then
the buffer will be freed too eagerly.
Modification:
I retain the buffer the same way that is done for the LastHttpContent case.
Note that there is another suspicious-looking call a few lines above (if beginEncode returns null). I am not sure if this should also be retained.
Result:
Fixes #11092
Motivation:
As stated by https://tools.ietf.org/html/rfc7540#section-8.1.2.6 we should report a stream error if the content-length does not match the sum of all data frames.
Modifications:
- Verify that the sum of the data frames matches if a content-length header was sent.
- Handle multiple content-length headers and also handle negative values
- Add io.netty.http2.validateContentLength system property which allows to disable the more strict validation
- Add unit tests
Result:
Correctly handle the case where a content-length header was included but does not match what is sent, and also where the content-length header is invalid.
Motivation
The HttpObjectDecoder accepts input parameters for maxInitialLineLength
and maxHeaderSize. These are important variables since both message
components must be buffered in memory. As such, many decoders (like
Netty and others) introduce constraints. Due to their importance, many
users may wish to add instrumentation on the values of successful
decoder results, or otherwise be able to access these values to enforce
their own supplemental constraints.
While users can perhaps estimate the sizes today, they will not be
exact, due to the decoder being responsible for consuming optional
whitespace and the like.
Modifications
* Add HttpMessageDecoderResult class. This class extends DecoderResult
and is intended for HttpMessage objects successfully decoded by the
HttpObjectDecoder. It exposes attributes for the decoded
initialLineLength and headerSize.
* Modify HttpObjectDecoder to produce HttpMessageDecoderResults upon
successfully decoding the last HTTP header.
* Add corresponding tests to HttpRequestDecoderTest &
HttpResponseDecoderTest.
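A sketch of how such instrumentation might read the new attributes (accessor names assumed from the description above; recordMetrics is a hypothetical callback):
```java
DecoderResult result = request.decoderResult();
if (result.isSuccess() && result instanceof HttpMessageDecoderResult) {
    HttpMessageDecoderResult decoded = (HttpMessageDecoderResult) result;
    // sizes as actually consumed by the decoder, including optional whitespace
    recordMetrics(decoded.initialLineLength(), decoded.headerSize());
}
```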
Co-authored-by: Bennett Lynch <Bennett-Lynch@users.noreply.github.com>
Motivation:
HttpObjectDecoder may throw an IllegalArgumentException if it encounters
a character that Character.isWhitespace() returns true for, but is not
one of the two valid OWS (optional whitespace) values. Such values may
not have friendly or readable toString() values. We should include the
hex value so that the illegal character can always be determined.
Modifications:
Add hex value as well
Result:
Easier to debug
Co-authored-by: Bennett Lynch <Bennett-Lynch@users.noreply.github.com>
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface, which allows arbitrary objects to convert themselves into ByteBuf objects.
- Everywhere the code did an instanceof check for ByteBuf, it now does an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
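A sketch of the new instanceof pattern described above:
```java
if (msg instanceof ByteBufConvertible) {
    // any convertible object can supply the common ByteBuf currency for IO
    ByteBuf buf = ((ByteBufConvertible) msg).asByteBuf();
    // ... proceed as before with a plain ByteBuf ...
}
```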
Motivation:
- Underlying buffer usage might be erroneous when releasing buffers internally
in HttpPostMultipartRequestDecoder.
Two bugs occur:
1) The final file upload seems not to be of the right size.
2) Memory, even in Disk mode, increases continuously, while it shouldn't.
- The method `getByte(position)` is called too often within the current implementation
of the HttpPostMultipartRequestDecoder.
This implies too much activity, which is visible when PARANOID mode is active.
This is also true in standard mode.
Apply the same fix previously made on the buffer in HttpPostMultipartRequestDecoder
to HttpPostStandardRequestDecoder.
Finally, in order to ensure we do not rewrite already-decoded HttpData when decoding
the next ones within a multipart body, we must ensure the buffers are copied and not a retained slice.
Modification:
- Add some tests to check consistency for HttpPostMultipartRequestDecoder.
Add a package-protected method for testing purposes only.
- Use the `bytesBefore(...)` method instead of `getByte(pos)` in order to limit external
access to the underlying buffer, by retrieving iteratively the beginning of a correct start
position.
It is used to find both LF/CRLF and the delimiter.
Two methods in HttpPostBodyUtil were created for that.
The undecodedChunk is copied when a chunk is added to a DataMultipart being loaded.
The same buffer is also rewritten in order to release the copied memory part.
Result:
Note that for Memory, Disk and Mixed mode factories alike, the release has to be done as follows:
for (InterfaceHttpData httpData : decoder.getBodyHttpDatas()) {
    httpData.release();
    factory.removeHttpDataFromClean(request, httpData);
}
factory.cleanAllHttpData();
decoder.destroy();
The memory used is minimal in Disk or Mixed mode. In Memory mode, a big file is still
in memory, but no longer in the undecodedChunk; it is in its own (copied) buffer.
In terms of benchmarking, the results are:
Original code Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 0,152 ± 0,100 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 0,543 ± 0,218 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,001 ± 0,001 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 0,615 ± 0,070 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 0,114 ± 0,063 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 0,664 ± 0,034 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,001 ± 0,001 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 0,620 ± 0,140 ops/ms
New code Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 4,037 ± 0,358 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 4,226 ± 0,471 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,875 ± 0,029 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 4,346 ± 0,275 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,044 ± 0,020 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 2,278 ± 0,159 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,174 ± 0,004 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,370 ± 0,065 ops/ms
In short, for big file transfers the new code is about 7 times faster, and for a high number
of HttpData it is about 4 times faster at the Simple level.
At the Paranoid level, the new code is about 800 times faster for big file transfers, and
about 170 times faster for a high number of HttpData.
Motivation:
`File.createTempFile(String, String)` will create a temporary file in the system temporary directory given by the 'java.io.tmpdir' property. The permissions on that file utilize the umask. In a majority of cases, this means that the file Java creates has the permissions `-rw-r--r--`, thus any other local user on that system can read the contents of that file.
This can be a security concern if any sensitive data is stored in this file.
This was reported by Jonathan Leitschuh <jonathan.leitschuh@gmail.com> as a security problem.
Modifications:
Use Files.createTempFile(...), which will use safe defaults when running on Java 7 and later. If running on Java 6 there isn't much we can do, which is fair enough as Java 6 shouldn't be considered "safe" anyway.
Result:
Create temporary files with sane permissions by default.
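A minimal sketch of the safer call (prefix/suffix values are illustrative):
```java
import java.nio.file.Files;
import java.nio.file.Path;

// On POSIX file systems this yields owner-only permissions (rw-------),
// instead of the umask-dependent permissions of File.createTempFile.
Path tmp = Files.createTempFile("netty-", ".tmp");
```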
Motivation:
The changes introduced in 1c230405fd did cause various issues, while the fix itself is not considered critical. For now it is considered best to just roll back and investigate more.
Modifications:
- Revert changes done in 1c230405fd (and later) for
the post decoders.
- Ensure we give memory back to the system as soon as possible in a safe manner
Result:
Fixes https://github.com/netty/netty/issues/10973
Motivation:
We currently append extensions to the user-defined "sec-websocket-extensions" header. This can cause duplicated entries.
Modifications:
* Replace existing `WebSocketExtensionUtil#appendExtension` private helper with a new `computeMergeExtensionsHeaderValue`. User defined parameters have higher precedence.
* Add tests (existing method wasn't tested)
* Reuse code for both client and server side (code was duplicated).
Result:
No more duplicated entries when user defined extensions overlap with the ones Netty generated.
Motivation:
switch is used when we have a good number of cases because switch is faster than if-else. However, we're using only one case in the switch, which can affect performance.
Modification:
Changed switch to if.
Result:
Simpler, more appropriate code.
Motivation:
To fix the infinite loop parsing a multipart body.
Modifications:
Modified the loop to use the correct variable.
Result:
Multipart bodies will be parsed correctly again.
Motivation:
According to the specification, the 1006 status code must not be set as a status code in a
Close control frame by an endpoint. However, the 1006 status code can be
used in applications to indicate that the connection was closed abnormally.
Modifications:
- Enforce status code validation in CloseWebSocketFrame
- Add WebSocketCloseStatus construction with disabled validation
- Add test
Result:
Fixes #10838
Motivation:
We should add `state` in the exception message of `HttpObjectEncoder` because it makes debugging a little easier.
Modification:
Added `state` in the exception message.
Result:
Better exception message for smooth debugging.
Motivation:
According to the RFC (https://tools.ietf.org/html/rfc6455#section-11.3.4), the `Sec-WebSocket-Protocol` header field MUST NOT appear
more than once in an HTTP response.
At the moment we can pass `Sec-WebSocket-Protocol` via custom headers and it will be added to the response.
Modification:
Change the method add() to set() to avoid duplication. If we pass sub-protocols in the handshaker constructor, it means they are preferred over custom ones.
Result:
Less error prone behavior.
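A sketch of the resulting pattern (variable names are illustrative):
```java
// set() replaces any user-supplied value instead of appending a duplicate entry
response.headers().set(HttpHeaderNames.SEC_WEBSOCKET_PROTOCOL, selectedSubprotocol);
```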
Fix issue #10508 where PARANOID mode slows down about 1000 times compared to ADVANCED.
Also fix a rare issue where, when the internal buffer was growing over a limit, it was partially discarded
using `discardReadBytes()`, which causes bad changes within previously discovered HttpData.
Reasons were:
Too many `readByte()` method calls while other ways exist (such as keeping in memory the last scan position when trying to find a delimiter, or using `bytesBefore(firstByte)` instead of looping externally).
Changes done:
- Major change in the way buffers are parsed: instead of reading byte per byte until the delimiter is found, try to find the delimiter using `bytesBefore()` and keep the last not-found position to skip already-parsed parts (the algorithms are the same but the implementation of the scan is different).
- Change the condition to discard read bytes when refCnt is at most 1.
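A minimal sketch of the scan pattern (buffer and byte values are illustrative):
```java
// one internal scan instead of an external readByte() loop;
// the result is an offset relative to readerIndex(), or -1 when not found
int lfOffset = undecodedChunk.bytesBefore((byte) '\n');
if (lfOffset < 0) {
    // no line break yet: remember how far we scanned and wait for more data
}
```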
Observations using Async-Profiler:
==================================
1) Without optimizations, most of the time (more than 95%) is spent in the `readByte()` method within the `loadDataMultipartStandard` method.
2) Using `bytesBefore(byte)` instead of `readByte()` to find the various delimiters, the `loadDataMultipartStandard` method goes down to 19 to 33% depending on the test used. The `readByte()` method and the equivalent `getByte(pos)` method go down to 15% (from 95%).
Times are confirming those profiling:
- With optimizations, SIMPLE mode is about 82% better, ADVANCED mode about 79% better and PARANOID mode about 99% better (most of the duplicate read accesses are removed or made internally through the `bytesBefore(byte)` method).
A benchmark is added to show the behavior of the various cases (one big item, such as a file upload, and many items) and the various levels of detection (Disabled, Simple, Advanced, Paranoid). This benchmark is intended to alert if new implementations introduce too large differences (such as the previous version, where PARANOID was about 1000 times slower than the other levels, while it is now at most about 10 times slower).
Extract of Benchmark run:
=========================
Run complete. Total time: 00:13:27
Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 2,248 ± 0,198 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 2,067 ± 1,219 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 1,109 ± 0,038 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 2,326 ± 0,314 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 1,444 ± 0,226 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 1,462 ± 0,642 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,159 ± 0,003 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 1,522 ± 0,049 ops/ms
Motivation:
https in xmlns URIs does not work and will let the maven release plugin fail:
```
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.779 s
[INFO] Finished at: 2020-11-10T07:45:21Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare (default-cli) on project netty-parent: Execution default-cli of goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare failed: The namespace xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" could not be added as a namespace to "project": The namespace prefix "xsi" collides with an additional namespace declared by the element -> [Help 1]
[ERROR]
```
See also https://issues.apache.org/jira/browse/HBASE-24014.
Modifications:
Use http for xmlns
Result:
Be able to use maven release plugin
Motivation:
There is no need for ByteProcessor to throw a checked exception.
The declared checked exception causes unnecessary code complications just to propagate it.
This can be cleaned up.
Modification:
ByteProcessor.process no longer declares to throw a checked exception, and all the places that were trying to cope with the checked exception have been simplified.
Result:
Simpler code.
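A sketch of a now-simpler call site (the byte value is illustrative):
```java
// process(...) no longer declares a checked exception, so lambdas need no wrapping
ByteProcessor findLineFeed = value -> value != '\n'; // return false to stop
int index = buf.forEachByte(findLineFeed);           // index of the LF, or -1
```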
Motivation:
HttpServerUpgradeHandler takes a list of protocols from an incoming
request and uses them for building a response.
Although the class does some validation while parsing the list,
it then disables HTTP header validation when it builds a response.
The disabled validation may potentially allow
HTTP response splitting attacks.
Modifications:
- Enabled HTTP header validation in HttpServerUpgradeHandler
as a defense-in-depth measure to prevent possible
HTTP response splitting attacks.
- Added a new constructor that allows disabling the validation.
Result:
HttpServerUpgradeHandler validates incoming protocols
before including them into a response.
That should prevent possible HTTP response splitting attacks.
Motivation:
At the moment we have only one base `WebSocketHandshakeException` for handling WebSocket upgrade issues.
Unfortunately, this exception contains only a string message about the cause of the failure, which is inconvenient to handle.
Modification:
Provide a new `WebSocketClientHandshakeException` with an `HttpResponse` field and a `WebSocketServerHandshakeException` with an `HttpRequest` field, both of them without content to avoid reference-counting
problems.
Result:
More information for more flexible handling.
Fixes #10277, #4528, #10639.
Motivation:
HTTP is a plaintext protocol, which means that someone may be able
to eavesdrop on the data. To prevent this, HTTPS should be used whenever
possible. However, maintaining https:// in all URLs may be
difficult. The nohttp tool can help here. The tool scans all the files
in a repository and reports where http:// is used.
Modifications:
- Added nohttp (via checkstyle) into the build process.
- Suppressed findings for the websites
that don't support HTTPS or that are not reachable
Result:
- Prevent using HTTP in the future.
- Encourage users to use HTTPS when they follow the links they found in
the code.
Motivation:
I noticed WebSocketServerExtensionHandler taking up a non-trivial
amount of CPU time in a non-websocket based benchmark. This attempts
to speed it up.
Modifications:
- It is faster to check for a 101 response than to look at headers,
so an initial response code check is done
- Move all the actual upgrade code into its own method to increase
chance of this method being inlined
- Add an extra contains() check for the upgrade header, to avoid
allocating an iterator if there is no upgrade header
Result:
A small but noticeable performance increase.
Signed-off-by: Stuart Douglas <stuart.w.douglas@gmail.com>
Motivation:
JUnit deprecated Assert.assertThat(...).
Modifications:
Use MatcherAssert.assertThat(...) as replacement for deprecated method
Result:
Fewer deprecation warnings
Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues, get rid of false alarms in the LGTM report.
Motivation:
Fix the broken link to the OWASP HttpOnly Cookie page in the Cookie class.
Modification:
Updated the broken link.
Result:
Broken link fixed for better documentation.
Motivation:
We should use ObjectUtil for checking if Compression parameters are in range. This will reduce LOC and make code more readable.
Modification:
Used ObjectUtil
Result:
More readable code
Motivation:
Avoid implicit conversions to narrower types in
AbstractMemoryHttpData and Bzip2HuffmanStageEncoder classes
reported by LGTM.
Modifications:
Updated the classes to avoid implicit casting to narrower types.
It doesn't look like an integer overflow is possible there,
therefore no overflow checks were added.
Result:
No warnings about implicit conversions to narrower types.
Motivation:
LGTM reported that WebSocketUtil uses MD5 and SHA-1
that are considered weak. Although those algorithms
are insecure, they are required by draft-ietf-hybi-thewebsocketprotocol-00
specification that is implemented in the corresponding WebSocket
handshakers. Once the handshakers are removed, WebSocketUtil can be
updated to stop using those weak hash functions.
Modifications:
Added SuppressWarnings annotations.
Result:
Suppressed warnings.
Add a validation check for the websocket path
Motivation:
I added a websocket handler in a custom server with Netty.
I first added WebSocketServerProtocolHandler to my channel pipeline.
It works, but I found that it also passes "/websocketabc" (websocketPath is "/websocket").
Modification:
`isWebSocketPath()` method of `WebSocketServerProtocolHandshakeHandler` now checks that "startsWith" applies to the first URL path component, rather than the URL as a string.
Result:
Requests to "/websocketabc" are no longer passed to handlers registered for paths that start with "/websocket".
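A sketch of the corrected check (a hypothetical helper, not the exact patch), matching only whole path components:
```java
private static boolean isWebSocketPath(String websocketPath, String requestedPath) {
    if ("/".equals(websocketPath)) {
        return true; // the root path matches every request path
    }
    // "/websocket" matches "/websocket" and "/websocket/...", but not "/websocketabc"
    return requestedPath.equals(websocketPath)
            || requestedPath.startsWith(websocketPath + '/');
}
```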
Motivation:
We wish to use Unsafe as little as possible, and Java 8 allows us
to take some short-cuts or play some tricks with generics,
for the purpose of working around having to declare all checked
exceptions. Ideally all checked exceptions would be declared, but
the code base is not ready for that yet.
Modification:
The call to UNSAFE.throwException has been removed, so when we need
that feature, we instead use the generic exception trick.
In many cases, Java 8 allows us to throw Throwable directly. This
happens in cases where no exception is declared to be thrown in a
scope.
Finally, some warnings have also been fixed, and some imports have
been reorganised and cleaned up while I was modifying the files
anyway.
Result:
We no longer use Unsafe for throwing any exceptions.
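The generic exception trick usually takes this shape (a sketch, not Netty's exact code):
```java
@SuppressWarnings("unchecked")
private static <E extends Throwable> void sneakyThrow(Throwable t) throws E {
    // the cast is erased at runtime, so the checked exception escapes undeclared
    throw (E) t;
}
```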
Motivation:
Consider a scenario where the client initiates a WebSocket handshake but, before the handshake is complete,
the channel is closed for some reason. In such a scenario, the handshake timeout scheduled on the executor
is not cleared. The reason it is not cleared is that in such cases the handshakePromise is not completed.
Modifications:
This change completes the handshakePromise exceptionally in the channelInactive callback, if it has not been
completed so far. This triggers the callback on completion of the promise, which clears the timeout scheduled
on the executor.
This PR also adds a test case which reproduces the scenario described above. The test case fails before the
fix is added and succeeds when the fix is applied.
Result:
After this change, the timeout scheduled on the executor will be cleared, thus freeing up thread resources.
Motivation:
If DeleteOnExitHook is enabled and the program runs for a long time, the set of files registered with DeleteOnExitHook keeps growing.
This results in a memory leak.
Modification:
I re-customized a DeleteOnExitHook. If DeleteOnExitHook is enabled, this hook is added when creating a temporary file. After the request ends and the corresponding resources are released, the current file is removed from this hook, so that the set does not keep growing.
Result:
Fixes https://github.com/netty/netty/issues/10351
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private)
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
Result
- More efficient accessibility checks in more places
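A sketch of the call sites this enables (the buffer variable is illustrative):
```java
// prefer the static helpers over reading the volatile refCnt() directly
if (!ByteBufUtil.isAccessible(buf)) {
    return; // already released
}
ByteBufUtil.ensureAccessible(buf); // throws IllegalReferenceCountException if released
```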
Motivation:
These are small fixes for PR #10453, following the conversation between @njhill and @normanmaurer.
Modification:
Simple refactoring that takes the remainder into account when calculating the size.
Result:
Behavior is correct
Motivation:
AbstractDiskHttpData#getChunk opens and closes the fileChannel every time it is invoked;
as a result the uploaded file is corrupted. This is a regression caused by #10270.
Modifications:
- Close the fileChannel only if everything was read or an exception is thrown
- Add unit test
Result:
AbstractDiskHttpData#getChunk closes fileChannel only if everything was read or an exception is thrown
Motivation:
A regression appeared after the changes in fix #10360.
The main problem here is that `buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize)`
doesn't change the channel position after writes.
Modification:
Manually set the position according to the written bytes.
Result:
Fixes #10449.
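A sketch of the shape of the fix (variable names taken from the snippet above):
```java
// getBytes(...) with an explicit position does not advance the channel,
// so move the position manually by the number of bytes actually written
int written = buffer.getBytes(buffer.readerIndex(), fileChannel, fileChannel.position(), localsize);
fileChannel.position(fileChannel.position() + written);
```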
Motivation:
Since https://github.com/netty/netty/pull/9865 (Netty 4.1.44) the
default behavior of the HttpObjectDecoder has been to reject any HTTP
message that is found to have multiple Content-Length headers when
decoding. This behavior is well-justified as per the risks outlined in
https://github.com/netty/netty/issues/9861, however, we can see from the
cited RFC section that there are multiple possible options offered for
responding to this scenario:
> If a message is received that has multiple Content-Length header
> fields with field-values consisting of the same decimal value, or a
> single Content-Length header field with a field value containing a
> list of identical decimal values (e.g., "Content-Length: 42, 42"),
> indicating that duplicate Content-Length header fields have been
> generated or combined by an upstream message processor, then the
> recipient MUST either reject the message as invalid or replace the
> duplicated field-values with a single valid Content-Length field
> containing that decimal value prior to determining the message body
> length or forwarding the message.
https://tools.ietf.org/html/rfc7230#section-3.3.2
Netty opted for the first option (rejecting as invalid), which seems
like the safest, but the second option (replacing duplicate values with
a single value) is also valid behavior.
Modifications:
* Introduce "allowDuplicateContentLengths" parameter to
HttpObjectDecoder (defaulting to false).
* When set to true, will allow multiple Content-Length headers only if
they are all the same value. The duplicated field-values will be
replaced with a single valid Content-Length field.
* Add new parameterized test class for testing different variations of
multiple Content-Length headers.
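A sketch of opting in to the lenient behavior (constructor shape assumed from this change; argument values illustrative):
```java
HttpRequestDecoder decoder = new HttpRequestDecoder(
        4096,  // maxInitialLineLength
        8192,  // maxHeaderSize
        8192,  // maxChunkSize
        true,  // validateHeaders
        128,   // initialBufferSize
        true); // allowDuplicateContentLengths: identical values are merged
```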
Result:
This is a backwards-compatible change with no functional change to the
existing behavior.
Note that the existing logic would result in NumberFormatExceptions
for header values like "Content-Length: 42, 42". The new logic correctly
reports these as IllegalArgumentException with the proper error message.
Additionally note that this behavior is only applied to HTTP/1.1, but I
suspect that we may want to expand that to include HTTP/1.0 as well...
That behavior is not modified here to minimize the scope of this change.