__Motivation__
`request.headers().getAll()` never returns null, so a null check alone does not work as expected.
__Modification__
Add an `isEmpty()` check as well.
__Result__
Fixes #11568
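For illustration, a minimal sketch of the intended check (the handler context and header name are placeholders, not the actual code from this change):

```java
// getAll() returns an empty list rather than null when the header is absent,
// so presence must be checked with isEmpty() instead of a null comparison.
List<String> values = request.headers().getAll(HttpHeaderNames.ORIGIN); // header name is illustrative
if (!values.isEmpty()) {
    // header is present: process the values
}
```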
Motivation:
We should make variables `final` when they are not reassigned, to match the code style and make the code easier to read.
Modification:
Made a couple of variables `final`.
Result:
Variables marked as `final`.
Motivation:
Let's have fewer warnings about broken, missing, or misused javadoc comments.
Modification:
Added descriptions to throws clauses that were missing them.
Removed link clauses from throws clauses; these are implied.
Turned some javadoc comments into block comments because they were not attached to API elements.
Used `{@code}` clauses instead of `<code>` tags.
Result:
Fewer javadoc crimes.
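For illustration, a generic fragment in the preferred style (the method and class are hypothetical): a described `@throws` clause and `{@code}` instead of `<code>` tags.

```java
/**
 * Sets the maximum allowed frame size.
 *
 * @throws IllegalArgumentException if {@code maxFrameSize} is not positive
 */
public void setMaxFrameSize(int maxFrameSize) {
    if (maxFrameSize <= 0) {
        throw new IllegalArgumentException("maxFrameSize must be positive: " + maxFrameSize);
    }
    this.maxFrameSize = maxFrameSize;
}
```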
Motivation:
We should get rid of unnecessary `toString()` calls because they are redundant.
Modification:
Removed unnecessary toString calls.
Result:
Better code
Motivation:
We should get rid of unnecessary semicolons because they don't do anything in code.
Modification:
Removed unnecessary semicolons.
Result:
Better code
Motivation:
There are lots of unused imports. We should get rid of them to make the code cleaner.
Modification:
Removed unused imports.
Result:
No unused imports.
Bootstrap methods now return Future<Channel> instead of ChannelFuture
Motivation:
In #8516 it was proposed to at some point remove the specialised ChannelFuture and ChannelPromise.
Or at least make them not extend Future and Promise, respectively.
One pain point encountered in this discussion is the need to get access to the channel object after it has been initialised, but without waiting for the channel registration to propagate through the pipeline.
Modification:
Add a Bootstrap.createUnregistered method, which will return a Channel directly.
All other Bootstrap methods that previously returned ChannelFuture now return Future<Channel>.
Result:
It's now possible to obtain an initialised but unregistered channel from a bootstrap, without blocking.
And the other bootstrap methods now only release their channels through the result of their futures, preventing racy access to the channels.
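A minimal, hedged sketch of how the new return type is consumed (bootstrap configuration and error handling are omitted; the exact listener generics may differ):

```java
// connect(...) now returns Future<Channel>; the Channel is only published
// through the future's result, so there is no racy access to it.
Future<Channel> f = bootstrap.connect("127.0.0.1", 8080);
f.addListener(ignored -> {
    if (f.isSuccess()) {
        Channel channel = f.getNow();
        // safe to use the fully initialised channel here
    }
});
```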
Motivation:
At the moment we do not correctly propagate cancellation in some cases when we use the PromiseNotifier.
Modifications:
- Add PromiseNotifier static method which takes care of cancellation
- Add unit test
- Deprecate ChannelPromiseNotifier
Result:
Correctly propagate cancellation of the operation.
Co-authored-by: Nitesh Kant <nitesh_kant@apple.com>
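A hedged sketch of the intended usage, assuming the new static helper is `PromiseNotifier.cascade(...)` and that `userPromise` is a `Promise<Void>` handed to us by the caller:

```java
// Link the outcome of the write (success, failure, and now also cancellation)
// to the caller's promise.
ChannelFuture writeFuture = channel.writeAndFlush(msg);
PromiseNotifier.cascade(writeFuture, userPromise);
```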
Motivation:
#11468 was merged but didn't fix the tests completely. There is a conflict between `LF` and `CRLF` line endings, so to eliminate it we should just get rid of them.
Modification:
Use a small sample dataset without `LF` and `CRLF`.
Result:
Simple and passing test.
Motivation:
JavaDoc of StandardCompressionOptions should point towards public methods. Also, Brotli tests were failing on Windows.
Modification:
Fixed JavaDoc and enabled Brotli tests on Windows.
Result:
Better JavaDoc and Brotli tests will run on Windows
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
In #11256, we introduced `Iterable` as a parameter, but it was later removed during review. However, we forgot to rename `compressionOptionsIterable` to just `compressionOptions`.
Modification:
Changed `compressionOptionsIterable` to `compressionOptions`.
Result:
Correct ObjectUtil message
Motivation:
Currently, Netty only has a BrotliDecoder, which can decode Brotli-encoded data. However, a BrotliEncoder, which would encode plain data into Brotli-encoded data, is missing.
Modification:
Added BrotliEncoder and CompressionOption
Result:
Fixes #6899.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
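A hedged sketch of enabling Brotli compression on the server side (assuming the compression-options based `HttpContentCompressor` constructor and `StandardCompressionOptions.brotli()`):

```java
ChannelPipeline pipeline = channel.pipeline();
pipeline.addLast(new HttpServerCodec());
// Compresses responses with Brotli when the client advertises "br" support.
pipeline.addLast(new HttpContentCompressor(StandardCompressionOptions.brotli()));
```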
Motivation:
ZSTD has a wide range of uses on the Internet, so we should consider adding the `application/zstd` HTTP media-type and `zstd` content-encoding; see https://tools.ietf.org/html/rfc8478
Modification:
Add `application/zstd` HTTP media-type and `zstd` content-encoding
Result:
Netty provides the `application/zstd` HTTP media-type and `zstd` content-encoding as HTTP header values.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
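For illustration, a hedged sketch of using the new values (assuming they are exposed through the usual `HttpHeaderValues` constants; `response` is an `HttpResponse`):

```java
// Advertise that the response body is Zstandard-compressed.
response.headers().set(HttpHeaderNames.CONTENT_ENCODING, HttpHeaderValues.ZSTD);
```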
Motivation:
At the moment all methods in `ChannelHandler` declare `throws Exception` as part of their method signature. While this is fine for methods that handle inbound events, it is quite confusing for methods that handle outbound events. This is because these methods also take a `ChannelPromise` which actually needs to be fulfilled to signal back either success or failure. Declaring `throws ...` for these methods is confusing at best. We should just always require the implementation to use the passed-in promise to signal back success or failure. Doing so also clears up semantics in general. Since we can't "forbid" throwing `RuntimeException`, we still need to handle it in some way, though. In this case we should just consider it a "bug", log it, and close the `Channel` in question. The user should never let an exception "escape" their implementation and should just use the promise. This also clears up the ownership of the passed-in message etc.
As `flush(ChannelHandlerContext)` and `read(ChannelHandlerContext)` don't take a `ChannelPromise` as argument this also means that these methods can never produce an error. This makes kind of sense as these really are just "signals" for the underlying transports to do something. For `RuntimeException` the same rule is used as for other outbound event handling methods, which is logging and closing the `Channel`.
Modifications:
- Remove `throws Exception` from signature
- Adjust code to not throw and just notify the promise directly
- Adjust unit tests
Result:
Much cleaner API and semantics.
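A hedged sketch of the resulting contract for an outbound handler (the `encode(...)` helper is hypothetical):

```java
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    // No "throws Exception" any more: failures are reported via the promise.
    try {
        ctx.write(encode(msg), promise);   // encode(...) is a hypothetical helper
    } catch (RuntimeException e) {
        ReferenceCountUtil.release(msg);   // we still own the message on failure
        promise.setFailure(e);
    }
}
```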
Motivation:
The `PerMessageDeflateClientExtensionHandler` has the following strange behaviors currently:
* The `requestedServerNoContext` parameter doesn't actually add the `server_no_context_takeover` parameter to the client offer; instead it depends on the requested server window size.
* The handshake will fail if the server responds with a `server_no_context_takeover` parameter and `requestedServerNoContext` is false. According to RFC 7692 (7.1.1.1) the server may do this, and this means that to cover both cases one needs to use two handshakers in the channel pipeline: one with `requestedServerNoContext = true` and one with `requestedServerNoContext = false`.
* The value of the `server_max_window_bits` parameter in the server response is never checked (should be between 8 and 15). And the value of `client_max_window_bits` is checked only in the branch handling the server window parameter.
Modification:
* Add the `server_no_context_takeover` parameter if `requestedServerNoContext` is true.
* Accept a server handshake response which includes the server no context takeover parameter even if we did not request it.
* Check the values of the client and server window size in their respective branches and fail the handshake if they are out of bounds.
Result:
There will be no need to use two handshakers in the pipeline to be lenient in what handshakes are accepted.
Motivation:
Including codec-http in the project and building a native-image out of it using a GraalVM 21.2 nightly can result in a failure.
Modification:
By delaying the initialization of `io.netty.handler.codec.compression.BrotliDecoder` to runtime, native-image will not try to eagerly initialize the class during the image build, avoiding the build failure described in the issue.
Result:
Fixes #11427
Motivation:
HTTP header values are case-sensitive. The expected value for the `x-requested-with` header is `XMLHttpRequest`, not `XmlHttpRequest`.
Modification:
Fix constant's case.
Result:
Correct `XMLHttpRequest` HTTP header value.
__Motivation__
`HttpUtil#normalizeAndGetContentLength()` throws `StringIndexOutOfBoundsException` for empty `content-length` values; it should instead throw `IllegalArgumentException` for all invalid values.
__Modification__
- Throw `IllegalArgumentException` if the `content-length` value is empty.
- Add tests
__Result__
Fixes https://github.com/netty/netty/issues/11408
Motivation:
Netty will fail a handshake for the Per-Message Deflate WebSocket
extension if the server response contains a smaller
`server_max_window_bits` value than the client offered.
However, this is allowed by RFC 7692:
> A server accepts an extension negotiation offer with this parameter
> by including the “server_max_window_bits” extension parameter in the
> extension negotiation response to send back to the client with the
> same or smaller value as the offer.
Modifications:
- Allow the server to respond with a smaller value than offered.
- Change the unit tests to test for this.
Result:
The client will not fail when the server indicates it is using a
smaller window size than offered by the client.
__Motivation__
As described in https://github.com/netty/netty/issues/11370, we should support quoted charset values.
__Modification__
Modify `HttpUtil.getCharset(CharSequence contentTypeValue, Charset defaultCharset)` to trim the double-quotes if present.
__Result__
`HttpUtil.getCharset()` now supports quoted charsets. Fixes https://github.com/netty/netty/issues/11370
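For example:

```java
// Quotes around the charset token are now stripped before the lookup.
Charset charset = HttpUtil.getCharset("text/html; charset=\"utf-8\"", CharsetUtil.ISO_8859_1);
// charset is UTF-8, even though the value is quoted in the header.
```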
Motivation:
This special-case implementation of Promise / Future requires the implementations responsible for completing the promise to have knowledge of this class to provide any value. It also requires that the implementations are able to provide intermediate status while the work is being done. Even throughout the core of Netty it is not really supported most of the time, and so it just brings more complexity without real gain.
Let's remove it completely, which is better than only supporting it sometimes.
Modifications:
Remove Progressive* API
Result:
Code cleanup. Fixes https://github.com/netty/netty/issues/8519
Motivation:
Some time ago we introduced the concept of Void*Promise. As it turned out, this was not a good idea at all, as basically each handler in the pipeline needs to be very careful to handle it correctly. We should just remove this "optimization".
Modifications:
- Remove Void*Promise and all the related APIs
- Remove tests which were related to Void*Promise
Result:
Less error-prone API
Motivation:
When decoding the cookies on the server, the "Cookie" HTTP request header value should be considered.
The "Set-Cookie" HTTP response header is used to send cookies from the server to the user agent.
Modification:
- Specify in javadoc that the "Cookie" HTTP request header value should be considered and
not the "Set-Cookie" HTTP response header value.
Result:
Correct ServerCookieDecoder javadoc
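For reference, decoding on the server side looks like this:

```java
// Server side: decode the client's "Cookie" request header,
// not the "Set-Cookie" response header.
String cookieHeader = request.headers().get(HttpHeaderNames.COOKIE);
if (cookieHeader != null) {
    Set<Cookie> cookies = ServerCookieDecoder.STRICT.decode(cookieHeader);
}
```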
Motivation:
There should always be a default in switch blocks.
Modification:
Add default
Result:
Code cleanup.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
There should always be a default in switch blocks.
Modification:
Add default
Result:
Code cleanup.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
When searching for the delimiter, the decoder logic within HttpPostBodyUtil
was not checking whether the remaining space was large enough to contain the
delimiter, while it should.
Modifications:
Add a check that toRead is greater than or equal to delimiterLength before
entering the loop. If the check fails, the delimiter obviously cannot be found.
Add a JUnit test to prevent regressions.
Result:
No more IndexOutOfBoundsException
Fixes #11334
Motivation:
Every switch block should also have a default case.
Modification:
Add a default block in DefaultHttpHeaders to ensure we do not fall through by mistake.
Result:
Cleanup
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
JUnit 5 is more expressive, extensible, and composable in many ways, and it's better able to run tests in parallel.
Modifications:
Use JUnit5 in tests
Result:
Related to https://github.com/netty/netty/issues/10757
Motivation:
Let's also build on Windows during PR validation.
Modifications:
Add a Windows build to PR validation.
Result:
Validate that all tests also pass on Windows.
Motivation:
A user might want to handle a certain HTTP upgrade request differently
than what `HttpServerUpgradeHandler` does by default. For example, a
user could let `HttpServerUpgradeHandler` handle HTTP/2 upgrades but
not WebSocket upgrades.
Modifications:
- Added `HttpServerUpgradeHandler.isUpgrade(HttpRequest)` so a user can
tell `HttpServerUpgradeHandler` to pass the request as it is to the
next handler.
Result:
- A user can handle a certain upgrade request specially.
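A hedged sketch of the intended use, assuming `isUpgrade(HttpRequest)` is an overridable hook (the codec setup is omitted):

```java
HttpServerUpgradeHandler upgradeHandler =
        new HttpServerUpgradeHandler(sourceCodec, upgradeCodecFactory) {
    @Override
    protected boolean isUpgrade(HttpRequest request) {
        // Let HTTP/2 upgrades through, but pass WebSocket upgrades to the next handler.
        return !request.headers().contains(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET, true)
                && super.isUpgrade(request);
    }
};
```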
Motivation:
Netty lacks client side support for decompressing Brotli compressed response bodies.
Modification:
* Introduce an optional dependency on brotli4j by @hyperxpro. It will be up to the user to provide the brotli4j libraries for the target platform on the classpath. brotli4j is currently available for Linux, macOS and Windows, x86 only.
* Introduce BrotliDecoder in codec module
* Plug it into `HttpContentDecompressor` for HTTP/1 and `DelegatingDecompressorFrameListener` for HTTP/2
* Add test in `HttpContentDecoderTest`
* Add `BrotliDecoderTest`, which does not extend the flaky-looking `AbstractDecoderTest`
Result:
Netty now supports decompressing Brotli-compressed response bodies.
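For example, a client pipeline like the following now transparently inflates Brotli-encoded bodies (provided the brotli4j natives for the platform are on the classpath):

```java
ChannelPipeline p = channel.pipeline();
p.addLast(new HttpClientCodec());
// Handles gzip/deflate and, with this change, also "br" content-encoding.
p.addLast(new HttpContentDecompressor());
```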
Motivation:
Some of the HttpPostMultipartRequestDecoder-specific tests were included in HttpPostRequestDecoderTest. We should move them into the correct test class.
Modifications:
Move specific tests
Result:
Cleanup
Motivation:
Two years ago a change removed the default clearing of all HttpData, whether
disk-based or memory-based.
A lot of users were probably releasing HttpData directly, so there was no issue.
But, as the Javadoc says, `decoder.destroy()` should also clean up
memory-based HttpData, and not only disk-based HttpData as it currently does.
Change:
- Add to the `destroy()` method the code necessary to release the
underlying memory-based HttpData where needed.
- Change one JUnit test (using mixed, memory and disk based factories)
in order to check the correctness of this behavior and to really act
as a handler (releasing buffers or requests).
- Modify one JUnit test to check validity when a delimiter is present in the chunk
but no CRLF/LF (a false delimiter), to ensure correctness.
Result:
No more memory leak issues.
Note that the List and the Map are still not cleaned, since they were not
before. No change is made here, since it could cause backward-compatibility issues.
Fix issues #11175 and #11184
Motivation:
We need to call destroy() if the constructor of HttpPostMultipartRequestDecoder throws as otherwise we may leak memory.
Modifications:
- Call destroy() if we throw
- Add unit test
Result:
No more leaks when constructor throws
Co-authored-by: Frederic Bregier <frederic.bregier@waarp.fr>
Motivation:
We didn't correctly handle the case where no content-type header was found or where the charset was illegal, and just threw an NPE or ICE. In both cases we should throw an ErrorDataDecoderException to reflect what is documented in the javadocs.
Modifications:
- Throw correct exception
- Merge private method into the constructor as it is only used there
- Add unit tests
Result:
Throw expected exceptions on decoding errors
Motivation:
When creating a WebSocketServerProtocolConfig, the check that the URI path starts
with '/' only accepts '/' or '//subPath', but '/subPath' should be
accepted as well.
Modifications:
In `WebSocketServerProtocolHandshakeHandler.isWebSocketPath()`, treat '/' as a special case.
Result:
'/subPath' now passes the check.
Motivation:
Null checks resulting in a NullPointerException or IllegalArgumentException, numeric range checks (>0, >=0), and non-empty string/array checks must never be anonymous but must include the name of the parameter or variable being checked. They must be specific and should not be combined with "OR logic" (if (a == null || b == null) throw new NullPointerException()).
Modifications:
* import static relevant checks
* Replace manual checks with ObjectUtil methods
Result:
All needed checks are done with ObjectUtil, and some exception messages are improved.
Fixes #11170
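For illustration, the resulting pattern looks like this (the class is hypothetical):

```java
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositive;

final class Config {
    private final String name;
    private final int maxSize;

    Config(String name, int maxSize) {
        this.name = checkNotNull(name, "name");           // NPE message names the parameter
        this.maxSize = checkPositive(maxSize, "maxSize");  // IAE message names the parameter
    }
}
```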
Motivation:
At the moment we only expose close(...) methods that take a Channel as a parameter. This can be problematic as the write will start at the end of the pipeline, which may contain ChannelOutboundHandler implementations that do not expect WebSocketFrame objects. We should also support passing in a ChannelHandlerContext as the starting point for the write, which ensures that the WebSocketFrame objects will be handled correctly from that position in the pipeline.
Modifications:
- Add new close(...) methods that take a ChannelHandlerContext
- Add javadoc sentence to point users to the new methods.
Result:
Be able to "start" the close at the right position in the pipeline.
Motivation:
When the memory-based factory is used, if the first chunk starts with a line break, the HttpData
is not filled with the currently available buffer if the delimiter has not been found yet, while it
could be.
Fix the JavaDoc to note the potential wrong usage of content() or getByteBuf() if the HttpData has
huge content, with the risk of an out-of-memory error.
Fix the JavaDoc to explain how to properly release the factory, whether it is in memory,
disk or mixed mode.
Fix issue #11143
Modifications:
First, when the delimiter is not found, instead of searching for a line break from readerIndex(), we should search
from readerIndex() + readableBytes() - delimiter size, since this is the only part where a useful
line break could be found, except if readableBytes is less than the delimiter size (then we search from
readerIndex()).
Second, when a Memory HttpData is created, it should be assigned an empty buffer to be
consistent with the other implementations (Disk or Mixed mode).
We cannot change the default behavior of the content() or getByteBuf() of the Memory based HttpData
since the ByteBuf is supposed to be null when released, but not empty.
When a new ByteBuf is added, one more check verifies if the current ByteBuf is empty, and if so, it
is released and replaced by the new one, without creating a new CompositeByteBuf.
Result:
In the tests testBIgFileUploadDelimiterInMiddleChunkDecoderMemoryFactory and related tests for the other modes,
the buffers start with a CRLF.
When we offer only the prefix part of the multipart (no data at all), the current partial HttpData has
an empty buffer.
The first time we offer the data starting with CRLF to the decoder, it now
has a correct current partial HttpData with a non-empty buffer.
The Benchmark was re-run against this new version.
Old Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 4,037 ± 0,358 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 4,226 ± 0,471 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,875 ± 0,029 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 4,346 ± 0,275 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,044 ± 0,020 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 2,278 ± 0,159 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,174 ± 0,004 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,370 ± 0,065 ops/ms
New Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 5,604 ± 0,415 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 6,058 ± 0,111 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,914 ± 0,031 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 6,053 ± 0,051 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,636 ± 0,141 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 3,033 ± 0,181 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,178 ± 0,006 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,859 ± 0,189 ops/ms
So a +20% to +40% improvement, due to not searching for CRLF/LF in the full buffer when no delimiter is found,
but only in the last delimiter size + 2 (CRLF) bytes.
Motivation:
There are some redundant checks, so they can be removed.
Modifications:
- The first frameOpcode != OPCODE_PING check is removed because the code executes in the branch where frameOpcode <= 7, while OPCODE_PING is 9.
- The second frameOpcode != OPCODE_PING check is removed because it is already checked before.
Result:
Code cleanup
Motivation:
If compression is enabled and the HttpResponse also
implements HttpContent (but not LastHttpContent), then
the buffer will be freed too eagerly.
Modification:
Retain the buffer the same way as is done for the LastHttpContent case.
Note that there is another suspicious looking call a few lines above (if beginEncode returns null). I am not sure if this should also be retained.
Result:
Fixes #11092
Motivation:
As stated by https://tools.ietf.org/html/rfc7540#section-8.1.2.6, we should report a stream error if the content-length does not match the sum of all data frames.
Modifications:
- Verify that the sum of the data frames matches the content-length header, if one was sent.
- Handle multiple content-length headers and also handle negative values
- Add the io.netty.http2.validateContentLength system property, which allows disabling the stricter validation
- Add unit tests
Result:
Correctly handle the case where a content-length header was included but does not match what is sent, and also where the content-length header is invalid.
Motivation
The HttpObjectDecoder accepts input parameters for maxInitialLineLength
and maxHeaderSize. These are important variables since both message
components must be buffered in memory. As such, many decoders (like
Netty and others) introduce constraints. Due to their importance, many
users may wish to add instrumentation on the values of successful
decoder results, or otherwise be able to access these values to enforce
their own supplemental constraints.
While users can perhaps estimate the sizes today, they will not be
exact, due to the decoder being responsible for consuming optional
whitespace and the like.
Modifications
* Add HttpMessageDecoderResult class. This class extends DecoderResult
and is intended for HttpMessage objects successfully decoded by the
HttpObjectDecoder. It exposes attributes for the decoded
initialLineLength and headerSize.
* Modify HttpObjectDecoder to produce HttpMessageDecoderResults upon
successfully decoding the last HTTP header.
* Add corresponding tests to HttpRequestDecoderTest &
HttpResponseDecoderTest.
Co-authored-by: Bennett Lynch <Bennett-Lynch@users.noreply.github.com>
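A hedged sketch of consuming the new result type (the accessor names follow the description above and may differ):

```java
DecoderResult result = request.decoderResult();
if (result.isSuccess() && result instanceof HttpMessageDecoderResult) {
    HttpMessageDecoderResult decoderResult = (HttpMessageDecoderResult) result;
    // e.g. record metrics or enforce supplemental limits here.
    int initialLineLength = decoderResult.initialLineLength();
    int headerSize = decoderResult.headerSize();
}
```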
Motivation:
HttpObjectDecoder may throw an IllegalArgumentException if it encounters
a character that Character.isWhitespace() returns true for, but is not
one of the two valid OWS (optional whitespace) values. Such values may
not have friendly or readable toString() values. We should include the
hex value so that the illegal character can always be determined.
Modifications:
Add hex value as well
Result:
Easier to debug
Co-authored-by: Bennett Lynch <Bennett-Lynch@users.noreply.github.com>
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface that allows arbitrary objects to convert themselves into ByteBuf objects.
- Every place in the code where we did an instanceof check for ByteBuf now does an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
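A minimal sketch of a custom buffer participating via the new interface (`WrappingBuffer` is hypothetical):

```java
public final class WrappingBuffer implements ByteBufConvertible {
    private final ByteBuf delegate;

    public WrappingBuffer(ByteBuf delegate) {
        this.delegate = delegate;
    }

    @Override
    public ByteBuf asByteBuf() {
        // Pipelines that previously did "instanceof ByteBuf" now accept this object
        // and convert it on demand.
        return delegate;
    }
}
```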
Motivation:
- Underlying buffer usage might be erroneous when buffers are released internally
in HttpPostMultipartRequestDecoder.
Two bugs occur:
1) The final file upload seems not to be of the right size.
2) Memory, even in disk mode, increases continuously, while it shouldn't.
- The method `getByte(position)` is called too often within the current implementation
of HttpPostMultipartRequestDecoder.
This implies too much activity, which is visible when PARANOID mode is active.
This is also true in standard mode.
Apply to HttpPostStandardRequestDecoder the same buffer fix previously made in
HttpPostMultipartRequestDecoder.
Finally, in order to ensure we do not rewrite already decoded HttpData when decoding
the next ones within a multipart body, we must ensure the buffers are copied and not a retained slice.
Modification:
- Add some tests to check consistency for HttpPostMultipartRequestDecoder.
Add a package-protected method for testing purposes only.
- Use the `bytesBefore(...)` method instead of `getByte(pos)` in order to limit external
access to the underlying buffer by iteratively retrieving the correct start
position.
It is used to find both LF/CRLF and the delimiter.
Two methods in HttpPostBodyUtil were created for that.
The undecodedChunk is copied when a chunk is added to a multipart HttpData.
The same buffer is also rewritten in order to release the copied memory part.
Result:
Just for note, for memory, disk or mixed mode factories, the release has to be done as follows:

    for (InterfaceHttpData httpData : decoder.getBodyHttpDatas()) {
        httpData.release();
        factory.removeHttpDataFromClean(request, httpData);
    }
    factory.cleanAllHttpData();
    decoder.destroy();
The memory used is minimal in disk or mixed mode. In memory mode, a big file is still
held in memory, but no longer in the undecodedChunk; it is in its own (copied) buffer.
In terms of benchmarking, the results are:
Original code Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 0,152 ± 0,100 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 0,543 ± 0,218 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,001 ± 0,001 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 0,615 ± 0,070 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 0,114 ± 0,063 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 0,664 ± 0,034 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,001 ± 0,001 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 0,620 ± 0,140 ops/ms
New code Benchmark Mode Cnt Score Error Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel thrpt 6 4,037 ± 0,358 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel thrpt 6 4,226 ± 0,471 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel thrpt 6 0,875 ± 0,029 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel thrpt 6 4,346 ± 0,275 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel thrpt 6 2,044 ± 0,020 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel thrpt 6 2,278 ± 0,159 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel thrpt 6 0,174 ± 0,004 ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel thrpt 6 2,370 ± 0,065 ops/ms
In short, for big file transfers the new code is about 7 times faster, while
for a high number of HttpData it is about 4 times faster at Simple level.
At Paranoid level, the new code is about 800 times faster for big file transfers and
about 170 times faster for a high number of HttpData.