Motivation:
While it is technically impossible for a chunk to be larger than 64kb, it still makes things easier to read and more robust to add some size checks to LzfDecoder.
Modifications:
Check the maximum length
Result:
More robust and easier to reason about code
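As a rough illustration, a check along these lines is all that is meant; the constant and class name here are a sketch, not the actual LzfDecoder fields:
```java
import io.netty.handler.codec.compression.DecompressionException;

// Sketch only: LZF chunk lengths are encoded in two bytes, so 64kb is the hard ceiling.
final class LzfChunkBounds {
    static final int MAX_CHUNK_LENGTH = 64 * 1024;

    static void validateChunkLength(int chunkLength) {
        if (chunkLength > MAX_CHUNK_LENGTH) {
            throw new DecompressionException("chunk length exceeds maximum: " + chunkLength
                    + " (expected: <= " + MAX_CHUNK_LENGTH + ')');
        }
    }

    private LzfChunkBounds() { }
}
```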
Motivation:
At the moment FastLzFrameEncoder depends on the buffer type. This is not needed and may cause memory copies.
Modifications:
Rewrite to be able to just act on the ByteBuf
Result:
Fewer memory copies
Motivation:
FastLzFrameDecoder currently does not use the allocator to allocate the output buffer. This means that even if you use the PooledByteBufAllocator you can't make use of the pooling. Besides this, the decoder also does an unnecessary memory copy when no compression is used.
Modifications:
- Allocate the output buffer via the allocator
- Don't allocate and copy if we handle an uncompressed chunk
- Make use of ByteBufChecksum for a few optimizations when running on a recent JDK
Result:
Fewer allocations when using FastLzFrameDecoder
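A hedged sketch of both ideas; the method shape and names are illustrative rather than the real FastLzFrameDecoder internals:
```java
// Illustrative decode step for a single chunk.
private void decodeChunk(ChannelHandlerContext ctx, ByteBuf in, List<Object> out,
                         boolean compressed, int chunkLength, int originalLength) {
    if (!compressed) {
        // Uncompressed chunk: pass a retained slice downstream instead of allocating and copying.
        out.add(in.retainedSlice(in.readerIndex(), chunkLength));
    } else {
        // Compressed chunk: allocate through the allocator so PooledByteBufAllocator pooling applies.
        ByteBuf uncompressed = ctx.alloc().buffer(originalLength, originalLength);
        // ... decompress chunkLength bytes from 'in' into 'uncompressed' ...
        out.add(uncompressed);
    }
    in.skipBytes(chunkLength);
}
```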
Motivation:
There is no test case for `StringDecoder`
Modification:
Add a `StringDecoder` test case
Result:
Added a test case for `StringDecoder`
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
At the moment we do not correctly propagate cancellation in some cases when we use the PromiseNotifier.
Modifications:
- Add PromiseNotifier static method which takes care of cancellation
- Add unit test
- Deprecate ChannelPromiseNotifier
Result:
Correctly propagate cancellation of operation
Co-authored-by: Nitesh Kant <nitesh_kant@apple.com>
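A hedged usage sketch of the new static method; the exact signature is assumed from the description above:
```java
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseNotifier;

// Cascade completion -- success, failure and, with this change, cancellation --
// from the operation's future onto the caller's promise.
void writeAndCascade(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    ChannelFuture future = ctx.write(msg);
    PromiseNotifier.cascade(future, promise);
}
```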
Motivation:
JavaDoc of StandardCompressionOptions should point towards public methods. Also, Brotli tests were failing on Windows.
Modification:
Fixed JavaDoc and enabled Brotli tests on Windows.
Result:
Better JavaDoc and Brotli tests will run on Windows
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
Currently, Netty only has BrotliDecoder which can decode Brotli encoded data. However, BrotliEncoder is missing which will encode normal data to Brotli encoded data.
Modification:
Added BrotliEncoder and CompressionOption
Result:
Fixes #6899.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
At the moment all methods in `ChannelHandler` declare `throws Exception` as part of their method signature. While this is fine for methods that handle inbound events, it is quite confusing for methods that handle outbound events. This is due to the fact that these methods also take a `ChannelPromise`, which actually needs to be fulfilled to signal back either success or failure. Declaring `throws...` for these methods is confusing at best. We should just always require the implementation to use the passed-in promise to signal back success or failure. Doing so also clears up the semantics in general. Because we can't "forbid" throwing `RuntimeException`, we still need to handle this in some way though. In this case we should just consider it a "bug", and so log it and close the `Channel` in question. The user should never have an exception "escape" their implementation and should just use the promise. This also clears up the ownership of the passed-in message etc.
As `flush(ChannelHandlerContext)` and `read(ChannelHandlerContext)` don't take a `ChannelPromise` as argument, this also means that these methods can never produce an error. This makes kind of sense as these really are just "signals" for the underlying transports to do something. For `RuntimeException` the same rule is used as for other outbound event handling methods, which is logging and closing the `Channel`.
Modifications:
- Remove `throws Exception` from signature
- Adjust code to not throw and just notify the promise directly
- Adjust unit tests
Result:
Much cleaner API and semantics.
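A hedged sketch of the intended style, assuming the unified Netty 5 `ChannelHandler` interface described above (the handler and its check are illustrative):
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.EncoderException;
import io.netty.util.ReferenceCountUtil;

public final class ByteBufOnlyOutboundHandler implements ChannelHandler {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        if (!(msg instanceof ByteBuf)) {
            // Never throw: release what we now own and fail the promise instead.
            ReferenceCountUtil.release(msg);
            promise.setFailure(new EncoderException("expected a ByteBuf but got: " + msg.getClass()));
            return;
        }
        ctx.write(msg, promise);
    }
}
```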
Motivation:
The native module is not yet available on aarch64 Mac / Windows thus causing tests in codec/ to fail (specifically all the Brotli ones, since the module could not be loaded).
Modification:
Disable Brotli tests when platform is not supported
Result:
Tests under codec/ now pass under Mac/aarch64 and Windows/aarch64
Motivation:
We failed to account for the last header when estimating the buffer
size. If the data does not compress enough to make space for the
last header we would exceed the ByteBuf's capacity.
Modifications:
Call #ensureWritable with appropriate capacity for the footer ByteBuf
before writing the footer.
Result:
If there is not enough space left in the buffer, the buffer will be
expanded.
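A minimal sketch of that guard, assuming a gzip-style footer (CRC32 and ISIZE, little-endian); the constant and class name are illustrative:
```java
import io.netty.buffer.ByteBuf;

final class FooterWriter {
    private static final int FOOTER_LENGTH = 8; // CRC32 + ISIZE, 4 bytes each

    static void writeFooter(ByteBuf footer, int crc32, int uncompressedSize) {
        // Expand the buffer if the size estimate left no room for the trailer.
        footer.ensureWritable(FOOTER_LENGTH);
        footer.writeIntLE(crc32);
        footer.writeIntLE(uncompressedSize);
    }

    private FooterWriter() { }
}
```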
Motivation:
Various compression codecs are currently hard-coded to only support buffers that are backed by byte-arrays that they are willing to expose.
This is efficient for most of the codecs, but compatibility suffers, as we are not able to freely choose our buffer implementations when compression codecs are involved.
Modification:
Add code to the compression codecs, that allow them to handle buffers that don't have arrays.
For many of the codecs, this unfortunately involves allocating temporary byte-arrays, and copying back-and-forth.
We have to do it that way since some codecs can _only_ work with byte-arrays.
Also add tests to verify that this works.
Result:
It is now possible to use all of our compression codecs with both on-heap and off-heap buffers.
The default buffer choice has not changed, however, so performance should be unaffected.
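A hedged sketch of the fallback pattern for a codec that can only consume byte arrays; variable names are illustrative:
```java
import io.netty.buffer.ByteBuf;

// Feed an array-only codec from any ByteBuf, copying only when we have to.
static void feedArrayOnlyCodec(ByteBuf in) {
    final byte[] input;
    final int inputOffset;
    final int length = in.readableBytes();
    if (in.hasArray()) {
        input = in.array();
        inputOffset = in.arrayOffset() + in.readerIndex();
    } else {
        input = new byte[length];
        in.getBytes(in.readerIndex(), input);
        inputOffset = 0;
    }
    // ... run the array-only codec over input[inputOffset .. inputOffset + length) ...
}
```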
Motivation:
This special case implementation of Promise / Future requires the implementations responsible for completing the promise to have knowledge of this class to provide value. It also requires that the implementations are able to provide intermediate status while the work is being done. Even throughout the core of Netty it is not really supported most of the time and so just brings more complexity without real gain.
Let's remove it completely, which is better than only supporting it sometimes.
Modifications:
Remove Progressive* API
Result:
Code cleanup.... Fixes https://github.com/netty/netty/issues/8519
Motivation:
Sometime in the past we introduced the concept of Void*Promise. As it turned out this was not a good idea at all, as basically each handler in the pipeline needs to be very careful to correctly handle this. We should just remove this "optimization".
Modifications:
- Remove Void*Promise and all the related APIs
- Remove tests which were related to Void*Promise
Result:
Less error-prone API
Motivation:
JUnit 5 is more expressive, extensible, and composable in many ways, and it's better able to run tests in parallel.
Modifications:
Use JUnit5 in tests
Result:
Related to https://github.com/netty/netty/issues/10757
Motivation:
Netty lacks client side support for decompressing Brotli compressed response bodies.
Modification:
* Introduce optional dependency to brotli4j by @hyperxpro. It will be up to the user to provide the brotli4j libraries for the target platform in the classpath. brotli4j is currently available for Linux, OSX and Windows, all for x86 only.
* Introduce BrotliDecoder in codec module
* Plug it onto `HttpContentDecompressor` for HTTP/1 and `DelegatingDecompressorFrameListener` for HTTP/2
* Add test in `HttpContentDecoderTest`
* Add `BrotliDecoderTest`, which doesn't extend `AbstractDecoderTest` since that looks flaky
Result:
Netty now supports decompressing Brotli compressed response bodies.
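A hedged client-pipeline sketch; with brotli4j on the classpath, `HttpContentDecompressor` is expected to handle `Content-Encoding: br` after this change:
```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.HttpClientCodec;
import io.netty.handler.codec.http.HttpContentDecompressor;

void configure(ChannelPipeline pipeline) {
    pipeline.addLast(new HttpClientCodec());
    pipeline.addLast(new HttpContentDecompressor()); // now also decodes Brotli ("br") bodies
}
```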
Motivation:
Null checks that result in a NullPointerException or IllegalArgumentException, numeric range checks (>0, >=0), and non-empty string/array checks must never be anonymous but must name the parameter or variable being checked. They must be specific and should not be combined with "OR logic" (if (a == null || b == null) throw new NullPointerException()).
Modifications:
* import static relevant checks
* Replace manual checks with ObjectUtil methods
Result:
All needed checks are done with ObjectUtil, and some exception messages are improved.
Fixes #11170
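A short before/after sketch of the enforced style (field names are illustrative):
```java
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositive;

// Before: anonymous, combined check.
//   if (config == null || name == null) { throw new NullPointerException(); }
// After: one specific check per parameter, each naming what was invalid.
this.config = checkNotNull(config, "config");
this.name = checkNotNull(name, "name");
this.maxFrameSize = checkPositive(maxFrameSize, "maxFrameSize");
```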
Motivation:
The JDK deflate implementation added support for operating on ByteBuffers in Java 11 or so.
This means that we don't need to restrict that implementation to ByteBufs that are heap based and can expose arrays.
Modification:
Add clauses to JdkZlibEncoder and JdkZlibDecoder for handling ByteBufs that don't have arrays, but do have one nioByteBuffer.
Expand the test coverage in JdkZlibTest to include all relevant combinations of buffer types and data types.
Result:
The JdkZlibEncoder and JdkZlibDecoder should now work on basically all non-composite ByteBufs, and likely also composite ByteBufs that have exactly one component.
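A hedged sketch of the new clause: on Java 11+, `Deflater` accepts a `ByteBuffer` directly, so a direct buffer with a single NIO view can be fed without copying:
```java
import java.util.zip.Deflater;
import io.netty.buffer.ByteBuf;

static void setDeflaterInput(Deflater deflater, ByteBuf in) {
    if (in.hasArray()) {
        deflater.setInput(in.array(), in.arrayOffset() + in.readerIndex(), in.readableBytes());
    } else if (in.nioBufferCount() == 1) {
        // Java 11+: hand over the single ByteBuffer view of the buffer.
        deflater.setInput(in.nioBuffer(in.readerIndex(), in.readableBytes()));
    } else {
        throw new IllegalArgumentException("buffers with multiple NIO components are not handled here");
    }
}
```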
Motivation:
This library is obsolete; it hasn't been updated since 2013.
Modification:
Remove jzlib dependency, integration code and tests.
Result:
- No more jzlib support.
- Less code.
- The JdkZlib* code can now be simplified because it no longer shares anything with jzlib.
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface that allows arbitrary objects to convert themselves into ByteBuf objects.
- Every place in the code where we did an instanceof check for ByteBuf, we now do an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
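A sketch of the changed check, using the interface and method named above (the `io.netty.buffer` package location is assumed):
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufConvertible;

// Previously: if (msg instanceof ByteBuf) { ... }
if (msg instanceof ByteBufConvertible) {
    ByteBuf buf = ((ByteBufConvertible) msg).asByteBuf();
    // ... operate on buf as before ...
}
```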
Motivation:
If two different headers end up in the same hash bucket, and you are iterating the header that is not the first in the bucket, and you use the iterator to remove the first element returned from the iterator, then you would get a NullPointerException.
Modification:
Change the DefaultHeaders iterator remove method, to re-iterate the hash bucket and unlink the entry once found, if we don't have any existing iteration starting point.
Also made DefaultHeaders.remove0 package private to avoid a synthetic method indirection.
Result:
Removing entries via iterators from DefaultHeaders is now robust against hash collisions.
Motivation:
We had multiple bugs in JdkZlibDecoder which could lead to decoding errors when the data was received in a fragmented manner.
Modifications:
- Correctly handle skipping of comments
- Correctly handle footer / header decoding
- Add unit test that verifies the correct handling of fragmentation
Result:
Fixes https://github.com/netty/netty/issues/10875
Motivation:
We need to carefully check for null before we pass the cumulation buffer into decodeLast, as callDecode(...) may have removed the codec already and so set cumulation to null.
Modifications:
- Check for null and, if we see null, use Unpooled.EMPTY_BUFFER
- Only call decodeLast(...) if callDecode(...) didn't remove the handler yet.
Result:
Fixes https://github.com/netty/netty/issues/10802
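A hedged, simplified sketch of the guard inside ByteToMessageDecoder's removal path:
```java
// Only run decodeLast(...) if callDecode(...) did not already remove this handler,
// and never pass a null cumulation buffer.
if (!ctx.isRemoved()) {
    ByteBuf buf = cumulation != null ? cumulation : Unpooled.EMPTY_BUFFER;
    decodeLast(ctx, buf, out);
}
```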
Motivation:
https in xmlns URIs does not work and makes the maven release plugin fail:
```
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.779 s
[INFO] Finished at: 2020-11-10T07:45:21Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare (default-cli) on project netty-parent: Execution default-cli of goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare failed: The namespace xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" could not be added as a namespace to "project": The namespace prefix "xsi" collides with an additional namespace declared by the element -> [Help 1]
[ERROR]
```
See also https://issues.apache.org/jira/browse/HBASE-24014.
Modifications:
Use http for xmlns
Result:
Be able to use maven release plugin
Motivation:
There is no need for ByteProcessor to throw a checked exception.
The declared checked exception causes unnecessary code complications just to propagate it.
This can be cleaned up.
Modification:
ByteProcessor.process no longer declares to throw a checked exception, and all the places that were trying to cope with the checked exception have been simplified.
Result:
Simpler code.
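A usage sketch of the simplified contract; implementations and callers no longer need any try/catch plumbing:
```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

static int findLineFeed(ByteBuf buf) {
    return buf.forEachByte(new ByteProcessor() {
        @Override
        public boolean process(byte value) { // no "throws Exception" anymore
            return value != '\n';            // keep scanning until we hit LF
        }
    });                                      // returns -1 if no LF is readable
}
```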
Motivation:
HTTP is a plaintext protocol which means that someone may be able
to eavesdrop the data. To prevent this, HTTPS should be used whenever
possible. However, maintaining the use of https:// in all URLs may be
difficult. The nohttp tool can help here. The tool scans all the files
in a repository and reports where http:// is used.
Modifications:
- Added nohttp (via checkstyle) into the build process.
- Suppressed findings for the websites
that don't support HTTPS or that are not reachable
Result:
- Prevent using HTTP in the future.
- Encourage users to use HTTPS when they follow the links they found in
the code.
Motivation:
JUnit deprecated Assert.assertThat(...)
Modifications:
Use MatcherAssert.assertThat(...) as replacement for deprecated method
Result:
Fewer deprecation warnings
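The replacement, illustrated on a hypothetical assertion:
```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

// Before (deprecated): org.junit.Assert.assertThat(decoded, is("expected"));
// After:
assertThat(decoded, is("expected"));
```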
Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues and got rid of false alarms in the LGTM report.
Raise the Netty 5 minimum required Java version to Java 11.
Motivation:
Java 11 has been out for some time, and Netty 5 is still some ways out.
There are also many good features in Java 11 that we wish to use, such as VarHandles, var-keyword, and the module system.
There is no reason for Netty 5 to not require Java 11, since Netty 4.x will still be supported for the time being.
Modification:
Remove everything in the pom files related to Java versions older than Java 11.
Remove the animal-sniffer plug-in and rely on the `--release` compiler flag instead.
Remove docker files related to Java versions older than Java 11.
Remove the copied SCTP APIs -- we should test this commit independently on Windows.
Remove the OpenJdkSelfSignedCertGenerator.java file and just always use Bouncy Castle for generating self-signed certificates for testing.
Make netty-testsuite tests pass by including Bouncy Castle as a test dependency, so we're able to generate our self-signed certificate.
Result:
Java 11 is now the minimum required Java version.
Motivation:
Avoid implicit conversions to narrower types in
AbstractMemoryHttpData and Bzip2HuffmanStageEncoder classes
reported by LGTM.
Modifications:
Updated the classes to avoid implicit casting to narrower types.
It doesn't look like an integer overflow is possible there,
therefore no overflow checks were added.
Result:
No warnings about implicit conversions to narrower types.
Motivation:
We wish to use Unsafe as little as possible, and Java 8 allows us
to take some short-cuts or play some tricks with generics,
for the purpose of working around having to declare all checked
exceptions. Ideally all checked exceptions would be declared, but
the code base is not ready for that yet.
Modification:
The call to UNSAFE.throwException has been removed, so when we need
that feature, we instead use the generic exception trick.
In many cases, Java 8 allows us to throw Throwable directly. This
happens in cases where no exception is declared to be thrown in a
scope.
Finally, some warnings have also been fixed, and some imports have
been reorganised and cleaned up while I was modifying the files
anyway.
Result:
We no longer use Unsafe for throwing any exceptions.
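A hedged sketch of the generic exception trick referred to above (class and method names are illustrative, not Netty's actual helper):
```java
public final class ThrowUtil {
    // Rethrow any Throwable without declaring it and without Unsafe: E is inferred as
    // RuntimeException at the call site, and the cast is erased at runtime, so the
    // original Throwable is thrown unchanged.
    public static void throwUnchecked(Throwable t) {
        ThrowUtil.<RuntimeException>sneakyThrow(t);
    }

    @SuppressWarnings("unchecked")
    private static <E extends Throwable> void sneakyThrow(Throwable t) throws E {
        throw (E) t;
    }

    private ThrowUtil() { }
}
```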
Motivation:
Avoid keeping unused dependencies around.
Modification:
Remove all references to javassist dependency, since it does not appear to be used by anything.
Result:
One less dependency to worry about.
Motivation:
We should provide details about why an IOOBE was thrown
Modification:
Add IndexOutOfBoundsException error information in io.netty.util.internal.AppendableCharSequence and io.netty.handler.codec.CodecOutputList class
Result:
Easier to debug
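A sketch of the intended style of message (bounds and names are illustrative):
```java
// Instead of a bare "throw new IndexOutOfBoundsException()", say what was out of bounds.
if (index < 0 || index >= size) {
    throw new IndexOutOfBoundsException(
            "expected: 0 <= index(" + index + ") < size(" + size + ')');
}
```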
Motivation:
`Date`, `Expires`, and `Set-Cookie` headers are being generated with a 1-digit day of month,
e.g. `Sun, 6 Nov 1994 08:49:37 GMT`. RFC 2616 specifies that `Date` and `Expires` headers should
use "a fixed-length subset of that defined by RFC 1123" which includes a 2-digit day of month.
RFC 6265 is lax in its specification of the `Set-Cookie` header and permits a 2-digit day of month.
See: https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html
See: https://tools.ietf.org/html/rfc1123#page-55
See: https://tools.ietf.org/html/rfc6265#section-5.1.1
Modifications:
- Update `DateFormatter` to correctly implement RFC 2616 headers
Result:
```
Date: Sun, 06 Nov 1994 08:49:37 GMT
Expires: Sun, 06 Nov 1994 08:49:37 GMT
Set-Cookie: id=a3fWa; Expires=Sun, 06 Nov 1994 08:49:37 GMT
```
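A hedged usage sketch; `DateFormatter.format(Date)` is expected to produce the fixed-length form after this change:
```java
import java.util.Date;
import io.netty.handler.codec.DateFormatter;

// 784111777000L is "Sun, 06 Nov 1994 08:49:37 GMT", the instant used in the RFC examples.
String headerValue = DateFormatter.format(new Date(784111777000L));
// headerValue -> "Sun, 06 Nov 1994 08:49:37 GMT" (zero-padded day of month)
```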
Motivation:
To ensure we always recycle the CodecOutputList, we should do it in a finally block
Modifications:
Call CodecOutputList.recycle() in finally
Result:
Less chance of non-recycled lists. Related to https://github.com/netty/netty/issues/10183
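The pattern, as a sketch (CodecOutputList is internal; the decode call is illustrative):
```java
CodecOutputList out = CodecOutputList.newInstance();
try {
    decode(ctx, in, out);
    // ... fire decoded messages through the pipeline ...
} finally {
    out.recycle(); // always return the list to its pool, even if decode(...) threw
}
```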
Motivation:
It is more efficient to avoid allocating objects when we don't need to.
Modification:
Don't allocate a `LinkedList` for returning an empty list of header values when the header doesn't exist at all.
Result:
Fewer allocations
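A sketch of the change in a getAll-style lookup (names illustrative):
```java
import java.util.Collections;

// Before: return a fresh LinkedList even when the header is absent.
// After: short-circuit with the shared immutable empty list.
if (firstEntryForName == null) {
    return Collections.emptyList();
}
```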
Motivation:
Since LZF supports both compressed and non-compressed formats, we can make LzfEncoder length-aware. This lets the user control when compression is applied.
Modification:
When the data length is over compressThreshold, LzfEncoder uses the compressed format to compress the data. Otherwise, it only uses the non-compressed format. Whichever format the encoder uses, LzfDecoder can decompress the data just fine.
Result:
Gives users control over compression capabilities
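A hedged sketch of the decision the encoder now makes; only compressThreshold comes from the description, the helper names are illustrative:
```java
// Chunks at or below the threshold are framed in LZF's non-compressed chunk format;
// larger chunks are compressed. LzfDecoder handles both transparently.
if (length > compressThreshold) {
    writeCompressedChunk(in, out, length);
} else {
    writeNonCompressedChunk(in, out, length);
}
```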
Motivation:
As we have Java 8 as a minimum target, we can use MethodHandles. We should do so when we expect a method to be called multiple times.
Modifications:
- Replace usage of reflection with MethodHandles where it makes sense
- Remove some code which was there to support Java < 8
Result:
Faster code
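A general-purpose sketch of the pattern (not Netty code): resolve the handle once, then invoke it cheaply on the hot path:
```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

final class NameAccessor {
    // Resolved once; invoking a cached MethodHandle is much cheaper than Method.invoke.
    private static final MethodHandle GET_NAME;
    static {
        try {
            GET_NAME = MethodHandles.lookup().findVirtual(
                    Thread.class, "getName", MethodType.methodType(String.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static String nameOf(Thread thread) throws Throwable {
        return (String) GET_NAME.invokeExact(thread);
    }

    private NameAccessor() { }
}
```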
Motivation:
At the moment the next / prev references are not set to "null" in the DefaultChannelHandlerContext once the ChannelHandler is removed. This is bad, as it basically lets users still use the ChannelHandlerContext of a ChannelHandler after it is removed and may produce very surprising behaviour.
Modifications:
- Fail if someone tries to use the ChannelHandlerContext once the ChannelHandler was removed (for outbound operations fail the promise, for inbound fire the error through the ChannelPipeline)
- Fix some handlers to ensure we do not use the ChannelHandlerContext after the handler was removed
- Adjust DefaultChannelPipeline / DefaultChannelHandlerContext to fix races with removal / replacement of handlers
Result:
Cleaned-up behaviour that is more predictable for pipeline modifications
Motivation:
The Snappy crc32c checksum produced by SnappyFrameEncoder may fail to be validated by snappy decoders in other languages, such as golang/snappy.
Modification:
- Make the 4-byte cast after the mask operation, because whether the upper bytes 4-7 of the long Java value are retained makes a difference in the result of (checksum >> 15 | checksum << 17) + 0xa282ead8.
Result:
Checksum correctly calculated
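A hedged sketch of the corrected order: the rotate-and-add happens while the CRC still sits in a long with zeroed upper bytes, and only the final result is narrowed to 4 bytes:
```java
// Mask function from the Snappy framing format: rotate right by 15, add a constant,
// then truncate to 32 bits.
static int maskChecksum(long checksum) {
    return (int) ((checksum >> 15 | checksum << 17) + 0xa282ead8);
}
```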
Motivation:
It is impossible to know in advance how much memory will be needed to
decompress a stream of bytes that was compressed using the DEFLATE
algorithm. In theory, up to 1032 times the compressed size could be
needed. For untrusted input, an attacker could exploit this to exhaust
the memory pool.
Modifications:
ZlibDecoder and its subclasses now support an optional limit on the size
of the decompressed buffer. By default, if the limit is reached,
decompression stops and a DecompressionException is thrown. Behavior
upon reaching the limit is modifiable by subclasses in case they desire
something else.
Result:
The decompressed buffer can now be limited to a configurable size, thus
mitigating the possibility of memory pool exhaustion.
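A hedged usage sketch; the extra constructor argument (a cap on the decompressed buffer size) is assumed from the description above:
```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.compression.JdkZlibDecoder;
import io.netty.handler.codec.compression.ZlibWrapper;

void configure(ChannelPipeline pipeline) {
    // Stop decompressing (with a DecompressionException) once 1 MiB has been allocated.
    pipeline.addLast(new JdkZlibDecoder(ZlibWrapper.GZIP, 1024 * 1024));
}
```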
Motivation:
In the next major version of Netty users should use ChannelHandler everywhere. We should ensure we do the same.
Modifications:
Replace usage of deprecated classes / interfaces with ChannelHandler
Result:
Use non-deprecated code
Motivation:
We should close the encoder when `LzfEncoder` is removed from the pipeline.
Modification:
Call `encoder.close()` when `handlerRemoved` is triggered.
Result:
Close the encoder to release its internal buffer.
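A sketch of the change; `encoder` stands for the internal LZF chunk encoder field:
```java
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
    encoder.close(); // release the internal buffer held by the LZF chunk encoder
    super.handlerRemoved(ctx);
}
```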