Motivation:
To use jboss-marshalling on JDK9+, extra command-line arguments are needed, as it makes use of reflection internally.
Modifications:
Skip the jboss-marshalling tests when running on JDK9+ and initialization of the MarshallingFactory fails.
Result:
Be able to build on the latest JDK9 release.
Motivation:
We need to ensure we pass all tests when sun.misc.Unsafe is not present.
Modifications:
- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details that take into account whether sun.misc.Unsafe is present
Result:
Tests pass even without sun.misc.Unsafe.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
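A minimal sketch of the wrapping described above, assuming a hypothetical connect helper (the real SocketUtils API may differ):

```java
import java.io.IOException;
import java.net.Socket;
import java.net.SocketAddress;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Hypothetical helper in the spirit of SocketUtils: the doPrivileged block
// means only Netty's protection domain needs the SocketPermission.
final class SocketUtilsSketch {
    private SocketUtilsSketch() { }

    static void connect(final Socket socket, final SocketAddress remote,
                        final int timeoutMillis) throws IOException {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                    socket.connect(remote, timeoutMillis);
                    return null;
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```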
Motivation:
LZ4FrameEncoder maintains an internal buffer of incoming data to compress, and only writes out compressed data when a size threshold is reached. LZ4FrameEncoder does not override the flush() method, so the only ways to flush data down the pipeline are to write more data or to close the channel.
Modifications:
Override the flush() function to flush on demand. Also override the allocateBuffer() function so we can more accurately size the output buffer (instead of needing to potentially realloc via buffer.ensureWritable()).
Result:
Implementation works as described.
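A rough sketch of the flush-on-demand idea (the field and compressBufferedData are illustrative stand-ins, not the actual LZ4FrameEncoder internals):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;

// Illustrative stand-in for the real encoder: on flush(), compress and write
// whatever input is still buffered, then propagate the flush downstream.
public class FlushOnDemandEncoderSketch extends ChannelOutboundHandlerAdapter {
    private ByteBuf buffer; // accumulates not-yet-compressed input

    @Override
    public void flush(ChannelHandlerContext ctx) throws Exception {
        if (buffer != null && buffer.isReadable()) {
            ByteBuf out = ctx.alloc().buffer();
            compressBufferedData(out); // hypothetical internal compression step
            ctx.write(out);
        }
        ctx.flush(); // make sure the flush still reaches the transport
    }

    private void compressBufferedData(ByteBuf out) {
        // placeholder: the real encoder would LZ4-compress `buffer` into `out`
        out.writeBytes(buffer);
    }
}
```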
Motivation:
Thought there might be a bug, so added a test case to verify everything works as expected.
Modifications:
Added a test case.
Result:
More test-coverage.
Motivation:
IntelliJ issues several warnings.
Modifications:
* `ClientCookieDecoder` and `ServerCookieDecoder`:
  * `nameEnd`, `valueBegin` and `valueEnd` don't need to be initialized
  * `keyValLoop` loop doesn't need to be labelled, as it's the innermost one (same thing for labelled breaks)
  * Remove `if (i != headerLen)` as the condition is always true
* `ClientCookieEncoder` javadoc still mentions the old logic
* `DefaultCookie`, `ServerCookieEncoder` and `DefaultHttpHeaders` use ternary ops that can be turned into simple boolean ones
* `DefaultHeaders` uses a for(int) loop over an array. It can be turned into a foreach one as javac doesn't allocate an iterator to iterate over arrays
* `DefaultHttp2Headers` and `AbstractByteBuf` `equal` can be turned into a single boolean statement
Result:
Cleaner code
Motivation:
* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep on using deprecated HttpHeaderDateFormat
Modifications:
* Move HttpHeaderDateFormatter to netty-codec and rename it to HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter
Result:
Faster and more consistent code
Motivation:
We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies releasing ReferenceCounted objects a bit. The problem is that while it simplifies things, it increases memory usage a lot, as memory may not be freed in a timely manner.
Modifications:
- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.
Result:
Less memory needed to build netty while running the tests.
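For illustration, the pattern that replaces releaseLater(...) in the tests might look like this (test name is made up):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import org.junit.Test;

public class DeterministicReleaseTest {
    @Test
    public void releaseInFinally() {
        // Previously: ReferenceCountUtil.releaseLater(buf), which defers the
        // release and keeps the memory pinned much longer than necessary.
        ByteBuf buf = Unpooled.buffer();
        try {
            buf.writeInt(42); // ... exercise the buffer ...
        } finally {
            buf.release(); // freed right away, keeping the build's footprint low
        }
    }
}
```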
Motivation:
2c78902ebc ensured buffers were released in the general case but didn't clean up an extra release in LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData, which led to a double release.
Modifications:
LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData should not explicitly release the buffer because decompress will release the buffer
Result:
No more reference count exception and failed test.
Motivation:
c1932a8537 assumed that the LzmaInputStream which wraps a ByteBufInputStream would delegate the close operation to the wrapped stream. This assumption is not true, so we still had a leak. An issue has been logged with our LZMA dependency: https://github.com/jponge/lzma-java/issues/14.
Modifications:
- Force a close on the wrapped stream
Result:
No more leak.
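A sketch of the forced close; the LzmaInputStream(InputStream, Decoder) constructor and package names are assumed from lzma-java and may differ:

```java
import java.io.IOException;
import java.io.InputStream;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;
import lzma.sdk.lzma.Decoder;
import lzma.streams.LzmaInputStream;

final class ForcedCloseSketch {
    // Keep our own reference to the wrapped stream and close it explicitly,
    // since LzmaInputStream.close() does not delegate to it.
    static void readAndClose(ByteBuf compressed) throws IOException {
        InputStream wrapped = new ByteBufInputStream(compressed);
        LzmaInputStream lzmaIn = new LzmaInputStream(wrapped, new Decoder());
        try {
            // ... consume the decompressed bytes ...
        } finally {
            lzmaIn.close();
            wrapped.close(); // the forced close from this change
        }
    }
}
```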
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf, and is controlled by whatever accepts the JDK stream. However, Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. Existing classes which only accept the InputStream interface may have no way to communicate when they are done with the stream other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as ownership of the underlying ByteBuf resource may be transferred to the Java stream.
Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf
Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
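A minimal sketch of the ownership idea (not the actual ByteBufInputStream code): when the stream owns the buffer, close() releases it exactly once.

```java
import java.io.IOException;
import java.io.InputStream;

import io.netty.buffer.ByteBuf;

// Sketch only; the real ByteBufInputStream has more to it.
final class ReleasingStreamSketch extends InputStream {
    private final ByteBuf buffer;
    private final boolean releaseOnClose; // does this stream own the buffer?
    private boolean closed;

    ReleasingStreamSketch(ByteBuf buffer, boolean releaseOnClose) {
        this.buffer = buffer;
        this.releaseOnClose = releaseOnClose;
    }

    @Override
    public int read() {
        return buffer.isReadable() ? buffer.readByte() & 0xff : -1;
    }

    @Override
    public void close() throws IOException {
        super.close();
        if (releaseOnClose && !closed) {
            closed = true; // guard against double release on repeated close()
            buffer.release();
        }
    }
}
```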
Motivation:
The unit tests for the compression encoders/decoders may write buffers to an EmbeddedChannel but then fail to release the buffers or close the channel after the test. This may result in buffer leaks.
Modifications:
- Call channel.finishAndReleaseAll() after each test
Result:
Fixes https://github.com/netty/netty/issues/6007
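For illustration, the per-test cleanup could be wired up like this (class and field names are made up):

```java
import io.netty.channel.embedded.EmbeddedChannel;
import org.junit.After;

public class CompressionCodecTestSketch {
    private EmbeddedChannel channel; // created in each test's setup

    @After
    public void tearDown() {
        // finishAndReleaseAll() closes the channel and releases any messages
        // still queued inbound or outbound, so a failing assertion earlier in
        // the test cannot turn into a ByteBuf leak.
        channel.finishAndReleaseAll();
    }
}
```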
Motivation:
CompatibleObjectEncoder uses a Channel Attribute to cache an ObjectOutputStream which is backed by a ByteBuf that may be released after an object is encoded and the underlying buffer is written to the channel. On subsequent encode operations the cached ObjectOutputStream will be invalid and lead to a reference count exception.
Modifications:
- CompatibleObjectEncoder should not cache an ObjectOutputStream.
Result:
CompatibleObjectEncoder doesn't use a cached object backed by a released ByteBuf.
Motivation:
The build doesn't seem to enforce this, so unused imports piled up.
Modifications:
Removed unused import lines.
Result:
Fewer unused imports.
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
Motivation:
MessageAggregator has the potential to leak if a new message is received before the existing message has completed, or if an HttpContent is received but maxContentLength has been exceeded or the content length is too long.
Modifications:
- Make the HttpObjectAggregator more robust to leaks
- Reduce dependence on handlingOversizedMessage and instead rely on the more general check of a null currentMessage
Result:
More robust MessageAggregator with less chance of leaks
Motivation:
Unboxing operations allocate unnecessary objects when it could be avoided.
Modifications:
Replaced Float.valueOf with Float.parseFloat where possible.
Result:
Fewer unnecessary object allocations.
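A tiny example of the difference:

```java
public class ParseFloatExample {
    public static void main(String[] args) {
        float boxed = Float.valueOf("3.14");     // allocates a Float, then auto-unboxes
        float direct = Float.parseFloat("3.14"); // parses straight to the primitive
        System.out.println(boxed == direct);     // true; only the allocation differs
    }
}
```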
Motivation:
We are currently doing a memory copy to verify the Snappy version. This is not needed.
Modifications:
Remove the memory copy and just compare byte by byte.
Result:
Fewer memory copies and allocations.
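An illustrative sketch of the copy-free comparison (not the actual decoder code), using the "sNaPpY" stream identifier from the Snappy framing format:

```java
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.compression.DecompressionException;

// Verify the signature in place instead of copying it out of the ByteBuf.
final class SnappySignatureSketch {
    private static final byte[] SNAPPY = { 's', 'N', 'a', 'P', 'p', 'Y' };

    static void checkSignature(ByteBuf in) {
        final int idx = in.readerIndex();
        for (int i = 0; i < SNAPPY.length; i++) {
            if (in.getByte(idx + i) != SNAPPY[i]) { // byte-by-byte, no copy
                throw new DecompressionException("Invalid stream identifier");
            }
        }
    }
}
```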
Motivation:
We need to ensure the uncompressed ByteBuf is released if an exception happens while calling decode(...). If we fail to do so, we leak buffers.
Modifications:
Correctly release buffer on exception.
Result:
No more memory leak.
Motivation:
We don't need to do any memory copies when doing CRC32 processing.
Modifications:
Use ByteBufChecksum to eliminate memory copies.
Result:
Fewer memory copies.
Motivation:
We should try to minimize memory copies whenever possible.
Modifications:
- Refactor ByteBufChecksum to work with heap and direct ByteBuf always
- Remove the memory copy in Snappy by letting Crc32c extend ByteBufChecksum
Result:
Fewer memory copies when using Snappy
Motivation:
We did an unnecessary memory copy when doing bzip2 encoding.
Modifications:
Remove memory copy and just use a ByteProcessor.
Result:
Fewer memory copies and so faster.
Motivation:
We should prefer direct buffers for the output of Lz4FrameEncoder as this is what is needed for writing to the socket.
Modification:
Use direct buffers for the output
Result:
Fewer memory copies needed.
Motivation:
When the user constructs Lz4FrameDecoder with a Checksum implementation like CRC32 or Adler32 and uses Java8 we can directly use a ByteBuffer to do the checksum work. This way we can eliminate memory copies.
Modifications:
Detect if ByteBuffer can be used for checksum work and if so reduce memory copies.
Result:
Fewer memory copies when using JDK8
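A sketch of the JDK8 fast path, assuming a buffer backed by a single NIO buffer:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

import io.netty.buffer.ByteBuf;

// CRC32.update(ByteBuffer), added in Java 8, can checksum a (possibly
// direct) buffer without copying its contents into a byte[] first.
final class Crc32Sketch {
    static long crc32(ByteBuf buf) {
        CRC32 crc = new CRC32();
        // Assumes the buffer is backed by one NIO buffer (nioBufferCount() == 1).
        ByteBuffer nio = buf.internalNioBuffer(buf.readerIndex(), buf.readableBytes());
        crc.update(nio); // no intermediate byte[] allocation or copy
        return crc.getValue();
    }
}
```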
Motivation:
HPACK Encoder has a data structure which is similar to a previous version of DefaultHeaders. Some of the same improvements can be made.
Modifications:
- Enforce the restriction that the Encoder's headerFields length must be a power of two so we can use masking instead of modulo
- Use AsciiString.hashCode which already has optimizations instead of having yet another hash code algorithm in Encoder
Result:
Fixes https://github.com/netty/netty/issues/5357
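For illustration, the power-of-two trick replaces the modulo like so (names are made up):

```java
// With a power-of-two table length, the bucket index becomes a single AND.
final class IndexMaskSketch {
    static int bucket(int hash, int tableLength) {
        // tableLength must be a power of two. Unlike hash % tableLength,
        // this also yields a non-negative index for negative hashes.
        return hash & (tableLength - 1);
    }
}
```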
Motivation:
PlatformDependent attempts to use reflection to get the underlying char[] (or byte[]) from String objects. This is fragile: if the String implementation does not utilize the full array and instead uses a subset of it, this optimization is invalid. OpenJDK6 and some earlier versions of OpenJDK7 have String implementations capable of using a subsection of the underlying char[].
Modifications:
- PlatformDependent should not attempt to use the underlying array from String (or other data types) via reflection
Result:
PlatformDependent hash code generation for CharSequence does not depend upon specific JDK implementation details.
Motivation:
It is good to keep the dependencies and plugins we use up-to-date, to pick up fixes for bugs we have not yet discovered ourselves.
Modification:
Scanned the dependencies and plugins and carefully updated them one by one.
Result:
Dependencies and plugins are up-to-date.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues, and works on JDK6, which allows us to shade the library in netty-common, allowing it to stay "zero dependency".
Modifications:
- Remove copy paste JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier JCTools updates, and less GC pressure as the queue implementation doesn't create as much garbage.