Motivation:
A large frame can be composed of many packets. Every time a packet
arrives, findEndOfLine is called from the start of the buffer, which
makes the complexity of reading a frame O(n^2). This can be
eliminated by using an offset to mark the last scan position: when a
new packet arrives, the search for the delimiter resumes from that
mark, reducing the complexity to O(n).
Modification:
Add an offset to mark the last scan position.
Result:
Better performance when reading large frames.
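A minimal sketch of the idea (illustrative names, not the actual decoder internals): the offset records how many readable bytes have already been scanned, so a newly arrived packet only triggers a scan of the new bytes.

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

final class LineScanSketch {
    private int offset; // number of readable bytes already scanned without finding '\n'

    int findEndOfLine(ByteBuf buffer) {
        int totalLength = buffer.readableBytes();
        int i = buffer.forEachByte(buffer.readerIndex() + offset, totalLength - offset,
                ByteProcessor.FIND_LF);
        if (i >= 0) {
            offset = 0;           // delimiter found, start fresh for the next frame
        } else {
            offset = totalLength; // no delimiter yet, skip these bytes on the next call
        }
        return i;
    }
}
```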
Motivation:
The Headers interface exposes a method to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.
Modifications:
- Add a method which returns an iterator over all the values for a specific name
Result:
Ability to iterate over the values for a specific name with no intermediate collection.
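Usage sketch, assuming the new method is exposed as `valueIterator(name)` on the `Headers` interface:

```java
import java.util.Iterator;
import io.netty.handler.codec.Headers;

final class HeaderValuesSketch {
    static void printAll(Headers<CharSequence, CharSequence, ?> headers, CharSequence name) {
        // Unlike getAll(name), no intermediate List is materialized.
        Iterator<CharSequence> values = headers.valueIterator(name);
        while (values.hasNext()) {
            System.out.println(values.next());
        }
    }
}
```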
Motivation:
The decode method is too large to be inlined with the default compiler settings, hence the uncommon paths need to be packed and moved away from the common one.
Modifications:
The uncommon paths of the decode call (e.g. failures that throw exceptions) are packed and moved into private methods in order to reduce the size of the common path
and let it be inlined.
Result:
The decode method is inlined if the stack depth allows it.
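An illustrative sketch of the pattern with a hypothetical decoder (not the actual Netty code): the rarely taken, exception-throwing branch is extracted into a small private method so the bytecode size of the hot decode path stays below the JIT inlining threshold.

```java
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.CorruptedFrameException;

final class InlineFriendlyDecoderSketch {
    Object decode(ByteBuf in) {
        if (in.readableBytes() < 4) {
            return null;                  // common path: wait for more data
        }
        int length = in.getInt(in.readerIndex());
        if (length < 0) {
            throw negativeLength(length); // uncommon path, packed into its own method
        }
        if (in.readableBytes() - 4 < length) {
            return null;
        }
        in.skipBytes(4);
        return in.readRetainedSlice(length);
    }

    private static CorruptedFrameException negativeLength(int length) {
        return new CorruptedFrameException("negative frame length: " + length);
    }
}
```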
Motivation:
Continuing to make netty happy when compiling through errorprone.
Modification:
Mostly comments, some minor switch statement changes.
Result:
No more compiler errors!
Motivation:
Calling `newInstance()` on a Class object can bypass compile-time
checked exception propagation. This is noted in Java Puzzlers,
as well as by ErrorProne:
http://errorprone.info/bugpattern/ClassNewInstance
Modifications:
Use the niladic constructor to create a new instance.
Result:
Compile-time safety for checked exceptions
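A sketch of the difference, using a hypothetical MyHandler type:

```java
final class InstantiationSketch {
    static final class MyHandler { } // hypothetical type, for illustration only

    static MyHandler create() throws ReflectiveOperationException {
        // Flagged by ErrorProne: Class#newInstance() lets the constructor's checked
        // exceptions escape without the compiler seeing them.
        // return MyHandler.class.newInstance();

        // Preferred: go through the no-arg Constructor; constructor failures surface
        // as the checked InvocationTargetException.
        return MyHandler.class.getDeclaredConstructor().newInstance();
    }
}
```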
This reverts commit d63bb4811e as it did not correctly cover all cases and so could lead to missing fireChannelReadComplete() calls. We will re-evaluate d63bb4811e and resubmit a PR once we are sure everything is handled correctly.
Motivation:
'insideString' and 'openBraces' need proper handling when streaming a
JSON array over multiple writes and the decoding of an element was
started but not completed.
Related to #6969
Modifications:
If the idx is reset:
- 'insideString' has to be reset to 'false' in order to indicate that
the array element will be decoded from the beginning
- 'openBraces' has to be reset to '1' to indicate that JSON array
decoding is in progress.
Result:
The JSON array is properly decoded in streaming mode.
Motivation:
It's wasteful and also confusing that channelReadComplete() is called even if no message was forwarded to the next handler.
Modifications:
- Only call ctx.fireChannelReadComplete() if at least one message was decoded
- Add unit test
Result:
Less confusing behavior. Fixes [#4312].
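Illustrative sketch of the described behaviour (not the actual ByteToMessageDecoder code): a flag records whether decoding produced anything, and channelReadComplete() is only propagated when it did.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.List;

// Hypothetical handler, for illustration only; decodeAll(...) is a stand-in for the real decode loop.
abstract class ReadCompleteSketch extends ChannelInboundHandlerAdapter {
    private boolean decodedSomething;

    protected abstract List<Object> decodeAll(Object msg) throws Exception;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        for (Object decoded : decodeAll(msg)) {
            decodedSomething = true;
            ctx.fireChannelRead(decoded);
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        if (decodedSomething) {
            decodedSomething = false;
            ctx.fireChannelReadComplete(); // only fired when at least one message was forwarded
        }
    }
}
```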
Motivation:
1. The hash function in the Snappy encoder is probably wrong: it used '+' instead of '*'. See the reference implementation [1].
2. The size of the hash table is calculated but never applied.
Modifications:
1. Fix the hash function: replace the addition with a multiplication.
2. Allocate the hash table with the calculated size.
3. Use an `Integer.numberOfLeadingZeros` trick to calculate log2.
4. Release buffers in tests.
Result:
1. Better compression. In the test `encodeAndDecodeLongTextUsesCopy` the compressed size is now 175 instead of 180 before this change.
2. No redundant allocations for the hash table.
3. Slightly faster calculation of the shift (fewer expensive math operations).
[1] 513df5fb5a/snappy.cc (L67)
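A minimal sketch of the first and third changes (illustrative names, not Netty's internals); 0x1e35a7bd is the multiplier used by the reference implementation:

```java
final class SnappyHashSketch {
    // Multiplicative hash as in the reference implementation; using '+' instead of '*'
    // spreads the input poorly and hurts match finding, i.e. compression.
    static int hash(int bytes, int shift) {
        return (bytes * 0x1e35a7bd) >>> shift;
    }

    // log2 of a power-of-two table size without any floating point math.
    static int log2(int hashTableSize) {
        return 31 - Integer.numberOfLeadingZeros(hashTableSize);
    }
}
```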
Motivation:
Calling JsonObjectDecoder#reset while streaming a JSON array over multiple
writes causes a CorruptedFrameException to be thrown.
Modifications:
While streaming a JSON array, if the current readerIndex has been reset,
ensure that the decoding state is not reset.
Result:
Fixes #6969
Motivation:
1. `ByteBuf` contains methods for writing a `CharSequence` which are optimized for the UTF-8 and ASCII encodings. The same optimization can also be applied for ISO-8859-1.
2. In many places the appropriate methods are not used.
Modifications:
1. Apply the optimization for the ISO-8859-1 encoding in the `ByteBuf#setCharSequence` implementations.
2. Use the appropriate methods for writing a `CharSequence` into buffers.
Result:
Reduce overhead from string-to-bytes conversion.
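Usage sketch for the second point: write the `CharSequence` directly into the buffer instead of converting it to a `byte[]` first (the ISO-8859-1 fast path in `setCharSequence` itself is internal to the `ByteBuf` implementations).

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

final class WriteCharSequenceSketch {
    static ByteBuf writeLatin1(CharSequence seq) {
        ByteBuf buf = Unpooled.buffer(seq.length());
        // Instead of buf.writeBytes(seq.toString().getBytes(CharsetUtil.ISO_8859_1)),
        // which allocates an intermediate String/byte[]:
        buf.writeCharSequence(seq, CharsetUtil.ISO_8859_1);
        return buf;
    }
}
```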
Motivation:
Lz4FrameEncoder maintains internal state, but the life cycle of its buffer is not consistently managed: the buffer is allocated in handlerAdded and freed in close, but it can still be used until handlerRemoved is called.
Modifications:
- Move the cleanup of the buffer from close to handlerRemoved
- Explicitly throw an EncoderException from Lz4FrameEncoder if the encode operation has finished and there isn't enough space to write data
Result:
No more NPEs on the internal buffer in Lz4FrameEncoder.
Motivation:
JdkZlibDecoder will allocate a new buffer when the previous buffer is filled with inflated data, but JZlibDecoder will attempt to reuse the same buffer by resizing it. This leads to inconsistent results between these two decoders, which are intended to be functionally equivalent.
Modifications:
- JdkZlibDecoder should attempt to resize and reuse the existing buffer instead of creating multiple buffers
Result:
Fixes https://github.com/netty/netty/issues/6804
Motivation:
ByteToMessageDecoder#handlerRemoved will immediately release the cumulation buffer, but it is possible that a child class is still using this buffer and would therefore access an already released buffer.
Modifications:
- ByteToMessageDecoder#handlerRemoved and ByteToMessageDecoder#decode should coordinate to avoid the case where a child class is using the cumulation buffer but ByteToMessageDecoder releases that buffer.
Result:
Child classes of ByteToMessageDecoder are less likely to reference a released buffer.
Motivation:
We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer may lead to an overflow when calculating the size of the output buffer.
Modifications:
Correctly guard against overflow.
Result:
Fixes [#6620].
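A sketch of the kind of guard that is needed (illustrative, not the exact Netty code): do the size calculation in long arithmetic and fail before the int overflows.

```java
final class Base64SizeSketch {
    // Worst-case encoded size: 4 output bytes per 3 input bytes, plus padding and line breaks.
    static int encodedBufferSize(int len, boolean breakLines) {
        long len43 = ((long) len << 2) / 3;
        long size = len43 + 3 + (breakLines ? len43 / 76 : 0);
        if (size > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                    "input too large to encode into a Base64 buffer: " + len + " bytes");
        }
        return (int) size;
    }
}
```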
Motivation:
If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can, for example, happen if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we are using the local transport and write a read-only buffer. This buffer will then be passed to the peer channel, fired through the pipeline and so end up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received we tried to merge the buffers and failed with a ReadOnlyBufferException, which then produced a leak.
Modifications:
Ensure we copy the buffer if it is read-only.
Result:
No more exceptions and therefore no leak when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...).
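Sketch of the cumulation fix (illustrative, not the exact cumulator code): before appending the next fragment to the existing cumulation buffer, replace it with a writable copy if it is read-only.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

final class CumulateSketch {
    static ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
        try {
            if (cumulation.isReadOnly() || cumulation.writableBytes() < in.readableBytes()) {
                // A read-only buffer cannot be appended to; copy into a writable one.
                ByteBuf expanded = alloc.buffer(cumulation.readableBytes() + in.readableBytes());
                expanded.writeBytes(cumulation);
                cumulation.release();
                cumulation = expanded;
            }
            cumulation.writeBytes(in);
            return cumulation;
        } finally {
            in.release();
        }
    }
}
```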
Motivation:
In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.
Modification:
Add a new test method to XmlFrameDecoder to assert that the reframing works as described.
Result:
New test method is added to XmlFrameDecoder
Motivation:
This pull request does not solve any problem, but several links in the code refer to project websites under the http://code.google.com domain which have either moved to GitHub or are no longer maintained.
Modification:
Update the project links from code.google.com to the relevant projects on github.com.
Motivation:
Lz4FrameEncoder uses internalNioBuffer but always passes in a value of 0 for the index. This should be readerIndex().
Modifications:
- change 0 to readerIndex()
Result:
More correct usage of internalNioBuffer in Lz4FrameEncoder.
Motivation:
DatagramPacketEncoder|Decoder should respect whether the wrapped handler is sharable and be sharable (or not) accordingly.
Modifications:
- Delegate isSharable() to wrapped handler
- Add test-cases
Result:
Correct behavior
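The delegation itself is a one-liner; a fragment for the encoder side (illustrative, assuming the wrapped handler is stored in a field named `encoder`):

```java
@Override
public boolean isSharable() {
    return encoder.isSharable(); // sharable only if the wrapped encoder is sharable
}
```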
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.
Modifications:
- remove the intermediate int in the Base64#decode4to3 method.
- manually do the byte shifting since we are already doing bit/byte manipulations here anyways.
Result:
Base64#decode4to3 requires fewer operations to compute the end result.
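A sketch of the arithmetic (illustrative; dec0..dec3 stand for the already looked-up decodabet values of the four input characters): the three output bytes are computed directly from the 6-bit groups instead of assembling an intermediate int and swapping its bytes.

```java
final class Decode4to3Sketch {
    // dec0..dec3 are the 6-bit decodabet values of four Base64 characters.
    static void decode4to3(int dec0, int dec1, int dec2, int dec3, byte[] dest, int off) {
        dest[off]     = (byte) ((dec0 << 2) | (dec1 >>> 4));
        dest[off + 1] = (byte) ((dec1 << 4) | (dec2 >>> 2));
        dest[off + 2] = (byte) ((dec2 << 6) |  dec3);
    }
}
```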
Motivation:
The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds / reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as each access is tracked.
Modifications:
- Pack bytes into int / short and so reduce operations on the ByteBuf
- Use ByteProcessor to reduce getByte calls.
Result:
Better performance in general. Also when you run the build with -Pleak the handler module will build in 1/4 of the time it took before.
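A small sketch of the second point (illustrative, not the actual handler code): a `ByteProcessor` walks the readable bytes in one call instead of issuing one `getByte(...)` call, with its checks, per byte.

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

final class ForEachByteSketch {
    // Counts the readable bytes that need escaping, without per-byte getByte(...) calls.
    static int countBytesToEscape(ByteBuf buf) {
        final int[] count = { 0 };
        buf.forEachByte(new ByteProcessor() {
            @Override
            public boolean process(byte value) {
                if (value == '\\' || value == '"') {
                    count[0]++;
                }
                return true; // keep iterating over the whole readable region
            }
        });
        return count[0];
    }
}
```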
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.
Modification:
Use the ThreadLocalRandom implementation of the JDK when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
To use jboss-marshalling on JDK9+, extra command-line arguments are needed as it makes use of reflection internally.
Modifications:
Skip the jboss-marshalling tests when running on JDK9+ and the init of the MarshallingFactory fails.
Result:
Be able to build on latest JDK9 release.
Motivation:
We need to ensure we pass all tests when sun.misc.Unsafe is not present.
Modifications:
- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account whether sun.misc.Unsafe is present or not
Result:
Tests pass even without sun.misc.Unsafe.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.