Motivation:
The implementation of CharSequenceValueConverter.convertToByte did not correctly handle an AsciiString whose length is not 1.
Modifications:
- Only use fast-path for AsciiString with length of 1.
- Add unit tests.
Result:
Fixes https://github.com/netty/netty/issues/7990
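For illustration, a minimal sketch of the guarded fast path (AsciiString.byteAt is a real accessor; the surrounding shape is simplified, not the exact Netty code):

    public byte convertToByte(CharSequence value) {
        // The fast path only holds when a single ASCII character maps to one byte.
        if (value instanceof AsciiString && value.length() == 1) {
            return ((AsciiString) value).byteAt(0);
        }
        // Any other length goes through ordinary numeric parsing.
        return Byte.parseByte(value.toString());
    }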
Motivation:
We did not correctly copy elements in some cases when add(index, element) was used.
Modifications:
- Correctly detect when a copy is needed and when not.
- Add test case.
Result:
Fixes https://github.com/netty/netty/issues/7938.
Motivation:
Some `if` statements contain common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and clearer.
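An illustrative before/after with hypothetical names:

    // Before: the common call is duplicated in both branches.
    if (compressed) {
        buf.release();
        out.add(decompress(data));
    } else {
        buf.release();
        out.add(data);
    }

    // After: the common part is hoisted out of the `if`.
    buf.release();
    if (compressed) {
        out.add(decompress(data));
    } else {
        out.add(data);
    }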
Motivation:
When the JsonObjectDecoder determines that the incoming buffer has had some data discarded, it resets the internal index to readerIndex and attempts to adjust the state, which does not work correctly for streams of JSON objects.
Modifications:
Reset the internal index to a value that takes the previous reads into account.
Result:
JsonObjectDecoder correctly handles streams of both JSON objects and arrays with no state adjustments or repeatable reads.
Motivation:
6e5fd9311f fixed a bug in EmptyHeaders which was never noticed before because we had no tests.
Modifications:
Add tests for EmptyHeaders.
Result:
EmptyHeaders is tested now.
Motivation:
EmptyHeaders#get with a default value argument returns null. It should never return null, and instead it should return the default value.
Modifications:
- EmptyHeaders#get with a default value should return that default value
Result:
More correct implementation of the Headers API.
Motivation:
The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.
The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.
Modifications:
Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.
Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).
Result:
The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
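A sketch of the unsigned interpretation (ByteBuf.readUnsignedShortLE is a real accessor; the surrounding names are illustrative):

    // 2-byte little-endian length, read as unsigned per the format description:
    int length = in.readUnsignedShortLE();   // 0..65535, never negative
    // The buggy variant read a signed short, turning values >= 0x8000 negative:
    // int length = in.readShortLE();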
Motivation:
CharSequenceValueConverter#convertToBoolean has a few manual conditionals which can be removed if we use AsciiString.contentEqualsIgnoreCase. Also by comparing an AsciiString to a String we will incur conversions to char that can be avoided if we compare against AsciiString.
Modifications:
- Use AsciiString.contentEqualsIgnoreCase
- Compare against an AsciiString
Result:
Simplified CharSequenceValueConverter#convertToBoolean which favors AsciiString comparison.
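A minimal sketch (AsciiString.contentEqualsIgnoreCase and AsciiString.cached are real helpers; the shape is illustrative):

    private static final AsciiString TRUE_ASCII = AsciiString.cached("true");

    public boolean convertToBoolean(CharSequence value) {
        // Compares the ASCII bytes directly, avoiding per-char conversions.
        return AsciiString.contentEqualsIgnoreCase(value, TRUE_ASCII);
    }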
Motivation:
Round-tripping a value through CharSequenceValueConverter.convertTimeMillis and convertToTimeMillis throws a ParseException.
Modifications:
- Correctly implement CharSequenceValueConverter.convertTimeMillis
- Add unit-tests for CharSequenceValueConverter
Result:
Correctly convert time-millis values.
Motivation:
HeaderEntry.equals() inherits Object.equals(), which simply checks whether two objects are the same instance.
So it returns false even when two HeaderEntry objects have the same name and value.
Modifications:
Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-
Result:
HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
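A sketch of an implementation that satisfies the linked Map.Entry.equals() contract:

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Map.Entry)) {
            return false;
        }
        Map.Entry<?, ?> that = (Map.Entry<?, ?>) o;
        return (getKey() == null ? that.getKey() == null : getKey().equals(that.getKey()))
                && (getValue() == null ? that.getValue() == null : getValue().equals(that.getValue()));
    }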
Motivation:
HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.
Modifications:
- Only treat values of "true" as Boolean.TRUE
- Add unit tests.
Result:
Consistent conversion of values for all CharSequence implementations.
Motivation:
Headers.get* methods should not throw an exception but should return null or the default value if converting the value fails.
Modifications:
- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.
Result:
Fixes [#7710].
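A sketch of the hardened accessor (illustrative shape; valueConverter and the generic types are assumed from context):

    public Integer getInt(K name) {
        V value = get(name);
        if (value == null) {
            return null;
        }
        try {
            return valueConverter.convertToInt(value);
        } catch (RuntimeException ignore) {
            // A value that cannot be converted behaves like an absent value.
            return null;
        }
    }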
Motivation:
We used Recycler for the CodecOutputList which is not optimized for the use-case of access only from the same Thread all the time.
Modifications:
- Use FastThreadLocal for CodecOutputList
- Add benchmark
Result:
Less overhead in our codecs.
Motivation:
Will allow easy removal of deprecated methods in the future.
Modification:
Replaced ctx.attr(), ctx.hasAttr() with ctx.channel().attr(), ctx.channel().hasAttr().
Result:
No more usage of the deprecated ctx.attr() and ctx.hasAttr() methods.
Motivation:
According to RFC 1952, concatenation of valid gzip streams is also a valid gzip stream. JdkZlibDecoder only processed the first and discarded the rest.
Modifications:
- Introduced a constructor argument, decompressConcatenated: if true, JdkZlibDecoder continues to process the stream.
Result:
- If 'decompressConcatenated = true', concatenated streams are processed in
compliance with RFC 1952.
- If 'decompressConcatenated = false' (default), the existing behavior remains.
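Hypothetical usage, assuming a constructor overload that takes the flag alongside the wrapper (check the actual signatures before relying on this):

    // Keep inflating after the first gzip member ends, per RFC 1952:
    pipeline.addLast(new JdkZlibDecoder(ZlibWrapper.GZIP, true));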
Motivation:
Allow the compiler to pre-compute constant expressions where possible.
Similar fix in OpenJDK: [1].
Modifications:
- Use parentheses.
- Simplify static initialization of `BYTE2HEX_*` arrays in `StringUtil`.
Result:
Less bytecode, possibly faster calculations at runtime.
[1] https://bugs.openjdk.java.net/browse/JDK-4477961
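An illustrative case mirroring the OpenJDK report: multiplication is left-associative, so without parentheses the constant subexpression is not folded.

    long timeoutA = seconds * 60 * 1000;   // parsed as (seconds * 60) * 1000: two runtime multiplies
    long timeoutB = seconds * (60 * 1000); // 60 * 1000 folds to 60000 at compile time: one multiply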
Motivation:
In our replace(...) methods we always used validation for the newly created headers, while the original headers may not use validation at all.
Modifications:
- Only use validation if the original headers used validation as well.
- Ensure we create a copy of the headers in replace(...).
Result:
Fixes [#5226]
Motivation:
Automatic-Module-Name entry provides a stable JDK9 module name when Netty is used in modular JDK9 applications. More info: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
When Netty migrates to JDK9 in the future, the entry can be replaced by actual module-info descriptor.
Modification:
The POM-s are configured to put the correct module names to the manifest.
Result:
Fixes #7218.
Motivation:
DefaultHttpHeaders.names() exposes HTTP header names as a Set<String>. Converting the resulting set to an array using toArray(String[]) throws an exception: java.lang.ArrayStoreException: io.netty.util.AsciiString.
Modifications:
- Remove our custom implementation of toArray(...) (and others) by just extending AbstractCollection.
- Add unit test
Result:
Fixes [#7428].
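The failure mode, as a hypothetical snippet:

    Set<String> names = new DefaultHttpHeaders().add("Accept", "text/plain").names();
    // Before the fix this threw java.lang.ArrayStoreException: io.netty.util.AsciiString,
    // because the custom toArray(...) handed out the AsciiString-backed storage.
    String[] array = names.toArray(new String[0]);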
Motivation:
For debugging/logging purposes, it would be convenient to have
HttpHeaders#toString implemented.
DefaultHeaders does implement toString, but the implementation is suboptimal and allocates a Set for the names and Lists for the values.
Modification:
* Introduce HeadersUtil#toString that provides a convenient optimized helper to implement toString for various headers implementations
* Have DefaultHeaders#toString and HttpHeaders#toString delegate their toString implementation to HeadersUtil
Result:
Convenient HttpHeaders#toString. Optimized DefaultHeaders#toString.
Motivation:
HttpObjectEncoder and MessageAggregator treat buffers that are not readable specially. If a buffer is not readable, then an EMPTY_BUFFER is written and the actual buffer is ignored. If the buffer has already been released then this is not correct, as the promise will be completed, but in reality the original content shouldn't have resulted in any write because it was invalid.
Modifications:
- HttpObjectEncoder should retain/write the original buffer instead of using EMPTY_BUFFER
- MessageAggregator should retain/write the original ByteBufHolder instead of using EMPTY_BUFFER
Result:
Invalid write operations which happen to not be readable correctly reflect failed status in the promise, and do not result in any writes to the channel.
Motivation:
Use actual links to new locations of Protobuf repo and documentation to
avoid problems when redirect will not work.
Modification:
Update links in comments and all/pom.xml
Result:
Correct links to Protobuf resources
Motivation:
Even if it's a super micro-optimization (most JVMs can optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows you to
access such methods in the test scope directly, without an instance of
the class.
Modifications:
Add the 'static' modifier to all methods where possible, mostly in
the test scope.
Result:
Cleaner code with proper 'static' modifiers.
Motivation:
Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.
Modifications:
Add the missing 'serialVersionUID' field to all Serializable
classes.
Result:
Proper deserialization of previously serialized objects.
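The pattern added, in sketch form (the class is hypothetical and the constant value is per class):

    public class HandshakeEvent implements java.io.Serializable {
        // Pins the serialized form; without it, any class change breaks
        // deserialization of previously serialized instances.
        private static final long serialVersionUID = 1L; // illustrative value
    }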
Motivation: Today when Netty encounters a general error while decoding
it treats this as a decoder exception. However, for fatal causes this
should not be treated as such, instead the fatal error should be carried
up the stack without the callee having to unwind causes. This was
probably done for the byte-to-message decoder but is now done for all
decoders.
Modifications: Instead of translating any error to a decoder exception,
we let those unwind out the stack (note that finally blocks still
execute) except in places where an event needs to fire where we fire
with the error instead of wrapping in a decoder exception.
Result: Fatal errors will not be treated as innocent decoder exceptions.
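A sketch of the narrowed catch (illustrative; the real change touches several call sites):

    try {
        decode(ctx, in, out);
    } catch (DecoderException e) {
        throw e;
    } catch (Exception cause) {
        // Ordinary exceptions are still wrapped for the pipeline.
        throw new DecoderException(cause);
    }
    // Throwable is no longer caught here, so an Error such as OutOfMemoryError
    // unwinds the stack untouched (finally blocks still execute).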
Motivation: Today when Netty encounters a general error while decoding
it treats this as a decoder exception. However, for fatal causes this
should not be treated as such, instead the fatal error should be carried
up the stack without the callee having to unwind causes.
Modifications: Instead of translating any error to a decoder exception,
we let those unwind out the stack (note that finally blocks still
execute).
Result: Fatal errors will not be treated as innocent decoder exceptions.
Motivation:
A large frame can be composed of many packets. Every time a packet
arrives, findEndOfLine will be called from the start of the buffer. This
makes the complexity of reading a frame O(n^2). It can be
eliminated by using an offset to mark the last scanned position: when a new
packet arrives, we just look for the delimiter starting from the mark. The
complexity then becomes O(n).
Modification:
Add an offset to mark the last scanned position.
Result:
Better performance when reading large frames.
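A sketch of the memoized scan (ByteBuf.forEachByte and ByteProcessor.FIND_LF are real; the offset field is illustrative):

    private int offset; // how far previous calls have already scanned

    private int findEndOfLine(ByteBuf buffer) {
        int totalLength = buffer.readableBytes();
        int i = buffer.forEachByte(buffer.readerIndex() + offset,
                totalLength - offset, ByteProcessor.FIND_LF);
        // Resume from here on the next call instead of rescanning from the start.
        offset = i < 0 ? totalLength : 0;
        return i;
    }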
Motivation:
The Headers interface provides a method to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.
Modifications:
- Add a method which returns an iterator over all the values for a specific name
Result:
Ability to iterate over the values for a specific name with no intermediate collection.
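Hypothetical usage of the new accessor:

    Iterator<CharSequence> values = headers.valueIterator(HttpHeaderNames.SET_COOKIE);
    while (values.hasNext()) {
        handle(values.next()); // no intermediate List is allocated
    }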
Motivation:
The decode method is too large to be inlined with default compiler settings, hence the uncommon paths need to be packed and moved away from the common one.
Modifications:
The uncommon paths of the decode call (e.g. failures with thrown exceptions) are packed and moved into private methods in order to reduce the size of the common one
and let it be inlined.
Result:
The decode method is inlined if the stack depth allows it.
Motivation:
Continuing to make netty happy when compiling through errorprone.
Modification:
Mostly comments, some minor switch statement changes.
Result:
No more compiler errors!
Motivation:
Calling `newInstance()` on a Class object can bypass compile time
checked Exception propagation. This is noted in Java Puzzlers,
as well as in ErrorProne:
http://errorprone.info/bugpattern/ClassNewInstance
Modifications:
Use the niladic constructor to create a new instance.
Result:
Compile time safety for checked exceptions
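An illustrative before/after:

    // Before: Class.newInstance() can rethrow checked exceptions
    // without declaring them, bypassing compile-time checking.
    T created = clazz.newInstance();

    // After: the no-arg constructor keeps checked exceptions visible,
    // surfaced as InvocationTargetException.
    T created2 = clazz.getDeclaredConstructor().newInstance();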
This reverts commit d63bb4811e, as it did not correctly cover all cases and so could lead to missing fireChannelReadComplete() calls. We will re-evaluate d63bb4811e and resubmit a PR once we are sure all cases are handled correctly.
Motivation:
'insideString' and 'openBraces' need proper handling when streaming a
Json array over multiple writes and an element's decoding was started but
not completed.
Related to #6969
Modifications:
If the idx is reset:
- 'insideString' has to be reset to 'false' in order to indicate that
array element will be decoded from the beginning
- 'openBraces' has to be reset to '1' to indicate that Json array
decoding is in progress.
Result:
Json array is properly decoded when in streaming mode
Motivation:
It's wasteful and also confusing that channelReadComplete() is called even if no message was forwarded to the next handler.
Modifications:
- Only call ctx.fireChannelReadComplete() if at least one message was decoded
- Add unit test
Result:
Less confusing behavior. Fixes [#4312].
Motivation:
1. The hash function in the Snappy encoder is probably wrong: it used '+' instead of '*'. See the reference implementation [1].
2. The size of the hash table is calculated but not applied.
Modifications:
1. Fix hash function: replace addition by multiplication.
2. Allocate hash table with calculated size.
3. Use an `Integer.numberOfLeadingZeros` trick to calculate log2.
4. Release buffers in tests.
Result:
1. Better compression. In the test `encodeAndDecodeLongTextUsesCopy` the compressed size is now 175 instead of 180 before this change.
2. No redundant allocations for hash table.
3. Slightly faster calculation of the shift (fewer expensive math operations).
[1] 513df5fb5a/snappy.cc (L67)
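The corrected hash in sketch form, mirroring the reference implementation's multiplier (names are illustrative):

    private static int hash(int value, int shift) {
        // Multiplication (not addition) spreads the input bits before the shift.
        return (value * 0x1e35a7bd) >>> shift;
    }

    // log2 of the power-of-two table size via Integer.numberOfLeadingZeros:
    int shift = 32 - (31 - Integer.numberOfLeadingZeros(hashTableSize));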
Motivation:
Calling JsonObjectDecoder#reset while streaming Json array over multiple
writes causes CorruptedFrameException to be thrown.
Modifications:
While streaming a Json array, if the current readerIndex has been reset,
ensure that the states will not be reset.
Result:
Fixes #6969
Motivation:
1. `ByteBuf` contains methods for writing a `CharSequence` which are optimized for the UTF-8 and ASCII encodings. We can also apply an optimization for ISO-8859-1.
2. In many places appropriate methods are not used.
Modifications:
1. Apply the ISO-8859-1 optimization in the `ByteBuf#setCharSequence` implementations.
2. Use the appropriate methods for writing `CharSequences` into buffers.
Result:
Reduce overhead from string-to-bytes conversion.
Motivation:
Lz4FrameEncoder maintains internal state, but the life cycle of the buffer is not consistently managed: the buffer is allocated in handlerAdded but freed in close, yet it can still be used until handlerRemoved is called.
Modifications:
- Move the cleanup of the buffer from close to handlerRemoved
- Explicitly throw an EncoderException from Lz4FrameEncoder if the encode operation has finished and there isn't enough space to write data
Result:
No more NPE in Lz4FrameEncoder on the buffer.
Motivation:
JdkZlibDecoder will allocate a new buffer when the previous buffer is filled with inflated data, but JZlibDecoder will attempt to reuse the same buffer by resizing it. This leads to inconsistent results between these two decoders, which are intended to be functionally equivalent.
Modifications:
- JdkZlibDecoder should attempt to resize and reuse the existing buffer instead of creating multiple buffers
Result:
Fixes https://github.com/netty/netty/issues/6804
Motivation:
ByteToMessageDecoder#handlerRemoved will immediately release the cumulation buffer, but it is possible that a child class may still be using this buffer, and would therefore use an already-released buffer.
Modifications:
- ByteToMessageDecoder#handlerRemoved and ByteToMessageDecoder#decode should coordinate to avoid the case where a child class is using the cumulation buffer but ByteToMessageDecoder releases that buffer.
Result:
Child classes of ByteToMessageDecoder are less likely to reference a released buffer.
Motivation:
We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer could lead to an overflow when calculating the size of the output buffer.
Modifications:
Correctly guard against overflow.
Result:
Fixes [#6620].
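A sketch of an overflow-safe size computation (illustrative; the exact expression in Netty differs):

    static int encodedBufferSize(int len, boolean breakLines) {
        // Do the arithmetic in long so 4 * len cannot wrap around int.
        long len43 = len * 4L / 3;
        long size = len43 + 3; // room for padding
        if (breakLines) {
            size += len43 / 76; // one newline per 76 output characters
        }
        return size < Integer.MAX_VALUE ? (int) size : Integer.MAX_VALUE;
    }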
Motivation:
If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method, we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can, for example, happen if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we use the local transport and write a read-only buffer. This buffer is then passed to the peer channel and fired through the pipeline, and so ends up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received we tried to merge these and failed with a ReadOnlyBufferException, which then produced a leak.
Modifications:
Ensure we copy the buffer if it is read-only.
Result:
No more exceptions, and so no leak, when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...)
Motivation:
In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.
Modification:
Add a new test method to XmlFrameDecoder to assert that the reframing works as described.
Result:
New test method is added to XmlFrameDecoder
Motivation:
This pull request does not solve any problem, but we found that several links in the code refer to project websites under the http://code.google.com domain which have either moved to GitHub or are no longer maintained.
Modification:
Update the project links from code.google.com to the relevant projects on github.com
Motivation:
Lz4FrameEncoder uses internalNioBuffer but always passes in a value of 0 for the index. This should be readerIndex().
Modifications:
- change 0 to readerIndex()
Result:
More correct usage of internalNioBuffer in Lz4FrameEncoder.
Motivation:
DatagramPacketEncoder|Decoder should respect whether the wrapped handler is sharable or not, and be sharable (or not) accordingly.
Modifications:
- Delegate isSharable() to wrapped handler
- Add test-cases
Result:
Correct behavior
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.
Modifications:
- remove the intermediate int in the Base64#decode4to3 method.
- manually do the byte shifting since we are already doing bit/byte manipulations here anyway.
Result:
Base64#decode4to3 requires fewer operations to compute the end result.
Motivation:
The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds and reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as it will track each access.
Modifications:
- Pack bytes into int / short and so reduce operations on the ByteBuf
- Use ByteProcessor to reduce getByte calls.
Result:
Better performance in general. Also when you run the build with -Pleak the handler module will build in 1/4 of the time it took before.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided one when running on JDK >= 7.
Modification:
Use the JDK's ThreadLocalRandom implementation when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
To use jboss-marshalling, extra command-line arguments are needed on JDK9+, as it makes use of reflection internally.
Modifications:
Skip jboss-marshalling tests when running on JDK9+ and init of MarshallingFactory fails.
Result:
Be able to build on latest JDK9 release.
Motivation:
We need to ensure we pass all tests when sun.misc.Unsafe is not present.
Modifications:
- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account whether sun.misc.Unsafe is present or not
Result:
Tests pass even without sun.misc.Unsafe.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
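A sketch of the wrapping pattern (SocketUtils is the introduced class; the body shown is illustrative):

    public static void connect(final Socket socket, final SocketAddress remoteAddress,
                               final int timeout) throws IOException {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                    // Only this code needs SocketPermission, so the permission
                    // can be granted to the Netty jar alone.
                    socket.connect(remoteAddress, timeout);
                    return null;
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }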
Motivation:
LZ4FrameEncoder maintains an internal buffer of incoming data to compress, and only writes out compressed data when a size threshold is reached. LZ4FrameEncoder does not override the flush() method, and thus the only way to flush data down the pipeline is to write more data or close the channel.
Modifications:
Override the flush() function to flush on demand. Also override the allocateBuffer() function so we can more accurately size the output buffer (instead of needing to potentially realloc via buffer.ensureWritable()).
Result:
Implementation works as described.
Motivation:
Thought there might be a bug, so added a test case to verify everything works as expected.
Modifications:
Added testcase
Result:
More test-coverage.
Motivation:
IntelliJ issues several warnings.
Modifications:
* `ClientCookieDecoder` and `ServerCookieDecoder`:
* `nameEnd`, `valueBegin` and `valueEnd` don't need to be initialized
* `keyValLoop` loop doesn't need to be labelled, as it's the innermost one (same thing for labelled breaks)
* Remove `if (i != headerLen)` as the condition is always true
* `ClientCookieEncoder` javadoc still mentions old logic
* `DefaultCookie`, `ServerCookieEncoder` and `DefaultHttpHeaders` use ternary ops that can be turned into simple boolean ones
* `DefaultHeaders` uses a for(int) loop over an array. It can be turned into a foreach one, as javac doesn't allocate an iterator to iterate over arrays
* `DefaultHttp2Headers` and `AbstractByteBuf` `equals` can be turned into a single boolean statement
Result:
Cleaner code
Motivation:
* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep on using deprecated HttpHeaderDateFormat
Modifications:
* Move HttpHeaderDateFormatter to netty-codec and rename it to HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter
Result:
Faster and more consistent code
Motivation:
We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies the releasing of ReferenceCounted objects a bit. The problem is that while it simplifies things, it increases memory usage a lot, as memory may not be freed up in a timely manner.
Modifications:
- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.
Result:
Less memory needed to build netty while running the tests.
Motivation:
2c78902ebc ensured buffers were released in the general case but didn't clean up an extra release in LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData, which led to a double release.
Modifications:
LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData should not explicitly release the buffer because decompress will release the buffer
Result:
No more reference count exception and failed test.
Motivation:
c1932a8537 made an assumption that the LzmaInputStream which wraps a ByteBufInputStream would delegate the close operation to the wrapped stream. This assumption is not true and thus we still had a leak. An issue has been logged with our LZMA dependency https://github.com/jponge/lzma-java/issues/14.
Modifications:
- Force a close on the wrapped stream
Result:
No more leak.
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK Stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf, and is controlled by whatever accepts the JDK Stream. However, Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. There may be no way for existing classes which only accept the InputStream interface to communicate when they are done with the stream, other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as ownership of the underlying ByteBuf resource may be transferred to the Java Stream.
Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf
Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
Motivation:
The unit tests for the compression encoders/decoders may write buffers to an EmbeddedChannel but then not release the buffers or close the channel after the test. This may result in buffer leaks.
Modifications:
- Call channel.finishAndReleaseAll() after each test
Result:
Fixes https://github.com/netty/netty/issues/6007
Motivation:
CompatibleObjectEncoder uses a Channel attribute to cache an ObjectOutputStream which is backed by a ByteBuf that may be released after an object is encoded and the underlying buffer is written to the channel. On subsequent encode operations the cached ObjectOutputStream will be invalid and lead to a reference count exception.
Modifications:
- CompatibleObjectEncoder should not cache an ObjectOutputStream.
Result:
CompatibleObjectEncoder doesn't use a cached object backed by a released ByteBuf.
Motivation:
The build doesn't seem to enforce this, so unused imports piled up
Modifications:
Removed unused import lines
Result:
Fewer unused imports
Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
Motivation:
MessageAggregator has the potential to leak if a new message is received before the existing message has completed, and if an HttpContent is received but maxContentLength has been exceeded, or the content length is too long.
Modifications:
- Make the HttpObjectAggregator more robust to leaks
- Reduce dependence on handlingOversizedMessage and instead rely on the more general check of a null currentMessage
Result:
More robust MessageAggregator with less chance of leaks
Motivation:
Unboxing operations allocate unnecessary objects when it could be avoided.
Modifications:
Replaced Float.valueOf with Float.parseFloat where possible.
Result:
Fewer unnecessary object allocations.
Motivation:
We are currently doing a memory copy to verify the snappy version. This is not needed.
Modifications:
Remove the memory copy and just compare byte by byte.
Result:
Fewer memory copies and allocations
Motivation:
We need to ensure the uncompressed ByteBuf is released if an exception happens while calling decode(...). If we fail to do so, we leak buffers.
Modifications:
Correctly release buffer on exception.
Result:
No more memory leak.
Motivation:
We do not need to do any memory copies for CRC32 processing.
Modifications:
Use ByteBufChecksum to eliminate memory copies.
Result:
Fewer memory copies.
Motivation:
We should try to minimize memory copies whenever possible.
Modifications:
- Refactor ByteBufChecksum to always work with both heap and direct ByteBufs
- Remove a memory copy in Snappy by letting Crc32c extend ByteBufChecksum
Result:
Fewer memory copies when using Snappy
Motivation:
We did an unnecessary memory copy when doing bzip2 encoding.
Modifications:
Remove the memory copy and just use a ByteProcessor.
Result:
Fewer memory copies and so faster.
Motivation:
We should prefer direct buffers for the output of Lz4FrameEncoder as this is what is needed for writing to the socket.
Modification:
Use direct buffers for the output
Result:
Fewer memory copies needed.
Motivation:
When the user constructs Lz4FrameDecoder with a Checksum implementation like CRC32 or Adler32 and uses Java 8, we can directly use a ByteBuffer to do the checksum work. This way we can eliminate memory copies.
Modifications:
Detect if ByteBuffer can be used for checksum work and if so reduce memory copies.
Result:
Fewer memory copies when using JDK8
Motivation:
HPACK Encoder has a data structure which is similar to a previous version of DefaultHeaders. Some of the same improvements can be made.
Modifications:
- Enforce the restriction that the Encoder's headerFields length must be a power of two so we can use masking instead of modulo
- Use AsciiString.hashCode which already has optimizations instead of having yet another hash code algorithm in Encoder
Result:
Fixes https://github.com/netty/netty/issues/5357
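Why the power-of-two restriction pays off, in sketch form:

    // General form: modulo needs an integer division (plus care with negative hashes).
    int index = (hash % headerFields.length + headerFields.length) % headerFields.length;

    // Power-of-two length: a single bitwise AND, always non-negative.
    int index2 = hash & (headerFields.length - 1);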
Motivation:
PlatformDependent attempts to use reflection to get the underlying char[] (or byte[]) from String objects. This is fragile: if the String implementation does not utilize the full array and instead uses a subset of it, this optimization is invalid. OpenJDK6 and some earlier versions of OpenJDK7 String have the capability to use a subsection of the underlying char[].
Modifications:
- PlatformDependent should not attempt to use the underlying array from String (or other data types) via reflection
Result:
PlatformDependent hash code generation for CharSequence does not depend upon specific JDK implementation details.
Motivation:
It is good to keep dependencies and plugins up-to-date, to pick up any bug fixes made by their authors.
Modification:
Scanned dependencies and plugins and carefully updated them one by one.
Result:
Dependencies and plugins are up-to-date.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues, and supports JDK6, which allows us to shade the library in netty-common, allowing it to stay "zero dependency".
Modifications:
- Remove copy-pasted JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure, as the queue implementation does not create as much garbage
Motivation:
We should ensure we null out the cumulation buffer before we fire it through the pipeline in handlerRemoved(...), as in theory it could be possible that another method is triggered as a result of the fireChannelRead(...) or fireChannelReadComplete() that will try to access the cumulation.
Modifications:
Null out cumulation buffer early in handlerRemoved(...)
Result:
It is no longer possible to access a cumulation buffer that was already handed over.
Motivation:
For example,
DefaultHttp2Headers headers = new DefaultHttp2Headers();
headers.add("key1", "value1");
headers.add("key1", "value2");
headers.add("key1", "value3");
headers.add("key2", "value4");
produces:
DefaultHttp2Headers[key1: value1key1: value2key1: value3, key2: value4]
while the correct output is
DefaultHttp2Headers[key1: value1, key1: value2, key1: value3, key2: value4]
Modifications:
Change the toString() method to produce the aforementioned output.
Result:
The toString() format is now correct also for keys with multiple values.
Motivation:
At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most popular usage of the composite buffer.
Modifications:
Add new methods that automatically increase the writerIndex when buffers are added.
Result:
Easier usage of CompositeByteBuf.
Motivation:
99dfc9ea79 introduced some code that will more frequently try to forward messages out of the list of decoded messages to reduce latency and memory footprint. Unfortunately this has the side effect that RecyclableArrayList.clear() will be called more often and so introduces some overhead, as ArrayList will null out the array on each call.
Modifications:
- Introduce a CodecOutputList which allows not nulling out the array until we recycle it, and also allows access to the internal array with extra range checks.
- Add a benchmark that adds elements to different List implementations and clears them
Result:
Less overhead when decoding / encoding messages.
Benchmark (elements) Mode Cnt Score Error Units
CodecOutputListBenchmark.arrayList 1 thrpt 20 24853764.609 ± 161582.376 ops/s
CodecOutputListBenchmark.arrayList 4 thrpt 20 17310636.508 ± 930517.403 ops/s
CodecOutputListBenchmark.codecOutList 1 thrpt 20 26670751.661 ± 587812.655 ops/s
CodecOutputListBenchmark.codecOutList 4 thrpt 20 25166421.089 ± 166945.599 ops/s
CodecOutputListBenchmark.recyclableArrayList 1 thrpt 20 24565992.626 ± 210017.290 ops/s
CodecOutputListBenchmark.recyclableArrayList 4 thrpt 20 18477881.775 ± 157003.777 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 246.748 sec - in io.netty.handler.codec.CodecOutputListBenchmark
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods which creates a non-recyclable derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it so that its callers don't need to add a dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
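Illustrative usage of the new variants:

    // Before: two steps, and the derived buffer shares the parent's refCnt.
    ByteBuf slice = buf.slice(offset, length).retain();

    // After: one step; the derived buffer carries its own reference count
    // and is recycled when the parent is a pooled buffer.
    ByteBuf retained = buf.retainedSlice(offset, length);
    try {
        // ... use retained ...
    } finally {
        retained.release();
    }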
Motivation:
Users often need to read or write CharSequences to or from a ByteBuf. We should add methods for this to ByteBuf, as we can do some optimizations depending on the implementation.
Modifications:
Add setCharSequence, writeCharSequence, getCharSequence and readCharSequence
Result:
Easier reading / writing of CharSequence with ByteBuf.
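Illustrative usage of the new accessors:

    int written = buf.writeCharSequence("hello", StandardCharsets.US_ASCII);
    CharSequence read = buf.readCharSequence(written, StandardCharsets.US_ASCII);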
Motivation:
The double quote may be escaped in a JSON string, but JsonObjectDecoder doesn't handle it. Resolves #5157.
Modifications:
Don't end a JSON string when processing an escaped double quote.
Result:
JsonObjectDecoder can handle backslash and double quote in a JSON string correctly.
Motivation:
We lately added ByteBuf.isReadOnly(), which allows detecting whether a buffer is read-only. We should add ByteBuf.asReadOnly() to allow easy access to a read-only version of a buffer.
Modifications:
- Add ByteBuf.asReadOnly()
- Deprecate Unpooled.unmodifiableBuffer(Bytebuf)
Result:
More consistent api.
Motivation:
b112673554 added ChannelInputShutdownEvent support to ByteToMessageDecoder but missed updating the code for ReplayingDecoder. This has the effect:
- If a ChannelInputShutdownEvent is fired, ByteToMessageDecoder (the super-class of ReplayingDecoder) will call the channelInputClosed(...) method, which will pass the incorrect buffer to the decode method of ReplayingDecoder.
Modifications:
Share more code between ByteToMessageDecoder and ReplayingDecoder and so also support ChannelInputShutdownEvent correctly in ReplayingDecoder
Result:
ChannelInputShutdownEvent is correctly handled in ReplayingDecoder as well.