Motivation:
ByteToMessageDecoder only looks at the last channelRead() in the batch
of channelRead()-s when determining whether or not it should call
ChannelHandlerContext#read() to consume more data when !isAutoRead. This
will lead to read() calls being issued unnecessarily and unprompted if the
very last channelRead() didn't result in at least one decoded message, even
if messages have been decoded from other channelRead()-s in the
current batch.
Modifications:
Track decode outcomes for the entire batch of channelRead() calls and
only issue a read in BTMD if the entire batch of channelRead() calls
yielded no complete messages.
Result:
ByteToMessageDecoder will no longer overread when the very last read
yielded no message, but the batch of reads did.
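For illustration, a minimal self-contained sketch of the batch-tracking idea; the class, field and tryDecode() step are illustrative stand-ins, not the actual ByteToMessageDecoder internals:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class BatchAwareDecoder extends ChannelInboundHandlerAdapter {
        private boolean producedMessage; // sticky across the whole batch

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            Object decoded = tryDecode(msg); // hypothetical decode step
            if (decoded != null) {
                producedMessage = true;
                ctx.fireChannelRead(decoded);
            }
        }

        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            // Only request more data if the entire batch decoded nothing.
            if (!producedMessage && !ctx.channel().config().isAutoRead()) {
                ctx.read();
            }
            producedMessage = false; // reset for the next batch
            ctx.fireChannelReadComplete();
        }

        private Object tryDecode(Object msg) {
            return msg; // placeholder
        }
    }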
Motivation:
Lz4FrameEncoder and Lz4FrameDecoder in their default configuration use
an extremely inefficient way to checksum direct byte buffers. In
particular, for every byte checksummed, a single-element byte array is
being allocated and a JNI call is made, which in some internal testing
makes a 25x difference in total throughput and allocates *a lot* of
garbage.
Modifications:
Lz4XXHash32, an implementation of ByteBufChecksum specifically for use
by Lz4FrameEncoder and Lz4FrameDecoder, is introduced. It utilises the
xxHash32 block API, which provides a hash() method that accepts a
ByteBuffer as an argument. Lz4FrameEncoder and Lz4FrameDecoder are
modified to use this implementation by default.
Result:
Lz4FrameEncoder and Lz4FrameDecoder perform well again when operating
on direct byte buffers with default checksum configuration; a public
implementation is provided for those who need to override the seed.
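A minimal sketch of the underlying block-API call from lz4-java, assuming the LZ4 frame format's default seed (0x9747b28c); the real Lz4XXHash32 wraps this behind the ByteBufChecksum interface:

    import java.nio.ByteBuffer;
    import net.jpountz.xxhash.XXHash32;
    import net.jpountz.xxhash.XXHashFactory;

    public final class BlockHashExample {
        private static final int DEFAULT_SEED = 0x9747b28c; // LZ4 frame format seed

        // One call hashes the whole (possibly direct) region: no per-byte
        // byte[] allocation, no per-byte JNI call.
        public static int hash(ByteBuffer buffer) {
            XXHash32 hasher = XXHashFactory.fastestInstance().hash32();
            return hasher.hash(buffer, buffer.position(), buffer.remaining(), DEFAULT_SEED);
        }
    }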
Motivation:
ReflectiveByteBufChecksum#update(buf, off, len) ignores provided offset
and length arguments when operating on direct buffers, leading to wrong
byte sequences being checksummed and ultimately incorrect checksum
values (unless checksumming the entire buffer).
Modifications:
Use the provided offset and length arguments to get the correct nio
buffer to checksum; add test coverage exercising the four meaningfully
different offset and length combinations.
Result:
Offset and length are respected and a correct checksum gets calculated;
simple unit test should prevent regressions in the future.
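A sketch of the corrected pattern, using plain JDK types rather than the Netty internals: slice out exactly [off, off + len) before handing the buffer to the checksum:

    import java.nio.ByteBuffer;
    import java.util.zip.CRC32;

    public final class ChecksumSlice {
        public static long crc32(ByteBuffer direct, int off, int len) {
            CRC32 crc = new CRC32();
            ByteBuffer region = direct.duplicate(); // leave the caller's indexes untouched
            region.position(off);
            region.limit(off + len);
            crc.update(region); // checksums exactly [off, off + len)
            return crc.getValue();
        }
    }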
Motivation:
Because of a simple bug in ByteBufChecksum#updateByteBuffer(Checksum),
ReflectiveByteBufChecksum is never used for CRC32 and Adler32, resulting
in direct ByteBuffers being checksummed byte by byte, which is
undesirable.
Modification:
Fix ByteBufChecksum#updateByteBuffer(Checksum) method to pass the
correct argument to Method#invoke(Checksum, ByteBuffer).
Result:
ReflectiveByteBufChecksum will now be used for Adler32 and CRC32 on
Java8+ and direct ByteBuffers will no longer be checksummed on slow
byte-by-byte basis.
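As a generic illustration of this bug class (not the Netty source): Method#invoke takes the receiver object first, then the method arguments, and mixing those up breaks the reflective call:

    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;
    import java.util.zip.CRC32;

    public final class ReflectiveUpdate {
        public static void update(CRC32 checksum, ByteBuffer buf) throws Exception {
            Method update = CRC32.class.getMethod("update", ByteBuffer.class);
            update.invoke(checksum, buf); // receiver first, then the argument
        }
    }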
Motivation:
It is valid to use null as sender so we should support it when DatagramPacketEncoder checks if it supports the message.
Modifications:
- Add null check
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9199.
Motivation:
At the moment ByteToMessageDecoder always calls fireChannelReadComplete() when the handler is removed from the pipeline and the cumulation buffer is not null. We should only call it when we also call fireChannelRead(...), which only happens if the cumulation buffer is not null and readable.
Modifications:
Only call fireChannelReadComplete() if fireChannelRead(...) is called before during removal of the handler.
Result:
More correct semantics
Motivation
Pipeline handlers are free to "take control" of input buffers if they have singular refcount - in particular to mutate their raw data if non-readonly via discarding of read bytes, etc.
However there are various places (primarily unit tests) where a wrapped byte-array buffer is passed in and the wrapped array is assumed not to change (used after the wrapped buffer is passed to EmbeddedChannel.writeInbound()). This invalid assumption could result in unexpected errors, such as those exposed by #8931.
Modifications
Anywhere that the data passed to writeInbound() might be used again, ensure that either:
- A copy is used rather than wrapping a shared byte array, or
- The buffer is otherwise protected from modification by making it read-only
For the tests, copying is preferred since it still allows the "mutating" optimizations to be exercised.
Results
Avoid possible errors when pipeline assumes it has full control of input buffer.
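A sketch of the two safe options when the backing byte[] is reused after the write (class and method names are illustrative):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.embedded.EmbeddedChannel;

    public final class WriteInboundSafety {
        public static void demo(EmbeddedChannel channel, byte[] shared) {
            // Option 1: copy, so the pipeline may freely mutate its buffer.
            channel.writeInbound(Unpooled.copiedBuffer(shared));

            // Option 2: wrap but mark read-only, so mutation attempts fail fast.
            ByteBuf readOnly = Unpooled.wrappedBuffer(shared).asReadOnly();
            channel.writeInbound(readOnly);
        }
    }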
Motivation:
An OOME can occur when suppressedExceptions grows without bound because other libraries call Throwable#addSuppressed. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.
Modifications:
Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.
Result:
No OOME possible because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
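A sketch of the static-instance pattern this enables, assuming a Java 7+ runtime where the four-argument Throwable constructor is available (on Java 6 suppression does not exist at all, which is why that case is also safe):

    // Illustrative, not the Netty source: suppression (and the stack trace)
    // are disabled, so third-party addSuppressed() calls cannot pile up
    // references on a shared static instance.
    public final class StaticSignalException extends RuntimeException {
        public static final StaticSignalException INSTANCE = new StaticSignalException();

        private StaticSignalException() {
            // message, cause, enableSuppression=false, writableStackTrace=false
            super(null, null, false, false);
        }
    }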
Motivation:
DefaultHeaders entries maintain two linked lists: one for overall insertion order
and one for "in bucket" order. DefaultHeaders#valueIterator removal (introduced in 1d9090aab2) only reliably
removes the entry from the overall insertion order, but may not remove from the
bucket unless the element is the first entry.
Modifications:
- DefaultHeaders$ValueIterator should track 2 elements behind the next entry so
that the single linked "in bucket" list can be patched up when removing the
previous entry.
Result:
More correct DefaultHeaders#valueIterator removal.
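A self-contained sketch of the underlying pattern, not the DefaultHeaders source: to unlink from a singly linked list during iteration, the iterator must remember the node before the last-returned one, i.e. two nodes behind its cursor:

    import java.util.Iterator;
    import java.util.NoSuchElementException;

    final class SinglyLinkedIterator<T> implements Iterator<T> {
        static final class Node<T> {
            final T value;
            Node<T> next;
            Node(T value) { this.value = value; }
        }

        private Node<T> head;
        private Node<T> beforePrevious; // two behind the cursor; needed to unlink
        private Node<T> previous;       // node last returned by next()
        private Node<T> cursor;         // node next() will return
        private boolean canRemove;

        SinglyLinkedIterator(Node<T> head) {
            this.head = head;
            this.cursor = head;
        }

        @Override
        public boolean hasNext() {
            return cursor != null;
        }

        @Override
        public T next() {
            if (cursor == null) {
                throw new NoSuchElementException();
            }
            beforePrevious = previous;
            previous = cursor;
            cursor = cursor.next;
            canRemove = true;
            return previous.value;
        }

        @Override
        public void remove() {
            if (!canRemove) {
                throw new IllegalStateException();
            }
            canRemove = false;
            if (beforePrevious == null) {
                head = previous.next;                // removed the head node
            } else {
                beforePrevious.next = previous.next; // patch the singly linked list
            }
            previous = beforePrevious;               // keep bookkeeping consistent
        }
    }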
Motivation:
While iterating values it is often desirable to be able to remove individual
entries. The existing mechanism to do this involves removal of all entries and
conditional re-insertion which is heavyweight in order to remove a single
value.
Modifications:
- DefaultHeaders$ValueIterator supports removal
Result:
It is possible to remove entries while iterating the values in DefaultHeaders.
Motivation:
32563bfcc1 introduced a regression in which we no longer discarded messages after handling an oversized message.
Modifications:
- Do not set aggregating to false after handleOversizedMessage is called
- Adjust unit tests to verify the behaviour is correct again.
Result:
Fixes https://github.com/netty/netty/issues/9007.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor.
- Check that methods are called from within the EventExecutor thread and fail if not
- Correctly dispatch on the right EventExecutor if a Future uses a different EventExecutor, to eliminate concurrency issues.
Result:
Safer use of PromiseCombiner and enforcement of the correct usage contract.
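A minimal usage sketch of the new executor-bound constructor (ImmediateEventExecutor is used only to keep the example self-contained):

    import io.netty.util.concurrent.EventExecutor;
    import io.netty.util.concurrent.ImmediateEventExecutor;
    import io.netty.util.concurrent.Promise;
    import io.netty.util.concurrent.PromiseCombiner;

    public final class CombinerUsage {
        public static void main(String[] args) {
            EventExecutor executor = ImmediateEventExecutor.INSTANCE;
            PromiseCombiner combiner = new PromiseCombiner(executor);

            Promise<Void> first = executor.newPromise();
            Promise<Void> second = executor.newPromise();
            Promise<Void> aggregate = executor.newPromise();

            combiner.add(first);        // must be called from the executor's thread
            combiner.add(second);
            combiner.finish(aggregate); // completes once all added futures do

            first.setSuccess(null);
            second.setSuccess(null);
        }
    }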
Motivation:
While looking through the code I found an interesting place, DateFormatter.tryParseMonth, that was not very efficient, so I decided to optimize it a bit.
Modification:
Changed the DateFormatter.tryParseMonth method: instead of invoking regionMatches() for every month, compare chars one by one.
Result:
DateFormatter.parseHttpDate method performance improved by ~3% to ~15%, depending on the input.
Benchmark (DATE_STRING) Mode Cnt Score Error Units
DateFormatter2Benchmark.parseHttpHeaderDateFormatter Sun, 27 Jan 2016 19:18:46 GMT thrpt 6 4142781.221 ± 82155.002 ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatter Sun, 27 Dec 2016 19:18:46 GMT thrpt 6 3781810.558 ± 38679.061 ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew Sun, 27 Jan 2016 19:18:46 GMT thrpt 6 4372569.705 ± 30257.537 ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew Sun, 27 Dec 2016 19:18:46 GMT thrpt 6 4339785.100 ± 57542.660 ops/s
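A simplified sketch of the character-discrimination approach (lowercase-only here; the real DateFormatter also handles case and writes into its calendar state):

    public final class MonthParse {
        // Returns 1-12, or -1 if no match; expects a 3-char lowercase region.
        static int tryParseMonth(CharSequence txt, int start, int end) {
            if (end - start != 3) return -1;
            char a = txt.charAt(start), b = txt.charAt(start + 1), c = txt.charAt(start + 2);
            if (a == 'j') {
                if (b == 'a') return c == 'n' ? 1 : -1;                 // jan
                if (b == 'u') return c == 'n' ? 6 : c == 'l' ? 7 : -1;  // jun, jul
                return -1;
            }
            if (a == 'f') return b == 'e' && c == 'b' ? 2 : -1;         // feb
            if (a == 'm') {
                if (b != 'a') return -1;
                return c == 'r' ? 3 : c == 'y' ? 5 : -1;                // mar, may
            }
            if (a == 'a') {
                return b == 'p' && c == 'r' ? 4
                        : b == 'u' && c == 'g' ? 8 : -1;                // apr, aug
            }
            if (a == 's') return b == 'e' && c == 'p' ? 9 : -1;         // sep
            if (a == 'o') return b == 'c' && c == 't' ? 10 : -1;        // oct
            if (a == 'n') return b == 'o' && c == 'v' ? 11 : -1;        // nov
            if (a == 'd') return b == 'e' && c == 'c' ? 12 : -1;        // dec
            return -1;
        }
    }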
Motivation
Implementations of MessageAggregator (HttpObjectAggregator in particular) may wish to
selectively aggregate requests and responses on a case-by-case basis, such as for example
only POST requests or only responses of a certain content-type.
Modifications
Add a flag to MessageAggregator that toggles between true/false depending on whether
aggregation is desired for the current message or not.
Result
Fixes #8772
Motivation:
We have utility methods to check for > 0 and >= 0 arguments. We should use them.
Modification:
Use checkPositive/checkPositiveOrZero instead of hand-rolled if statements.
Result:
Re-use utility method.
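A before/after sketch using the existing ObjectUtil helpers (the field names are illustrative):

    import static io.netty.util.internal.ObjectUtil.checkPositive;
    import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;

    public final class ArgChecks {
        private final int maxFrameLength;
        private final int initialBytesToStrip;

        public ArgChecks(int maxFrameLength, int initialBytesToStrip) {
            // instead of: if (maxFrameLength <= 0) { throw new IllegalArgumentException(...); }
            this.maxFrameLength = checkPositive(maxFrameLength, "maxFrameLength");
            this.initialBytesToStrip = checkPositiveOrZero(initialBytesToStrip, "initialBytesToStrip");
        }
    }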
Motivation:
We need to update to a new checkstyle plugin to allow the usage of lambdas.
Modifications:
- Update to new plugin version.
- Fix checkstyle problems.
Result:
Be able to use checkstyle plugin which supports new Java syntax.
Motivation:
LineBasedFrameDecoder, JsonObjectDecoder and XmlFrameDecoder, upon investigation of the
source code, appear to only support ASCII or UTF-8 input. This is an important characteristic
that is not reflected in any documentation, which could lead to improper usage and bugs.
Modifications:
A Javadoc comment is added to all three classes to state that the implementation is only
compatible with UTF-8 or ASCII input streams, and briefly touches on implementation details.
Result:
End users of the Netty library no longer have to study the source code to determine the
character encoding limitations of the given classes.
Motivation:
We had some typos (most likely caused by copy-and-paste) in the API docs which should be fixed.
Modifications:
Replace the word "encoder" with "decoder" where appropriate.
Result:
Correct apidocs.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
If the encoder needs to flush more than one outbound message it will
create a new ChannelPromise for all but the last write which will
swallow failures.
Modification:
Use a PromiseCombiner in the case of multiple messages when the parent
promise isn't the `VoidPromise`.
Result:
Intermediate failures are propagated to the original ChannelPromise.
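A sketch of the write-loop pattern (shown here with the executor-aware PromiseCombiner constructor described earlier in this log; the encoder's actual code differs):

    import java.util.List;

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelPromise;
    import io.netty.util.concurrent.PromiseCombiner;

    public final class MultiWrite {
        static void writeAll(ChannelHandlerContext ctx, List<Object> msgs, ChannelPromise promise) {
            if (msgs.size() == 1) {
                ctx.write(msgs.get(0), promise); // single message: no combiner needed
                return;
            }
            PromiseCombiner combiner = new PromiseCombiner(ctx.executor());
            for (Object msg : msgs) {
                combiner.add(ctx.write(msg)); // intermediate failures are tracked
            }
            combiner.finish(promise);         // promise fails if any write failed
        }
    }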
Motivation:
There are currently many more places where PlatformDependent.allocateUninitializedArray() could
be used which were possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding copy of
backing array if possible (although this would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
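A tiny sketch of the replacement call; PlatformDependent handles the fallback internally:

    import io.netty.util.internal.PlatformDependent;

    public final class TempArrays {
        // On Java 9+ (with the right compile flags) this can skip zeroing the
        // array; on older runtimes it falls back to a plain new byte[length].
        static byte[] temp(int length) {
            return PlatformDependent.allocateUninitializedArray(length);
        }
    }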
* Optimize AbstractByteBuf.getCharSequence() in US_ASCII case
Motivation:
Inspired by https://github.com/netty/netty/pull/8388, I noticed this
simple optimization to avoid char[] allocation (also suggested in a TODO
here).
Modifications:
Return an AsciiString from AbstractByteBuf.getCharSequence() if
requested charset is US_ASCII or ISO_8859_1 (latter thanks to
@Scottmitch's suggestion). Also tweak unit tests not to require Strings
and include a new benchmark to demonstrate the speedup.
Result:
Speed-up of AbstractByteBuf.getCharSequence() in the US-ASCII and
ISO-8859-1 cases
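A small usage sketch; the instanceof check reflects the behaviour this change introduces for single-byte charsets:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.AsciiString;
    import io.netty.util.CharsetUtil;

    public final class GetCharSequenceDemo {
        public static void main(String[] args) {
            ByteBuf buf = Unpooled.copiedBuffer("hello", CharsetUtil.US_ASCII);
            CharSequence seq = buf.getCharSequence(0, buf.readableBytes(), CharsetUtil.US_ASCII);
            System.out.println(seq instanceof AsciiString); // true after this change
            buf.release();
        }
    }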
Motivation:
We need to ensure the Cumulator always releases the input buffer if it cannot take over ownership of it, as otherwise it may leak.
Modifications:
- Correctly ensure the buffer is always released.
- Add unit tests.
Result:
Ensure buffer is always released.
Motivation:
In theory our estimation of the needed buffer could be off and so we need to ensure we grow it if there is no space left.
Modifications:
Ensure we grow the buffer if there is no space left in there but we still have data to deflate.
Result:
Correctly deflate data in all cases.
Motivation:
We need to reset the offset to 0 when we fail lazily because of a too-long frame.
Modifications:
- Reset offset
- Add testcase
Result:
Fixes https://github.com/netty/netty/issues/8256.
Motivation:
The implementation of CharSequenceValueConverter.convertToByte did not correctly handle an AsciiString whose length != 1.
Modifications:
- Only use fast-path for AsciiString with length of 1.
- Add unit tests.
Result:
Fixes https://github.com/netty/netty/issues/7990
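A sketch of the guarded fast path (simplified; the real converter's general path goes through its normal parsing route):

    import io.netty.util.AsciiString;

    public final class ToByte {
        static byte convertToByte(CharSequence value) {
            if (value instanceof AsciiString && value.length() == 1) {
                return ((AsciiString) value).byteAt(0); // fast path only when length == 1
            }
            return Byte.parseByte(value.toString());    // general path
        }
    }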
Motivation:
We did not correctly copy elements in some cases when add(index, element) was used.
Modifications:
- Correctly detect when a copy is needed and when not.
- Add test case.
Result:
Fixes https://github.com/netty/netty/issues/7938.
Motivation:
Some `if` statements contain common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and clearer.
Motivation:
When the JsonObjectDecoder determines that the incoming buffer had some data discarded, it resets the internal index to readerIndex and attempts to adjust the state, which does not work correctly for streams of JSON objects.
Modifications:
Reset the internal index to the value considering the previous reads.
Result:
JsonObjectDecoder correctly handles streams of both JSON objects and arrays with no state adjustments or repeatable reads.
Motivation:
6e5fd9311f fixed a bug in EmptyHeaders which was never noticed before because we had no tests.
Modifications:
Add tests for EmptyHeaders.
Result:
EmptyHeaders is tested now.
Motivation:
EmptyHeaders#get with a default value argument returns null. It should never return null, and instead it should return the default value.
Modifications:
- EmptyHeaders#get with a default value should return that default value
Result:
More correct implementation of the Headers API.
Motivation:
The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.
The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.
Modifications:
Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.
Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).
Result:
The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
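A sketch of the fix class (not the Snappy decoder source): read the 2-byte quantity as unsigned so values >= 0x8000 stay positive:

    import io.netty.buffer.ByteBuf;

    public final class UnsignedShortRead {
        static int readCopyOffset(ByteBuf in) {
            return in.readUnsignedShortLE(); // 0..65535, never negative
            // buggy pattern: in.readShortLE() sign-extends 0x8000..0xFFFF to negatives
        }
    }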
Motivation:
CharSequenceValueConverter#convertToBoolean has a few manual conditionals which can be removed if we use AsciiString.contentEqualsIgnoreCase. Also by comparing an AsciiString to a String we will incur conversions to char that can be avoided if we compare against AsciiString.
Modifications:
- Use AsciiString.contentEqualsIgnoreCase
- Compare against an AsciiString
Result:
Simplified CharSequenceValueConverter#convertToBoolean which favors AsciiString comparison.
Motivation:
If you pass the output of CharSequenceValueConverter.convertTimeMillis to convertToTimeMillis, it will throw a ParseException.
Modifications:
- Correctly implement CharSequenceValueConverter.convertTimeMillis
- Add unit-tests for CharSequenceValueConverter
Result:
Correctly convert time millis.
Motivation:
HeaderEntry.equals() inherits Object.equals(), which simply checks if two objects are the same.
So it returns false even when two HeaderEntry objects have the same name and value.
Modifications:
Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-
Result:
HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
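The contract being implemented, written out as a standalone helper:

    import java.util.Map;
    import java.util.Objects;

    public final class EntryEquals {
        // Map.Entry.equals(): equal iff both key and value are equal (null-safe).
        static boolean entryEquals(Map.Entry<?, ?> e1, Map.Entry<?, ?> e2) {
            return Objects.equals(e1.getKey(), e2.getKey())
                    && Objects.equals(e1.getValue(), e2.getValue());
        }
    }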
Motivation:
HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.
Modifications:
- Only treat a value of "true" as Boolean.TRUE
- Add unit tests.
Result:
Consistent conversion of values for all CharSequence implementations.
Motivation:
Headers.get* methods should not throw an exception but return null or the default value if conversion of the value fails.
Modifications:
- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.
Result:
Fixes [#7710].
Motivation:
We used Recycler for the CodecOutputList which is not optimized for the use-case of access only from the same Thread all the time.
Modifications:
- Use FastThreadLocal for CodecOutputList
- Add benchmark
Result:
Less overhead in our codecs.
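A sketch of the per-thread caching pattern (simplified; CodecOutputList itself has more bookkeeping):

    import java.util.ArrayList;
    import java.util.List;

    import io.netty.util.concurrent.FastThreadLocal;

    public final class OutputListCache {
        private static final FastThreadLocal<List<Object>> CACHE =
                new FastThreadLocal<List<Object>>() {
                    @Override
                    protected List<Object> initialValue() {
                        return new ArrayList<Object>(16);
                    }
                };

        // The same thread always gets the same list back: no cross-thread
        // recycling machinery is needed.
        static List<Object> acquire() {
            List<Object> list = CACHE.get();
            list.clear();
            return list;
        }
    }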
Motivation:
Will allow easy removal of deprecated methods in future.
Modification:
Replaced ctx.attr(), ctx.hasAttr() with ctx.channel().attr(), ctx.channel().hasAttr().
Result:
No deprecated ctx.attr(), ctx.hasAttr() methods usage.
Motivation:
According to RFC 1952, concatenation of valid gzip streams is also a valid gzip stream. JdkZlibDecoder only processed the first and discarded the rest.
Modifications:
- Introduce a constructor argument decompressConcatenated; if true, JdkZlibDecoder will continue to process the stream.
Result:
- If 'decompressConcatenated = true', concatenated streams are processed in
compliance with RFC 1952.
- If 'decompressConcatenated = false' (default), the existing behavior remains.
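A usage sketch, assuming the two-argument constructor shape described above:

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.codec.compression.JdkZlibDecoder;
    import io.netty.handler.codec.compression.ZlibWrapper;

    public final class GzipSetup {
        static void addGunzip(ChannelPipeline pipeline) {
            // true => keep inflating across concatenated gzip members (RFC 1952)
            pipeline.addLast("gunzip", new JdkZlibDecoder(ZlibWrapper.GZIP, true));
        }
    }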
Motivation:
Allow the compiler to pre-compute constant expressions where possible.
Similar fix in OpenJDK: [1].
Modifications:
- Use parentheses.
- Simplify static initialization of `BYTE2HEX_*` arrays in `StringUtil`.
Result:
Less bytecode, possibly faster calculations at runtime.
[1] https://bugs.openjdk.java.net/browse/JDK-4477961
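An illustration of the parenthesization trick (a generic example, not a line from the patch):

    public final class ConstFold {
        static long toNanos(long millis) {
            // With the constants grouped, the compiler can fold them into a
            // single multiplier:
            return millis * (1000L * 1000L);
            // versus: millis * 1000L * 1000L, which associates left-to-right
            // around the variable and cannot be folded at compile time.
        }
    }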
Motivation:
In our replace(...) methods we always used validation for the newly created headers while the original headers may not use validation at all.
Modifications:
- Only use validation if the original headers used validation as well.
- Ensure we create a copy of the headers in replace(...).
Result:
Fixes [#5226]
Motivation:
Automatic-Module-Name entry provides a stable JDK9 module name, when Netty is used in modular JDK9 applications. More info: http://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
When Netty migrates to JDK9 in the future, the entry can be replaced by actual module-info descriptor.
Modification:
The POM-s are configured to put the correct module names to the manifest.
Result:
Fixes #7218.
Motivation:
DefaultHttpHeader.names() exposes HTTP header names as a Set<String>. Converting the resulting set to an array using toArray(String[]) throws an exception: java.lang.ArrayStoreException: io.netty.util.AsciiString.
Modifications:
- Remove our custom implementation of toArray(...) (and others) by just extending AbstractCollection.
- Add unit test
Result:
Fixes [#7428].
Motivation:
For debugging/logging purpose, it would be convenient to have
HttpHeaders#toString implemented.
DefaultHeaders does implement toString but the implementation is suboptimal and allocates a Set for the names and Lists for the values.
Modification:
* Introduce HeadersUtil#toString that provides a convenient optimized helper to implement toString for various headers implementations
* Have DefaultHeaders#toString and HttpHeaders#toString delegate their toString implementation to HeadersUtil
Result:
Convenient HttpHeaders#toString. Optimized DefaultHeaders#toString.
Motivation:
HttpObjectEncoder and MessageAggregator treat buffers that are not readable specially. If a buffer is not readable, then an EMPTY_BUFFER is written and the actual buffer is ignored. If the buffer has already been released then this will not be correct, as the promise will be completed, but in reality the original content shouldn't have resulted in any write because it was invalid.
Modifications:
- HttpObjectEncoder should retain/write the original buffer instead of using EMPTY_BUFFER
- MessageAggregator should retain/write the original ByteBufHolder instead of using EMPTY_BUFFER
Result:
Invalid write operations which happen to not be readable correctly reflect failed status in the promise, and do not result in any writes to the channel.
Motivation:
Use actual links to the new locations of the Protobuf repo and documentation to
avoid problems if the redirects stop working.
Modification:
Update links in comments and in all/pom.xml
Result:
Correct links to Protobuf resources
Motivation:
Even if it's a super micro-optimization (most JVMs can optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows you to
access such methods in the test scope directly, without an instance of
the class.
Modifications:
Add the 'static' modifier for all methods where it is possible. Mostly in
the test scope.
Result:
Cleaner code with proper 'static' modifiers.
Motivation:
Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.
Modifications:
Add the missing 'serialVersionUID' field for all Serializable
classes.
Result:
Proper deserialization of previously serialized objects.
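The standard shape of the added field (the value here is illustrative):

    import java.io.Serializable;

    public class StableEvent implements Serializable {
        // Pins the serialization contract so recompiling the class does not
        // invalidate previously serialized instances.
        private static final long serialVersionUID = 1L; // illustrative value

        private final String name;

        public StableEvent(String name) {
            this.name = name;
        }
    }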
Motivation: Today when Netty encounters a general error while decoding
it treats this as a decoder exception. However, for fatal causes this
should not be treated as such, instead the fatal error should be carried
up the stack without the callee having to unwind causes. This was
probably done for byte to byte message decoder but is now done for all
decoders.
Modifications: Instead of translating any error to a decoder exception,
we let those unwind out the stack (note that finally blocks still
execute) except in places where an event needs to fire where we fire
with the error instead of wrapping in a decoder exception.
Result: Fatal errors will not be treated as innocent decoder exceptions.
Motivation: Today when Netty encounters a general error while decoding
it treats this as a decoder exception. However, for fatal causes this
should not be treated as such, instead the fatal error should be carried
up the stack without the callee having to unwind causes.
Modifications: Instead of translating any error to a decoder exception,
we let those unwind out the stack (note that finally blocks still
execute).
Result: Fatal errors will not be treated as innocent decoder exceptions.
Motivation:
A large frame may be composed of many packets. Every time a packet
arrives, findEndOfLine is called from the start of the buffer, which
makes the complexity of reading a frame O(n^2). This can be
eliminated by using an offset to mark the last scan position: when a
new packet arrives, the delimiter search resumes from the mark, and the
complexity becomes O(n).
Modification:
Add an offset to mark the last scan position.
Result:
Better performance when reading large frames.
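A sketch of the O(n) scan, not the actual LineBasedFrameDecoder code; the field name is illustrative:

    import io.netty.buffer.ByteBuf;
    import io.netty.util.ByteProcessor;

    public final class LineScanner {
        private int scannedBytes; // bytes already searched without finding '\n'

        int findEndOfLine(ByteBuf buffer) {
            int total = buffer.readableBytes();
            int i = buffer.forEachByte(buffer.readerIndex() + scannedBytes,
                    total - scannedBytes, ByteProcessor.FIND_LF);
            if (i < 0) {
                scannedBytes = total; // resume from here when the next packet arrives
            } else {
                scannedBytes = 0;     // delimiter found; reset for the next frame
            }
            return i;
        }
    }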
Motivation:
The Headers interface provides a method to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.
Modifications:
- Add a method which returns an iterator over all the values for a specific name
Result:
Ability to iterate over the values for a specific name with no intermediate collection.
Motivation:
The decode method is too large to be inlined with default compiler settings, hence the uncommon paths need to be packed and moved away from the common one.
Modifications:
The uncommon paths of the decode call (e.g. failures with thrown exceptions) are packed and moved into private methods in order to reduce the size of the common path
and let it be inlined.
Result:
The decode method is inlined if the stack depth allows it.
Motivation:
Continuing to make Netty happy when compiling through Error Prone.
Modification:
Mostly comments, some minor switch statement changes.
Result:
No more compiler errors!
Motivation:
Calling `newInstance()` on a Class object can bypass compile time
checked Exception propagation. This is noted in Java Puzzlers,
as well as in ErrorProne:
http://errorprone.info/bugpattern/ClassNewInstance
Modifications:
Use the niladic constructor to create a new instance.
Result:
Compile time safety for checked exceptions
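The replacement pattern in isolation:

    public final class SafeInstantiate {
        // Unlike clazz.newInstance(), checked exceptions thrown by the
        // constructor surface wrapped in InvocationTargetException rather
        // than sneaking past the compiler.
        static <T> T create(Class<T> clazz) throws ReflectiveOperationException {
            return clazz.getConstructor().newInstance(); // niladic constructor
        }
    }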
This reverts commit d63bb4811e as it did not correctly cover all cases and so could lead to missing fireChannelReadComplete() calls. We will re-evaluate d63bb4811e and resubmit a PR once we are sure all cases are handled correctly
Motivation:
'insideString' and 'openBraces' need proper handling when streaming a
Json array over multiple writes and an element's decoding was started but
not completed.
Related to #6969
Modifications:
If the idx is reset:
- 'insideString' has to be reset to 'false' in order to indicate that
array element will be decoded from the beginning
- 'openBraces' has to be reset to '1' to indicate that Json array
decoding is in progress.
Result:
Json array is properly decoded when in streaming mode
Motivation:
It's wasteful and also confusing that channelReadComplete() is called even if there was no message forwarded to the next handler.
Modifications:
- Only call ctx.fireChannelReadComplete() if at least one message was decoded
- Add unit test
Result:
Less confusing behavior. Fixes [#4312].
Motivation:
1. The hash function in the Snappy encoding is probably wrong: it used '+' instead of '*'. See the reference implementation [1].
2. Size of the hash table is calculated, but not applied.
Modifications:
1. Fix hash function: replace addition by multiplication.
2. Allocate hash table with calculated size.
3. Use an `Integer.numberOfLeadingZeros` trick to calculate log2.
4. Release buffers in tests.
Result:
1. Better compression. In the test `encodeAndDecodeLongTextUsesCopy` now compressed size is 175 instead of 180 before this change.
2. No redundant allocations for hash table.
3. A slightly faster calculation of the shift (fewer expensive math operations).
[1] 513df5fb5a/snappy.cc (L67)
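A sketch of the fixed hash, following the reference implementation's constant; shiftFor() assumes a power-of-two table size:

    public final class SnappyHash {
        static int hash(int bytes, int shift) {
            return (bytes * 0x1e35a7bd) >>> shift; // multiplication, not addition
        }

        static int shiftFor(int hashTableSize) {
            // 32 - log2(size) via numberOfLeadingZeros:
            // e.g. size 16384 (2^14) gives shift 18
            return Integer.numberOfLeadingZeros(hashTableSize) + 1;
        }
    }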
Motivation:
Calling JsonObjectDecoder#reset while streaming Json array over multiple
writes causes CorruptedFrameException to be thrown.
Modifications:
While streaming a Json array, if the current readerIndex has been reset,
ensure that the decoder states will not be reset.
Result:
Fixes #6969
Motivation:
1. `ByteBuf` contains methods for writing a `CharSequence` which are optimized for the UTF-8 and ASCII encodings. We can also apply an optimization for ISO-8859-1.
2. In many places these methods are not used where they could be.
Modifications:
1. Apply the optimization for the ISO-8859-1 encoding in the `ByteBuf#setCharSequence` implementations.
2. Use the appropriate methods for writing `CharSequences` into buffers.
Result:
Reduce overhead from string-to-bytes conversion.
Motivation:
Lz4FrameEncoder maintains internal state, but the life cycle of its buffer is not consistently managed: the buffer is allocated in handlerAdded but freed in close, even though it can still be used until handlerRemoved is called.
Modifications:
- Move the cleanup of the buffer from close to handlerRemoved
- Explicitly throw an EncoderException from Lz4FrameEncoder if the encode operation has finished and there isn't enough space to write data
Result:
No more NPE in Lz4FrameEncoder on the buffer.
Motivation:
JdkZlibDecoder will allocate a new buffer when the previous buffer is filled with inflated data, but JZlibDecoder will attempt to reuse the same buffer by resizing it. This leads to inconsistent results between two decoders that are intended to be functionally equivalent.
Modifications:
- JdkZlibDecoder should attempt to resize and reuse the existing buffer instead of creating multiple buffers
Result:
Fixes https://github.com/netty/netty/issues/6804
Motivation:
ByteToMessageDecoder#handlerRemoved will immediately release the cumulation buffer, but it is possible that a child class may still be using this buffer, and may therefore access an already-released buffer.
Modifications:
- ByteToMessageDecoder#handlerRemoved and ByteToMessageDecoder#decode should coordinate to avoid the case where a child class is using the cumulation buffer but ByteToMessageDecoder releases that buffer.
Result:
Child classes of ByteToMessageDecoder are less likely to reference a released buffer.
Motivation:
We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer may lead to an overflow when calculating the size of the output buffer.
Modifications:
Correctly guard against overflow.
Result:
Fixes [#6620].
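A sketch of an overflow-safe size estimate (simplified: it ignores the newline padding the real codec can add):

    public final class Base64Size {
        static int encodedLength(int len) {
            long size = ((long) len + 2) / 3 * 4; // 4 * ceil(len / 3), computed in long
            if (size > Integer.MAX_VALUE) {
                throw new IllegalArgumentException("input too large to encode: " + len);
            }
            return (int) size;
        }
    }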
Motivation:
If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method, we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can, for example, happen if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we are using the local transport and write a read-only buffer. This buffer will then be passed to the peer channel, fired through the pipeline, and so end up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received, we tried to merge the buffers and failed with a ReadOnlyBufferException, which then produced a leak.
Modifications:
Ensure we copy the buffer if it is read-only.
Result:
No more exceptions, and so no leak, when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...)
Motivation:
In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.
Modification:
Add a new test method to XmlFrameDecoder to assert that the reframing works as described.
Result:
New test method is added to XmlFrameDecoder
Motivation:
This pull request does not solve any problem, but several links in the code refer to project websites under the domain code.google.com which have either moved to GitHub or are no longer maintained.
Modification:
Update the project links from code.google.com to the relevant projects on github.com
Motivation:
Lz4FrameEncoder uses internalNioBuffer but always passes in a value of 0 for the index. This should be readerIndex().
Modifications:
- change 0 to readerIndex()
Result:
More correct usage of internalNioBuffer in Lz4FrameEncoder.
Motivation:
DatagramPacketEncoder|Decoder should respect whether the wrapped handler is sharable and be sharable (or not) accordingly.
Modifications:
- Delegate isSharable() to wrapped handler
- Add test-cases
Result:
Correct behavior
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.
Modifications:
- remove the intermediate int in the Base64#decode4to3 method.
- manually do the byte shifting since we are already doing bit/byte manipulations here anyway.
Result:
Base64#decode4to3 requires fewer operations to compute the end result.
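A sketch of the direct three-byte assembly, assuming already-validated input whose decodabet values are in 0..63:

    public final class Decode4to3 {
        static void decode(byte[] decodabet, byte s0, byte s1, byte s2, byte s3,
                           byte[] dest, int destOff) {
            int d0 = decodabet[s0], d1 = decodabet[s1], d2 = decodabet[s2], d3 = decodabet[s3];
            dest[destOff]     = (byte) (d0 << 2 | d1 >>> 4); // 6 bits + top 2 of next
            dest[destOff + 1] = (byte) (d1 << 4 | d2 >>> 2); // low 4 bits + top 4 of next
            dest[destOff + 2] = (byte) (d2 << 6 | d3);       // low 2 bits + last 6
        }
    }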
Motivation:
The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds and reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as it will track each access.
Modifications:
- Pack bytes into int / short and so reduce operations on the ByteBuf
- Use ByteProcessor to reduce getByte calls.
Result:
Better performance in general. Also when you run the build with -Pleak the handler module will build in 1/4 of the time it took before.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided one when running on JDK >= 7.
Modification:
Use the JDK's ThreadLocalRandom implementation when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
To use jboss-marshalling, extra command-line arguments are needed on JDK9+, as it makes use of reflection internally.
Modifications:
Skip jboss-marshalling tests when running on JDK9+ and init of MarshallingFactory fails.
Result:
Be able to build on latest JDK9 release.
Motivation:
We need to ensure we pass all tests when sun.misc.Unsafe is not present.
Modifications:
- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account whether sun.misc.Unsafe is present or not
Result:
Tests pass even without sun.misc.Unsafe.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
Motivation:
LZ4FrameEncoder maintains an internal buffer of incoming data to compress, and only writes out compressed data when a size threshold is reached. LZ4FrameEncoder does not override the flush() method, and thus the only ways to flush data down the pipeline are to send more data or close the channel.
Modifications:
Override the flush() function to flush on demand. Also override the allocateBuffer() function so we can more accurately size the output buffer (instead of needing to potentially realloc via buffer.ensureWritable()).
Result:
Implementation works as described.