Motivation:
PlatformDependent.newConcurrentHashMap() is no longer needed, so it can be removed and callers can use new ConcurrentHashMap<>() directly instead of invoking PlatformDependent.newConcurrentHashMap().
Modification:
Use ConcurrentHashMap provided by the JDK directly.
Result:
Less code to maintain.
Motivation:
We can use the diamond operator these days.
Modification:
Use diamond operator whenever possible.
Result:
More modern code and less boiler-plate.
Motivation:
Since Java 7 we can automatically close resources using the try-with-resources construct.
Modification:
Change try/catch blocks in the code to use try-with-resources where possible.
Result:
Less boiler-plate.
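A minimal hypothetical illustration of the pattern (not taken from the Netty source): the resource declared in the try header is closed automatically, replacing an explicit finally block.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

final class ReadFirstLine {
    // Hypothetical example: the reader is closed automatically when the try block
    // exits, even if readLine() throws, so no explicit finally { reader.close(); } is needed.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    private ReadFirstLine() { }
}
```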
Motivation:
While we are not yet sure if we want to require Java 11 as the minimum, we are at least sure we want to require Java 8 as the minimum.
Modifications:
Change the minimum version to Java 8 and update some tests which failed to compile after this change.
Result:
Use Java 8 as the minimum and be able to use Java 8 features.
Motivation:
Upon investigation of the source code, LineBasedFrameDecoder, JsonObjectDecoder and XmlFrameDecoder
appear to only support ASCII or UTF-8 input. This is an important characteristic
that is not reflected in any documentation, which could lead to improper usage and bugs.
Modifications:
A Javadoc comment is added to all three classes stating that the implementation is only
compatible with UTF-8 or ASCII input streams, briefly touching on implementation details.
Result:
End users of the Netty library no longer have to study the source code to determine the character
encoding limitations of these classes.
Motivation:
ByteBuf supports “marker indexes”. The intended use case is that, while a speculative operation (e.g. a decode) is in progress, the user can “mark” an index and return to it later if the operation isn’t successful (e.g. not enough data). However, this is rarely used in practice,
requires extra memory to maintain, and introduces complexity in the state management for derived/pooled buffer initialization, resizing, and other operations which may modify reader/writer indexes.
Modifications:
Remove support for marking and adjust testcases / code.
Result:
Fixes https://github.com/netty/netty/issues/8535.
Motivation:
We had a typo (most likely caused by copy-and-paste) in the API docs which should be fixed.
Modifications:
Replace the word "encoder" with "decoder".
Result:
Correct apidocs.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
If the encoder needs to flush more than one outbound message it will
create a new ChannelPromise for all but the last write which will
swallow failures.
Modification:
Use a PromiseCombiner in the case of multiple messages and the parent
promise isn't the `VoidPromise`.
Result:
Intermediate failures are propagated to the original ChannelPromise.
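A hedged sketch of the pattern described above; the helper and message names are illustrative and this is not the actual encoder code.

```java
import java.util.List;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseCombiner;

final class WriteHelper {
    // Sketch: aggregate several outbound writes into the caller's promise so that a
    // failure of any intermediate write is propagated instead of being swallowed.
    static void writeAll(ChannelHandlerContext ctx, List<Object> messages, ChannelPromise promise) {
        if (messages.size() == 1) {
            ctx.write(messages.get(0), promise);
        } else if (promise.isVoid()) {
            // Nothing to notify on a void promise, so plain writes are sufficient.
            for (Object msg : messages) {
                ctx.write(msg, ctx.voidPromise());
            }
        } else {
            PromiseCombiner combiner = new PromiseCombiner(ctx.executor());
            for (Object msg : messages) {
                combiner.add(ctx.write(msg));
            }
            // The aggregate promise fails if any of the individual writes failed.
            combiner.finish(promise);
        }
    }

    private WriteHelper() { }
}
```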
Motivation:
There are currently many more places where this could be used which were
possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding copy of
backing array if possible (although would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
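A hedged sketch of the substitutions described above; the helper names are hypothetical.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.util.internal.PlatformDependent;

final class TempBytes {
    // Instead of "new byte[len]": may skip zeroing the array on Java 9+ when the
    // appropriate flags are in place, otherwise falls back to a plain allocation.
    static byte[] tempBuffer(int len) {
        return PlatformDependent.allocateUninitializedArray(len);
    }

    // Instead of hand-rolled copy logic: reuses the backing array when it exactly
    // matches the requested region (copy == false), copying only when necessary.
    static byte[] toBytes(ByteBuf buf) {
        return ByteBufUtil.getBytes(buf, buf.readerIndex(), buf.readableBytes(), false);
    }

    private TempBytes() { }
}
```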
* Optimize AbstractByteBuf.getCharSequence() in US_ASCII case
Motivation:
Inspired by https://github.com/netty/netty/pull/8388, I noticed this
simple optimization to avoid char[] allocation (also suggested in a TODO
here).
Modifications:
Return an AsciiString from AbstractByteBuf.getCharSequence() if
requested charset is US_ASCII or ISO_8859_1 (latter thanks to
@Scottmitch's suggestion). Also tweak unit tests not to require Strings
and include a new benchmark to demonstrate the speedup.
Result:
Speed-up of AbstractByteBuf.getCharSequence() in the US_ASCII and ISO_8859_1
cases.
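A hedged usage sketch, assuming a simple heap buffer created via Unpooled, showing where the optimization applies.

```java
import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

final class GetCharSequenceExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("hello", StandardCharsets.US_ASCII);
        // For US_ASCII (and ISO_8859_1) the returned CharSequence can be backed
        // directly by bytes, avoiding the char[] allocation a String would require.
        CharSequence seq = buf.getCharSequence(0, buf.readableBytes(), StandardCharsets.US_ASCII);
        System.out.println(seq);
        buf.release();
    }
}
```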
Motivation:
We need to ensure the Cumulator always releases the input buffer if it can not take over the ownership of it as otherwise it may leak.
Modifications:
- Correctly ensure the buffer is always released.
- Add unit tests.
Result:
Ensure buffer is always released.
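A hedged sketch of the ownership rule, not the actual Netty cumulator: the input buffer is released in a finally block so it cannot leak even if cumulation fails.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.handler.codec.ByteToMessageDecoder.Cumulator;

final class SafeCumulator {
    static final Cumulator COPYING = new Cumulator() {
        @Override
        public ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
            try {
                // May throw (e.g. if the cumulation buffer cannot be expanded).
                return cumulation.writeBytes(in);
            } finally {
                // 'in' was copied, not retained, so it must be released here
                // even if writeBytes throws; otherwise it would leak.
                in.release();
            }
        }
    };

    private SafeCumulator() { }
}
```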
Motivation:
In theory our estimate of the needed buffer size could be off, so we need to ensure we grow the buffer if there is no space left.
Modifications:
Ensure we grow the buffer if there is no space left in there but we still have data to deflate.
Result:
Correctly deflate data in all cases.
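A hedged sketch of the grow-and-continue loop described above, assuming a heap ByteBuf destination and a java.util.zip.Deflater; this is not the actual encoder code.

```java
import java.util.zip.Deflater;

import io.netty.buffer.ByteBuf;

final class DeflateLoop {
    // Sketch only: assumes 'out' is a heap buffer (out.hasArray() == true).
    static void deflateAll(Deflater deflater, ByteBuf out) {
        while (!deflater.finished()) {
            if (!out.isWritable()) {
                // The size estimate turned out to be too small: grow and keep deflating.
                out.ensureWritable(4096);
            }
            int writerIndex = out.writerIndex();
            int numBytes = deflater.deflate(
                    out.array(), out.arrayOffset() + writerIndex, out.writableBytes());
            if (numBytes == 0) {
                break; // no output produced; the deflater needs more input (or finish())
            }
            out.writerIndex(writerIndex + numBytes);
        }
    }

    private DeflateLoop() { }
}
```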
Motivation:
We need to reset the offset to 0 when we fail lazily because of a too-long frame.
Modifications:
- Reset offset
- Add testcase
Result:
Fixes https://github.com/netty/netty/issues/8256.
Motivation:
The implementation of CharSequenceValueConverter.convertToByte did not correctly handle AsciiString if the length != 1.
Modifications:
- Only use fast-path for AsciiString with length of 1.
- Add unit tests.
Result:
Fixes https://github.com/netty/netty/issues/7990
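A hedged sketch of the adjusted fast path; this is illustrative, not the exact Netty code.

```java
import io.netty.util.AsciiString;

final class ToByteConverter {
    // Only take the single-byte fast path when the AsciiString really has length 1;
    // otherwise fall back to parsing the full value.
    static byte convertToByte(CharSequence value) {
        if (value instanceof AsciiString && value.length() == 1) {
            return ((AsciiString) value).byteAt(0);
        }
        return Byte.parseByte(value.toString());
    }

    private ToByteConverter() { }
}
```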
Motivation:
We did not correctly copy elements in some cases when add(index, element) was used.
Modifications:
- Correctly detect when a copy is needed and when not.
- Add test case.
Result:
Fixes https://github.com/netty/netty/issues/7938.
Motivation:
Some `if` statements contain common parts that can be extracted.
Modifications:
Extract common parts from `if` statements.
Result:
Less code and bytecode. The code is simpler and clearer.
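A small hypothetical before/after illustration of the kind of extraction meant here; this is not actual Netty code.

```java
final class ExtractCommonParts {
    static void before(boolean large, Object msg) {
        if (large) {
            System.out.println("large message");
            release(msg);
        } else {
            System.out.println("small message");
            release(msg);  // duplicated in both branches
        }
    }

    static void after(boolean large, Object msg) {
        System.out.println(large ? "large message" : "small message");
        release(msg);      // common part extracted, appears only once
    }

    private static void release(Object msg) {
        // placeholder for the shared logic
    }

    private ExtractCommonParts() { }
}
```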
Motivation:
When the JsonObjectDecoder determines that the incoming buffer had some data discarded, it resets the internal index to readerIndex and attempts to adjust the state, which does not work correctly for streams of JSON objects.
Modifications:
Reset the internal index to the value considering the previous reads.
Result:
JsonObjectDecoder correctly handles streams of both JSON objects and arrays with no state adjustments or repeatable reads.
Motivation:
6e5fd9311f fixed a bug in EmptyHeaders which was never noticed before because we had no tests.
Modifications:
Add tests for EmptyHeaders.
Result:
EmptyHeaders is tested now.
Motivation:
EmptyHeaders#get with a default value argument returns null. It should never return null, and instead it should return the default value.
Modifications:
- EmptyHeaders#get with a default value should return that default value
Result:
More correct implementation of the Headers API.
Motivation:
The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.
The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.
Modifications:
Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.
Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).
Result:
The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
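A hedged sketch of the interpretation change, assuming the 2-byte length/offset fields are read from a ByteBuf in little-endian order.

```java
import io.netty.buffer.ByteBuf;

final class SnappyLengths {
    // Read the 2-byte field as an unsigned value so inputs >= 0x8000 are not
    // misinterpreted as negative numbers.
    static int readLength(ByteBuf in) {
        return in.readUnsignedShortLE();   // 0..65535, never negative
    }

    // Roughly what the old behaviour amounted to: goes negative at 0x8000.
    static int readLengthSigned(ByteBuf in) {
        return in.readShortLE();
    }

    private SnappyLengths() { }
}
```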
Motivation:
CharSequenceValueConverter#convertToBoolean has a few manual conditionals which can be removed if we use AsciiString.contentEqualsIgnoreCase. Also, comparing an AsciiString to a String incurs conversions to char that can be avoided if we compare against an AsciiString.
Modifications:
- Use AsciiString.contentEqualsIgnoreCase
- Compare against an AsciiString
Result:
Simplified CharSequenceValueConverter#convertToBoolean which favors AsciiString comparison.
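A hedged sketch of the simplified comparison; the constant name is illustrative.

```java
import io.netty.util.AsciiString;

final class BooleanConverter {
    private static final AsciiString TRUE_ASCII = AsciiString.cached("true");

    // A single case-insensitive comparison against an AsciiString constant,
    // avoiding per-character conversions to char.
    static boolean convertToBoolean(CharSequence value) {
        return AsciiString.contentEqualsIgnoreCase(value, TRUE_ASCII);
    }

    private BooleanConverter() { }
}
```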
Motivation:
If you pass the output of CharSequenceValueConverter.convertTimeMillis to convertToTimeMillis, it will throw a ParseException.
Modifications:
- Correctly implement CharSequenceValueConverter.convertToTimeMillis
- Add unit-tests for CharSequenceValueConverter
Result:
Correctly convert timemillis.
Motivation:
HeaderEntry.equals() inherits Object.equals(), which simply checks if two objects are the same instance.
So it returns false even when two HeaderEntry objects have the same name and value.
Modifications:
Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-
Result:
HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
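For reference, the Map.Entry.equals() contract boils down to a null-safe comparison of key and value, roughly as sketched here; this is illustrative code, not the actual HeaderEntry implementation.

```java
import java.util.Map;
import java.util.Objects;

final class EntryEquality {
    // Two entries are equal iff their keys are equal and their values are equal,
    // treating null keys/values as equal to each other.
    static boolean entryEquals(Map.Entry<?, ?> e1, Object o) {
        if (!(o instanceof Map.Entry)) {
            return false;
        }
        Map.Entry<?, ?> e2 = (Map.Entry<?, ?>) o;
        return Objects.equals(e1.getKey(), e2.getKey())
                && Objects.equals(e1.getValue(), e2.getValue());
    }

    private EntryEquality() { }
}
```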
Motivation:
HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.
Modifications:
- Only treat a value of "true" as Boolean.TRUE
- Add unit tests.
Result:
Consistent conversion of values across all CharSequence implementations.
Motivation:
Headers.get* methods should not throw an exception but return null or the default value if conversion of the value fails.
Modifications:
- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.
Result:
Fixes [#7710].
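A hedged sketch of the intended behaviour, using a plain integer parse in place of the real ValueConverter; the names are illustrative.

```java
final class SafeGet {
    // If the converter cannot parse the stored value, fall back to the default
    // instead of letting the exception escape from get*.
    static Integer getInt(CharSequence value, Integer defaultValue) {
        if (value == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(value.toString());
        } catch (RuntimeException ignore) {
            return defaultValue;
        }
    }

    private SafeGet() { }
}
```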