Motivation:
The double quote may be escaped in a JSON string, but JsonObjectDecoder doesn't handle it. Resolves #5157.
Modifications:
Don't end a JSON string when processing an escaped double quote.
Result:
JsonObjectDecoder correctly handles backslashes and escaped double quotes in a JSON string.
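For illustration, the essence of the fix can be sketched as follows (a minimal sketch, not the actual JsonObjectDecoder code; the helper name is made up): a double quote only ends a JSON string when it is preceded by an even number of backslashes.

    // Illustrative helper: decide whether the '"' at 'idx' is escaped.
    private static boolean isEscaped(ByteBuf in, int idx) {
        int backslashes = 0;
        for (int i = idx - 1; i >= in.readerIndex() && in.getByte(i) == '\\'; i--) {
            backslashes++;
        }
        // An odd number of preceding backslashes means the quote is escaped
        // and must not end the string.
        return (backslashes & 1) == 1;
    }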
Motivation:
Commit b112673554 added ChannelInputShutdownEvent support to ByteToMessageDecoder but missed updating the code for ReplayingDecoder. This has the following effect:
- If a ChannelInputShutdownEvent is fired, ByteToMessageDecoder (the super-class of ReplayingDecoder) calls the channelInputClosed(...) method, which passes the incorrect buffer to the decode method of ReplayingDecoder.
Modifications:
Share more code between ByteToMessageDecoder and ReplayingDecoder so that ChannelInputShutdownEvent is also supported correctly in ReplayingDecoder.
Result:
ChannelInputShutdownEvent is correctly handled in ReplayingDecoder as well.
Motivation:
See #1811
Modifications:
Add LineEncoder and LineSeparator
Result:
The user can use LineEncoder to write a String with a line separator automatically
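A minimal usage sketch (the initializer class is made up for the example; LineEncoder and LineSeparator are the classes added here):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.string.LineEncoder;
    import io.netty.handler.codec.string.LineSeparator;
    import io.netty.util.CharsetUtil;

    public class LineEncoderInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Appends "\n" to every outbound String and encodes it as UTF-8.
            ch.pipeline().addLast(new LineEncoder(LineSeparator.UNIX, CharsetUtil.UTF_8));
            // Handlers added after this point can simply write plain Strings;
            // for ch.writeAndFlush("foo") the peer receives "foo\n".
        }
    }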
Motivation:
A UDP-oriented codec reusing the existing encoders and decoders would be helpful. See #1350.
Modifications:
Add DatagramPacketEncoder and DatagramPacketDecoder to reuse the existing encoders and decoders.
Result:
People can use DatagramPacketEncoder and DatagramPacketDecoder to wrap existing encoders and decoders to create a UDP-oriented codec.
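A usage sketch, assuming the String codec as the wrapped payload codec (the initializer class is made up for the example):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.DatagramChannel;
    import io.netty.handler.codec.DatagramPacketDecoder;
    import io.netty.handler.codec.DatagramPacketEncoder;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;

    public class UdpStringCodecInitializer extends ChannelInitializer<DatagramChannel> {
        @Override
        protected void initChannel(DatagramChannel ch) {
            // Inbound DatagramPackets are unwrapped and fed to the String decoder.
            ch.pipeline().addLast(new DatagramPacketDecoder(new StringDecoder()));
            // Outbound AddressedEnvelope<String, InetSocketAddress> messages are
            // encoded by the String encoder and wrapped into DatagramPackets.
            ch.pipeline().addLast(new DatagramPacketEncoder<String>(new StringEncoder()));
        }
    }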
Motivation:
If the input buffer is empty we should not have decodeLast(...) call decode(...) as the user may not expect this.
Modifications:
- Do not call decode(...) in decodeLast(...) if the input buffer is empty.
- Add test cases.
Result:
decodeLast(...) will not call decode(...) if the input buffer is empty.
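The guard boils down to something like this (a simplified sketch, not the exact diff):

    @Override
    protected void decodeLast(ChannelHandlerContext ctx, ByteBuf in, List<Object> out)
            throws Exception {
        if (in.isReadable()) {
            // Only delegate when data is actually left; calling decode(...)
            // with an empty buffer would surprise user implementations.
            decode(ctx, in, out);
        }
    }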
Motivation:
We did not correctly add newlines if the src data needed to be padded. This regression was introduced by 63426fc3ed083513c07a58b45381f5c10dd47061.
Modifications:
- Correctly handle newlines.
- Add unit test that proves the fix.
Result:
No more invalid base64 encoded data.
Motivation:
According to https://github.com/google/snappy/blob/master/format_description.txt#L55, Snappy.decodeLiteral should handle the cases 60, 61, 62 and 63. However, right now it processes 64 instead of 63. I believe it's a typo, since `tag >> 2 & 0x3F` must be less than 64.
Modifications:
Use the correct value 63.
Result:
Snappy.decodeLiteral handles the correct case.
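Roughly, the dispatch follows the format description like this (an illustrative fragment, not the exact Netty code; `tag` is the tag byte and `in` the input ByteBuf):

    // The upper six bits of the tag select how the literal length is encoded.
    // The encoded value is always length - 1.
    int length;
    switch (tag >> 2 & 0x3F) {
    case 60: // length - 1 in 1 extra byte
        length = in.readUnsignedByte();
        break;
    case 61: // length - 1 in 2 extra bytes (little-endian)
        length = in.readUnsignedShortLE();
        break;
    case 62: // length - 1 in 3 extra bytes (little-endian)
        length = in.readUnsignedMediumLE();
        break;
    case 63: // length - 1 in 4 extra bytes; this was mistakenly 64 before
        length = (int) in.readUnsignedIntLE();
        break;
    default: // values 0-59 carry length - 1 inline in the tag byte
        length = tag >> 2 & 0x3F;
    }
    length += 1;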
Motivation:
Netty was missing support for the Protobuf Nano runtime, which targets
weaker systems such as Android devices.
Modifications:
Added ProtobufEncoderNano and ProtobufDecoderNano
in order to provide support for the Nano runtime.
Modified ProtobufVarint32FrameDecoder and
ProtobufLengthFieldPrepender in order to remove any
dependency on either the Nano or Lite runtime by copying
the code for handling Protobuf varint32 in from the
Protobuf library.
Modified licenses and NOTICE in order to reflect the
changes I made.
Added the Protobuf Nano runtime as an optional dependency.
Result:
Netty now supports Protobuf Nano runtime.
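For reference, a minimal sketch of what the copied varint32 handling looks like (the method name and error handling are illustrative; CorruptedFrameException is io.netty.handler.codec.CorruptedFrameException):

    private static int readRawVarint32(ByteBuf buffer) {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            byte b = buffer.readByte();
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
                return result; // high bit clear: last byte of the varint
            }
        }
        throw new CorruptedFrameException("malformed varint.");
    }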
Motivation:
We need to check if this handler was removed before continuing with decoding.
If it was removed, it is not safe to continue to operate on the buffer.
Modifications:
Check if the decoder was removed after firing messages through the pipeline.
Result:
No illegal buffer access when the decoder was removed.
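Simplified sketch of the decode loop with the new guard (helper names follow the real class only loosely):

    while (in.isReadable()) {
        int outSize = out.size();
        if (outSize > 0) {
            fireChannelRead(ctx, out, outSize);
            out.clear();
            if (ctx.isRemoved()) {
                // The user removed this handler while the messages were being
                // fired through the pipeline; it is no longer safe to operate
                // on 'in'.
                break;
            }
        }
        int oldInputLength = in.readableBytes();
        decode(ctx, in, out);
        if (out.isEmpty() && oldInputLength == in.readableBytes()) {
            break; // nothing decoded and nothing consumed: wait for more data
        }
    }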
Motivation:
We need to ensure we do not add a newline if the Base64 encoded buffer ends exactly at MAX_LINE_LENGTH. Failing to do so produces invalid data.
Because of this bug, OpenSslServerContext and OpenSslClientContext may fail to load a cert.
Modifications:
- Only add NEW_LINE if we are not at the end of the dst buffer.
- Add unit test
Result:
Correct result in all cases
Motivation:
Headers and groups of headers are frequently copied and the current mechanism is slower than it needs to be.
Modifications:
Skip name validation and hash computation when they are not necessary.
Fix an emergent bug in CombinedHttpHeaders identified with better testing.
Fix a memory leak in DefaultHttp2Headers when clearing.
Add benchmarks.
Result:
Faster header copying and some collateral bug fixes
Motivation:
Makes the API contract of headers more consistent and simpler.
Modifications:
If self is passed to set, then simply return.
Result:
set and setAll will be consistent
Motivation:
The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-8.1.2) indicates that header names consist of ASCII characters. We currently use ByteString to represent HTTP/2 header names. The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-10.3) also alludes to header values inheriting the same validity characteristics as HTTP/1.x. Using AsciiString for the value type of HTTP/2 headers would allow for re-use of predefined HTTP/1.x values, and make comparisons more intuitive. The Headers<T> interface could also be expanded to allow for easier use of header types which do not have the same Key and Value type.
Modifications:
- Change Headers<T> to Headers<K, V>
- Change Http2Headers<ByteString> to Http2Headers<CharSequence, CharSequence>
- Remove ByteString. Having AsciiString extend ByteString complicates equality comparisons when the hash code algorithm is no longer shared.
Result:
Http2Header types are more representative of the HTTP/2 RFC, and relationship between HTTP/2 header name/values more directly relates to HTTP/1.x header names/values.
Motivation:
We should prevent adding/setting DefaultHttpHeaders to itself to avoid unexpected side-effects.
Modifications:
Throw IllegalArgumentException if user tries to pass the same instance to set/add.
Result:
No surprising side-effects.
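Sketched, the guard looks like this (a simplified fragment, not the literal patch; java.util.Map.Entry assumed imported):

    @Override
    public HttpHeaders set(HttpHeaders headers) {
        if (headers == this) {
            throw new IllegalArgumentException("can't set to itself.");
        }
        // Clear and copy as before; iterating 'headers' while mutating 'this'
        // is now guaranteed to be safe.
        clear();
        for (Map.Entry<String, String> e : headers) {
            add(e.getKey(), e.getValue());
        }
        return this;
    }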
Motivation:
We failed to correctly implement the handlerRemoved(...), channelInactive(...) and channelReadComplete(...) methods, which led to multiple problems:
- Bytes were not forwarded when the codec is removed from the pipeline.
- decodeLast(...) was not called once the Channel goes inactive.
- Incorrect handling of channelReadComplete could lead to growth of the cumulation buffer.
Modifications:
- Correctly implement methods and forward to the internal ByteToMessageDecoder
- Add unit test.
Result:
Correct behaviour
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.
Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications to address hot spots.
Result:
Performance is now on par with 4.0.
Motivation:
We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that the performance of randomly accessing a header is also quite poor. The main
concern, however, was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot due to the underlying hashmaps having
to be resized potentially several times as new headers are being inserted.
This is tracked as issue #3600.
Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
Result:
- Significantly fewer lines of code in the implementation. While the total commit is still
roughly 400 lines smaller, the actual implementation shrank by much more; I just added some more
tests and microbenchmarks.
- Overall performance is up. The current implementation should be significantly faster
for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
no way a TreeMap can have the same iteration performance as a linked list (as used in the
current headers implementation). That's totally fine though, because when looking at the
benchmark results @ejona86 pointed out that the performance of the headers is completely
dominated by insertion; that is, insertion is so much faster in the new implementation
that it makes up for the slower iteration several times over. You can't iterate what you haven't
inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
only down for HTTP; it's significantly improved for HTTP/2.)
- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
was generated by [2] using JOL.
- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
improved for HTTP, SPDY and STOMP as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
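A heavily condensed sketch of the design described above (the real DefaultHeaders is far more involved; all names here are illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // The headers class relies only on Map primitives, so each concrete
    // implementation decides which map to plug in.
    public final class MapBackedHeaders {
        private final Map<String, List<String>> map;

        public MapBackedHeaders(Map<String, List<String>> map) {
            this.map = map;
        }

        public void add(String name, String value) {
            List<String> values = map.get(name);
            if (values == null) {
                values = new ArrayList<String>(2);
                map.put(name, values);
            }
            values.add(value);
        }

        public String get(String name) {
            List<String> values = map.get(name);
            return values == null ? null : values.get(0);
        }
    }

    // e.g. new MapBackedHeaders(
    //         new TreeMap<String, List<String>>(String.CASE_INSENSITIVE_ORDER))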
Motivation:
Two problems:
1. The decoder assumed that as soon as it finds '</' it can decrement the opened-XML-brackets counter. This can lead to bugs when the closing bracket is not in the ByteBuf yet.
2. Improper handling of more than one root element in an XML document. The first element is processed properly, the second one is not. This is caused by the assumption that the ByteBuf readerIndex is 0 at the beginning of decoding.
Modifications:
Both problems were resolved by the following fixes:
1. Decrement the opened-brackets count only if the '>' that closes the '</' element is found.
2. Take a readerIndex higher than 0 into account when counting the output frame length.
Result:
Both problems were resolved
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.
Result:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
Motivation:
The ReplayingDecoderBuffer does not match the naming scheme we use for ByteBuf types.
Modifications:
Rename to ReplayingDecoderByteBuf to match naming scheme
Result:
Consistent naming
Motivation:
Too much duplicated code in the tests for different compression codecs.
Modifications:
- Added abstract classes AbstractCompressionTest, AbstractDecoderTest and AbstractEncoderTest which contain common variables and tests for any compression codec.
- Removed common tests which are implemented in AbstractDecoderTest and AbstractEncoderTest from current tests for compression codecs.
- Implemented abstract methods of AbstractDecoderTest and AbstractEncoderTest in current tests for compression codecs.
- Added additional checks for current tests.
- Renamed abstract class IntegrationTest to AbstractIntegrationTest.
- Used Theories to run tests with heap and direct buffers.
- Removed code duplicates.
Result:
Removed duplicated code of tests for compression codecs and simplified the addition of tests for new compression codecs.
Motivation:
While the LengthFieldBasedFrameDecoder supports a byte order, the LengthFieldPrepender does not.
That means that I can simply add a LengthFieldBasedFrameDecoder with ByteOrder.LITTLE_ENDIAN to my pipeline,
but I have to write my own encoder to write length fields in little-endian byte order.
Modifications:
Added a constructor that takes a byte order and all other parameters.
All other constructors delegate to this one with ByteOrder.BIG_ENDIAN.
LengthFieldPrepender.encode() uses this byte order to write the length field.
Result:
LengthFieldPrepender will write the length field in the defined byte order.
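A usage sketch with both sides agreeing on a 4-byte little-endian length field (the pipeline variable and the 1 MiB frame limit are assumptions; ByteOrder is java.nio.ByteOrder):

    // Decoder: little-endian, 4-byte length field at offset 0, field stripped
    // from the resulting frame, frames capped at 1 MiB.
    pipeline.addLast(new LengthFieldBasedFrameDecoder(
            ByteOrder.LITTLE_ENDIAN, 1048576, 0, 4, 0, 4, true));
    // Encoder: prepend a 4-byte little-endian length field (length does not
    // include the length field itself).
    pipeline.addLast(new LengthFieldPrepender(ByteOrder.LITTLE_ENDIAN, 4, 0, false));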
Motivation:
At the moment, if you want to return an HTTP header containing multiple
values you have to set/add that header once with the values wanted. If
you use set/add with an array/iterable, multiple HTTP header fields will
be returned in the response.
Note, that this is indeed a suggestion and additional work and tests
should be added. This is mainly to bring up a discussion.
Modifications:
Added a flag to specify that when multiple values exist for a single
HTTP header then add them as a comma separated string.
In addition, added a method to StringUtil to help escape comma-separated
value CharSequences.
Result:
Allows for responses to be smaller.
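In the current code base this behaviour surfaces as CombinedHttpHeaders (io.netty.handler.codec.http); a minimal illustration with arbitrary values:

    HttpHeaders headers = new CombinedHttpHeaders(true);
    headers.add("Accept", "text/html");
    headers.add("Accept", "application/json");
    // A single header field is emitted instead of two:
    //   Accept: text/html,application/json
    String combined = headers.get("Accept");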
Motivation:
Even if a handler called ctx.fireChannelReadComplete(), the next handler
should not get its channelReadComplete() invoked if fireChannelRead()
was not invoked before.
Modifications:
- Ensure channelReadComplete() is invoked only when the handler of the
current context actually produced a message, because otherwise there's
no point of triggering channelReadComplete().
i.e. channelReadComplete() must follow channelRead().
- Fix a bug where ctx.read() was not called if the handler of the
current context did not produce any message, making the connection
stall. Read the new comment for more information.
Result:
- channelReadComplete() is invoked only when it makes sense.
- No more stalled connections.
Modifications:
Converted the AsciiString into a String by calling the toString() method before comparing with equals(). Also added a unit test to show that it works.
Result:
Major violation is gone. Code is correct.
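A sketch of the pattern (io.netty.util.AsciiString; values are illustrative):

    AsciiString name = new AsciiString("content-length");
    String other = "content-length";

    boolean wrong = name.equals(other);            // false: 'other' is not an AsciiString
    boolean right = name.toString().equals(other); // true: compares the characters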
Motivation:
Currently, we only test our ZlibEncoders against our ZlibDecoders. It is
convenient to write such tests, but it does not necessarily guarantee
their correctness. For example, both encoder and decoder might be faulty
even if the tests pass.
Modifications:
Add another test that makes sure that our GZIP encoder generates the
GZIP trailer, using the fact that GZIPInputStream raises an EOFException
when the GZIP trailer is missing.
Result:
More coverage for GZIP compression
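The gist of the test, condensed (a JUnit @Test method; imports elided; names and test data are illustrative, not the actual test code):

    @Test
    public void gzipEncoderWritesTrailer() throws Exception {
        EmbeddedChannel channel = new EmbeddedChannel(
                ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP));
        channel.writeOutbound(Unpooled.copiedBuffer("test", CharsetUtil.UTF_8));
        channel.finish(); // makes the encoder emit the GZIP trailer

        ByteArrayOutputStream encoded = new ByteArrayOutputStream();
        for (ByteBuf buf = channel.readOutbound(); buf != null; buf = channel.readOutbound()) {
            buf.readBytes(encoded, buf.readableBytes());
            buf.release();
        }

        // GZIPInputStream throws EOFException while reading if the trailer
        // is missing, so draining it verifies the trailer was written.
        GZIPInputStream in = new GZIPInputStream(
                new ByteArrayInputStream(encoded.toByteArray()));
        while (in.read() != -1) {
            // drain
        }
        in.close();
    }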
Motivation:
The header class hierarchy and algorithm were improved on the master branch for versions 5.x. These improvements should be backported to the 4.1 baseline.
Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258
Result:
Header improvements in master branch are available in 4.1 branch.
Motivation:
Make the code much more readable.
Modifications:
- Added decompression states.
- Refactored the decode(...) method to use these states.
Result:
Much more readable decoder which looks like other compression decoders.
Motivation:
The LZMA compression algorithm has a very good compression ratio.
Modifications:
- Added the `lzma-java` library which implements the LZMA algorithm.
- Implemented LzmaFrameEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzmaFrameEncoder and how it can compress data for subsequent decompression using the original library.
Result:
LZMA encoder which can compress data using the LZMA algorithm.
Motivation:
ByteToMessageCodec misses the check for the @Sharable annotation in one of its constructors.
Modifications:
Ensure we call checkForSharableAnnotation in all constructors.
Result:
The @Sharable annotation check is performed in all constructors of ByteToMessageCodec.
Related issue: #2766
Motivation:
Forgot to rename them before the final release.
Modifications:
Rename and then re-introduce the deprecated version that extends the
renamed class.
Result:
Better naming
Motivation:
LZ4 compression codec provides sending and receiving data encoded by the very fast LZ4 algorithm.
Modifications:
- Added the `lz4` library which implements the LZ4 algorithm.
- Implemented Lz4FramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the Lz4FramedEncoder and how it can compress data for subsequent decompression using the original library.
- Implemented Lz4FramedDecoder which extends ByteToMessageDecoder and provides decompression of incoming messages.
- Added tests to verify the Lz4FramedDecoder and how it can decompress data after compression using the original library.
- Added integration tests for Lz4FramedEncoder/Decoder.
Result:
Full LZ4 compression codec which can compress/decompress data using the LZ4 algorithm.
Motivation:
FastLZ compression codec provides sending and receiving data encoded by the fast FastLZ algorithm using block mode.
Modifications:
- Added part of the `jfastlz` library which implements the FastLZ algorithm. See the FastLz class.
- Implemented FastLzFramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Implemented FastLzFramedDecoder which extends ByteToMessageDecoder and provides decompression of incoming messages.
- Added integration tests for `FastLzFramedEncoder/Decoder`.
Result:
Full FastLZ compression codec which can compress/decompress data using the FastLZ algorithm.
Motivation:
In the ReplayingDecoder / ByteToMessageDecoder channelInactive(...) method we try to decode one last time and fire all decoded messages through the pipeline before calling ctx.fireChannelInactive(...). To keep the correct order of events, we also need to call ctx.fireChannelReadComplete() if we read anything.
Modifications:
- Change channelInactive(...) to call ctx.fireChannelReadComplete() if something was decoded.
- Move out.recycle() to the finally block.
Result:
Correct order of events.
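Condensed sketch of the resulting structure (simplified from the real method; RecyclableArrayList is io.netty.util.internal.RecyclableArrayList):

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        RecyclableArrayList out = RecyclableArrayList.newInstance();
        try {
            decodeLast(ctx, internalBuffer(), out);
        } finally {
            try {
                int size = out.size();
                for (int i = 0; i < size; i++) {
                    ctx.fireChannelRead(out.get(i));
                }
                if (size > 0) {
                    // Something was decoded: complete the read before the
                    // inactive event to keep the event order correct.
                    ctx.fireChannelReadComplete();
                }
                ctx.fireChannelInactive();
            } finally {
                out.recycle(); // moved here so it runs even on failure
            }
        }
    }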
Motivation:
Complicated code in the Bzip2 tests, with some unnecessary actions.
Modifications:
- Reduced the size of the BYTES_LARGE array of random test data for the Bzip2 tests.
- Removed unnecessary creations of EmbeddedChannel instances in the Bzip2 tests.
- Simplified the tests in Bzip2DecoderTest which expect exceptions.
- Removed the unnecessary testStreamInitialization() from Bzip2EncoderTest.
Result:
Reduced time to test the 'codec' package by 7 percent, simplified code of Bzip2 tests.
Motivation:
Duplicated code of integration tests for different compression codecs.
Modifications:
- Added abstract class IntegrationTest which contains common tests for any compression codec.
- Removed common tests from Bzip2IntegrationTest and LzfIntegrationTest.
- Implemented abstract methods of IntegrationTest in Bzip2IntegrationTest, LzfIntegrationTest and SnappyIntegrationTest.
Result:
Removed duplicated code of integration tests for compression codecs and simplified the addition of integration tests for new compression codecs.
Motivation:
Sometimes we get a 'build timed out' error because the tests for the bzip2 codec take a long time.
Modifications:
Removed the loops from Bzip2EncoderTest.testCompression(byte[]) and Bzip2DecoderTest.testDecompression(byte[]).
Result:
Reduced time to test the 'codec' package by 30 percent.
Motivation:
LZF compression codec provides sending and receiving data encoded by the very fast LZF algorithm.
Modifications:
- Added the Compress-LZF library which implements the LZF algorithm.
- Implemented LzfEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzfEncoder and how it can compress data for subsequent decompression using the original library.
- Implemented LzfDecoder which extends ByteToMessageDecoder and provides decompression of incoming messages.
- Added tests to verify the LzfDecoder and how it can decompress data after compression using the original library.
- Added integration tests for LzfEncoder/Decoder.
Result:
Full LZF compression codec which can compress/decompress data using the LZF algorithm.
Motivation:
See GitHub Issue #2536.
Modifications:
Introduce the class JsonObjectDecoder to split a JSON byte stream
into individual JSON objects/arrays.
Result:
A Netty application can now handle a byte stream where multiple JSON
documents follow each other, as opposed to only a single JSON document
per request.
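A usage sketch (the initializer class is made up for the example):

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.json.JsonObjectDecoder;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.util.CharsetUtil;

    public class JsonPipelineInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Splits a stream like '{"a":1}{"b":2}[1,2,3]' into one frame per
            // top-level JSON object or array.
            ch.pipeline().addLast(new JsonObjectDecoder());
            // A handler behind the StringDecoder then receives one String per
            // JSON document.
            ch.pipeline().addLast(new StringDecoder(CharsetUtil.UTF_8));
        }
    }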
If decompression fails, the buffer that contains the decompressed data
is not released. Bzip2DecoderTest.testStreamCrcError() also does not
release the partial output Bzip2Decoder produces.