Motivation:
Unboxing operations allocate unnecessary objects when this could be avoided.
Modifications:
Replaced Float.valueOf with Float.parseFloat where possible.
Result:
Fewer unnecessary object allocations.
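As a minimal sketch of the pattern (parseRatio is a made-up helper, not code from the change):
static float parseRatio(String value) {
    // Float.valueOf(value) would allocate a Float that is unboxed right away;
    // Float.parseFloat returns the primitive directly and avoids that allocation.
    return Float.parseFloat(value);
}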
Motivation:
We are currently doing a memory copy to verify the Snappy version. This is not needed.
Modifications:
Remove memory copy and just compare byte per byte.
Result:
Fewer memory copies and allocations.
Motivation:
We need to ensure the uncompressed ByteBuf is released if an exception happens while calling decode(...). If we fail to do so we leak buffers.
Modifications:
Correctly release buffer on exception.
Result:
No more memory leak.
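The release-on-exception pattern applied here, roughly (a sketch, not the actual decoder code):
// uses io.netty.buffer.ByteBuf and io.netty.channel.ChannelHandlerContext
ByteBuf uncompressed = ctx.alloc().buffer();
boolean success = false;
try {
    // ... decompress into 'uncompressed' and hand it off to decode(...) ...
    success = true;
} finally {
    if (!success) {
        uncompressed.release(); // release on exception, otherwise the buffer leaks
    }
}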
Motivation:
We do not need to do any memory copies when doing CRC32 processing.
Modifications:
Use ByteBufChecksum to eliminate memory copies.
Result:
Fewer memory copies.
Motivation:
We should try to minimize memory copies whenever possible.
Modifications:
- Refactor ByteBufChecksum so it always works with both heap and direct ByteBuf
- Remove the memory copy in Snappy by letting Crc32c extend ByteBufChecksum
Result:
Fewer memory copies when using Snappy.
Motivation:
We did an unnecessary memory copy when doing bzip2 encoding.
Modifications:
Remove memory copy and just use a ByteProcessor.
Result:
Fewer memory copies and therefore faster encoding.
Motivation:
We should prefer direct buffers for the output of Lz4FrameEncoder as this is what is needed for writing to the socket.
Modification:
Use direct buffers for the output
Result:
Fewer memory copies needed.
Motivation:
When the user constructs Lz4FrameDecoder with a Checksum implementation like CRC32 or Adler32 and runs on Java 8, we can directly use a ByteBuffer to do the checksum work. This way we can eliminate memory copies.
Modifications:
Detect if ByteBuffer can be used for checksum work and if so reduce memory copies.
Result:
Fewer memory copies when using JDK 8.
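For reference, the JDK 8 capability being used, sketched (CRC32 and Adler32 both gained an update(ByteBuffer) overload in Java 8; the helper below is illustrative):
// uses java.util.zip.CRC32, java.nio.ByteBuffer and io.netty.buffer.ByteBuf
static long crc32(ByteBuf buf) {
    CRC32 checksum = new CRC32();
    // Feed the readable bytes as a ByteBuffer; no copy into a temporary byte[]
    // is needed, for heap and direct buffers alike.
    ByteBuffer nioBuf = buf.nioBuffer(buf.readerIndex(), buf.readableBytes());
    checksum.update(nioBuf);
    return checksum.getValue();
}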
Motivation:
HPACK Encoder has a data structure which is similar to a previous version of DefaultHeaders. Some of the same improvements can be made.
Modifications:
- Enforce the restriction that the Encoder's headerFields length must be a power of two so we can use masking instead of modulo
- Use AsciiString.hashCode which already has optimizations instead of having yet another hash code algorithm in Encoder
Result:
Fixes https://github.com/netty/netty/issues/5357
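The masking trick referenced above, in isolation (headerFields here is just an illustrative array, not the Encoder's actual field):
static int bucketIndex(int hash, Object[] headerFields) {
    // Because headerFields.length is a power of two, (hash & (length - 1)) equals
    // the non-negative remainder of hash / length and avoids a modulo operation.
    return hash & (headerFields.length - 1);
}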
Motivation:
PlatformDependent attempts to use reflection to get the underlying char[] (or byte[]) from String objects. This is fragile because, if the String implementation does not utilize the full array and instead uses a subset of it, this optimization is invalid. OpenJDK6 and some earlier versions of OpenJDK7 String have the capability to use a subsection of the underlying char[].
Modifications:
- PlatformDependent should not attempt to use the underlying array from String (or other data types) via reflection
Result:
PlatformDependent hash code generation for CharSequence does not depend upon specific JDK implementation details.
Motivation:
It is good to keep the dependencies and plugins we use up-to-date to pick up any bug fixes already made by the authors.
Modification:
Scanned dependencies and plugins and carefully updated them one by one.
Result:
Dependencies and plugins are up-to-date.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues and works on JDK6, which allows us to shade the library into netty-common while keeping it "zero dependency".
Modifications:
- Remove copy-pasted JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure as the queue implementation does not create as much garbage.
Motivation:
We should ensure we null out the cumulation buffer before we fire it through the pipeline in handlerRemoved(...), as in theory another method could be triggered as a result of the fireChannelRead(...) or fireChannelReadComplete() call and try to access the cumulation.
Modifications:
Null out cumulation buffer early in handlerRemoved(...)
Result:
It is no longer possible to access the cumulation buffer after it was handed over.
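Roughly, the ordering that matters (a sketch of the idea, not the exact ByteToMessageDecoder code):
// inside handlerRemoved(ChannelHandlerContext ctx)
ByteBuf buf = cumulation;
cumulation = null; // null out the field before handing the buffer on
if (buf != null) {
    if (buf.isReadable()) {
        // Re-entrant calls triggered by these events can no longer see the old cumulation.
        ctx.fireChannelRead(buf);
        ctx.fireChannelReadComplete();
    } else {
        buf.release();
    }
}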
Motivation:
For example,
DefaultHttp2Headers headers = new DefaultHttp2Headers();
headers.add("key1", "value1");
headers.add("key1", "value2");
headers.add("key1", "value3");
headers.add("key2", "value4");
produces:
DefaultHttp2Headers[key1: value1key1: value2key1: value3, key2: value4]
while correctly it should be
DefaultHttp2Headers[key1: value1, key1: value2, key1: value3, key2: value4]
Modifications:
Change the toString() method to produce the correct output shown above.
Result:
toString() format is correct also for keys with multiple values.
Motivation:
At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most common usage of the composite buffer.
Modifications:
Add new methods that automatically increase the writerIndex when buffers are added.
Result:
Easier usage of CompositeByteBuf.
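Usage then looks roughly like this (assuming the new overloads take a boolean flag that advances the writerIndex, as in current CompositeByteBuf):
// uses io.netty.buffer.CompositeByteBuf and io.netty.buffer.Unpooled
CompositeByteBuf composite = Unpooled.compositeBuffer();
// 'true' advances the writerIndex automatically, so the data is readable right away.
composite.addComponent(true, Unpooled.wrappedBuffer(new byte[] { 1, 2, 3 }));
composite.addComponent(true, Unpooled.wrappedBuffer(new byte[] { 4, 5, 6 }));
assert composite.readableBytes() == 6;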
Motivation:
99dfc9ea79 introduced some code that will more frequently try to forward messages out of the list of decoded messages to reduce latency and memory footprint. Unfortunately this has the side-effect that RecyclableArrayList.clear() will be called more often and so introduces some overhead, as ArrayList will null out the array on each call.
Modifications:
- Introduce a CodecOutputList which allows not nulling out the array until it is recycled and also allows accessing the internal array with extra range checks.
- Add a benchmark that adds elements to different List implementations and clears them.
Result:
Less overhead when decoding / encoding messages.
Benchmark (elements) Mode Cnt Score Error Units
CodecOutputListBenchmark.arrayList 1 thrpt 20 24853764.609 ± 161582.376 ops/s
CodecOutputListBenchmark.arrayList 4 thrpt 20 17310636.508 ± 930517.403 ops/s
CodecOutputListBenchmark.codecOutList 1 thrpt 20 26670751.661 ± 587812.655 ops/s
CodecOutputListBenchmark.codecOutList 4 thrpt 20 25166421.089 ± 166945.599 ops/s
CodecOutputListBenchmark.recyclableArrayList 1 thrpt 20 24565992.626 ± 210017.290 ops/s
CodecOutputListBenchmark.recyclableArrayList 4 thrpt 20 18477881.775 ± 157003.777 ops/s
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 246.748 sec - in io.netty.handler.codec.CodecOutputListBenchmark
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods, which create a recyclable derived buffer
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that:
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
throwing it so that its callers don't need to add dummy return
statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
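In user code the migration typically looks like this (illustrative snippet only):
// Before: slice() creates a new derived buffer object, then retain() increments the reference count
ByteBuf header = buf.slice(0, 8).retain();
// After: one call, and the derived buffer instance is recyclable when 'buf' is pooled
ByteBuf header2 = buf.retainedSlice(0, 8);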
Motivation:
Often users either need to read or write CharSequences to a ByteBuf. We should add methods for this to ByteBuf as we can do some optimizations for this depending on the implementation.
Modifications:
Add setCharSequence, writeCharSequence, getCharSequence and readCharSequence
Result:
Easier reading / writing of CharSequence with ByteBuf.
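A quick usage sketch of the new accessors:
// uses io.netty.buffer.ByteBuf, io.netty.buffer.Unpooled and io.netty.util.CharsetUtil
ByteBuf buf = Unpooled.buffer();
int written = buf.writeCharSequence("hello", CharsetUtil.UTF_8); // returns the number of bytes written
CharSequence read = buf.readCharSequence(written, CharsetUtil.UTF_8);
assert "hello".contentEquals(read);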
Motivation:
The double quote may be escaped in a JSON string, but JsonObjectDecoder doesn't handle it. Resolves #5157.
Modifications:
Don't end a JSON string when processing an escaped double quote.
Result:
JsonObjectDecoder can handle backslash and double quote in a JSON string correctly.
Motivation:
We recently added ByteBuf.isReadOnly(), which allows detecting if a buffer is read-only or not. We should add ByteBuf.asReadOnly() to allow easy access to a read-only version of a buffer.
Modifications:
- Add ByteBuf.asReadOnly()
- Deprecate Unpooled.unmodifiableBuffer(ByteBuf)
Result:
More consistent API.
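Usage sketch:
ByteBuf readOnly = buf.asReadOnly();
assert readOnly.isReadOnly();
// Reads are shared with the original buffer; write attempts throw ReadOnlyBufferException.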
Motivation:
b112673554 added ChannelInputShutdownEvent support to ByteToMessageDecoder but missed updating the code for ReplayingDecoder. This has the following effect:
- If a ChannelInputShutdownEvent is fired, ByteToMessageDecoder (the super-class of ReplayingDecoder) will call the channelInputClosed(...) method, which will pass the incorrect buffer to the decode method of ReplayingDecoder.
Modifications:
Share more code between ByteToMessageDecoder and ReplayingDecoder and so also support ChannelInputShutdownEvent correctly in ReplayingDecoder.
Result:
ChannelInputShutdownEvent is correctly handled in ReplayingDecoder as well.
Motivation:
We need to check if this handler was removed before continuing with decoding.
If it was removed, it is not safe to continue to operate on the buffer. This was already fixed for ByteToMessageDecoder in 4cdbe39284 but missed for ReplayingDecoder.
Modifications:
Check if the decoder was removed after firing messages through the pipeline.
Result:
No illegal buffer access when decoder was removed.
Motivation:
We use ByteBuf.readBytes(int) in various places where we could either remove it completely or use readSlice(int).retain().
Modifications:
- Remove ByteBuf.readBytes(int) when possible or replace by readSlice(int).retain().
Result:
Faster code.
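The typical replacement, sketched:
// Before: readBytes(int) allocates a new buffer and copies 'length' bytes into it
ByteBuf payload = in.readBytes(length);
// After: readSlice(int) shares the underlying memory; retain() keeps it alive past this handler
ByteBuf payload2 = in.readSlice(length).retain();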
Motivation:
See #1811
Modifications:
Add LineEncoder and LineSeparator
Result:
The user can use LineEncoder to write a String with a line separator automatically
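Assuming the new handler takes a LineSeparator (and optionally a Charset), usage would look roughly like:
// uses io.netty.handler.codec.string.LineEncoder / LineSeparator and io.netty.util.CharsetUtil
pipeline.addLast(new LineEncoder(LineSeparator.UNIX, CharsetUtil.UTF_8));
// Writing the string "foo" now produces the bytes of "foo\n" on the wire.
channel.writeAndFlush("foo");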
Motivation:
UDP-oriented codec reusing the existing encoders and decoders would be helpful. See #1350
Modifications:
Add DatagramPacketEncoder and DatagramPacketDecoder to reuse the existing encoders and decoders.
Result:
People can use DatagramPacketEncoder and DatagramPacketDecoder to wrap existing encoders and decoders to create UDP-oriented codec.
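Usage might look like this, assuming the wrappers accept an existing MessageToMessageDecoder/MessageToMessageEncoder (StringDecoder/StringEncoder here are just examples):
// uses io.netty.handler.codec.DatagramPacketDecoder / DatagramPacketEncoder
// and io.netty.handler.codec.string.StringDecoder / StringEncoder
pipeline.addLast(new DatagramPacketDecoder(new StringDecoder()));
pipeline.addLast(new DatagramPacketEncoder<CharSequence>(new StringEncoder()));
// Inbound DatagramPackets are decoded to Strings; outbound AddressedEnvelope<CharSequence, InetSocketAddress>
// messages are encoded back into DatagramPackets.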
Motivation:
If the input buffer is empty we should not have decodeLast(...) call decode(...) as the user may not expect this.
Modifications:
- Do not call decode(...) in decodeLast(...) if the input buffer is empty.
- Add test cases.
Result:
decodeLast(...) will not call decode(...) if input buffer is empty.
Motivation:
Each call of ByteBuf.getByte(int) method does boundary checking. This can be eliminated by using ByteBuf.forEachByte(ByteProcessor) method and ByteProcessor.FIND_LF processor.
Modifications:
Find end of line with ByteProcessor.FIND_LF
Result:
A little better performance of LineBasedFrameDecoder.
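The lookup this change relies on, as a sketch:
// uses io.netty.buffer.ByteBuf and io.netty.util.ByteProcessor
static int findEndOfLine(ByteBuf buffer) {
    // Scans the readable bytes and returns the absolute index of the first '\n',
    // or -1 if none is found, without a bounds check per getByte(int) call.
    return buffer.forEachByte(ByteProcessor.FIND_LF);
}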
Motivation:
Some people may want to use the Snappy class directly to encode / decode ByteBufs.
Modifications:
Make the Snappy class public and final.
Result:
Easier for people to reuse parts of Netty.
Motivation:
ByteToMessageDecoder must ensure that read components of the CompositeByteBuf can be discarded by default when discardSomeReadBytes() is called. This was not always the case before, because the default maxNumComponents will cause consolidation.
Modifications:
Ensure we do not do any consolidation, so we are actually able to discard read components.
Result:
Less memory usage and fewer allocations.
Motivation:
See #3229
Modifications:
Add methods with position independent FileChannel calls to ByteBuf and its subclasses.
Result:
The user can use these new methods to read/write a ByteBuf without updating the FileChannel's position.
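A sketch of the new position-independent variants (offsets and the helper name are illustrative):
// uses io.netty.buffer.ByteBuf, java.nio.channels.FileChannel and java.io.IOException
static void fileIo(ByteBuf buf, FileChannel file) throws IOException {
    int length = buf.readableBytes();
    // Copy the readable bytes to the file at offset 0; neither the channel's
    // position nor the buffer's readerIndex is modified by getBytes(...).
    buf.getBytes(buf.readerIndex(), file, 0L, length);
    // readBytes(...) does advance readerIndex, but still leaves the channel's position alone.
    buf.readBytes(file, 0L, length);
}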
Motivation:
b714297a44 introduced ChannelInputShutdownEvent support for HttpObjectDecoder. However this should have been added to the super class ByteToMessageDecoder, and ByteToMessageDecoder should not propagate a channelInactive event through the pipeline in this case.
Modifications:
- Move the ChannelInputShutdownEvent handling from HttpObjectDecoder to ByteToMessageDecoder
- ByteToMessageDecoder doesn't call ctx.fireChannelInactive() on ChannelInputShutdownEvent
Result:
Half closed events are treated more generically, and don't get translated into a channelInactive pipeline event.
Motivation:
We did not correctly add newlines if the src data needed to be padded. This regression was introduced by '63426fc3ed083513c07a58b45381f5c10dd47061'.
Modifications:
- Correctly handle newlines.
- Add unit test that proves the fix.
Result:
No more invalid base64 encoded data.
Motivation:
If the ZlibCodecFactory can support using a custom window size we should support it by default in the websocket extensions as well.
Modifications:
Detect if a custom window size can be handled by the ZlibCodecFactory and if so enable it by default for PerMessageDeflate*ExtensionHandshaker.
Result:
Support window size flag by default in most installations.
Motivation:
According to https://github.com/google/snappy/blob/master/format_description.txt#L55 , Snappy.decodeLiteral should handle the cases of 60, 61, 62 and 63. However right now it processes 64 instead of 63. I believe it's a typo since `tag >> 2 & 0x3F` must be less than 64.
Modifications:
Use the correct value 63.
Result:
Snappy.decodeLiteral handles the correct case.
Motivation:
Netty was missing support for Protobuf nano runtime targeted at
weaker systems such as Android devices.
Modifications:
Added ProtobufEncoderNano and ProtobufDecoderNano
in order to provide support for the Nano runtime.
Modified ProtobufVarint32FrameDecoder and
ProtobufVarint32LengthFieldPrepender in order to remove any
dependency on either the Nano or Lite runtime by copying the
code for handling Protobuf varint32 in from the Protobuf
library.
Modified licenses and NOTICE in order to reflect the
changes I made.
Added the Protobuf Nano runtime as an optional dependency.
Result:
Netty now supports Protobuf Nano runtime.
Motivation:
`JdkZlibDecoder` is available since Netty 4.0.8 and works with JDK7+.
However, the `io.netty.noJdkZlibDecoder` system property evaluation always defaults to
true, causing Netty to always use JZLib when decompressing on the
client side when the property isn't explicitly set to `false`.
Modifications:
Default to `false` instead of `true` when JDK7+.
Result:
JZLib optional as expected on JDK7+.
Motivation:
We need to check if this handler was removed before continuing with decoding.
If it was removed, it is not safe to continue to operate on the buffer.
Modifications:
Check if the decoder was removed after firing messages through the pipeline.
Result:
No illegal buffer access when decoder was removed.
Motivation:
There are some wrong links and tags in javadoc.
Modifications:
Fix the wrong links and tags in javadoc.
Result:
These links will work correctly in javadoc.
Motivation:
For performance reasons we should not use Unpooled to allocate buffers.
Modifications:
Allow passing in a ByteBufAllocator which is used to allocate buffers, or use the allocator of the src buffer.
Result:
Better performance if the PooledByteBufAllocator is used.
Motivation:
We need to ensure we do not add a newline if the Base64 encoded buffer ends directly on the MAX_LINE_LENGTH. If we fail to do so this produces invalid data.
Because of this bug OpenSslServerContext and OpenSslClientContext may fail to load a cert.
Modifications:
- Only add NEW_LINE if we are not at the end of the dst buffer.
- Add unit test
Result:
Correct result in all cases
Motivation:
We have websocket extension support (with compression) in old master. We should port this to 4.1
Modifications:
Backport relevant code.
Result:
websocket extension support (with compression) is now in 4.1.
Motivation:
We recently added methods to ByteBuf to directly write and read LE values. We should use these in the Snappy implementation and so reduce duplication.
Modifications:
Replace manual swapping of values with the LE write and read methods of ByteBuf.
Result:
Cleaner code with less duplication.
As discussed in #3209, this PR adds Little Endian accessors
to ByteBuf and descendants.
Corresponding accessors were added to UnsafeByteBufUtil,
HeapByteBufferUtil to avoid calling `reverseBytes`.
Deprecate `order()`, `order(buf)` and `SwappedByteBuf`.
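For illustration, the difference in user code (a sketch):
// Before: either swap by hand or go through a SwappedByteBuf view via order(...)
buf.writeInt(Integer.reverseBytes(value));
int a = Integer.reverseBytes(buf.readInt());
// After: dedicated little-endian accessors, no wrapper object and no explicit swap
buf.writeIntLE(value);
int b = buf.readIntLE();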
Motivation:
DefaultHeaders creates an array of size 16 for all headers. This may waste a good deal of memory if applications only have a small number of headers. This memory may be critical when the number of connections grows large.
Modifications:
- Make the size of the array for DefaultHeaders configurable
Result:
Applications can control the size of the DefaultHeaders array and save memory.
Motivation:
We should use OneTimeTask where possible to reduce object creation.
Modifications:
Replace Runnable with OneTimeTask
Result:
Less object creation
Motivation:
Headers and groups of headers are frequently copied and the current mechanism is slower than it needs to be.
Modifications:
Skip name validation and hash computation when they are not necessary.
Fix emergent bug in CombinedHttpHeaders identified with better testing
Fix memory leak in DefaultHttp2Headers when clearing
Added benchmarks
Result:
Faster header copying and some collateral bug fixes
Motivation:
Makes the API contract of headers more consistent and simpler.
Modifications:
If self is passed to set then simply return
Result:
set and setAll will be consistent
Motivation:
The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-8.1.2) indicates that header names consist of ASCII characters. We currently use ByteString to represent HTTP/2 header names. The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-10.3) also alludes to header values inheriting the same validity characteristics as HTTP/1.x. Using AsciiString for the value type of HTTP/2 headers would allow for re-use of predefined HTTP/1.x values, and make comparisons more intuitive. The Headers<T> interface could also be expanded to allow for easier use of header types which do not have the same Key and Value type.
Modifications:
- Change Headers<T> to Headers<K, V>
- Change Http2Headers<ByteString> to Http2Headers<CharSequence, CharSequence>
- Remove ByteString. Having AsciiString extend ByteString complicates equality comparisons when the hash code algorithm is no longer shared.
Result:
Http2Header types are more representative of the HTTP/2 RFC, and relationship between HTTP/2 header name/values more directly relates to HTTP/1.x header names/values.
Motivation:
At the moment we only forward decoded messages that were added to the out List once the full decode loop has completed. This has the effect that resources may not be released as fast as possible, and an application may encounter higher latency if the user triggers a writeAndFlush(...) as a result of the decoded messages.
Modifications:
- forward decoded messages after each decode call
Result:
Forwarding decoded messages through the pipeline in a more eager fashion.
Motivation:
We should prevent adding/setting DefaultHttpHeaders to itself to avoid unexpected side-effects.
Modifications:
Throw IllegalArgumentException if user tries to pass the same instance to set/add.
Result:
No surprising side-effects.
Motivation:
If a remote peer writes fast enough it may take a long time to have fireChannelReadComplete(...) triggered. Because of this we need to take special care and ensure we try to discard some bytes if channelRead(...) is called too often in ByteToMessageDecoder.
Modifications:
- Add ByteToMessageDecoder.setDiscardAfterReads(...) which allows setting the number of reads after which we try to discard the read bytes
- Use default value of 16 for max reads.
Result:
No risk of OOME.
Motivation:
The HashingStrategy for DefaultStompHeaders was using the java .equals() method which would fail to compare String, AsciiString, and other CharSequence objects as equal.
Modification:
- Use AsciiString.CASE_SENSITIVE_HASHER for DefaultStompHeaders
Result:
DefaultStompHeaders work with all CharSequence objects.
Fixes https://github.com/netty/netty/issues/4247
Motivation:
The HTTP/2 header name validation was removed, and does not currently exist.
Modifications:
- Header name validation for HTTP/2 should be restored and set to the default mode of operation.
Result:
HTTP/2 header names are validated according to https://tools.ietf.org/html/rfc7540
Motivation:
We missed correctly implementing the handlerRemoved(...) / channelInactive(...) and channelReadComplete(...) methods, which led to multiple problems:
- Missed forwarding bytes when the codec is removed from the pipeline
- Missed calling decodeLast(...) once the Channel goes inactive
- No correct handling of channelReadComplete, which could lead to growth of the cumulation buffer.
Modifications:
- Correctly implement methods and forward to the internal ByteToMessageDecoder
- Add unit test.
Result:
Correct behaviour
Motivation:
The HttpObjectAggregator always responds with a 100-continue response. It should check the Content-Length header to see if the content length is OK, and if not responds with a 417.
Modifications:
- HttpObjectAggregator checks the Content-Length header in the case of a 100-continue.
Result:
HttpObjectAggregator responds with 417 if content is known to be too big.
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.
Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications that are causing hot spots.
Result:
Performance is now on par with 4.0.
Motivation:
We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that also at least the performance of randomly accessing a header is quite poor. The main
concern however was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot due to the underlying hashmaps having
to be resized potentially several times as new headers are being inserted.
This is tracked as issue #3600.
Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.
For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.
Result:
- Significantly fewer lines of code in the implementation. While the total commit is still
roughly 400 lines less, the actual implementation is a lot less. I just added some more
tests and microbenchmarks.
- Overall performance is up. The current implementation should be significantly faster
for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
no way a TreeMap can have the same iteration performance as a linked list (as used in the
current headers implementation). That's totally fine though, because when looking at the
benchmark results @ejona86 pointed out that the performance of the headers is completely
dominated by insertion, that is insertion is so significantly faster in the new implementation
that it does make up for several times the iteration speed. You can't iterate what you haven't
inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
only down for HTTP, it's significantly improved for HTTP/2).
- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
was generated by [2] using JOL.
- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
improved for HTTP, SPDY and STOMP as they all share a common implementation.
[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
Motivation:
Two problems:
1. The decoder assumes that as soon as it finds a '</' element it can decrement the opened XML brackets counter. This can lead to bugs when the closing bracket is not in the ByteBuf yet.
2. Improper handling of multiple root elements in an XML document. The first element will be processed properly, the second one will not. This is caused by the assumption that the ByteBuf readerIndex is 0 at the beginning of decoding.
Modifications:
Both problems were resolved by fixes:
1. Decrement the opened brackets count only if the enclosing '</ ... >' bracket is found
2. Take a readerIndex higher than 0 into account when counting the output frame length
Result:
Both problems were resolved
Motivation:
Sometimes it is useful to detect if a ByteBuf contains a HAProxy header, for example if you want to write something like the PortUnification example.
Modifications:
- Add ProtocolDetectionResult which can be used as a return type for detecting different protocols.
- Add a new method which allows detecting HAProxy messages.
Result:
Easier protocol detection.
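If I recall the detection API correctly (the exact method and enum names here are assumptions), usage would look roughly like:
// uses io.netty.handler.codec.haproxy.HAProxyMessageDecoder and
// io.netty.handler.codec.ProtocolDetectionResult / ProtocolDetectionState
ProtocolDetectionResult<HAProxyProtocolVersion> result = HAProxyMessageDecoder.detectProtocol(buf);
switch (result.state()) {
case NEEDS_MORE_DATA: // wait for more bytes before deciding
    break;
case INVALID:         // not an HAProxy header, hand off to another protocol handler
    break;
case DETECTED:        // add HAProxyMessageDecoder to the pipeline and continue
    break;
}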
Motivation:
A user sometimes just wants the aggregated message to have no content at
all (e.g. a user only wants HTTP GET requests).
Modifications:
- Do not raise IllegalArgumentException even if a user specifies
a maxContentLength of 0
Result:
A user can disallow a message with non-empty content.
Motivation:
We are currently doing a memory copy to extract the frame in LengthFieldBasedFrameDecoder, which can be eliminated.
Modifications:
Use buffer.slice(...).retain() to eliminate the memory copy.
Result:
Better performance.
Motivation:
The LineBasedFrameDecoder counts discardedBytes differently compared to
DelimiterBasedFrameDecoder.
Modifications:
Add the missing plus sign so discardedBytes is accumulated (+=) instead of overwritten.
Result:
discardedBytes is counted correctly.