Motivation:
The isDone() method is currently broken in the aggregator because doneAllocatingPromises accidentally calls the overridden version of setSuccess() rather than the base class version. Since allowNotificationEvent evaluates to false, the base class version is never called, which means setSuccess0() is never invoked and isDone() always returns false.
Modifications:
Changed setSuccess() to call the base class when appropriate, regardless of the result of allowNotificationEvent.
Result:
isDone now behaves properly for the promise aggregator.
Modifications:
- Add jetty.npn.version.latest and jetty.alpn.version.latest7/8
- Add npn-alpn-7 profile
- Use the *.latest7/8 version properties in alpn-8 and npn-alpn-7
- Add more profiles for newer JDK versions
- Reorder profiles
Motivation:
ResourceLeak.close() must be called when a reference-counted resource is
deallocated, but AbstractDnsMessage.deallocate() forgot to call it.
Modifications:
Call ResourceLeak.close() for the tracked AbstractDnsMessage instances
Result:
Fix the false resource leak warnings
Motivation:
The current heap dump compression preset (9) requires far too much
memory (up to 768 MiB for its dictionary), resulting in OOME in many
cases.
Modifications:
- Use the default preset (6) which uses 8 MiB dictionary.
- Do not fail abruptly even when OOME has been raised.
Result:
More stable heap dump acquisition
Motivation:
The SSL peer that did not initiate renegotiation sometimes does not get
the notification for the renegotiation, for an unknown reason.
Modification:
Until the exact cause is understood, relax the assertions of the flaky
tests.
Result:
Build stability
Motivation:
There are various known issues in netty-codec-dns:
- Message types are not interfaces, which makes it difficult for users
  to provide their own message implementations.
- Some class and field names do not match the terms used in the RFCs.
- Support for decoding a DNS record was limited; a user had to encode
  and decode it by hand.
- The separation of DnsHeader from DnsMessage was unnecessary, although
  it is fine conceptually.
- A buffer leak caused by a DnsMessage was difficult to analyze, because
  the leak detector tracks the underlying ByteBuf rather than the
  DnsMessage itself.
- DnsMessage assumes DNS-over-UDP.
- To send an EDNS message, a user has to create a new DNS record class
  instance unnecessarily.
Modifications:
- Make all message types interfaces and add default implementations
- Rename some classes, properties, and constants to match the RFCs
- DnsResource -> DnsRecord
- DnsType -> DnsRecordType
- and many more
- Remove DnsClass and use an integer to support EDNS better
- Add DnsRecordEncoder/DnsRecordDecoder and their default
implementations
- DnsRecord does not require RDATA to be ByteBuf anymore.
- Add DnsRawRecord as the catch-all record type
- Merge DnsHeader into DnsMessage
- Make ResourceLeakDetector track AbstractDnsMessage
- Remove DnsMessage.sender/recipient properties
- Wrap DnsMessage with AddressedEnvelope
- Add DatagramDnsQuery and DatagramDnsResponse for ease of use
- Rename DnsQueryEncoder to DatagramDnsQueryEncoder
- Rename DnsResponseDecoder to DatagramDnsResponseDecoder
- Miscellaneous changes
- Add StringUtil.TAB
Result:
- Cleaner API
- Can support DNS-over-TCP more easily in the future
- Reduced memory footprint in the default DnsQuery/Response
implementations
- Better leak tracking for DnsMessages
- Possibility to introduce new DnsRecord types in the future and provide
full record encoder/decoder implementation.
- No unnecessary instantiation for an EDNS pseudo resource record
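For illustration, a hedged usage sketch of the reworked API described above (class names as found in the netty-codec-dns package; the resolver address and the surrounding bootstrap are assumptions):

    // "ch" is assumed to be an already-bootstrapped datagram Channel with the
    // DatagramDnsQueryEncoder/DatagramDnsResponseDecoder in its pipeline.
    static void queryA(Channel ch, InetSocketAddress dnsServer) {
        DnsQuery query = new DatagramDnsQuery(null, dnsServer, 1)
                .setRecord(DnsSection.QUESTION,
                           new DefaultDnsQuestion("netty.io.", DnsRecordType.A));
        ch.writeAndFlush(query);
    }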
Motivation:
Allow users of HTTP2 to control when flushes occur so they can optimize network writes.
Modifications:
Removed explicit calls to flush in encoder, decoder & flow-controller
Connection handler now calls flush on read-complete to enable batching writes in response to reads
Result:
Much less flushing occurs for normal HTTP2 request and response patterns.
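A minimal sketch of the batching idea (not the actual handler code): writes queued during channelRead are flushed once per read burst.

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.flush();                    // one flush per read burst instead of per frame
        super.channelReadComplete(ctx);
    }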
Motivation:
Right now the hpack dependency we use does not contain a valid OSGi manifest.
Modifications:
Upgrade hpack from 0.10.1 to 0.11.0.
Result:
The hpack dependency works in OSGi containers without wrapping.
Motivation:
Many projects need some kind of Channel/Connection pool implementation. While the protocols differ, many things can be shared, so we should provide a generic API and implementation.
Modifications:
Add ChannelPool / ChannelPoolMap API and implementations.
Result:
Reusable / Generic pool implementation that users can use.
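A minimal usage sketch of the new API (the remote address and the LoggingHandler are hypothetical; error handling omitted):

    // "group" is an existing EventLoopGroup (assumed).
    Bootstrap bootstrap = new Bootstrap()
            .group(group)
            .channel(NioSocketChannel.class)
            .remoteAddress("example.com", 8080);

    final ChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
        @Override
        public void channelCreated(Channel ch) {
            ch.pipeline().addLast(new LoggingHandler());
        }
    });

    pool.acquire().addListener(new FutureListener<Channel>() {
        @Override
        public void operationComplete(Future<Channel> future) {
            if (future.isSuccess()) {
                Channel ch = future.getNow();
                // ... use the channel, then hand it back to the pool ...
                pool.release(ch);
            }
        }
    });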
Motivation:
Some SCTP applications require the SCTP unordered flag.
This flag was not exposed by Netty so applications were unable
to use it.
Modifications:
- Add unordered flag to SctpMessage.
- {Nio,Oio}SctpChannel pass unordered flag to MessageInfo on write.
- SctpOutboundByteStreamHandler may optionally request unordered
delivery for all outbound messages.
- Added test case to SctpEchoTest using unordered flag.
Result:
Fixes #3698. New constructors and methods were added to SctpMessage and
SctpOutboundByteStreamHandler, but the changes are backward compatible.
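A hedged sketch of sending an unordered message with the new constructor (the SctpMessage(protocolIdentifier, streamIdentifier, unordered, payload) signature is assumed; "channel" is an already-connected SCTP channel):

    ByteBuf payload = Unpooled.copiedBuffer("hello", CharsetUtil.US_ASCII);
    SctpMessage message = new SctpMessage(0, 0, true /* unordered */, payload);
    channel.writeAndFlush(message);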
Motivation:
'length2 == 0' is not reachable because length1 and length2 are the same at this point.
Modifications:
Removed the 'length2 == 0' check.
Result:
Cleaner code.
Motivation:
Linux supports splice(...) to transfer data from one file descriptor to another without
passing the data through user space. This makes it possible to write high-performance proxy code or to
stream data from a socket directly to the filesystem.
Modification:
Add AbstractEpollStreamChannel.spliceTo(...) method to support splice(...) system call
Result:
Splice is now supported when using the native linux transport.
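A hedged proxy-style sketch of the spliceTo(...) call described above (the backend channel wiring is assumed to exist elsewhere):

    // Frontend handler of a hypothetical splice-based proxy.
    public final class SpliceFrontendHandler extends ChannelInboundHandlerAdapter {
        private final EpollSocketChannel outbound; // already-connected backend channel

        public SpliceFrontendHandler(EpollSocketChannel outbound) {
            this.outbound = outbound;
        }

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            // Move all inbound bytes to the backend inside the kernel, without copying to user space.
            ((EpollSocketChannel) ctx.channel())
                    .spliceTo(outbound, Integer.MAX_VALUE)
                    .addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
        }
    }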
Motivation:
There are a few spots in the HTTP/2 codec where warnings are generated that can be avoided.
Modifications:
Clean up the cause of the warnings.
Result:
Fewer warnings.
Motivation:
We are currently doing a memory copy to extract the frame in LengthFieldBasedFrameDecoder, which can be eliminated.
Modifications:
Use buffer.slice(...).retain() to eliminate the memory copy.
Result:
Better performance.
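Roughly, the frame extraction now looks like this (simplified sketch of the change):

    protected ByteBuf extractFrame(ChannelHandlerContext ctx, ByteBuf buffer, int index, int length) {
        // A retained slice is a view into the cumulation buffer: no bytes are copied,
        // only the reference count is bumped so the slice outlives the decode loop.
        return buffer.slice(index, length).retain();
    }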
Motivation:
Currently, valueOf() and newInstance() use almost the same code to create a new constant.
For maintainability, it's better to share the duplicated code between them.
Modifications:
Added new private methods:
- checkNotNullAndNotEmpty() checks whether the name of a constant is null or empty.
- newConstant0() creates a new constant.
Result:
- More compact source code
- Improved maintainability
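A hedged sketch of the shared-helper shape described above (simplified; the id generator, factory, and backing map of the real constant pool are only hinted at):

    private static String checkNotNullAndNotEmpty(String name) {
        if (name == null) {
            throw new NullPointerException("name");
        }
        if (name.isEmpty()) {
            throw new IllegalArgumentException("empty name");
        }
        return name;
    }

    private T newConstant0(String name) {
        // Both valueOf() and newInstance() delegate here after validating the name.
        T constant = newConstant(nextId(), name); // hypothetical factory and id generator
        constants.put(name, constant);            // hypothetical backing map
        return constant;
    }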
Motivation:
The Http2ConnectionHandler incorrectly does not propagate channelActive and channelInactive events and thus breaks the pipeline
for other ChannelHandlers.
Modification:
- Add calls to super.channelActive() and super.channelInactive().
- Remove unused methods.
Result:
- Http2ConnectionHandler can be used with other ChannelHandlers.
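The fix amounts to forwarding the events, e.g. (simplified sketch):

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // ... handler-specific work ...
        super.channelActive(ctx);   // keep propagating the event down the pipeline
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // ... handler-specific work ...
        super.channelInactive(ctx);
    }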
Motivation:
Because of a bug, we failed to fail the connect future when doClose() was called. This can leave a future that is never notified and so may lead to deadlocks in user programs.
Modifications:
Correctly fail the connect future when doClose() is called and the connection was not established yet.
Result:
Connect future is always notified.
Motivation:
When a SecurityManager is in place that prevents reading the somaxconn file, trying to bootstrap a channel later will result in a ClassNotFoundError.
Modifications:
- Reading the file in a privileged block.
Result:
No more ClassNotFoundError when a SecurityManager is in place.
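A hedged sketch of the privileged read (the path is the Linux default; the fallback value is an assumption):

    Integer somaxconn = AccessController.doPrivileged(new PrivilegedAction<Integer>() {
        @Override
        public Integer run() {
            BufferedReader in = null;
            try {
                in = new BufferedReader(new FileReader("/proc/sys/net/core/somaxconn"));
                return Integer.parseInt(in.readLine());
            } catch (Exception e) {
                return 128; // conservative fallback when the file cannot be read
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }
    });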
Motivation:
The ByteString class currently assumes the underlying array is a complete representation of its data. This is limiting, as it does not allow a subsection of another array to be used, and it forces copy operations to take place to compensate for the lack of API support.
Modifications:
- add arrayOffset method to ByteString
- modify all ByteString and AsciiString methods that loop over or index into the underlying array to use this offset
- update all code that uses ByteString.array to ensure it accounts for the offset
- add unit tests to test the implementation respects the offset
Result:
ByteString and AsciiString can represent a sub region of a byte[].
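Illustratively, every index into the backing array now has to account for the offset (hypothetical accessor):

    public byte byteAt(int index) {
        // arrayOffset() marks where this ByteString's view into the shared byte[] begins.
        return array()[arrayOffset() + index];
    }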
Motivation:
Streams currently maintain a hash map of user-defined properties, which has been shown to add significant memory overhead as well as being a performance bottleneck for lookup of frequently used properties.
Modifications:
Modified the connection/stream to use an array as the storage for user-defined properties, indexed by a key class that identifies the slot in the array where the property is stored.
Result:
Stream processing performance should be improved.
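A hedged sketch of the storage scheme (names are hypothetical; the real code lives in the connection/stream classes):

    // Each registered property key is assigned a fixed slot in every stream's array.
    final class PropertyKey {
        final int index;
        PropertyKey(int index) { this.index = index; }
    }

    final class StreamProperties {
        private final Object[] values;

        StreamProperties(int keyCount) { values = new Object[keyCount]; }

        Object get(PropertyKey key)             { return values[key.index]; }
        void set(PropertyKey key, Object value) { values[key.index] = value; }
    }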
Motivation:
Currently we allocate the full amount of state for each stream as soon as the stream is created, and keep that state until the stream is garbage collected. The full set of state is only needed when the stream can support flow-controlled frames. There is an opportunity to reduce the required amount of memory, and to make memory eligible for GC sooner, by only allocating what is necessary for flow control stream state.
Modifications:
Introduce objects which require 'less' state for local/remote flow control stream state.
Use these new objects when streams have been created but will not transition out of idle AND when streams are no longer eligible for flow controlled frame transfer but still must persist in the priority tree.
Result:
Memory allocations are reduced to what is actually needed, and memory is made eligible for GC potentially sooner.
Motivation:
static Package getPackage(Class<?> c) uses a synchronized block internally.
Thanks to @jingene for the hint and initial report of the issue.
Modifications:
- Use a simple lastIndexOf(...) and substring(...) for a faster implementation
Result:
No more lock contention.
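Roughly, the lock-free replacement looks like this (sketch):

    static String packageName(Class<?> clazz) {
        String name = clazz.getName();
        int idx = name.lastIndexOf('.');
        // Classes in the default package have no '.' in their fully qualified name.
        return idx == -1 ? "" : name.substring(0, idx);
    }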
Motivation:
The recent PR that discarded the Http2StreamRemovalPolicy causes connection errors when receiving a frame for a stream that no longer exists. We should ignore these frames if we think there's a chance that the stream has existed previously.
Modifications:
Modified the Http2Connection interface to provide a `streamMayHaveExisted` method. Also removed the requireStream() method to identify all of the places in the code that need to be updated.
Modified the encoder and decoder to properly handle cases where a stream may have existed but no longer does.
Result:
Fixes #3643
Motivation:
The current local flow controller does not guarantee that unconsumed bytes for a closed stream will be restored to the connection window. This may lead to degradation of the connection window over time.
Modifications:
Modified DefaultHttp2LocalFlowController to guarantee that any unconsumed bytes are returned to the connection window as soon as the stream is closed. We also immediately consume any bytes when receiving DATA for a closed stream.
Result:
Fixes #3668
Motivation:
The LineBasedFrameDecoder counts discardedBytes differently compared to
DelimiterBasedFrameDecoder.
Modifications:
Add the missing plus sign so discarded bytes are accumulated
Result:
discardedBytes is counted correctly
Motivation:
Flow control is a required part of the HTTP/2 specification but it is currently structured more like an optional item. It must be accessed through the property map which is time consuming and does not represent its required nature. This access pattern does not give any insight into flow control outside of the codec (or flow controller implementation).
Modifications:
1. Create a read only public interface for LocalFlowState and RemoteFlowState.
2. Add a LocalFlowState localFlowState(); and RemoteFlowState remoteFlowState(); to Http2Stream.
Result:
Flow control is now part of the Http2Stream interface. This clarifies its responsibility and logical relationship to other interfaces. The flow controller no longer must be acquired through a map lookup.
Motivation:
CompositeByteBuf.iterator() currently creates a new ArrayList and fills it with the ByteBufs, which is more expensive than it needs to be.
Modifications:
- Use special Iterator implementation
Result:
Less overhead when calling iterator()
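A hedged sketch of the idea: iterate the existing components instead of copying them into a new ArrayList (numComponents()/internalComponent(int) are CompositeByteBuf's own accessors):

    private final class ComponentIterator implements Iterator<ByteBuf> {
        private int index;

        @Override
        public boolean hasNext() {
            return index < numComponents();
        }

        @Override
        public ByteBuf next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            return internalComponent(index++);
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException("read-only iterator");
        }
    }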
Motivation:
Our automatic handling of non-auto-read failed because it did not detect the need to call read again by itself when nothing was decoded. Besides this, the handling of non-auto-read never worked for SslHandler, as it always triggered a read even if it had decoded a message and auto-read was false.
This fixes [#3529] and [#3587].
Modifications:
- Re-implement in ByteToMessageDecoder the handling that calls read again when nothing was decoded (with non-auto-read)
- Correctly respect non-auto-read by SslHandler
Result:
No more stalls, and non-auto-read is correctly respected by SslHandler.
Motivation:
Our tests for non-auto-read did not actually test this correctly, as auto-read was never disabled on the Bootstrap and ServerBootstrap.
Modifications:
- Correctly disable auto-read on Bootstrap and ServerBootstrap
- Fix tests to call ChannelHandlerContext.read() once a Channel becomes active.
Result:
Correctly test that non-auto-read works.
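The test setup change boils down to (sketch):

    // Disable auto-read so nothing is read unless the handler asks for it.
    bootstrap.option(ChannelOption.AUTO_READ, false);
    serverBootstrap.childOption(ChannelOption.AUTO_READ, false);

    // ... and in the test handler, kick off the first read manually:
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.read();
    }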
Motivation:
Unnecessary object allocation is currently done during wrap/unwrap while a handshake is still in progress.
Modifications:
Use static instances when possible.
Result:
Less object creations.
Motivation:
Each different *ChannelOption extended ChannelOption in 4.0, which we changed in 4.1. This is a breaking change in terms of the API, so we need to ensure we keep the old hierarchy.
Modifications:
- Let all *ChannelOption extend ChannelOption
- Add back constructor and mark it as @deprecated
Result:
No API breakage between 4.0 and 4.1
Motivation:
Sometimes it's useful to get information about the OpenSSL library that is used for the OpenSslEngine.
Modifications:
Add two new methods which allow getting the available OpenSSL version as either
an int or a String.
Result:
Easy to access details about OpenSSL version.
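A hedged sketch of their use (assuming they are exposed on the OpenSsl class as version() and versionString()):

    if (OpenSsl.isAvailable()) {
        int version = OpenSsl.version();                // numeric OpenSSL version
        String versionString = OpenSsl.versionString(); // e.g. "OpenSSL 1.0.1e ..."
        System.out.println(versionString + " (0x" + Integer.toHexString(version) + ')');
    }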
Motivation:
There is no benchmark to measure the priority tree implementation performance.
Modifications:
Introduce a new benchmark which will populate the priority tree, and then shuffle parent/child links around.
Result:
A simple benchmark to get a baseline for the HTTP/2 codec's priority tree implementation.
Motivation:
Allows running benchmarks from built jars, which is useful in development environments that only take released artifacts.
Modifications:
Move benchmarks into 'main' from 'test'
Add @State annotations to benchmarks that are missing them
Fix timing issue grabbing context during channel initialization
Result:
Users can run benchmarks more easily.
Motivation:
The current implementation does a byte-by-byte comparison, which we have seen
can be a performance bottleneck when the AsciiString is used as the key in
a Map.
Modifications:
Use sun.misc.Unsafe (on supporting platforms) to compare up to eight bytes at a time
and get closer to the performance of String.equals(Object).
Result:
Significant improvement (2x - 6x) in performance over the current implementation.
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10 thrpt 10 118843477.518 2347259.347 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 50 thrpt 10 43910319.773 198376.996 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100 thrpt 10 26339969.001 159599.252 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 1000 thrpt 10 2873119.030 20779.056 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10000 thrpt 10 306370.450 1933.303 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100000 thrpt 10 25750.415 108.391 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10 thrpt 10 248077563.510 635320.093 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 50 thrpt 10 128198943.138 614827.548 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100 thrpt 10 86195621.349 1063959.307 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 1000 thrpt 10 16920264.598 61615.365 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10000 thrpt 10 1687454.747 6367.602 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100000 thrpt 10 153717.851 586.916 ops/s
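The approach, as a hedged standalone sketch (Netty routes this through PlatformDependent and falls back to a safe byte-by-byte path where Unsafe is unavailable):

    import sun.misc.Unsafe;
    import java.lang.reflect.Field;

    final class BytesEqual {
        private static final Unsafe UNSAFE;
        static {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                UNSAFE = (Unsafe) f.get(null);
            } catch (Exception e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        static boolean bytesEqual(byte[] a, byte[] b) {
            if (a.length != b.length) {
                return false;
            }
            final long base = Unsafe.ARRAY_BYTE_BASE_OFFSET;
            int i = 0;
            // Compare eight bytes per iteration.
            for (; i <= a.length - 8; i += 8) {
                if (UNSAFE.getLong(a, base + i) != UNSAFE.getLong(b, base + i)) {
                    return false;
                }
            }
            // Finish the remaining tail byte by byte.
            for (; i < a.length; i++) {
                if (a[i] != b[i]) {
                    return false;
                }
            }
            return true;
        }
    }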
Motivation:
ByteString#hashCode() trashes its own hash code if it's being accessed concurrently
Modifications:
Pull the ByteString#hash into a local variable and calculate it locally.
Result:
ByteString#hashCode() is no longer returning a junk value.
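The race-tolerant caching pattern, sketched (same idea as String.hashCode(); length(), byteAt() and the hash function itself are stand-ins for the real ones):

    @Override
    public int hashCode() {
        int h = hash;        // read the shared field exactly once
        if (h == 0) {
            for (int i = 0; i < length(); i++) {
                h = h * 31 + (byteAt(i) & 0xFF);
            }
            hash = h;        // publish only the fully computed value
        }
        return h;
    }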
Motivation:
The IntObjectHashMap benchmarks show the Agrona collections to be faster on put, lookup, and remove. One major difference is that we're using 2 modulus operations each time we increment the position index while iterating. Agrona uses a mask instead.
Modifications:
Modified the KObjectHashMap to use masking rather than modulus when wrapping the position index. This requires that the capacity be a power of 2.
Result:
Improved performance of IntObjectHashMap.
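With a power-of-two capacity the wrap-around becomes a mask instead of a modulus (sketch):

    // Before: a '%' operation on every probe step.
    // index = (index + 1) % capacity;

    // After: capacity is a power of two, so (capacity - 1) is an all-ones bit mask.
    index = (index + 1) & mask;   // mask == capacity - 1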
Motivation:
Other implementations of FullHttpMessage allow .toString() to be called after the message has been released.
This brings AggregatedFullHttpMessage into line with those implementations.
Modifications:
- Changed AggregatedFullHttpMessage to no longer be a sub-class of DefaultByteBufHolder
- Changed AggregatedFullHttpMessage to implement ByteBufHolder
- Hold the content buffer internally to AggregatedFullHttpMessage
- Implement the required content() and release() methods that were missing
- Do not check refcnt when accessing content() (similar to DefaultFullHttpMessage)
Result:
A released AggregatedFullHttpMessage can have .toString called without throwing an exception