Motivation:
The recent PR that discarded the Http2StreamRemovalPolicy causes connection errors when receiving a frame for a stream that no longer exists. We should ignore these frames if we think there's a chance that the stream has existed previously
Modifications:
Modified the Http2Connection interface to provide a `streamMayHaveExisted` method. Also removed the requireStream() method to identify all of the places in the code that need to be updated.
Modified the encoder and decoder to properly handle cases where a stream may have existed but no longer does.
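A minimal sketch of the decoder-side check, assuming the new method takes a stream id; the shouldIgnoreFrame helper itself is hypothetical:

    // Types from io.netty.handler.codec.http2.
    private boolean shouldIgnoreFrame(Http2Connection connection, int streamId) throws Http2Exception {
        if (connection.stream(streamId) != null) {
            return false; // stream is live: process the frame normally
        }
        if (connection.streamMayHaveExisted(streamId)) {
            return true;  // stream was closed/removed earlier: silently ignore the frame
        }
        throw Http2Exception.connectionError(Http2Error.PROTOCOL_ERROR,
                "Received a frame for an unknown stream %d", streamId);
    }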
Result:
Fixes #3643
Motivation:
The current local flow controller does not guarantee that unconsumed bytes for a closed stream will be restored to the connection window. This may lead to degradation of the connection window over time.
Modifications:
Modified DefaultHttp2LocalFlowController to guarantee that any unconsumed bytes are returned to the connection window as soon as the stream is closed. We also immediately consume any bytes when receiving DATA for a closed stream.
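A rough sketch of the close-time behaviour, with hypothetical state accessors standing in for the real fields:

    // On stream close: hand any bytes the application never consumed back
    // to the connection-level window so it cannot degrade over time.
    int unconsumed = streamState.unconsumedBytes();        // hypothetical accessor
    if (unconsumed > 0) {
        connectionState.returnProcessedBytes(unconsumed);  // hypothetical; restores the window
    }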
Result:
Fixes #3668
Motivation:
The LineBasedFrameDecoder counts discardedBytes differently from
DelimiterBasedFrameDecoder.
Modifications:
Add the missing plus sign so the discarded-byte count accumulates instead of being overwritten.
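The change is roughly of this shape:

    // before: each discarded chunk overwrote the running count
    discardedBytes = buf.readableBytes();
    // after: the plus sign makes the count accumulate, as in DelimiterBasedFrameDecoder
    discardedBytes += buf.readableBytes();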
Result:
discardedBytes is now counted correctly, matching DelimiterBasedFrameDecoder.
Motivation:
Flow control is a required part of the HTTP/2 specification but it is currently structured more like an optional item. It must be accessed through the property map which is time consuming and does not represent its required nature. This access pattern does not give any insight into flow control outside of the codec (or flow controller implementation).
Modifications:
1. Create a read-only public interface for LocalFlowState and RemoteFlowState.
2. Add a LocalFlowState localFlowState(); and RemoteFlowState remoteFlowState(); to Http2Stream.
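Sketched, the additions to Http2Stream look like:

    public interface Http2Stream {
        // ... existing stream methods ...

        // Read-only views of this stream's flow-control state.
        LocalFlowState localFlowState();
        RemoteFlowState remoteFlowState();
    }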
Result:
Flow control is now part of the Http2Stream interface. This clarifies its responsibility and logical relationship to other interfaces. The flow controller no longer must be acquired through a map lookup.
Motivation:
CompositeByteBuf.iterator() currently creates a new ArrayList and fills it with the ByteBufs, which is more expensive than it needs to be.
Modifications:
- Use special Iterator implementation
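A sketch of the idea as an inner class of CompositeByteBuf, using its existing component accessors:

    // Walk the components in place instead of copying them into an ArrayList first.
    private final class ComponentIterator implements java.util.Iterator<ByteBuf> {
        private int index;

        @Override
        public boolean hasNext() {
            return index < numComponents();
        }

        @Override
        public ByteBuf next() {
            if (!hasNext()) {
                throw new java.util.NoSuchElementException();
            }
            return internalComponent(index++);
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException("read-only iterator");
        }
    }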
Result:
Less overhead when calling iterator()
Motivation:
Our automatic handling of non-auto-read failed because it did not detect the need to call read() again by itself if nothing was decoded. Besides this, the handling of non-auto-read never worked for SslHandler, as it always triggered a read even if it had decoded a message and auto-read was false.
This fixes [#3529] and [#3587].
Modifications:
- Implement handling of calling read when nothing was decoded (with non-auto-read) to ByteToMessageDecoder again
- Correctly respect non-auto-read by SslHandler
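A sketch of the first change, with a hypothetical decodedNothing flag tracking whether decode() produced any message:

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // If nothing was decoded and the user disabled auto-read, nobody else
        // will ask for more bytes, so the decoder must issue the read itself.
        if (decodedNothing && !ctx.channel().config().isAutoRead()) {
            ctx.read();
        }
        ctx.fireChannelReadComplete();
    }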
Result:
No more stalls, and SslHandler now correctly respects non-auto-read.
Motivation:
Our tests for non-auto-read did not actually test this correctly, as auto-read was never disabled on the Bootstrap and ServerBootstrap.
Modifications:
- Correctly disable auto-read on Bootstrap and ServerBootstrap
- Fix tests to call ChannelHandlerContext.read() once a Channel becomes active.
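The pattern under test now looks roughly like this:

    Bootstrap cb = new Bootstrap();
    cb.option(ChannelOption.AUTO_READ, false); // actually disable auto-read

    // With auto-read off, the handler must request data explicitly:
    ChannelInboundHandlerAdapter handler = new ChannelInboundHandlerAdapter() {
        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            ctx.read(); // kick off the first read once the channel is active
        }
    };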
Result:
Correctly test that non-auto-read works.
Motivation:
Unnecessary object allocation is currently done during wrap/unwrap while a handshake is still in progress.
Modifications:
Use static instances when possible.
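For example (a sketch of the idea, not necessarily the exact constants used):

    // Reuse one immutable SSLEngineResult instead of allocating a fresh one
    // on every wrap/unwrap call made while the handshake is in progress.
    private static final SSLEngineResult NEED_UNWRAP_OK = new SSLEngineResult(
            SSLEngineResult.Status.OK, SSLEngineResult.HandshakeStatus.NEED_UNWRAP, 0, 0);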
Result:
Fewer object creations.
Motivation:
Each different *ChannelOption extended ChannelOption in 4.0; we changed this in 4.1. This is a breaking change in terms of the API, so we need to ensure we keep the old hierarchy.
Modifications:
- Let all *ChannelOption extend ChannelOption
- Add back constructor and mark it as @deprecated
Result:
No API breakage between 4.0 and 4.1
Motivation:
Sometimes it's useful to get information about the available OpenSSL library that is used for the OpenSslEngine.
Modifications:
Add two new methods which allow retrieving the available OpenSSL version as either
an int or a String.
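Assuming the new accessors live on the OpenSsl class, usage would look like:

    // Types from io.netty.handler.ssl.
    if (OpenSsl.isAvailable()) {
        int version = OpenSsl.version();                // numeric form, e.g. 0x1000105f
        String versionString = OpenSsl.versionString(); // text form, e.g. "OpenSSL 1.0.1e ..."
    }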
Result:
Easy access to details about the OpenSSL version.
Motivation:
There is no benchmark to measure the priority tree implementation performance.
Modifications:
Introduce a new benchmark which will populate the priority tree, and then shuffle parent/child links around.
Result:
A simple benchmark to get a baseline for the HTTP/2 codec's priority tree implementation.
Motivation:
Allows for running benchmarks from built jars, which is useful in development environments that only take released artifacts.
Modifications:
Move benchmarks into 'main' from 'test'
Add @State annotations to benchmarks that are missing them
Fix timing issue grabbing context during channel initialization
Result:
Users can run benchmarks more easily.
Motivation:
The current implementation does byte by byte comparison, which we have seen
can be a performance bottleneck when the AsciiString is used as the key in
a Map.
Modifications:
Use sun.misc.Unsafe (on supporting platforms) to compare up to eight bytes at a time
and get closer to the performance of String.equals(Object).
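A simplified sketch of the technique (the real code sits behind PlatformDependent and also handles the non-Unsafe fallback):

    // Compare two byte arrays eight bytes at a time using sun.misc.Unsafe.
    static boolean bytesEqual(sun.misc.Unsafe unsafe, byte[] a, byte[] b, int length) {
        final long base = unsafe.arrayBaseOffset(byte[].class);
        int i = 0;
        for (; i <= length - 8; i += 8) {   // bulk: one long read per 8 bytes
            if (unsafe.getLong(a, base + i) != unsafe.getLong(b, base + i)) {
                return false;
            }
        }
        for (; i < length; i++) {           // tail: the remaining 0-7 bytes
            if (a[i] != b[i]) {
                return false;
            }
        }
        return true;
    }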
Result:
Significant improvement (2x - 6x) in performance over the current implementation.
Benchmark (size) Mode Samples Score Score error Units
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10 thrpt 10 118843477.518 2347259.347 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 50 thrpt 10 43910319.773 198376.996 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100 thrpt 10 26339969.001 159599.252 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 1000 thrpt 10 2873119.030 20779.056 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 10000 thrpt 10 306370.450 1933.303 ops/s
i.n.m.i.PlatformDependentBenchmark.arraysBytesEqual 100000 thrpt 10 25750.415 108.391 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10 thrpt 10 248077563.510 635320.093 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 50 thrpt 10 128198943.138 614827.548 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100 thrpt 10 86195621.349 1063959.307 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 1000 thrpt 10 16920264.598 61615.365 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 10000 thrpt 10 1687454.747 6367.602 ops/s
i.n.m.i.PlatformDependentBenchmark.unsafeBytesEqual 100000 thrpt 10 153717.851 586.916 ops/s
Motivation:
ByteString#hashCode() trashes its own hash code if it's being accessed concurrently
Modifications:
Pull the ByteString#hash into a local variable and calculate it locally.
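The fix is the same single-read pattern String.hashCode() uses, sketched here with a hypothetical calculateHash helper:

    @Override
    public int hashCode() {
        int h = hash;            // read the shared field exactly once
        if (h == 0) {
            h = calculateHash(); // hypothetical helper over the underlying bytes
            hash = h;            // benign data race: worst case is a recomputation
        }
        return h;                // the field is never re-read, so no junk value escapes
    }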
Result:
ByteString#hashCode() is no longer returning a junk value.
Motivation:
The IntObjectHashMap benchmarks show the Agrona collections to be faster on put, lookup, and remove. One major difference is that we're using 2 modulus operations each time we increment the position index while iterating. Agrona uses a mask instead.
Modifications:
Modified the KObjectHashMap to use masking rather than modulus when wrapping the position index. This requires that the capacity be a power of 2.
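The wrap then reduces to a mask, roughly:

    // before: a modulus per probe step
    index = (index + 1) % capacity;

    // after: capacity is forced to a power of 2, so capacity - 1 is an all-ones mask
    index = (index + 1) & mask; // mask == capacity - 1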
Result:
Improved performance of IntObjectHashMap.
Motivation:
Other implementations of FullHttpMessage allow .toString() to be called after the message has been released.
This brings AggregatedFullHttpMessage into line with those implementations.
Modifications:
- Changed AggregatedFullHttpMessage to no longer be a sub-class of DefaultByteBufHolder
- Changed AggregatedFullHttpMessage to implement ByteBufHolder
- Hold the content buffer internally to AggregatedFullHttpMessage
- Implement the required content() and release() methods that were missing
- Do not check refcnt when accessing content() (similar to DefaultFullHttpMessage)
Result:
A released AggregatedFullHttpMessage can have .toString called without throwing an exception
Motivation:
If an exclusive dependency change is requested (stream B should become an exclusive dependency of stream A) and stream B is already a child of stream A, then we will add B to B's own children map and create a circular link in the priority tree. This leads to an infinite recursive loop and a stack overflow exception.
Modifications:
- When removeAllChildren is called, it should not remove the exclusive dependency.
- Added a unit test to ensure this case is covered.
Result:
No more circular link in the priority tree.
Motivation:
While forward porting https://github.com/netty/netty/pull/3579 there were a few areas that had not been previously back ported.
Modifications:
Backport the missed areas to ensure consistency.
Result:
More consistent 4.1 and master branches.
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.
Modifications:
- ByteString will be created as a base class of AsciiString. All of the generic byte handling will live in ByteString and all the special character encoding will live in AsciiString.
Result:
The AsciiString interface will be clarified. Users of AsciiString can now be clear about the limitations the class imposes, while users of ByteString don't have to live with those limitations.
Motivation:
Attribute.getAndRemove() will return the value but also remove the AttributeKey itself from the AttributeMap. This may not be
what you want, as you may want to keep an instance of it and just set it later again. Document the contract so the user knows what to expect.
Modifications:
- Make it clear when to use Attribute.getAndRemove() / Attribute.remove() and when Attribute.getAndSet(null) / Attribute.set(null).
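The difference, sketched with a hypothetical key:

    AttributeKey<String> KEY = AttributeKey.valueOf("example"); // hypothetical key
    Attribute<String> attr = channel.attr(KEY);

    attr.getAndRemove();  // returns the value and removes the attribute from the map
    attr.getAndSet(null); // returns the value but keeps the Attribute instance around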
Result:
Less surprising behaviour.
Motivation:
Sometimes it's useful to use EC keys and not DSA or RSA. We should support it.
Modifications:
Support EC keys and share the code between the JDK and OpenSSL implementations.
Result:
It's possible to use EC keys now.
Motivation:
SslContext factory methods have gotten out of control; it's past time to
swap to a builder.
Modifications:
New Builder class. The existing factory methods must be left as-is for
backward compatibility.
Result:
Fixes #3531
Motivation:
Because we missed correctly handling EPOLLRDHUP, we produced an unnecessary IOException, which leads
to exceptionCaught(...) methods being called.
Modifications:
When EPOLLRDHUP is received, just close the socket and fail all pending writes.
Result:
EPOLLRDHUP is handled correctly and no misleading exceptions are raised.
Motivation:
Due to a recent flurry of cleanup and fixes, we no longer need the stream removal policy to protect against recently removed streams. We should get rid of it.
Modifications:
Removed Http2StreamRemovalPolicy and everywhere it's used.
Result:
Fixes #3448
Motivation:
The Http2FrameWriterBenchmark JMH harness class name was not updated for the JVM arguments. The number of forks is 0, which means JMH will share a JVM with the benchmarks. Sharing the JVM may lead to less reliable benchmarking results and doesn't allow the command line arguments to be applied to each benchmark.
Modifications:
- Update the JMH version from 0.9 to 1.7.1. Benchmarks wouldn't run on the old version.
- Increase the number of forks from 0 to 1.
- Move allocation of the environment from static initialization and AfterClass cleanup to the Setup and Teardown methods. The forked JVM would not shut down correctly otherwise (and would wait for 30+ seconds before timing out).
Result:
Benchmarks that run as intended.
Motivation:
The DefaultHttp2Connection is not checking for RuntimeExceptions when invoking Http2Connection.Listener methods. This is a problem for a few reasons:
1. The state of DefaultHttp2Connection will be corrupted if a listener throws a RuntimeException.
2. If the first listener throws then no other listeners will be notified, which may further corrupt state that is updated as a result of listeners being notified.
Modifications:
- Document that RuntimeExceptions are not supported for Http2Connection.Listener methods, and will be logged as an error.
- Update DefaultHttp2Connection to handle an exception from each listener that is notified, and ensure that one listener throwing an exception does not prevent others from being notified.
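The notification loop becomes roughly:

    for (Http2Connection.Listener listener : listeners) {
        try {
            listener.onStreamClosed(stream);
        } catch (RuntimeException e) {
            // Log and keep going so one misbehaving listener cannot
            // starve the others or corrupt connection state.
            logger.error("Caught RuntimeException from listener onStreamClosed.", e);
        }
    }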
Result:
More robust DefaultHttp2Connection.
Motivation:
When creating the NormalMemoryRegionCache for PoolThreadCache, we overbooked
the cache array size. This means unnecessary overhead for the thread-local cache,
as we create multiple cache entries for each element in the cache array.
Modifications:
change:
int arraySize = Math.max(1, max / area.pageSize);
to:
int arraySize = Math.max(1, log2(max / area.pageSize) + 1);
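For example, with an 8 KiB page size and max = 32 KiB, the old formula reserved max / pageSize = 4 slots, but normal buffers only come in the power-of-two sizes 8 KiB, 16 KiB and 32 KiB, so log2(4) + 1 = 3 slots suffice.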
Result:
Now arraySize won't introduce unnecessary overhead.
Motivation:
CompositeByteBuf has an iterator() method but fails to implement Iterable
Modifications:
Let CompositeByteBuf implement Iterable<ByteBuf>
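This enables the enhanced for loop:

    CompositeByteBuf composite = Unpooled.compositeBuffer();
    // ... add components ...
    for (ByteBuf component : composite) { // works now that Iterable<ByteBuf> is implemented
        System.out.println(component.readableBytes());
    }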
Result:
Easier usage
Motivation:
The ReplayingDecoderBuffer does not match the naming scheme we use for ByteBuf types.
Modifications:
Rename to ReplayingDecoderByteBuf to match naming scheme
Result:
Consistent naming
Motivation:
We missed nulling out the chunk and tmpNioBuf references when calling deallocate(). This means the GC cannot collect them, as we still hold a reference while the PooledByteBuf sits in the recycler stack.
Modifications:
Null out chunk and tmpNioBuf in deallocate().
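Sketched, with the surrounding arena bookkeeping elided:

    @Override
    protected final void deallocate() {
        // ... return the memory region to the arena (elided) ...
        chunk = null;     // drop the PoolChunk reference
        tmpNioBuf = null; // drop the cached ByteBuffer view
        // ... push this wrapper onto the recycler stack (elided) ...
    }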
Result:
The GC can now collect the chunk and tmpNioBuf.
Motivation:
We missed flushing the channel when using HttpChunkedInput (the flush only happens when using SSL). This results in a stall.
Modifications:
Replace ctx.write(...) with ctx.writeAndFlush(...)
Result:
Correctly working example.
Motivation:
At the moment we use FIFO for the PoolThreadCache, which is sub-optimal as it may reduce the chances that the cached memory is actually still in the CPU cache.
Modifications:
- Change to use LIFO, as this increases the chance of being able to serve buffers from the CPU cache
Result:
Faster allocation out of the ThreadLocal cache.
Before the commit:
[xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.69ms 10.06ms 131.43ms 80.10%
Req/Sec 283.89k 40.37k 433.69k 66.81%
533859742 requests in 2.00m, 72.09GB read
Requests/sec: 4449510.51
Transfer/sec: 615.29MB
After the commit:
[xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 16.38ms 26.32ms 734.06ms 97.38%
Req/Sec 283.86k 39.31k 361.69k 83.38%
540836511 requests in 2.00m, 73.04GB read
Requests/sec: 4508150.18
Transfer/sec: 623.40MB
Motivation:
Now that we have a CharObjectHashMap, we should change Http2Settings to use it.
Modifications:
Changed Http2Settings to extend CharObjectHashMap rather than IntObjectHashMap.
Result:
Http2Settings uses less memory to store keys, as HTTP/2 setting identifiers are 16-bit values that fit in a char.
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the SSL pointer is freed
- Correctly support the different failure behaviours
- Add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
We've removed access to the activeStreams collection; we should do the same for the children of a stream to provide a consistent interface.
Modifications:
Moved Http2StreamVisitor to a top-level interface. Removed unnecessary child operations from the Http2Stream interface so that we no longer require a map structure.
Result:
Cleaner and more consistent interface for iterating over child streams.
Motivation:
Currently we have IntObjectMap/HashMap, but it will be useful to support other primitive-based maps.
Modifications:
Moved the code in the current maps to template files and ran Groovy code from common/pom.xml to apply the templates.
Result:
Autogeneration of int and char-based hash maps.
Motivation:
Too much duplicated code in the tests for different compression codecs.
Modifications:
- Added abstract classes AbstractCompressionTest, AbstractDecoderTest and AbstractEncoderTest which contains common variables and tests for any compression codec.
- Removed common tests which are implemented in AbstractDecoderTest and AbstractEncoderTest from current tests for compression codecs.
- Implemented abstract methods of AbstractDecoderTest and AbstractEncoderTest in current tests for compression codecs.
- Added additional checks for current tests.
- Renamed abstract class IntegrationTest to AbstractIntegrationTest.
- Used Theories to run tests with heap and direct buffers.
- Removed code duplicates.
Result:
Removed duplicated test code for compression codecs and simplified the addition of tests for new compression codecs.
Motivation:
The Http2Connection interface exposes an activeStreams() method which allows direct iteration over the underlying collection. There are a few places that make copies of this collection to avoid modification while iterating, and a few places that do not make copies. The copy operation can be expensive on hot code paths and also we are not consistently iterating over the activeStreams collection.
Modifications:
- The Http2Connection interface should reduce the exposure of the underlying collection and just expose what is necessary for the interface to function. This is just a means to iterate over the collection.
- The DefaultHttp2Connection should use this new interface and protect its internal state while iteration is occurring.
Result:
Reduction in surface area of the Http2Connection interface. Consistent iteration of the set of active streams. Concurrent modification exceptions are handled in one encapsulated spot.
Motivation:
1) The current implementation doesn't allow for HEADERS, DATA, PING, PRIORITY and SETTINGS
frames to be sent after GOAWAY.
2) When receiving or sending a GOAWAY frame, all streams with ids greater than the lastStreamId
of the GOAWAY frame should be closed. That's not happening.
Modifications:
1) Allow sending of HEADERS and DATA frames after GOAWAY for streams with ids < lastStreamId.
2) Always allow sending PING, PRIORITY and SETTINGS frames.
3) Allow sending multiple GOAWAY frames with decreasing lastStreamIds.
4) After receiving or sending a GOAWAY frame, close all streams with ids > lastStreamId.
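A sketch of the resulting acceptance check (the helper and its shape are hypothetical):

    // Is this frame still allowed after a GOAWAY with the given lastStreamId?
    static boolean allowedAfterGoAway(byte frameType, int streamId, int lastStreamId) {
        // Connection-level frames remain allowed at all times.
        if (frameType == Http2FrameTypes.PING
                || frameType == Http2FrameTypes.PRIORITY
                || frameType == Http2FrameTypes.SETTINGS) {
            return true;
        }
        // HEADERS/DATA are only allowed for streams the GOAWAY sender
        // agreed to keep processing.
        return streamId <= lastStreamId;
    }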
Result:
The GOAWAY handling is more correct.
Motivation:
There are methods to manipulate the prioritizable count for streams which have the '0' suffix and are designed to be used during recursion. However, these methods call out to the external method without the '0' during the recursive process, which does unnecessary conditional checks during recursion.
Modifications:
Call decrementPrioritizableForTree0 instead of decrementPrioritizableForTree while in the recursive method.
Call incrementPrioritizableForTree0 instead of incrementPrioritizableForTree while in the recursive method.
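A generic sketch of the pattern (types and traversal simplified; not the actual priority-tree logic):

    void incrementPrioritizableForTree(int amount) {
        assert amount >= 0;                     // guard work done exactly once
        incrementPrioritizableForTree0(amount);
    }

    private void incrementPrioritizableForTree0(int amount) {
        prioritizableForTree += amount;
        for (Node child : children) {
            child.incrementPrioritizableForTree0(amount); // stay in the '0' variant
        }
    }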
Result:
Less overhead during recursive calls.