Motivation:
HPACK Encoder has a data structure which is similar to a previous version of DefaultHeaders. Some of the same improvements can be made.
Modifications:
- Enforce the restriction that the Encoder's headerFields length must be a power of two so we can use masking instead of modulo
- Use AsciiString.hashCode which already has optimizations instead of having yet another hash code algorithm in Encoder
Result:
Fixes https://github.com/netty/netty/issues/5357
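For illustration, a minimal sketch of the masking trick the power-of-two restriction enables (hypothetical names, not the actual Encoder internals):

```java
// With a power-of-two table length, indexing can use a bit mask instead of %.
final class HeaderTableSketch {
    private final Object[] headerFields;
    private final int mask;

    HeaderTableSketch(int requestedCapacity) {
        // Round the requested capacity up to the next power of two.
        int size = 1;
        while (size < requestedCapacity) {
            size <<= 1;
        }
        headerFields = new Object[size];
        mask = size - 1;
    }

    int index(int hash) {
        // Maps any hash (including negative ones) into [0, headerFields.length)
        // without the cost and sign issues of the % operator.
        return hash & mask;
    }
}
```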
Motivation:
PlatformDependent attempts to use reflection to get the underlying char[] (or byte[]) from String objects. This is fragile: if the String implementation does not use the full array and instead uses only a subsection of it, the optimization is invalid. OpenJDK 6 and some earlier versions of OpenJDK 7 have String implementations that can use a subsection of the underlying char[].
Modifications:
- PlatformDependent should not attempt to use the underlying array from String (or other data types) via reflection
Result:
PlatformDependent hash code generation for CharSequence does not depend upon specific JDK implementation details.
Motivation:
The HTTP/2 RFC states in https://tools.ietf.org/html/rfc7540#section-6.8 that Endpoints MUST NOT increase the value they send in the last stream identifier. However, we don't enforce this when decoding GOAWAY frames.
Modifications:
- Throw a connection error if the peer attempts to increase the lastStreamId in a GOAWAY frame
Result:
RFC is more strictly enforced.
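A rough sketch of the kind of check this adds (hypothetical names; the real codec raises an HTTP/2 connection error rather than an IllegalStateException):

```java
// Reject a GOAWAY whose lastStreamId is larger than a previously received one.
final class GoAwayTrackerSketch {
    private int lastGoAwayStreamId = Integer.MAX_VALUE; // no GOAWAY seen yet

    void onGoAwayRead(int lastStreamId) {
        if (lastStreamId > lastGoAwayStreamId) {
            // RFC 7540, Section 6.8: endpoints MUST NOT increase the last stream identifier.
            throw new IllegalStateException(
                    "lastStreamId MUST NOT increase: " + lastGoAwayStreamId + " -> " + lastStreamId);
        }
        lastGoAwayStreamId = lastStreamId;
    }
}
```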
Motivation:
Quote from issue 4914:
"Http2MultiplexCodec currently does two things: mapping the existing h2 API to frames and managing the child channels.
It would be better if the two parts were separated. This would allow less-coupled development of the HTTP/2 handlers (flow control could be its own handler, for instance) and allow applications to insert themselves between all streams and the codec, which permits custom logic and could be used, in part, to implement custom frame types.
It would also greatly ease testing, as the child channel could be tested by itself without dealing with how frames are encoded on the wire."
Modifications:
- Split the Http2MultiplexCodec into Http2FrameCodec and Http2MultiplexCodec. The Http2FrameCodec interacts with the existing HTTP/2 callback-based API, while the Http2MultiplexCodec is completely independent of it and simply multiplexes Http2StreamFrames to the child channels. Additionally, the Http2Codec handler is introduced, which is a convenience class that simply sets up the Http2FrameCodec and Http2MultiplexCodec in the channel pipeline and removes itself.
- Improved test coverage quite a bit.
Result:
- The original Http2MultiplexCodec is split into Http2FrameCodec and Http2MultiplexCodec.
- More tests for higher confidence in the code.
Motivation:
PR #5355 modified interfaces to reduce GC related to the HPACK code. However this came with an anticipated performance regression related to HpackUtil.equals due to AsciiString's increased cost of charAt(..). We should mitigate this performance regression.
Modifications:
- Introduce an equals method in PlatformDependent which doesn't leak timing information and use it in HpackUtil.equals
Result:
Fixes https://github.com/netty/netty/issues/5436
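The idea of such a comparison is to accumulate differences rather than return on the first mismatch, so the run time does not reveal how many leading bytes matched. A generic sketch (not the exact PlatformDependent implementation):

```java
final class ConstantTimeEqualsSketch {
    // Constant-time equality: XOR differences are OR-ed together instead of
    // short-circuiting on the first mismatching byte.
    static boolean constantTimeEquals(byte[] a, byte[] b) {
        if (a.length != b.length) {
            return false; // only the length is revealed, not the contents
        }
        int diff = 0;
        for (int i = 0; i < a.length; i++) {
            diff |= a[i] ^ b[i];
        }
        return diff == 0;
    }
}
```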
Motivation:
The HTTP/2 specification requires the pad length field of DATA, HEADERS and PUSH_PROMISE frames to be counted towards the flow control window. The current implementation doesn't do so (See #5434).
Furthermore, it's currently not possible to add one byte padding, as this would add the one byte pad length field as well as append one padding byte to the end of the frame.
Modifications:
Include the one-byte pad length field in the padding parameter of the API, thereby extending the allowed value range by one byte to 256 (inclusive). On the wire, a one-byte padding is encoded as a pad length field with value zero, and a 256-byte padding is encoded as a pad length field with value 255 plus 255 padding bytes appended to the end of the frame.
Result:
More correct padding.
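A sketch of the resulting mapping from the padding parameter to the wire encoding (hypothetical helper, values per the description above):

```java
final class PaddingSketch {
    // Returns the value written into the one-byte pad length field, or -1 when
    // no PADDED flag and no pad length field are emitted at all.
    static int padLengthFieldValue(int padding) {
        if (padding == 0) {
            return -1;
        }
        // padding = 1   -> pad length field 0,   0 trailing padding bytes
        // padding = 256 -> pad length field 255, 255 trailing padding bytes
        return padding - 1;
    }
}
```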
Motivation:
The HPACK code was not really optimized or written with Netty types in mind. Because of this, a lot of garbage was created due to heavy object creation.
This was first reported in [#3597] and https://github.com/grpc/grpc-java/issues/1872.
Modifications:
- Directly use ByteBuf as input and output
- Make use of ByteProcessor where possible
- Use AsciiString as this is the only thing we need for our http2 usage
Result:
Less garbage and better usage of Netty apis.
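For reference, the ByteProcessor pattern lets the buffer drive the iteration instead of repeated index-based reads; a small usage sketch (not the HPACK code itself):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.ByteProcessor;

public final class ByteProcessorExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer(new byte[] { 1, 2, 0, 3 });
        // forEachByte visits bytes until the processor returns false and
        // returns the index where it stopped (or -1 if it never stopped).
        int firstZero = buf.forEachByte(new ByteProcessor() {
            @Override
            public boolean process(byte value) {
                return value != 0; // keep going while the byte is non-zero
            }
        });
        System.out.println("first zero byte at index " + firstZero); // 2
        buf.release();
    }
}
```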
Motivation:
These methods were recently deprecated. However, they remained in use in several locations in Netty's codebase.
Modifications:
Netty's code will now access the bootstrap config to get the group or child group.
Result:
No impact on functionality.
Motivation:
If a single DATA frame ends up being decompressed into multiple frames by DelegatingDecompressorFrameListener, the flow control accounting is delayed until all frames have been decompressed. However it is possible the user may want to return bytes to the flow controller which were not included in the onDataRead return value. In this case the amount of processed bytes has not been incremented, which will lead to a negative value for processed bytes.
Modifications:
- Http2Decompressor.incrementProcessedBytes should be called each time onDataRead is called to ensure all bytes are accounted for at the correct time
Result:
Fixes https://github.com/netty/netty/issues/5375
Motivation:
We use pre-instantiated exceptions in various places for performance reasons. These exceptions don't include a stacktrace, which makes it hard to know where the exception was thrown. This is especially true as we use the same exception type (for example ChannelClosedException) in different places. Setting some StackTraceElements will provide more context as to where these exceptions originate and make debugging easier.
Modifications:
Set a generated StackTraceElement on these pre-instantiated exceptions which at least contains the origin class and method name. The file name and line number are specified as unknown (as stated in the javadocs of StackTraceElement).
Result:
Easier to find the origin of a pre-instantiated exception.
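The mechanism is plain Java; roughly what setting such a synthetic stack trace looks like (class and method names here are placeholders):

```java
import java.nio.channels.ClosedChannelException;

public final class PreInstantiatedExceptionExample {
    // A shared, pre-instantiated exception of the kind described above.
    static final ClosedChannelException CLOSED_CHANNEL_EXCEPTION = new ClosedChannelException();

    static {
        CLOSED_CHANNEL_EXCEPTION.setStackTrace(new StackTraceElement[] {
                // File name and line number are marked unknown, as StackTraceElement's javadoc permits.
                new StackTraceElement("io.netty.example.SomeChannel", "close", null, -1)
        });
    }
}
```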
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues and supports JDK6, which allows us to shade the library in netty-common, allowing it to stay "zero dependency".
Modifications:
- Remove copy-pasted JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier to update JCTools, and less GC pressure as the queue implementation no longer creates as much garbage.
Motivation:
Http2ConnectionHandler will always send a GO_AWAY when the channel is closed. This may cause problems if the user is attempting to control when GO_AWAY is sent and the content of the GO_AWAY.
Modifications:
- When the channel is closed Http2ConnectionHandler should only send a GO_AWAY if one has not already been sent
Result:
The user has more control over when GO_AWAY is sent
Fixes https://github.com/netty/netty/issues/5307
Motivation:
The user may specify a different allocator than the default. In this case we need to ensure it is shared when creating the EmbeddedChannel inside of a ChannelHandler.
Modifications:
Use the config of the "original" Channel in the EmbeddedChannel and so share the same allocator etc.
Result:
Same type of buffers are used.
Motivation:
Performing a server upgrade with a new initial flow control window will cause an NPE in the DefaultHttp2RemoteFlowController. This is due to the fact that the monitor does not check whether or not the channel is writable.
Modifications:
Added a check for channel writability before calling `writePendingBytes`. Also fixed a unit test that was supposed to be testing this :).
Result:
Fixes #5301
Motivation:
At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most popular usage of the composite buffer.
Modifications:
Add new methods that automatically increase the writerIndex when buffers are added.
Result:
Easier usage of CompositeByteBuf.
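A small usage sketch of the difference (the boolean-taking addComponent overload is the convenience added here; treat exact signatures as approximate):

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public final class CompositeByteBufExample {
    public static void main(String[] args) {
        CompositeByteBuf composite = ByteBufAllocator.DEFAULT.compositeBuffer();

        // Old behavior: the component is added but the writer index stays where it was,
        // so the added bytes are not readable until the index is advanced manually.
        composite.addComponent(Unpooled.wrappedBuffer(new byte[] { 1, 2, 3 }));
        System.out.println(composite.readableBytes()); // 0

        // New convenience: pass true to also advance the writer index.
        composite.addComponent(true, Unpooled.wrappedBuffer(new byte[] { 4, 5, 6 }));
        System.out.println(composite.readableBytes()); // 3

        composite.release();
    }
}
```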
Motivation:
The HPACK code currently disallows empty header names. This is not explicitly forbidden by the HPACK RFC https://tools.ietf.org/html/rfc7541. However the HTTP/1.x RFC https://tools.ietf.org/html/rfc7230#section-3.2 and thus HTTP/2 both disallow empty header names, and so this precondition check should be moved from the HPACK code to the protocol level.
HPACK also requires that a huffman-encoded string literal with more than 7 bits of trailing padding be treated as an encoding error (https://tools.ietf.org/html/rfc7541#section-5.2), but this is currently not enforced.
Modifications:
- Modify HPACK to allow empty header names
- HTTP/1.x and HTTP/2 header validation should not allow empty header names
- Enforce max of 7 trailing padding bits
Result:
Code is more compliant with the above mentioned RFCs
Fixes https://github.com/netty/netty/issues/5228
Related: #4333 #4421 #5128
Motivation:
slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.
Modifications:
- Add the following methods, which create a recyclable derived buffer:
- retainedSlice()
- retainedDuplicate()
- readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
  own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
- a user can replace the content of the holder in a consistent way
- copy/duplicate/retainedDuplicate() can delegate the holder
construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
- Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
- Make ReplayingDecoderByteBuf.reject() return an exception instead of
  throwing it so that its callers don't need to add a dummy return
  statement
Result:
Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
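A usage sketch contrasting the old retain-then-slice pattern with the new retained variants (method names per the list above; exact reference-counting behavior depends on the buffer implementation):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class RetainedSliceExample {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(16).writeZero(16);

        // Old pattern: slice() creates a non-recyclable wrapper that shares the
        // parent's reference count, so callers typically retain() it as well.
        ByteBuf oldStyle = buf.slice(0, 8).retain();
        oldStyle.release();

        // New pattern: retainedSlice() returns a derived buffer that can be
        // recycled and that carries its own reference count.
        ByteBuf newStyle = buf.retainedSlice(0, 8);
        newStyle.release();

        buf.release();
    }
}
```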
Motivation:
We tried to provide the ability for the user to change the semantics of the threading model by delegating the invocation of the ChannelHandler to the ChannelHandlerInvoker. Unfortunately this did not really work out well and resulted in just more complexity and splitting of code that belongs together. We should remove the ChannelHandlerInvoker again and just do the same as in 4.0.
Modifications:
Remove ChannelHandlerInvoker again and replace its usage in Http2MultiplexCodec
Result:
Easier code and less bad abstractions.
Motivation:
We need to ensure we release all ReferenceCounted objects during tests to not leak.
Modifications:
Fix possible leaks by calling release()
Result:
No more leaks in tests.
Motivation:
Everything in the http2 package should be considered unstable for now
Modifications:
Add missing annotation on Http2ServerDowngrader
Result:
Clearly mark class as unstable.
Motivation:
We're leaking requests in our Http2ServerDowngrader tests when we
allocate a buffer using the local allocator.
Modification:
Release the request later when the request is constructed with the local
allocator.
Result:
Less leaky tests.
Motivation:
The DefaultHttp2Connection.close method accounts for active streams being iterated and attempts to avoid reentrant modifications of the underlying stream map by using iterators to remove from the stream map. However there are a few issues:
- While iterating over the stream map we don't prevent iterations over the active stream collection
- Removing a single stream may actually remove > 1 streams due to closed non-leaf streams being preserved in the priority tree, which may result in an NPE
Preserving closed non-leaf streams in the priority tree is no longer necessary with our current allocation algorithms, and so this feature (and related complexity) can be removed.
Modifications:
- DefaultHttp2Connection.close should prevent others from iterating over the active streams and reentrant modification scenarios which may result from this
- DefaultHttp2Connection should not keep closed stream in the priority tree
- Remove all associated code in DefaultHttp2RemoteFlowController which accounts for this case including the ReducedState object
- This includes fixing writability changes which depended on ReducedState
- Update unit tests
Result:
Fixes https://github.com/netty/netty/issues/5198
Motivation:
http/2 and http/1.1 have similar protocols, and it's useful to be able
to implement a single server against a single interface. There's an
injection from http/1.1 messages to http/2 ones, so it makes sense to
make folks program against http/1.1 and upgrade them under the hood.
Modifications:
added a MessageToMessageCodec<Http2StreamFrame, HttpObject> which turns
every kind of Http2StreamFrame domain object into an HttpObject domain
object, and then back again on the way out. This one is specialized for
servers, but it should be straightforward to make a symmetric one for
clients, or else extend this one.
Result:
fixes #5199, and it's now simple to make your Http2MultiplexCodec speak
HTTP/1.1
Motivation:
Some codecs should be considered unstable as these are relatively new. For this purpose we should introduce an annotation which these codecs can use to be marked as unstable in terms of API.
Modifications:
- Add UnstableApi annotation and use it on codecs that are not stable
- Move http2.hpack to http2.internal.hpack as it is internal.
Result:
Better document unstable APIs.
Motivation:
The format of some log lines used the printf style formatting instead of the logger {} formatting for variables. This would lead to printing out the literal printf tokens instead of substituting the value.
Modifications:
- Fix the format string for log statements which use printf style formatting
Result:
Logs actually capture the value of the variables instead of fixed string tokens.
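The difference, for reference (shown with SLF4J-style parameterized logging, which Netty's internal loggers also use for {} placeholders):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LogFormatExample {
    private static final Logger logger = LoggerFactory.getLogger(LogFormatExample.class);

    static void report(Object channel) {
        // Wrong: printf-style tokens are not substituted; the output literally contains "%s".
        logger.warn("Failed to close channel %s", channel);

        // Right: the {} placeholder is substituted with the argument.
        logger.warn("Failed to close channel {}", channel);
    }
}
```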
Motivation:
DefaultHttp2FrameReader will stop reading data if any exception is thrown. However some exceptions are recoverable and we will lose data if we don't continue reading. For example some stream errors are recoverable.
Modifications:
- DefaultHttp2FrameReader should attempt to continue reading if a stream error is encountered.
Result:
Fixes https://github.com/netty/netty/issues/5186
Motivation:
We recently added ByteBuf.isReadOnly(), which allows detecting whether a buffer is read-only or not. We should add ByteBuf.asReadOnly() to allow easy access to a read-only version of a buffer.
Modifications:
- Add ByteBuf.asReadOnly()
- Deprecate Unpooled.unmodifiableBuffer(Bytebuf)
Result:
More consistent api.
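A small usage sketch of the new method:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class ReadOnlyExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(8).writeInt(42);

        ByteBuf readOnly = buf.asReadOnly();
        System.out.println(readOnly.isReadOnly()); // true
        System.out.println(readOnly.readInt());    // 42, reading is still allowed

        try {
            readOnly.writeInt(1); // writes to the read-only view are rejected
        } catch (UnsupportedOperationException expected) {
            System.out.println("write rejected");
        }
        buf.release();
    }
}
```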
Motivation:
This allows using handlers for Streams in normal Netty-style. Frames are
read/written to the channel as messages, not directly as a
callback/method call. Handlers allow mixing and can ease HTTP/1 and
HTTP/2 interoperability by eventually supporting HTTP/1 handlers in
HTTP/2 and vice versa.
Modifications:
New handler Http2MultiplexCodec that converts from the current HTTP/2
API to a message-based API and child channels for streams.
Result:
The basics are done for server-side: new streams trigger creation of new
channels in much the same appearance to how new connections trigger new
channel creation. The basic frames HEADERS and DATA are handled, as are
GOAWAY and RST_STREAM.
Inbound flow control is implemented, but outbound is not. That will be
done later, along with not completing write promises on the child
channel until the write actually completes on the parent.
There is not yet support for outbound priority/weight, push promises,
and many other features.
There is a generic Object that may be set on stream frames. This also
paves the way for client-side support which needs a way to refer to
yet-to-be-created streams (due to how HEADERS allocates a stream id, and
the allocation order must be the same as transmission order).
Motivation:
If a single Encoder object is promoted to the old generation then every object
reachable from the promoted object will eventually be promoted as well. A queue
illustrates the problem very well. Say a sequence of inserts and deletions
generate an object graph:
A -> B -> C -> D -> E -> F -> G -> H,
the head of the queue is E, the tail of the queue is H, and A, B, C, D are
dead. If all queue nodes are in the young generation, then a young gc will
clean up the object graph and leave us with:
E -> F -> G -> H
on the other hand, if B and C were previously promoted to the old generation,
then a young collection assumes the reference from C to D is from a live object
(this is a key result of generational gc, no need to mark the old generation).
Hence the young collection assumes the reference to D is a gc root and leaves us
with the object graph:
B -> C -> D -> E -> F -> G -> H.
Eventually D, E, F, G, H, and all queue nodes ever seen from this point on will
be promoted, regardless of their global live or dead status. It is generally
trivial to fix nepotism issues by simply breaking the reference chain after
dequeuing a node.
Currently Encoder objects do not null their references when removed from the
hash map. We have observed a 20X increase in promoted Encoder objects due to
nepotism.
Modifications:
Null before, after, and next fields when removing Encoder objects from maps.
Result:
Fewer promoted Encoder objects, fewer Encoder objects in the old generation,
shorter young collection times, old collections spaced further apart (nepotism
is just really bad). Enjoy.
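For illustration, the standard way to break such chains when unlinking a node (hypothetical node type, not the Encoder's actual entry class):

```java
final class NepotismSketch {
    static final class Node {
        Node before, after, next;
        Object value;
    }

    // Assumes the node sits between non-null neighbours, e.g. in a circular list with a head sentinel.
    static void unlink(Node node) {
        node.before.after = node.after;
        node.after.before = node.before;
        // Break the chain: a dead node that was promoted to the old generation must not
        // keep referencing younger nodes, or it will drag them into the old generation too.
        node.before = null;
        node.after = null;
        node.next = null;
        node.value = null;
    }
}
```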
Motivation:
If an error occurs during a write operation then DefaultHttp2ConnectionEncoder.FlowControlledData will clear the CoalescingBufferQueue which will reset the queue's readable bytes to 0. To recover from an error the DefaultHttp2RemoteFlowController will attempt to return bytes to the flow control window, but since the frame has reset its own size this will lead to invalid flow control accounting.
Modifications:
- DefaultHttp2ConnectionEncoder.FlowControlledData should not reset its size if an error occurs
Result:
No more flow controller errors due to DefaultHttp2ConnectionEncoder.FlowControlledData setting its size to 0 if an error occurs.
Motivation:
HttpServerUpgradeHandler.UpgradeCodec.prepareUpgradeResponse should allow aborting the upgrade and so just continuing with HTTP. Besides this, we should only pass in the response HttpHeaders, as this is in line with the docs.
Modifications:
- UpgradeCodec.prepareUpgradeResponse now returns a boolean and so allows specifying whether the upgrade should take place.
- Change the param from FullHttpResponse to HttpHeaders to be in line with the javadocs.
Result:
More flexible and correct handling of upgrades.
Motivation:
upgradeTo(...) takes the response as a parameter, but the response itself was already written to the Channel. This gives the user the impression that the response can be changed or acted on, which may not be safe once it has been written and released.
Modifications:
Remove the response param from the method.
Result:
Less confusion and safer usage.
Motivation:
Http2Codec.SimpleChannelPromiseAggregator currently fails fast as soon as a tryFailure or setFailure method is called. This can lead to write operations which pass the result of SimpleChannelPromiseAggregator.newPromise to multiple channel.write calls throwing exceptions due to the promise already being done. This behavior is not expected by most of the Netty codecs (SslHandler) and can also create unexpected leaks in the http2 codec (DefaultHttp2FrameWriter).
Modifications:
- Http2Codec.SimpleChannelPromiseAggregator shouldn't complete the promise until doneAllocatingPromises is called
- Usages of Http2Codec.SimpleChannelPromiseAggregator should be adjusted to handle the change in behavior
- What were leaks in DefaultHttp2FrameWriter should be fixed to catch any other cases where ctx.write may throw
Result:
SimpleChannelPromiseAggregator won't generate promises which are done when newPromise is called.
Motivation:
PromiseAggregator's API allows for the aggregate promise to complete before the user is done adding promises. In order to support this use case the API structure would need to change in a breaking manner.
Modifications:
- Deprecate PromiseAggregator and subclasses
- Introduce PromiseCombiner which corrects these issues
Result:
PromiseCombiner corrects the deficiencies in PromiseAggregator.
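A usage sketch of PromiseCombiner (thread-confinement requirements and exact method signatures depend on the Netty version):

```java
import io.netty.util.concurrent.ImmediateEventExecutor;
import io.netty.util.concurrent.Promise;
import io.netty.util.concurrent.PromiseCombiner;

public final class PromiseCombinerExample {
    public static void main(String[] args) {
        Promise<Void> first = ImmediateEventExecutor.INSTANCE.newPromise();
        Promise<Void> second = ImmediateEventExecutor.INSTANCE.newPromise();
        Promise<Void> aggregate = ImmediateEventExecutor.INSTANCE.newPromise();

        PromiseCombiner combiner = new PromiseCombiner();
        combiner.add(first);
        combiner.add(second);
        // The aggregate cannot complete before finish() is called, which is the
        // deficiency in PromiseAggregator that PromiseCombiner addresses.
        combiner.finish(aggregate);

        first.setSuccess(null);
        second.setSuccess(null);
        System.out.println(aggregate.isSuccess()); // true once all added promises completed
    }
}
```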
Motivation:
691bc1690e made writeUtf8 consistent with String.getBytes() so that it never throws.
94f27be59b provided a writeUtf8 method which takes a ByteBufAllocator to do an appropriately sized buffer allocation.
Modifications:
- Assume writeUtf8 will not throw in HTTP/2 codec
- Use the new writeUtf8 method
Result:
Cleaner code in codec-http2.
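A usage sketch of the allocator-taking writeUtf8 overload referenced above (treat the exact signature as approximate):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufUtil;
import io.netty.util.CharsetUtil;

public final class WriteUtf8Example {
    public static void main(String[] args) {
        // The allocator-taking overload sizes the buffer for the UTF-8 output up front,
        // instead of requiring the caller to guess a capacity.
        ByteBuf debugData = ByteBufUtil.writeUtf8(ByteBufAllocator.DEFAULT, "connection went away");
        System.out.println(debugData.toString(CharsetUtil.UTF_8));
        debugData.release();
    }
}
```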
Motivation:
If a close operation occurs while iterating the active streams, it will be queued and processed after the iteration has completed to avoid a concurrent modification exception. However it is possible that during the iteration the stream which was closed could have been removed from the priority tree and its parent set to null. Then after the iteration completes the close operation will attempt to dereference the parent, resulting in an NPE.
Modifications:
- pending close operations should verify the stream's parent is not null before processing the event
Result:
No More NPE.
Motivation:
83c4aa6ad8 changed the log level to warn, but should have changed to debug.
Modifications:
- Change the log level to debug in Http2ConnectionHandler if the GO_AWAY fails to send. The write failure could be the result of the channel already being closed.
Result:
Fixes https://github.com/netty/netty/issues/4930.
Motivation:
cfcee5798d introduced code to resize the headers based upon a weighted average. The weight used for new entries was initialized using integer arithmetic when it should have been floating point arithmetic and so new values contribute 0 weight.
Modifications:
- Cast to float when initializing
Result:
Weighted average does not give 0 weight to new headers in DefaultHttp2HeadersDecoder.
Motivation:
The interface contract of Http2Connection.Listener.onStreamClosed says that the stream will be removed from the active stream map, and not necessarily the stream map. If the channel becomes inactive we may remove from the stream map before calling onStreamClosed.
Modifications:
- Don't remove from the stream map during iteration until after onStreamClosed is called
Result:
Expectations of onStreamClosed interface are not violated
Motivation:
Http2ConnectionHandler inherits from ByteToMessageDecoder. ByteToMessageDecoder.channelInactive will attempt to decode any remaining data by calling the abstract decode method. If the Http2ConnectionHandler is in server mode, and no data has been exchanged yet, it will try to treat this data as an invalid connection preface and write a GO_AWAY. This is noisy in the logs and creates an illusion that there is a protocol violation when there has not been.
Modifications:
- If the channel is inactive the connection preface decode should not be executed.
Result:
Log files don't include misleading error messages related to connection preface errors.
Modifications:
When constructing the FullHttpMessage pass in the ByteBuf to use via the ByteBufAllocator assigned via the context.
Result:
The ByteBuf assigned to the FullHttpMessage can now be configured as a pooled/unpooled, direct/heap based ByteBuf via the ByteBufAllocator used.
Motivation:
Http2Connection.onStreamRemoved is not always called if Http2Connection.onStreamAdded is called. This is problematic as users may rely on the onStreamRemoved method to be called to release ByteBuf objects and do other cleanup.
Modifications:
- Http2Connection.close will remove all existing streams and prevent new ones from being created
- Http2ConnectionHandler will call the new close method in channelInactive
Result:
Http2Connection.onStreamRemoved is always called when Http2Connection.onStreamAdded is called, to preserve the Http2Connection guarantees.
Fixes https://github.com/netty/netty/issues/4838
Motivation:
Commit f990f99 introduced a bug into the RST_STREAM processing that would prevent a RST_STREAM from being sent when it should have been. The promise would be marked as successful even though the RST_STREAM frame would never be sent.
Modifications:
- Fix conditional in Http2ConnectionHandler.resetStream to allow reset streams to be sent in all stream states besides IDLE.
Result:
RST_STREAM frames are now sent when they are supposed to be sent
Fixes https://github.com/netty/netty/issues/4856
Motivation:
Http2CodecUtil uses ByteBufUtil.writeUtf8 but does not account for it
throwing an exception. If an exception is thrown because the input is not
valid UTF-16 that can be encoded as UTF-8, then the buffer will leak.
Modifications:
- Make sure the buffer is released if an exception is thrown
- Ensure call sites of Http2CodecUtil.toByteBuf can tolerate an
  exception being thrown
Result:
No leak if the exception data cannot be converted to UTF-8.
Motivation:
Currently the initial headers for every stream are queued in the flow controller. Since the initial header frame may create streams, the peer must receive these frames in the order in which they were created; otherwise this is a protocol error and the connection will be closed. Tolerating the initial headers being queued would increase the complexity of the WeightedFairQueueByteDistributor, and the benefit of doing so is not clear.
Modifications:
- The initial headers will no longer be queued in the flow controllers
Result:
Fixes https://github.com/netty/netty/issues/4758
Motivation:
HttpConversionUtil's toHttpRequest and toHttpResponse methods can
allocate FullHttpMessage objects, and if an exception is thrown during
the header conversion then this object will not be released. If a
FullHttpMessage is not fired up the pipeline, and the stream is closed
then we remove from the map, but do not release the object. This leads
to a ByteBuf leak. Some of the logic related to stream lifetime management
and FullHttpMessage also predates the RFC being finalized and is not correct.
Modifications:
- Fix leaks in HttpConversionUtil
- Ensure the objects are released when they are removed from the map.
- Correct logic and unit tests where they are found to be incorrect.
Result:
Fixes https://github.com/netty/netty/issues/4780
Fixes https://github.com/netty/netty/issues/3619
Motivation:
When HttpClientUpgradeHandler upgrades from HTTP/1 to another protocol,
it performs a two-step operation:
1. Remove the SourceCodec (HttpClientCodec)
2. Add the UpgradeCodec
When HttpClientCodec is removed from the pipeline, the decoder being
removed triggers channelRead() event with the data left in its
cumulation buffer. However, this is not received by the UpgradeCodec
because it's not added yet. e.g. the HTTP/2 SETTINGS frame sent by the
server can be missed out.
To fix the problem, we need to reverse the steps:
1. Add the UpgradeCodec
2. Remove the SourceCodec
However, this does not work as expected either, because UpgradeCodec can
send a greeting message such as HTTP/2 Preface. Such a greeting message
will be handled by the SourceCodec and will trigger an 'unsupported
message type' exception.
To fix the problem really, we need to make the upgrade process 3-step:
1. Remove/disable the encoder of SourceCodec
2. Add the UpgradeCodec
3. Remove the SourceCodec
Modifications:
- Add SourceCodec.prepareUpgradeFrom() so that SourceCodec can remove or
disable its encoder
- Implement HttpClientCodec.prepareUpgradeFrom() properly
- Miscellaneous:
- Log the related channel as well when logging the failure to send a
GOAWAY
Result:
Cleartext HTTP/1-to-HTTP/2 upgrade works again.
Motivation:
StreamBufferingEncoder provides queueing so that MAX_CONCURRENT_STREAMS is not violated. However the stream id generation provided by Http2Connection.nextStreamId() only returns the next stream id that is expected on the connection and does not account for queueing. The codec should provide a way to generate the next stream id for a given endpoint that functions with or without queueing.
Modifications:
- Change Http2Connection.nextStreamId to Http2Connection.incrementAndGetNextStreamId
Result:
Http2Connection can generate the next stream id in queued and non-queued scenarios.
Fixes https://github.com/netty/netty/issues/4704
Motivation:
If validateHeaders is set in combination with the encoder/decoder it will be silently ignored. We should enforce the constraint that validateHeaders and encoder/decoder are mutually exclusive.
Modifications:
- Make sure that either validateHeaders or the encoder/decoder can be set, but not both.
Result:
AbstractHttp2ConnectionHandlerBuilder does not allow conflicting options to be set.
Motivation:
DefaultHttp2RemoteFlowController does not correctly account for the padding in the event that frames are merged. This causes the internal state of DefaultHttp2RemoteFlowController to become corrupt and can result in attempting to write frames when there are none.
Modifications:
- Update DefaultHttp2RemoteFlowController to account for frame sizes not necessarily adding together.
Result:
DefaultHttp2RemoteFlowController internal state does not become corrupt when padding is present.
Fixes https://github.com/netty/netty/issues/4573
Motivation:
Currently it's impossible to distinguish which
connection the corresponding logged message is
related to.
Modifications:
Http2FrameLogger is extended to support channel
id logging, usages in Inbound/Outbound Frame
Loggers are adjusted accordingly.
Result:
Logger outputs the channel id.
Motivation:
HttpConversionUtil.toHttp2Headers currently has a throws Exception as part of the signature. This comes from the signature of ByteProcessor.process, but is not necessary because the ByteProcessor used does not throw.
Modifications:
- Remove throws Exception from the signature of HttpConversionUtil.toHttp2Headers.
Result:
HttpConversionUtil.toHttp2Headers interface does not propagate a throws Exception when it is used.
Motivation:
- AbstractHttp2ConnectionHandlerBuilder.encoderEnforceMaxConcurrentStreams can be the primitive boolean
- SpdySession.StreamComparator should not be Serializable since SpdySession is not Serializable
Modifications:
Use boolean instead and remove Serializable
Result:
- Minor improvement for AbstractHttp2ConnectionHandlerBuilder
- StreamComparator is not Serializable any more
Motivation:
Javadoc reports errors about invalid docs.
Modifications:
Fix some errors reported by javadoc.
Result:
A lot of javadoc errors are fixed by this patch.
Motivation:
There are some wrong links and tags in javadoc.
Modifications:
Fix the wrong links and tags in javadoc.
Result:
These links will work correctly in javadoc.
Motivation:
If the stream window is negative UniformStreamByteDistributor may write data. This is prohibited by the RFC https://tools.ietf.org/html/rfc7540#section-6.9.2.
Modifications:
- UniformStreamByteDistributor should use StreamState.isWriteAllowed()
Result:
UniformStreamByteDistributor is more compliant with the HTTP/2 RFC.
Fixes https://github.com/netty/netty/issues/4545
Motivation:
RemoteFlowController.streamWritten is not currently required. We should remove it to keep interfaces minimal.
Modifications:
- Remove RemoteFlowController.streamWritten
Result:
One less method in the RemoteFlowController interface.
Fixes https://github.com/netty/netty/issues/4600
Motivation:
PriorityStreamByteDistributor is now obsolete and can be replaced by WeightedFairQueueByteDistributor.
Modifications:
- Remove PriorityStreamByteDistributor and use WeightedFairQueueByteDistributor by default.
Result:
PriorityStreamByteDistributor no longer has to be maintained and is replaced by a better algorithm.
Motivation:
DefaultHttp2RemoteFlowController.ListenerWritabilityMonitor no longer reliably detects when a stream's writability change occurs.
Modifications:
- Ensure writability is reliably reported by DefaultHttp2RemoteFlowController.ListenerWritabilityMonitor
- Fix infinite loop issue (https://github.com/netty/netty/issues/4588) detected when consolidating unit tests
Result:
Reliable stream writability change notification, and 1 less infinite loop in UniformStreamByteDistributor.
Fixes https://github.com/netty/netty/issues/4587
Related: #4572 #4574
Motivation:
Consistency in our builder API design
Modifications:
- Add AbstractInboundHttp2ToHttpAdapterBuilder
- Replace the old 'Builder's with InboundHttp2ToHttpAdapterBuilder and
InboundHttp2ToHttpPriorityAdapterBuilder
Result:
Builder API consistency
Motivation:
PriorityStreamByteDistributor uses a homegrown algorithm which distributes bytes to nodes in the priority tree. PriorityStreamByteDistributor has no concept of goodput which may result in poor utilization of network resources. PriorityStreamByteDistributor also has performance issues related to the tree traversal approach and number of nodes that must be visited. There also exist some more proven algorithms from the resource scheduling domain which PriorityStreamByteDistributor does not employ.
Modifications:
- Introduce a new ByteDistributor which uses elements from weighted fair queue schedulers
Result:
StreamByteDistributor which is sensitive to priority and uses a more familiar distribution concept.
Fixes https://github.com/netty/netty/issues/4462
Related: #4572
Motivation:
- A user might want to extend Http2ConnectionHandler and define his/her
own static inner Builder class that extends
Http2ConnectionHandler.BuilderBase. This introduces potential
confusion because there's already Http2ConnectionHandler.Builder. Your
IDE will warn about this name duplication as well.
- BuilderBase exposes all setters with the public modifier. A user's Builder
  might not want to expose them, in order to enforce a certain configuration.
  There's no way to hide them because they are already public and final.
- BuilderBase.build(Http2ConnectionDecoder, Http2ConnectionEncoder)
ignores most properties exposed by BuilderBase, such as
validateHeaders, frameLogger and encoderEnforceMaxConcurrentStreams.
If any build() method ignores the properties exposed by the builder,
there's something wrong.
- A user's Builder that extends BuilderBase might want to require more
parameters in build(). There's no way to do that cleanly because
build() is public and final already.
Modifications:
- Make BuilderBase and Builder top-level so that there's no duplicate
name issue anymore.
- Add AbstractHttp2ConnectionHandlerBuilder
- Add Http2ConnectionHandlerBuilder
- Add HttpToHttp2ConnectionHandlerBuilder
- Make all builder methods in AbstractHttp2ConnectionHandlerBuilder
protected so that a subclass can choose which methods to expose
- Provide only a single build() method
- Add connection() and codec() so that a user can still specify
Http2Connection or Http2Connection(En|De)coder explicitly
- Implement a proper state validation mechanism so that invoking
  conflicting setters is prevented
Result:
Less confusing yet flexible builder API
Motivation:
Sometimes the user wants to set an Http2HeaderEncoder.SensitivityDetector when building an Http2ConnectionHandler.
Modifications:
Allow setting the Http2HeaderEncoder.SensitivityDetector via the builder.
Result:
More flexible building of Http2ConnectionHandler possible.
Motivation:
We already provide a NEVER_SENSITIVE instance; we should add ALWAYS_SENSITIVE as well.
Modifications:
Add an ALWAYS_SENSITIVE instance which will always return true when checked for sensitivity.
Result:
User can reuse code.
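A sketch of what an always-sensitive detector looks like (the interface appears as Http2HeadersEncoder.SensitivityDetector in the codec; treat the exact signature as approximate):

```java
import io.netty.handler.codec.http2.Http2HeadersEncoder;

public final class SensitivityExample {
    // Marks every header as sensitive so it is never indexed in the HPACK dynamic table.
    static final Http2HeadersEncoder.SensitivityDetector ALWAYS_SENSITIVE =
            new Http2HeadersEncoder.SensitivityDetector() {
                @Override
                public boolean isSensitive(CharSequence name, CharSequence value) {
                    return true;
                }
            };
}
```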
Motivation:
Because we flow control HEADERS frames, it's possible that an intermediate error can result in a RST_STREAM frame being sent for a stream that the other endpoint is not yet aware of. This is a violation of the spec and will either result in spammy logs at the other endpoint or broken connections.
Modifications:
Modified the HTTP/2 handler so that it only sends RST_STREAM if it has sent at least one HEADERS frame to the remote endpoint for the stream.
Result:
Fixes#4465
Motivation:
The encoder is currently responsible for chunking frames when writing in order to conform to the max frame size. The frame writer would be a better place for this since it could reuse the same promise aggregator for all the writes and could also perform a single allocation for all of the frame headers.
Modifications:
Modified `DefaultHttp2FrameWriter` to perform the chunking and modified the contract in the `Http2FrameWriter` interface. Modified `DefaultHttp2ConnectionEncoder` to give all allocated bytes to the writer.
Result:
Fixes#3966
Motivation:
The UniformStreamByteDistributor currently processes all zero-length frames, regardless of add order. This means that we would always send HEADERS for all streams, possibly taking away bandwidth for streams that actually have data.
Modifications:
Empty frames are now treated the same as any other frame, except that the algorithm will pop off any empty frames at the head of the queue.
Result:
Empty frames require no extra processing.
Motivation:
DefaultHeaders creates an array of size 16 for all headers. This may waste a good deal of memory if applications only have a small number of headers. This memory may be critical when the number of connections grows large.
Modifications:
- Make the size of the array for DefaultHeaders configurable
Result:
Applications can control the size of the DefaultHeaders array and save memory.
Motivation:
The current priority algorithm can yield poor per-stream goodput when either the number of streams is high or the connection window is small. When all priorities are the same (i.e. priority is disabled), we should be able to do better.
Modifications:
Added a new UniformStreamByteDistributor that ignores priority entirely and manages a queue of streams. Each stream is allocated a minimum of 1KiB on each iteration.
Result:
Improved goodput when priority is not used.
Motivation:
The `DefaultHttp2RemoteFlowController` does not correctly determine `hasFrame` when updating the stream state for the distributor. Adding a check to enforce `hasFrame` when `streamableBytes > 0` causes several test failures.
Modifications:
Modified `DefaultHttp2RemoteFlowController` to simplify the writing logic and to correct the bookkeeping for `hasFrame`.
Result:
The distributors are always called with valid arguments.
Motivation:
The twitter hpack project does not have the support that it used to have. See discussion here: https://github.com/netty/netty/issues/4403.
Modifications:
Created a new module in Netty and copied the latest from twitter hpack master.
Result:
Netty no longer depends on twitter hpack.