Motivation:
We don't need the extra ChannelPromise when writing headers anymore in Http2FrameCodec. This also means we can re-use a ChannelFutureListener and so do not need to create new instances all the time.
Modifications:
- Just pass the original ChannelPromise when writing headers
- Reuse the ChannelFutureListener
Result:
Two fewer objects created when writing headers for a not-yet-created stream.
Motivation:
ff0045e3e1 changed HpackHuffmanDecoder to use a lookup table, which greatly improved performance. We can squeeze out another 3% win by using a ByteProcessor, which reduces the number of bounds checks / reference-count checks needed by processing byte by byte.
Modifications:
Implement logic with ByteProcessor
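As an illustration of the pattern only (not the actual HpackHuffmanDecoder code; decodeByte is a hypothetical placeholder for the lookup-table transition):
```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

final class HuffmanDecodeSketch implements ByteProcessor {
    @Override
    public boolean process(byte value) {
        // Hypothetical per-byte step: feed the byte into the lookup-table
        // based state machine; returning true keeps the iteration going.
        decodeByte(value);
        return true;
    }

    private void decodeByte(byte value) {
        // ... lookup-table transition, omitted for brevity ...
    }

    // forEachByte(...) walks the readable bytes in a single pass, avoiding
    // the bounds / reference-count checks of repeated getByte(i) calls.
    void decode(ByteBuf input) {
        input.forEachByte(this);
    }
}
```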
Result:
Another ~3% perf improvement which shows up when using h2load to simulate load.
`h2load -c 100 -m 100 --duration 60 --warm-up-time 10 http://127.0.0.1:8080`
Before:
```
finished in 70.02s, 620051.67 req/s, 20.70MB/s
requests: 37203100 total, 37203100 started, 37203100 done, 37203100 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 37203100 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.21GB (1302108500) total, 41.84MB (43872600) headers (space savings 90.00%), 460.24MB (482598600) data
min max mean sd +/- sd
time for request: 404us 24.52ms 15.93ms 1.45ms 87.90%
time for connect: 0us 0us 0us 0us 0.00%
time to 1st byte: 0us 0us 0us 0us 0.00%
req/s : 6186.64 6211.60 6199.00 5.18 65.00%
```
With this change:
```
finished in 70.02s, 642103.33 req/s, 21.43MB/s
requests: 38526200 total, 38526200 started, 38526200 done, 38526200 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 38526200 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.26GB (1348417000) total, 42.39MB (44444900) headers (space savings 90.00%), 466.25MB (488893900) data
min max mean sd +/- sd
time for request: 370us 24.89ms 15.52ms 1.35ms 88.02%
time for connect: 0us 0us 0us 0us 0.00%
time to 1st byte: 0us 0us 0us 0us 0.00%
req/s : 6407.06 6435.19 6419.74 5.62 67.00%
```
Motivation:
In the latest release we introduced Http2MultiplexHandler as a replacement for Http2MultiplexCodec. This split the frame parsing from the multiplexing to allow a more flexible way of handling frames and to make the code cleaner. Unfortunately we missed handling this case in Http2ServerUpgradeCodec and so did not add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...). As a result the Http2MultiplexHandler did not receive the upgrade event and so never created the Http2StreamChannel for the upgrade stream. Because of this we ended up with an NPE if a frame was dispatched to the upgrade stream later on.
Modifications:
- Correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9314.
Motivation:
b3dba317d7 added AbstractHttp2ConnectionHandlerBuilder.autoAckSettingsFrame(...) as a protected method and made it public in Http2MultiplexCodecBuilder. Unfortunately it missed also making it public in Http2FrameCodecBuilder.
Modifications:
Correctly override autoAckSettingsFrame in Http2FrameCodecBuilder and so make it usable when building Http2FrameCodec.
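A brief usage sketch of the now-public builder option (server side shown; adding the codec to a pipeline is omitted):
```java
import io.netty.handler.codec.http2.Http2FrameCodec;
import io.netty.handler.codec.http2.Http2FrameCodecBuilder;

final class ManualSettingsAckCodec {
    static Http2FrameCodec newCodec() {
        // Leave SETTINGS acknowledgement to the application instead of the
        // codec ACKing automatically.
        return Http2FrameCodecBuilder.forServer()
                .autoAckSettingsFrame(false)
                .build();
    }
}
```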
Result:
Be able to also configure autoAckSettingsFrame when Http2FrameCodec is used.
Motivation:
We should not propagate Http2WindowUpdateFrames to the child channels at all, as these are not really useful and should not be flow-controlled via `read()` anyway. On the other hand, Http2ResetFrame is very useful but should be propagated via a user event so the user is directly aware of it even if the user stops reading.
Modifications:
- Don't propagate Http2WindowUpdateFrames when using Http2MultiplexHandler
- Deliver Http2ResetFrame as a user event when using Http2MultiplexHandler (see the sketch after this list)
- Adjust javadoc of Http2MultiplexHandler
- Add unit tests
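A minimal sketch of a child-channel handler reacting to the reset user event (the handler name is illustrative; ChannelInboundHandlerAdapter is used for brevity):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http2.Http2ResetFrame;

final class ResetAwareHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof Http2ResetFrame) {
            // The remote peer reset the stream (the error code is available
            // via Http2ResetFrame.errorCode()); tear down local state even if
            // this child channel currently has no outstanding read().
            ctx.close();
            return;
        }
        ctx.fireUserEventTriggered(evt);
    }
}
```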
Result:
Fixes https://github.com/netty/netty/pull/8889 and https://github.com/netty/netty/pull/7635
Motivation:
Http2MultiplexCodec and Http2MultiplexHandler had a very strong coupling with Http2FrameCodec which we can reduce easily. The end-goal should be to have no coupling at all.
Modifications:
- Reduce coupling by move some common logic to Http2CodecUtil
- Move logic to check if a stream may have existed before to Http2FrameCodec
- Use ArrayDeque as a replacement for the custom doubly-linked list, which makes the code a lot more readable
- Use WindowUpdateFrame to signal consumed bytes (just as users do when they use Http2FrameCodec directly)
Result:
Less coupling and cleaner code.
Motivation:
In the past we had the following class hierarchy:
Http2ConnectionHandler --- Http2FrameCodec -- Http2MultiplexCodec
This hierarchy makes it impossible to plug in any code that would like to act on Http2Frame and Http2StreamFrame, which can be quite useful in various situations (like metrics, logging etc). Beside this it also made the implementation very hacky. To allow easier maintenance and more flexible customizations we should split Http2MultiplexCodec and Http2FrameCodec.
Modifications:
- Introduce Http2MultiplexHandler (which is a replacement for Http2MultiplexCodec when used together with Http2FrameCodec)
- Mark Http2MultiplexCodecBuilder and Http2MultiplexCodec as deprecated. People should use Http2FrameCodecBuilder / Http2FrameCodec together with Http2MultiplexHandler in the future (see the sketch after this list)
- Adjust / Add tests
- Adjust examples
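A hedged sketch of the new wiring, replacing a single Http2MultiplexCodec with the two handlers (the stream-channel initializer body is just a placeholder):
```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http2.Http2FrameCodecBuilder;
import io.netty.handler.codec.http2.Http2MultiplexHandler;

final class Http2ServerInitializer extends ChannelInitializer<Channel> {
    @Override
    protected void initChannel(Channel ch) {
        ChannelPipeline p = ch.pipeline();
        // Frame parsing and stream multiplexing are now two separate handlers.
        p.addLast(Http2FrameCodecBuilder.forServer().build());
        p.addLast(new Http2MultiplexHandler(new ChannelInitializer<Channel>() {
            @Override
            protected void initChannel(Channel streamChannel) {
                // Handlers added here run on each Http2StreamChannel.
            }
        }));
    }
}
```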
Result:
More flexible usage possible and less hacky / coupled implementation for http2 multiplexing
Motivation:
For HTTP/2 messages with multiple cookies HttpConversionUtil.addHttp2ToHttpHeaders spends a good portion of time creating throwaway StringBuilders.
Modification:
Handle cookies lazily by using a ThreadLocal StringBuilder and then converting it to the H1 header at the end.
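A generic sketch of the reuse pattern (names are illustrative, not the actual HttpConversionUtil code):
```java
// Reuse a per-thread StringBuilder instead of allocating one per message.
final class CookieJoiner {
    private static final ThreadLocal<StringBuilder> BUILDER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String join(Iterable<? extends CharSequence> cookieValues) {
        StringBuilder sb = BUILDER.get();
        sb.setLength(0); // reset the recycled builder
        for (CharSequence value : cookieValues) {
            if (sb.length() > 0) {
                sb.append("; ");
            }
            sb.append(value);
        }
        // Materialize the single HTTP/1 "cookie" header value only once, at the end.
        return sb.toString();
    }
}
```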
Result:
Less allocations.
Motivation:
f945a071db decoupled the writability state from the flow controller but could lead to a lot of writability-change events being propagated to the child channels. This change ensures we only take into account whether the parent channel has become writable again before we try to set the child channels to writable.
Modifications:
Only act on channel writability changes when the parent channel becomes writable again.
Result:
Less writability updates.
Motivation:
We should decouple the writability state of the http2 child channels from the flow-controller and just tie it to its own pending bytes counter that is decremented by the parent Channel once the bytes have been written.
Modifications:
- Decouple writability state of child channels from the flow-controller
- Update tests
Result:
Less coupling and more correct behavior. Fixes https://github.com/netty/netty/issues/8148.
Motivation:
b4e3c12b8e introduced code to avoid coupling
close() to graceful close. It also added some code which attempted to infer when
a graceful close was being done in writing of a GOAWAY to preserve the
"connection is closed when all streams are closed behavior" for the child
channel API. However the implementation was too overzealous and may preemptively
close the connection if there are not currently any open streams (and close if
there are any frames which create streams in flight).
Modifications:
- Decouple writing a GOAWAY from trying to infer if a graceful close is being
done and closing the connection. Even if we could enhance this logic (e.g.
wait to close until the second GOAWAY with no error) it is possible the user
doesn't want the connection to be closed yet. We can add a means for the codec
to orchestrate the graceful close in the future (e.g. write some special "close
the connection when all streams are closed") but for now we can just let the
application handle this.
Result:
Fixes https://github.com/netty/netty/issues/9207
Motivation:
The first final version of GraalVM was released which deprecated some flags. We should use the new ones.
Modifications:
Removes the use of deprecated GraalVM native-image flags
Adds a flag to initialize netty at build time.
Result:
Do not use deprecated flags
Motivation:
An OOME can occur due to suppressedExceptions growing when other libraries call Throwable#addSuppressed on our shared exception instances. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.
Modifications:
Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.
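A sketch of how a shared exception instance can be protected, using the protected Throwable constructor (available since Java 7) that disables suppression and stack-trace capture:
```java
final class StaticExceptionExample {
    // Suppression and the writable stack trace are both disabled, so calling
    // addSuppressed(...) on this shared instance cannot accumulate state.
    private static final Exception CLOSED = new SharedClosedException();

    private static final class SharedClosedException extends Exception {
        SharedClosedException() {
            super("connection closed", null, /* enableSuppression */ false,
                    /* writableStackTrace */ false);
        }
    }

    static Exception closedException() {
        return CLOSED;
    }
}
```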
Result:
Not possible to OOME because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
Motivation:
Http2MultiplexCodec.DefaultHttp2StreamChannel currently only acts on ClosedChannelException when checking isAutoClose(). We should widen the scope to IOException to be more consistent with AbstractChannel.
Modifications:
Replace instanceof ClosedChannelException with instanceof IOException
Result:
More consistent handling of isAutoClose()
Motivation:
GraalVM native images are a new way to deliver Java applications. Netty is one of the most popular libraries, however there are a few limitations that make it impossible to use with native images out of the box. Adding a little metadata in specific modules allows the compilation to succeed and produce working binaries.
Modification:
Added properties files in `META-INF` and substitution classes (under `internal.svm`) to solve the compilation issues. The substitution classes are not exported and do not have a public constructor, so they are not visible to end users.
Result:
Fixes #8959
This fix is very conservative as it applies the minimum config required to build:
* pure netty servers
* vert.x applications
* grpc applications
The build is having trouble due to checkstyle which does not seem to be able to find the copyright notice on property files.
Motivation:
Http2ConnectionHandler#close(..) always runs the GOAWAY and graceful close
logic. This coupling means that a user would have to override
Http2ConnectionHandler#close(..) to modify the behavior, and the
Http2FrameCodec and Http2MultiplexCodec are not extendable so you cannot
override at this layer. Ideally we can totally decouple the close(..) of the
transport and the GOAWAY graceful closure process completely, but to preserve
backwards compatibility we can add an opt-out option to decouple where the
application is responsible for sending a GOAWAY with error code equal to
NO_ERROR as described in https://tools.ietf.org/html/rfc7540#section-6.8 in
order to initiate graceful close.
Modifications:
- Http2ConnectionHandler supports an additional boolean constructor argument to
opt out of close(..) going through the graceful close path.
- Http2FrameCodecBuilder and Http2MultiplexCodec expose
gracefulShutdownTimeoutMillis but do not hook them up properly. Since these
are already exposed we should hook them up and make sure the timeout is applied
properly.
- Http2ConnectionHandler's goAway(..) method from Http2LifecycleManager should
initiate the graceful closure process after writing a GOAWAY frame if the error
code is NO_ERROR. This means that writing a Http2GoAwayFrame from
Http2FrameCodec will initiate graceful close.
Result:
Http2ConnectionHandler#close(..) can now be decoupled from the graceful close
process, and immediately close the underlying transport if desired.
Motivation:
DefaultHttp2ConnectionEncoder uses SimpleChannelPromiseAggregator to combine two
operations into a single future status. However it directly uses the
SimpleChannelPromiseAggregator object instead of using the newPromise() method
in one case. This may result in premature completion of the aggregated future.
Modifications:
- DefaultHttp2ConnectionEncoder to use
SimpleChannelPromiseAggregator#newPromise() instead of directly using the
SimpleChannelPromiseAggregator instance when writing the settings ACK frame
Result:
More correct status for the SETTINGS ACK frame writing when auto settings ACK is
disabled.
Motivation:
The HTTP/2 codec will synchronously respond to a SETTINGS frame with a SETTINGS
ACK before the application sees the SETTINGS frame. The application may need to
adjust its state depending upon what is in the SETTINGS frame before applying
the remote settings and responding with an ACK (e.g. to adjust for max
concurrent streams). In order to accomplish this the HTTP/2 codec should allow
for the application to opt-in to sending the SETTINGS ACK.
Modifications:
- DefaultHttp2ConnectionDecoder should support a mode where SETTINGS frames can
be queued instead of immediately applying and ACKing.
- DefaultHttp2ConnectionEncoder should attempt to poll from the queue (if it
exists) to apply the earliest received but not yet ACKed SETTINGS frame.
- AbstractHttp2ConnectionHandlerBuilder (and sub classes) should support a new
option to enable the application to opt-in to managing SETTINGS ACK.
Result:
HTTP/2 allows for asynchronous SETTINGS ACK managed by the application.
Motivation:
Http2FrameCodec currently fails the write promise associated with creating a
stream with a Http2NoMoreStreamIdsException. However this means the user code
will have to listen to all write futures in order to catch this scenario which
is the same as receiving a GOAWAY frame. We can also simulate receiving a GOAWAY
frame from our remote peer and that allows users to consolidate graceful close
logic in the GOAWAY processing.
Modifications:
- Http2FrameCodec should simulate a DefaultHttp2GoAwayFrame when trying to
create a stream but the stream IDs have been exhausted.
Result:
Applications can rely upon GOAWAY for graceful close processing instead of also
processing write futures.
Motivation:
We should not throw checked exceptions when the user calls sync*() but instead wrap them in a CompletionException to make it easier for people to reason about what happens.
Modifications:
- Change sync*() to throw CompletionException
- Adjust tests
- Add some more tests
Result:
Fixes https://github.com/netty/netty/issues/8521.
Motivation:
DefaultPromise requires an EventExecutor which provides the thread to notify listeners on, and this EventExecutor can never change. We can remove the code that supported the possibility of changing the executor as this is not possible anymore.
Modifications:
- Remove the constructor which allowed constructing a *Promise without an EventExecutor
- Remove extra state
- Adjusted SslHandler and ProxyHandler for new code
Result:
Fixes https://github.com/netty/netty/issues/8517.
Motivation:
For whatever reason Http2FrameLogger extended ChannelHandlerAdapter but did not override any of its methods. We should not extend it at all as it is not a ChannelHandler.
Modifications:
Remove extends
Result:
Less confusing / more correct / clear code.
Motivation:
In 42742e233f we already added default methods to Channel*Handler and deprecated the Adapter classes to simplify the class hierarchy. With this change we go even further and merge everything into just ChannelHandler. This simplifies things even more in terms of class-hierarchy.
Modifications:
- Merge ChannelInboundHandler | ChannelOutboundHandler into ChannelHandler
- Adjust code to just use ChannelHandler
- Deprecate old interfaces.
Result:
Cleaner and simpler code in terms of class-hierarchy.
Motivation:
com.puppycrawl.tools checkstyle < 8.18 was reported to contain a possible security flaw. We should upgrade.
Modifications:
- Upgrade netty-build and checkstyle.
- Fix checkstyle errors
Result:
Fixes https://github.com/netty/netty/issues/8968.
Motivation:
As we now use Java 8 as the minimum Java version we can deprecate ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter and just move the default implementations into the interfaces. This makes things a bit more flexible for the end-user and also simplifies the class hierarchy.
Modifications:
- Mark ChannelInboundHandlerAdapter and ChannelOutboundHandlerAdapter as deprecated
- Add default implementations to ChannelInboundHandler / ChannelOutboundHandler
- Refactor our code to not use ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter anymore
Result:
Cleanup class-hierarchy and make things a bit more flexible.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor (usage sketched after this list).
- Check if methods are called from within the EventExecutor thread and if not fail
- Correctly dispatch on the right EventExecutor if the Future uses a different EventExecutor to eliminate concurrency issues.
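For example, assuming the caller is already on the channel's event loop:
```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseCombiner;

final class CombinedWriteExample {
    // Must be invoked from ch.eventLoop(); the combiner now enforces this.
    static void writeBoth(Channel ch, Object first, Object second, ChannelPromise aggregate) {
        PromiseCombiner combiner = new PromiseCombiner(ch.eventLoop());
        combiner.add(ch.write(first));
        combiner.add(ch.write(second));
        combiner.finish(aggregate); // completes once both writes complete
        ch.flush();
    }
}
```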
Result:
More safe use of PromiseCombiner + enforce correct usage / contract.
Motivation:
When more than one Connection header is present in an h2c upgrade request, the upgrade fails. This fixes that.
Modification:
In HttpServerUpgradeHandler's upgrade() method, check whether any of the Connection header values is "upgrade", not just the first header value, which might be something other than "upgrade".
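A hedged sketch of the kind of check involved (helper names are illustrative, not the actual HttpServerUpgradeHandler code):
```java
import java.util.List;

import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.AsciiString;

final class UpgradeCheck {
    // Inspect every Connection header (there may be more than one) and every
    // comma-separated token within each value, not just the first value.
    static boolean connectionHeaderHasUpgrade(HttpRequest request) {
        List<String> values = request.headers().getAll(HttpHeaderNames.CONNECTION);
        for (String value : values) {
            for (String token : value.split(",")) {
                if (AsciiString.contentEqualsIgnoreCase(token.trim(), HttpHeaderValues.UPGRADE)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```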
Result:
Fixes #8846.
With this PR, when multiple Connection headers are sent with the upgrade request, the upgrade no longer fails.
Motivation:
We can replace some "hand-rolled" integer checks with our own static utility method to simplify the code.
Modifications:
Use methods provided by `ObjectUtil`.
Result:
Cleaner code and less duplication
Motivation:
We can just use Objects.requireNonNull(...) as a replacement for ObjectUtil.checkNotNull(...)
Modifications:
- Use Objects.requireNonNull(...)
Result:
Less code to maintain.
Motivation:
ChannelHandler.exceptionCaught(...) was marked as @deprecated as it should only exist in inbound handlers.
Modifications:
Remove ChannelHandler.exceptionCaught(...) and adjust code / tests.
Result:
Fixes https://github.com/netty/netty/issues/8527
Motivation:
We can use lambdas now as we use Java8.
Modification:
Use lambdas in all packages; #8751 only migrated the transport package.
Result:
Code cleanup.
Motivation:
We need to update to a new checkstyle plugin to allow the usage of lambdas.
Modifications:
- Update to new plugin version.
- Fix checkstyle problems.
Result:
Be able to use checkstyle plugin which supports new Java syntax.
* Decouple EventLoop details from the IO handling for each transport to allow easy re-use of code and customization
Motivation:
As of today, extending EventLoop implementations to add custom logic / metrics / instrumentation is only possible in a very limited way, if at all. This is due to the fact that most implementations are final or even package-private. That said, even if these were public, the ability to do something useful with them is very limited as the IO processing and task processing are very tightly coupled. All of the mentioned things are a big pain point in netty 4.x and need improvement.
Modifications:
This changeset decouples the IO processing logic from the task processing logic for the main transports (NIO, Epoll, KQueue) by introducing the concept of an IoHandler. The IoHandler itself is responsible for waiting for IO readiness and processing these IO events. The execution of the IoHandler itself is done by the SingleThreadEventLoop as part of its EventLoop processing. This allows using the same EventLoopGroup (MultithreadEventLoopGroup) for all the mentioned transports by just specifying a different IoHandlerFactory during construction.
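A hedged sketch of what transport selection looks like after this change (constructor and factory names such as NioHandler.newFactory() follow this changeset and may differ in detail):
```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.nio.NioHandler;

final class EventLoopSetup {
    static EventLoopGroup newNioGroup(int threads) {
        // The same MultithreadEventLoopGroup is reused for every transport;
        // only the IoHandlerFactory differs (NioHandler, EpollHandler, ...).
        return new MultithreadEventLoopGroup(threads, NioHandler.newFactory());
    }
}
```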
Beside this core API change, this changeset also allows easily extending SingleThreadEventExecutor / SingleThreadEventLoop to add custom logic which can then be reused by all the transports. The ideas are very similar to what is provided by ScheduledThreadPoolExecutor (which is part of the JDK). This allows for example things like:
* Adding instrumentation / metrics:
* how many Channels are registered on a SingleThreadEventLoop
* how many Channels were handled during the IO processing in an EventLoop run
* how many tasks were handled during the last EventLoop / EventExecutor run
* how many outstanding tasks we have
...
* Implementing custom strategies for choosing the next EventExecutor / EventLoop to use based on these metrics.
* Use different Promise / Future / ScheduledFuture implementations
* decorate Runnable / Callables when submitted to the EventExecutor / EventLoop
As a lot of functionality is folded into MultithreadEventLoopGroup and SingleThreadEventLoopGroup, this changeset also removes:
* AbstractEventLoop
* AbstractEventLoopGroup
* EventExecutorChooser
* EventExecutorChooserFactory
* DefaultEventLoopGroup
* DefaultEventExecutor
* DefaultEventExecutorGroup
Result:
Fixes https://github.com/netty/netty/issues/8514.
Motivation:
We can use the diamond operator these days.
Modification:
Use diamond operator whenever possible.
Result:
More modern code and less boiler-plate.
Motivation:
Netty uses its own Integer.compare and Long.compare methods. Since Java 7 we can use the Java implementations instead.
Modification:
Remove own implementation
Result:
Less code to maintain
Motivation:
When a write error happens while writing flow-controlled data frames we fail to correctly detect this in the write loop, which may result in an infinite loop as we will never detect that the frame should be removed from the queue.
Modifications:
- When we fail a flow-controlled data frame we ensure that the next frame.write(...) call will signal back that the whole frame was handled and so can be removed.
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8707.
Motivation:
Because of how we implemented the registration / deregistration of an EventLoop it was not possible to wrap an EventLoop implementation and use it with a Channel.
Modification:
- Introduce EventLoop.Unsafe which is responsible for the actual registration.
- Move validation of EventLoop / Channel combo to the EventLoop
- Add unit test that verifies that wrapping works
Result:
Be able to wrap an EventLoop and so add some extra functionality.
Motivation:
At the moment it's possible to have a Channel in Netty that is not registered / assigned to an EventLoop until register(...) is called. This is suboptimal, as if the Channel is not registered it is also not possible to do anything useful with a ChannelFuture that belongs to the Channel. We should think about having the EventLoop as a constructor argument of the Channel and having the register / deregister methods only have the effect of adding a Channel to KQueue/Epoll/... It is also currently possible to deregister a Channel from one EventLoop and register it with another EventLoop. This operation defeats the threading model assumptions that are widespread in Netty, and requires careful user-level coordination to pull off without any concurrency issues. It is not a commonly used feature in practice, may be better handled by other means (e.g. client side load balancing), and therefore we propose removing this feature.
Modifications:
- Change all Channel implementations to require an EventLoop for construction ( + an EventLoopGroup for all ServerChannel implementations)
- Remove all register(...) methods from EventLoopGroup
- Add ChannelOutboundInvoker.register(...) which now basically means we want to register on the EventLoop for IO.
- Change ChannelUnsafe.register(...) to not take an EventLoop as parameter (as the EventLoop is supplied on construction).
- Change ChannelFactory to take an EventLoop to create new Channels and introduce ServerChannelFactory which takes an EventLoop and one EventLoopGroup to create new ServerChannel instances.
- Add ServerChannel.childEventLoopGroup()
- Ensure all operations on the accepted Channel are done in the EventLoop of the Channel in ServerBootstrap
- Change unit tests for new behaviour
Result:
A Channel always has an EventLoop assigned which will never change during its life-time. This ensures we are always able to call any operation on the Channel once it is constructed (until the EventLoop is shut down). This also simplifies the logic in DefaultChannelPipeline a lot, as we can always call handlerAdded / handlerRemoved directly without the need to wait for register() to happen.
Also note that it's still possible to deregister a Channel and register it again. It's just not possible anymore to move from one EventLoop to another (which was not really safe anyway).
Fixes https://github.com/netty/netty/issues/8513.
Motivation:
In Http2FrameCodec we made the incorrect assumption that we can have at most one buffered outbound stream. This is not correct and we need to account for multiple buffered streams.
Modifications:
- Use a map to allow buffering multiple streams
- Add unit test.
Result:
Fixes https://github.com/netty/netty/issues/8692.
* Handling AUTO_READ should not be the responsibility of DefaultChannelPipeline but the Channel itself.
Motivation:
At the moment we do automatically call read() in the DefaultChannelPipeline when fireChannelReadComplete() / fireChannelActive() is called and the Channel is using auto read. This is nice in terms of sharing code but imho is not the responsibility of the ChannelPipeline implementation but the responsibility of the Channel implementation.
Modifications:
Move handling of auto read from DefaultChannelPipeline to the Channel implementations.
Result:
Clearer responsibility and no dependency on implementation details of the ChannelPipeline.
Motivation:
Http2FrameCodecTest and Http2MultiplexCodecTest were quite fragile and often did not go through the whole pipeline, which made testing hard and error-prone at times.
Modification:
- Refactor tests to have data flow through the whole pipeline and so make the tests more robust (by testing the whole implementation).
Result:
Easier to write tests for the codecs in the future and more robust testing in general.
Beside this it also fixes https://github.com/netty/netty/issues/6036.
Motivation:
We should always call ctx.read() even when AUTO_READ is false as flow-control is enforced by the HTTP/2 protocol.
See also https://tools.ietf.org/html/rfc7540#section-5.2.2.
We already did this before, but not explicitly, and only because of some implementation details of ByteToMessageDecoder. It's better to be explicit here to not risk breakage later on.
Modifications:
- Ensure we always call ctx.read() when AUTO_READ is false
- Add unit test.
Result:
No risk of stalling the connection when HTTP/2 is used.
Motivation:
On Windows, if the project is in a path that contains whitespace,
resources cannot be accessed and tests fail.
Modifications:
Adds ResourcesUtil.java in netty-common. Tests use ResourcesUtil.java to access a resource.
Result:
Being able to build netty in a path containing whitespace
Motivation:
ByteBuf supports "marker indexes". The intended use case for these is that if a speculative operation (e.g. decode) is in progress the user can "mark" an index and return to it later if the operation isn't successful (e.g. not enough data). However this is rarely used in practice,
requires extra memory to maintain, and introduces complexity in the state management for derived/pooled buffer initialization, resizing, and other operations which may modify reader/writer indexes.
Modifications:
Remove support for marking and adjust testcases / code.
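Without mark support, a speculative decode can save and restore the reader index itself; a minimal sketch:
```java
import io.netty.buffer.ByteBuf;

final class SpeculativeDecode {
    // Save the reader index up front and roll back manually if there is not
    // enough data, instead of markReaderIndex()/resetReaderIndex().
    static boolean tryDecode(ByteBuf in) {
        int savedReaderIndex = in.readerIndex();
        if (in.readableBytes() < 4) {
            return false;
        }
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.readerIndex(savedReaderIndex); // roll back the speculative read
            return false;
        }
        in.skipBytes(length); // consume the frame body (real decoding omitted)
        return true;
    }
}
```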
Result:
Fixes https://github.com/netty/netty/issues/8535.
Motivation:
9f9aa1a made some changes related to fixing how we handle ctx.read() in the child channel but incorrectly changed an assert.
Modifications:
Fix assert to be correct.
Result:
Code does not throw an AssertionError due to an incorrect assert check.
Motivation:
We did not correctly respect ctx.read() calls while processing a read for a child Channel. This could lead to read stalls when auto read is disabled and no other read was requested.
Modifications:
- Keep track of extra read() calls while processing reads
- Add unit tests that verify that read() is respected when triggered either in channelRead(...) or channelReadComplete(...)
Result:
Fixes https://github.com/netty/netty/issues/8209.
Motivation:
DefaultHttp2FrameReader currently does a fair amount of "intermediate"
slicing which can be avoided.
Modifications:
Avoid slicing the input buffer in DefaultHttp2FrameReader until
necessary. In one instance this also means retainedSlice can be used
instead (which may also avoid allocating).
Result:
Fewer allocations when using http2.
Motivation:
When the DefaultHttp2ConnectionEncoder writes the initial headers for a new
locally created stream we create the stream in the half-closed state if the
end-stream flag is set which signals to the life cycle manager that the headers
have been sent. However, if we synchronously fail to write the headers the
life cycle manager then sends a RST_STREAM on our behalf which is a connection
level PROTOCOL_ERROR because the peer sees the stream in an IDLE state.
Modification:
Don't open the stream in the half-closed state if the end-stream flag is
set and let the life cycle manager take care of it.
Result:
Cleaner state management in the DefaultHttp2ConnectionEncoder.
Fixes #8434.
Motivation:
The `Http2StreamFrameToHttpObjectCodec` is marked `@Sharable` but mutates
an internal `HttpScheme` field every time it is added to a pipeline.
Modifications:
Instead of storing the `HttpScheme` in the handler we store it as an
attribute on the parent channel.
Result:
Fixes #8480.
Motivation:
There are log messages emitted from Http2ConnectionDecoder of the form
```
INF i.n.h.c.h.DefaultHttp2ConnectionDecoder ignoring HEADERS frame for stream RST_STREAM sent. {}
```
Modifications:
Remove the trailing `{}` in the log message that doesn't have a value.
Result:
Log messages no longer have a trailing `{}`.
Motivation:
When writing an HTTP/2 HEADERS with END_STREAM=1, the application expects
the stream to be closed afterward. However, the write can fail locally
due to HPACK encoder and similar. When that happens we need to make sure
to issue a RST_STREAM otherwise the stream can be closed locally but
orphaned remotely. The RST_STREAM is typically handled by
Http2ConnectionHandler.onStreamError, which will only send a RST_STREAM
if that stream still exists locally.
There are two possible flows for trailers, one handled immediately and
one going through the flow controller. Previously they behaved
differently, with the immediate code calling the error handler after
closing the stream. The immediate code also used a listener for calling
closeStreamLocal while the flow controlled code did so immediately after
the write.
The two code paths also differed in their VoidChannelPromise handling,
but both were broken. The immediate code path called unvoid() only if
END_STREAM=1, however it could always potentially add a listener via
notifyLifecycleManagerOnError(). And the flow controlled code path
unvoided incorrectly, changing the promise completion behavior. It also
passed the wrong promise to closeStreamLocal() in FlowControlledBase.
Modifications:
Move closeStreamLocal handling after calls to onError. This is the
primary change.
Now call closeStreamLocal immediately instead of when the future
completes. This is the more likely correct behavior as it matches that
of DATA frames.
Fix all the VoidChannelPromise handling.
Result:
Http2ConnectionHandler.onStreamError sees the same state as the remote
and issues a RST_STREAM, properly cleaning up the stream.
Motivation:
Http2MultiplexCodec queues data internally if data is delivered from the
parent channel but the child channel did not request data. If the parent
channel notifies of a stream closure it is possible data in the queue
will be discarded before closing the channel.
Http2MultiplexCodec interacts with RecvByteBufAllocator to control the
child channel's demand for read. However it currently only ever reads a
maximum of one time per loop. This can thrash the read loop and bloat
the call stack if auto read is on, because channelReadComplete will
re-enter the read loop synchronously, and also neglect to deliver data
during the parent's read loop (if it is active). This also meant the
readPendingQueue was not utilized as originally intended (to extend the
child channel's read loop during the parent channel's read loop if
demand for data still existed).
Modifications:
- Modify the child channel's read loop to respect the
RecvByteBufAllocator, and append to the parents readPendingQueue if
appropriate.
- Stream closure notification behaves like EPOLL and KQUEUE transports
and reads all queued data, because the data is already queued in memory
and it is known there will be no more data. This will also replenish the
connection flow control window which may otherwise be constrained by a
closed stream.
Result:
More correct read loop and less risk of dropping data.
Motivation:
When a Http2MultiplexCodec stream channel fails to write the first
HEADERS it will forcibly close, and that will trigger sending a
RST_STREAM, which is commonly a connection level protocol error. This is
because it has what looks like a valid stream id, but didn't check with
the connection as to whether the stream may have actually existed.
Modifications:
Instead of checking if the stream was just a valid looking id ( > 0) we
check with the connection as to whether it may have existed at all.
Result:
We no longer send a RST_STREAM frame from Http2MultiplexCodec for idle
streams.
Motivation:
The Http2Connection state is updated by the DefaultHttp2ConnectionDecoder after the frame listener is notified of the goaway frame. If the listener sends a frame synchronously this means the connection state will not know about the goaway it just received and we may send frames that are not allowed on the connection. This may also mean a stream object is created but it may never get taken out of the stream map unless some other event occurs (e.g. timeout).
Modifications:
- The Http2Connection state should be updated before the listener is notified of the goaway
- The Http2Connection state modification and validation should be self contained when processing a goaway instead of partially in the decoder.
Result:
No more creating streams and sending frames after a goaway has been sent or received.
Motivation:
If the local endpoint receives a GO_AWAY frame and then tries to write a stream with a streamId higher than the last known stream ID we will throw a connection error. This results in the local peer sending a GO_AWAY frame to the remote peer, but this is not necessary as the error can be isolated to the local endpoint and communicated via the ChannelFuture return value.
Modifications:
- Instead of throwing a connection error, throw a stream error that simulates the peer receiving the stream and replying with a RST
Result:
Connections are not closed abruptly when trying to create a stream on the local endpoint after a GO_AWAY frame is received.
Motivation:
If a write fails for a Http2MultiplexChannel stream channel, the channel
may be forcibly closed, but only after the promise has been failed. That
means continuations attached to the promise may see the channel in an
inconsistent state of still being open and active.
Modifications:
Move the satisfaction of the promise to after the channel cleanup logic
runs.
Result:
Listeners attached to the future that resulted in a Failed write will
see the stream channel in the correct state.
Motivation:
The HTTP/2 spec dictates that invalid pseudo-headers should cause the
request/response to be treated as malformed (8.1.2.1), and the recourse
for that is to treat the situation as a stream error of type
PROTOCOL_ERROR (8.1.2.6). However, we're treating them as a connection
error with the connection being immediately torn down and the HPACK
state potentially being corrupted.
Modifications:
The HpackDecoder now throws a StreamException for validation failures
and throwing is deferred until the end of the decode phase to ensure
that the HPACK state isn't corrupted by returning early.
Result:
Behavior more closely aligned with the HTTP/2 spec.
Fixes #8043.
Motivation:
We deviate from the AbstractChannel implementation on deregistration by
failing the provided promise if the channel is already deregistered. In
contrast, AbstractChannel will always set the promise to successfully
done.
Modification:
Change the
Http2MultiplexCodec.DefaultHttp2StreamChannel.Http2ChannelUnsafe to
always set the promise provided to deregister as done as is the
case in AbstractChannel.
Motivation:
There is an inconsistency between the order of events in the
StreamChannel implementation in Http2MultiplexCodec and other Channel
implementations that extend AbstractChannel where channelInactive and
channelUnregistered events are not performed 'later'. This can cause an
unexpected order of events for ChannelHandler implementations that call
Channel.close() in response to some event.
Modification:
The Http2MultiplexCodec.DefaultHttp2StreamChannel.Http2ChannelUnsafe was
modified to bounce the deregistration and channelInactive events through
the parent channel's EventLoop.
Result:
Stream events are now in the proper order.
Fixes #8018.
Motivation:
Http2MultiplexCodec doesn't currently have an API for using the response
of a h2c upgrade request.
Modifications:
Add a new API to the Http2MultiplexCodecBuilder which allows for setting
an upgrade handler and wire it into the Http2MultiplexCodec
implementation.
Result:
When using the Http2MultiplexCodec with h2c upgrades the upgrade handler
will get added to the Http2StreamChannel which represents the
half-closed (local) response of stream 1. It is then up to the user to
manage the transition from the IO channel pipeline configuration
necessary for making the h2c upgrade request to a form where it can read
the response from the new stream channel.
Fixes #7947.
Motivation:
The `ByteBuffer emptyPingBuf()` method of Http2CodecUtil has been dead code
since DefaultHttp2PingFrame switched from using a ByteBuf to a long to
represent the 8 octets.
Modifications:
Remove the method and the unused static ByteBuf.
Result:
Less dead code.
Fixes #8002
Motivation:
This is a follow-up for #7860. In the fix for #7860 we only partly fixed the problem, as Http2UnknownFrame did not extend Http2StreamFrame and so only worked when using the Http2FrameCodec. We need to have it extend Http2StreamFrame as otherwise Http2MultiplexCodec will not handle it correctly.
Modifications:
- Let Http2UnknownFrame extend Http2StreamFrame
- Add unit tests for writing and reading Http2UnknownFrame instances when the Http2MultiplexCodec is used.
Result:
Fixes https://github.com/netty/netty/issues/7969.
Motivation:
When a sender sends too large of headers it should not unnecessarily
kill the connection, as killing the connection is a heavy-handed
solution while SETTINGS_MAX_HEADER_LIST_SIZE is advisory and may be
ignored.
The maxHeaderListSizeGoAway limit in HpackDecoder is unnecessary because
any headers causing the list to exceeding the max size can simply be
thrown away. In addition, DefaultHttp2FrameReader.HeadersBlockBuilder
limits the entire block to maxHeaderListSizeGoAway. Thus individual
literals are limited to maxHeaderListSizeGoAway.
(Technically, literals are limited to 1.6x maxHeaderListSizeGoAway,
since the canonical Huffman code has a maximum compression ratio of
.625. However, the "unnecessary" limit in HpackDecoder was also being
applied to compressed sizes.)
Modifications:
Remove maxHeaderListSizeGoAway checking in HpackDecoder and instead
eagerly throw away any headers causing the list to exceed
maxHeaderListSize.
Result:
Fewer large header cases will trigger connection-killing.
DefaultHttp2FrameReader.HeadersBlockBuilder will still kill the
connection when maxHeaderListSizeGoAway is exceeded, however.
Fixes #7887
Motivation:
Integer autoboxing in this class (and possibly also the varargs arrays)
showed non-negligible CPU and garbage contribution when profiling a gRPC
service. grpc-java currently hardcodes use of Http2FrameLogger, set at
DEBUG level.
Modifications:
Wrap offending log statements in conditional blocks.
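The general guarded-logging pattern, illustrated here with Netty's InternalLogger (the exact guard inside Http2FrameLogger may differ):
```java
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

final class GuardedLogging {
    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(GuardedLogging.class);

    static void logHeaders(int streamId, int padding, boolean endStream) {
        // Guarding the call avoids autoboxing the int/boolean arguments and
        // allocating the varargs array when DEBUG logging is disabled.
        if (logger.isDebugEnabled()) {
            logger.debug("HEADERS: streamId={} padding={} endStream={}",
                    streamId, padding, endStream);
        }
    }
}
```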
Result:
Garbage won't be produced by Http2FrameLogger when set to a disabled
logging level.
Motivation:
Streams can be deregistered so we can't assume their existence in the stream map.
Modifications:
Add a null-check in case a stream has been deregistered.
Result:
Fixes #7898.
Motivation:
We incorrectly called frame.release() in onHttp2GoAwayFrame which could lead to IllegalReferenceCountExceptions. The call of release() is inappropriate because the fireChannelRead() in onHttp2Frame() will handle it.
Modifications:
- Not call frame.release()
- Add a unit test
Result:
Fixes https://github.com/netty/netty/issues/7892.
Motivation:
It is possible to create streams in the half-closed state where the
stream state doesn't reflect that the request headers have been sent by
the client or the server hasn't received the request headers. This
state isn't possible in the H2 spec as a half closed stream must have
either received a full request or have received the headers from a
pushed stream. In the current implementation, this can cause the stream
created as part of an h2c upgrade request to be in this invalid state
and result in the omission of RST frames as the client doesn't believe
it has sent the request to begin with.
Modification:
The `DefaultHttp2Connection.activate` method checks the state and
modifies the status of the request headers as appropriate.
Result:
Fixes #7847.
Motivation:
When connecting to an HTTP/2 server that did not set any value for the
SETTINGS_MAX_HEADER_LIST_SIZE in the settings frame, the netty client was
imposing an arbitrary maximum header list size of 8kB. There should be no need
for the client to enforce such a limit if the server has not specified any
limit. This caused an issue for a grpc-java client that needed to send a large
header to a server via an Envoy proxy server. The error condition is
demonstrated here: https://github.com/JLofgren/demo-grpc-java-bug-4284
Fixes grpc-java issue #4284 - https://github.com/grpc/grpc-java/issues/4284
and netty issue #7825 - https://github.com/netty/netty/issues/7825
Modifications:
In HpackEncoder use MAX_HEADER_LIST_SIZE as default maxHeader list size.
Result:
HpackEncoder will only enforce a max header list size if the server has
specified a limit in its settings frame.
Motivation:
We should allow writing Http2UnknownFrame to allow custom extensions.
Modifications:
Allow writing Http2UnknownFrame
Add unit test
Result:
Fixes https://github.com/netty/netty/issues/7860.
Motivation:
There is a race between both flushing the upgrade response and receiving
more data before the flush ChannelPromise can fire and reshape the
pipeline. Since we have already committed to an upgrade by writing the
upgrade response, we need to be immediately prepared for handling the
next protocol.
Modifications:
The pipeline reshaping logic in HttpServerUpgradeHandler has been moved
out of the ChannelFutureListener attached to the write of the upgrade
response and happens immediately after the writeAndFlush call, but
before the method returns.
Result:
The pipeline is no longer subject to receiving more data before the
pipeline has been reformed.
Motivation:
We did not correctly set the stream id in the headers of HttpMessage when converting a Http2HeadersFrame. This is based on https://github.com/netty/netty/pull/7778 so thanks to @jprante.
Modifications:
- Correctly set the id when possible in the header.
- Add test case
Result:
Correctly include stream id.
Motivation:
Allow the observation of SETTINGS frame by other handlers in the pipeline. For my particular use case this allows me to observe the value of MAX_CONCURRENT_STREAMS for a ChannelPool abstraction that supports HTTP/2 multiplexing. Beside this also forward GOAWAY frames.
Modification:
Always forward SETTINGS and GOAWAY frames
Result:
Settings / Goaway can now be observed in the parent channel. Previously it was not possible (to my knowledge) to capture the settings when using Http2MultiplexCodec.
Motivation:
At the moment we use a ByteBuf as the payload of an http2 PING frame. This complicates life-time management a lot with no real gain and also may produce more objects than needed. We should just use a long, as the payload is required to be 8 bytes anyway.
Modifications:
Use long for ping payloads.
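For example, writing a PING through Http2FrameCodec now just carries the opaque payload as a long (sketch):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.DefaultHttp2PingFrame;

final class PingExample {
    static void sendPing(ChannelHandlerContext ctx) {
        // The long is the opaque 8-byte PING payload; no ByteBuf to manage.
        ctx.writeAndFlush(new DefaultHttp2PingFrame(0x1234_5678_9abc_def0L));
    }
}
```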
Result:
Fixes [#7629].
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using Java 9+, while netty still works. That said, users may feel uncomfortable with these warnings, so we should not try to do it by default when using Java 9+.
Modifications:
Add an io.netty.tryReflectionSetAccessible system property which controls if setAccessible(...) will be used. By default it will be set to false when using Java 9+.
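For example (the jar name is a placeholder):
```
java -Dio.netty.tryReflectionSetAccessible=true -jar app.jar
```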
Result:
Fixes [#7254].
Motivation:
According to the spec:
All pseudo-header fields MUST appear in the header block before regular
header fields. Any request or response that contains a pseudo-header
field that appears in a header block after
a regular header field MUST be treated as malformed (Section 8.1.2.6).
Pseudo-header fields are only valid in the context in which they are defined.
Pseudo-header fields defined for requests MUST NOT appear in responses;
pseudo-header fields defined for responses MUST NOT appear in requests.
Pseudo-header fields MUST NOT appear in trailers.
Endpoints MUST treat a request or response that contains undefined or
invalid pseudo-header fields as malformed (Section 8.1.2.6).
Clients MUST NOT accept a malformed response. Note that these requirements
are intended to protect against several types of common attacks against HTTP;
they are deliberately strict because being permissive can expose
implementations to these vulnerabilities.
Modifications:
- Introduce validation in HpackDecoder
Result:
- Requests with unknown pseudo-header fields are rejected
- Requests containing response-specific pseudo-headers are rejected
- Requests where a pseudo-header appears after a regular header are rejected
- h2spec 8.1.2.1 passes
Motivation:
We should convert Http2Exceptions that are produced because of STREAM_CLOSED to ClosedChannelException when handed over to the child channel, to make it more consistent with other transports.
Modifications:
- Check if STREAM_CLOSED is used and if so create a new ClosedChannelException (while preserving the original exception as cause) and use it in the child channel
- Ensure STREAM_CLOSED is used in DefaultHttp2RemoteFlowController when writes are failed because of a closed stream.
- Add testcase
Result:
More consistent and correct exception usage.
Motivation:
When checking if a value is present, ReadOnlyHttp2Headers always ignores
case for values.
RFC 7540 says: https://tools.ietf.org/html/rfc7540#section-8.1.2
"header field names are strings of ASCII characters that are compared in a case-insensitive fashion"
But there is no such constraint on header values
Modifications:
Updated ReadOnlyHttp2Headers.contains to compare header value in a
case-sensitive way.
Result:
ReadOnlyHttp2Headers compares header names in a case-insensitive way,
values in a case-sensitive way.
Motivation:
If you test a header value providing a String, contains() returns false.
This is due to the implementation inherited from DefaultHeaders using
the JAVA_HASHER.
JAVA_HASHER.equals returns false because a is a String and b an
AsciiString.
Modifications:
DefaultHttp2Headers overrides contains and uses CASE_SENSITIVE_HASHER.
Result:
You can test a header value with any CharSequence implementation.
Motivation:
The completion order of promises in Http2MultiplexChannel#close should be consistent with that of AbstractChannel. Otherwise this may result in Future listeners seeing incorrect channel state.
Modifications:
Add test cases.
Result:
Ensure consistent behavior between Http2MultiplexChannel and AbstractChannel.
Motivation:
Calling DefaultHttp2StreamChannel.Unsafe.close(...) multiple times should not fail.
Modification:
- Correctly handle multiple calls to DefaultHttp2StreamChannel.Unsafe.close(...)
- Complete closePromise and promise that is given to close(...) in the correct order.
- Add unit test
Result:
Fixes [#7628] and [#7641]
Motivation:
If DefaultHttp2ConnectionEncoder processes an outbound operation it sometimes misses calling Http2LifecycleManager.onError(...) when the operation is executed asynchronously.
Modifications:
Make a best effort to update flags but still ensure failures are propagated to Http2LifecycleManager.onError(...) in all cases.
Result:
More consistent handling of errors.
Motivation:
Pseudo headers are checked less frequently than normal headers, so
it is more efficient to check the latter first.
Modifications:
Swap the order of the check, and fix minor formatting
Result:
Possibly more efficient header checks
Motivation:
We can just implement the interfaces directly and so reduce object creation in DefaultHttp2RemoteFlowController.
Modifications:
Directly implement the interfaces.
Result:
Less object creation.
Motivation:
When part of an HTTP/2 StreamChannel, Http2StreamChannel.isOpen() / isActive() should report false within a call to a ChannelInboundHandler's channelInactive() method.
Modifications:
Fulfill the promise before calling fireChannelInactive()
Result:
Correctly update state / promise before notifying handlers. Fixes [#7638]
Motivation:
With HTTP1, it's very easy to check if a header is present and has a
given value: you can simply invoke
io.netty.handler.codec.http.HttpHeaders#contains(java.lang.CharSequence, java.lang.CharSequence, boolean)
It is not possible to do the same with HTTP2. You have to get the list
of all headers (returned as String) and then iterate over it invoking
String#equals or String#equalsIgnoreCase
Modifications:
I've added io.netty.handler.codec.http2.Http2Headers#contains and
implemented it in DefaultHttp2Headers, EmptyHttp2Headers and ReadOnlyHttp2Headers.
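Usage then mirrors the HTTP/1 check; header name and value here are just examples:
```java
import io.netty.handler.codec.http2.Http2Headers;
import io.netty.util.AsciiString;

final class HeaderCheck {
    private static final AsciiString CACHE_CONTROL = AsciiString.cached("cache-control");
    private static final AsciiString NO_CACHE = AsciiString.cached("no-cache");

    // The boolean requests case-insensitive comparison of the value.
    static boolean isNoCache(Http2Headers headers) {
        return headers.contains(CACHE_CONTROL, NO_CACHE, true);
    }
}
```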
Result:
You can use AsciiString constants to check if a header is present in a
concise and efficient manner.
Motivation:
Usually when using netty, exceptions which happen for outbound operations should not be fired through the pipeline; only the ChannelPromise should be failed.
Modifications:
- Change Http2LifecycleManager.onError(...) to also take a boolean that indicates whether the error was caused by an outbound operation
- Change the Http2ConnectionHandler.on*Error(...) methods to also take this boolean
- Change Http2FrameCodec to only fire exceptions through the pipeline if they are not related to outbound operations
- Add unit test.
Result:
More consistent error handling when using Http2FrameCodec and Http2MultiplexCodec.
Motivation:
Http2MultiplexCodec swallows Http2PingFrames without releasing the payload, resulting in a memory leak.
Modification:
Send unhandled frames down the pipeline for consumption/disposal by another InboundChannelHandler.
Result:
Fixes #7607.